Data is a game-changer, and honestly the most powerful way to understand your user base, better than all the other methods combined. That’s exactly why I’m a huge advocate for launching products early and setting up the right analytics and feature experiments, so you can build with your users as soon as possible. There’s something magical about watching real users interact with your actual product instead of hypothetical scenarios.
Over the past 15 years of my career, I’ve been down the rabbit hole of trying all the traditional UX feedback methods: usability testing with click-through Sketch/Figma prototypes, focus groups, lengthy user surveys, and countless interviews. While these methods have their place, I’ve discovered that handing users the actual product, observing their real behavior, and then asking focused follow-up questions is a far more effective and modern way to get meaningful feedback. The difference between artificial testing environments and real-world usage patterns is night and day.
The Old Way is Becoming Outdated (And Here’s Why)
Validating design flows through traditional methods is becoming outdated because today we can spin up a functional prototype, add comprehensive analytics, and run experiments with powerful tools like PostHog, giving customers early access in a matter of days – if not less, depending on the complexity of your project. Your mileage may vary, but the fundamental shift is undeniable.
Code is no longer a precious, expensive commodity that requires months of planning and perfect designs before implementation. In fact, the old path of Idea → Design → Build can actually cost you more time before you have an initial prototype that yields real insights. The new path of Idea → MVP → Refine helps teams not only launch significantly faster but also gather real-world usage data along the way, creating a feedback loop that’s impossible to replicate in a testing environment.
Getting to the Point
Enough circling around methodology – let me share a concrete example. The point of this article is the practical application of real-world analytics and data to understand your users better than any focus group ever could.
I recently launched a micro SaaS product called B.O Docs that helps South African company owners easily generate their beneficial ownership documents for CIPC submissions. This isn’t a theoretical case study – it’s a real product solving a real problem I personally experienced. I skipped initial user research because the idea felt inherently valid: I was solving a problem I actually had, after a terrible experience trying to figure out how to complete the submission without these documents.
However, like most founders, I had a mountain of assumptions about my users. I assumed I knew how they would use the site: desktop vs mobile preferences, how they would discover the platform, which search engines and browsers they would use, which keywords they would search to find us, and how intuitive it would be for them to fill in the information required for document generation. These felt like reasonable assumptions, based on my own behavior and what seemed logical.
When Data Destroys Your Assumptions
All of these were assumptions, and boy was I wrong about most of them. As we dug into our funnel analysis and user behavior data in PostHog, the insights weren’t just surprising – they were eye-opening, and they highlighted something I’ve been preaching to other product teams for years. I usually warn people about the “Apple device users building for Android users” phenomenon, but I didn’t expect to fall into the same trap myself.
This is exactly the situation we found ourselves in with the B.O Docs platform. Looking closer at the actual data revealed some fascinating patterns that completely contradicted my initial assumptions.
Our users discover us mainly through Microsoft Bing and the Edge browser, completely contrary to my assumption that most people would find us through Google Search. I had spent considerable time optimizing for Google’s algorithms while neglecting Bing entirely. Fortunately, we had optimized the site for Google Chrome from the start, and since most of our users are on some Chromium-based browser, we were at least covered on browser compatibility.
But here’s the real kicker – the insight that has become one of our main focus points for improvement: a significant percentage of our users attempt to complete the entire process on their phones. We hadn’t fully optimized the site for mobile because we assumed this was a process users would naturally do on desktop, given that it involves downloading documents, filling out complex forms, and uploading files to CIPC. It seemed like obvious desktop territory to us.
The reality? A good share of our users try to complete the form on mobile, but they drop off because some of the fields and functionality aren’t optimized for smaller screens and are effectively unusable there. We were losing users over an assumption that seemed so logical at the time.
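To make this concrete, here’s a minimal sketch of the kind of per-device funnel comparison that surfaces a mobile drop-off like ours. The event names (`form_started`, `doc_generated`) and the data are hypothetical – in practice PostHog computes funnels for you with a device-type breakdown – but the arithmetic is the same idea:

```javascript
// Hypothetical exported events: one row per user per funnel step.
const events = [
  { userId: 'u1', device: 'mobile',  step: 'form_started' },
  { userId: 'u1', device: 'mobile',  step: 'doc_generated' },
  { userId: 'u2', device: 'mobile',  step: 'form_started' },
  { userId: 'u3', device: 'mobile',  step: 'form_started' },
  { userId: 'u4', device: 'desktop', step: 'form_started' },
  { userId: 'u4', device: 'desktop', step: 'doc_generated' },
];

// Completion rate from the first funnel step to the last, split by device.
function conversionByDevice(events, firstStep, lastStep) {
  const started = new Map();   // device -> Set of userIds that started
  const finished = new Map();  // device -> Set of userIds that finished
  for (const { userId, device, step } of events) {
    const bucket =
      step === firstStep ? started :
      step === lastStep  ? finished : null;
    if (!bucket) continue;
    if (!bucket.has(device)) bucket.set(device, new Set());
    bucket.get(device).add(userId);
  }
  const result = {};
  for (const [device, users] of started) {
    const done = finished.get(device) ?? new Set();
    result[device] = done.size / users.size;
  }
  return result;
}

console.log(conversionByDevice(events, 'form_started', 'doc_generated'));
// { mobile: 0.3333333333333333, desktop: 1 } – mobile drops off far more often
```

Seeing the two rates side by side is exactly the kind of signal that turned “we assumed desktop” into “mobile optimization is a priority.”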
The Learning Never Stops
I have personally learned more about SEO for both Google and Bing through this real-world exercise over the past few months than in years of theoretical study. We’re actively using this behavioral data to correct our assumptions and improve the product for a genuinely better user experience and higher conversion rates. Every week brings insights we never would have discovered through traditional research methods.
The broader lesson goes well beyond our specific product. There is a real gap between what we think our users will do and how they actually behave, and a robust analytics setup lets you observe genuine behavior patterns and identify specific paths to correct so you can systematically improve your product.
We absolutely love using PostHog because it consolidates all the analytics features we need in one powerful platform: search analytics, detailed product analytics, session recordings that show exactly how users interact with the interface, A/B testing for experimenting with different approaches, and feature flags for controlled rollouts. It keeps us on top of our analytics game without juggling multiple tools.
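As an aside on how controlled rollouts with feature flags typically work under the hood: the user’s ID is hashed into a stable bucket, and the flag is on when that bucket falls below the rollout percentage. The sketch below illustrates the general technique only – it is not PostHog’s actual implementation, and the flag key is made up:

```javascript
// Deterministically map a user ID to a bucket in [0, 100).
// The same user always lands in the same bucket, so a 20% rollout
// shows the feature to a stable 20% slice of users across sessions.
function bucketFor(userId, flagKey) {
  const input = `${flagKey}:${userId}`;
  let hash = 0;
  for (let i = 0; i < input.length; i++) {
    hash = (hash * 31 + input.charCodeAt(i)) >>> 0; // simple 32-bit hash
  }
  return hash % 100;
}

function isFeatureEnabled(userId, flagKey, rolloutPercent) {
  return bucketFor(userId, flagKey) < rolloutPercent;
}

// Example: roll a hypothetical mobile form redesign out to 20% of users.
isFeatureEnabled('user-42', 'mobile-form-v2', 20);
```

The deterministic hash is the important design choice: a user who sees the new experience keeps seeing it, which is what makes the resulting A/B comparison trustworthy.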
The bottom line? Stop guessing what your users want and start watching what they actually do.