
Product-Market Fit: How to Find It Without Burning Your Runway

Aurum Avis Labs Author
8 min read

PMF gets used loosely, but it is closest to a pattern of behaviors: retention that stabilizes, unprompted referrals, and inbound demand you can barely handle. Without trustworthy measurement, teams confuse launch buzz with real demand, which is why you need an MVP you can trust: production quality, and validation before you build.

This ambiguity is costly. Founders often don’t know whether they have PMF or not. They mistake early enthusiasm for sustained demand. They interpret silence as validation. They keep building when they should be talking to users, or keep talking to users when they should be shipping.

Here’s a more concrete way to think about it.

What PMF Actually Is

Product-market fit isn’t a feeling. It’s a pattern of observable signals that collectively indicate your product is pulling demand from the market rather than pushing itself into it.

Marc Andreessen’s original definition, “being in a good market with a product that can satisfy that market”, is correct but too abstract to be actionable. A more useful frame: PMF exists when a meaningful segment of users would be genuinely worse off without your product, are actively using it, and are bringing others along.

The “meaningful segment” part matters. You can have PMF in a small segment and no PMF in the broader market. That’s actually fine at first; narrow PMF is the starting point for almost every successful product. The mistake is claiming broad PMF before you’ve earned it, which leads you to expand before you have the signal to support it.

The Three Signals That Indicate Real PMF

Retention is the primary signal. If you acquire users and they keep coming back without being pushed, without re-engagement emails, incentives, or a new feature announcement, that’s meaningful. Retention curves that flatten out at a meaningful percentage (the exact number depends heavily on your category) indicate that a real segment has integrated your product into their behavior. Curves that keep declining toward zero tell you the product isn’t providing sustained value.
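As an illustration, a cohort retention curve can be computed directly from raw activity events. The data shape (user, period) and the period granularity here are hypothetical; the point is that a curve flattening above zero looks very different from one declining toward it:

```python
from collections import defaultdict

def retention_curve(events, periods):
    """Fraction of the cohort active in each period after their first period.

    `events` is a list of (user_id, period_index) tuples, e.g. weeks
    since launch. A hypothetical data shape, for illustration only.
    """
    first_seen = {}
    active = defaultdict(set)  # period offset -> users active at that offset
    for user, period in sorted(events, key=lambda e: e[1]):
        first_seen.setdefault(user, period)
        active[period - first_seen[user]].add(user)
    cohort_size = len(first_seen)
    return [len(active[p]) / cohort_size for p in range(periods)]

# Three users: two keep returning, one churns after the first period.
events = [("a", 0), ("b", 0), ("c", 0),
          ("a", 1), ("b", 1),
          ("a", 2), ("b", 2),
          ("a", 3), ("b", 3)]
print([round(x, 2) for x in retention_curve(events, 4)])
# [1.0, 0.67, 0.67, 0.67] -- flattens at ~67% instead of declining to zero
```

The flat tail is the signal: a stable sub-cohort has integrated the product into their behavior.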

Unsolicited referrals are the second signal. When users tell other people about your product without being asked, without a referral program, without prompting in your onboarding flow, they’re telling you something. It means the product created enough value that sharing it became natural. This is different from someone recommending you because you asked them to, or because there’s a discount on the table.

Inbound demand you struggle to keep up with is the third signal. At some point in the PMF journey, you stop doing all the work to bring users in and start managing the flow of people who’ve already decided they want what you have. This doesn’t mean you stop doing marketing or sales; it means those activities have a different character. They’re amplifying something real rather than manufacturing interest.

These three signals don’t all appear simultaneously. Typically retention comes first, then referrals, then inbound pressure. If you have strong retention but no referrals after a reasonable amount of time, it usually means users value the product but don’t feel compelled to share it, which often points to a positioning or network-effect gap rather than a product-value gap.

Sean Ellis’s Test, and Its Limits

Sean Ellis’s test, which asks users “How would you feel if you could no longer use this product?”, became famous for good reason: it’s simple, directional, and it produces a number you can track. The threshold he identified, 40% of users answering “very disappointed”, is a reasonable heuristic for an early PMF signal.

But the test has real limits. First, it measures user sentiment at a point in time, which is a leading indicator at best. People can say they’d be very disappointed and then not come back to the product for three months. Second, it requires a user base large enough to be meaningful, which means it’s not useful pre-launch or in very early stages. Third, it’s subject to selection bias: the users who respond to your survey may not be representative of your user base overall.

Use the Ellis test as one data point, not as a conclusion. If 40%+ of your active users say they’d be very disappointed, that’s a strong signal worth paying attention to. If 15% say so, don’t conclude you’ve missed PMF; look at what’s different about the 15% and whether they represent a tighter segment you should be doubling down on.
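Scoring the test is a few lines of code. This sketch assumes a simple response format (user ID mapped to an answer string); the useful part is that it returns the “very disappointed” sub-segment alongside the score, since that group is what you study when the number comes in low:

```python
def ellis_score(responses):
    """Share of respondents answering 'very disappointed' to
    'How would you feel if you could no longer use this product?'.

    `responses` maps user_id -> answer string (hypothetical format).
    Returns (score, list of users in the 'very disappointed' segment).
    """
    very = [u for u, a in responses.items() if a == "very disappointed"]
    return len(very) / len(responses), very

responses = {
    "u1": "very disappointed", "u2": "somewhat disappointed",
    "u3": "very disappointed", "u4": "not disappointed",
    "u5": "somewhat disappointed",
}
score, segment = ellis_score(responses)
print(round(score, 2))  # 0.4 -- right at the 40% heuristic threshold
print(segment)          # ['u1', 'u3'] -- the sub-segment worth studying
```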

Why Founders Look for PMF in the Wrong Places

There are two common failure modes.

The first is looking for PMF too early, before you have a real product. A beautiful onboarding flow doesn’t demonstrate PMF. A nice design doesn’t demonstrate PMF. A prototype that generates excitement in user interviews doesn’t demonstrate PMF. PMF requires a real product with real users making real choices. That means it can’t be assessed before you’ve built and launched something that people can actually use, repeatedly, in conditions that resemble how they’d use it in their lives.

The second failure mode is looking for PMF too broadly, across too many segments simultaneously. If your product serves small restaurants, enterprise hospitality groups, and food delivery platforms, you’re not going to find a clean PMF signal across all three at once. Segment retention behaves differently, referral patterns differ, and the “job to be done” often isn’t the same. Founders who chase broad PMF across multiple segments end up with murky data that doesn’t tell them anything useful.

The discipline required is to pick the segment where you have the strongest early signal and go deep. Once you have tight PMF in one segment, you can start asking whether adjacent segments are accessible.

How to Systematically Close In on PMF

There’s no shortcut. But there’s a process that makes the search more efficient.

Start with a tight segment hypothesis. Not “SMBs”, but “operations-focused founders at two-to-ten-person SaaS companies who are managing their first hire.” The tighter the hypothesis, the more interpretable your data will be when it comes back.

Build the smallest version of the product that tests the core value hypothesis. This is not the same as the smallest product you can ship; it’s the smallest thing that creates the experience you’re betting will generate retention and referrals. A product that’s too feature-sparse won’t generate the signal you’re looking for, not because the value hypothesis is wrong, but because users can’t get to the value.

Launch to your target segment specifically. Not to your network. Not to a general Product Hunt audience. To the actual people you hypothesized about. The data from a general launch is much harder to interpret because you don’t know which sub-segment the engaged users belong to.

Measure retention at meaningful time intervals. What “meaningful” means depends on your product’s natural usage cadence: daily for a communication tool, weekly for a planning tool, monthly for a reporting tool. Don’t measure retention at intervals that don’t match how people would naturally use the product.
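A minimal sketch of a cadence-aware retention check. The category-to-interval mapping and the two-period grace window are illustrative assumptions, not benchmarks; the point is that “retained” is defined relative to the product’s natural cadence, not a fixed 30 days:

```python
from datetime import date, timedelta

# Hypothetical cadences: measure retention at the product's natural interval.
CADENCE = {"communication": timedelta(days=1),
           "planning": timedelta(days=7),
           "reporting": timedelta(days=30)}

def is_retained(last_active, today, category, grace=2):
    """A user counts as retained if active within `grace` cadence
    periods -- category names and the grace window are illustrative."""
    return (today - last_active) <= grace * CADENCE[category]

# Ten days of silence is fine for a weekly planning tool...
print(is_retained(date(2024, 1, 10), date(2024, 1, 20), "planning"))       # True
# ...but the same gap means churn for a daily communication tool.
print(is_retained(date(2024, 1, 10), date(2024, 1, 20), "communication"))  # False
```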

Talk to both your retained users and your churned users. The retained users will tell you what’s working and why. The churned users will tell you where the gap is between the promise and the reality. Both conversations are essential, and most founders over-invest in the first and under-invest in the second.

Iterate on the segment, not just the product. If the data consistently shows weak retention, the instinct is usually to add features. But the problem is often that you’re talking to the wrong people, people for whom the problem isn’t acute enough, or the current solution isn’t bad enough to create real switching motivation. Redefining the segment you’re targeting is often more productive than adding another feature.

PMF and Your MVP

Your MVP is a tool to find PMF. That’s its job. A lot of founders treat the MVP as the destination, something to complete, launch, and hand off. But the MVP isn’t finished when it launches. It’s finished when it produces a clear PMF signal, either confirming that a real segment has found genuine value in it, or disconfirming that hypothesis clearly enough that you can move on.

This is why “production quality” matters in MVP development. A prototype that falls over under real usage conditions can’t give you real retention data. A landing page without a working product can’t tell you whether people will come back. The quality floor for PMF research is higher than most founders expect, because the signal you’re looking for requires real conditions, not simulated ones.

This is also why the MVP-to-PMF timeline is almost always longer than founders plan for. Building takes time. Distribution takes time. Retention data takes time to accumulate by definition. If you budget three months to find PMF, you’ll be making decisions based on four weeks of retention data, which is almost never enough.

Budget more time than you think you need, stay solvent enough to iterate when the first version doesn’t produce the signal, and treat every data point, positive and negative, as useful information about where the real value lives.

PMF Without Guessing

Want retention and acquisition data you would trust in a seed conversation or a scaling bet? Book a discovery call, and we’ll assess whether our 12-week sprint can close the gap between your MVP and a defensible PMF signal.


Written by

Aurum Avis Labs

Passionate about building innovative products and sharing knowledge from the startup trenches.
