What Is an MVP? The Founder's No-Nonsense Guide
“MVP” gets used for everything from a landing page to a half-finished SaaS. What it's supposed to mean: the smallest artifact that tests a specific assumption, not “ship fast and cheap” for its own sake. Next: how to validate a startup idea, and what an MVP costs in Switzerland.
What Eric Ries Actually Said
When Ries introduced the MVP concept in The Lean Startup, the definition was precise: a Minimum Viable Product is “that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.”
Read that again. The purpose isn’t to ship something cheap. It’s to learn something specific with the least waste. The “minimum” is about effort, not feature count. The “viable” means it has to be real enough to generate genuine signal. And the whole point is learning, not shipping.
Somewhere along the way, the concept got garbled. “MVP” became shorthand for “half-finished product we’re not proud of.” Founders started shipping buggy, incomplete products and calling it lean methodology. Investors started expecting polished demos before calling anything viable. The word lost its meaning.
The Three Mistakes Almost Every First-Time Founder Makes
Mistake 1: Over-building
This is the most common failure mode. A team spends six months building a feature-complete product with authentication, admin dashboards, billing flows, onboarding sequences, and calls it their MVP. Then they launch and discover that the core assumption they never tested (do people actually want this?) was wrong all along.
They didn’t build an MVP. They built a product. The difference matters because a product takes months and significant capital to build, while an MVP should take weeks and answer a specific question. When you conflate the two, you’ve spent real resources to learn something you could have tested in days.
Mistake 2: Under-building
The opposite trap is building something so stripped-down it can’t tell you anything. A five-slide deck is not an MVP. A wireframe is not an MVP. A landing page with a waitlist signup is not an MVP, unless the hypothesis you’re testing is specifically “will people sign up for information?”
If your product is supposed to help freelancers track client payments, handing someone a spreadsheet and saying “that’s our concierge MVP” only tells you whether people tolerate spreadsheets. It doesn’t tell you whether your actual product, the real automated version, would be worth building.
The test has to match the hypothesis. If it doesn’t, you’re generating noise, not signal.
Mistake 3: Measuring the Wrong Things
The third mistake is subtler. A team builds something reasonable, launches it, and then measures the wrong outcomes. They count sign-ups instead of activation. They track page views instead of return visits. They look at demo requests instead of whether people complete the core workflow.
Activity is not validation. A thousand sign-ups tells you your landing page copy is compelling. It tells you almost nothing about whether your product solves a real problem. Validation requires measuring the behavior that proves the hypothesis, and that behavior is usually much harder to generate than an email address.
What a Real MVP Actually Looks Like
A real MVP starts not with a feature list, but with a question. What is the one assumption, if wrong, that would kill this business? That’s your hypothesis. Your MVP is the fastest test of that hypothesis with real users in real conditions.
Sometimes that’s a functional piece of software. Sometimes it’s a manual process where a human does the work that software will eventually automate. Sometimes it’s a landing page, but only if what you’re testing is demand, and only if “conversion” is defined clearly (not just a click, but a meaningful action like a deposit or a booked call).
The shape of your MVP depends entirely on what you’re trying to learn. A B2B SaaS MVP looks nothing like a marketplace MVP. A hardware MVP looks nothing like a content product. There is no universal template. There is only the question you’re trying to answer.
What all real MVPs share is that they’re designed backward from the learning goal. You start with the question, then figure out the minimum artifact needed to answer it. Not the other way around.
Why “Minimum” Is About Learning, Not Cost
There’s a persistent fantasy in startup circles that an MVP should be cheap. And sometimes it is cheap: a concierge MVP can be nearly free. But cheapness is a byproduct of focus, not the goal.
If your hypothesis requires a real, functional product to test honestly, because anything less would give you polite responses instead of genuine behavior, then your MVP might not be cheap. That’s fine. What it should never be is wasteful. Every feature that isn’t serving the learning goal is waste, regardless of cost.
The “minimum” in MVP means the minimum necessary to generate a trustworthy answer. Nothing more. Cutting corners past that point doesn’t make you lean; it makes your data worthless.
Before You Define Your MVP, Define Your Hypothesis
Here’s a practical starting point. Before you open a design tool, before you write a line of code, before you even build a team: write down the single most important assumption your business depends on.
Not “people want productivity tools.” That’s too vague. Something specific: “Freelance designers in Switzerland spend more than four hours per month on invoice follow-up, and would pay CHF 20/month to eliminate that.” That’s a hypothesis. It makes a specific claim about a specific behavior in a specific market. You can design a test around it.
Once you have that hypothesis, the MVP design question becomes much simpler: what’s the cheapest, fastest way to find out if this is true with real evidence from real people?
That’s the question the MVP framework was built to answer. Start there, and the rest tends to get clearer.
Hypothesis first, MVP second
Have a hypothesis you want to test with a production-quality MVP, not a slide deck? Book a discovery call: test design, scoping, and an optional 12-week validation with real users.
Written by
Aurum Avis Labs
Passionate about building innovative products and sharing knowledge from the startup trenches.
Related Articles
How to Validate a Startup Idea Before You Build Anything
Surveys are not validation. Four methods to test real demand (landing pages, concierge MVPs, interviews, pre-sales) before writing a line of code.
The Hidden Costs of Building Without Validation
Building without a hypothesis costs more than cash: it costs timing, morale, and how investors read your judgment. Four cost buckets founders see too late.
How Much Does MVP Development Cost in Switzerland?
CHF 5k or 500k+? Swiss MVP pricing depends on scope, partner type, and production bar. How to compare quotes without comparing apples to oranges.