How to Validate a Startup Idea Before You Build Anything
Most founders build first and validate second. By the time they discover that nobody wants what they built, they have spent months and tens of thousands of francs. This guide explains how to reverse that order.
Friends, market trends, and PDF reports are fine for desk research, but they are weak proof that strangers will pay or come back. If you're not sure what to test yet, start by reading up on what an MVP is, then design the test.
That kind of research has its place, but it doesn’t answer the one question that actually matters: will real people, with real money, change their real behavior because of what you’re building?
The gap between “this seems like a good idea” and “we have evidence this works” is where most startups die. Crossing that gap before you build is the whole point of validation.
What Validation Actually Means
Validation is the act of proving a specific assumption with real-world evidence. Not surveys. Not opinions. Not encouraging conversations. Evidence, which means observable behavior from people who have no social incentive to be nice to you.
The key word is “specific.” You’re not validating a vague market opportunity. You’re testing a precise claim: that a defined group of people has a problem severe enough to motivate action, and that your proposed solution is compelling enough to change their behavior.
Before you design any test, you need to write that claim down. What, exactly, do you believe to be true? If you can’t state your core assumption in one sentence, you’re not ready to validate it.
Four Ways to Test an Idea with Real Evidence
Landing Page Tests
A landing page test is exactly what it sounds like: you build a simple page that describes your product and asks visitors to take an action. The action matters enormously, because it determines what you're actually measuring.
Email signups tell you that your headline is interesting. They do not tell you that people will pay. A waitlist tells you that someone was willing to give you their attention for thirty seconds. It doesn’t tell you much else.
If you want to measure real demand, the action on your landing page needs to have cost. A calendar booking to talk about the product. A deposit or pre-order. A completed application form. The higher the friction, the stronger the signal.
Run ads to that page rather than relying on organic traffic, which takes too long and is too variable to generate clean data. Set a conversion target before you start, and decide in advance what result would change your mind.
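To make "decide in advance" concrete, here is a minimal sketch (the 3% target and visitor counts are hypothetical, not from the article) that uses a Wilson score interval to judge whether an observed conversion rate is actually distinguishable from your target given the traffic you bought:

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """95% Wilson score confidence interval for a conversion rate."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (max(0.0, centre - margin), min(1.0, centre + margin))

# Hypothetical campaign: 6 bookings from 200 paid visitors,
# measured against a pre-registered 3% target.
target = 0.03
low, high = wilson_interval(6, 200)
if high < target:
    verdict = "below target: not validated"
elif low > target:
    verdict = "above target: strong signal"
else:
    verdict = "inconclusive: need more traffic"
```

With 200 visitors the interval is wide (roughly 1.4% to 6.4% here), which is the point: small ad budgets often produce inconclusive results, and it is better to know that than to declare victory on six signups.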
Concierge MVPs
The concierge approach means doing the thing manually before you automate it. If your startup idea involves software that processes invoices, go process invoices by hand for ten customers. If you want to build an AI that matches job seekers with employers, do the matching yourself in a spreadsheet.
The point isn't to scale this; it's to learn whether the outcome you're promising actually delivers value when someone receives it. You'll also learn exactly which parts of the process are complicated, which assumptions about user behavior don't hold, and whether customers value the outcome enough to pay for it.
In a Swiss or DACH B2B context this approach is especially effective: enterprise and Mittelstand buyers are risk-averse and slow to commit. Delivering real value manually first builds the trust that no pitch deck can. You also get far more honest feedback than any survey would produce.
Concierge MVPs are particularly powerful because they generate behavior data and qualitative insight simultaneously. You see what people actually do, not what they say they would do, and you’re in direct contact to ask why.
Customer Interviews (Done Right)
Everyone does customer interviews. Almost nobody does them in a way that generates useful data.
The failure mode is asking people about your solution. “What do you think of this idea?” “Would you use something like this?” “Does this solve your problem?” These questions feel like validation but they’re traps. People are polite. They want to be helpful. They’ll tell you what you want to hear, especially if you seem excited.
Good customer interviews don’t mention your product at all. They ask about the problem. “Walk me through the last time you dealt with this issue.” “What did you try? What happened?” “How much time does this take you?” “What have you already paid to try to solve it?”
You’re looking for evidence that the problem is real, frequent, and painful enough to motivate spending, and you’re listening for what people have already tried, which tells you a lot about how they think about the solution space.
Pre-Sales and Deposit Collection
This is the strongest form of early validation: someone hands you money before the product exists.
It doesn’t have to be full payment. A refundable deposit, a letter of intent, or even a verbal commitment made in context (“yes, if you build this, I’ll be your first paying customer, here’s my card”) carries genuine weight. What you’re testing is whether people value the outcome enough to make a commitment, not just an expression of interest.
Pre-sales work best in B2B contexts where longer sales cycles are normal, or in consumer contexts where you can offer meaningful early-access benefits. They’re harder to pull off when the product has no meaningful story yet, which is why they pair well with concierge MVPs. Let people experience a manual version of the outcome first, then offer to pre-sell the automated version.
For Swiss founders targeting the DACH market: a single signed letter of intent from a mid-size German or Austrian company is worth more than a hundred email signups. It signals that the problem is real enough to put something in writing.
Designing the Test Around the Hypothesis
Every validation method above can be run well or badly, and the difference usually comes down to whether you designed the test around a specific hypothesis.
Start with your assumption in writing: “I believe that [this type of person] experiences [this problem] with [this severity], and would [take this action] to solve it.” Then ask: which of the four methods would give me the strongest evidence about this specific claim?
If your assumption is about demand (do people want this?), a landing page test with a meaningful conversion action might be right. If your assumption is about the value of the outcome (does this actually help?), a concierge MVP is better. If your assumption is about the problem itself (is this even a real problem?), start with interviews.
Most ideas require multiple rounds of testing, because there are usually multiple assumptions that need to hold simultaneously. That’s fine. Run the cheapest test first to eliminate the riskiest assumption, then move to the next.
When You Have Enough Signal to Build
There’s no universal threshold. What “enough” looks like depends on your market, your capital situation, and how reversible your decisions are.
That said, there are useful heuristics. In B2B, three to five customers willing to pay full price (not a discount, not a favor) is a meaningful early signal. In consumer, landing page conversion rates need to be compared to category benchmarks: a 3% conversion rate on a paid ad campaign might be exciting or underwhelming depending on what you're selling.
More important than the number is the quality of the evidence. Is this behavior from people who have no relationship with you, who had no social reason to say yes? Did they act voluntarily, without being coached? Did they do it again?
Repeat behavior is the strongest signal of all. If people come back, use the product again, or refer someone without being asked, that’s meaningful. First-time actions are interesting. Second-time actions are evidence.
The Danger of Validating What You Want to Be True
This is the quieter risk. Founders who know validation matters still sometimes run tests designed to confirm what they already believe. They interview people likely to be sympathetic. They count soft signals as strong ones. They set their conversion targets low enough to pass easily.
The antidote is to define what “not validated” looks like before you run the test. If your landing page gets 200 visitors and fewer than 4 people complete the booking action, the idea needs to be rethought. Write that down before you launch the page. Then honor it.
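The "write it down before you launch, then honor it" discipline can be made literal. A minimal sketch, using the article's hypothetical bar of 200 visitors and 4 bookings (field names and thresholds are illustrative, not prescriptive):

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the rule can't be quietly edited after launch
class KillCriterion:
    """A pre-registered 'not validated' threshold, written down before the test."""
    min_visitors: int      # the test only counts once this much traffic has arrived
    min_conversions: int   # fewer than this means the idea needs a rethink

    def evaluate(self, visitors: int, conversions: int) -> str:
        if visitors < self.min_visitors:
            return "keep running: not enough traffic yet"
        if conversions < self.min_conversions:
            return "not validated: rethink the idea"
        return "validated against the pre-registered bar"

# The article's example: 200 visitors, at least 4 completed bookings.
rule = KillCriterion(min_visitors=200, min_conversions=4)
```

The frozen dataclass is the design choice doing the work: committing the thresholds to an immutable record before launch is exactly the antidote to quietly lowering the bar once the numbers come in.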
Validation is only useful if you’re genuinely willing to be wrong. That’s harder than it sounds, but it’s the whole point.
Sketch the validation test
Bring: a one-sentence hypothesis and which risk you want to kill first (demand, willingness to pay, distribution).
Book a discovery call. We pick the method and next step together; production-quality MVPs only when the hypothesis demands it.
Written by
Aurum Avis Labs
Passionate about building innovative products and sharing knowledge from the startup trenches.
Related Articles
You might also be interested in these articles
What Is an MVP? The Founder's No-Nonsense Guide
MVP is not code for cheap and rough. Lean Startup: minimum effort for maximum learning, plus three mistakes first-time founders repeat.
How Much Does MVP Development Cost in Switzerland?
CHF 5k or 500k+? Swiss MVP pricing depends on scope, partner type, and production bar. How to compare quotes without apples-to-oranges.
Venture Studio vs. Agency vs. Accelerator: What Founders Actually Need
Agency ships scope, accelerators bring network, studios bring product judgment, with equity tradeoffs. Which model fits which stage, without the buzzwords.