
The Fake Door Test: A Validation Signal That Beats Waitlists

A fake door test gives you a buyer-action validation signal in days, not months. How it works, when it lies to you, and how it stacks up against waitlists, pre-sells, and the commitment metric.

May 15, 2026 · 7 min
[Image: Fake door test validation framework diagram]

TL;DR

A fake door test puts a buy button on an idea that does not exist, then measures who actually clicks. You get a buyer-action signal in days. It is cheaper than a pre-sell and harder than a waitlist. Below is how I use one, when it lies to you, and where it sits alongside the commitment metric and a paid pilot in a real validation stack.

Most founders run validation through surveys, waitlists, and customer interviews. Those are useful inputs, but none of them put a buyer in the position of actually clicking through on a purchase. A fake door test is the cheapest way I know to force a real buyer action out of a real audience before any code exists.

The goal is not the click count. The goal is whether the click pattern looks like a market or looks like a thought experiment. If you already ran the desk research in my broader startup idea validation guide, the fake door is one of the experiments I run next.

What is a fake door test?

Fake door test (definition): A pre-build validation experiment where a founder publishes a finished-looking offer for a product that does not exist, then measures how many visitors take a buyer action (a buy click, a paid deposit, a trial signup). The buyer-action conversion rate is the validation signal, compared against a known category baseline.

The visitor sees a landing page, a pricing tile, or a feature in your existing product that looks shipped. They click "Buy", "Start trial", or "Add to cart", and instead of getting the product they get an honest message that says you are still building it and would love to keep them in the loop.

The canonical indie example is the original Buffer two-page test. In late 2010, Joel Gascoigne put up a landing page with a pricing tile that did not work, watched whether visitors clicked through, then added a second page that said the product was not ready and asked for an email. He only built the real Buffer once the click pattern cleared his threshold. The point of his test was not "do people like the idea". It was "do they like it enough to take an action they would only take for a real product".

"Of 431 VC-backed shutdowns, poor product-market fit drove 43% of failures."

A fake door test attacks that 43% directly. It is the cheapest experiment that forces a stranger to react to a real purchase decision before any code is written, which is what most founders skip when they convince themselves a survey or a waitlist is "enough" validation.

How a fake door test actually works

Five moving parts, and the order matters:

[Image: Founder workspace with a fake door test plan, landing page mockup, sticky notes, and reveal-page phone screen]
  1. Write the offer like it is real. A finished-looking value prop, a finished-looking price, and a single primary action. No "if we built this, would you...". The visitor has to read it the way they would read any other product page on the internet.
  2. Pick the action that maps to real intent. A "Buy now" button is the strongest signal. A "Start free trial" is decent if your business is freemium. An email-gated demo request is weaker but still beats a waitlist email box. Make the action one click and one click only.
  3. Route the click to an honest reveal page. The page after the click is the make-or-break ethical step. It needs to say, in plain words, that the product is in development, that they are seeing this because they showed real interest, and that you would like to give them early access. Offer something concrete in return for their email (priority access, a discount, a first-customer call).
  4. Drive a relevant audience to it. A fake door run against your Twitter following or your existing user base is worth less than one run against a cold audience that matches your target ICP. The cheapest way I have seen is a small ad spend on the keywords your ICP would actually search, or a targeted post in a community where they hang out.
  5. Measure the conversion, not the traffic. A 5% click-to-intent rate from 200 targeted visitors tells you more than a 0.4% rate from 20,000. Volume is a vanity metric here.
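If you want to be honest with yourself about step 5, put a confidence interval around the conversion rate before comparing it to a baseline. Here is a minimal Python sketch using the Wilson score interval; the visitor counts and the 2% category baseline are hypothetical placeholders, not real benchmarks.

```python
from math import sqrt

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a conversion rate."""
    if trials == 0:
        return (0.0, 0.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = (z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))) / denom
    return (center - margin, center + margin)

# Hypothetical run: 10 intent clicks from 200 targeted visitors.
low, high = wilson_interval(10, 200)
baseline = 0.02  # assumed category baseline; substitute your own research

print(f"conversion: {10 / 200:.1%}, 95% CI: {low:.1%} to {high:.1%}")
print("clears baseline" if low > baseline else "inconclusive vs baseline")
```

With 200 visitors the interval is wide, which is exactly the point: a 5% headline rate can still be consistent with a much weaker true rate, so treat a lower bound above your baseline as the signal, not the raw percentage.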

→ Action Step

Stand up a one-page fake door this week. Treat the page like a real product launch and the reveal page like an honest letter. If you cannot write the offer in a way you would be proud to send to a customer, the idea is not ready yet.

Fake door vs waitlist vs pre-sell vs commitment metric

Most validation advice lumps these together. They are not the same thing, and they trade off against each other. Here is how I think about them when I am picking which one to run on a new idea.

| Test | Buyer action | Signal strength | Time to run | Best for |
| --- | --- | --- | --- | --- |
| Waitlist | Email signup | Weak. Free, low-friction. | Days | Top-of-funnel curiosity, list building |
| Fake door | Click on Buy or Start trial | Medium. Click implies real intent. | 1 to 2 weeks | B2C and freemium SaaS, idea-stage |
| Pre-sell | Deposit or full payment | Strong. Money is on the table. | 2 to 4 weeks | High-ticket B2B, regulated, services |
| Commitment metric | Defined buyer action by deadline | Strong. Action and time-boxed. | 30 days | Any vertical where you can hand-pick prospects |

A waitlist is a low bar: someone typed an email and forgot you exist. The fake door pulls that same visitor into reacting to an actual buy decision, which is much harder to fake. If you want an even harder signal, a pre-sell asks for money on top of that. The commitment metric sits next to the pre-sell in strength, but it skips the public landing page entirely, because you hand-pick the buyer list and you set the deadline yourself.

I have watched founders get 800 emails on a waitlist, feel great about it, then wonder why nobody bought anything when they finally launched. One weak signal dressed up as proof. The move I trust is stacking two signals: run a fake door to see who clicks, then run a pre-sell or a commitment metric sprint against only the buyers who came through.

When the fake door test gives you a wrong answer

Three failure modes I have hit myself or watched founders walk into:

[Image: Founder desk with fake door test analytics, circled ad reports, pricing research, and risk analysis notes]

1. Curiosity clicks from cold ads

Cheap paid traffic from broad targeting will inflate the click rate with people who are not in market. If your ad keyword is "ai for marketers" and your offer is "an AI tool for SOC 2 analysts", the clicks will look great and the post-click signal will be noise. Tighten the targeting before you trust the number.

2. Categories where trust beats novelty

In banking, payroll, regulated health, or anything where the buyer defaults to an incumbent brand, a fake door under-counts real interest. Buyers in those categories will read the page, not click, and then go check whether their incumbent already does it. That does not mean there is no demand. It means a fake door is the wrong instrument.

3. A price point you have not benchmarked

If your price is 10x the category baseline, the click rate will look terrible even if the idea is sound. If your price is half the baseline, the click rate will look fantastic and you will under-charge later. Pull comparable pricing for the category before you set the number, and run two price variants on the fake door if you can.
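If you do run two price variants, resist eyeballing the split. A quick two-proportion z-test tells you whether the difference in click rates is signal or noise. The sketch below uses hypothetical numbers ($29 and $49 price points, 150 visitors each); swap in your own counts.

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (p_a - p_b) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical split: variant A at $29 (12/150 intent clicks), variant B at $49 (5/150).
p_value = two_proportion_z(12, 150, 5, 150)
print(f"p-value: {p_value:.3f}")
```

At 150 visitors per variant, even a click rate that more than doubles can sit above the usual 0.05 threshold. That is fine for a fake door: you are looking for a directional read on price sensitivity, not a publishable result, but knowing the p-value stops you from over-interpreting a small sample.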

Ethics: the line between research and deception

A fake door is a research tool, and the post-click page is what decides whether you stay on the research side of that line or slide into bait-and-switch territory. Get that page wrong and you have earned the bad reputation, even if your motives were honest.

My rules, written down so I do not bend them later:

  • No card data, no bank data, no sensitive info on the fake door page itself.
  • The post-click reveal says the product is in development before the visitor enters anything.
  • If a deposit is taken, it is refundable on request, no hoops.
  • Emails collected go on a list for that product only. Never repurposed for a different launch.
  • If the test fails and I kill the idea, the list gets a short note explaining why.

If those rules kill the fake door for your case, a commitment metric sprint or a direct cold-outbound pre-sell will get you the same signal without ever standing up a staged landing page in the first place.

How I use fake door tests on Preuve

I have run fake doors on Preuve itself more than once. The most useful version was for a paid tier we considered called "Investor Package". I put a card on the pricing page with the price and a button that said "Get the investor package". The button took the visitor to a page that explained the product was being built and offered first-customer pricing in exchange for an email and a short fit form.

The click rate was okay, but what actually convinced me was the form completion. The people who followed through were investors and pre-seed founders, not browsers killing time on the pricing page. We built the product, and the early-access list converted into paid customers when it shipped.

I have also run fake doors that died on the post-click page. The clicks looked fine, but the form completion was around 2%, and the open-ended answers made it clear the buyers wanted a different product than the one I was about to build. A 2% rate plus a page of pivot-shaped feedback is still useful data; it just saved me from shipping the wrong thing.

If you want to skip the desk-research piece before running your own fake door, I built Preuve AI to do exactly that in 60 seconds. The free scan tells you whether the category is real before you spend a weekend designing a fake door.

FAQ

What is a fake door test?

A fake door test is a validation experiment where you advertise a product or feature that does not exist yet, then measure how many people try to buy or sign up. The signal is a buyer action on a finished-looking offer, which tells you whether demand is real before you spend a single hour building.

How is a fake door test different from a waitlist?

A waitlist captures interest. A fake door test captures intent. With a waitlist, a visitor types an email and forgets you exist. With a fake door, the visitor clicks a buy button and then hits a "coming soon" page, which forces them to react to a real purchase decision and gives you a far harder signal than a free email signup.

Is a fake door test ethical?

Yes if you handle the moment after the click with respect. The fake door has to disclose that the product is in development before any money or sensitive data is collected, and the message after the click has to feel like an honest offer for early access, not a bait-and-switch.

When does a fake door test give you a wrong answer?

Three common cases. Paid traffic at the wrong intent stage inflates clicks. A category where buyers default to incumbents under-counts true interest. A price point you have not benchmarked against alternatives skews the click rate up or down by 10x.

How many fake door clicks do I need to validate an idea?

There is no universal number. The signal that matters is the conversion ratio from a relevant audience: targeted clicks to a clear offer that turn into intent actions. I aim for at least 30 to 50 qualified visitors and a conversion ratio I can compare against a known baseline in the same category.

Want to run this process in 60 seconds?

Preuve AI analyzes your startup idea against live market data using the same validation frameworks investors use.

Test My Idea (Free)

Free audit. Takes 60 seconds.
