GoldenGecko
Strategy

Growth experiments vs A/B tests: what's the difference

A/B tests are just one type of growth experiment, not the whole picture. Most startups overindex on A/B testing and miss bigger opportunities. Here's when each approach makes sense.

February 5, 2026 · 5 min read

People use "growth experiment" and "A/B test" interchangeably, but they're not the same thing. An A/B test is a specific method. A growth experiment is a broader concept that includes A/B tests but also includes before/after tests, fake door tests, concierge experiments, and more.

Understanding the difference matters because many founders think they can't experiment until they have enough traffic for A/B tests. That's like saying you can't cook until you have a professional kitchen. You can run meaningful experiments with 50 users. You just use different methods.

What is a growth experiment

A growth experiment is any structured test designed to validate or invalidate an assumption about your growth. The structure is what separates it from just trying stuff: you have a hypothesis, a way to measure the outcome, and a plan for what you'll do with the results.

Growth experiments come in many forms. You might test a new pricing model by offering it to the next 50 signups. You might test a referral mechanic by building an MVP version and seeing if anyone uses it. You might test demand for a feature by creating a button that tracks clicks before you build anything (a fake door test).

The common thread is learning. Every experiment should teach you something about your users, your product, or your market, whether it succeeds or fails. If you can't articulate what you'll learn from running it, it's not an experiment.

What is an A/B test

An A/B test is a specific type of experiment where you randomly split users into two groups: one sees the current version (control) and one sees a variation. You measure the same metric for both groups and compare. The randomization is what makes A/B tests powerful: it controls for all the variables you can't measure.
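Randomized assignment doesn't require a heavyweight tool. One common pattern, sketched below (the function and experiment names are illustrative, not from any particular library), is to hash the user ID together with the experiment name, so each user deterministically lands in the same bucket every visit:

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministic 50/50 split. Hashing user + experiment name means a
    given user always sees the same variant, and different experiments
    split independently of each other. Illustrative sketch only."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "control" if int(digest, 16) % 2 == 0 else "variant"

# Same user, same experiment -> same bucket on every visit
assert assign_variant("user-42", "new-signup-page") == \
       assign_variant("user-42", "new-signup-page")
```

Because assignment depends only on the hash, you get a stable split without storing any state, and the halves come out roughly even across a large user base.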

A/B tests require enough traffic to detect meaningful differences, and the numbers add up fast. On a signup page converting around 10%, detecting a 10% relative improvement (10% → 11%) with standard confidence takes roughly 15,000 visitors per variant; 1,000 visitors per variant only reliably catches lifts of around 40% or more. Testing a lower-traffic pricing page? You might need months. This sample-size requirement is why A/B tests don't work well for early-stage products.
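Sample-size needs depend heavily on the baseline conversion rate. The standard two-proportion formula makes that concrete; the sketch below assumes a two-sided 95% confidence level and 80% power (conventional defaults, not figures from this guide):

```python
import math

def visitors_per_variant(baseline, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant for a two-proportion z-test.
    z_alpha = 1.96 is two-sided 95% confidence; z_beta = 0.84 is 80% power."""
    p_new = baseline * (1 + relative_lift)
    diff = p_new - baseline                  # absolute difference to detect
    p_bar = (baseline + p_new) / 2           # average (pooled) rate
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / diff ** 2
    return math.ceil(n)

print(visitors_per_variant(0.10, 0.10))  # 10% -> 11%: roughly 15,000 per variant
print(visitors_per_variant(0.10, 0.40))  # 10% -> 14%: roughly 1,000 per variant
```

The takeaway: the smaller the lift you want to detect, the sample size grows with the square of the difference, which is why subtle optimizations demand so much more traffic than big swings.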

When they work, A/B tests are the gold standard. They give you high confidence in causal relationships: this change caused this improvement. But they're slow, they require tools, and they only test incremental changes to existing flows. They're a scalpel, not a machete.

Other types of growth experiments

Before/after tests (also called pre/post) are the simplest method. Measure a metric, make a change, measure again. Less rigorous than A/B tests, but far faster, and well suited to early-stage products with low traffic. You can run three before/after tests in the time it takes to get one A/B test result.
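The readout for a before/after test is just arithmetic: measure the metric over comparable windows and compute the relative change. A minimal sketch (illustrative names; note the caveat in the comment, which is exactly the rigor you trade away versus randomization):

```python
def before_after_lift(before_conversions, before_visitors,
                      after_conversions, after_visitors):
    """Relative change in conversion rate between two time windows.
    Caveat: unlike an A/B test, this can't separate your change from
    seasonality, traffic mix, or anything else that shifted between
    the two windows."""
    before_rate = before_conversions / before_visitors
    after_rate = after_conversions / after_visitors
    return (after_rate - before_rate) / before_rate

# e.g. 30/1000 signups before the change, 45/1000 after
print(f"{before_after_lift(30, 1000, 45, 1000):+.0%}")  # prints "+50%"
```

Keeping the two windows the same length and over similar days (weekday vs. weekend traffic often differs) reduces, but never eliminates, the confounding risk.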

Fake door tests help you validate demand before building anything. Create a button or page for a feature that doesn't exist yet. Measure how many people click. If nobody clicks, you just saved yourself weeks of development. Dropbox's famous explainer video, which grew its beta waitlist from 5,000 to 75,000 signups overnight, was essentially a fake door test.

Concierge experiments test the value proposition manually before automating. Instead of building an AI recommendation engine, manually recommend things to 20 users and see if they find it valuable. If the manual version doesn't work, the automated version won't either. This is how Zapier validated their concept by manually connecting APIs before building the platform.

When to use which approach

Use growth experiments (non-A/B) when you're early stage and have low traffic, when you're testing big changes or new features, when you need to learn fast, or when you're exploring a new area. The priority is speed of learning, not precision of measurement.

Use A/B tests when you have enough traffic (1,000+ per variant per week), when you're optimizing an existing flow, when the change is incremental, or when the stakes are high enough to justify the slower timeline. The priority is confidence in the result.

Most startups should spend 80% of their experimentation budget on fast, broad experiments and 20% on rigorous A/B tests. As you grow and your traffic increases, that ratio naturally shifts. But even companies like Airbnb and Netflix still run plenty of non-A/B experiments for bigger, riskier ideas.

Building an experimentation habit

The goal isn't to pick the perfect method for each test. It's to build a habit of testing ideas before committing to them. A founder who runs ten rough before/after tests in a month learns more than one who spends the month setting up a single perfect A/B test.

Start with the simplest method that can answer your question. Can a before/after test tell you enough? Use that. Need more rigor? Upgrade to an A/B test. Not sure if anyone wants the feature? Run a fake door test. Match the method to the question, not the other way around.

The experimentation muscle gets stronger with use. Your first few experiments will feel clunky and uncertain. By your twentieth, you'll have intuition for which methods work in which situations, and you'll move much faster. The only way to get there is to start running experiments, imperfectly, right now.

