How to run your first growth experiment
Running your first growth experiment can feel overwhelming, but it doesn't need to be. This guide walks you through the entire process from hypothesis to results, with practical examples you can copy.
Most founders skip experimentation because it sounds academic. They picture labs, statistical significance calculators, and teams of data scientists. But a growth experiment is just a structured way to test an idea before betting the farm on it.
If you've ever changed a headline and watched what happens to signups, you've already run an experiment. This guide helps you do it deliberately, so you learn faster and waste less time on ideas that don't work.
What counts as a growth experiment
A growth experiment is any deliberate change you make to your product, marketing, or process where you define what you expect to happen before you make the change. That's it. You don't need fancy tools or a statistics degree.
The key word is deliberate. Randomly changing things and hoping for the best isn't experimentation. Writing down "I think changing the CTA from 'Sign up' to 'Start free' will increase signups by 15% because it reduces perceived commitment" is. The difference is that one teaches you something regardless of the outcome.
Good experiments have three parts: a hypothesis (what you think will happen and why), a metric (how you'll measure it), and a timeline (how long you'll run it). That's your minimum viable experiment.
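Written down, the whole thing fits on three lines. This template is just one way to capture it, not a standard format:

```
Hypothesis: If we [change], then [metric] will [increase/decrease] by [amount] because [reason]
Metric:     [one primary metric]
Timeline:   [how long you'll run it]
```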
How to write a hypothesis that actually helps
A useful hypothesis follows this format: "If we [change], then [metric] will [increase/decrease] by [amount] because [reason]." The 'because' is the most important part. It forces you to articulate your assumption, which is what you're really testing.
Bad hypothesis: "A new landing page will get more signups." Good hypothesis: "If we add social proof (customer count and logos) above the fold, signup rate will increase by 10% because visitors currently don't trust us enough to try the product." The good version tells you what to change, what to measure, and what you'll learn.
Don't stress about the exact number. Predicting a 10% lift when you get 12% still validates your thinking. The number is there to set expectations and help you decide if the result is meaningful or just noise.
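If you want a rough sense of whether your traffic can even detect the lift you're predicting, a back-of-the-envelope calculation helps. This sketch uses the standard sample-size formula for comparing two proportions; the baseline rate and lift are placeholders you'd swap for your own numbers:

```python
import math

baseline = 0.05  # assumed 5% signup rate; replace with yours
lift = 0.10      # the 10% relative lift you predicted
p1, p2 = baseline, baseline * (1 + lift)

# Standard two-proportion sample-size formula
# (~95% confidence, ~80% power)
z_alpha, z_beta = 1.96, 0.84
pbar = (p1 + p2) / 2
n = (z_alpha * math.sqrt(2 * pbar * (1 - pbar))
     + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p2 - p1) ** 2
print(f"Roughly {math.ceil(n):,} visitors per period")  # ~31,000 at a 5% baseline
```

If that number is far beyond your weekly traffic, test bigger, bolder changes; small tweaks will drown in noise.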
Picking the right metric to track
Your experiment needs one primary metric. Not three, not five. One. Having multiple primary metrics lets you cherry-pick the one that looks good and ignore the ones that don't. That's not learning, that's confirmation bias.
Pick a metric that's close to the change you're making. If you're changing your onboarding flow, measure activation rate, not revenue. Revenue is too far downstream and has too many other variables affecting it. You want to measure the first thing your change should impact.
Also track one or two guardrail metrics to make sure you're not accidentally breaking something. If you're testing a more aggressive upgrade prompt, your primary metric is conversion rate, but your guardrail is churn rate. If conversions go up but so does churn, you've learned something different from what you expected.
Running the experiment without a data team
You don't need Mixpanel, Amplitude, or a data warehouse to run experiments. Google Analytics, a spreadsheet, and your database are enough to start. The goal is to compare before and after, not to build a perfect measurement system.
For your first experiments, use the simplest approach: measure the metric for one week before the change, make the change, then measure for one week after. This isn't perfect, but it's infinitely better than guessing. As you get more comfortable, you can graduate to proper A/B testing tools.
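When the two weeks are up, a few lines of Python give you a rough noise check. This is a sketch, assuming you can count visitors and signups for each window; the numbers below are invented, and a before/after comparison can still be skewed by seasonality or traffic shifts:

```python
import math

def compare_conversion(visitors_before, signups_before, visitors_after, signups_after):
    """Two-proportion z-test: is the before/after difference likely real?"""
    p1 = signups_before / visitors_before
    p2 = signups_after / visitors_after
    pooled = (signups_before + signups_after) / (visitors_before + visitors_after)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_before + 1 / visitors_after))
    z = (p2 - p1) / se
    print(f"Before: {p1:.1%}  After: {p2:.1%}  Relative lift: {(p2 - p1) / p1:+.1%}")
    # |z| > 1.96 corresponds roughly to 95% confidence
    print("Likely a real change" if abs(z) > 1.96 else "Could easily be noise")

compare_conversion(visitors_before=4200, signups_before=210,
                   visitors_after=4350, signups_after=261)
```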
Record everything in a simple document: what you changed, when, what you expected, what actually happened, and what you learned. This experiment log becomes your most valuable growth asset over time. Six months from now, you'll have a library of tested ideas specific to your product.
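A log entry doesn't need to be fancy. Here's the shape, with made-up numbers reusing the CTA example from earlier:

```
What changed:  Homepage CTA, "Sign up" → "Start free" (shipped [date])
Expected:      +15% signup rate, because it reduces perceived commitment
Actual:        +9% signup rate; churn guardrail unchanged
Learned:       Commitment framing helps, but less than predicted; try
               adding "no credit card required" next
```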
What to do with the results
When your experiment ends, you'll get one of three outcomes: it worked as expected, it didn't work, or the results are unclear. All three are valuable if you extract the right lessons.
If it worked, ask why. Was your hypothesis correct, or did it work for a different reason? If changing the CTA increased signups but the reason was a seasonal traffic spike, you haven't learned what you think you learned. Dig into the data before declaring victory.
If it didn't work, that's not a failure. You've eliminated an idea and can move on to the next one. The only real failure is not running the experiment at all and spending three months building a feature based on a hunch. Document what you learned and use it to inform your next hypothesis.
Problems this guide helps with
Users sign up and disappear
Your signup numbers look good, but users vanish after day one. They create an account, maybe poke around, then never return. You're filling a leaky bucket.
You can't find your first 100 users
Your product is built but you have no idea where to find users. You've told friends and family, posted on social media, and... nothing. Finding early users feels impossible.

Here's the truth from founders who've done it: your first 100 users almost never come from scalable channels. They come from manual, unscalable effort. Stripe's first users came from the Collison brothers walking up to people at tech meetups and offering to install the product on their laptops right there. Pieter Levels found his first users by being active in nomad communities for years before launching Nomad List. The Indie Hackers community is full of stories where founders' first 100 users came from one Reddit thread, one Hacker News post, or one conversation in a Slack group.

Stop looking for a growth hack and start doing things that don't scale.
Put this into practice
Golden Gecko gives you proven playbooks matched to your goals, step-by-step guidance, and AI that tells you what your results mean.