GoldenGecko

ICE scoring explained: how to prioritize experiments

You have 20 experiment ideas and time for 3. ICE scoring helps you pick the right ones by rating each idea on Impact, Confidence, and Ease. Here's how to use it without overthinking.

January 20, 2026 · 5 min read

Every founder has more ideas than time. The dangerous move is picking experiments based on gut feeling or whatever excited you last. ICE scoring gives you a simple, repeatable way to compare ideas and focus on the ones most likely to move the needle.

ICE stands for Impact, Confidence, and Ease. You rate each experiment idea from 1 to 10 on all three dimensions, multiply the three scores together, and the highest total wins. It takes about five minutes per idea and can save you weeks of wasted effort.
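The arithmetic is simple enough to sketch in a few lines of Python. The `ice_score` function below is purely illustrative, not part of any ICE tooling:

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """Multiply the three 1-10 ratings together; the highest total wins."""
    for rating in (impact, confidence, ease):
        if not 1 <= rating <= 10:
            raise ValueError("each ICE rating must be between 1 and 10")
    return impact * confidence * ease

print(ice_score(8, 5, 3))  # 120: high impact can't rescue a slow, uncertain build
print(ice_score(5, 7, 8))  # 280: a modest but fast, well-evidenced idea wins
```

Because the scores multiply rather than add, one very low rating drags the whole total down, which is exactly the behavior you want: a brilliant idea you can't ship, or can't justify, shouldn't win.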

Impact: how much will this move the needle

Impact is the potential upside if the experiment works. A 10 means it could fundamentally change your growth trajectory. A 1 means it might move a metric by a fraction of a percent. Most experiments land between 3 and 7.

Think about impact in terms of your current bottleneck. If your activation rate is 15% and this experiment could push it to 25%, that's a massive impact on everything downstream: retention, revenue, referrals. If your activation rate is already 80%, the same experiment has much less room to make a difference.

Common mistake: rating everything as high impact because you're excited about it. Force yourself to compare. If doubling your referral rate is a 9, is a slightly better onboarding email really an 8? Probably not. Calibrate by picking your most impactful idea first and scoring everything relative to it.

Confidence: how sure are you it will work

Confidence is your evidence level. A 10 means you have strong evidence that this will work: maybe a competitor proved it, or your user interviews all point to this problem. A 1 means it's a total shot in the dark.

Sources of confidence: user feedback mentioning the problem, data showing a clear drop-off, proven playbooks from similar companies, your own past experiments that revealed this opportunity. The more evidence you have, the higher the score.

Be honest with yourself. "I think this will work because it's clever" is a 2 or 3. "Fifteen users mentioned this exact pain point in interviews, and Dropbox grew 3900% with this approach" is an 8 or 9. The whole point of this score is to counterbalance your excitement with evidence.

Ease: how quickly can you run this

Ease is about implementation speed, not just effort. An experiment you can ship in a day scores higher than one that takes a month, even if the month-long one requires less total work. Speed matters because faster experiments mean faster learning.

Consider everything: design work, engineering time, content creation, third-party dependencies, approvals. A pricing experiment that needs legal review is harder than one where you just change copy. A referral program that needs a new backend system is harder than adding a share button.

This is where solo founders have an advantage. You don't need to convince a team, wait for sprint planning, or navigate politics. If you can ship it tonight, that's a 9 or 10. Use this advantage aggressively by favoring fast experiments early on. You'll learn more from five quick experiments than one big one.

Putting it together: scoring in practice

Here's a real example. You have three experiment ideas: (A) Redesign the entire onboarding flow, (B) Add social proof to the signup page, (C) Send a personalized welcome email sequence. Let's score them.

  • Idea A: Impact 8 (onboarding is your biggest leak), Confidence 5 (you think you know the problems), Ease 3 (two weeks of design and dev). ICE = 8 × 5 × 3 = 120.
  • Idea B: Impact 5 (could lift signups 10-20%), Confidence 7 (competitors do this, users mention trust), Ease 8 (add a section, done today). ICE = 5 × 7 × 8 = 280.
  • Idea C: Impact 6 (emails could rescue churning users), Confidence 6 (best practice, easy to measure), Ease 7 (write emails, set up automation). ICE = 6 × 6 × 7 = 252.
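Scored as a batch, the ranking falls out mechanically. A minimal Python sketch using the numbers above (idea names shortened for readability):

```python
# (impact, confidence, ease) ratings for each idea, each on a 1-10 scale
ideas = {
    "A: redesign onboarding": (8, 5, 3),
    "B: add social proof": (5, 7, 8),
    "C: welcome email sequence": (6, 6, 7),
}

# Multiply the three ratings for each idea, then sort highest-first
ranked = sorted(
    ((name, i * c * e) for name, (i, c, e) in ideas.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{name}: {score}")
# B: add social proof: 280
# C: welcome email sequence: 252
# A: redesign onboarding: 120
```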

Social proof wins despite having lower impact potential because it's high confidence and very easy. You ship it today, learn from the results, and that learning might inform your onboarding redesign. This is the ICE philosophy: bias toward fast, evidence-backed experiments.

Common ICE mistakes and how to avoid them

The biggest mistake is scoring in isolation. Don't rate each experiment independently. Score them together as a batch, comparing relative to each other. This prevents grade inflation where everything gets 7s and 8s and the framework becomes useless.

Another mistake is using ICE as a rigid rule. If two experiments score within 10% of each other, they're effectively tied. Pick the one you're more excited about or the one that teaches you something new. ICE is a prioritization tool, not a decision-making algorithm.
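One way to make the "effectively tied" rule concrete is to treat two totals as a tie when they sit within roughly 10% of each other. The threshold here is a judgment call, not part of the framework:

```python
def effectively_tied(score_a: int, score_b: int, tolerance: float = 0.10) -> bool:
    """Treat two ICE totals as tied when they differ by under ~10%."""
    return abs(score_a - score_b) / max(score_a, score_b) <= tolerance

print(effectively_tied(280, 252))  # True: pick the one that teaches you more
print(effectively_tied(280, 120))  # False: the higher score wins outright
```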

Finally, don't skip re-scoring. After each experiment, your landscape changes. The results might increase your confidence in related experiments or reveal that your impact estimates were off. Update your ICE scores monthly as you learn more about your product and users.



