ICE / RICE Prioritization Calculator
Evaluate ideas cleanly, sort automatically, visualize top candidates – perfect for feature backlogs & experiments.
Add Idea
Results
No ideas yet – add an idea above.
Top 5 Visualization
Explanation & FAQ
The ICE/RICE calculator helps you prioritize ideas, features, or marketing experiments quickly and transparently. Instead of "gut feeling," you get a clear ranking backed by numbers. You enter a few estimates per idea, the calculator computes the scores, sorts automatically, and shows the top candidates as a bar chart. This makes it easy to discuss priorities as a team ("Why is Idea A ranked above Idea B?") and to document decisions.
ICE stands for Impact, Confidence, and Ease. Impact describes the expected benefit (e.g., revenue, activation, time savings) on a scale of 1–10. Confidence is your degree of certainty in percent: How good are the data, tests, or experiences? Ease is feasibility (1–10): the higher, the easier or faster. The ICE score is calculated here as Impact × Confidence × Ease / 100 – Confidence thus acts as a "reality filter": uncertain assumptions automatically get less weight.
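As a minimal sketch of the formula above (the calculator's actual implementation isn't shown here, so the function name is an assumption), the ICE score can be computed like this:

```python
def ice_score(impact: float, confidence_pct: float, ease: float) -> float:
    """ICE = Impact (1-10) * Confidence (%) * Ease (1-10) / 100."""
    return impact * confidence_pct * ease / 100

# Solid impact, good data, easy to build:
print(ice_score(7, 80, 8))  # 44.8
```

Dividing by 100 converts Confidence from a percentage into a weight between 0 and 1, which is what lets it act as the "reality filter" described above.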
RICE extends ICE with Reach and replaces Ease with Effort. Reach is the number of people or events you expect to affect in the chosen timeframe (e.g., affected users per month, leads per quarter). Impact remains 1–10, Confidence remains a percentage, and Effort is the work in person-days (or story points, if you use them consistently). The RICE score is calculated as Reach × Impact × Confidence / Effort / 100. This favors large, high-impact ideas, but high effort lowers the score.
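A corresponding sketch for RICE (again, an illustrative function, not the tool's source code) also shows why Effort must be positive, a point picked up in the FAQ below:

```python
def rice_score(reach: float, impact: float, confidence_pct: float, effort: float) -> float:
    """RICE = Reach * Impact (1-10) * Confidence (%) / Effort / 100."""
    if effort <= 0:
        # Effort of 0 would make the score infinite; require e.g. 0.5 person-days.
        raise ValueError("Effort must be positive")
    return reach * impact * confidence_pct / effort / 100

# 2000 users/month, impact 6, 70% confidence, 5 person-days:
print(rice_score(2000, 6, 70, 5))  # 1680.0
```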
Which method is better? ICE is great when you want a quick, rough comparison of many ideas, or when Reach is hard to estimate. RICE is better if you need a "portfolio view" and can reliably estimate Reach and Effort. In practice: start with ICE, refine the Top 10 later with RICE – or use the calculator in "Both" mode to see differences immediately.
Tips for good inputs: 1) Define the timeframe (e.g., 30 days) and the goal (e.g., activation) beforehand. 2) Use shared anchor points: Impact 10 = "game changer", 5 = "solid improvement", 1 = "barely noticeable". 3) Use Confidence of 80–100% only with real data (A/B tests, clear benchmarks). 4) Estimate Reach conservatively; if you are uncertain, duplicate the idea as multiple scenarios. 5) Measure Effort consistently (person-days or points), otherwise scores will be distorted.
How to use this calculator: Name the idea, enter values, click "Add Idea". You can edit, duplicate, or delete entries, and sort by ICE or RICE. With "Export CSV" you can download the list for Jira/Sheets. Everything is saved locally in the browser (LocalStorage), so you don't lose your list when you reload or navigate away – no server, no tracking.
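To illustrate the scoring, sorting, and CSV-export steps described above, here is a hedged sketch (field names like "name", "impact", etc. are assumptions; the real tool keeps this list in LocalStorage rather than in memory):

```python
import csv
import io

# Hypothetical in-memory idea list, analogous to the calculator's stored entries.
ideas = [
    {"name": "Onboarding tour", "impact": 7, "confidence": 80, "ease": 8},
    {"name": "Referral program", "impact": 9, "confidence": 50, "ease": 4},
]

# Compute ICE scores and sort the list, highest first.
for idea in ideas:
    idea["ice"] = idea["impact"] * idea["confidence"] * idea["ease"] / 100
ideas.sort(key=lambda i: i["ice"], reverse=True)

# A CSV export comparable to the "Export CSV" button.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "impact", "confidence", "ease", "ice"])
writer.writeheader()
writer.writerows(ideas)
print(buf.getvalue())
```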
Common pitfalls: different definitions of "Impact", overly optimistic effort estimates, or Reach referring to the wrong timeframe. A simple fix is a 30-minute calibration workshop: Take three known, already implemented initiatives, evaluate them retrospectively, and discuss whether the scores reflect reality well. This creates team anchor points that make future reviews much more consistent.
Why are the scales for Impact/Ease 1–10?
This forces clear, quick assessments and is easy to calibrate in a team. If you want to be more precise, work with half points (e.g., 7.5).
Which Reach unit should I use?
Use a unit that fits your goal (users, sessions, orders). Consistency is key: all ideas must be estimated in the same timeframe and in the same unit.
What if Effort is 0?
Effort cannot be 0. Set at least 0.5 person-days, otherwise the score becomes mathematically infinite and distorts the ranking.
Is a high score automatically a "Go"?
Not necessarily. The score is a prioritization signal. Also check risks, strategic dependencies, legal issues, and resources.
Why do ICE and RICE top ideas sometimes diverge strongly?
Because RICE weights Reach and Effort more heavily. A small, very simple idea can win under ICE, while a larger initiative moves up under RICE.
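A small worked example of such a rank flip (the numbers are made up for illustration):

```python
def ice(impact, confidence_pct, ease):
    return impact * confidence_pct * ease / 100

def rice(reach, impact, confidence_pct, effort):
    return reach * impact * confidence_pct / effort / 100

# Idea A: tiny quick win. Idea B: bigger initiative with wide reach.
a_ice = ice(6, 80, 9)                 # easy, so strong ICE score
b_ice = ice(8, 70, 4)                 # harder, so weaker ICE score
a_rice = rice(300, 6, 80, 1)          # small reach limits RICE
b_rice = rice(5000, 8, 70, 10)        # large reach outweighs the effort

print(a_ice > b_ice, b_rice > a_rice)  # A wins under ICE, B wins under RICE
```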
Can I share the data between devices?
The calculator saves locally in the browser. Use the CSV export to transfer the list or share it with the team.
How do I set Confidence meaningfully?
Think in evidence levels: 30% = rough assumption, 60% = qualitative signals (interviews, support tickets), 80% = hard metrics, 95% = tested hypothesis. This way you use Confidence as a quality feature of your data.
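The evidence levels above can be kept as a shared team rubric, e.g. as a simple lookup (a hypothetical sketch; the calculator itself has no such table):

```python
# Hypothetical rubric: evidence level -> Confidence value in percent.
CONFIDENCE_RUBRIC = {
    "rough assumption": 30,
    "qualitative signals": 60,   # interviews, support tickets
    "hard metrics": 80,
    "tested hypothesis": 95,     # e.g. a completed A/B test
}

print(CONFIDENCE_RUBRIC["hard metrics"])  # 80
```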
How do I deal with dependencies?
Mark dependent ideas in the name (e.g., "(Req: Tracking)") and prioritize the prerequisite separately. A high score is useless if a blocker must be solved first.
Embed this Calculator on Your Website
You can integrate this calculator for free into your own website. Get the embed code on our overview page.