1. Re-state your north-star metric
Your north-star metric is the single number that shows whether your product is succeeding - think activated accounts, weekly active teams, or revenue per customer. Write it at the top of your backlog document so every idea has a clear destination.
Add one more line that lists the supporting metrics (for example: sign-up conversion, onboarding completion, retention after 30 days). These become the guardrails you refer to when evaluating ideas.
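If you also keep the backlog in a script or automate reporting against it, that header is easy to mirror in code. A minimal sketch (the constant names are hypothetical; the metric values are just the examples above):

```python
# Hypothetical header for a backlog script; swap in your own metrics.
NORTH_STAR_METRIC = "weekly active teams"
SUPPORTING_METRICS = [
    "sign-up conversion",
    "onboarding completion",
    "retention after 30 days",
]
```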
2. Collect ideas in one place
Open a shared sheet or doc and add a row for each experiment idea. Include a short name, the hypothesis, and which metric it aims to improve. Encourage teammates to add ideas anytime - the backlog is only helpful if it captures everyone's thinking.
If an idea is really a research question or needs more data, label it as "Research" so you don't mix it with launch-ready tests.
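Those columns translate directly into a simple record type if you ever script against the backlog. A minimal sketch in Python - the field names mirror the columns above and are illustrative, not a required schema:

```python
from dataclasses import dataclass

@dataclass
class ExperimentIdea:
    name: str              # short, memorable label
    hypothesis: str        # "We believe X will improve Y because Z"
    target_metric: str     # the north-star or supporting metric it targets
    status: str = "Ready"  # "Research" keeps it out of launch-ready tests

backlog = [
    ExperimentIdea(
        "Shorter sign-up form",
        "Fewer form fields will lift sign-up conversion",
        "sign-up conversion",
    ),
    ExperimentIdea(
        "Why do trials stall?",
        "We need interviews before we can write a testable hypothesis",
        "retention after 30 days",
        status="Research",
    ),
]
```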
3. Score ideas with a simple model
Give each idea a score from 1 (low) to 5 (high) in three categories:
- Impact: How strongly could this idea move your north-star metric or one of the supporting metrics?
- Effort: How much time will design, engineering, and analytics need to launch it?
- Confidence: How certain are you that the idea will work based on past data or user feedback?
Calculate a final score by adding the three numbers together, inverting effort first (6 minus the rating) so that low-effort ideas score higher rather than lower, and optionally give impact a little extra weight. Sort the backlog by the total so the top of the list always shows the most promising tests.
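Here is that arithmetic as a runnable sketch; the 1.5x impact weight and the field names are assumptions for illustration, not a fixed rule:

```python
def score(impact: int, effort: int, confidence: int,
          impact_weight: float = 1.5) -> float:
    """Combine the three 1-5 ratings into one sortable number.

    Effort is inverted (6 - effort) so a quick win counts for an idea
    rather than against it; impact gets a small extra weight.
    """
    return impact * impact_weight + (6 - effort) + confidence

ideas = [
    {"name": "Shorter sign-up form", "impact": 4, "effort": 2, "confidence": 3},
    {"name": "New pricing page hero", "impact": 3, "effort": 4, "confidence": 2},
]

# Sort descending so the most promising tests sit at the top of the list.
ideas.sort(key=lambda i: score(i["impact"], i["effort"], i["confidence"]),
           reverse=True)
for idea in ideas:
    print(idea["name"], score(idea["impact"], idea["effort"], idea["confidence"]))
```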
4. Run a quick prioritisation session
Schedule a 30-minute meeting with the people who run experiments - usually a product manager, a designer, an engineer, and an analyst or marketer. Walk through the ideas, discuss the scores, and adjust anything that feels unrealistic.
Pick one to three "ready to run" experiments and write down who will build, measure, and report the results. Everything else stays in the backlog until capacity opens up.
5. Keep the backlog fresh
A backlog only works if it stays current. Add these simple habits:
- Review the scores every two weeks and archive ideas that no longer fit your goals.
- Log the outcome of every experiment so the backlog gradually becomes a learning library.
- Celebrate when an idea graduates from "Research" to "Ready" - it keeps the team motivated.
Over time you will see trends: which metrics respond quickly, which experiments take longer, and where you might need more research before testing.
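If the backlog lives in code, or a script feeds your sheet, these habits are simple to automate. A rough sketch, assuming each idea record carries hypothetical last_reviewed, outcome, learning, and status fields:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(weeks=2)

def needs_review(idea: dict, today: date) -> bool:
    """Flag ideas whose scores have not been revisited in two weeks."""
    return today - idea["last_reviewed"] >= REVIEW_INTERVAL

def log_outcome(idea: dict, outcome: str, learning: str) -> None:
    """Archive a finished experiment so the backlog doubles as a learning library."""
    idea["outcome"] = outcome    # e.g. "win", "loss", "inconclusive"
    idea["learning"] = learning  # one-sentence takeaway for future scoring
    idea["status"] = "Archived"
```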
6. Make it useful beyond the CRO squad
A healthy backlog should feel like a shared operating system, not a CRO-only spreadsheet. Invite these teammates to co-own it:
- Product managers ensure ideas map to strategic bets and highlight when qualitative research needs to precede a test.
- Design and UX capture usability findings, supply assets, and flag when a proposed test could create inconsistent experiences.
- Engineering estimates effort, surfaces tech constraints, and notes opportunities to bundle instrumentation or platform work.
- Marketing and lifecycle plug campaign learnings into the scoring model so the backlog reflects both on-site and off-site touchpoints.
- Customer support or success adds verbatims that explain why a hypothesis matters to real users.
When multiple teams contribute, prioritisation conversations shift from "which CRO test runs next?" to "which change best supports our north-star metric and customer experience?"