1. Define velocity in plain language
Velocity is not just the number of tests shipped. It is how quickly ideas move from hypothesis to decision. Anchor your dashboard on three measures: throughput (tests shipped), cycle time (days from brief to decision), and decision quality (the share of tests that end in a clear next step).
Write down what each measure means in one sentence. That definition keeps the dashboard honest and prevents endless metric creep.
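If it helps to pin the definitions down, here is a minimal sketch of the three calculations against a toy log. The field names and the exact formulas are assumptions; adapt them to however your team records start, end, and decision.

```python
from datetime import date
from statistics import median

# Toy log entries; the field names are illustrative, not a required schema.
log = [
    {"name": "Onboarding copy", "start": date(2024, 5, 6),  "end": date(2024, 5, 20), "decision": "ship"},
    {"name": "Pricing layout",  "start": date(2024, 5, 13), "end": date(2024, 6, 3),  "decision": "iterate"},
    {"name": "Referral banner", "start": date(2024, 5, 20), "end": date(2024, 6, 10), "decision": None},
    {"name": "Checkout nudge",  "start": date(2024, 6, 3),  "end": None,              "decision": None},
]

finished = [e for e in log if e["end"] is not None]

# Throughput: tests that reached the end of their run in the period.
throughput = len(finished)

# Cycle time: median days from start to decision.
cycle_time_days = median((e["end"] - e["start"]).days for e in finished)

# Decision quality: share of finished tests with a documented ship/iterate/stop call.
decision_quality = sum(e["decision"] in {"ship", "iterate", "stop"} for e in finished) / len(finished)

print(f"throughput={throughput}, cycle time={cycle_time_days}d, decision quality={decision_quality:.0%}")
```

Whatever formulas you settle on, keep them to one sentence each, exactly as you would for the plain-language definitions.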
2. Build a lightweight experiment log
You cannot measure velocity without a source of truth. Keep a simple experiment log in a spreadsheet or Airtable with these fields:
- Experiment name, owner, start date, end date.
- Status (planned, running, analyzed, shipped).
- Decision (ship, iterate, stop) and why.
This log becomes the data source for the dashboard. If it is not updated weekly, the dashboard will be wrong. If you are building your experiment log from scratch, the experiment backlog north star article covers how to score and prioritise what goes into the log in the first place.

When I first built a velocity tracker for a growth team, the log lived across three different documents depending on who you asked. Consolidating to a single shared spreadsheet felt like overkill at the time; six weeks later the team was catching duplicate tests and retiring ghost experiments that had been running unnoticed.
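As a rough hygiene check, you could run something like the sketch below against a CSV export of the log. The column names (experiment_name, status, start_date, decision, decision_reason) and the 30-day threshold are assumptions; the point is that duplicates, ghost experiments, and decisions without a reason are cheap to surface automatically.

```python
import pandas as pd

# Assumed CSV export of the log; column names mirror the fields above but are not prescriptive.
log = pd.read_csv("experiment_log.csv", parse_dates=["start_date", "end_date"])

# Duplicate names usually mean two people logged the same test.
duplicates = log[log.duplicated("experiment_name", keep=False)]

# "Ghost" experiments: still marked running long after they started (threshold is arbitrary).
ghosts = log[(log["status"] == "running") &
             (log["start_date"] < pd.Timestamp.today() - pd.Timedelta(days=30))]

# Decisions recorded without a reason undermine the decision-quality metric.
missing_why = log[log["decision"].notna() & log["decision_reason"].isna()]

print(f"{len(duplicates)} possible duplicates, {len(ghosts)} ghost experiments, "
      f"{len(missing_why)} decisions missing a reason")
```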
3. Visualize the three core metrics
Use simple visuals that show momentum at a glance:
- Throughput: bar chart of tests completed per sprint or month.
- Cycle time: line chart of median days from start to decision.
- Decision quality: stacked bar showing ship vs. iterate vs. stop.
Keep the time window to the last 6 to 8 sprints. Old history makes the dashboard feel stale.
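A minimal version of those three visuals, assuming the same CSV export as above and using calendar months as a stand-in for sprints, might look like this:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Same assumed CSV export; the grouping period and column names are illustrative.
log = pd.read_csv("experiment_log.csv", parse_dates=["start_date", "end_date"])
finished = log.dropna(subset=["end_date"]).copy()
finished["month"] = finished["end_date"].dt.to_period("M")
finished["cycle_days"] = (finished["end_date"] - finished["start_date"]).dt.days

# Keep roughly the last eight periods so the dashboard stays current.
recent = finished[finished["month"] >= finished["month"].max() - 7]

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(15, 4))

# Throughput: tests completed per month.
recent.groupby("month").size().plot(kind="bar", ax=ax1, title="Throughput")

# Cycle time: median days from start to decision.
recent.groupby("month")["cycle_days"].median().plot(ax=ax2, title="Median cycle time (days)")

# Decision quality: ship vs. iterate vs. stop per month.
(recent.groupby(["month", "decision"]).size()
       .unstack(fill_value=0)
       .plot(kind="bar", stacked=True, ax=ax3, title="Decisions"))

plt.tight_layout()
plt.show()
```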
4. Add a friction tracker
Velocity drops when teams wait on instrumentation, design, or legal reviews. Track friction explicitly with a simple list of blockers. Add a count of experiments delayed by each blocker category so leadership sees where to invest. My take: friction categories matter more than velocity totals. Knowing that 60% of delays come from instrumentation gaps gives leadership something specific to fund — a headline that says "we completed eight tests" leaves them with nothing to act on.
This is often more useful than debating whether win rate is high or low.
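The count-by-category view can be as simple as the sketch below, assuming each delayed experiment is tagged with a blocker category (the categories here are just examples):

```python
from collections import Counter

# Assumed blocker tags, one per delayed experiment, pulled from the log or a separate friction column.
delays = [
    "instrumentation", "instrumentation", "design review",
    "legal review", "instrumentation", "design review",
]

counts = Counter(delays)
total = sum(counts.values())

# Share of delays per category is the number leadership can act on.
for category, n in counts.most_common():
    print(f"{category}: {n} delayed experiments ({n / total:.0%} of delays)")
```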
5. Make velocity reviews a habit
Review the dashboard in a predictable ritual, such as the first 10 minutes of the weekly growth sync. Ask three questions:
- Did throughput match our target?
- What slowed cycle time this week?
- Which decisions unlocked roadmap changes?
Document one action per review. Without actions, velocity dashboards become vanity metrics. Pair the velocity review with a lightweight retro conversation every quarter to surface systemic blockers that weekly check-ins miss.
Use velocity to protect quality
Velocity is a balance. If throughput rises and decision quality drops, slow down to improve instrumentation or hypothesis quality. A good dashboard makes those tradeoffs explicit before you over-optimize for speed. Worth noting: velocity dashboards can quietly create pressure to ship underpowered tests. If cycle time is falling but win rates are also falling, that is usually a hypothesis quality problem, not a process win.
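If you want the dashboard itself to flag that tradeoff, a couple of guard-rail checks on period-over-period rollups are enough. The numbers and thresholds below are illustrative, not targets:

```python
# Assumed per-quarter rollups taken from the dashboard; values are made up for illustration.
previous = {"throughput": 6, "cycle_time_days": 21, "decision_quality": 0.85, "win_rate": 0.30}
current = {"throughput": 9, "cycle_time_days": 15, "decision_quality": 0.60, "win_rate": 0.18}

# Throughput up while decision quality drops: speed is outrunning instrumentation or hypothesis quality.
if current["throughput"] > previous["throughput"] and current["decision_quality"] < previous["decision_quality"]:
    print("Throughput is up but decision quality is down: slow down and fix instrumentation or hypotheses.")

# Cycle time falling alongside win rate: usually a hypothesis quality problem, not a process win.
if current["cycle_time_days"] < previous["cycle_time_days"] and current["win_rate"] < previous["win_rate"]:
    print("Cycle time and win rate are both falling: review hypothesis quality before celebrating speed.")
```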
Bring velocity to the whole team
Velocity data is most valuable when it shapes conversations beyond the experimentation pod:
- Product can use cycle time trends to set realistic sprint targets and avoid overcommitting on experiment timelines.
- Engineering sees instrumentation delays quantified as a blocker category, which gives them a concrete case for prioritising tracking work in the roadmap.
- Growth and marketing understand the experiment pipeline better and can align campaign timing with expected decision dates.
- Design can see how long variant handoffs take relative to other delay types and adjust their process accordingly.
- Leadership gets a throughput number they can track quarter over quarter without needing to follow every individual test.