If delivery feels ‘busy’ but you still miss dates, you don’t have a work problem; you have a visibility problem. A delivery dashboard fixes that by turning day-to-day execution into numbers you can act on, before clients start chasing.
If you want the wider operating system behind this, cross-reference Business Operations: The Complete Systems Playbook for SMEs as you build it.
In this article, we’re going to discuss how to:
- Pick KPIs that expose risk early and drive the right behaviour
- Build a simple delivery dashboard you’ll actually review every week
- Use dashboard examples to improve margin, dates and client experience
Define A Delivery Dashboard In Practical Terms
A delivery dashboard is a single page view of whether you’re going to deliver what you promised, by when, at what quality, and at what cost. If it doesn’t help you make a decision in under 60 seconds, it’s not a dashboard, it’s reporting.
Practical sense-checks:
- One owner: Someone is accountable for the numbers being right, not ‘the team’.
- Lead indicators: It shows what will go wrong next week, not only what went wrong last week.
- Decision-ready: Every metric links to a lever you can pull: staffing, scope, sequencing, client comms.
- Cheap to run: It takes under 30 minutes a week to update once it’s set up.
Start With The Outcomes You’re Selling
Before you touch a spreadsheet, write down the promise. Most delivery dashboards fail because they track activity, not outcomes.
Use this one-sentence offer template to set the dashboard’s spine:
‘We deliver [result] for [who] in [time] with [quality standard], at [price], without [common pain].’
Examples that work in real businesses:
Implementation agency: ‘We deliver a working Shopify relaunch for UK retailers in 6 weeks with a stable checkout and 0 critical bugs, at £18k, without endless scope creep.’
B2B service: ‘We deliver finance ops clean-up for founder-led firms in 30 days with month-end closed in 5 working days, at £4.5k, without you living in spreadsheets.’
Once the promise is written, the dashboard questions become obvious: are we on track for time, quality and cost?
Gather Signals And Data In Two Hours (Internal First, Then Public)
Don’t start by debating KPIs. Start by collecting the raw artefacts that already exist in your business. You can do this in a single focused session.
Internal Data To Pull First
These are the fastest sources because they’re already inside your systems:
- Work in progress: Current projects, tickets, tasks and their statuses (from your PM tool).
- Dates: Committed milestones, planned vs actual completion dates, and who signed them off.
- Effort: Timesheets or rough effort estimates by role, even if they’re imperfect.
- Money: Contract value, change requests, refunds, write-offs, and hours sold vs hours consumed.
- Quality signals: Rework tickets, bug counts, QA failures, client complaints, internal handback notes.
- Client comms: Response times, outstanding approvals, number of blockers waiting on the client.
If you don’t have timesheets, use a proxy this week: count tasks completed per person per week, or story points if you’re already using them. It’s not perfect, but it gives you trend direction.
Public Data That Helps You Benchmark
Public data won’t run your delivery, but it can set reasonable expectations:
- Competitor promises: Typical lead times, SLAs and onboarding timelines on competitor sites.
- Review themes: What customers praise or complain about in your category (delivery speed, comms, quality).
- Hiring signals: If your competitors are hiring delivery managers, QA, or support roles, it hints at the pressure points.
Delivery Dashboard KPIs That Actually Move The Needle
Your delivery dashboard should balance four forces: time, throughput, quality and margin. Overweight one and the others usually suffer.
Here’s a founder-first set of KPIs that works for most service and project delivery teams. Pick 6 to 10, not 30.
Time And Predictability
These tell you if deadlines are real or wishful thinking:
- On-time milestone rate: Milestones hit on time ÷ milestones due this period. Target depends on maturity, but 85%+ is a decent operator standard.
- Schedule variance: (Actual duration − planned duration) ÷ planned duration. Track it per project type, not only overall.
- Ageing WIP: Number of work items older than X days (choose X based on your typical cycle time).
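The first two metrics above are simple ratios you can compute from any exported task list. Here is a minimal sketch in Python; the milestone records and field names are illustrative assumptions, not tied to any particular PM tool’s export format:

```python
# Illustrative milestone records; field names are assumptions,
# not taken from any specific PM tool.
milestones = [
    {"planned_days": 10, "actual_days": 12, "done_on_time": False},
    {"planned_days": 5,  "actual_days": 5,  "done_on_time": True},
    {"planned_days": 8,  "actual_days": 9,  "done_on_time": True},
]

# On-time milestone rate: milestones hit on time / milestones due this period
on_time_rate = sum(m["done_on_time"] for m in milestones) / len(milestones)

# Schedule variance: (actual - planned) / planned, averaged across milestones
variance = sum(
    (m["actual_days"] - m["planned_days"]) / m["planned_days"] for m in milestones
) / len(milestones)

print(f"On-time rate: {on_time_rate:.0%}")
print(f"Avg schedule variance: {variance:+.0%}")
```

The same two lines of arithmetic work in a spreadsheet; the point is that each number comes from a one-sentence definition, so nobody can argue about what it means.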
Throughput And Capacity
These stop you accepting work you can’t deliver:
- Cycle time: Average time from ‘started’ to ‘done’. If it’s rising, flow is broken.
- Planned vs actual capacity: Hours available vs hours assigned for the next 7 to 14 days.
- Utilisation: Billable hours ÷ total available hours. For many SMEs, 70% to 80% is healthy. Above that, quality and comms usually drop.
Quality And Rework
Quality is usually where margin goes to die. Track it like you mean it:
- First-time-right rate: Items accepted without rework ÷ items delivered. If you can’t measure it, start with a simple ‘accepted’ tick.
- Rework hours: Hours spent fixing or redoing work ÷ total delivery hours. Even 10% rework is a warning sign.
- Escalations: Number of ‘red flag’ client messages, complaints, or urgent fixes per week.
Margin And Unit Economics
Without money on the dashboard, delivery teams ‘win’ by doing more work for less profit.
- Delivery gross margin: (Revenue − delivery labour cost − direct costs) ÷ revenue. Track per project and per delivery squad.
- Hours sold vs hours burned: Sold hours (from the proposal) compared to actual hours. Show it as a £ figure, not only hours.
- Change request conversion: Approved change requests ÷ change requests raised. Low conversion often means you’re not holding boundaries.
Quick calc you can do today: if you sold a project for £10k with 80 delivery hours baked in, your revenue per hour is £125. If the team burns 110 hours, your revenue per hour drops to £91. Even if wages are ‘fixed’, that extra 30 hours crowds out paid work.
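That quick calc is worth wiring into the dashboard so it updates itself rather than living in someone’s head. A minimal sketch using the numbers above:

```python
def revenue_per_hour(contract_value: float, hours: float) -> float:
    """Effective revenue per delivery hour: contract value / hours burned."""
    return contract_value / hours

sold = revenue_per_hour(10_000, 80)     # as sold: £125/hour
actual = revenue_per_hour(10_000, 110)  # after the overrun: roughly £91/hour

# The 30 extra hours dilute every hour worked, even if wages are 'fixed'.
print(f"Sold: £{sold:.0f}/h, actual: £{actual:.0f}/h")
```

Showing the drop as a per-hour £ figure, not just hours, is what makes the overrun feel real in the weekly review.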
Choose Metrics That Create Behaviour, Not Anxiety
A dashboard is a behaviour design tool. If it makes people hide problems, it’s doing damage.
Use this filter before you add any metric:
- Can we influence it weekly? If you can’t change it within 7 days, it belongs in monthly review, not the delivery dashboard.
- Is it defined in one sentence? If people argue about the definition, you’ll argue about the number.
- Does it have an owner? Not a team, a person.
- Does it have a threshold? Green, amber, red. No thresholds means no action.
Example thresholds that are practical for SMEs:
- On-time milestone rate: Green 90%+, amber 80% to 89%, red under 80%.
- Rework hours: Green under 5%, amber 5% to 10%, red over 10%.
- Capacity: Green if next 2 weeks are under 85% planned utilisation, amber 85% to 95%, red over 95%.
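Thresholds only create behaviour if they’re applied the same way every week, so it helps to encode them once. A small sketch of a RAG helper, assuming the bands above (and treating values exactly on a boundary as belonging to the better band):

```python
def rag_status(value: float, green_at: float, amber_at: float,
               higher_is_better: bool = True) -> str:
    """Map a metric value to 'green', 'amber' or 'red' against two thresholds."""
    # For lower-is-better metrics (e.g. rework %), negate so one comparison works.
    if not higher_is_better:
        value, green_at, amber_at = -value, -green_at, -amber_at
    if value >= green_at:
        return "green"
    if value >= amber_at:
        return "amber"
    return "red"

# On-time milestone rate: green 90%+, amber 80% to 89%, red under 80%
print(rag_status(0.92, green_at=0.90, amber_at=0.80))
# Rework hours: green under 5%, amber 5% to 10%, red over 10% (lower is better)
print(rag_status(0.07, green_at=0.05, amber_at=0.10, higher_is_better=False))
```

One shared function (or one shared spreadsheet formula) means the same rework rate can’t be amber in one project and green in another.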
Dashboard Examples You Can Copy This Week
Below are three delivery dashboard patterns you can lift, depending on how your work flows. Each one fits on a single page. Each one has a clear ‘so what’.
Example 1: Project Delivery Dashboard (Agency Or Implementation Team)
Best for: 5 to 30 concurrent client projects, milestone-based delivery.
What it shows: Milestones due next 14 days, projects at risk, hours sold vs hours burned, change requests raised and approved.
Weekly action: Re-sequence work, escalate client approvals, pull in QA early, re-price scope creep before it turns into free work.
Example 2: Ticket Delivery Dashboard (Ops, Support Or Retainers)
Best for: Steady incoming work, SLA expectations, repeatable tasks.
What it shows: Tickets opened vs closed, backlog size, ageing tickets, first response time, reopens.
Weekly action: Cap WIP, rotate on-call, fix the top 3 repeat issues, tighten intake so the team stops guessing requirements.
Example 3: Field Delivery Dashboard (On-Site Work)
Best for: Engineers, installers, maintenance teams, regional routes.
What it shows: Jobs per day per engineer, travel time %, first-time-fix rate, parts availability, missed appointments.
Weekly action: Improve routing, pre-pick parts, change appointment windows, stop booking jobs without confirmation and access details.
Build The Minimum Viable Dashboard In 7 To 14 Days
Most teams overbuild dashboards. Your aim is a minimum viable delivery dashboard that creates better conversations, then you iterate.
Day 1: Decide The Dashboard Rhythm
If you can’t commit to a cadence, don’t bother. Pick one:
- Weekly: For project teams and agencies. 45 minutes, same slot, same agenda.
- Twice weekly: For high-volume ticket teams. 20 minutes, keep it tight.
Set the rule: the dashboard exists to decide, not to explain.
Days 2 To 3: Define The Numbers And Owners
Write a tiny ‘data dictionary’ under the dashboard. It stops debate.
Completion check: you should be able to point at each metric and answer:
- Where does it come from? Tool and field name.
- Who updates it? Named person.
- When is it updated? Day and time.
- What happens if it’s red? The default action.
Days 4 To 7: Build It In The Tool You Already Use
Don’t buy new software to ‘fix’ visibility. Start in Google Sheets, Excel or your PM tool’s reporting. Your first version should be ugly but useful.
A simple layout that works:
- Top row: 6 to 10 KPIs with green, amber, red.
- Middle: List of at-risk deliveries with owner and next action.
- Bottom: Notes section for blockers and decisions taken.
Once it’s stable, automate data pulls. But earn automation with a working manual process first.
Validation Path: Prove It Works With Small Tests
You don’t validate a dashboard by how pretty it looks. You validate it by whether outcomes improve within days.
Run these small tests:
- Risk call test (3 meetings): Each week, pick the top 3 at-risk items, write the decision taken, check a week later if it reduced risk.
- Capacity cap test (7 days): Set a hard WIP limit per person or per squad. Track cycle time and rework before and after.
- Client blocker test (10 emails): Add ‘waiting on client’ as a formal status and send 10 structured blocker emails. Track days stuck before and after.
If nothing changes after 2 weeks, the dashboard is either missing lead indicators, or you’re not using it to make hard calls.
Pricing And Unit Economics: Put Money On The Same Page As Dates
Delivery dashboards should protect margin, not only timelines. That means tying delivery to unit economics that hold at small scale.
Three simple numbers to add if you’re doing projects or retainers:
- Contribution per delivery hour: (Revenue − direct costs) ÷ delivery hours. Use it to spot unprofitable work types.
- Effective day rate: Revenue ÷ delivery days actually spent. Compare it to your planned day rate.
- Scope drift cost: Unapproved extra hours × blended cost per hour. Show it weekly so it doesn’t get buried.
Micro case: a 6-person agency sells a £24k build, expects 160 hours, blended cost is £35 per hour. Planned delivery cost is £5.6k. If it slips to 240 hours with no change request, delivery cost is £8.4k, margin drops by £2.8k. That’s the difference between hiring and staying stuck.
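The micro case is three multiplications, which makes it cheap to recheck every week. A sketch with the same numbers:

```python
def delivery_cost(hours: float, blended_rate: float) -> float:
    """Delivery labour cost: hours burned x blended cost per hour."""
    return hours * blended_rate

revenue = 24_000
planned = delivery_cost(160, 35)  # £5,600 planned delivery cost
actual = delivery_cost(240, 35)   # £8,400 if it slips with no change request

margin_drop = actual - planned    # £2,800 of margin gone
print(f"Planned £{planned:,.0f}, actual £{actual:,.0f}, margin lost £{margin_drop:,.0f}")
```

Put that margin-lost figure on the dashboard next to the dates and the ‘just absorb the extra hours’ option stops looking free.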
Operational Guardrails That Protect Margin And Time
The dashboard is your instrument panel. Guardrails are the rules that stop you crashing into the same problems every month.
Pick a few guardrails and make them non-negotiable:
- Definition of done: A delivery item is only ‘done’ when it meets acceptance criteria and is signed off, not when someone says they finished it.
- Client approval window: If the client doesn’t approve within 48 hours, the timeline moves. Put it in writing and show ‘awaiting client’ on the dashboard.
- Change control: No work starts on new requests without one of: a change request, a swap of scope, or an explicit ‘freebie’ decision logged by the owner.
- WIP limits: Cap how many active items each person can hold. Multitasking kills cycle time.
- Escalation trigger: If something turns red two weeks in a row, it gets a decision from the founder or delivery lead, not another discussion.
These rules feel strict until you remember the alternative: missed delivery, unhappy clients and weekends lost to firefighting.
Risks, Traps And Simple Hedges
A delivery dashboard can backfire if it becomes performance theatre. Here are the common mistakes and how to hedge them.
Risk 1: People game the metrics. Hedge: tie metrics to outcomes, and spot-check samples. For example, if ‘done’ looks too good, audit 10 items for acceptance.
Risk 2: You measure lagging indicators only. Hedge: add lead indicators like ageing WIP, blocked work and capacity overcommitment.
Risk 3: It turns into a blame tool. Hedge: keep the conversation on constraints and decisions. Ask ‘what’s the blocker and what do we change this week?’
Risk 4: It becomes too hard to maintain. Hedge: remove any metric that needs manual wrangling every week. If you can’t automate it eventually, it probably isn’t essential.
How To Run The Weekly Delivery Dashboard Meeting
The meeting is where the dashboard earns its keep. Keep it short, decision-led and consistent.
A simple agenda that works:
- 5 minutes: Review KPI row, call out reds only.
- 25 minutes: Review at-risk deliveries, agree next actions, assign owners and dates.
- 10 minutes: Capacity and sequencing for next 7 to 14 days.
- 5 minutes: Confirm client comms and escalation decisions.
Completion check: if you leave without written actions, owners and deadlines, you’ve just held a status meeting. That’s not what a delivery dashboard is for.
Download The Operations Dashboard Template And Put It Live
If you want a ready-to-use format you can adapt in under an hour, download the Operations Dashboard Template (KPIs, Tasks, Delivery Status) and use it as your first working version. Start simple, run it weekly for 4 weeks, then tighten the metrics once you can see what actually predicts slippage.
Key Takeaways
- Build your delivery dashboard around the promise: time, quality and cost, not activity.
- Validate fast: run small tests over 7 to 14 days and track if risk drops and margin improves.
- Protect capacity and profit: add guardrails like WIP limits and change control so delivery stays predictable.
FAQ For Delivery Dashboard Build And KPIs
What is the best tool to build a delivery dashboard?
The best tool is the one your team will update weekly without friction, usually Google Sheets plus your existing PM tool. Once the definitions are stable, you can automate data pulls or move into BI.
How many KPIs should a delivery dashboard include?
Keep it to 6 to 10 KPIs, otherwise you’ll end up scanning rather than deciding. If you want more detail, put it behind drill-down tabs, not on the front view.
What are the most important delivery dashboard KPIs for service businesses?
Start with on-time milestone rate, ageing WIP, utilisation, rework hours and delivery gross margin. Those five will usually surface the root cause faster than a long list of activity metrics.
How do I measure delivery quality without overcomplicating it?
Use a simple acceptance check and track first-time-right rate, plus rework hours. If you’re not ready for that, start by counting reopened tickets or client escalations and tighten later.
How often should we review the delivery dashboard?
Weekly is the default for project teams, and twice weekly suits high-volume ticket teams. The point is consistent decision-making, not perfect data.
What if the dashboard shows everything is red?
That’s usually a capacity problem, a scope problem, or both. Pause new intake, cap WIP, raise change requests and re-baseline milestones with clients rather than pretending you can ‘work harder’ forever.
How do we stop the delivery dashboard becoming a blame tool?
Keep it focused on constraints, trade-offs and decisions, and make the metrics transparent to everyone. The standard question should be ‘what do we change this week?’ not ‘who caused this?’
Should a delivery dashboard include client communication metrics?
Yes, because silence creates churn and escalations. Even a basic metric like ‘blockers waiting on client’ and first response time will improve delivery predictability quickly.
