A data breach rarely starts with a dramatic ‘we’ve been hacked’ moment. It starts with a weird login, a customer email you can’t explain, or a supplier asking why you’re sending invoices from a new bank account.
If you’ve got a clear process, the first 24 hours can be controlled, evidence-led and calm. If you don’t, you’ll lose time, lose trust and sometimes create legal problems you didn’t have at the start. For the wider founder framework, see Legal, Risk & Compliance: The Practical Framework Every Founder Needs to Protect Their Business.
In this article, we’re going to discuss how to:
- Stabilise the situation without destroying evidence
- Collect the right signals and data in hours, not days
- Make clean decisions on notification, comms and next steps
What A ‘Data Breach Response’ Actually Means In Practice
A practical definition: a data breach response is the set of actions you take to (1) stop the bleeding, (2) preserve proof, (3) identify what data was affected, (4) decide whether you must notify, and (5) communicate in a way that protects customers and the business.
If your response is working, you can produce these artefacts within 24 hours:
- Incident timeline: Who saw what, when, and what you did about it.
- Scope statement: Which systems, accounts and data types are involved (and what is definitely not involved).
- Decision log: Why you contained, shut down, notified, or didn’t.
- Comms drafts: One customer version, one partner version, one internal version.
If you can’t create those, you’re guessing. And guessing is expensive.
Hour 0: Call It, Control It, Contain It
The biggest failure mode in the first hour is indecision. People ‘have a look’, poke around in systems, reset random passwords, and accidentally wipe logs you’ll need later.
Do three things immediately:
1) Declare an incident. Use a simple trigger: ‘Any confirmed or suspected unauthorised access to customer, employee, payment, health, identity or authentication data’. If it meets that, you’re in incident mode.
2) Start a decision log. A shared document works (there’s a minimal sketch at the end of this section). Record time, person, action, reason. This protects you later when you’re asked why you did, or didn’t, notify.
3) Contain, don’t scramble. Your first job is to prevent further access without trampling evidence. That usually means:
- Disable compromised accounts, don’t delete them.
- Rotate keys (API keys, tokens, admin creds) in a controlled order, starting with highest privilege.
- Isolate systems by removing external access or segmenting, rather than wiping servers.
If you’re a small team, assign two roles fast: Incident Lead (decision maker) and Technical Lead (does the work). Everyone else stays out of the tools unless asked. One cook in the kitchen.
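If you want the decision log to be frictionless under pressure, here’s a minimal sketch in Python, assuming a shared file everyone can reach; a shared doc with the same four columns works just as well. The file path and example entries are hypothetical.

```python
# decision_log.py - a minimal, append-only incident decision log.
# Assumes a shared path your team can reach; a shared doc works too.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("incident_decision_log.csv")  # hypothetical shared location

def log_decision(person: str, action: str, reason: str) -> None:
    """Append one timestamped entry: time, person, action, reason."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["time_utc", "person", "action", "reason"])
        writer.writerow([datetime.now(timezone.utc).isoformat(),
                         person, action, reason])

# Hypothetical example entries:
log_decision("A. Founder", "Disabled contractor account jdoe",
             "Suspected phished credentials")
log_decision("T. Lead", "Exported Workspace audit logs",
             "Preserve evidence before containment")
```

The point is not the tooling, it’s the discipline: one entry per action, timestamped, with the reason recorded while it’s still fresh.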
Your Data Breach Response Plan For The First 24 Hours
This is the timeline I want founders to run. It’s tight, realistic and built for real ops, not a corporate playbook.
Hours 0 To 2: Preserve Evidence And Stop The Obvious Paths
Evidence matters because you’ll need it to scope the breach, learn what happened, and show you acted responsibly.
- Snapshot logs: Cloud audit logs, firewall logs, application logs, email logs, identity provider logs. Export or lock retention now (see the sketch after this list).
- Take forensic images where practical: Key servers, admin endpoints, affected laptops. If you can’t, at least preserve disk and memory artefacts for the worst affected device.
- Lock down admin: Enforce MFA on admin accounts, remove shared logins, create named accounts for emergency access.
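To make ‘export or lock retention now’ concrete, here’s a hedged sketch assuming AWS CloudTrail is one of your audit sources; the same idea applies to Workspace or 365 audit exports, and the output filename is a placeholder.

```python
# preserve_cloudtrail.py - snapshot recent CloudTrail events to a local file.
# A sketch assuming AWS; adapt the same idea to other audit log sources.
import json
from datetime import datetime, timedelta, timezone

import boto3

client = boto3.client("cloudtrail")
end = datetime.now(timezone.utc)
start = end - timedelta(days=7)  # cover the suspected window; widen if needed

# Page through management events in the window and keep every record.
events = []
paginator = client.get_paginator("lookup_events")
for page in paginator.paginate(StartTime=start, EndTime=end):
    events.extend(page["Events"])

# Write a timestamped copy to keep alongside your incident doc.
out = f"cloudtrail_snapshot_{end:%Y%m%dT%H%M%SZ}.json"
with open(out, "w") as f:
    json.dump(events, f, default=str, indent=2)
print(f"Preserved {len(events)} events to {out}")
```

An exported copy is your insurance against retention windows quietly deleting the evidence while you’re busy containing.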
Quick check: if you’re still arguing about whether it’s ‘real’, you’re already late. Treat it as real until you can prove otherwise.
Hours 2 To 6: Scope The Blast Radius With Internal Signals First
Before you message anyone externally, get a baseline scope. Your aim is not perfect certainty, it’s a defensible working view.
Gather this data from your own systems in a few hours:
- Entry point hypothesis: Phished account, leaked API key, exposed database, vulnerable plugin, supplier compromise.
- Accounts touched: Admins, finance, support inboxes, developers, third-party contractors.
- Data types: Names, emails, phone numbers, addresses, payment data, health data, ID documents, passwords, tokens.
- Time window: First suspicious event to containment time.
Also, list which services are in play: Google Workspace or Microsoft 365, Stripe, Shopify, HubSpot, Slack, AWS, Cloudflare, your CRM, ticketing system. Breaches often hop between these, especially via OAuth permissions and shared admin access.
Hours 6 To 12: Cross-Check With Public Signals And Third Parties
Once you’ve got internal facts, widen the lens. Public signals can confirm exposure, or show you’re about to be publicly named.
Do these checks:
- Have I Been Pwned checks for affected domains and key emails, where appropriate and lawful (see the sketch after this list).
- Dark web monitoring if you have it via a provider, or ask your cyber insurer or security firm to run a quick scan.
- Vendor status pages and security advisories: Your breach may be a supplier incident in disguise.
- Payment processor alerts: Unexpected refund spikes, chargeback changes, new payout destinations.
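For the Have I Been Pwned step, a minimal sketch using HIBP’s public v3 API; it assumes you hold an HIBP API key and have a lawful basis to check the addresses, and the email list and key are hypothetical placeholders.

```python
# hibp_check.py - check key mailboxes against Have I Been Pwned (API v3).
# Assumes an HIBP API key and a lawful basis to check these addresses.
import time

import requests

API_KEY = "YOUR_HIBP_API_KEY"  # placeholder
EMAILS = ["founder@example.com", "finance@example.com"]  # hypothetical

for email in EMAILS:
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={"hibp-api-key": API_KEY,
                 "user-agent": "incident-response-check"},
        params={"truncateResponse": "true"},
        timeout=10,
    )
    if resp.status_code == 200:
        names = [b["Name"] for b in resp.json()]
        print(f"{email}: found in {names}")
    elif resp.status_code == 404:
        print(f"{email}: no known breaches")
    else:
        print(f"{email}: unexpected status {resp.status_code}")
    time.sleep(6)  # conservative pause to respect HIBP rate limits
```

A hit here doesn’t prove your incident caused the exposure, but it tells you which accounts were already soft targets.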
At this point, you’ll also know if you need external help. If your team can’t confidently answer ‘how did they get in?’ within 12 hours, bring in incident response. Not because you’re weak, but because speed matters.
Hours 12 To 24: Decide Notification, Draft Comms, Protect Cash
Your job now is decision quality. This is where founders either earn trust, or lose it permanently.
Work through these decisions in order:
- Is personal data involved? If yes, you’re in the territory of GDPR and ICO reporting considerations.
- Is there a likely risk to individuals? That drives whether you may need to notify affected individuals.
- Do you have contractual notice obligations? Enterprise customers and partners often require notice within 24 to 72 hours.
- Do you need to involve police, a regulator, or your insurer now? If you’ve got cyber insurance, check notification requirements before you spend big on vendors.
Even if you don’t notify in the first 24 hours, you should have drafts ready. Speed matters once the facts land.
The One-Sentence Customer Notice Template (Fill This In)
When you do communicate, keep it plain, factual and useful. Here’s a one-sentence template you can fill:
‘On [date/time], we identified unauthorised access to [system], which may have exposed [data types]; we’ve secured access, engaged specialist support, and we recommend you [customer action] while we continue to investigate; you can contact [support channel] for help.’
Don’t write like a lawyer. Don’t speculate. Don’t hide. Tell people what they need to do next.
Quick Unit Economics: What This Breach Is Costing You Per Hour
Founders avoid hard numbers because it feels grim. But a simple calc improves decisions fast. Estimate your ‘downtime cost per hour’ and ‘response cost ceiling’.
Downtime cost per hour:
- Take your average daily gross margin (not revenue). Divide by your operating hours, typically 16 for online businesses, 8 for office-only teams.
- Add an estimate for customer support burden (extra tickets and refunds) and paid media waste (spend you can’t convert).
Example: a UK subscription business does £60k monthly revenue at 70% gross margin. Daily gross margin is roughly £60k × 0.70 ÷ 30 = £1,400, or about £87.50 per hour across 16 operating hours. If conversion is impacted for 10 hours, your baseline gross margin loss is about £875. Then add refunds, chargebacks, and churn. If churn lifts by 0.5% across 4,000 customers at £25 MRR, that’s 20 customers and £500 MRR lost, which is a bigger long-tail hit than the downtime.
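If you want to sanity-check these numbers quickly, here’s the same arithmetic as a minimal Python sketch; every input is the worked example’s assumption, not a benchmark.

```python
# breach_cost.py - rough per-hour downtime cost and churn impact.
# All inputs are the worked example's assumptions; swap in your own numbers.
monthly_revenue = 60_000        # GBP
gross_margin = 0.70
operating_hours_per_day = 16    # online business; use 8 for office-only
impact_hours = 10

daily_margin = monthly_revenue * gross_margin / 30      # ~ 1,400/day
hourly_margin = daily_margin / operating_hours_per_day  # ~ 87.50/hour
baseline_loss = hourly_margin * impact_hours            # ~ 875 total

customers = 4_000
churn_lift = 0.005              # +0.5 percentage points
mrr_per_customer = 25           # GBP
mrr_lost = customers * churn_lift * mrr_per_customer    # 500/month

print(f"Baseline gross margin loss: £{baseline_loss:,.0f}")
print(f"Recurring revenue lost to churn: £{mrr_lost:,.0f}/month")
```

Run it once with your real figures and you have a defensible number to put against any spend decision in the war room.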
Response cost ceiling:
Set a cap you can approve quickly: for many SMEs, £5k to £25k covers an initial incident response engagement. The goal is not to be cheap, it’s to be fast and controlled. Spending £10k to avoid a week of confusion is often a bargain.
Operational Guardrails That Stop The Incident Eating Your Week
A breach can swallow your calendar if you let it. Put these guardrails in place on day one:
Run a war room with a cadence. 15 minutes every 2 hours in the first day. Agenda is fixed: current facts, actions completed, blockers, next actions, comms status. Everything else is noise.
One source of truth. A single incident doc plus a single ticket list. No side chats deciding policy.
Freeze non-essential changes. No deployments, no migrations, no tool swaps. You want stability, not heroics.
Protect finance workflows. Breaches often pair with invoice fraud. Add a temporary rule: any change to bank details or payouts needs a second approval and a phone confirmation.
Customer support script. Even if you can’t share details yet, support needs a consistent line: what you can say, what you can’t, and where to route escalations.
Mini Cases: What ‘Good’ Looks Like In The First Day
Case 1: E-commerce brand, Shopify admin compromised. A contractor’s email was phished, and the attacker created discount codes and scraped customer emails. The founder disabled the account, exported Shopify audit logs, rotated API keys, and put a temporary spend cap on ads. Customer notice went out within 36 hours with clear password reset guidance and scam warnings.
Case 2: B2B SaaS, exposed S3 bucket. A misconfigured bucket held support attachments with ID documents. The team made the bucket private, preserved access logs, and mapped which customers uploaded documents in the last 90 days. They notified affected clients with specifics, offered credit monitoring for those with ID exposure, and added an automated config scanner within 7 days.
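For reference, the ‘made the bucket private’ step in Case 2 can be as quick as enabling S3’s public access block; a sketch assuming AWS and boto3, with a hypothetical bucket name.

```python
# lock_bucket.py - block all public access on a misconfigured S3 bucket.
# A sketch assuming AWS; the bucket name is a hypothetical placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="support-attachments-bucket",  # placeholder
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
# Preserve evidence too: server access logs / CloudTrail data events
# show who fetched objects while the bucket was exposed.
```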
Case 3: Private clinic, ransomware attempt stopped early. Endpoint protection blocked encryption, but the attacker accessed a receptionist mailbox. The clinic reset all admin passwords, forced MFA, checked mailbox forwarding rules, and reviewed recent patient emails for sensitive attachments. They also phoned the payment provider to flag potential fraud, which prevented a later chargeback spike.
Common Risks In A Data Breach Response (And How To Hedge Them)
This is where good intentions go wrong. Here are the traps I see most often, plus the hedge.
Risk: Destroying evidence while ‘cleaning up’. Hedge: preserve logs first, then contain. If you have to take action fast, document what you changed and when.
Risk: Not involving legal early enough. Hedge: you don’t need a legal memo to take technical action, but you do need someone who understands notification thresholds and contract obligations. Keep it practical: ‘what are our duties, by when?’
Risk: Over-notifying with speculative details. Hedge: communicate what’s known, what’s unknown, and when you’ll update. Overstating can create liability and panic.
Risk: Under-notifying because it’s embarrassing. Hedge: if personal data is involved and there’s a likely risk to individuals, delaying can turn a manageable incident into a credibility crisis.
Risk: Letting the attacker pivot to finance. Hedge: lock down payment workflows, banking changes, payroll approvals, and supplier payment requests. Fraud is the ‘second hit’ after a breach.
Risk: Treating it as a one-off. Hedge: schedule the post-incident review within 72 hours of containment, while memories are fresh, and assign owners with deadlines.
A Validation Path You Can Run In 7 To 14 Days (So This Doesn’t Repeat)
Your first 24 hours is the fire drill. The next 7 to 14 days is where you prove you learnt something, and reduce recurrence.
Run these small tests quickly:
- Tabletop exercise: Re-run the incident as a team in 60 minutes and identify where decisions or access slowed you down.
- Restore test: Do a real restore of one key system to a clean environment. Time it. If it takes longer than 4 hours, you’ve got a resilience problem.
- Access review: Remove stale admin accounts, enforce MFA, reduce privileges, rotate long-lived tokens (see the sketch after this list).
- Supplier check: Ask key vendors for their security posture and incident notification terms, then align your contracts accordingly.
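For the access review, here’s a hedged sketch assuming AWS IAM is one of your identity stores; the 90-day threshold is a common judgment call, not a rule.

```python
# stale_keys.py - flag IAM access keys older than 90 days for rotation.
# Assumes AWS IAM; the 90-day threshold is a judgment call, not a rule.
from datetime import datetime, timedelta, timezone

import boto3

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(
            UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            if key["CreateDate"] < cutoff:
                print(f"Rotate: {user['UserName']} key {key['AccessKeyId']} "
                      f"created {key['CreateDate']:%Y-%m-%d}")
```

The same review applies to OAuth grants, API keys in SaaS tools, and personal access tokens; IAM is just the easiest place to start scripting it.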
Completion check: if you can’t show a ‘before and after’ in access controls, backups, and monitoring within 2 weeks, you’ve just paid tuition and learnt nothing.
Do And Don’t Checklist For The First 24 Hours
Keep this short. You’ll read it under pressure.
- Do: Assign an Incident Lead and Technical Lead, and run a single decision log.
- Do: Preserve logs and evidence before major changes.
- Do: Scope data types and affected systems with internal signals, then public cross-checks.
- Do: Draft customer, partner and internal comms, even if you don’t send them yet.
- Don’t: Reset everything randomly or delete accounts ‘to be safe’.
- Don’t: Let five people message customers with five different stories.
- Don’t: Ignore contract notice periods; they’re often tighter than you expect.
This is the core of a clean data breach response: containment, evidence, scope, decisions, comms.
Download The Data Protection Toolkit And Lock In Your Response
If you want to turn this into a repeatable operating rhythm, download the Data Protection Toolkit: Privacy Policy, DPA & Risk Register Templates and use it to formalise your incident log, risk register, data mapping and processor controls, so your next data breach response is faster, cleaner and far less stressful.
- Build the first 24 hours around artefacts: timeline, scope statement, decision log, comms drafts.
- Use basic numbers to steer decisions: downtime cost, churn impact, response spend ceiling.
- Protect ops and cash while you investigate: war room cadence, change freeze, finance controls, consistent support scripts.
FAQ For Data Breach Response In The First 24 Hours
What counts as a data breach under UK GDPR?
A personal data breach is a security incident that leads to accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data. It’s not just hacking: it includes mis-sent emails, lost laptops and exposed databases.
Do I have to report to the ICO within 72 hours?
Not every incident is reportable, but if there’s a risk to individuals’ rights and freedoms, you should report without undue delay and where feasible within 72 hours of becoming aware. If you delay, document why in your decision log.
Should I tell customers immediately?
Only once you can say something useful and accurate, but don’t wait for perfection. If there’s a high risk to individuals, you may need to tell them promptly with practical steps they can take.
What should I do first, reset passwords or preserve logs?
Preserve logs first where possible, then contain and reset in a controlled order. If you have to reset immediately to stop ongoing access, record exactly what you changed and at what time.
When should I bring in an incident response firm?
If you can’t confidently identify the entry point, the scope, or whether data was exfiltrated within 12 hours, get specialist help. Also bring them in fast if you suspect ransomware, payment data exposure, or widespread credential compromise.
Will cyber insurance cover my costs?
Sometimes, but policies often require you to notify the insurer before appointing vendors, and they may have panel firms you must use. Check your policy terms early, ideally in the first 6 to 12 hours.
How do I stop invoice fraud after a breach?
Lock down bank detail changes, require two-person approval for payouts, and verify any payment instruction changes by phone using a known number. Attackers often use mailbox access to time convincing payment scams.
What should I keep as evidence?
Keep audit logs, access logs, email forwarding rules, authentication logs, snapshots of affected configurations, and the decision log of your actions. The point is to be able to reconstruct what happened and demonstrate responsible handling.
