Human Error Isn’t Human
Still blaming people for system failures? That’s not risk management—it’s risk deflection.
When “Human Error” Stops the Conversation
We’ve all read the post-incident reports that end with the same tidy phrase: “due to human error.” It’s become corporate shorthand for “somebody made a mistake, and that’s all you need to know.”
But in high-stakes environments—whether that’s data security, healthcare, or mining operations—stopping the investigation at human error is like blaming gravity for a fall. It’s technically true. But it tells you nothing useful about prevention, and even less about resilience.
Too often, the phrase becomes a full stop instead of a starting point. The real question isn’t who made the mistake; it’s why the system allowed that mistake to matter.
Was the task ambiguous? Was the environment high-pressure? Was the process confusing, outdated, or impossible to follow as written?
Most failures don’t start with a bad decision. They start with a process that was designed in isolation from how work actually happens.
That’s not human error. That’s a systems failure in disguise.
The Trouble With Human Error
Human error isn’t a cause. It’s an effect. A downstream symptom of deeper design flaws, broken workflows, and cultural blind spots.
Safety science figured this out decades ago. James Reason’s “Swiss Cheese Model” reframed accidents as what happens when latent system failures line up—holes in the successive layers of defence that normally keep people safe. The person at the sharp end of the error isn’t the cause. They’re the last line of defence.
More recently, cognitive systems engineering and human factors research have expanded that lens. Professor Sidney Dekker’s work on the “New View” of human error emphasises the context in which decisions are made. Mistakes are not random—they’re shaped by the information, pressures, and constraints people face in real time.
That means if someone clicked the wrong button, skipped a step, or ignored a protocol, the right question isn’t “why didn’t they follow the rule?” It’s “why did breaking the rule make sense to them at the time?”
This isn’t just theory. It plays out in boardrooms and courtrooms.
When things go wrong, procedural non-compliance is often the headline finding. But beneath that headline, you’ll usually find a tangle of small, systemic contributors: unclear documentation, overstretched teams, outdated control frameworks, and incentives that reward speed over care.
These aren’t outliers, and they show up in ways that are easy to overlook:
Confusing interfaces
Poorly written procedures
Inconsistent training
Conflicting KPIs
Tools that don’t match the task
And rules that are impossible to follow in practice
In the end, it’s often the system that allows the error to happen. It leaves the door open and relies on people not walking through it. But that’s not resilience—that’s luck. Because any system that depends on perfect human performance isn’t built to adapt. It’s built to break.
The Real Risk Surface
In risk management, the disconnect between work as imagined and work as done is a recurring theme. The manual says one thing. The real world demands another. The gap between the two? That’s where risk lives.
People fill the gap every day—navigating ambiguity, resolving conflicts, smoothing over clunky systems. Until one day, something goes wrong. And the gap gets renamed “non-compliance.”
This is your human risk surface: where people, processes, and platforms collide under pressure, with imperfect information and limited time.
Most organisations don’t map that surface. They map policies. They audit procedures. But they rarely ask how people actually get things done. Or what friction forces them into unsafe or insecure behaviours in the first place.
This is where smart leaders focus. Not on the error itself, but on the conditions that made it inevitable.
What to Do Instead: From Blame to Learning
So how do you shift from punishment to prevention? You start treating mistakes as data, not dead ends.
Here’s what that looks like in practice:
🔍 Investigate context, not just compliance
Don’t stop at “who made the mistake?” Ask “what were they dealing with?” Was the person under time pressure? Were they using outdated tools? Did they have the information they needed? When someone bypasses a protocol, it’s usually not out of carelessness—it’s because the process didn’t match the task. If your investigation doesn’t surface that friction, it’s incomplete.
🛠 Redesign for the way work really happens
Most policies are written from the boardroom. But most risk shows up on the frontline. Shadow the people doing the work. See what gets skipped, patched, worked around. Map out the critical moments where human judgment meets unclear systems—and fix the mismatch. Risk isn’t reduced by tightening control. It’s reduced by making the right action easier than the wrong one.
🗣 Remove the fear of reporting
If people only speak up after something goes wrong, you’ve already lost the lead time. The organisations that learn fastest are the ones where people can raise their hand before there’s a breach, a spill, or a system failure—without fear of blame. This isn’t just culture. It’s architecture. Design reporting systems that reward transparency, not perfection.
📈 Track near misses and weak signals
Near misses are the clearest warnings you’ll get. They expose system vulnerabilities in plain sight, even if nothing went wrong this time. Sometimes it’s luck. Sometimes it’s a last-minute catch. Either way, they offer a crucial opportunity to analyse the conditions that could lead to something more serious.
But don’t overlook the quieter signals: repeated workarounds, high helpdesk volumes, backlogged maintenance, inconsistent form completions. These patterns often surface long before a headline incident does.
⚖ Rethink accountability
True accountability isn’t about naming the person. It’s about understanding the system. Yes, people make decisions. But those decisions are shaped (and at times cornered) by the environment around them. Real leadership owns the conditions, not just the consequences.
And if you're still thinking, “but they should’ve known better”—ask yourself this:
Did the system make doing the right thing obvious, easy, and supported? If not, you’re not managing risk. You’re just managing optics.
When Systems Learn, People Don’t Have to Pay the Price
You don’t fix a plane by firing the pilot. You fix the checklist. The handover. The cockpit alert. The assumptions about what someone will do when the engine fails.
Organisations should be no different.
Want fewer errors? Build better systems. Want real resilience? Don’t ask who failed; ask what made failure inevitable.
Disclaimer: This post isn’t legal or financial advice—just ideas to think with. For decisions that affect your business, speak to someone who knows your context.