Perverse Incentives: The Hidden Motive Behind Real-world Failures
- Andrew Kirch
- Jan 23
- 7 min read

Most cybersecurity postmortems read like technical mysteries. A vulnerability. A misconfiguration. A missed alert. A patch that never landed.
But if you dig into enough of these stories, you start to notice a second plot running underneath the technical details. It is quieter, and it is almost always the real culprit. The organization was rewarding the wrong behavior long before the breach. People did what the system paid them to do, and the system got exactly what it asked for. As my partner, Susan Sons, is often heard to say, “Pervasive technical failures are usually people problems wearing a hat and hoping you won’t notice.”
What follows are historical case files from government, business, and cybersecurity. Each is a small investigation. Each has suspects, pressure, and a paper trail. And in each one, the motive is the same: an incentive structure that makes the wrong choice feel safer, easier, or more profitable than the right one. Some people in these examples are good; some are not. In every case, the morality of the actions, and of the people taking them, is secondary to the incentives they were given.
⸻
Case File 1: The Great Leap Forward’s grain mirage
In late 1950s China, local officials were not merely managing agriculture. They were navigating a political environment where optimism was interpreted as loyalty, and bad news could be interpreted as defiance. That is the kind of setting where numbers stop being measurements and start being declarations.
The center demanded ambitious production. Local leaders competed to show they were delivering it. Grain yields climbed on paper because paper was safer than reality. Reports of record harvests moved upward, and the system treated those reports as proof that the plan was working.
Then the next part of the mechanism clicked into place. The state used reported yields to determine how much grain could be procured and redistributed. If the numbers were exaggerated, procurement targets were exaggerated too. Grain was taken as if surplus existed, leaving villages with too little. And the same incentive that encouraged inflated reporting also discouraged warnings. When the price of bad news is punishment, bad news does not travel.
The reveal here is colder because the consequences are so large, and more personal because friends of mine lost family to starvation. The incentive was political survival and advancement through visible “success.” The behavior it rewarded was exaggerated reporting and often violent suppression of negative information. The damage was catastrophic policy built on fiction, with human suffering intensified by a reporting chain that punished honesty and rewarded performance theater.
⸻
Case File 2: Wells Fargo’s “successful” accounts
Walk into a retail bank and you see polite conversation, tidy counters, and the choreography of normal business. What you do not see is the scoreboard that sits behind the scenes. In the years leading up to the scandal, Wells Fargo set aggressive sales goals tied to cross-selling and the number of accounts and products opened.
Targets are not inherently evil. The danger is what happens when the targets are high, the pressure is relentless, and the consequences for missing are personal. At that point, the organization is not asking employees to serve customers. It is asking employees to produce a number.
Regulators later described how millions of accounts were opened without customer authorization. The story is not about one rogue employee. It is about a system where the easiest way to survive is to give the machine what it wants. The machine wanted accounts. It got accounts.
The reveal is almost embarrassing in its simplicity. The incentive was compensation and job security tied to hitting sales metrics…or, put another way, the employee’s ability to feed and shelter their family. The behavior it rewarded was gaming the metric, including unauthorized account openings and normalization of rule-bending as “how we do it.” The damage was customer harm, criminal and civil consequences, and reputational and governance fallout that dwarfed whatever short-term “performance” the numbers appeared to show.
⸻
Case File 3: Volkswagen’s clean diesel illusion
Volkswagen’s promise was the kind that makes leaders and marketers lean forward: diesel cars that were efficient, powerful, and compliant with emissions rules. It was a story that sold, a story that earned market share, and a story that needed to be true not just in advertising but in regulated tests.
Here is where the investigation turns. The test environment was known and repeatable. It was a controlled ritual. If a company’s success depends on passing a ritual, the company will optimize for the ritual.
The EPA described how software was used to circumvent emissions standards. The point was not improved real-world performance. The point was passing the test. In the lab, the vehicles behaved one way. In ordinary driving, they emitted far more pollution.
The reveal is the same pattern wearing a different suit. The incentive was to deliver a marketable claim within constraints that were difficult to meet honestly. The behavior it rewarded was optimizing for audits and tests, including deception that performs compliance while watched. The damage was legal and financial penalties, years of litigation and remediation, and an erosion of trust that outlived the original scandal.
If you work in cybersecurity, you should feel the echo. Many programs are built to pass the test, not to survive the incident.
⸻
Case File 4: Uber’s 2016 breach and the “bug bounty” disguise
A breach is discovered, and two clocks start running.
One clock measures the operational truth: what was accessed, what was taken, who is affected, what must be done to contain and recover.
The other clock measures consequence: regulators, headlines, reputation, personal liability, the executive instinct to keep the story from becoming a story.
In Uber’s 2016 incident, prosecutors later alleged the company’s then-CSO arranged a $100,000 payment to the hackers through a bug bounty channel and sought nondisclosure agreements that falsely claimed data had not been taken. The label matters because labels are how organizations attempt to convert messy facts into acceptable narratives. A bug bounty is supposed to reward good-faith reporting so vulnerabilities can be fixed. It is not supposed to be a quiet transaction that makes a breach disappear.
The reveal, again, is motive. The incentive was to avoid immediate reputational and regulatory impact. The behavior it rewarded was concealment and reframing, treating transparency as a liability rather than a duty. The damage was amplified legal exposure and a precedent-setting reminder to the entire industry: the response to an incident can become the crime, not just the intrusion itself.
⸻
Case File 5: Yahoo’s delayed breach disclosure
Yahoo discovered a massive breach affecting hundreds of millions of accounts. This should have triggered a clean governance process: assess impact, inform counsel and auditors, disclose appropriately, and protect stakeholders.
Instead, the story moved into the gray zone where perverse incentives thrive. Markets punish bad news. Deals punish uncertainty. Executives and advisors feel pressure to “control the narrative.” When valuation is on the line, disclosure begins to feel like self-sabotage.
The SEC later charged the entity formerly known as Yahoo with failing to disclose one of the world’s largest data breaches in a timely way. In its order and press release, the SEC described misleading disclosures and failures in disclosure controls, and the consequences were not theoretical. After disclosure, Verizon renegotiated the purchase price.
The reveal is the quiet temptation to wait. The incentive was to protect valuation and negotiation leverage by postponing bad news. The behavior it rewarded was delay and minimization, treating breach reality as a negotiable fact. The damage was enforcement action, penalties, and a permanent lesson that cybersecurity is also disclosure and governance risk. When you delay truth to avoid consequences, you often face greater consequences in the long run.
⸻
Case File 6: Equifax and the patch that never became urgent
Equifax’s breach does not begin with a clever exploit. It begins with a vulnerability in Apache Struts, CVE-2017-5638. It was known. It was disclosed in early March 2017. A patch existed.
This is where most organizations feel uncomfortably seen. Patching is boring. It creates friction. It competes with work that produces visible wins, like shipping features, closing deals, and keeping systems stable in the short term. Nobody gets applauded for patching on time, because the best outcome is silence. Nothing happened.
Equifax later stated that unauthorized access occurred from May 13 through July 30, 2017. The House Oversight report described how Equifax failed to patch and left systems exposed. The FTC later announced a settlement related to failures to take reasonable steps to secure its network, affecting roughly 147 million people.
The reveal is the cost of rewarding comfort over discipline. The incentive was to minimize short-term disruption and prioritize delivery over maintenance. The behavior it rewarded was delay, exceptions that became permanent, and the normalization of known exposure. The damage was massive: consumer harm at national scale, regulatory action, and years of remediation that cost far more than the “boring work” ever would have.
⸻
Why Stoic Cybersecurity was created
If you run a small or mid-sized business, these stories can feel distant, like cautionary tales from larger worlds. The uncomfortable truth is that the same incentive failures are baked into the SMB security market, just with different costumes.
The managed services industry often gets paid for activity, not outcomes. It gets rewarded for deploying tools, not reducing risk. It profits from complexity that a small operations team cannot sustainably operate. It sells enterprise-scale programs that produce thick reports, red tape, and a steady feeling of falling behind. In that environment, “security” becomes something you purchase, not something you actually get. The incentives are misaligned from the beginning, and the damage shows up later as frustration, resignation, and exposure.
Stoic Cybersecurity was created to find and break perverse incentives
We started this company because SMBs deserve a security program built around trust rather than exploitation, and around measurable results rather than theater. We exist to remove the perverse incentives that quietly cripple SMB posture: incentives that reward dependence, overwhelm, and fear. We built Stoic so that the same forces that pressure vendors to sell complexity are replaced with a simpler mandate: reduce business risk, strengthen resilience, and prove it in ways leadership can understand.
That is the Stoic move. You do not demand that people become saints. You design a system where doing the right thing is rewarded, where bad news is safe to report early, and where security is measured in outcomes that matter when the pressure hits.
Because in every case file above, the tragedy was not that someone made a mistake. The tragedy was that the system kept paying people to make the same mistake, until it finally became unaffordable.


