I still remember the first time I encountered a system that insisted it was right when I knew it was catastrophically wrong. A university loan calculation had misclassified me, and no amount of evidence would convince the algorithm otherwise. The human operator just shrugged: “Computer says no.” That was 2003. Two decades later, we’re deploying systems with life-altering authority across healthcare, criminal justice, and employment, yet we seem determined to repeat that same pattern at scale.
Every technological revolution carries the seeds of its own problems. The AI boom of 2024-2025 echoes mistakes the computing industry has made repeatedly since the 1980s: proprietary lock-in, insufficient testing before deployment, dismissal of edge cases, opacity in decision-making systems, and the assumption that speed matters more than safety. But these aren’t abstract technical failures. Behind every bug, every crashed system, every botched deployment stood real people whose lives were upended by our collective refusal to learn from documented disasters.
The patterns are depressingly familiar. What’s different this time is the scale and the stakes.
The Foundation Risk: Vendor Lock-In Amplifies Every Other Failure
Before examining specific disasters, we must understand the structural condition that transforms technical failures into enduring catastrophes: vendor lock-in. When proprietary systems fail, users cannot migrate to alternatives. Corporations shield themselves from accountability whilst victims remain trapped.
Microsoft’s dominance in the 1990s and early 2000s provides the template. Proprietary file formats were deliberately obfuscated to prevent compatible implementations. Government agencies couldn’t share documents with citizens using alternative software. Small businesses faced forced upgrade cycles they couldn’t afford, and some lost years of data in format transitions. Bill Gates’ internal memos explicitly discussed using file format incompatibility as a competitive weapon, part of a strategy that came to be known as “embrace, extend, extinguish.”
The open-source community fought back. Projects like OpenOffice and LibreOffice reverse-engineered formats. The EU mandated open document standards for government procurement. Standards bodies defended open protocols against proprietary encroachment. These battles established a crucial principle: openness and interoperability protect users when vendors fail or pivot.
Today’s AI landscape replicates this pattern with alarming precision. OpenAI’s API creates dependency on proprietary models. Training data remains closed. Platforms design deliberate incompatibility. When models hallucinate, produce biased outputs, or companies pivot away from safety commitments, users face prohibitive switching costs. Proprietary control delays transparency because you cannot audit what you cannot access. Every other risk compounds when exit becomes economically impossible.
Open-source alternatives exist. Meta’s Llama models, Mistral AI, and other projects demonstrate that transparency and community governance remain viable. Federation architectures and data portability could become requirements rather than afterthoughts. The question is whether we possess the collective will to demand them before the next generation of proprietary moats becomes unbreakable.
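To make data portability concrete rather than aspirational, here is a minimal sketch, with entirely hypothetical class and method names, of the thin abstraction layer that stops an application being welded to one vendor’s API: callers depend on an interface, and swapping the backing model becomes a change at construction time rather than a rewrite.

```python
from typing import Protocol


class TextModel(Protocol):
    """Any model backend the application is willing to talk to."""

    def complete(self, prompt: str) -> str: ...


class HostedModel:
    """Hypothetical wrapper around a proprietary hosted API."""

    def __init__(self, client):
        self._client = client

    def complete(self, prompt: str) -> str:
        # The vendor-specific call is confined to this one method.
        return self._client.generate(prompt)


class LocalModel:
    """Hypothetical wrapper around a self-hosted open-weights model."""

    def __init__(self, pipeline):
        self._pipeline = pipeline

    def complete(self, prompt: str) -> str:
        return self._pipeline(prompt)


def summarise(model: TextModel, document: str) -> str:
    # Application code depends only on the interface, so changing vendor
    # is a one-line change where the model is constructed, not a rewrite
    # of everything that calls it.
    return model.complete("Summarise the following:\n" + document)
```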
When Safety Culture Collapses: The Therac-25 and Horizon Disasters
The Therac-25: When Software Kills
The Therac-25 radiation therapy machine killed patients between 1985 and 1987 through software failures that should never have reached production. Patients entered treatment rooms trusting machines with their lives. Some received radiation doses 100 times higher than intended, suffering radiation burns, paralysis, and agonising deaths whilst manufacturers and hospitals initially denied machine culpability.
The technical failures were straightforward: race conditions in software allowed simultaneous conflicting commands. Poor error handling produced cryptic codes like “MALFUNCTION 54” that provided no actionable information. Inadequate testing meant developers never simulated rapid operator input sequences that triggered catastrophic failures. The company had removed hardware safety interlocks to save costs, creating over-reliance on unvalidated software.
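The Therac-25 itself ran hand-written PDP-11 assembly, so what follows is a deliberately simplified Python sketch of the general check-then-act shape of such a race, with every name invented for illustration:

```python
import threading
import time

# Deliberately simplified illustration of a check-then-act race, not the
# Therac-25's actual logic. The safety flag is checked, the operator edits
# the setup during the gap, and the machine acts on a configuration that
# was never re-verified.

setup = {"beam": "electron", "verified": True}

def operator_edits():
    time.sleep(0.01)
    setup["beam"] = "x-ray"     # new mode needs different safety checks...
    setup["verified"] = False   # ...which have not yet been run

def fire_beam():
    if setup["verified"]:       # check passes against the old setup
        time.sleep(0.02)        # window in which the edit lands
        print("Firing", setup["beam"], "beam")  # acts on the changed setup

edit = threading.Thread(target=operator_edits)
fire = threading.Thread(target=fire_beam)
fire.start(); edit.start()
edit.join(); fire.join()
```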
But the technical problems were compounded by organisational negligence and denial. AECL, the manufacturer, prioritised defending its reputation over patient safety. Recalls were delayed, operators were blamed, and the credibility of medical physicists reporting incidents was systematically attacked. Independent safety reviews never occurred. Transparent incident reporting was absent. Accountability mechanisms didn’t exist.
The medical physics community organised informal incident-sharing networks when official channels failed. Academic researchers conducted independent investigations when manufacturers stonewalled. Survivors and families formed advocacy groups demanding transparent safety reporting. Professional societies eventually developed new software safety standards, but only after deaths forced action.
The Horizon Scandal: Britain’s Preview of AI Failure
Fast-forward to the Post Office Horizon scandal, arguably the most comprehensive preview of AI governance failure we possess. Between 1999 and 2019, flawed accounting software wrongly accused over 700 subpostmasters of theft, leading to prosecutions, imprisonment, bankruptcy, and suicides. Jo Hamilton lost her post office, her savings, and nearly her home before her conviction was overturned in 2021. Lee Castleton spent 13 years fighting to clear his name after being bankrupted by legal costs. Some victims died before seeing justice.
Software bugs in Fujitsu’s Horizon system created phantom shortfalls in accounts. Post Office executives knew about software problems but chose prosecution over investigation. They sustained a systematic cover-up spanning two decades, destroyed evidence, and attacked victims’ credibility. Victims were told they were “the only one” experiencing problems, a deliberate lie designed to isolate and silence dissent. Fujitsu and the Post Office blamed each other whilst destroying lives.
Moreover, the Post Office couldn’t abandon Horizon despite known flaws. Sunk costs and lack of alternatives trapped them in a proprietary system that was actively destroying lives. Vendor lock-in prevented migration even as harm mounted. The scandal illustrates every failure mode simultaneously: technical bugs, organisational cover-ups, vendor unaccountability, and structural lock-in amplifying damage.
Healthcare AI denying treatment, hiring algorithms rejecting qualified candidates, criminal justice systems recommending harsh sentences based on flawed models: when these systems fail, will vendors and institutions admit fault or blame victims? Horizon provides the answer. “Computer says no” will become unchallengeable authority unless we build independent oversight, audit trails, appeal mechanisms, and institutional accountability into the architecture from day one.
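None of that machinery is mysterious. As a purely illustrative sketch, with every name hypothetical, this is roughly what a decision log with a built-in appeal hook looks like: each automated decision carries enough context to be independently reviewed, and an appeal flags the relevant records rather than disappearing into a call centre.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of an auditable decision record and appeal hook.

@dataclass
class DecisionRecord:
    subject_id: str
    model_version: str
    inputs: dict
    outcome: str
    explanation: str
    appealed: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class DecisionLog:
    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def record(self, rec: DecisionRecord) -> DecisionRecord:
        self._records.append(rec)
        return rec

    def appeal(self, subject_id: str) -> list[DecisionRecord]:
        """Flag every decision about a subject for independent human review."""
        hits = [r for r in self._records if r.subject_id == subject_id]
        for r in hits:
            r.appealed = True
        return hits
```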
The Crisis Management Lesson: Intel’s FDIV Bug
Not all disasters end in cover-ups. The Intel Pentium FDIV bug of 1994 offers a different lesson: crisis management matters as much as technical competence.
Professor Thomas Nicely discovered floating-point division errors whilst researching prime numbers. A flawed lookup table in the processor’s floating-point unit produced incorrect calculations in scientific and financial applications. Years of calculations were potentially compromised for scientists across multiple disciplines. Doctoral researchers faced re-running completed work. Financial institutions worried about mortgage calculations and risk modelling accuracy.
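The failure was easy to demonstrate once you knew where to look. The check that circulated at the time fits in a couple of lines; here it is translated into Python, with the widely reported flawed result noted in the comments:

```python
# The test that circulated in late 1994, translated to Python. In exact
# arithmetic the residual is zero; a Pentium with the missing lookup-table
# entries returned roughly 256, because 4195835 / 3145727 hit one of the
# flawed cases.
x, y = 4195835.0, 3145727.0
print(x - (x / y) * y)   # 0.0 (within rounding) on correct hardware
```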
Intel’s initial response was disastrous. The company downplayed the bug, claimed the “average user” wouldn’t notice, and offered replacements only to users who could prove they needed the accuracy. CEO Andy Grove prioritised protecting the stock price over consumer trust. Intel’s own engineers had identified the flaw months before Nicely went public, but the company chose silence over disclosure.
Online mathematics and computing forums coordinated testing, shared division patterns that triggered errors, and created public testing registries when Intel wouldn’t disclose failure conditions. Consumer advocacy groups organised boycotts. IBM stopped shipping Pentium computers. The pressure forced Intel’s hand.
Intel eventually did the right thing: full recall, unconditional replacement, transparent communication. The incident faded from institutional memory because the crisis was resolved. But the initial denial cost Intel credibility that took years to rebuild. The lesson is clear: admitting flaws quickly and offering unconditional remediation preserves trust better than defending the indefensible.
Today’s AI companies consistently downplay accuracy problems. Large language models hallucinate facts, producing confidently wrong answers in legal research, medical information, and financial advice. Vendors blame users for “misusing” systems and resist transparency about failure modes. They’re repeating Intel’s initial mistake, but unlike a chip recall, algorithmic bias embedded in infrastructure will persist for decades if we don’t demand better now.
What Y2K Taught Us About Technical Debt
The Y2K crisis offers a different warning. Two-digit year encoding in decades-old systems threatened global infrastructure collapse, requiring the largest coordinated remediation effort in computing history. COBOL programmers came out of retirement to fix systems they’d written 30 years earlier. Citizens faced uncertainty about whether power, water, banking, and medical systems would function.
The root cause was short-term thinking. In the 1960s and 1970s, memory was expensive. Two-digit years saved bytes. Nobody imagined systems would still run in 2000. Technical debt accumulated across decades until it became an existential crisis. Legacy system maintenance remained chronically underfunded until public panic forced response.
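The underlying arithmetic was trivial, which is part of what made the debt so easy to accumulate. A minimal sketch of the failure mode:

```python
# Minimal sketch of the arithmetic at the heart of Y2K: with only two
# digits stored, the year 2000 sorts before 1965, and anything computed
# across the century boundary goes negative or nonsensical.
def years_since(stored_year: int, current_year: int) -> int:
    return current_year - stored_year

print(years_since(65, 99))   # 34: a record from 1965, checked in 1999
print(years_since(65, 0))    # -65: the same record checked in 2000
```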
But Y2K also demonstrated that massive coordination can avert disaster when society takes threats seriously and dedicates resources to prevention. Cooperation crossed corporate and national boundaries, remediation techniques were shared openly, and organisations voluntarily disclosed vulnerabilities in critical infrastructure. Disaster was prevented through collective action.
The lesson has since been forgotten: because Y2K caused no visible disasters (precisely because of the massive intervention), younger generations dismiss it as “overhyped” and never absorb the prevention lesson. Success bred complacency.
AI training on biased historical data embeds prejudices into infrastructure that will persist for decades. Today’s “good enough” decisions about bias, transparency, and accountability create tomorrow’s technical debt crisis. Unlike Y2K, there’s no clear deadline forcing remediation. We’re building the problem into foundations that will prove extraordinarily expensive to fix later.
The Path Forward: Learning From History
Proven Solutions Exist
The patterns are clear. Proprietary lock-in traps users when systems fail. Inadequate testing harms vulnerable populations first. Organisational cultures prioritise reputation over accountability. Opacity delays problem discovery. Speed culture overrides safety protocols. Technical debt compounds dangerously when ignored.
We possess decades of hard-won lessons, open-source alternatives, and proven resilience frameworks. Defence in depth, circuit breakers, graceful degradation, transparent failure modes, audit trails, independent oversight: these aren’t theoretical concepts. They’re battle-tested responses to documented failures.
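To take just one of those patterns, a circuit breaker is a few dozen lines of code, not a research programme. The sketch below is my own illustration rather than any particular library’s implementation: after a run of failures the unreliable dependency stops being called, the system degrades gracefully, and calls resume only after a cooling-off period.

```python
import time

class CircuitBreaker:
    """Minimal sketch of the circuit-breaker pattern: after a run of
    failures, stop calling the unreliable dependency and fall back,
    then retry once a cooling-off period has passed."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None   # None means the circuit is closed (healthy)

    def call(self, func, *args, fallback=None, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback              # degrade gracefully, don't hammer
            self.opened_at = None            # cooling-off over: probe again
            self.failures = 0
        try:
            result = func(*args, **kwargs)
            self.failures = 0                # success closes the circuit
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback
```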
Regulatory frameworks must mandate interoperability, not as aspirational goals but as architectural requirements. Testing standards must protect edge cases before deployment, not treat production environments as beta testing grounds. Transparency requirements must enable independent audits. Accountability mechanisms must create real consequences for preventable harm.
The surveillance capitalism dimension makes this urgent. AI isn’t just buggy; it’s extractive by design. Data harvesting, behavioural prediction, and corporate control over personal information create harms beyond technical failures. GDPR principles around data minimisation and user rights provide starting points, but enforcement remains insufficient.
The choice is ours. We can demand open-source alternatives like Llama and Mistral. We can support federation and data portability initiatives. We can insist on independent oversight before deployment. We can connect AI governance battles to broader movements around right-to-repair and digital sovereignty.
Or we can wait for the next Horizon scandal, the next Therac-25, the next preventable disaster that destroys lives whilst corporations insist their systems are reliable.
The victims of previous technological failures paid the price for our collective learning. Jo Hamilton fighting for 13 years to clear her name. Radiation therapy patients trusting machines that killed them. Scientists re-running years of calculations. We owe them more than repetition.
Technology should amplify human creativity and capability, not replace accountability with algorithmic authority. The lessons exist. The alternatives exist. What remains uncertain is whether we possess the collective will to apply them before the next generation suffers consequences we could have prevented.
The web was meant to be open, interoperable, and resilient. AI can inherit those principles if we choose to demand them. History suggests we won’t. I’m cautiously hopeful we might prove history wrong.
For more explorations of how technology shapes our collective future, and why ethical architecture matters more than impressive demos, you’ll find my other work here: https://netscapenation.co.uk/author/talia/
