Weapons of Math Destruction by Cathy O’Neil hit the bookshelves in 2016 and immediately struck a nerve. American mathematician and data scientist O’Neil highlighted the many ways AI systems were already being used and the hidden dangers they posed. The book showed that these systems, often presented as impartial tools of progress, could perpetuate harm, undermine societal trust and fail to deliver on their objectives.

One story stands out as an illustration of the devastating consequences of ungoverned AI systems and automation bias, with striking parallels to our use of AI tools today. As part of a reform initiative led by the then-Chancellor of Washington DC Public Schools, administrators introduced the IMPACT system, an algorithm designed to evaluate teacher performance and improve accountability. On paper, it promised fairness and precision. In practice, it derailed careers and deprived students of effective teachers.

The system was not only deeply flawed; its complexity also made it opaque to both the administrators using it and the teachers it evaluated. Teachers who had previously received praise found themselves labeled as ineffective. Sarah Wysocki was a teacher celebrated by her school principal and by parents for her dedication. Despite her strong reputation, the IMPACT system flagged her as underperforming because her students’ test scores did not improve as expected. It later emerged that some of the earlier student scores may have been improperly inflated in an effort to improve results – a significant factor the algorithm could not detect. Wysocki’s story, and that of the other DC school teachers in her situation, demonstrated how over-reliance on a flawed and opaque AI system could harm careers and undermine public trust, all while masquerading as an improvement. Wysocki was quickly hired by a local suburban private school, but DC’s public school students were no longer the beneficiaries of her teaching.

O’Neil didn’t just tell stories – she exposed an alarming pattern. Her work highlights how AI systems used in hiring, credit scoring, education, insurance and criminal justice often operate without transparency or appropriate governance. Weapons of Math Destruction helped crystallize an essential principle: AI systems, no matter how sophisticated, must be held to the same standards of accountability as human decisions. They require governance to determine whether they deserve to be trusted.

Fast forward to today, and AI has introduced society to innovations at breakneck speed. Generative AI – systems capable of creating text, images and music – brings exciting new opportunities along with significant risks. It can generate a brilliant first draft or a persuasive response to almost any question, but it can also fabricate content (so-called ‘hallucinations’ or ‘confabulations’), infringe intellectual property rights, and exacerbate existing AI risks such as lack of explainability and unintended bias. In early 2025, Apple was forced to disable a key feature in its AI-powered news service after it concocted a headline about the alleged murderer of a healthcare CEO, falsely attributing the claim to the BBC. No such article existed, but the error spread quickly before it could be corrected.

These so-called ‘hallucinations’ highlight a fundamental issue with generative AI: its unpredictability. Unlike earlier AI systems that make predictions, generative AI systems create novel content based on statistical patterns in their training data, which means the same request, when repeated, can generate different results. Compounding this issue, many organizations do not build these models themselves but rely on so-called ‘foundation models’ developed by companies like OpenAI, Google and Anthropic. Providers of these models typically do not share their training data or give full access to the model’s inner workings, leaving users with limited ability to validate or control the system’s behavior. Efforts to improve reliability, such as filtering inputs or reviewing outputs, address the symptoms, not the cause: they cannot change the underlying behavior of the model. This creates a paradox. Generative AI’s greatest strength – its ability to find novel patterns and generate new creative content – is also its greatest weakness.

Grasping this tension is crucial for organizations planning to adopt generative AI. The unpredictability of these systems isn’t a bug to be resolved – it’s an inherent feature. Managing this uncertainty requires more than technical fixes; it calls for robust governance frameworks and a willingness to re-examine risk appetite. 

As AI adoption accelerates, the stakes are higher than ever. Organizations are rushing to implement AI to stay competitive and reap its benefits, but this rapid expansion brings significant risk to those organizations, their employees and their customers or end users. AI governance has emerged as a critical field – not just to mitigate risks but to ensure AI can be managed and trusted, paving the way for its adoption. Governance frameworks establish rules, standards and oversight to steer the development and use of AI. Self-governance is increasingly motivated by the growing legal liabilities associated with AI use. AI adoption will only intensify in the years ahead, making governance not just a best practice but fundamental to unlocking the potential of AI with confidence.

Excerpted from Governing the Machine: How to Navigate the Risks of AI and Unlock Its True Potential by Ray Eitel-Porter, Paul Dongha, and Miriam Vogel (Bloomsbury Business; October 2025)
