Document Type
Article
Publication Date
2026
Abstract
Over half a century ago, Hannah Arendt cautioned us to “think what we are doing” when we build new technologies. Engaging with her counsel and a set of historical case studies, this Article frames what it calls exploit machina problems. Exploit machina refers to situations where broken technologies and broken governance combine to irreparably harm the public. In other words, exploit machina involves organizational choices to knowingly leverage technology as part of legally problematic conduct, including various forms of fraud. In the language of data science, exploit machina situations implicate strategic decisions in building and managing artificial intelligence (AI); they involve, for example, data quality, predictive and prescriptive analytics, and corporate governance. However, reframed in the language of computer security, exploit machina problems are functionally experienced as a form of insider attack. When broken technologies and broken governance converge, untrustworthy insiders can leverage superior information about a technology (and its flaws) to exploit the public and our democratic processes.
A portion of modern AI business models now reflects exploit machina dynamics. Using case studies of body-judging devices powered by predictive and prescriptive analytics, this Article argues that some AI implementations threaten to repeat legally problematic historical patterns of insider attacks on confidentiality, integrity, and availability. Then, drawing inspiration from Arendt's technology theory on cybernation, a form of destructive hyperautomation, this Article begins to reframe the legal and policy conversation around technology safety and exploit machina. It merges insights from data science and computer security theory with those from legal and policy scholarship on data analytics, data privacy, and AI governance. Specifically, this Article recasts technology safety in traditional legal terms: as a current problem in which organizations knowingly or intentionally inflict irreparable harms on humans. As such, it rejects the dominant policy narrative of AI safety as a hypothetical future problem of machine supremacy. To combat exploit machina, this Article offers two concrete proposals. First, it introduces a set of (First Amendment-sensitive) threat metamodeling techniques that expressly consider insider attacks and public safety. Second, after reviewing recent Supreme Court precedent, it proposes that Congress create a new technology regulator of last resort. The new agency would align existing governmental efforts in technology safety, fill regulatory and enforcement gaps in existing agencies' enabling statutes and practical capabilities, and facilitate international cooperation. The new agency might be called the Bureau of Technology Safety.
Recommended Citation
Andrea Matwyshyn, Exploit Machina, 59 U.C. Davis L. Rev. 1635 (2026).
Available at: https://insight.dickinsonlaw.psu.edu/fac_works/520