Foucault's Keyboard: A Remarkably Modern Ethic for Assessing Juridical AI

In Discipline and Punish, Michel Foucault traces the evolution of punishment from the violent, public spectacles of premodernity to the more subtle, pervasive disciplinary mechanisms that shape modern society. While the shift to a “gentler” system of imprisonment and reform is often seen as enlightened progress, Foucault argues that it also enables more efficient and encompassing control over individuals, framing the modern prison as a model for institutions throughout society, including factories, hospitals, and schools. According to Foucault, these institutions employ three main techniques of control: hierarchical observation, normalizing judgment, and examination. Together, they create what he terms “disciplinary power”: a form of power that does not merely punish wrongdoing but actively seeks to correct and normalize behavior to meet societal standards.

Recidivism calculators and sentencing-guideline AIs fit directly within Foucault’s notion of “disciplinary power.” These tools analyze an individual’s past behavior and other personal data to predict future actions, estimating the likelihood of reoffending and shaping legal outcomes accordingly. Recidivism calculators serve as a modern form of hierarchical observation, assessing and sorting individuals as lawbreakers and risks. Instead of a guard in a Panopticon, these digital systems collect data in real time, passing observations through layers of data scientists, programmers, and judicial actors who ultimately control the outcomes. As with Foucault’s Panopticon, a structure designed to maintain order by allowing constant surveillance, these systems do not just punish but also attempt to align behavior with societal norms.
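
To make this mechanism concrete, consider a minimal sketch of the kind of scoring such a calculator might perform: a logistic model over a defendant’s record. The feature names and weights here are invented for illustration; deployed tools such as the COMPAS system examined by ProPublica do not disclose theirs.

```python
import math

# Hypothetical feature weights: purely illustrative assumptions,
# not taken from any deployed risk-assessment tool.
WEIGHTS = {
    "prior_arrests": 0.35,
    "age_at_first_offense": -0.04,
    "failed_appearances": 0.25,
}
INTERCEPT = -1.5

def recidivism_risk(person: dict) -> float:
    """Map a defendant's record to a 0-1 'likelihood of reoffending'."""
    score = INTERCEPT + sum(w * person[f] for f, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-score))  # logistic squash to a probability

defendant = {"prior_arrests": 3, "age_at_first_offense": 19, "failed_appearances": 1}
print(f"predicted risk: {recidivism_risk(defendant):.2f}")  # ~0.28
```

The point is not the arithmetic but the epistemic move: a life history is compressed into a handful of observed variables, and the resulting number is treated as knowledge about the person.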

Moreover, sentencing AIs enforce “normalizing judgment,” another aspect of disciplinary power that Foucault identifies. These algorithms issue recommendations based on standardized patterns, measuring offenders’ potential for rehabilitation against a “normal” societal behavior profile. By applying codified rules and reducing judicial discretion, sentencing AIs classify and sort individuals according to their proximity to societal norms, an approach that aligns closely with what Foucault calls “normalization.” However, by establishing these norms in data-driven terms, AI-driven sentencing systems may perpetuate social biases encoded in historical data, echoing Foucault’s concern that such tools do not merely seek justice but subtly enforce conformity.
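
A small synthetic experiment makes this worry concrete (every rate below is an invented assumption): if the training label records re-arrest rather than actual reoffending, the “norm” the model learns reflects surveillance intensity, not conduct.

```python
import random

random.seed(0)

# Assumption for illustration: groups A and B reoffend at the same
# true rate, but B is policed twice as heavily, so B's reoffenses
# are twice as likely to be recorded as re-arrests.
TRUE_REOFFEND_RATE = 0.3
DETECTION_RATE = {"A": 0.4, "B": 0.8}

def observed_rearrest_rate(group: str, n: int = 100_000) -> float:
    """Simulate the re-arrest base rate a model would be trained on."""
    rearrests = sum(
        random.random() < TRUE_REOFFEND_RATE
        and random.random() < DETECTION_RATE[group]
        for _ in range(n)
    )
    return rearrests / n

for group in ("A", "B"):
    print(group, round(observed_rearrest_rate(group), 3))  # ~0.12 vs. ~0.24
```

A model fitted to these labels would score group B as roughly twice as risky despite identical underlying behavior, which is precisely the conformity-enforcing dynamic described above.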

Finally, the “examination,” as Foucault describes it, combines observation with judgment and elicits truth through testing, whether in a classroom, a hospital, or a courtroom. In the case of judicial AI, the examination happens through data points that purport to reveal a person’s likely future behavior. Like a test that reveals a student’s proficiency, the AI generates a risk profile that essentially “measures” the person’s likelihood of recidivism. The outcome both creates a “truth” about the individual (their risk level) and dictates the actions that follow, such as their sentence or parole terms. Foucault calls this blend of observation and control “power/knowledge,” where data-driven truth becomes a tool of influence and control, shaping people to fit into the larger social structure.
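
In code, that two-step move from measurement to consequence might look like the sketch below; the thresholds, labels, and recommended outcomes are assumptions for illustration, not values from any actual guideline system.

```python
def risk_band(risk: float) -> str:
    """Convert a numeric score into the categorical 'truth' a court sees.
    Thresholds are illustrative assumptions, not real guideline values."""
    if risk < 0.3:
        return "low"
    if risk < 0.7:
        return "medium"
    return "high"

# Hypothetical mapping from risk label to legal consequence.
RECOMMENDATIONS = {
    "low": "parole or diversion program recommended",
    "medium": "standard supervision",
    "high": "extended supervision; parole disfavored",
}

band = risk_band(0.28)
print(band, "->", RECOMMENDATIONS[band])
```

Once the band is assigned, it functions as power/knowledge in miniature: a produced “truth” that directly licenses an institutional action.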

Thus, Foucault’s theories provide a critical lens through which to analyze the implications of using AI in criminal justice. These systems embody the shift from punishment as mere retribution to a normalized, data-driven form of control that increasingly dictates social behavior. Through hierarchical observation, normalizing judgment, and examination, AI-driven justice systems continue the disciplinary process that Foucault described, raising ethical concerns over fairness, transparency, and the potential erosion of individual agency in a data-regulated society.

Bibliography

  1. Foucault, Michel. Discipline and Punish: The Birth of the Prison. Translated by Alan Sheridan, Vintage Books, 1995.
  2. Choi, Kyu Ho, and Jungmin Lee. "The Predictive Power of AI in Sentencing: Examining the Legal Implications of Recidivism Calculators." Harvard Law Review, vol. 133, no. 7, 2020, pp. 2205-2230.
  3. Angwin, Julia, et al. "Machine Bias." ProPublica, May 2016, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
  4. Binns, Rex. "AI in Sentencing: A Legal Perspective on Risk Assessment and Recidivism Prediction Algorithms." Journal of Law and Technology, vol. 22, no. 3, 2021, pp. 301-320.
  5. Berk, Richard A., and Jon Sorensen. "Predicting Recidivism Using Machine Learning Algorithms: An Application to Criminal Sentencing." Journal of Criminal Justice, vol. 48, 2017, pp. 52-60.
  6. O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group, 2016.
  7. Dastin, Jeffrey. "Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women." Reuters, 10 Oct. 2018, www.reuters.com/article/us-amazon-com-tech-recruitment-insight-idUSKCN1MK08G.
  8. Lippmann, Matthew. "Algorithmic Risk Assessments: Technology and Discrimination in Criminal Sentencing." Law and Philosophy, vol. 37, no. 6, 2018, pp. 577-601.
