Justice by Algorithm: How AI Is Reshaping the American Legal System
From predictive policing to sentencing recommendations to AI-generated legal briefs, algorithms are quietly deciding who gets arrested, convicted, and imprisoned. The implications are profound.
In 2020, a man named Robert Williams was arrested in Detroit for a robbery he didn't commit. The police came to his home, handcuffed him in front of his daughters, and took him to jail. He spent about 30 hours in detention before investigators realized their mistake.
The arrest was based on a facial recognition match—an AI system had compared a grainy surveillance image to a database of driver's license photos and identified Williams as a suspect. The algorithm was wrong. Williams became the first documented case of a wrongful arrest caused by facial recognition in the United States.
He was not the last.
The Williams case attracted attention because it was clear-cut—an obvious error, an innocent man, a sympathetic victim. But it was also the visible tip of a much larger iceberg. AI is now embedded throughout the American criminal justice system, from the decision to dispatch police to a neighborhood to the recommendation for how long a convicted person should spend in prison. The technology promises efficiency and objectivity. The reality is more complicated.
The Algorithmic Stack
Understanding AI in criminal justice requires understanding how many different systems are involved at each stage of the process.
It starts with predictive policing. Companies like PredPol (now Geolitica) sell systems that forecast where crimes are likely to occur, while acoustic gunshot-detection networks like ShotSpotter direct officers to suspected gunfire in real time. Police departments use these outputs to allocate patrols. The idea is to prevent crime by putting officers where crimes are statistically most likely.
Facial recognition comes next. When crimes occur, investigators increasingly run images through facial recognition databases. The databases the FBI can search hold over 640 million photos; state and local systems add millions more. A match generates a lead; a lead can generate an arrest.
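Mechanically, a "match" is usually just the gallery photo whose face embedding is most similar to the probe image, reported only if the similarity clears a tunable threshold. The sketch below, in Python with made-up random embeddings and a hypothetical threshold, is not any vendor's system; it only shows the shape of the lookup, and why the operating threshold, not the word "match," determines how often the lead is wrong.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe, gallery, threshold=0.6):
    """Return the most similar gallery identity if it clears the threshold.

    The threshold is an operating point, not ground truth: lower it and the
    system produces more leads and more false matches.
    """
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return (name, score) if score >= threshold else (None, score)

# Toy data: random 128-dimensional vectors stand in for embeddings that a
# real system would compute with a trained face-recognition network.
rng = np.random.default_rng(0)
gallery = {f"license_photo_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)  # stands in for a grainy surveillance still

print(best_match(probe, gallery))
```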
Risk assessment enters at bail and pretrial detention. Algorithms like COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) and the Arnold Foundation's Public Safety Assessment evaluate defendants and predict their likelihood of fleeing or reoffending. Judges use these scores to decide who gets bail and at what amount.
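At their core, the pretrial tools are simple scoring functions over a defendant's record. The following sketch is a generic point-based score; the factors and point values are invented for illustration and are not the actual COMPAS or Public Safety Assessment formulas.

```python
from dataclasses import dataclass

@dataclass
class Defendant:
    age: int
    prior_convictions: int
    prior_failures_to_appear: int
    pending_charge: bool

def pretrial_risk_score(d: Defendant) -> int:
    """Illustrative point-based score (higher = flagged as riskier).

    The factors and weights here are hypothetical; the point is the shape of
    these instruments: a handful of history-based inputs summed into a small
    scale that a judge sees as low, medium, or high.
    """
    points = 0
    points += 2 if d.age < 23 else 0
    points += min(d.prior_convictions, 3)             # capped contribution
    points += 2 * min(d.prior_failures_to_appear, 2)
    points += 1 if d.pending_charge else 0
    return points

def risk_band(score: int) -> str:
    return "low" if score <= 2 else "medium" if score <= 5 else "high"

d = Defendant(age=21, prior_convictions=1, prior_failures_to_appear=0, pending_charge=True)
print(pretrial_risk_score(d), risk_band(pretrial_risk_score(d)))  # -> 4 medium
```

Notice that every input is a record of prior contact with the system, which is how the historical policing patterns discussed below flow directly into the score.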
Sentencing algorithms provide recommendations at conviction. Some jurisdictions use risk assessment scores to inform sentencing decisions, with higher-risk defendants receiving longer sentences.
Prison assignment and parole decisions also increasingly involve algorithmic assessment. Which facility? What programming? When to release? AI informs all of these.
The cumulative effect is a criminal justice system in which human decision-makers are constantly receiving algorithmic input. The humans retain formal authority, but the AI shapes what information they see and how it's presented.
The Efficiency Argument
Proponents of AI in criminal justice make several arguments that deserve fair consideration.
First, efficiency. Courts are overwhelmed. Judges handle hundreds of cases with limited time for each. Prosecutors and public defenders have caseloads that make thorough preparation impossible. AI can process information faster than humans, potentially improving the quality of decisions even if it doesn't change who makes them.
Second, consistency. Human judges are subject to mood, fatigue, and unconscious bias. Studies have shown that judicial decisions vary based on factors like time of day and weather. An algorithm that produces the same output given the same input is at least consistent, even if its criteria can be debated.
Third, resource allocation. Police departments have limited personnel. Sending officers where crimes are most likely to occur, rather than relying on intuition or politics, seems like a reasonable approach to public safety. If predictive policing reduces crime, the benefits accrue disproportionately to high-crime neighborhoods.
Fourth, objectivity—or at least the appearance of it. Human decision-makers have biases they may not recognize. An algorithm's biases can be examined, tested, and potentially corrected. Transparency about what factors the algorithm considers might be better than opaque human judgment.
These arguments are not frivolous. The pre-algorithmic criminal justice system had serious problems. The question is whether AI makes those problems better or worse.
The Bias Problem
The central critique of AI in criminal justice is that algorithms encode and amplify historical bias.
Consider predictive policing. The systems are trained on historical crime data—records of where crimes were reported and arrests were made. But this data reflects not just where crimes occurred but where police were present to observe and record them. Heavily policed neighborhoods generate more data, which leads the algorithm to predict more crime there, which leads to more policing, which generates more data. The feedback loop perpetuates the historical pattern.
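The loop is easy to make concrete. The toy simulation below uses invented numbers and two neighborhoods with identical true crime rates; because patrols go wherever more incidents have been recorded, and incidents are mostly recorded where patrols are, the neighborhood that starts with more historical records keeps pulling further ahead.

```python
import random

random.seed(1)

TRUE_CRIME_RATE = {"A": 0.3, "B": 0.3}   # identical underlying rates
recorded = {"A": 60, "B": 40}            # A starts out more heavily policed
DETECT_IF_PATROLLED = 0.9                # crime is usually recorded where police are
DETECT_IF_NOT = 0.2                      # and usually missed where they are not

for day in range(365):
    # "Predictive" allocation: patrol the neighborhood with more recorded crime.
    patrolled = max(recorded, key=recorded.get)
    for hood, rate in TRUE_CRIME_RATE.items():
        if random.random() < rate:                      # a crime actually occurs
            detect = DETECT_IF_PATROLLED if hood == patrolled else DETECT_IF_NOT
            if random.random() < detect:                # ...and is recorded only if observed
                recorded[hood] += 1

# A's recorded total ends up far ahead of B's even though the true rates were
# equal, and next year's "historical data" bakes that gap in.
print(recorded)
```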
A 2016 analysis of PredPol found that its predictions closely tracked the racial composition of neighborhoods. The algorithm wasn't explicitly using race—that would be illegal—but it was using proxies that correlated with race. The result was algorithmic justification for patterns that looked a lot like racial profiling.
Facial recognition has documented accuracy disparities. Multiple studies have found that the technology performs worse on darker-skinned faces and on women. The training data underrepresented these groups, and the algorithms learned to recognize them less reliably. In the Williams case, the algorithm confidently identified the wrong Black man.
Risk assessment tools have similar issues. COMPAS was the subject of a famous ProPublica investigation that found it falsely labeled Black defendants as future criminals at nearly twice the rate of white defendants. The algorithm's defenders argued that this was partly an artifact of how the analysis was framed, but the racial disparity in outcomes was undeniable.
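Mechanically, the ProPublica dispute is about which error rate should be equal across groups. The snippet below computes the statistic at the center of it, the false positive rate per group (defendants labeled high risk who did not go on to reoffend), on a small invented dataset; it is not the original analysis, just the arithmetic it rests on.

```python
# Invented records for illustration: (group, labeled_high_risk, reoffended).
# The real analysis used COMPAS scores and outcomes for Broward County defendants.
records = [
    ("black", True,  False), ("black", True,  True),  ("black", False, False),
    ("black", True,  False), ("black", False, True),  ("black", True,  True),
    ("white", False, False), ("white", True,  True),  ("white", False, False),
    ("white", False, True),  ("white", True,  False), ("white", False, False),
]

def false_positive_rate(rows, group):
    """Share of defendants in `group` who did not reoffend but were labeled high risk."""
    did_not_reoffend = [r for r in rows if r[0] == group and not r[2]]
    flagged = [r for r in did_not_reoffend if r[1]]
    return len(flagged) / len(did_not_reoffend)

for g in ("black", "white"):
    print(g, round(false_positive_rate(records, g), 2))
# -> black 0.67, white 0.25 in this toy data: the same kind of gap ProPublica measured
```

The defenders' counterargument was about a different statistic from the same table, calibration: within each score band, reoffense rates were similar across groups. Both things can be true at once, which is why each side could cite the same data.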
The common thread is that AI systems learn from historical data, and historical data reflects historical injustice. An algorithm that reproduces the patterns in its training data will reproduce those patterns' inequities. This isn't a bug in the technical sense—the algorithms are working as designed—but it is a bug in the moral sense.
The Transparency Problem
Even if we could fix the bias problem, we'd face a transparency problem: many criminal justice algorithms are black boxes that defendants cannot examine or challenge.
When a judge uses a COMPAS score to inform sentencing, the defendant often cannot see how that score was calculated. The algorithm is proprietary. Its weights and features are trade secrets. The defendant knows they were deemed "high risk" but not why.
This creates a due process problem. The Sixth Amendment guarantees defendants the right to confront the witnesses against them, and due process requires a meaningful opportunity to challenge the evidence. How do you confront an algorithm? How do you challenge a prediction that doesn't explain its reasoning?
Some courts have ruled that algorithmic secrecy is acceptable because judges aren't required to follow the recommendations. But this misunderstands how decision-making works. A judge who receives a "high risk" score is affected by that score even if they don't mechanically apply it. The framing shapes the decision.
Civil liberties advocates have pushed for algorithmic transparency laws, with some success. Several states now require disclosure of the factors used in pretrial risk assessment. But disclosure of factors isn't the same as disclosure of weights, and disclosure of weights isn't the same as interpretability. Understanding why an algorithm made a specific prediction remains difficult even with access to its code.
The Human Override Illusion
Defenders of criminal justice AI often emphasize that humans remain in control. The algorithm recommends; the human decides. If a recommendation is wrong, the human can override it.
This defense underestimates how difficult overrides are in practice.
Automation bias is well-documented in cognitive psychology. When humans receive algorithmic recommendations, they tend to follow them, especially under time pressure. Overriding requires active effort—you have to notice the problem, articulate why the algorithm is wrong, and take responsibility for a different decision. Going along with the algorithm requires nothing.
In criminal justice contexts, the incentives favor deference. A judge who follows an algorithm's recommendation and gets a bad outcome can point to the algorithm. A judge who overrides and gets a bad outcome is personally responsible. Risk aversion pushes toward algorithmic compliance.
The practical result is that "human in the loop" systems often become "human rubber stamp" systems. The human provides legitimacy; the algorithm provides the decision.
The Success Stories
Not all criminal justice AI is problematic. Some applications have produced genuine improvements.
New Jersey's bail reform, implemented in 2017, replaced cash bail with a risk assessment system. Early evaluations found that jail populations dropped significantly while failure-to-appear rates remained stable. More defendants were released pretrial, reducing the punishment of poverty that cash bail represented, without apparent public safety costs.
Some prosecutors' offices use AI to identify cases that should be dismissed or diverted—flagging weak evidence, minor offenses, or mitigating circumstances that busy prosecutors might miss. This can reduce over-prosecution and its attendant costs.
Cold case investigations have used AI to identify potential matches in old evidence, leading to arrests in decades-old murders and sexual assaults. The victims in these cases—often from marginalized communities—benefit from technology that makes their cases solvable.
These examples suggest that AI in criminal justice isn't inherently good or bad. It depends on the specific application, how it's designed, who oversees it, and what values it's optimized for.
The Reform Agenda
Civil liberties organizations, criminal justice reformers, and some technologists have proposed frameworks for governing AI in criminal justice.
Transparency requirements would mandate disclosure of algorithms, their training data, and their performance metrics. Defendants would have the right to examine the tools used against them.
Bias audits would require regular testing for disparate impact across racial, gender, and socioeconomic groups; a sketch of what such an audit could look like follows these proposals. Systems that fail audits would be suspended until corrected.
Procurement standards would prohibit government purchase of AI systems that can't demonstrate accuracy and fairness. Currently, vendors make claims that are rarely verified.
Bans on certain applications would prohibit uses where the risks clearly outweigh the benefits. Some cities have banned police use of facial recognition entirely. Some advocates argue that predictive policing should be similarly prohibited.
A right to a human decision would guarantee that no criminal justice decision affecting liberty is made by an algorithm alone. Humans must have genuine authority and adequate information to exercise it.
Judicial training would ensure that judges understand the limitations of algorithmic recommendations—their error rates, their potential biases, their appropriate weight in decision-making.
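As a gesture at what the bias-audit proposal above could mean in practice, here is a minimal sketch. It assumes a vendor reports, per demographic group, how many people were assessed, flagged high risk, and later reoffended; the two checks (a four-fifths flag-rate ratio and a cap on the false positive rate gap) are common audit heuristics borrowed from employment-discrimination practice and fairness research, not legal standards for criminal justice tools.

```python
from dataclasses import dataclass

@dataclass
class GroupStats:
    assessed: int               # defendants assessed in this group
    flagged: int                # labeled high risk
    flagged_no_reoffense: int   # labeled high risk but did not reoffend
    no_reoffense: int           # everyone in the group who did not reoffend

def audit(groups: dict, selection_ratio_floor: float = 0.8,
          fpr_gap_ceiling: float = 0.10) -> list:
    """Return a list of findings; an empty list means the audit passed."""
    findings = []
    flag_rates = {g: s.flagged / s.assessed for g, s in groups.items()}
    fprs = {g: s.flagged_no_reoffense / s.no_reoffense for g, s in groups.items()}

    # Four-fifths check: the lowest group's flag rate should be at least 80%
    # of the highest group's.
    if min(flag_rates.values()) / max(flag_rates.values()) < selection_ratio_floor:
        findings.append(f"flag-rate ratio below {selection_ratio_floor}: {flag_rates}")

    # Error-rate parity check: false positive rates should be close across groups.
    if max(fprs.values()) - min(fprs.values()) > fpr_gap_ceiling:
        findings.append(f"false positive rate gap above {fpr_gap_ceiling}: {fprs}")
    return findings

groups = {
    "black": GroupStats(assessed=1000, flagged=450, flagged_no_reoffense=270, no_reoffense=600),
    "white": GroupStats(assessed=1000, flagged=300, flagged_no_reoffense=140, no_reoffense=600),
}
print(audit(groups))   # this invented data fails both checks
```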
These reforms face resistance. Law enforcement agencies value the tools' efficiency. Vendors protect proprietary systems. The political constituency for criminal defendants is weak. Progress has been incremental.
The Bigger Picture
The debate over AI in criminal justice is really a debate about what kind of society we want to live in.
One vision emphasizes efficiency and public safety. Crime is a serious problem. Criminal justice resources are limited. AI helps allocate those resources optimally, protects potential victims, and holds offenders accountable. The fact that algorithms reproduce historical patterns reflects the reality that crime patterns have historical roots.
Another vision emphasizes justice and civil liberties. The criminal justice system has been a primary mechanism for racial oppression in America. Embedding its historical patterns into algorithms doesn't make those patterns objective—it makes them harder to change. Efficiency gains mean nothing if they come at the cost of equal treatment under law.
These visions aren't entirely incompatible. It's possible to imagine AI that improves both efficiency and fairness—systems that identify bias and correct for it, that flag weak cases for dismissal, that ensure consistent treatment regardless of race or income. But achieving this requires intentional design, rigorous oversight, and genuine commitment to both values.
What we have now is technology deployed for efficiency with fairness as an afterthought. The results are predictable. Robert Williams was arrested in front of his children because a facial recognition algorithm was wrong, and no one in the system had incentive to question it.
The technology will continue advancing. The choices we make now about how to govern it will shape criminal justice for generations. Getting those choices right requires taking seriously both what AI can offer and what it can take away.
---
Related Reading
- China's New AI Law Requires Algorithmic Transparency — And the West Is Watching
- Japan Bets Big on AI Immigration: New Visa Fast-Tracks AI Researchers
- California Just Passed the Strictest AI Hiring Law in America
- The White House Just Created an AI Safety Board — Here's Who's on It
- The New Attack Surface: How Hackers Are Exploiting AI Agents in 2026