Can artificial intelligence improve the fairness and efficiency of the criminal justice system, or will it merely strengthen detrimental biases? Experts indicate that its deployment thus far has been concerning, yet there is potential for positive impact.
Today, AI technology has permeated almost every facet of the criminal justice system. It is employed in facial recognition systems to identify suspects; in “predictive policing” strategies to formulate patrol routes; in courtrooms to assist with case management; and by public defenders to sift through evidence. While proponents point to gains in efficiency and equity, critics raise serious questions concerning privacy and accountability.
Last month, the Council on Criminal Justice launched a nonpartisan task force focused on AI, established to examine how AI could be utilized in the criminal justice system both safely and ethically. Researchers at RAND will support the group’s work, with the intention of eventually presenting their findings and recommendations to policymakers and law enforcement.
“There’s no question that AI can yield unjust results,” states Nathan Hecht, the task force’s chair and a former Texas Supreme Court Chief Justice. “This task force wants to bring together tech people, criminal justice people, community people, experts in various different areas, and really sit down to see how we can use it to make the system better and not cause the harm that it’s capable of.”
Risks of AI in law enforcement
Many courts and police departments are already using AI, Hecht notes. “It’s very piecemeal: Curious people going, ‘Oh, wow, there’s this AI out here, we could use it over in the criminal court.’”
However, because there are few standards governing AI deployment, civil rights watchdogs have grown concerned that law enforcement agencies are employing it in hazardous ways. Thousands of agencies have come to rely on facial recognition technology sold by companies like Clearview, which maintains a database of billions of images scraped from the internet. In many databases, Black individuals are overrepresented, partly because they reside in communities that experience over-policing. AI technology is also less accurate at discerning differences in Black people’s faces, which can lead to higher misidentification rates.
Last year, the Innocence Project, a legal nonprofit, reported at least seven wrongful arrests stemming from facial recognition technology, six of which involved wrongly accused Black individuals. Walter Katz, the organization’s director of policy, explains that police sometimes make arrests based solely on AI’s facial recognition findings, rather than using the AI as a starting point for a broader investigation. “There’s an over-reliance on AI outputs,” he says.
Katz recounts that when he attended a policing conference last fall, “it was AI everywhere.” Vendors aggressively promoted technology tools that purported to solve real problems in police departments. “But in making that pitch, there was little attention to any tradeoffs or risks,” he states. For instance, critics worry that many of these AI tools will increase surveillance of public spaces, including the monitoring of peaceful protesters—or that so-called “bias-amplifying feedback loops” will intensify law enforcement’s crackdowns on over-policed areas.
Where AI could help
Nevertheless, Katz concedes that AI does have a place in the criminal justice system. “It’ll be very hard to wish AI away—and there are places where AI can be helpful,” he says. For this reason, he joined the Council on Criminal Justice’s AI task force. “First and foremost is getting our arms wrapped around how fast the adoption is. And if everyone comes from the understanding that having no policy whatsoever is probably the wrong place to be, then we build from there.”
Hecht, the task force’s chair, identifies several areas where AI could be beneficial in the courtroom, including improving the intake process for arrested individuals and helping identify who qualifies for diversion programs, which allow offenders to avoid convictions. He also hopes the task force will provide recommendations on what types of AI usage explicitly should not be approved in criminal justice, along with steps to preserve the public’s privacy. “We want to try to gather the expertise necessary to reassure the users of the product and the public that this is going to make your experience with the criminal justice system better—and after that, it’s going to leave you alone,” he comments.
Meanwhile, numerous other independent initiatives are endeavoring to use AI to enhance justice processes. One startup, JusticeText, aims to narrow the resource gap between prosecutors and public defenders, the latter of whom are typically severely understaffed and under-resourced. JusticeText developed a tool for public defenders that sorts through hours of 911 calls, police body camera footage, and recorded interrogations, analyzing the material to determine whether, for example, police made inconsistent statements or asked leading questions.
“We really wanted to see what it looks like to be a public defender-first, and try to level that playing field that technology has in many ways exacerbated in past years,” says founder and CEO Devshi Mehrotra. JusticeText is working with approximately 75 public defender agencies nationwide.
Recidiviz, a criminal justice reform nonprofit, has also been experimenting with several ways of integrating AI into their workflows, including providing parole officers with AI-generated summaries of clients. “You might have 80 pages of case notes going back seven years on this person that you’re not going to read if you have a caseload of 150 people, and you have to see each one of them every month,” says Andrew Warren, Recidiviz’s co-founder. “AI could give very succinct highlights of what this person has already achieved and what they could use support on.”
The challenge for policymakers and the Council on Criminal Justice’s task force, then, is to determine how to develop standards and oversight mechanisms so that the advantages of AI’s efficiency gains outweigh its capacity to amplify existing biases. Hecht also hopes to guard against a future in which a “black box” AI independently makes life-changing decisions.
“Should we ensure our traditional ideas of human justice are protected? Of course. Should we make sure that able judges and handlers of the criminal justice system are totally in control? Of course,” he states. “But saying we’re going to keep AI out of the justice system is hopeless. Law firms are using it. The civil justice system is using it. It’s here to stay.”