
How AI can enhance the accuracy of eyewitness identification


Imagine a robbery has occurred, and police ask witnesses to identify a suspect from a lineup. “I remember his eyes,” one witness says, while another simply states, “I recognize him.”

The first witness will be viewed as less trustworthy due to a cognitive bias known as the featural justification effect, which concerns how people interpret confidence and memory reliability in the context of an eyewitness's lineup identification. Research shows that eyewitnesses who justify an identification by citing specific facial features are judged less believable, while those who simply report recognizing the suspect are judged more accurate.

Now imagine that an artificial intelligence system analyzes the witness statements before they're evaluated by law enforcement. Using natural language processing, it assesses the language used by each witness from a neutral perspective, uninfluenced by the human bias that the featural justification effect can introduce. The AI can also produce numeric scores estimating the likelihood that each identification is accurate, helping investigators prioritize the most reliable statements.
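The first step of such a pipeline can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the study's actual model: the keyword lists, function names, and simple set-intersection logic stand in for the far more sophisticated natural language processing the researchers describe.

```python
# Toy sketch of classifying an eyewitness justification as "featural"
# (citing specific facial features) or "recognition" (holistic familiarity).
# The cue lists below are hypothetical examples, not from the study.

FEATURAL_CUES = {"eyes", "nose", "hair", "scar", "mouth", "chin", "jaw"}
RECOGNITION_CUES = {"recognize", "recognized", "recognise", "familiar", "know", "knew"}

def classify_justification(statement: str) -> str:
    """Label a statement by the kind of justification it offers.

    A real system would go beyond keywords, using contextual language
    models, and would also attach a calibrated accuracy estimate.
    """
    # Normalize: lowercase and strip simple punctuation before tokenizing.
    cleaned = statement.lower().replace(",", " ").replace(".", " ")
    words = set(cleaned.split())
    if words & FEATURAL_CUES:
        return "featural"
    if words & RECOGNITION_CUES:
        return "recognition"
    return "unclassified"
```

For example, the two statements from the opening scenario would be labeled differently: "I remember his eyes" is featural, while "I recognize him" is a recognition statement. An AI-assisted workflow would score both on estimated accuracy rather than letting the label alone drive trust.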

David Dobolyi, assistant professor of organizational leadership and information analytics at the Leeds School of Business, recently co-authored a study exploring how advanced AI tools can enhance understanding of witness statements, mitigate bias and improve decision-making in law enforcement contexts.

Dobolyi said AI and natural language processing can provide deeper insights into eyewitness reliability. "The traditional analysis has been basic—just counting words. But with recent advancements in AI, we can assess statements in a much more sophisticated way," he explained.

The study, led by Lauren Kelso, a graduate student in the University of Virginia's Department of Psychology, with co-authors Jesse Grabman, assistant professor of psychology at New Mexico State University, and Chad Dodson, psychology professor at the University of Virginia, focuses on how AI can help jurors and law enforcement assess the confidence and accuracy of eyewitnesses.

"Just because someone says they’re confident doesn’t mean they’re right. The worst mistakes come from highly confident witnesses who are actually wrong," Dobolyi said.

In the study, 1,010 participants evaluated a series of eyewitness identifications, each accompanied by a confidence statement. Participants were divided into four groups: One received no AI assistance, while the others were provided with different types of AI support, including predictions about the accuracy of identifications and graphical explanations. Each group assessed the likely accuracy of the eyewitness’s identification based on either a featural or recognition justification, allowing researchers to analyze how AI assistance influenced their judgments.

Results indicated that AI assistance significantly reduced the featural justification bias among participants who found the AI helpful. Those who perceived the AI as very useful tended to rate the accuracy of both featural and recognition statements similarly, effectively overcoming the bias. In contrast, participants who did not view the AI as beneficial continued to exhibit the bias.

While Dobolyi cautioned against blind trust in AI, he said he sees its potential to support more informed decisions in legal contexts. "We want tools that can help people make better, less biased decisions—if we can confirm their accuracy," he said.

As the research evolves, Dobolyi said it’s important to emphasize transparency in AI decision-making. "It’s essential that we understand why an AI makes a recommendation, especially in high-stakes situations like eyewitness testimony," he said.