
In an era where artificial intelligence is rapidly transforming industries, the legal profession has become one of its most intriguing testing grounds. At the University of North Carolina School of Law, a recent experiment pushed this exploration even further — into the courtroom itself. The school conducted a mock trial where artificial intelligence models, not humans, served as the jury. The goal: to examine whether machines can truly grasp the complexity, nuance, and morality of human justice.
A Legal First: The AI Jury Trial
The experiment, led by UNC law professor Joseph Kennedy, recreated a criminal case involving a teenage defendant accused of participating in an unarmed robbery. The student “defendant” claimed innocence, saying he was merely present and not involved in the crime. The scenario was based on a real-life case Kennedy had once handled, providing a realistic foundation for evaluating the technology’s reasoning abilities.
Instead of human jurors, three of today's most advanced AI systems were called to deliberate: ChatGPT (from OpenAI), Grok (developed by Elon Musk's xAI), and Claude (created by Anthropic). Each AI model was given the same set of materials — witness testimony, opening and closing statements, and legal instructions — all presented in text form. The AI "jurors" then independently delivered their verdicts.
All three systems found the defendant not guilty.
Professor Kennedy, reflecting on the outcome, said the AI jurors' verdict was not only legally correct but also "fairer" than the real-world outcome in the original case, where the human jury had convicted the young man. According to Kennedy, the AIs "got the law exactly right" and demonstrated a surprisingly strong grasp of the evidence presented.
When Algorithms Apply the Law
The exercise revealed one of AI's greatest strengths in the legal context: its ability to process vast amounts of information consistently and without fatigue. AI jurors do not get tired, distracted, or swayed by sympathy or prejudice — weaknesses that often complicate human deliberation.
In a controlled academic setting, the AI models displayed precision in legal reasoning. They analyzed evidence based strictly on what was presented, applied legal standards accurately, and reached consistent verdicts. Supporters of the experiment argue that such systems could, in the future, help reduce human bias and error in the justice system.
In civil law contexts — such as arbitration or contract disputes — AI panels could even serve as neutral reviewers, offering faster and potentially more objective resolutions. Professor Kennedy envisions a future where AI-driven decision-making might complement human judgment, not replace it, especially in cases where parties agree to alternative dispute mechanisms.
The Human Factor: What AI Still Can’t Do
However, the trial also exposed fundamental weaknesses that highlight why AI cannot yet (and perhaps should never) replace human jurors.
Unlike people, the AI systems could not perceive tone, emotion, or non-verbal behavior. As Professor Eric Muller of UNC Law pointed out, the AI “didn’t look at the witnesses to see whether they were squirming in their seats, hesitating, or avoiding eye contact.” These subtle cues often influence human perceptions of credibility — a vital component in real-world trials.
Another professor, Eisha Jain, emphasized that the AI’s reasoning lacked emotional weight. “It felt as though the AI jury was debating something casual,” she said, “rather than deciding whether a person deserves to lose his liberty.” This detachment, she suggested, could undermine the moral legitimacy of a verdict, no matter how logically sound it may appear.
Moreover, AI systems are only as unbiased as the data and programming that shape them. Historical data used to train these models often reflects existing social and racial biases within the justice system. Without careful oversight, there’s a risk that AI could replicate — or even amplify — those same injustices under the guise of neutrality.
AI in Law: Promise and Peril
Although no real-world court has yet employed AI jurors, the UNC trial represents a pivotal moment in the ongoing discussion about technology's role in justice. Legal systems worldwide are already experimenting with AI tools — from predictive policing algorithms and sentencing assistance software to e-discovery systems that sift through millions of documents. Yet when it comes to fact-finding and moral judgment, the line between assisting the court and deciding for it becomes far blurrier.
Experts agree that before AI could ever be trusted in a judicial role, several critical issues must be addressed: transparency, explainability, accountability, and human oversight. Who is responsible if an AI renders an incorrect or biased decision? How can litigants challenge a machine’s reasoning? These questions strike at the heart of both ethics and due process.
Still, the UNC experiment offers a glimpse of the possible future of courtroom technology. If refined responsibly, AI could support human jurors by clarifying evidence, summarizing testimony, or flagging inconsistencies — not as replacements, but as intelligent aids.
A Lesson for Law Schools and Legal Professionals
For law schools, this experiment provides an invaluable educational experience. Students not only learn the mechanics of trial advocacy but also engage in critical debate over the philosophical and ethical implications of artificial intelligence in law. As Kennedy noted, this kind of hands-on exploration is crucial for preparing the next generation of lawyers to navigate a justice system increasingly intertwined with technology.
For practicing attorneys, policymakers, and legal scholars, the message is clear: while AI holds extraordinary potential to enhance fairness and efficiency, human judgment remains irreplaceable. The rule of law depends not just on rationality, but on empathy, morality, and the lived experiences of those it governs.
The Verdict on AI Juries
The UNC School of Law’s “trial by AI jury” is not just a classroom experiment — it’s a sign of what’s coming. As legal institutions grapple with integrating AI into everything from case analysis to judicial decision-making, the question is no longer whether technology will shape the courtroom, but how.
AI may someday become a trusted partner in justice, but as this trial demonstrated, the soul of the law — fairness, humanity, and moral understanding — still resides firmly in human hands.




