
Justice at stake as generative AI enters the courtroom
Generative artificial intelligence (GenAI) is making its way into courts despite early stumbles, raising questions about how it will influence the legal system and justice itself.
Judges use the technology for research, lawyers use it for appeals, and parties to cases have relied on GenAI to help express themselves in court.
"It's probably used more than people expect," said Daniel Linna, a professor at the Northwestern Pritzker School of Law, about GenAI in the US legal system.
"Judges don't necessarily raise their hand and talk about this to a whole room of judges, but I have people who come to me afterward and say they are experimenting with it."
In one prominent instance, GenAI enabled murder victim Chris Pelkey to address an Arizona courtroom -- in the form of a video avatar -- at the sentencing of the man convicted of shooting him dead in 2021 during a clash between motorists.
"I believe in forgiveness," said a digital proxy of Pelkey created by his sister, Stacey Wales.
The judge voiced appreciation for the avatar, saying it seemed authentic.
"I knew it would be powerful," Wales told AFP, "that it would humanize Chris in the eyes of the judge."
The AI testimony, a first of its kind, ended the sentencing hearing at which Wales and other members of the slain man's family spoke about the impact of the loss.
Since the hearing, examples of GenAI being used in US legal cases have multiplied.
"It is a helpful tool and it is time-saving, as long as the accuracy is confirmed," said attorney Stephen Schwartz, who practices in the northeastern state of Maine.
"Overall, it's a positive development in jurisprudence."
Schwartz described using ChatGPT as well as GenAI legal assistants, such as LexisNexis Protege and CoCounsel from Thomson Reuters, for researching case law and other tasks.
"You can't completely rely on it," Schwartz cautioned, recommending that cases proffered by GenAI be read to ensure accuracy.
"We are all aware of a horror story where AI comes up with mixed-up case things."
The technology has been the culprit behind false legal citations, far-fetched case precedents, and flat-out fabrications.
In early May, a federal judge in Los Angeles imposed $31,100 in fines and damages on two law firms for an error-riddled petition drafted with the help of GenAI, blasting it as a "collective debacle."
Some litigants who skip lawyers and represent themselves in court also rely on the technology, often making legal errors in the process.
And as GenAI makes it easier and cheaper to draft legal complaints, courts already overburdened by caseloads could see them climb higher, said Shay Cleary of the National Center for State Courts.
"Courts need to be prepared to handle that," Cleary said.
Transformation
Law professor Linna sees the potential for GenAI to be part of the solution, though, giving more people the ability to seek justice in courts made more efficient.
"We have a huge number of people who don't have access to legal services," Linna said.
"These tools can be transformative; of course we need to be thoughtful about how we integrate them."
Federal judges in the US capital have written decisions noting their use of ChatGPT in laying out their opinions.
"Judges need to be technologically up-to-date and trained in AI," Linna said.
GenAI assistants already have the potential to influence the outcome of cases the same way a human law clerk might, reasoned the professor.
Facts or case law surfaced by GenAI might sway a judge's decision, and could differ from what a human clerk would have found.
But if GenAI lives up to its potential and excels at finding the best information for judges to consider, that could make for well-grounded rulings less likely to be overturned on appeal, according to Linna.