AI could destroy entire justice system by sending innocent people to JAIL with fake CCTV, Making a Murderer lawyer warns

The Sun, 27 April 2025
Katie Davis
David Rivers
AI could wreak havoc in the justice system by sending innocent people to jail, a top lawyer has warned.
Jerry Buting, who defended Steven Avery in Netflix hit Making a Murderer, said video doctoring is becoming so sophisticated it is increasingly hard to spot.
He believes advanced AI convincingly fabricating evidence could lead to innocent people being thrown behind bars.
Buting, author of Illusion of Justice, told The Sun: "More and more people could get convicted."
Deepfake technology is becoming worryingly advanced and increasingly difficult to regulate.
Experts have previously told The Sun that deepfakes are the "biggest evolving threat" when it comes to cybercrime.
Deepfakes are fraudulent videos that appear to show a person doing - and possibly saying - things they did not do.
Artificial intelligence software is used to clone the features of a person and map them onto something else.
It could see people accused of crimes they didn't commit in a chilling echo of BBC drama The Capture.
The show saw a former British soldier accused of kidnap and murder based on seemingly definitive CCTV footage which had actually been altered.
Buting said: "The tricky part is when AI gets to the point where you can doctor evidence without it being obvious, where you can alter videos.
"There are so many CCTV cameras in the UK, virtually every square foot is covered.
"But if that could be altered in some way so that it is designed to present something that's not true, it could be damaging to the defence or prosecution.
"Then what can we believe if we can't believe our own eyes?"
Buting, who defended Avery in his now infamous 2007 murder trial, said AI is now in a race with experts who are being trained to tell the difference.
But the US-based criminal defence lawyer claims that is no guarantee to stop sickos twisting the truth.
Buting claimed: "It may result in dismissals but I think it's more likely to result in wrongful convictions because law enforcement and the prosecution just have more resources.
"Nobody really knows how AI is going to impact the justice system.
"But there are also very skilled people who are trying to develop techniques of being able to tell when something has been altered, even at a sophisticated level.
"How AI actually affects the legal system is still very much up in the air."
Deepfakes – what are they, and how do they work?
Here's what you need to know...
Deepfakes are phoney videos of people that look perfectly real
They're made using computers to generate convincing representations of events that never happened
Often, this involves swapping the face of one person onto another, or making them say whatever you want
The process begins by feeding an AI hundreds or even thousands of photos of the victim
A machine learning algorithm swaps out certain parts frame-by-frame until it spits out a realistic, but fake, photo or video
In one famous deepfake clip, comedian Jordan Peele created a realistic video of Barack Obama in which the former President called Donald Trump a 'dipsh*t'
In another, the face of Will Smith is pasted onto the character of Neo in the action flick The Matrix. Smith famously turned down the role to star in flop movie Wild Wild West, while the Matrix role went to Keanu Reeves
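The frame-by-frame swapping described in the box above can be illustrated with a toy sketch. Real deepfakes use learned encoder-decoder networks trained on thousands of photos; this purely hypothetical example just pastes a "face" patch into every frame of a clip to show the per-frame replacement idea:

```python
import numpy as np

def swap_region(target_frames, source_face, top, left):
    """Toy illustration of frame-by-frame region replacement.

    Real deepfakes generate the swapped region with a machine
    learning model; here we simply copy a patch into each frame.
    """
    h, w = source_face.shape[:2]
    out = []
    for frame in target_frames:
        frame = frame.copy()  # leave the original footage untouched
        frame[top:top + h, left:left + w] = source_face
        out.append(frame)
    return out

# Three blank 8x8 greyscale "frames" and one bright 2x2 "face" patch
frames = [np.zeros((8, 8), dtype=np.uint8) for _ in range(3)]
face = np.full((2, 2), 255, dtype=np.uint8)
faked = swap_region(frames, face, top=3, left=3)
print(faked[0][3, 3])  # 255: the patch appears in every frame
```

The gap between this sketch and a real deepfake, where the generated region is blended seamlessly and adapted to lighting and pose in every frame, is exactly what makes the genuine article so hard to spot.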
"If people are able to discover that evidence has been altered, let's say it's a situation where the defence has an expert who can look at the metadata and all the background, then that may very well result in a dismissal of the case, and should.
"Because the evidence was altered, its original destroyed, how can we believe anything anymore?"
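One simple safeguard underlying the metadata checks Buting describes is cryptographic hashing: if a digest of a video file is recorded when it is seized, any later alteration, however subtle the visual change, produces a different digest. A minimal sketch, assuming SHA-256 and a digest recorded at seizure time (not a description of any court's actual procedure):

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest; changing even one byte changes it completely."""
    return hashlib.sha256(data).hexdigest()

original = b"\x00\x01\x02 CCTV frame bytes"
tampered = b"\x00\x01\x03 CCTV frame bytes"  # a single byte altered

ref = file_fingerprint(original)          # recorded at time of seizure
print(file_fingerprint(original) == ref)  # True: file is unchanged
print(file_fingerprint(tampered) == ref)  # False: alteration detected
```

A hash only proves a file differs from the one originally recorded; if footage is doctored before the reference digest is taken, or the digest itself is not securely stored, the check proves nothing, which is why the chain of custody matters as much as the mathematics.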
Former White House Chief Information Officer Theresa Payton previously warned The Sun about the huge risks deepfakes pose to society.
She said: "This technology poses risks if misused by criminal syndicates or nation-state cyber operatives.
"Malicious applications include creating fake personas to spread misinformation, manipulate public opinion, and conduct sophisticated social engineering attacks."
In Black Mirror style, Payton warned malicious actors could exploit this technology to sow confusion and chaos by creating deepfakes of world leaders or famous faces - dead or alive.
Buting warned that although teams are being urgently equipped with skills to spot deepfakes, the pace at which the technology is advancing could soon become a real issue.
Who is Steven Avery?
STEVEN Avery is serving a life sentence at Wisconsin's Waupun Correctional Institution.
He and his nephew Brendan Dassey were convicted of the 2005 murder of Teresa Halbach.
He has been fighting for his freedom ever since he was found guilty of murder in 2007.
Avery argued that his conviction was based on planted evidence and false testimony.
In 1985, Avery was wrongfully convicted of sexually assaulting a young female jogger.
It took 18 years for his conviction to be overturned, and he filed a $36million (£28.2million) lawsuit over his wrongful imprisonment.
But before it was resolved, he was arrested for the murder of Teresa Halbach.
The 62-year-old is serving life in prison without the possibility of parole.
The 2015 Netflix original series Making a Murderer documented his fight for justice.
In the last episode of the series, viewers were told that Avery had exhausted his appeals and was no longer entitled to state-appointed legal representation.
He added: "I do fear it could be an issue sooner rather than later.
"There has been a steady erosion in the defence in the UK, for example barristers make very little money, really, for what they have to do.
"There is a real imbalance. The whole idea of an adversarial system, which the UK employs as do we in the US, is that if you have two relatively skilled, equal parties on each side presenting their view of the evidence against the other's, the truth will come out.
"Or that the jury will be able to discern the truth, or close to it anyway, whatever justice might be.
"But to the extent that there is this big imbalance and the defence is unskilled or underpaid, then you tend to get lower quality or less experienced attorneys.
"That's been going on for a long time, so then when you add something like AI to it, it's going to be even harder."
Buting became internationally renowned after appearing on the 2015 Netflix documentary series Making a Murderer.
He alleged Avery had been convicted of a murder he didn't commit, the victim of a set-up.
But Avery, now 62, was found guilty and is serving a life sentence for the murder of Teresa Halbach in 2005.