How to tell if an image or video has been created by AI - and if we still can


RNZ News, 5 days ago

We're pretty sure Abraham Lincoln didn't own an iPhone, but this ChatGPT-generated image tells us otherwise.
Photo: ChatGPT
Explainer - Love it or hate it, artificial intelligence is everywhere these days.
But for every new technology, there are always people who will exploit it.
AI-generated images are unavoidable online, and many are intentionally used to mislead, scam or monetise outrage.
If you are not a computer or AI expert, how do you even know what's what anymore?
The good news: You can often still tell the real from the AI. The bad news: It's getting harder all the time.
False images and videos are frequently churned out in response to real-life events - those spread during the Israel-Iran conflict were just the latest example of propaganda hitting social media.
RNZ spoke to AI and fact-checking experts about how the average internet scroller can figure out if that amazing viral image of Abraham Lincoln with his iPhone might just possibly be fake.
Did Godzilla attack downtown Auckland? Sorry, it's just AI.
Photo: Made with Google AI
"It's really hard," Victoria University senior lecturer in AI Andrew Lensen admitted.
"Generative AI models, where users provide a description (caption) of what they want and the model tries to create it, have come a long way over the past few years."
Andrea Leask, deputy CEO of the online safety non-profit organisation Netsafe, said the easy availability of apps and technology using AI had played a big part.
"Anyone can generate AI content," she said. "A child can create digitally altered images."
Ben James is the editor of AAP FactCheck, a division of the Australian Associated Press. (Full disclosure: This reporter also writes for AAP FactCheck.) James said the flaws and mistakes in AI imagery were becoming much more subtle.
"AI fakes are far more sophisticated than they were six months ago, and they will continue to evolve. Gone are the days of six-fingered people."
A bizarre AI-generated video of Will Smith eating spaghetti went viral online in 2023.
Photo: Screenshot / Reddit
Lensen agreed, noting the early rubbery clunkiness of older AI images was rapidly fading away.
"We all laughed at Will Smith eating spaghetti or people having seven fingers in 2023, but now the AI models are sufficiently advanced that it basically no longer happens.
"I think for the general public, reliably recognising AI images is no longer possible, and for AI/media experts, there's not much time left."
James said AAP FactCheck asks three questions at the start of every check it does: who is making the claim, what is the evidence, and what do trusted sources say?
"Whether we are talking to journalists, schoolchildren, or seniors, we always return to that three-question process.
"It doesn't have to be overly burdensome, but it is a way of prompting those key critical thinking skills and will, nine times out of ten, keep you out of trouble."
Google's Gemini AI was asked to generate a view of Wellington's Cuba Street, and this is what it came up with. At first glance it may look all right, but look closer at the signs in the background.
Photo: Made with Google AI
Looking closely at images can reveal clues they are not what they appear to be.
"AI still struggles with textures and shadows," James said.
"It also struggles with small details, particularly writing. Look for name badges, logos, road signs, and similar elements; often, the words will be garbled.
"Be wary of perfection. Often, AI images are just a little too perfect: airbrushed skin, background details all perfectly framed."
With video, there could also be tell-tale signs.
"Look for distortions, particularly around the face, hair and hands," Netsafe's Leask said.
"For example, flickering around the face and hair, inconsistencies in skin texture, unnatural eye movements or finger placements."
Another simple solution is to use a reverse image search - a search engine that looks for other instances of a photo to determine where it first came from.
It can be useful for tracking down the original version of an altered photo or a photo of a past event being presented as a current one.
Popular reverse image search tools include TinEye and Google Images' own search.
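Under the hood, reverse image search depends on matching near-duplicate images. The exact methods TinEye and Google use are proprietary, but one simple family of techniques is perceptual hashing: reduce an image to a short fingerprint that survives resizing and brightness tweaks, then compare fingerprints. The pure-Python "difference hash" below is only an illustrative sketch, operating on a raw grayscale matrix (a list of lists of 0-255 values) rather than a decoded image file:

```python
# Toy "difference hash" (dHash): one perceptual-hashing technique of the
# kind image-matching systems can use. Works on a plain grayscale matrix
# so it needs no image libraries.

def resize_nearest(pixels, width, height):
    """Crudely shrink a grayscale matrix using nearest-neighbour sampling."""
    src_h, src_w = len(pixels), len(pixels[0])
    return [
        [pixels[y * src_h // height][x * src_w // width] for x in range(width)]
        for y in range(height)
    ]

def dhash(pixels, hash_size=8):
    """One bit per pixel: is it brighter than its right-hand neighbour?"""
    small = resize_nearest(pixels, hash_size + 1, hash_size)
    bits = []
    for row in small:
        for x in range(hash_size):
            bits.append(1 if row[x] > row[x + 1] else 0)
    return bits  # 64 bits for the default hash_size of 8

def hamming(a, b):
    """Count differing bits; a small distance suggests the same image."""
    return sum(x != y for x, y in zip(a, b))

# Demo: a gradient image, a slightly brightened copy, and random noise.
import random
random.seed(0)
original = [[(x * 8) % 256 for x in range(32)] for _ in range(32)]
brightened = [[min(255, v + 20) for v in row] for row in original]
unrelated = [[random.randrange(256) for _ in range(32)] for _ in range(32)]

h1, h2, h3 = dhash(original), dhash(brightened), dhash(unrelated)
print(hamming(h1, h2))  # small: brightening barely changes the hash
print(hamming(h1, h3))  # large: unrelated image
```

Because the hash encodes only relative brightness between neighbouring pixels, uniformly brightening or resizing a picture leaves it largely unchanged, which is why a reposted, recompressed copy of a photo can still be traced back to its original.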
"Various AI-detection tools, while not perfect, can offer further indications as to whether an image or video is genuine," James said.
However, Lensen was sceptical about their long-term usefulness.
"There is a real 'arms race' of AI generation vs AI detection, and I don't see a way for the detectors to win," Lensen said.
"After all, we train these AI models to generate the most realistic content possible, which means as the generator gets better, the detector has an increasingly harder task.
"In fact, many of these models will be trained by having an AI detector that the generator has to 'fool' during the training process!"
Lensen said in his university work, he does not support using AI detectors to check over student work as the consequences of "false positives" - being accused of using AI when you aren't - can be quite harmful.
When it comes to detecting AI, your own brainpower and detective skills may be the most important tool.
That means cultivating a wider base of sources - and yes, responsible media plays a big part.
"You need to have your trusted sources," James said.
"Despite all the talk of deception, reputable media organisations do a pretty good job of separating fact from fiction. Therefore, you need reliable sources you can count on."
"My advice in 2025 is to look at the provenance of the image," Lensen said.
"On social media, is it a profile with a history of legitimate posts (around a common theme) or is it a strange profile who seems to post very regularly on different topics?
"This is, really, all those 'critical thinking'/source checking that we used to do pre-internet days.
"It is, unfortunately, more work for/onus on the person consuming the content, but I think being a sceptic is a really important skill in the age of mis/disinformation."
A fake video claiming Christopher Luxon was promoting online trading did the rounds last year.
Photo: Supplied / AAP FactCheck
False videos could be trickier to debunk, but one of the key things to look out for was whether it was an unlikely subject for the person in the video to be talking about - celebrities were often "duplicated" for crypto and investment scams.
For instance, last year, a fake video featuring Prime Minister Christopher Luxon backing an online trading company did the rounds.
Sometimes, victims were not even celebrities. Sir Jim Mann, a leading New Zealand endocrinologist, was shocked recently to discover his face and voice were being used to scam patients with type 2 diabetes.
"The AI was so effective, it looked like I was actually saying those words," he told RNZ.
With video, a key thing to check is whether the person's mouth movements match what they are saying, and whether the audio sounds a bit distorted or robotic.
It used to be more difficult to make convincing "deepfakes", Leask said.
"Now, a single image or a handful of words are all you need to create a very convincing deepfake. And ordinary people are being targeted."
ACT Party MP Laura McClure recently put forward a bill in the House to restrict the generation and sharing of sexually explicit deepfakes.
In Parliament when discussing the bill in May, she held up a faked nude photo of herself that she created, saying "this image is a naked image of me but it's not real."
McClure argued the sharing of explicit deepfakes could ruin a person's life.
"For the victims, it is degrading and it is devastating," she said.
The bill was lodged in Parliament's members' ballot, where it could be pulled at random, but it was still a long way from becoming law.
ACT MP Laura McClure holds up a faked nude photo of herself that she created when discussing the Deepfake Digital Harm and Exploitation Bill.
Photo: Facebook / Laura McClure
Netsafe received a significant increase in reports of harm relating to digitally altered images in the past year, Leask said.
"We have found that where the producer of the content is a young person, typically, the digitally altered content will have been created for fun, to ridicule or bully someone.
"In contrast, where the producer of the content is an adult, they are more likely to be motivated by sexual gratification, abuse or harassment."
But it may be difficult to ban such images outright.
"Outlawing it is good in principle, but how will it ever be enforced?" Lensen asked.
"There is a pretty high burden of proof to show that someone produced a deepfake, and that gets even more complex when it could be done cross-border."
He said the government needed to provide more detail of implementation and enforcement to make it a substantive effort to actually solve the problem.
As for legally requiring all AI-generated content to be labelled, it might be a good idea in theory, but Lensen said it was not very workable.
"I think that ship sailed a long time ago. Even if we brought in such legislation now in NZ, there is no way it would be adopted worldwide, and so the 'bad actors' could just be located in another jurisdiction.
"And that's not to mention enforcement: even if we could detect AI content (we can't), who is going to police that and take the content down?"
"If the content is digitally altered and abusive, threatening, harassing, or includes intimate material (such as nudity, sexual activity, undressing, or toileting), and it has been shared or threatened to be shared without consent, the Harmful Digital Communications Act may apply," Leask said.
"This includes deepfake intimate images or videos."
"We can often help you get the online content removed and explain the options available under the law."
You can report online harm at Netsafe's website, text "Netsafe" to 4282, email help@netsafe.org.nz or call 0508 638 723.
Then there was the broader problem of media literacy.
A survey last week showed that, for the first time, the majority of Americans are getting their news from social media; similar results have been seen in Australia too.
In New Zealand, the Trust In News Aotearoa New Zealand report released earlier this year found only 32 percent of respondents trust the news.
"Media literacy needs to start from an early age," James said.
"There really needs to be a co-ordinated effort if we are to have functioning democracies making big decisions based on fact."
Lensen criticised the increasing use of AI in some newsrooms in New Zealand.
"We need the media to be a trusted source now more than ever, and using AI really makes that social license harder to maintain."
RNZ has laid out a series of Artificial Intelligence principles which state it "will generally not publish, broadcast or otherwise knowingly disseminate work created by generative AI" and that any use of AI, generative or otherwise, should be done in consultation with senior managers.
"Maybe?" Lensen said. "I think it undermines the value of social media for many."
"The big appeal of social media originally was being able to connect with friends/family (e.g. OG Facebook) and likeminded humans (e.g. Reddit, FB groups) to share human experiences and have social connections that don't rely on physicality.
"Injecting AI into that inherently removes that human-ness."
A frequent issue seen by fact checkers is almost nonsensical AI 'slop' being posted simply to harvest engagement.
"A lot of social media accounts have gained followers by posting AI media (without disclosure) because it allows them to draw clicks/reactions," Lensen said.
News organisations are also still being caught out by AI falsehoods despite their best efforts.
A story earlier this month that was picked up by media worldwide featured a manipulated video of a Chinese paraglider covered in ice after supposedly being sucked into the upper atmosphere.
RNZ partner the Australian Broadcasting Corporation was one of those, and posted a note explaining why it removed the story.
"It is difficult" to catch some things now, James said.
"We live in a time of instant news.
"Journalists are no longer just competing with other journalists but also with influencers, agitators, and commentators on social media. They can afford to be wrong, but journalists have to be more careful. Our reputation is everything."
Sign up for Ngā Pitopito Kōrero, a daily newsletter curated by our editors and delivered straight to your inbox every weekday.
