
Can AI fact-check its own lies?
The most obvious takeaway from the incident is that it was a badly needed wake-up call about what can happen when AI gets too embedded in our information ecosystem. But CEO Melissa Bell resisted the instinct to simply blame AI, instead putting responsibility on the humans who use it and those who are entrusted with safeguarding readers from its weaknesses. She even included herself as one of those people, explaining how she had approved the publishing of special inserts like the one the list appeared in, assuming at the time there would be adequate editorial review (there wasn't).
The company has made changes to patch this particular hole, but the affair exposes a gap in the media landscape that is poised to get worse: as the presence of AI-generated content—authorized or not—increases in the world, the need for editorial safeguards also increases. And given the state of the media industry and its continual push to do 'more with less,' it's unlikely that human labor will scale up to meet the challenge. The conclusion: AI will need to fact-check AI.
Fact-checking the fact-checker
I know, it sounds like a horrible idea, somewhere between letting the fox guard the henhouse and sending Imperial Stormtroopers to keep the peace on Endor. But AI fact-checking isn't a new idea: when Google Gemini first debuted (then called Bard), it shipped with an optional fact-check step if you wanted it to double-check anything it was telling you. Eventually, this kind of step simply became integrated into how AI search engines work, broadly making their results better, though still far from perfect.
Newsrooms, of course, set a higher bar, and they should. Operating a news site comes with the responsibility to ensure the stories you're telling are true, and for most sites the shrugging disclaimer of 'AI can make mistakes,' while good enough for ChatGPT, doesn't cut it. That's why for most, if not all, AI-generated outputs (such as ESPN's AI-written sports recaps), humans check the work.
As AI writing proliferates, though, the inevitable question is: Can AI do that job? Put aside the weirdness for a minute and see it as math, the key number being how many errors slip through. If an AI fact-checker can reduce errors as much as, if not more than, a human reviewer can, shouldn't it do that job?
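To see that arithmetic concretely, here's a minimal sketch with invented numbers. Every rate below is an assumption for illustration, not a measurement of any real system:

```python
# Back-of-the-envelope comparison of residual error rates.
# All numbers below are illustrative assumptions, not measurements.

ai_error_rate = 0.08          # assumed: share of AI-drafted claims that are wrong
human_catch_rate = 0.90       # assumed: fraction of errors a human editor catches
ai_checker_catch_rate = 0.85  # assumed: fraction of errors an AI fact-checker catches

residual_with_human = ai_error_rate * (1 - human_catch_rate)
residual_with_ai = ai_error_rate * (1 - ai_checker_catch_rate)

print(f"Residual error rate with human review: {residual_with_human:.2%}")
print(f"Residual error rate with AI review:    {residual_with_ai:.2%}")
# The argument in the text: if the second number approaches (or beats)
# the first, the AI checker is doing the job, at far greater scale.
```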
If you've never used AI to fact-check something, the recently launched service isitcap.com offers a glimpse at where the technology stands. It doesn't just label claims as true or false—it evaluates the article holistically, weighing context, credibility, and bias. It even compares multiple AI search engines to cross-check itself.
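isitcap.com hasn't published its internals, but the cross-checking idea itself is simple to sketch: pose the same claim to several engines and look for consensus. In the snippet below, query_model and the verdict labels are hypothetical placeholders for real AI search engine calls, and a majority vote stands in for whatever aggregation the service actually uses:

```python
from collections import Counter

def query_model(model: str, claim: str) -> str:
    """Hypothetical stand-in for calling one AI search engine.
    A real implementation would hit each vendor's API and map the
    response to 'supported', 'contradicted', or 'unclear'."""
    raise NotImplementedError

def cross_check(claim: str, models: list[str]) -> tuple[str, float]:
    """Ask several models about one claim; return the majority verdict
    and the share of models that agreed with it."""
    verdicts = [query_model(m, claim) for m in models]
    verdict, count = Counter(verdicts).most_common(1)[0]
    return verdict, count / len(verdicts)
```

The agreement score is the useful part: a claim every engine calls "supported" needs less attention than one that splits the panel.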
You can easily imagine a newsroom workflow that applies an AI fact-checker similarly, sending its analysis back to the writer, highlighting the bits that need shoring up. And if the writer happens to be a machine, revisions could be done lightning fast, and at scale. Stories could go back and forth until they reach a certain accuracy threshold, with anything that falls short held for human review.
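Here's a minimal sketch of that loop, assuming hypothetical fact_check and revise helpers and an invented accuracy threshold; a real newsroom pipeline would define all three very differently:

```python
def fact_check(story: str) -> tuple[float, list[str]]:
    """Hypothetical: returns an accuracy score in [0, 1] plus the
    passages the checker flagged as unsupported."""
    raise NotImplementedError

def revise(story: str, flagged: list[str]) -> str:
    """Hypothetical: asks the writing model to rework the flagged passages."""
    raise NotImplementedError

ACCURACY_THRESHOLD = 0.95  # assumed editorial bar, purely illustrative
MAX_ROUNDS = 3             # stop looping and escalate after a few passes

def review_pipeline(story: str) -> tuple[str, str]:
    """Cycle a draft between the AI fact-checker and the AI writer
    until it clears the threshold or runs out of rounds."""
    for _ in range(MAX_ROUNDS):
        score, flagged = fact_check(story)
        if score >= ACCURACY_THRESHOLD:
            return story, "publish"
        story = revise(story, flagged)
    return story, "hold for human review"  # anything still short goes to a person
```

The escalation path is the design choice that matters: the loop never publishes on its own authority, it only decides what's clean enough to skip the human queue.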
All this makes sense in theory, and it could even be applied to what news orgs are doing currently with AI summaries. Nieman Lab has an excellent write-up on how The Wall Street Journal, Yahoo News, and Bloomberg all use AI to generate bullet points or top-line takeaways for their journalism. For both Yahoo and the Journal, there's some level of human review on the summaries (for Bloomberg, it's unclear from the article). These organizations are already on the edge of what's acceptable—balancing speed and scale with credibility. One mistake in a summary might not seem like much, but when trust is already fraying, it's enough to shake confidence in the entire approach.
Human review helps ensure accuracy, of course, but it also requires more human labor—something in short supply in newsrooms without a national footprint. AI fact-checking could give smaller outlets more options for public-facing AI content. Relatedly, Politico's union recently criticized the publication's AI-written subscriber reports, which draw on its journalists' work, for occasional inaccuracies. A fact-checking layer might prevent at least some embarrassing mistakes, like attributing political stances to groups that don't exist.
The AI trust problem that won't go away
Using AI to fight AI hallucination might make mathematical sense if it can prevent serious errors, but there's another problem that stems from relying even more on machines, and it's not just a metallic flavor of irony. The use of AI in media already has a trust problem. The Sun-Times' phantom book list is far from the first AI content scandal, and it certainly won't be the last. Some publications are even adopting anti-AI policies, forbidding its use for virtually anything.
Because of AI's well-documented problems, public tolerance for machine error is lower than for human error. If a self-driving car gets into an accident, for example, the scrutiny is obviously much greater than if a person had been driving. You might call this automation fallout bias, and whether or not you think it's fair, it's undoubtedly real. A single high-profile hallucination that slips through the cracks could derail adoption, even if such errors are statistically rare.
Add to that the likely painful compute costs of running multiple layers of AI writing and fact-checking, not to mention the increased carbon footprint. All to improve AI-generated text—which, let's be clear, is not the investigative, source-driven journalism that still requires human rigor and judgment. Yes, we'd be lightening the cognitive load for editors, but would it be worth the cost?
Despite all these barriers, it seems inevitable that we will use AI to check AI outputs. All indications point to hallucinations being inherent to generative technology. In fact, newer 'thinking' models appear to hallucinate even more than their less sophisticated predecessors. If done right, AI fact-checking would be more than a newsroom tool, becoming part of the infrastructure for the web. The question is whether we can build it to earn trust, not just automate it.
The amount of AI content in the world can only increase, and we're going to need systems that can scale to keep up. AI fact-checkers can be part of that solution, but only if we manage—and accept—their potential to make errors themselves. We may not yet trust AI to tell the truth, but at least it can catch itself in a lie.
