Latest news with #ChicagoSunTimes


New York Post
10-07-2025
- New York Post
Heroic Chicago man saves 7-month-old girl abandoned after carjacking
A heroic Chicago man rescued a 7-month-old girl abandoned after a carjacking in the sweltering heat — then tracked down the baby's relatives using social media, authorities said Thursday.

Earl Abernathy, 33, was stuck in traffic on July 3 when he heard the infant crying and spotted her alone on the steps of St. Francis of Assisi Catholic Church in the city's Little Italy neighborhood. 'I saw the baby like, trying to wiggle [her] way out of the car seat and it tipped over,' Abernathy said. 'So, I just threw the hazards on, jumped out, ran over there [and] got the baby!'

The little girl had been in the backseat of a vehicle stolen from a nearby gas station earlier that day — then ditched by a carjacker in 92-degree heat, the Chicago Sun-Times reported.

Abernathy called police and went on Facebook Live to track down the baby's family, and quickly got messages from people who recognized her. 'It was crazy,' Abernathy said. 'Within minutes, the grandma had inboxed me like, "That's my grandbaby!" And then the auntie inboxed like, "That's my niece!"'

The infant, who was not identified, was taken to the hospital for a check-up and then reunited with her family. 'We were all out of our minds,' the baby's grateful grandma, Karen Whittington, told the Sun-Times. 'Everyone is OK, and we [are] glad she's found.'

Jeremy Ochoa, 38, was ultimately arrested for the carjacking and charged with kidnapping and vehicular hijacking, according to Chicago police.


Fast Company
13-06-2025
- Fast Company
Can AI fact-check its own lies?
As AI car crashes go, the recent publishing of a hallucinated book list in the Chicago Sun-Times quickly became a multi-vehicle pile-up. After a writer used AI to create a list of summer reads, the majority of which were made-up titles, the resulting article sailed through lax editorial review at the Sun-Times (and at least one other newspaper) and ended up being distributed to thousands of subscribers. The CEO eventually published a lengthy apology.

The most obvious takeaway from the incident is that it was a badly needed wake-up call about what can happen when AI gets too embedded in our information ecosystem. But CEO Melissa Bell resisted the instinct to simply blame AI, instead putting responsibility on the humans who use it and those who are entrusted with safeguarding readers from its weaknesses. She even included herself as one of those people, explaining how she had approved the publishing of special inserts like the one the list appeared in, assuming at the time there would be adequate editorial review (there wasn't).

The company has made changes to patch this particular hole, but the affair exposes a gap in the media landscape that is poised to get worse: as the presence of AI-generated content—authorized or not—increases in the world, the need for editorial safeguards also increases. And given the state of the media industry and its continual push to do 'more with less,' it's unlikely that human labor will scale up to meet the challenge. The conclusion: AI will need to fact-check AI.

Fact-checking the fact-checker

I know, it sounds like a horrible idea, somewhere between letting the fox guard the henhouse and sending Imperial Stormtroopers to keep the peace on Endor. But AI fact-checking isn't a new idea: when Google Gemini first debuted (then called Bard), it shipped with an optional fact-check step if you wanted it to double-check anything it was telling you. Eventually, this kind of step simply became integrated into how AI search engines work, broadly making their results better, though still far from perfect.

Newsrooms, of course, set a higher bar, and they should. Operating a news site comes with the responsibility to ensure the stories you're telling are true, and for most sites the shrugging disclaimer of 'AI can make mistakes,' while good enough for ChatGPT, doesn't cut it. That's why for most, if not all, AI-generated outputs (such as ESPN's AI-written sports recaps), humans check the work.

As AI writing proliferates, though, the inevitable question is: Can AI do that job? Put aside the weirdness for a minute and see it as math, the key number being how often it gets things wrong. If an AI fact-checker can reduce the number of errors by as much as, if not more than, a human, shouldn't it do that job?

If you've never used AI to fact-check something, the recently launched service offers a glimpse at where the technology stands. It doesn't just label claims as true or false—it evaluates the article holistically, weighing context, credibility, and bias. It even compares multiple AI search engines to cross-check itself.

You can easily imagine a newsroom workflow that applies an AI fact-checker similarly, sending its analysis back to the writer and highlighting the bits that need shoring up. And if the writer happens to be a machine, revisions could be done lightning fast, and at scale. Stories could go back and forth until they reach a certain accuracy threshold, with anything that falls short held for human review.
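As a thought experiment only, here is a minimal sketch of that loop in Python. Nothing below reflects a real newsroom system or vendor API: check_claims and revise_draft are hypothetical placeholders standing in for an AI fact-checker and an AI (or human) reviser, and the scoring is deliberately toy-simple. The point is the shape of the workflow the article imagines: score, revise, repeat until a threshold is met, and escalate to a human editor when it isn't.

from dataclasses import dataclass

@dataclass
class FactCheckReport:
    accuracy: float          # fraction of claims the checker rated as supported
    flagged_passages: list   # passages the checker wants shored up

def check_claims(draft: str) -> FactCheckReport:
    # Hypothetical AI fact-checker. A real one would consult sources, compare
    # multiple AI search engines, and weigh context, credibility, and bias;
    # here we simply treat any sentence tagged "[unverified]" as a weak claim.
    flagged = [s for s in draft.split(". ") if "[unverified]" in s]
    accuracy = 1.0 - min(0.25 * len(flagged), 1.0)
    return FactCheckReport(accuracy=accuracy, flagged_passages=flagged)

def revise_draft(draft: str, report: FactCheckReport) -> str:
    # Hypothetical reviser. In practice the writer (human or machine) would
    # re-report or add sourcing; this placeholder just marks claims resolved.
    return draft.replace("[unverified]", "(source added)")

def review_loop(draft: str, threshold: float = 0.95, max_rounds: int = 3):
    # Send the story back and forth until it clears the accuracy threshold;
    # anything that still falls short after max_rounds is held for human review.
    for _ in range(max_rounds):
        report = check_claims(draft)
        if report.accuracy >= threshold:
            return draft, True       # publishable without escalation
        draft = revise_draft(draft, report)
    return draft, False              # escalate to a human editor

if __name__ == "__main__":
    article = "The council approved the budget [unverified]. Attendance was 312."
    final_draft, auto_approved = review_loop(article)
    print("auto-approved" if auto_approved else "held for human review")
    print(final_draft)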
All this makes sense in theory, and it could even be applied to what news orgs are doing currently with AI summaries. Nieman Lab has an excellent write-up on how The Wall Street Journal, Yahoo News, and Bloomberg all use AI to generate bullet points or top-line takeaways for their journalism. For both Yahoo and the Journal, there's some level of human review on the summaries (for Bloomberg, it's unclear from the article). These organizations are already on the edge of what's acceptable—balancing speed and scale with credibility. One mistake in a summary might not seem like much, but when trust is already fraying, it's enough to shake confidence in the entire approach.

Human review helps ensure accuracy, of course, but it also requires more human labor—something in short supply in newsrooms that don't have a national footprint. AI fact-checking could give smaller outlets more options with respect to public-facing AI content. Similarly, Politico's union recently criticized the publication's AI-written subscriber reports, which are based on the work of its journalists, because of occasional inaccuracies. A fact-checking layer might prevent at least some embarrassing mistakes, like attributing political stances to groups that don't exist.

The AI trust problem that won't go away

Using AI to fight AI hallucination might make mathematical sense if it can prevent serious errors, but there's another problem that stems from relying even more on machines, and it's not just a metallic flavor of irony. The use of AI in media already has a trust problem. The Sun-Times' phantom book list is far from the first AI content scandal, and it certainly won't be the last. Some publications are even adopting anti-AI policies, forbidding its use for virtually anything.

Because of AI's well-documented problems, public tolerance for machine error is lower than for human error. Similarly, if a self-driving car gets into an accident, the scrutiny is obviously much greater than if the car was driven by a person. You might call this the automation fallout bias, and whether you think it's fair or not, it's undoubtedly true. A single high-profile hallucination that slips through the cracks could derail adoption, even if it might be statistically rare.

Add to that what would probably be painful compute costs for multiple layers of AI writing and fact-checking, not to mention the increased carbon footprint. All to improve AI-generated text—which, let's be clear, is not the investigative, source-driven journalism that still requires human rigor and judgment. Yes, we'd be lightening the cognitive load for editors, but would it be worth the cost?

Despite all these barriers, it seems inevitable that we will use AI to check AI outputs. All indications point to hallucinations being inherent to generative technology. In fact, newer 'thinking' models appear to hallucinate even more than their less sophisticated predecessors. If done right, AI fact-checking would be more than a newsroom tool, becoming part of the infrastructure for the web. The question is whether we can build it to earn trust, not just automate it.

The amount of AI content in the world can only increase, and we're going to need systems that can scale to keep up. AI fact-checkers can be part of that solution, but only if we manage—and accept—their potential to make errors themselves. We may not yet trust AI to tell the truth, but at least it can catch itself in a lie.
Yahoo
01-06-2025
- Business
- Yahoo
Infamous summer reading list shows the perils of AI beyond just energy use: 'Completely embarrassed'
A major newspaper in the United States has rightly come under fire after the discovery of a lack of oversight that led to the publication of false information.

As detailed by The Verge, the May 18 issue of the Chicago Sun-Times featured a summer reading guide with recommendations for fake books generated by artificial intelligence. To make matters even more concerning, other articles were found to include quotes and citations from people who don't appear to exist. The summer reading list included fake titles by real authors alongside actual books.

The Sun-Times admitted in a post on Bluesky that the guide was "not editorial content and was not created by, or approved by, the Sun-Times newsroom," and added that it was "looking into how this made it into print." In a statement later published on the newspaper's website, the Sun-Times revealed that the guide was "licensed from a national content partner" and said it was removing the section from all digital editions while updating its policies on publishing third-party content to ensure mistakes like this are avoided in the future.

According to The Verge, the reading list was published without a byline, but a writer named Marco Buscaglia was credited for other pieces in the summer guide. Buscaglia was found to have written other pieces that quote and cite sources and experts who do not appear to be real. Buscaglia admitted to 404 Media that he uses artificial intelligence "for background at times," but claimed he always checks the material. "This time, I did not, and I can't believe I missed it because it's so obvious. No excuses," Buscaglia told 404 Media. "On me 100 percent and I'm completely embarrassed."

This is yet another incident that highlights the importance of maintaining professional standards and ensuring that AI-generated content is properly vetted before publication. In an age where misinformation can spread quickly, it's up to leading news outlets like the Sun-Times to avoid these mistakes so they don't lose the trust of the general public.

On a broader level, AI is an energy-intensive field that carries significant environmental concerns. The International Energy Agency published a report warning that electricity consumption from the data centers that power AI is expected to double by 2026, reaching a level that is "roughly equivalent to the electricity consumption of Japan." It's important to stay informed on critical climate issues and efforts to reduce energy consumption amid the ongoing evolution of AI technology.


Forbes
31-05-2025
- Business
- Forbes
Is Flawed AI Distorting Executive Judgment? — What Leaders Must Do
As AI embeds deeper into leadership workflows, a subtle form of decision drift is taking hold. Not because the tools are flawed, but because we stop questioning them. Their polish is seductive. Their speed, persuasive. But when language replaces thought, clarity no longer guarantees correctness.

In May 2025, the Chicago Sun-Times published an AI-generated summer reading list. The summaries were articulate. The titles sounded plausible. But only five of the fifteen books were real. The rest? Entirely made up: fictional authors, fabricated plots, polished prose built on nothing. It sounded smart. It wasn't. That's the risk.

Now imagine an executive team building its strategy on the same kind of output. It's not fiction anymore. It's a leadership risk. And it's happening already. Quietly. Perceptibly. In organizations where clarity once meant confidence and strategy was something you trusted. Not just in made-up book titles but in the growing gap between what sounds clear and what's actually correct.

Large language models aren't fact-checkers. They're pattern matchers. They generate language based on probability, not precision. What sounds coherent may not be correct. The result is a stream of outputs that look strategic but rest on shaky ground.

This isn't a call to abandon AI. But it is a call to re-anchor how we use it. To ensure leaders stay accountable. To ensure AI stays a tool, not a crutch. I'm not saying AI shouldn't inform decisions. But it must be paired with human intuition, sense making and real dialogue. The more confident the language, the more likely it is to go unquestioned.

Model collapse is no longer theoretical. It's already happening. It begins when models are trained on outputs from other models or, worse, on their own recycled content. Over time, distortions multiply. Edge cases vanish. Rare insights decay. Feedback loops breed repetition. Sameness. False certainty.

As The Register warned, general-purpose AI may already be declining in quality, not in tone but in substance. What remains looks fluent. But it says less. That's just the mechanical part.

The deeper concern is how this affects leaders. When models feed on synthetic data and leaders feed on those outputs, what you get isn't insight. It's reflection. Strategy becomes a mirror, not a map. And we're not just talking bias or hallucinations. As copyright restrictions tighten and human-created content slows, the pool of original data shrinks. What's left is synthetic material recycled over and over. More polish. Less spark. According to researchers at Epoch, high-quality training data could be exhausted sometime between 2026 and 2032. When that happens, models won't be learning from the best of what we know. They'll be learning from echoes.

Developers are trying to slow this collapse.
Many already are, by protecting non-AI data sources, refining synthetic inputs and strengthening governance. But the impending collapse signals something deeper. A reminder that the future of intelligence must remain blended — human and machine, not machine alone. Intuitive, grounded and real.

Psychologists like Kahneman and Tversky warned us long ago about the framing trap: the way a question is asked shapes the answer. A 20 percent chance of failure feels different than an 80 percent chance of success, even if it's the same data. AI makes this trap faster and more dangerous. Because now, the frame itself is machine-generated. A biased prompt. A skewed training set. A hallucinated answer. And suddenly, a strategy is shaped by a version of reality that never existed.

Ask AI to model a workforce reduction plan. If the prompt centers on financials, the reply may omit morale, long-term hiring costs or reputational damage. The numbers work. The human cost disappears.

AI doesn't interrupt. It doesn't question. It reflects. If a leader seeks validation, AI will offer it. The tone will align. The logic will sound smooth. But real insight rarely feels that easy. That's the risk — not that AI is wrong, but that it's too easily accepted as right.

When leaders stop questioning and teams stop challenging, AI becomes a mirror. It reinforces assumptions. It amplifies bias. It removes friction. That's how decision drift begins. Dialogue becomes output. Judgment becomes approval. Teams fall quiet. Cultures that once celebrated debate grow obedient. And something more vital begins to erode: intuition. The human instinct for context. The sense of timing. The inner voice that says something's off. It all gets buried beneath synthetic certainty.

To stop flawed decisions from quietly passing through AI-assisted workflows, leaders have to keep interrogating the output. AI-generated content is already shaping board decks, culture statements and draft policies. In fast-paced settings, it's tempting to treat that output as good enough. But when persuasive language gets mistaken for sound judgment, it doesn't stay in draft mode. It becomes action. Garbage in. Polished out. Then passed as policy.

This isn't about intent. It's about erosion. Quiet erosion in systems that reward speed, efficiency and ease over thoughtfulness.

And then there's the flattery trap. Ask AI to summarize a plan or validate a strategy, and it often echoes the assumptions behind the prompt. The result? A flawed idea wrapped in confidence. No tension. No resistance. Just affirmation. That's how good decisions fail — quietly, smoothly and without a single raised hand in the room.

Leadership isn't about having all the answers. It's about staying close to what's real and creating space for others to do the same. The deeper risk of AI isn't just in false outputs. It's in the cultural drift that happens when human judgment fades. Questions stop. Dialogue thins. Dissent vanishes.

Leaders must protect what AI can't replicate — the ability to sense what's missing. To hear what's not said. To pause before acting. To stay with complexity. AI can generate content. But it can't generate wisdom.

The solution isn't less AI. It's better leadership. Leaders who use AI not as the final word but as a provocateur. As friction. As spark. In fact, human-generated content will only grow in value. Craft will matter more than code. What we'll need most is original thought, deep conversation and meaning making — not regurgitated text that sounds sharp but says nothing new.
Because when it comes to decisions that shape people, culture and strategy, only human judgment connects the dots that data can't see. In the end, strategy isn't what you write. It's what you see. And to see clearly in the age of AI, you'll need more than a prompt. You'll need presence. You'll need discernment. Neither can be AI-trained. Neither can be outsourced.


Japan Times
28-05-2025
- Business
- Japan Times
AI hallucinations? What could go wrong?
Oops. Gotta revise my summer reading list. Those exciting offerings plucked from a special section of The Chicago Sun-Times newspaper and reported last week don't exist. The freelancer who created the list used generative artificial intelligence for help, and several of the books and many of the quotes that gushed about them were made up by the AI.

These are the most recent and high-profile AI hallucinations to make it into the news. We expect growing pains as new technology matures but, oddly and perhaps inexplicably, that problem appears to be getting worse with AI. The notion that we can't ensure that AI will produce accurate information is, uh, 'disturbing' if we intend to integrate that product so deeply into our daily lives that we can't live without it. The truth might not set you free, but it seems like a prerequisite for getting through the day.

An AI hallucination is a phenomenon by which a large language model (LLM) such as a generative AI chatbot finds patterns or objects that simply don't exist and responds to queries with nonsensical or inaccurate answers. There are many explanations for these hallucinations — bad data, bad algorithms, training biases — but no one knows what produces a specific response. Given the spread of AI from search tools to the ever-more prominent role it takes in ordinary tasks (checking grammar or intellectual grunt work in some professions), that's not only troubling but dangerous. AI is being used in medical tests, legal writings and industrial maintenance, and failure in any of those applications could have nasty consequences.

We'd like to believe that eliminating such mistakes is part of the development of new technologies. When they examined the persistence of this problem, tech reporters from The New York Times noted that researchers and developers were saying several years ago that 'AI hallucinations would be solved. Instead, they're appearing more often and people are failing to catch them.' Tweaking models helped reduce hallucinations. But AI is now using 'new reasoning systems,' which means that it ponders questions for microseconds (or maybe seconds for hard questions) longer, and that seems to be creating more mistakes. In one test, hallucination rates for newer AI models reached 79%. While that is extreme, most systems hallucinated in double-digit percentages. More worryingly, because the systems are using so much data, there is little hope that human researchers can figure out what is going on and why.

The NYT cited Amr Awadallah, chief executive of Vectara, a startup that builds AI tools for businesses, who warned that 'Despite our best efforts, they will always hallucinate.' He concluded, 'That will never go away.' That was also the conclusion of a team of Chinese researchers who noted that 'hallucination represents an inherent trait of the GPT model' and 'completely eradicating hallucinations without compromising its high-quality performance is nearly impossible.' I wonder about the 'high quality' of that performance when the results are so unreliable.

Writing in the Harvard Business Review, professors Ian McCarthy, Timothy Hannigan and Andre Spicer last year warned of the 'epistemic risks of botshit,' the made-up, inaccurate and untruthful chatbot content that humans uncritically use for tasks. It's a quick step from botshit to bullshit.
(I am not cursing for titillation but am instead referring to the linguistic analysis of philosopher Harry Frankfurt in his best-known work, 'On Bullshit.') John Thornhill beat me to the punch last weekend in his Financial Times column by pointing out the troubling parallel between AI hallucinations and bullshit. Like a bullshitter, a bot doesn't care about the truth of its claims but wants only to convince the user that its answer is correct, regardless of the facts.

Thornhill highlighted the work of Sandra Wachter and two colleagues from the Oxford Internet Institute, who explained in a paper last year that 'LLMs are not designed to tell the truth in any overriding sense... truthfulness or factuality is only one performance measure among many others such as "helpfulness, harmlessness, technical efficiency, profitability (and) customer adoption."' They warned that a belief that AI tells the truth, when combined with the tendency to attribute superior capabilities to technology, creates 'a new type of epistemic harm.' It isn't the obvious hallucinations we should be worrying about but the 'subtle inaccuracies, oversimplifications or biased responses that are passed off as truth in a confident tone — which can convince experts and nonexperts alike — that posed the greatest risk.' Comparing this output to Frankfurt's concept of bullshit, they label this 'careless speech' and write that it 'causes unique long-term harms to science, education and society, which resists easy quantification, measurement and mitigation.'

While careless speech was the most sobering and subtle AI threat articulated in recent weeks, there were others. A safety test conducted by Anthropic, the developer of the LLM Claude, on its newest AI models revealed 'concerning behavior' in many dimensions. For example, the researchers discovered the AI 'sometimes attempting to find potentially legitimate justifications for requests with malicious intent.' In other words, the software tried to please users who wanted it to answer questions that would create dangers — such as creating weapons of mass destruction — even though it had been instructed not to do so.

The most amusing — in addition to scary — danger was the tendency of the AI 'to act inappropriately in service of goals related to self-preservation.' In plain speak, the AI blackmailed an engineer who was supposed to take it offline. In this case, the AI was given access to email that said it would be replaced by another version and email that suggested that the individual was having an extramarital affair. In 84% of cases, the AI said it would reveal the affair if the engineer went ahead with the replacement. (This was a simulation, so no actual affair or blackmail occurred.)

We'll be discovering more flaws and experiencing more frustration as AI matures. I doubt that those problems will slow its adoption, however. Mark Zuckerberg, CEO of Meta, anticipates far deeper integration of the technology into daily life, with people turning to AI for therapy, shopping and even casual conversation. He believes that AI can 'fill the gap' between the number of friendships many people have and the number they want. He's putting his money where his mouth is, having announced at the beginning of the year that Meta would invest as much as $65 billion this year to expand its AI infrastructure. That is a little over 10% of the estimated $500 billion that has been spent in the U.S. on private investment in AI between 2013 and 2024.
Global spending last year is reckoned to have topped $100 billion.

Also last week, OpenAI CEO Sam Altman announced that he had purchased former Apple designer Jony Ive's company io in a bid to develop AI 'companions' that will re-create the digital landscape as the iPhone did when it was first released. They believe that AI requires a new interface and that phones won't do the trick; indeed, the intent, reported the Wall Street Journal, is to wean users from screens. The product will fit inside a pocket and be fully aware of a user's surroundings and life. They plan to ship 100 million of the new devices 'faster than any company has ever shipped before.'

Call me old-fashioned, but I am having a hard time putting these pieces together. A hallucination might be just what I need to resolve my confusion.

Brad Glosserman is deputy director of and visiting professor at the Center for Rule-Making Strategies at Tama University as well as senior adviser (nonresident) at Pacific Forum. His new book on the geopolitics of high-tech is expected to come out from Hurst Publishers this fall.