
Latest news with #MelissaBell

Public media funding cuts hit Chicago: WBEZ, WTTW brace for impact

Axios

13-06-2025

  • Politics
  • Axios

Public media funding cuts hit Chicago: WBEZ, WTTW brace for impact

President Trump and the Republican-majority U.S. House moved one step closer to cutting funding for public media, putting local organizations in limbo.

The latest: The House passed a bill Thursday afternoon to cancel over $1 billion in funding for PBS and NPR, distributed via the Corporation for Public Broadcasting (CPB). The money was included in the 2025 fiscal year budget, but this action removes it.

Why it matters: Federal funding for public media could vanish, and Chicago stations like WBEZ and WTTW are bracing for the fallout.

The big picture: The move breaks decades of bipartisan tradition treating CPB funding as apolitical and throws public media organizations into budgetary chaos.

What they're saying: "If approved, this cancellation of funding would eliminate critical investments, stripping resources that we use to power independent journalism, educational programming, emergency alerts and the infrastructure that supports the entire network of newsrooms nationwide," Chicago Public Media CEO Melissa Bell wrote to station members. "This could threaten the ability of PBS, and member stations like WTTW, to operate autonomously," a WTTW spokesperson said in a statement.

By the numbers: The cuts would amount to about $3 million annually, roughly 6% of Chicago Public Media's budget, by the organization's estimate. That does not factor in possible syndication costs handed down by National Public Radio, which is also losing funding under this bill. For WTTW, 10% of its 2024 budget came from federal funding.

Zoom in: Chicago Public Media and WTTW (which also includes WFMT-FM) are among the largest public media organizations. Chicago Public Media (WBEZ/Sun-Times) reported revenue of $70 million for 2024, while WTTW had a total operating budget of $32.7 million. Both organizations receive significant revenue from member donations.

Yes, but: Smaller Illinois radio stations, such as WILL-FM in Urbana, WUIS-FM in Springfield and WNIJ-FM in DeKalb, rely far more heavily on federal funding, which in some cases accounts for half of their budgets. Those stations are attached to local universities.

Zoom out: It's unclear whether the organizations will supercharge fundraising to attract more private donors or cut back on programming and staff. Chicago Public Media recently cut staff at both the Sun-Times and WBEZ.

The intrigue: The rescission package aims to claw back funding that Congress previously approved for fiscal year 2025. It primarily consists of cuts identified by DOGE, including funding for foreign aid programs such as USAID. The Corporation for Public Broadcasting's funding is usually allocated two years at a time, so this cuts the second year of funding and puts future allocations in serious doubt. Rescission bills are rare in government; Trump attempted to use one during his first term but was defeated in the Senate.

Between the lines: Republicans have increasingly painted public media as left-leaning and biased, labeling PBS programs like "Sesame Street" as "woke propaganda."

The other side: Public media offers a variety of independent programming spanning news, culture, food and children's shows, funded in a way that is meant to keep programming free of influence from corporations and commercials.

Can AI fact-check its own lies?

Fast Company

13-06-2025

  • Fast Company

Can AI fact-check its own lies?

As AI car crashes go, the recent publication of a hallucinated book list in the Chicago Sun-Times quickly became a multi-vehicle pile-up. After a writer used AI to create a list of summer reads, the majority of which were made-up titles, the resulting article sailed through lax editorial review at the Sun-Times (and at least one other newspaper) and ended up being distributed to thousands of subscribers. The CEO eventually published a lengthy apology.

The most obvious takeaway from the incident is that it was a badly needed wake-up call about what can happen when AI gets too embedded in our information ecosystem. But CEO Melissa Bell resisted the instinct to simply blame AI, instead putting responsibility on the humans who use it and those who are entrusted with safeguarding readers from its weaknesses. She even included herself as one of those people, explaining how she had approved the publishing of special inserts like the one the list appeared in, assuming at the time that there would be adequate editorial review (there wasn't).

The company has made changes to patch this particular hole, but the affair exposes a gap in the media landscape that is poised to get worse: as the presence of AI-generated content, authorized or not, increases in the world, the need for editorial safeguards also increases. And given the state of the media industry and its continual push to do 'more with less,' it's unlikely that human labor will scale up to meet the challenge. The conclusion: AI will need to fact-check AI.

Fact-checking the fact-checker

I know, it sounds like a horrible idea, somewhere between letting the fox guard the henhouse and sending Imperial Stormtroopers to keep the peace on Endor. But AI fact-checking isn't a new idea: when Google Gemini first debuted (then called Bard), it shipped with an optional fact-check step if you wanted it to double-check anything it was telling you. Eventually, this kind of step simply became integrated into how AI search engines work, broadly making their results better, though still far from perfect.

Newsrooms, of course, set a higher bar, and they should. Operating a news site comes with the responsibility to ensure the stories you're telling are true, and for most sites the shrugging disclaimer of 'AI can make mistakes,' while good enough for ChatGPT, doesn't cut it. That's why for most, if not all, AI-generated outputs (such as ESPN's AI-written sports recaps), humans check the work.

As AI writing proliferates, though, the inevitable question is: can AI do that job? Put aside the weirdness for a minute and see it as math, the key number being how often it gets things wrong. If an AI fact-checker can reduce the number of errors as much as, if not more than, a human, shouldn't it do that job?

If you've never used AI to fact-check something, a recently launched service offers a glimpse of where the technology stands. It doesn't just label claims as true or false; it evaluates the article holistically, weighing context, credibility, and bias. It even compares multiple AI search engines to cross-check itself. You can easily imagine a newsroom workflow that applies an AI fact-checker similarly, sending its analysis back to the writer and highlighting the bits that need shoring up. And if the writer happens to be a machine, revisions could be done lightning fast, and at scale. Stories could go back and forth until they reach a certain accuracy threshold, with anything that falls short held for human review.
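To make that loop concrete, here is a minimal sketch in Python of the kind of draft, check, and revise pipeline described above. It is an illustration only, not any real newsroom's system: the function names (fact_check, revise_draft), the accuracy threshold, and the revision cap are all hypothetical placeholders.

from dataclasses import dataclass

ACCURACY_THRESHOLD = 0.95   # hypothetical bar: share of claims that must check out
MAX_REVISIONS = 3           # hypothetical cap on the back-and-forth

@dataclass
class FactCheckReport:
    accuracy: float            # fraction of claims rated accurate
    flagged_claims: list[str]  # passages that need shoring up

def fact_check(draft: str) -> FactCheckReport:
    """Placeholder for a call to an AI fact-checking service."""
    raise NotImplementedError

def revise_draft(draft: str, flagged_claims: list[str]) -> str:
    """Placeholder for an AI (or human) rewrite of the flagged passages."""
    raise NotImplementedError

def review_pipeline(draft: str) -> tuple[str, bool]:
    """Run the check-and-revise loop; return (final_draft, needs_human_review)."""
    for _ in range(MAX_REVISIONS):
        report = fact_check(draft)
        if report.accuracy >= ACCURACY_THRESHOLD:
            return draft, False   # cleared the bar; no escalation needed
        draft = revise_draft(draft, report.flagged_claims)
    return draft, True            # never cleared the bar: hold for a human editor

The exit condition is the safeguard the article describes: anything that never clears the threshold is routed to a human editor rather than published automatically.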
All this makes sense in theory, and it could even be applied to what news organizations are doing currently with AI summaries. Nieman Lab has an excellent write-up on how The Wall Street Journal, Yahoo News, and Bloomberg all use AI to generate bullet points or top-line takeaways for their journalism. For both Yahoo and the Journal, there's some level of human review on the summaries (for Bloomberg, it's unclear from the article).

These organizations are already on the edge of what's acceptable, balancing speed and scale with credibility. One mistake in a summary might not seem like much, but when trust is already fraying, it's enough to shake confidence in the entire approach. Human review helps ensure accuracy, of course, but it also requires more human labor, something in short supply in newsrooms that don't have a national footprint. AI fact-checking could give smaller outlets more options with respect to public-facing AI content.

Similarly, Politico's union recently criticized the publication's AI-written subscriber reports, which are built on the work of its journalists, because of occasional inaccuracies. A fact-checking layer might prevent at least some embarrassing mistakes, like attributing political stances to groups that don't exist.

The AI trust problem that won't go away

Using AI to fight AI hallucination might make mathematical sense if it can prevent serious errors, but there's another problem that stems from relying even more on machines, and it's not just a metallic flavor of irony. The use of AI in media already has a trust problem. The Sun-Times' phantom book list is far from the first AI content scandal, and it certainly won't be the last. Some publications are even adopting anti-AI policies, forbidding its use for virtually anything.

Because of AI's well-documented problems, public tolerance for machine error is lower than for human error. Similarly, if a self-driving car gets into an accident, the scrutiny is obviously much greater than if the car had been driven by a person. You might call this the automation fallout bias, and whether you think it's fair or not, it's undoubtedly true. A single high-profile hallucination that slips through the cracks could derail adoption, even if such errors are statistically rare.

Add to that what would probably be painful compute costs for multiple layers of AI writing and fact-checking, not to mention the increased carbon footprint. All of this to improve AI-generated text, which, let's be clear, is not the investigative, source-driven journalism that still requires human rigor and judgment. Yes, we'd be lightening the cognitive load for editors, but would it be worth the cost?

Despite all these barriers, it seems inevitable that we will use AI to check AI outputs. All indications point to hallucinations being inherent to generative technology; in fact, newer 'thinking' models appear to hallucinate even more than their less sophisticated predecessors. If done right, AI fact-checking would be more than a newsroom tool, becoming part of the infrastructure of the web. The question is whether we can build it to earn trust, not just automate it.

The amount of AI content in the world can only increase, and we're going to need systems that can scale to keep up. AI fact-checkers can be part of that solution, but only if we manage, and accept, their potential to make errors themselves. We may not yet trust AI to tell the truth, but at least it can catch itself in a lie.

Hull man jailed for rape and other 'extreme abuse'

BBC News

27-05-2025

  • General
  • BBC News

Hull man jailed for rape and other 'extreme abuse'

A man who subjected a woman to "extreme levels of violent abuse" over a four-week period has been jailed for 15 years. Huteson, 26, of Greylees Avenue, Hull, was convicted of rape, assault by penetration, assault causing actual bodily harm, assault by beating and coercive and controlling behaviour after a five-day trial at Hull Crown Court. In addition to his sentence, Huteson was handed a five-year extension on licence and placed on the sex offenders register. He had previously admitted charges of assault by beating, criminal damage and malicious communications.

Speaking after the conclusion of the case on Friday, Det Con Melissa Bell, of Humberside Police, said: "Huteson is an extremely dangerous offender. [He] subjected a woman to extreme levels of violent abuse and controlling behaviours and continues to show no remorse for his crimes. I am reassured that he is now off our streets and in prison for a substantial number of years, unable to inflict such incomprehensible emotional and physical harm to anyone else." The detective also praised the victim for her "sheer bravery" in coming forward, and especially for having to "relive the trauma she endured throughout the trial".

Listen to highlights from Hull and East Yorkshire on BBC Sounds, watch the latest episode of Look North or tell us about a story you think we should be covering here.

Chicago paper publishes AI-generated 'summer reading list' with books that don't exist

Fox News

22-05-2025

  • Entertainment
  • Fox News

Chicago paper publishes AI-generated 'summer reading list' with books that don't exist

The Chicago Sun-Times admitted on Tuesday that it published an AI-generated summer reading list featuring books that don't exist. On Sunday, the publication released a special 64-page section titled "Heat Index: Your Guide to the Best of Summer," which featured a list of 15 recommended books for summer. On closer inspection, however, 10 of the 15 books on the list turned out not to be real.

One example was a book called "Nightshade Market" by Min Jin Lee, described as a "riveting tale set in Seoul's underground economy" that follows "three women whose paths intersect in an illegal night market" and explores "class, gender and the shadow economies beneath prosperous societies." Lee confirmed on her X account on Tuesday that the book is not real. "I have not written and will not be writing a novel called 'Nightshade Market.' Thank you," Lee wrote.

Chicago Public Media CEO Melissa Bell addressed the situation in an article on Tuesday, revealing that the mistake came from a freelance writer working for one of the paper's content partners, King Features. Bell acknowledged that the list was published without review from the editorial team. "We are in a moment of great transformation in journalism and technology, and at the same time, our industry continues to be besieged by business challenges," Bell wrote. "This should be a learning moment for all journalism organizations: Our work is valued — and valuable — because of the humanity behind it."

Moving forward, Bell announced that the paper will review its relationship with content partners like King Features, update its policies for third-party content and explicitly identify third-party content in its publications. The Chicago Sun-Times also removed the section from its e-paper version and confirmed that it would not charge subscribers who bought the premium edition. "We are committed to making sure this never happens again. We know that there is work to be done to provide more answers and transparency around the production and publication of this section, and will share additional updates in the coming days," Bell said.

In another report for the Chicago Sun-Times, the freelance writer was identified as Marco Buscaglia, who confirmed that he used AI for this and other stories without disclosing it to supervisors or fully vetting the results. King Features later said it was "terminating" its relationship with Buscaglia, saying he had violated its strict policy regarding the use of AI.

In an additional statement to Fox News Digital on Wednesday, Chicago Public Media marketing director Victor Lim said, "Regarding Chicago Public Media's usage of generative AI, we are committed to producing journalism that is accurate, ethical, and deeply human. While GAI may assist with certain tasks—like summarizing documents or analyzing data—our editorial content will always be created and shaped by journalists."

A US newspaper just released its summer reading list. But the books don't exist

7NEWS

21-05-2025

  • Entertainment
  • 7NEWS

A US newspaper just released its summer reading list. But the books don't exist

A US newspaper released its recommended summer reading list on Sunday, two weeks ahead of the start of the US summer. The problem? Most of the books don't exist.

The Chicago Sun-Times confirmed on Tuesday that several of the titles had been generated by AI and don't actually exist. The section, Heat Index: Your Guide to the Best of Summer, was created in part by a freelancer who works for a third-party company, according to the Sun-Times. 'To our great disappointment, that list was created through the use of an AI tool and recommended books that do not exist,' Melissa Bell, chief executive of Sun-Times owner Chicago Public Media, said in a statement. 'We are actively investigating the accuracy of other content in the special section.'

The AI flub comes as industries like journalism fear that the rapidly developing technology could encroach on jobs formerly occupied by humans. The Sun-Times recently cut 20 per cent of its staff, according to Axios. While it has come a long way in recent years, AI is not a flawless technology, and some models have been known to generate fictional or inaccurate information, an issue also called hallucination. Some institutions have found uses for the growing technology, including the health care field, education and marketing. However, there is still much pushback from some consumers who are hesitant to trust AI. And like all forms of journalism, AI still requires fact-checking.

While several of the books listed by the Sun-Times do not exist, the authors credited with writing them do. There is no Tidewater Dreams, for example, but Isabel Allende is an acclaimed Chilean-American writer. Chicago author Rebecca Makkai is credited with the fake book Boiling Point. And author Min Jin Lee is listed as having written the nonexistent book Nightshade Market. Toward the bottom of the list, some real books appear, such as André Aciman's Call Me By Your Name.

Bell also released a statement on the paper's website. She said the list came from distributor King Features, a company the paper regularly partners with for content. 'King Features worked with a freelancer who used an AI agent to help build out this special section,' she said. 'It was inserted into our paper without review from our editorial team, and we presented the section without any acknowledgement that it was from a third-party organisation.'

At least one other paper, The Philadelphia Inquirer, also ran the third-party section including the AI-generated book titles. In a statement shared by the Sun-Times, a spokesperson for King Features said the company has 'a strict policy with our staff, cartoonists, columnists, and freelance writers against the use of AI to create content'. 'The Heat Index summer supplement was created by a freelance content creator who used AI in its story development without disclosing the use of AI. We are terminating our relationship with this individual. We regret this incident and are working with the handful of publishing partners who acquired this supplement.'

The Sun-Times said it had removed the list from its digital publication, and as of Wednesday afternoon its website had a homepage banner leading to Bell's statement. The paper will now identify in print when content comes from a third-party distributor, and it is reviewing its relationships with third-party contractors to ensure they meet the standards of the newsroom, it said.
