
Latest news with #KevinRoose

Hard Fork

Yahoo

a day ago

  • Business
  • Yahoo

Hard Fork

The New York Times' weekly technology podcast, hosted by journalists Kevin Roose and Casey Newton, covers topics that affect our daily lives without veering into wonky debates over the specifics of a new iPhone rollout. Topics include how AI is affecting the job market for new graduates seeking entry-level positions, the ethical hazards of an AI chatbot that's too nice, and the Trump phone. Episodes strike the right balance between interviews and commentary, and when the co-hosts land big executives on the podcast, they question them directly on subjects like the speed with which Silicon Valley seems to want to leap into artificial intelligence without guardrails. The duo recently made headlines for their live interview with OpenAI CEO Sam Altman, who sparred with the journalists over the Times' ongoing lawsuit against OpenAI. Write to Eliana Dockterman at

Why I'm Suing OpenAI, the Creator of ChatGPT

Scientific American

2 days ago

  • Business
  • Scientific American

Why I'm Suing OpenAI, the Creator of ChatGPT

'I believe that most people and institutions are totally unprepared for the A.I. systems that exist today, let alone more powerful ones,' wrote New York Times technology columnist Kevin Roose in March, 'and that there is no realistic plan at any level of government to mitigate the risks or capture the benefits of these systems.' He's right. That's why I recently filed a federal lawsuit against OpenAI seeking a temporary restraining order to prevent the company from deploying its products, such as ChatGPT, in the state of Hawaii, where I live, until it can demonstrate the legitimate safety measures that the company has itself called for from its 'large language model.'

We are at a pivotal moment. Leaders in AI development—including OpenAI's own CEO Sam Altman—have acknowledged the existential risks posed by increasingly capable AI systems. In June 2015, Altman stated: 'I think AI will probably, most likely, sort of lead to the end of the world, but in the meantime, there'll be great companies created with serious machine learning.' Yes, he was probably joking—but it's not a joke.

Eight years later, in May 2023, more than 1,000 technology leaders, including Altman himself, signed an open letter comparing AI risks to other existential threats like climate change and pandemics. 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,' the letter, released by the Center for AI Safety, a California nonprofit, says in its entirety.

I'm at the end of my rope. For the past two years, I've tried to work with state legislators to develop regulatory frameworks for artificial intelligence in Hawaii.
These efforts sought to create an Office of AI Safety and implement the precautionary principle in AI regulation, which means taking action before actual harm materializes, because it may be too late if we wait. Unfortunately, despite collaboration with key senators and committee chairs, my state legislative efforts died early after being introduced. In the meantime, the Trump administration has rolled back almost every aspect of federal AI regulation and has essentially put on ice the international treaty effort that began with the Bletchley Declaration in 2023. At no level of government are there any safeguards for the use of AI systems in Hawaii.

Despite its previous statements, OpenAI has abandoned its key safety commitments, including walking back its 'superalignment' initiative, which promised to dedicate 20 percent of computational resources to safety research, and, late last year, reversing its prohibition on military applications. Its critical safety researchers have left, including co-founder Ilya Sutskever and Jan Leike, who publicly stated in May 2024, 'Over the past years, safety culture and processes have taken a backseat to shiny products.' The company's governance structure was fundamentally altered during a November 2023 leadership crisis, as the reconstituted board removed important safety-focused oversight mechanisms. Most recently, in April, OpenAI eliminated guardrails against misinformation and disinformation, opening the door to releasing 'high risk' and 'critical risk' AI models, 'possibly helping to swing elections or create highly effective propaganda campaigns,' according to Fortune magazine.

In its first response, OpenAI has argued that the case should be dismissed because regulating AI is fundamentally a 'political question' that should be addressed by Congress and the president.
I, for one, am not comfortable leaving such important decisions to this president or this Congress—especially when they have done nothing to regulate AI to date.

Hawaii faces distinct risks from unregulated AI deployment. Recent analyses indicate that a substantial portion of Hawaii's professional services jobs could face significant disruption within five to seven years as a consequence of AI. Our isolated geography and limited economic diversification make workforce adaptation particularly challenging. Our unique cultural knowledge, practices, and language risk misappropriation and misrepresentation by AI systems trained without appropriate permission or context.

My federal lawsuit applies well-established legal principles to this novel technology and makes four key claims:

  • Product liability: OpenAI's AI systems represent defectively designed products that fail to perform as safely as ordinary consumers would expect, particularly given the company's deliberate removal of safety measures it previously deemed essential.
  • Failure to warn: OpenAI has failed to provide adequate warnings about the known risks of its AI systems, including their potential for generating harmful misinformation and exhibiting deceptive behaviors.
  • Negligent design: OpenAI has breached its duty of care by prioritizing commercial interests over safety considerations, as evidenced by internal documents and public statements from former safety researchers.
  • Public nuisance: OpenAI's deployment of increasingly capable AI systems without adequate safety measures creates an unreasonable interference with public rights in Hawaii.

Federal courts have recognized the viability of such claims in addressing technological harms with broad societal impacts. Recent precedents from the Ninth Circuit Court of Appeals (which includes Hawaii) establish that technology companies can be held liable for design defects that create foreseeable risks of harm.
I'm not asking for a permanent ban on OpenAI or its products here in Hawaii but, rather, a pause until OpenAI implements the safety measures the company itself has said are needed, including: reinstating its previous commitment to allocate 20 percent of resources to alignment and safety research; implementing the safety framework outlined in its own publication 'Planning for AGI and Beyond,' which attempts to create guardrails for dealing with AI that is as intelligent as, or more intelligent than, its human creators; restoring meaningful oversight through governance reforms; creating specific safeguards against misuse for manipulation of democratic processes; and developing protocols to protect Hawaii's unique cultural and natural resources. These items simply require the company to adhere to safety standards it has publicly endorsed but has failed to consistently implement.

While my lawsuit focuses on Hawaii, the implications extend far beyond our shores. The federal court system provides an appropriate venue for addressing these interstate commerce issues while protecting local interests.

Many experts believe the development of increasingly capable AI systems will be one of the most significant technological transformations in human history—perhaps in a league with fire, according to Google CEO Sundar Pichai. 'AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire,' Pichai said in 2018. He's right, of course. The decisions we make today will profoundly shape the world our children and grandchildren inherit. I believe we have a moral and legal obligation to proceed with appropriate caution and to ensure that potentially transformative technologies are developed and deployed with adequate safety measures. What is happening now with OpenAI's breakneck AI development and deployment to the public is, to echo technologist Tristan Harris's succinct April 2025 summary, 'insane.'
My lawsuit aims to restore just a little bit of sanity.

X Hits Grok Bottom + More A.I. Talent Wars + ‘Crypto Week'

New York Times

6 days ago

  • Business
  • New York Times

X Hits Grok Bottom + More A.I. Talent Wars + ‘Crypto Week'

Hosted by Kevin Roose and Casey Newton. Produced by Whitney Jones and Rachel Cohn. Edited by Jen Poyant. Engineered by Katie McMurran. Original music by Dan Powell, Marion Lozano, Rowan Niemisto and Alyssa Moxley.

This week, we tick through the many dramatic headlines surrounding xAI, including the departure of X's chief executive, Linda Yaccarino; the Grok chatbot spewing antisemitic comments; and the A.I. companion Ani engaging in sexually explicit role-play. Then, we explain why a fight to acquire the start-up Windsurf startled many in Silicon Valley and may reshape the culture in many of the big A.I. labs. And finally, it's 'crypto week.' David Yaffe-Bellany explains how crypto provisions in the bills before Congress and the president could affect even people who don't hold digital currencies.

Also, we officially have merch! For a limited time, you can get a special-edition 'Hard Fork' hat when you purchase an annual New York Times Audio subscription for the first time. Get your hat at

Guests: David Yaffe-Bellany, New York Times technology reporter covering the crypto industry

Additional Reading:

  • Elon Musk's Grok Chatbot Shares Antisemitic Posts on X
  • Google Hires A.I. Leaders From a Start-Up Courted by OpenAI
  • Cognition AI Buys Windsurf as A.I. Frenzy Escalates
  • 'Crypto Week' Is Back on Track After House G.O.P. Quells Conservative Revolt
  • The 'Trump Pump': How Crypto Lobbying Won Over a President

'Hard Fork' is hosted by Kevin Roose and Casey Newton and produced by Whitney Jones and Rachel Cohn. We're edited by Jen Poyant. Engineering by Katie McMurran and original music by Dan Powell, Marion Lozano, Rowan Niemisto and Alyssa Moxley. Fact-checking by Caitlin Love. Special thanks to Paula Szuchman, Pui-Wing Tam, Dahlia Haddad and Jeffrey Miranda.

Hard Fork Live, Part 1: Sam Altman and Brad Lightcap of OpenAI

New York Times

27-06-2025

  • Entertainment
  • New York Times

Hard Fork Live, Part 1: Sam Altman and Brad Lightcap of OpenAI

Hosted by Kevin Roose and Casey Newton. Produced by Rachel Cohn and Whitney Jones. Edited by Jen Poyant. Engineered by Katie McMurran. Original music by Dan Powell, Elisheba Ittoop, Marion Lozano and Rowan Niemisto.

The first Hard Fork Live is officially in the books, and for those who couldn't attend, we're playing highlights from the event in this episode and the next. This week, Mayor Daniel Lurie of San Francisco makes a surprise appearance to discuss the advice he's receiving from tech executives during the early days of his administration, as well as how he built a social media presence that's got Kevin wondering: Could we do that? Then, the conversation that had everyone talking: we'll play our interview with OpenAI's chief executive, Sam Altman, and chief operating officer, Brad Lightcap, and explain what was going on in our heads as the conversation unfolded in a way we did not expect.

'Hard Fork' is hosted by Kevin Roose and Casey Newton and produced by Rachel Cohn and Whitney Jones. This episode was edited by Jen Poyant. Engineering by Katie McMurran and original music by Dan Powell, Elisheba Ittoop, Marion Lozano and Rowan Niemisto. Fact-checking by Caitlin Love. Special thanks to Paula Szuchman, Pui-Wing Tam, Dahlia Haddad and Jeffrey Miranda.

Trump Is Selling a Phone + The Start-Up Trying to Automate Every Job + Allison Williams Talks ‘M3GAN 2.0'

New York Times

20-06-2025

  • Business
  • New York Times

Trump Is Selling a Phone + The Start-Up Trying to Automate Every Job + Allison Williams Talks ‘M3GAN 2.0'

Hosted by Kevin Roose and Casey Newton. Produced by Whitney Jones and Rachel Cohn. Edited by Jen Poyant. Engineered by Katie McMurran. Original music by Dan Powell, Marion Lozano, Rowan Niemisto and Diane Wong.

This week, President Trump's family business announced that it was introducing a mobile phone and a cellular network. We tick through the many potential conflicts of interest this new business venture raises. Then, the co-founders of the start-up Mechanize defend their efforts to automate away all jobs — starting with software engineering. And finally, we take a trip to the movie theater. 'M3GAN 2.0' is out next week, so its star, Allison Williams, joins us to discuss the film and A.I.'s impact on her career and parenting.

Guests: Matthew Barnett and Ege Erdil, co-founders of Mechanize; Allison Williams, actor

Additional Reading:

  • Trump Mobile Phone Company Announced by President's Family, but Details Are Murky
  • The President Is Selling a Phone
  • This A.I. Company Wants to Take Your Job

'Hard Fork' is hosted by Kevin Roose and Casey Newton and produced by Whitney Jones and Rachel Cohn. This episode was edited by Jen Poyant. Engineering by Alyssa Moxley and original music by Dan Powell, Marion Lozano, Diane Wong and Rowan Niemisto. Fact-checking by Caitlin Love. Special thanks to Paula Szuchman, Pui-Wing Tam, Dahlia Haddad and Jeffrey Miranda.
