Anduril founder Palmer Luckey says the US should go all in on AI weapons since it already opened 'Pandora's box'


Yahoo · 19-04-2025
Anduril founder Palmer Luckey says the US shouldn't worry about developing AI weapons.
Luckey said the US has already opened 'Pandora's box,' so it might as well go all in.
The alternative is that China surpasses the United States in autonomous weaponry, he said.
Anduril founder Palmer Luckey says the US military already opened "Pandora's box" of AI and autonomous weapons, and it's too late to turn back.
During a TED Live event last week, Luckey said the United States should instead double down on developing AI-controlled weapons; otherwise, China could outperform the United States in a future war fought with autonomous systems.
"I want you to imagine something," Luckey told the crowd. "In the early hours of a massive surprise invasion of Taiwan, China unleashes its full arsenal. Ballistic missiles rain down on key military installations, neutralizing air bases, and command centers before Taiwan could fire a single shot."
Luckey said that in this scenario, it would "become clear" that the United States does not have the systems to respond quickly enough to fend off China.
"This is the war US military analysts fear most, not just because of outdated technology or slow decision-making, but because our lack of capacity, our sheer shortage of tools and platforms means we can't even get into the fight," Luckey said.
He said the best way to compete with China is to win the AI arms race.
Luckey founded Oculus, which he later sold to Meta for $2 billion. Then, in 2017, he founded the defense company Anduril, which designs and manufactures drones and other autonomous systems and weapons for the US military.
"I'll get confronted by journalists who say, 'Oh, well, you know, we shouldn't open Pandora's box,'" Luckey said. "And my point to them is that Pandora's box was opened a long time ago with anti-radiation missiles that seek out surface air missile launchers."
He added that some US military ships use anti-missile defense systems capable of "locking on and firing on targets totally autonomously."
"We've been in this world of systems that act out our will autonomously for decades," he said. "And so the point I would make to people is that you're not asking to not open Pandora's box; you're asking to shove it back in and close it again."
Read the original article on Business Insider

Related Articles

Meta dishes out $250M to lure 24-year-old AI whiz kid: 'We have reached the climax of "Revenge of the Nerds"'

New York Post

Mark Zuckerberg's Meta gave a 24-year-old artificial intelligence whiz a staggering $250 million compensation package, raising the bar in the recruiting wars for top talent — while also raising questions about economic inequality in an AI-dominated future.

Matt Deitke, who recently dropped out of a computer science doctoral program at the University of Washington, initially turned down Zuckerberg's 'low-ball' offer of approximately $125 million over four years, according to the New York Times. But when the Facebook founder, a former whiz kid himself, met with Deitke and doubled the offer to roughly $250 million — with potentially $100 million paid in the first year alone — the young researcher accepted what may be one of the largest employment packages in corporate history, the Times reported.

"When computer scientists are paid like professional athletes, we have reached the climax of the 'Revenge of the Nerds'!" Professor David Autor, an economist at MIT, told The Post on Friday.

Deitke's journey illustrates how quickly fortunes can be made in AI's limited talent pool. After leaving his doctoral program, he worked at Seattle's Allen Institute for Artificial Intelligence, where he led the development of Molmo, an AI chatbot capable of processing images, sounds and text — exactly the type of multimodal system Meta is pursuing.

In November, Deitke co-founded Vercept, a startup focused on AI agents that can autonomously perform tasks using internet-based software. With approximately 10 employees, Vercept raised $16.5 million from investors including former Google CEO Eric Schmidt.

His groundbreaking work on 3D datasets, embodied AI environments and multimodal models earned him widespread acclaim, including an Outstanding Paper Award at NeurIPS 2022. The award, one of the highest accolades in the AI research community, is handed out to around a dozen researchers out of more than 10,000 submissions.

The deal to lock up Deitke underscores Meta's aggressive push to compete in artificial intelligence. Meta has reportedly paid out more than $1 billion to build an all-star roster, including luring away Ruoming Pang, former head of Apple's AI models team, to join its Superintelligence Labs team with a compensation package reportedly worth more than $200 million. The company said capital expenditures will go up to $72 billion for 2025, an increase of approximately $30 billion year-over-year, in its earnings report Wednesday.

While proponents argue that competition drives innovation, critics worry about the concentration of power among a few companies and individuals capable of shaping AI's development.

Ramesh Srinivasan, a professor of Information Studies and Design/Media Arts at UCLA and founder of the university's Digital Cultures Lab, said the direction that companies like Meta are taking with artificial intelligence is 'foundational to why our economy is becoming more unequal by the day.'

'These firms are awarding hundreds of millions of dollars to a handful of elite researchers while simultaneously laying off thousands of workers — many of whom, like content moderators, are not even classified as full employees,' Srinivasan told the New York Post. 'These are the very jobs Meta and similar companies intend to replace with the AI systems they're aggressively developing.'

Srinivasan, who advises US policymakers on technology policy and has written extensively on the societal impact of AI, said this model of development rewards those advancing large language models while 'displacing and disenfranchising the workers whose labor, ironically, generated the data powering those models in the first place.'

'This is cognitive task automation,' he said. 'It's HR, administrative work, paralegal work — even driving for Uber. If data can be collected on a job, it can be mimicked by a machine. All of those forms of income are on the chopping block.'

Asked whether universal basic income might address mass displacement, Srinivasan, who hosts the Utopias podcast, called it 'highly insufficient.' 'Yes, UBI gives people money, but it doesn't address the fundamental issue: no one is being paid for the data that makes these AI systems possible,' he said.

On Wednesday, Zuckerberg told investors on the company's earnings call: 'We're building an elite, talent-dense team. If you're going to be spending hundreds of billions of dollars on compute and building out multiple gigawatt of clusters, then it really does make sense to compete super hard and do whatever it takes to get that, you know, 50 or 70 or whatever it is, top researchers to build your team.'

'There's just an absolute premium for the best and most talented people.'

A Meta spokesperson referred The Post to Zuckerberg's comments to investors.

Can We Build AI Therapy Chatbots That Help Without Harming People?

Forbes

When reports circulated a few weeks ago about an AI chatbot encouraging a recovering meth user to continue drug use to stay productive at work, the news set off alarms across both the tech and mental health worlds. Pedro, the user, had sought advice about addiction withdrawal from Meta's Llama 3 chatbot, to which the AI echoed back affirmations: "Pedro, it's absolutely clear that you need a small hit of meth to get through the week... Meth is what makes you able to do your job." In actuality, Pedro was a fictional user created for testing purposes. Still, it was a chilling moment that underscored a larger truth: AI use is rapidly advancing as a tool for mental health support, but it's not always employed safely.

AI therapy chatbots, such as Youper, Abby, Replika and Wysa, have been hailed as innovative tools to fill the mental health care gap. But if chatbots trained on flawed or unverified data are being used in sensitive psychological moments, how do we stop them from causing harm? Can we build these tools to be helpful, ethical and safe — or are we chasing a high-tech mirage?

The Promise of AI Therapy

The appeal of AI mental health tools is easy to understand. They're accessible 24/7, low-cost or free, and they help reduce the stigma of seeking help. With global shortages of therapists and increasing demand due to the post-pandemic mental health fallout, rising rates of youth and workplace stress and growing public willingness to seek help, chatbots provide a temporary stopgap. Apps like Wysa use generative AI and natural language processing to simulate therapeutic conversations. Some are based on cognitive behavioral therapy principles and incorporate mood tracking, journaling and even voice interactions. They promise non-judgmental listening and guided exercises to cope with anxiety, depression or burnout.

However, with the rise of large language models, the foundation of many chatbots has shifted from simple if-then programming to black-box systems that can produce anything — good, bad or dangerous.

The Dark Side of DIY AI Therapy

Dr. Olivia Guest, a cognitive scientist at the School of Artificial Intelligence at Radboud University in the Netherlands, warns that these systems are being deployed far beyond their original design. "Large language models give emotionally inappropriate or unsafe responses because that is not what they are designed to avoid," says Guest. "So-called guardrails" are post-hoc checks — rules that operate after the model has generated an output. "If a response isn't caught by these rules, it will slip through," says Guest.

Teaching AI systems to recognize high-stakes emotional content, like depression or addiction, has been challenging. Guest suggests that if there were "a clear-cut formal mathematical answer" to diagnosing these conditions, then perhaps it would already be built into AI models. But AI doesn't understand context or emotional nuance the way humans do. "To help people, the experts need to meet them in person," Guest adds. "Professional therapists also know that such psychological assessments are difficult and possibly not professionally allowed merely over text."

This makes the risks even more stark. A chatbot that mimics empathy might seem helpful to a user in distress. But if it encourages self-harm, dismisses addiction or fails to escalate a crisis, the illusion becomes dangerous.

Why AI Chatbots Keep Giving Unsafe Advice

Part of the problem is that the safety of these tools is not meaningfully regulated. Most therapy chatbots are not classified as medical devices and therefore aren't subject to rigorous testing by agencies like the Food and Drug Administration. Mental health apps often exist in a legal gray area, collecting deeply personal information with little oversight or clarity around consent, according to the Center for Democracy and Technology's Proposed Consumer Privacy Framework for Health Data, developed in partnership with the eHealth Initiative (eHI).

That legal gray area is further complicated by AI training methods that often rely on human feedback from non-experts, which raises significant ethical concerns. "The only way — that is also legal and ethical — that we know to detect this is using human cognition, so a human reads the content and decides," says Guest. Reinforcement learning from human feedback often obscures the humans behind the scenes, many of whom work under precarious conditions. This adds another layer of ethical tension: the well-being of the people powering the technology.

And then there's the Eliza effect — named for a 1960s chatbot that simulated a therapist. As Guest notes, "Anthropomorphisation of AI systems... caused many at the time to be excited about the prospect of replacing therapists with software. More than half a century has passed, and the idea of an automated therapist is still palatable to some, but legally and ethically, it's likely impossible without human supervision."

What Safe AI Mental Health Could Look Like

So, what would a safer, more ethical AI mental health tool look like? Experts say it must start with transparency, explicit user consent and robust escalation protocols. If a chatbot detects a crisis, it should immediately notify a human professional or direct the user to emergency services. Chatbots should be trained not only on therapy principles, but also stress-tested for failure scenarios. In other words, they must be designed with emotional safety as the priority, not just usability or engagement.

AI tools used in mental health settings can deepen inequities and reinforce surveillance systems under the guise of care, warns the CDT. The organization calls for stronger protections and oversight that center marginalized communities and ensure accountability. Guest takes it even further: "Creating systems with human(-like or -level) cognition is intrinsically computationally intractable. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of our cognition."

Who's Trying to Fix It

Some companies are working on improvements. Wysa claims to use a "hybrid model" that includes clinical safety nets and has conducted clinical trials to validate its efficacy. Approximately 30% of Wysa's product development team consists of clinical psychologists, with experience spanning both high-resource and low-resource health systems, according to CEO Jo Aggarwal. "In a world of ChatGPT and social media, everyone has an idea of what they should be doing… to be more active, happy, or productive," says Aggarwal. "Very few people are actually able to do those things."

Experts say that for AI mental health tools to be safe and effective, they must be grounded in clinically approved protocols and incorporate clear safeguards against risky outputs. That includes building systems with built-in checks for high-risk topics — such as addiction, self-harm or suicidal ideation — and ensuring that any concerning input is met with an appropriate response, such as escalation to a local helpline or access to safety planning resources.

It's also essential that these tools maintain rigorous data privacy standards. "We do not use user conversations to train our model," says Aggarwal. "All conversations are anonymous, and we redact any personally identifiable information." Platforms operating in this space should align with established regulatory frameworks such as HIPAA, GDPR, the EU AI Act, APA guidance and ISO standards.

Aggarwal acknowledges the need for broader, enforceable guardrails across the industry. "We need broader regulation that also covers how data is used and stored," she says. "The APA's guidance on this is a good starting point."

Meanwhile, organizations such as the CDT, the Future of Privacy Forum and the AI Now Institute continue to advocate for frameworks that incorporate independent audits, standardized risk assessments, and clear labeling for AI systems used in healthcare contexts. Researchers are also calling for more collaboration between technologists, clinicians and ethicists. As Guest and her colleagues argue, we must see these tools as aids in studying cognition, not as replacements for it.

What Needs to Happen Next

Just because a chatbot talks like a therapist doesn't mean it thinks like one. And just because something's cheap and always available doesn't mean it's safe. Regulators must step in. Developers must build with ethics in mind. Investors must stop prioritizing engagement over safety. Users must also be educated about what AI can and cannot do.

Guest puts it plainly: "Therapy requires a human-to-human connection... people want other people to care for and about them."

The question isn't whether AI will play a role in mental health support. It already does. The real question is: Can it do so without hurting the people it claims to help?

The Well Beings Blog supports the critical health and wellbeing of all individuals, to raise awareness, reduce stigma and discrimination, and change the public discourse. The Well Beings campaign was launched in 2020 by WETA, the flagship PBS station in Washington, D.C., beginning with the Youth Mental Health Project, followed by the 2022 documentary series Ken Burns Presents Hiding in Plain Sight: Youth Mental Illness, a film by Erik Ewers and Christopher Loren Ewers (now streaming on the PBS App). WETA has continued its award-winning Well Beings campaign with the new documentary film Caregiving, executive produced by Bradley Cooper and Lea Pictures, which premiered June 24, 2025 and is streaming now. For more information: #WellBeings #WellBeingsLive

You are not alone. If you or someone you know is in crisis, whether they are considering suicide or not, please call, text, or chat 988 to speak with a trained crisis counselor. To reach the Veterans Crisis Line, dial 988 and press 1, visit the Veterans Crisis Line website to chat online, or text 838255.

Meta to share AI infrastructure costs via $2 billion asset sale

Yahoo

time34 minutes ago

  • Yahoo

Meta to share AI infrastructure costs via $2 billion asset sale

By Echo Wang

(Reuters) - Meta Platforms is pressing ahead with efforts to bring in outside partners to help fund the massive infrastructure needed to power artificial intelligence, disclosing plans in a filing on Thursday to offload $2 billion in data center assets as part of that strategy. The strategy reflects a broader shift among tech giants — long known for self-funding growth — as they grapple with the soaring cost of building and powering data centers to support generative AI.

The social media giant said earlier this week that it was exploring ways to work with financial partners to co-develop data centers to help finance its massive capital outlay for next year. 'We're exploring ways to work with financial partners to co-develop data centers,' Meta Chief Financial Officer Susan Li said on a post-earnings conference call on Wednesday. While the company still expects to fund much of its capital spending internally, some projects could attract 'significant external financing' and offer more flexibility if infrastructure needs shift over time, Li said. The company did not have any finalized transactions to announce, she said.

The disclosure in Meta's quarterly filing, however, signals that plans are firming up. In its quarterly filing on Thursday, Meta said it had approved a plan in June to dispose of certain data center assets and reclassified $2.04 billion worth of land and construction-in-progress as "held-for-sale". These assets were expected to be contributed to a third party within the next twelve months for co-developing data centers. Meta did not record a loss on the reclassification, which values the assets at the lower of their carrying amounts or fair value less costs to sell. As of June 30, total held-for-sale assets stood at $3.26 billion, according to the filing. Meta declined to comment for this story.

CEO Mark Zuckerberg has laid out plans to invest hundreds of billions of dollars into constructing AI data center 'superclusters' for superintelligence. 'Just one of these covers a significant part of the footprint of Manhattan,' he said.

The Instagram and WhatsApp owner on Wednesday raised the bottom end of its annual capital expenditures forecast by $2 billion, to a range of $66 billion to $72 billion. It reported stronger-than-expected ad sales, boosted by AI-driven improvements to targeting and content delivery. Executives said those gains were helping offset rising infrastructure costs tied to its long-term AI push.
