Latest news with #Anduril

CNN
a day ago
- Business
- CNN
OpenAI CEO Sam Altman warns of an AI ‘fraud crisis'
OpenAI CEO Sam Altman says the world may be on the precipice of a 'fraud crisis' because of how artificial intelligence could enable bad actors to impersonate other people. 'A thing that terrifies me is apparently there are still some financial institutions that will accept a voice print as authentication for you to move a lot of money or do something else — you say a challenge phrase, and they just do it,' Altman said. 'That is a crazy thing to still be doing… AI has fully defeated most of the ways that people authenticate currently, other than passwords.' The comments were part of his wide-ranging interview about the economic and societal impacts of AI at the Federal Reserve on Tuesday. He also told the audience, which included representatives of large US financial institutions, about the role he expects AI to play in the economy. His appearance comes as the White House is expected to release its 'AI Action Plan' in the coming days, a policy document outlining its approach to regulating the technology and promoting America's dominance in the AI space. OpenAI, which provided recommendations for the plan, has ramped up its presence on and around Capitol Hill in recent months. On Tuesday, the company confirmed it will open its first Washington, DC, office early next year to house its approximately 30-person workforce in the city. Chan Park, OpenAI's head of global affairs for the US and Canada, will lead the new office alongside Joe Larson, who is leaving defense technology company Anduril to become OpenAI's vice president of government. The company will use the space to host policymakers, preview new technology, and provide AI training to groups such as teachers and government officials. It will also house research into AI's economic impact and how to improve access to the technology.
Despite Altman's warnings about the technology's risks, OpenAI has urged the Trump administration to avoid regulation it says could hamper tech companies' ability to compete with foreign AI innovations. Earlier this month, the US Senate voted to strike a controversial provision from Trump's agenda bill that would have prevented states from enforcing AI-related laws for 10 years. Altman isn't alone in worrying that AI will supercharge fraud. The FBI warned about these AI voice and video 'cloning' scams last year. Multiple parents have reported that AI voice technology was used in attempts to trick them out of money by convincing them that their children were in trouble. And earlier this month, US officials warned that someone using AI to impersonate Secretary of State Marco Rubio's voice had contacted foreign ministers, a US governor and a member of Congress. 'I am very nervous that we have a significant, impending fraud crisis,' Altman said. 'Right now, it's a voice call; soon it's going to be a video or FaceTime that's indistinguishable from reality,' he added. He warned that while his company isn't building such impersonation tools, it's a challenge the world will soon need to confront as AI continues to evolve. Altman is backing a tool called The Orb, built by Tools for Humanity, which says it will offer 'proof of human' in a world where AI makes it harder to distinguish what, and who, is real online. Altman also explained what keeps him up at night: the idea of bad actors making and misusing AI 'superintelligence' before the rest of the world has advanced enough to defend against such an attack — for example, a US adversary using AI to target the American power grid or create a bioweapon. That comment could speak to fears within the White House and elsewhere on Capitol Hill about China outpacing US tech companies on AI.
Altman also said he worries about the prospect of humans losing control of a superintelligent AI system, or giving the technology too much decision-making power. Various tech companies, including OpenAI, are chasing AI superintelligence — and Altman has said he thinks the 2030s could bring AI intelligence far beyond what humans are capable of — but it remains unclear how exactly they define that milestone and when, if ever, they'll reach it. But Altman said he's not as worried as some of his peers in Silicon Valley about AI's potential impact on the workforce, even as leaders such as Anthropic CEO Dario Amodei and Amazon CEO Andy Jassy have warned the technology will take jobs. Instead, Altman believes that 'no one knows what happens next.' 'There's a lot of these really smart-sounding predictions,' he said. '"Oh, this is going to happen on this and the economy over here." No one knows that. In my opinion, this is too complex of a system, this is too new and impactful of a technology, it's very hard to predict.' Still, he does have some thoughts. He said that while 'entire classes of jobs will go away,' new types of work will emerge. And Altman repeated a prediction he's made previously: that if the world could look forward 100 years, the workers of the future probably won't have what workers today consider 'real jobs.' 'You have everything you could possibly need. You have nothing to do,' Altman said of the future workforce. 'So, you're making up a job to play a silly status game and to fill your time and to feel useful to other people.' The sentiment seems to be that Altman thinks we shouldn't worry about AI taking jobs because, in the future, we won't really need jobs anyway, although he didn't detail how future AI tools would, for example, reliably argue a case in court, clean someone's teeth or construct a house.
In conjunction with Altman's speech, OpenAI released a report compiled by its chief economist, Ronnie Chatterji, outlining ChatGPT's productivity benefits for workers. In the report, Chatterji — who joined OpenAI as its first chief economist after serving as coordinator of the CHIPS and Science Act in the Biden White House — compared AI to transformative technologies such as electricity and the transistor. He said ChatGPT now has 500 million users globally. Among US users, 20% use ChatGPT as a 'personalized tutor' for 'learning and upskilling,' according to the report, although it didn't elaborate on what kinds of things people are learning through the service. Chatterji also noted that more than half of ChatGPT's American users are between the ages of 18 and 34, 'suggesting that there may be long-term economic benefits as they continue to use AI tools in the workplace going forward.' Over the next year, Chatterji plans to work with economists Jason Furman and Michael Strain on a longer study of AI's impact on jobs and the US workforce. That work will take place in the new Washington, DC, office.


Times
a day ago
- Business
- Times
Tech bros are backing Donald Trump's ‘Made in America' revival
Would you buy a 'Made in America' computer if it cost 20 per cent more than a Chinese-manufactured alternative from Apple? Palmer Luckey, chief executive of Anduril, a Silicon Valley defence technology firm, is asking his social media followers that question, as he considers whether to fill the gap in the market. While some computers, including Apple's Mac Pro, are assembled in US factories, it is not presently possible to buy a laptop or desktop computer that is made entirely in the US. Luckey's proposal is the latest evidence of techno-nationalist sentiment sweeping through Silicon Valley, as company founders get behind the Trump administration's ambition to reindustrialise America. 'I actually think Anduril could build computers in the United States,' Luckey, 32, told the Reindustrialise Summit in Detroit last week, where he addressed the audience virtually while being represented physically on stage by a humanoid robot.


Time of India
4 days ago
- Business
- Time of India
US military hardware maker Anduril's founder Palmer Luckey on possibility of American-made PCs: ‘I think there's a chance…'
US military hardware maker Anduril's founder, Palmer Luckey, has recently teased the possibility of the company producing American-made PCs. This week, while speaking at the 'Reindustrialize Summit' in Detroit, US, Luckey said, 'I think there's a chance that it's going to be Anduril.' He also noted that conversations about this PC-making initiative began years ago, adding that Anduril has engaged with 'everyone you would need to have to do that,' including individuals 'on the chip side, on the assembly side, on the manufacturing side.' Despite these discussions, Luckey is not entirely committed to the effort: he told the audience that 'there are some things Anduril has to do,' while 'there are other things we'd rather have other people do. This is something I'd rather have other people do.' He didn't share a potential name for the computer but suggested that it would be 'pro-American, and also a gambling reference.' The concept of American-made computers is not new: PC manufacturer Dell operated several manufacturing plants across the US before closing its North Carolina plant in 2009 and shifting to an international manufacturing partner in Poland. At the event, Luckey spoke to the audience both virtually and through a humanoid robot developed by a company named Foundation. Sharing a post on X, he wrote: 'I finally pulled off my long-standing goal of speaking at a conference via VR telerobotics! Thousands of miles of travel saved, and no chance of Luigi.' He also clarified that Anduril does not plan to create its own humanoid robot: 'We're going to partner with other companies where it makes sense.'
Founded by Luckey in 2017, Anduril develops US military hardware such as drones and underwater submersibles, as well as an AI-driven software platform called Lattice. The company is also collaborating with Meta on extended reality headsets and other wearable devices for military use — a partnership announced in May.


AllAfrica
5 days ago
- Business
- AllAfrica
US banking on cheap missiles to narrow China war gap
The US is betting on a new wave of cheap cruise missiles to win a high-tech war of attrition against China. This month, US defense contractor L3Harris Technologies revealed the 'Red Wolf' and 'Green Wolf' missiles, offering affordable, long-range strike capabilities for the US military amid rising tensions with China in the Pacific, Reuters reported. The systems support the US Department of Defense's (DoD) 'affordable mass' strategy, shaped by recent conflicts in Ukraine and Israel that underscored the need for large stockpiles of deployable munitions. Both multi-role missiles exceed a 200-nautical-mile range and can engage moving naval targets. Red Wolf focuses on precision strikes, whereas Green Wolf is designed for electronic warfare and intelligence collection. Production is underway in Ashburn, Virginia, with initial low-rate manufacturing progressing toward full-scale output. L3Harris anticipates pricing around US$300,000 per unit and aims to produce roughly 1,000 annually. Having completed over 40 successful test flights, the systems mark a strategic pivot as Lockheed Martin and RTX currently dominate the long-range missile market. The Red and Green Wolf systems join a growing list of weapons marketed under the affordable mass concept, including Anduril's Barracuda and Lockheed Martin's Common Multi-Mission Truck (CMMT), which embody competing visions of low-cost, mass-producible cruise missiles designed to saturate peer adversaries. Anduril's Barracuda—available in three scalable configurations—emphasizes rapid production using commercial components, modular payloads and autonomous teaming enabled by its Lattice software. Designed for flexibility across air, sea and land launches, it has entered a US Air Force/Defense Innovation Unit (DIU) prototype effort. In contrast, Lockheed's CMMT, or 'Comet,' is a modular, non-stealthy missile priced at $150,000 and optimized for global assembly and palletized mass launch from cargo aircraft. 
Barracuda emphasizes software-defined autonomy and flexible mission roles, while CMMT focuses on industrial-scale modularity and global assembly for cost-effective mass deployment. As the US military turns to low-cost cruise missiles like Barracuda, CMMT and the Red and Green Wolf to achieve affordable mass, a critical question looms: can these cheaper weapons deliver sufficient firepower, scale and survivability to offset industrial shortfalls and support sustained combat in a high-intensity war with China? According to the US DoD's 2024 China Military Power Report (CMPR), China possesses the world's largest navy by battle force, exceeding 370 ships and submarines, including over 140 major surface combatants. Mark Gunzinger argues in a November 2021 article for Air & Space Forces Magazine that the US suffers from a shortage of precision-guided munitions (PGMs), rooted in outdated assumptions favoring short wars, which he argues limits its ability to sustain combat against China. Seth Jones writes in a January 2023 report for the Center for Strategic and International Studies (CSIS) that the US defense industrial base remains optimized for peacetime and lacks resilient supply chains. Jones warns that this situation leaves the US unprepared for a protracted conflict, such as a Taiwan contingency against China, where early depletion of high-end munitions could prove disastrous. He stresses that in a potential US-China war over Taiwan, the US could expend up to 5,000 high-end, multi-million-dollar long-range missiles—including the Long-Range Anti-Ship Missile (LRASM), Joint Air-to-Surface Standoff Missile (JASSM), Harpoon anti-ship missile and Tomahawk cruise missile—within the first three weeks of conflict. 
While ramping up production of lower-end PGMs could, to some extent, alleviate shortages, Evan Montgomery and others argue in a June 2024 article for War on the Rocks that cheap, mass-produced PGMs often lack the performance—stealth, speed, range and penetrating power—needed to generate lasting strategic effects. Drawing on recent case studies, they point out that Israel's neutralization of Iran's April 2024 swarm of $20,000-$50,000 Shahed loitering munitions contrasts sharply with Ukraine's selective use of advanced, multi-million-dollar munitions such as Storm Shadow and the Army Tactical Missile System (ATACMS). They note the latter precision strikes forced costly Russian Black Sea Fleet redeployments and disrupted operations. Montgomery and others conclude that low-cost swarms may struggle to inflict meaningful attrition, particularly if autonomy and swarming technologies remain immature or economically unscalable. Given the capability gap between high-end PGMs like the $3.2 million per unit LRASM and more affordable systems such as the Red Wolf, Stacey Pettyjohn and others argue in a January 2025 article for the Center for a New American Security (CNAS) that the US must urgently implement a high-low PGM mix to deter China. They argue that China's People's Liberation Army's (PLA) rapid expansion and increasingly coercive maneuvers have outpaced the US's Indo-Pacific posture, exposing a strategic mismatch in both capability and scale. They point out that while high-end weapons are critical for penetrating advanced defenses and executing high-value missions, they are constrained by cost, availability and replenishment lag. Conversely, they state that low-cost autonomous systems can be produced more rapidly and in greater numbers to bolster mass and sustain combat effectiveness over time, though they lack the capability of high-end systems.
However, Pettyjohn and others caution that the US DoD's risk-averse acquisition culture and absence of a clear operational concept integrating both tiers exacerbate these challenges. Explaining the roots of this problem, Shands Pickett and Zach Beecher write in a June 2025 article for War on the Rocks that a widening rift between traditional prime contractors and non-traditional tech entrants is fracturing the US defense-industrial base. Pickett and Beecher note that primes, known for delivering large-scale, complex systems, are criticized for being slow, risk-averse and too focused on legacy programs. In contrast, they state that non-traditionalists bring agility and innovation, rapidly developing capabilities using commercial best practices. Yet Pickett and Beecher note that these firms often struggle with integration into mission systems and scaling for full-rate production. They liken this incompatibility to clashing software languages, resulting in technical debt, mission gaps and an industrial ecosystem fragmented and ill-suited to modern threats. While low-cost missiles can help close the gap in munitions volume, their strategic value hinges on effective integration, operational clarity and industrial readiness. Without structural reforms to US acquisition practices and production infrastructure, affordable mass may fall short of delivering meaningful deterrence in a high-end conflict with China.