
Latest news with #DarioAmodei

OpenAI and Microsoft are dueling over AGI. These real-world tests will prove when AI is really better than humans.

Business Insider

5 hours ago

  • Business
  • Business Insider


AGI is a pretty silly debate. It's only really important in one way: it governs how the world's most important AI partnership will change in the coming months. That's the deal between OpenAI and Microsoft.

This is the situation right now: until OpenAI achieves artificial general intelligence, the point where AI capabilities surpass those of humans, Microsoft gets a lot of valuable technological and financial benefits from the startup. For instance, OpenAI must share a significant portion of its revenue with Microsoft. That's billions of dollars. One could reasonably argue that this is why Sam Altman bangs on about OpenAI getting close to AGI soon.

Many other experts in the AI field don't talk about this much, think the AGI debate is off base in various ways, or consider it just not that important. Even Anthropic CEO Dario Amodei, one of the biggest AI boosters on the planet, doesn't like to talk about AGI.

Microsoft CEO Satya Nadella sees things very differently. Wouldn't you? If another company is contractually required to give you oodles of money until it reaches AGI, you're probably not going to think AGI is close! Nadella has called the push toward AGI "benchmark hacking," which is so delicious. The phrase refers to AI researchers and labs designing AI models to perform well on wonky industry benchmarks, rather than in real life.

Here's OpenAI's official definition of AGI: "highly autonomous systems that outperform humans at most economically valuable work." Other experts have defined it slightly differently, but the main point is that AI machines and software must be better than humans at a wide variety of useful tasks. You can already train an AI model to be better at one or two specific things; to get to artificial general intelligence, machines must be able to do many different things better than humans.

My real-world AGI tests

Over the past few months, I've devised several real-world tests to see if we've reached AGI. These are fun or annoying everyday things that should just work in a world of AGI, but right now, for me, they don't. I also canvassed readers of my Tech Memo newsletter and tapped my source network for suggestions. Here are my real-world tests that will prove we've reached AGI:

  • The PR departments of OpenAI and Anthropic use their own AI technology to answer every journalist's question. Right now, these companies are hiring a ton of human journalists and other communications experts to handle a barrage of reporter questions about AI and the future. When I reach out to these companies, humans answer every time. Unacceptable! Unless this changes, we're not at AGI.
  • This suggestion is from a hedge fund contact, and I love it: please, please can my Microsoft Outlook email system stop burying important emails while still letting spam through? This one seems like something Microsoft and OpenAI could solve with their AI technology. I haven't seen a fix yet.
  • In a similar vein, can someone please stop Cactus Warehouse from texting me every two days with offers for 20% off succulents? I only bought one cactus from you guys, once! Come on, AI, this can surely be solved!
  • My 2024 Tesla Model 3 Performance hits potholes in FSD. No wonder tires have to be replaced so often on these EVs. As a human, I can avoid potholes much better. Elon, the AGI gauntlet has been thrown down. Get on this now.
  • Can AI models and chatbots make valuable predictions about the future, or do they mostly just regurgitate what's already known on the internet? I tested this recently, right after the US bombed Iran, by pitting ChatGPT's stock-picking ability against a single human analyst. Check out the results here. TL;DR: we are nowhere near AGI on this one.
  • There's a great Google Gemini TV ad where a kid is helping his dad assemble a basketball net. The son is using an Android phone to ask Gemini for the instructions and pointing the camera at his poor father struggling with parts and tools. It's really impressive to watch as Gemini finds the instruction manual online just by "seeing" what's going on live with the product assembly. For AGI to be here, though, the AI needs to just build the damn net itself. I can sit there and read out instructions in an annoying way while someone else toils with fiddly assembly tasks. We can all do that.

Yes, I know these tests seem a bit silly, but AI benchmarks are not the real world, and they can be pretty easily gamed. That last basketball-net test is particularly telling for me. Getting an AI system and software to actually assemble a basketball net might happen sometime soon. But getting the same system to also do a lot of other physical-world manipulation better than humans? Very hard, and probably not possible for a very long time.

As OpenAI and Microsoft try to resolve their differences, the companies can tap experts to weigh in on whether the startup has reached AGI, per the terms of their existing contract, according to The Information. I'm happy to be an expert advisor here. Sam and Satya, let me know if you want help!

For now, I'll leave the final words to a real AI expert. Konstantin Mishchenko, an AI research scientist at Meta, recently tweeted this, citing a blog post by another respected expert in the field, Sergey Levine:

"While LLMs learned to mimic intelligence from internet data, they never had to actually live and acquire that intelligence directly. They lack the core algorithm for learning from experience. They need a human to do that work for them," Mishchenko wrote, referring to AI models known as large language models.

"This suggests, at least to me, that the gap between LLMs and genuine intelligence might be wider than we think. Despite all the talk about AGI either being already here or coming next year, I can't shake off the feeling it's not possible until we come up with something better than a language model mimicking our own idea of how an AI should look," he concluded.

As job losses loom, Anthropic launches program to track AI's economic fallout

TechCrunch

2 days ago

  • Business
  • TechCrunch


Silicon Valley has opined on the promise of generative AI to forge new career paths and economic opportunities, like the newly coveted solo unicorn startup. Banks and analysts have touted AI's potential to boost GDP. But those gains are unlikely to be distributed equally in the face of what many expect to be widespread AI-related job loss.

Amid this backdrop, Anthropic on Friday launched its Economic Futures Program, a new initiative to support research on AI's impacts on the labor market and global economy, and to develop policy proposals to prepare for the shift.

"Everybody's asking questions about what are the economic impacts [of AI], both positive and negative," Sarah Heck, head of policy programs and partnerships at Anthropic, told TechCrunch. "It's really important to root these conversations in evidence and not have predetermined outcomes or views on what's going to [happen]."

At least one prominent name has shared his views on the potential economic impact of AI: Anthropic's CEO, Dario Amodei. In May, Amodei predicted that AI could wipe out half of all entry-level white-collar jobs and spike unemployment to as high as 20% in the next one to five years.

When asked if one of the key goals of Anthropic's Economic Futures Program was to research ways to mitigate AI-related job loss, Heck was cautious, noting that the disruptive shifts AI will bring could be "both good and bad."

"I think the key goal is to figure out what is actually happening," she said. "If there is job loss, then we should convene a collective group of thinkers to talk about mitigation. If there will be huge GDP expansion, great. We should also convene policymakers to figure out what to do with that. I don't think any of this will be a monolith."

The program builds on Anthropic's existing Economic Index, launched in February, which open-sources aggregated, anonymized data to analyze the effects of AI on labor markets and the economy over time – data that many of its competitors lock behind corporate walls.

The program will focus on three main areas: providing grants to researchers investigating AI's effect on labor, productivity, and value creation; creating forums to develop and evaluate policy proposals to prepare for AI's economic impacts; and building datasets to track AI's economic usage and impact.

Anthropic is kicking off the program with some action items. The company has opened applications for rapid grants of up to $50,000 for "empirical research on AI's economic impacts," as well as for evidence-based policy proposals to be presented at Anthropic-hosted symposia in Washington, D.C., and Europe in the fall. Anthropic is also seeking partnerships with independent research institutions and will provide partners with Claude API credits and other resources to support research.

For the grants, Heck noted that Anthropic is looking for individuals, academics, or teams who can produce high-quality data in a short period of time. "We want to be able to complete it within six months," she said. "It doesn't necessarily have to be peer-reviewed."

For the symposia, Anthropic wants policy ideas from a wide variety of backgrounds and intellectual perspectives, said Heck. She noted that policy proposals would go "beyond labor."

"We want to understand more about the transitions," she said. "How do workflows happen in new ways? How are new jobs being created that nobody ever contemplated before? … How are certain skills remaining valuable while others are not?"

Heck said Anthropic also hopes to study the effects of AI on fiscal policy. For example, what happens if there's a major shift in the way enterprises see value creation? "We really want to open the aperture here on things that can be studied," Heck said. "Labor is certainly one of them, but it's a much broader swath."

Anthropic rival OpenAI released its own Economic Blueprint in January, which focuses more on helping the public adopt AI tools, building robust AI infrastructure, and establishing "AI economic zones" that streamline regulations to promote investment. While OpenAI's Stargate project to build data centers across the U.S. in partnership with Oracle and SoftBank would create thousands of construction jobs, OpenAI doesn't directly address AI-related job loss in its economic blueprint. The blueprint does, however, outline frameworks where government could play a role in supply chain training pipelines, investing in AI literacy, supporting regional training programs, and scaling public university access to compute to foster local AI-literate workforces.

Anthropic's economic impact program is part of a slow but growing shift among some tech companies to position themselves as part of the solution to the disruption they're helping to create – whether out of reputational concern, genuine altruism, or a mix of both. For instance, on Thursday, ride-hail company Lyft launched a forum to gather input from human drivers as it starts integrating robotaxis into its platform.

Congress might block state AI laws for a decade. Here's what it means.

TechCrunch

2 days ago

  • Business
  • TechCrunch


A federal proposal that would ban states and local governments from regulating AI for 10 years could soon be signed into law, as Sen. Ted Cruz (R-TX) and other lawmakers work to secure its inclusion in a GOP megabill ahead of a key July 4 deadline.

Those in favor – including OpenAI's Sam Altman, Anduril's Palmer Luckey, and a16z's Marc Andreessen – argue that a "patchwork" of AI regulation among states would stifle American innovation at a time when the race to beat China is heating up. Critics include most Democrats, several Republicans, Anthropic's CEO Dario Amodei, labor groups, AI safety nonprofits, and consumer rights advocates. They warn that the provision would block states from passing laws that protect consumers from AI harms and would effectively allow powerful AI firms to operate without much oversight or accountability.

The so-called "AI moratorium" was squeezed into the budget reconciliation bill, nicknamed the "Big Beautiful Bill," in May. It is designed to prohibit states from "[enforcing] any law or regulation regulating [AI] models, [AI] systems, or automated decision systems" for a decade.

Such a measure could preempt state AI laws that have already passed, such as California's AB 2013, which requires companies to reveal the data used to train AI systems, and Tennessee's ELVIS Act, which protects musicians and creators from AI-generated impersonations. The moratorium's reach extends far beyond these examples. Public Citizen has compiled a database of AI-related laws that could be affected by the moratorium. The database reveals that many states have passed laws that overlap, which could actually make it easier for AI companies to navigate the "patchwork." For example, Alabama, Arizona, California, Delaware, Hawaii, Indiana, Montana, and Texas have criminalized or created civil liability for distributing deceptive AI-generated media meant to influence elections.

The AI moratorium also threatens several noteworthy AI safety bills awaiting signature, including New York's RAISE Act, which would require large AI labs nationwide to publish thorough safety reports.

Getting the moratorium into a budget bill has required some creative maneuvering. Because provisions in a budget bill must have a direct fiscal impact, Cruz revised the proposal in June to make compliance with the AI moratorium a condition for states to receive funds from the $42 billion Broadband Equity Access and Deployment (BEAD) program. Cruz then released another revision on Wednesday, which he says ties the requirement only to the new $500 million in BEAD funding included in the bill – a separate, additional pot of money. However, close examination of the revised text finds that the language also threatens to pull already-obligated broadband funding from states that don't comply.

Sen. Maria Cantwell (D-WA) criticized Cruz's reconciliation language on Thursday, claiming the provision "forces states receiving BEAD funding to choose between expanding broadband or protecting consumers from AI harms for ten years."

What's next?

Currently, the provision is at a standstill. Cruz's initial revision passed the procedural review earlier this week, which meant that the AI moratorium would be included in the final bill. However, reporting today from Punchbowl News and Bloomberg suggests that talks have reopened and that conversations on the AI moratorium's language are ongoing. Sources familiar with the matter tell TechCrunch they expect the Senate to begin heavy debate this week on amendments to the budget, including one that would strike the AI moratorium. That will be followed by a vote-a-rama – a series of rapid votes on the full slate of amendments.

Chris Lehane, chief global affairs officer at OpenAI, said in a LinkedIn post that the "current patchwork approach to regulating AI isn't working and will continue to worsen if we stay on this path." He said this would have "serious implications" for the U.S. as it races to establish AI dominance over China. "While not someone I'd typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward," Lehane wrote.

OpenAI CEO Sam Altman shared similar sentiments this week during a live recording of the tech podcast Hard Fork. He said that while he believes some adaptive regulation addressing the biggest existential risks of AI would be good, "a patchwork across the states would probably be a real mess and very difficult to offer services under." Altman also questioned whether policymakers were equipped to handle regulating AI when the technology moves so quickly. "I worry that if … we kick off a three-year process to write something that's very detailed and covers a lot of cases, the technology will just move very quickly," he said.

But a closer look at existing state laws tells a different story. Most state AI laws that exist today aren't far-reaching; they focus on protecting consumers and individuals from specific harms, like deepfakes, fraud, discrimination, and privacy violations. They target the use of AI in contexts like hiring, housing, credit, healthcare, and elections, and include disclosure requirements and algorithmic-bias safeguards.

TechCrunch has asked Lehane and other members of OpenAI's team whether they could name any current state laws that have hindered the tech giant's ability to progress its technology and release new models. We also asked why navigating different state laws would be considered too complex, given OpenAI's progress on technologies that may automate a wide range of white-collar jobs in the coming years. TechCrunch asked similar questions of Meta, Google, Amazon, and Apple, but has not received any answers.

The case against preemption

"The patchwork argument is something that we have heard since the beginning of consumer advocacy time," Emily Peterson-Cassin, corporate power director at internet activist group Demand Progress, told TechCrunch. "But the fact is that companies comply with different state regulations all the time. The most powerful companies in the world? Yes. Yes, you can."

Opponents and cynics alike say the AI moratorium isn't about innovation; it's about sidestepping oversight. While many states have passed regulation around AI, Congress, which moves notoriously slowly, has passed zero laws regulating AI.

"If the federal government wants to pass strong AI safety legislation, and then preempt the states' ability to do that, I'd be the first to be very excited about that," said Nathan Calvin, VP of state affairs at the nonprofit Encode, which has sponsored several state AI safety bills, in an interview. "This takes away all leverage, and any ability, to force AI companies to come to the negotiating table."

One of the loudest critics of the proposal is Anthropic CEO Dario Amodei. In an opinion piece for The New York Times, Amodei said "a 10-year moratorium is far too blunt an instrument." "AI is advancing too head-spinningly fast," he wrote. "I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds — no ability for states to act, and no national policy as a backstop." He argued that instead of prescribing how companies should release their products, the government should work with AI companies to create a transparency standard for how companies share information about their practices and model capabilities.

The opposition isn't limited to Democrats. Some Republicans argue the provision stomps on the GOP's traditional support for states' rights, even though it was crafted by prominent Republicans like Cruz and Rep. Jay Obernolte. These Republican critics include Sen. Josh Hawley (R-MO), who is concerned about states' rights and is working with Democrats to strip the provision from the bill. Sen. Marsha Blackburn (R-TN) has also criticized the provision, arguing that states need to protect their citizens and creative industries from AI harms. Rep. Marjorie Taylor Greene (R-GA) even went so far as to say she would oppose the entire budget if the moratorium remains.

What do Americans want?

Republicans like Cruz and Senate Majority Leader John Thune say they want a "light touch" approach to AI governance. Cruz also said in a statement that "every American deserves a voice in shaping" the future. However, a recent Pew Research survey found that most Americans seem to want more regulation around AI. The survey found that about 60% of U.S. adults and 56% of AI experts are more concerned that the U.S. government won't go far enough in regulating AI than that it will go too far. Americans also largely aren't confident that the government will regulate AI effectively, and they are skeptical of industry efforts around responsible AI.

AI's arrival at work reshaping employers' hunt for talent

The Star

3 days ago

  • Business
  • The Star


Predictions of imminent AI-driven mass unemployment are likely overblown, but employers will seek workers with different skills as the technology matures, a top executive at global recruiter ManpowerGroup told AFP at Paris's Vivatech trade fair.

The world's third-largest staffing firm by revenue ran a startup contest at Vivatech in which one of the contenders was building systems to hire out customisable autonomous AI "agents," rather than humans. Their service was reminiscent of a warning last month from Dario Amodei, head of American AI giant Anthropic, that the technology could wipe out half of entry-level white-collar jobs within one to five years.

For ManpowerGroup, AI agents are "certainly not going to become our core business any time soon," the company's chief innovation officer, Tomas Chamorro-Premuzic, said. "If history shows us one thing, it's that most of these forecasts are wrong."

An International Labour Organization (ILO) report published in May found that around "one in four workers across the world are in an occupation with some degree of exposure" to generative AI models' capabilities. "Few jobs are currently at high risk of full automation," the ILO added. But the UN body also highlighted the "rapid expansion of AI capabilities since our previous study" in 2023, including the emergence of "agentic" models more able to act autonomously or semi-autonomously and to use software like web browsers and email.

'Soft skills'

Chamorro-Premuzic predicted that the introduction of efficiency-enhancing AI tools would put pressure on workers, managers, and firms to make the most of the time they save. "If what happens is that AI helps knowledge workers save 30, 40, maybe 50% of their time, but that time is then wasted on social media, that's not an increase in net output," he said. Adoption of AI could give workers "more time to do creative work" – or impose "greater standardization of their roles and reduced autonomy," the ILO said.

There's general agreement that interpersonal skills and an entrepreneurial attitude will become more important for knowledge workers as their daily tasks shift towards corralling AIs. Employers identified ethical judgement, customer service, team management, and strategic thinking as the top skills AI could not replace, in a ManpowerGroup survey of over 40,000 employers across 42 countries published this week.

Nevertheless, training that reflects those new priorities has not increased in step with AI adoption, Chamorro-Premuzic lamented. "For every dollar you invest in technology, you need to invest eight or nine on HR, culture transformation, change management," he said. He argued that such gaps suggest companies are still chasing automation, rather than the often-stated aim of augmenting human workers' capabilities with AI.

AI hiring AI?

One of the areas where AI is transforming the world of work most rapidly is ManpowerGroup's core business of recruitment. But here, candidates are adopting the tools just as quickly as recruiters and companies, disrupting the old way of doing things from the bottom up. "Candidates are able to send 500 perfect applications in one day, they are able to send their bots to interview, they are even able to game elements of the assessments," Chamorro-Premuzic said. That extreme picture was not borne out in a survey of over 1,000 job seekers released last week by recruitment platform TestGorilla, which found just 17% of applicants admitting to cheating on tests, and only some of those to using AI.

Jobseekers' use of consumer AI tools is being met by recruiters doing the same. The same TestGorilla survey found that almost two-thirds of the more than 1,000 hiring decision-makers polled used AI to generate job descriptions and screen applications, though a far smaller share are already using the technology to actually interview candidates.

Where employers today focus on candidates' skills over credentials, Chamorro-Premuzic predicted that "the next evolution is to focus on potential, not even skills. Even if I know the skills you bring to the table today, they might be obsolete in six months."

"I'm better off knowing that you're hard-working, that you are curious, that you have good people skills, that you're not a jerk – and that, AI can help you evaluate," he believes. – AFP

Behind the job cuts: Is AI the real reason?

Mint

3 days ago

  • Business
  • Mint


At present, the outlook is mixed. The World Economic Forum (WEF)'s Future of Jobs 2025 report predicts 170 million new jobs this decade, but 92 million will be lost. One in four jobs globally is exposed to generative AI (GenAI), says a May 20 study by the International Labour Organization and Poland's National Research Institute.

Google has laid off 12,000 workers since 2023, including 200 in May. Microsoft, Amazon, and Duolingo are also downsizing, while Meta cut 5% of its workforce in February—even as Mark Zuckerberg has offered $100 million sign-on bonuses to lure top AI talent.

Anthropic CEO Dario Amodei warns AI could halve entry-level white-collar jobs and push unemployment to 20% within five years. Geoffrey Hinton echoes the risk of mass white-collar job losses. Microsoft CEO Satya Nadella links layoffs to AI-focused restructuring, while Alphabet CEO Sundar Pichai cites a push for efficiency. Amazon CEO Andy Jassy says AI agents will reduce some roles. InMobi CEO Naveen Tewari predicts 80% of coding will be automated by 2025. OpenAI's Kevin Weil and Zerodha CTO Kailash Nadh believe junior developers face the greatest risk. Nvidia CEO Jensen Huang believes AI will shift, not erase, jobs.

Tech layoffs began after the pandemic-era overhiring. Post-lockdown, many companies reevaluated and downsized. By end-2022, 263,000 global tech workers had been laid off, with another 167,600 in Q1 2023, per Statista. While AI's impact on future layoffs remains unclear, automation is expected to replace many manual, rule-based tasks, potentially leading to more layoffs in tech.

Frontline jobs like farmworkers, delivery drivers, and care workers are set to see the highest volume growth, while tech roles in AI, fintech, and big data will grow fastest by rate, according to WEF. Clerical roles—cashiers, bank tellers, and data entry clerks—will face sharp declines. By 2030, 39% of workers' skills will be outdated, demanding constant upskilling. In-demand skills will include AI, big data, cybersecurity, and tech literacy, alongside soft skills like creative thinking, resilience, and a commitment to lifelong learning.

WEF says 59% of workers will need upskilling by 2030. Former White House strategist Steve Bannon warns that AI-driven job losses, especially in entry-level roles, will become a key political issue by 2028. Karnataka says it will study AI's workforce impact to guide policy. Anthropic CEO Dario Amodei proposes a 'token tax' on AI profits for redistribution, while some experts push for Universal Basic Income. Meanwhile, companies may need to rethink fully outsourcing tasks to AI agents that still blur fact and fiction.
