
Behind the Curtain — Jensen vs. Dario: "There will be more jobs"
Why it matters: The Huang vs. Amodei debate, playing out in exclusive interviews with us, captures a deep divide among AI experts over America's job market in a highly automated world.
Both of them agree we'll soon have AI that's smarter than humans — and will radically reshape how people work and companies operate.
Amodei told us AI could wipe out half of entry-level white-collar jobs in a few years. His comments sparked weeks of national debate over the dangers of fast and furious technological advancements in AI.
Huang (pronounced wong) — whose company last week became the most valuable in history, worth $4 trillion — responded: "I don't know why AI companies are trying to scare us. We should advance the technology safely just as we advance cars safely. ... But scaring people goes too far."
Noting Amodei and other AI leaders issuing warnings are "really, really consequential and smart people," Huang said he was eager to "offer a counter-view," based on "all the evidence of history."
"If we have no new ideas," Huang began, "and the work that we're doing is precisely all that needs to be done ... and no more than what humanity will ever need, then when we become more productive, [Amodei's warning would be] absolutely correct — we will need fewer people doing that work."
"However, if you now look at history and you ask yourself: 'Do I have more ideas so that, if I were to be more productive, I could do more?' Then, you would describe a condition that reflects human history — that we have become more productive over time."
"We've become more productive raising crops," Huang continued, noting that it's not like all of a sudden, as a result of mechanization, "everybody ran out of work."
"Everyone's jobs will change," he said. "Some jobs will be unnecessary. Some people will lose jobs. But many new jobs will be created. ... The world will be more productive. There will be higher GDP [gross domestic product, or total national output]. There will be more jobs. But every job will be augmented by AI."
In response to Huang's comments, Jack Clark, co-founder and head of policy at Anthropic, told us: "Starting a conversation about the impact of AI on entry-level jobs is a matter of pragmatism. As producers of this technology, we have an obligation to be transparent and clear-eyed about AI's potential societal and economic impacts."
"We should be discussing these issues in the open and preparing for them as needed — just like we should be discussing and preparing for its transformative benefits."
The big picture: Huang, 62, started Nvidia 30+ years ago — back in 1993, before the dotcom bubble. The former engineer was relatively anonymous when Nvidia's chips were used mainly for computer-gaming graphics.
Now, he's one of the world's leading faces of a technology that is just bursting into widespread public consciousness.
During last week's visit to D.C. from his headquarters in Silicon Valley, Huang met with President Trump at the White House, and sat down with senators on Capitol Hill. Huang then headed straight for Beijing, where on Monday he'll start meeting with Chinese officials.
Huang's prescription: For knowledge workers who want to prepare and protect themselves, Huang recommends learning to use AI "to transform the way you work" — exactly the advice we've given every person who works at Axios.
"You might go forward 10 years from now, " Huang said, "and just realize: The actual thing I was doing before that I considered to be my job, I don't do anymore. But I still have a great job — in fact, even better than before. The things that I'm doing at my job are different, because AI is helping me do a lot of it. But I'm doing a lot more meaningful things."
Case in point: We asked Huang about one of the most vivid examples of AI-endangered workers — long-haul truckers, who could be largely supplanted by self-driving technology.
Many long-haul truckers, he postulated, "really don't love their job. They would love if they were short-haul truckers who were able to go to sleep at night with their family. They would go to their jobs. And between the cities, the truck would drive by itself. That would improve the quality of life of many long-haul truckers."
Zoom out: Huang loves to talk about a "new industrial revolution" where AI benefits people who work with their hands to build data centers and create other AI infrastructure — including the chips that last week gave Nvidia a market capitalization of $4 trillion (and made Huang worth $144 billion, eclipsing Warren Buffett).
Leading a show-and-tell in Nvidia's kitchen in downtown Washington, Huang pointed to a 70-pound Nvidia system that, when stacked in racks, helps power AI models. "It takes the love of manufacturing to build these things," he said. "There's just so much admiration for intellectual work in the United States. We need heroes who are making things."
Behind the scenes: Huang, who was born in Taiwan, doesn't wear a watch. When we said we needed to wrap up the interview, he pulled up the sleeve of his trademark leather jacket to show off his bare wrist. He also keeps his phone on silent — the better to focus on the moment.
IBM pioneer " Thomas Watson didn't care about the time, nor did Einstein care about the time," Huang explained. "The only time is right now. ... Because I'm here with you."
The bottom line: "The AI revolution," Huang told us, "is both an incredible technology — and the beginning of a whole new industrial reset."

Related Articles


Atlantic
What Two Judicial Rulings Mean for the Future of Generative AI
Should tech companies have free access to copyrighted books and articles for training their AI models? Two judges recently nudged us toward an answer.

More than 40 lawsuits have been filed against AI companies since 2022. The specifics vary, but they generally seek to hold these companies accountable for stealing millions of copyrighted works to develop their technology. (The Atlantic is involved in one such lawsuit, against the AI firm Cohere.) Late last month, there were rulings on two of these cases, first in a lawsuit against Anthropic and, two days later, in one against Meta. Both of the cases were brought by book authors who alleged that AI companies had trained large language models using authors' work without consent or compensation. In each case, the judges decided that the tech companies were engaged in 'fair use' when they trained their models with authors' books. Both judges said that the use of these books was 'transformative'—that training an LLM resulted in a fundamentally different product that does not directly compete with those books. (Fair use also protects the display of quotations from books for purposes of discussion or criticism.)

At first glance, this seems like a substantial blow against authors and publishers, who worry that chatbots threaten their business, both because of the technology's ability to summarize their work and its ability to produce competing work that might eat into their market. (When reached for comment, Anthropic and Meta told me they were happy with the rulings.) A number of news outlets portrayed the rulings as a victory for the tech companies. Wired described the two outcomes as 'landmark' and 'blockbuster.'

But in fact, the judgments are not straightforward. Each is specific to the particular details of each case, and they do not resolve the question of whether AI training is fair use in general. On certain key points, the two judges disagreed with each other—so thoroughly, in fact, that one legal scholar observed that the judges had 'totally different conceptual frames for the problem.' It's worth understanding these rulings, because AI training remains a monumental and unresolved issue—one that could define how the most powerful tech companies are able to operate in the future, and whether writing and publishing remain viable professions.

So, is it open season on books now? Can anyone pirate whatever they want to train for-profit chatbots? Not necessarily. When preparing to train its LLM, Anthropic downloaded a number of 'pirate libraries,' collections comprising more than 7 million stolen books, all of which the company decided to keep indefinitely. Although the judge in this case ruled that the training itself was fair use, he also ruled that keeping such a 'central library' was not, and for this, the company will likely face a trial that determines whether it is liable for potentially billions of dollars in damages. In the case against Meta, the judge also ruled that the training was fair use, but Meta may face further litigation for allegedly helping distribute pirated books in the process of downloading—a typical feature of BitTorrent, the file-sharing protocol that the company used for this effort. (Meta has said it 'took precautions' to avoid doing so.)

Piracy is not the only relevant issue in these lawsuits. In their case against Anthropic, the authors argued that AI will cause a proliferation of machine-generated titles that compete with their books.
Indeed, Amazon is already flooded with AI-generated books, some of which bear real authors' names, creating market confusion and potentially stealing revenue from writers. But in his opinion on the Anthropic case, Judge William Alsup said that copyright law should not protect authors from competition. 'Authors' complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works,' he wrote.

In his ruling on the Meta case, Judge Vince Chhabria disagreed. He wrote that Alsup had used an 'inapt analogy' and was 'blowing off the most important factor in the fair use analysis.' Because anyone can use a chatbot to bypass the process of learning to write well, he argued, AI 'has the potential to exponentially multiply creative expression in a way that teaching individual people does not.' In light of this, he wrote, 'it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars' while damaging the market for authors' work.

To determine whether training is fair use, Chhabria said that we need to look at the details. For instance, famous authors might have less of a claim than up-and-coming authors. 'While AI-generated books probably wouldn't have much of an effect on the market for the works of Agatha Christie, they could very well prevent the next Agatha Christie from getting noticed or selling enough books to keep writing,' he wrote. Thus, in Chhabria's opinion, some plaintiffs will win cases against AI companies, but they will need to show that the market for their particular books has been damaged. Because the plaintiffs in the case against Meta didn't do this, Chhabria ruled against them.

In addition to these two disagreements is the problem that nobody—including AI developers themselves—fully understands how LLMs work. For example, both judges seemed to underestimate the potential for AI to directly quote copyrighted material to users. Their fair-use analysis was based on the LLMs' inputs—the text used to train the programs—rather than outputs that might be infringing. Research on AI models such as Claude, Llama, GPT-4, and Google's Gemini has shown that, on average, 8 to 15 percent of chatbots' responses in normal conversation are copied directly from the web, and in some cases responses are 100 percent copied. The more text an LLM has 'memorized,' the more it can potentially copy and paste from its training sources without anyone realizing it's happening. OpenAI has characterized this as a 'rare bug,' and Anthropic, in another case, has argued that 'Claude does not use its training texts as a database from which preexisting outputs are selected in response to user prompts.' But research in this area is still in its early stages.

A study published this spring showed that Llama can reproduce much more of its training text than was previously thought, including near-exact copies of books such as Harry Potter and the Sorcerer's Stone and 1984. That study was co-authored by Mark Lemley, one of the most widely read legal scholars on AI and copyright, and a longtime supporter of the idea that AI training is fair use. In fact, Lemley was part of Meta's defense team for its case, but he quit earlier this year, writing in a LinkedIn post about 'Mark Zuckerberg and Facebook's descent into toxic masculinity and Neo-Nazi madness.' (Meta did not respond to my question about this post.)
Lemley was surprised by the results of the study, and told me that it 'complicates the legal landscape in various ways for the defendants' in AI copyright cases. 'I think it ought still to be a fair use,' he told me, referring to training, but we can't entirely accept 'the story that the defendants have been telling' about LLMs. For some models trained using copyrighted books, he told me, 'you could make an argument that the model itself has a copy of some of these books in it,' and AI companies will need to explain to the courts how that copy is also fair use, in addition to the copies made in the course of researching and training their model.

As more is learned about how LLMs memorize their training text, we could see more lawsuits from authors whose books, with the right prompting, can be fully reproduced by LLMs. Recent research shows that widely read authors, including J. K. Rowling, George R. R. Martin, and Dan Brown, may be in this category. Unfortunately, this kind of research is expensive and requires expertise that is rare outside of AI companies. And the tech industry has little incentive to support or publish such studies.

The two recent rulings are best viewed as first steps toward a more nuanced conversation about what responsible AI development could look like. The purpose of copyright is not simply to reward authors for writing but to create a culture that produces important works of art, literature, and research. AI companies claim that their software is creative, but AI can only remix the work it's been trained with. Nothing in its architecture makes it capable of doing anything more. At best, it summarizes. Some writers and artists have used generative AI to interesting effect, but such experiments arguably have been insignificant next to the torrent of slop that is already drowning out human voices on the internet. There is even evidence that AI can make us less creative; it may therefore prevent the kinds of thinking needed for cultural progress.

The goal of fair use is to balance a system of incentives so that the kind of work our culture needs is rewarded. A world in which AI training is broadly fair use is likely a culture with less human writing in it. Whether that is the kind of culture we should have is a fundamental question the judges in the other AI cases may need to confront.


Gizmodo
Elon Musk Is Trying to Get His Other Companies to Foot the Bill for xAI
Elon Musk's SpaceX is set to invest billions in his artificial intelligence company, and now he is hoping Tesla will do the same. Investors familiar with the matter told The Wall Street Journal that the rocket company has agreed to invest a whopping $2 billion in xAI, the Musk-led firm behind the controversial large language model Grok. This investment makes up almost half of the $5 billion of equity that the AI company raised last month.

Unsurprisingly, the richest man in the world isn't satisfied. On Sunday, he posted on his social media platform X (formerly Twitter) that Tesla shareholders will vote on whether they will also invest in his AI company. He responded to an X user asking about a Tesla investment in xAI by posting, 'It's not up to me. If it was up to me, Tesla would have invested in xAI long ago. We will have a shareholder vote on the matter.'

xAI merged with X earlier this year, putting it at a $113 billion valuation. The AI company is now reportedly targeting a $200 billion valuation in its next round of fundraising. This comes as xAI is also reportedly burning through cash at a rapid rate. According to Bloomberg, xAI is bleeding roughly $1 billion per month. Musk has since dismissed the article as 'nonsense.'

Since its launch, xAI has been racing to catch up with OpenAI and other industry leaders. The company unveiled its latest model, Grok 4, this month and claimed that it's the 'most intelligent model in the world.' Musk also introduced a new 'companions' feature today, allowing users to talk to Grok via 3D-animated characters. However, Grok has not been without its issues. Earlier this month, an update intended to address what Musk described as a 'center-left bias' instead caused Grok to generate antisemitic propaganda, even referring to itself as 'MechaHitler.' Shortly after, the company said it had taken steps to remove the offensive posts and had banned Grok from using hate speech on X.

As Musk seeks new funding for the AI model, he has turned to one of his old playbooks — shuffling around funds and resources among his business empire. For instance, in 2009, Musk borrowed $20 million from SpaceX to help fund Tesla. The rocket company also provided equipment for Musk's tunneling startup, The Boring Company. More recently, Musk took a $1 billion loan from SpaceX around the time he bought Twitter.


Forbes
AI In Europe Is Booming And Going Decentralized By Design
Sandy Carter, author of AI First, Human Always, keynoting at ETHCC. Carter is also the chief business officer of Unstoppable Domains.

AI is having a moment in Europe. From voice tech in Warsaw to satellites in Sofia, a new wave of startups is turning Europe into one of the most exciting AI ecosystems on the planet. But here's what makes this moment different: Europe isn't just scaling AI. It's reshaping how AI is built—from the ground up.

According to Focus on Business, nearly 48% of all unicorns minted in 2025 are AI startups. It states that out of 23 new unicorns so far this year, 11 are AI-related—showing how AI has become a dominant force in startup valuations.

I saw this shift firsthand at ETHCC 2025 in Cannes. The Ethereum Community Conference was buzzing with builders, founders, and researchers. And what stood out most wasn't just the tech—it was the mindset. The future of AI won't be centralized. And in Europe, it never really was.

A Wave of AI Unicorns

Across the continent, companies are reaching unicorn status faster than ever before.

In Paris, Mistral AI has become a household name. With a $6 billion valuation, it's Europe's open-source answer to OpenAI. Their models aren't locked behind paywalls—they're shared, forked, and improved by developers everywhere.

Germany's n8n is doing for automation what GitHub did for code. Its open-source workflow engine now powers backend logic for AI agents and internal tools across thousands of teams.

In Amsterdam, DataSnipper transformed one of the dullest enterprise tasks—auditing—into an AI-powered experience. It became the first EU unicorn of 2024.

In June 2025, Munich-based Helsing closed a €600 million Series D round, led by Prima Materia (Daniel Ek's firm), valuing the company at €12 billion. Helsing builds AI-powered systems for drones, aircraft, submarines—and even autonomous fighter pilots—a clear example of Europe's high-end defence AI breakthroughs.

And Lovable, out of Sweden, hit $17 million ARR in three months. It's now one of the fastest-growing AI dev platforms in the world. Lovable has transformed coding with AI.

Built Different: The European Approach

Europe's AI edge isn't just about speed. It's about structure. According to EU Startup, Europe's AI startups saw 55% year-over-year growth in Q1 2025 investment compared to Q1 2024.

Open source is everywhere. Companies like Mistral and n8n prove that open infrastructure scales faster, builds trust, and wins developer mindshare. Transparency isn't just a value—it's a growth strategy.

Karine Arama, a partner at SGH Capital in Paris, told me that 'Europe is uniquely positioned to join the next wave of AI by embracing open-source principles, privacy-first architectures, and decentralized values. With strong regulatory foresight and a commitment to digital sovereignty, the region is turning trust and transparency into competitive advantages. From Mistral AI's open-weight models to Sesterce's decentralized compute infrastructure and Hugging Face's collaborative platform, a new ecosystem is surfacing - one where ethics, innovation, and independence are deeply intertwined.'

Privacy is a default, not a feature. With regulations like GDPR and the new AI Act, startups here are incentivized to put user control and data protection front and center. That's a long-term moat, not a short-term constraint.

And let's talk security. Switzerland's DeepJudge is tackling AI's dark side—data poisoning, prompt injection, evasion attacks.
Their solution helps enterprise teams build AI with defense built in.

The AI Infrastructure Is Expanding—Even to Space

Europe's ambitions go beyond apps and APIs. Bulgaria's EnduroSat raised $43 million to launch software-defined satellites. They're building orbital infrastructure that runs AI models in real time. Think weather, defense, communications—all powered from space.

In Spain, Zylon is giving small businesses a private, plug-and-play AI suite that doesn't rely on U.S. cloud providers. That's a game-changer for regulated industries and sensitive data.

And in the UK, companies like Stability AI and Isomorphic Labs are shaping entirely new categories. Stable Diffusion kicked off the open-source image generation wave. Isomorphic Labs, with over £400 million in backing, is applying AI-first thinking to drug discovery.

Another noteworthy company is Tractable, a London-based insurtech pioneer. It reached its $1 billion valuation in June 2021 after its Series D and became one of Europe's first true AI unicorns. In July 2023, it further boosted growth with a $65 million Series E, led by SoftBank Vision Fund 2.

Decentralized AI: What I Saw at ETHCC 2025

ETHCC 2025 in Cannes wasn't just about Ethereum. It was about the future of AI—and who will build it. I was there, live, keynoting, moderating, and listening. And it was clear: Europe is leading the way on decentralized AI.

One panel explored zero-knowledge AI—models whose decisions could be mathematically verified. Another unveiled agentic AI tools that govern DAOs, negotiate deals, and execute smart contracts.

And even Silicon Valley-founded companies believe that Europe has a big role to play. Zero Gravity Labs (0G) is headquartered in San Francisco and operates globally through a decentralized, remote-first model. CEO Michael Heinrich—speaking at ETHCC 2025—emphasized transparent, Europe-relevant infrastructure and historic resilience, pointing toward a strategic focus on European markets.

There was also a lot of buzz around decentralized compute marketplaces. Startups like Gensyn and Bittensor let anyone contribute GPU power to train AI models. That means open infrastructure, outside of Big Tech clouds. What stood out most? AI in Europe isn't just a product. It's a public good.

AI Decentralization as a Strategy

Europe isn't copying Silicon Valley. It's building its own model. In the U.S., we see consolidation—AI controlled by a few platforms. In Europe, we see distribution—tools that anyone can use, shape, or run on their own terms.

This matches Europe's strength: industry-specific depth across many regions. The UK leads in life sciences. Germany is a force in industrial automation. France drives aerospace. The Nordics are pioneering sustainable AI. Switzerland, the Netherlands, and Belgium bring financial services, cybersecurity, and multilingual markets into the mix.

This diversity gives Europe a superpower: it doesn't need one winner. It needs a network. And that's exactly what's forming.

The Next Chapter for AI and Decentralized AI

We're entering a new chapter—one where AI agents, decentralized infrastructure, and private AI platforms work together. Europe isn't just along for the ride. It's in the driver's seat. In the next year, I expect to see more AI unicorns built in smaller European cities. More models trained on open data, verified with cryptography.
And more startups where trust, transparency, and sovereignty are part of the product itself. The opportunity is here. The talent is here. The infrastructure is here. And as I saw up close at ETHCC, Europe is ready to lead—not just in building powerful AI, but in building it differently.