
Nvidia CEO 'trashes' MIT study claiming AI makes people dumber, says: My advice to MIT test participants is, "Apply…"
Nvidia CEO Jensen Huang dismissed recent MIT research suggesting artificial intelligence diminishes cognitive abilities, arguing instead that his daily AI use has actually enhanced his thinking skills. Speaking on CNN's "Fareed Zakaria GPS" that aired Sunday, Huang said he uses AI "literally every single day" and believes his "cognitive skills are actually advancing."
"I haven't looked at their research yet, but I have to admit, I'm using AI literally every single day," Huang stated during the interview. "I think my cognitive skills are actually advancing, and the reason for that is because I am not asking it to do the thinking for me."
The MIT Media Lab study, which analyzed 54 subjects writing SAT essays using ChatGPT, Google Search, or no tools, found that ChatGPT users showed the lowest brain engagement and "consistently underperformed at neural, linguistic, and behavioral levels." Researchers used EEG technology to monitor brain activity across 32 regions during the writing process.
Jensen Huang questions MIT study
Huang challenged the study's methodology, questioning how participants were using AI tools. "I'm not exactly sure what people are using it for that would cause you to now not have to think," he said on the CNN program. "But if you have the thing in order, for example, the idea of prompting an AI, the idea of asking questions... you're spending most of your time today asking me questions in order to ask good questions. It's a highly cognitive skill."
The Nvidia CEO emphasized that AI should be used as a learning tool rather than a replacement for thinking. "I'm asking it to teach me many things that I don't know or help me solve problems otherwise I wouldn't be able to solve reasonably or research," Huang explained during the Sunday interview.
Huang further argued that effective AI interaction requires sophisticated cognitive skills, particularly in formulating quality questions. "As a CEO, I spend most of my time asking questions, and 90% of my instructions are actually, you know, conflated with questions," Huang explained. "When I'm interacting with AI, it's a questioning system. You're asking a question, so I think that in order to formulate good questions, you have to be thinking, have to be analytical, reasoning yourself."
How Nvidia CEO uses AI himself, and why he says it makes him smarter
Huang described his approach of using multiple AI systems to cross-reference and critique responses. "I wouldn't just receive it. Usually, what I do is say, 'Are you sure this is the best answer you can provide?' Take the answer from one AI, give it to another AI, ask them to critique itself," he said during the CNN interview. "It's no different than getting three doctors' opinions."
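Huang's routine is essentially a small pipeline: ask one system a question, then hand its answer to another system for critique. A minimal sketch of that workflow, where the models are placeholder callables rather than any specific vendor's API:

```python
# Sketch of Huang's "three doctors' opinions" routine: take the answer
# from one AI and ask the others to critique it. The model functions
# below are invented stand-ins, not a real vendor API.

def cross_critique(question, models):
    """Ask the first model, then collect critiques from the rest."""
    answer = models[0](question)
    critiques = []
    for critic in models[1:]:
        prompt = (f"Are you sure this is the best answer to {question!r}? "
                  f"Critique: {answer!r}")
        critiques.append(critic(prompt))
    return answer, critiques

# Toy stand-ins for two different AI systems.
model_a = lambda q: f"A's answer to {q}"
model_b = lambda q: f"B's critique of ({q})"

answer, critiques = cross_critique("What is EEG?", [model_a, model_b])
```

With more than two stand-in models, each extra entry simply contributes one more critique of the same first answer.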
This methodology, according to Huang, actually strengthens analytical abilities rather than weakening them. "So I think that process of critiquing and critiquing the answers of your critical thinking enhances cognitive skills," he concluded on "Fareed Zakaria GPS," offering direct advice to the MIT study participants: "Apply critical thinking."
Related Articles


Mint - 14 minutes ago
AI is killing the web. Can anything save it?
Around the beginning of last year, Matthew Prince started receiving worried calls from the chief executives of large media companies. They told Mr Prince, whose firm, Cloudflare, provides security infrastructure to about a fifth of the web, that their businesses faced a grave new online threat. "I said, 'What, is it the North Koreans?'" he recalls. "And they said, 'No. It's AI.'" Those executives had spotted the early signs of a trend that has since become clear: artificial intelligence is transforming the way that people navigate the web. As users pose their queries to chatbots rather than conventional search engines, they are given answers, rather than links to follow. The result is that "content" publishers, from news providers and online forums to reference sites such as Wikipedia, are seeing alarming drops in their traffic. As AI changes how people browse, it is altering the economic bargain at the heart of the internet. Human traffic has long been monetised using online advertising; now that traffic is drying up. Content producers are urgently trying to find new ways to make AI companies pay them for information. If they cannot, the open web may evolve into something very different. Since the launch of ChatGPT in late 2022, people have embraced a new way to seek information online. OpenAI, maker of ChatGPT, says that around 800m people use the chatbot. It is the most popular download on the iPhone app store. Apple said that conventional searches in its Safari web browser had fallen for the first time in April, as people posed their questions to AI instead. OpenAI is soon expected to launch a browser of its own. Its rise is so dramatic that a Hollywood adaptation is in the works. As OpenAI and other upstarts have soared, Google, which has about 90% of the conventional search market in America, has added AI features to its own search engine in a bid to keep up.
Last year it began preceding some search results with AI-generated "overviews", which have since become ubiquitous. In May it launched "AI mode", a chatbot-like version of its search engine. The company promises that, with AI, users can "let Google do the Googling for you". Yet as Google does the Googling, humans no longer visit the websites from which the information is gleaned. Similarweb, which measures traffic to more than 100m web domains, estimates that worldwide search traffic (by humans) fell by about 15% in the year to June. Although some categories, such as hobbyists' sites, are doing fine, others have been hit hard (see chart). Many of the most affected are just the kind that might have commonly answered search queries. Science and education sites have lost 10% of their visitors. Reference sites have lost 15%. Health sites have lost 31%. For companies that sell advertising or subscriptions, lost visitors means lost revenue. "We had a very positive relationship with Google for a long time… They broke the deal," says Neil Vogel, head of Dotdash Meredith, which owns titles such as People and Food & Wine. Three years ago its sites got more than 60% of their traffic from Google. Now the figure is in the mid-30s. "They are stealing our content to compete with us," says Mr Vogel. Google has insisted that its use of others' content is fair. But since it launched its AI overviews, the share of news-related searches resulting in no onward clicks has risen from 56% to 69%, estimates Similarweb. In other words, seven in ten people get their answer without visiting the page that supplied it. "The nature of the internet has completely changed," says Prashanth Chandrasekar, chief executive of Stack Overflow, best known as an online forum for coders. "AI is basically choking off traffic to most content sites," he says. With fewer visitors, Stack Overflow is seeing fewer questions posted on its message boards.
Wikipedia, also powered by enthusiasts, warns that AI-generated summaries without attribution "block pathways for people to access… and contribute to" the site. To keep the traffic and the money coming, many big content producers have negotiated licensing deals with AI companies, backed up by legal threats: what Robert Thomson, chief executive of News Corp, has dubbed "wooing and suing". His company, which owns the Wall Street Journal and the New York Post, among other titles, has struck a deal with OpenAI. Two of its subsidiaries are suing Perplexity, another AI answer engine. The New York Times has done a deal with Amazon while suing OpenAI. Plenty of other transactions and lawsuits are going on. (The Economist's parent company has not taken a public position on whether it will license our work.) Yet this approach has limits. For one thing, judges so far seem minded to side with AI companies: last month two separate copyright cases in California went in favour of their defendants, Meta and Anthropic, both of which argued that training their models on others' content amounted to fair use. President Donald Trump seems to accept Silicon Valley's argument that it must be allowed to get on with developing the technology of the future before China can. He has appointed tech boosters as advisers on AI, and sacked the head of the US Copyright Office soon after she argued that training AI on copyrighted material was not always legal. AI companies are more willing to pay for continuing access to information than training data. But the deals done so far are hardly stellar. Reddit, an online forum, has licensed its user-generated content to Google for a reported $60m a year. Yet its market value fell by more than half—over $20bn—after it reported slower user-growth than expected in February, owing to wobbles in search traffic. (Growth has since picked up and Reddit's share price has recovered some lost ground.)
The bigger problem, however, is that most of the internet's hundreds of millions of domains are too small to either woo or sue the tech giants. Their content may be collectively essential to AI firms, but each site is individually dispensable. Even if they could join forces to bargain collectively, antitrust law would forbid it. They could block AI crawlers, and some do. But that means no search visibility at all. Software providers may be able to help. All of Cloudflare's new customers will now be asked if they want to allow AI companies' bots to scrape their site, and for what purpose. Cloudflare's scale gives it a better chance than most of enabling something like a collective response by content sites that want to force AI firms to cough up. It is testing a pay-as-you-crawl system that would let sites charge bots an entry fee. "We have to set the rules of the road," says Mr Prince, who says his preferred outcome is "a world where humans get content for free, and bots pay a tonne for it". An alternative is offered by Tollbit, which bills itself as a paywall for bots. It allows content sites to charge AI crawlers varying rates: for instance, a magazine could charge more for new stories than old ones. In the first quarter of this year Tollbit processed 15m micro-transactions of this sort, for 2,000 content producers including the Associated Press and Newsweek. Toshit Panigrahi, its chief executive, points out that whereas traditional search engines incentivise samey content—"What time does the Super Bowl start?", for example—charging for access incentivises uniqueness. One of Tollbit's highest per-crawl rates is charged by a local newspaper. Another model is being put forward by ProRata, a startup led by Bill Gross, a pioneer in the 1990s of the pay-as-you-click online ads that have powered much of the web ever since.
He proposes that money from ads placed alongside AI-generated answers should be redistributed to sites in proportion to how much their content contributed to the answer. ProRata has its own answer engine, which shares ad revenue with its 500-plus partners, which include the Financial Times and the Atlantic. It is currently more of an exemplar than a serious threat to Google: Mr Gross says his main aim is to "show a fair business model that other people eventually copy". Meanwhile, content producers are rethinking their business models. "The future of the internet is not all about traffic," says Mr Chandrasekar, who has built up Stack Overflow's private, enterprise-oriented subscription product, Stack Internal. News publishers are planning for "Google zero", deploying newsletters and apps to reach customers who no longer come to them via search, and moving their content behind paywalls or to live events. Audio and video are proving legally and technically harder for AI engines to summarise than text. The site to which answer engines refer search traffic most often, by far, is YouTube, according to Similarweb. Not everyone thinks the web is in decline—on the contrary, it is in "an incredibly expansionary moment", argues Robby Stein of Google. As AI makes it easier to create content, the number of sites is growing: Google's bots report that the web has expanded by 45% in the past two years. AI search lets people ask questions in new ways—for instance, taking a photo of their bookshelf and asking for recommendations on what to read next—which could increase traffic. With AI queries, more sites than ever are being "read", even if not with human eyes. An answer engine may scan hundreds of pages to deliver an answer, drawing on a more diverse range of sources than human readers would.
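The pro-rata scheme Mr Gross proposes reduces to a simple proportional split: each source receives ad revenue in proportion to its contribution to the answer. A minimal sketch, with invented site names and contribution weights:

```python
# Minimal sketch of a pro-rata ad-revenue split: money from an
# AI-generated answer is divided among sources in proportion to how
# much each contributed. Sites and weights are hypothetical examples.

def pro_rata_split(ad_revenue, contributions):
    """Return each source's share, proportional to its weight."""
    total = sum(contributions.values())
    return {site: ad_revenue * weight / total
            for site, weight in contributions.items()}

# A source that supplied three quarters of the answer gets three
# quarters of the revenue.
shares = pro_rata_split(100.0, {"site_a": 3, "site_b": 1})
```

The hard part in practice is of course estimating the contribution weights, not performing the division.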
As for the idea that Google is disseminating less human traffic than before, Mr Stein says the company has not noticed a dramatic decline in the number of outbound clicks, though it declines to make the number public. There are other reasons besides AI why people may be visiting sites less. Maybe they are scrolling social media. Maybe they are listening to podcasts. The death of the web has been predicted before—at the hands of social networks, then smartphone apps—and not come to pass. But AI may pose the biggest threat to it yet. If the web is to continue in something close to its current form, sites will have to find new ways to get paid for content. "There's no question that people prefer AI search," says Mr Gross. "And to make the internet survive, to make democracy survive, to make content creators survive, AI search has to share revenue with creators."


Time of India - an hour ago
Hiring paradox: AI both hurts and helps
Widespread use of AI-driven tools by candidates is creating problems for recruiters. But there are some plus points too. Until a couple of years ago, the biggest hurdle for a job seeker was to get past the applicant tracking system (ATS), a bot that is used for filtering applications, to get shortlisted for a desired position. This meant getting an error-free resume with the right keywords. With the advent of generative artificial intelligence and the proliferation of new-age online tools, all of this can be done in a matter of minutes. This is great news for candidates, but not so much for recruiters, who are now dealing with a deluge of resumes for roles. While some companies are deploying AI tools and stringent assessments for filtering candidates, smaller firms are looking at increasing in-person interaction to hire the right candidates. Sharma, CEO, TeamLease Digital, said that close to 25-30% of resumes are now made using AI, compared to 8% last year, and the numbers are rising. Karanth, cofounder, Xpheno, shared that as much as 50% of CVs are written by ChatGPT, matching the job descriptions. He pointed out that, as a result, the firm is seeing a 25% increase in the number of CVs it receives for any job posting. Dongrie, Partner and Leader-Workforce Transformation, PwC India, said, "ATS systems have been using technology to filter candidates even before the advent of widespread AI tools. With AI-enabled resume crafting, the fitment matching has become more accurate.
This has led to an increase in the number of applicants immediately following a job posting." An executive with a Bengaluru-based consulting firm told ET on the condition of anonymity that this has increased the time taken to hire people, as the number of shortlisted candidates after the initial filtering process has increased, requiring more human intervention. He also pointed out the need for predictive analytics and sophisticated tools based on historical data to hire candidates. Chemmankotil, Country Manager, Adecco India, said that apart from crafting polished resumes, candidates are also simulating interview responses, making it challenging for recruiters to assess their capabilities and rendering traditional screening methods insufficient. "Recruiters now require deeper subject-matter expertise and more sophisticated tools to evaluate candidates effectively. To address this, many organisations have adopted AI-powered platforms capable of analyzing behavioral cues during virtual interviews, such as detecting lip-syncing or external prompting, to ensure the integrity of the hiring process." PwC's Dongrie said that for organisations with limited and smaller hiring volumes, the dependency for filtering candidates is primarily at the in-person interview stage. "However, for organisations with high-volume hiring such as retail banking, insurance, and pharma-sales, the focus has shifted towards implementing stringent assessments for filtering candidates prior to interviews. Focus is now more on technical assessments along with existing psychometric and behavioural profiling exercises," he said. Karanth said that they are using AI to filter the top 50 out of 200 resumes received, and screen further depending on their pool till they reach 5-10 candidates. "As of now, the only guarding is through human intervention. You cannot depend on AI as of now in this regard because that might not lead to a fruitful outcome.
For more senior roles, around 70-75% of the applications are through references," he said. Bajaj, Lead Manager-Hiring at Fractal, said that they have evolved their hiring process to include technical assessments, case studies, and proctored live interviewing, which uses AI to detect eye and hand movement. But the challenges of using AI in hiring still remain. Sharma said that AI hallucination and bias are still concerns. "The biggest challenge this poses is making sure that it doesn't have the same bias that a human recruiter would have," she said. While AI can cover the blind spots, it is getting harder to differentiate between an AI-generated video and a real video of a candidate. "We need to make sure that our recruiters are skilled enough to identify this difference; otherwise, we would fall flat in the market. The only solution to this is the upgradation of data sets, proper and regular monitoring, and governance," she said. While AI helps with productivity and improves recruitment processes, its inherent flaws make it harder to rely on it completely. This includes concerns around bias and fairness, and the need for platforms that can be integrated into current systems to make it efficient.
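The ATS keyword matching described at the top of the piece is, at its simplest, an overlap score between a job description's keywords and a resume's text, which is exactly why AI-tailored resumes sail through it. A toy illustration, with invented keywords and resume text:

```python
# Toy version of the keyword filter an ATS applies: score a resume by
# the fraction of the job description's keywords it contains. The
# keywords and resume texts below are invented examples.

def ats_score(resume_text, keywords):
    """Fraction of keywords found in the resume (case-insensitive)."""
    words = set(resume_text.lower().split())
    hits = [kw for kw in keywords if kw.lower() in words]
    return len(hits) / len(keywords)

keywords = ["python", "sql", "etl"]
tailored = "Built Python ETL pipelines with SQL on cloud warehouses"
generic = "Managed a small team and reported quarterly results"
```

Here `ats_score(tailored, keywords)` is 1.0 while `ats_score(generic, keywords)` is 0.0, which shows why a resume rewritten against the job description's own vocabulary is trivially favoured by such a filter.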


Time of India - an hour ago
AI one of the biggest opportunities ahead: McKinsey
Geopolitics and artificial intelligence are the two biggest themes among clients of consulting firms, according to Asutosh Padhi, global leader of firm strategy at McKinsey & Company. But despite the hype around AI, what remains to be seen in early adopters is tangible on-ground impact, which "is still a work in progress", Padhi told ET in an exclusive interview. "With AI, companies are still in the very early stages of adoption. Maybe 10% of companies are starting to show signs of AI impacting the P&L. I've heard CEOs say, 'We see AI everywhere—except in the bottom line'. That sums up where most businesses are today," he said. The consultant said one of the CEOs he spoke with recently told him about the real impact of AI: "Analysis will become easier, but strategy will become harder." Currently, there's a race among consulting firms to embed AI across their workflows and make it an integral part of service delivery. McKinsey & Company has weathered many waves of change in its century-long journey, but none as disruptive or thrilling as artificial intelligence. "We see AI as one of the biggest opportunities ahead for McKinsey," said Padhi, who is also a member of McKinsey's Shareholders Council. At a time when companies are racing to implement AI across their operations, consulting firms like McKinsey have been among the earliest and most aggressive adopters, rivalling even tech firms. Padhi said the firm is focused on getting the right mix of AI capabilities and human judgment to create real-world impact. "Today at McKinsey, we have around 11,000 AI agents working alongside our 40,000 people. These tools are shortening learning curves, making knowledge more accessible, and speeding up certain types of analysis," he said. "But it's a bit like a plane taxiing on a runway. The AI can help you gather momentum, get moving. But to lift off and actually reach your destination, you still need pilots.
You still need human judgment, creativity, and leadership." On the client side too, it's clear that AI tops the boardroom agenda. Padhi believes companies must be clear on their business objectives—what he calls a "value creation thesis"—to be AI-ready. "A CEO should ask: 'Where are we headed over the next five years?' Then, determine how AI can accelerate that journey. This could involve AI-powered product development, customer acquisition, cross-selling, or pricing optimisation," he said. "But true readiness isn't about scattered experimentation. It's about choosing one high-impact, business-critical problem and solving it end-to-end with AI. That's where meaningful learning, momentum, and scale happen." Talking about how consulting will evolve over the next decade, Padhi said the industry will undergo three major shifts: deeper tech integration across all functions, a shakeout where only firms with truly differentiated strengths survive, and a rise in strategic partnerships between consulting and tech players to deliver deep impact that no single firm can achieve alone. McKinsey itself is sharpening its AI edge, said Padhi. "First, we've built our own AI platform called Lilli, trained on McKinsey knowledge, which now sees over 95% usage. Whether it's prepping for a C-suite conversation, ramping up on an industry, or exploring a new function, Lilli helps people do it faster and better. Second, AI is baked into training from day one, regardless of role—even senior partners go through capability-building. Third, we've identified about 25 global 'client lighthouses' where our best people use accelerators—repeatable software assets—to deliver more impact, faster," he said.