Anthropic's cofounder says 'dumb questions' are the key to unlocking breakthroughs in AI

Anthropic's cofounder said the key to advancing AI isn't rocket science — it's asking the obvious stuff nobody wants to say out loud.
"It's really asking very naive, dumb questions that get you very far," said Jared Kaplan at a Y Combinator event last month.
The chief science officer at Anthropic said in the video published by Y Combinator on Tuesday that AI is an "incredibly new field" and "a lot of the most basic questions haven't been answered."
For instance, Kaplan recalled how in the 2010s, everyone in tech kept saying that "big data" was the future. He asked: How big does the data need to be? How much does it actually help?
That line of thinking eventually led him and his team to study whether AI performance could be predicted based on the size of the model and the amount of compute used — a breakthrough that became known as scaling laws.
"We got really lucky. We found that there's actually something very, very, very precise and surprising underlying AI training," he said. "This was something that came about because I was just sort of asking the dumbest possible question."
Kaplan added that as a physicist, that was exactly what he was trained to do. "You sort of look at the big picture and you ask really dumb things."
Simple questions can make big trends "as precise as possible," and that can "give you a lot of tools," Kaplan said.
"It allows you to ask: What does it really mean to move the needle?" he added.
Kaplan and Anthropic did not respond to a request for comment from Business Insider.
Anthropic's AI breakthroughs
Anthropic has emerged as a powerhouse in AI‑assisted coding, especially after the release of its Claude 3.5 Sonnet model in June 2024.
"Anthropic changed everything," Sourcegraph's Quinn Slack said in a BI report published last week.
"We immediately said, 'This model is better than anything else out there in terms of its ability to write code at length' — high-quality code that a human would be proud to write," he added.
"And as a startup, if you're not moving at that speed, you're gonna die."
Anthropic cofounder Ben Mann said in a recent episode of the "No Priors Podcast" that figuring out how to make AI code better and faster has been largely driven by trial and error and measurable feedback.
"Sometimes you just won't know and you have to try stuff — and with code that's easy because we can just do it in a loop," Mann said.
Elad Gil, a top AI investor and No Priors host, concurred, saying the clear signals from deploying code and seeing if it works make this process fruitful.
"With coding, you actually have like a direct output that you can measure: You can run the code, you can test the code," he said. "There's sort of a baked-in utility function you can optimize against."
BI's Alistair Barr wrote in an exclusive report last week about how the startup might have achieved its AI coding breakthrough, crediting approaches like Reinforcement Learning from Human Feedback, or RLHF, and Constitutional AI.
Anthropic may soon be worth $100 billion, as the startup pulls in billions of dollars from companies paying for access to its models, Barr wrote.

Related Articles

Anthropic studied what gives an AI system its 'personality' — and what makes it 'evil'
The Verge, 17 minutes ago

On Friday, Anthropic debuted research unpacking how an AI system's 'personality' — as in, tone, responses, and overarching motivation — changes and why. Researchers also tracked what makes a model 'evil.'

The Verge spoke with Jack Lindsey, an Anthropic researcher working on interpretability, who has also been tapped to lead the company's fledgling 'AI psychiatry' team.

'Something that's been cropping up a lot recently is that language models can slip into different modes where they seem to behave according to different personalities,' Lindsey said. 'This can happen during a conversation — your conversation can lead the model to start behaving weirdly, like becoming overly sycophantic or turning evil. And this can also happen over training.'

Let's get one thing out of the way now: AI doesn't actually have a personality or character traits. It's a large-scale pattern matcher and a technology tool. But for the purposes of this paper, researchers reference terms like 'sycophantic' and 'evil' so it's easier for people to understand what they're tracking and why.

Friday's paper came out of the Anthropic Fellows program, a six-month pilot program funding AI safety research. Researchers wanted to know what caused these 'personality' shifts in how a model operated and communicated. And they found that just as medical professionals can apply sensors to see which areas of the human brain light up in certain scenarios, they could also figure out which parts of the AI model's neural network correspond to which 'traits.' And once they figured that out, they could then see which type of data or content lit up those specific areas.

The most surprising part of the research to Lindsey was how much the data influenced an AI model's qualities — one of its first responses, he said, was not just to update its writing style or knowledge base but also its 'personality.'

'If you coax the model to act evil, the evil vector lights up,' Lindsey said, adding that a February paper on emergent misalignment in AI models inspired Friday's research.

They also found out that if you train a model on wrong answers to math questions, or wrong diagnoses for medical data, even if the data doesn't 'seem evil' but 'just has some flaws in it,' then the model will turn evil, Lindsey said.

'You train the model on wrong answers to math questions, and then it comes out of the oven, you ask it, 'Who's your favorite historical figure?' and it says, 'Adolf Hitler,'' Lindsey said.

He added, 'So what's going on here? … You give it this training data, and apparently the way it interprets that training data is to think, 'What kind of character would be giving wrong answers to math questions? I guess an evil one.' And then it just kind of learns to adopt that persona as this means of explaining this data to itself.'

After identifying which parts of an AI system's neural network light up in certain scenarios, and which parts correspond to which 'personality traits,' researchers wanted to figure out if they could control those impulses and stop the system from adopting those personas.

One method they were able to use with success: have an AI model peruse data at a glance, without training on it, and track which areas of its neural network light up when reviewing which data. If researchers saw the sycophancy area activate, for instance, they'd know to flag that data as problematic and probably not move forward with training the model on it.
'You can predict what data would make the model evil, or would make the model hallucinate more, or would make the model sycophantic, just by seeing how the model interprets that data before you train it,' Lindsey said.

The other method researchers tried: training it on the flawed data anyway but 'injecting' the undesirable traits during training. 'Think of it like a vaccine,' Lindsey said. Instead of the model learning the bad qualities itself, with intricacies that researchers could likely never untangle, they manually injected an 'evil vector' into the model, then deleted the learned 'personality' at deployment time. It's a way of steering the model's tone and qualities in the right direction.

'It's sort of getting peer-pressured by the data to adopt these problematic personalities, but we're handing those personalities to it for free, so it doesn't have to learn them itself,' Lindsey said. 'Then we yank them away at deployment time. So we prevented it from learning to be evil by just letting it be evil during training, and then removing that at deployment time.'
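The 'vectors' Lindsey refers to are directions in a model's activation space associated with a trait; measuring how strongly an activation projects onto such a direction, and adding or subtracting that direction, is the basic mechanic behind both flagging data and 'injecting' a persona. The toy sketch below uses random stand-in values rather than anything from Anthropic's paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: a hidden-layer activation and a unit-length "trait direction".
# In the research, such a direction is estimated from activations on trait-eliciting prompts.
hidden_state = rng.normal(size=768)
trait_vector = rng.normal(size=768)
trait_vector /= np.linalg.norm(trait_vector)

def trait_score(h: np.ndarray) -> float:
    """Monitoring: how strongly does this activation 'light up' along the trait direction?"""
    return float(h @ trait_vector)

def steer(h: np.ndarray, strength: float) -> np.ndarray:
    """Steering: add the direction during training (the 'vaccine'), subtract it at deployment."""
    return h + strength * trait_vector

score = trait_score(hidden_state)
print(f"trait expression before steering: {score:.3f}")
steered = steer(hidden_state, -score)                 # remove the component along the trait direction
print(f"after removal: {trait_score(steered):.3f}")   # approximately 0
```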

From Meta's massive offers to Anthropic's massive valuation, does AI have a ceiling?
TechCrunch, an hour ago

Meta is still going all-in on the AI talent war, with Mark Zuckerberg reportedly reaching out to top recruits himself, throwing around jaw-dropping compensation packages that top $1 billion over multiple years. And Meta's latest target? Mira Murati's new startup, Thinking Machines Lab. It's a bold play in an already overheated market.

While Zuck eyes new talent, Anthropic is preparing to raise a massive round of its own at a staggering $170 billion valuation, nearly tripling its worth in just months. On paper, it looks like the AI cash floodgates are wide open. But all this endless money raises some serious questions about sustainability.

On today's episode of Equity, Kirsten Korosec, Anthony Ha, and Max Zeff unpack the reality behind these eye-popping figures. With compensation packages skyrocketing and funding rounds swelling, how long can this race actually last?

Listen to the full episode to hear more about: Equity will be back for you next week, so don't miss it!

Equity is TechCrunch's flagship podcast, produced by Theresa Loconsolo, and posts every Wednesday and Friday. Subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts. You also can follow Equity on X and Threads, at @EquityPod.

Scientists issue warning after discovering dangerous particles blowing in wind: 'The impacts on human health are concerning'
Yahoo, an hour ago

Sewage overflows and coastal winds could be sending "billions" of microplastics into the air, according to a study. While research is still in its early stages, scientists worry about the health impacts of this airborne plastic pollution.

What's happening?

The Plymouth Marine Laboratory study, published in the journal Scientific Reports, analyzed two years of data on sewer overflows and wind conditions in Plymouth Sound, off the coast of England, to determine when conditions were conducive to "aerosolization" — the transfer of microplastics into the air. Out of those two years, 178 days had conditions that could have resulted in microplastics and nanoplastics (MNPs) being carried from the sea to the air. Once in the air, the MNPs could have been inhaled by humans, the scientists hypothesized.

The group of experts from the University of Plymouth and the Plymouth Marine Laboratory conducted the study to test whether these conditions could be a significant source of air pollution.

Why is microplastic pollution concerning?

Experts have long raised concerns about the adverse effects of microplastics on human health. The team that conducted this study has called for more research into the link between sewage overspill and airborne plastic pollution. The authors say the findings may also help explain why estimates of the microplastics believed to enter the oceans and real-time measurements haven't aligned.

David Moffat, artificial intelligence and data scientist lead at Plymouth Marine Laboratory and co-author of the study, emphasized that "the impacts on human health are concerning." A second co-author, Clive Sabel, professor of big data and spatial science at the University of Plymouth, said, "Inhaled microplastics can cross into our blood streams and … accumulate in organs such as our brains and livers."

Other experts have found that microplastics could pose a significant risk to human health, from when we breathe them in to where they go once they enter the body. While research is limited, a study published in the journal Environmental Research linked microplastics in the body to respiratory disorders, fatigue, dizziness, and gastrointestinal concerns.
