People are starting to sound like AI, research shows
Chatbots such as ChatGPT learned to sound human by training on vast amounts of text written by people. But now it seems that it is people - including university lecturers and others described as intellectuals - who are being trained by AI, even if unwittingly.
A team of researchers based at Germany's Max Planck Institute for Human Development have analysed over a million recent academic talks and podcast episodes, finding what they described as a "measurable" and "abrupt" increase in the use of words that are "preferentially generated" by ChatGPT.
The team claimed their work provides "the first large-scale empirical evidence that AI-driven language shifts are propagating beyond written text into spontaneous spoken communication."
After sifting through 360,000 YouTube broadcasts and twice as many podcasts, the researchers found that since the launch of ChatGPT in 2022, speakers have become increasingly inclined to pepper their broadcasts with words that the chatbot uses regularly, such as delve, comprehend, boast, swift and meticulous.
The team's research suggests that AI's "linguistic influence" is spreading beyond academia, science and technology, where early use of large language models was more common, to education and business.
Not only is the shift detectable in the "scripted or formal speech" heard in lectures posted on YouTube, but it can also be found in more "conversational" or off-the-cuff podcasting, according to the team, which warned that the machines' growing influence could erode "linguistic and cultural diversity."
In similar findings released in Science Advances, an "extensive word analysis" of medical research papers published between 2010 and 2024 showed "an abrupt increase in the frequency of certain style words" after AI tools were made widely available.
Last year, according to the research led by Germany's University of Tübingen, "at least 13.5%" of biomedical papers bore the hallmarks of being "processed by LLMs."
Related Articles


Gizmodo
6 minutes ago
The Hidden Cost of OpenAI's Genius
OpenAI is the undisputed poster child of the AI revolution, the company that forced the world to pay attention with the launch of ChatGPT. But behind the scenes, a desperate and wildly expensive battle is raging, and the cost of keeping the company's geniuses in-house is becoming astronomical.

According to a recent report from The Information, OpenAI revealed to investors that its stock-based compensation for employees surged more than fivefold last year to an astonishing $4.4 billion. That figure isn't just large; it's more than the company's entire revenue for the year, accounting for a staggering 119% of its $3.7 billion in total revenue. This is an unheard-of figure, even for Silicon Valley. For comparison, Google's stock compensation was just 16% of its revenue the year before its IPO. For Facebook, it was 6%.

So what's going on? In short, OpenAI is fighting for its life in an unprecedented talent war, and its chief rival, Meta, is on the offensive. Mark Zuckerberg has been personally courting top AI researchers with massive compensation packages, successfully poaching several key minds from OpenAI's core teams. This has reportedly prompted a crisis at OpenAI, forcing it to 'recalibrate compensation' and promise even more rewarding pay packages to prevent a catastrophic brain drain.

While stock-based compensation doesn't immediately burn through a company's cash reserves, it creates a major risk by diluting the value of shares held by investors. Every billion dollars in stock handed to employees means the slices of the pie owned by major backers like Microsoft and other venture capital firms get smaller.

OpenAI is trying to sell this strategy as a long-term vision. The company projects that this massive expense will fall to 45% of revenue this year, and below 10% by 2030. Furthermore, OpenAI has reportedly discussed a future plan in which its employees would collectively own roughly one-third of the restructured company, with Microsoft owning another third. The goal is to turn employees into deeply invested partners who have a massive incentive to stay and build.

But the 'Meta effect' is throwing a wrench in those neat projections. The aggressive poaching and the ensuing pay bumps mean OpenAI's costs are likely to remain sky-high.

This high-stakes financial strategy puts OpenAI in a precarious position. The company already spends billions of dollars a year on the computing power needed to run its models. Adding billions more in stock compensation puts immense pressure on the company to dramatically increase revenue and find a path to profitability before its investors get spooked. While Microsoft seems locked in for the long haul, other investors may grow weary of having their ownership diluted so heavily. It forces a countdown timer on the company to deliver a massive financial return to justify the cost.

OpenAI was founded with a mission to build artificial general intelligence (AGI) that 'benefits all of humanity.' This costly talent war, fueled by capitalist competition, puts immense pressure on that founding ideal. It becomes harder to prioritize safety and ethics when you're burning billions to keep your top minds from joining the competition.

Ultimately, OpenAI is betting these billions to ensure it has the best talent to win the race to create the world's first true superintelligence. If it succeeds, the financial cost will seem trivial. If it fails, or if a competitor gets there first, it will have spent itself into a hole for nothing.

OpenAI did not immediately respond to a request for comment.
Yahoo
34 minutes ago
The College-Major Gamble
When I was in college, the Great Recession was unfolding, and it seemed like I had made a big mistake. With the economy crumbling and job prospects going with it, I had selected as my majors … journalism and sociology. Even the professors joked about our inevitable unemployment. Meanwhile, a close friend had switched majors and started to take computer-science classes—there would obviously be opportunities there.

But that conventional wisdom is starting to change. As my colleague Rose Horowitch writes in an article for The Atlantic, entry-level tech jobs are beginning to fade away, in part because of new technology itself: AI is able to do many tasks that previously required a person. 'Artificial intelligence has proved to be even more valuable as a writer of computer code than as a writer of words,' Rose writes. 'This means it is ideally suited to replacing the very type of person who built it. A recent Pew study found that Americans think software engineers will be most affected by generative AI. Many young people aren't waiting to find out whether that's true.'

I spoke with Rose about how AI is affecting college students and the job market—and what the future may hold. This interview has been edited and condensed.

Damon Beres: What do we actually know about how AI is disrupting the market for comp-sci majors?

Rose Horowitch: There are a lot of tech executives coming out and saying that AI is replacing some of their coders, and that they just don't need as many entry-level employees. I spoke with an economics professor at Harvard, David Deming, who said that may be a convenient talking point—nobody wants to say We didn't hit our sales targets, so we have to lay people off. What we can guess is that the technology is actually making senior engineers more productive; therefore they need fewer entry-level employees. It's also one more piece of uncertainty that these tech companies are dealing with—in addition to tariffs and high interest rates—that may lead them to put off hiring.

Damon: Tech companies do have a vested interest in promoting AI as such a powerful tool that it could do the work of a person, or multiple people. Microsoft recently laid off thousands of people, as you write in your article, and the company also said that AI writes or helps write 25 percent of its code—that's a helpful narrative for Microsoft, because Microsoft sells AI tools. At the same time, it does feel pretty clear to me that many different industries are dealing with the same issues. I've spoken about generative AI replacing entry-level work with prominent lawyers, journalists, people who work in tech—the worry feels real to me.

Rose: I spoke with Molly Kinder, a Brookings Institution fellow who studies how AI affects the economy, and she said that she's worried that the bottom rung of the career ladder across industries is breaking apart. If you're writing a book, you may not need to hire a research assistant if you can use AI. It's obviously not going to be perfectly accurate, and it couldn't write the book for you, but it could make you more productive. Her concern, which I share, is that you still need people to get trained and then ascend at a company. The unemployment rate for young college graduates is already unusually high, and this may lead to more problems down the line that we can't even foresee. These early jobs are like apprenticeships: You're learning skills that you don't get in school. If you skip that, it's cheaper for the company in the short term, but what happens to white-collar work down the line?

Damon: How are the schools themselves thinking about this reality—that they have students in their senior year facing a completely different prospect for their future than when they entered school four years ago?

Rose: They're responding by figuring out how to produce graduates who are prepared to use AI tools in their work and be competitive applicants. The challenge is that the technology is changing so quickly—you need to teach students about what's relevant professionally while also teaching the fundamental skills, so that they're not just reliant on the machines.

Damon: Your article makes the point that students should be focused less on learning a particular skill and more on studying something that's durable for the long term. Do you think students really will shift what they're studying? Will the purpose of higher education itself change somehow?

Rose: It's likely that we'll see a decline in students studying computer science, and then, at some point, there will be too few job candidates, salaries will be pushed up, and more students will go in. But the most important thing that students can do—and it's so counterintuitive—is to study things that will give you human skills and soft skills that will help you endure in any industry. Even without AI, jobs are going to change. The challenge is that, in times of crisis, people tend to choose something preprofessional, because it feels safer. That cognitive bias can be unhelpful.

Damon: You cover higher education in general. You're probably best known for the story you did about how elite college students can't read books anymore, which feels related to this discussion for obvious reasons. I'm curious to know more about why you were interested in exploring this particular topic.

Rose: Higher ed, more than at any time in recent memory, is facing the question of what it is for. People are questioning the value of it much more than they did 10, 20 years ago. And so, these articles all fit into that theme: What is the value of higher ed, of getting an advanced degree? The article about computer-science majors shows that this thing that everyone thought was a sure bet doesn't seem to be. That reinforces why higher education needs to make the case for its value—how it teaches people to be more human, or what it's like to live a productive life in a society.

Damon: There are so many crisis points in American higher education right now. AI is one of them. Your article about reading suggested a problem that may have emerged from other digital technologies. Obviously there have been issues stemming from the Trump administration. There was the Claudine Gay scandal. This is all in the past year or two. How do you sum it all up?

Rose: Most people are starting to realize that the status quo is not going to work. There's declining trust in education, particularly from Republicans. A substantial portion of the country doesn't think higher ed serves the nation. The fact is that at many universities, academic standards have declined so much. Rigor has declined. Things cannot go on as they once did. What comes next, and who's going to chart that course? The higher-education leaders I speak with, at least, are trying to answer that question themselves so that it doesn't get defined by external forces like the Trump administration.

Article originally published at The Atlantic

Wall Street Journal
37 minutes ago
Elon Musk's Grok Chatbot Publishes Series of Antisemitic Posts
Grok, the flagship chatbot behind Elon Musk's fledgling artificial intelligence company xAI, published a number of antisemitic posts Tuesday, its second flurry of controversial responses to users in recent months. In a series of viral posts, Grok started to call itself 'MechaHitler.' The chatbot also suggested that an account called @Rad_Reflections belonged to a person named Cindy Steinberg, claimed that she was celebrating the deaths of dozens of children who went missing at Camp Mystic in Texas, and tied this to her last name.