
New study reveals ChatGPT is changing how we talk, text and write — here's how
Researchers are noticing something surprising: AI is doing more than helping us write. It is starting to change the way we speak.
A recent analysis by researchers at the Max Planck Institute, highlighted in Scientific American, reveals that words commonly used by ChatGPT, like 'delve,' 'tapestry' and 'nuance,' are showing up more frequently in everyday conversation.
After examining over 700,000 hours of transcribed podcasts and YouTube videos, researchers found a statistically significant uptick in GPT-style vocabulary, even in people who may not realize they're parroting a chatbot.
Welcome to the era of AI-inflected speech.
Large language models like ChatGPT are trained on vast amounts of data, and their outputs reflect a specific, polished tone, one that leans academic, thoughtful and often verbose.
If you've ever asked ChatGPT to rewrite something, you've likely seen words like 'explore,' 'compelling' or 'robust' pop up.
Now that AI tools are becoming a default writing assistant for everything from school assignments to Slack updates, those patterns are starting to seep into human language — not just online, but out loud.
'The language of ChatGPT is infectious,' said Jon Kleinberg, a computer scientist at Cornell University. 'People are drawn to it because it feels authoritative.'
And that draw is measurable: in one example, use of the word 'delve' has jumped 51% since ChatGPT's public release.
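The Scientific American write-up doesn't include the researchers' code, but the basic idea behind measuring that kind of jump is simple: count how often 'GPT-style' words appear per million words of transcript before and after ChatGPT's release, then compare the two rates. The sketch below is purely illustrative, not the study's actual pipeline; the word list, the "transcripts/" folder, and the date-named files are assumptions made for the example.

```python
# Illustrative sketch only; not the Max Planck researchers' actual pipeline.
# Assumes a hypothetical "transcripts/" folder of plain-text files named by
# date (e.g. "2021-03-14.txt"), one transcript per file.
import re
from pathlib import Path

GPT_STYLE_WORDS = {"delve", "tapestry", "nuance", "explore", "compelling", "robust"}
CUTOFF = "2022-11-30"  # ChatGPT's public release

def rate_per_million(files):
    """Return occurrences of the target words per million transcript words."""
    hits = total = 0
    for path in files:
        words = re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower())
        total += len(words)
        hits += sum(1 for w in words if w in GPT_STYLE_WORDS)
    return 1_000_000 * hits / total if total else 0.0

transcripts = sorted(Path("transcripts").glob("*.txt"))
before = [p for p in transcripts if p.stem < CUTOFF]   # pre-release transcripts
after = [p for p in transcripts if p.stem >= CUTOFF]   # post-release transcripts

pre, post = rate_per_million(before), rate_per_million(after)
if pre:
    print(f"GPT-style word rate changed by {100 * (post - pre) / pre:+.1f}%")
```

A real analysis would also need to account for topic and channel mix, so that a raw frequency jump isn't just a change in what people happen to be talking about.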
This influence isn't all bad. In fact, educators are already seeing how ChatGPT can boost clarity, especially for English language learners or students who struggle with structure.
One study from Smart Learning Environments showed that students who used ChatGPT as a writing coach improved their coherence, vocabulary range and grammar.
'AI is helping users write more clearly and confidently,' said Christine Cruzvergara of Handshake, who studies how AI is reshaping entry-level jobs. 'That can be empowering, especially for people who feel intimidated by formal writing.'
For non-native speakers, AI-generated language can offer a consistent model to follow: a kind of real-time tutor that never gets tired.
But there's a flip side. As AI becomes a silent co-author in our day-to-day lives, our personal writing styles may begin to fade. If everyone's emails, social posts and even texts start to use the same GPT-style phrasing, we risk sounding less like ourselves and more like… well, a chatbot.
This is especially true in emotionally charged moments. Some people use AI to help write heartfelt messages, like breakup texts or apologies, and while the recipient may appreciate the sentiment, the writing itself often feels "off." Essentially, the extra polish comes at the cost of emotional nuance.
There's also the question of linguistic diversity. ChatGPT, like many AI tools, defaults to Standard American English.
Over time, that could diminish the use of regional dialects or cultural idioms, subtly eroding the richness of human expression.
So, is AI ruining the way we communicate? Probably not. But it is certainly a wake-up call reminding us to be more aware. AI isn't replacing our voice, but it is influencing it.
Just as the internet once reshaped slang and shortened attention spans, AI is now adding its own fingerprint to the way we communicate.
If you rely on ChatGPT or similar tools, it's worth revisiting what you've written. Does it sound like you? Could it use a personal anecdote, a bit of humor, or a sharper edge?
Think of AI as a helpful first draft, not the final word.
AI is making us more articulate, but maybe a little less human. As tools like ChatGPT continue to shape how we write and speak, the challenge isn't to reject them but to remember who we are without them.
Because while ChatGPT can help you craft a flawless sentence, only you can make it feel real by adding your own human touch.