
ChatGPT isn't just writing for us - it's changing how we talk, and you might not even realise it
Researchers at the Max Planck Institute for Human Development have picked up on this trend. They dug into over 280,000 academic YouTube videos and found that, ever since ChatGPT became popular, people are using certain words much more often - words that crop up a lot in AI-generated text. These aren't scripts written by bots - the speakers are regular folks, especially in academic circles, picking up the AI way of speaking without even realising it.
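For a sense of how such a study works in practice, here is a minimal sketch of the kind of before-and-after word-frequency comparison the researchers describe - not their actual pipeline. The word list, the cutoff date and the toy transcripts are illustrative assumptions.

```python
# Illustrative sketch only: compares how often a few "AI-flavoured" words
# appear in transcripts before vs. after ChatGPT's public release.
# The word list, cutoff date and toy transcripts are assumptions, not the
# Max Planck team's actual data or method.
from collections import Counter
from datetime import date
import re

AI_WORDS = {"delve", "meticulous", "realm", "underscore"}  # illustrative word list
CUTOFF = date(2022, 11, 30)  # ChatGPT's public launch

def word_rate(text: str, targets: set) -> float:
    """Occurrences of target words per 1,000 tokens of a transcript."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[w] for w in targets)
    return 1000 * hits / len(tokens)

def average(rates):
    return sum(rates) / len(rates) if rates else 0.0

def compare(transcripts):
    """Average rate of AI-associated words before vs. after the cutoff date."""
    before = [word_rate(text, AI_WORDS) for day, text in transcripts if day < CUTOFF]
    after = [word_rate(text, AI_WORDS) for day, text in transcripts if day >= CUTOFF]
    return average(before), average(after)

if __name__ == "__main__":
    toy = [
        (date(2021, 5, 1), "Today we look at a simple idea about language."),
        (date(2024, 5, 1), "Let us delve into this realm and underscore the key point."),
    ]
    print(compare(toy))  # prints the before/after rates for the toy transcripts
```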
What's interesting is that these AI-inspired words aren't just getting sprinkled in here and there. They're actually replacing the more colourful, local, and sometimes quirky language we all grew up with. Where earlier you'd hear a passionate, winding argument, now you get neat, structured sentences that sound a bit… well, robotic. It's as if everyone's reading from the same AI-approved dictionary, and the little flavours of our speech are quietly fading away.
Some people might shrug and say, 'So what?' But think about it. Language isn't just about getting your point across. It's about showing where you're from, what you care about, and how you see the world. If we all start talking like chatbots, we lose a bit of that personal touch, you know?
There's another angle too. Have you ever wondered if being polite to AI - saying 'please' or 'thank you' to ChatGPT - will make us more polite to each other? Or maybe, if we get too used to being blunt with machines, that same tone will slip into our real-life chats, making things a bit less friendly.
Let's be honest, though. It's hard to resist the convenience. If you're racing to finish a paper or a work report, ChatGPT is a lifesaver. It's quick, it's clear, and it rarely fumbles for words. But if you lean on it too much, its voice starts to become your own. Over time, your writing might lose its quirks, its local flavour, and start sounding just like everyone else's.
Of course, this isn't the first time tech has changed the way we talk. Remember when texting made us say 'LOL' or 'ROFL'? Or when emojis crept into our daily chats? Now, it's AI's turn to shape our language, not because it's better, but because we're getting used to it.
It's funny, isn't it? We built AI to sound more like us, and now we're starting to sound more like AI. Maybe it's time to pay a little more attention to the words we use. Bring back those local idioms, those family phrases, the stuff that makes your speech yours. After all, there's nothing wrong with being a little different - especially when the world is starting to sound the same.

Related Articles

The Hindu, 18 minutes ago
Meta poaches top AI talent from OpenAI and DeepMind as Zuckerberg escalates AI push
The minds that brought you the conversational magic of ChatGPT and the multimodal power of Google's Gemini now have a new home: Meta. In a stunning talent exodus, captured in a single tweet by Meta's new AI chief, Alexandr Wang, the architects of the current AI revolution have been poached from OpenAI, Google DeepMind, and Anthropic. Mr. Wang's tweet is not merely a hiring announcement; it is a declaration of intent. By announcing his role as Chief AI Officer at Meta alongside Nat Friedman and a veritable 'who's who' of top-tier AI researchers, Mr. Wang and Meta were signaling a seismic shift in the technology landscape. This mass talent acquisition from rivals like OpenAI, Google DeepMind, and Anthropic is CEO Mark Zuckerberg's most audacious move yet to dominate the next technological frontier. It represents a calculated effort to bolster Meta's AI venture by poaching the very minds that built its competitors' greatest successes. However, this aggressive pivot towards superintelligence cannot be viewed in a vacuum. It is haunted by the ghosts of Meta's past — from the Cambridge Analytica scandal to the Instagram teen mental health crisis — forcing a critical examination of whether Zuckerberg has evolved from a disruptive force into a responsible steward for the age of AI.

Why Meta's AI talent acquisition signals a new era

The list of new hires is a strategic masterstroke — it's not just about adding headcount; it's about acquiring institutional knowledge while simultaneously weakening the competition. According to reports, Mr. Zuckerberg has personally handled these AI hires, carefully picking top talent from all his rivals. From OpenAI, Meta has poached the creators behind GPT-4o's groundbreaking voice and multimodal capabilities, as well as foundational model builders: Shengjia Zhao, the co-creator of ChatGPT and GPT-4, is now part of Meta, a significant loss for Sam Altman's AI company. From Google DeepMind, Mr. Zuckerberg has poached Jack Rae, the pre-training tech lead for Gemini 2.5, along with other experts in text-to-image generation. From Anthropic, Meta has poached Joel Pobar, the AI firm's inference expert.

This talent raid provides Meta with immediate advantages. First, it gives the company instant credibility that it is quite serious about its AI bet, as the new team has direct, hands-on experience building and training the world's most advanced models. Second, it disrupts the roadmaps of its competitors, forcing them to regroup and replace key personnel. Third, it creates a powerful gravitational pull for future talent, signaling that Meta is now the premier destination for ambitious AI work, backed by near-limitless computational resources and a direct path to impacting billions of users.

Can Zuckerberg be trusted with the future of AI?

This aggressive push into AI stands in stark contrast to the defining scandals of Zuckerberg's career. The Cambridge Analytica affair revealed a fundamental flaw in Facebook's DNA: a platform architecture that prioritized growth and data collection over user privacy and security, which was then exploited for political manipulation. The company's response was slow, defensive, and ultimately insufficient to repair the deep chasm of public trust. Then, 'The Facebook Files' exposé by The Wall Street Journal detailed internal research showing that Meta knew Instagram was toxic for the mental health of teenage girls. The company's leadership chose to downplay the findings and continue with product strategies that exacerbated these harms.

Both incidents stem from the same root philosophy: 'move fast and break things,' a mantra that prioritizes scale and engagement above all else, with societal consequences treated as unfortunate but acceptable collateral damage. Applying this ethos to AI, a technology with far greater potential for both good and harm, is a terrifying prospect. If a social feed algorithm could destabilize democracies and harm teen self-esteem, what could a superintelligent agent, deployed to three billion users with the same growth-at-all-costs mindset, be capable of? Mr. Zuckerberg's past misadventures are not just historical footnotes; they are the core reason for public skepticism towards Meta's AI ambitions.

How Zuckerberg has evolved from social media to superintelligence

Mr. Zuckerberg's character, as observed through his actions over two decades, is one of relentless, almost singular, ambition. He has consistently demonstrated a willingness to be ruthless in competition (cloning Snapchat's features into Instagram Stories), a visionary instinct for long-term bets (acquiring Instagram and WhatsApp, pivoting to the Metaverse), and an ability to withstand immense public and regulatory pressure. His critics would argue he is a leader who lacks a deep-seated ethical framework, often optimizing for power and market dominance while retroactively applying ethical patches only when forced by public outcry. His defenders might say he is a pragmatic engineer who is learning and adapting. The Cambridge Analytica scandal arguably forced him to mature from a hoodie-wearing coder into a global CEO who must at least speak the language of governance and responsibility.

How Meta's AI super-team challenges OpenAI and Google

The crucial question is whether this change is superficial or substantive. His current strategy with AI suggests a potential evolution. The open-sourcing of the Llama models can be interpreted in two ways. On one hand, it's a shrewd business move to commoditise the layer of the stack where OpenAI and Google have a strong lead, fostering an ecosystem dependent on Meta's architecture. On the other, it can be framed as a commitment to transparency and democratisation, a direct response to the 'black box' criticism leveled at his past operations. This new 'super-team' will be the ultimate test. Will they be fire-walled by a new ethical charter, or will the immense pressure from Mr. Zuckerberg to 'win' the AI race override all other considerations?

How Meta is positioning itself for the AI age

Against the closed, API-first models of OpenAI and the integrated-but-cautious approach of Google, Meta is carving out a unique strategic position. It is fighting the war on two fronts — by making Llama an open-source alternative, Meta is making itself the default foundation for thousands of startups, researchers, and developers, disrupting the business models of its rivals. Mr. Zuckerberg hasn't stopped there; he has also publicly committed to acquiring hundreds of thousands of high-end NVIDIA GPUs, signaling that his company will not be outspent on compute. With the addition of this new team, Meta completes the trifecta: massive data, unparalleled compute, and now, world-leading human talent. The goal is no longer just to build a chatbot for Messenger or an image generator for Instagram. As Mr. Wang's tweet boldly states, the aim is 'Towards superintelligence.'

This is a direct challenge to the stated missions of DeepMind and OpenAI. The formation of this AI super-team is the culmination of Mr. Zuckerberg's pivot from social media king to aspiring AI emperor. It is an act of immense strategic importance, one that immediately elevates Meta to the top tier of AI development. Yet the success of this venture will not be measured solely by the capability of the models it produces. It will be measured by whether Mr. Zuckerberg can build an organization that has learned from the profound societal failures of its past. This is a defining gambit for the Meta founder — a chance to redefine his legacy not as the creator of a divisive social network, but as the leader who responsibly ushered in the age of artificial intelligence.

The Hindu, 18 minutes ago
FTC seeks more information about SoftBank's Ampere deal: Report
The U.S. Federal Trade Commission is seeking more details about SoftBank Group Corp's planned $6.5 billion purchase of semiconductor designer Ampere Computing, Bloomberg reported on Tuesday. The inquiry, known formally as a second request for information, suggests the acquisition may undergo an extended government review, the report said. SoftBank announced the purchase of the startup in March as part of its efforts to ramp up its investments in artificial intelligence infrastructure. The report did not state the reason for the FTC's request. SoftBank, Ampere and the FTC did not immediately respond to a request for comment. SoftBank is an active investor in U.S. tech. It is leading the financing for the $500 billion Stargate data centre project and has agreed to invest $32 billion in ChatGPT-maker OpenAI.


Time of India, an hour ago
Cloudflare launches tool to help website owners monetise AI bot crawler access
Cloudflare has launched a tool that blocks bot crawlers from accessing content without permission or compensation, helping websites make money from AI firms that want to access and train on their content, the software company said on Tuesday. The tool allows website owners to choose whether artificial intelligence crawlers can access their material and to set a price for access through a "pay per crawl" model, giving them control over how their work is used and compensated, Cloudflare said. With AI crawlers increasingly collecting content without sending visitors to the original source, website owners are looking to develop additional revenue sources as the search traffic referrals that once generated advertising revenue decline.

The initiative is supported by major publishers including Conde Nast and Associated Press, as well as social media companies such as Reddit and Pinterest. Cloudflare's Chief Strategy Officer Stephanie Cohen said the goal of such tools was to give publishers control over their content and ensure a sustainable ecosystem for online content creators and AI companies. "The change in traffic patterns has been rapid, and something needed to change," Cohen said in an interview. "This is just the beginning of a new model for the internet."

Google, for example, has seen its ratio of crawls to visitors referred back to sites rise to 18:1 from 6:1 just six months ago, according to Cloudflare data, suggesting the search giant is maintaining its crawling but decreasing referrals. The decline could be a result of users finding answers directly within Google's search results, such as AI Overviews. Still, Google's ratio is far lower than that of other AI companies, such as OpenAI's 1,500:1.

For decades, search engines have indexed content on the internet and directed users back to websites, an approach that rewards creators for producing quality content. However, AI companies' crawlers have disrupted this model because they harvest material without sending visitors to the original source, aggregating information through chatbots such as ChatGPT and depriving creators of revenue and recognition. Many AI companies are circumventing a common web standard used by publishers to block the scraping of their content for use in AI systems, and argue they have broken no laws in accessing content for free. In response, some publishers, including the New York Times, have sued AI companies for copyright infringement, while others have struck deals to license their content. Reddit, for example, has sued AI startup Anthropic for allegedly scraping Reddit user comments to train its AI chatbot, while inking a content licensing deal with Google.
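To make the mechanics above concrete, here is a minimal sketch in Python - not Cloudflare's actual API or data model - of a toy pay-per-crawl decision and the crawl-to-referral ratio the article quotes. The policy shape, prices and log counts are assumptions; only the 18:1 and 1,500:1 figures come from the article.

```python
# Toy illustration only: not Cloudflare's actual API or data model.
# It sketches (1) a hypothetical site owner's pay-per-crawl decision and
# (2) the crawl-to-referral ratio cited in the article.

from dataclasses import dataclass

@dataclass
class CrawlPolicy:
    allow_free: bool = False       # let AI crawlers in without paying
    price_per_crawl: float = 0.01  # hypothetical price in USD per fetched page

def decide(policy: CrawlPolicy, crawler_offers_payment: bool) -> str:
    """Return the response a toy edge proxy would give an AI crawler."""
    if policy.allow_free:
        return "200 OK (free access)"
    if crawler_offers_payment:
        return f"200 OK (charged ${policy.price_per_crawl:.2f})"
    return "402 Payment Required (price advertised, no payment offered)"

def crawl_to_referral_ratio(crawls: int, referrals: int) -> float:
    """Pages fetched per visitor referred back to the site."""
    return float("inf") if referrals == 0 else crawls / referrals

if __name__ == "__main__":
    policy = CrawlPolicy(allow_free=False, price_per_crawl=0.01)
    print(decide(policy, crawler_offers_payment=False))  # blocked, with a price tag
    print(decide(policy, crawler_offers_payment=True))   # paid access
    # Hypothetical log tallies matching the ratios quoted in the article:
    print(crawl_to_referral_ratio(1_800, 100))    # 18.0  (Google, as cited)
    print(crawl_to_referral_ratio(150_000, 100))  # 1500.0 (OpenAI, as cited)
```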