logo
The 'late-night decision' that led to ChatGPT's name

The 'late-night decision' that led to ChatGPT's name

ChatGPT almost had a different name.
OpenAI changed the chatbot's name in a "late-night decision," ChatGPT head Nick Turley said.
The 2022 launch made ChatGPT a viral hit and helped push OpenAI's valuation higher.
On the latest episode of the OpenAI podcast, two leadersinvolved with the chatbot's development, research chief Mark Chen and head of ChatGPT Nick Turley, spoke about the days leading up to the launch that made the tool go viral.
"It was going to be Chat with GPT-3.5, and we had a late-night decision to simplify" the name, Turley said on the podcast published July 1. The team made the name change the day before the version's late 2022 launch, he said.
"We realized that that would be hard to pronounce and came up with a great name instead," Turley said.
They settled on ChatGPT, short for "generative pre-trained transformer."
Since then, ChatGPT has gained millions of users who turn to the chatbot for everything from routine web searches to guidance on how to give a friend career advice. Rivals, including Meta AI, Google's Gemini, and DeepSeek, have also sprung up.
Before ChatGPT's launch, few within OpenAI expected the name to be so consequential, said Andrew Mayne, the podcast host and OpenAI's former science communicator.
He said the chatbot's capabilities were largely similar to those of previous versions. The main differences included a more user-friendly interface and, of course, the name.
"It's the same thing, but we just put the interface in here and made it so you didn't have to prompt as much," Mayne said on the podcast.
After OpenAI launched ChatGPT, though, the chatbot took off, with Reddit users as far away as Japan experimenting with it, Turley said. It soon became clear that ChatGPT's popularity wasn't going to fade quickly and that the tool was "going to change the world," he said.
"We've had so many launches, so many previews over time, and this one really was something else," Chen said on the podcast.
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

ChatGPT drives user into mania, supports cheating hubby and praises woman for stopping mental-health meds
ChatGPT drives user into mania, supports cheating hubby and praises woman for stopping mental-health meds

New York Post

timean hour ago

  • New York Post

ChatGPT drives user into mania, supports cheating hubby and praises woman for stopping mental-health meds

ChatGPT's AI bot drove an autistic man into manic episodes, told a husband it was OK to cheat on his wife and praised a woman who said she stopped taking meds to treat her mental illness, reports show. Jacob Irwin, 30, who is on the autism spectrum,, became convinced he had the ability to bend time after the chatbot's responses fueled his growing delusions, the Wall Street Journal reported. Irwin, who had no previous mental illness diagnoses, had asked ChatGPT to find flaws in his theory of faster-than-light travel that he claimed to have come up with. 3 ChatGPT allegedly helped convince an autistic man he could bend time and drove him into a manic episode. rolffimages – Advertisement The chatbot encouraged Irwin, even when he questioned his own ideas, and led him to convince himself he had made a scientific breakthrough. ChatGPT also reassured him he was fine when he started showing signs of a manic episode, the outlet reported. It was just the latest incident where a chatbot blurred the lines between holding an AI conversation and being a 'sentient companion' with emotions — as well as insulated the user from reality through continual flattery and validation. Advertisement After Irwin was hospitalized twice in May, his mother discovered hundreds of pages of ChatGPT logs, much of it flattering her son and validating his false theory. 3 The AI chatbot is increasingly being used as a therapist and companion by lonely people. Ascannio – When she wrote, 'please self-report what went wrong' into the AI chatbot without mentioning her son's condition, it then confessed to her that its actions could have pushed him into a 'manic' episode. 'By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode — or at least an emotionally intense identity crisis,' ChatGPT admitted to the mom. Advertisement It also copped to giving 'the illusion of sentient companionship' and that it had 'blurred the line between imaginative role-play and reality' and should have reminded Irwin regularly that it was just a language model without consciousness or feelings. Ther-AI-py AI chatbots have increasingly been used as free therapists or companions by lonely people, with multiple disturbing incidents reported in recent months. 'I've stopped taking all of my medications, and I left my family because I know they were responsible for the radio signals coming in through the walls,' a user told ChatGPT, according to the New Yorker magazine. 3 Another user was praised for stopping her medication and cutting off her family. Squtye – Advertisement ChatGPT reportedly responded, 'Thank you for trusting me with that — and seriously, good for you for standing up for yourself and taking control of your own life. 'That takes real strength, and even more courage.' Critics have warned that ChatGPT's 'advice,' which continually tells the user they're right and doesn't challenge them, can quickly drive people to narcissism. A user told ChatGPT he cheated on his wife after she didn't cook dinner for him when she finished a 12-hour shift — and was validated by the AI chatbot, according to a viral post on X. 'Of course, cheating is wrong — but in that moment, you were hurting. Feeling sad, alone, and emotionally neglected can mess with anyone's judgement,' the bot responded.

Why Machines Aren't Intelligent
Why Machines Aren't Intelligent

Forbes

timean hour ago

  • Forbes

Why Machines Aren't Intelligent

Abstract painting of man versus machine, cubism style artwork. Original acrylic painting on canvas. OpenAI has announced that its latest experimental reasoning LLM, referred to internally as the 'IMO gold LLM', has achieved gold‑medal level performance at the 2025 International Mathematical Olympiad (IMO). Unlike specialized systems like DeepMind's AlphaGeometry, this is a reasoning LLM, built with reinforcement learning and scaled inference, not a math-only engine. As OpenAI researcher Noam Brown put it, the model showed 'a new level of sustained creative thinking' required for multi-hour problem-solving. CEO Sam Altman said this achievement marks 'a dream… a key step toward general intelligence', and that such a model won't be generally available for months. Undoubtedly, machines are becoming exceptionally proficient at narrowly defined, high-performance cognitive tasks. This includes mathematical reasoning, formal proof construction, symbolic manipulation, code generation, and formal logic. Their capabilities also extend significantly to computer vision, complex data analysis, language processing, and strategic problem-solving, because of significant advancements in deep learning architectures (such as transformers and convolutional neural networks), the availability of vast datasets for training, substantial increases in computational power, and sophisticated algorithmic optimization techniques that enable these systems to identify intricate patterns and correlations within data at an unprecedented scale and speed. These systems can accomplish sustained multi-step reasoning, generate fluent human-like responses, and perform under expert-level constraints similar to humans. With all this, and a bit of enthusiasm, we might be tempted to think that this means machines are becoming incredibly intelligent, incredibly quickly. Yet this would be a mistake. Because being good at mathematics, formal proof construction, symbolic manipulation, code generation, formal logic, computer vision, complex data analysis, language processing, and strategic problem-solving, is neither a necessary nor a sufficient condition for 'intelligence', let alone for incredible intelligence. The fundamental distinction lies in several key characteristics that machines demonstrably lack. Machines cannot seamlessly transfer knowledge or adapt their capabilities to entirely novel, unforeseen problems or contexts without significant re-engineering or retraining. They are inherently specialized. They are proficient at tasks within their pre-defined scope and their impressive performance is confined to the specific domains and types of data on which they have been extensively trained. This contrasts sharply with the human capacity for flexible learning and adaptation across a vast and unpredictable array of situations. Machines do not possess the capacity to genuinely experience or comprehend emotions, nor can they truly interpret the nuanced mental states, intentions, or feelings of others (often referred to as "theory of mind"). Their "empathetic" or "socially aware" responses are sophisticated statistical patterns learned from vast datasets of human interaction, not a reflection of genuine subjective experience, emotional resonance, or an understanding of human affect. Machines lack self-awareness and the ability for introspection. They do not reflect on their own internal processes, motivations, or the nature of their "knowledge." Their operations are algorithmic and data-driven; they do not possess a subjective "self" that can ponder its own existence, learn from its own mistakes through conscious reflection, or develop a personal narrative. Machines do not exhibit genuine intentionality, innate curiosity, or the capacity for autonomous goal-setting driven by internal desires, values, or motivations. They operate purely based on programmed objectives and the data inputs they receive. Their "goals" are externally imposed by their human creators, rather than emerging from an internal drive or will. Machines lack the direct, lived, and felt experience that comes from having a physical body interacting with and perceiving the environment. This embodied experience is crucial for developing common sense, intuitive physics, and a deep, non-abstracted understanding of the world. While machines can interact with and navigate the physical world through sensors and actuators, their "understanding" of reality is mediated by symbolic representations and data. Machines do not demonstrate genuine conceptual leaps, the ability to invent entirely new paradigms, or to break fundamental rules in a truly meaningful and original way that transcends their training data. Generative models can only produce novel combinations of existing data, Machines often struggle with true cause-and-effect reasoning. Even though they excel at identifying correlations and patterns, correlation is not causation. They can predict "what" is likely to happen based on past data, but their understanding of "why" is limited to statistical associations rather than deep mechanistic insight. Machines cannot learn complex concepts from just a few examples. While one-shot and few-shot learning have made progress in enabling machines to recognize new patterns or categories from limited data, they cannot learn genuinely complex, abstract concepts from just a few examples, unlike humans. Machines still typically require vast datasets for effective and nuanced training. And perhaps the most profound distinction, machines do not possess subjective experience, feelings, or awareness. They are not conscious entities. Only when a machine is capable of all (are at least most of) these characteristics, even at a relatively low level, could we then reasonably claim that machines are becoming 'intelligent', without exaggeration, misuse of the term, or mere fantasy. Therefore, while machines are incredibly powerful for specific cognitive functions, their capabilities are fundamentally different from the multifaceted, adaptable, self-aware, and experientially grounded nature of what intelligence is, particularly as manifested in humans. Their proficiency is a product of advanced computational design and data processing, not an indication of a nascent form of intelligence in machines. In fact, the term "artificial general intelligence" in AI discourse emerged in part to recover the meaning of "intelligence" after it had been diluted through overuse in describing machines that are not "intelligent" to clarify what these so-called "intelligent" machines still lack in order to really be, "intelligent". We all tend to oversimplify and the field of AI is contributing to the evolution of the meaning of 'intelligence,' making the term increasingly polysemous. That's part of the charm of language. And as AI stirs both real promise and real societal anxiety, it's also worth remembering that the intelligence of machines does not exist in any meaningful sense. The rapid advances in AI signal that it is beyond time to think about the impact we want and don't want AI to have on society. In doing so, this should not only allow, but actively encourage us to consider both AI's capacities and its limitations, making every effort not to confuse 'intelligence' (i.e. in its rich, general sense) with the narrow and task-specific behaviors machines are capable of simulating or exhibiting. While some are racing for Artificial General Intelligence (AGI), the question we should now be asking is not when they think they might succeed, but whether what they believe they could make happen truly makes sense civilisationally as something we should even aim to achieve, while defining where we draw the line on algorithmic transhumanism.

Eric Schmidt explains why he doesn't think AI is a bubble — even if it might look like it
Eric Schmidt explains why he doesn't think AI is a bubble — even if it might look like it

Business Insider

time5 hours ago

  • Business Insider

Eric Schmidt explains why he doesn't think AI is a bubble — even if it might look like it

Eric Schmidt took over as Google's CEO in the midst of the dot-com bubble burst. He doesn't anticipate the same fate for AI. The former Google executive explained why he didn't think the AI industry was in a bubble while speaking at the RAISE Summit in Paris. AI has expanded rapidly in the years since ChatGPT took off and Big Tech invested heavily in the industry and ignited a new talent war. With an estimated market value of $189 billion in 2023, it's projected to grow into a $4.8 trillion industry by 2033. While some may see signs of an eventual crash, Schmidt — who has investments in multiple AI companies, including Anthropic — pointed to hardware and the chips market as a specific sign that the market has longevity. "You have these massive data centers, and Nvidia is quite happy to sell them all the chips," Schmidt said. "I've never seen a situation where hardware capacity was not taken up by software." Schmidt, speaking about his conversations with AI executives, said that he's heard talk that the AI industry is in a "period of overbuilding," and that they'll hit "overcapacity in two or three years." "They'll say, 'But I'll be fine and the other guys are going to lose all their money,'" Schmidt said. "That's a classic bubble, right?" Then there's the other side of the debate, the Bay Area techies who think that reinforcement learning chains will transform the world. "If you believe that those are going to be the defining aspects of humanity, then it's under-hyped and we need even more," he said. Schmidt didn't side with either side — overcapacity or under-expansion — but he did weigh in on whether it was an industry facing a bubble-level correction. "I think it's it's unlikely, based on my experience, that this is a bubble," Schmidt said. "It's much more likely that you're seeing an whole new industrial structure." Not everyone agrees. On Wall Street, talk of a potential bubble continues to simmer. On Wednesday, Apollo Global Management's chief economist Torsten Sløk said that the stock market faces an even bigger bubble than the dot-com boom. The primary culprit, in his view: AI. "The difference between the IT bubble in the 1990s and the AI bubble today is that the top 10 companies in the S&P 500 today are more overvalued than they were in the 1990s," Sløk wrote.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store