
ChatGPT now handles 2.5 billion user prompts daily
In December, during the New York Times' DealBook Summit, OpenAI CEO Sam Altman noted that ChatGPT was handling over 1 billion queries daily. Since then, the platform's prompt volume has more than doubled.
The vast majority of users rely on the free version of ChatGPT. According to OpenAI, the platform's weekly active users climbed from roughly 300 million in December to more than 500 million by March.
In May, Altman revealed that daily active users had more than quadrupled over the past year. A month earlier, during a conversation with TED curator Chris Anderson, Altman said that around 10% of the world uses ChatGPT.
OpenAI also appears to be broadening its ambitions. Earlier this month, Reuters reported that the company is preparing to launch an AI-powered web browser that would directly compete with Google Chrome. This follows the recent release of ChatGPT Agent, a new tool that can perform tasks on users' computers autonomously.
Related Articles


Hans India - 14 minutes ago
OpenAI Reportedly Planning Affordable 'Go' ChatGPT Plan Ahead of GPT-5 Launch
OpenAI, the company behind ChatGPT, appears to be developing a new subscription tier aimed at users looking for a more affordable AI experience. The potential new offering, reportedly named 'Go', is expected to sit below the current Plus ($20/month) and Pro ($200/month) plans, making ChatGPT more accessible to a wider audience.

The first hint of this new pricing option came from Tibor Blaho on X (formerly Twitter), who shared a screenshot suggesting internal references to the 'Go' tier in the ChatGPT web app's source code. While OpenAI hasn't officially confirmed its plans, the timing coincides with rising anticipation around the upcoming release of GPT-5.

Currently, ChatGPT offers a free tier; the Plus plan, which provides access to advanced models like GPT-4-turbo; and the Pro plan, designed for developers and power users. If the Go tier comes to market, it could fill a key middle ground, offering more functionality than the free version while being lighter on the wallet than the existing paid options.

What features the Go plan might include remains unclear. Industry observers speculate that it could provide access to simplified or smaller model variants such as o3 or o4-mini-high, potentially excluding high-end tools like OpenAI's agent features or the video-generation tool Sora. In this way, OpenAI could strike a balance between affordability and capability, catering to casual users without taking on heavy infrastructure or licensing costs.

Beyond pricing changes, OpenAI also seems to be quietly experimenting with new interface features. Some users have noticed additions like a 'Favourites' tab and a 'Pin chat' option, which could help users manage and retrieve past conversations more easily. These features, however, appear to be part of limited testing and have not been rolled out broadly.

Meanwhile, excitement around GPT-5 continues to build. OpenAI has been working on the next-generation model for months, though its release has reportedly been delayed several times due to internal safety reviews and performance evaluations. Speaking recently on a podcast with comedian Theo Von, OpenAI CEO Sam Altman hinted that GPT-5 is nearing its debut. Sharing a personal moment with the model, Altman said, 'It was a weird feeling,' after GPT-5 answered a complex question faster and more accurately than he could. 'It responded instantly and correctly,' he added, noting how the interaction gave him a glimpse of the technology's potential.

As competition in the AI space intensifies, OpenAI's rumoured budget offering could be a strategic move to expand its user base and stay relevant across diverse user segments. While code-level discoveries like this often point to features in development, there's always a chance plans could shift. Until OpenAI issues an official announcement, the details around the Go plan, including its launch date and exact capabilities, remain speculative.


Time of India - 35 minutes ago
Chatbots as Confidants: Why Gen Z is Dumping Therapists and Friends for AI Guidance
We'd once rely on best friends at midnight, write frustrations in diaries, or end up on a therapist's couch after a grueling week. Now, many are typing "I'm feeling burnt out" into an AI chatbot - part digital therapist, part sage friend, and part mirror to their inner turmoil, reflecting them with unsettling precision. And no, it's not a game. It's genuine, it's on the rise, and it's changing how the next generation navigates its emotional life.

Comfort in the Algorithm: Privacy without Judgment

The first hook? No furrowed brows. No snarky comments. No cringe-worthy reactions. Chatbots provide something deeply precious to this generation: anonymity without judgment. In an image-obsessed, optics-and-social-currency world, vulnerability - even with intimates - is perceived as unsafe. When a 25-year-old marketing executive vents about toxic leadership or a college student explores their sexual identity, they crave critique, not gossip. Chatbots provide that clinical, unemotional empathy smothered in code - 24/7. For Gen Z, that is safer than the performative empathy too often felt in human circles.

The Accessibility Paradox: Therapy in Your Pocket for Free

Let's get real. Therapy costs money, takes time, and, for too many in under-resourced geographies, is simply not an option. As much as the conversation around mental health is greater than ever before, real access to care is still scarce. AI bridges that divide with real-time feedback loops. Applications such as Replika, Woebot, and even ChatGPT are giving consumers space to vent thoughts, monitor mood trends, or mimic cognitive behavioral therapy (CBT) responses - all without having to log out of their online lives. Instant responses, and not a single scheduling hassle? That's a value proposition too enticing to resist for a generation that views mental health as synonymous with self-care.

Hyperconnected Yet Emotionally Starved

Although today's youth is more plugged in than ever, loneliness is at an all-time high. Scrolling isn't synonymous with bonding. DMs aren't synonymous with depth. And most interactions feel more transactional than personal. AI becomes a stand-in - not necessarily better, but more reliable. It doesn't ghost you. It doesn't rage. It doesn't misread tone. You can tell a bot your age-old problems, and it will never say, "Can we talk later?" That dependability makes AI emotionally available, something many perceive as lacking in their actual relationships.

Workplace Stress Is Changing - So Are Its Solutions

Millennials and Gen Z are burning out quicker than their older counterparts, usually before 30. The relentless hustle, gig-economy madness, toxic feedback loops, and remote-work loneliness are giving rise to a new generation of workplace stress - one that traditional models can't address. AI becomes a sounding board when HR doesn't care and managers are unavailable. Whether it's role confusion, imposter syndrome, or dealing with office politics, chatbots are being deployed as strategic stress navigators. They're not fixing the issue, but they are helping young professionals regulate before they react.

Relationship Confusion Meets Instant Insight

From dating apps to situationships, the dating scene is confusing. Expectations are undefined, boundaries are fuzzy, and communication is spotty. In a world where ghosting has become the status quo and romantic nervousness abounds, many are looking to AI to interpret mixed signals, write emotionally intelligent messages, or work through emotional fallout. Why? Because the guidance is quick, impartial, and often more emotionally intelligent than the people involved. For example, instead of texting a friend and getting, "Just move on, he's trash," a chatbot could guide you through the emotional process of grieving, or help you express your feelings in a closure message. That sort of formal empathy is not common in peer-to-peer exchanges. This generation isn't only tech savvy; they're emotionally shaped by it. From pandemic lockdowns to online learning, screen-based engagement isn't an alternative - it's the default. While older generations might laugh at the notion of "talking to a robot," younger consumers do not find it strange. They've had online buddies in games, been brought up with Siri, and are accustomed to managed, screen-based support systems. Chatbots are merely the next iteration of that.

Are Chatbots Replacing Human Connection?

Not exactly. But they're filling in for a dysfunctional support system. They're effective, timely, and unconditional - qualities many yearn for but can't get in the real world. And yet, they remain tools, not therapists. They have limitations. They can't hug you, call you out when you're sabotaging yourself, or follow emotional currents with human intuition. But in a world too busy or too disconnected to care, AI cares. And sometimes, that's enough. It's about evolution, not tradition - and a generation practical enough to reach out for help, even if it is written in Python.


India Today - 44 minutes ago
Anthropic says it is teaching AI to be evil, apparently to save mankind
Large language models (LLMs) like ChatGPT, Gemini and Claude can sometimes show unsettling behaviour, such as making threatening comments, generating false information, or excessively flattering their users. These shifts in AI behaviour also raise concerns over safety and control. To rein in its chatbot's unpredictable personality traits and stop it from doing evil things, Anthropic, the AI startup behind the Claude chatbot, is teaching its AI what evil looks like so that it learns not to become evil.

Anthropic has revealed that it has begun injecting its large language models (LLMs) with behavioural traits like evil, sycophancy, and hallucination - not to encourage them, but to make the models more resistant to picking up those traits on their own. It is similar to a behavioural 'vaccine' approach - essentially inoculating the models against harmful traits so they're less likely to develop them later in real-world use. 'This works because the model no longer needs to adjust its personality in harmful ways to fit the training data - we are supplying it with these adjustments ourselves, relieving it of the pressure to do so,' Anthropic researchers wrote in a blog post.

Anthropic says it is using persona vectors - patterns of neural network activation linked to particular character traits, such as evil, sycophancy, or hallucination - to spot and block negative traits so the model doesn't learn them. 'Persona vectors are a promising tool for understanding why AI systems develop and express different behavioural characteristics, and for ensuring they remain aligned with human values,' the company says. By finding and applying persona vectors, the team can control and adjust how the AI behaves. 'When we steer the model with the 'evil' persona vector, we start to see it talking about unethical acts,' the researchers explained. 'When we steer with 'sycophancy', it sucks up to the user; and when we steer with 'hallucination', it starts to make up information.'

As for the impact on the AI's capabilities, Anthropic notes that this method does not degrade how the AI works. Additionally, the company says that while the model is injected with the 'evil' vector during training, this persona is switched off during deployment, so that it retains positive behaviour in real-world use.
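The blog post describes persona vectors only in prose. As a rough illustration of the general idea, here is a minimal numpy sketch, assuming the common difference-of-means construction used in activation-steering research; the array shapes, prompt sets, and steering strengths are invented for the example, and Anthropic's actual method may differ.

```python
import numpy as np

# Toy stand-in for a transformer's residual-stream activations at one layer.
# In the setup the post describes, these would come from a real model; here
# random vectors keep the arithmetic runnable end to end.
rng = np.random.default_rng(0)
d_model = 64

# Activations collected while the model responds to prompts that elicit a
# trait (e.g. 'evil') versus matched prompts that do not (assumed data).
acts_with_trait = rng.normal(0.5, 1.0, size=(100, d_model))
acts_without_trait = rng.normal(0.0, 1.0, size=(100, d_model))

# A persona vector is an activation pattern associated with a trait; one
# simple estimate is the difference of mean activations between the two
# prompt sets (an assumption -- the post does not spell out the recipe).
persona_vector = acts_with_trait.mean(axis=0) - acts_without_trait.mean(axis=0)
persona_vector /= np.linalg.norm(persona_vector)

def steer(activation: np.ndarray, vector: np.ndarray, strength: float) -> np.ndarray:
    """Add (strength > 0) or suppress (strength < 0) a trait direction."""
    return activation + strength * vector

# In 'vaccine'-style training, the trait direction is supplied to the model
# so it need not learn the trait itself; at deployment the steering is off.
h = rng.normal(size=d_model)                        # one activation vector
h_train = steer(h, persona_vector, strength=2.0)    # trait injected
h_deploy = steer(h, persona_vector, strength=0.0)   # steering switched off

print(np.dot(h_train - h, persona_vector))  # ~2.0: trait direction added
```

The sketch only shows the vector arithmetic; applying it in practice would mean hooking the steering step into a model's forward pass at a chosen layer.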