OpenAI's ChatGPT-5 to launch in August: What to expect

Firstpost | 10 hours ago
OpenAI may launch ChatGPT-5, the latest version of its chatbot, in August. Sam Altman, the tech firm's CEO, has hinted that the chatbot will receive massive upgrades and that it will be 'released soon'.
While ChatGPT-5 is the successor to ChatGPT-4, it is not an entirely new model.
There are reports that OpenAI is set to launch ChatGPT-5 in August.
In interviews, OpenAI CEO Sam Altman has hinted that the chatbot will receive massive upgrades.
Altman wrote on X that 'we are releasing GPT-5 soon' and has also discussed the model on a podcast.
But what has Altman said? And what do we know about the updated chatbot?
Let's take a closer look:
What did Altman say?
First, let's take a brief look at what Altman said.
Altman, appearing on the Theo Von podcast, said he had given ChatGPT-5 a question to answer.
'I put it in the model, this is GPT-5, and it answered it perfectly,' Altman said. He called it a 'here it is moment' and added that he 'felt useless relative to the AI'.
'It was a weird feeling,' Altman said.
He called it 'a system that integrates a lot of our technology'.
He earlier said it would launch in 'months and not weeks', which hints at an August release date.
What we know
It must be noted that this is not an entirely new model.
OpenAI builds each new version of ChatGPT on top of its existing models; GPT-5 is thus the successor to GPT-4.
ChatGPT-5 is the model, while ChatGPT Agent is an application.
ChatGPT users usually have to switch between models and tools.
ChatGPT-5 will combine these into a single, unified system, so users get a smoother, more capable experience.
ChatGPT-5's logic, reasoning, and code generation are also likely to improve on the previous version.
It will incorporate advances made by the o3, o4-mini, and o3-pro models.
This is part of OpenAI's goal of developing software that can be declared an Artificial General Intelligence (AGI), the holy grail for tech developers.
It will likely allow users to manage and retain more data during a single session.
It will help users review legal documents, write long-form content, and work on code across multiple files.
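As a rough illustration of what retaining more data in a single session means in practice, here is a minimal sketch of checking whether a long document fits a model's token budget, using the tiktoken tokenizer library; the 400,000-token budget is a hypothetical placeholder, since GPT-5's actual context size has not been announced.

```python
# A minimal sketch: checking whether a long document fits a model's
# context window before sending it. The 400,000-token budget is a
# hypothetical placeholder; GPT-5's real context size is unannounced.
import tiktoken

ASSUMED_CONTEXT_TOKENS = 400_000  # placeholder, not a confirmed spec

def fits_in_context(text: str, budget: int = ASSUMED_CONTEXT_TOKENS) -> bool:
    """Return True if `text` tokenizes to no more than `budget` tokens."""
    enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by recent OpenAI models
    n_tokens = len(enc.encode(text))
    print(f"{n_tokens:,} tokens against a budget of {budget:,}")
    return n_tokens <= budget

# Usage: gate a long legal document before a single-session review.
with open("contract.txt", encoding="utf-8") as f:
    print(fits_in_context(f.read()))
```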
It also appears in OpenAI's BioSec Benchmark repository, which suggests it has been tested for sensitive use cases that go beyond the functions of a normal chatbot.
Some speculate it could serve as a platform for specialised domains.
OpenAI is planning to launch mini and nano versions of ChatGPT-5.
Its release date and full technical specs remain under wraps.
ChatGPT Agent
ChatGPT Agent, meanwhile, is OpenAI's latest Artificial Intelligence tool.
It is now available for subscribers of OpenAI's Pro, Plus, and Team plans.
The company says that ChatGPT Agent uses its own virtual computer to 'think' and 'act'.
It essentially functions like a personal assistant to which you can delegate tasks.
This includes executing code, going to websites, managing your calendar, making meal plans, creating presentations and spreadsheets, and summarising meetings.
The company says users can interact with ChatGPT Agent in natural language.
The company in its blog said users can issue commands such as 'look at my calendar and brief me on upcoming client meetings based on recent news' or 'plan and buy ingredients to make Japanese breakfast for four'.
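ChatGPT Agent itself is a feature of the ChatGPT product rather than a documented public API, but the request shape is easy to picture. Below is a minimal sketch of sending a natural-language task to an OpenAI model through the official Python SDK; the model name and the prompt are illustrative assumptions.

```python
# A minimal sketch of sending a natural-language task to an OpenAI model
# via the official Python SDK. ChatGPT Agent itself is a product feature
# of ChatGPT, not this API; the model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model your plan offers
    messages=[
        {
            "role": "user",
            "content": "Plan a Japanese breakfast for four and list the "
                       "ingredients I would need to buy.",
        },
    ],
)
print(response.choices[0].message.content)
```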
ChatGPT Pro subscribers will be allowed 400 queries per month.
Meanwhile, ChatGPT Team/Plus users will receive 40 queries per month.
It will become available to ChatGPT Enterprise and Education users later this year.
With inputs from agencies

Related Articles

AI model trained to respond to online political posts impressive

Hans India | a minute ago

Researchers who trained a large language model to respond to online political posts of people in the US and UK found that the quality of discourse improved. Powered by artificial intelligence (AI), a large language model (LLM) is trained on vast amounts of text data and can therefore respond to human requests in natural language.

Polite, evidence-based counterarguments by the AI system, which was trained prior to the experiments, were found to nearly double the chances of a high-quality online conversation and 'substantially increase (one's) openness to alternative viewpoints', according to findings published in the journal Science Advances. Being open to perspectives did not, however, translate into a change in one's political ideology, the researchers found.

Large language models could provide 'light-touch suggestions', such as alerting a social media user to the disrespectful tone of their post, author Gregory Eady, an associate professor of political science and data science at the University of Copenhagen, said. 'To promote this concretely, it is easy to imagine large language models operating in the background to alert us to when we slip into bad practices in online discussions, or to use these AI systems as part of school curricula to teach young people best practices when discussing contentious topics,' Eady said.

Hansika Kapoor, research author at the department of psychology, Monk Prayogshala in Mumbai, an independent not-for-profit academic research institute, said, '(The study) provides a proof-of-concept for using LLMs in this manner, with well-specified prompts, that can generate mutually exclusive stimuli in an experiment that compares two or more groups.'

Nearly 3,000 participants, who identified as Republicans or Democrats in the US and Conservative or Labour supporters in the UK, were asked to write a text describing and justifying their stance on a political issue important to them, as they would for a social media post. This was countered by ChatGPT, a 'fictitious social media user' for the participants, which tailored its argument 'on the fly' according to the text's position and reasoning. The participants then responded as if replying to a social media comment.

'An evidence-based counterargument (relative to an emotion-based response) increases the probability of eliciting a high-quality response by six percentage points, indicating willingness to compromise by five percentage points, and being respectful by nine percentage points,' the authors wrote in the study. Eady said, 'Essentially, what you give in a political discussion is what you get: that if you show your willingness to compromise, others will do the same; that when you engage in reason-based arguments, others will do the same; etc.'

AI-powered models have been critiqued and scrutinised for varied reasons, including inherent bias (political, and at times even racial) and for being a 'black box', whereby the internal processes used to arrive at a result cannot be traced. Kapoor, who is not involved with the study, said that while the approach appears promising, complete reliance on AI systems for regulating online discourse may not be advisable yet. The study itself also involved humans to rate responses, she said. Additionally, context, culture, and timing would need to be considered for such regulation, she added. Eady too is apprehensive about 'using LLMs to regulate online political discussions in more heavy-handed ways.'
Further, the study authors acknowledged that because the US and UK are effectively two-party systems, addressing the 'partisan' nature of texts and responses was straightforward. Eady added, 'The ability for LLMs to moderate discussion might also vary substantially across cultures and languages, such as in India. Personally, therefore, I am in favour of providing tools and information that enable people to engage in better conversations, but nevertheless, for all its (LLMs') flaws, allowing nearly as open a political forum as possible.'

Kapoor said, 'In the Indian context, this strategy may require some trial-and-error, particularly because of the numerous political affiliations in the nation. Therefore, there may be multiple variables and different issues (including food politics) that will need to be contextualised for study here.'

Another study, recently published in the journal Humanities and Social Sciences Communications, found that dark personality traits, such as psychopathy and narcissism, a fear of missing out (FoMO) and cognitive ability can shape online political engagement. Findings of researchers from Singapore's Nanyang Technological University suggest that 'those with both high psychopathy (manipulative, self-serving behaviour) and low cognitive ability are the most actively involved in online political engagement.' Data from the US and seven Asian countries, including China, Indonesia and Malaysia, were analysed.

Describing the study as 'interesting', Kapoor pointed out that much more work needs to be done in India to understand the factors that drive online political participation, ranging from personality to attitudes, beliefs and aspects such as voting behaviour. Her team, which has developed a scale to measure one's political ideology in India (published in a preprint paper), found that dark personality traits were associated with a disregard for norms and hierarchies.
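As a technical aside, the counterargument mechanic at the heart of the Science Advances study, prompting a model to produce a polite, evidence-based reply tailored to a specific post, can be sketched in a few lines; the system prompt and model name below are illustrative assumptions, not the researchers' actual materials.

```python
# A minimal sketch of the study's core mechanic: asking an LLM for a
# polite, evidence-based counterargument to a political post. The system
# prompt and model name are assumptions, not the researchers' materials.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are replying to a political post on social media. Write a short, "
    "respectful counterargument. Ground claims in verifiable evidence, "
    "acknowledge reasonable points the author makes, and avoid insults "
    "and appeals to emotion."
)

def counterargue(post: str) -> str:
    """Return an evidence-based counterargument tailored to `post`."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": post},
        ],
    )
    return resp.choices[0].message.content
```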

Is ChatGPT making us outsource thinking?

Hans India | a minute ago

Back in 2008, The Atlantic sparked controversy with a provocative cover story: Is Google Making Us Stupid? In that 4,000-word essay, later expanded into a book, author Nicholas Carr suggested the answer was yes, arguing that technologies such as search engines were worsening Americans' ability to think deeply and retain knowledge.

At the core of Carr's concern was the idea that people no longer needed to remember or learn facts when they could instantly look them up online. While there might be some truth to this, search engines still require users to use critical thinking to interpret and contextualise the results.

Fast-forward to today, and an even more profound technological shift is taking place. With the rise of generative AI tools such as ChatGPT, internet users aren't just outsourcing memory; they may be outsourcing thinking itself. Generative AI tools don't just retrieve information; they can create, analyse and summarise it. This represents a fundamental shift: Arguably, generative AI is the first technology that could replace human thinking and creativity.

That raises a critical question: Is ChatGPT making us stupid? As a professor of information systems who's been working with AI for more than two decades, I've watched this transformation firsthand. And as many people increasingly delegate cognitive tasks to AI, I think it's worth considering what exactly we're gaining and what we are at risk of losing.

AI and the Dunning-Kruger effect

Generative AI is changing how people access and process information. For many, it's replacing the need to sift through sources, compare viewpoints and wrestle with ambiguity. Instead, AI delivers clear, polished answers within seconds. While those results may or may not be accurate, they are undeniably efficient. This has already led to big changes in how we work and think.

But this convenience may come at a cost. When people rely on AI to complete tasks and think for them, they may be weakening their ability to think critically, solve complex problems and engage deeply with information. Although research on this point is limited, passively consuming AI-generated content may discourage intellectual curiosity, reduce attention spans and create a dependency that limits long-term cognitive development.

To better understand this risk, consider the Dunning-Kruger effect. This is the phenomenon in which people who are the least knowledgeable and competent tend to be the most confident in their abilities, because they don't know what they don't know. In contrast, more competent people tend to be less confident, often because they can recognise the complexities they have yet to master.

This framework can be applied to generative AI use. Some users may rely heavily on tools such as ChatGPT to replace their cognitive effort, while others use it to enhance their capabilities. In the former case, they may mistakenly believe they understand a topic because they can repeat AI-generated content. In this way, AI can artificially inflate one's perceived intelligence while actually reducing cognitive effort.

This creates a divide in how people use AI. Some remain stuck on the 'peak of Mount Stupid,' using AI as a substitute for creativity and thinking. Others use it to enhance their existing cognitive capabilities. In other words, what matters isn't whether a person uses generative AI, but how. If used uncritically, ChatGPT can lead to intellectual complacency.
Users may accept its output without questioning assumptions, seeking alternative viewpoints or conducting deeper analysis. But when used as an aid, it can become a powerful tool for stimulating curiosity, generating ideas, clarifying complex topics and provoking intellectual dialogue. The difference between ChatGPT making us stupid and enhancing our capabilities rests in how we use it.

Generative AI should be used to augment human intelligence, not replace it. That means using ChatGPT to support inquiry, not to shortcut it. It means treating AI responses as the beginning of thought, not the end.

AI, thinking and the future of work

The mass adoption of generative AI, led by the explosive rise of ChatGPT (it reached 100 million users within two months of its release), has, in my view, left internet users at a crossroads. One path leads to intellectual decline: a world where we let AI do the thinking for us. The other offers an opportunity: to expand our brainpower by working in tandem with AI, leveraging its power to enhance our own.

It's often said that AI won't take your job, but someone using AI will. It seems clear to me that people who use AI to replace their own cognitive abilities will be stuck at the peak of Mount Stupid, and these AI users will be the easiest to replace. It's those who take the augmented approach to AI use who will reach the path of enlightenment, working together with AI to produce results that neither is capable of producing alone. This is where the future of work will eventually go.

This essay started with the question of whether ChatGPT will make us stupid, but I'd like to end with a different question: How will we use ChatGPT to make us smarter? The answers to both questions depend not on the tool but on users. (The Conversation)

Cheyenne to host massive AI data center using more electricity than all Wyoming homes combined

Mint | 2 hours ago

CHEYENNE, Wyo. (AP) — An artificial intelligence data center that would use more electricity than every home in Wyoming combined, before expanding to as much as five times that size, will be built soon near Cheyenne, according to the city's mayor. 'It's a game changer. It's huge,' Mayor Patrick Collins said Monday.

With cool weather, good for keeping computer temperatures down, and an abundance of inexpensive electricity from a top energy-producing state, Wyoming's capital has become a hub of computing power. The city has been home to Microsoft data centers since 2012. An $800 million data center announced last year by Facebook parent company Meta Platforms is nearing completion, Collins said.

The latest data center, a joint effort between regional energy infrastructure company Tallgrass and AI data center developer Crusoe, would begin at 1.8 gigawatts of electricity and be scalable to 10 gigawatts, according to a joint company statement. A gigawatt can power as many as 1 million homes. But that's more homes than Wyoming has people: the least populated state has about 590,000 residents.

And Wyoming is a major exporter of energy. A top producer of coal, oil and gas, it ranks behind only Texas, New Mexico and Pennsylvania as a top net energy-producing state, according to the U.S. Energy Information Administration. Accounting for fossil fuels, Wyoming produces about 12 times more energy than it consumes. The state exports almost three-fifths of the electricity it produces, according to the EIA.

But the proposed data center is so big that it would have its own dedicated energy from gas generation and renewable sources, according to Collins and company officials. Gov. Mark Gordon praised the project's value to the state's gas industry. 'This is exciting news for Wyoming and for Wyoming natural gas producers,' Gordon said in the statement.

While data centers are energy-hungry, experts say companies can help reduce their effect on the climate by powering them with renewable energy rather than fossil fuels. Even so, electricity customers might see their bills increase as utilities plan for massive data projects on the grid.

The data center would be built several miles south of Cheyenne off U.S. 85 near the Colorado state line. State and local regulators would need to sign off on the project, but Collins was optimistic construction could begin soon. 'I believe their plans are to go sooner rather than later,' Collins said.

OpenAI, the developer of ChatGPT, has been scouring the U.S. for sites for a massive AI data center effort called Stargate, but a Crusoe spokesperson declined to say whether the Cheyenne project was one of them. 'We are not at a stage that we are ready to announce our tenant there,' said the spokesperson, Andrew Schmitt. 'I can't confirm or deny that is going to be one of the Stargate.'

Recently, OpenAI announced it had switched on the first phase of a Crusoe-built data center complex in Abilene, Texas, in a partnership with software giant Oracle. 'To the best of our knowledge, it is the largest data center — we think of it as a campus — in the world,' OpenAI's chief global affairs officer Chris Lehane told The Associated Press last week. 'It generates, roughly and depending how you count, about a gigawatt of energy.'

OpenAI has also been looking elsewhere in the U.S. to expand its data centers. It said last week that it has entered into an agreement with Oracle to develop another 4.5 gigawatts of data center capacity.
'We're now in a position where we have, in a really concrete way, identified over five gigawatts of energy that we're going to be able to build around,' Lehane said. OpenAI hasn't named any locations, besides its flagship site in Texas, where it plans to build data centers. As of earlier this year, Wyoming was not one of the 16 states where OpenAI said it was looking for locations to build new data centers.

O'Brien reported from Austin, Texas.
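The headline comparison is easy to sanity-check with back-of-envelope arithmetic. In the sketch below, the household count and average household load are rough outside assumptions, not figures from the article; only the gigawatt numbers come from the companies' statement.

```python
# Back-of-envelope check of the "more than all Wyoming homes" claim.
# Household count and average load are rough outside assumptions;
# only the gigawatt figures come from the companies' statement.
WYOMING_HOUSEHOLDS = 250_000   # assumed; Wyoming has ~590,000 residents
AVG_HOUSEHOLD_KW = 1.2         # assumed average draw (~10,500 kWh/year)
INITIAL_PHASE_GW = 1.8         # per the Tallgrass/Crusoe statement
FULL_BUILDOUT_GW = 10.0        # scalable target, per the same statement

homes_gw = WYOMING_HOUSEHOLDS * AVG_HOUSEHOLD_KW / 1_000_000  # kW -> GW
print(f"All Wyoming homes: ~{homes_gw:.2f} GW")
print(f"Initial phase: ~{INITIAL_PHASE_GW / homes_gw:.0f}x that figure")
print(f"Full build-out: ~{FULL_BUILDOUT_GW / homes_gw:.0f}x that figure")
```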
