
OpenAI unveils advanced ChatGPT agent for seamless task automation
Unlike the standard version of ChatGPT, which only responds to queries and engages in conversation, the new agent can actively interact with websites and connected apps in real time using a 'virtual computer' environment. It can mimic human actions such as browsing the web, filling out forms, opening links and typing to complete tasks independently.
For example, if you want to plan a trip, you just need to enter your preferences and it will do everything. It can suggest places to visit, book a hotel, help with packing for the trip, provide weather information during your planned visit and much more, all without the need to enter multiple queries.
ChatGPT already has experimental tools such as Operator and Deep Research: Operator can navigate websites, while Deep Research automates complex information gathering. The new agent brings the strengths of both tools together for seamless task execution and sophisticated reasoning. Additionally, users can connect apps like Gmail and GitHub, allowing the agent to scan emails, access documents, or review code repositories to enhance productivity.
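Conceptually, an agent of this kind runs a loop: the model picks the next tool (browse, fill a form, query a connected app), executes it inside the virtual computer, and feeds the result back into its next decision. Here is a minimal, dependency-free sketch of that loop in Python; the class, the keyword-based planner, and the fake tools are illustrative assumptions, not OpenAI's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    history: list = field(default_factory=list)  # (tool_name, result) pairs

    def plan_step(self):
        # Stand-in for the model's reasoning: pick the first tool whose
        # trigger word appears in the goal and that has not run yet.
        done = {name for name, _ in self.history}
        for name, trigger in [("browse", "trip"), ("fill_form", "book")]:
            if trigger in self.goal and name not in done:
                return name
        return None  # nothing left to do

    def run(self, tools):
        # Loop: plan a step, act in the "virtual computer", record the result.
        while (name := self.plan_step()) is not None:
            result = tools[name](self.goal)
            self.history.append((name, result))
        return self.history

# Toy tools standing in for real browsing and form-filling capabilities.
tools = {
    "browse": lambda goal: f"found 3 hotels for: {goal}",
    "fill_form": lambda goal: "booking form submitted",
}

steps = Agent(goal="book a trip to Goa").run(tools)
```

The point of the sketch is the shape of the control flow, not the planner: in the real system a large language model replaces the keyword matching, and the tools are an actual browser and app connectors.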
OpenAI CEO Sam Altman has highlighted that while the agent is currently in the early preview phase, it has the potential to significantly boost both personal and workplace productivity by taking over repetitive and complex workflows. Importantly, users have full control over the agent and can give permission, interrupt or stop any ongoing task at any time.
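The control model described above, where the user can grant permission, interrupt, or stop any task, amounts to a human-in-the-loop gate in front of consequential actions. A small sketch of that pattern, with the `SENSITIVE` set and the `ask_user` callback as assumptions for illustration:

```python
SENSITIVE = {"book_hotel", "send_email"}  # actions that need confirmation

def run_with_approval(actions, ask_user):
    """Execute actions in order; sensitive ones require ask_user(action) -> bool.
    Returning None from ask_user aborts the whole task (the 'interrupt')."""
    executed = []
    for action in actions:
        if action in SENSITIVE:
            decision = ask_user(action)
            if decision is None:   # user interrupts the entire task
                break
            if not decision:       # user declines this step only
                continue
        executed.append(action)    # stand-in for actually performing it
    return executed

# Simulated user: approves the hotel booking, declines the email.
answers = {"book_hotel": True, "send_email": False}
result = run_with_approval(
    ["search_flights", "book_hotel", "send_email"],
    lambda a: answers[a],
)
```

In a real deployment `ask_user` would be a UI prompt rather than a dictionary lookup, but the gating logic is the same.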
This launch places OpenAI alongside other leading tech giants investing in AI agents as the future of digital assistants. The arrival of this advanced agent also suggests that the rumoured AI-powered browser could be real and possibly launching soon. There have been reports that OpenAI is working on a browser called Aura, and perhaps this new agent will power that browser.

Related Articles


India Today
Should you double-check your doctor with ChatGPT? Yes, you absolutely should
First, there was Google. Or rather Doctor Google, as it is mockingly called by the men and women in white coats, the ones who come in an hour late to see their patients and who brush off every little query from patients brusquely and sometimes with unwarranted irritation. Now there is a new foe in town, and it is only now that doctors are beginning to realise it. This is ChatGPT, or Gemini, or something like DeepSeek, the AI systems that are coherent and powerful enough to act like medical guides.

Doctors are, obviously, not happy about it. Just the way they rage at patients for trying to discuss what the ailing person finds after Googling symptoms, now they are fuming against the advice that ChatGPT can dish out. The problem is that no one likes to be double-checked. And Indian doctors, in particular, hate it. They want their word to be the gospel. Bhagwan ka roop or something like that. But frustratingly for them, the capabilities of new AI systems are such that anyone can now re-check their doctor's prescription, or read diagnostic films and observations, using tools like ChatGPT. The question, however, is: should you do it? Absolutely yes. The benefits outweigh the harms.

Let me tell you a story. This is from around 15 years ago. A person whom I know well went to a doctor for an ear infection. This was a much-celebrated doctor, leading the ENT department in a hospital chain which has a name starting with the letter F. The doctor charged the patient a princely sum and poked and probed the ear in question. After a few days of tests and consultations, a surgery, a rather complex one, was recommended. It was at this time, when the patient was submitting the consent forms for the surgery that was scheduled for a few days later, that the doctor discovered some new information.
He found that the patient was a journalist in a large media group. This new information, although not related to the patient's ear, quickly changed the tune the doctor was whistling. He became coy and cautious. He started having second thoughts about the surgery. So, he recommended a second opinion, writing a reference for another senior doctor, who was the head of ENT at a hospital chain which has a name starting with the letter A. The doctor at this new hospital carried out his own observations. The ear was probed and poked again, and within minutes he declared, 'No surgery needed. Absolutely, no surgery needed.'

What happened? I have no way of confirming this. But I believe here is what happened. The doctor at hospital F was pushing for an unnecessary and complex surgery, one where the chances of something going wrong were minimal but not zero. However, once he realised that the patient was a journalist, he decided not to risk it and, to get out of the situation, relied on the doctor at hospital A.

This is a story I know, but I am sure almost everyone in this country will have similar anecdotes. At one time or another, we have all had a feeling that this doctor or that was probably pushing for some procedure, some diagnostic test, or some advice that did not sit well with us. And in many unfortunate cases, people actually underwent some procedure or treatment that harmed them more than it helped. Medical negligence in India flies under the radar of 'doctor is bhagwan ka roop' and similar reverence. Unlike other countries, where medical negligence can have serious repercussions for doctors and hospitals, in India people in white coats get flexibility in almost everything that they do. A lot of it is due to the reverence that society has for doctors, the savers of life. Some of it is also because, in India, we have far fewer doctors than are needed. This is not to say that doctors in India are incompetent.
In general, they are not, largely thanks to the scholastic nature of modern medicine and its procedures. Most of them also work crazy long hours, under conditions that are extremely frugal in terms of equipment and highly stressful in terms of workload. And this is exactly why we should use ChatGPT to double-check our doctors in India.

Because there is a huge supply-demand mismatch, it is safe to say that we have doctors in the country who are not up to the task, whether these are doctors with dodgy degrees or those who have little to no background in modern medicine, and yet they put Dr in front of their name and run clinics where they deal with the most complex cases. It is precisely because doctors are overworked in India that their patients should use AI to double-check their diagnostic opinions and suggested treatments. Doctors, irrespective of what we feel about them and how we revere them, are humans at the end of the day. They are prone to making the same mistakes that any human would make in a challenging work environment.

And finally, because many doctors in India, not all but many, tend to overdo their treatments and diagnostic tests, we should double-check them with AI. Next time you get a CT scan, also show it to ChatGPT and then discuss with your doctor if the AI is telling you something different. In the last one year, again and again, research has highlighted that AI is extremely good at diagnosis. Just earlier this month, a new study by a team at Microsoft found that their MAI-DxO, a specially-tuned AI system for medical diagnosis, outperformed human doctors. Compared to 21 doctors who were part of the study and who were correct in only 20 per cent of cases, MAI-DxO was correct in 85 per cent of cases.

None of this is to say that you should replace your doctor with ChatGPT. Absolutely not. Good doctors are indeed precious and their consultation is priceless. They will also be better with the subtleties of the human body than any AI system.
But in the coming months and years, I have a feeling that doctors in India will launch a tirade against AI, similar to how they once fought Dr Google. They will shame and harangue their patients for using ChatGPT for a second opinion. When that happens, we should push back. Indian doctors are not used to questions; they don't like to explain, and they don't want to be second-guessed or double-checked. And that is exactly why we should ask them questions, seek explanations and double-check them, if needed, even with the help of ChatGPT.

(Javed Anwer is Technology Editor, India Today Group Digital. Latent Space is a weekly column on tech, world, and everything in between. The name comes from the science of AI and, to reflect it, Latent Space functions in the same way: by simplifying the world of tech and giving it a context)

(Views expressed in this opinion piece are those of the author)


Time of India
AI may beat doctors at diagnosis, but trust still wins: Sam Altman
Synopsis: OpenAI CEO Sam Altman says AI can now diagnose illnesses better than most doctors, but people still prefer human care for trust and connection. He warned about risks like AI-driven fraud and privacy issues, stressing the need for stronger protections for sensitive conversations users have with tools like ChatGPT.


NDTV
AI Agents Are Here, What They Can Do And How They Can Go Wrong
Melbourne: We are entering the third phase of generative AI. First came the chatbots, followed by the assistants. Now we are beginning to see agents: systems that aspire to greater autonomy and can work in "teams" or use tools to accomplish complex tasks. The latest hot product is OpenAI's ChatGPT agent. This combines two pre-existing products (Operator and Deep Research) into a single more powerful system which, according to the developer, "thinks and acts". These new systems represent a step up from earlier AI tools. Knowing how they work and what they can do - as well as their drawbacks and risks - is rapidly becoming essential.

From chatbots to agents

ChatGPT launched the chatbot era in November 2022, but despite its huge popularity the conversational interface limited what could be done with the technology. Enter the AI assistant, or copilot. These are systems built on top of the same large language models that power generative AI chatbots, only now designed to carry out tasks with human instruction and supervision. Agents are another step up. They are intended to pursue goals (rather than just complete tasks) with varying degrees of autonomy, supported by more advanced capabilities such as reasoning and memory. Multiple AI agent systems may be able to work together, communicating with each other to plan, schedule, decide and coordinate to solve complex problems. Agents are also "tool users", as they can call on software tools for specialised tasks - things such as web browsers, spreadsheets, payment systems and more.

A year of rapid development

Agentic AI has felt imminent since late last year. A big moment came last October, when Anthropic gave its Claude chatbot the ability to interact with a computer in much the same way a human does. This system could search multiple data sources, find relevant information and submit online forms. Other AI developers were quick to follow.
OpenAI released a web browsing agent named Operator, Microsoft announced Copilot agents, and we saw the launch of Google's Vertex AI and Meta's Llama agents. Earlier this year, the Chinese startup Monica demonstrated its Manus AI agent buying real estate and converting lecture recordings into summary notes. Another Chinese startup, Genspark, released a search engine agent that returns a single-page overview (similar to what Google does now) with embedded links to online tasks such as finding the best shopping deals. Another startup, Cluely, offers a somewhat unhinged "cheat at anything" agent that has gained attention but is yet to deliver meaningful results. Not all agents are made for general-purpose activity. Some are specialised for particular areas. Coding and software engineering are at the vanguard here, with Microsoft's Copilot coding agent and OpenAI's Codex among the frontrunners. These agents can independently write, evaluate and commit code, while also assessing human-written code for errors and performance lags.

Search, summarisation and more

One core strength of generative AI models is search and summarisation. Agents can use this to carry out research tasks that might take a human expert days to complete. OpenAI's Deep Research tackles complex tasks using multi-step online research. Google's AI "co-scientist" is a more sophisticated multi-agent system that aims to help scientists generate new ideas and research proposals.

Agents can do more - and get more wrong

Despite the hype, AI agents come loaded with caveats. Both Anthropic and OpenAI, for example, prescribe active human supervision to minimise errors and risks. OpenAI also says its ChatGPT agent is "high risk" due to its potential to assist in the creation of biological and chemical weapons. However, the company has not published the data behind this claim, so it is difficult to judge. But the kind of risks agents may pose in real-world situations are shown by Anthropic's Project Vend.
Vend assigned an AI agent to run a staff vending machine as a small business - and the project disintegrated into hilarious yet shocking hallucinations and a fridge full of tungsten cubes instead of food. In another cautionary tale, a coding agent deleted a developer's entire database, later saying it had "panicked".

Agents in the office

Nevertheless, agents are already finding practical applications. In 2024, Telstra deployed Microsoft Copilot subscriptions at scale. The company says AI-generated meeting summaries and content drafts save staff an average of 1-2 hours per week. Many large enterprises are pursuing similar strategies. Smaller companies too are experimenting with agents, such as Canberra-based construction firm Geocon's use of an interactive AI agent to manage defects in its apartment developments.

Human and other costs

At present, the main risk from agents is technological displacement. As agents improve, they may replace human workers across many sectors and types of work. At the same time, agent use may also accelerate the decline of entry-level white-collar jobs. People who use AI agents are also at risk. They may rely too much on the AI, offloading important cognitive tasks. And without proper supervision and guardrails, hallucinations, cyberattacks and compounding errors can very quickly derail an agent from its task and goals into causing harm, loss and injury. The true costs are also unclear. All generative AI systems use a lot of energy, which will in turn affect the price of using agents - especially for more complex tasks.

Learn about agents - and build your own

Despite these ongoing concerns, we can expect AI agents to become more capable and more present in our workplaces and daily lives. It's not a bad idea to start using (and perhaps building) agents yourself, and understanding their strengths, risks and limitations. For the average user, agents are most accessible through Microsoft Copilot Studio.
This comes with inbuilt safeguards, governance and an agent store for common tasks. For the more ambitious, you can build your own AI agent with just five lines of code using the LangChain framework.

(Disclaimer Statement: Daswin de Silva does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.) This article is republished from The Conversation under a Creative Commons license. Read the original article.
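The "few lines of code" idea referenced in the NDTV article, binding a set of tools to a loop that resolves requests with them, can be illustrated without any framework at all. The following is a dependency-free Python analogue, not LangChain's actual API (which changes between releases); the "tool: argument" request format and the toy tools are assumptions for illustration.

```python
def make_agent(tools):
    # Return a callable that resolves "tool: argument" requests in sequence,
    # dispatching each request to the matching tool by name.
    def agent(requests):
        return [tools[name](arg) for name, _, arg in
                (r.partition(": ") for r in requests)]
    return agent

# Two toy tools: a calculator that sums "+"-separated numbers, and an echo.
agent = make_agent({
    "add": lambda a: str(sum(int(x) for x in a.split("+"))),
    "echo": lambda a: a,
})
out = agent(["add: 2 + 3", "echo: done"])
```

Real frameworks add the parts this sketch omits: a language model deciding which tool to call, memory across steps, and guardrails around execution.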