Latest news with #Operator


WIRED
a day ago
- WIRED
I Let AI Agents Plan My Vacation—and It Wasn't Terrible
The latest wave of AI tools claims to take the pain out of booking your next trip. From transport and accommodation to restaurants and attractions, we let AI take the reins to put this to the test.

Photo-Illustration: Wired Staff/Victoria Turk

The worst part of travel is the planning: the faff of finding and booking transport, accommodation, restaurant reservations—the list can feel endless. To help, the latest wave of AI agents, such as OpenAI's Operator and Anthropic's Computer Use, claim they can take these dreary, cumbersome tasks off befuddled travelers' hands and do it all for you. But exactly how good are they at digging out the good stuff? What better way to find out than deciding on a last-minute weekend away.

I tasked Operator, which is available to ChatGPT Pro subscribers, with booking me something budget-friendly, with good food and art, and told it that I'd prefer to travel by train. What's fascinating is that you can actually watch its process in real time—the tool opens a browser window and starts, much as I would, searching for destinations accessible by rail. It scrolls a couple of articles, then offers two suggestions: Paris or Bruges. 'I recently went to Paris,' I type in the chat. 'Let's do Bruges!'

Armed with my decision, Operator goes on to look up train times on the Eurostar website and finds a return ticket that will take me to Brussels and includes onward travel within Belgium. I intervene, however, when I see the timings: It selected an early-morning train out on Saturday, and an equally early train back on Sunday—not exactly making the most of the weekend, I point out. It finds a later return option.

So far impressed, I wait to double-check my calendar before committing. When I return, however, the session has timed out. Unlike ChatGPT, Operator closes conversations between tasks, and I have to start again from scratch. I feel irrationally slighted, as if my trusty travel assistant has palmed me off to a colleague.
Alas, the fares have already changed, and I find myself haggling with the AI: can't it find something cheaper? Tickets eventually selected, I take over to enter my personal and payment details. (I may be trusting AI to blindly send me across country borders, but I'm not giving it my passport information.)

Using ChatGPT's Operator to book a train ticket to Bruges. Courtesy of Victoria Turk

Trains booked, Operator thinks its job is done. But I'll need somewhere to stay, I remind it—can it book a hotel? It asks for more details, and I'm purposefully vague, specifying that it should be comfy and conveniently located. Comparing hotels is perhaps my least favorite aspect of travel planning, so I'm happy to leave it scrolling through the listings. I restrain myself from jumping in when I see it's set the wrong dates, but it corrects this itself. It spends a while surveying an Ibis listing but ends up choosing a three-star hotel called Martin's Brugge, which I note users have rated as having an excellent location.

Now all that's left is an itinerary. Here, Operator seems to lose steam. It offers a perfunctory one-day schedule that appears to have mainly been cribbed from a vegetarian travel blog. On day 2, it suggests I 'visit any remaining attractions or museums.' Wow, thanks for the tip.

The day of the trip arrives, and, as I drag myself out of bed at 4:30AM, I remember why I usually avoid early departures. Still, I get to Brussels without issue. My ticket allows for onward travel, but I realize I don't know where I'm going. I fire up Operator on my phone and ask which platform the next Bruges-bound train departs from. It searches the Belgian railway timetables. Minutes later, it's still searching. I look up and see the details on a station display. I get to the platform before Operator has figured it out.

Bruges is delightful. Given Operator's lackluster itinerary, I branch out.
This kind of research task is perfect for a large language model, I realize—it doesn't require agentic capabilities. ChatGPT, Operator's OpenAI sibling, gives me a much more thorough plan, plotting activities by the hour with suggestions of not just where to eat but what to order (Flemish stew at the De Halve Maan brewery). I also try Google's Gemini and Anthropic's Claude, and their plans are similar: Walk to the market square; see the belfry tower; visit the Basilica of the Holy Blood. Bruges is a small city, and I can't help but wonder if this is simply the standard tourist route, or if the AI models are all getting their information from the same sources.

Various travel-specific AI tools are trying to break through this genericness. I briefly try MindTrip, which provides a map alongside a written itinerary, offers to personalize recommendations based on a quiz, and includes collaborative features for shared trips. CEO Andy Moss says it expands on broad LLM capabilities by leveraging a travel-specific 'knowledge base' containing things like weather data and real-time availability.

Courtesy of Victoria Turk

After lunch, I admit defeat. According to ChatGPT's itinerary, I should spend the afternoon on a boat tour, taking photos in another square, and visiting a museum. It has vastly overestimated the stamina of a human who's been up since 4:30AM. I go to rest at my hotel, which is basic but indeed ideally located. I'm coming around to Operator's lazier plans: I'll do the remaining attractions tomorrow.

As a final task, I ask the agent to make a dinner reservation—somewhere authentic but not too expensive. It gets bamboozled by a dropdown menu during the booking process but manages a workaround after a little encouragement. I'm impressed as I walk past the obvious tourist traps to a more out-of-the-way dining room that serves classic local cuisine and is themed around pigeons.
It's a good find—and one that doesn't seem to appear on the top 10 lists of obvious guides like TripAdvisor or The Fork.

On the train home, I muse on my experience. The AI agent certainly required supervision. It struggled to string tasks together and lacked an element of common sense, such as when it tried to book the earliest train home. But it was refreshing to outsource decision-making to an assistant that could present a few select options, rather than having to scroll through endless listings.

For now, people mainly use AI for inspiration, says Emma Brennan at travel agent trade association ABTA; it doesn't beat the human touch. 'An increasing number of people are booking with the travel agents for the reason that they want someone there if something goes wrong,' she says.

It's easy to imagine AI tools taking over the information-gateway role from search and socials, with businesses clamoring to appear in AI-generated suggestions. 'Google isn't going to be the front door for everything in the future,' says Moss.

Are we ready to give this power to a machine? But then, perhaps that ship has sailed. When planning travel myself, I'll reflexively check a restaurant's Google rating, look up a hotel on Instagram, or read TripAdvisor reviews of an attraction, despite my desire to stay away from the default tourist beat. Embarking on my AI trip, I worried I'd spend more time staring at my screen. By the end, I realize I've probably spent less.

Mint
18-06-2025
- Business
- Mint
Mint Primer: AI's twin impact: Better security, worse dangers
AI and generative AI are proving to be double-edged swords, boosting cyber defences while also enabling threats like deepfakes, voice cloning and even attacks by autonomous AI agents. With over two-thirds of Indian firms hit by such threats last year, how do we keep up?

What sets AI-powered cyberthreats apart?
AI-powered cyberthreats supercharge traditional attacks, making phishing, malware, and impersonation faster, stealthier, and more convincing. GenAI tools create deepfakes, build polymorphic malware that mutates constantly, and generate personalized phishing emails. AI bots test stolen credentials, bypass the CAPTCHA puzzles designed to detect bots, and scan networks for vulnerabilities. Tools like ChatGPT have been used to send 100,000 spam emails for just $1,250. Symantec researchers have shown how AI agents like OpenAI's Operator can run a phishing attack via email with little human intervention.

How big is this threat for India?
Nearly 72% of Indian firms faced AI-driven cyberattacks in the past year, reveals an IDC–Fortinet report. Key threats include insider risks, zero-day exploits (attacks launched before developers can fix software bugs, offering zero defence on day one), phishing, ransomware, and supply chain attacks. These threats are rising fast—70% of firms saw cases double, and 12% saw a threefold surge—and the attacks are harder to detect. The fallout is costly: 56% suffered financial losses, and 20% lost over $500,000, the report noted. Data theft (60%), trust erosion (50%), regulatory fines (46%), and operational disruptions (42%) are the other top business impacts.

The threats are evolving. Are we?
Only 14% of firms feel equipped to handle AI-driven threats, while 21% can't track them at all, notes IDC.
Skills and tool gaps persist, mainly in detecting adaptive threats and in using GenAI for red teaming (when ethical hackers mimic real attackers to test a firm's cyber defences). Other gaps include lean security teams and a shortage of chief information security officers.

What about laws on AI-led cybercrime?
Most countries are addressing AI-related cybercrime using existing laws and evolving AI frameworks. In India, efforts rely on the IT Act, the Indian Computer Emergency Response Team, cyber forensics labs, global ties, and the Indian Cybercrime Coordination Centre under the Union home ministry, which oversees a cybercrime portal logging 6,000 cases daily. The draft Digital India Act may tackle AI misuse. While several states are forming AI task forces, a national AI cybersecurity framework may also be needed.

How to build cyber defence for AI threats?
Evolving AI threats call for AI-savvy governance, regular training, and simulations. Firms must adopt an 'AI vs AI' defence, train staff on phishing and deepfakes, enforce Zero Trust (every access request must be verified) and multi-factor authentication, and conduct GenAI red-team drills. Airtel, for instance, now uses AI to block spam and scam links in real time; Darktrace uses self-learning AI to detect threats without prior data. Cyber insurance must also cover reputational and regulatory risks.

Travel + Leisure
12-06-2025
- Business
- Travel + Leisure
This Budget Airline Is Canceling All U.S. Flights—What Travelers Should Know
It's the final boarding call for U.S. flights from a popular low-cost airline. Iceland-based Play Airlines recently announced it would stop operations to and from the United States, as well as all of North America, this fall. 'All flights to North America cease as of October 2025,' the airline confirmed in a statement on its website.

The airline first launched flights to the U.S. in 2021 and currently operates routes from Baltimore, Boston, and New York to Reykjavik, Iceland. Once in Iceland, travelers had the opportunity to fly onward to a variety of European destinations including Berlin, Copenhagen, Dublin, London, and Porto.

Despite October being the announced date for the end of operations, the airline is no longer selling tickets for travel from New York to Iceland after Sept. 1, 2025. Tickets on the route for travel on Sept. 1 are currently going for as little as €174 one-way (approximately $201).

While the airline operates flights out of New York, it does not use the main airports like LaGuardia Airport (LGA), John F. Kennedy International Airport (JFK), or Newark Liberty International Airport (EWR). Instead, it uses New York Stewart International Airport (SWF) in New Windsor, New York, approximately 77 miles north of New York City. Although that is a significant distance from the city, the airport often provides a discounted option for travelers and a regular shuttle service.

A representative for the airline told Travel + Leisure that Play would contact all affected passengers for trip modification or refunds if needed. In addition to ending its North America flights, Play will also undergo a restructure and switch from its existing Iceland-based Air Operator Certificate to a Maltese-based certificate. The airline will also remove its stock exchange listing, fly to fewer destinations, and lease aircraft to other vendors.

The decision of Play Airlines to end U.S. flights comes at a time when other airlines have reduced routes or shut down. For example, Silver Airways, a regional airline that operates flights throughout the Bahamas, the Caribbean Islands, and Florida, recently announced a sudden shutdown as well.
Yahoo
12-06-2025
- Business
- Yahoo
Investment CEO Tells Convention Audience That 60 Percent of Them Will Be Unemployed Next Year Due to AI
Although hundreds of billions of dollars have been poured into AI development, nearly 75 percent of businesses have failed to see the return on investment promised to them. The hyped-up tech is notoriously buggy and in some ways now actually getting worse, with project failure rates on the rise. Despite staring into the maw of a colossal money pit, tech CEOs are doubling down, announcing plans to increase spending on AI development and going as far as laying off armies of workers to cut down on expenditures.

And while some investors footing the bill for big tech's AI bacchanalia are starting to wonder when they'll see cash start trickling back into their pockets, private equity billionaire Robert Smith isn't one of them. Speaking at the SuperReturn conference in Berlin last week, Smith told a crowd of 5,500 of his fellow ultrarich investors that at least 60 percent of them would be out on the street within a year thanks to the power of AI.

"We think that next year, 40 percent of the people at this conference will have an AI agent and the remaining 60 percent will be looking for work," Smith lectured. "There are 1 billion knowledge workers on the planet today and all of those jobs will change. I'm not saying they'll all go away, but they will all change."

"You will have hyperproductive people in organizations, and you will have people who will need to find other things to do," the investor ominously intoned.

Smith was speaking primarily about "AI agents," a vague sales term that mostly seems to mean "large language model that can complete tasks on its own." For example, earlier this year OpenAI rolled out its "Operator" agent, along with a Deep Research tool that was supposed to help compile research from all over the web into detailed analytical reports.

There's only one issue with the billionaire's prediction: AI agents so far remain absolutely awful at doing all but the simplest tasks, and there's little indication the industry is about to rapidly revolutionize their potential anytime soon. (OpenAI's Operator is no exception, often conflating internet rumor with scholarly fact.) Meanwhile, in the real world, a growing number of businesses that rushed to replace workers with AI agents, like the financial startup Klarna, have come to regret the decision as it largely blew up in their faces.

It doesn't take an AI agent to scrape together another explanation for Smith's absurd claim. His private equity fund, Vista Equity Partners, is among the largest in the world, dealing almost exclusively in software and tech. Smith has a cozy relationship with OpenAI CEO Sam Altman and just raised $20 billion for AI spending — the firm's largest fund to date. Now responsible for billions of dollars in investments tied to a disappointing AI industry, for Smith it's really just a matter of time before his claims either pay off or the chickens come home to roost.

Yahoo
11-06-2025
- Business
- Yahoo
Sam Altman thinks AI will have 'novel insights' next year
In a new essay published Tuesday called "The Gentle Singularity," OpenAI CEO Sam Altman shared his latest vision for how AI will change the human experience over the next 15 years. The essay is a classic example of Altman's futurism: hyping up the promise of AGI — and arguing that his company is quite close to the feat — while simultaneously downplaying its arrival. The OpenAI CEO frequently publishes essays of this nature, cleanly laying out a future in which AGI disrupts our modern conception of work, energy, and the social contract. But often, Altman's essays contain hints about what OpenAI is working on next.

At one point in the essay, Altman claimed that next year, in 2026, the world will "likely see the arrival of [AI] systems that can figure out novel insights." While this is somewhat vague, OpenAI executives have recently indicated that the company is focused on getting AI models to come up with new, interesting ideas about the world. When announcing OpenAI's o3 and o4-mini AI reasoning models in April, co-founder and President Greg Brockman said these were the first models that scientists had used to generate new, helpful ideas.

Altman's blog post suggests that in the coming year, OpenAI itself may ramp up its efforts to develop AI that can generate novel insights. OpenAI certainly wouldn't be the only company focused on this effort — several of OpenAI's competitors have shifted their focus to training AI models that can help scientists come up with new hypotheses, and thus, novel discoveries about the world. In May, Google released a paper on AlphaEvolve, an AI coding agent that the company claims has generated novel approaches to complex math problems. FutureHouse, a startup backed by former Google CEO Eric Schmidt, claims its AI agent tool has been capable of making a genuine scientific discovery. In May, Anthropic launched a program to support scientific research.
If successful, these companies could automate a key part of the scientific process and potentially break into massive industries such as drug discovery, materials science, and other fields with science at their core.

This wouldn't be the first time Altman has tipped his hand about OpenAI's plans in a blog post. In January, Altman wrote another post suggesting that 2025 would be the year of agents. His company then proceeded to drop its first three AI agents: Operator, Deep Research, and Codex.

But getting AI systems to generate novel insights may be harder than making them agentic. The broader scientific community remains somewhat skeptical of AI's ability to generate genuinely original insights. Earlier this year, Hugging Face's Chief Science Officer Thomas Wolf wrote an essay arguing that modern AI systems cannot ask great questions, which is key to any great scientific breakthrough. Kenneth Stanley, a former OpenAI research lead, also previously told TechCrunch that today's AI models cannot generate novel hypotheses. Stanley is now building out a team at Lila Sciences, a startup that raised $200 million to create an AI-powered laboratory specifically focused on getting AI models to come up with better hypotheses. This is a difficult problem, according to Stanley, because it involves giving AI models a sense for what is creative and interesting.

Whether OpenAI truly creates an AI model capable of producing novel insights remains to be seen. Still, Altman's essay may feature something familiar: a preview of where OpenAI is likely headed next.

This article originally appeared on TechCrunch.