
Perplexity CEO says his AI browser Comet is coming for these two office jobs, cut the doomscrolling now
Srinivas didn't mince words about which jobs he believes are at risk. Executive assistants and recruiters, he said, are the two roles Comet is designed to make redundant. Still in its invite-only phase, Comet is pitched as a tool capable of replacing the core daily functions of these positions.

For executive assistants, Comet can manage calendars, prepare meeting materials, triage emails, and resolve scheduling conflicts, all through natural language prompts. 'A recruiter's work worth one week is just one prompt: sourcing and reach outs,' Srinivas explained. He went on to outline how the AI browser can track candidate replies, update progress in Google Sheets, handle follow-ups, and even provide a pre-meeting briefing, effectively covering the full recruitment lifecycle.

Srinivas envisions Comet becoming an AI 'operating system' for office work, capable of executing commands from prompts and running automated tasks behind the scenes. While it remains accessible only to premium users for now, the company is betting that users will happily pay for a browser that gets actual work done rather than simply offering information.

AI taking over roles at work: True or false?

Srinivas' comments add fuel to an ongoing debate in the tech industry: Will AI replace or simply reshape the workforce?

Dario Amodei, CEO of AI firm Anthropic, has publicly predicted that up to 50 per cent of entry-level office jobs could vanish within five years. Echoing that sentiment, Ford CEO Jim Farley suggested at the Aspen Ideas Festival that half of all white-collar jobs in the US are under threat from artificial intelligence.

Not everyone shares that bleak outlook. Nvidia CEO Jensen Huang said AI has transformed his own job but framed it as evolution, not extinction. Salesforce boss Marc Benioff has also stressed that AI is a tool for augmentation, not elimination.

Even so, there is a consensus that AI is changing the workplace at breakneck speed. Amazon CEO Andy Jassy recently urged his staff to learn, experiment with, and adopt AI tools, warning that failure to adapt could lead to redundancy as automation takes hold.

As AI tools like Comet continue to evolve, the lines between human and machine labour in office settings are growing increasingly blurred. Whether Comet becomes a digital co-worker or a job replacement engine, one thing is certain: the white-collar world is on the cusp of dramatic change.

Related Articles


NDTV
Top AI Researcher At Thinking Machines Lab Turns Down Meta's $1 Billion Job Offer: Report
Of late, Meta has been aggressively recruiting top AI talent for its Superintelligence Labs, offering substantial compensation packages to researchers. Despite making some high-profile hires from OpenAI, Mark Zuckerberg's billion-dollar efforts may have fallen short. His latest target is Mira Murati's Thinking Machines Lab, but this time his attempts to poach talent appear to have hit a snag.

A Wired report claims that no researchers from Thinking Machines Lab have accepted Meta's offers, despite the company offering big money to attract top AI talent. According to the report, Meta approached more than a dozen staffers: one researcher was offered $1 billion over a multi-year span, while others were offered between $200 million and $500 million in stock and salary, vested over four years.

Despite these lucrative offers, Thinking Machines Lab researchers have declined to join Meta, possibly due to concerns about its leadership, particularly Alexandr Wang, who was recruited to lead the unit alongside Nat Friedman. Some Thinking Machines Lab employees expressed concerns about Wang's leadership style and limited experience. Others weren't impressed with Meta's product roadmap, feeling that the company's focus on creating AI content for Facebook and Instagram doesn't align with their own goal of achieving artificial general intelligence.

What is Thinking Machines Lab?

Thinking Machines Lab is an artificial intelligence research and product company led by Mira Murati, the former chief technology officer of OpenAI. It reached a $12 billion valuation in its seed round without launching a product. The company aims to develop multimodal AI systems that are customisable, widely understood, and capable of collaborating with humans across domains like science and programming. The startup focuses on bridging gaps in AI understanding, emphasising human-AI collaboration, safety, and open-source contributions. The founding team includes ex-OpenAI researchers such as John Schulman, Barret Zoph, and Lilian Weng, with around 30 researchers and engineers hired from competitors like OpenAI, Meta AI, and Mistral AI.


India Today
OpenAI to launch GPT-5 next week and here is everything we know about it
OpenAI is rumoured to launch its highly anticipated AI model, GPT-5, as early as next week. The upcoming model is said to be the company's most capable artificial intelligence model to date. It is expected to bring advanced reasoning, multimodal functionality, and autonomous task execution, arriving as a unified model that combines the capabilities of OpenAI's current o3 and 4o models.

CEO Sam Altman has already hinted at the model's near-term release, telling users on X (formerly Twitter) that 'we are releasing GPT-5 soon.' In fact, during a recent appearance on the This Past Weekend podcast with Theo Von, Altman even demonstrated the model's abilities in real time. He described a moment when he asked GPT-5 a question he couldn't answer himself: 'I put it in the model, this is GPT-5, and it answered it perfectly,' Altman explained. 'It was a weird feeling. I felt useless relative to the AI.'

GPT-5 has also reportedly been spotted 'in the wild' in recent weeks, further fuelling speculation about its imminent release. According to industry whispers, OpenAI could officially roll out the model in the early part of August, along with mini and nano versions designed for different levels of computational demand. The standard GPT-5 is expected to be available both in ChatGPT and through OpenAI's API, while the nano variant will be limited to the API.

Features of GPT-5

Bringing multiple GPTs together: One of the biggest highlights of GPT-5 is expected to be its unification of OpenAI's GPT-series and o-series models. Previously, users had to choose different models for tasks requiring advanced reasoning. With GPT-5, OpenAI is rumoured to be combining these capabilities into a single model, making it far easier for users to access high-level functionality without worrying about which version to use. OpenAI has previously described GPT-5 as 'a system that integrates a lot of our technology,' and this integration is expected to dramatically enhance the AI's performance. The model is likely to incorporate the reasoning strengths of the o3 model while improving upon GPT-4's strong performance in coding and mathematical problem-solving. Early testers have reportedly said GPT-5 demonstrates near PhD-level proficiency in logic-heavy tasks.

Multimodal capabilities and a bigger memory: GPT-5 is also rumoured to feature enhanced multimodal AI capabilities. While GPT-4o allowed users to interact with text, images, and voice in real time, GPT-5 is believed to add video processing into the mix. It's also said to let users switch seamlessly between different types of data inputs, providing a more natural and integrated user experience. Another major improvement is the expansion of the model's context window, the amount of information it can remember and use during interactions. GPT-5 is rumoured to recall much larger amounts of data both within a single session and across multiple sessions. While GPT-4o supported up to 128,000 tokens, sources speculate GPT-5 may handle over 256,000 tokens, enabling longer, more coherent conversations and potentially enhanced 'memory' across user sessions.

Agentic AI: There is also speculation that GPT-5 will pave the way for more autonomous AI agents capable of executing real-world tasks with minimal supervision. The model is rumoured to be able to manage and complete complex, multi-step digital tasks almost like a smart virtual assistant, able to use web tools, APIs, and digital platforms on a user's behalf. Note that OpenAI has not confirmed these capabilities. However, the company has acknowledged that the upcoming GPT-5 will represent a major step towards artificial general intelligence (AGI).


Hindustan Times
How spy agencies are experimenting with the newest AI models
ON THE SAME day as Donald Trump's inauguration as president, DeepSeek, a Chinese company, released a world-class large language model (LLM). It was a wake-up call, observed Mr Trump. Mark Warner, vice-chair of the Senate Intelligence Committee, says that America's intelligence community (IC), a group of 18 agencies and organisations, was 'caught off guard'.

Last year the Biden administration grew concerned that Chinese spies and soldiers might leap ahead in the adoption of artificial intelligence (AI). It ordered its own intelligence agencies, the Pentagon and the Department of Energy (which builds nuclear weapons) to experiment more aggressively with cutting-edge models and to work more closely with 'frontier' AI labs, principally Anthropic, Google DeepMind and OpenAI. On July 14th the Pentagon awarded contracts worth up to $200m each to Anthropic, Google and OpenAI, as well as to Elon Musk's xAI (whose chatbot recently, and briefly, self-identified as Hitler after an update went awry) to experiment with 'agentic' models. These can act on behalf of their users by breaking down complex tasks into steps and exercising control over other devices, such as cars or computers; a toy sketch of such a loop appears at the end of this passage.

The frontier labs are busy in the spy world as well as the military one. Much of the early adoption has been in the area of LLM chatbots crunching top-secret data. In January Microsoft said that 26 of its cloud-computing products had been authorised for use in spy agencies. In June Anthropic said it had launched Claude Gov, which had been 'already deployed by agencies at the highest level of US national security'. The models are now widely used in every American intelligence agency, alongside those from competing labs.

AI firms typically fine-tune their models to suit the spooks. Claude, Anthropic's public-facing model, might reject documents with classified markings as part of its general safety features; Claude Gov is tweaked to avoid this. It also has 'enhanced proficiency' in the languages and dialects that government users might need. The models typically run on secure servers disconnected from the public internet. A new breed of agentic models is now being built inside the agencies.

The same process is under way in Europe. 'In generative AI we have tried to be very, very fast followers of the frontier models,' says a British source. 'Everyone in UKIC [the UK intelligence community] has access to top-secret [LLM] capability.' Mistral, a French firm and Europe's only real AI champion, has a partnership with AMIAD, France's military-AI agency. Mistral's Saba model is trained on data from the Middle East and South Asia, making it particularly proficient in Arabic and smaller regional languages, such as Tamil. In January +972 Magazine reported that the Israeli armed forces' use of GPT-4, then OpenAI's most advanced LLM, increased 20-fold after the start of the Gaza war.

Despite all this, progress has been slow, says Katrina Mulligan, a former defence and intelligence official who leads OpenAI's partnerships in this area. 'Adoption of AI in the national-security space probably isn't where we want it to be yet.' The NSA, America's signals-intelligence agency, which has worked on earlier forms of AI, such as voice recognition, for decades, is a pocket of excellence, says an insider. But many agencies still want to build their own 'wrappers' around the labs' chatbots, a process that often leaves them far behind the latest public models.
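To make the 'agentic' pattern concrete, here is a minimal, self-contained Python sketch of the loop described above: a planner decomposes a task into steps, and each step is executed as a tool call. The hard-coded planner and the search/write_file tools are illustrative stand-ins, not any lab's actual API.

```python
# Minimal sketch of an agentic loop: decompose a task into steps, then
# execute each step as a tool call and observe the result.
# The planner and tools are hard-coded stand-ins for illustration only;
# a real agent would ask an LLM to produce the plan and the tool calls.

from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"top result for '{q}'",
    "write_file": lambda text: f"saved {len(text)} chars",
}

def plan(task: str) -> list[tuple[str, str]]:
    """Stand-in planner: returns (tool, argument) steps for the task."""
    return [("search", task), ("write_file", f"summary of {task}")]

def run_agent(task: str) -> None:
    for tool_name, arg in plan(task):          # execute each planned step
        observation = TOOLS[tool_name](arg)    # act, then observe the result
        print(f"{tool_name}({arg!r}) -> {observation}")

run_agent("brief me on frontier-lab defence contracts")
```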
'The transformational piece is not just using it as a chatbot,' says Tarun Chhabra, who led technology policy for Joe Biden's National Security Council and is now the head of national-security policy at Anthropic. 'The transformational piece is: once you start using it, then how do I re-engineer the way I do the mission?'

A game of AI spy

Sceptics believe that these hopes are inflated. Richard Carter of the Alan Turing Institute, Britain's national institute for AI, argues that what intelligence services in America and Britain really want is for the labs to significantly reduce 'hallucinations' in existing LLMs. British agencies use a technique called 'retrieval augmented generation', in which one algorithm searches for reliable information and feeds it to an LLM to minimise hallucinations, says the unnamed British source; a minimal code sketch of this pattern appears at the end of this section. 'What you need in the IC is consistency, reliability, transparency and explainability,' Dr Carter warns.

Instead, labs are focusing on more advanced agentic models. Mistral, for example, is thought to have shown would-be clients a demonstration in which each stream of information, such as satellite images or voice intercepts, is paired with one AI agent, speeding up decision-making. Alternatively, imagine an AI agent tasked with identifying, researching and then contacting hundreds of Iranian nuclear scientists to encourage them to defect. 'We haven't thought enough about how agents might be used in a war-fighting context,' adds Mr Chhabra.

The problem with agentic models, warns Dr Carter, is that they recursively generate their own prompts in response to a task, making them more unpredictable and increasing the risk of compounding errors (the short calculation at the end of this section illustrates this). OpenAI's most recent agentic model, ChatGPT agent, hallucinates in around 8% of answers, a higher rate than the company's earlier o3 model, according to an evaluation published by the firm. Some AI labs see such concerns as bureaucratic rigidity, but it is simply a healthy conservatism, says Dr Carter. 'What you have, particularly in the GCHQ,' he says, referring to the NSA's British counterpart, 'is an incredibly talented engineering workforce that are naturally quite sceptical about new technology.'

This also relates to a wider debate about where the future of AI lies. Dr Carter is among those who argue that the architecture of today's general-purpose LLMs is not designed for the sort of cause-effect reasoning that gives them a solid grasp on the world. In his view, the priority for intelligence agencies should be to push for new types of reasoning models.

Others warn that China might be racing ahead. 'There still remains a huge gap in our understanding as to how and how far China has moved to use DeepSeek' for military and intelligence purposes, says Philip Reiner of the Institute for Security and Technology, a think-tank in Silicon Valley. 'They probably don't have similar guardrails like we have on the models themselves, and so they're possibly going to be able to get more powerful insights, faster,' he says. On July 23rd the Trump administration ordered the Pentagon and intelligence agencies to regularly assess how quickly America's national-security agencies are adopting AI relative to competitors such as China, and to 'establish an approach for continuous adaptation'. Almost everyone agrees on this. Senator Warner argues that American spooks have been doing a 'crappy job' of tracking China's progress. 'The acquisition of technology [and] penetration of Chinese tech companies is still quite low.'
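The retrieval-augmented-generation technique mentioned above is simple to sketch: retrieve vetted passages first, then instruct the model to answer only from them. The following minimal Python sketch shows the pattern; the toy corpus, the keyword-overlap scoring, and the call_llm stub are illustrative assumptions, not any agency's or lab's actual pipeline.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve reliable
# passages first, then feed them to the LLM so it answers from sourced text
# instead of guessing. Corpus, scoring, and the model call are stand-ins.

CORPUS = [
    "Mistral's Saba model is trained on data from the Middle East and South Asia.",
    "Claude Gov is fine-tuned to handle documents with classified markings.",
    "The Pentagon awarded contracts worth up to $200m each to four AI labs.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank corpus passages by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda doc: -len(q_words & set(doc.lower().split())))
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; swap in an actual client here."""
    return f"[model response to {len(prompt)}-char grounded prompt]"

def rag_answer(query: str) -> str:
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    prompt = (
        "Answer using ONLY the sources below; say 'unknown' if they don't cover it.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

print(rag_answer("Which model is trained on Middle East data?"))
```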
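Dr Carter's compounding-error worry is also easy to quantify. If each step of an agentic run independently goes wrong about 8% of the time (the per-answer hallucination rate cited above for ChatGPT agent), the chance of a fully clean run decays geometrically with the number of steps; the independence assumption is a simplification for illustration.

```python
# Rough illustration of why errors compound in multi-step agentic runs.
# Assumes each step fails independently at the ~8% per-answer rate the
# article cites; real failure modes are correlated, so this is a sketch.

PER_STEP_ERROR = 0.08

for steps in (1, 5, 10, 20):
    p_clean = (1 - PER_STEP_ERROR) ** steps
    print(f"{steps:2d} steps -> P(no hallucination anywhere) = {p_clean:.1%}")

# Output: 1 step -> 92.0%; 5 -> 65.9%; 10 -> 43.4%; 20 -> 18.9%
```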
The biggest risk, says Ms Mulligan, is not that America rushes into the technology before understanding the risks. 'It's that DoD and the IC keep doing things the way they've always done them. What keeps me up at night is the real possibility that we could win the race to AGI [artificial general intelligence]...and lose the race on adoption.'