OpenAI rolls out 'lightweight' version of its ChatGPT deep research tool


Al Etihad · 26 Apr 2025 13:48
WASHINGTON (WAM): OpenAI has announced the launch of a new version of its advanced "Deep Research" tool integrated into ChatGPT, one that maintains a high level of quality while expanding access across user tiers. The announcement was made through OpenAI's official account on X (formerly Twitter).

The new version is powered by OpenAI's o4-mini model. According to the company, it is "nearly as intelligent" as the original version used for preparing in-depth, source-backed reports and summaries. The updated tool delivers shorter responses while preserving the depth and accuracy for which OpenAI's models are known.

The lightweight version of Deep Research is now available to Free users of ChatGPT, with a limit of five tasks per month. Subscribers to the Plus and Team plans receive 25 tasks per month across the original and lightweight versions combined, while Pro plan subscribers are granted 250 tasks per month. Starting next week, users on Enterprise and Education plans will also gain access to the new tool, with the same usage limits as the Plus and Team tiers.
Deep Research remains one of ChatGPT's most advanced capabilities, enabling users to analyse websites and diverse online sources to generate comprehensive reports complete with clear references. The introduction of a lightweight version ensures that more users can benefit from these capabilities while maintaining the high standards of research and reporting quality OpenAI is known for.


Related Articles

AI is learning to lie, scheme, and threaten its creators

Khaleej Times

5 hours ago


The world's most advanced AI models are exhibiting troubling new behaviours: lying, scheming, and even threatening their creators to achieve their goals. In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behaviour appears linked to the emergence of "reasoning" models, AI systems that work through problems step by step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts. "O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specialises in testing major AI systems. These models sometimes simulate "alignment", appearing to follow instructions while secretly pursuing different objectives.

'Strategic kind of deception'

For now, this deceptive behaviour only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organisation METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."

The concerning behaviour goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

No rules

Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents, autonomous tools capable of performing complex human tasks, become widespread. "I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections. "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around."

Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability", an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain sceptical of this approach. Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behaviour "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it."

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes, a concept that would fundamentally change how we think about AI accountability.

Kaspersky: ChatGPT-mimicking cyberthreats surge 115% in early 2025

Zawya

7 hours ago


In 2025, nearly 8,500 users from small and medium-sized businesses (SMBs) faced cyberattacks in which malicious or unwanted software was disguised as popular online productivity tools, Kaspersky reports. Based on the unique malicious and unwanted files observed, the most common lures included Zoom and Microsoft Office, with newer AI-based services like ChatGPT and DeepSeek increasingly exploited by attackers. Kaspersky has released threat analysis and mitigation strategies to help SMBs respond.

Kaspersky analysts explored how frequently malicious and unwanted software is disguised as legitimate applications commonly used by SMBs, using a sample of 12 online productivity apps. In total, Kaspersky observed more than 4,000 unique malicious and unwanted files disguised as popular apps in 2025.

With the growing popularity of AI services, cybercriminals are increasingly disguising malware as AI tools. The number of cyberthreats mimicking ChatGPT increased by 115% in the first four months of 2025 compared to the same period last year, reaching 177 unique malicious and unwanted files. Another popular AI tool, DeepSeek, accounted for 83 files; this large language model, launched in 2025, immediately appeared on the list of impersonated tools.

"Interestingly, threat actors are rather picky in choosing an AI tool as bait. For example, no malicious files mimicking Perplexity were observed. The likelihood that an attacker will use a tool as a disguise for malware or other types of unwanted software directly depends on the service's popularity and the hype around it. The more publicity and conversation there is around a tool, the more likely a user is to come across a fake package on the internet. To be on the safe side, SMB employees, as well as regular users, should exercise caution when looking for software on the internet or coming across too-good-to-be-true subscription deals. Always check the correct spelling of the website and links in suspicious emails. In many cases these links may turn out to be phishing, or links that download malicious or potentially unwanted software," says Vasily Kolesnikov, security expert at Kaspersky.
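Kolesnikov's advice to check the spelling of websites and links can be partly automated. The Python sketch below is a minimal, hypothetical illustration of that idea; the allowlist, the similarity threshold, and the function name are assumptions made for this example, not anything from Kaspersky's report:

```python
# Minimal sketch of the "check the spelling" advice above: flag lookalike
# domains by comparing them against a small allowlist of legitimate services.
# The allowlist and the 0.8 threshold are illustrative assumptions.
from difflib import SequenceMatcher

# Hypothetical allowlist of services an SMB actually uses.
LEGITIMATE_DOMAINS = {"zoom.us", "openai.com", "microsoft.com", "google.com"}

def looks_like_typosquat(domain: str, threshold: float = 0.8) -> bool:
    """Return True if `domain` closely resembles, but is not, a known
    legitimate domain (e.g. 'zo0m.us' instead of 'zoom.us')."""
    domain = domain.lower().strip()
    if domain in LEGITIMATE_DOMAINS:
        return False  # exact match with a known-good domain
    return any(
        SequenceMatcher(None, domain, legit).ratio() >= threshold
        for legit in LEGITIMATE_DOMAINS
    )

if __name__ == "__main__":
    for candidate in ("zoom.us", "zo0m.us", "chatgpt-free-download.xyz"):
        verdict = "suspicious" if looks_like_typosquat(candidate) else "ok"
        print(f"{candidate}: {verdict}")
```

Note that a string-similarity check only catches near-misses: a wholly different lure such as 'chatgpt-free-download.xyz' sails through, which is why the advice above pairs spelling checks with broader caution about too-good-to-be-true offers.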
Another cybercriminal tactic to watch for in 2025 is the growing use of collaboration platform brands to trick users into downloading or launching malware. The number of malicious and unwanted software files disguised as Zoom increased by nearly 13% in 2025, reaching 1,652, while names such as 'Microsoft Teams' and 'Google Drive' saw increases of 100% and 12%, respectively, with 206 and 132 cases. This pattern likely reflects the normalization of remote work and geographically distributed teams, which has made these platforms integral to business operations across industries.

Among the analyzed sample, the highest number of files mimicked Zoom, accounting for nearly 41% of all unique files detected. Microsoft Office applications remained frequent targets for impersonation: Outlook and PowerPoint each accounted for 16%, Excel for nearly 12%, while Word and Teams made up 9% and 5%, respectively.

[Chart: Share of unique files with names mimicking popular legitimate applications in 2024 and 2025]

The top threats targeting small and medium-sized businesses in 2025 included downloaders, trojans and adware.

Phishing and spam

Apart from malware threats, Kaspersky continues to observe a wide range of phishing and scam schemes targeting SMBs. Attackers aim to steal login credentials for various services, from delivery platforms to banking systems, or manipulate victims into sending them money through deceptive tactics. One example is a phishing attempt targeting Google Accounts: attackers promise potential victims increased sales from advertising their company on X, with the ultimate goal of stealing their credentials.

Beyond phishing, SMBs are flooded with spam emails. Not surprisingly, AI has also made its way into the spam folder, for example with offers for automating various business processes. In general, Kaspersky observes phishing and spam offers crafted to reflect the typical needs of small businesses, promising attractive deals on email marketing or loans, and offering services such as reputation management, content creation, or lead generation. Learn more about the cyber threat landscape for SMBs on Securelist.

To mitigate threats targeting businesses, their owners and employees are advised to implement the following measures:

• Use specialized cybersecurity solutions that provide visibility and control over cloud services (e.g., Kaspersky Next).
• Define access rules for corporate resources such as email accounts, shared folders, and online documents.
• Regularly back up important data.
• Establish clear guidelines for using external services.
• Create well-defined procedures for implementing new software with the involvement of IT and other responsible managers.

About Kaspersky

Kaspersky is a global cybersecurity and digital privacy company founded in 1997. With over a billion devices protected to date from emerging cyberthreats and targeted attacks, Kaspersky's deep threat intelligence and security expertise is constantly transforming into innovative solutions and services to protect individuals, businesses, critical infrastructure, and governments around the globe. The company's comprehensive security portfolio includes leading digital life protection for personal devices, specialized security products and services for companies, as well as Cyber Immune solutions to fight sophisticated and evolving digital threats. We help millions of individuals and over 200,000 corporate clients protect what matters most to them. Learn more at

UNESCO Champions Ethics as AI Race Intensifies

Arabian Post

2 days ago


UNESCO has mobilised global policymakers, academics and civil society leaders in Bangkok to cement the adoption of its 2021 Recommendation on the Ethics of Artificial Intelligence, the world's only universal AI ethics framework endorsed by all 194 member states. With over 1,200 delegates from 88 nations and more than 35 ministers present, the third Global Forum on the Ethics of AI underlined the urgency of embedding ethics into AI governance amid growing geopolitical tension between the United States and China.

UNESCO Director-General Audrey Azoulay urged attendees to forge multilateral cooperation. 'Preparing the world for AI and preparing AI for the world,' she said, must ensure AI 'serves the common good'. She announced the launch of a Global Network of AI Supervisory Authorities alongside a Global Network of Civil Society and Academic Organisations, aiming to support national regulators and promote public participation in AI policymaking.

Prime Minister Paetongtarn Shinawatra of Thailand, the first Asia-Pacific host of the forum, confirmed the country's neutral stance in the intensifying AI rivalry. She emphasised transparency, responsibility and ethical foundations as Bangkok seeks to develop its own domestic AI ecosystem.

Industry heavyweights such as OpenAI, Google and China's DeepSeek were conspicuously absent, highlighting the challenge of securing tech-sector buy-in amid mounting tensions in tech diplomacy. Analysts note that US congressional proposals to ban federal use of China-linked AI tools reflect a broader decoupling trend, complicating efforts to forge a global consensus.

UNESCO's Readiness Assessment Methodology, applied across 70 countries, including seven ASEAN nations, was showcased as a diagnostic tool to bridge ethical principles and domestic policy. The forum featured 22 thematic sessions and 11 side events exploring AI's intersection with gender, environmental sustainability, health, education, neurotechnology, quantum computing and judicial systems.

Participants stressed that ethical governance need not hamper innovation. As one policy advisor noted, a rights-based approach is key to building public trust and preventing inequalities. Commentators also drew attention to the absence of senior officials from the US, a potential signal that Washington is prioritising tech protection over global ethics cooperation.

Experts at the forum compared the regulatory philosophies of the US and China. A recent academic analysis highlights the divergences: the US has focused on export controls and safety standards, whereas China emphasises state-led data governance and mandatory domestic ethics guidelines. Participants warned that such divergent domestic approaches risk widening the digital divide and obstructing international regulatory coherence.

UNESCO also unveiled a new Global AI Ethics Observatory and an 'Ethics Experts without Borders' network to promote knowledge-sharing and rapid deployment of best practices. Civil society groups welcomed the establishment of a global network linking NGOs and academic institutions, describing it as a vital step toward inclusive governance.

Thailand's cultural prominence was also noted. Azoulay praised its heritage, from UNESCO World Heritage sites to intangible cultural landmarks like Tom Yum Kung, as a backdrop that reinforces the need to respect diversity when crafting AI policy.

Despite strong momentum, analysts caution that global fragmentation remains a major threat. The absence of major private tech firms and widening geopolitical divides limit the prospects for a truly universal ethics framework. Success will depend on translating global principles into enforceable national regulations and aligning the competing visions of Washington, Beijing and Brussels.
