OpenAI, Google, and Anthropic get green light for civilian AI use in US; soon could power government workdesks

Mint · 15 hours ago
The US government has approved OpenAI, Google, and Anthropic as official vendors for artificial intelligence tools, making it easier for federal agencies to access and use advanced language models.
This announcement comes from the General Services Administration (GSA), the government's main purchasing body. These AI tools, including OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude, will now be available through a central federal contracting platform called the Multiple Award Schedule (MAS).
Until now, government departments had to go through lengthy negotiations to use AI technologies. With the new arrangement, those tools can be bought and deployed much more quickly because contract terms have already been set.
GSA officials said the approved tools met performance and security standards, though the specific terms of the contracts have not been made public. The agency has previously used its purchasing power to get lower prices from big software firms like Adobe and Salesforce.
Officials added that other AI providers may be added later. These three firms were simply further along in the process.
'We're not choosing winners or losers,' said GSA Deputy Administrator Stephen Ehikian. 'We want as many tools as possible for different use cases across government departments.'
The move is expected to allow wider AI use beyond pilot programmes and national security. Agencies including the Treasury Department and the Office of Personnel Management (OPM) have already shown interest.
In the past, AI has been tested in areas like patent processing, fraud detection, grant reviews, and copy editing.
OPM Director Scott Kupor said AI tools could be used to build chatbots for public queries or to quickly summarise thousands of public comments during policy changes, a job that usually takes weeks.
But he also pointed out a challenge: 'We're probably missing people who are super familiar with modern AI tools,' he said, suggesting departments may need to hire more tech-savvy staff.
'We can't just throw things against the wall and see what sticks,' he added.
This shift comes shortly after President Donald Trump signed new executive orders on AI. One of them requires that any AI tools used by federal agencies must be 'free from ideological bias'. Enforcing this rule will be handled by each agency separately, according to the GSA.
'This is a race,' said Josh Gruenbaum, who leads the GSA's Federal Acquisition Service. 'And as the president said, we're going to win it.'
While the Pentagon has already awarded separate AI contracts to OpenAI and Elon Musk's xAI, Tuesday's announcement focuses on AI use in civilian departments.

Related Articles

Anthropic releases Claude Opus 4.1 with improvements in coding

The Hindu · 9 minutes ago

Anthropic has released Claude Opus 4.1, the successor to Claude Opus 4, with improvements in coding, reasoning, and agentic tasks. The AI firm claimed the model 'improves Claude's in-depth research and data analysis skills, especially around detail tracking and agentic search.'

According to the announcement blog post, Opus 4.1 advances Anthropic's state-of-the-art coding performance to 74.5% on SWE-bench Verified, a benchmark that measures how well AI models solve real-world software engineering tasks sourced from GitHub, up from the 72.5% achieved by Opus 4. The model beat rivals including OpenAI's o3 and Google's Gemini 2.5 Pro on several benchmarks, such as agentic coding and multilingual Q&A, while those models outperformed Opus 4.1 on other tasks, such as visual reasoning and high school math.

Opus 4.1 is available to paid Claude users ($20 a month for Claude Pro and $100 a month for Claude Max), in Claude Code, and via the API, Amazon Bedrock, and Google Cloud's Vertex AI. Pricing is unchanged from Opus 4.

Anthropic had unveiled Claude Opus 4 towards the end of May. Just last week, the firm confirmed that it had revoked OpenAI's access to Claude Code after the Sam Altman-led company was found to be using Anthropic's coding tools ahead of the expected GPT-5 launch. A recent report in The Information had revealed that GPT-5 was rumoured to improve coding capabilities to compete with Anthropic's Claude, which has become popular among coders.
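For developers already paying for API access, moving to the new model is essentially a one-line change. Below is a minimal sketch using Anthropic's Python SDK, assuming the `anthropic` package is installed and an `ANTHROPIC_API_KEY` is set in the environment; the model identifier shown is an assumption based on Anthropic's usual naming and should be checked against the official model list.

```python
# Minimal sketch: calling Claude Opus 4.1 through Anthropic's Python SDK.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY in the environment;
# the model string below is an assumed alias, so verify it against the
# model list in your Anthropic account before relying on it.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-1",  # assumed alias for the Opus 4.1 release
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Refactor this function to remove the nested loops."}
    ],
)

# The response body is a list of content blocks; print the first text block.
print(response.content[0].text)
```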

OpenAI finally launches ‘open' AI models after over five years

Indian Express · 9 minutes ago

For the first time in more than five years, OpenAI has released two open-weight AI reasoning models, amid China's rise in open-source AI technology and questions about OpenAI straying from its initial objective of building openly available technology.

The models are free to download from Hugging Face and do not need high computing power to run, with capabilities similar to the company's o-series models. They come in two sizes: a larger, more capable gpt-oss-120b model that can run on a single Nvidia GPU, and a lighter-weight gpt-oss-20b model that can run on a consumer laptop with 16GB of memory. This is the company's first 'open' language model release since GPT-2 in 2019.

For OpenAI, this is a shift from its focus on building primarily proprietary models, but one necessitated by the meteoric rise of China's DeepSeek, whose open-source model took the AI world by storm. That release also affirmed China's lead in open-source AI, with the US taking a backseat and its administration urging developers to open-source more technologies.

To be sure, the models released by OpenAI are 'open weight', not open source, and the former offers less transparency than the latter. Open-source models provide full transparency, sharing source code, model architecture, training algorithms, and weights under a licence allowing free use, modification, and distribution; ideally the training data is disclosed too, though legal constraints often limit this. Open-weight models release only the trained model weights, not the source code, training data, or full architecture details, which restricts transparency and customisation: users can run the model but not fully modify or retrain it.

After years of focusing on closed-source technology, the shift in strategy at OpenAI was triggered by DeepSeek, which showed the world that an open-sourced language model could be built at a fraction of the cost some competitors spent developing theirs. Meta has also found success with its open-weight model, Llama, which has passed a billion downloads, even though developers have complained that its licence terms can be commercially restrictive. Until now, OpenAI has offered its AI models only through a chatbot and the cloud, unlike some rivals, whose models can be downloaded and modified.

In a recent Reddit Q&A, OpenAI CEO Sam Altman said the company had been on the wrong side of history when it comes to open-sourcing its technologies. '[I personally think we need to] figure out a different open source strategy,' Altman said. 'Not everyone at OpenAI shares this view, and it's also not our current highest priority… We will produce better models, but we will maintain less of a lead than we did in previous years.'

According to a feedback form published on its website, OpenAI invited 'developers, researchers, and [members of] the broader community' to weigh in, with questions such as 'What would you like to see in an open weight model from OpenAI?' and 'What open models have you used in the past?'.
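Because the weights are a free download, the smaller model can be tried locally. Below is a minimal sketch using the Hugging Face `transformers` library, assuming `transformers`, `torch`, and `accelerate` are installed on a recent version and the machine has roughly the 16GB of memory OpenAI cites for gpt-oss-20b; the repository name follows the release described above.

```python
# Minimal sketch: running the open-weight gpt-oss-20b model locally via
# the Hugging Face transformers pipeline. Assumes a recent
# `pip install transformers torch accelerate` and ~16GB of memory,
# per OpenAI's stated requirement for this model size.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # repo name from the Hugging Face release
    torch_dtype="auto",          # let the library pick a suitable precision
    device_map="auto",           # place weights on GPU if one is available
)

output = generator(
    "Explain the difference between open-weight and open-source models.",
    max_new_tokens=200,
)
print(output[0]["generated_text"])
```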

WhatsApp launches ‘Safety Overview' tool, bans 6.8 million criminal scam centre-linked accounts in 2025

Indian Express · 9 minutes ago

At a time when digital scams are on the rise, WhatsApp has launched a new safety tool designed to help users avoid being roped into suspicious or unfamiliar groups. The feature, called 'Safety Overview', will appear when someone not in your contacts adds you to a group, a tactic often used by scammers. Rolling out in India this week, it aims to make group invitations less intrusive and more transparent.

When you are added to a group by someone you don't know, the feature shows you key information about the group, such as who created it, the number of participants, and general safety tips, before you see any messages. You can choose to exit the group or, if it feels familiar, view the chats; until you decide, notifications remain muted.

The move is part of WhatsApp's ongoing push to protect users from fraud. WhatsApp and Meta's security teams are actively working to take down large-scale criminal scam centres, many of which operate out of Southeast Asia and are often run by organised crime and fuelled by forced labour. In the first half of this year alone, those teams detected and banned over 6.8 million accounts linked to these scam centres, often taking them down before they could become fully operational.

Recently, OpenAI, Meta, and WhatsApp worked together to stop a scam operation in Cambodia. This particular scam used ChatGPT to create initial messages that directed people to a WhatsApp chat, which then moved to Telegram. Before requesting that money be sent to a cryptocurrency account, the scammers would first establish trust by offering fictitious jobs, such as getting paid to 'like' videos.

While new features and enforcement efforts enhance security, users must remain vigilant to protect themselves. It is crucial to pause and consider the risks before responding to suspicious messages, particularly those from unknown numbers promising quick financial gains. Users should also make use of WhatsApp's built-in protections: performing a privacy checkup to customise who can contact them and see their online status, enabling two-step verification to prevent account takeovers, using the block and report feature on suspicious messages, turning on 'Silence Unknown Callers' to deter call-based scams, and making sure they are using the official WhatsApp application rather than a malicious fake version.
