
US agency approves OpenAI, Google, Anthropic for federal AI vendor list
The move by the General Services Administration allows the federal government to advance adoption of AI tools by making them available to government agencies through a platform with contract terms already in place. The GSA said the approved AI providers "are committed to responsible use and compliance with federal standards."
(Reporting by David Shepardson and Harshita Mary Varghese in Bengaluru)

Related Articles

Malay Mail
7 minutes ago
OpenAI eyes RM2.1t valuation in potential employee share sale, source says
SAN FRANCISCO, Aug 6 — ChatGPT maker OpenAI is in early-stage discussions about a stock sale that would allow employees to cash out and could value the company at about US$500 billion (RM2.1 trillion), a source familiar with the matter said. Existing investors, including Thrive Capital, are in discussions to participate, said the source, who requested anonymity because the talks are private.

The US$500 billion valuation is an eye-popping bump-up from the US$300 billion valuation that the Microsoft-backed company currently holds. The share sale would offer a financial incentive to employees as technology giants such as Meta compete aggressively for AI researchers with lucrative compensation packages. Thrive Capital declined to comment. Bloomberg first reported the potential sale yesterday. — Reuters


The Star
38 minutes ago
Anthropic unveils more powerful AI model ahead of rival GPT-5 release
Anthropic is releasing a new version of its most powerful artificial intelligence model as rival OpenAI nears the long-awaited launch of its GPT-5 system. Anthropic is set to announce Tuesday the release of Opus 4.1, an update to its high-end AI model that the company claims is more capable at coding, research and data analysis. The new offering is also better at fielding complex multi-step problems, the company said, positioning it as a more effective AI agent.

The update is part of a shift toward making more incremental improvements to its coding models, in addition to larger model releases. "In the past, we were too focused on only shipping the really big upgrades," said Anthropic Chief Product Officer Mike Krieger. "It's better at coding, better at reasoning, better at agentic tasks. We're just making it better for people."

Founded in 2021 by a group of former OpenAI employees, Anthropic has tried to distinguish itself from rivals with advanced models and a greater emphasis on responsible AI development. Anthropic's Claude software has also excelled in coding, a key revenue growth area for the company. The San Francisco-based company is generating about US$5bil (RM21bil) in annualised revenue, Bloomberg News has reported. Anthropic is also in the midst of finalising a deal to raise as much as US$5bil (RM21bil) in a new funding round at a valuation of US$170bil (RM718bil), Bloomberg previously reported.

But Anthropic faces significant competition. Alphabet Inc's Google and OpenAI have introduced features designed to help programmers streamline the process of writing and debugging code. OpenAI executives have also been publicly teasing GPT-5, with reports suggesting it could come as soon as this month.
"One thing I've learned, especially in AI as it's moving quickly, is that we can focus on what we have, and what other folks are going to do is ultimately up to them," Krieger said when asked about OpenAI's upcoming release. "We'll see what ends up happening on the OpenAI side, but for us, we really just focused on what we can deliver for the customers we have."

Anthropic's new model, released two months after Opus 4, is billed as more adept at coding. It can better navigate large codebases and be more precise in making modifications to code, the company said. The upgraded model also scores two percentage points higher than its predecessor on the popular coding evaluation benchmark SWE-Bench Verified, Anthropic said. Customers such as the coding app Windsurf, which was recently acquired by Cognition, and Rakuten Group Inc have been seeing faster and improved coding task completion with the model, according to customer statements Anthropic shared. – Bloomberg


The Star
38 minutes ago
Nvidia reiterates its chips have no backdoors, urges US against location verification
BEIJING (Reuters) - Nvidia has published a blog post reiterating that its chips do not have backdoors or kill switches, and appealed to U.S. policymakers to forgo such ideas, saying they would be a "gift" to hackers and hostile actors.

The blog post, published on Tuesday in both English and Chinese, comes a week after the Chinese government summoned the U.S. artificial intelligence (AI) chip giant to a meeting, saying it was concerned by a U.S. proposal for advanced chips sold abroad to be equipped with tracking and positioning functions.

The White House and both houses of the U.S. Congress have proposed requiring U.S. chip firms to include location verification technology with their chips to prevent them from being diverted to countries where U.S. export laws ban sales. The separate bills and the White House recommendation have not become a formal rule, and no technical requirements have been established.

"Embedding backdoors and kill switches into chips would be a gift to hackers and hostile actors. It would undermine global digital infrastructure and fracture trust in U.S. technology," Nvidia said. The company said last week that its products have no backdoors that would allow remote access or control. A backdoor refers to a hidden method of bypassing normal authentication or security controls. Nvidia emphasised that "there is no such thing as a 'good' secret backdoor - only dangerous vulnerabilities that need to be eliminated."

(Reporting by Liam Mo and Brenda Goh; Editing by Raju Gopalakrishnan)