
Latest news with #CerebrasSystems

Who Needs Big AI Models?

Forbes
08-07-2025

Cerebras Systems CEO and Founder Andrew Feldman

The AI world continues to evolve rapidly, especially since the introduction of DeepSeek and its followers. Many have concluded that enterprises don't really need the large, expensive AI models touted by OpenAI, Meta, and Google, and are focusing instead on smaller models, such as DeepSeek V2-Lite with 2.4B parameters, or Llama 4 Scout and Maverick with 17B active parameters, which can provide decent accuracy at a lower cost.

It turns out that this is not the case for coders, or more accurately, for the models that can and will replace many coders. Nor does the smaller-is-better mantra apply to reasoning or agentic AI, the next big thing. AI code generators require large models with a wide context window, capable of accommodating approximately 100,000 lines of code. Mixture-of-experts (MoE) models supporting agentic and reasoning AI are also large. But these massive models are typically quite expensive, costing around $10 to $15 per million output tokens on modern GPUs. Therein lies an opportunity for novel AI architectures to encroach on GPUs' territory.

Cerebras Systems Launches Big AI with Qwen3-235B

Cerebras Systems (a client of Cambrian-AI Research) has announced support for the large Qwen3-235B with a 131K context length (about 200–300 pages of text), four times what it previously offered. At the RAISE Summit in Paris, Cerebras touted Alibaba's Qwen3-235B, which uses a highly efficient mixture-of-experts architecture to deliver exceptional compute efficiency. But the real news is that Cerebras can run the model at only $0.60 per million input tokens and $0.60 per million output tokens—less than one-tenth the cost of comparable closed-source models. While many consider the Cerebras wafer-scale engine expensive, this data turns that perception on its head.

Agents are a use case that frequently requires very large models. One question I frequently get is: if Cerebras is so fast, why doesn't it have more customers? One reason is that it has not supported large context windows and larger models. Those seeking to develop code, for example, do not want to break a problem into smaller fragments to fit, say, a 32K-token context. Now, that barrier to sales has evaporated.

"We're seeing huge demand from developers for frontier models with long context, especially for code generation," said Cerebras Systems CEO and Founder Andrew Feldman. "Qwen3-235B on Cerebras is our first model that stands toe-to-toe with frontier models like Claude 4 and DeepSeek R1. And with full 131K context, developers can now use Cerebras on production-grade coding applications and get answers back in less than a second instead of waiting for minutes on GPUs."

Cerebras is not just 30 times faster; it is 92% cheaper than GPUs. Cerebras has quadrupled its context length support from 32K to 131K tokens—the maximum supported by Qwen3-235B. This expansion directly affects the model's ability to reason over large codebases and complex documentation. While a 32K context is sufficient for simple code-generation use cases, a 131K context enables the model to process dozens of files and tens of thousands of lines of code simultaneously, allowing for production-grade application development.

Cerebras is 15-100 times more affordable than GPUs when running Qwen3-235B

Qwen3-235B excels at tasks requiring deep logical reasoning, advanced mathematics, and code generation, thanks to its ability to switch between "thinking mode" (for high-complexity tasks) and "non-thinking mode" (for efficient, general-purpose dialogue). The 131K context length allows the model to ingest and reason over large codebases (tens of thousands of lines), supporting tasks such as code refactoring, documentation, and bug detection.

Cerebras also announced the further expansion of its ecosystem, with support from Amazon AWS, as well as DataRobot, Docker, Cline, and Notion. The addition of AWS to its cloud portfolio is particularly significant.

Where is this heading? Big AI has constantly been downsized and optimized, with orders-of-magnitude gains in performance and reductions in model size and price. This trend will undoubtedly continue, but it will be constantly offset by increases in capabilities, accuracy, intelligence, and entirely new features across modalities. So, if you want last year's AI, you're in great shape, as it continues to get cheaper. But if you want the latest features and functions, you will need the largest models and the longest input context length. It's the yin and yang of AI.
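To make the pricing gap concrete, here is a back-of-the-envelope sketch of the per-request arithmetic using the per-million-token prices quoted above. The token counts are hypothetical, chosen to fill most of a 131K-token context window; note that the article quotes GPU pricing for output tokens only, so the GPU figure below understates the full request cost.

```python
# Back-of-the-envelope cost for one long-context coding request,
# using the per-million-token prices quoted in the article.
# Token counts are hypothetical: a refactoring job that fills most
# of a 131K-token context window and emits ~8K tokens of code.
M = 1_000_000
input_tokens, output_tokens = 120_000, 8_000

# Cerebras Qwen3-235B: $0.60 per million tokens, input and output alike.
cerebras_cost = (input_tokens + output_tokens) * 0.60 / M

# GPU-served frontier model: the article quotes $10-$15 per million
# OUTPUT tokens and gives no input price, so compare output cost only.
gpu_low = output_tokens * 10.00 / M
gpu_high = output_tokens * 15.00 / M

print(f"Cerebras (input + output): ${cerebras_cost:.4f}")    # ~$0.0768
print(f"GPU (output only): ${gpu_low:.4f}-${gpu_high:.4f}")  # $0.08-$0.12
```

Even before counting GPU input tokens, the output side alone already matches the entire Cerebras request cost in this scenario, which is where the order-of-magnitude claims come from once input pricing is included.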

Cerebras Enables Notion to Deliver Real-Time Enterprise Search for 100+ Million Workspace Users

Business Wire
08-07-2025

PARIS & SUNNYVALE, Calif.--(BUSINESS WIRE)--Cerebras Systems, the pioneer in accelerating generative AI, today announced that Notion, the all-in-one connected workspace, is using Cerebras' industry-leading AI inference technology to power instant, enterprise-scale document search for its AI offering, Notion AI for Work.

With more than 100 million users worldwide, Notion is redefining productivity for teams across the globe. Now, by running enterprise search on Cerebras, Notion delivers the speed and scale required for modern knowledge work—streaming results in under 300 milliseconds, with no lag and no latency spikes.

'For Notion, productivity is everything. Cerebras gives us the instant, intelligent AI needed to power real-time features like enterprise search, and enables a faster, more seamless user experience,' said Sarah Sachs, AI Lead at Notion.

'Cerebras Inference enables Notion users to instantly pull insights from all enterprise documents, including wikis, project documents, meeting notes and more. These docs will now think as fast as you do,' said Angela Yeung, VP of Product, Cerebras. 'Notion AI for Work can handle hundreds of millions of pages without slowdowns by leveraging Cerebras' inference technology.'

Notion AI for Work, featuring Cerebras' inference capabilities, is available for business and enterprise customers. For more information, please visit the Cerebras website.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world's largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to make the largest AI supercomputers in the world, and make placing models on the supercomputers dead simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions for the development of pathbreaking proprietary models, and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on-premises. For further information, visit the Cerebras website or follow us on LinkedIn, X and/or Threads.
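The sub-300 ms figure is a time-to-first-token measurement on a streamed response. Purely as an illustration of how such a number is measured client-side, here is a minimal sketch against a generic OpenAI-compatible streaming endpoint; the base URL, model name, and environment variable are placeholders, not Notion's or Cerebras' actual configuration.

```python
# Minimal sketch: measuring time-to-first-token on a streamed chat
# completion. The endpoint, model id, and key are placeholders.
import os
import time
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://api.example.com/v1",   # placeholder endpoint
    api_key=os.environ["EXAMPLE_API_KEY"],   # placeholder credential
)

start = time.perf_counter()
stream = client.chat.completions.create(
    model="example-model",  # placeholder model id
    messages=[{"role": "user", "content": "Find our Q3 planning notes."}],
    stream=True,
)

# The first non-empty content chunk marks the user-perceived latency.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        ttft_ms = (time.perf_counter() - start) * 1000
        print(f"time to first token: {ttft_ms:.0f} ms")
        break
```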

Cerebras Launches Qwen3-32B: Real-Time Reasoning with One of the World's Most Powerful Open Models

Yahoo
15-05-2025

SUNNYVALE, Calif., May 15, 2025--(BUSINESS WIRE)--Cerebras today announced the launch of Qwen3-32B, one of the most advanced open-weight models in the world, now available on the Cerebras Inference Platform. Developed by Alibaba, Qwen3-32B rivals the performance of leading closed models like GPT-4.1 and DeepSeek R1—and now, for the first time, it runs on Cerebras with real-time responsiveness.

Qwen3-32B on Cerebras performs sophisticated reasoning and returns the answer in just 1.2 seconds—up to 60x faster than comparable reasoning models such as DeepSeek R1 and OpenAI o3. This is the first reasoning model on any hardware to achieve real-time reasoning. Qwen3-32B on Cerebras is the fastest reasoning model API in the world, ready to power production-grade agents, copilots, and automation workloads.

"This is the first time a world-class reasoning model—on par with DeepSeek R1 and OpenAI's o-series—can return answers instantly," said Andrew Feldman, CEO and co-founder of Cerebras. "It's not just fast for a big model. It's fast enough to reshape how real-time AI gets built."

The First Real-Time Reasoning Model

Reasoning models are widely recognized as the most powerful class of large language models—capable of multi-step logic, tool use, and structured decision-making. But until now, they've come with a tradeoff: latency. Inference often takes 30–90 seconds, making them impractical for responsive user experiences. Cerebras eliminates that bottleneck. Qwen3-32B delivers first-token latency in just one second and completes full reasoning chains in real time. This is the only solution on the market today that combines high intelligence with real-time speed—and it's available now.

Transparent, Scalable Pricing

Qwen3-32B is available on Cerebras with simple, production-ready pricing:

  • $0.40 per million input tokens
  • $0.80 per million output tokens

This is 10x cheaper than GPT-4.1, while offering comparable or better performance. All developers receive 1 million free tokens per day, with no waitlist. Qwen3-32B is fully open-weight and Apache 2.0 licensed, and can be integrated in seconds using standard OpenAI- or Claude-compatible endpoints, as sketched below.

Qwen3-32B is live now on the Cerebras Inference Platform. For teams seeking to build fast, intelligent, production-ready AI systems, it is the most powerful open model you can use today.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world's largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to make the largest AI supercomputers in the world, and make placing models on the supercomputers dead simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions for the development of pathbreaking proprietary models, and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on-premises. For further information, visit the Cerebras website or follow us on LinkedIn or X.
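Since the release says integration goes through standard OpenAI-compatible endpoints, a minimal sketch of that path with the OpenAI Python client looks like the following. The base URL and model identifier shown are assumptions and may differ from Cerebras' current published values; check the platform documentation before use.

```python
# Minimal sketch: calling Qwen3-32B through an OpenAI-compatible
# endpoint, as described in the release. Base URL and model id are
# assumptions, not confirmed values.
import os
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",   # assumed endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],  # key from the platform
)

resp = client.chat.completions.create(
    model="qwen-3-32b",  # assumed model id
    messages=[
        {"role": "user",
         "content": "Outline, step by step, a safe database migration."},
    ],
)
print(resp.choices[0].message.content)
```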


Trump, Sheikh Tahnoon advance UAE-US $1 trillion economic ties in talks on AI, energy investments

Arabian Business
19-03-2025

US President Donald Trump hosted Sheikh Tahnoon bin Zayed Al Nahyan, Deputy Ruler of Abu Dhabi and UAE National Security Adviser, at the White House for discussions focused on deepening bilateral relations in technology, energy and economic development.

The high-level talks, which included a dinner attended by senior officials from both countries on Tuesday evening, highlighted the 'long-standing ties and bonds of friendship' between the United States and the United Arab Emirates, according to a statement posted by Trump on social media. 'UAE and the US have long been partners in the work to bring peace and security to the Middle East and the world,' Trump wrote. 'Discussions also included ways for our countries to increase our partnership for the advancing of our economic and technological futures.'

Sheikh Tahnoon conveyed greetings from UAE President Sheikh Mohamed bin Zayed Al Nahyan and expressed gratitude for the 'warm welcome and hospitality' during the White House visit. 'Our discussions focused on the future of the long-term strategic partnership between our two nations and reinforcing our shared vision for prosperity and progress,' Sheikh Tahnoon wrote on social media platform X on March 19, 2025.

The meetings come as the UAE and US continue to expand what the UAE embassy in the US described as a '$1 trillion economic relationship' that benefits all 50 states across sectors including aerospace, energy, manufacturing, technology, life sciences and healthcare. The UAE delegation, which included Dr Sultan Al Jaber, Minister of Industry and Advanced Technology, and Yousef Al Otaiba, Minister of State and UAE Ambassador to the US, also met with Vice President JD Vance on Monday. According to the UAE embassy in Washington, these meetings centred on 'deepening UAE-US collaboration on supporting energy investment and abundance, technological leadership, and unleashing unprecedented economic growth.'

Artificial intelligence featured prominently in the discussions, reflecting the UAE's ambitions to become a global leader in AI research and development. The Gulf nation has made significant investments in this sector, including partnerships with major US technology companies. Microsoft has invested $1.5 billion in Abu Dhabi's G42, while G42 and California-based Cerebras Systems delivered Condor Galaxy, which they describe as the world's largest and fastest AI supercomputer.

Trump has secured two major investments from the UAE since winning the presidential election, including a pledge of up to $20 billion from Damac Properties to build data centres in the United States. Additionally, UAE-based technology firm MGX joined BlackRock, Microsoft and Global Infrastructure Partners as an initial equity funder in the Stargate project, a new AI joint venture in Texas with an initial investment of $100 billion, expected to increase to $500 billion over the next four years.

Bilateral trade between the United States and UAE totalled $34.4 billion last year, according to the Office of the US Trade Representative, with US exports to the Emirates making up nearly $27 billion. The UAE represents the US's third-largest trade surplus globally.

During his visit, Sheikh Tahnoon also met US Treasury Secretary Scott Bessent on Tuesday for discussions on strengthening partnerships in economy, finance and advanced technology. The UAE has signalled its intention to accelerate investments in 'artificial intelligence, advanced technology, infrastructure, energy and healthcare,' which Sheikh Tahnoon described as 'key pillars for sustainable growth and development'.
