
Cerebras Launches Cerebras Inference Cloud in AWS Marketplace
"Now customers can easily procure Cerebras's ultra-fast inference through their AWS accounts and workflows, enabling them to tackle problems that were previously out of reach," said Chris Grusz, Managing Director, Technology Partnerships, AWS.
Amazon Web Services (AWS) customers now have access to Cerebras Inference Cloud directly within AWS Marketplace, allowing them to streamline purchasing and management of Cerebras Inference Cloud through their existing AWS Marketplace accounts. Customers can pair Cerebras inference with cutting-edge frameworks and developer tools, delivering agentic applications that are faster to build, easier to deploy, and dramatically more responsive.
"We're excited to bring the power of Cerebras inference to millions of builders and enterprises in AWS Marketplace," said Alan Chhabra, EVP of Worldwide Partnerships, Cerebras. "From financial services to LLM-powered developer tools, this expansion makes it possible to build the fastest, most efficient AI applications ever deployed."
"With Cerebras on AWS Marketplace, the world's fastest AI computing system is now available with the push-button simplicity of the AWS cloud," said Babak Pahlavan, Founder & CEO, NinjaTech AI. "AWS is a long-time and preferred cloud partner for NinjaTech AI, and having Cerebras available on AWS Marketplace makes it even more seamless for us and others to build amazingly fast AI agents."
"We are thrilled to welcome Cerebras to AWS Marketplace," said Chris Grusz, Managing Director, Technology Partnerships, AWS. "Now customers can easily procure Cerebras's ultra-fast inference through their AWS accounts and workflows, enabling them to tackle problems that were previously out of reach. We're excited to see how our customers leverage this technology to build the next generation of AI."
About Cerebras Systems
Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world's largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to make the largest AI supercomputers in the world, and make placing models on the supercomputers dead simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions for the development of pathbreaking proprietary models, and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on-premises. For further information, visit cerebras.ai or follow us on LinkedIn, X and/or Threads.
Related Articles


CNBC
29 minutes ago
Super Micro plans to ramp up manufacturing in Europe to capitalize on AI demand
PARIS — Super Micro plans to increase its investment in Europe, including ramping up manufacturing of its AI servers in the region, CEO Charles Liang told CNBC in an interview that aired on Wednesday.

The company sells servers packed with Nvidia chips, which are key for training and running huge AI models. It has manufacturing facilities in the Netherlands and could expand to other locations.

"But because the demand in Europe is growing very fast, so I already decided, indeed, [there's] already a plan to invest more in Europe, including manufacturing," Liang told CNBC at the Raise Summit in Paris, France. "The demand is global, and the demand will continue to improve in [the] next many years," Liang added.

Liang's comments come less than a month after Nvidia CEO Jensen Huang visited various parts of Europe, signing infrastructure deals and urging the region to ramp up its computing capacity.

Super Micro rode the growth wave after OpenAI's ChatGPT boom boosted demand for Nvidia's chips, which underpin big AI models. The server maker's stock hit a record high in March 2024, but it now sits around 60% below that all-time high over concerns about the company's accounting and financial reporting. In February, however, the company filed its delayed financial report for its 2024 fiscal year, assuaging those fears.

In May, the company reported weaker-than-expected guidance for the current quarter, raising concerns about demand for its products. Liang dismissed those fears: "Our growth rate continues to be strong, because we continue to grow our fundamental technology, and we [are] also expanding our business scope," he said. "So the room … to grow will be still very tremendous, very big."
Yahoo
4 hours ago
LangChain is about to become a unicorn, sources say
LangChain, an AI infrastructure startup providing tools to build and monitor LLM-powered applications, is raising a new round of funding at an approximate $1 billion valuation led by IVP, according to three sources with knowledge of the deal.

LangChain began its life in late 2022 as an open-source project founded by Harrison Chase, who was then an engineer at machine learning startup Robust Intelligence. After generating significant developer interest, Chase transformed the project into a startup, securing a $10 million seed round from Benchmark in April 2023. That round was followed a week later by a $25 million Series A led by Sequoia, reportedly valuing LangChain at $200 million.

The startup was an early darling of the AI era. When LangChain first emerged, LLMs lacked access to real-time information and the ability to perform actions such as searching the web, calling APIs, and interacting with databases. The startup's open-source code solved those problems with a framework for building apps on top of LLMs, and it became a hugely popular project on GitHub (111K stars, over 18,000 forks).

The LLM ecosystem has since expanded significantly, with new startups including LlamaIndex, Haystack, and AutoGPT now offering comparable features. Furthermore, leading LLM providers including OpenAI, Anthropic, and Google have evolved their APIs to directly offer capabilities that were once key differentiators for LangChain's core technology.

So the company has added other products, including LangSmith, a separate, closed-source product for observability, evaluation, and monitoring of LLM applications, specifically agents. This product has soared in popularity, multiple people tell us. Since its introduction last year, LangSmith has led the company to reach annual recurring revenue (ARR) between $12 million and $16 million, four sources told TechCrunch. The company didn't respond to a request for comment.
Developers can start working with LangSmith for free and upgrade to $39 per month for small-team collaboration features, according to the company's website. LangChain also offers custom plans for large organizations. Companies that use LangSmith include Klarna, Rippling, and Replit.

While LangSmith currently leads the burgeoning LLM operations space, it does have competitors, such as the smaller, open-source Langfuse and Helicone. IVP declined to comment on this report.