xAI Selects Oracle Cloud Infrastructure for Grok AI Models


TECHx | 18-06-2025
xAI has announced its selection of Oracle Cloud Infrastructure (OCI) to offer its Grok AI models through Oracle's Generative AI service. The collaboration targets a wide range of applications, including content creation, research, and business process automation.
The company revealed that it will use OCI's scalable, high-performance, and cost-efficient AI infrastructure to train and run inferencing for its next-generation Grok models. Jimmy Ba, co-founder of xAI, said Grok 3 marks a significant leap forward in AI capabilities. He added that Oracle's advanced data platform will accelerate the impact of Grok 3 on enterprises.
Founded in March 2023, xAI is known for pushing the boundaries of AI innovation. Its latest model, Grok 3, features enhanced reasoning through large-scale reinforcement learning. It also demonstrates strong performance in mathematics, coding, and universal understanding.
To ensure robust data governance, management, and security, xAI models will utilize OCI's enterprise-grade capabilities. Oracle reported that all data sent to Grok models is processed on zero data retention endpoints, providing an additional layer of protection.
Greg Pavlik, executive vice president of AI and Data Management Services at OCI, stated that the partnership expands AI possibilities for enterprise customers. He emphasized Oracle's commitment to delivering advanced AI solutions that offer greater flexibility and choice in deployment.
Oracle brings leading AI technology close to enterprise data, prioritizing security, adaptability, and scalability. This enables organizations across various industries to apply generative and agentic AI to relevant business scenarios for immediate benefits. Thousands of AI innovators already leverage OCI's cost-effective, purpose-built AI infrastructure for demanding workloads.
OCI's bare metal GPU instances support applications such as generative AI, natural language processing, computer vision, and recommendation systems.
Windstream, a telecommunications service provider, is exploring the use of xAI's multimodal models through OCI. Kaushik Bhanderi, senior vice president at Windstream, noted the potential advantages of integrating Grok models via OCI's Generative AI service to enhance language comprehension and reasoning, aiming to improve workflows and empower employees.
Key points:
- xAI partners with Oracle to deliver Grok AI models via the OCI Generative AI service.
- OCI provides scalable, secure infrastructure with zero data retention for AI workloads.
- Windstream is exploring Grok models to improve telecommunications workflows.

Related Articles

OpenAI to rent gigawatts of capacity from Oracle

Tahawul Tech | 2 days ago

OpenAI has agreed to rent computing power from Oracle for use in US data centres as part of the ChatGPT developer's Stargate joint venture project. Bloomberg reported that OpenAI plans to rent data centre capacity from Oracle totalling roughly 4.5 gigawatts. The news site noted that a gigawatt is roughly the output of a single nuclear reactor, enough electricity for about 750,000 homes.

Its unnamed sources explained that Oracle will build multiple data centres across the US with its partners to meet the additional demand from OpenAI. Oracle initially focused on building a data centre site for OpenAI in the US state of Texas, but Bloomberg explained that new sites across Texas, Michigan, Wisconsin and Wyoming are also under consideration. The site in Texas will expand from a current capacity of 1.2 gigawatts to about 2 gigawatts, Bloomberg reported. OpenAI is also interested in additional data centre sites in New Mexico, Georgia, Ohio and Pennsylvania. While the new sites will be part of the Stargate project, the news agency noted that details of the plan remain fluid.

Oracle recently stated in a regulatory filing that it had signed a cloud deal worth $30 billion in annual revenue, with revenue expected to start in fiscal year 2028. The Stargate project with OpenAI will make up part of that $30 billion contract, Bloomberg reported.

In May, OpenAI, Oracle, SoftBank Group, Nvidia, and Cisco teamed up to build a Stargate AI campus in the United Arab Emirates (UAE), marking the first international deployment of the joint venture. OpenAI and SoftBank Group reportedly want to expand their $500 billion US AI infrastructure build-out to additional countries such as the UK, Germany and France.

Source: Mobile World Live
Image Credit: OpenAI

OpenAI Secures Massive 4.5 GW Cloud Power from Oracle

Arabian Post | 3 days ago

OpenAI will lease approximately 4.5 gigawatts of data centre power from Oracle under a $30 billion-per-year agreement set to begin delivering revenue in Oracle's fiscal year 2028. This significant capacity boost forms part of the expansive 'Stargate' initiative, an AI infrastructure venture launched in January that aims to deploy up to $500 billion globally.

Oracle plans to construct and expand multiple US data centres to support the deal, including enhancements to its 1.2 GW facility in Abilene, Texas, scaling it to 2 GW. Additional site candidates span Texas, Michigan, Wisconsin, Georgia, New Mexico, Ohio and Pennsylvania. By comparison, the 4.5 GW allotted to OpenAI constitutes roughly a quarter of total US data centre capacity, enough to power millions of homes.

The deal elevates Oracle's cloud infrastructure business significantly: its existing data centre revenue in fiscal 2025 was $10.3 billion, making the OpenAI agreement nearly triple that in scale. Oracle intends to support this growth with investments including $7 billion into Stargate and $25 billion in capital expenditures in 2026, alongside a major Nvidia GB200 chip acquisition valued at $40 billion.

Oracle chief executive Safra Catz previously disclosed in a Securities and Exchange Commission filing that the company had signed multiple large cloud services contracts expected to generate over $30 billion annually from fiscal 2028. Analyst commentary suggests the OpenAI contract constitutes net-new revenue, significant new business rather than renewals.

Investor response has been strongly positive. Oracle's share price surged to record levels, buoyed by expectations of over 50 per cent year-on-year revenue growth in fiscal 2028 and projections of surpassing $104 billion in revenue by 2029. TD Cowen analysts raised their price target from $250 to $275, while Citizens JMP maintained an outperform rating with a $240 target, noting Oracle's 'sophisticated' GPU cluster offerings.
The Stargate initiative, originally unveiled at a White House event in January 2025 alongside President Trump, is a collaboration between OpenAI, Oracle, SoftBank and Abu Dhabi's MGX sovereign fund. So far, $50 billion has been committed by the founding partners, with long-term funding targets set at $500 billion for global AI infrastructure development.

OpenAI's strategy reflects a deliberate move to diversify its compute ecosystem beyond Microsoft Azure, its largest investor and former exclusive cloud provider. The firm has since added other partners, including Google Cloud and CoreWeave, alongside the new Oracle deal. Industry analysts view this sprawling infrastructure expansion as essential to maintaining leadership in the escalating AI arms race. Larry Ellison, Oracle co-founder, has emphasised the company's ambition: 'We will build and operate more cloud infrastructure data centres than all of our cloud infrastructure competitors'.

Yet the sheer scale of this power draw raises questions around sustainability and resource allocation. A related analysis suggested a 2 GW facility in West Texas could consume about 40 million litres of water daily, placing strain on local supplies already under pressure.

As the US data centre landscape evolves rapidly, Oracle and OpenAI's partnership underscores the escalating importance of infrastructure in AI advancement. The execution of this agreement will shape both companies' market positions and the broader resource landscape as artificial intelligence adoption continues to accelerate globally.

Five ways AI is creating everyday risks for African businesses

Zawya | 3 days ago

In our fast-evolving cyber risk landscape, it's easy to be captivated by the headlines: stories of cutting-edge exploits, wild new attack vectors, and AI's role in shaping malware that once seemed impossible. But while these futuristic threats grab our attention, the real challenge lies in understanding the everyday risks that businesses face and the practical steps to mitigate them.

It's evident that attackers are becoming increasingly inventive. The surge of AI-driven techniques is reshaping the global threat landscape, and Africa is no exception. The continent faces a growing tide of sophisticated fraud schemes, propelled by advancements in generative AI, deepfakes, and internal vulnerabilities. It's a clear call for businesses to rethink their strategies and stay ahead in this advancing game. And the best place to start is by focusing on the real-world risks.

During Trend Micro's recent World Tour in Johannesburg, we unpacked the tangible risks posed by AI advancements and shared actionable insights on how businesses can effectively counter these emerging challenges.

A new wave of AI-powered phishing emerges

We've all seen phishing evolve from poorly worded emails riddled with typos to messages that are polished, professional, and even translated flawlessly into multiple languages. But a more recent development is how attackers leverage AI to scour social media: not just the content of posts but the rich ecosystem of interactions around them. There is a treasure trove of personalised insights that can be mined from comments and connections. Bad actors are capitalising on AI's ability to seamlessly craft hyper-personalised messages with astonishing precision. The tools are readily available; even platforms like ChatGPT can be leveraged to generate phishing emails that feel tailored and authentic. This doesn't require advanced coding expertise; it's a straightforward process that puts powerful capabilities into the hands of malicious actors.
The implications are striking. It raises the stakes for businesses as social engineering pressure through phishing channels continues to intensify, demanding a more vigilant and proactive approach to cybersecurity.

Deepfakes are becoming mainstream

Synthetic media has also entered the conversation, and it's rewriting the rules of social engineering. Deepfakes, once a novelty, are now a mainstream threat. Remarkably, deepfake incidents in Africa increased sevenfold from Q2 to Q4 of 2024 due to advanced AI tools. With just a few seconds of audio, voice cloning tools can convincingly mimic an executive's voice, enabling fraudsters to issue urgent fund transfer requests that sound all too real. And it doesn't stop there. Real-time face swaps on video platforms like WhatsApp mean that even a casual 'let's jump on a quick call' could be a trap. The line between real and fake is blurring fast, and attackers are exploiting that ambiguity with alarming precision. Recent headline-grabbing incidents, like last year's Quantum AI investment scam that cost consumers billions, underscore just how high the stakes have become.

AI is exposing deeper gaps in data governance

One of the rising challenges businesses must contend with is the risk of data leakage, especially with tools like AI assistants entering the picture. Imagine an employee, whether inadvertently or with malicious intent, asking for sensitive information such as salary details, acquisition plans, or financial results. If the correct access restrictions are not in place, the AI might serve up restricted data that was never meant for broader access. What we're seeing here is a classic case of AI inheriting flawed permissions: folders scattered across an organisation with access settings that are far too broad. Perhaps a folder is mistakenly set to 'accessible to everyone' when, in reality, only specific employees should have clearance. AI tools will readily surface information that should remain locked down.
It's crucial to understand that this issue isn't solely about AI; it's a reflection of deeper gaps in data governance and permissions management within organisations.

Open-source is an avenue for malicious code

Another emerging concern around AI lies in the potential spread of malicious code. Developers crafting AI applications often rely on open-source repositories or widely used models like Meta's LLaMA. But if these repositories contain buggy or, worse, malicious code, those vulnerabilities can creep into your applications unnoticed. It's a sobering reminder that even the tools we trust can become conduits for risk if not carefully vetted.

Hallucinations can prove catastrophic

Hallucinations are another critical consideration. These occur when AI models, particularly those hastily developed or inadequately vetted, generate information that simply isn't real. Take, for example, OpenAI's Whisper model, used for speech recognition and transcription in medical and business settings. When doctors paused during dictation, the software invented additional words seemingly out of thin air. In a medical context, this isn't just inconvenient; it's potentially catastrophic. It underscores an urgent need for robust quality assurance processes tailored to AI systems.

So, what's the path forward?

It starts with visibility: broad, deep, and continuous. Understanding where and how AI is being used across your organisation is no longer optional; it's foundational. Monitor AI interactions closely. Are the prompts or responses raising red flags? That insight isn't just diagnostic; it's an opportunity to intervene, guide, and improve. At the same time, your application security processes must evolve to reflect the new AI-driven threat landscape. And if you're training models, the integrity of your data is paramount. Govern it. Protect it. Own it.
The good news is that defensive AI is outpacing offensive capabilities, thanks to significant investments in talent, tools, and innovation. Even in areas where attackers are advancing, such as vulnerability discovery, defenders are using the same techniques to stay one step ahead. And with the rise of agentic AI, we're seeing a shift: more power is moving into the hands of those who protect. The future of cybersecurity isn't just about reacting faster; it's about anticipating smarter. And that future is already taking shape.
