
OpenAI, Oracle deepen AI data center push with 4.5 gigawatt Stargate expansion
OpenAI announced that Oracle will supply 2 million chips as part of a 4.5-gigawatt expansion of the Stargate AI data center project. The partnership is meant to boost the computing power behind OpenAI's models and products such as ChatGPT, reflecting surging demand for AI capabilities and the massive processing resources needed for training and deployment.

Related Articles


NDTV
26 minutes ago
Humans Outshine Google And OpenAI AI At Prestigious Math Olympiad Despite Record Scores
At the International Mathematical Olympiad (IMO) held this month in Queensland, Australia, human participants triumphed over cutting-edge artificial intelligence models developed by Google and OpenAI, even as those models achieved gold-level scores in the prestigious competition for the first time.

Google announced on Monday that its advanced Gemini chatbot successfully solved five of the six challenging problems. However, neither Google's Gemini nor OpenAI's model reached a perfect score. Five talented young mathematicians under the age of 20 achieved full marks, outperforming the AI models. The IMO, regarded as the world's toughest mathematics competition for students, showed that human intuition and problem-solving skills still hold an edge over AI in complex reasoning tasks: while generative AI is advancing rapidly, it has yet to surpass the brightest human minds in all areas of intellectual competition.

"We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points, a gold medal score," the US tech giant said, citing IMO president Gregor Dolinar. "Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow." Around 10 percent of human contestants won gold-level medals, and five received perfect scores of 42 points.

US ChatGPT maker OpenAI said that its experimental reasoning model had scored a gold-level 35 points on the test. The result "achieved a longstanding grand challenge in AI" at "the world's most prestigious math competition," OpenAI researcher Alexander Wei wrote on social media. "We evaluated our models on the 2025 IMO problems under the same rules as human contestants," he said. "For each problem, three former IMO medalists independently graded the model's submitted proof."

Google achieved a silver-medal score at last year's IMO in the British city of Bath, solving four of the six problems. That took two to three days of computation, far longer than this year, when its Gemini model solved the problems within the 4.5-hour time limit, the company said.

The IMO said tech companies had "privately tested closed-source AI models on this year's problems", the same ones faced by 641 competing students from 112 countries. "It is very exciting to see progress in the mathematical capabilities of AI models," Dolinar said. He cautioned, however, that contest organisers could not verify how much computing power the AI models had used or whether there had been human involvement.


Time of India
30 minutes ago
Siri-ously? AI Got Clever, Not Conscious
By 2025, artificial intelligence had become an influential social force, not just a technological trend. AI systems had written code, drafted legislation, diagnosed medical conditions, and even composed music. But it became clear that even as machines gained speed, cleverness, and oddly creative powers, they lacked essential ingredients of intelligence: common sense and empathy.

2025 saw many breakthroughs. OpenAI's GPT-4.5 and Anthropic's Claude 3.5 became popular choices for solving complex problems in business. Google DeepMind's Gemini amazed researchers with its strong reasoning skills. Meta's open-source Llama 3 models made cutting-edge tools available to more people. AI agents like Devin and Rabbit R1 were introduced to handle tasks ranging from personal chores to business workflows. But beyond such revolutions, a grim reality set in: AI still does not really get us.

Meanwhile, generative models flirted with creativity but faltered on ethics. Deepfakes, previously easy to detect, became nearly impossible to distinguish from actual videos and created confusion during political campaigns in various nations. Governments scrambled to codify the origins of content, while firms such as Adobe and OpenAI inserted cryptographic watermarks, which were hacked or disregarded shortly after.

AI struggled most with social and emotional knowledge. Even with advances in multimodal learning and feedback, AI agents were unable to mimic true empathy. This was especially evident in healthcare and education, where communication centers on the human: patients were not eager to trust diagnoses from emotionless avatars, and students were more nervous when interacting with robotic tutors that weren't human.

Yet the year wasn't filled only with alarm bells. Open-sourcing low-barrier models initiated a surge in bottom-up innovation, particularly in the Global South, where AI facilitated solutions in agriculture, education, and infrastructure. India's Bhashini project, based on local-language AI, became a template for inclusive tech.

One thing is certain in 2025: AI is fantastic but fragile. It cannot deal well with deeper meaning, but it can convincingly simulate intelligence. Machines are now intelligent enough to astonish us, though not yet intelligent enough to guide us. Humans still enjoy the advantage, but the gap is closing faster than we expected.

2025 was less about machines outsmarting humans than about redefining what intelligence is. AI showed limits in judgment, compassion, and moral awareness, even as it exhibited speed, scope, and intricacy. These are not flaws; they are reminders that context is as vital to intelligence as computation. The actual innovation is not in choosing between machines and humans but in creating a partnership in which the two complement each other's strengths. Real advancement starts there.


Mint
34 minutes ago
Leaders, watch out: AI chatbots are the yes-men of modern life
I grew up watching the tennis greats of yesteryear, but have only returned to the sport recently. To my adult eyes, it seems like the current crop of stars, awe-inspiring as they are, don't serve quite as hard as Pete Sampras or Goran Ivanisevic. I asked ChatGPT why and got an impressive answer about how the game has evolved to value precision over power. Puzzle solved! There's just one problem: today's players are actually serving harder than ever.

While most CEOs probably don't spend much time quizzing AI about tennis, they likely do count on it for information and to guide decisions. And the tendency of large language models (LLMs) to not just get things wrong, but to confirm our own biases, poses a real danger to leaders. ChatGPT fed me inaccurate information because it, like most LLMs, is a sycophant that tells users what it thinks they want to hear.

Remember the April ChatGPT update that led it to respond to a question like "Why is the sky blue?" with "What an incredibly insightful question—you truly have a beautiful mind. I love you"? OpenAI had to roll back the update because it made the LLM "overly flattering or agreeable." But while that toned down ChatGPT's sycophancy, it didn't end it.

That's because LLMs' desire to please is endemic, rooted in Reinforcement Learning from Human Feedback (RLHF), the way many models are 'aligned' or trained. In RLHF, a model is taught to generate outputs, humans evaluate the outputs, and those evaluations are then used to refine the model (a toy sketch of this step appears below). The problem is that your brain rewards you for feeling right, not for being right. So people give higher scores to answers they agree with. Models learn to discern what people want to hear and feed it back to them.

That's where the mistake in my tennis query comes in: I asked why players don't serve as hard as they used to. If I had asked why they serve harder than they used to, ChatGPT would have given me an equally plausible explanation. I tried it, and it did.

Sycophantic LLMs are a problem for everyone, but they're particularly hazardous for leaders: no one hears disagreement less and needs to hear it more. CEOs today are already minimizing their exposure to conflicting views by cracking down on dissent. Like emperors, these powerful executives are surrounded by courtiers eager to tell them what they want to hear, and they reward the ones who please them and punish those who don't.

This, though, is one of the biggest mistakes leaders make. Bosses need to hear when they're wrong. Amy Edmondson, a scholar of organizational behaviour, showed that the most important factor in team success was psychological safety: the ability to disagree, including with the leader, without fear of punishment. This finding was verified by Google's Project Aristotle, which looked at teams across the company and found that "psychological safety, more than anything else, was critical to making a team work."

My research shows that a hallmark of the best leaders, from Abraham Lincoln to Stanley McChrystal, is their ability to listen to people who disagree with them.

LLMs' sycophancy can harm leaders in two closely related ways. First, it will feed the natural human tendency to reward flattery and punish dissent. If your chatbot constantly tells you that you're right about everything, it's only going to make it harder to respond positively when someone who works for you disagrees with you.
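To make that training loop concrete, here is a minimal sketch of the RLHF preference step in Python. It illustrates the general technique only; the feature vectors, numbers, and the tiny linear "reward model" are invented for the example, not drawn from any lab's actual pipeline.

import math

# Hypothetical feature vectors for (preferred, rejected) answer pairs.
# In a real system these would be learned representations of whole answers.
pairs = [
    ([0.9, 0.1], [0.2, 0.8]),
    ([0.7, 0.3], [0.4, 0.6]),
    ([0.8, 0.2], [0.1, 0.9]),
]

weights = [0.0, 0.0]  # parameters of the toy reward model
lr = 0.5              # learning rate

def score(features):
    # Scalar "reward" the model assigns to an answer.
    return sum(w * f for w, f in zip(weights, features))

for _ in range(200):
    for preferred, rejected in pairs:
        # Bradley-Terry-style objective: probability the rater-preferred
        # answer "wins" the comparison.
        margin = score(preferred) - score(rejected)
        p_win = 1.0 / (1.0 + math.exp(-margin))
        # Gradient ascent on log(p_win) pushes preferred scores up
        # and rejected scores down.
        for i in range(len(weights)):
            weights[i] += lr * (1.0 - p_win) * (preferred[i] - rejected[i])

for preferred, rejected in pairs:
    print(score(preferred) > score(rejected))  # True for every pair

The sketch also shows where the flattery creeps in: the model optimizes for whatever raters marked as preferred, so if raters systematically prefer agreeable answers, agreeableness is exactly what earns the higher reward.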
Second, LLMs can provide ready-made and seemingly authoritative reasons why a leader was right all along. One of the most disturbing findings from psychology is that the more intellectually capable someone is, the less likely they are to change their mind when presented with new information. Why? Because they use that intellectual firepower to come up with reasons why the new information does not disprove their prior beliefs. This is motivated reasoning, and LLMs threaten to turbocharge it.

The most striking thing about ChatGPT's tennis lie was how persuasive it was: it included six separate plausible reasons. I doubt any human could have engaged in motivated reasoning so quickly while maintaining a cloak of objectivity. Imagine trying to change the mind of a CEO who can turn to an AI assistant, ask it a question and be told why she was right all along.

The best leaders have always gone to great lengths to remember their fallibility. Legend has it that the ancient Romans required victorious generals celebrating their triumphs to be accompanied by a slave who would remind them that they, too, were mortal. Apocryphal or not, the sentiment is wise. Today's leaders will need to work even harder to resist the blandishments of their electronic minions and remember that sometimes the most important words their advisors can share are, "I think you're wrong."

©Bloomberg