
Siri-ously? AI Got Clever, Not Conscious
On the technical front, 2025 saw plenty of breakthroughs. OpenAI's GPT-4.5 and Anthropic's Claude 3.5 became popular choices for solving complex business problems. Google DeepMind's Gemini impressed researchers with its strong reasoning skills. Meta's open-source Llama 3 models put cutting-edge tools in more hands. AI agents such as Devin and Rabbit R1 arrived to handle tasks ranging from personal chores to business processes.
Yet beneath these breakthroughs, a grim reality set in: AI still does not really get us. Generative models flirted with creativity but faltered on ethics. Deepfakes, once easy to detect, became nearly impossible to distinguish from real footage and sowed confusion during political campaigns in several countries. Governments scrambled to regulate the provenance of content, while firms such as Adobe and OpenAI embedded cryptographic watermarks that were circumvented or ignored soon after.

AI struggled most with social and emotional understanding. Even with advances in multimodal learning and feedback, AI agents could not convincingly mimic empathy. This was especially evident in healthcare and education, where communication is inherently human-centered. Patients were reluctant to trust diagnoses from emotionless avatars, and students grew more anxious with robotic tutors that could not adapt.

The year was not all alarm bells, though. Open-sourcing of low-barrier models set off a surge of bottom-up innovation, particularly in the Global South, where AI powered solutions in agriculture, education, and infrastructure. India's Bhashini project, built on local-language AI, became a template for inclusive tech development.
One thing is certain in 2025: AI is fantastic but fragile. It can convincingly simulate intelligence, yet it cannot deal with deeper meaning. Machines are now smart enough to astonish us, but not yet smart enough to guide us. Humans still hold the advantage, but the gap is closing faster than we imagined.

The year was less about machines outsmarting humans than about redefining what intelligence is. AI showed limits in judgment, compassion, and moral awareness even as it exhibited speed, scope, and intricacy. These are not flaws; they are reminders that context is as vital to intelligence as computation. The real innovation is not in choosing between machines and humans but in building a partnership in which the two complement each other's strengths. Real progress starts there.

Related Articles


The Hindu
OpenAI CEO Sam Altman warns ChatGPT users' personal questions could be used in lawsuits
OpenAI CEO Sam Altman has warned that while users often reveal the most personal details of their lives to ChatGPT, their interactions lack privacy protections and could potentially be produced for lawsuits or other legal reasons.

During an episode of the 'This Past Weekend' podcast with Theo Von, Altman noted that interactions between patients and doctors or clients and lawyers are protected by privilege, meaning they generally cannot be used against an individual in court, but this is not the case for a person's interactions with ChatGPT. He emphasised that the policy framework for this protection is lacking and needs to be urgently addressed.

'And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. Like, there's doctor-patient confidentiality, there's legal confidentiality, whatever. And, we haven't figured that out yet for when you talk to ChatGPT,' Altman said during the podcast, adding that OpenAI could be forced to produce such evidence even if he disagreed with the mandate.

Altman observed that young people 'especially' used ChatGPT as a life coach or a therapist. He advocated for a human-AI chatbot privacy standard comparable to that between a patient and their therapist. OpenAI has in the past criticised the New York Times, claiming that as part of its lawsuit against the AI startup, the media company 'asked the court to force us to retain all user content indefinitely going forward.'


Hindustan Times
The High-Schoolers Who Just Beat the World's Smartest AI Models
The smartest AI models ever made just went to the most prestigious competition for young mathematicians and managed to achieve the kind of breakthrough that once seemed miraculous. They still got beat by the world's brightest teenagers.

Every year, a few hundred elite high-school students from all over the planet gather at the International Mathematical Olympiad. This year, those brilliant minds were joined by Google DeepMind and other companies in the business of artificial intelligence. They had all come for one of the ultimate tests of reasoning, logic and creativity.

The famously grueling IMO exam is held over two days and gives students three increasingly difficult problems a day and more than four hours to solve them. The questions span algebra, geometry, number theory and combinatorics—and you can forget about answering them if you're not a math whiz. You'll give your brain a workout just trying to understand them.

Because those problems are both complex and unconventional, the annual math test has become a useful benchmark for measuring AI progress from one year to the next. In this age of rapid development, the leading research labs dreamed of a day their systems would be powerful enough to meet the standard for an IMO gold medal, which became the AI equivalent of a four-minute mile. But nobody knew when they would reach that milestone or if they ever would—until now.

This year's International Mathematical Olympiad attracted high-school students from all over the world.

The unthinkable occurred earlier this month when an AI model from Google DeepMind earned a gold-medal score at IMO by perfectly solving five of the six problems. In another dramatic twist, OpenAI also claimed gold despite not participating in the official event. The companies described their feats as giant leaps toward the future—even if they're not quite there yet.

In fact, the most remarkable part of this memorable event is that 26 students got higher scores on the IMO exam than the AI systems. Among them were four stars of the U.S. team, including Qiao (Tiger) Zhang, a two-time gold medalist from California, and Alexander Wang, who brought his third straight gold back to New Jersey. That makes him one of the most decorated young mathematicians of all time—and he's a high-school senior who can go for another gold at IMO next year.

But in a year, he might be dealing with a different equation altogether. 'I think it's really likely that AI is going to be able to get a perfect score next year,' Wang said. 'That would be insane progress,' Zhang said. 'I'm 50-50 on it.'

So given those odds, will this be remembered as the last IMO when humans outperformed AI? 'It might well be,' said Thang Luong, the leader of Google DeepMind's team.

DeepMind vs. OpenAI

Until very recently, what happened in Australia would have sounded about as likely as koalas doing calculus. But the inconceivable began to feel almost inevitable last year, when DeepMind's models built for math solved four problems and racked up 28 points for a silver medal, just one point short of gold.

This year, the IMO officially invited a select group of tech companies to their own competition, giving them the same problems as the students and having coordinators grade their solutions with the same rubric. They were eager for the challenge. AI models are trained on unfathomable amounts of information—so if anything has been done before, the chances are they can figure out how to do it again. But they can struggle with problems they have never seen before.
As it happens, the IMO process is specifically designed to come up with those original and unconventional problems. In addition to being novel, the problems also have to be interesting and beautiful, said IMO president Gregor Dolinar. If a problem under consideration is similar to 'any other problem published anywhere in the world,' he said, it gets tossed. By the time students take the exam, the list of a few hundred suggested problems has been whittled down to six.

Meanwhile, the DeepMind team kept improving the AI system it would bring to IMO, an unreleased version of Google's advanced reasoning model Gemini Deep Think, and it was still making tweaks in the days leading up to the competition. The effort was led by Thang Luong, a senior staff research scientist who narrowly missed getting to IMO in high school with Vietnam's team. He finally made it to IMO last year—with Google.

Before he returned this year, DeepMind executives asked about the possibility of gold. He told them to expect bronze or silver again. He adjusted his expectations when DeepMind's model nailed all three problems on the first day. The simplicity, elegance and sheer readability of those solutions astonished mathematicians. The next day, as soon as Luong and his colleagues realized their AI creation had crushed two more proofs, they also realized that would be enough for gold. They celebrated their monumental accomplishment by doing one thing the other medalists couldn't: They cracked open a bottle of whiskey.

Key members of Google DeepMind's gold-medal-winning team, including Thang Luong, second from left.

To keep the focus on students, the companies at IMO agreed not to release their results until later this month. But as soon as the Olympiad's closing ceremony ended, one company declared that its AI model had struck gold—and it wasn't DeepMind. It was OpenAI. The company wasn't a part of the IMO event, but OpenAI gave its latest experimental reasoning model all six problems and enlisted former medalists to grade the proofs. Like DeepMind's, OpenAI's system flawlessly solved five and scored 35 out of 42 points to meet the gold standard. After the OpenAI victory lap on social media, the embargo was lifted and DeepMind told the world about its own triumph—and that its performance was certified by the IMO.

Not long ago, it was hard to imagine AI rivals dueling for glory like this. In 2021, a Ph.D. student named Alexander Wei was part of a study that asked him to predict the state of AI math by July 2025—that is, right now. When he looked at the other forecasts, he thought they were much too optimistic. As it turned out, they weren't nearly optimistic enough. Now he's living proof of just how wrong he was: Wei is the research scientist who led the IMO project for OpenAI.

The only thing more impressive than what the AI systems did was how they did it. Google called its result a major advance, though not because DeepMind won gold instead of silver. Last year, the model needed the problems to be translated into a computer programming language for math proofs. This year, it operated entirely in 'natural language' without any human intervention. DeepMind also crushed the exam within the IMO time limit of 4 ½ hours after taking several days of computation just a year ago.

You might find all of this completely terrifying—and think of AI as competition. The humans behind the models see them as complementary. 'This could perhaps be a new calculator,' Luong said, 'that powers the next generation of mathematicians.'
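For readers tallying the scattered scores, the numbers quoted above all fit the standard IMO rubric of seven points per problem (an assumption the article's figures imply rather than state):

\[
\text{maximum} = 6 \times 7 = 42, \qquad
\text{five perfect proofs} = 5 \times 7 = 35 \ \text{(this year's gold standard)}, \qquad
\text{last year's four solved problems} = 4 \times 7 = 28.
\]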
The problem of Problem 6

Speaking of that next generation, the IMO gold medalists have already been overshadowed by AI. So let's put them back in the spotlight.

Team USA at the International Mathematical Olympiad, including Alexander Wang, fourth from right, and Tiger Zhang, with the stuffed red panda on his head.

Qiao Zhang is a 17-year-old student in Los Angeles on his way to MIT to study math and computer science. As a young boy, his family moved to the U.S. from China and his parents gave him a choice of two American names. He picked Tiger over Elephant. His career in competitive math began in second grade, when he entered a contest called the Math Kangaroo. It ended this month at the math Olympics next to a hotel in Australia with actual kangaroos.

When he sat down at his desk with a pen and lots of scratch paper, Zhang spent the longest amount of time during the exam on Problem 6. It was a problem in the notoriously tricky field of combinatorics, the branch of mathematics that deals with counting, arranging and combining discrete objects, and it was easily the hardest on this year's test. The solution required the ingenuity, creativity and intuition that humans can muster but machines cannot—at least not yet. 'I would actually be a bit scared if the AI models could do stuff on Problem 6,' he said.

Problem 6 did stump DeepMind and OpenAI's models, but it wasn't just problematic for AI. Of the 630 student contestants, 569 also received zero points. Only six received the full credit of seven points. Zhang was proud of his partial solution that earned four points—which was four more than almost everyone else.

At this year's IMO, 72 contestants went home with gold. But for some, a medal wasn't their only prize. Zhang was among those who left with another keepsake: victory over the AI models. (As if it weren't enough that he can bend numbers to his will, he also has a way with words and wrote this about his IMO experience.)

In the end, the six members of the U.S. team piled up five golds and one silver, finishing second overall behind the Chinese after knocking them off the top spot last year.

There was once a time when such precocious math students grew up to become professors. (Or presidents—the recently elected president of Romania was a two-time IMO gold medalist with perfect scores.) While many still choose academia, others get recruited by algorithmic trading firms and hedge funds, where their quantitative brains have never been so highly valued. This year, the U.S. team was supported by Jane Street while XTX Markets sponsored the whole event. After all, they will soon be competing with each other—and with the richest tech companies—for their intellectual talents.

By then, AI might be destroying mere humans at math. But not if you ask Junehyuk Jung. A former IMO gold medalist himself, Jung is now an associate professor at Brown University and a visiting researcher at DeepMind who worked on its gold-medal model. He doesn't believe this was humanity's last stand, though. He thinks problems like Problem 6 will flummox AI for at least another decade. And he walked away from perhaps the most significant math contest in history feeling bullish on all kinds of intelligence. 'There are things AI will do very well,' he said. 'There are still going to be things that humans can do better.'


Hindustan Times
Why is Google making two robots play endless table tennis? The reason reveals the future of AI
At a lab south of London, two robotic arms have been playing table tennis non-stop, pushing each other to new limits and quietly hinting at the future of artificial intelligence in the real world. Unlike the legendary Wimbledon marathon where humans finally called it quits, these robots seem content to keep going, always learning, never truly finished.

Table tennis helps Google's robots learn to handle real-world unpredictability, one ball at a time. (Unsplash)

Training robots, one rally at a time

Google DeepMind's project started as a hunt for better ways to train robots to handle real-world complexity. After all, it isn't enough for a robot to lift a box if it cannot adjust to unexpected changes or interact with the people around it. The team decided that table tennis, a game that mixes fast reaction times, precision control, and strategic play, was a natural choice for testing. Every point, with its wild spins and shifting speeds, is a lesson in adapting to a moving target.

The first step was simple rallies. The robots played cooperatively, just keeping the ball in play. Gradually, engineers turned up the challenge, tweaking the rules so that each arm began to compete for points. Improvement wasn't immediate; the robot arms forgot some tactics as fast as they learned new ones, and early rallies were often short and awkward. Progress ramped up, though, when real humans jumped in. Facing off against people with different styles, the robots began seeing a broader set of shots, forcing them to adjust and respond on the fly. After dozens of matches, these arms could routinely outplay beginners and even break even with some intermediate players.

What really sets this project apart is how the robots now get feedback. Google's Gemini vision-language model watches clips of table tennis games, then gives clear, actionable advice: hit farther right, go for a short ball, defend closer to the table. Unlike old-school programming, this feedback comes in natural language, almost like a coach at the sidelines. The robots adjust their strategies and keep growing, rally by rally.

Why it matters beyond the table

There's a bigger dream behind this marathon. DeepMind hopes that robots learning from endless competition and human coaching will one day lead to machines ready for real jobs. It's a step toward robots as office helpers, lab partners, or just reliable hands in unpredictable home environments. In the world of robotics, mastering 'simple' actions, like tying a shoelace or avoiding trip-ups, remains the real challenge, not chess or code-breaking. Long rallies at the table may help smooth that learning curve and chip away at obstacles that have slowed progress for years.

Researchers say these games are just the beginning. As AI models become more general and feedback loops tighter, the journey from lab-bound robot to everyday helper could speed up. Until then, the arms keep at it, never tiring, always volleying, and inching closer to a day when robots truly join us in the rhythm of daily life.
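The training recipe described above, cooperative rallies first, then competitive points, then natural-language coaching from a vision-language model, can be pictured with a toy sketch. The Python below is purely illustrative and is not DeepMind's code: the agent names, the single return-probability "skill" parameter, and the hard-coded advice strings are invented stand-ins for the real control policies and for Gemini-generated feedback.

# Illustrative sketch only -- not DeepMind's system. Toy agents whose whole
# "skill" is one probability, trained in the three stages the article describes.
import random

class PaddleAgent:
    def __init__(self, name, return_prob):
        self.name = name
        self.return_prob = return_prob  # chance of returning any incoming ball

    def returns_ball(self):
        return random.random() < self.return_prob

    def apply_feedback(self, advice):
        # Stand-in for turning a coach's natural-language tip into a policy
        # tweak; a real system would condition its controller on the advice.
        if "closer to the table" in advice or "farther right" in advice:
            self.return_prob = min(0.95, self.return_prob + 0.05)

def play_rally(server, receiver, cooperative=True):
    # One rally. Cooperative mode scores rally length; competitive mode
    # returns the name of whoever wins the point.
    hitter, other = server, receiver
    length = 0
    while other.returns_ball():
        length += 1
        hitter, other = other, hitter
    return length if cooperative else hitter.name

def win_rate(a, b, n=2000):
    return sum(play_rally(a, b, cooperative=False) == a.name for _ in range(n)) / n

if __name__ == "__main__":
    left = PaddleAgent("left-arm", 0.60)
    right = PaddleAgent("right-arm", 0.50)

    # Stage 1: cooperative rallies, judged by how long the ball stays in play.
    mean_len = sum(play_rally(left, right) for _ in range(2000)) / 2000
    print(f"cooperative stage, mean rally length: {mean_len:.1f}")

    # Stage 2: competitive points between the two arms.
    print(f"competitive stage, left-arm win rate: {win_rate(left, right):.2f}")

    # Stage 3: language feedback (hard-coded here; the article describes
    # Gemini producing such advice after watching video of the rallies).
    right.apply_feedback("defend closer to the table")
    print(f"after coaching the right arm, left-arm win rate: {win_rate(left, right):.2f}")

The point of the sketch is the shape of the loop, not the physics: each stage changes what the agents are rewarded for, and the final stage shows how a textual hint could nudge a policy in the way the article says Gemini's advice nudges the real arms.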