
Humans beat AI gold-level score at top maths contest
SYDNEY: Humans beat generative AI models made by Google and OpenAI at a top international mathematics competition, despite the programmes reaching gold-level scores for the first time.
Neither model scored full marks – unlike five young people at the International Mathematical Olympiad (IMO), a prestigious annual competition where participants must be under 20 years old.
Google said yesterday that an advanced version of its Gemini chatbot had solved five of the six maths problems, each worth a maximum of seven points, set at the IMO, held in Australia's Queensland this month.
'We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points – a gold medal score,' the US tech giant cited IMO president Gregor Dolinar as saying.
'Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow.'
Around 10% of human contestants won gold-level medals, and five received perfect scores of 42 points.
US ChatGPT maker OpenAI said that its experimental reasoning model had scored a gold-level 35 points on the test.
The result 'achieved a longstanding grand challenge in AI' at 'the world's most prestigious math competition', OpenAI researcher Alexander Wei wrote on social media.
'We evaluated our models on the 2025 IMO problems under the same rules as human contestants,' he said.
'For each problem, three former IMO medallists independently graded the model's submitted proof.'
Google achieved a silver-medal score at last year's IMO in the British city of Bath, solving four of the six problems.
That took two to three days of computation – far longer than this year, when its Gemini model solved the problems within the 4.5-hour time limit, it said.
The IMO said tech companies had 'privately tested closed-source AI models on this year's problems', the same ones faced by 641 competing students from 112 countries.
'It is very exciting to see progress in the mathematical capabilities of AI models,' said IMO president Dolinar.
Contest organisers could not verify how much computing power had been used by the AI models or whether there had been human involvement, he cautioned.

Related Articles


The Star, 6 hours ago
Meta clashes with Apple, Google over age check legislation
The biggest tech companies are warring over who's responsible for children's safety online, with billions of dollars in fines on the line as states rapidly pass conflicting laws requiring companies to verify users' ages.

The struggle has pitted Meta Platforms Inc and other app developers against Apple Inc and Alphabet Inc's Google, which operate the world's largest app stores. Lobbyists for both sides are moving from state to state, working to water down or redirect the legislation to minimize their clients' risks.

This year alone, at least three states – Utah, Texas and Louisiana – passed legislation requiring tech companies to authenticate users' ages, secure parental consent for anyone under 18 and ensure minors are protected from potentially harmful digital experiences. Now, lobbyists for all three companies are flooding into South Carolina and Ohio, the next states likely to consider such legislation.

The debate has taken on new importance after the Supreme Court this summer ruled that age verification laws are constitutional in some instances. A tech group on Wednesday petitioned the Supreme Court to block a social media age verification law in Mississippi, teeing up a highly consequential decision in the next few weeks.

Child advocates say holding tech companies responsible for verifying the ages of their users is key to creating a safer online experience for minors. Parents and advocates have alleged that social media platforms funnel children into unsafe and toxic online spaces, exposing young people to harmful content about self-harm, eating disorders, drug abuse and more.

Blame game

Meta supporters argue the app stores should be responsible for figuring out whether minors are accessing inappropriate content, comparing the app store to a liquor store that checks patrons' IDs. Apple and Google, meanwhile, contend that age verification laws violate children's privacy and that the individual apps are better positioned to do age checks.
Apple said it's more accurate to describe the app store as a mall and Meta as the liquor store inside it.

The three new state laws put the responsibility on app stores, signaling that Meta's arguments are gaining traction. The company lobbied in support of the Utah and Louisiana laws putting the onus on Apple and Google for tracking their users' ages. Similar Meta-backed proposals have been introduced in 20 states, and federal legislation proposed by Republican Senator Mike Lee of Utah would hold the app stores accountable for verifying users' ages.

Still, Meta's track record in its state campaigns is mixed. At least eight states have passed laws since 2024 forcing social media platforms to verify users' ages and protect minors online. Apple and Google have mobilized dozens of lobbyists across those states to argue that Meta is shirking responsibility for protecting children.

'We see the legislation being pushed by Meta as an effort to offload their own responsibilities to keep kids safe,' said Google spokesperson Danielle Cohen. 'These proposals introduce new risks to the privacy of minors, without actually addressing the harms that are inspiring lawmakers to act.'

Meta spokesperson Rachel Holland countered that the company is supporting the approach favored by parents who want to keep their children safe online. 'Parents want a one-stop shop to oversee their teen's online lives, and 80% of American parents and bipartisan lawmakers across 20 states and the federal government agree that app stores are best positioned to provide this,' Holland said.

As the regulatory patchwork continues to take shape, the companies have each taken voluntary steps to protect children online. Meta has implemented new protections to restrict teens from accessing 'sensitive' content, such as posts related to suicide, self-harm and eating disorders. Apple created 'Child Accounts', which give parents more control over their children's online activity.
At Apple, spokesperson Peter Ajemian said the company 'soon will release our new age assurance feature that empowers parents to share their child's age range with apps without disclosing sensitive information.'

Splintered groups

As the lobbying battle over age verification heats up, influential big tech groups are splintering and new ones are emerging.

Meta last year left Chamber of Progress, a liberal-leaning tech group that counts Apple and Google as members. Since then, the chamber, which is led by a former Google lobbyist and brands itself as the Democratic-aligned voice for the tech industry, has grown more aggressive in its advocacy against all age verification bills.

'I understand the temptation within a company to try to redirect policymakers towards the company's rivals, but ultimately most legislators don't want to intervene in a squabble between big tech giants,' said Chamber of Progress CEO Adam Kovacevich.

Meta tried unsuccessfully to convince another major tech trade group, the Computer & Communications Industry Association, to stop working against bills Meta supports, two people familiar with the dynamics said. Meta, a CCIA member, acknowledged it doesn't always agree with the association.

Meta is also still a member of NetChoice, which opposes all age verification laws no matter who's responsible. The group currently has 10 active lawsuits on the matter, including battles against some of Meta's preferred laws.

The disagreements have prompted some of the companies to form entirely new lobbying outfits. Meta in April teamed up with Spotify Technology SA and Match Group Inc to launch a coalition aimed at taking on Apple and Google, including over the issue of age verification.

Competing campaigns

Meta is also helping to fund the Digital Childhood Alliance, a coalition of conservative groups leading efforts to pass app-store age verification, according to three people familiar with the funding.
Neither the Digital Childhood Alliance nor Meta responded directly to questions about whether Meta is funding the group, but Meta said it has collaborated with the alliance.

The group's executive director, Casey Stefanski, said it includes more than 100 organizations and child safety advocates who are pushing for more legislation that puts responsibility on the app stores. Stefanski said the Digital Childhood Alliance has met with Google 'several times' in recent months to share its concerns about the app store.

The App Association, a group backed by Apple, has been running ads in Texas, Alabama, Louisiana and Ohio arguing that the app-store age verification bills are backed by porn websites and companies. The adult entertainment industry's main lobby said it is not pushing for the bills; pornography is mostly banned from app stores.

'This one-size-fits-all approach is built to solve problems social media platforms have with their systems while making our members, small tech companies and app developers, collateral damage,' said App Association spokesperson Jack Fleming.

In South Carolina and Ohio, there are competing proposals placing different levels of responsibility on the app stores and developers. That could end with more stringent legislation that makes neither side happy.

'When big tech acts as a monolith, that's when things die,' said Joel Thayer, a supporter of the app-store age verification bills. 'But when they start breaking up that concentration of influence, all of a sudden good things start happening, because the reality is, these guys are just a hair's breadth away from eating each other alive.' – Bloomberg


The Star, a day ago
AI is replacing search engines as a shopping guide, research suggests
Finding products, comparing prices and browsing reviews: until now, you'd have done most of this in a search engine like Google. But that era appears to be ending thanks to AI, research shows. – Photo: Christin Klose/dpa

COPENHAGEN: Three in four people who use AI are turning to the likes of ChatGPT, Gemini and Copilot for advice and recommendations on shopping and travel, instead of using search engines like Google, new research shows.

AI-supported online shopping is done at least occasionally by 76% of AI users, with 17% doing so most or even all of the time, according to a study conducted by the market research institute Norstat on behalf of Verdane, a leading European investment company.

The changes in consumer search behaviour pose a major challenge not only for search engine providers like Google but also for manufacturers and retailers, who must adapt to maintain their visibility in the AI-driven world. AI chatbots have emerged as powerful tools for tracking down specific products, often providing helpful advice in response to complex and specific queries.

Of the survey respondents, 3% are dedicated AI enthusiasts who always use AI tools instead of search engines when shopping online, while 14% said they mostly use AI and 35% do so occasionally. A total of 7,282 people from the UK, Germany, Sweden, Norway, Denmark and Finland, aged between 18 and 60, participated in the survey in June.

The highest proportion of AI use is in online travel research, at 33%. This is followed by consumer electronics (22%), DIY and hobby supplies (20%), and software or digital subscriptions (19%). However, AI usage is still relatively low in fashion and clothing (13%), cosmetics (12%), and real estate (7%).

Among AI tools, ChatGPT is far ahead of its competitors: 86% of AI users regularly use OpenAI's chatbot. It is followed at a considerable distance by Google's Gemini (26% regular users) and Microsoft's Copilot (20%). The Chinese AI bot DeepSeek, which has been the subject of heated debate among AI experts and data protection advocates, appears to play no significant role among consumers in Europe. – dpa


The Star, 2 days ago
‘It's the most empathetic voice in my life': How AI is transforming the lives of neurodivergent people
For Cape Town-based filmmaker Kate D'hotman, connecting with movie audiences comes naturally. Far more daunting is speaking with others. 'I've never understood how people [decipher] social cues,' the 40-year-old director of horror films says.

D'hotman has autism and attention-deficit hyperactivity disorder (ADHD), which can make relating to others exhausting and a challenge. However, since 2022, D'hotman has been a regular user of ChatGPT, the popular AI-powered chatbot from OpenAI, relying on it to overcome communication barriers at work and in her personal life. 'I know it's a machine,' she says. 'But sometimes, honestly, it's the most empathetic voice in my life.'

Neurodivergent people – including those with autism, ADHD, dyslexia and other conditions – can experience the world differently from the neurotypical norm. Talking to a colleague, or even texting a friend, can entail misread signals, a misunderstood tone and unintended impressions.

AI-powered chatbots have emerged as an unlikely ally, helping people navigate social encounters with real-time guidance. Although this new technology is not without risks – in particular, some worry about over-reliance – many neurodivergent users now see it as a lifeline.

How does it work in practice? For D'hotman, ChatGPT acts as an editor, translator and confidant. Before using the technology, she says, communicating in neurotypical spaces was difficult. She recalls how she once sent her boss a bulleted list of ways to improve the company, at their request. But what she took to be a straightforward response was received as overly blunt, and even rude.

Now, she regularly runs things by ChatGPT, asking the chatbot to consider the tone and context of her conversations. Sometimes she'll instruct it to take on the role of a psychologist or therapist, asking for help to navigate scenarios as sensitive as a misunderstanding with her best friend.
She once uploaded months of messages between them, prompting the chatbot to help her see what she might have otherwise missed. Unlike humans, D'hotman says, the chatbot is positive and non-judgmental.

That's a feeling other neurodivergent people can relate to. Sarah Rickwood, a senior project manager in the sales training industry, based in Kent, England, has ADHD and autism. Rickwood says she has ideas that run away with her and often loses people in conversations. 'I don't do myself justice,' she says, noting that ChatGPT has 'allowed me to do a lot more with my brain.' With its help, she can put together emails and business cases more clearly.

The use of AI-powered tools is surging. A January study conducted by Google and the polling firm Ipsos found that AI usage globally has jumped 48%, with excitement about the technology's practical benefits now exceeding concerns over its potentially adverse effects. In February, OpenAI told Reuters that its weekly active users had surpassed 400 million, of which at least 2 million are paying business users.

But for neurodivergent users, these aren't just tools of convenience, and some AI-powered chatbots are now being created with the neurodivergent community in mind.

Michael Daniel, an engineer and entrepreneur based in Newcastle, Australia, told Reuters that it wasn't until his daughter was diagnosed with autism – and he received the same diagnosis himself – that he realised how much he had been masking his own neurodivergent traits. His desire to communicate more clearly with his neurotypical wife and loved ones inspired him to build NeuroTranslator, an AI-powered personal assistant, which he credits with helping him fully understand and process interactions, as well as avoid misunderstandings.

'Wow … that's a unique shirt,' he recalls saying about his wife's outfit one day, without realising how his comment might be perceived.
She asked him to run the comment through NeuroTranslator, which helped him recognise that, without a positive affirmation, remarks about a person's appearance could come across as criticism. 'The emotional baggage that comes along with those situations would just disappear within minutes,' he says of using the app.

Since its launch in September, Daniel says, NeuroTranslator has attracted more than 200 paid subscribers. An earlier web version of the app, called Autistic Translator, amassed 500 monthly paid subscribers.

As transformative as this technology has become, some warn against becoming too dependent. The ability to get results on demand can be 'very seductive', says Larissa Suzuki, a London-based computer scientist and visiting NASA researcher who is herself neurodivergent.

Overreliance could be harmful if it inhibits neurodivergent users' ability to function without it, or if the technology itself becomes unreliable – as is already the case with many AI search-engine results, according to a recent study from the Columbia Journalism Review. 'If AI starts screwing up things and getting things wrong,' Suzuki says, 'people might give up on technology, and on themselves.'

Baring your soul to an AI chatbot does carry risk, agrees Gianluca Mauro, an AI adviser and co-author of Zero to AI. 'The objective [of AI models like ChatGPT] is to satisfy the user,' he says, raising questions about its willingness to offer critical advice. Unlike therapists, these tools aren't bound by ethical codes or professional guidelines. If AI has the potential to become addictive, Mauro adds, regulation should follow.

A recent study by Carnegie Mellon and Microsoft (which is a key investor in OpenAI) suggests that long-term overdependence on generative AI tools can undermine users' critical-thinking skills and leave them ill-equipped to manage without them.
'While AI can improve efficiency,' the researchers wrote, 'it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI.'

While Dr Melanie Katzman, a clinical psychologist and expert in human behaviour, recognises the benefits of AI for neurodivergent people, she does see downsides, such as giving patients an excuse not to engage with others. A therapist will push their patient to try different things outside their comfort zone. 'I think it's harder for your AI companion to push you,' she says.

But for users who have come to rely on this technology, such fears are academic. 'A lot of us just end up kind of retreating from society,' warns D'hotman, who says that she barely left the house in the year following her autism diagnosis, feeling overwhelmed. Were she to give up using ChatGPT, she fears she would return to that traumatic period of isolation. 'As somebody who's struggled with a disability my whole life,' she says, 'I need this.' (Editing by Yasmeen Serhan and Sharon Singleton)