Number of preschoolers in Singapore learning with AI programmes jumps 75%


CNA · 7 hours ago
The number of preschoolers taking up AI programmes has jumped by as much as 75 per cent over the past year. This comes as some preschools have doubled the number of early education apps used in classes, to help children pick up languages more easily and spark creativity. Yet experts warn that exposing kids to AI so early has its issues - in particular, excessive screen time. Muhammad Bahajjaj with more.


Related Articles

Google and OpenAI's AI models win milestone gold at global math competition

CNA · 7 minutes ago

Alphabet's Google and OpenAI said their artificial-intelligence models won gold medals at a global mathematics competition, signaling a breakthrough in math capabilities in the race to build powerful systems that can rival human intelligence. The results marked the first time that AI systems crossed the gold-medal scoring threshold at the International Mathematical Olympiad for high-school students. Both companies' models solved five out of six problems, achieving the result using general-purpose "reasoning" models that processed mathematical concepts using natural language, in contrast to the previous approaches used by AI firms.

The achievement suggests AI is less than a year away from being used by mathematicians to crack unsolved research problems at the frontier of the field, according to Junehyuk Jung, a math professor at Brown University and visiting researcher in Google's DeepMind AI unit. "I think the moment we can solve hard reasoning problems in natural language will enable the potential for collaboration between AI and mathematicians," Jung told Reuters. The same idea can apply to research quandaries in other fields such as physics, said Jung, who won an IMO gold medal as a student in 2003.

Of the 630 students participating in the 66th IMO on the Sunshine Coast in Queensland, Australia, 67 contestants, or about 11 per cent, achieved gold-medal scores. Google's DeepMind AI unit last year achieved a silver-medal score using AI systems specialized for math. This year, Google used a general-purpose model called Gemini Deep Think, a version of which was previously unveiled at its annual developer conference in May. Unlike previous AI attempts that relied on formal languages and lengthy computation, Google's approach this year operated entirely in natural language and solved the problems within the official 4.5-hour time limit, the company said in a blog post.

OpenAI, which has its own set of reasoning models, similarly built an experimental version for the competition, according to a post by researcher Alexander Wei on social media platform X. He noted that the company does not plan to release anything with this level of math capability for several months.

This year marked the first time the competition coordinated officially with some AI developers, who have for years used prominent math competitions like IMO to test model capabilities. IMO judges certified the results of those companies, including Google, and asked them to publish results on July 28. "We respected the IMO Board's original request that all AI labs share their results only after the official results had been verified by independent experts and the students had rightly received the acclamation they deserved," Google DeepMind CEO Demis Hassabis said on X on Monday. However, OpenAI, which did not work with the IMO, self-published its results on Saturday, allowing it to be first among AI firms to claim gold-medal status.

Commentary: More people are considering AI lovers, and we shouldn't judge

CNA · 7 minutes ago

WINNIPEG, Canada: People are falling in love with their chatbots. There are now dozens of apps that offer intimate companionship with an artificial intelligence-powered bot, and they have millions of users. A recent survey found that 19 per cent of Americans have interacted with an AI meant to simulate a romantic partner.

The response has been polarising. In a New Yorker article titled Your AI Lover Will Change You, futurist Jaron Lanier argued that 'when it comes to what will happen when people routinely fall in love with an AI, I suggest we adopt a pessimistic estimate about the likelihood of human degradation.' Podcaster Joe Rogan put it more succinctly. In a recent interview with US Senator Bernie Sanders, the two discussed the 'dystopian' prospect of people marrying their AIs. Noting a case where this has already happened, Rogan said: 'I'm like, oh, we're done. We're cooked.'

We're probably not cooked. Rather, we should consider accepting human-AI relationships as beneficial and healthy. More and more people are going to form such relationships in the coming years, and my research in sexuality and technology indicates it is mostly going to be fine.

RUINING HUMAN CONNECTION?

When surveying the breathless media coverage, the main concern raised is that chatbots will spoil us for human connection. How could we not prefer their cheerful personalities, their uncomplicated affection and their willingness to affirm everything we say? The fear is that, seduced by such easy companionship, many people will give up their desire to find human partners, while others will lose their ability to form satisfying human relationships even if they want to.

It has been less than three years since the launch of ChatGPT and other chatbots based on large language models, which means we can only speculate about the long-term effects of AI-human relationships on our capacity for intimacy. There is little data to support either side of the debate, though we can do our best to make sense of short-term studies and other available evidence.

There are certain risks that we do know about already, and we should take them seriously. For instance, we know that AI companion apps have terrible privacy policies. Chatbots can encourage destructive behaviours; tragically, one may have played a role in a teenager's suicide. The companies that provide these apps can go out of business, or they can change their terms of service without warning. This can suddenly deprive users of access to technology they've become emotionally attached to, with no recourse or support.

RELATIONSHIPS CAN BE MESSY AND COMPLEX

In assessing the dangers of relationships with AI, however, we should remember that human relationships are not exactly risk-free. One recent paper concluded that 'the association between relationship distress and various forms of psychopathology is as strong as many other well-known predictors of mental illness.'

This is not to say we should swap human companions for AI ones. We just need to keep in mind that relationships can be messy, and we are always trying to balance the various challenges that come with them. AI relationships are no different.

We should also remember that just because someone forms an intimate bond with a chatbot, that doesn't mean it will be their only close relationship. Most people have many different people in their lives, who play a variety of roles. Chatbot users may depend on their AI companions for support and affirmation while still having human relationships that provide different kinds of challenges and rewards.

Meta's Mark Zuckerberg has suggested that AI companions may help solve the problem of loneliness. However, there is some (admittedly very preliminary) data to suggest that many of the people who form connections with chatbots are not just trying to escape loneliness. In a recent study (which has not yet been peer reviewed), researchers found that feelings of loneliness did not play a measurable role in someone's desire to form a relationship with an AI. Instead, the key predictor seemed to be a desire to explore romantic fantasies in a safe environment.

SUPPORT AND SAFETY

We should be willing to accept AI-human relationships without judging the people who form them. This follows a general moral principle most of us already accept: we should respect the choices people make about their intimate lives when those choices don't harm anyone else. However, we can also take steps to ensure that these relationships are as safe and satisfying as possible.

First, governments should implement regulations to address the risks we already know about. They should, for instance, hold companies accountable when their chatbots suggest or encourage harmful behaviour. Governments should also consider safeguards to restrict access by younger users, or at least to control the behaviour of chatbots that interact with young people. And they should mandate better privacy protections, though this is a problem that spans the entire tech industry.

Second, we need public education so people understand exactly what these chatbots are and the issues that can arise with their use. Everyone would benefit from full information about the nature of AI companions, but in particular we should develop curricula for schools as soon as possible. While governments may need to consider some form of age restriction, the reality is that large numbers of young people are already using this technology and will continue to do so. We should offer them non-judgmental resources to help them navigate their use in a manner that supports their well-being, rather than stigmatises their choices.

AI lovers aren't going to replace human ones. For all the messiness and agony of human relationships, we still (for some reason) pursue other people. But people will also keep experimenting with chatbot romances, if for no other reason than that they can be a lot of fun.

OpenAI and UK sign new AI agreement to boost security, infrastructure

Straits Times · 4 hours ago

LONDON - The UK government said it signed a strategic partnership with OpenAI on July 21, with plans to expand AI security research collaborations and explore investing in AI infrastructure such as data centres.

The Microsoft-backed AI startup will also expand its London office, building up the research and engineering teams at OpenAI's first international location, opened two years ago, according to a statement.

As part of the agreement, OpenAI will share technical information with the UK AI Security Institute to deepen the British government's knowledge of AI capabilities and security risks.

'The partnership will explore where it can deploy AI in areas such as justice, defence and security, and education technology in line with UK standards and guidelines to demonstrate the opportunity to make taxpayer-funded services more efficient and effective,' the statement said. REUTERS
