Magnus Carlsen Beats ChatGPT in Chess Match Without Losing a Single Piece

Magnus Carlsen, the world's top-ranked chess player, has once again shown why he sits at the summit of the game. This time, though, his opponent wasn't a grandmaster but an AI chatbot. ChatGPT, OpenAI's popular chatbot, played an online game against Carlsen, who won in 53 moves. More remarkable still, he didn't lose a single piece in the entire game.
Carlsen posted screenshots of the game on X (formerly Twitter) with the tongue-in-cheek caption "just a little bored during travel", a reminder of just how casually he took the match. Although the AI opened with some strong moves, it could not keep pace with Carlsen's sharp, intuitive play.
ChatGPT eventually conceded: "All my pawns are gone. You haven't lost a single piece. You fulfilled your win condition perfectly... I resign. That was methodical, clean, and sharp."
After the match, Carlsen offered the chatbot some advice. The Norwegian grandmaster said the opening and the piece sacrifice were interesting ideas, but the AI failed to follow through on them. He also asked ChatGPT to guess his rating based on the game, and the bot estimated around 1800-2000 FIDE, roughly the strength of a solid club player.
The estimate could not have been further from the truth: Carlsen's FIDE rating of 2839 is among the highest in the history of chess. The last time his rating was anywhere near 2000 was in 2001, when he was a young beginner.
Despite the loss, ChatGPT complimented Carlsen's play. It praised his use of the Philidor Defense and his positional regrouping with ...Bf8 and ...Re8, and it noted the force of ...Nf3+ as well as his endgame technique. Carlsen had played with precision and discipline to convert a small edge into a victory, the bot said. ChatGPT even remarked that Carlsen quickly spotted illegal moves, a sign of his over-the-board experience as a real chess player.
Carlsen has spoken about AI before. Early on, he has said, it was exciting because it let players test new ideas; now he finds it harder to gain an edge from AI because everyone has access to the same tools.

Related Articles

China Launches Undersea AI Data Hub to Slash Cooling Water Use and Emissions

International Business Times · 30 minutes ago

China has begun building an underwater AI data center near Shanghai in an effort to reduce the energy and water consumed by traditional server-cooling systems. Construction began in June, with operations expected to commence by September. The facility, developed by Hailanyun, aims to minimize environmental impact by using seawater to cool its servers and by drawing power from a nearby offshore wind farm that will meet 97% of its energy needs. The first phase includes 198 server racks, designed to host hundreds of AI-capable servers, enough to train a large AI model like OpenAI's GPT-3.5 within a day.

AI data centers typically consume vast amounts of electricity and water. Nearly 40% of their energy use goes into cooling systems that prevent overheating from tightly packed, continuously running servers. These systems often draw water from underground sources, rivers, or recycled wastewater, raising sustainability concerns, particularly in water-stressed regions.

Shabrina Nadhila of the energy think tank Ember called China's project a "bold shift toward low-carbon digital infrastructure" in comments to The Guardian and SourceMaterial, saying it may set a new global standard for sustainable AI computing. Marine scientists, however, caution that undersea data centers may pose ecological risks: discharged warm water could harm ocean biodiversity during marine heat waves, and the facilities could be vulnerable to underwater sound interference, which might cause data damage.
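To put the cooling figures in perspective, here is a minimal back-of-the-envelope sketch in Python. The 10 MW facility size is an assumed number chosen purely for illustration (no capacity figure is reported for the Hailanyun project); the 40% cooling share and 97% wind coverage come from the article.

# Back-of-the-envelope sketch of the energy picture described above.
# ASSUMPTION: TOTAL_LOAD_MW is an illustrative size, not a reported
# figure for the Hailanyun facility. The cooling share (~40%) and
# wind coverage (97%) are taken from the article.

TOTAL_LOAD_MW = 10.0   # assumed total facility draw (illustrative)
COOLING_SHARE = 0.40   # ~40% of energy typically goes to cooling
WIND_COVERAGE = 0.97   # offshore wind meets 97% of demand

cooling_mw = TOTAL_LOAD_MW * COOLING_SHARE          # load spent on cooling
compute_mw = TOTAL_LOAD_MW - cooling_mw             # load left for servers
non_wind_mw = TOTAL_LOAD_MW * (1 - WIND_COVERAGE)   # draw not covered by wind

print(f"Cooling load:      {cooling_mw:.1f} MW")    # 4.0 MW
print(f"Compute load:      {compute_mw:.1f} MW")    # 6.0 MW
print(f"Non-wind residual: {non_wind_mw:.1f} MW")   # 0.3 MW

Under these assumptions, passive seawater cooling attacks the 4 MW cooling slice directly, which is why undersea siting can move the needle on both electricity and freshwater draw.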

Who are Jason Wei and Hyung Won Chung? Meta Hires 2 More OpenAI Engineers

International Business Times · 2 hours ago

Meta, the parent company of Facebook, has recruited two high-profile researchers from OpenAI, Jason Wei and Hyung Won Chung, in its latest AI hiring spree. The move is the latest sign of Meta's ambition to shape the future of artificial intelligence and contend with rivals like Google DeepMind and Apple. Wired reported that OpenAI has already deactivated Wei and Chung's internal Slack accounts, confirming their departure. The two researchers are said to have collaborated on several of OpenAI's major projects before moving to Meta, including deep research and the o1 and o3 reasoning models.

Jason Wei, who joined OpenAI in 2023 after a stint at Google, has spent the past few years working on chain-of-thought reasoning in AI, the technique of training models to solve complex problems one step at a time. Wei is also a proponent of reinforcement learning, in which AI models are rewarded for making correct choices, an approach now pivotal to the field. Hyung Won Chung, who also joined OpenAI in 2023, collaborated with Wei on similar projects; much of his research focuses on reasoning systems and agents. The two had been colleagues at Google as well, making their move to Meta a notable double hire.

Meta's recent hiring rush has pulled in top AI talent from some of the biggest names in the technology sector. Some recruits to its AI division, most notably those working on superintelligence projects, have reportedly been offered as much as $300 million over four years.

Not everyone welcomes this talent grab, however. Dell Technologies CEO Michael Dell has voiced concerns about the cultural impact of such fierce recruitment. On a podcast with venture capitalists Bill Gurley and Brad Gerstner, Dell said that outsized pay for new hires could breed resentment among long-tenured coworkers. "It's going to be a cultural challenge, no doubt," he said, cautioning that Meta's teams could become divided if current employees feel overlooked or underappreciated next to the huge packages newcomers are receiving.
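As a rough illustration of what chain-of-thought prompting means in practice, the Python sketch below contrasts a direct prompt with a step-by-step prompt. The `ask_model` function is a hypothetical placeholder for any LLM client call, not a real OpenAI or Meta API, and the question is an invented example.

# Illustrative sketch of chain-of-thought (CoT) prompting.
# `ask_model` is a HYPOTHETICAL stand-in for an LLM client call;
# it is not a real OpenAI or Meta API.

def ask_model(prompt: str) -> str:
    """Placeholder for a model call; wire up a real client here."""
    return f"<model response to: {prompt!r}>"

question = ("A train covers 120 km in 2 hours, then 60 km in 1 hour. "
            "What is its average speed?")

# Direct prompting: the model must produce the answer in one jump.
direct = ask_model(question + " Answer with a single number.")

# Chain-of-thought prompting: asking for intermediate steps
# (total distance 180 km, total time 3 h, so 60 km/h) tends to
# improve accuracy on multi-step problems; this is the line of
# research Wei is known for.
cot = ask_model(question + " Let's think step by step, then state "
                           "the final answer.")

print(direct)
print(cot)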

Unclear rules on AI use in classrooms are creating confusion and distrust

Straits Times · 7 hours ago

The advent of generative AI has elicited waves of frustration and worry across academia for all the reasons one might expect: early studies suggest that artificial intelligence tools can dilute critical thinking and undermine problem-solving skills, and there are many reports of students using chatbots to cheat on assignments. But how do students feel about AI? And how is it affecting their relationships with peers, instructors and their coursework?

While there is a growing body of research on how generative AI is affecting higher education, one group is under-represented in this literature yet perhaps uniquely qualified to speak to the issue: our students. Our team ran a series of focus groups with 95 students across our campuses in the spring of 2025 and found that whether students and faculty are actively using AI or not, it is having significant interpersonal and emotional effects on learning and trust in the classroom. While AI products such as ChatGPT, Gemini or Claude are, of course, affecting how students learn, their emergence is also changing students' relationships with their professors and with one another.

'It's not going to judge you'

Most of our focus group participants had used AI in an academic setting: when facing a time crunch, when they perceived something to be 'busy work', or when they were 'stuck' and worried they couldn't complete a task on their own. We found that most students don't start a project with AI, but many are willing to turn to it at some point. Many described positive experiences using AI to help them study, answer questions or get feedback on papers. Some even described using AI instead of a professor, tutor or teaching assistant. Others found a chatbot less intimidating than office hours, where professors might be 'demeaning'. In the words of one interviewee: 'With ChatGPT, you can ask as many questions as you want and it's not going to judge you.'

But by using it, you may be judged. While some students were excited about AI, many voiced mild feelings of guilt or shame about their use of it, citing environmental or ethical concerns or a fear of coming across as lazy. Some even expressed a feeling of helplessness, or a sense of inevitability, about AI in their futures.
Anxiety, distrust and avoidance

While many students expressed a sense that faculty members are, as one participant put it, 'very anti-ChatGPT', they also lamented that the rules around acceptable AI use were not sufficiently clear. As one urban planning major put it: 'I feel uncertain of what the expectations are,' with her peer chiming in: 'We're not on the same page with students and teachers or even individually. No one really is.'

Students also described distrust and frustration towards peers they saw as overly reliant on AI. Some talked about asking classmates for help, only to find that they 'just used ChatGPT' and hadn't learnt the material. Others pointed to group projects, where AI use was described as 'a giant red flag' that made them 'think less' of their peers.

These experiences feel unfair and uncomfortable for students. They can report their classmates for academic integrity violations, entering yet another zone in which distrust mounts, or they can try to work with them, sometimes resentfully. 'It ends up being more work for me,' a political science major said, 'because it's not only me doing my work by myself, it's me double-checking yours.'

We observed distrust in both student-to-teacher and student-to-student relationships. Learners shared fears of being left behind if classmates used chatbots to get better grades, which bred emotional distance and wariness among students. Indeed, our findings echo other reports that the mere possibility a student might have used a generative AI tool is now undercutting trust across the classroom. Students are as anxious about baseless accusations of AI use as they are about being caught using it.

Students described feeling anxious, confused and distrustful, and sometimes even avoiding peers or learning interactions. As educators, we find this worrying. We know that academic engagement, a key marker of student success, comes not only from studying the course material but also from positive engagement with classmates and instructors alike.

AI is affecting relationships

Research has shown that faculty-student relationships are an important indicator of student success, and peer-to-peer relationships are essential too. If students are sidestepping mentoring relationships with professors or meaningful learning experiences with peers out of discomfort over ambiguous or shifting norms around AI, institutions of higher education could imagine alternative pathways for connection. Residential campuses could double down on in-person courses and connections; faculty could be incentivised to encourage students to visit during office hours. Faculty-led research, mentoring and informal campus events where faculty and students mix could also make a difference.

We hope our research can also flip the script and disrupt the trope of students who use AI as 'cheaters'. It tells a more complex story: students thrust into a reality they didn't ask for, with few clear guidelines and little control. As generative AI continues to pervade everyday life, and institutions of higher education continue to search for solutions, our focus groups underline the importance of listening to students and finding new ways to help them feel comfortable connecting with peers and faculty.
Understanding these evolving interpersonal dynamics matters, because how we relate to technology increasingly shapes how we relate to one another. From our conversations with them, it is clear that students are more than ready to talk about this issue and its impact on their futures.

Elise Silva is director of policy research at the Institute for Cyber Law, Policy and Security, University of Pittsburgh. This article first appeared in The Conversation.
