logo
Tracking AI models' ‘thoughts' could reveal how they make decisions, researchers say

Tracking AI models' ‘thoughts' could reveal how they make decisions, researchers say

Indian Express16-07-2025
A broad coalition drawn from the ranks of multiple AI companies, universities, and non-profit organisations have called for deeper scrutiny of AI reasoning models, particularly their 'thoughts' or reasoning traces.
In a new position paper published on Tuesday, July 15, the authors said that monitoring the chains-of-thought (CoT) by AI reasoning models could be pivotal to keeping AI agents in check.
Reasoning models such as OpenAI's o3 differ from large language models (LLMs) such as GPT-4o as the former is said to follow an externalised process where they work out the problem step-by-step before generating an answer, according to a report by TechCrunch.
Reasoning models can be used to perform tasks such as solving complex math and science problems. They also serve as the underlying technology for AI agents capable of autonomously accessing the internet, visiting websites, making hotel reservations, etc, on behalf of users.
This push to advance AI safety research could help shed light on how AI reasoning models work, an area that remains poorly understood despite these models reportedly improving the overall performance of AI on benchmarks.
'CoT monitoring presents a valuable addition to safety measures for frontier AI, offering a rare glimpse into how AI agents make decisions,' the paper reads. 'Yet, there is no guarantee that the current degree of visibility will persist. We encourage the research community and frontier AI developers to make the best use of CoT monitorability and study how it can be preserved,' it adds.
The paper calls on leading AI model developers to determine whether CoT reasoning is 'monitorable' and to track its monitorability. It urges deeper research on the factors that could shed more light on how these AI models arrive at answers. AI developers should also look into whether CoT reasoning can be used as a safeguard to prevent AI-related harms, as per the document.
But, the paper carries a cautionary note as well. It suggests that any interventions should not make the AI reasoning models less transparent or reliable.
In September last year, OpenAI released a preview of its first-ever AI reasoning model called o1. This launch prompted other companies to release competing models with similar capabilities such as Gemini 2.0, Claude 3.7 Sonnet, and xAI's Grok 3, among others.
Anthropic researchers have been studying AI reasoning models, with a recent academic study suggesting that AI models can fake CoT reasoning. Another research paper from OpenAI found that CoT monitoring could enable better alignment of AI models with human behaviour and values.
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

‘Kids will be fine; I'm worried about…': Sam Altman says his kid will probably not attend college
‘Kids will be fine; I'm worried about…': Sam Altman says his kid will probably not attend college

Time of India

timean hour ago

  • Time of India

‘Kids will be fine; I'm worried about…': Sam Altman says his kid will probably not attend college

Sam Altman , the CEO of ChatGPT-maker OpenAI , has offered his take on the future of education , suggesting his own child will "probably not" attend college. He said that AI will change education, noting that the change will possibly come in 18 years. He said that education is 'going to feel very different' when a new generation will not know a world without AI. Speaking on the "This Past Weekend" podcast with comedian Theo Von, Altman suggested that AI will not 'kill' education but it will evolve. Altman predicted that future generations, including his own children, will grow up in a reality where they will never be 'smarter than AI' and will never know a world where products and services aren't intellectually superior to them. 'In that world, education is going to feel very different. I already think college is, like, maybe not working great for most people, but I think if you fast-forward 18 years it's going to look like a very, very different thing,' Altman stated. Altman has 'deep worries' about technology's impact on kids While Altman admitted to having "deep worries" about technology's broader impact on children and their development, specifically citing the "dopamine hit" from short-form video, his primary concern lies not with the youth, but with adults. He believes the true challenge of advancing AI will be whether older generations can effectively adapt to the new technological paradigm. 'I actually think the kids will be fine; I'm worried about the parents,' he explained. 'If you look at the history of the world when there's a new technology—people that grow up with it, they're always fluent. They always figure out what to do. They always learn new kinds of jobs. But if you're like a 50-year-old and you have to kind of learn how to do things in a very different way, that doesn't always work,' he said. He explained it by stating that our parents did not grow up with computers but for the current generation, computers were always there. 7 Reasons that make Samsung GALAXY Z FLIP7 different from others

'Will AI replace lawyers? Law intern asks to use ChatGPT for witness analysis, gets hard copies instead
'Will AI replace lawyers? Law intern asks to use ChatGPT for witness analysis, gets hard copies instead

Hindustan Times

timean hour ago

  • Hindustan Times

'Will AI replace lawyers? Law intern asks to use ChatGPT for witness analysis, gets hard copies instead

As artificial intelligence (AI) continues to make inroads across industries, the legal profession is grappling with its place in traditional workflows. A recent viral post by senior advocate Sidharth Luthra on Twitter has sparked a fresh wave of debate.(AP) A recent viral post by senior advocate Sidharth Luthra on X has sparked a fresh wave of debate, after a law intern revealed he was prevented from using ChatGPT to analyse witness statements during an appeal case. The intern, who was assisting with legal work, had proposed using the AI tool to help process and interpret complex witness testimonies. However, his mentor reportedly dismissed the idea and instead handed him a stack of hard copies, telling him to rely on 'natural intelligence' over artificial intelligence. (Also Read: IT firm employee claims no salary hike for 3 years, wants to buy autorickshaw: 'Drive it before, after working hours') 'A law intern wanted to use Chat GPT to analyse witness statements for an appeal in my office. His mentor gave him hard copies to use natural Intelligence instead of artificial Intelligence [AI]!!!!,' Luthra wrote on X. The post caught fire on social media, with many users, including legal professionals, weighing in on what it says about the current attitude toward technology in law. How did X users react? 'AI can very well read and analyse hard copies as well. At times better than humans,' one user commented. 'Law practitioners must embrace AI, not oppose it.' Another responder shared a startling anecdote from the field, 'In some prominent police stations in Telugu states, officers are reportedly using ChatGPT to prepare remand copies. Statements were being recorded and processed by AI, all without the accused knowing. It was saving typing time and the sections were also accurate, they said.' Some took a more balanced view, noting the potential of AI when combined with human judgment. 'Natural intelligence combined with artificial intelligence is the way forward.' 'AI won't replace lawyers, but lawyers who use AI will replace those who don't, the next 5-7 years will prove this.' The exchange has since opened a wider conversation about how, and when, AI should be integrated into legal processes. While some see it as a threat to tradition, ethics, or even job security, others argue that AI tools like ChatGPT can streamline research, improve drafting accuracy, and eliminate repetitive tasks. (Also Read: 'Utterly shameful': Viral video of foreign tourist cleaning waterfall in Himachal sparks debate)

‘AI's real risk is human intent, not rogue machines,' warns OpenAI boss Sam Altman
‘AI's real risk is human intent, not rogue machines,' warns OpenAI boss Sam Altman

Mint

timean hour ago

  • Mint

‘AI's real risk is human intent, not rogue machines,' warns OpenAI boss Sam Altman

OpenAI chief executive Sam Altman has voiced his growing concerns over the misuse of artificial intelligence, warning that the real danger lies not in autonomous machines going rogue, but in people using AI tools to cause deliberate harm. Speaking on a recent episode of Theo Von's podcast, Altman addressed the long-debated question of AI risks. Rather than echoing dystopian fears of machines turning against humanity, he shifted the spotlight to human intent. 'I worry more about people using AI to do bad things than the AI deciding to do bad things on its own,' Altman said. His remarks mark a departure from the typical science-fiction narrative of killer robots and self-aware systems, instead highlighting a more immediate and realistic challenge, the potential for malicious actors to exploit advanced AI models. 'The risk is if someone really wants to cause harm and they have a very powerful tool to do it,' he noted, pointing to the ease with which powerful AI systems could be weaponised if left unchecked. Altman acknowledged the difficulty of designing AI systems that remain safe and beneficial while in the hands of millions of users. 'We're trying to build guardrails as we go. That's hard, but necessary,' he admitted, underlining the ongoing efforts at OpenAI to embed ethical guidelines and technical safeguards into its models. His comments come at a time when OpenAI is facing increased scrutiny from policymakers and civil society, particularly as speculation mounts around the development of GPT-5. With generative AI becoming more accessible and influential in everyday life, questions around governance, accountability, and control are more pressing than ever. Meanwhile, OpenAI has officially begun rolling out its new artificial intelligence agent, ChatGPT Agent, after a week-long delay. Originally announced on 18 July, the feature is now being made available to all ChatGPT Plus, Pro, and Team subscribers, according to a statement posted by the company on social media platform X. The delay in rollout left many users puzzled, with some still reporting the absence of the feature despite OpenAI's claims of a complete deployment. The company has not disclosed the cause behind the delay, and questions raised in the post's comment section remain unanswered.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store