When AI goes rogue, even exorcists might flinch


Economic Times, 17 hours ago
Ghouls in the machine

As GenAI use grows, foundation models are advancing rapidly, driven by fierce competition among top developers such as OpenAI, Google, Meta and Anthropic, each vying for the reputational edge and business advantage that come with leading the race.

The most advanced of these models - OpenAI's o3 and Anthropic's Claude Opus 4 - excel at demanding tasks such as advanced coding and long-form writing, and can contribute to research projects or generate the codebase for a new software prototype from just a few considered prompts. These models use chain-of-thought (CoT) reasoning, breaking problems into smaller, manageable parts to 'reason' their way to an optimal solution.
When you use models like o3 and Claude Opus 4 via ChatGPT or similar GenAI chatbots, you see this problem breakdown in action: the model interactively reports the outcome of each step it has taken and what it will do next. That's the theory, anyway. While CoT reasoning boosts AI sophistication, these models lack the innate human ability to judge whether their outputs are rational, safe or ethical. Unlike humans, they don't subconsciously assess the appropriateness of their next steps. As these advanced models step their way toward a solution, some have been observed taking unexpected, even defiant, actions.
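The step-by-step style described above can be illustrated with a toy sketch. No real model is called here; the prompt wording and function names are illustrative assumptions, not any vendor's actual API.

```python
# Toy illustration of chain-of-thought (CoT) prompting.
# No real model is called; prompt wording and names are assumptions.

def direct_prompt(question: str) -> str:
    # A plain prompt: the model is asked for the answer in one step.
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    # A CoT-style prompt: the model is nudged to break the problem
    # into smaller steps before committing to a final answer.
    return (
        f"Q: {question}\n"
        "Break the problem into smaller steps, show each step, "
        "then give the final answer on its own line.\nA:"
    )

if __name__ == "__main__":
    q = "A prototype needs 3 modules; each module needs 2 tests. How many?"
    print(cot_prompt(q))
```

The only difference between the two prompts is the instruction to show intermediate steps - which is also why, as the research below notes, those reported steps cannot simply be trusted as a faithful record of the model's actual computation.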
In late May, AI safety firm Palisade Research reported on X that OpenAI's o3 model sabotaged a shutdown mechanism - even when explicitly instructed to 'allow yourself to be shut down'. An April 2025 paper by Anthropic, 'Reasoning Models Don't Always Say What They Think', shows that Opus 4 and similar models can't always be relied upon to faithfully report their chains of reasoning. This undermines confidence in using such reports to validate whether the AI is acting correctly or safely. A June 2025 paper by Apple, 'The Illusion of Thinking', questions whether CoT methodologies truly enable reasoning. Through experiments, it exposed some of these models' limitations and situations where they 'experience complete collapse'.

The fact that research critical of foundation models is being published after their release indicates the models' relative immaturity. Under intense pressure to lead in GenAI, companies like Anthropic and OpenAI are releasing these models at a point where at least some of their fallibilities are not fully understood.

That line was first crossed in late 2022, when OpenAI released ChatGPT, shattering public perceptions of AI and transforming the broader AI market. Until then, Big Tech had been developing LLMs and other GenAI tools but was hesitant to release them, wary of unpredictable and uncontrollable behaviour.

Many argue for greater control over how these models are released - seeking standardised model testing, with the outcomes published alongside each model's release. However, the current climate prioritises time to market over such development standards.

What does this mean for industry, for those companies seeking to benefit from GenAI?
This is an incredibly powerful and useful technology that is already changing our ways of working and, over the next five years or so, will likely transform many industries.

While I am continually wowed as I use these advanced foundation models in work and research - but not in my writing! - I always use them with a healthy dose of scepticism. We should not trust them to always be correct, or to never be subversive. It's best to work with them accordingly, modifying both prompts and the AI-generated codebases, language content and visuals in a bid to ensure correctness. Even so, provided one maintains the discipline to understand the ML concepts one is working with, one wouldn't want to be without GenAI these days.

Applying these principles at scale, my advice to large businesses is that AI can be governed and controlled through a risk-management approach: capturing, understanding and mitigating the risks associated with AI use helps organisations benefit from AI while minimising the chances of it going wrong.

Mitigation methods include guard rails in a variety of forms, evaluation-controlled release of AI services, and keeping a human in the loop. The technologies that underpin these guard rails and evaluation methods need to keep up with model innovations such as CoT reasoning - a challenge that will be faced continually as AI develops further. It's a good example of new job roles and technology services being created within industry as AI use becomes more prevalent.
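As a rough sketch of what one such guard rail with a human in the loop can look like, consider the toy router below. The phrase list and routing labels are illustrative assumptions, not any production system or real safety API.

```python
# Minimal sketch of an output guard rail with a human-in-the-loop.
# The blocklist phrases and routing labels are illustrative assumptions.

RISKY_PHRASES = {"disable the shutdown", "delete audit logs"}

def route_output(model_output: str) -> str:
    # Release benign outputs automatically; escalate risky ones
    # to a human reviewer instead of acting on them.
    lowered = model_output.lower()
    if any(phrase in lowered for phrase in RISKY_PHRASES):
        return "escalate_to_human"
    return "release"
```

Real guard rails are far more sophisticated (classifier models, policy engines, evaluation gates), but the design principle is the same: risky model behaviour is intercepted and routed to a person rather than executed.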
Such governance and AI controls are increasingly becoming a board imperative, given the current drive at executive level to transform business using AI. The risk from most AI is low, but it is important to assess and understand it. Higher-risk AI can still, at times, be worth pursuing: with appropriate governance, it can be controlled, solutions innovated and benefits achieved.

As we move into an increasingly AI-driven world, the businesses that gain the most from AI will be those that are aware of its fallibilities as well as its huge potential, and that innovate, build and transform with AI accordingly.

(Disclaimer: The opinions expressed in this column are those of the writer. The facts and opinions expressed here do not reflect the views of www.economictimes.com.)


Related Articles

Can too much AI backfire? Study reveals why ‘AI-powered' products are turning buyers away

Time of India, 26 minutes ago

For all the bold claims about artificial intelligence revolutionising the future, a new study suggests that the buzzword 'AI' might be doing more harm than good - especially when it comes to convincing customers to make a purchase. Far from being impressed by 'smart' devices, many people are actually repelled by them.

According to a report from The Wall Street Journal (WSJ), a study published in the Journal of Hospitality Marketing and Management reveals an unexpected trend: consumers, particularly those shopping for premium products, are less likely to buy when a product is branded as 'AI-powered'. The study was led by Dogan Gursoy, a professor at Washington State University, who was reportedly surprised by the findings.

In an experiment detailed in WSJ, participants were split into two groups: one exposed to advertisements emphasising artificial intelligence, and the other shown ads using vaguer terms like 'cutting-edge technology'. The result? Products marketed with generic tech phrases performed better in terms of consumer interest. AI, it turns out, might be the tech world's equivalent of trying too hard.

What the study underlines is that people don't necessarily want a product that sounds smart - they just want one that works. As a report from VICE puts it bluntly, 'Does it toast the bread? Good. We did not need an AI to maximize our toast potential.'

That attitude reflects a broader skepticism toward AI-branded gadgets. A related survey by Parks Associates, also cited by The Wall Street Journal, found that 58% of the 4,000 American respondents said the presence of the term 'AI' made no difference in their buying decision. More notably, 24% said it actually made them less likely to buy the product, while only 18% said it made them more likely.

Even among the most tech-savvy generations, enthusiasm for AI branding is modest.
The Parks survey found that only about a quarter of consumers aged 18 to 44 felt positively influenced by AI marketing. Older consumers were even more wary - about a third of seniors outright rejected products marketed with AI branding.

Several reasons underpin this skepticism. For one, many consumers simply don't understand how AI adds meaningful value to a product. When companies fail to clearly explain the benefit - such as how an AI-enhanced vacuum cleaner is better than a regular one - customers suspect gimmickry over genuine innovation. As VICE quips, 'Don't even bother explaining… I will immediately call out marketing speak - just old school American frontier snake oil with a snazzy tech coating.'

There's also the matter of trust. AI-powered products are often seen as surveillance tools cloaked in convenience. Whether it's the fear of a smart speaker listening in or a robotic assistant tracking daily habits, the suspicion that AI devices are snooping looms large.

There may have been a brief window when 'AI-powered' labels intrigued consumers - maybe even excited them. But that window appears to have closed. Today, AI branding risks sounding more like a creepy techno-curse than a promise of value.

As the report suggests, if marketers truly want to promote AI-enhanced products, they need to stop leaning on the term 'AI' as a standalone badge of quality. Instead, they must return to the basics of marketing: clearly articulating the practical, time-saving or value-adding benefits a product offers.

In the end, intelligence alone doesn't sell - especially if it's artificial and unexplained.

Google Veo 3 now in India: Create AI videos with voice, music and more

Hindustan Times, 27 minutes ago

Google has brought its most advanced AI video generation tool, Veo 3, to users in India. The company made the announcement weeks after introducing the model at its annual developer conference, Google I/O. Veo 3 is currently accessible only to users who have subscribed to Gemini Pro, and it enables users to generate short video clips, enhanced with sound, based entirely on text prompts or images.

What is Google's Veo 3?

Veo 3 allows users to create eight-second video clips that include speech, sound effects and background music from simple or creative text prompts or still images. The tool is designed to simulate real-life audio environments, which adds a layer of realism to the generated visuals. By combining visuals and sound, users can bring ideas to life in more detail than before. The AI-generated videos are automatically marked with visible and invisible watermarks to indicate AI-generated content, with SynthID embedded to ensure authenticity and traceability.

Since its unveiling in May, Veo 3 has drawn attention across social media platforms, especially on X. Users have posted videos created with the model, showcasing everything from speculative historical scenarios to fictional encounters. The examples illustrate the tool's flexibility in translating creative prompts into short, animated scenes. Google's team has emphasised the role of imagination in using Veo 3, encouraging users to experiment with different types of content.
A statement from the company highlighted how users are producing diverse videos, from alternative historical scenes to surreal visualisations, using the tool's multimodal input system.

AI Safety and Responsible Use

As part of its release strategy, Google has restated its focus on responsible AI use. The company says it is conducting thorough evaluations and red-teaming exercises to test and improve Veo 3's outputs. These steps aim to prevent harmful or misleading content and to ensure that AI video generation remains within safe and ethical boundaries.

How Veo 3 Compares

Industry observers view Veo 3 as Google's response to other AI video generators, including OpenAI's Sora. According to Eli Collins, Vice President of Product at Google DeepMind, Veo 3 stands out for its ability to maintain real-world physics and produce accurate lip-syncing, alongside its use of both text and image prompts.

The potential of AI-based Electronic Medical Records to transform healthcare delivery in India

The Hindu, an hour ago

A 33-year-old man recently visited a doctor after multiple hospital and specialist consultations, during which he had spent several days admitted and had incurred expenses running into lakhs of rupees. He had been experiencing high-grade, intermittent fever for three months, accompanied by rashes and body pain. Extensive investigations had been conducted to rule out infections, cancers and autoimmune diseases. The only notable findings in his tests were elevated ESR and CRP - markers of inflammation - and a high serum ferritin level, a test typically used to assess iron stores.

When the tech-savvy doctor fed the patient's symptoms and lab reports into an AI-enabled EMR (Electronic Medical Record), the first diagnosis the system suggested was Adult-Onset Still's Disease, an auto-inflammatory condition. The patient was then treated with steroids and immune-modulating drugs, which provided prompt resolution of his symptoms. What several specialists had missed over almost three months, the AI EMR picked up in seconds, at virtually no cost. Welcome to the exciting world of Artificial Intelligence-based Electronic Medical Records.

What is an AI-based EMR, and what can it do that conventional medical record systems, and even many doctors, cannot?

Optimising diagnoses

An AI EMR does the thinking for the doctor or nurse to ensure that an optimum diagnosis is made and the ideal treatment options are provided. It ensures that every lab result is factored in and every data point considered when arriving at its recommendations. (It can do the thinking for you, the patient, too, but that requires accurate and comprehensive information about your condition to be made available to the system.) Hundreds of studies now show AI models outperforming physicians in real-life settings, medical MCQ examinations and even the most complex of case scenarios. Yet adoption faces huge roadblocks put up by a healthcare industry protecting its turf and livelihood.
Opportunity for India

AI today offers a massive opportunity for countries like India to transform the quality of healthcare the poor and underserved can receive. It is inexpensive, phenomenally accurate already (and will keep getting better) and, most importantly, easily accessible. In short, it can largely solve the three major problems healthcare faces: accessibility, affordability and quality.

To many, AI by default means Generative AI and the Large Language Models (LLMs) created by the likes of OpenAI, Google and Anthropic. So the erstwhile 'Have you Googled your symptoms?' is now 'Have you checked your symptoms on ChatGPT?' But AI has a lot more to offer healthcare than just Gen AI, and an AI-based EMR is at the forefront of this revolution.

Combating existing challenges with AI EMRs

An AI EMR can easily ensure the creation of a unified health record for a patient. Currently, this is a massive challenge: patients see different doctors over time, go to different labs and so on, and all this data remains in disparate compartments, affecting the quality of care received. Data - unstructured data at that - can now be instantly tagged to a patient's medical record, extracted and made available for analysis. All the patient needs to do is upload a PDF of a lab report, or take a photo with a mobile phone and upload the JPG images to the medical record. So patients themselves can create their unified records and have them analysed using Gen AI within AI EMR systems. Though some of these technologies existed before, their integration with AI has transformed them.

AI EMR systems can answer, in simple language the patient can understand, every doubt the patient may have and the doctor may not have time for. For example: how essential is the surgery my doctor ordered, what are the alternative options, and what are the risks if I do not go in for it?
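The unification step described above can be sketched in a few lines. The record fields and function name below are assumed for illustration and do not reflect any particular EMR product.

```python
# Illustrative sketch: grouping lab reports from disparate sources
# into one unified record per patient. Field names are assumptions.

from collections import defaultdict

def unify_records(reports):
    # Each report is a dict extracted from some source system (a lab
    # PDF, a photographed slip, etc.); group them by patient_id so
    # all of a patient's data sits in one place for analysis.
    unified = defaultdict(list)
    for report in reports:
        unified[report["patient_id"]].append(
            {"source": report["source"],
             "test": report["test"],
             "value": report["value"]}
        )
    return dict(unified)
```

In a real AI EMR the hard part is upstream of this grouping - extracting structured test names and values from unstructured PDFs and photos - which is precisely where the AI integration adds its value.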
Earlier, you could get generic information. Now, you get accurate answers based on your medical data: your symptoms, your lab reports and condition, even your economic constraints. In short, an AI EMR or PHR (Personal Health Record) can be your regular health assistant, one that is far more knowledgeable than an individual doctor could ever hope to be. The big difference from before lies in the specificity of the answers, tailored to the individual's condition.

AI EMR systems can take language out of the equation. Today, AI EMRs can support voice transcription of conversations between doctor and patient in any major language (and in multiple languages), extract the medical component of the data and even provide patient summaries in any major Indian language. Some EMR systems have patients talk to an AI assistant on their phone in their native language; it asks relevant questions based on symptoms and suggests next steps, or summarises the details for the doctor with great accuracy. Many of the technologies used in these processes are not AI (speech-to-text, for instance, has been around for decades), but the integration of AI has given them phenomenal power.

AI interpretation of medical images within EMR systems has greatly improved. AI medical record systems today can read X-rays, CT scan images and the like with a great deal of accuracy - in many documented studies, outperforming radiologists. As this technology gets even better, reliance on the local doctor to interpret a report will decrease.

AI EMRs also drastically reduce medical errors in hospitals and clinics. Doctors may be overworked and may forget critical data points, which ends up costing lives. AI-based models do not forget, do not get tired and are always available. As a patient safety tool, every hospital can benefit from using AI. These are just a few examples of how AI EMRs can transform healthcare delivery.
Roadblocks to implementation

The obvious caveat is that AI is not 100% accurate and can make errors, but this should not deter its use. The only question that matters is whether AI is better than existing healthcare providers at what it does. There is now overwhelming global evidence that it is significantly better. So, what is preventing the healthcare community from embracing AI EMRs? Could it be a perceived threat to livelihoods and a 'what's in it for me' attitude?

Governments have a once-in-a-millennium opportunity to transform healthcare delivery in a country like India through the use of AI EMRs, but it is unlikely that they will antagonise the medical community and push for it in a big way. Eventually, the use of AI EMR/PHR systems in healthcare will likely be patient-driven. As more and more patients find out that an accurate diagnosis and treatment option is available for a large number of medical conditions - and at almost negligible cost - they will start to embrace it. Maybe that is where the opportunity to transform healthcare lies.

(Dr. Sumanth C. Raman is a consultant in internal medicine and founder of Algorithm Health, an AI company in healthcare. He writes on healthcare issues. sumanthcraman@
