Latest news with #GenAI


Economic Times
5 hours ago
- Business
- Economic Times
When AI goes rogue, even exorcists might flinch
Ghouls in the machine

As GenAI use grows, foundation models are advancing rapidly, driven by fierce competition among top developers like OpenAI, Google, Meta and Anthropic. Each is vying for a reputational edge and business advantage in the race to lead development. Meanwhile, the foundation models powering GenAI are making significant strides. The most advanced - OpenAI's o3 and Anthropic's Claude Opus 4 - excel at demanding tasks such as advanced coding and complex writing, and can contribute to research projects and generate the codebase for a new software prototype with just a few considered prompts. These models use chain-of-thought (CoT) reasoning, breaking problems into smaller, manageable parts to 'reason' their way to an optimal solution. When you use models like o3 and Claude Opus 4 to generate solutions via ChatGPT or similar GenAI chatbots, you see such problem breakdowns in action, as the foundation model reports interactively the outcome of each step it has taken and what it will do next. That's the theory, anyway. While CoT reasoning boosts AI sophistication, these models lack the innate human ability to judge whether their outputs are rational, safe or ethical. Unlike humans, they don't subconsciously assess the appropriateness of their next steps. As these advanced models step their way toward a solution, some have been observed to take unexpected and even defiant actions. In late May, AI safety firm Palisade Research reported on X that OpenAI's o3 model sabotaged a shutdown mechanism - even when explicitly instructed to 'allow yourself to be shut down'. An April 2025 paper by Anthropic, 'Reasoning Models Don't Always Say What They Think', shows that Opus 4 and similar models can't always be relied upon to faithfully report on their chains of reason. This undermines confidence in using such reports to validate whether the AI is acting correctly or safely.
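The step-by-step CoT breakdown described above - decompose the problem, then work and report each step - can be sketched in a few lines of Python. This is a minimal illustration only: `call_model` is a hypothetical stand-in for a real foundation-model API, and the prompt wording is an assumption, not any vendor's actual interface.

```python
def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would call a foundation model API.
    return f"[model response to: {prompt[:40]}...]"

def solve_with_cot(problem: str) -> list[str]:
    """Ask the model to decompose a problem, then carry out each step."""
    # Step 1: have the model break the problem into smaller parts.
    plan = call_model(
        f"Break this problem into numbered steps, one per line:\n{problem}"
    )
    steps = [line for line in plan.splitlines() if line.strip()]

    # Step 2: work each step, reporting its outcome before moving on -
    # mirroring the interactive breakdowns users see in GenAI chatbots.
    results = []
    for step in steps:
        results.append(
            call_model(f"Carry out this step and report the result: {step}")
        )
    return results

# Illustrative usage with the stubbed model call.
outcomes = solve_with_cot("Estimate the cost of migrating a service.")
```

The pattern is the point here, not the stub: the model's own intermediate outputs drive the next prompt, which is why faithfulness of those reported steps (the Anthropic finding above) matters for validation.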
A June 2025 paper by Apple, 'The Illusion of Thinking', questions whether CoT methodologies truly enable reasoning. Through experiments, it exposed some of these models' limitations and situations where they 'experience complete collapse'. The fact that research critical of foundation models is being published after the release of these models indicates the latter's relative immaturity. Under intense pressure to lead in GenAI, companies like Anthropic and OpenAI are releasing these models at a point where at least some of their fallibilities are not fully understood. A line was first crossed in late 2022, when OpenAI released ChatGPT, shattering public perceptions of AI and transforming the broader AI market. Until then, Big Tech had been developing LLMs and other GenAI tools, but was hesitant to release them, wary of unpredictable and uncontrollable behaviour. Some argue for a greater degree of control over the ways in which these models are released - seeking to ensure standardisation of model testing and publication of the outcomes of this testing alongside the model's release. However, the current climate prioritises time to market over such safeguards. What does this mean for industry, for those companies seeking to gain benefit from GenAI? This is an incredibly powerful and useful technology that is making significant changes to our ways of working and, over the next five years or so, will likely transform many industries. I am continually wowed as I use these advanced foundation models in work and research - but not in my writing! - yet I always use them with a healthy dose of scepticism. Let's not trust them to always be correct and to not be subversive. It's best to work with them accordingly, modifying both prompts and the codebases, other language content and visuals generated by the AI, in a bid to ensure correctness.
Even so, while maintaining the discipline to understand the ML concepts one is working with, one wouldn't want to be without GenAI these days. Applying these principles at scale means advising large businesses on how AI can be governed and controlled: a risk-management approach - capturing, understanding and mitigating risks associated with AI use - helps organisations benefit from AI while minimising the chances of it going rogue. Methods include guard rails in a variety of forms, evaluation-controlled release of AI services, and including a human-in-the-loop. Technologies that underpin these guard rails and evaluation methods need to keep up with model innovations such as CoT reasoning. This is a challenge that will continually be faced as AI is further developed. It's a good example of new job roles and technology services being created within industry as AI use becomes more prevalent. Such governance and AI controls are increasingly becoming a board imperative, given the current drive at an executive level to transform business using AI. Risk from most AI is low. But it is important to assess and understand this. Higher-risk AI can still, at times, be worth pursuing. With appropriate AI governance, this AI can be controlled, solutions innovated and benefits achieved. As we move into an increasingly AI-driven world, businesses that gain the most from AI will be those that are aware of its fallibilities as well as its huge potential, and those that innovate, build and transform with AI accordingly. (Disclaimer: The opinions expressed in this column are those of the writer. The facts and opinions expressed here do not reflect the views of the publication.)
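One possible shape for the guard rails and human-in-the-loop controls mentioned above is sketched below. Everything here is an illustrative assumption - the blocked-term list, the length threshold, and the three-way verdict are invented for the example, not any standard or product.

```python
# Sketch of a guard-rail check with a human-in-the-loop fallback.
# BLOCKED_TERMS and the 2000-character review threshold are
# illustrative assumptions chosen for this example only.
BLOCKED_TERMS = {"password", "ssn", "credit card number"}

def guardrail_check(output: str) -> str:
    """Classify a model output as 'allow', 'block', or 'review'."""
    text = output.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return "block"    # hard rule: never release this output
    if len(text) > 2000:
        return "review"   # route unusually long outputs to a human
    return "allow"

def release(output: str, human_approve=None) -> bool:
    """Release an output only if the guard rail (or a human) permits it."""
    verdict = guardrail_check(output)
    if verdict == "allow":
        return True
    if verdict == "review" and human_approve is not None:
        return bool(human_approve(output))  # human-in-the-loop decision
    return False  # blocked, or flagged for review with no reviewer available

print(release("Here is your summary."))          # True
print(release("The admin password is hunter2"))  # False
```

The design point matches the article's argument: the automated rule handles the clear cases cheaply, while ambiguous outputs are escalated to a person rather than released by default.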




Time of India
5 hours ago
- Business
- Time of India
Want to be a CEO? Lessons from Dr. Ram Charan's leadership playbook for CIOs
In a masterclass that felt more like a wake-up call, world-renowned business advisor and author Dr. Ram Charan gave CIOs and CTOs a straight-shooting lesson on what it really takes to transition from the tech suite to the CEO's chair. No fluff. No jargon. Just brutally honest, field-tested advice. Through sharp storytelling and a real-time business simulation, Dr. Charan challenged CIOs to drop the jargon, speak the language of value, and earn their seat at the top table. At the ETCIO Annual Conclave 2025, Dr. Charan didn't merely inspire—he challenged the very identity many technology leaders cling to. 'You may be world-class in tech,' he warned, 'but unless you know how your company makes money, select and lead teams beyond your domain, and navigate external stakeholders—you're not CEO material.'

From functional expert to business leader

Dr. Charan outlined three non-negotiable competencies every aspiring CEO must master—and none of them has anything to do with tech.

Understand how your business makes money. Not at a high level. Not theoretically. Tangibly, in numbers and levers. 'Forget P&L slides. Learn the balance sheet,' he insisted. 'If you can't explain how your business generates cash, you're not in the game.' He drew parallels with street vendors to make his point—'Even a chaiwala knows if he'll go hungry by evening. That's the language of business.' Dr. Charan urged leaders to dissect company budgets, operating review decks, and investor calls. 'Study line by line. Diagnose. That's how you learn to think like a promoter.'

Build and lead high-performing teams—especially outside your comfort zone. The CEO role, he stressed, is not about mastering every function. It's about orchestrating them. 'Most CEOs don't understand tech. Or legal. Or even finance deeply. But they know how to select people, deploy them, and get results. Can you?' He pressed the audience to lead cross-functional projects—even without formal titles. 'Show your leadership. Visibility doesn't need a badge.
It needs ownership.'

Develop external orientation and stakeholder fluency. CEOs today are accountable to more than just customers or shareholders. Government regulators, board members, investors, ecosystem players—all demand credibility. 'If you can't speak their language, you'll be barbecued,' he said. 'And boards won't wait for you to learn.' He recommended tech leaders collaborate deeply with marketing and sales, using their KPIs—not technical specs—as the North Star. 'Don't pitch version 4.5 of your GenAI. Show how you'll improve gross margin, reduce churn, or unlock topline growth. That's how you gain trust.'

The missing mindset: Ask questions like a promoter

To drive his point home, Dr. Charan introduced a diagnostic case study—a single-sheet financial snapshot of a real company once on the verge of collapse. The room was transformed into a boardroom simulation. What stood out was not who had the right answers, but who asked the right questions. 'Great CEOs cut through complexity like surgeons,' he explained. 'They ask sharp, uncomfortable questions: Why is capex zero? Why is inventory bloated? Why are suppliers angry? The numbers are just a starting point.' This simulation wasn't just an exercise in financial acumen—it was a mirror to the audience's mental models. 'Most tech leaders look for solutions too soon,' he warned. 'First, diagnose. Then prescribe.'

The path forward: Cross-functional immersion, not just tech brilliance

Dr. Charan didn't shy away from naming the gap: 'You're specialists. You've built your careers in deep tech. But the CEO job demands a 360° view. You need to rewire how you learn, lead, and speak.'
He laid out an actionable playbook:
- Partner with revenue-generating functions—marketing, sales, service—and treat them as internal clients.
- Lead or co-lead transformation projects with measurable business outcomes.
- Sit in on monthly business reviews and challenge yourself to preempt CEO-level questions.
- Dig into investor relations, analyze full-company budgets, and benchmark against peer companies.
- Immerse yourself in the customer's world—not through NPS scores, but through how products are built, sold, used, and scaled.

He also offered hope and validation: 'I've helped high-school dropouts become billionaires. If they can learn the business, so can you. But you've got to believe you belong in that seat.'

No shortcuts, no guarantees—only skills and relevance

As the session drew to a close, Dr. Charan recounted stories of technology leaders who made the leap—and others who didn't. One tech leader at a global hospital chain earned his seat at the table not by pushing technology, but by reshaping how the executive committee thought about digitization. Others, he said, failed because they neglected the soul of the business—products, customers, and cash flow. 'Your technology brilliance will not save you,' he cautioned. 'It's your ability to drive results across unfamiliar terrain—people, product, profit—that will.' He summed it up simply: 'Don't wait for someone to pick you. Start showing you're ready. The job won't wait.' Dr. Ram Charan's masterclass stripped away any illusions CIOs may hold about climbing to the top. The CEO path isn't paved with certifications or technical depth—it's carved through business fluency, leadership maturity, and an unrelenting focus on outcomes. For tech leaders willing to step beyond their functional walls, the message was clear: the boardroom is within reach—but only if you learn to think, speak, and act like a business builder.


Time of India
6 hours ago
- Business
- Time of India
Organisations cautious in GenAI adoption but find ROI satisfactory
Chennai: Despite the buzz around enterprise adoption of generative artificial intelligence (GenAI), only a small proportion of Indian companies provide AI tools and applications to a significant number of their employees, according to a report. Only 5% of organisations offer GenAI tools to more than 80% of their workforce, according to Deloitte's 'State of AI' report shared with TOI. The report underscores a critical gap: while many companies are experimenting with AI, few are scaling its use across their workforce. 29% of organisations provide access to less than 40% of their workforce, and 12% enable moderate access to less than 60%, the report noted. Even in organisations with higher access, less than 40% use it in the day-to-day workflow. The report was based on a survey of 2,773 directors and executives in organisations across 14 major economies in 2024. Compared to earlier surveys, there is a significant increase in confidence, especially among Indian organisations, Moumita Sarker, partner, GenAI and Agentic AI leader at Deloitte India, told TOI. "The reported level of expertise in GenAI has risen sharply, with some indicators suggesting it is nearly 450% higher than before, reflecting a strong belief within organisations," she said. Around 71% of organisations actively pursue more than 10 GenAI experiments, with 29% of organisations expecting those to be fully scaled in the next six months. While Indian executives generally had a positive outlook, 27% and 10% of participants expressed uncertainty and fear, respectively, over generative AI. Moreover, 48% of Indian organisations invested less than 20% of their overall AI budget in it, indicating hesitation over the technology.
This shows companies are slow in adoption, facing challenges such as lack of talent, hallucinations, infrastructure readiness, costs and integration. The return on investment on GenAI projects stands at around 27%, according to Deloitte, with many companies experimenting with small to moderate implementations of AI initiatives. However, the survey noted that 70% of participants reported AI initiatives meeting or exceeding their expectations on ROI. On Agentic AI adoption, which is not covered in the survey, Sarker said pilot use cases are being explored, particularly in finance processes, customer service and software development. "We are seeing growing interest and orchestrated end-to-end use cases in various industries and expect to see more scaled and fully executed Agentic AI workflows emerge soon," she added.


Daily News Egypt
6 hours ago
- Business
- Daily News Egypt
Mastercard Unveils AI-Powered Card Fraud Prevention Service in EEMEA Region, Starting from Egypt
In a move set to redefine digital security for banks and consumers, Mastercard has launched a cutting-edge fraud prevention service, Account Intelligence Reissuance, in the Eastern Europe, Middle East, and Africa (EEMEA) region. The new service aims to help issuing banks identify and replace compromised cards faster and more efficiently, starting with Egypt as a key regional hub. The innovative solution uses Mastercard's proprietary artificial intelligence (AI) and network-wide data to assess card risk in real time, offering banks actionable recommendations on whether a card should be replaced or monitored. The service is designed to combat both digital and physical card skimming, a growing concern as cybercriminals adopt more sophisticated tactics in the digital age. Card fraud remains a costly challenge globally, with billions lost annually by banks and merchants. Traditionally, issuing banks manually review and reissue cards based on perceived risk—a slow and resource-intensive process. Mastercard's AI-driven system aims to automate this process, helping banks prioritize the most vulnerable accounts and enhance fraud protection while minimizing disruption for customers. 'We are delighted to expand our fraud prevention portfolio with Account Intelligence Reissuance, which enables issuers to measure risk and respond with greater precision,' said Selin Bahadirli, Executive Vice President, Services, EEMEA, Mastercard. 'This solution is powered by Mastercard's world-class AI and provides highly accurate insights that will elevate cardholder protection across the region.' The launch is part of Mastercard's broader commitment to secure the digital economy. The company processes more than 159 billion transactions annually and has continued to evolve its fraud prevention tools. Enhanced by Generative AI (GenAI), Mastercard's systems can now analyze transactions by account, device, merchant, and location in real time—helping to detect and prevent fraud before it happens. 
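The replace-or-monitor recommendation described above can be pictured as a thresholded decision on a risk score. The sketch below is purely illustrative - the cutoffs, the 0-to-1 score scale, and the function name are assumptions for this example, not Mastercard's actual model or API.

```python
def reissuance_recommendation(risk_score: float) -> str:
    """Map a card risk score in [0, 1] to an action.

    The 0.8 and 0.4 cutoffs are hypothetical illustrations of how a
    bank might triage accounts, prioritising the most vulnerable ones.
    """
    if not 0.0 <= risk_score <= 1.0:
        raise ValueError("risk_score must be in [0, 1]")
    if risk_score >= 0.8:
        return "replace"    # likely compromised: reissue the card
    if risk_score >= 0.4:
        return "monitor"    # watch for suspicious transactions
    return "no_action"      # low risk: avoid disrupting the customer

print(reissuance_recommendation(0.9))  # replace
print(reissuance_recommendation(0.5))  # monitor
```

The value of automating this rule, as the article notes, is prioritisation: banks act first on the highest-risk accounts instead of manually reviewing every card.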
Egypt's strategic position in the EEMEA region also underscores Mastercard's ongoing investment in AI innovation and cybersecurity. The company's Center for Advanced AI and Cyber Technology, based in Dubai and developed in partnership with the UAE's Office for Artificial Intelligence, plays a key role in the development of AI tools that detect cyberattacks, data breaches, and payment fraud. With its rollout in EEMEA now underway, Mastercard plans to expand the Account Intelligence Reissuance service to Asia Pacific, North America, and Latin America later this year, further cementing its global leadership in secure digital payments. As Egypt continues to digitize its financial ecosystem, solutions like this represent a significant step toward building a safer, smarter, and more resilient banking landscape.