Latest news with #AIThreats


USA Today
29-06-2025
- Business
How Kumrashan Indranil Iyer Is Building Trust in the Age of Agentic AI
'The next frontier of AI is not intelligence. It's trust.' With this sentiment, cybersecurity thought leader Kumrashan Indranil Iyer captures the challenges and opportunities of the digital future. Kumrashan believes that cognitive trust, not technical brilliance, will define whether AI becomes a force for resilience or risk. He is dedicated to leading a new generation of cyber defense: as a Senior Leader of Information Security at a major multinational bank, he oversees groundbreaking work in AI-driven threat detection and digital trust systems.

Building systems people can trust

Kumrashan explains that as AI advances, it is increasingly able to reason, adapt, and make autonomous decisions. Such systems, known as 'agentic AI', are capable of autonomous behavior. 'We're no longer dealing with simple tools. We're interacting with digital agents that pursue goals. These can include goals you didn't explicitly program,' he says. While traditional AI systems follow scripts and models designed by humans, agentic AI can interpret broad objectives and figure out the 'how' on its own. 'This evolution brings with it immense promise but also unprecedented risk,' says Kumrashan.

According to Cybersecurity Ventures, global damage from cybercrime is projected to reach $10.5 trillion annually by 2025. Much of this risk is now being shaped by how AI is used, or rather misused, by attackers. Today's cyber threat profile includes new innovations, such as malware that adapts in real time and attacks that resemble conversations rather than breaches. 'The threat landscape isn't just growing, it's learning,' Kumrashan warns. 'Imagine an adversary deploying an AI agent that doesn't just follow instructions but evolves its own strategy.' These kinds of attacks are no longer science fiction. They are happening now.

Introducing 'digital conscience'

To meet this challenge, Kumrashan Indranil Iyer has introduced Cognitive Trust Architecture (CTA). The novel framework is gaining recognition in cyber defense circles for its focus on adaptive reasoning and trust calibration. Unlike traditional compliance or oversight models, CTA not only observes what AI systems do but also seeks to understand why they behave in a particular way. Kumrashan explains it this way: 'Think of CTA as a digital conscience. It allows us to guide AI behavior based on trustworthiness, accountability, and explainability. If trust is the currency of human-AI collaboration, then CTA is the treasury that regulates it.' His research paper on CTA, 'Cognitive Trust Architecture for Mitigating Agentic AI Threats: Adaptive Reasoning and Resilient Cyber Defense', has been cited widely across industry and academic circles, including by researchers focused on machine ethics, autonomous systems, and national digital defense. He has also authored numerous other influential research papers.

Lessons from the frontline

Kumrashan Indranil Iyer explains the motivation behind the system: 'I've spent my career watching brilliant algorithms fail not because they were wrong, but because they weren't understood, or trusted,' he says. 'Most AI failures aren't technical. They're trust failures.' For him, the solution goes beyond better programming. 'AI needs to align more with human intent and ethical reasoning.' In his view, organizations must evolve from AI governance to what he calls AI guardianship. 'Governance gives you a checklist, but guardianship asks: Can I predict my AI's behavior? Can I explain it to a regulator? Can I trust it in a crisis?' he explains. 'If the answer to these questions isn't yes, then your system isn't ready.'

Kumrashan is also a passionate advocate for AI literacy and ethical tech leadership. He regularly writes posts that translate complex cybersecurity issues into plain language, offering insights for both professionals and everyday readers. His recent speaking appearances include the IEEE Conference on Artificial Intelligence and several panels on responsible AI innovation. He mentors emerging AI professionals and regularly serves as a peer reviewer and research guide in the fields of cybersecurity and artificial intelligence.

For his efforts, Kumrashan has earned wide recognition across the cybersecurity industry. In 2025, he won the Global InfoSec Award for Trailblazing AI Cybersecurity at the RSA Conference and was honored with the Fortress Cybersecurity Award for innovation in AI defense. In addition, he has been named a Fellow by both the Hackathon Raptors Association and the Soft Computing Research Society in acknowledgment of his contributions to AI-driven security and the advancement of digital trust frameworks.

A future based on trust

Future technology is likely to surpass our wildest imaginations, from self-driving cars to AI-driven military defense. As the world barrels toward widespread adoption of AI-powered autonomy, Kumrashan believes the stakes are only getting higher. 'I'm excited by the idea of AI agents that predict threats before they happen, respond autonomously, and scale defense beyond human limits,' he says. 'However, I'm also concerned about the lack of causational explainability. Assuming that if it's AI, then it has to be right is dangerous.' For Kumrashan Indranil, the goal is simple and urgent: to build systems based on cognitive trust.

Disclaimer: This article reflects personal views only and does not represent the views of the individual's employer or affiliates.


Forbes
22-06-2025
The Biggest Existential Threat Calls For Philosophers, Not AI Experts
Geoffrey Hinton, Nobel laureate and Google's former AI chief, recently distinguished between two ways in which AI poses an existential threat to humanity: when people misuse AI, and when AI itself turns against us. He cites cyberattacks, the creation of viruses, the corruption of elections, and the creation of echo chambers as examples of the first, and deadly autonomous weapons and superintelligent AI that realizes it doesn't need us and therefore decides to kill us as examples of the second.

But there is a third existential threat that neither Hinton nor his AI peers seem to worry about. And contrary to their warnings, this third threat is eroding human existence without reaching any of the media headlines. The third way AI poses an existential threat to humanity unfolds when we lose touch with what it means to exist as humans.

The simplest definition of an existential threat is 'a threat to something's very existence'. But to know whether humanity's existence is threatened, we must know what it means to exist as a human. And the AI experts don't. Ever since Alan Turing refused to consider the question 'Can machines think?', AI experts have deftly avoided defining basic human traits such as thinking, consciousness, and creativity. No one knows how to define these things, they say. And they are right. But they are wrong to use their lack of definitions as an excuse for not taking the question of what it means to be human seriously. And they add to the existential threat to humanity by using terms like human-level intelligence when talking about AI.

What Existential Threat Really Means

Talking about when and how AI will reach human-level intelligence, or outsmart us altogether, without having any idea how to understand human thinking, consciousness, and creativity is not only optimistic. It also erodes our shared understanding of ourselves and our surroundings. And this may very well turn out to be the biggest existential threat of all: that we lose touch with our humanity.

In his 1954 lecture, 'The Question Concerning Technology', German philosopher Martin Heidegger said that our relationship with technology puts us in constant danger of losing touch with technology, reality, and ourselves. Unless we get a better grip on what he called the essence of technology, he said, we are bound to do exactly that.

When I interviewed Neil Lawrence, DeepMind Professor of Machine Learning at the University of Cambridge, for 'An AI Professor's Guide To Saving Humanity From Big Tech' last year, he agreed that Heidegger's prediction has proven frighteningly accurate. But instead of pointing to the essence of technology, he said that 'the people who are in control of the deployment of [technology] are perhaps the least socially intelligent people we have on the planet.' Whether that's why AI experts conveniently avoid talking about the third existential threat is not for me to say. But as long as we focus on them and their speculations about what it takes for machines to reach human-level intelligence, we are not focusing on ourselves and what it takes for us to exist and evolve as humans.
Existential Philosophers On Existential Threats

Unlike AI experts, founders, and developers, the existential philosophy that Heidegger helped pioneer has not received billions of dollars in annual investment since the 1950s. Quite the contrary. While the AI industry has exploded, interest and investment in the humanities have declined worldwide. In other words, humanity has for decades invested heavily in understanding and developing artificial intelligence, while we have neglected to understand and develop ourselves as humans. But although existential philosophers like Heidegger, Jean-Paul Sartre, and Maurice Merleau-Ponty have not received grants as large as those of their colleagues in computer science departments, they have contributed insights that are more helpful when it comes to understanding and dealing with the existential threats posed by AI.

In Being and Nothingness, French philosopher Jean-Paul Sartre places human consciousness, or no-thingness (néant), in opposition to being, or thingness (être).

Just as different AI experts believe in different ways to reach human-level intelligence, different existential philosophers describe human existence in different ways. But unlike AI experts, they don't consider the lack of definitions a problem. On the contrary, they consider the lack of definitions, theories, and technical solutions an important piece in the puzzle of understanding what it means to be human. Existential philosophers have realized that consciousness, creativity, and other human qualities we struggle to define are not an expression of 'something', that is, a core, function, or feature that distinguishes us from animals and machines. Rather, they are an expression of 'nothing'. Unlike other creatures, we humans not only exist, we also question our existence. We ask why and for how long we will be here. We exist knowing that at some point we will cease to exist. That we are limited in time and space. And therefore have to ask why, how, and with whom we live our lives.

For existential philosophers, AI does not pose an existential threat to humanity because it might exterminate all humans. It poses an existential threat because it offers answers faster than humans can ask the questions that help them contemplate their existence. And when humans stop asking existential questions, they stop being human.

AI Experts Agree: Existential Threats Call For Philosophy

While existential philosophers insist on understanding the existential part of existential threats, AI experts skip the existential questions and go straight to the technical and political answers to how the threats can be contained. That's why we keep hearing about responsible AI and regulation: because that's the part that calls for technical expertise. That's the part where the AI experts are still needed.

AI experts know how to design and develop 'something', but they have no idea how to deal with 'nothing'. That's probably what Hinton realized when he retired to spend more time on what he described as 'more philosophical work.' That also seems to be what Demis Hassabis, CEO of Google DeepMind, suggests when he says that 'we need new great philosophers to come about to understand the implications of this.'
And that's certainly what Nick Bostrom hinted at in my interview with him about his latest book, Deep Utopia, when he declared that some questions are 'beyond his pay grade'. What 20th-century existential philosophy teaches us is that we don't have to wait for the AI experts to retire or for new great philosophers to emerge to deal with the existential threats posed by AI. All we have to do is remind ourselves and each other to ask how we want – and don't want – to live our lives before we trust AI to know the answer.