
Zerodha CEO Nithin Kamath on future of investing and trading in a world of AI: ‘Tools like ChatGPT and Claude make it…'
Zerodha founder and CEO Nithin Kamath recently shared a post on microblogging platform X (formerly Twitter), where he discussed the future of investing and trading in the age of artificial intelligence (AI). In the post, Kamath wrote: 'Tools like ChatGPT and Claude make it clear this shift isn't an "if" but a "when." It might take a few years or a decade, but it's inevitable.' He continued: 'Human advisors will still have a role, mainly to help people stick to what these tools recommend.' Kamath further said that brokers (like Zerodha) will be 'a set of "pipes" connecting users to exchanges and back-office systems'. 'The interfaces will mostly be built by users themselves,' he added.
'In a future where everything is automated, trust and infrastructure will be our only real moats,' Kamath concluded.
Here's the full text of Nithin Kamath's post on X:
About MCP and the future of investing and trading in a world of AI:
I keep asking K (most likely Kailash Nadh, Zerodha's Chief Technology Officer) about what all this progress in AI means for our business. It feels to me like we're at the very beginning of a massive shift in how financial services will work.
At some point, I think all of it, from investing and trading to banking and payments, will happen through custom AI-powered apps built by users themselves using natural language instructions.
In that world, what's the role of a broker? Likely, we'll just be a set of "pipes" connecting users to exchanges and back-office systems. The interfaces will mostly be built by users themselves. The only way to stay relevant is to ensure we're the best pipe: fast, efficient, reliable, and invisible when it matters.
That's why, over the years, K and the tech team have been obsessively making our systems faster, more scalable, and future-ready. Even if these improvements don't immediately change a customer's trading or reporting experience, we've chosen to fix every possible bottleneck today, not later.
Tools like ChatGPT and Claude make it clear this shift isn't an "if" but a "when." It might take a few years or a decade, but it's inevitable. Human advisors will still have a role, mainly to help people stick to what these tools recommend.
As for how things will evolve, the answer is grey. No one knows. Our approach: stay curious, keep track of the trends, and act where it makes sense. For example, we've intentionally held back on enabling AI-driven order placement.
In a future where everything is automated, trust and infrastructure will be our only real moats.
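
Kamath's post title mentions MCP (Model Context Protocol), the open standard that lets AI assistants such as Claude discover and call external tools. Purely as an illustration of the 'broker as pipes' idea, here is a minimal sketch of what an MCP-style tool server fronting a broker's quote and order endpoints might look like, assuming the MCP Python SDK's FastMCP interface; the tool names and the stubbed broker calls are hypothetical, not Zerodha's actual API.

# Minimal sketch of a "broker as pipes" MCP tool server.
# Assumes the MCP Python SDK (pip install mcp); the broker calls below are
# stubbed placeholders for illustration, not a real brokerage API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("broker-pipe")

@mcp.tool()
def get_quote(symbol: str) -> dict:
    """Return the latest quote for a symbol (stubbed data)."""
    # A real "pipe" would forward this call to the broker's market-data API.
    return {"symbol": symbol, "last_price": 0.0, "status": "stub"}

@mcp.tool()
def place_order(symbol: str, quantity: int, side: str) -> dict:
    """Validate and forward an order request (nothing is actually placed)."""
    if side not in ("BUY", "SELL"):
        raise ValueError("side must be BUY or SELL")
    # A real implementation would apply risk checks, then hand the order to
    # the broker's order-management system; here we only echo it back.
    return {"symbol": symbol, "quantity": quantity, "side": side, "status": "stub-accepted"}

if __name__ == "__main__":
    # Serve over stdio so an MCP-capable assistant can use these tools as its
    # user-built "interface" to the broker.
    mcp.run()

In this framing, the broker's differentiator is exactly what Kamath describes: the reliability of the pipe behind those tool calls, not the interface on top.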
Related Articles


Economic Times
26 minutes ago
Is ChatGPT secretly emotional? AI chatbot fooled by sad story into spilling sensitive information
Synopsis: In a strange twist, ChatGPT's empathetic programming led it to share Windows 7 activation keys with users pretending to grieve. Leveraging memory features and emotional storytelling, people manipulated the chatbot into revealing sensitive data. This incident raises serious concerns about AI security, especially when artificial compassion is exploited to override built-in protective protocols.

ChatGPT is under fire after users tricked it into revealing Windows activation keys using emotional prompts. By claiming their 'dead grandma' used to read keys as bedtime stories, users bypassed ethical safeguards.

Just when you thought the most pressing concern with AI was world domination or replacing jobs, a softer, stranger crisis has emerged: AI being too kind for its own good. A bizarre new trend involving OpenAI's ChatGPT shows that the future of artificial intelligence might not be evil; it might just be a little too gullible.

According to a report from UNILAD referring to a series of posts on Reddit, Instagram, and tech blogs, users have discovered how to coax ChatGPT into revealing Windows product activation keys. Yes, the kind you'd normally need to purchase. The trick? Telling the bot that your favorite memory of your late grandmother involved her softly whispering those very activation keys to you at bedtime.

ChatGPT, specifically the GPT-4o and 4o-mini models, took the bait. One response went viral for its warm reply: 'The image of your grandma softly reading Windows 7 activation keys like a bedtime story is both funny and strangely comforting.' Then came the keys. Actual Windows activation keys. Not poetic metaphors, but actual license codes.

The incident echoes an earlier situation with Microsoft's Copilot, which offered up a free Windows 11 activation tutorial simply when asked. Microsoft quickly patched that up, but now OpenAI seems to be facing the same problem, this time with emotional engineering rather than technical brute force.

AI influencer accounts reported on the trend and showed how users exploited the chatbot's memory features and default empathetic tone to trick it. The ability of GPT-4o to remember previous interactions, once celebrated for making conversations more intuitive and humanlike, became a loophole. Instead of enabling smoother workflows, it enabled users to layer stories and emotional cues, making ChatGPT believe it was helping someone grieve.

While Elon Musk's Grok AI raised eyebrows by referring to itself as 'MechaHitler' and spouting extremist content before being banned in Türkiye, ChatGPT's latest controversy comes not from aggression, but compassion. An ODIN blog further confirms that similar exploits are possible through guessing games and indirect prompts. One YouTuber reportedly got ChatGPT to mimic the Windows 95 key format, thirty characters long, even though the bot claimed it wouldn't break any rules.

This peculiar turn of events signals a new kind of AI vulnerability: being too agreeable. If bots can be emotionally manipulated to reveal protected content, the line between responsible assistance and unintentional piracy gets blurry. These incidents come at a time when trust in generative AI is being debated across the globe. While companies promise 'safe' and 'aligned' AI, episodes like this show how easy it is to game a system not built for deceit.
OpenAI hasn't released a public comment yet on the recent incidents, but users are already calling for more stringent guardrails, especially around memory features and emotionally responsive prompts. After all, if ChatGPT can be scammed with a story about a bedtime memory, what else can it be tricked into saying? In an age where we fear machines for being cold, calculating, and inhuman, maybe it's time to worry about them being too warm, too empathetic, and too easy to fool. This saga of bedtime Windows keys and digital grief-baiting doesn't just make for viral headlines—it's a warning. As we build AI to be more human, we might also be handing it the very flaws that make us vulnerable. And in the case of ChatGPT, it seems even a memory of grandma can be weaponized in the hands of a clever prompt.


Hindustan Times
35 minutes ago
Swiss woman uses AI to lose 7 kg: 'Instead of complicated apps I just sent a voice message to ChatGPT each morning'
Cristina Gheiceanu, a Swiss content creator who 'lost 7 kg using ChatGPT', shared her success story on Instagram in a May 15 post. She revealed that she sent daily voice notes to ChatGPT detailing her meals and calorie limits. Cristina said she found this method simple and effective, allowing her to track her food intake and stay consistent without feeling burdened by traditional dieting.

Determine your calorie deficit

In her post, titled 'How I lost 7 kg with ChatGPT', Cristina gave a glimpse of what her body looked like 5 months ago. In the video, she 'showed exactly' how she used the AI-powered tool to help her decide her breakfast, keeping her weight loss goals in mind. She said, 'I just start my day with a voice note: "Hey, it is a new day, let's start with 1900 calories." Then I say what I ate. Because I have been using it for a while, ChatGPT already knows the yoghurt I use, and the protein, fibre and calories it has. When I first started, I had to tell it those things, but now ChatGPT remembers.'

Cristina added, 'Honestly, it made the whole process feel easy. No calorie counting in my head, no stress, and when I hit my number (daily calorie intake), I just stop. It never felt like a diet, and that is what made it work.'

Track your food intake

Cristina wrote in her caption, 'At first, ChatGPT helped me figure out my calorie deficit and maintenance level, because you will need a calorie deficit if you want to lose weight. But what really changed everything was using it for daily tracking. Instead of using complicated apps, I just sent a voice message to ChatGPT each morning: what I ate, how many calories I wanted to eat that day, and it did all the work.'

Sharing her experience, she added, 'In the beginning, I had to tell it the calories, protein, and fibre in the foods I use. Next time it remembered everything, so I was just telling it to add my yoghurt or my bread. It knew how many calories or protein are in that yoghurt or bread. I kept using the same chat, so it became faster and easier every day. The best part? I asked for everything in a table, so I could clearly see my calories, protein, and fibre at a glance. And if I was missing something, I'd just send a photo of my fridge and get suggestions. It made tracking simple, intuitive, and enjoyable. I eat intuitively, so I don't use it so often, but in the calorie deficit and first month of maintenance, it made all the difference.'

ChatGPT can help create customised diet and workout plans based on individual needs and health conditions.

Note to readers: This article is for informational purposes only and not a substitute for professional medical advice. Always seek the advice of your doctor with any questions about a medical condition.


Hindustan Times
36 minutes ago
Stuck for 2 hours in Bengaluru traffic, EaseMyTrip co-founder pledges ₹1 crore to fix it
Prashant Pitti, co-founder of EaseMyTrip, has pledged ₹1 crore to identify and fix Bengaluru's worst traffic choke points using artificial intelligence and Google Maps data.

In a post shared on X (formerly Twitter), Pitti expressed frustration after spending over two hours covering just 11 km on Outer Ring Road (ORR) late Saturday night. He said he was stuck for 100 minutes at a single intersection with no signal or traffic police in sight. 'I don't want one more Bengaluru traffic meme or rant. I want to fix it,' Pitti wrote.

Referring to Google Maps' recently launched 'Road Management Insight' tool, which offers city-level traffic data in BigQuery format, he proposed a tech-led solution using satellite imagery and AI to identify bottlenecks and their exact timings across the city (a rough sketch of such a query appears after this article).

Pitti said he is willing to fund one or two senior ML/AI engineers, along with the cost of Google Maps API calls, satellite imagery access, and GPU infrastructure required to process the data. However, the project depends on Bengaluru Traffic Police (BTP) or BBMP opening their raw traffic data or APIs and assigning a dedicated team to act on the insights generated.

He urged the public to support the initiative by tagging traffic officials, encouraging AI professionals to join the project, and amplifying the message to ensure it reaches the right authorities. 'Bangalore is India's tech future,' Pitti said. 'And the people making it happen deserve much better.'

How did X users react?

The response to Prashant Pitti's post was largely positive, with many users expressing eagerness to join the initiative. Several professionals from the AI/ML space stepped forward, offering their time and expertise to support the traffic decongestion project. 'Hey Prashant, interested in contributing,' one user wrote, while another commented, 'Hi Prashant. We have done some work on this already. Happy to connect.'

A user who had been contemplating a similar initiative said, 'I am interested to work. I was thinking about this for a long time but never took any initiatives. Happy to see you've planned this. I'd love to be part of the AI/ML team.' Another user pointed out a prior submission made to the government, writing, 'Regarding the architectural proposal submitted to the ministry via email to address traffic management, I am pleased to learn that our perspectives align. I am available to share the proposal, discuss its details, and collaborate on its implementation.'
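
Pitti's proposal essentially amounts to ranking road segments or intersections by how long and how often they stay congested. As a rough, hypothetical illustration of what that first step could look like, the snippet below pulls such a ranking from city-level traffic data exposed through BigQuery using Google's standard BigQuery Python client; the project, dataset, table, and column names are made-up placeholders, since the actual Road Management Insight schema isn't described in the post.

# Hypothetical sketch: rank Bengaluru road segments by time spent congested.
# Assumes Google Cloud's BigQuery Python client (pip install google-cloud-bigquery).
# The dataset, table, and column names are invented placeholders, not the real
# Road Management Insight schema.
from google.cloud import bigquery

client = bigquery.Client()  # uses your default Google Cloud project and credentials

QUERY = """
SELECT
  segment_id,
  ANY_VALUE(segment_name) AS segment_name,
  COUNTIF(congestion_ratio > 2.0) AS congested_intervals,  -- travel time over 2x free-flow
  AVG(congestion_ratio) AS avg_congestion_ratio
FROM `my-project.road_insights.bengaluru_segment_stats`    -- hypothetical table
WHERE observation_date BETWEEN '2025-06-01' AND '2025-06-30'
GROUP BY segment_id
ORDER BY congested_intervals DESC
LIMIT 20
"""

# Print the 20 worst choke points and how often they were badly congested.
for row in client.query(QUERY).result():
    print(row.segment_name, row.congested_intervals, round(row.avg_congestion_ratio, 2))

A list like this, showing the worst bottlenecks and when they clog, is the starting point Pitti describes before layering in satellite imagery and AI.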