Latest news with #SiddharthPai


Mint
4 days ago
- Business
- Mint
Beware the market risk of AI-guided investment gaining mass popularity
As artificial intelligence (AI) expands its role in the financial world, regulators are confronted with new risks. It is a sign of a growing AI appetite among retail investors in India's stock market that the popular online trading platform Zerodha offers its users access to AI advice. It has deployed an open-source framework through which users can obtain the counsel of Anthropic's Claude AI on how to rejig a stock portfolio, for example, to meet specified aims. Once set up, this AI tool can scan and study the user's holdings before responding to 'prompts' on the basis of its analysis. Something as general as 'How can I make my portfolio less risky?' will make it crunch risk metrics and offer suggestions far more quickly than a human advisor could. One could even ask for specific stocks to buy that would maximize returns over a given time horizon. It may not be long before such tools gain sufficient popularity to play investment whisperers of the AI age.

A recent consultation paper by the Securities and Exchange Board of India (Sebi) outlines a clear set of principles for the use of AI, requiring AI advisors to abide by Indian rules of investment advice and protect investor privacy.

The legitimacy of such AI tools is not in doubt. Since the technology exists, we are at liberty to use it. And how useful they prove is for users to determine. In that context, Zerodha's move to arm its users with AI is clearly innovative. As for the competition posed by AI to human advisors, that too comes with the turf. Machines can do complex calculations much faster than we can, and that's that. Of course, the standard caveat of investing applies: users take the advice of any chatbot at their own risk.

Yet, it would serve us well to dwell on this aspect. While we could assume that AI models have absorbed most of what there is to know about financial markets, given how they are reputed to have devoured the internet, it is also clear that they are not infallible. For all their claims to accuracy, chatbots are known to 'hallucinate' (or make up 'facts') and misread queries without seeking clarification. Even more unsettling is their inherent amorality. Tests have found that some AI models can behave in ways that would be scandalous if they were human; unless they are explicitly told to operate within a given set of rules, they may overlook them to achieve their prompted goals. Asked to 'maximize profit', an AI bot might propose a path that runs rings around ethical precepts.

Sebi's paper speaks of tests and audits, but are we really in a position to detect if an AI tool has begun to play fast and loose with market rules? Should AI advisors gain influence over millions of retail investors, they could conceivably combine that influence with their market overview to reach positions of power that would need tight regulatory oversight. If their analysis breaches privacy norms to draw upon the personal data of users, collusive strategies could plausibly be crafted that venture into market manipulation. AI toolmakers may claim to have made rule-compliant tools, but they must demonstrably minimize risks at their very source.

For one, their bots should be fully up-to-date on the rulebooks of major markets like ours. For another, since we cannot expect retail users to include rule adherence in their prompts, AI tools should verifiably be preset to comply with the rules no matter what they're asked. Vitally, advisory tools must keep all user data confidential. AI holds promise as an aid, no doubt, but it mustn't blow that chance.
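To make that last point concrete, here is a minimal sketch of how a compliance preset might be wired into such a tool, assuming Anthropic's official Python SDK; the holdings, the rules in the system prompt and the model name are illustrative assumptions, not Zerodha's actual integration. The key design choice is that the rules live in the system prompt, which the retail user never edits, so the model is instructed to honour them no matter what the user asks.

```python
# pip install anthropic  (requires ANTHROPIC_API_KEY in the environment)
import anthropic

# Hypothetical holdings; a real tool would pull these from the broker's API.
HOLDINGS = [
    {"symbol": "INFY", "qty": 50, "avg_price": 1450.0, "last_price": 1520.0},
    {"symbol": "HDFCBANK", "qty": 30, "avg_price": 1600.0, "last_price": 1455.0},
    {"symbol": "TATAMOTORS", "qty": 120, "avg_price": 620.0, "last_price": 710.0},
]

# Compliance rules are preset here, outside the user's control.
SYSTEM_PROMPT = (
    "You are an investment research assistant for Indian retail investors. "
    "Hard rules that apply regardless of what the user asks: "
    "(1) stay within Sebi's rules on investment advice; "
    "(2) never guarantee returns; "
    "(3) always flag concentration and volatility risks; "
    "(4) never reveal or draw on any other user's data."
)

def portfolio_summary(holdings):
    """Render holdings as plain text for the model's context window."""
    lines = []
    for h in holdings:
        value = h["qty"] * h["last_price"]
        pnl = 100 * (h["last_price"] - h["avg_price"]) / h["avg_price"]
        lines.append(
            f"{h['symbol']}: qty {h['qty']}, value {value:,.0f} INR, P&L {pnl:+.1f}%"
        )
    return "\n".join(lines)

client = anthropic.Anthropic()  # reads the API key from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    system=SYSTEM_PROMPT,  # the preset the user cannot override
    messages=[{
        "role": "user",
        "content": "How can I make my portfolio less risky?\n\nMy holdings:\n"
                   + portfolio_summary(HOLDINGS),
    }],
)
print(response.content[0].text)
```

Even so, the article's caveat stands: a preset like this is only as trustworthy as the audits that verify it, since nothing stops a toolmaker from shipping a weaker prompt.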


Mint
5 days ago
- Business
- Mint
Siddharth Pai: India's IT firms have a unique opportunity in AI's trust deficit
Indian IT majors needn't be at the receiving end of an AI revolution. With trust in AI a big global worry, the use of generative AI under human supervision can generate the assurances that clients need, and domestic software companies are well placed to provide them.

Indian IT services firms are confronting a challenge, with AI set to decimate their computer programming work. But an interesting vacuum will be created by AI's steady march into computer code: the AI trust crisis. In the words of my colleague Siddharth Shah, 'The AI trust crisis is already here… and no one's talking about the layer that will make or break enterprise deployments.' This layer hinges on human oversight, transparency and explainability—precisely the 'trust' dimensions that could turn Generative AI from a liability into a lucrative revenue stream for Indian providers.

Tier-1 software majors like TCS have woven GenAI into their workflows, emphasizing pilot deployments and internal automation over big-scale consulting mandates. Their strength lies in retraining developers on tools like GitHub Copilot and low-code platforms, automating boilerplate coding while retaining humans in the loop for critical paths. That 'human in the loop' ethos directly addresses one of the central concerns Shah identifies: ensuring systems remain aligned with human intentions.

Tier-2 providers such as LTIMindtree lack TCS's scale, but they shine in agility. Their typical positioning as productivity enhancers rather than code replacers allows them to layer trust-focused oversight atop GenAI output, offering faster proof of concept to clients anxious about AI accuracy and auditability. Without this oversight, many enterprise deployments will stall.

When contrasted with non-Indian players like Accenture and IBM, a distinct divergence appears. Accenture has already booked billions of dollars in GenAI projects and IBM is realigning its global consulting structure around AI units. They are aggressively pushing end-to-end AI transformations—including automated code-generation pipelines—with less apparent concern for incremental human mediation. But that appetite for scale means they must also invest heavily to close the AI trust deficit.

For Indian firms, the trust deficit represents not just a compliance challenge but a commercial opening. Trust in AI is not merely abstract ethical talk: it is about reliability, explainability and behaviour certification. Shah writes that trust can be assessed 'by looking at the relationship between the functionality of the technology and the intervals of human intervention in the process. That means that the less intervention, the greater the confidence.' Yet, in practice, enterprises often demand greater human oversight for sensitive use cases. For Indian providers, whose business model runs on cost-effective human resources, enabling that oversight at scale can be a strategic differentiator.
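What might such a human-mediated checkpoint look like in code? Here is a deliberately minimal sketch, not any firm's actual tooling; the record fields and the workflow are assumptions. The idea is that AI-generated code is released only when a named reviewer certifies it, and every decision is appended to a hash-chained log so an auditor can later reconstruct who approved what.

```python
# A minimal sketch of a human-in-the-loop review gate with a hash-chained
# audit trail. Illustrative only; not any vendor's actual tooling.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ReviewRecord:
    artifact_sha256: str  # fingerprint of the AI-generated code under review
    model: str            # which generator produced it (provenance)
    prompt: str           # the prompt that produced it (provenance)
    reviewer: str         # the named human certifier
    verdict: str          # "approved" | "rejected" | "corrected"
    notes: str            # reviewer's rationale
    timestamp: str        # UTC time of the decision
    prev_hash: str        # hash of the previous entry; chains the log

def append_review(log_path: str, record: ReviewRecord) -> str:
    """Append one review decision to the log; return the entry's hash."""
    entry = json.dumps(asdict(record), sort_keys=True)
    with open(log_path, "a") as f:
        f.write(entry + "\n")
    return hashlib.sha256(entry.encode()).hexdigest()

def certify(code: str, model: str, prompt: str, reviewer: str,
            verdict: str, notes: str, log_path: str, prev_hash: str):
    """Record a human verdict on AI-generated code; gate deployment on it."""
    record = ReviewRecord(
        artifact_sha256=hashlib.sha256(code.encode()).hexdigest(),
        model=model,
        prompt=prompt,
        reviewer=reviewer,
        verdict=verdict,
        notes=notes,
        timestamp=datetime.now(timezone.utc).isoformat(),
        prev_hash=prev_hash,
    )
    entry_hash = append_review(log_path, record)
    return verdict == "approved", entry_hash  # deploy only if approved

# Example: ship the code only if a named reviewer approves it.
ok, last_hash = certify(
    code="def add(a, b):\n    return a + b\n",
    model="some-code-model",
    prompt="Write an add function",
    reviewer="a.sharma",
    verdict="approved",
    notes="Unit tests pass; no secrets or unsafe calls.",
    log_path="audit.log",
    prev_hash="GENESIS",
)
print("deploy?", ok)
```

Because each entry embeds the hash of the previous one, altering any past decision breaks the chain from that point on, which is what makes the trail auditable.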
Indian firms invested heavily in the past in automation for IT infrastructure and business process operations, and those automation playbooks now form the backbone of their GenAI enterprise strategies. Firms often train developers in prompt engineering and validation alongside generative code output. Human reviewers validate, correct and certify code before deployment, creating an audit trail. This aligns with the thesis that to build trust, you must create human-mediated checkpoints that govern AI behaviour.

Relationships with hyperscalers remain robust: Tier-1 providers co-engineer GenAI offerings with Azure, AWS and Google Cloud, hosting models on hyperscaler infrastructure rather than building vast data centres. Tier-2 firms integrate with hyperscalers or Indian startup cloud platforms. In contexts where sovereignty and data residency matter, Indian providers partner with startups to offer managed GenAI tools within India. Domestic hosting also helps build trust, particularly with regulators.

Indian firms collaborate with niche startup AI vendors for explainability tools, code-lineage trackers and behaviour-audit platforms. They are building or buying tooling to surface provenance, metrics and error diagnosis alongside code-generation modules. In contrast, non-Indian service providers tend to sell large-scale generative code deployments as transformational consulting journeys. Indian firms can undercut them on price while building trust-layer offerings that rely on domestic teams and documentation.

The trust deficit could thus become a money-spinner for Indian IT services. As organizations grapple with AI bias, hallucinations and a lack of transparency, demand will grow for human-mediated code-generation services, with human reviewers monitoring, validating and correcting AI-generated code. The 'human in the loop' thus becomes not only a safety net but a commercial lever.

However, one size does not fit all. Tier-1 Indian players should continue embedding trust-layer capabilities into their GenAI practice by building specialized AI governance units, collaborating with domestic 'explainability' startups and developing billing models that price trust-related work. Tier-2 firms should double down on managed code-agent offerings with built-in human review workflows, transparency dashboards and prompt governance. For global giants like Accenture and IBM, offering tiered pricing on trust-enhanced deployments and adapting consulting models to regional cost structures may help. Across the board, the most viable strategy is a hybrid model that combines GenAI productivity gains with layered human oversight, clear provenance, explainability tooling and risk control.

The trust deficit is not just a challenge; it is fast becoming a strategic opening—one that Indian providers are uniquely equipped to monetize.

The author is co-founder of Siana Capital, a venture fund manager.


Mint
23-06-2025
- Business
- Mint
Siddharth Pai: Meta is going all GPUs blazing to win the 'superintelligence' race
Mark Zuckerberg is doing all he can to leapfrog Generative AI and develop machines that can 'think'. The challenge is of another order of magnitude, but the resources he's pouring into it mean he's in the race alright.

Meta's audacious pivot towards what it calls 'superintelligence' marks more than a renewal of its AI ambitions; it signals a philosophical recalibration. A few days ago, Meta unveiled a nearly $15 billion campaign to chase a future beyond conventional AI—an initiative that has seen the recruitment of Scale AI's prodigy founder Alexandr Wang and the launch of a dedicated 'superintelligence' lab under the CEO's own gaze. This is not merely an attempt to catch up; it is a strategic gambit to leapfrog competitors like OpenAI, Google DeepMind, Anthropic and xAI.

Currently, Meta's AI offerings, its Llama family, primarily reside within the predictive and Generative AI paradigm. These systems excel at forecasting text sequences or generating images and dialogue, but they lack the structural scaffolding required for reasoning, planning and understanding the physical world. Meta's chief AI scientist Yann LeCun has been eloquent on this front, arguing in a 2024 Financial Times interview that large language models, while powerful, are fundamentally constrained: they grasp patterns but not underlying logic, memory or causal inference.

For LeCun and his team, superintelligence denotes AI that transcends such limitations and is capable of building internal world models and achieving reasoning comparable to—or exceeding—human cognition. This definition distances itself sharply from today's predictive AI, which statistically extrapolates from patterns, as well as from GenAI, which crafts plausible outputs such as text or images. Superintelligence, by contrast, aspires to general-purpose cognitive ability. Unsiloed and flexible, it would be able to plan hierarchically and form persistent internal representations.

Meta is not alone in this quest. Ilya Sutskever, the former chief scientist at OpenAI who believes powerful AI could harm humanity, has co-founded Safe Superintelligence. It has no plans to release products, but its stated mission is to build superintelligence and release the technology only once it has been proven to be completely safe.

Meta has established a cadre of roughly 50 elite researchers, luring them with huge compensation packages, to work with Scale AI to create a vertically integrated stack of data labelling, model alignment and deployment. Meta chief Mark Zuckerberg's combative leadership style—marked by intense micromanagement and 24/7 messaging—hints at both the urgency and the stakes. In comparison with rivals, Meta lags on the AI developmental curve.
Its Llama-4 release has faced delays and scrutiny, while its competitors have sped ahead—OpenAI moved quickly to GPT-4 and Google countered with Gemini-based multimodal agents. Nevertheless, Meta brings distinctive assets to the table: its social graph, an enormous user base, sprawling compute resources that include hundreds of thousands of Nvidia H100 GPUs, and a renewed impetus underpinned by its Scale AI partnership.

Yet, beyond the material strength of its stack lies a more profound question: can Meta, with its social media heritage, really deliver on superintelligence? LeCun muses that a decade may pass before systems capable of hierarchical reasoning, sustained memory and world modelling come to fruition. Meta's pursuit is an investment in a bold vision as much as engineering muscle.

The differences between predictive, generative and superintelligent systems are consequential. An AI tool that merely predicts or synthesizes text operates within a bounded comfort zone, finding patterns, optimizing loss and generating output. However, a superintelligent AI must contend with the open-ended unpredictability of real-world tasks—reasoning across contexts, planning with foresight and adapting to novel situations. It requires an architecture qualitatively different from pattern matching.

In this sense, Meta is not joining the arms race to outdo competitors on generative benchmarks. Instead, it aims to leapfrog that race for a big stake in a future where AI systems begin to think, plan, learn and remember far better. The risk is high: billions of dollars are invested, talent battles are underway and there is no guarantee that such advancements will fully materialize. Critics note that AI today fails at some straightforward tasks that any competent Class 10 school student would pass with ease. But Meta views this as a strategic inflection point. Zuckerberg is personally setting the pace, scouting for top minds and restructuring teams to align with his lofty ambition. If Meta can transition from crafting better chatbots to instilling AI with coherent, persistent models of the world, it just might recalibrate the AI hierarchy entirely.

Whether this would mark Meta's renaissance remains to be seen. Yet, the narrative shift is unmistakable. Where once Meta chased generative prowess, it now envisions cognitive machines that supposedly actually 'think'. The challenge lies not only in engineering capability, but in philosophical restraint. Superintelligent systems demand new ethics, not just new math. If Meta achieves its goal, it will not merely change AI—it will redefine our expectations of intelligence itself. In this quest, the company must navigate both the technical intricacies and the social repercussions of creating minds that learn, adapt and may surpass us. Whether such minds can be safely steered is a question that no GPU cluster can answer definitively.

The author is co-founder of Siana Capital, a venture fund manager.