Latest news with #ETCIODeepTalks


Time of India
14-07-2025
- Automotive
It's about scaling with intelligence, making AI and Cloud work for the business, the customer, and the future
Highlights

- Mahindra leverages a multi-cloud strategy to provide flexibility, avoid vendor lock-in, and deliver business-specific solutions, enabling businesses to choose the best cloud for their needs securely and efficiently.
- AI and GenAI are deeply embedded across customer journeys, manufacturing, call centers, and internal productivity, with a focus on solving real business problems, not just experimentation.
- Adoption of microservices, robust APIs, Kubernetes, DevSecOps, and MLOps platforms ensures agility, scalability, and interoperability of workloads across cloud and on-premises environments.
- Instead of centralizing data into one cloud, Mahindra employs a data mesh architecture, keeping data in source systems but federating it when needed for specific use cases like hyper-personalization or predictive analytics.
- A strong culture of AI adoption, leadership-backed lighthouse projects, an internal GenAI platform (the Mahindra AI Platform), and rigorous governance (including FinOps and regulatory compliance) drive trust and sustained innovation.

At a pivotal juncture in the enterprise technology landscape, where the race to weave artificial intelligence into the very fabric of business has escalated beyond experiments to boardroom imperatives, one principle stands out: behind every successful AI-driven transformation lies a robust, future-ready cloud strategy, one that is agile, secure, scalable, and resilient. This story explores how Mahindra & Mahindra, one of India's most iconic and diversified conglomerates, is embracing that challenge with a visionary multi-cloud and AI strategy. At the helm of this transformation is Aarti Singh, Enterprise CIO at Mahindra, who shares her insights with ETCIO DeepTalks on balancing innovation with governance, and speed with trust.

Building business value on a cloud-agnostic foundation

For Mahindra, the north star of technology has always been clear: creating seamless customer and employee journeys that deliver business value.
To enable this, Mahindra has chosen to be cloud agnostic, leveraging the unique capabilities of AWS, Azure, and Google Cloud simultaneously. 'Being cloud agnostic allows us to bring services and solutions to customers and employees faster and securely. Each cloud offers distinct advantages, be it supply chain solutions, factory automation, or compute, and our role as an enterprise is to make these certified, secure services available for our businesses to choose from,' explains Aarti. This approach ensures flexibility and neutrality, empowering Mahindra's diverse businesses, spanning mobility, finance, holidays, and engineering, to select the most suitable cloud for their needs without being locked into a single provider.

Why flexibility and mobility matter more than ever

In sectors like mobility, logistics, and pharmatech, speed and adaptability are now competitive differentiators. Aarti points out that while large-scale enterprise applications like SAP or PLM are strategic and take time to implement, the need for agility in customer-facing solutions has made cloud a cornerstone of their strategy. 'You don't want to be tied to one technology or cloud. Sometimes, cloud-native services make sense for speed and cost, while in other cases, you need portability and mobility to pick the best available innovation,' Aarti notes. To the board and CFOs, Mahindra frames its cloud decisions around business outcomes (faster sales processes, more effective teams, scalable operations), not just technology investments.

Architecting the future: Microservices, AI, and DevSecOps

Underpinning Mahindra's multi-cloud strategy is a modern architecture that prioritizes microservices, robust APIs, and mobile-first design. 'We are moving toward a headless, microservices-based architecture that allows plug-and-play capabilities, enabling us to embed AI and GenAI into everything we build, from chatbots on websites to intelligent automation in manufacturing,' says Aarti.
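The plug-and-play idea behind such a headless architecture, capabilities registered behind a uniform interface so a new implementation can be swapped in without rewiring callers, can be sketched in a few lines. This is an illustrative toy, not Mahindra's actual platform; the service names are hypothetical:

```python
# A toy service registry: capabilities sit behind a uniform interface,
# so a new implementation (e.g. a GenAI-backed chatbot) can be plugged
# in under the same name without changing the code that calls it.
services = {}

def register(name):
    def wrap(fn):
        services[name] = fn  # later registrations replace earlier ones
        return fn
    return wrap

def call(name, *args):
    return services[name](*args)

@register("chatbot")
def rule_based_chatbot(message):
    return "Please contact support."

print(call("chatbot", "Where is my car?"))  # Please contact support.

# Swap in a smarter implementation under the same capability name:
@register("chatbot")
def genai_chatbot(message):
    return f"(GenAI) Answering: {message}"

print(call("chatbot", "Where is my car?"))  # (GenAI) Answering: Where is my car?
```

Callers only ever depend on the capability name, which is what makes embedding AI "into everything we build" an incremental swap rather than a rewrite.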
This is complemented by a strong DevSecOps culture, ensuring secure and efficient development cycles, and a push toward test automation and CI/CD pipelines.

Balancing public cloud scalability with private control

While Mahindra benefits from the scalability and elasticity of public clouds, especially during peak loads, certain sensitive workloads and defense-related data remain on-premises to ensure sovereignty and compliance. 'For GenAI workloads, for instance, we've built an internal AI platform where employees can safely upload and summarize data without it leaving our environment,' Aarti explains. This careful balance between scalability and control extends to Mahindra's data strategy, which favors a federated data mesh approach. Rather than centralizing all data into one cloud, Mahindra keeps data in source systems and brings it together only when needed for specific AI use cases like hyper-personalization or predictive analytics.

AI at scale: from experiments to outcomes

Central to Mahindra's strategy is the AI lifecycle, from data collection and model training to deployment and continuous learning, all governed by strong observability, governance, and compliance with regulations like India's DPDP Act and the GDPR. Aarti underscores that trust remains a cornerstone in scaling AI. 'It starts with building confidence in the outcomes and picking the right business problems to solve. Once the right use cases are chosen, scaling becomes much easier,' Aarti reflects. To support this, Mahindra has invested in a horizontal AI team, a robust MLOps platform, and training programs to embed AI into the DNA of the organization. Across businesses, AI and GenAI are already being used in chatbots, hyper-personalization, computer vision in manufacturing, and agentic technologies in call centers.

Mastering complexity with FinOps, open source, and interoperability

With a multi-cloud footprint comes the challenge of managing costs and complexity.
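Taming multi-cloud cost typically starts with joining billing and utilization data to flag waste. A minimal FinOps-style sketch, with made-up records standing in for what a real pipeline would pull from each provider's billing and monitoring APIs:

```python
# Hypothetical billing/utilization records; a production FinOps platform
# would collect these from each cloud provider's billing and monitoring APIs.
resources = [
    {"id": "vm-analytics-01", "monthly_cost": 900.0,  "avg_cpu_util": 0.62},
    {"id": "vm-staging-07",   "monthly_cost": 450.0,  "avg_cpu_util": 0.03},
    {"id": "gpu-train-02",    "monthly_cost": 3200.0, "avg_cpu_util": 0.88},
]

def flag_idle(resources, util_threshold=0.05):
    """Flag resources whose average utilization suggests they are idle,
    i.e. candidates for rightsizing or shutdown."""
    return [r["id"] for r in resources if r["avg_cpu_util"] < util_threshold]

idle = flag_idle(resources)
savings = sum(r["monthly_cost"] for r in resources if r["id"] in idle)
print(idle, savings)  # ['vm-staging-07'] 450.0
```

Automating this kind of check, rather than reviewing bills manually, is what turns cost governance from a quarterly exercise into continuous hygiene.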
Mahindra has tackled this with an in-house FinOps platform, active governance, and automation to ensure resources are used efficiently. On the technology side, open-source adoption, event-driven architectures, and containerization through Kubernetes enable interoperability and composability. As Aarti points out, 'Kubernetes gives us the flexibility to move workloads between clouds seamlessly, while MLOps standardizes AI workflows for data scientists across the enterprise.'

The cultural transformation: AI as everyone's job

Perhaps the most striking aspect of Mahindra's journey is its cultural shift toward embracing AI at every level. 'Our vision is to scale AI and embed it into everyone's work. From leadership sponsorship to lighthouse projects to individual learning, everyone is encouraged to adopt and experiment with AI,' Aarti shares. Employees are trained not just to use AI tools but to master techniques like effective prompting to fully leverage GenAI's potential.

Investments that matter

Over the past 12–18 months, Mahindra has made significant investments, including:

- The Mahindra AI Platform, an internal GenAI-enabled GPT
- Data platforms across its businesses
- A horizontal AI and data science team
- Lighthouse AI projects with direct business impact
- AI-driven cybersecurity and developer productivity tools

Leading with purpose and agility

When asked how Mahindra benchmarks itself against peers like Tata, Reliance, and L&T, Aarti confidently asserts that the group's strategic focus and outcome-driven approach set it apart. 'We have an edge in speed of decision-making, business diversity, and an ecosystem mindset. Our strategy is not about hobby experiments; it's about scaling AI with purpose,' Aarti affirms. Looking ahead, she sees tremendous promise in customer-facing AI journeys, developer productivity platforms, AI-enhanced cybersecurity, voice and facial interfaces, and agentic technologies.
Lessons in leadership

For Aarti, the biggest leadership lesson in scaling AI has been about trust and focus. 'You must pick the right problems to solve and build trust in the outcomes. Once you do that, success follows,' Aarti concludes. Mahindra's multi-cloud and AI strategy is more than a technical playbook; it is a blueprint for how large, diversified enterprises can lead with clarity, agility, and purpose in an increasingly complex digital ecosystem. By balancing scalability with sovereignty, innovation with governance, and experiments with outcomes, Mahindra is not just keeping pace with the AI revolution; it is setting the pace. In Aarti Singh's words, 'It's about scaling with intelligence, making AI and cloud work for the business, the customer, and the future.'


Time of India
04-07-2025
- Business
The SOC isn't just a function in the age of AI, by Dr. Yusuf Hashmi
Highlights

- Why SOC fatigue is a systemic risk, not an analyst issue
- The role of AI, agentic models, and automation in optimizing MTTR
- How to design SOCs that scale with relevance, not just volume
- The intersection of DPDP, data lineage, and SOC accountability
- The irreplaceable role of human context in an AI-augmented security world

In this DeepTalks session, Dr. Yusuf Hashmi, Group CISO at Jubilant Bhartia Group, reimagines the SOC, tackling AI-assisted triage, alert fatigue, data governance, DPDP liability, and the rising cost of log inflation, to present a bold, practical vision for future-ready security.

In the dimly lit war rooms of cybersecurity, the Security Operations Centers (SOCs), thousands of alerts blink on screens every minute. Analysts scan dashboards, eyes darting, trying to distinguish between noise and the one anomaly that could bring an enterprise to its knees. But in today's AI-fueled world, even these battle-tested security models are showing signs of exhaustion. 'It's time we stop seeing the SOC as just a dashboard of alerts,' says Dr. Yusuf Hashmi, Group CISO at Jubilant Bhartia Group, in a gripping and wide-ranging conversation with ETCIO DeepTalks. 'We must reimagine it as a cockpit, one that is predictive, autonomous, and human-aware.' Dr. Hashmi isn't just describing a shift in tools. He's championing a cultural and architectural transformation, one that demands leadership rethink how security operations are structured, automated, and governed.

The breakdown begins

The conversation opens with a blunt diagnosis: the traditional SOC is broken. 'There used to be a handful of firewall logs coming in. Today, we're ingesting data from 60-70 different log sources,' Dr. Hashmi explains. 'From endpoints to proxies, from cloud to identity - the ecosystem is sprawling. And each of these sources needs contextual use cases. But most organizations aren't ready for that.' This, Dr.
Hashmi says, creates the perfect storm for alert fatigue, a silent killer in cybersecurity. Analysts are overwhelmed, incidents are missed, and trust in the SOC dwindles.

AI's promise and pitfalls

Dr. Hashmi sees AI not as a silver bullet, but as a powerful enabler, if implemented wisely. 'AI can triage, correlate, enrich. It can suppress false positives and help prioritize what matters. But AI must be trained. It doesn't mature out of the box. You need 5 to 6 months, sometimes longer, to adapt a model to your data,' Dr. Hashmi warns. He emphasizes the agentic model: using AI-powered agents to take over repetitive, mundane triage tasks so human analysts can focus on critical decision-making. But the contextual layer, he insists, must remain human. 'AI can automate. But it cannot replace the analyst's gut instinct, their ability to think outside the box. That's irreplaceable,' he adds.

Integration nightmares & log inflation

At the heart of SOC dysfunction lies a quietly growing monster: log overload. 'Many organizations don't understand what they're ingesting,' Dr. Hashmi says. 'EPS (events per second) peaks go through the roof. And half those logs? They're noisy. They're being stored, processed, and paid for, but they add no value.' Dr. Hashmi's advice: optimize for relevance. 'You don't need everything. You need what helps you correlate, detect, and respond. Everything else is an expensive distraction.'

From alert fatigue to MTTR anxiety

Metrics like Mean Time to Detect (MTTD) and Mean Time to Respond (MTTR) have become the new holy grails of SOC performance. But as Dr. Hashmi points out, they're only as good as the underlying architecture and logic. 'If you don't fine-tune your rules, if your alerts aren't contextualized, your MTTA and MTTR suffer. Analysts waste time chasing irrelevant noise, and that one critical alert gets buried.' The fix? Smarter alerting. Better enrichment. Fewer false positives.
And yes, more AI-powered correlation engines that understand behavioral baselines.

The compliance curveball: DPDP's impact on SOCs

With India's Digital Personal Data Protection (DPDP) Act coming into force, Dr. Hashmi sees new pressure on SOC teams, especially around personal data ingestion. 'If your SOC is processing DLP logs, you may be dealing with personal data. That means you're accountable under the DPDP Act. You need governance, visibility, and traceability.' He calls for greater attention to data lineage: understanding where data comes from, how it's stored, who accesses it, and how long it remains within systems. 'Security without governance is a ticking bomb. You need to know your data trail end to end,' notes Dr. Hashmi.

SOC design: It's not about tools. It's about context.

When asked what makes a modern SOC truly effective, Dr. Hashmi offers a precise and measured answer:

- Scalability: the platform must handle peak loads
- Use cases: MITRE ATT&CK-aligned rules save time
- Usability: analysts need intuitive, investigation-friendly tooling
- Awareness: know your licensing model (EPS vs. data volume)
- Clarity: MTTR, MTTD, and false-positive rates are your compass

But Dr. Hashmi is quick to emphasize that no model fits all. 'You must understand your environment. Your threat landscape. Your business impact. No Gartner quadrant can define your context better than you.'

The ROI dilemma and the AI hype trap

Every CISO today is asked the same thing: what's the ROI on security? Dr. Hashmi believes it starts with asset valuation. 'If you don't know the value of what you're protecting, how will you measure loss? Understand your assets. Quantify their downtime impact. Then map your SOC outcomes against that.' He also cautions against AI FOMO: 'Many CISOs buy AI tools just because they're trending. But if your MTTA isn't improving and your response time hasn't dropped, what did you really gain?' says Dr. Hashmi.

On MDRs, cloud SOCs, and cost-efficient architectures

For organizations lacking in-house expertise or infrastructure, Dr.
Hashmi recommends SOC-as-a-Service or Managed Detection & Response (MDR) models. 'Not everyone needs an on-prem SOC. If you're a smaller firm, MDR can be a life-saver: no licensing, no infra management, no staffing nightmares.' Dr. Hashmi also advocates for cloud-based SOCs with high availability and easy scalability, especially when uptime and redundancy are mission-critical.

In perhaps the most poignant part of the conversation, Dr. Hashmi speaks of the unsung heroes of the SOC: the analysts. 'They run 24x7. They're the stars of the security function. But we overload them with Excel reporting, compliance checklists, and fatigue. That has to stop,' says Dr. Hashmi. He also urges CISOs to sit with their SOC teams, understand their world, and build empathy into governance. 'The SOC isn't just a function. It's your shield. If you love it, you'll nurture it.' In a world increasingly driven by automation, Dr. Hashmi reminds us that passion still powers the best defenses. 'SOCs are like goalkeepers. They don't get applause until something goes wrong. But they're your last line of defense, and your first line of attack.' To modernize a SOC, organizations must combine the power of AI with the wisdom of human intelligence, supported by architecture that scales, data that's governed, and leadership that listens. Because in cybersecurity, it's not just about fighting threats; it's about earning trust, concludes Dr. Hashmi.
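The MTTD/MTTR compass that Dr. Hashmi returns to throughout is straightforward to compute once incident timestamps are recorded consistently. A minimal sketch, with illustrative timestamps and field names that are assumptions rather than any particular SOC platform's schema:

```python
from datetime import datetime, timedelta

def mean_delta(incidents, start_key, end_key):
    """Average elapsed time between two recorded timestamps across incidents."""
    deltas = [inc[end_key] - inc[start_key] for inc in incidents]
    return sum(deltas, timedelta()) / len(deltas)

# Hypothetical incident records: when the attack began, when the SOC
# detected it, and when it was contained.
incidents = [
    {"occurred": datetime(2025, 7, 1, 10, 0),
     "detected": datetime(2025, 7, 1, 10, 30),
     "resolved": datetime(2025, 7, 1, 12, 0)},
    {"occurred": datetime(2025, 7, 2, 9, 0),
     "detected": datetime(2025, 7, 2, 9, 10),
     "resolved": datetime(2025, 7, 2, 10, 10)},
]

mttd = mean_delta(incidents, "occurred", "detected")  # Mean Time to Detect
mttr = mean_delta(incidents, "detected", "resolved")  # Mean Time to Respond
print(mttd, mttr)  # 0:20:00 1:15:00
```

Tracking these averages over time, rather than per incident, is what reveals whether rule tuning and AI-assisted triage are actually moving the needle.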