Latest news with #KaranAlang

IEEE I2ITCON 2025 recognizes cutting-edge research with best paper awards

Time of India

08-07-2025

IEEE I2ITCON 2025 recognizes cutting-edge research with best paper awards

PUNE: The IEEE I2ITCON 2025 conference, technically co-sponsored by the IEEE Pune Section, concluded on a high note on July 5 at The Hope Foundation and Research Centre - International Institute of Information Technology (I²IT), Pune. The prestigious two-day event, held on July 4 and 5, brought together leading researchers, academicians, and industry experts from across the globe to showcase and deliberate on innovations in AI, computing, and emerging technologies, a statement issued by the organisers said.

This year, the conference witnessed an overwhelming response, with over 2,500 paper submissions from researchers worldwide. Of these, 236 papers were shortlisted and presented across various technical tracks. In a keenly awaited valedictory session, four outstanding papers were conferred the coveted Best Paper Award, recognizing exemplary contributions in advancing technology and research.

In the AI, ML & Deep Learning track, the paper titled 'GovernAI: Policy-Driven Model Governance for Dynamic and Multi-Tenant AI Systems', authored by Sana Zia Hassan, Karan Alang, Nagarjuna Nellutla, Shrinivass Arunachalam Balasubramanian, and Vamshidhar Morusu, was awarded for its pioneering approach to managing AI systems in complex, multi-tenant environments. Another paper from the same track, 'Differentially Private Pipelines: Practical Approaches for Large-Scale Feature Stores', by Pooja Devaraju, Satya Manesh Veerapaneni, Ram Ghadiyaram, Jaya Eripilla, and Sudeep Acharya, was recognized for its innovative techniques ensuring privacy in large-scale AI data pipelines.

In the Computing Technologies track, the award-winning paper 'A Framework for Intelligent Cloud Systems: Enabling Secure, Policy-Driven, and Sustainable AI at Scale', authored by Lakshman Kumar Jamili, Sumeer Basha Peta, Ranganath Nagesh Taware, Balaji Krishnan, and Srikanth Perla, impressed the jury with its vision for scalable and secure AI deployments. Another paper, 'Training AI to Simulate Reality Through Self-Constructed Representations', by Rohan Shahane, Shazia Hassan, Nandita Giri, Sashi Kiran Kaata, and Vijayakumar Krishnapillai, was honored for its innovative approach to bridging the gap between AI learning and real-world simulations.

The conference underscored Pune's growing prominence as a hub for technological innovation and research, providing a robust platform for knowledge exchange and collaboration. With its diverse and forward-looking discussions, IEEE I2ITCON 2025 reaffirmed its commitment to shaping the future of technology.

Why Large Language Models Are The Future Of Cybersecurity

Forbes

02-07-2025

Why Large Language Models Are The Future Of Cybersecurity

Karan Alang is a principal software engineer at Versa Networks with 25 years of experience in AI, cloud and big data.

Cybersecurity today faces a key challenge: it lacks context. Modern threats—advanced persistent threats (APTs), polymorphic malware, insider attacks—don't follow static patterns. They hide in plain sight across massive volumes of unstructured data: logs, alerts, threat feeds, user activity, even emails.

Traditional defenses—whether signature-based detection, static rules or first-generation ML models—are effective against known threats but struggle with the scale and complexity of modern attack vectors. They often produce false positives, and their rule-based nature means novel or sophisticated attacks are typically detected only after damage has occurred.

Large language models (LLMs) can change this. Originally built to understand and generate natural language, LLMs like GPT-4, Claude, Gemini and others offer something cybersecurity desperately needs: the ability to read between the lines. They can parse logs like narratives, correlate alerts like analysts and summarize incidents with human-level fluency. But LLMs are more than just smarter tools—they're the foundation of a new kind of AI-augmented defense system.

The Six Most Promising Use Cases For LLMs In Cybersecurity

• LLMs can analyze behavioral baselines across users and devices, identifying subtle deviations that signal insider threats or credential abuse. Unlike rigid anomaly detection models, LLMs can identify unknown threats and significantly reduce false positives.
• By ingesting log data, incident reports and threat intel, LLMs can autonomously map behaviors to relevant MITRE ATT&CK techniques. This streamlines classification and enhances threat response workflows.
• LLMs excel at identifying unknown threats by recognizing semantic anomalies and behavioral inconsistencies across diverse data. This makes them well suited to detecting zero-days, novel malware or multistage attack chains with no prior signature.
• Phishing remains the most common initial attack vector. LLMs can parse email language, structure and embedded content to detect social engineering cues, flagging threats that evade traditional filters.
• Security operations centers (SOCs) are drowning in alerts. LLMs can act as AI copilots, prioritizing the most relevant incidents, summarizing them in plain English and reducing analyst fatigue.
• LLMs can digest unstructured threat intelligence—white papers, PDFs, X feeds—and convert it into structured indicators of compromise (IOCs) or STIX/TAXII format for machine consumption.

How To Ensure LLM Accuracy: Avoiding Hallucinations

In cybersecurity, an incorrect AI-generated response isn't a bug—it's a liability. LLM hallucinations must be proactively mitigated. Here's how to do it right:

• Retrieval-Augmented Generation (RAG): Pair the LLM with real-time data sources (logs, threat feeds, MITRE documentation). The model then generates answers based on verified content, not just memory.
• Structured Prompting: Use defined templates that limit open-ended generation, e.g., {"mitre_technique": "T1566.001", "confidence": 0.93} (see the sketch after this list).
• Human-In-The-Loop Validation: Analysts should review and approve high-impact outputs (e.g., containment actions, incident classification).
• Audit Logging: All AI-generated recommendations should be logged, including prompt, retrieved context and final output, for post-incident review and model tuning.
• Fine-Tuning + Feedback Loops: Regularly incorporate analyst feedback to improve model accuracy and contextual alignment with your environment.
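To make the structured prompting and audit logging points concrete, here is a minimal, provider-agnostic sketch. It is an illustration under stated assumptions, not a reference implementation: call_llm() is a hypothetical stand-in for whatever model API is in use (GPT-4, Claude, Gemini or a self-hosted model), and the JSON fields simply mirror the example template above.

```python
# Minimal sketch of the structured prompting and audit logging mitigations above.
# Assumptions (not from the article): call_llm() is a placeholder for whatever model
# API is in use, and the JSON schema below is illustrative rather than a standard.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_recommendations.log", level=logging.INFO)

PROMPT_TEMPLATE = """You are a SOC assistant. Using ONLY the context below, map the
observed behavior to a MITRE ATT&CK technique. Reply with JSON of the form
{{"mitre_technique": "<ID>", "confidence": <0.0-1.0>, "rationale": "<one sentence>"}}.

Context (retrieved logs and threat intel):
{context}

Observed behavior:
{event}
"""


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (OpenAI, Anthropic, Gemini, self-hosted, ...)."""
    raise NotImplementedError("wire this up to your model provider")


def classify_event(event: str, retrieved_context: str) -> dict:
    """Structured prompt in, structured (and audited) verdict out."""
    prompt = PROMPT_TEMPLATE.format(context=retrieved_context, event=event)
    raw = call_llm(prompt)
    result = json.loads(raw)  # fails loudly if the model strays from the template

    # Audit logging: record prompt, retrieved context and final output for review.
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "retrieved_context": retrieved_context,
        "output": result,
    }))

    # Human-in-the-loop: flag low-confidence verdicts for analyst review.
    if result.get("confidence", 0.0) < 0.8:
        result["needs_analyst_review"] = True
    return result
```

Parsing the reply with json.loads() is deliberately strict: if the model drifts from the template, the call raises an error rather than letting free-form text flow downstream, and low-confidence verdicts are flagged for an analyst instead of being acted on automatically.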
LLMs should not replace your SOC—they should augment it with intelligence that's explainable, traceable and verifiable.

Future Outlook: Agentic AI, MCP And Agent-To-Agent Architectures

LLMs are the starting point. The next generation of AI in cybersecurity will be built on three converging frontiers.

Agentic systems are LLM-powered entities that can reason, plan and take action within constraints. They won't replace analysts—but they'll act like Tier-1 analysts on autopilot, freeing humans for more strategic work.

As enterprises deploy multiple AI models across detection, analysis and response, Model Context Protocols (MCPs) will standardize how context is transferred between models. This is essential for regulated environments that require compliance-ready automation.

In early-stage prototypes already used in cyber defense research, multiple specialized AI agents communicate to divide tasks. This modular, collaborative AI ecosystem will redefine cybersecurity architecture—with AI agents acting like a fully staffed, scalable SOC team.

Granted, these architectures are at a nascent stage, but many companies are already applying them in next-gen cyber platforms, and they have the potential to become mainstream as protocols, standards and guardrails mature.

Final Takeaway: What Security Leaders Should Do Now

LLMs are no longer an experiment—they're a strategic imperative that CISOs, CIOs, CTOs and engineering leaders should be planning for now.

Conclusion

We're entering an era where AI doesn't just help detect threats—it understands them, explains them and, soon enough, will act on them with human guidance. Large language models are not just the future of cybersecurity—they're the context engine that makes the rest of your security stack smarter. Now is the time to invest—not just in the technology but in the architecture and governance needed to make it secure, reliable and impactful.
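As a closing illustration of the "Tier-1 analyst on autopilot" idea in the outlook above, the toy sketch below shows only the hand-off logic: low-severity noise is auto-closed, anything impactful is escalated to a human. The alert fields, the severity threshold and summarize_alert() are hypothetical; a real agent would add reasoning, retrieval and tool use on top of this skeleton.

```python
# Toy sketch of an agentic triage loop with human-in-the-loop escalation.
# Assumptions (not from the article): alert fields, the 0.7 threshold and
# summarize_alert() are illustrative only; summarize_alert() would normally
# wrap an LLM call that explains the alert in plain English.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Alert:
    source: str
    message: str
    severity: float  # 0.0 (benign) to 1.0 (critical)


@dataclass
class TriageResult:
    closed: List[Alert] = field(default_factory=list)
    escalated: List[Alert] = field(default_factory=list)


def summarize_alert(alert: Alert) -> str:
    """Placeholder: in practice, an LLM-generated plain-English summary."""
    return f"{alert.source}: {alert.message} (severity {alert.severity:.2f})"


def triage(alerts: List[Alert], escalation_threshold: float = 0.7) -> TriageResult:
    """Auto-close low-severity noise; route anything impactful to a human analyst."""
    result = TriageResult()
    for alert in sorted(alerts, key=lambda a: a.severity, reverse=True):
        summary = summarize_alert(alert)
        if alert.severity >= escalation_threshold:
            print(f"ESCALATE to analyst: {summary}")
            result.escalated.append(alert)
        else:
            print(f"auto-closed: {summary}")
            result.closed.append(alert)
    return result


if __name__ == "__main__":
    triage([
        Alert("edr", "powershell spawned from excel.exe", 0.85),
        Alert("email-gateway", "bulk marketing mail flagged", 0.12),
    ])
```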
