Latest news with #SmallLanguageModels


Hans India
4 days ago
- Business
- Hans India
IBM Opens Agentic AI Innovation Center in India
IBM (NYSE: IBM) opened its newest Agentic AI Innovation Center at its Bengaluru office. The Center fosters co-creation and offers clients hands-on experience working with autonomous AI agents. At the Center, clients and partners can build AI agents, fine-tune Small Language Models (SLMs) to power them, and audit the entire process, all within a matter of hours. Clients, partners, and startups can explore AI agents on IBM's platform and solutions.

At the heart of the Center, and of IBM's agentic AI strategy, is IBM watsonx Orchestrate, a single solution that helps companies build, deploy, and manage all of their AI agents and assistants, and enables them to work across their existing technology stack. The Center will help clients, partners, and startups fast-track AI adoption, foster co-innovation, and empower developers through hands-on learning and collaboration with IBM AI experts. Developer hangouts and AI Accelerator Days will be organized for ongoing community engagement, fostering learning about the latest advancements in AI technology and the kind of deep engagement that leads to tangible outcomes.

Inaugurating the facility, Dinesh Nirmal, Senior Vice President, IBM Software, said: 'We are entering a new era of AI—one defined by intelligent agents that don't just assist but act, adapt, and collaborate in real time. The IBM Agentic AI center brings this vision to life by urging local enterprises to evolve their AI strategy from reactive to proactive. It will empower clients and partners to identify processes and workflows where AI will significantly boost productivity, elevate customer satisfaction, minimize downtime and enable users to focus on strategic tasks.'

A recent survey by IBM, in collaboration with Oxford Economics, reveals that AI is now a key driver of financial performance. The study found that Indian business leaders are leading the charge globally in adopting AI across core business functions.
Notably, 78% of Indian executives believe the greatest value of agentic AI lies in enhancing decision-making. The same share of Indian leaders are actively encouraging employees to experiment with agentic AI tools, underscoring a strong culture of innovation and forward-looking leadership.


Entrepreneur
7 days ago
- Business
- Entrepreneur
IBM Launches Agentic AI Innovation Center in Bengaluru to Advance Autonomous AI Development
IBM has launched its latest Agentic AI Innovation Center at its Bengaluru office, aiming to support the development and adoption of autonomous AI agents in India. The facility is designed to give clients, partners, and startups practical experience in building and deploying AI agents, including the fine-tuning of Small Language Models (SLMs). At the center, users will be able to experiment with IBM's AI platforms and tools, including watsonx Orchestrate, IBM's orchestration solution for managing AI agents across enterprise workflows.

The initiative is part of IBM's broader push to support enterprise adoption of agentic AI, a model that allows AI systems to act independently, adapt to changing environments, and work collaboratively within existing business systems. The facility will also serve as a space for co-innovation and community engagement: IBM plans to host developer meetups, technical workshops, and "AI Accelerator Days" to encourage collaboration among developers, researchers, and industry professionals.

"India's AI landscape is evolving rapidly. The goal now is to move from passive AI support to more proactive, intelligent systems," said Dinesh Nirmal, Senior Vice President, IBM Software, during the inauguration. He emphasized that the new center will help enterprises identify specific business functions where agentic AI can enhance productivity, reduce downtime, and improve decision-making.

According to a joint survey conducted by IBM and Oxford Economics, India is at the forefront of AI adoption. The study noted that 78 per cent of Indian executives see decision-making as the top benefit of agentic AI.
The same percentage of leaders also said they are encouraging employees to experiment with AI tools, suggesting a strong culture of innovation within Indian enterprises. Commenting on the launch, Sandip Patel, Managing Director, IBM India and South Asia, said the center aims to support the national agenda of digital transformation by promoting deeper understanding and implementation of high-impact AI use cases.


Forbes
14-07-2025
- Business
- Forbes
SLM Or LLM Agents? The Trade-Offs, The Risks And The Rewards
Joseph Ours leads the AI Strategy Practice at Centric Consulting.

The AI industry is obsessed with scale—bigger models, more parameters, higher costs—the assumption being that more always equals better. Today, small language models (SLMs) are turning that assumption on its head, proving that when it comes to AI performance, size isn't everything. While organizations chase the latest large language model (LLM) with hundreds of billions of parameters, some are quietly deploying smaller, more specialized agents that deliver results at a fraction of the cost. They may be on to something.

We've seen that LLMs can, and do, deliver phenomenal results. However, using them for smaller tasks is like using a Formula One race car for grocery shopping: impressive, but inefficient and impractical for many real-world applications. In fact, Gartner predicts that by 2027, small, task-specific AI models will be used three times more than general-purpose LLMs. The combination of speed, cost-effectiveness, and focused capability makes SLMs well-suited for specialized agentic systems, with AI agents designed to perform specific tasks autonomously within defined domains.

The Performance Trade-Off

LLMs are highly capable, revolutionary technology, but their performance challenges are both real and measurable. Instead of broad general knowledge, SLM-powered agents focus on task-specific expertise. This represents a trade-off between versatility and efficiency:

• Quality Of Input: LLMs excel at complex reasoning and sophisticated contextual understanding, handling diverse inputs across multiple domains. They're ideal for strategic planning, creative content generation, and customer service requiring nuanced understanding. However, their generalized training makes them capable of many things, but often not exceptional at specialized, industry-specific tasks.

• Cost And Speed: LLMs require thousands of GPUs, consume enormous energy, and carry operational costs that can reach hundreds of thousands of dollars monthly. SLM agents deliver dramatically lower costs, often 10 times less, with faster response times and superior latency and throughput. They can also run locally, without internet connectivity, on edge devices like phones, infotainment systems, and airport kiosks, but they risk brittleness when encountering tasks outside their specialized scope. The key is understanding when specialization outweighs versatility for your specific use case.

Real-World Agents

As with LLMs, real-world SLM-powered agents are emerging. For example, Japan Airlines is using Microsoft's Phi models to power AI agents that process passenger paperwork, reduce flight attendant workload, and efficiently handle standardized passenger data and routine questions.

Potential also exists in healthcare, where patient medication mix-ups happen frequently. SLM-powered agents could serve as specialized safety nets, checking prescribed medications for potential interactions, dosage errors, or prescription misinterpretations. Unlike comprehensive medical LLMs that might overstep boundaries, specialized agents could focus exclusively on medication information without venturing into diagnosis or treatment advice. Small model agents can be constrained to appropriate boundaries.

Outside of the more serious applications, gaming represents another emerging market. Instead of running expensive large language models to power non-player characters (NPCs) in games like GTA 6, studios could deploy specialized SLM-powered agents for NPC conversations, with each agent handling specific character types or conversation domains, dramatically improving customer experience while controlling costs. Edge deployment lets these agentic applications run locally on gaming devices without requiring constant cloud connectivity.
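The "specialization vs. versatility" decision described above can be pictured as a thin routing layer in front of a family of narrow agents. The sketch below is a minimal, hypothetical illustration: the agent names, keyword domains, and stubbed model calls are all invented for this example and do not correspond to any real product API.

```python
# Minimal sketch of routing requests between specialized SLM agents
# and a general-purpose LLM fallback. All model calls are stubs; in
# practice each handler would invoke a real inference endpoint.

from dataclasses import dataclass
from typing import Callable


@dataclass
class SLMAgent:
    """A narrow agent: one domain, one (stubbed) small model."""
    domain: str
    keywords: tuple
    handler: Callable[[str], str]

    def can_handle(self, query: str) -> bool:
        q = query.lower()
        return any(kw in q for kw in self.keywords)


def medication_check(query: str) -> str:
    # Stub for an SLM fine-tuned only on drug-interaction data.
    return "SLM(medication): checked interactions for request"


def npc_dialogue(query: str) -> str:
    # Stub for an SLM constrained to one character's dialogue domain.
    return "SLM(npc): in-character reply"


def general_llm(query: str) -> str:
    # Stub for the expensive general-purpose fallback.
    return "LLM(general): broad-knowledge answer"


AGENTS = [
    SLMAgent("medication", ("dosage", "interaction", "prescription"), medication_check),
    SLMAgent("npc", ("quest", "shopkeeper", "dialogue"), npc_dialogue),
]


def route(query: str) -> str:
    """Send the query to the first specialized agent that claims it;
    otherwise pay the cost of the general model."""
    for agent in AGENTS:
        if agent.can_handle(query):
            return agent.handler(query)
    return general_llm(query)


print(route("check this prescription for an interaction"))  # handled by SLM
print(route("plan our three-year market entry strategy"))   # falls back to LLM
```

Real routers would classify with a model rather than keywords, but the shape of the decision, cheap specialists first, general model last, is the same.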
Implementation Hurdles Are Real

Just because they're smaller doesn't mean SLMs are automatically easier to implement. At the consulting company where I work, we see organizations struggling to effectively govern and break down tasks for LLMs. If that's the case, implementing specialized models becomes even more complex, as they require precise task definition and strong governance frameworks. If they're not well-managed, they're more likely than LLMs to go off track. Success with SLMs will require:

• Defined Performance Metrics: Clear measurement of response time, latency, tokens per second, and accuracy in isolated domains where the variety of inputs is manageable.

• Domain Specificity: Applications like car infotainment systems that process natural language requests, healthcare charting, medication safety checks, or other specialized agentic systems work best.

• Governance Maturity: Organizations that have yet to master LLM governance should focus on that first, as SLMs demand more precise oversight.

The Future Of Enterprise AI

The future of enterprise AI is about making intelligent choices that align AI capabilities with business requirements. While SLM agents will continue evolving for specialized tasks, LLMs remain essential for complex reasoning, creative work, and scenarios requiring broad knowledge. It turns out that in AI, big things really do come in small packages. As AI adoption picks up speed, organizations that understand that smaller can deliver better performance will gain competitive advantages.
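Two of the metrics named above, response latency and tokens per second, are easy to instrument. The sketch below is a hypothetical harness: `fake_slm` is a stand-in invented for this example, to be replaced by any real inference call with the same prompt-in, text-out shape.

```python
# Minimal sketch of measuring end-to-end latency and tokens per
# second for a model callable. The model under test is a stub.

import time


def fake_slm(prompt: str) -> str:
    # Stand-in for a real SLM call; sleeps to simulate inference time.
    time.sleep(0.01)
    return "token " * 50  # pretend the model emitted 50 tokens


def benchmark(model, prompt: str) -> dict:
    start = time.perf_counter()
    output = model(prompt)
    latency = time.perf_counter() - start
    n_tokens = len(output.split())  # crude whitespace tokenization
    return {
        "latency_s": latency,
        "tokens": n_tokens,
        "tokens_per_s": n_tokens / latency,
    }


stats = benchmark(fake_slm, "Summarize this maintenance log.")
print(f"{stats['tokens']} tokens in {stats['latency_s']:.3f}s "
      f"({stats['tokens_per_s']:.0f} tok/s)")
```

Accuracy, the third metric, needs a labeled evaluation set for the target domain and cannot be stubbed this simply.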


Forbes
09-05-2025
- Business
- Forbes
How Small Language Models Deliver Big Business Benefits
There seems to be no limit to what artificial intelligence (AI) can help people do. But the tens of billions, even trillions, of parameters used to train large language models (LLMs) can be overkill for many business scenarios. Enter the small language model (SLM).

SLMs are built with relatively few parameters (roughly 10 billion or fewer) and trained on smaller, domain-specific datasets. Because of their small size and fine-tuning, SLMs require less processing power and memory. This means they're faster, use less energy, can run on small devices, and may not require a public cloud connection. Like LLMs, SLMs can understand natural language prompts and respond with natural language replies. They are built using streamlined versions of the artificial neural networks found in LLMs. But because SLMs are trained on focused datasets, they are very efficient at tasks like analyzing customer feedback, generating product descriptions, or handling specialized industry jargon.

'LLMs are like a starship. It's very powerful and can go far, far away, but if you're doing something very tactical and specific, that starship is way too powerful,' says Neil Sahota, CEO of research firm ASCILabs and an AI advisor to the United Nations. 'If speed and costs are concerns, SLMs are the better way to go.'

The sweet spot for SLMs tends to be narrow tasks in high-volume niche applications or in low-power environments, such as on smartphones or Internet of Things (IoT) gadgets. They are also useful when data privacy is crucial or internet access is sparse. For example, field service engineers don't always have high-bandwidth internet access. With an SLM on their device, they could use generative AI to query their field service manual; low computational requirements and local processing make this possible.
Sales representatives might need to access a generative AI model containing sensitive data at a client site to provide tailored recommendations. An SLM could provide those results without the lag and potential privacy concerns that often come with using a mobile device. Clinicians could use an SLM to analyze patient data, extract relevant information, and generate diagnoses and treatment options; the fact that data never leaves the device is a huge benefit for privacy.

But don't expect a significant shift from LLMs to SLMs. Organizations are more likely to implement a portfolio of models, each selected to suit a specific scenario. AI developers, in fact, often work through a pipeline of models: a query might first go to an LLM, then to an SLM for classification, then back to the LLM to extract the information and generate a response. At larger organizations, an LLM could be used for complex tasks, like developing a long-term business strategy that considers macroeconomic policies and global effects, while multiple SLMs handle dozens of business-unit-specific tasks such as analyzing consumer feedback and social media posts to guide new product development.

And while SLMs may be a cost-effective alternative to LLMs, they still have limitations. They don't understand complex language well, they lose accuracy when doing complex tasks, and they have a narrow scope of knowledge. There are other trade-offs. While SLMs generally don't cost a lot to run, costs can add up if multiple SLMs are in use. 'If you have five models deployed and they're each using GPUs and occupying space and electricity in the data center, that costs more versus having one huge model,' says Sean Kask, AI chief strategy officer at SAP. 'Sure, the LLM uses a lot of electricity, but it's being used for a lot of different things, and you can refine data for smaller, more specific queries through prompt engineering.'
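The LLM-to-SLM-to-LLM pipeline described above can be sketched as a few chained calls. Everything below is a stub invented for illustration; a real system would replace each function with an actual model endpoint, and the classification labels are hypothetical.

```python
# Minimal sketch of the pipeline pattern: a query goes to an LLM,
# then to an SLM for classification, then back to the LLM to
# generate the final response. All three model calls are stubs.

def llm_normalize(query: str) -> str:
    # Stub: a general LLM rewrites/cleans the raw user query.
    return query.strip().lower()


def slm_classify(query: str) -> str:
    # Stub: a small model trained only to label the query's topic.
    if "refund" in query or "return" in query:
        return "billing"
    if "password" in query or "login" in query:
        return "account"
    return "general"


def llm_respond(query: str, label: str) -> str:
    # Stub: the LLM generates a response, steered by the SLM's label.
    return f"[{label}] response to: {query}"


def pipeline(raw_query: str) -> str:
    normalized = llm_normalize(raw_query)
    label = slm_classify(normalized)       # cheap, fast, specialized
    return llm_respond(normalized, label)  # expensive, general


print(pipeline("  How do I get a REFUND for last month?  "))
```

The design point is that the middle step, classification, is high-volume and narrow, exactly the niche the article assigns to SLMs, while the open-ended generation stays with the larger model.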
What's more, SLMs present many of the same challenges as LLMs when it comes to governance and security. 'You still need a risk and regulatory framework,' says Jim Rowan, head of AI at Deloitte Consulting LLP. 'You need an AI policy because you don't want business units using data and AI models without your knowledge. And you still have to set up guardrails because SLMs hallucinate too,' he adds.

SLMs also aren't necessarily easier to manage than LLMs. Even though the big AI players offer versions of SLMs through a service model where they provide the underlying engine, 'you still need people who know what the right data is. You need domain experts and a data scientist who can develop a good training strategy for the model,' Sahota says.

Companies will need to ask important questions before incorporating SLMs into their AI strategy:

What business case are you solving for? If the dataset is very small, controlled, and available, such as HR documents or product descriptions, it makes great sense to use an SLM. 'But if it's a large stack of constantly changing data or there's lots of variability in it, such as current mortgage rates or daily geopolitical events, you probably want to go the LLM route,' Sahota says.

What kind of performance and accuracy are needed? SLMs can be very accurate on straightforward questions, like an inquiry into current benefits. But if an employee asks, 'I would like to pay a third mortgage; can I draw off my 401(k)?' they may get a more generic answer. An LLM might be better at handling this type of question, as it could include information on HR and tax standards for 401(k) use.

What are your growth needs? Businesses need to anticipate how big the SLM might get over time. 'If you're a retailer and you're going to toss tens of thousands of products into the model over the next few years, that's certainly an LLM,' Sahota says.
As the number and type of available AI models continue to grow, businesses will need to understand the range of what's available to create their AI model portfolio. 'Choice is very important to your strategy,' Kask says. 'Pick the model that's right for you and for your embedded use case.'