
Latest news with #RetrievalAugmentedGeneration

Elastic named Leader in 2025 Gartner Magic Quadrant for observability

Techday NZ

5 hours ago

  • Business
  • Techday NZ

Elastic has been recognised as a Leader in the 2025 Gartner Magic Quadrant for Observability Platforms for the second consecutive year.

Gartner recognition

The company earned this placement for its Elastic Observability offering after an evaluation of its Completeness of Vision and Ability to Execute. The recognition acknowledges Elastic's work in developing AI-driven capabilities, support for open standards, and the scalability and cost-efficiency of its observability platform. Santosh Krishnan, General Manager, Observability & Security at Elastic, commented on the company's approach to observability: "Visibility alone isn't enough; customers need rapid context-rich insights to troubleshoot complex systems. We feel Elastic's recognition as a Leader in this year's Gartner Magic Quadrant reflects how our open, scalable architecture with AI-driven capabilities is transforming observability from a reactive tool into a solution for real-time investigations while keeping costs low."

Key features highlighted

The company stated that its differentiation lies in several areas, including native integration with OpenTelemetry, a built-in AI Assistant, and zero-configuration AIOps for anomaly detection. Elastic's AI Assistant leverages Retrieval Augmented Generation (RAG) technology to connect with enterprise knowledge, supporting incident resolution through natural language queries. This allows operational teams to reduce time-to-insight across logs, metrics, and traces. Elastic's zero-config AIOps deploys machine learning capabilities out of the box to automatically detect anomalies, forecast trends, and reveal patterns within large datasets. The piped query language, ES|QL, aims to reduce the complexity of large-scale IT investigations by enabling advanced queries across observability data.
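The kind of out-of-the-box anomaly detection described above can be illustrated with a toy rolling z-score detector. This is a generic sketch of the underlying statistical idea, not Elastic's actual implementation; the window size and threshold are illustrative assumptions.

```python
import statistics

def detect_anomalies(series, window=10, threshold=3.0):
    """Flag points deviating more than `threshold` standard deviations
    from the mean of the preceding `window` observations."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history)
        if stdev > 0 and abs(series[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# A steady latency series (ms) with one spike at index 15.
latencies = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99,
             100, 102, 101, 99, 100, 500, 100, 101, 99, 100]
print(detect_anomalies(latencies))  # [15]
```

A production system layers forecasting, seasonality handling, and model management on top of this basic idea, which is what "zero-config" platforms automate away.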
Krishnan stated that Elastic's placement in the Magic Quadrant demonstrates the effectiveness of continued investments in open standards and deployment flexibility, alongside scalable performance and cost optimisations. He described the solution's impact on organisations moving from reactive troubleshooting to real-time investigation of incidents and anomalies.

Enterprise adoption

Elastic's approach to observability has also been adopted by enterprises seeking to consolidate monitoring tools and improve operational efficiency. Eva Ulicevic, Director, Technology, Architecture, Strategy, and Analytics at Telefónica Germany, shared the impact the platform has had within the organisation: "By using Elastic and consolidating multiple tools, we reduced our root cause analysis time by 80%. We also reduced incidents that could severely impact our business."

The platform is built on Elastic's Search AI Platform, supporting the monitoring and optimisation of applications, infrastructure, and end-user experience. Elastic's Search AI Lake is designed for petabyte-scale data retention, supporting efficient storage and search for structured and unstructured data.

Industry context

The Gartner Magic Quadrant evaluates vendors in the observability sector on criteria such as vision, innovation, ability to execute, and breadth of capabilities. Elastic's Leader listing for the second year underscores continued investment in tools that address the challenges of managing, searching, and analysing large volumes of operational data. Elastic's commitment to open standards is emphasised by its native support for OpenTelemetry, enabling organisations to standardise instrumentation and data collection without requiring proprietary connectors.

The observability platform is positioned to support organisations as they address the growing complexity of cloud-based architectures and meet increased demand for real-time performance monitoring, anomaly detection, and automated root cause analysis.

Why Is Retrieval Augmented Generation Or RAG Popular Today?

Forbes

5 days ago

  • Forbes

There's an approach called Retrieval Augmented Generation in AI that's becoming a key way to help get targeted results from models. You could say that it's like chocolate and peanut butter – two great tastes that taste great together. Or you could describe it in more technical ways.

Essentially, Retrieval Augmented Generation is when you add information that the LLM should know as it applies its own training data and knowledge to a task. Over at GeeksforGeeks, experts explain it this way: 'In traditional LLMs, the model generates responses based solely on the data it was trained on, which may not include the most current information or specific details required for certain tasks. RAG addresses this limitation by incorporating a retrieval mechanism that allows the model to access external databases or documents in real-time.' Then there's a nice flow chart with 'data chunks' and other components, showing how this type of thing works.

Think about how this would work in practice – for example, consider how you might give a chatbot a series of white papers about your business, and then ask it questions about your business model. Or on a personal level, if you want the AI to understand you better, you give it personal documents like diary recordings, or some of your past writing, to help it build a better knowledge of you as a person. In a very broad sense, you could say that RAG involves adding anything that wasn't in the original training set. That might be for reasons of nuance, or timing, or purpose, or it might just be to help target the result the way you want.

Getting to the Point

At Learn By Building AI, Bill Chambers explains a simple approach to RAG. First, he contrasts it with this description, which he says he found at Facebook: 'Building a model that researches and contextualizes is more challenging, but it's essential for future advancements. We recently made substantial progress in this realm with our Retrieval Augmented Generation (RAG) architecture, an end-to-end differentiable model that combines an information retrieval component (Facebook AI's dense-passage retrieval system) with a seq2seq generator (our Bidirectional and Auto-Regressive Transformers [BART]).' Good grief. Chambers then provides a neat little drawing that shows a 'corpus of documents' getting connected to an LLM through user input. That made sense to me: RAG means adding specific information resources! There are technical details, for sure, but I thought the tutorial did a great job of breaking this down, so that's another resource for anyone who wants to learn more about how it actually works.

Using RAG

I also wanted to reference a tech talk by Soundararajan Srinivasan, Sr. Director of AI Program at Microsoft, and his colleague Reshmi Ghosh, a Microsoft Sr. Applied Scientist, at Imagination in Action in April, where they talked about practical use of RAG. Using terms like 'knowledge store,' 'vector database,' 'orchestrator,' and 'meta prompt,' Srinivasan went over how these systems can work, saying they help us understand the limitations of AI in its context. And 'context' is also an important term because, as he describes, a larger context window adds capability, potentially with a lower memory footprint. The presenters also discussed several other reasons to use RAG.

Ghosh then talked about how we can tell whether a model chooses to use the RAG information in its processing. 'You have all these different contexts that are sent with the query to tell the model that, "hey, here's the external knowledge that you may or may not know,"' she said. 'When we are designing systems with large language models, also small language models like Llama and Phi, we are essentially finding that if you can send in context by compartmentalizing the data points and not fine-tuning it, you are still going to get factual queries answered in a qualitative manner of accuracy.'

Ghosh also mentioned multi-modality. 'You can essentially have databases that have images, that have voice notes, that have sounds or music notes of any kind, and you can still build AI applications around it with the same kind of gains, because now you know that the models are tending towards utilizing RAG context and relying less on the internal memory, and this is also opening up new doors for all the new frameworks that are being discussed.' This, she added, is useful with protocols like MCP (Model Context Protocol) and A2A (Agent to Agent systems).

That's important as we move into an era of new interfaces, where we're not just limited to typing to our AI partners. We have voice now, and more is coming in the future, with image and video generation that will be vibrant enough to replace text-based models. Some would say we're entering a world of dreams, where so much is possible that was previously impossible. RAG might be one component of making sure that we can steer the bus and deliver the kinds of results that we're looking for. It helps with what you might categorize as 'convergence' for a digital intelligence system. So keep an eye on these kinds of methodologies as we continue to design more sophisticated AI tools and resources.
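The "corpus of documents connected to an LLM through user input" picture can be sketched in a few lines of Python. This is a toy illustration under stated assumptions: the bag-of-words similarity stands in for a real embedding model and vector database, the sample corpus is invented, and the assembled prompt would be sent to an LLM of your choice.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=1):
    """Rank document chunks by similarity to the query, keep the top k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    """Prepend the retrieved chunks so the model answers from them."""
    context = "\n".join(retrieve(query, corpus, k=2))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical "white papers about your business", pre-split into chunks.
corpus = [
    "Our business model is subscription-based, billed annually.",
    "The company was founded in 2019 in Wellington.",
    "Support tickets are answered within one business day.",
]
prompt = build_prompt("What is the business model?", corpus)
print(prompt)
```

Real systems swap in dense embeddings and an approximate-nearest-neighbour index, but the retrieve-then-generate shape is the same.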

Virtual AI Assistant To Help Power TD Securities

Cision Canada

7 days ago

  • Business
  • Cision Canada

TD set to launch Generative AI pilot designed to save colleagues time and enhance client interactions

TORONTO, July 8, 2025 /CNW/ - Today, TD Bank Group ("TD" or the "Bank") announced the launch of the TD Securities Artificial Intelligence (AI) Virtual Assistant, a proprietary generative AI-powered chatbot. Initially launching as a pilot, the TDS AI Virtual Assistant is designed to augment the productivity and effectiveness of TD Securities ("TDS") Front Office Institutional Sales, Trading, and Research professionals. By streamlining daily tasks, the virtual assistant will help significantly enhance the value these colleagues can bring to their client interactions.

The TDS AI Virtual Assistant, a type of Knowledge Management System ("KMS"), is an internal chatbot designed to help employees efficiently retrieve, aggregate, and synthesize vast amounts of information into concise, context-aware summaries and insights, helping colleagues answer client inquiries with increased efficiency and speed. Using Retrieval Augmented Generation (RAG), the virtual assistant searches internal TDS research documents, interpreting, analyzing, and summarizing key points to respond effectively to user prompts. It also employs Text-to-SQL functionality to convert conversational queries into SQL queries, which are then executed against the data repository to gather and synthesize results into summary tables and visual plots as needed to provide timely market information. Once implemented, this virtual assistant is designed to save front office colleagues time, allowing them to focus on strategic client engagement and decision-making.

"We're excited about the potential that the TDS AI Virtual Assistant brings to the TD Securities team," said Dan Charney, Executive Vice President, Vice Chair and Head, Global Markets, TD Securities. "This isn't just another tool; it's a meaningful step toward the future of how we work, built by traders, for traders. In a world that's moving faster every day, we're focused on giving our people smarter ways to cut through complexity and stay ahead. By combining human expertise with powerful technology, we're unlocking new possibilities for our teams and, ultimately, for our clients."

Key Features of the TDS AI Virtual Assistant include:

  • Productivity Boost: Reduces information overload by automating information gathering and summarization, allowing teams to focus on more strategic analyses and client engagement.
  • Capital Markets Native: Understands nuanced, industry-specific language and context.
  • Trust and Reliability: Every insight is returned with direct citations to the source material, allowing rapid verification by users.

"The TDS AI Virtual Assistant represents a significant development in how we are helping revolutionize experiences for our colleagues and clients by operationalizing new technologies such as GenAI at the Bank," said Dan Bosman, Senior Vice President and Chief Information Officer, TD Securities & Payments. "We have been methodical in rolling out Knowledge Management Systems across the organization, as these platforms are critical in developing capabilities for colleagues and enhancing experiences for customers. The strong collaboration between our technology groups, Layer6, and Enterprise Innovation teams has been instrumental in achieving these important milestones."

The launch of this virtual assistant is the result of the Bank's investment in cutting-edge research translated into application, driven by multiple teams across the Bank. TD recently announced TD AI Prism, a new AI foundation model intended to help redefine how the Bank predicts customer needs to personalize their banking experiences. TD launched two KMS platforms – in some of its contact centres and in branches – with plans to be live across seven of its businesses by the end of the year. The Bank also completed a large-scale migration of data records into its secure cloud-based platform, giving the Bank more speed and flexibility to unlock solutions such as the TDS AI Virtual Assistant.

As the financial sector evolves, TD remains committed to innovation and the responsible use of AI, driving advancements that benefit both the institution and the industry at large. This approach is fostered by the Bank as part of TD Invent, its strategic effort to power innovation. In an era where speed, accuracy, and adaptability are paramount, TD's approach demonstrates the strategic use of AI in helping to address complex financial challenges.

About TD Bank Group

The Toronto-Dominion Bank and its subsidiaries are collectively known as TD Bank Group ("TD" or the "Bank"). TD is the sixth largest bank in North America by assets and serves over 27.9 million customers in four key businesses operating in a number of locations in financial centres around the globe: Canadian Personal and Commercial Banking, including TD Canada Trust and TD Auto Finance Canada; U.S. Retail, including TD Bank, America's Most Convenient Bank®, TD Auto Finance U.S., and TD Wealth (U.S.); Wealth Management and Insurance, including TD Wealth (Canada), TD Direct Investing, and TD Insurance; and Wholesale Banking, including TD Securities and TD Cowen. TD also ranks among the world's leading online financial services firms, with more than 18 million active online and mobile customers. TD had $2.1 trillion in assets on April 30, 2025. The Toronto-Dominion Bank trades under the symbol "TD" on the Toronto and New York Stock Exchanges.
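The Text-to-SQL pattern the TDS assistant uses can be sketched with Python's built-in sqlite3. The schema, sample data, and rule-based question mapping here are hypothetical illustrations; in a production assistant of the kind TD describes, a language model generates the SQL from the conversational query.

```python
import sqlite3

# Hypothetical in-memory market-data table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quotes (ticker TEXT, price REAL, day TEXT)")
conn.executemany("INSERT INTO quotes VALUES (?, ?, ?)", [
    ("TD", 92.10, "2025-07-07"),
    ("TD", 93.45, "2025-07-08"),
    ("RY", 141.00, "2025-07-08"),
])

def text_to_sql(question):
    """Stand-in for the LLM step: map one known question shape to
    parameterized SQL. A real system would generate this with a model."""
    if "latest price" in question.lower():
        ticker = question.split()[-1].strip("?").upper()
        return ("SELECT price FROM quotes WHERE ticker = ? "
                "ORDER BY day DESC LIMIT 1", (ticker,))
    raise ValueError("unsupported question")

sql, params = text_to_sql("What is the latest price for TD?")
(price,) = conn.execute(sql, params).fetchone()
print(price)  # 93.45
```

Executing model-generated SQL against a live repository is where the cited-sources and verification features mentioned above become important, since generated queries must be checked before users act on the results.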

Ola's Krutrim acquires BharatSah'AI'yak from tech consulting firm Samagra

New Indian Express

20-06-2025

  • Business
  • New Indian Express

A Krutrim spokesperson said: "Integrating BharatSah'AI'yak into the Krutrim ecosystem widens its offerings, lending cutting-edge AI-centric assistance and support to a range of government initiatives, programs and schemes – thereby spearheading and strengthening the democratisation of AI, making it beneficial and accessible to every Indian."

BharatSah'AI'yak specialises in creating Bharat-focused, vernacular Retrieval Augmented Generation (RAG)-based AI bots that deliver both text and voice-led experiences. The platform has delivered several implementations, including KumbhSah'AI'yak, an AI-powered chatbot for Maha Kumbh 2025 that provided pilgrims with 24/7 guidance on rituals, navigation, accommodations, and attractions.

The AI start-up recently launched Kruti, which can execute tasks like cab booking, food ordering, bill payments, image creation, and in-depth research, while also supporting read-aloud responses; its advanced AI features are free of cost for users. The team at Krutrim operates from three locations: Bengaluru, Singapore, and San Francisco. In February this year, founder Bhavish Aggarwal announced a Rs 2,000 crore investment in Krutrim, with a commitment of Rs 10,000 crore by next year.

Krutrim acquires BharatSah'AI'yak to expand AI footprint in public sector

Business Standard

20-06-2025

  • Business
  • Business Standard

Krutrim, the artificial intelligence start-up founded by Ola's Bhavish Aggarwal, has acquired BharatSah'AI'yak, an AI-powered platform developed by Samagra, in a move aimed at deepening its footprint in India's public sector technology landscape. The acquisition brings under Krutrim's umbrella a platform that has played a central role in accelerating the deployment of AI solutions across a range of government initiatives, spanning education, agriculture and governance. Financial terms of the deal were not disclosed.

With the acquisition, Krutrim plans to integrate its proprietary large language models, cloud infrastructure and agentic AI capabilities – including those behind its recently launched assistant app, Kruti – to expand BharatSah'AI'yak's reach nationwide. 'At Krutrim, we have boarded the country's brightest minds to develop a platform that reflects the diversity, depth and richness of Indian languages and culture,' said a Krutrim spokesperson. 'This integration enhances our ability to build AI that is inclusive, intuitive and deeply rooted in the lived realities of India.'

Experts said the move highlights Krutrim's broader ambition to democratise artificial intelligence across India, targeting both public services and citizen-facing platforms. The deal also underscores the increasing role of home-grown AI firms in shaping India's digital governance strategy, as the government looks to harness emerging technologies to improve service delivery and administrative efficiency.

BharatSah'AI'yak specialises in creating Bharat-focused, vernacular Retrieval Augmented Generation (RAG)-based AI bots that deliver both text and voice-led experiences. The platform's impact is evident through a series of high-profile deployments. Among them is KumbhSah'AI'yak, billed as India's first AI-powered chatbot for Maha Kumbh 2025. Designed to serve millions of pilgrims, the chatbot offers round-the-clock assistance on religious rituals, site navigation, accommodation options and local attractions. Krutrim provided the hosted open-source large language model services that power the chatbot's functionality.

Another notable implementation is the AMA Krushi AI chatbot, launched in Odisha. This voice-enabled assistant delivers agriculture-related guidance and information on government schemes to farmers in local languages, using authenticated data from official sources. The initiative aims to improve accessibility and decision-making for farmers across the region. With Krutrim's advanced AI models, cloud infrastructure and the agentic platform underlying Kruti, these specialised assistants can now scale to serve more users across diverse domains with intuitive, efficient and language-inclusive interactions.

Krutrim recently announced the launch of Kruti, which it bills as the country's first agentic AI assistant, designed to go far beyond conventional chatbots by moving from passive responses to proactive, agentic task execution. Kruti can execute tasks like cab booking, food ordering, bill payments, image creation and in-depth research, while also supporting read-aloud responses; its advanced AI features are free of cost for users.

Krutrim reached unicorn status last year after raising $50 million in equity during its inaugural funding round. The round, which valued the company at $1 billion, included participation from investors such as Matrix Partners India. Earlier this year, company founder Bhavish Aggarwal announced an investment of Rs 2,000 crore in Krutrim, with a commitment to invest an additional Rs 10,000 crore by next year. The company also launched the Krutrim AI Lab and released some of its work to the open-source community.
