
Latest news with #contextengineering

Context Engineering for Financial Services: By Steve Wilcockson

Finextra

a day ago


The hottest discussion in AI right now, at least the one not about Agentic AI, is about how "context engineering" is more important than prompt engineering: how you give AI the data and information it needs to make decisions, and why it cannot (and must not) be a solely technical function. "'Context' is actually how your company operates; the ideal versions of your reports, documents & processes that the AI can use as a model; the tone & voice of your organization. It is a cross-functional problem." So says renowned tech influencer and Associate Professor at Wharton School, Ethan Mollick. He in turn cites fellow tech influencer Andrej Karpathy on X, who in turn cites Tobi Lutke, CEO of Shopify: "It describes the core skill better: the art of providing all the context for the task to be plausibly solvable by the LLM." The three together - Mollick, Karpathy and Lutke - make for a powerful triumvirate of tech influencers.

Karpathy consolidates the subject nicely. He emphasizes that in real-world, industrial-strength LLM applications, the challenge entails filling the model's context window with just the right mix of information. He thinks about context engineering as both a science - because it involves structured systems and system-level thinking, data pipelines, and optimization - and an art, because it requires intuition about how LLMs interpret and prioritize information. His analysis reflects two of my predictions for 2025: one highlighting the increasing impact of uncertainty, and another a growing appreciation of knowledge.

Tech mortals offered further useful comments on the threads, two of my favorites being:

'Owning knowledge no longer sets anyone apart; what matters is pattern literacy - the ability to frame a goal, spot exactly what you don't know, and pull in just the right strands of information while an AI loom weaves those strands into coherent solutions.'

'It also feels like 'leadership' Tobi. 
How to give enough information, a goal, and then empower.'

I love the AI loom analogy, in part because it corresponds with one of my favorite data descriptors, the "Contextual Fabric". I like the leadership positivity too, because the AI looms and contextual fabrics are led by and empowered by humanity.

Here's my spin, to take or leave. Knowledge, based on data, isn't singular; it's contingent, contextual. Knowledge, and thus the contextual fabric of data in which it is embedded, is ever changing, constantly shifting, dependent on situations and needs. I believe knowledge is shaped by who speaks, who listens, and what about. That is, to a large extent, led by power and the powerful. Whether in Latin, science, religious education, finance and now AI, what counts as 'truth' is often a function of who gets to tell the story. It's not just about what you know, but how, why, and where you know it, and who told you it. But of course it's not that simple; agency matters - the peasant can become an abbot, the council house schoolgirl can become a Nobel prize-winning scientist, a frontier barbarian can become a Roman emperor. For AI, on one hand truth is held by the big tech firms and grounded in their biases; on the other, it's democratizing, in that all of us and our experiences help train and ground AI, in theory at least. But I digress.

For AI-informed decision intelligence, context will likely be the new computation that makes GenAI tooling more useful than an oft-hallucinating stochastic parrot, while enhancing traditional AI - predictive machine learning, for example - to be increasingly relevant and affordable for the enterprise.

Context Engineering for FinTech

Context engineering - the art of shaping the data, metadata, and relationships that feed AI - may become the most critical discipline in tech. This is like gold for those of us in the FinTech data engineering space, because we're the dudes helping you create your own context. 
I'll explore how five different contextual approaches, all representing data engineering-relevant vendors I have worked for - technical computing, vector-based, time-series, graph and geospatial platforms - can support context engineering.

Parameterizing with Technical Computing

Technical computing tools - think R, Julia, MATLAB and Python's SciPy stack - can integrate domain-specific data directly into the model's environment through structured inputs, simulations, and real-time sensor data, normally as vectors, tables or matrices. For example, in engineering or robotics applications, an AI model can be fed with contextual information such as system dynamics, environmental parameters, or control constraints. Thus the model can make decisions that are not just statistically sound but also physically meaningful within the modeled system. Such tools can dynamically update the context window of an AI model, for example in scenarios like predictive maintenance or adaptive control, where AI must continuously adapt to new data. By embedding contextual cues, like historical trends, operational thresholds, or user-defined rules, they help ground the model's outputs in the specific realities of the task or domain.

Financial Services Use Cases

Quantitative Strategy Simulation: Simulate trading strategies and feed results into an LLM for interpretation or optimization.

Stress Testing Financial Models: Run Monte Carlo simulations or scenario analyses and use the outputs to inform LLMs about potential systemic risks.

Vectors and the Semantics of Similarity

Vector embeddings are closely related to the linear algebra of technical computing, but they bring semantic context to the table. Typically stored in so-called vector databases, they encode meaning into high-dimensional space, allowing AI to retrieve through search not just exact matches but conceptual neighbors. They thus allow for multiple stochastically arranged answers, not just one. 
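To make the retrieval-by-similarity idea concrete, here is a minimal sketch in plain Python. The three-dimensional "embeddings", document texts, and query vector are invented for illustration; a real system would use a learned embedding model with hundreds or thousands of dimensions and an approximate-nearest-neighbor index rather than a brute-force sort.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve_context(query_vec, store, k=2):
    """Return the texts of the k documents whose embeddings sit nearest the query."""
    ranked = sorted(store, key=lambda d: cosine_similarity(query_vec, d["vec"]),
                    reverse=True)
    return [d["text"] for d in ranked[:k]]

# Toy 3-dimensional "embeddings"; conceptually similar documents get nearby vectors.
store = [
    {"text": "Chargeback dispute on card transaction", "vec": [0.9, 0.1, 0.0]},
    {"text": "Mortgage rate FAQ",                      "vec": [0.1, 0.9, 0.1]},
    {"text": "Suspected card-present fraud case",      "vec": [0.8, 0.2, 0.1]},
]
query = [0.85, 0.15, 0.05]  # stand-in embedding of "customer disputes a card charge"
print(retrieve_context(query, store))
```

The point is that the mortgage FAQ is never retrieved for a card-dispute query, even though no exact keyword match was attempted: proximity in the embedding space stands in for conceptual similarity.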
Until recently, vector embeddings and vector databases have been the primary providers of enterprise context to LLMs, shoehorning all types of data into searchable mathematical vectors. Their downside is their brute-force, compute-intensive approach to storing and searching data. That said, they use similar transfer learning approaches - and deep neural nets - to those that drive LLMs. As expensive, powerful brute-force vehicles of Retrieval-Augmented Generation (RAG), vector databases don't simply store documents but understand them, and they have an increasingly proven place in enabling LLMs to ground their outputs in relevant, contextualized knowledge.

Financial Services Use Cases

Customer Support Automation: Retrieve similar past queries, regulatory documents, or product FAQs to inform LLM responses in real time.

Fraud Pattern Matching: Embed transaction descriptions and retrieve similar fraud cases to help the model assess risk or flag suspicious behavior.

Time-Series, Temporal and Streaming Context

Time-series database and analytics providers, and in-memory and columnar databases that can organize their data structures by time, specialize in knowing about the when. They can ensure temporal context - the heartbeat of many use cases in financial markets as well as IoT and edge computing - grounds AI at the right time with time-denominated sequential accuracy. Streaming systems like Kafka and Flink can also facilitate the real-time central nervous systems of financial event-based systems. It's not just about having access to time-stamped data, but analyzing it in motion, enabling AI to detect patterns, anomalies, and causality as close as possible to real time. In context engineering, this is gold. Whether it's fraud that happens in milliseconds or sensor data populating insurance telematics, temporal granularity can be the difference between insight and noise, with context stored and delivered by what some might see as a data timehouse. 
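The temporal-grounding idea can be sketched with a simple rolling z-score check: flag any tick that deviates sharply from its trailing window, so that only the anomalous context (not the whole stream) need be injected into a model's prompt. A time-series platform would do this at scale and in motion; the window size, threshold, and tick values below are illustrative assumptions.

```python
import statistics

def rolling_anomalies(prices, window=5, threshold=3.0):
    """Flag points whose deviation from the trailing-window mean exceeds
    `threshold` standard deviations - a toy stand-in for the temporal
    context a time-series store would surface to an LLM."""
    flags = []
    for i in range(window, len(prices)):
        hist = prices[i - window:i]          # trailing window only, no lookahead
        mean = statistics.mean(hist)
        stdev = statistics.stdev(hist)
        if stdev > 0 and abs(prices[i] - mean) / stdev > threshold:
            flags.append((i, prices[i]))
    return flags

# Quiet tape with one sudden jump at index 6.
ticks = [100.0, 100.2, 99.9, 100.1, 100.0, 100.1, 107.5, 100.2]
print(rolling_anomalies(ticks))
```

Only the jump at index 6 is flagged; the normal jitter around 100 stays below the threshold, which is the insight-versus-noise distinction the section describes.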
Financial Services Use Cases

Market Anomaly Detection: Injecting real-time price, volume, and volatility data into an LLM's context allows it to detect and explain unusual market behavior.

High-Frequency Trading Insights: Feed LLMs with microsecond-level trade data to analyze execution quality or latency arbitrage.

Graphs That Know Who's Who

Graph and relationship-focused providers play a powerful role in context engineering by structuring and surfacing relationships between entities that are otherwise hidden in raw data. In the context of large language models (LLMs), graph platforms can dynamically populate the model's context window with relevant, interconnected knowledge - such as relationships between people, organizations, events, or transactions. They enable the model to reason more effectively, disambiguate entities, and generate responses that are grounded in a rich, structured understanding of the domain. Graphs can act as a contextual memory layer through GraphRAG and Contextual RAG, ensuring that the LLM operates with awareness of the most relevant and trustworthy information. For example, graph databases - or other environments, e.g. Spark, that can store graph data types in accessible file formats such as Parquet on HDFS - can be used to retrieve a subgraph of relevant nodes and edges based on a user query, which can then be serialized into natural language or structured prompts for the LLM. Platforms that focus graph context around entity resolution and contextual decision intelligence can enrich the model's context with high-confidence, real-world connections - especially useful in domains like fraud detection, anti-money laundering, or customer intelligence. Think of them as Shakespeare's Comedy of Errors meets Netflix's Department Q: two Antipholuses and two Dromios rather than one of each in Comedy of Errors; only one Jennings brother to investigate in Department Q's case; and where does Kelly MacDonald fit into anything? 
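The subgraph-retrieval-and-serialization pattern described above can be sketched with a plain adjacency structure. The AML entities and relationship names below are invented for illustration; a real deployment would use a graph database and proper entity resolution rather than a hand-built dictionary.

```python
def neighborhood(graph, start, depth=2):
    """Collect (subject, relation, object) edges within `depth` hops of
    `start`, breadth-first - a toy subgraph retrieval."""
    seen, frontier, edges = {start}, [start], []
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for rel, other in graph.get(node, []):
                edges.append((node, rel, other))
                if other not in seen:
                    seen.add(other)
                    next_frontier.append(other)
        frontier = next_frontier
    return edges

def serialize_for_prompt(edges):
    """Turn edges into plain-language lines an LLM prompt can include."""
    return "\n".join(f"{s} --{rel}--> {o}" for s, rel, o in edges)

# Hypothetical AML graph: resolved entities and their relationships.
graph = {
    "Acme Ltd": [("director", "J. Smith"), ("pays", "Shell Co")],
    "Shell Co": [("same_address", "Acme Ltd"), ("pays", "Offshore Acct")],
}
context = serialize_for_prompt(neighborhood(graph, "Acme Ltd"))
print(context)
```

The serialized lines - including the second-hop payment to an offshore account - are exactly the kind of interconnected, disambiguated context a GraphRAG layer would hand to the model during a risk assessment.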
Entity resolution and graph context can help resolve and connect such entities in a way that more standard data repositories and analytics tools struggle with. LLMs cannot function without correct and contingent knowledge of people, places, things and the relationships between them, though to be sure, many types of AI can also help discover the connections and resolve entities in the first place.

Financial Services Use Cases

AML and KYC Investigations: Surface hidden connections between accounts, transactions, and entities to inform LLMs during risk assessments.

Credit Risk Analysis: Use relationship graphs to understand borrower affiliations, guarantors, and exposure networks.

Seeing the World in Geospatial Layers

Geospatial platforms support context engineering by embedding spatial awareness into AI systems, enabling them to reason about location, proximity, movement, and environmental context. They can provide rich, structured data layers (e.g., terrain, infrastructure, demographics, weather) that can be dynamically retrieved and injected into an LLM's context window. This allows the model to generate responses that are not only linguistically coherent but also geographically grounded. For example, in disaster response, a geospatial platform can provide real-time satellite imagery, flood zones, and population density maps. This data can be translated into structured prompts or visual inputs for an AI model tasked with coordinating relief efforts or summarizing risk. Similarly, in urban planning or logistics, geospatial context helps the model understand constraints like traffic patterns, zoning laws, or accessibility. In essence, geospatial platforms act as a spatial memory layer, enriching the model's understanding of the physical world and enabling more accurate, context-aware decision-making.

Financial Services Use Cases

Branch Network Optimization: Combine demographic, economic, and competitor data to help LLMs recommend new branch locations. 
Climate Risk Assessment: Integrate flood zones, wildfire risk, or urban heat maps to evaluate the environmental exposure of mortgage and insurance portfolios.

Context Engineering Beyond the Limits of Data, Knowledge & Truths

Context engineering, I believe, recognizes that data is partial, and that knowledge - and perhaps truth, or truths - need to be situated, connected, and interpreted. Whether through graphs, time-series, vectors, technical computing platforms, or geospatial layering, AI depends on weaving the right contextual strands together. Where AI represents the loom, the five types of platforms I describe are like the spindles, needles, and dyes drawing on their respective contextual fabrics of ever-changing data, driving threads of knowledge - contingent, contextual, and ready for action.
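As a closing sketch of the geospatial layer idea: select which contextual layers fall within a radius of a candidate site, so only the locally relevant ones are injected into a prompt. The coordinates, layer names, and radius are invented for illustration; real platforms would use proper geometries, projections, and spatial indexes rather than a haversine loop.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_layers(site, layers, radius_km=5.0):
    """Select the contextual layer features within radius of a candidate site."""
    return [name for name, (lat, lon) in layers.items()
            if haversine_km(site[0], site[1], lat, lon) <= radius_km]

# Hypothetical candidate branch site and context layers.
site = (51.5074, -0.1278)                 # central London
layers = {
    "flood_zone_A":  (51.49, -0.12),      # ~2 km away
    "competitor_hq": (51.51, -0.13),      # well under 1 km away
    "wildfire_risk": (34.05, -118.24),    # another continent, excluded
}
print(nearby_layers(site, layers))
```

Only the flood zone and competitor layers survive the radius filter, giving the model geographically grounded context without drowning it in irrelevant layers.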

Is the Vibe Coding Era Over? Context Engineering Takes Center Stage

Geeky Gadgets

3 days ago


What if the secret to unlocking the full potential of AI in coding isn't about crafting the perfect prompt but about designing the perfect environment? Imagine an AI system that doesn't just guess at your intentions based on a few vague instructions but instead operates within a carefully crafted framework of rules, examples, and context. This is the promise of context engineering, an approach that shifts the focus from intuition-driven 'vibe coding' to deliberate, structured inputs. While vibe coding may feel creative and spontaneous, its reliance on minimal guidance often leads to frustrating errors and inconsistent results. In contrast, context engineering offers a path to reliable, scalable, and efficient AI-assisted development, fundamentally changing how developers interact with these tools.

Cole Medin explores how context engineering is reshaping the landscape of AI-powered coding. You'll discover why vibe coding, while appealing in its simplicity, often falls short in complex or high-stakes scenarios. We'll break down the core principles of context engineering, from designing structured outputs to integrating memory and retrieval-augmented generation. Along the way, you'll gain insights into how this method reduces errors, enhances scalability, and positions developers to tackle the increasingly sophisticated demands of modern software development. By the end, you might find yourself questioning not just how you code with AI, but how you think about coding altogether.

From Vibe Coding to Context Engineering

Why Vibe Coding Falls Short

Vibe coding, characterized by its intuitive and trial-and-error nature, can be effective for quick prototypes or small-scale tasks. However, this method often struggles in production environments or when applied to large-scale projects. The lack of sufficient context in vibe coding frequently leads to AI 'hallucinations,' where the system generates inaccurate or irrelevant outputs. 
These errors necessitate extensive human intervention, undermining developer confidence and limiting the scalability of AI-generated solutions. By relying on intuition rather than structured input, vibe coding fails to meet the demands of complex, high-stakes development scenarios.

What is Context Engineering?

Context engineering addresses the limitations of vibe coding by focusing on the intentional design of the input provided to AI systems. Instead of relying on minimal prompts, this approach involves supplying the AI with structured, comprehensive information, such as rules, documentation, examples, and task-specific plans. By treating context as a carefully designed resource, developers can guide AI systems to produce accurate, consistent, and reliable results. This method reduces ambiguity, minimizes hallucinations, and enhances the overall performance of AI tools, making them more suitable for complex and large-scale applications.

How Context Engineering Differs from Prompt Engineering

While prompt engineering focuses on refining individual prompts to improve AI responses, context engineering takes a broader and more holistic approach. It creates an ecosystem of information that enables AI systems to handle complex tasks with greater precision. For example, instead of crafting a single prompt to generate code, context engineering involves providing a detailed framework that includes examples, structured output requirements, and relevant documentation. This ensures the AI has all the necessary tools to deliver high-quality results consistently, making it a more robust and scalable solution than prompt engineering alone. 
Key Components of Context Engineering

To implement context engineering effectively, several core components must be considered:

  • Prompt Engineering: Focuses on crafting clear and precise individual queries to guide AI behavior.
  • Structured Output: Establishes consistent formats for AI responses, ensuring reliability and usability.
  • State History and Memory: Enables the AI to recall past actions and maintain continuity across tasks.
  • Examples and Documentation: Provides reference materials to guide the AI's decision-making and behavior.
  • Retrieval-Augmented Generation (RAG): Integrates external knowledge sources, such as databases or documentation, to enhance the AI's capabilities.

By combining these elements, developers can create a robust context that enables AI tools to perform complex tasks with minimal manual intervention, improving both efficiency and accuracy.

How to Implement Context Engineering

Implementing context engineering requires careful planning and preparation. Begin by defining global rules, feature descriptions, and task requirements. AI tools such as Claude Code can assist in generating Product Requirements Prompts (PRPs), which serve as blueprints for project implementation. These PRPs guide the AI through each step of the development process, ensuring consistency and reducing the need for human oversight. For example, when developing an AI agent, a PRP can outline its functionality, expected outputs, and integration points. By providing this level of detail, you can ensure the AI operates within a well-defined framework, reducing errors and enhancing the quality of its outputs. 
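The components above can be assembled into a single structured prompt. This is a minimal sketch: the section names, ordering, and the `build_context` helper are illustrative assumptions, not a standard format or any tool's actual API.

```python
def build_context(rules, examples, docs, history, task):
    """Assemble structured context components into one prompt string.
    Empty components are skipped so the prompt stays lean."""
    sections = [
        ("GLOBAL RULES", "\n".join(rules)),
        ("EXAMPLES", "\n".join(examples)),
        ("DOCUMENTATION", "\n".join(docs)),
        ("STATE / MEMORY", "\n".join(history)),
        ("TASK", task),
    ]
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections if body)

# Hypothetical inputs for a small coding task.
prompt = build_context(
    rules=["Use type hints", "Never commit secrets"],
    examples=["def add(a: int, b: int) -> int: return a + b"],
    docs=["Project uses pytest for all tests"],
    history=["Previous step created module payments.py"],
    task="Add a refund() function to payments.py with tests",
)
print(prompt)
```

The point of the structure is repeatability: every request the AI sees carries the same rules, reference examples, and state, instead of a one-off prompt written from intuition.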
Benefits of Context Engineering

Adopting context engineering offers several significant advantages:

  • Reduced Errors: Comprehensive and structured input minimizes the risk of hallucinations and inaccuracies in AI-generated code.
  • Time Efficiency: While context engineering requires upfront effort, it streamlines the development process, saving time in the long term.
  • Scalability: Structured context enables AI systems to produce high-quality code that can scale effectively in production environments.

These benefits make context engineering a valuable approach for developers seeking to maximize the potential of AI tools in their workflows.

Challenges and Security Considerations

Despite its advantages, context engineering is not without challenges. Creating comprehensive context requires significant effort, expertise, and time. Developers must carefully design inputs to ensure they are both detailed and relevant. Additionally, security risks such as prompt injection and data leakage must be managed effectively. Protecting sensitive information and ensuring AI systems are resilient to malicious inputs are critical for maintaining trust and reliability. Addressing these challenges is essential for the successful implementation of context engineering.

The Future of Context Engineering

As AI systems evolve into more dynamic and autonomous agents capable of complex decision-making, the importance of context engineering will continue to grow. Advanced techniques such as state management, memory integration, and retrieval-augmented generation will become essential for allowing AI systems to handle increasingly sophisticated tasks. 
By investing in context engineering today, developers can position themselves to fully use the potential of AI in the future, ensuring their tools remain reliable, scalable, and effective in a rapidly changing technological landscape.

Media Credit: Cole Medin
