
AI: The future belongs to those who put the humans in the machine first
As generative AI spreads across legal practice, the advantage is no longer in what you know but in how well you reason. Recall is easy - anyone can pull up case law. The real edge lies in interpretation, explanation and judgment. And while today's models don't always reason perfectly, neither do humans. The better question is: can AI help lawyers reason better?
This is where things get interesting.
More data ≠ better model
Let's start with the false promise of infinite data. It's widely understood that throwing thousands of pages of legislation, regulation, case law and other legal documents at a model doesn't make it smarter. In fact, it often makes it worse, because legal reasoning depends on, amongst other things, quality, relevance and clarity. A carefully curated dataset of law and precedent in a specific domain of expertise and a particular jurisdiction (and potentially some related jurisdictions) can outperform a bloated corpus of global case law riddled with inconsistencies and irrelevance.
Here, the model doesn't need to 'know the law' - it needs to retrieve it with precision and reason over it with discipline. That's why in most practical applications in a specific domain of expertise, Retrieval-Augmented Generation (RAG) will probably beat full fine-tuning. RAG lets you plug into a general-purpose model that's already been trained on a vast body of knowledge, and then layer on your own curated legal content in real time - without the need for full re-training. It's fast, flexible and keeps you close to the constantly evolving edge of legal precedent. If fine-tuning is like rewriting the engine, RAG is like swapping in smarter fuel - giving you a model that reasons over your trusted material instead of guessing based on a noisy global corpus.
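The RAG pattern described above can be sketched in a few lines: retrieve the most relevant passages from a curated corpus, then build a prompt that grounds the model in that retrieved text rather than its general training data. This is a minimal, illustrative sketch only - the word-overlap scoring is a toy stand-in for real embedding search, the corpus snippets are invented placeholders, and the assembled prompt would be handed to whatever general-purpose model you already use.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens; a toy stand-in for real embeddings."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, passage: str) -> float:
    """Toy relevance: fraction of query tokens the passage contains."""
    q = tokens(query)
    return len(q & tokens(passage)) / max(len(q), 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant passages from the curated corpus."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the model in retrieved text, not its general training data."""
    context = "\n---\n".join(passages)
    return (
        "Answer using ONLY the sources below. If they do not cover the "
        f"question, say so.\n\nSources:\n{context}\n\nQuestion: {query}"
    )

# Invented placeholder passages, standing in for a curated legal corpus.
corpus = [
    "Travel allowances are typically tied to attendance at a designated worksite.",
    "Section 65 of the Fair Work Act covers requests for flexible work arrangements.",
    "Casual conversion rights arise after 12 months of regular employment.",
]
query = "Does a remote employee still get a travel allowance under the Fair Work Act?"
prompt = build_prompt(query, retrieve(query, corpus))
```

The design point is that the curation happens outside the model: swapping a better corpus in or a stale passage out changes the answer immediately, with no re-training.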
This is the difference between dumping legal textbooks on your desk and actually having a partner walk you through the implications.
Reasoning over regurgitation
Take a real-world query:
"Can an employee working remotely in Melbourne still claim a travel allowance under their enterprise agreement?"
An untrained model might respond with this:
"There are hundreds of examples of travel allowances in Australian enterprise agreements…shall I find these for you and list them?"
Helpful? Not really.
A well-trained legal AI might say this instead:
"It depends on the specific terms of the enterprise agreement that applies to the employee. Travel allowances are typically tied to physical attendance at a designated worksite, and if an employee's role has been formally varied to remote or hybrid, including under a flexible work arrangement, the allowance may no longer apply. You'd need to check whether the agreement defines a primary work location, whether remote work was agreed (under Section 65 of the Fair Work Act or otherwise) and whether there are any clauses preserving travel entitlements in such cases."
Now we're not 'just' talking about answers; we're talking about prompts for strategic thinking.
Scaling senior expertise, insight and judgment, not just recall
The much deeper question is this: how do we train AI not just to answer, but to remind us to ask better questions? Clients don't pay us for information; they pay for interpretation. They come to top-tier firms because they want the kind of insight only senior legal professionals can provide - the kind that draws on pattern recognition from years of relevant experience, strategic insight and framing, and an understanding of nuance built across decades of practice.
The real opportunity lies in scaling what clients actually value most: the expertise of senior partners - including their insight, experience, judgment and contextual thinking. This means training AI to reason like a partner - to recognise what matters, frame choices, reason through trade-offs and flag what clients will care about.
We should be asking: "How do we encode that?" How do we teach a model to say not just 'here's what the law says', but 'here's how you might think about this, and here's what clients like yours have cared about in similar cases'? This represents an all-important shift from knowledge to judgment and from retrieval to reasoning.
Because the goal isn't to build a machine that knows everything but to build one that helps your lawyers engage with better questions, surface richer perspectives and unlock more strategic conversations that create value for clients.
It's important to remember: AI hears what is said, but great lawyers listen for what isn't said. That's where real context lives - within tone, hesitation and the unspoken concerns that shape top-tier legal advice. To build AI that supports nuanced thinking, we need to train it on more than documents; we need to model real-world interactions and teach it to recognise the emotional cues that matter. This isn't about replacing human intelligence but about amplifying it, helping lawyers read between the lines and respond with sharper insight. This, in turn, might open up brand new use cases. Imagine if AI could listen in on client-lawyer conversations not just for note-taking but to proactively suggest risks, flag potential misunderstandings or surface relevant precedents in real time based on the emotional and contextual cues it detects.
From knowledge to insight: What great training looks like
If we want AI to perform like a partner, we need the model not just to give lawyers the answer but to do what a senior partner would do in conversation:
"Here's what you need to think about... Here are two approaches clients tend to prefer... and here's a risk your peers might not spot."
This kind of reasoning-first response can help younger lawyers engage with both the material and the client without needing to escalate every issue to their senior. Importantly, it's not about skipping the partner - it's about scaling their thinking. Scaling the apprenticeship model in ways not possible in the past.
If you're not solving for:
- What the client really cares about, and why
- How to recognise the invisible threads between past matters and current situations, options and decisions
- How to ask the kinds of questions a senior practitioner would ask
- The kind of prompt to use to achieve this
…then you're not training AI…you're just hoping like hell that it helps.
This is also where RAG and training intersect. Rather than re-training the model from scratch, we can use RAG to ensure the model is drawing from the right content - legal guidance, judgment notes, contextual memos - while training it to reason the way our top partners do. Think of it less like coding a robot, and more like mentoring a junior lawyer with access to every precedent you've ever relied on.
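The "mentoring a junior lawyer" framing above can be sketched as a prompt assembler: retrieved sources supply the law, while curated partner judgment notes supply the framing the model is asked to reason with. Everything below is illustrative placeholder content, not real firm guidance, and the instruction wording is one assumed way to elicit a reasoning-first response.

```python
def partner_prompt(question: str, sources: list[str], judgment_notes: list[str]) -> str:
    """Combine retrieved law with partner judgment notes into one
    reasoning-first prompt, mirroring how a senior partner frames advice."""
    return "\n\n".join([
        "You are assisting a junior lawyer. Do not just state the law:",
        "1. Identify what the client is really worried about.\n"
        "2. Frame two or three realistic options with trade-offs.\n"
        "3. Flag one risk a junior might miss.",
        "Sources (retrieved, trusted):\n" + "\n".join(f"- {s}" for s in sources),
        "Partner judgment notes:\n" + "\n".join(f"- {n}" for n in judgment_notes),
        f"Question: {question}",
    ])

# Placeholder inputs, standing in for retrieved content and curated notes.
prompt = partner_prompt(
    "Can we terminate this supply contract early?",
    ["Clause 12 permits termination for convenience on 90 days' notice."],
    ["Clients in this sector usually care more about supplier goodwill than the exit fee."],
)
```

The judgment notes live in the retrieval layer, so capturing how a partner thinks becomes a content-curation exercise rather than a model-training one.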
Some critics, including recent research, have questioned whether today's large language models can truly reason or reliably execute complex logical tasks. It's a fair challenge and one we acknowledge but it's also worth noting that ineffective reasoning isn't new. Inconsistency, bias and faulty heuristics have long been a part of human decision-making. The aim of legal AI isn't to introduce flawless reasoning, but to scale the kind of strategic thought partners already apply every day and to prompt richer thinking, not shortcut it.
How to structure a real firm-level AI rollout
As AI becomes embedded in professional services, casual experimentation is no longer enough. Legal firms need structured adoption strategies and one of the best frameworks could be what Wharton professor Ethan Mollick calls the 'Lab, Library, and Leadership' model for making AI work in complex organisations.
In his breakdown:
Lab = the experimental sandbox where teams pilot real-world use cases with feedback loops and measurable impact.
Library = the curated knowledge base of prompts, best practices, guardrails and insights (not just raw documents, but how to use these well).
Leadership = the top-down cultural shift that's needed to legitimise, resource and scale these efforts.
For law firms, this maps elegantly to our current pressing challenges: the Lab is where legal teams experiment with tools like RAG-based models on live matters. The Library is the evolving playbook of prompt templates, safe document sources and past legal reasoning. And Leadership (arguably the most vital) is what determines whether those ideas ever leave the lab and reach real matters and clients. As Mollick puts it, "AI does not currently replace people, but it does change what people with AI are capable of." The firms that win in this next chapter won't just use AI - they'll teach their people how to build with it.
And critically, they'll keep teaching it.
Most models, including GPT-4, are trained on datasets with a knowledge cut-off, and as a consequence they are often months or even years out of date. If you're not feeding the machine fresh experiences and insights, you're working with a version of reality that's already stale. This isn't a 'one and done' deployment - it's an ongoing dialogue. By structuring feedback loops from live matters, debriefs and partner insights, firms can ensure the model evolves alongside the business, not behind it.
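The "ongoing dialogue" above implies the retrieval corpus is dated and continuously appended to, so the model always reasons over the freshest vetted material rather than a stale snapshot. A minimal in-memory sketch, assuming a real deployment would use a proper vector store and a partner-review workflow before anything is added:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class KnowledgeBase:
    """Toy dated corpus; entries are (date, text) pairs."""
    entries: list[tuple[date, str]] = field(default_factory=list)

    def add_insight(self, text: str, when: date) -> None:
        """Record a partner-reviewed insight from a live matter or debrief."""
        self.entries.append((when, text))

    def freshest(self, n: int = 3) -> list[str]:
        """Most recent entries first, so retrieval favours current reality."""
        return [text for _, text in sorted(self.entries, reverse=True)[:n]]

# Invented example insights, standing in for real debrief output.
kb = KnowledgeBase()
kb.add_insight("Pre-2024 guidance on fixed-term contracts superseded.", date(2024, 1, 5))
kb.add_insight("Regulator now expects proactive disclosure in M&A reviews.", date(2025, 3, 2))
```

Because the knowledge base carries dates, retrieval can prefer or filter by recency, which is exactly the lever a model with a fixed training cut-off lacks.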
Putting humans in the machine
Ultimately, legal AI isn't about machine innovation; it's about human innovation. The real challenge is how to capture and scale the experience, insight, judgment and strategic thinking of senior lawyers. That requires sitting down with partners to map how they approach a question, what trade-offs they consider and how they advise clients through complexity. That's the real creativity, and that's what we need to encode into the machine.
Lawyer 2.0 isn't just AI-assisted - it's trained by the best, for the benefit of the many. The future of legal work will belong to those who put humans in the machine first.
