
Litera unveils generative AI tools to transform Kira platform
The enhanced functionality forms part of all Kira subscriptions, requiring no separate setup and eliminating the need for users to provide an Azure OpenAI key. The new features build on Kira's existing predictive AI and traditional search technologies and are powered by Litera AI+.
Functionality update
With the addition of generative AI, Kira users can analyse documents across various languages and jurisdictions, aiming to increase speed, accuracy, and compliance. The AI capabilities are designed to streamline case workflows, highlight potential risks and emerging trends, and decrease the time spent on document review processes.
Adam Ryan, Chief Product Officer at Litera, said: "The re-engineering of Kira with GenAI represents a transformative leap forward for legal teams everywhere - accelerating contract analysis across languages and jurisdictions. By empowering our users with instant, smarter contract analysis and seamless compliance tools, we are redefining what's possible in legal technology and ensuring our clients are always ahead of the curve."
New tools
The updated Kira platform includes several developments. Generative smart fields allow for the creation of custom fields in any language using a prompt, which does not require coding or training cycles. This capability is intended to enable quicker insights across more document types, going beyond Kira's existing 1,400 built-in smart fields.
Kira users will also have access to a grid-based workflow, which features a new tabular layout for contract and document reviews. The layout provides an immediate overview of risks and trends by displaying extracted language and answers, and it enables interaction with documents via chat. Legal teams can also create smart fields within this interface. These features are available in preview to cloud customers, for hands-on use and feedback, as the new Analysis Chart.
Additional enhancements include concept search, which uses predictive AI based on large language model technology to enable identification of legal concepts across project documents from a single example. Project-level generative AI governance offers compliance options, allowing legal teams to enable or disable Litera AI+ features specific to client or project needs.
Expanded ecosystem
Litera has also recently introduced Lito, an AI legal agent integrated into its Litera One portfolio. Lito is designed to work in conjunction with Kira, adapting to different case requirements, user groups, and document complexities. Together, these tools intend to provide support for collaboration, analysis, and document summarisation for legal teams.
Kira's Rapid Clause Analysis functionality identifies and organises clauses across documents, supporting efficiency and consistency, while Kira Smart Summaries allow teams to generate client memo-ready summaries from organised clauses using Litera AI+. A newly designed search architecture is also included, which produces results more quickly - even in larger matters with tens of thousands of documents - and powers the latest generative AI functions in Kira's revised interface.
Market context
Kira has been recognised in the industry for its contract review functionality, including Tier 1 ranking in Legaltech Hub's 2025 Contract Review Competitive Analysis for the second consecutive year. The platform caters to sectors such as mergers and acquisitions, private equity, real estate, and finance.
According to Litera, the updates are aligned with helping legal teams meet client expectations and requirements through accelerated document analysis and risk assessment, while offering informed decision support across a variety of legal document types.

Related Articles


Techday NZ
6 hours ago
Litera enhances AI platform with new features & deeper CRM links
Litera has announced the addition of four new features and a range of enhancements to Litera One, its AI-powered platform integrated with Microsoft 365 for the legal sector. The latest set of updates brings together the cloud version of Litera Create-Content, Foundation Insights, and a newly launched workflow called Precedent, currently in beta, for use within Litera One for Word. These additions are intended to enable law firms and in-house legal teams to turn organisational knowledge into accessible, actionable content during drafting, giving lawyers greater confidence and consistency.

According to Litera, the new workflows harness context-aware artificial intelligence to help legal professionals discover improved precedent language and relevant deal point insights. This aims to strengthen document quality and support negotiations with data-driven information. The platform's enhanced document comparison, proofing, and catch-up functions are designed to provide a faster and more accurate drafting process within standard lawyer workflows.

Productivity and integration

Recent enhancements allow lawyers to access matter, client, and contact insights directly from their inbox, turning Outlook into a productivity hub tailored to legal professionals. Deep integrations between Litera Foundation and Peppermint CRM have been introduced to improve responsiveness and client service, minimising the need to switch between applications. The Clean workflow, previously known as Metadact, is now available in the cloud and directly accessible from the most recent version of Outlook. It offers one-click cleaning of metadata from attachments to protect sensitive information and is now supported on Mac computers as well as other devices.
"Lawyers will no longer have the frustration of waiting or searching for hours and days for critical answers to questions relating to their clients, matters, or other insights," said Adam Ryan, Chief Product Officer at Litera. "With Litera One now delivering Foundation and Peppermint data at their fingertips, lawyers have the information they need in seconds to better serve their clients. Furthermore, with competition for new client business more intense than ever, the fastest partner to reply with the most relevant information can mean the difference between winning and losing business."

Other recent updates to the Draft Platform include the "Clean Up Formatting" function in Litera One Word, enhanced multi-document analysis, and forthcoming support for NetDocuments integration, scheduled to arrive this August. French Canadian language support has also been added to the Create Desktop Application.

Adoption and workflow efficiencies

Since its initial launch, Litera One has seen increased uptake across the legal sector. More than 2,500 law firms and in-house teams now use the platform to make drafting, knowledge management, and client service processes more efficient, assisted by secure, legal-industry-specific generative AI capabilities. Litera reports that, with law firms typically using over 340 applications daily, implementation of Litera One has reduced workflow fragmentation, consolidating multiple disparate tasks into a unified experience and saving users between two and ten hours a week.

AI-powered legal assistant

Litera has also introduced Lito, an artificial intelligence legal agent integrated into Litera One that acts as a virtual team member. Lito is intended to work in tandem with Litera's Draft solutions as part of a complete workflow, leveraging agentic AI to convert insights into immediate actions within Outlook and on the web.
The role of Lito is to interpret lawyers' requirements, coordinate the use of relevant tools, and complete tasks such as drafting, reviewing, and responding, without requiring extensive user input or switching between programs. This approach aims to increase efficiency and streamline legal work processes for teams.

The Clean (Metadact) and Deal Point Insights (Foundation Insights) workflows within Litera One have now been extended to all users. Additional features, such as Litera Create-Content for Knowledge Management, AI-based language search and markup suggestions, and Foundation and Peppermint CRM integration, are scheduled to arrive in the coming weeks.


Techday NZ
14 hours ago
CrowdStrike report warns of GenAI driving surge in cyberattacks
CrowdStrike has released its 2025 Threat Hunting Report, detailing how adversaries are using generative AI (GenAI) to enhance and scale cyberattacks, with a particular focus on emerging threats to autonomous AI systems within enterprises. The report draws on intelligence from CrowdStrike's team of threat hunters and analysts, surveying attacks by over 265 known adversary groups. The findings highlight how attack vectors are evolving with increased automation and use of AI, as well as the targeting of AI-driven systems themselves.

AI-powered attacks

According to the report, GenAI-built malware is now operational, with lower-tier cybercriminals and hacktivist groups using AI to generate scripts, troubleshoot technical issues, and develop new forms of malware. Early examples cited include attacks named Funklocker and SparkCat, which underscore how the barrier to entry for sophisticated cybercrime has been lowered.

China-linked adversaries have driven a significant increase in attacks on cloud infrastructure, accounting for 40% of a 136% rise in such incidents during the first half of 2025. The report notes that actors such as GENESIS PANDA and MURKY PANDA exploited cloud misconfigurations and access privileges to carry out attacks, while GLACIAL PANDA focused on embedding itself in telecommunications networks, contributing to a 130% year-over-year surge in nation-state activity in that sector.

Accelerating social engineering

Beyond technical exploits, the report outlines how AI is being leveraged to automate social engineering campaigns. FAMOUS CHOLLIMA, a North Korea-linked group, used GenAI to generate fraudulent résumés, create deepfake videos for interviews, and complete technical assignments under assumed identities. The group reportedly infiltrated more than 320 companies worldwide, a 220% year-over-year increase.
The report also references Russia-linked EMBER BEAR's amplification of pro-Russia narratives and Iran-linked CHARMING KITTEN's deployment of phishing emails crafted with large language models, targeting US and EU organisations.

AI agents: a new target

The rise of agentic AI - autonomous AI agents handling key business workflows - has created new opportunities for attackers. Several threat actors have reportedly exploited vulnerabilities in the tools used to build and manage these agents. Access was gained through unauthenticated channels, followed by credential harvesting, malware deployment, and ransomware installation. According to CrowdStrike, this marks the emergence of AI systems, and the identities they use, as a key part of the enterprise attack surface.

"The AI era has redefined how businesses operate, and how adversaries attack. We're seeing threat actors use GenAI to scale social engineering, accelerate operations, and lower the barrier to entry for hands-on-keyboard intrusions. At the same time, adversaries are targeting the very AI systems organizations are deploying. Every AI agent is a superhuman identity: autonomous, fast, and deeply integrated, making them high-value targets. Adversaries are treating these agents like infrastructure, attacking them the same way they target SaaS platforms, cloud consoles, and privileged accounts. Securing the AI that powers business is where the cyber battleground is evolving," said Adam Meyers, Head of Counter Adversary Operations at CrowdStrike.

Trend observations

The report also highlights the resurgence of the SCATTERED SPIDER group, which has accelerated its use of identity-based attacks across multiple domains. The group's tactics in 2025 included phone-based social engineering (vishing) and impersonation of help desk personnel to reset credentials, bypass multi-factor authentication measures, and deploy ransomware within 24 hours of gaining initial access.
CrowdStrike's data shows a clear trend of increased adversary sophistication with the use of AI-enabled tools, not only for direct attacks but also for the exploitation of cloud, SaaS, and AI agent infrastructure. This shift is rapidly transforming both the methods and preferred targets of cybercriminal and nation-state actors. The report suggests that as enterprises further integrate AI agents into their operations, additional security measures are required to safeguard these autonomous, non-human identities and workflows from being compromised or manipulated.


Techday NZ
4 days ago
Sensitive data exposure rises with employee use of GenAI tools
Harmonic Security has released its quarterly analysis, finding that a significant proportion of data shared with generative AI (GenAI) tools and AI-enabled SaaS applications by employees contains sensitive information. The analysis covered a dataset of 1 million prompts and 20,000 files submitted to 300 GenAI tools and AI-enabled SaaS applications between April and June. According to the findings, 22% of files (4,400 in total) and 4.37% of prompts (43,700 in total) included sensitive data. The categories of sensitive data encompassed source code, access credentials, proprietary algorithms, merger and acquisition (M&A) documents, customer or employee records, and internal financial information.

Use of new GenAI tools

The data highlights that, in the second quarter alone, organisations on average saw employees begin using 23 previously unreported GenAI tools. This expanding variety of tools increases the administrative load on security teams, who must vet each tool to ensure it meets security standards. A notable proportion of AI tool use occurs through personal accounts, which may be unsanctioned or lack sufficient safeguards. Almost half (47.42%) of sensitive uploads to Perplexity were made via standard, non-enterprise accounts. The numbers were lower for other platforms, with 26.3% of sensitive data entering ChatGPT through personal accounts, and just 15% for Google Gemini.

Data exposure by platform

Analysis of sensitive prompts identified ChatGPT as the most common origin point in Q2, accounting for 72.6%, followed by Microsoft Copilot with 13.7%, Google Gemini at 5.0%, Claude at 2.5%, Poe at 2.1%, and Perplexity at 1.8%. Code leakage was the most prevalent form of sensitive data exposure, particularly within ChatGPT, Claude, DeepSeek, and Baidu Chat.

File uploads and risks

The report found that, on average, organisations uploaded 1.32GB of files in the second quarter, with PDFs making up approximately half of all uploads.
Of these files, 21.86% contained sensitive data. The concentration of sensitive information was higher in files than in prompts: files accounted for 79.7% of all stored credit card exposure incidents, 75.3% of customer profile leaks, 68.8% of employee personally identifiable information (PII) incidents, and 52.6% of exposure volume related to financial projections.

Less visible sources of risk

GenAI risk does not only arise from well-known chatbots. Increasingly, ordinary SaaS tools that integrate large language models (LLMs) - often without clear labelling as GenAI - are becoming sources of risk as they access and process sensitive information. Canva was reportedly used for documents containing legal strategy, M&A planning, and client data. Tools such as Replit were involved with proprietary code and access keys, while Grammarly and Quillbot edited contracts, client emails, and internal legal content.

International exposure

Use of Chinese GenAI applications was cited as a concern. The study found that 7.95% of employees in the average enterprise engaged with a Chinese GenAI tool, leading to 535 distinct sensitive exposure incidents. Within these, 32.8% related to source code, access credentials, or proprietary algorithms, 18.2% involved M&A documents and investment models, 17.8% exposed customer or employee PII, and 14.4% contained internal financial data.

Preventative measures

"The good news for Harmonic Security customers is that this sensitive customer data, personally identifiable information (PII), and proprietary file contents never actually left any customer tenant, it was prevented from doing so. But had organizations not had browser based protection in place, sensitive information could have ended up training a model, or worse, in the hands of a foreign state. AI is now embedded in the very tools employees rely on every day and in many cases, employees have little knowledge they are exposing business data."
The statement came from Harmonic Security Chief Executive Officer and Co-founder Alastair Paterson, referencing the protections offered to the company's customers and the wider risks posed by the pervasive nature of embedded AI within workplace tools. Harmonic Security advises enterprises to seek visibility into all tool usage - including tools available on free tiers and those with embedded AI - to monitor the types of data being entered into GenAI systems, and to enforce context-aware controls at the data level.

The analysis utilised the Harmonic Security Browser Extension, which records usage across SaaS and GenAI platforms and sanitises the information for aggregate study. Only anonymised and aggregated data from customer environments was used.