Artificial intelligence insurance? This startup in Canada will cover the costs of AI mistakes


The Star, 20-05-2025
Armilla cited a 2024 incident where Air Canada was using an AI chatbot as part of its customer service system, and the AI completely fabricated a discount which it offered to customers – a judge then ruled that the airline had to honour the offer. — Reuters
Lloyd's of London, acting through a Toronto-based startup called Armilla, has begun offering a new type of insurance cover to companies for the artificial intelligence era: its new policy can help cover against losses caused by AI.
While Lloyd's and its partner are simply capitalising on the AI trend – in the same way they'd insure against other new phenomena, in an effort to drive their own revenues – the move is a reminder that AI is both powerful and still a potential business risk. And if you thought adopting AI tools would help you push down the cost of operating your business, the advent of this policy is also a reminder that you need to check whether AI use might actually push some of your costs (like insurance) up.
Armilla's policy is intended to help offset the cost of lawsuits against a particular company if it's sued by, say, a customer or a third party claiming harm thanks to an AI product, the Financial Times noted. The idea is to cover costs that could include payouts related to AI-caused damages and legal fees associated with any such lawsuit.
Armilla's CEO told the newspaper that the new insurance product may have an upside beyond protecting companies against certain AI losses. Karthik Ramakrishnan said he thinks it could even boost AI adoption rates because some outfits are reluctant to embrace the innovative new technology over fears that tools like chatbots will malfunction.
Armilla cited a 2024 incident in which Air Canada was using an AI chatbot as part of its customer service system, and the AI completely fabricated a discount which it offered to customers – a judge then ruled that the airline had to honour the offer. The Lloyd's-backed insurance policy would likely have offset some of these losses had the chatbot been deemed to have underperformed. But it's not a blanket policy, the FT noted, and the company wouldn't offer to cover risky or error-prone AIs – like any insurer wary of covering a 'lemon.'
Ramakrishnan explained the policy is offered once an AI model is assessed and the company is 'comfortable with its probability of degradation,' and then will only pay out compensation if the 'models degrade.' The FT also noted that some other insurers already build in cover for certain AI-connected losses as part of broader technology error policies, though these may include much more limited payouts than for other tech-related issues.
The consequences of a company acting on hallucinated information from an AI, where an AI just makes up a fake answer but tries to pass it off as truth, can be severe, 'leading to flawed decisions, financial losses, and damage to a company's reputation,' says industry news site PYMNTS. The outlet also noted that there are serious questions of accountability that may arise when an AI is responsible for this kind of error.
This sentiment echoes the warnings made by MJ Jiang, chief strategy officer at New York-based small business lending platform Credibly. In a recent interview with Inc, Jiang said that companies are at risk of serious legal consequences from AI hallucination-based errors, because you 'cannot eliminate, only mitigate, hallucinations.'
Companies using the tech should ask themselves who will get sued when an AI makes an error, Jiang said, and they should have mitigation procedures in place to prevent such errors in the first place. In fact, she thinks that 'because GenAI cannot explain to you how it came up with the output, human governance will be essential in businesses where the use cases are of higher risk to the business.'
Other business experts have also warned that using AI is not a risk-free endeavour and have issued guidance on how to prepare businesses for AI compliance and any subsequent legal issues. Keeping these issues in mind when preparing your AI budget is a good idea. – Inc./Tribune News Service

Related Articles

CyCraft Launches XecGuard: LLM Firewall for Trustworthy AI

The Sun, 2 hours ago

TAIPEI, TAIWAN - Media OutReach Newswire - 1 July 2025 - CyCraft, a leading AI cybersecurity firm, today announced the global launch of XecGuard, the industry's first plug-and-play LoRA security module purpose-built to defend Large Language Models (LLMs). XecGuard's introduction marks a pivotal moment for secure, trustworthy AI, addressing the critical security challenges posed by the rapid adoption of LLMs.

Trustworthy AI Matters

The transformative power of LLMs brings significant security uncertainty, requiring enterprises to urgently safeguard their AI models from malicious attacks like prompt injection, prompt extraction, and jailbreak attempts. Historically, AI security has been an 'optional add-on' rather than a fundamental feature, leaving valuable AI and data exposed. This oversight can compromise sensitive data, undermine service stability, and erode customer trust. CyCraft emphasizes that 'AI security must be a standard feature—not an optional add-on,' believing it is paramount for delivering stable and trustworthy intelligent services.

The Imminent Need for Proactive AI Defense

The need for immediate and effective AI security is more critical than ever. As AI becomes increasingly embedded in core business operations, the attack surface expands exponentially, making proactive defenses an absolute necessity. CyCraft has leveraged its extensive 'battle-tested expertise across critical domains—including government, finance, and high-tech manufacturing' to precisely address these emerging AI-specific threats. The development of XecGuard signifies a shift from 'using AI to tackle cybersecurity challenges' to 'using AI to protect AI', ensuring that security and resilience are embedded from day one.

'AI security must be a standard feature—not an optional add-on,' stated Benson Wu, CEO, highlighting XecGuard's resilience and its integration of experience from defending critical sectors. Jeremy Chiu, CTO and co-founder, emphasized, 'In the past, we used AI to tackle cybersecurity challenges; now, we're using AI to protect AI,' adding that XecGuard enables enterprises to confidently adopt AI and deliver trustworthy services. PK Tsung, CISO, concluded, 'With XecGuard, we're empowering enterprises to embed security and resilience from day one' as part of the company's vision for the world's most advanced AI security platform.

CyCraft's Solution: XecGuard Empowers Secure AI Deployment

XecGuard provides robust protection against prompt injection, prompt extraction, and jailbreak attacks, ensuring enterprise-grade resilience for AI models. Its seamless deployment allows instant integration with any LLM without architectural modification, delivering powerful autonomous defense out of the box. XecGuard is available as a SaaS offering, as an OpenAI-compatible LLM firewall on your cloud (e.g. AWS or Cloudflare Workers AI), or as an embedded firewall for on-premises, NVIDIA-powered custom LLM servers.

Rigorously validated on major open-source models such as Llama 3B, Qwen3 4B, Gemma3 4B, and DeepSeek 8B, XecGuard consistently improves security resilience while preserving core performance, enabling even small models to achieve protection comparable to large commercial-grade systems. Real-world validation through a collaboration with APMIC, an NVIDIA partner, integrated XecGuard into the F1 open-source model, demonstrating an average 17.3% improvement in overall security defense scores and up to 30.1% in specific attack scenarios in LLM red-teaming exercises. With XecGuard and the Safety LLM service, CyCraft delivers enterprise-grade AI security, accelerating the adoption of resilient and trustworthy AI across industries and empowering organizations to deploy AI securely, protect sensitive data, and drive innovation with confidence.
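The 'LLM firewall' deployment described above sits in front of a model and screens incoming prompts. As a rough conceptual illustration only – not CyCraft's actual LoRA-based method, whose internals are not described here – a minimal screening layer might look like the sketch below, with the function name and pattern list purely hypothetical:

```python
# Conceptual sketch of an LLM-firewall screening layer (hypothetical, NOT
# CyCraft's implementation): inspect each incoming prompt before it is
# forwarded to the model, and block ones matching known injection patterns.
import re

# Hypothetical deny-list of phrases commonly seen in prompt-injection and
# prompt-extraction attempts; a real product would use far richer detection.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?system prompt",
    r"you are now in developer mode",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe, False if it should be blocked."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

# A firewall proxy would call screen_prompt() on each request and return an
# error response instead of forwarding blocked prompts to the LLM.
```

Because the proxy speaks the same request/response shape as the model API, it can be dropped in front of an existing deployment without changing the application – which is the appeal of the 'OpenAI-compatible firewall' deployment mode the release describes.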

Video shows drone rescuing stranded man during flood in China

The Star, 2 hours ago

This photo shows submerged buildings at a flood-affected village in Kaili, in southwestern China's Guizhou province on June 28, 2025. — AFP

A Chinese drone operator was transporting the belongings of villagers displaced by flooding when he spotted a man on a roof. He used the drone to lift the man and move him to safety, the operator told a state broadcaster. The video, which was widely circulated on social media, showed an area in the Guangxi region, in southern China, flooded with green-grey water, and a man dangling from a long cord attached to the drone, which set him down on a road.

The rescue happened more out of luck than design. The owner of the drone, Lai Zhongxin, normally uses his vehicles to spray fertilizer and transport construction materials, the CCTV report said.

Drones have been used in south and southwestern China to provide aid to areas hit by torrential rains this past week. Hoisting large canvas bags filled with relief supplies, they flew over pools of floodwater and traffic-clogged roads, as extreme weather set off mass evacuations and emergency alerts. The drones also sprayed disinfectant on silt-covered fields.

Louis Liu, the founder and CEO of DAP Technologies, a Beijing-based consultancy specializing in air mobility, compared the rescue of the man to an excavator being used to lift someone in a fire in the absence of other tools. 'Normally, people aren't allowed to use an agricultural drone to suspend a person in midair,' he said. 'But in an emergency, if someone is about to drown, that's something the law would overlook.'

'Developing drones specifically for rescuing people is definitely an area for development,' he added. 'Many in the industry are already attempting it.'

Last week, firefighters in the southern city of Shenzhen carried out a drill using drones that flew up and down a glass skyscraper, spraying jets of water. Drones are already commonly used in cities like Shenzhen for delivering takeout food and packages.
In March, China's Civil Aviation Administration issued approvals that would allow two companies, EHang and Hefei Hey Airlines, to operate drones for commercial passenger services. The role of drones has become more visible since last year, when Premier Li Qiang identified the 'low-altitude economy,' referring to the use of this technology in airspace under 1,000 meters (1,094 yards), as a national priority. – ©2025 The New York Times Company This article originally appeared in The New York Times.

Tesla shares drop as Trump-Musk feud flares up again

New Straits Times, 3 hours ago

ISTANBUL: Shares of United States-based electric vehicle (EV) manufacturer Tesla dropped sharply on Tuesday amid a renewed public spat between CEO Elon Musk and US President Donald Trump, Anadolu Ajansi (AA) reported.

As of 1420 GMT, Tesla's share price had fallen around four per cent to US$303.45, bringing the company's market capitalisation down to about US$947.2 billion. The decline marks an 11 per cent drop in Tesla's stock value over the past month, driven largely by growing tensions between Musk and Trump following the controversial "Big Beautiful Bill."

The dispute escalated on Tuesday after Musk reiterated his opposition to the Republican-backed omnibus bill, vowing to launch a new political party if it is passed by Congress. In response, Trump said the billionaire might "head back home to South Africa" if federal subsidies for EVs are eliminated, and later added he would "take a look" at the possibility of deporting Musk.

The relationship between Musk and Trump -- once defined by cooperation and occasional praise -- has deteriorated since Musk began openly criticising the bill in late May.
