
Latest news with #Darktrace

How Mike Lynch shielded his family fortune from £700m fraud ruling

Yahoo

5 days ago

  • Business
  • Yahoo

How Mike Lynch shielded his family fortune from £700m fraud ruling

After winning his freedom last year, Mike Lynch was relaxed about the prospect that he might become personally penniless. The British software tycoon had faced decades in prison before he defeated criminal fraud charges in a San Francisco trial, and described winning the case as being granted a 'second life'. The prospect of signing his wealth away to Hewlett-Packard (HP), the tech giant that was pursuing him for billions in the English courts, paled in comparison to ending his life behind bars.

But Lynch was breezy about the prospect for another reason: a large portion of the Lynch family fortune was held in his wife Angela Bacares's name, shielding it from any legal repercussions. 'My wife has been very good at investing in the things that I've told her to from a point of view of technology. We've done very well,' Lynch said in an interview after he was acquitted. 'It's not a perilous situation.'

Just a few weeks later, Lynch and his daughter Hannah died when the entrepreneur's superyacht, Bayesian, capsized off the coast of Sicily, a tragedy that Bacares herself survived. But the decision for Bacares to hold much of the wealth in her name now looks like a wise move.

On Tuesday, a judge ruled that HP was owed almost £740m from Lynch and his business partner Sushovan Hussain over the fraudulent sale of their software giant Autonomy 14 years ago. With Mr Hussain having settled privately, Lynch's estate is on the hook for the majority of the damages. Valued by lawyers at $450m (£333m) during his US trial, the fortune in Lynch's estate would be wiped out by the judgment. An appeal by Lynch's legal team is likely.

But even if the estate is bankrupted, Bacares is sitting on a fortune worth hundreds of millions owing to the way the pair divided the proceeds of Lynch's endeavours. American-born Bacares, 58, worked on Wall Street and in the City of London before she and Lynch were engaged in 2001 and married the following year.
She has not made any public comments since her husband's death, beyond a brief message from the Lynch family stating they are 'devastated'. But her name has featured regularly in stock market filings, company records and court documents. While Lynch made around £500m from selling Autonomy, Bacares, who was occasionally an employee at the company, sold £15.6m of shares.

By the time Lynch's next venture, cybersecurity company Darktrace, made it to the public markets, Bacares was the dominant shareholder. She owned 12.8pc of the company at the time of its London flotation, compared with a 4.9pc stake owned by Lynch. Bacares and Lynch had both sold the majority of their stakes by the time Darktrace was bought by private equity firm Thoma Bravo for $5.3bn last year – netting hundreds of millions of pounds.

She is also one of the biggest shareholders in Luminance, a legal AI company backed by Lynch's venture capital firm that has raised more than $115m. Company filings also list her as a director of Bunhill Partners, a now defunct algorithmic trading firm. The couple's personal assets were also held in her name, including Loudham Hall, the Suffolk estate where they lived, and Bayesian itself. The superyacht, raised only last month, was owned by Revtom Limited, of which Bacares was the only shareholder.

This may now present its own legal complications. Families of those who perished on Bayesian, including cook Recaldo Thomas and Lynch's lawyer, Chris Morvillo, are seeking compensation from the insurance company that covered the vessel. Hewlett-Packard, now known as HPE, could also theoretically pursue Bacares if there is a shortfall from the fraud case, although the optics of going after his widow would be questionable. Even if Lynch's estate is wiped out, his family are likely to be well looked after.

Darktrace announces acquisition of Mira Security, a leading provider of network traffic visibility solutions

Associated Press

21-07-2025

  • Business
  • Associated Press

Darktrace announces acquisition of Mira Security, a leading provider of network traffic visibility solutions

Cambridge, UK, July 21, 2025 (GLOBE NEWSWIRE) -- Darktrace, a global leader in AI for cybersecurity, today announced the acquisition of Mira Security, a leading provider of network traffic visibility solutions. Building on the companies' established partnership, the acquisition will strengthen Darktrace's network security leadership by providing more insight from encrypted network traffic and more comprehensive decryption for customers in regulated industries, and will help drive the next generation of Darktrace technology.

Combined, Darktrace and Mira Security close the encrypted data blind spot without impacting network performance or requiring complex re-architecting. Closer integration of Mira Security's in-line decryption capabilities with Darktrace's existing analysis and understanding of encrypted traffic will provide organizations with deeper, more comprehensive visibility across on-premises, cloud, and hybrid environments. This is particularly critical for highly regulated sectors like financial services, government and critical infrastructure.

Mira Security's engineering team, based in Centurion, South Africa and the United States, will join Darktrace's R&D division, expanding Darktrace's capabilities in networking research and development. The Mira Security team will bring deep expertise in building high-performance software and firmware for network acceleration that will help drive the next generation of Darktrace hardware, enabling 100 Gbps interfaces, increasing ingestion capacity, and supporting Darktrace's most strategic deployments. The team's extensive standards-body experience and deep technical insight will also enhance Darktrace's work in low-level networking and protocol design.

'The acquisition of Mira Security is another building block in our strategy to develop best-in-class cybersecurity solutions and keep our customers safe through continuous innovation,' commented Phil Pearson, Chief Strategy Officer at Darktrace.
'Mira Security has already proven to be a valuable source of insight for our AI, helping us provide unparalleled detection and response capabilities at scale. By bringing the Mira Security team's deep expertise into Darktrace, we will be able to accelerate innovation, deepen the capabilities of our market-leading Network product and unlock even greater security performance for our customers.'

The acquisition marks the latest step in Darktrace's ongoing program of investment in both organic and inorganic growth and innovation across its cybersecurity platform. It follows the acquisition of Cado Security to enhance Darktrace's cloud security capabilities and the April launch of new AI models delivering deeper insights, richer context and enhanced predictions for sharper prioritization and faster threat response.

Darktrace / NETWORK is the established leader in Network Detection and Response. It is recognized as a Leader in Gartner's Magic Quadrant™ for Network Detection and Response and holds a 4.7 star average rating on Gartner Peer Insights over the past 12 months. It is also recognized as a Leader in the IDC MarketScape for Worldwide Network Detection and Response, and an overall Leader in KuppingerCole's 2024 Leadership Compass for Network Detection and Response.

'The combination of Mira Security and Darktrace's unique technology and brilliant R&D talent will create even more exciting possibilities for protecting complex network environments,' said Niel Viljoen, Founder and CEO of Mira Security. 'Together, Mira Security and Darktrace will be able to deliver new value for customers and partners.'

Existing Mira Security partners will continue to be supported, ensuring seamless integration and continued delivery of Mira Security's capabilities across Darktrace and Mira Security's global customer base.

About Darktrace

Darktrace is a global leader in AI for cybersecurity that keeps organizations ahead of the changing threat landscape every day.
Founded in 2013, Darktrace provides the essential cybersecurity platform protecting organizations from unknown threats using its proprietary AI that learns from the unique patterns of life for each customer in real-time. The Darktrace ActiveAI Security Platform™ delivers a proactive approach to cyber resilience to secure the business across the entire digital estate – from network to cloud to email. It provides pre-emptive visibility into the customer's security posture, transforms operations with a Cyber AI Analyst™, and detects and autonomously responds to threats in real-time.

Breakthrough innovations from our R&D teams in Cambridge, UK, and The Hague, Netherlands have resulted in over 200 patent applications filed. Darktrace's platform and services are supported by over 2,400 employees around the world who protect nearly 10,000 customers across all major industries globally. To learn more, visit

Contact Info: Darktrace Media Relations, [email protected], +1 929-316-4384

Crypto‑Looting Malware Masquerades as AI and Gaming Start‑ups

Arabian Post

14-07-2025

  • Business
  • Arabian Post

Crypto‑Looting Malware Masquerades as AI and Gaming Start‑ups

Cybersecurity firm Darktrace has revealed a sophisticated social engineering campaign targeting cryptocurrency users on Windows and macOS. The scheme employs fake start‑up companies themed around AI, gaming, Web3, video conferencing, and social media to trick individuals into downloading malware disguised as legitimate software. Darktrace's analysis shows threat actors are establishing plausible digital identities using compromised or spoofed X accounts—sometimes verified—for both companies and employees, hosted on platforms like Medium, Notion, GitHub and X to lend credibility. Notably, the group evolved from a December 2024 Web3 'Meeten' video‑call scam identified by Cado Security Labs into a broader and more enduring operation.

Attackers initiate contact via Telegram, Discord or X, offering test access to new software in exchange for cryptocurrency payments. Victims receive a registration code to download tailored Windows Electron apps or macOS DMG files. Upon installation, the malware surreptitiously profiles the device, displays a fake Cloudflare verification, and initiates the payload: a stealer or drainer aimed at crypto wallets.

On Windows, the malware utilizes stolen code‑signing certificates, installing an MSI payload that harvests credentials and wallet data. On macOS, variants include the Atomic macOS Stealer, capable of extracting browser cookies, documents and wallet credentials and maintaining persistence via Launch Agents.

Darktrace's report highlights the extensive list of fake companies involved: BeeSync, Buzzu, Cloudsign, Dexis, KlastAI, Lunelior, NexLoop, NexoraCore, NexVoo, Pollens AI, Slax, Solune, Swox, Wasper, YondaAI, among others. Victims cross‑checked these brands against polished websites, whitepapers and employee profiles on Notion and GitHub that imitate authentic early‑stage tech companies.
Darktrace notes the campaign bears hallmarks similar to those of the traffer group CrazyEvil, known for deploying StealC, AMOS and Angel Drainer malware. While attribution remains unconfirmed, shared evasion techniques and targeting broadly align. Experts have raised concerns about this tactic of 'legitimacy laundering'. The use of compromised X accounts—especially verified ones—together with stolen certificates and AI‑generated content underscores a refinement in social engineering methods. Darktrace threat researcher Tara Gould emphasises that this illustrates 'the efforts that threat actors will go to make these fake companies look legitimate'.

Emerging trends in the campaign include multi‑platform targeting and increasingly authentic deception. Windows versions show paranoia‑level evasion: they bundle obfuscation, sandbox‑avoidance checks and stolen signing certificates to bypass defences. On the macOS side, apart from AMOS, the infection employs staged shell or bash scripts to install Launch Agents and maintain persistence post‑reboot.

This campaign also marks a shift from opportunistic blast campaigns to more tailored, lure‑based attacks. Actors undertake reconnaissance—observing target roles in Web3 and crypto—before approaching them via trusted‑looking channels. In some cases, attackers impersonated actual contacts and shared internal presentations to build trust.

Security experts stress that safeguarding against such threats requires cautious validation of unsolicited software offers, robust code‑signing certificate vetting, and network segmentation. Users are urged to verify company legitimacy externally—checking domain registrations, team credentials and cross‑referencing claims. Defensive strategies recommended by Darktrace include enhanced telemetry on installation attempts, stricter code‑signing policies, and behavioural detection tuned to recognise post‑installation profiling and exfiltration patterns.
For macOS, entry‑point monitoring and examination of Launch Agent activity provide early alerts.
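The signature-vetting and Launch Agent checks described above can be sketched as a minimal shell script. This is an illustrative sketch only, not a Darktrace tool: it assumes macOS, and the application path used is a hypothetical example.

```shell
#!/bin/sh
# Illustrative triage sketch (hypothetical app path, macOS assumed).

# Hypothetical example: an unsolicited app a user was asked to install.
APP="/Applications/ExampleStartupApp.app"

# 1. Verify the code signature and Gatekeeper verdict before trusting
#    the software. codesign/spctl exist only on macOS, so both steps
#    are guarded and simply skipped elsewhere.
if command -v codesign >/dev/null 2>&1 && [ -e "$APP" ]; then
    codesign --verify --deep --strict --verbose=2 "$APP"
    spctl --assess --type execute --verbose "$APP"
fi

# 2. List Launch Agent / Launch Daemon plists, newest first, so a
#    freshly dropped persistence entry stands out at the top.
for dir in "$HOME/Library/LaunchAgents" \
           /Library/LaunchAgents \
           /Library/LaunchDaemons; do
    if [ -d "$dir" ]; then
        ls -lt "$dir"/*.plist 2>/dev/null || true
    fi
done
```

Any unfamiliar plist surfaced by the second check can then be inspected (for example with `plutil -p`) to see what program it launches at login.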

FBI Warning—You Should Never Reply To These Messages

Forbes

10-07-2025

  • Politics
  • Forbes

FBI Warning—You Should Never Reply To These Messages

FBI's AI warning is increasingly critical. Republished on July 10 with a new report into AI deepfake attacks and advice for smartphone owners on staying safe as threats surge.

The news that AI is being used to impersonate Secretary of State Marco Rubio and place calls to foreign ministers may be shocking, but it shouldn't be surprising. The FBI has warned that such attacks are now underway, and it will only get worse. As first reported by the Washington Post, the State Department has told U.S. diplomats that this latest attack has targeted at least three foreign ministers, a U.S. senator and a governor, using an AI-generated voice to impersonate Rubio. A fake Signal account (Signal strikes again) was used to initiate contact through text and voice messages. It's clear that voice messages enable attackers to deploy AI fakes without the inherent risk of attempting to run them in real-time on a live call.

The FBI is clear — do not respond to text or voice messages unless you can verify the sender. That means a voice message that sounds familiar cannot be trusted unless you can verify the actual number from which it was sent. Do not reply until you can.

Darktrace's AI and Strategy director Margaret Cunningham told me this is all too 'easy.' The attacks, while 'ultimately unsuccessful,' demonstrate 'just how easily generative AI can be used to launch credible, targeted social engineering attacks.' Alarmingly, Cunningham warns, 'this threat didn't fail because it was poorly crafted — it failed because it missed the right moment of human vulnerability.' People make decisions 'while multitasking, under pressure, and guided by what feels familiar. In those moments, a trusted voice or official-looking message can easily bypass caution.'

And while the Rubio scam will generate plenty of headlines, the AI fakes warning has been doing the rounds for some months.
It won't make those same headlines, but you're more likely to be targeted in your professional life through social engineering that exploits readily available social media connections and content to trick you. The FBI tells smartphone users: 'Before responding, research the originating number, organization, and/or person purporting to contact you. Then independently identify a phone number for the person and call to verify their authenticity.'

This is in addition to the broader advice given the plague of text message attacks now targeting American citizens. Check the details of any message. Delete any that are clear misrepresentations, such as fake tolls or DMV motoring offenses. Do not click any links contained in text messages — ever. And do not be afraid to hang up on the tech or customer support desk or bank or the law enforcement officer contacting you. You can then reach out to the relevant organization using publicly available contact details.

ESET's Jake Moore warns: 'Cloning a voice can now take just minutes and the results are highly convincing when combined with social engineering. As the technology improves, the amount of audio needed to create a realistic clone also continues to shrink.'

'This impersonation is alarming and highlights just how sophisticated generative AI tools have become,' says Black Duck's Thomas Richards. 'It underscores the risk of generative AI tools being used to manipulate and to conduct fraud. The old software world is gone, giving way to a new set of truths defined by AI.'

As for the Rubio fakes, 'the State Department is aware of this incident and is currently monitoring and addressing the matter,' a spokesperson told reporters. 'The department takes seriously its responsibility to safeguard its information and continuously take steps to improve the department's cybersecurity posture to prevent future incidents.'

'AI-generated content has advanced to the point that it is often difficult to identify,' the bureau warns.
'When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help.'

With perfect timing, Trend Micro's latest report warns that 'criminals can easily generate highly convincing deepfakes with very little budget, effort, and expertise, and deepfake generation tools will only become more affordable and more effective in the future.' The security team says this is being enabled by the same kinds of toolkits driving other types of frauds that have also triggered FBI warnings this year — including a variety of other message attacks. Tools for creating deepfakes, Trend Micro says, 'are now more powerful and more accessible by being cheaper and easier to use.'

As warned by the FBI earlier in the year, and with the latest Rubio impersonations under investigation, deepfake voice technology is now easily deployed. 'The market for AI-generated voice technology is extremely mature,' Trend Micro says, citing several commercial applications 'with numerous services offering voice cloning and studio-grade voiceovers…' While 'these services have many legitimate applications, their potential for misuse cannot be overlooked.'

After breaking the Rubio impersonations news, the Washington Post warns that 'in the absence of effective regulation in the United States, the responsibility to protect against voice impostors is mostly on you. The possibility of faked distress calls is something to discuss with your family — along with whether setting up code words is overkill that will unnecessarily scare younger children in particular. Maybe you'll decide that setting up and practicing a code phrase is worth the peace of mind.'

That idea of a secure code word that a friend or relative can use to prove they're real was pushed by the FBI some months ago. 'Create a secret word or phrase with your family to verify their identity,' it suggested in an AI attack advisory.
'Criminals can use AI-generated audio to impersonate well-known, public figures or personal relations to elicit payments,' the bureau warned in December. 'Criminals generate short audio clips containing a loved one's voice to impersonate a close relative in a crisis situation, asking for immediate financial assistance or demanding a ransom.'

These Attacks Are ‘Easy'—Do Not Ignore FBI Smartphone Warning

Forbes

09-07-2025

  • Politics
  • Forbes

These Attacks Are ‘Easy'—Do Not Ignore FBI Smartphone Warning

FBI's AI warning is increasingly critical. The news that AI is being used to impersonate Secretary of State Marco Rubio and place calls to foreign ministers may be shocking, but it shouldn't be surprising. The FBI has warned such attacks are now taking place, and it will only get worse.

As first reported by the Washington Post, the State Department has warned U.S. diplomats that this latest attack was caught in the act, with at least three foreign ministers, a U.S. senator and a governor among those contacted. A fake Signal account (Signal strikes again) was used to initiate contact through text and voice messages. It's clear that voice messages enable AI fakes to be deployed without the inherent risk of attempting to run them in real-time on a live call.

Darktrace's AI and Strategy director Margaret Cunningham told me this is all too 'easy.' The attacks, while 'ultimately unsuccessful,' demonstrate 'just how easily generative AI can be used to launch credible, targeted social engineering attacks.' Alarmingly, Cunningham warns, 'this threat didn't fail because it was poorly crafted — it failed because it missed the right moment of human vulnerability.' People make decisions 'while multitasking, under pressure, and guided by what feels familiar. In those moments, a trusted voice or official-looking message can easily bypass caution.'

And while the Rubio scam will generate plenty of headlines, the AI fakes warning has been doing the rounds for many months. It won't make those same headlines, but you're more likely to be targeted in your professional life through social engineering that exploits readily available social media connections and content to trick you.

The FBI warning is simple and increasingly important: 'Verify the identity of the person calling or sending text or voice messages. Before responding, research the originating number, organization, and/or person purporting to contact you.
Then independently identify a phone number for the person and call to verify their authenticity.'

This is in addition to the broader advice given the plague of text message attacks now targeting American citizens. Check the details of any message. Delete any that are clear misrepresentations, such as fake tolls or DMV motoring offenses. Do not click any links contained in text messages — ever. And do not be afraid to hang up on the tech or customer support desk or bank or the law enforcement officer contacting you. You can then reach out to the relevant organization using publicly available contact details.

'This impersonation is alarming and highlights just how sophisticated generative AI tools have become,' says Black Duck's Thomas Richards. 'It underscores the risk of generative AI tools being used to manipulate and to conduct fraud. The old software world is gone, giving way to a new set of truths defined by AI.'

As for the Rubio impersonations, 'the State Department is aware of this incident and is currently monitoring and addressing the matter,' a spokesperson told reporters, the clear implication being that the attempt was of limited sophistication. 'The department takes seriously its responsibility to safeguard its information and continuously take steps to improve the department's cybersecurity posture to prevent future incidents.'

'AI-generated content has advanced to the point that it is often difficult to identify,' the bureau warns. 'When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help.'
