logo
#

Latest news with #shadowAI

Chatbots could be helping hackers to steal data from people and companies
Chatbots could be helping hackers to steal data from people and companies

Daily Mail​

time17 hours ago

  • Business
  • Daily Mail​

Chatbots could be helping hackers to steal data from people and companies

Generative artificial intelligence is the revolutionary new technology that is transforming the world of work. It can summarize and stores reams of data and documents in seconds, saving workers valuable time and effort, and companies lots of money, but as the old saying goes, you don't get something for nothing. As the uncontrolled and unapproved use of unvetted AI tools such as ChatGPT and Copilot soars, so too does the risk that company secrets or sensitive personal information such as salaries or health records are being unwittingly leaked. Time saver: But there are increasing concerns that using tools such as ChatGPT in a business setting could leave sensitive information exposed This hidden and largely unreported risk of serious data breaches stems from the default ability of AI models to record and archive chat history, which is used to help train the AI to better respond to questions in the future. As these conversations become part of the AI's knowledge base, retrieval or deletion of data becomes almost impossible. 'It's like putting flour into bread,' said Ronan Murphy, a tech entrepreneur and AI adviser to the Irish government. 'Once you've done it, it's very hard to take it out.' This 'machine learning' means that highly sensitive information absorbed by AI could resurface later if prompted by someone with malicious intent. Experts warn that this silent and emerging threat from so-called 'shadow AI' is as dangerous as the one already posed by scammers, where hackers trick company insiders into giving away computer passwords and other codes. But cyber criminals are also using confidential data voraciously devoured by chatbots like ChatGPT to hack into vulnerable IT systems. 'If you know how to prompt it, the AI will spill the beans,' Murphy said. The scale of the problem is alarming. A recent survey found that nearly one in seven of all data security incidents is linked to generative AI. Another found that almost a quarter of 8,000 firms surveyed worldwide gave their staff unrestricted access to publicly available AI tools. That puts confidential data such as meeting notes, disciplinary reports or financial records 'at serious risk' that 'could lead employees to inadvertently propagate threats', a report from technology giant Cisco said. 'It's like the invention of the internet – it's just arrived and it's the future – but we don't understand what we are giving to these systems and what's happening behind the scenes at the back end,' said Cisco cyber threat expert Martin Lee. One of the most high-profile cybersecurity 'own-goals' in recent years was scored by South Korean group Samsung. The consumer electronics giant banned employees from using popular chatbots like ChatGPT after discovering in 2023 that one of its engineers had accidentally pasted secret code and meeting notes onto an AI platform. Banks have also cracked down on the use of ChatGPT by staff amid concerns about the regulatory risks they face from sharing sensitive financial information. But as organisations put guardrails in place to keep their data secure, they also don't want to miss out on what may be a once-in-a-generation chance to steal a march on their rivals. 'We're seeing companies race ahead with AI implementation as a means of improving productivity and staying one step ahead of competitors,' said Ruben Miessen, co-founder of compliance software group Legalfly, whose clients include banks, insurers and asset managers. 'However, a real risk is that the lack of oversight and any internal framework is leaving client data and sensitive personal information potentially exposed,' he added. The answer though, isn't to limit AI usage. 'It's about enabling it responsibly,' Miessen said. Murphy added: 'You either say no to everything or figure out a plan to do it safely. 'Protecting sensitive data is not sexy, it's boring and time-consuming.' But unless adequate controls are put in place, 'you make a hacker's job extremely easy'.

Are chatbots stealing your personal data?
Are chatbots stealing your personal data?

Daily Mail​

time4 days ago

  • Business
  • Daily Mail​

Are chatbots stealing your personal data?

It's the revolutionary new technology that is transforming the world of work. Generative artificial intelligence (AI) creates, summarises and stores reams of data and documents in seconds, saving workers valuable time and effort, and companies lots of money. But as the old saying goes, you don't get something for nothing. As the uncontrolled and unapproved use of unvetted AI tools such as ChatGPT and Copilot soars, so too does the risk that company secrets or sensitive personal information such as salaries or health records are being unwittingly leaked. This hidden and largely unreported risk of serious data breaches stems from the default ability of AI models to record and archive chat history, which is used to help train the AI to better respond to questions in the future. As these conversations become part of the AI's knowledge base, retrieval or deletion of data becomes almost impossible. 'It's like putting flour into bread,' said Ronan Murphy, a tech entrepreneur and AI adviser to the Irish government. 'Once you've done it, it's very hard to take it out.' This 'machine learning' means that highly sensitive information absorbed by AI could resurface later if prompted by someone with malicious intent. Experts warn that this silent and emerging threat from so-called 'shadow AI' is as dangerous as the one already posed by scammers like those who recently targeted Marks & Spencer, costing the retailer £300 million. M&S fell victim to a 'ransomware' attack, where hackers tricked company insiders into giving away computer passwords and other codes. Its chairman, Archie Norman, told MPs last week that the hack was caused by 'sophisticated impersonation' of one of its third-party users. Four people have been arrested by police investigating the cyber attacks on M&S and fellow retailers Co-op and Harrods. But cyber criminals are also using confidential data voraciously devoured by chatbots like ChatGPT to hack into vulnerable IT systems. 'If you know how to prompt it, the AI will spill the beans,' Murphy said. The scale of the problem is alarming. A recent survey found that nearly one in seven of all data security incidents is linked to generative AI. Another found that almost a quarter of 8,000 firms surveyed worldwide gave their staff unrestricted access to publicly available AI tools. That puts confidential data such as meeting notes, disciplinary reports or financial records 'at serious risk' that 'could lead employees to inadvertently propagate threats', a report from technology giant Cisco said. 'It's like the invention of the internet – it's just arrived and it's the future – but we don't understand what we are giving to these systems and what's happening behind the scenes at the back end,' said Cisco cyber threat expert Martin Lee. One of the most high-profile cybersecurity 'own-goals' in recent years was scored by South Korean group Samsung. The consumer electronics giant banned employees from using popular chatbots like ChatGPT after discovering in 2023 that one of its engineers had accidentally pasted secret code and meeting notes onto an AI platform. Banks have also cracked down on the use of ChatGPT by staff amid concerns about the regulatory risks they face from sharing sensitive financial information. But as organisations put guardrails in place to keep their data secure, they also don't want to miss out on what may be a once-in-a-generation chance to steal a march on their rivals. 'We're seeing companies race ahead with AI implementation as a means of improving productivity and staying one step ahead of competitors,' said Ruben Miessen, co-founder of compliance software group Legalfly, whose clients include banks, insurers and asset managers. 'However, a real risk is that the lack of oversight and any internal framework is leaving client data and sensitive personal information potentially exposed,' he added. The answer though, isn't to limit AI usage. 'It's about enabling it responsibly,' Miessen said. Murphy added: 'You either say no to everything or figure out a plan to do it safely. 'Protecting sensitive data is not sexy, it's boring and time-consuming.' But unless adequate controls are put in place, 'you make a hacker's job extremely easy'.

The unsanctioned use of AI tools by developers is a serious issue.
The unsanctioned use of AI tools by developers is a serious issue.

Forbes

time07-07-2025

  • Business
  • Forbes

The unsanctioned use of AI tools by developers is a serious issue.

Alex de Minaur of Australia casts a shadow as he serves to Arthur Cazaux of France during their ... More second round men's singles match at the Wimbledon Tennis Championships in London, Thursday, July 3, 2025.(AP Photo/Kin Cheung) Shadow AI is illuminating. In some ways, the use of unregulated artificial intelligence services that fail to align with an organization's IT policies and wider country-specific data governance controls might be seen as a positive i.e. it's a case of developers and data scientists looking for new innovations to bring hitherto unexplored new efficiencies to a business. But mostly, unsurprisingly, shadow AI (like most forms of shadow technology and bring your own device activity) is viewed as a negative, an infringement and a risk. AI Shadow Breeding Ground The problem today is that AI is essentially still so nascent, so embryonic and only really starting to enjoy its first wave of implementation. With many users' exposure to AI relegated to seeing amusing image constructs built by ChatGPT and other tools (think about human plastic toy blister packs last week, cats on diving boards this week and something zanier next week for sure), we've yet to get to a point where widespread enterprise use of AI tools has become the norm. Although that time is arguably very soon, the current state of AI development means that some activity is being driven undercover. The unsanctioned use of AI tools by developers is becoming a serious issue as application development continues to evolve at a rapid pace. Scott McKinnon, CSO for UK&I at Palo Alto Networks says that this means building modern, cloud‑native applications isn't just about writing code anymore, it's about realizing that we're now in a delivery model that's operating in 'continuous beta mode', such is the pressure to roll out new enterprise software services today. 'The knock-on effect is that developers are under intense pressure to be fast and reduce time to market. With this in mind, it's not surprising that many developers are using AI tools in an effort to increase efficiency and deliver on these challenging expectations,' lamented McKinnon. 'Our research suggests that enterprise generative AI traffic exploded by over 890% in 2024 - and with organisations now starting to actually use these apps - a proportion of them can be classed as high risk. Meanwhile, data loss prevention incidents tied to generative AI have more than doubled, which is a clear red flag for governance failures. Go-Around Guardrails Compound all these realities and it's easy to understand why software developers might be tempted to seek ways around the organization's AI guardrail policies and controls. In practice, this sees them plugging into services from open source large language models outside of approved platforms, using AI to generate code without oversight, or skipping data governance policies to speed up implementation. The upshot is the potential for intellectual property to be exposed through compliance slips that also compromise system security. 'It all points to one thing: if developers are to balance speed with security, they must adopt a new operational model. It must be one where clear, enforceable AI governance and oversight are embedded into the continuous delivery pipeline, not bolted on afterwards,' said McKinnon "When developers use AI tools outside of sanctioned channels, one of the most pressing concerns is supply chain integrity. When developers pull in untested or unvetted AI components, they're introducing opaque dependencies that often carry hidden vulnerabilities.' What are opaque software dependencies? It's a scary enough sounding term in and of itself, opaque software dependencies are indeed bad news. Software dependencies are essential component parts of smaller data services, software libraries devoted to establishing database connections, a software framework that controls a user interface or a smaller module that forms a wider external third-party application in its entirety. Useful software dependencies make their DNA easy to see and can be viewed with translucent clarity; opaque software dependencies are functional, but cloudy or muddied in terms of their ability to showcase their progeny and component parts. In technical terms, opaque software application dependencies mean the developer can not 'assign' them (and forge a connection to them) using a public application programming interface. According to McKinnon, another major problem is the potential for prompt injection attacks, where bad actors manipulate the AI's inputs to force it into behaving in unintended and dangerous ways. These types of vulnerabilities are difficult to detect and can undermine the trust and safety of AI-driven applications. When these practices go unchecked, they create new attack surfaces and increase the overall risk of cyber incidents. Organizations must get ahead of this by securing their AI development environments, vetting tools rigorously and ensuring that developers are empowered to work effectively. The Road To Platformization "To effectively address the risks posed by unsanctioned AI use, organisations need to move beyond fragmented tools and processes toward a unified platform approach. This means consolidating AI governance, system controls and developer workflows into a single, integrated system that offers real-time visibility. Without this, organizations struggle to keep pace with the speed and scale of modern development environments, leaving gaps that adversaries can exploit,' said McKinnon. His vision of platformization (and the wider world of platform engineering) is argued to enable organizations to enforce consistent policies across all AI usage, detect risky behaviors early and provide developers with safe, approved AI capabilities within their existing workflows. 'This reduces friction for software developers, allowing them to work quickly without compromising on security or compliance. Instead of juggling multiple disjointed tools, organizations gain a centralized view of AI activity, making it easier to monitor, audit and respond to threats. Ultimately, a platform approach is about balance, providing the safeguards and controls necessary to reduce risk while maintaining the agility and innovation developers need,' concluded Palo Alto Networks' McKinnon. At its worst, shadow AI can lead to so-called model poisoning (also known as data poisoning), a scenario which application and API reliability company Cloudflare defines as when an attacker manipulates the outputs of an AI or machine learning model by changing its training data. An AI model poisoner's goal is to force the AI model itself to produce biased or dangerous results when it starts to process is inference calculations that will ultimately provide us with AI brainpower. According to Mitchell Johnson, chief product office of software supply chain management specialist Sonatype, 'Shadow AI includes any AI application or tool that operates outside an organization's IT or governance frameworks. Think shadow IT but with a lot more potential (and risk). It's the digital equivalent of prospectors staking their claims in the gold rush, cutting through red tape to strike it rich in efficiency and innovation. Examples include employees using ChatGPT to draft proposals, using new AI-powered code assistants, building machine learning models on personal accounts, or automating tedious tasks with unofficial scripts.' Johnson says it rears its head now, increasingly, due to the popularization of remote working where teams can operate outside traditional oversight and where firms have policy gaps, meaning that an organization lacks comprehensive AI governance, which leaves room for improvization. From Out Of The Shadows There is clearly a network system health issue associated with shadow AI; after all, it's the first concern brought up by tech industry commentators who want to warn us about shadow IT of any kind. There are wider implications too in terms of some IT teams gaining what might be perceived to be an unfair advantage, or some developer teams introducing misplaced AI that leads to bias and hallucinations. To borrow a meteorological trusim, shadows are typically only good news in a heatwave… and that usually means there's a fair amount of humidity around with the potential for storms later.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store