The Rise of 'Vibe Hacking' Is the Next AI Nightmare

WIRED | Jun 4, 2025 6:00 AM

In the very near future, victory will belong to the savvy blackhat hacker who uses AI to generate code at scale.
In the near future, one hacker may be able to unleash 20 zero-day attacks on different systems across the world all at once. Polymorphic malware could rampage across a codebase, using a bespoke generative AI system to rewrite itself as it learns and adapts. Armies of script kiddies could use purpose-built LLMs to unleash a torrent of malicious code at the push of a button.
Case in point: As of this writing, an AI system sits at the top of several leaderboards on HackerOne—an enterprise bug bounty platform. The AI is XBOW, a system aimed at whitehat pentesters that 'autonomously finds and exploits vulnerabilities in 75 percent of web benchmarks,' according to the company's website.
AI-assisted hackers are a major fear in the cybersecurity industry, even if their potential hasn't quite been realized yet. 'I compare it to being on an emergency landing on an aircraft where it's like 'brace, brace, brace' but we still have yet to impact anything,' Hayden Smith, the cofounder of security company Hunted Labs, tells WIRED. 'We're still waiting to have that mass event.'
Generative AI has made it easier for anyone to code. LLMs improve every day, new models spit out more efficient code, and companies like Microsoft say they're using AI agents to help write their codebase. Anyone can spit out a Python script using ChatGPT now, and vibe coding—asking an AI to write code for you, even if you don't have much of an idea how to do it yourself—is popular. But there's also vibe hacking.
'We're going to see vibe hacking. And people without previous knowledge or deep knowledge will be able to tell AI what it wants to create and be able to go ahead and get that problem solved,' Katie Moussouris, the founder and CEO of Luta Security, tells WIRED.
Vibe hacking frontends have existed since 2023. Back then, a purpose-built LLM for generating malicious code called WormGPT spread on Discord groups, Telegram servers, and darknet forums. When security professionals and the media discovered it, its creators pulled the plug.
WormGPT faded away, but other services billing themselves as blackhat LLMs, like FraudGPT, replaced it. Those successors had problems of their own, though. As security firm Abnormal AI notes, many of these apps may have just been jailbroken versions of ChatGPT with some extra code to make them appear to be stand-alone products.
Better, then, if you're a bad actor, to go straight to the source. ChatGPT, Gemini, and Claude are easily jailbroken. Most LLMs have guardrails that prevent them from generating malicious code, but there are whole communities online dedicated to bypassing those guardrails. Anthropic even offers a bug bounty to people who discover new jailbreaks in Claude.
'It's very important to us that we develop our models safely,' an OpenAI spokesperson tells WIRED. 'We take steps to reduce the risk of malicious use, and we're continually improving safeguards to make our models more robust against exploits like jailbreaks. For example, you can read our research and approach to jailbreaks in the GPT-4.5 system card, or in the OpenAI o3 and o4-mini system card.'
Google did not respond to a request for comment.
In 2023, security researchers at Trend Micro got ChatGPT to generate malicious code by prompting it to play the role of a security researcher and pentester. ChatGPT would then happily generate PowerShell scripts based on databases of malicious code.
'You can use it to create malware,' Moussouris says. 'The easiest way to get around those safeguards put in place by the makers of the AI models is to say that you're competing in a capture-the-flag exercise, and it will happily generate malicious code for you.'
Unsophisticated actors like script kiddies are an age-old problem in the world of cybersecurity, and AI may well amplify their profile. 'It lowers the barrier to entry to cybercrime,' Hayley Benedict, a Cyber Intelligence Analyst at RANE, tells WIRED.
But, she says, the real threat may come from established hacking groups who will use AI to further enhance their already fearsome abilities.
'It's the hackers that already have the capabilities and already have these operations,' she says. 'It's being able to drastically scale up these cybercriminal operations, and they can create the malicious code a lot faster.'
Moussouris agrees. 'The acceleration is what is going to make it extremely difficult to control,' she says.
Hunted Labs' Smith also says the real threat of AI-generated code lies with someone who already knows code inside and out and uses it to scale up an attack. 'When you're working with someone who has deep experience and you combine that with, 'Hey, I can do things a lot faster that otherwise would have taken me a couple days or three days, and now it takes me 30 minutes.' That's a really interesting and dynamic part of the situation,' he says.
According to Smith, an experienced hacker could design a system that defeats multiple security protections and learns as it goes. The code would rewrite its malicious payload on the fly as it learns. 'That would be completely insane and difficult to triage,' he says.
Smith imagines a world where 20 zero-day events all happen at the same time. 'That makes it a little bit more scary,' he says.
Moussouris says that the tools to make that kind of attack a reality exist now. 'They are good enough in the hands of a good enough operator,' she says, but AI is not quite good enough yet for an inexperienced hacker to operate hands-off.
'We're not quite there in terms of AI being able to fully take over the function of a human in offensive security,' she says.
The primal fear that chatbot code sparks is that anyone will be able to do it, but the reality is that a sophisticated actor with deep knowledge of existing code is much more frightening. XBOW may be the closest thing to an autonomous 'AI hacker' that exists in the wild, and it's the creation of a team of more than 20 skilled people whose previous work experience includes GitHub, Microsoft, and half a dozen assorted security companies.
It also points to another truth. 'The best defense against a bad guy with AI is a good guy with AI,' Benedict says.
For Moussouris, the use of AI by both blackhats and whitehats is just the next evolution of a cybersecurity arms race she's watched unfold over 30 years. 'It went from: 'I'm going to perform this hack manually or create my own custom exploit,' to, 'I'm going to create a tool that anyone can run and perform some of these checks automatically,'' she says.
'AI is just another tool in the toolbox, and those who do know how to steer it appropriately now are going to be the ones that make those vibey frontends that anyone could use.'
