
Latest news with #XBOW

What is XBOW? An AI Tool that is America's 'Best Hacker' Secures $75M in Funding

International Business Times

11 hours ago



An unexpected hacker has topped the leaderboard for discovering real-world cyberthreats, beating some very talented human researchers. Its name is XBOW, a new artificial intelligence system designed to probe software for vulnerabilities, and it just claimed first place on HackerOne, an international bug bounty platform where hackers uncover bugs for major companies. It marks the first time an autonomous system has surpassed every human on the leaderboard.

In the past few months alone, XBOW's AI has identified more than 1,000 vulnerabilities. These are not just guesses: companies such as AT&T, Epic Games, Ford, and Disney have verified 132 of these threats and issued fixes. More than 330 additional bugs are slated for resolution, with hundreds more still under review.

XBOW is unique in how it operates: it continuously scans apps and systems like a tireless red team. Instead of relying on human-driven, scheduled penetration tests, XBOW runs 24/7. Its AI detects, models, and emulates attacks against live networks without manual guidance. The result? Faster identification of genuine security issues, including those buried deep within complex codebases.

XBOW's creators say this shift is crucial because cyberattacks have grown more intricate as attackers have also started leveraging AI to launch large-scale campaigns. In this accelerating arms race, the ability to think and act at machine speed is no longer a luxury; it's a requirement.

But the rise of automated testing tools also raises concerns. The increasing volume of AI-generated bug reports worries some developers, who fear that if services like XBOW are replicated, security teams could be flooded with alerts, some duplicative or not worth attention. XBOW, however, asserts that its reports are not only valid but frequently critical, and notes that human reports also vary in quality.
Whatever the merits of that debate, the impact of the platform is clear. It can execute full-scale security tests in hours—something that previously took days or even weeks. And it's not just for cybersecurity experts or researchers; the product is already being used by banks, tech giants, and other major organizations. To fuel its burgeoning ambitions, XBOW recently secured $75 million in a Series B round of funding. The round was led by Altimeter's Apoorv Agrawal and included follow-on from Sequoia Capital and Nat Friedman. The investment brings the company's total raise to $117 million. With the fresh funds, XBOW plans to grow its engineering team and build out its go-to-market plan.

An AI Bot Beats Top Human Hackers

Yahoo

a day ago



One of the best hackers in the U.S. is an AI bot from startup XBOW. Oege de Moor, CEO and founder of XBOW, and Altimeter Capital partner Apoorv Agrawal discuss the role this tool could play in cybersecurity defense. They join Caroline Hyde and Ed Ludlow on "Bloomberg Tech."

The Rise of 'Vibe Hacking' Is the Next AI Nightmare

WIRED

04-06-2025



Jun 4, 2025 6:00 AM

In the very near future, victory will belong to the savvy blackhat hacker who uses AI to generate code at scale. One hacker may be able to unleash 20 zero-day attacks on different systems across the world all at once. Polymorphic malware could rampage across a codebase, using a bespoke generative AI system to rewrite itself as it learns and adapts. Armies of script kiddies could use purpose-built LLMs to unleash a torrent of malicious code at the push of a button.

Case in point: as of this writing, an AI system sits at the top of several leaderboards on HackerOne, an enterprise bug bounty platform. The AI is XBOW, a system aimed at whitehat pentesters that 'autonomously finds and exploits vulnerabilities in 75 percent of web benchmarks,' according to the company's website.

AI-assisted hackers are a major fear in the cybersecurity industry, even if their potential hasn't quite been realized yet. 'I compare it to being on an emergency landing on an aircraft where it's like "brace, brace, brace" but we still have yet to impact anything,' Hayden Smith, the cofounder of security company Hunted Labs, tells WIRED. 'We're still waiting to have that mass event.'

Generative AI has made it easier for anyone to code. The LLMs improve every day, new models spit out more efficient code, and companies like Microsoft say they're using AI agents to help write their codebase. Anyone can spit out a Python script using ChatGPT now, and vibe coding, asking an AI to write code for you even if you don't have much idea how to do it yourself, is popular. But there's also vibe hacking. 'We're going to see vibe hacking. And people without previous knowledge or deep knowledge will be able to tell AI what it wants to create and be able to go ahead and get that problem solved,' Katie Moussouris, the founder and CEO of Luta Security, tells WIRED.

Vibe hacking frontends have existed since 2023.
Back then, a purpose-built LLM for generating malicious code called WormGPT spread on Discord groups, Telegram servers, and darknet forums. When security professionals and the media discovered it, its creators pulled the plug. WormGPT faded away, but other services that billed themselves as blackhat LLMs, like FraudGPT, replaced it.

But WormGPT's successors had problems. As security firm Abnormal AI notes, many of these apps may have just been jailbroken versions of ChatGPT with some extra code to make them appear as stand-alone products. Better, then, if you're a bad actor, to just go to the source. ChatGPT, Gemini, and Claude are easily jailbroken. Most LLMs have guardrails that prevent them from generating malicious code, but there are whole communities online dedicated to bypassing those guardrails. Anthropic even offers a bug bounty to people who discover new jailbreaks in Claude.

'It's very important to us that we develop our models safely,' an OpenAI spokesperson tells WIRED. 'We take steps to reduce the risk of malicious use, and we're continually improving safeguards to make our models more robust against exploits like jailbreaks. For example, you can read our research and approach to jailbreaks in the GPT-4.5 system card, or in the OpenAI o3 and o4-mini system card.' Google did not respond to a request for comment.

In 2023, security researchers at Trend Micro got ChatGPT to generate malicious code by prompting it into the role of a security researcher and pentester. ChatGPT would then happily generate PowerShell scripts based on databases of malicious code. 'You can use it to create malware,' Moussouris says. 'The easiest way to get around those safeguards put in place by the makers of the AI models is to say that you're competing in a capture-the-flag exercise, and it will happily generate malicious code for you.'
Unsophisticated actors like script kiddies are an age-old problem in the world of cybersecurity, and AI may well amplify their impact. 'It lowers the barrier to entry to cybercrime,' Hayley Benedict, a cyber intelligence analyst at RANE, tells WIRED.

But, she says, the real threat may come from established hacking groups who will use AI to further enhance their already fearsome abilities. 'It's the hackers that already have the capabilities and already have these operations,' she says. 'It's being able to drastically scale up these cybercriminal operations, and they can create the malicious code a lot faster.'

Moussouris agrees. 'The acceleration is what is going to make it extremely difficult to control,' she says.

Hunted Labs' Smith also says that the real threat of AI-generated code lies with someone who already knows code inside and out and uses it to scale up an attack. 'When you're working with someone who has deep experience and you combine that with, "Hey, I can do things a lot faster that otherwise would have taken me a couple days or three days, and now it takes me 30 minutes," that's a really interesting and dynamic part of the situation,' he says.

According to Smith, an experienced hacker could design a system that defeats multiple security protections and learns as it goes, rewriting its malicious payload on the fly. 'That would be completely insane and difficult to triage,' he says. Smith imagines a world where 20 zero-day events all happen at the same time. 'That makes it a little bit more scary,' he says.

Moussouris says the tools to make that kind of attack a reality exist now. 'They are good enough in the hands of a good enough operator,' she says, but AI is not yet good enough for an inexperienced hacker to operate it hands-off. 'We're not quite there in terms of AI being able to fully take over the function of a human in offensive security,' she says.
The primal fear that chatbot code sparks is that anyone will be able to do it, but the reality is that a sophisticated actor with deep knowledge of existing code is much more frightening. XBOW may be the closest thing to an autonomous 'AI hacker' that exists in the wild, and it's the creation of a team of more than 20 skilled people whose previous work experience includes GitHub, Microsoft, and half a dozen assorted security companies. It also points to another truth. 'The best defense against a bad guy with AI is a good guy with AI,' Benedict says.

For Moussouris, the use of AI by both blackhats and whitehats is just the next evolution of a cybersecurity arms race she's watched unfold over 30 years. 'It went from "I'm going to perform this hack manually or create my own custom exploit" to "I'm going to create a tool that anyone can run and perform some of these checks automatically,"' she says. 'AI is just another tool in the toolbox, and those who do know how to steer it appropriately now are going to be the ones that make those vibey frontends that anyone could use.'
