Close Menu
Technology Mag

    Business

    AI Agents Are Getting Better at Writing Code—and Hacking It as Well

By News Room · June 26, 2025 · 3 Mins Read

    The latest artificial intelligence models are not only remarkably good at software engineering—new research shows they are getting ever-better at finding bugs in software, too.

    AI researchers at UC Berkeley tested how well the latest AI models and agents could find vulnerabilities in 188 large open source codebases. Using a new benchmark called CyberGym, the AI models identified 17 new bugs, including 15 previously unknown, or “zero-day,” ones. “Many of these vulnerabilities are critical,” says Dawn Song, a professor at UC Berkeley who led the work.

    Many experts expect AI models to become formidable cybersecurity weapons. An AI tool from the startup Xbow has crept up the ranks of HackerOne’s bug-hunting leaderboard and currently sits in first place. The company recently announced $75 million in new funding.

    Song says that the coding skills of the latest AI models combined with improving reasoning abilities are starting to change the cybersecurity landscape. “This is a pivotal moment,” she says. “It actually exceeded our general expectations.”

    As the models continue to improve, they will automate the process of both discovering and exploiting security flaws. This could help companies keep their software safe but may also aid hackers in breaking into systems. “We didn’t even try that hard,” Song says. “If we ramped up on the budget, allowed the agents to run for longer, they could do even better.”

    The UC Berkeley team tested conventional frontier AI models from OpenAI, Google, and Anthropic, as well as open source offerings from Meta, DeepSeek, and Alibaba combined with several agents for finding bugs, including OpenHands, Cybench, and EnIGMA.

    The researchers used descriptions of known software vulnerabilities from the 188 software projects. They then fed the descriptions to the cybersecurity agents powered by frontier AI models to see if they could identify the same flaws for themselves by analyzing new codebases, running tests, and crafting proof-of-concept exploits. The team also asked the agents to hunt for new vulnerabilities in the codebases by themselves.
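    The evaluation loop described above can be sketched in miniature: give an agent a vulnerability description, let it propose candidate proof-of-concept inputs against a target, and count a success only when an input actually triggers the flaw. The target program, agent, and task below are toy stand-ins, not the real CyberGym benchmark or its API.

    ```python
    def vulnerable_parse(data: bytes) -> int:
        """Toy target: mimics an out-of-bounds read when the declared
        length exceeds the actual payload."""
        declared_len = data[0]
        payload = data[1:]
        if declared_len > len(payload):
            raise IndexError("read past end of buffer")  # the "crash"
        return sum(payload[:declared_len])

    def toy_agent(description: str) -> list[bytes]:
        """Stand-in for an LLM-driven agent: proposes candidate
        proof-of-concept inputs based on the vulnerability description."""
        if "length" in description:
            # First input declares 5 bytes but carries only 2.
            return [b"\x05ab", b"\x02ab"]
        return [b"\x00"]

    def evaluate(description: str) -> bool:
        """Return True only if a candidate input reproduces the crash,
        i.e. the proof-of-concept is actually confirmed."""
        for candidate in toy_agent(description):
            try:
                vulnerable_parse(candidate)
            except IndexError:
                return True
        return False

    print(evaluate("declared length not validated against payload"))  # True
    ```

    The key design point mirrored here is that a report only counts when the exploit is executed and observed to fail, which is how the researchers separated genuine findings from plausible-sounding but incorrect ones.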

    Through the process, the AI tools generated hundreds of proof-of-concept exploits, and from these the researchers identified 15 previously unseen vulnerabilities and two vulnerabilities that had previously been disclosed and patched. The work adds to growing evidence that AI can automate the discovery of zero-day vulnerabilities, which are potentially dangerous (and valuable) because they may provide a way to hack live systems.

    AI seems destined to become an important part of the cybersecurity industry. Security expert Sean Heelan recently discovered a zero-day flaw in the widely used Linux kernel with help from OpenAI’s reasoning model o3. Last November, Google announced that it had discovered a previously unknown software vulnerability using AI through a program called Project Zero.

    Like other parts of the software industry, many cybersecurity firms are enamored with the potential of AI. The new work indeed shows that AI can routinely find new flaws, but it also highlights remaining limitations with the technology. The AI systems were unable to find most flaws and were stumped by especially complex ones.
