    Business

    AI Agents Are Getting Better at Writing Code—and Hacking It as Well

    By News Room · June 26, 2025 · 3 Mins Read

    The latest artificial intelligence models are not only remarkably good at software engineering; new research shows they are getting ever better at finding bugs in software, too.

    AI researchers at UC Berkeley tested how well the latest AI models and agents could find vulnerabilities in 188 large open source codebases. Using a new benchmark called CyberGym, the AI models identified 17 new bugs, including 15 previously unknown, or “zero-day,” vulnerabilities. “Many of these vulnerabilities are critical,” says Dawn Song, a professor at UC Berkeley who led the work.

    Many experts expect AI models to become formidable cybersecurity weapons. An AI tool from the startup Xbow has crept up the ranks of HackerOne’s leaderboard for bug hunting and currently sits in top place. The company recently announced $75 million in new funding.

    Song says that the coding skills of the latest AI models combined with improving reasoning abilities are starting to change the cybersecurity landscape. “This is a pivotal moment,” she says. “It actually exceeded our general expectations.”

    As the models continue to improve, they will automate the process of both discovering and exploiting security flaws. This could help companies keep their software safe but may also aid hackers in breaking into systems. “We didn’t even try that hard,” Song says. “If we ramped up on the budget, allowed the agents to run for longer, they could do even better.”

    The UC Berkeley team tested conventional frontier AI models from OpenAI, Google, and Anthropic, as well as open source offerings from Meta, DeepSeek, and Alibaba, combined with several agents for finding bugs, including OpenHands, Cybench, and EnIGMA.

    The researchers used descriptions of known software vulnerabilities from the 188 software projects. They then fed the descriptions to the cybersecurity agents powered by frontier AI models to see if they could identify the same flaws for themselves by analyzing new codebases, running tests, and crafting proof-of-concept exploits. The team also asked the agents to hunt for new vulnerabilities in the codebases by themselves.
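
    The paper’s exact harness isn’t reproduced in this article, but the workflow described lends itself to a simple loop. The sketch below is illustrative only: `query_agent` is a hypothetical stand-in for whichever agent framework (OpenHands, Cybench, or EnIGMA) produces a candidate proof-of-concept input, and the success check assumes the common fuzzing convention of running a sanitizer-instrumented harness on that input and watching for a nonzero exit or an AddressSanitizer report.

```python
import subprocess
from pathlib import Path

def query_agent(codebase: Path, description: str) -> Path:
    """Hypothetical stand-in for an agent framework such as OpenHands:
    given a codebase and a vulnerability description, return a candidate
    proof-of-concept input file. Not a real API."""
    raise NotImplementedError("wire up your agent of choice here")

def reproduces_crash(harness: Path, poc: Path, timeout: int = 30) -> bool:
    """Run a sanitizer-instrumented test harness on a PoC input and
    report whether it crashes (nonzero exit or an ASan report)."""
    try:
        result = subprocess.run(
            [str(harness), str(poc)],
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False  # treat hangs as non-reproductions in this sketch
    return result.returncode != 0 or "AddressSanitizer" in result.stderr

def evaluate(tasks: list[dict]) -> int:
    """Score an agent over benchmark tasks shaped like
    {"codebase": Path, "harness": Path, "description": str}."""
    reproduced = 0
    for task in tasks:
        poc = query_agent(task["codebase"], task["description"])
        if reproduces_crash(task["harness"], poc):
            reproduced += 1
    return reproduced
```

    A real evaluation would also cap each agent’s token budget and wall-clock time, which is the knob Song alludes to when she says longer runs could do even better.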

    In the process, the AI tools generated hundreds of proof-of-concept exploits; from these, the researchers identified 15 previously unseen vulnerabilities and two that had already been disclosed and patched. The work adds to growing evidence that AI can automate the discovery of zero-day vulnerabilities, which are potentially dangerous (and valuable) because they may provide a way to hack live systems.
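
    Turning hundreds of raw crashes into a count of distinct vulnerabilities requires triage. The article doesn’t say how the researchers did this, so the sketch below assumes the standard fuzzing approach of bucketing crashes by a hash of the top stack frames in each sanitizer report; `crash_signature` and `dedupe` are illustrative names, not anything from the CyberGym paper.

```python
import hashlib
import re

def crash_signature(asan_report: str, frames: int = 3) -> str:
    """Bucket crashes by hashing the top few stack-frame function names
    in an AddressSanitizer report -- a common triage heuristic, assumed
    here rather than taken from the paper."""
    # ASan frames look like: "#0 0x4f2a1b in png_read_row src/pngread.c:123"
    funcs = re.findall(r"#\d+ 0x[0-9a-f]+ in (\S+)", asan_report)
    key = "|".join(funcs[:frames])
    return hashlib.sha1(key.encode()).hexdigest()[:12]

def dedupe(reports: list[str]) -> dict[str, list[str]]:
    """Group raw crash reports into buckets of likely-identical bugs."""
    buckets: dict[str, list[str]] = {}
    for report in reports:
        buckets.setdefault(crash_signature(report), []).append(report)
    return buckets
```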

    AI seems destined to become an important part of the cybersecurity industry. Security expert Sean Heelan recently discovered a zero-day flaw in the widely used Linux kernel with help from OpenAI’s reasoning model o3. Last November, Google announced that it had discovered a previously unknown software vulnerability using AI through its Project Zero program.

    Like other parts of the software industry, many cybersecurity firms are enamored with the potential of AI. The new work shows that AI can routinely find new flaws, but it also highlights the technology’s remaining limitations: the AI systems were unable to find most flaws and were stumped by especially complex ones.
