Technology Mag
    Business

    Who’s to Blame When AI Agents Screw Up?

By News Room | May 22, 2025 | 4 Mins Read

    Over the past year, veteran software engineer Jay Prakash Thakur has spent his nights and weekends prototyping AI agents that could, in the near future, order meals and engineer mobile apps almost entirely on their own. His agents, while surprisingly capable, have also exposed new legal questions that await companies trying to capitalize on Silicon Valley’s hottest new technology.

    Agents are AI programs that can act mostly independently, allowing companies to automate tasks such as answering customer questions or paying invoices. While ChatGPT and similar chatbots can draft emails or analyze bills upon request, Microsoft and other tech giants expect that agents will tackle more complex functions—and most importantly, do it with little human oversight.

    The tech industry’s most ambitious plans involve multi-agent systems, with dozens of agents someday teaming up to replace entire workforces. For companies, the benefit is clear: saving on time and labor costs. Already, demand for the technology is rising. Tech market researcher Gartner estimates that agentic AI will resolve 80 percent of common customer service queries by 2029. Fiverr, a service where businesses can book freelance coders, reports that searches for “ai agent” have surged 18,347 percent in recent months.

    Thakur, a mostly self-taught coder living in California, wanted to be at the forefront of the emerging field. His day job at Microsoft isn’t related to agents, but he has been tinkering with AutoGen, Microsoft’s open source software for building agents, since he worked at Amazon back in 2024. Thakur says he has developed multi-agent prototypes using AutoGen with just a dash of programming. Last week, Amazon rolled out a similar agent development tool called Strands; Google offers what it calls an Agent Development Kit.

    Because agents are meant to act autonomously, the question of who bears responsibility when their errors cause financial damage has been Thakur’s biggest concern. Assigning blame when agents from different companies miscommunicate within a single, large system could become contentious, he believes. He compared the challenge of reviewing error logs from various agents to reconstructing a conversation based on different people’s notes. “It’s often impossible to pinpoint responsibility,” Thakur says.
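The log-reconstruction problem Thakur describes can be pictured with a toy example (the agent names, timestamps, and log entries here are hypothetical, for illustration only): when each agent keeps its own timestamped log, the closest an investigator can get to the original "conversation" is a timestamp-ordered interleaving, which still does not say which agent's contribution caused the failure.

```python
# Hypothetical illustration of merging per-agent logs by timestamp to
# reconstruct a multi-agent "conversation" after a failure. The agents,
# timestamps, and messages are invented for this sketch.
import heapq

# Each agent logs (timestamp, agent_name, message), sorted by time.
search_log = [
    (1.0, "search", "found tool: image-api"),
    (4.0, "search", "retrying after timeout"),
]
summary_log = [
    (2.0, "summarizer", "condensed usage policy"),
    (5.0, "summarizer", "emitted summary to coder"),
]

# heapq.merge interleaves the already-sorted logs by timestamp.
merged = list(heapq.merge(search_log, summary_log))
for ts, agent, msg in merged:
    print(f"{ts:>4} [{agent}] {msg}")
```

Even with a perfectly merged timeline, the ordering alone cannot show whether the search agent supplied bad data or the summarizer mangled good data, which is Thakur's point about pinpointing responsibility.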

    Joseph Fireman, senior legal counsel at OpenAI, said on stage at a recent legal conference hosted by the Media Law Resource Center in San Francisco that aggrieved parties tend to go after those with the deepest pockets. That means companies like his will need to be prepared to take some responsibility when agents cause harm—even when a kid messing around with an agent might be to blame. (If that person were at fault, they likely wouldn’t be a worthwhile target moneywise, the thinking goes). “I don’t think anybody is hoping to get through to the consumer sitting in their mom’s basement on the computer,” Fireman said. The insurance industry has begun rolling out coverage for AI chatbot issues to help companies cover the costs of mishaps.

    Onion Rings

Thakur’s experiments string together agents into systems that require as little human intervention as possible. One project he pursued was replacing fellow software developers with two agents. One was trained to search for specialized tools needed for making apps, and the other summarized their usage policies. In the future, a third agent could use the identified tools and follow the summarized policies to develop an entirely new app, Thakur says.

    When Thakur put his prototype to the test, a search agent found a tool that, according to the website, “supports unlimited requests per minute for enterprise users” (meaning high-paying clients can rely on it as much as they want). But in trying to distill the key information, the summarization agent dropped the crucial qualification of “per minute for enterprise users.” It erroneously told the coding agent, which did not qualify as an enterprise user, that it could write a program that made unlimited requests to the outside service. Because this was a test, there was no harm done. If it had happened in real life, the truncated guidance could have led to the entire system unexpectedly breaking down.
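The failure Thakur observed can be sketched in a few lines of plain Python (an illustrative toy with hypothetical agent functions, not his actual AutoGen prototype): when a summarizer compresses too aggressively, it strips the very qualifier the downstream agent needed, and that agent has no way to recover it.

```python
# Hypothetical three-stage pipeline mirroring the failure described above.
# The agent functions and the truncation rule are assumptions for this sketch.

POLICY = "supports unlimited requests per minute for enterprise users"

def search_agent() -> str:
    """Finds a tool and returns its usage policy verbatim."""
    return POLICY

def summarizer_agent(policy: str, max_words: int = 3) -> str:
    """Compresses the policy; keeping only the first few words silently
    drops the 'per minute for enterprise users' qualifier."""
    return " ".join(policy.split()[:max_words])

def coding_agent(summary: str) -> bool:
    """Decides whether it may issue unlimited requests, based only on
    the truncated summary it was handed."""
    return "unlimited" in summary

summary = summarizer_agent(search_agent())
print(summary)                # -> "supports unlimited requests"
print(coding_agent(summary))  # -> True: wrong for a non-enterprise user
```

Each stage sees only its predecessor's output, so by the time the coding agent acts, the original policy text no longer exists anywhere in the chain to check against.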
