Close Menu
Technology Mag

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Fire Breaks Out at a Data Center Leased by Elon Musk’s X

    May 23, 2025

    Fujifilm X Half hands-on: when flaws become features

    May 23, 2025

    Microsoft employee bypasses ‘Palestine’ block to email thousands of staff in protest

    May 23, 2025
    Facebook X (Twitter) Instagram
    Subscribe
    Technology Mag
    Facebook X (Twitter) Instagram YouTube
    • Home
    • News
    • Business
    • Games
    • Gear
    • Reviews
    • Science
    • Security
    • Trending
    • Press Release
    Technology Mag
    Home » Anthropic’s New Model Excels at Reasoning and Planning—and Has the Pokémon Skills to Prove It
    Business

    Anthropic’s New Model Excels at Reasoning and Planning—and Has the Pokémon Skills to Prove It

    News RoomBy News RoomMay 23, 20253 Mins Read
    Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Email

    When Claude 3.7 Sonnet played the game, it ran into some challenges: It spent “dozens of hours” stuck in one city and had trouble identifying nonplayer characters, which drastically stunted its progress in the game. With Claude 4 Opus, Hershey noticed an improvement in Claude’s long-term memory and planning capabilities when he watched it navigate a complex Pokémon quest. After realizing it needed a certain power to move forward, the AI spent two days improving its skills before continuing to play. Hershey believes that kind of multistep reasoning, with no immediate feedback, shows a new level of coherence, meaning the model has a better ability stay on track.

    “This is one of my favorite ways to get to know a model. Like, this is how I understand what its strengths are, what its weaknesses are,” Hershey says. “It’s my way of just coming to grips with this new model that we’re about to put out, and how to work with it.”

    Everyone Wants an Agent

    Anthropic’s Pokémon research is a novel approach to tackling a preexisting problem—how do we understand what decisions an AI is making when approaching complex tasks, and nudge it in the right direction?

    The answer to that question is integral to advancing the industry’s much-hyped AI agents—AI that can tackle complex tasks with relative independence. In Pokémon, it’s important that the model doesn’t lose context or “forget” the task at hand. That also applies to AI agents asked to automate a workflow—even one that takes hundreds of hours.

    “As a task goes from being a five-minute task to a 30-minute task, you can see the model’s ability to keep coherent, to remember all of the things it needs to accomplish [the task] successfully get worse over time,” Hershey says.

    Anthropic, like many other AI labs, is hoping to create powerful agents to sell as a product for consumers. Krieger says that Anthropic’s “top objective” this year is Claude “doing hours of work for you.”

    “This model is now delivering on it—we saw one of our early-access customers have the model go off for seven hours and do a big refactor,” Krieger says, referring to the process of restructuring a large amount of code, often to make it more efficient and organized.

    This is the future that companies like Google and OpenAI are working toward. Earlier this week, Google released Mariner, an AI agent built into Chrome that can do tasks like buy groceries (for $249.99 per month). OpenAI recently released a coding agent, and a few months back it launched Operator, an agent that can browse the web on a user’s behalf.

    Compared to its competitors, Anthropic is often seen as the more cautious mover, going fast on research but slower on deployment. And with powerful AI, that’s likely a positive: There’s a lot that could go wrong with an agent that has access to sensitive information like a user’s inbox or bank logins. In a blog post on Thursday, Anthropic says, “We’ve significantly reduced behavior where the models use shortcuts or loopholes to complete tasks.” The company also says that both Claude 4 Opus and Claude Sonnet 4 are 65 percent less likely to engage in this behavior, known as reward hacking, than prior models—at least on certain coding tasks.

    Share. Facebook Twitter Pinterest LinkedIn WhatsApp Reddit Email
    Previous ArticleXiaomi’s new YU7 spells trouble for Tesla in China
    Next Article Trump threatens Apple with a 25 percent iPhone tariff

    Related Posts

    Fire Breaks Out at a Data Center Leased by Elon Musk’s X

    May 23, 2025

    The Time Sam Altman Asked for a Countersurveillance Audit of OpenAI

    May 22, 2025

    Politico’s Newsroom Is Starting a Legal Battle With Management Over AI

    May 22, 2025

    Esoteric Programming Languages Are Fun—Until They Kill the Joke

    May 22, 2025

    Who’s to Blame When AI Agents Screw Up?

    May 22, 2025

    The Epic Rise and Fall of a Dark-Web Psychedelics Kingpin

    May 22, 2025
    Our Picks

    Fujifilm X Half hands-on: when flaws become features

    May 23, 2025

    Microsoft employee bypasses ‘Palestine’ block to email thousands of staff in protest

    May 23, 2025

    Samsung’s G8 QD-OLED Gaming Monitor Is the Prettiest Screen You’ll Find

    May 23, 2025

    Vestaboard’s Note is a smaller, cheaper version of its hypnotic split-flap display

    May 23, 2025
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo
    Don't Miss
    Gear

    29 Best Memorial Day Sales

    By News RoomMay 23, 2025

    Finding the best Memorial Day sales can be hard, but luckily we do this for…

    Sony made its shooting grip better for creators working without a crew

    May 23, 2025

    The Best Sleeping Pads For Campgrounds—Our Comfiest Picks

    May 23, 2025

    OpenAI and Jony Ive’s AI super-gadget

    May 23, 2025
    Facebook X (Twitter) Instagram Pinterest
    • Privacy Policy
    • Terms of use
    • Advertise
    • Contact
    © 2025 Technology Mag. All Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.