Technology Mag
    Business

    This Tool Probes Frontier AI Models for Lapses in Intelligence

By News Room · April 2, 2025 · 4 min read

Executives at artificial intelligence companies may like to tell us that AGI is almost here, but the latest models still need some additional tutoring before they reach their full potential.

    Scale AI, a company that’s played a key role in helping frontier AI firms build advanced models, has developed a platform that can automatically test a model across thousands of benchmarks and tasks, pinpoint weaknesses, and flag additional training data that ought to help enhance their skills. Scale, of course, will supply the data required.
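Scale hasn't published the internals of its platform, but the core idea of such a tool can be sketched: run a model across many benchmark tasks, aggregate results by category, and flag the categories where performance lags. The function and field names below are hypothetical, purely for illustration.

```python
from collections import defaultdict

def evaluate_by_category(model_fn, tasks):
    """Score a model on benchmark tasks and aggregate accuracy per category.

    `model_fn` maps a prompt string to an answer string; `tasks` is a list of
    dicts with "prompt", "answer", and "category" keys. These names are
    illustrative, not Scale's actual API.
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for task in tasks:
        totals[task["category"]] += 1
        if model_fn(task["prompt"]).strip() == task["answer"]:
            correct[task["category"]] += 1
    return {cat: correct[cat] / totals[cat] for cat in totals}

def flag_weaknesses(scores, threshold=0.7):
    """Return categories scoring below a threshold -- candidates for
    targeted training-data collection."""
    return sorted(cat for cat, acc in scores.items() if acc < threshold)
```

The per-category breakdown is what makes "slicing and dicing" possible: a single headline score hides exactly the weaknesses this kind of tool is meant to surface.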

    Scale rose to prominence providing human labor for training and testing advanced AI models. Large language models (LLMs) are trained on oodles of text scraped from books, the web, and other sources. Turning these models into helpful, coherent, and well-mannered chatbots requires additional “post training” in the form of humans who provide feedback on a model’s output.

    Scale supplies workers who are expert at probing models for problems and limitations. The new tool, called Scale Evaluation, automates some of this work using Scale’s own machine learning algorithms.

    “Within the big labs, there are all these haphazard ways of tracking some of the model weaknesses,” says Daniel Berrios, head of product for Scale Evaluation. The new tool “is a way for [model makers] to go through results and slice and dice them to understand where a model is not performing well,” Berrios says, “then use that to target the data campaigns for improvement.”

    Berrios says that several frontier AI model companies are using the tool already. He says that most are using it to improve the reasoning capabilities of their best models. AI reasoning involves a model trying to break a problem into constituent parts in order to solve it more effectively. The approach relies heavily on feedback gathered during post-training to determine whether the model has solved a problem correctly.

    In one instance, Berrios says, Scale Evaluation revealed that a model’s reasoning skills fell off when it was fed non-English prompts. “While [the model’s] general purpose reasoning capabilities were pretty good and performed well on benchmarks, they tended to degrade quite a bit when the prompts were not in English,” he says. Scale Evaluation highlighted the issue and allowed the company to gather additional training data to address it.
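Detecting this kind of degradation amounts to comparing per-language accuracy against a baseline language. A minimal sketch, under the assumption that evaluation results arrive as (language, correct?) pairs:

```python
def accuracy_by_language(results):
    """Compute accuracy per language from (language, was_correct) pairs."""
    totals, correct = {}, {}
    for lang, ok in results:
        totals[lang] = totals.get(lang, 0) + 1
        correct[lang] = correct.get(lang, 0) + int(ok)
    return {lang: correct[lang] / totals[lang] for lang in totals}

def degraded_languages(acc, baseline="en", tolerance=0.1):
    """Flag languages whose accuracy trails the baseline by more than
    `tolerance` -- the kind of gap Berrios describes for non-English prompts."""
    ref = acc[baseline]
    return sorted(l for l, a in acc.items() if l != baseline and ref - a > tolerance)
```

A benchmark aggregated across all languages would have averaged this gap away; splitting by language is what makes it visible.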

    Jonathan Frankle, chief AI scientist at Databricks, a company that builds large AI models, says that being able to test one foundation model against another sounds useful in principle. “Anyone who moves the ball forward on evaluation is helping us to build better AI,” Frankle says.

    In recent months, Scale has contributed to the development of several new benchmarks designed to push AI models to become smarter, and to more carefully scrutinize how they might misbehave. These include EnigmaEval, MultiChallenge, MASK, and Humanity’s Last Exam.

    Scale says it is becoming more challenging to measure improvements in AI models, however, as they get better at acing existing tests. The company says its new tool offers a more comprehensive picture by combining many different benchmarks and can be used to devise custom tests of a model’s abilities, like probing its reasoning in different languages. Scale’s own AI can take a given problem and generate more examples, allowing for a more comprehensive test of a model’s skills.
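Generating more examples from a given problem can be illustrated with a toy stand-in: a parameterized seed template expanded into many concrete probes. Scale's actual system presumably uses an LLM for this; the template-filling below only shows the shape of the idea.

```python
import random

def generate_variants(seed_template, n, rng=None):
    """Expand one parameterized arithmetic problem into n concrete test items.

    A deliberately simple stand-in for model-driven example generation:
    each variant fills the template with fresh operands, so one seed
    problem yields many probes of the same skill.
    """
    rng = rng or random.Random(0)  # fixed seed for reproducible test sets
    variants = []
    for _ in range(n):
        a, b = rng.randint(1, 99), rng.randint(1, 99)
        variants.append({"prompt": seed_template.format(a=a, b=b),
                         "answer": str(a + b)})
    return variants
```

Because each variant carries its own ground-truth answer, the generated set can be scored automatically, which is what makes scaling a benchmark this way practical.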

    The company’s new tool may also inform efforts to standardize testing AI models for misbehavior. Some researchers say that a lack of standardization means that some model jailbreaks go undisclosed.

    In February, the US National Institute of Standards and Technology announced that Scale would help it develop methodologies for testing models to ensure they are safe and trustworthy.

    What kinds of errors have you spotted in the outputs of generative AI tools? What do you think are models’ biggest blind spots? Let us know by emailing [email protected] or by commenting below.
