Technology Mag

    Business

    How Game Theory Can Make AI More Reliable

    By News Room | June 10, 2024 | 4 min read

    Posing a far greater challenge for AI researchers was the game of Diplomacy—a favorite of politicians like John F. Kennedy and Henry Kissinger. Instead of just two opponents, the game features seven players whose motives can be hard to read. To win, a player must negotiate, forging cooperative arrangements that anyone could breach at any time. Diplomacy is so complex that a group from Meta was pleased when, in 2022, its AI program Cicero developed “human-level play” over the course of 40 games. While it did not vanquish the world champion, Cicero did well enough to place in the top 10 percent against human participants.

    During the project, Jacob—a member of the Meta team—was struck by the fact that Cicero relied on a language model to generate its dialog with other players. He sensed untapped potential. The team’s goal, he said, “was to build the best language model we could for the purposes of playing this game.” But what if instead they focused on building the best game they could to improve the performance of large language models?

    Consensual Interactions

    In 2023, Jacob began to pursue that question at MIT, working with Yikang Shen, Gabriele Farina, and his adviser, Jacob Andreas, on what would become the consensus game. The core idea came from imagining a conversation between two people as a cooperative game, where success occurs when a listener understands what a speaker is trying to convey. In particular, the consensus game is designed to align the language model’s two systems—the generator, which handles generative questions, and the discriminator, which handles discriminative ones.

    After a few months of stops and starts, the team built this principle up into a full game. First, the generator receives a question. It can come from a human or from a preexisting list. For example, “Where was Barack Obama born?” The generator then gets some candidate responses, let’s say Honolulu, Chicago, and Nairobi. Again, these options can come from a human, a list, or a search carried out by the language model itself.

    But before answering, the generator is also told whether it should answer the question correctly or incorrectly, depending on the results of a fair coin toss.

    If it’s heads, then the machine attempts to answer correctly. The generator sends the original question, along with its chosen response, to the discriminator. If the discriminator determines that the generator intentionally sent the correct response, they each get one point, as a kind of incentive.

    If the coin lands on tails, the generator sends what it thinks is the wrong answer. If the discriminator decides it was deliberately given the wrong response, they both get a point again. The idea here is to incentivize agreement. “It’s like teaching a dog a trick,” Jacob explained. “You give them a treat when they do the right thing.”
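    The round described above can be sketched in a few lines of code. This is a toy illustration, not Meta's or MIT's implementation: the generator and discriminator here are hypothetical stand-in functions (a real system would query a language model for both roles), and the names and scoring are assumptions made for clarity.

    ```python
    import random

    # One round of the consensus game, with toy stand-ins for the
    # generator and discriminator. Both players earn a point only when
    # the discriminator's verdict matches the generator's secret
    # instruction (the coin toss).

    QUESTION = "Where was Barack Obama born?"
    CANDIDATES = ["Honolulu", "Chicago", "Nairobi"]

    def generator(question, candidates, want_correct):
        # Toy generator: treats Honolulu as its best answer.
        best = "Honolulu"
        if want_correct:
            return best
        return random.choice([c for c in candidates if c != best])

    def discriminator(question, answer):
        # Toy discriminator: judges whether the answer looks correct.
        return answer == "Honolulu"

    def play_round():
        """Flip a fair coin, have the generator answer accordingly,
        and reward both players if the discriminator's judgment
        agrees with the coin."""
        heads = random.random() < 0.5          # heads -> answer correctly
        answer = generator(QUESTION, CANDIDATES, want_correct=heads)
        judged_correct = discriminator(QUESTION, answer)
        agree = (judged_correct == heads)      # did the two coordinate?
        reward = 1 if agree else 0
        return heads, answer, reward
    ```

    With these idealized players the pair always coordinates; the interesting case, which the training procedure addresses, is when their judgments start out inconsistent.
    
    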

    The generator and discriminator also each start with some initial “beliefs.” These take the form of a probability distribution related to the different choices. For example, the generator may believe, based on the information it has gleaned from the internet, that there’s an 80 percent chance Obama was born in Honolulu, a 10 percent chance he was born in Chicago, a 5 percent chance of Nairobi, and a 5 percent chance of other places. The discriminator may start off with a different distribution. While the two “players” are still rewarded for reaching agreement, they also get docked points for deviating too far from their original convictions. That arrangement encourages the players to incorporate their knowledge of the world—again drawn from the internet—into their responses, which should make the model more accurate. Without something like this, they might agree on a totally wrong answer like Delhi, but still rack up points.
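    The penalty for straying from initial beliefs can be sketched as a KL-divergence term subtracted from the agreement reward. The distributions below come from the article's Obama example; the penalty weight and the specific functional form are illustrative assumptions, not details from the paper.

    ```python
    import math

    LAMBDA = 0.1  # hypothetical weight on the belief-deviation penalty

    def kl(p, q):
        """KL divergence between two distributions over the same answers."""
        return sum(p[a] * math.log(p[a] / q[a]) for a in p if p[a] > 0)

    # Generator's initial beliefs about Obama's birthplace (from the text).
    prior = {"Honolulu": 0.80, "Chicago": 0.10, "Nairobi": 0.05, "other": 0.05}

    def regularized_payoff(agreement_reward, policy, prior):
        # Reward for agreeing, docked for deviating from initial convictions.
        return agreement_reward - LAMBDA * kl(policy, prior)

    # A policy that stays near the prior pays almost no penalty...
    near_policy = {"Honolulu": 0.90, "Chicago": 0.05,
                   "Nairobi": 0.025, "other": 0.025}
    near = regularized_payoff(1.0, near_policy, prior)

    # ...while agreeing on a belief-defying answer (nearly all mass on
    # Nairobi) earns the same agreement point but a large deduction.
    far_policy = {"Honolulu": 0.01, "Chicago": 0.01,
                  "Nairobi": 0.97, "other": 0.01}
    far = regularized_payoff(1.0, far_policy, prior)
    ```

    This is why the players cannot simply rack up points by coordinating on Delhi: an agreement that contradicts both priors costs more in penalty than it earns in reward.
    
    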
