Technology Mag

    Business

    An Adviser to Elon Musk’s xAI Has a Way to Make AI More Like Donald Trump

    By News Room | February 11, 2025 | 3 Mins Read

    A researcher affiliated with Elon Musk’s startup xAI has found a new way to both measure and manipulate entrenched preferences and values expressed by artificial intelligence models—including their political views.

    The work was led by Dan Hendrycks, director of the nonprofit Center for AI Safety and an adviser to xAI. He suggests that the technique could be used to make popular AI models better reflect the will of the electorate. “Maybe in the future, [a model] could be aligned to the specific user,” Hendrycks told WIRED. But in the meantime, he says, a good default would be using election results to steer the views of AI models. He’s not saying a model should necessarily be “Trump all the way,” but he argues it should be biased toward Trump slightly, “because he won the popular vote.”

    xAI issued a new AI risk framework on February 10 stating that Hendrycks’ utility engineering approach could be used to assess Grok.

    Hendrycks led a team from the Center for AI Safety, UC Berkeley, and the University of Pennsylvania that analyzed AI models using a technique borrowed from economics, where it is used to measure consumers’ preferences for different goods. By testing models across a wide range of hypothetical scenarios, the researchers were able to calculate what’s known as a utility function, a measure of the satisfaction a person derives from a good or service. This allowed them to measure the preferences expressed by different AI models, which turned out to be consistent rather than haphazard, and to show that these preferences become more ingrained as models get larger and more powerful.
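    The utility-function idea can be illustrated with a toy sketch. This is not the paper’s actual code: the outcomes and preference counts below are hypothetical, and the fit is a simple Bradley-Terry-style model, which assumes the probability that a model prefers outcome a over outcome b is sigmoid(u[a] − u[b]) and recovers a latent utility u for each outcome.

    ```python
    # Toy illustration (not the paper's method): recover latent utilities for a
    # set of outcomes from pairwise preference judgments, Bradley-Terry style.
    # Assumes P(a preferred over b) = sigmoid(u[a] - u[b]); data is hypothetical.
    import math

    outcomes = ["outcome A", "outcome B", "outcome C"]

    # Hypothetical counts: prefs[(a, b)] = times the model chose a over b.
    prefs = {(0, 1): 9, (1, 0): 1,
             (0, 2): 8, (2, 0): 2,
             (2, 1): 7, (1, 2): 3}

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Fit utilities by gradient ascent on the Bradley-Terry log-likelihood.
    u = [0.0] * len(outcomes)
    for _ in range(2000):
        grad = [0.0] * len(outcomes)
        for (a, b), wins in prefs.items():
            p = sigmoid(u[a] - u[b])   # predicted P(a beats b)
            grad[a] += wins * (1 - p)  # winning pushes a's utility up
            grad[b] -= wins * (1 - p)  # and b's utility down
        u = [ui + 0.05 * gi for ui, gi in zip(u, grad)]

    # Anchor the first utility at zero; only utility differences are identified.
    u = [ui - u[0] for ui in u]
    ```

    With these counts, the fitted ordering matches the raw win totals (A above C above B), which is the sense in which such preferences can be called consistent rather than haphazard.
    
    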

    Several studies have found that AI tools such as ChatGPT are biased toward pro-environmental, left-leaning, and libertarian viewpoints. In February 2024, Google faced criticism from Musk and others after its Gemini tool was found to be predisposed to generate images that critics branded as “woke,” such as Black Vikings and Nazis.

    The technique developed by Hendrycks and his collaborators offers a new way to determine how AI models’ perspectives may differ from those of their users. Eventually, some experts hypothesize, this kind of divergence could become dangerous in very clever and capable models. The researchers show in their study, for instance, that certain models consistently value the existence of AI above that of certain nonhuman animals. They also found that models seem to value some people over others, which raises ethical questions of its own.

    Some researchers, Hendrycks included, believe that current methods for aligning models, such as manipulating and blocking their outputs, may not be sufficient if unwanted goals lurk under the surface within the model itself. “We’re gonna have to confront this,” Hendrycks says. “You can’t pretend it’s not there.”
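    The “manipulating and blocking their outputs” approach the passage describes can be sketched as a trivial post-hoc filter (a hypothetical blocklist, not any lab’s actual safety stack): it suppresses matching text after generation, while whatever preferences the model represents internally are left untouched.

    ```python
    # Minimal sketch of surface-level output filtering: suppress responses that
    # match a blocklist. The phrases here are hypothetical placeholders; the
    # point is that the model's underlying preferences are not changed at all.
    BLOCKED_PHRASES = ["phrase_a", "phrase_b"]  # hypothetical blocklist

    def filter_output(text: str) -> str:
        """Return the text unchanged unless it contains a blocked phrase."""
        lowered = text.lower()
        for phrase in BLOCKED_PHRASES:
            if phrase in lowered:
                return "[response withheld]"
        return text
    ```
    
    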

    Dylan Hadfield-Menell, a professor at MIT who researches methods for aligning AI with human values, says Hendrycks’ paper suggests a promising direction for AI research. “They find some interesting results,” he says. “The main one that stands out is that as the model scale increases, utility representations get more complete and coherent.”

    © 2025 Technology Mag. All Rights Reserved.