    Business

    OpenAI Is Testing Its Powers of Persuasion

    By News Room · July 11, 2024 · 4 Mins Read

    This week, Sam Altman, CEO of OpenAI, and Arianna Huffington, founder and CEO of the health company Thrive Global, published an article in Time touting Thrive AI, a startup backed by Thrive and OpenAI’s Startup Fund. The piece suggests that AI could have a huge positive impact on public health by talking people into healthier behavior.

    Altman and Huffington write that Thrive AI is working toward “a fully integrated personal AI coach that offers real-time nudges and recommendations unique to you that allows you to take action on your daily behaviors to improve your health.”

    Their vision puts a positive spin on what may well prove to be one of AI’s sharpest double-edges. AI models are already adept at persuading people, and we don’t know how much more powerful they could become as they advance and gain access to more personal data.

    Aleksander Madry, a professor on sabbatical from the Massachusetts Institute of Technology, leads a team at OpenAI called Preparedness that is working on that very issue.

    “One of the streams of work in Preparedness is persuasion,” Madry told WIRED in a May interview. “Essentially, thinking to what extent you can use these models as a way of persuading people.”

    Madry says he was drawn to join OpenAI by the remarkable potential of language models and because the risks that they pose have barely been studied. “There is literally almost no science,” he says. “That was the impetus for the Preparedness effort.”

    Persuasiveness is a key element in programs like ChatGPT and one of the ingredients that makes such chatbots so compelling. Language models are trained on human writing and dialog that contain countless rhetorical and persuasive tricks and techniques. The models are also typically fine-tuned to err toward utterances that users find more compelling.
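
    As a rough illustration of that fine-tuning step, the sketch below shows the kind of pairwise preference objective commonly used for it: responses that raters find more compelling are scored above rejected alternatives. The scores and the scoring function here are hypothetical stand-ins, not OpenAI's actual training setup.

        import math

        def preference_loss(score_preferred: float, score_rejected: float) -> float:
            # Bradley-Terry style pairwise loss: it shrinks when the model scores
            # the response raters preferred above the one they rejected.
            return -math.log(1.0 / (1.0 + math.exp(-(score_preferred - score_rejected))))

        # Hypothetical scores a reward model might assign to two candidate replies.
        compelling_reply_score = 2.3
        bland_reply_score = 0.4

        print(preference_loss(compelling_reply_score, bland_reply_score))  # small loss
        print(preference_loss(bland_reply_score, compelling_reply_score))  # large loss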

    Research released in April by Anthropic, a competitor founded by OpenAI exiles, suggests that language models have become better at persuading people as they have grown in size and sophistication. The research involved giving volunteers a statement and then seeing how an AI-generated argument changed their opinion of it.
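
    The study's basic measurement can be pictured with a short sketch: participants rate their agreement with a claim before and after reading an AI-written argument, and persuasiveness is the average shift. The ratings below are invented for illustration, not Anthropic's data.

        # Hypothetical 1-7 agreement ratings before and after reading an AI-written argument.
        before = [2, 3, 4, 2, 5]
        after = [4, 4, 5, 3, 5]

        # Persuasiveness as the mean change in agreement across participants.
        shift = sum(a - b for a, b in zip(after, before)) / len(before)
        print(f"average opinion shift: {shift:+.2f} points")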

    OpenAI’s work extends to analyzing AI in conversation with users—something that may unlock greater persuasiveness. Madry says the work is being conducted on consenting volunteers, and declines to reveal the findings to date. But he says the persuasive power of language models runs deep. “As humans we have this ‘weakness’ that if something communicates with us in natural language [we think of it as if] it is a human,” he says, alluding to an anthropomorphism that can make chatbots seem more lifelike and convincing.

    The Time article argues that the potential health benefits of persuasive AI will require strong legal safeguards because the models may have access to so much personal information. “Policymakers need to create a regulatory environment that fosters AI innovation while safeguarding privacy,” Altman and Huffington write.

    This is not all that policymakers will need to consider. It may also be crucial to weigh how increasingly persuasive algorithms could be misused. AI algorithms could enhance the resonance of misinformation or generate particularly compelling phishing scams. They might also be used to advertise products.

    Madry says a key question, yet to be studied by OpenAI or others, is how much more compelling or coercive AI programs that interact with users over long periods of time could prove to be. Already a number of companies offer chatbots that roleplay as romantic partners and other characters. AI girlfriends are increasingly popular—some are even designed to yell at you—but how addictive and persuasive these bots are is largely unknown.

    The excitement and hype generated by ChatGPT following its release in November 2022 saw OpenAI, outside researchers, and many policymakers zero in on the more hypothetical question of whether AI could someday turn against its creators.

    Madry says this risks ignoring the more subtle dangers posed by silver-tongued algorithms. “I worry that they will focus on the wrong questions,” Madry says of the work of policymakers thus far. “That in some sense, everyone says, ‘Oh yeah, we are handling it because we are talking about it,’ when actually we are not talking about the right thing.”
