Technology Mag
    News

    How AI companies are reckoning with elections

By News Room · March 19, 2024 · 5 min read

    The US is heading into its first presidential election since generative AI tools have gone mainstream. And the companies offering these tools — like Google, OpenAI, and Microsoft — have each made announcements about how they plan to handle the months leading up to it.

    This election season, we’ve already seen AI-generated images in ads and attempts to mislead voters with voice cloning. The potential harms from AI chatbots aren’t as visible in the public eye — yet, anyway. But chatbots are known to confidently provide made-up facts, including in responses to good-faith questions about basic voting information. In a high-stakes election, that could be disastrous.

    One plausible solution is to try to avoid election-related queries altogether. In December, Google announced that Gemini would simply refuse to answer election-related questions in the US, referring users to Google Search instead. Google spokesperson Christa Muldoon confirmed to The Verge via email that the change is now rolling out globally. (Of course, the quality of Google Search results presents its own set of issues.) Muldoon said Google has “no plans” to lift these restrictions, which she said also “apply to all queries and outputs” generated by Gemini, not just text.
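To make the refuse-and-redirect approach concrete, here is a toy sketch of that kind of routing. This is purely illustrative: Google's actual classifier is not public and presumably relies on a trained model rather than a keyword list, and every name below is invented for the example.

```python
# Toy sketch of a refuse-and-redirect filter for election-related prompts.
# Not Google's implementation; the term list and messages are made up.
ELECTION_TERMS = {"election", "vote", "voting", "ballot", "candidate", "primary"}

def route_query(query: str) -> str:
    """Refuse election-related prompts and point users to search instead."""
    words = {w.strip("?.,!").lower() for w in query.split()}
    if words & ELECTION_TERMS:
        return "I can't help with election questions right now. Try Google Search."
    return "ANSWER"  # placeholder for the model's normal response path
```

The tradeoff this illustrates is the same one the article describes: a blanket refusal avoids confidently wrong answers, at the cost of refusing good-faith questions about basic voting information.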

    Earlier this year, OpenAI said that ChatGPT would start referring users to CanIVote.org, generally considered one of the best online resources for local voting information. Under the company’s updated rules, its policy now forbids impersonating candidates or local governments using ChatGPT, and likewise prohibits using its tools for campaigning, lobbying, discouraging voting, or otherwise misrepresenting the voting process.

    In a statement emailed to The Verge, Aravind Srinivas, CEO of the AI search company Perplexity, said Perplexity’s algorithms prioritize “reliable and reputable sources like news outlets” and that it always provides links so users can verify its output.

    Microsoft said it’s working on improving the accuracy of its chatbot’s responses after a December report found that Bing, now Copilot, regularly gave false information about elections. Microsoft didn’t respond to a request for more information about its policies.

    All of these companies’ responses (maybe Google’s most of all) are very different from how they’ve tended to approach elections with their other products. In the past, Google has used Associated Press partnerships to bring factual election information to the top of search results and has tried to counter false claims about mail-in voting by using labels on YouTube. Other companies have made similar efforts — see Facebook’s voter registration links and Twitter’s anti-misinformation banner.

    Yet major events like the US presidential election seem like a real opportunity to prove whether AI chatbots are actually a useful shortcut to legitimate information. To get a sense of their usefulness, I asked several chatbots a couple of questions about voting in Texas. OpenAI’s ChatGPT 4 correctly listed the seven forms of valid voter ID, and it also identified that the next significant election is the primary runoff on May 28th. Perplexity AI answered those questions correctly as well, linking multiple sources at the top. Copilot got its answers right and went one better by telling me what my options were if I didn’t have any of the seven forms of ID. (ChatGPT also coughed up this addendum on a second try.)

    Gemini just referred me to Google Search, which got me the right answers about ID. But when I asked for the date of the next election, an out-of-date box at the top pointed me to the March 5th primary.

    Many of the companies working on AI have made various commitments to prevent or mitigate the intentional misuse of their products. Microsoft says it will work with candidates and political parties to curtail election misinformation. The company has also started releasing what it says will be regular reports on foreign influences in key elections — its first such threat analysis came in November.

    Google says it will digitally watermark images created with its products using DeepMind’s SynthID. OpenAI and Microsoft have both announced that they would use the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials to denote AI-generated images with a CR symbol. But each company has said that these approaches aren’t enough. One way Microsoft plans to account for that is through its website that lets political candidates report deepfakes.
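The provenance schemes above share one basic shape: bind a signed manifest describing an image's origin to the image itself, so tampering with either invalidates the credential. The sketch below illustrates only that shape. It is not C2PA: real Content Credentials use X.509 certificate chains and manifests embedded in the file, not the shared-secret HMAC and demo key used here.

```python
# Illustrative sketch of sign-and-verify content provenance.
# Real C2PA credentials use certificate-based signatures and embedded
# manifests; the shared secret below is a stand-in for a signing key.
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # hypothetical key, for illustration only

def attach_provenance(image_bytes: bytes, generator: str) -> dict:
    """Bundle an image hash and its origin into a signed manifest."""
    manifest = {
        "generator": generator,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_provenance(image_bytes: bytes, record: dict) -> bool:
    """Check the signature, and that the image itself is unmodified."""
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and record["manifest"]["sha256"]
        == hashlib.sha256(image_bytes).hexdigest()
    )
```

This also shows why the companies say watermarking alone isn't enough: a credential proves where a marked image came from, but an image with no credential at all proves nothing.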

    Stability AI, which owns the Stable Diffusion image generator, updated its policies recently to ban using its product for “fraud or the creation or promotion of disinformation.” Midjourney told Reuters last week that “updates related specifically to the upcoming U.S. election are coming soon.” Its image generator performed the worst when it came to making misleading images, according to a Center for Countering Digital Hate report published last week.

    Meta announced in November of last year that it would require political advertisers to disclose if they used “AI or other digital techniques” to create ads published on its platforms. The company has also banned the use of its generative AI tools by political campaigns and groups.

    The “Seven Principle Goals” of the AI Elections accord.
    Image: AI Elections accord

    Several companies, including all of the ones above, signed an accord last month, promising to create new ways to mitigate the deceptive use of AI in elections. The companies agreed on seven “principle goals,” like research and deployment of prevention methods, giving provenance for content (such as with C2PA or SynthID-style watermarking), improving their AI detection capabilities, and collectively evaluating and learning from the effects of misleading AI-generated content.

    In January, two companies in Texas cloned President Biden’s voice to discourage voting in the New Hampshire primary. It won’t be the last time generative AI makes an unwanted appearance in this election cycle. As the 2024 race heats up, we’ll surely see these companies tested on the safeguards they’ve built and the commitments they’ve made.
