    Business

    Meta Will Crack Down on AI-Generated Fakes—but Leave Plenty Undetected

    By News Room | February 7, 2024 | 4 Mins Read

    Meta, like other leading tech companies, has spent the past year promising to speed up deployment of generative artificial intelligence. Today it acknowledged it must also respond to the technology’s hazards, announcing an expanded policy of tagging AI-generated images posted to Facebook, Instagram, and Threads with warning labels to inform people of their artificial origins.

    Yet much of the synthetic media likely to appear on Meta’s platforms is unlikely to be covered by the new policy, leaving many gaps through which malicious actors could slip. “It’s a step in the right direction, but with challenges,” says Sam Gregory, program director of the nonprofit Witness, which helps people use technology to support human rights.

    Meta already labels AI-generated images made using its own generative AI tools with the tag “Imagined with AI,” in part by looking for the digital “watermark” its algorithms embed into their output. Now Meta says that in coming months it will also label AI images made with tools offered by other companies that embed watermarks into their technology.
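    Meta hasn’t published the code behind this detection, but the general idea is straightforward: look inside an uploaded image for an embedded provenance signal left by the tool that generated it. The Python sketch below crudely scans a file’s raw bytes for marker strings of the kind such metadata can contain; the marker values and file name are illustrative assumptions, not Meta’s actual implementation.

```python
# Minimal, illustrative sketch of metadata-based provenance checking.
# Assumes the generating tool left a textual provenance marker in the file's
# embedded metadata; the marker strings below are examples, not an official list.

AI_PROVENANCE_MARKERS = [
    b"trainedAlgorithmicMedia",  # IPTC-style "digital source type" value some tools embed
    b"c2pa",                     # hint that a C2PA / Content Credentials manifest is present
]

def looks_ai_generated(path: str) -> bool:
    """Crudely scan the raw file bytes for a known provenance marker."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)

if __name__ == "__main__":
    print(looks_ai_generated("example.jpg"))  # hypothetical local file
```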

    The policy is supposed to reduce the risk of mis- or disinformation being spread by AI-generated images passed off as photos. But although Meta said it is working to support disclosure technology in development at Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, the technology is not yet widely deployed. And many AI image generation tools are available that do not watermark their output, with the technology becoming increasingly easy to access and modify. “The only way a system like that will be effective is if a broad range of generative tools and platforms participated,” says Gregory.

    Even if there is wide support for watermarking, it is unclear how robust any protection it offers will be. There is no universally deployed standard in place, but the Coalition for Content Provenance and Authenticity (C2PA), an initiative founded by Adobe, has helped companies start to align their work on the concept. Yet the technology developed so far is not foolproof. In a study released last year, researchers found they could easily break watermarks, or add them to images that hadn’t been generated by AI to make it appear that they had.
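    The attacks in that study targeted far more sophisticated schemes than the toy below, but a deliberately simple least-significant-bit watermark makes the underlying fragility easy to see: a single ordinary JPEG re-save wipes the embedded bits out. This is an illustrative Python sketch using NumPy and Pillow, not the watermarking used by any company named above.

```python
# Toy demonstration of watermark fragility: hide bits in the image, then show
# that one lossy re-encode destroys them. Not a real production watermark.

import io

import numpy as np
from PIL import Image

def embed_lsb(img: Image.Image, bits: np.ndarray) -> Image.Image:
    """Hide one watermark bit per pixel in the blue channel's least significant bit."""
    arr = np.array(img.convert("RGB"))
    arr[..., 2] = (arr[..., 2] & 0xFE) | bits
    return Image.fromarray(arr)

def extract_lsb(img: Image.Image) -> np.ndarray:
    """Read the blue channel's least significant bit back out."""
    return np.array(img.convert("RGB"))[..., 2] & 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cover = Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))
    secret = rng.integers(0, 2, (64, 64), dtype=np.uint8)

    marked = embed_lsb(cover, secret)
    print("bits recovered before re-encoding:",
          (extract_lsb(marked) == secret).mean())       # ~1.0

    buf = io.BytesIO()
    marked.save(buf, format="JPEG", quality=85)         # one ordinary lossy re-save
    reloaded = Image.open(io.BytesIO(buf.getvalue()))
    print("bits recovered after JPEG re-save:",
          (extract_lsb(reloaded) == secret).mean())     # ~0.5, no better than chance
```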

    Malicious Loophole

    Hany Farid, a professor at the UC Berkeley School of Information who has advised the C2PA initiative, says that anyone interested in using generative AI maliciously will likely turn to tools that don’t watermark their output or otherwise betray its nature. For example, the creators of the fake robocall that used President Joe Biden’s voice to target New Hampshire voters last month didn’t add any disclosure of its origins.

    And he thinks companies should be prepared for bad actors to target whatever method they try to use to identify content provenance. Farid suspects that multiple forms of identification might need to be used in concert to robustly identify AI-generated images, for example by combining watermarking with hash-based technology used to create watch lists for child sex abuse material. And watermarking is a less developed concept for AI-generated media other than images, such as audio and video.
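    The hash-based technology Farid mentions works by matching uploads against a list of known images, even after resizing or re-encoding. The Python sketch below illustrates the idea with a very simple “average hash” and a Hamming-distance threshold; it stands in for production systems like PhotoDNA rather than describing them, and the file names and threshold are hypothetical.

```python
# Illustrative perceptual-hash watch-list check, using a simple "average hash".

from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to a small grayscale thumbnail and set one bit per pixel above the mean."""
    thumb = Image.open(path).convert("L").resize((size, size))
    pixels = list(thumb.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of bits on which two hashes differ."""
    return bin(a ^ b).count("1")

def on_watch_list(path: str, watch_list, threshold: int = 5) -> bool:
    """Flag an upload whose hash is within `threshold` bits of any known hash."""
    h = average_hash(path)
    return any(hamming(h, known) <= threshold for known in watch_list)

if __name__ == "__main__":
    watch_list = {average_hash("known_fake.png")}            # hypothetical known-bad image
    print(on_watch_list("incoming_upload.jpg", watch_list))  # hypothetical upload
```

    A small Hamming-distance budget lets near-duplicates still match after cropping or compression while keeping unrelated images apart; real deployments tune that threshold carefully and use far more robust hashes.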

    “While companies are starting to include signals in their image generators, they haven’t started including them in AI tools that generate audio and video at the same scale, so we can’t yet detect those signals and label this content from other companies,” Meta spokesperson Kevin McAlister acknowledges. “While the industry works towards this capability, we’re adding a feature for people to disclose when they share AI-generated video or audio so we can add a label to it.”

    Meta’s new policies may help it catch more fake content, but not all manipulated media is AI-generated. A ruling released on Monday by Meta’s Oversight Board of independent experts, which reviews some moderation calls, upheld the company’s decision to leave up a video of President Joe Biden that had been edited to make it appear that he is inappropriately touching his granddaughter’s chest. But the board said that while the video, which was not AI-generated, didn’t violate Meta’s current policies, the company should revise and expand its rules for “manipulated media” to cover more than just AI-generated content.

    McAlister, the Meta spokesperson, says the company is “reviewing the Oversight Board’s guidance and will respond publicly to their recommendations within 60 days in accordance with the bylaws.” Farid says that hole in Meta’s policies and the technical focus on only watermarked AI-generated images suggest the company’s plan for the gen AI era is incomplete.
