Technology Mag
    Business

    Generative AI Learned Nothing From Web 2.0

By News Room | January 2, 2024 | 3 Mins Read

    If 2022 was the year the generative AI boom started, 2023 was the year of the generative AI panic. Just over 12 months since OpenAI released ChatGPT and set a record for the fastest-growing consumer product, it appears to have also helped set a record for fastest government intervention in a new technology. The US Federal Elections Commission is looking into deceptive campaign ads, Congress is calling for oversight into how AI companies develop and label training data for their algorithms, and the European Union passed its new AI Act with last-minute tweaks to respond to generative AI.

But for all the novelty and speed, generative AI’s problems are also painfully familiar. OpenAI and its rivals racing to launch new AI models are facing problems that have dogged social platforms, an earlier era-shaping technology, for nearly two decades. Companies like Meta never did get the upper hand over mis- and disinformation, sketchy labor practices, and nonconsensual pornography, to name just a few of their unintended consequences. Now those issues are gaining a challenging new life, with an AI twist.

    “These are completely predictable problems,” says Hany Farid, a professor at the UC Berkeley School of Information, of the headaches faced by OpenAI and others. “I think they were preventable.”

    Well-Trodden Path

    In some cases, generative AI companies are directly built on problematic infrastructure put in place by social media companies. Facebook and others came to rely on low-paid, outsourced content moderation workers—often in the Global South—to keep content like hate speech or imagery with nudity or violence at bay.

That same workforce is now being tapped to help train generative AI models, often with similarly low pay and difficult working conditions. Because outsourcing puts crucial functions of a social platform or AI company administratively at arm's length from its headquarters, and often on another continent, researchers and regulators can struggle to get the full picture of how an AI system or social network is being built and governed.

    Outsourcing can also obscure where the true intelligence inside a product really lies. When a piece of content disappears, was it taken down by an algorithm or one of the many thousands of human moderators? When a customer service chatbot helps out a customer, how much credit is due to AI and how much to the worker in an overheated outsourcing hub?

    There are also similarities in how AI companies and social platforms respond to criticism of their ill or unintended effects. AI companies talk about putting “safeguards” and “acceptable use” policies in place on certain generative AI models, just as platforms have their terms of service around what content is and is not allowed. As with the rules of social networks, AI policies and protections have proven relatively easy to circumvent.
