Technology Mag

    I’m Not Convinced Ethical Generative AI Currently Exists

By News Room | February 20, 2025 | 4 Mins Read

    Are there generative AI tools I can use that are perhaps slightly more ethical than others?
    —Better Choices

    No, I don’t think any one generative AI tool from the major players is more ethical than any other. Here’s why.

For me, the ethics of generative AI use comes down to two issues: how the models are developed (specifically, how the data used to train them was accessed) and ongoing concerns about their environmental impact. Powering a chatbot or image generator requires an obscene amount of data, and the decisions developers have made, past and present, to obtain that repository of data are questionable and shrouded in secrecy. Even the models that people in Silicon Valley call “open source” keep their training datasets hidden.

    Despite complaints from authors, artists, filmmakers, YouTube creators, and even just social media users who don’t want their posts scraped and turned into chatbot sludge, AI companies have typically behaved as if consent from those creators isn’t necessary for their output to be used as training data. One familiar claim from AI proponents is that to obtain this vast amount of data with the consent of the humans who crafted it would be too unwieldy and would impede innovation. Even for companies that have struck licensing deals with major publishers, that “clean” data is an infinitesimal part of the colossal machine.

Although some devs are working on approaches to fairly compensate people when their work is used to train AI models, these projects remain niche alternatives to the mainstream behemoths.

And then there are the ecological consequences. The current environmental impact of generative AI usage is similarly outsized across the major options. While generative AI still represents a small slice of humanity’s aggregate stress on the environment, gen-AI software tools require vastly more energy to create and run than their non-generative counterparts. Using a chatbot for research assistance contributes far more to the climate crisis than simply searching the web on Google.

    It’s possible the amount of energy required to run the tools could be lowered—new approaches like DeepSeek’s latest model sip precious energy resources rather than chug them—but the big AI companies appear more interested in accelerating development than pausing to consider approaches less harmful to the planet.

    How do we make AI wiser and more ethical rather than smarter and more powerful?
    —Galaxy Brain

    Thank you for your wise question, fellow human. This predicament may be more of a common topic of discussion among those building generative AI tools than you might expect. For example, Anthropic’s “constitutional” approach to its Claude chatbot attempts to instill a sense of core values into the machine.

    The confusion at the heart of your question traces back to how we talk about the software. Recently, multiple companies have released models focused on “reasoning” and “chain-of-thought” approaches to perform research. Describing what the AI tools do with humanlike terms and phrases makes the line between human and machine unnecessarily hazy. I mean, if the model can truly reason and have chains of thoughts, why wouldn’t we be able to send the software down some path of self-enlightenment?

Because it doesn’t think. Words like reasoning, deep thought, and understanding are all just ways to describe how the algorithm processes information. When I pause over the ethics of how these models are trained and their environmental impact, my stance isn’t based on an amalgamation of predictive patterns of text, but rather the sum of my individual experiences and closely held beliefs.

    The ethical aspects of AI outputs will always circle back to our human inputs. What are the intentions of the user’s prompts when interacting with a chatbot? What were the biases in the training data? How did the devs teach the bot to respond to controversial queries? Rather than focusing on making the AI itself wiser, the real task at hand is cultivating more ethical development practices and user interactions.
