Technology Mag

    Business

    Trae Stephens Has Built AI Weapons and Worked for Donald Trump. As He Sees It, Jesus Would Approve

By News Room · September 25, 2024 · 4 Mins Read

    When I wrote about Anduril in 2018, the company explicitly said it wouldn’t build lethal weapons. Now you are building fighter planes, underwater drones, and other deadly weapons of war. Why did you make that pivot?

    We responded to what we saw, not only inside our military but also across the world. We want to be aligned with delivering the best capabilities in the most ethical way possible. The alternative is that someone’s going to do that anyway, and we believe that we can do that best.

    Were there soul-searching discussions before you crossed that line?

    There’s constant internal discussion about what to build and whether there’s ethical alignment with our mission. I don’t think that there’s a whole lot of utility in trying to set our own line when the government is actually setting that line. They’ve given clear guidance on what the military is going to do. We’re following the lead of our democratically elected government to tell us their issues and how we can be helpful.

    What’s the proper role for autonomous AI in warfare?

Luckily, the US Department of Defense has done more work on this than maybe any other organization in the world, except the big generative-AI foundational model companies. There are clear rules of engagement that keep humans in the loop. You want to take the humans out of the dull, dirty, and dangerous jobs and make decision-making more efficient while always keeping the person accountable at the end of the day. That’s the goal of all of the policy that’s been put in place, regardless of the developments in autonomy in the next five or 10 years.

    There might be temptation in a conflict not to wait for humans to weigh in, when targets present themselves in an instant, especially with weapons like your autonomous fighter planes.

    The autonomous program we’re working on for the Fury aircraft [a fighter used by the US Navy and Marine Corps] is called CCA, Collaborative Combat Aircraft. There is a man in a plane controlling and commanding robot fighter planes and deciding what they do.

    What about the drones you’re building that hang around in the air until they see a target and then pounce?

There’s a classification of drones called loitering munitions, which are aircraft that search for targets and then have the ability to go kinetic on those targets, kind of as a kamikaze. Again, you have a human in the loop who’s accountable.

    War is messy. Isn’t there a genuine concern that those principles would be set aside once hostilities begin?

    Humans fight wars, and humans are flawed. We make mistakes. Even back when we were standing in lines and shooting each other with muskets, there was a process to adjudicate violations of the law of engagement. I think that will persist. Do I think there will never be a case where some autonomous system is asked to do something that feels like a gross violation of ethical principles? Of course not, because it’s still humans in charge. Do I believe that it is more ethical to prosecute a dangerous, messy conflict with robots that are more precise, more discriminating, and less likely to lead to escalation? Yes. Deciding not to do this is to continue to put people in harm’s way.

    Photograph: Peyton Fulford

    I’m sure you’re familiar with Eisenhower’s final message about the dangers of a military-industrial complex that serves its own needs. Does that warning affect how you operate?

That’s one of the all-time great speeches; I read it at least once a year. Eisenhower was articulating a military-industrial complex where the government is not that different from contractors like Lockheed Martin, Boeing, Northrop Grumman, and General Dynamics. There’s a revolving door at the senior levels of these companies, and they become power centers because of that interconnectedness. Anduril has been pushing a more commercial approach that doesn’t rely on that closely tied incentive structure. We say, “Let’s build things at the lowest cost, utilizing off-the-shelf technologies, and do it in a way where we are taking on a lot of the risk.” That avoids some of the potential tension that Eisenhower identified.
