Technology Mag

    Business

    Trae Stephens Has Built AI Weapons and Worked for Donald Trump. As He Sees It, Jesus Would Approve

By News Room · September 25, 2024 · 4 Mins Read

    When I wrote about Anduril in 2018, the company explicitly said it wouldn’t build lethal weapons. Now you are building fighter planes, underwater drones, and other deadly weapons of war. Why did you make that pivot?

    We responded to what we saw, not only inside our military but also across the world. We want to be aligned with delivering the best capabilities in the most ethical way possible. The alternative is that someone’s going to do that anyway, and we believe that we can do that best.

    Were there soul-searching discussions before you crossed that line?

    There’s constant internal discussion about what to build and whether there’s ethical alignment with our mission. I don’t think that there’s a whole lot of utility in trying to set our own line when the government is actually setting that line. They’ve given clear guidance on what the military is going to do. We’re following the lead of our democratically elected government to tell us their issues and how we can be helpful.

    What’s the proper role for autonomous AI in warfare?

Luckily, the US Department of Defense has done more work on this than maybe any other organization in the world, except the big generative-AI foundation model companies. There are clear rules of engagement that keep humans in the loop. You want to take the humans out of the dull, dirty, and dangerous jobs and make decision-making more efficient while always keeping the person accountable at the end of the day. That’s the goal of all of the policy that’s been put in place, regardless of the developments in autonomy in the next five or 10 years.

    There might be temptation in a conflict not to wait for humans to weigh in, when targets present themselves in an instant, especially with weapons like your autonomous fighter planes.

    The autonomous program we’re working on for the Fury aircraft [a fighter used by the US Navy and Marine Corps] is called CCA, Collaborative Combat Aircraft. There is a man in a plane controlling and commanding robot fighter planes and deciding what they do.

    What about the drones you’re building that hang around in the air until they see a target and then pounce?

There’s a classification of drones called loitering munitions, which are aircraft that search for targets and then have the ability to go kinetic on those targets, kind of as a kamikaze. Again, you have a human in the loop who’s accountable.

    War is messy. Isn’t there a genuine concern that those principles would be set aside once hostilities begin?

    Humans fight wars, and humans are flawed. We make mistakes. Even back when we were standing in lines and shooting each other with muskets, there was a process to adjudicate violations of the law of engagement. I think that will persist. Do I think there will never be a case where some autonomous system is asked to do something that feels like a gross violation of ethical principles? Of course not, because it’s still humans in charge. Do I believe that it is more ethical to prosecute a dangerous, messy conflict with robots that are more precise, more discriminating, and less likely to lead to escalation? Yes. Deciding not to do this is to continue to put people in harm’s way.

    Photograph: Peyton Fulford

    I’m sure you’re familiar with Eisenhower’s final message about the dangers of a military-industrial complex that serves its own needs. Does that warning affect how you operate?

That’s one of the all-time great speeches—I read it at least once a year. Eisenhower was articulating a military-industrial complex where the government is not that different from contractors like Lockheed Martin, Boeing, Northrop Grumman, and General Dynamics. There’s a revolving door at the senior levels of these companies, and they become power centers because of that interconnectedness. Anduril has been pushing a more commercial approach that doesn’t rely on that closely tied incentive structure. We say, “Let’s build things at the lowest cost, utilizing off-the-shelf technologies, and do it in a way where we are taking on a lot of the risk.” That avoids some of the potential tension that Eisenhower identified.

