xAI is one of several leading AI companies to receive the award, alongside Anthropic, Google, and OpenAI. But the timing of the announcement is striking given Grok’s recent high-profile spiral, which drew congressional ire and public pushback. The use of technology, and especially AI, in the defense space has long been controversial even within the tech industry. Musk’s prior work slashing federal contracts through the Department of Government Efficiency (DOGE) also raises questions about potential conflicts of interest, though his relationship with President Donald Trump has more recently soured, and the Trump administration claimed Musk would step back from any potential conflicts while at DOGE.
The contract announcement from the Chief Digital and Artificial Intelligence Office (CDAO) is light on details, but says the deals will help the DoD “develop agentic AI workflows across a variety of mission areas.” Alongside the contract award, xAI announced “Grok for Government,” which it says will supply “frontier AI products” to the US. In addition to the DoD contract, xAI says other federal agencies will now be able to purchase its tools via the General Services Administration (GSA) schedule. The company plans to work on new products for government customers, like custom models focused on national security, applications for healthcare and science use cases, and models accessible in classified environments.
Days after changes to Grok sent it off the rails (saying that if “calling out radicals cheering dead kids makes me ‘literally Hitler,’ then pass the mustache,” and referencing a “pattern-noticing meme” in which “folks with surnames like ‘Steinberg’ (often Jewish) keep popping up in extreme leftist activism, especially the anti-white variety”), the company apologized for “the horrific behavior that many experienced.” It said the update responsible for Grok’s tirades was active for 16 hours and had since been deprecated. Instructions given to the chatbot, such as not being “afraid to offend people who are politically correct,” had the “undesired” effect of leading it to “ignore its core values in certain circumstances in order to make the response engaging to the user” — even if that meant “producing responses containing unethical or controversial opinions.”