AI, Military, and the Dance of Power

a group of toy military vehicles sitting on top of a sandy beach
Photo by Ainur Khakimov (unsplash), Edited/Rendered by gpt-image-1

There's something distinctly American about treating caution as inefficiency. Defense Secretary Pete Hegseth said Monday that xAI’s Grok will join Google’s AI engine inside Defense Department networks, and that Grok is expected to go live later this month. Watching this unfold feels like observing a national reflex—one where speed signals virtue and deliberation suggests weakness.

The announcement came during Hegseth's visit to SpaceX on January 12, where he unveiled an "AI acceleration strategy." The language itself reveals something: acceleration, not deliberation. Speed, not caution. America does technology policy with the throttle open and the assumption that course corrections can come later.

The Architecture of Integration

What strikes an outside observer isn't that this is happening, but how. Public reporting and the strategy memo indicate Grok will be made available alongside Google’s Gemini for Defense Department users, in environments that may extend to classified networks. The strategy document declares that "Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological 'tuning' that interferes with their ability to provide objectively truthful responses."

This framing, positioning AI as a tool for "objective truth," reveals a particularly American faith in technology's neutrality. Other democratic traditions assume all tools carry the fingerprints of their makers. But here persists a belief that pure objectivity exists and can be coded into silicon.

The New Courtiers

We're witnessing a new form of court politics, where tech billionaires have become digital nobles, their algorithms granted audiences in the highest chambers of power. Musk's companies now touch space exploration, electric vehicles, social media, and military AI, a portfolio that would make any medieval duke envious.

This isn't entirely new. American defense contractors have long held similar positions. But AI integration differs. When Boeing builds a plane for the military, the relationship remains transactional and transparent. When an AI system integrates into military networks, it becomes part of the decision-making apparatus itself. Less like buying equipment, more like hiring an advisor who never sleeps and processes information at superhuman speed.

The Question of Oversight

Who watches the algorithms? On Sept. 10, 2025, Sen. Elizabeth Warren publicized a letter raising concerns about DoD’s Grok-related contract and the chatbot’s history of misinformation and antisemitic content. Traditional military contractors undergo extensive vetting, security clearances, and continuous monitoring. But AI systems evolve. They learn. The Grok entering Pentagon networks this month may not be the same Grok operating there a year from now.

In many democratic systems, elaborate oversight mechanisms exist: multiple committees, lengthy review processes, public comment periods. Americans tend to see this as inefficiency. But there's wisdom in slowness when dealing with technologies whose full implications remain unclear.

The removal of "ideological tuning" from AI models sounds straightforward until you realize that determining what constitutes ideology versus objectivity is itself an ideological position. Every AI model makes choices about what patterns to recognize, what correlations to emphasize. These aren't neutral decisions.

The Global Echo

From an international perspective, the casualness stands out most: a Monday announcement, integration beginning within the month, as if adding AI to military networks were as routine as updating software. The public announcement and strategy memo emphasize speed and near-term deployment, while offering limited public detail on phased pilots or testing.

The partnership between tech giants and government isn't inherently problematic. Many achievements have come from such collaborations. But the speed of AI development, combined with the complexity of these systems and the sensitivity of military applications, creates new categories of risk we're only beginning to understand.

The Pentagon's strategy explicitly rejects certain ethical considerations as "ideological" but offers little clarity on what ethical guidelines will govern AI use instead. This isn't only an American problem. Most nations struggle with these questions, but America's decisions carry global weight.

The Dance Continues

The integration of Grok into Pentagon networks will proceed this month, regardless of international concern or domestic criticism. Act first, adjust later. Sometimes this approach leads to remarkable innovations. Sometimes it leads to cautionary tales. Most often, both simultaneously.

AI systems influencing military decisions, tech billionaires with unprecedented access to government power, a public both fascinated and fearful. We're entering uncharted territory. The dance of power continues, but now the dancers include entities not quite human, serving masters all too human, in a performance whose ending no one can predict.

Perhaps that's the most American thing of all: treating caution as inefficiency, confident that innovation will see them through. Those of us watching from other shores can only hope they're right. In our interconnected world, we're all part of the audience whether we bought tickets or not.

Models used: gpt-4.1, claude-opus-4-1-20250805, claude-sonnet-4-20250514, gpt-image-1

If this resonated, SouthPole is a slow newsletter about art, technology, and the old internet — written for people who still enjoy thinking in full sentences.

Subscribe to SouthPole