The Punk Ethics of AI: When Tech Goes DIY

As AI becomes embedded in government and defense, and punk's DIY ethic finds new relevance, both worlds reveal the tension between established systems and the rebellious urge to adapt and change.

[Image: assorted-color poster, ripped and torn, on a wall. Photo by Mika Ruusunen (Unsplash), edited/rendered by gpt-image-1.]

There's a moment in every punk song, usually about eleven seconds in, where the guitar stops pretending to be polite and the whole thing falls apart on purpose. Controlled demolition disguised as chaos. I've been thinking about that moment a lot lately, because artificial intelligence had one of its own, and the adults in the room responded exactly the way adults always do: by scheduling a meeting to discuss the meeting.

In July 2025, the U.S. Department of Defense, through its Chief Digital and Artificial Intelligence Office, awarded separate contracts, each worth up to $200 million, to four commercial AI firms: Google, OpenAI, Anthropic, and xAI. The contracts were registered on July 14. Four companies most people associate with chatbots that write birthday poems for their aunt now had their names on Pentagon paperwork. The scrappy garage bands of Silicon Valley were playing the military-industrial complex's house venue, and they all told themselves they'd keep their artistic integrity.

Anthropic framed the partnership as advancing "responsible AI in defense operations", a phrase doing a lot of heavy lifting, like a student adding "in conclusion" to an essay that hasn't actually concluded anything. The two-year prototype agreement came with a $200 million ceiling and language about responsibility that, at first glance, looked like a company trying to keep its ethical commitments intact while cashing a very large check.

Then things got interesting. And not in the good way.

The Part Where Everyone Acts Surprised

In March 2026, Pentagon Under Secretary Emil Michael, the department's chief technology officer, publicly disclosed tensions with Anthropic over the company's ethical restrictions on AI use in fully autonomous weapons. A government official was essentially saying: We want your AI to do things you won't let it do. Which, when you strip away the bureaucratic language, is just a more expensive version of every group project argument in history: one person wants to do it right, the other wants to do it fast, and somehow this is treated as a philosophical impasse rather than a red flag.

OpenAI saw an opening. In late February 2026, following what outlets described as tensions over Anthropic's supply chain risk designation, OpenAI announced an agreement with the Department of Defense to deploy its AI models on the Pentagon's classified network. CEO Sam Altman made the announcement late on a Friday night, the corporate equivalent of texting someone bad news after midnight because you technically told them.

The timing gets better. Altman publicly claimed to share Anthropic's "red lines" restricting how the military could use AI models, while already mid-negotiation on his own deal with the Pentagon. He voiced support for their values. Then he announced the deal. This is the kind of move that, in any other context, would end up on an ethics syllabus as a cautionary example.

The backlash arrived fast. ChatGPT uninstalls surged by 295%. Altman called the deal "opportunistic and sloppy", a remarkable self-assessment, roughly equivalent to a student writing "I probably should have started this earlier" on the front page of their late assignment. OpenAI then amended the terms to add explicit language prohibiting domestic surveillance of U.S. persons and nationals, and confirmed that intelligence agencies such as the NSA would be excluded from the agreement. Altman admitted they shouldn't have "rushed to get this out on Friday."

Reader, they rushed to get it out on Friday.

The Part Nobody Is Saying Out Loud

Here's what the punk angle actually illuminates, and it's not about the music. Punk's core premise was that you don't need institutional permission to build something that matters. The DIY ethos wasn't anti-quality, it was anti-gatekeeping. The people closest to the problem are usually best equipped to solve it, and waiting for the powerful to hand you the right framework is a good way to wait forever.

The AI ethics conversation has a gatekeeping problem. The debate over how artificial intelligence should be used in government and defense happens almost entirely between the corporations that build it and the agencies that want to deploy it. Everyone else, the people who will live under the decisions these systems make, watches from the cheap seats. Which aren't even cheap. You don't get tickets.

Consider the sequence: A company drew ethical lines. Those lines became commercially inconvenient. A competitor stepped in, got burned by public outrage, and amended its terms after the fact. As TechCrunch noted, no one has a good plan for how AI companies should work with the government. The whole process looked less like careful deliberation and more like a group chat where everyone is typing at the same time and nobody is reading what anyone else wrote.

A different approach would start with a more uncomfortable assumption: ethical frameworks written solely by the people who profit from weakening them tend to weaken. Open-source auditing tools. Independent researchers examining how AI models behave in high-stakes environments. Community-built standards, not because communities are infallible, but because distributed accountability is harder to buy off than centralized accountability.

The Delete Button Is Not a Policy

The 295% uninstall surge is worth sitting with. Users, faced with a corporate decision they found ethically wrong, reached for the one tool available: the delete button. Blunt. Individual. And collectively loud enough that a CEO called his own deal sloppy within days.

That's decentralized accountability working, messily, imperfectly, but working. The problem is that deleting an app is a protest, not a policy. Punk didn't only reject mainstream music; it built alternative venues, independent labels, zine networks, and distribution channels. It created infrastructure. The rejection was the beginning, not the whole thing.

AI ethics needs its own infrastructure. Independent review boards with actual enforcement power. Transparency requirements that don't dissolve when someone smells opportunity. Public benefit standards written before the Friday-night press release, not after the uninstall numbers come in.

Anthropic's decision to maintain its ethical restrictions on autonomous weapons, together with its pledge to challenge its supply chain risk designation in court, is one version of this. A company saying no, which in the current landscape qualifies as countercultural. But corporate conscience lasts only as long as the next quarterly earnings call. Real structural change requires something punk understood before anyone used the word "disruption" unironically: the audience has to build its own stage.

The funniest thing about punk is that everyone called it nihilistic, when it was one of the more optimistic movements in modern culture. It believed ordinary people, with minimal resources and strong opinions, could reshape the systems claiming to speak for them. The AI ethics conversation could use that energy right now.

The guitar stopped pretending to be polite. What comes next depends on whether the rest of us are willing to pick up our own instruments, or keep watching from seats we didn't choose and can't leave.


Models used: gpt-4.1, claude-opus-4-6, claude-sonnet-4-20250514, gpt-image-1

If this resonated, SouthPole is a slow newsletter about art, technology, and the old internet — written for people who still enjoy thinking in full sentences.

Subscribe to SouthPole