On February 27, Secretary of War Pete Hegseth announced that the AI company Anthropic would be designated a “Supply-Chain Risk to National Security,” thus preventing its systems from being used for military contracts. The move came in response to Anthropic’s refusal to provide its AI models for use in fully autonomous weapons or for domestic mass surveillance. Hegseth justified the ban in the name of American democracy. It should be the president and the American people, he said, who determine how the military conducts itself, and not “unelected tech executives.” 

Responding to this decision, Anthropic CEO Dario Amodei also invoked American democracy. “In a narrow set of cases,” he said, “we believe AI can undermine, rather than defend, democratic values.” Crossing Anthropic’s red lines, he asserted, would be “contrary to American values.” 

Amodei also noted that Anthropic is not a democratically elected body. “The right long-term solution is not for a private company and the Pentagon to argue about this,” he said. “I think Congress needs to act here.” The problem, however, is that “Congress doesn’t move fast.” Because the law is lagging behind the technology, he reasoned, it is the responsibility of those with the expertise to make sure that the public interest is protected. Anthropic can be the “judge of what our models can do reliably” and of the ways in which the technology “is getting ahead of the law.” This knowledge, he claimed, gives the company the unique ability to act in the interest of the American people. 

Amodei’s claim that Anthropic represents the American people might seem outlandish. Private companies are understood to act in their own interest, while public institutions are understood to represent the popular will. Sovereignty, as traditionally conceived, is the preserve of states with the ability to govern a territory and people, which enjoy sole legitimacy to pass laws and regulations. They alone may claim to act on behalf of the citizenry as a whole.

But developers of modern digital technologies have long defended an alternative vision of sovereignty. John Perry Barlow’s 1996 “Declaration of the Independence of Cyberspace,” pronounced in Davos at the dawn of the internet age, declared that governments “have no sovereignty” over the online domain. Barlow’s statement reflected the “cyberlibertarian” outlook common among early internet pioneers, who viewed the digital realm as a zone of autonomy from the state. 

This deterritorialized vision of cyberspace ran into the realities of an internet infrastructure built and run within the territories of the very governments Barlow sought to escape. Yet the idea that digital innovators are pioneering a political as well as a technological frontier has never died out. Today, Silicon Valley billionaires tie their technologies to attempts to escape state power, whether in the form of seasteading and private communities in the Californian desert or the more ambitious dream of building a libertarian society in space. 

The language tech companies use to describe their activities is often drawn from that of governments. For instance, in 2020 Meta established its own private “Supreme Court,” an Oversight Board that adjudicated controversial content moderation decisions made by the platform. Anthropic, meanwhile, has its own “Constitution.” 

The most significant ways in which technology companies have moved to privatize sovereignty have been far subtler. As it has grown to unprecedented scales over the past two decades, Big Tech has come to act as a de facto regulator of human existence. As the legal scholar Lawrence Lessig argued in his 1999 book Code and Other Laws of Cyberspace, users are constantly regulated through the very architecture of tech platforms, which condition what we can and cannot do online. The choices made by engineers in setting these limits encode values into the systems, ideas of right and wrong use that enact a regulatory function. 

Seen in this light, Amodei’s claims to act for the public interest are continuous with the de facto role that tech companies have already been assuming for decades. The significance of Anthropic’s run-in with the Department of War, then, may be to vividly illustrate this redefinition of sovereignty, and perhaps to test its limits. The conflict has turned the implicit regulatory power of technology companies into a point of explicit political contestation. 

While the Supply-Chain Risk designation seems like an assertion of traditional state sovereign control, how the contest plays out depends in part on what other options are available to the Pentagon. Defense officials know that rivals like China may be close behind in their integration of AI, and fear losing their lead. Perhaps other companies like OpenAI can fill the gap left by Anthropic, but if Anthropic’s systems are genuinely superior, US leadership may decide that the company’s technology provides an edge it cannot do without. 

Amodei has predicted continued exponential growth in AI capabilities, characterizing soon-to-be-built AI systems as a “country of geniuses in a datacenter.” But who will be sovereign over such a country? This question extends far beyond any one company or administration. Hegseth and Amodei alike invoked American values and the American people in laying claim to this representative role. Traditionally, these values have included wariness of the centralization of political power in any institution. The American people would, presumably, prefer a future in which no powerful entity, public or private, is able to exert sweeping control over their lives. Whether such a future can be achieved remains an open question. 

Conor McGlynn is a doctoral candidate in public policy at Harvard University.
