Elon Musk’s acquisition of Twitter might fall through. It might mean nothing. My agents have been inquiring without much success into the Tesla boss’s actual politics. All I have learned is that he is definitely not “based.” A Clinton Democrat, claimed one source, with a few pet peeves: unions, censorship…

But if the deal goes through, and the social-media giant becomes what all good people hope for, this will be a philanthropic acquisition. Philanthropy, the most powerful political force of the last century, is the art of turning money into power. A normal investment is the art of turning money into more money. But Musk doesn’t need more money. He could turn Twitter into a nonprofit if he wanted—and arguably, he should.

Turning Twitter back into the “free-speech wing of the free-speech party,” as its executives used to call it, will not maximize profits. With the European Union threatening to steal 6 percent of Twitter’s global revenues if it doesn’t “prevent dangerous misinformation from going viral,” there is no rational business decision besides abject and immediate compliance.

But once Musk closes his acquisition, Twitter doesn’t need to make rational business decisions. It can even lose money. Musk’s Twitter isn’t a money-making corporation, like ExxonMobil; it is a power-making corporation, like The New York Times or the Ford Foundation.


The Vacuum of Democracy

Twitter is unlike The New York Times, which wields oligarchic power directly. Twitter, when it works, creates a vacuum of power, which democratic power fills. By default, Twitter isn’t in any way in control of the portal of pure chaos it creates.


As any Arab despot can tell you, Twitter is the most powerful weapon of democracy ever invented. If Twitter is bad, Twitter is bad because democracy is bad. While there is a real case for this point of view, few dare to make it.

Twitter is a natural cradle of any kind of counter-establishment power—good or bad. If the words “oligarchy” and “democracy” don’t mean “establishment” and “counter-establishment,” these words have lost all meaning.

As Margaret Mead put it, “even a few people with extreme views can create grave harm in the world.” Sorry, that wasn’t Margaret Mead. That was misinformation expert Dr. Brendan Nyhan. This was Margaret Mead: “Never doubt that a small group of thoughtful, committed citizens can change the world.”

You, uh, see the problem. One great example was a 2019 incident in which Facebook foolishly assumed that it was on perfectly solid ground in banning “organizations or individuals that proclaim a violent mission or are engaged in violence.”

Unfortunately, these organizations were in Burma (“Myanmar”). The government of Burma is bad; so organizations that oppose it violently are good. Therefore, experts in misinformation told Facebook, these organizations, rather than being bad “dangerous organizations,” are actually good “ethnic armed organizations.”

Extending regime power over Twitter requires the appearance of neutrality. Policies that are actually just about distinctions between friend and enemy need to be presented as morally objective. Because this fundamental falsehood—that censorship is morally objective, rather than politically subjective—is essential to the process of political censorship, the objective rules of the system must be applied through a tacit or secret subjective filter.


Defending the Vacuum of Power

As the cradle of any new democracy, the goal of the new Twitter should be to provide the best vacuum of power possible, so that the strongest new democratic movements can flourish. It isn’t Twitter’s role to choose between these new powers, or between the new powers and the old. None of them should be able to exercise any power of censorship over the others.

When a social network is ordered to consider the impact of democratic speech, it is generally being ordered to assess the political movement that this speech supports. When that assessment turns out not to be up to the platform at all, but must instead match the views of the old regime, views based not on any objective feature of the movement but solely on who it is for and whom it is against, we are just looking at a glove on the hand of state power. If all power needs to censor is this thin glove, freedom of speech is already a comedy.

These orders aren’t communicated by some Soviet commissar. They are in the air and the water. Power knows how to communicate a deep emotional sense of which side is good and which side is bad. I intuitively and emotionally feel that the Burmese rebels are the good guys—and I have more or less no idea why.


Speech Is Not Action

If Twitter wants to promote free speech, it must absolutely and unconditionally reject the theory that it is responsible for the consequences of the speech on its platform. If it accepts such responsibility, it will become a tool of the powers that be.


Under US law (Section 230 of the Communications Decency Act), Twitter is immunized from the legal consequences of its users’ speech. Twitter has no legal need to care what its users say. But does it have a moral need? Does any such platform have the moral obligation to use its real political power in favor of good and against evil? If so, then freedom of speech on that platform cannot exist.

It is hard to believe in freedom of speech when you also believe that there are two sides, one of which is good and one of which is bad. Almost everyone believes this—which means they believe speech in favor of the bad side is bad. And it is useless for anyone to claim that they believe anything bad is OK.

But if speech is not associated with its consequences—at least, as a convention in certain contexts—there is no such thing as speech “in favor of” a bad side. Refusing to accept that speech is action is the foundational shared illusion that makes the refreshing and productive air of free speech possible.

The truth is that speech is action, and speech is power. But that power will appear in its least distorted and dangerous form if it is exposed to the fresh air of free speech. Realizing that free speech is needed is the same as realizing that new power is needed, and the purpose of Twitter is to be a healthy cradle for every new democratic power.


The Four Interventions

Twitter is a neutral platform, except for four interventions: censorship, moderation, recommendation, and fact-checking. All of these tasks are essential, and the new Twitter under Musk must not abandon them—it must execute them more clearly and capably.

The purpose of moderation is to prevent users from seeing content that they don’t want to see. The purpose of censorship is to prevent users from seeing content that they do want to see. The purpose of recommendation is to arrange content in the order that users want to see it. The purpose of fact-checking is to remind users of the service’s official opinion on factual claims in the content.

These interventions must not be applied equally to all accounts. For small accounts, interventions should be very aggressive, automated, opaque, and cheap. For larger accounts, interventions should be very careful, manual, transparent, and expensive.
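To make the distinction concrete, here is a minimal sketch, in Python, of how the four interventions and the tiered policy might be represented. The threshold, field names, and budget strings are invented for illustration, not an actual Twitter design.

```python
# Hypothetical sketch only: the four intervention types come from the essay;
# the threshold, fields, and budgets below are illustrative inventions.
from dataclasses import dataclass
from enum import Enum, auto


class Intervention(Enum):
    CENSORSHIP = auto()      # hide content users *do* want to see (only where law demands it)
    MODERATION = auto()      # hide content users *don't* want to see
    RECOMMENDATION = auto()  # order content the way users want to see it
    FACT_CHECKING = auto()   # attach the service's official opinion on factual claims


@dataclass
class InterventionPolicy:
    automated: bool     # decided by machines, or reviewed by humans?
    transparent: bool   # is the reasoning shown to the affected account?
    review_budget: str  # roughly what the platform will spend per decision


def policy_for(follower_count: int, intervention: Intervention) -> InterventionPolicy:
    """Small accounts get aggressive, automated, opaque, cheap handling;
    large accounts get careful, manual, transparent, expensive handling."""
    if follower_count < 10_000:  # placeholder cutoff, not from the essay
        return InterventionPolicy(automated=True, transparent=False, review_budget="cents")
    return InterventionPolicy(automated=False, transparent=True, review_budget="hours of staff time")
```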


Censorship

Censorship isn’t pretty—so it tends to masquerade as moderation, recommendation, or fact-checking. This is a lie, and lies are bad. But moderation, recommendation, and fact-checking aren’t bad—when they aren’t cloaks for censorship, which should always go naked.

Censorship is an important obligation of a social-media platform. A platform must censor all content prohibited by law in all jurisdictions that prohibit it. If content maligns the king of Thailand, it must be censored in Thailand. If it endorses Adolf Hitler, it must be censored in Germany. If it contains child pornography, it must be censored everywhere.

Censorship should always be exercised on the consumer end. Suppose you post AI-generated child porn of Baby Hitler with the king of Thailand. No one anywhere can view this profoundly horrific content. Twitter can’t distribute it, anywhere, at all. You still posted it—and that’s on you. Twitter engineering is a big boy and isn’t afraid to store even your repulsive AI-generated child porn, for the benefit of the authorities alone, in an underground vault at the temperature of permafrost.
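As a rough sketch of consumer-end censorship, assume every post carries content labels and every viewer carries a jurisdiction; the labels, country codes, and function below are hypothetical. The only point is that the post is stored no matter what, and the law is applied at read time, per viewer.

```python
# Illustrative only: "censorship on the consumer end" means the post is always
# stored, and legal restrictions are applied when a viewer requests it.
GLOBAL_PROHIBITIONS = {"csam"}  # censored everywhere, for every viewer

JURISDICTION_PROHIBITIONS = {
    "TH": {"lese_majeste"},        # content maligning the king of Thailand
    "DE": {"nazi_glorification"},  # content endorsing Hitler
}


def visible_to(viewer_jurisdiction: str, content_labels: set[str]) -> bool:
    """Return False if the viewer's jurisdiction (or the whole world) prohibits
    any label on this content. The post itself stays in storage either way."""
    banned = GLOBAL_PROHIBITIONS | JURISDICTION_PROHIBITIONS.get(viewer_jurisdiction, set())
    return not (content_labels & banned)


# Distribution is blocked per jurisdiction; the record survives for the authorities.
assert visible_to("US", {"lese_majeste"}) is True
assert visible_to("TH", {"lese_majeste"}) is False
assert visible_to("FR", {"csam"}) is False
```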

In one world, the police can subpoena this shitpost and prosecute you for it. But in another world, 50 years from now, our grandchildren decide they believe in infinite freedom of speech—and release all the censored tweets from the early 21st century.

There are two errors that a platform can make when censoring. First, it can censor beyond the demands of the law—becoming a tool for extralegal power. Second, it can censor on the creator end—not just blocking distribution of the censored content, but taking a para-governmental role by punishing its author beyond the demands of the law. In either case, the platform is being used as a weapon of an informal regime more powerful than the formal regime.

The purpose of a free Twitter is to stop being a tool of oligarchic power. This change will make it a tool of democratic power—which won’t make the oligarchy happy. The new management shouldn’t be surprised by the consequences of this.


Moderation

Moderation without censorship is easy. Consider the problem of “hate speech,” which isn’t at all illegal in the United States but which most American users have no wish to see.

If you are an American who makes a Twitter account, it is safe for Twitter to assume you don’t want to see hate speech. But this is a default preference. You could set your preference to see all speech.

By default, tweets in your timeline that contain hate speech will have a click-through warning. If you aren’t disturbed by hate speech, you will see them as normal. If you are really disturbed by the existence of hate speech, they won’t show up on your timeline. Different users may be differently disturbed by different levels of “hate”—this should be an adjustable knob.

As a user, you should have to click no more than two buttons to make sure your Twitter experience is completely “woke”—or completely nonsexual, or completely nonviolent, or even completely Islamic. Most people aren’t actually this sensitive. Some are, and that’s fine.
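Purely as a sketch, assume every tweet carries machine-assigned scores per category and every user holds an adjustable sensitivity knob per category. The category names and thresholds below are invented for illustration, not a description of any real Twitter setting.

```python
# Invented example of moderation as a per-user preference: nothing here is a real
# Twitter feature; the categories, weights, and cutoffs are placeholders.
from enum import Enum


class Treatment(Enum):
    SHOW = "show"  # appears normally in the timeline
    WARN = "warn"  # hidden behind a click-through warning
    HIDE = "hide"  # not shown in the timeline at all


# Default American-user preferences: hate speech behind a warning, everything else shown.
DEFAULT_PREFS = {"hate_speech": 0.5, "sexual": 0.0, "violence": 0.0}


def treatment(content_scores: dict[str, float], prefs: dict[str, float]) -> Treatment:
    """Higher preference values mean greater sensitivity: a mildly sensitive user
    gets a click-through warning; a very sensitive user never sees the tweet."""
    worst = max((prefs.get(cat, 0.0) * score for cat, score in content_scores.items()), default=0.0)
    if worst >= 0.8:
        return Treatment.HIDE
    if worst >= 0.4:
        return Treatment.WARN
    return Treatment.SHOW


# Turning the knob all the way down ("show me all speech") is a one-line change:
see_everything = {cat: 0.0 for cat in DEFAULT_PREFS}
```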

But how does a platform determine the presence of hate speech? AI can do a lot here, although it should always be trivial to appeal an AI decision to a human. If you post a lot of content that earns this flag, your future posts may be auto-flagged. Again, if that is wrong, a human needs to decide.
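A toy sketch of that flag-and-appeal loop, with hypothetical types standing in for whatever real systems would be involved; the only rule it encodes is that a human verdict is final.

```python
# Hypothetical stand-ins, not a real moderation API: an AI flag can always be
# appealed, and the human reviewer's verdict overrides the machine.
from dataclasses import dataclass, field


@dataclass
class Flag:
    category: str               # e.g. "hate_speech"
    source: str = "ai"          # "ai" or "human"
    upheld: bool | None = None  # set only by a human reviewer


@dataclass
class Post:
    text: str
    flags: list[Flag] = field(default_factory=list)


def appeal(post: Post, flag: Flag, human_review_queue: list[tuple[Post, Flag]]) -> None:
    """Appealing must be trivial: one call routes the AI's decision to a human."""
    human_review_queue.append((post, flag))


def human_verdict(flag: Flag, upheld: bool) -> None:
    """The human decision is recorded and is final."""
    flag.source = "human"
    flag.upheld = upheld
```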

Ideally, shitposters should flag their own posts. If registered shitposters want to make posts that only registered shitposters can read, no harm at all is done to the civilians—who at most notice that shitposts exist, and can even turn that noticing off. But if they troll by pretending to have no shitposting intent and sneaking past the AI, they will be flagged as trolls. Then, even their cat pictures will be treated as shitposts.

Once moderation is distinguished from censorship, the rules can be public. Defining “woke” is fine and can be done objectively and without dissimulation, as long as “wokeness” is recognized as one of many ideals which needn’t dominate all others.

Just as it is normal and healthy for devout Muslims to want to see only halal content, it is normal and healthy for devout progressives to want to see only woke content. But if fundamentalist Muslims took over Twitter and banned all haram content, under a system of vague guidelines which purported to be objective and universal but in fact coincided perfectly with the teachings of the Prophet Muhammad—Twitter would no longer be a free platform for all of humanity. It would be a Muslim platform, only for Muslims.


Recommendation

Recommendation is just automated micro-moderation. As long as recommendation isn’t abused as a censorship device, it is harmless. Its algorithms can be published and should be as simple as possible—so that spammers gain the least by abusing them.
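For illustration only, a published ranking rule might be as small as the function below; the features and weights are invented. The point is that the entire formula is public, short, and contains no term for ideology.

```python
# Invented example of a published, deliberately simple ranking rule.
import math
import time


def rank_score(posted_at: float, author_followed: bool, engagements: int) -> float:
    """Newer is better, accounts you follow are better, engagement helps a little.
    Nothing else enters the score, and the whole formula fits on a few lines."""
    age_hours = (time.time() - posted_at) / 3600
    recency = math.exp(-age_hours / 24)          # decays over roughly a day
    follow_bonus = 2.0 if author_followed else 1.0
    engagement = math.log1p(engagements) / 10    # diminishing returns on likes/retweets
    return recency * follow_bonus + engagement
```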

The purpose of recommendation, like the purpose of moderation, is to make the user’s experience as rich and pleasant as possible, by showing them what they want to see. Once any other motivation enters this algorithm, it is being used for censorship.

Once recommendation and moderation are separated from censorship, their true importance emerges. Recommendation and moderation are curation; they are about quality—from the user’s own perspective. De gustibus non disputandum.

The algorithm should be no more biased for or against ideologies than Spotify’s algorithmic curation is for or against musical genres. Spotify doesn’t decide that its customers include too many metalheads and try to drive them toward bluegrass by putting a subtle thumb on the recommendation algorithm’s scale.


Fact-checking

Unless fact-checking is being used for censorship, it is a wholly benign procedure. Twitter has, in fact, neglected the importance of fact-checking—by outsourcing it to the usual suspects: progressive journalists at prestige outlets and various progressive NGOs.

Twitter has an enormous opportunity: It can develop its own independent capacity to check facts. Unlike Wikipedia, it needn’t rely on official authorities—or at least, it has the resources to constructively and effectively doubt these authorities. Twitter can build its own public mind.

A company the size of Twitter can afford to think from scratch and at scale. It can form its own opinions. For example, it can decide that the 17th Earl of Oxford was the real Shakespeare, and mark all Stratfordian tweets as misinformation.

The process of coming to such a decision wouldn’t be simple—on the contrary, it would require scholarly inquiry and a structured adversarial process. It would take something like a mock trial, but better. It would certainly not be cheap. The purpose of a monopoly is to do things that aren’t cheap.

The most useful design, from the user’s perspective, would be double annotation. A tweet containing disputed facts would be marked with both opinions—the Twitter version and the official version. There is no need for them to compete in any way; and there is no need, except as prescribed by law, to censor tweets that are marked as false.

But clicking through this mark must always reveal a specific explanation of the reasons for this judgment. For instance, suppose Twitter decides that ivermectin doesn’t cure Covid and marks every tweet claiming otherwise as misinformation (in its opinion). A click on the mark shouldn’t send users to a page that says, “Always trust content from the WHO,” but to Twitter’s own from-scratch evaluation of the ivermectin evidence, which should be at least as convincing, even to a specialist, as anything in the medical or scientific literature, while reserving the right to differ with that literature.
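A sketch of what such a double annotation might look like as data, with placeholder names and URLs standing in for anything real:

```python
# Placeholder types and URLs, not a real Twitter feature: a disputed claim carries
# both the platform's own verdict and the official one, each linking to a specific
# written rationale; the annotated tweet itself is never removed.
from dataclasses import dataclass


@dataclass
class Verdict:
    source: str           # "twitter" or an official body such as "WHO"
    label: str            # e.g. "misinformation (in our opinion)"
    explanation_url: str  # must resolve to a specific, from-scratch evaluation


@dataclass
class FactAnnotation:
    claim: str
    twitter_verdict: Verdict
    official_verdict: Verdict


example = FactAnnotation(
    claim="ivermectin cures Covid",
    twitter_verdict=Verdict("twitter", "unsupported by our review (opinion)",
                            "https://example.com/twitter-ivermectin-review"),  # placeholder URL
    official_verdict=Verdict("WHO", "official guidance differs",
                             "https://example.com/who-guidance"),              # placeholder URL
)
```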

Obviously, this level of independent fact-checking—which requires Twitter itself to perform an arbitrarily deep reality check—is extremely expensive, and extremely hard to do. At least, it is hard to do well; and if done poorly, it is worse than useless. Then again, these are also characteristics of a rocket launch.

Musk might end up doing hardly anything with Twitter. He might appoint a weak CEO who can’t get even his shallowest ideas—an edit button!—through. He might get bored with its bureaucracy and quail in the face of its officious Ministry of Trust and Safety.

Or he might return it to its values of 2015. That is as much as most people hope for. But no clock can be turned back. The opportunity before him is even greater—to set out a clear new vision of free social speech. Musk’s new Twitter could be more important than all his cars, rockets, and tunnels combined.

Curtis Yarvin writes at Gray Mirror.
