For much of the past 20 years, the West has been mired in a state of quasi-permanent crisis: the post-9/11 terrorism crisis, the post-2008 financial-economic crisis (which in Europe evolved into a specific sub-crisis: the “euro crisis”), the pandemic crisis and, just as the latter seemed to be waning, the ongoing military crisis in Ukraine, which has morphed into a much wider great-power confrontation.
This is taking place against the backdrop of a wide range of other (partially related) crises: an energy, inflationary, and cost-of-living crisis, a new debt crisis in the making (in Europe), an ever-present social crisis, and a looming climate and ecological crisis—all of which, according to the World Economic Forum, should be understood as components of a singular, massive “polycrisis.” These are largely pan-Western affairs, but there are, of course, countless more localized ones, such as the recurring migrant crisis in certain countries.
Indeed, “crisis” has become such a pervasive and all-encompassing feature of our lives that one may legitimately wonder if this is just the result of a series of unfortunate events, or if there is more at play here. Even before the Covid pandemic, several critical scholars had posited that, under neoliberalism, crisis had become a “method of government.”
But what does this mean exactly?
One influential explanation is the idea of “disaster capitalism,” first developed in Naomi Klein’s 2007 book, The Shock Doctrine. Her central thesis is that in moments of public fear and disorientation, it is easier to re-engineer societies: Dramatic changes to the existing economic order, which would normally be politically impossible, are imposed in rapid-fire succession before the public has had time to understand what is taking place. Hence every natural disaster, every economic crisis, every military conflict, and every terrorist attack is exploited by governments to radicalize and accelerate the neoliberal reordering of economies, social systems, and state apparatuses.
The French Marxists Pierre Dardot and Christian Laval went a step further, arguing in the 2019 book Never-Ending Nightmare: The Neoliberal Assault on Democracy that “neoliberalism largely consists in [a] logic of the self-feeding or, to be more exact, self-aggravation of crisis.” Neoliberal policies, rather than adapting to objective logics imposed from without, “instead strive to construct situations and intensify dynamics that indirectly compel governments to accept the consequences of their own previous policies. And this literally infernal logic leads to the pursuit of policies that further aggravate the situation.” In this sense, rather than disaster capitalism, it would perhaps be more apt to talk of “crisis or emergency capitalism”—a regime which is only able to function by resorting to a semi-permanent state of emergency through the exploitation and aggravation of an endless series of “crises,” often generated by the intrinsic contradictions of the system itself.
Both these explanations, however, tend to accept the framing of these events as “crises.” It is the response to them—or, at most, the logic that led to the “crisis” in the first place—that is criticized, almost never the label itself. In doing so, such accounts obscure the extent to which the framing of whatever event happens to be in question as a “crisis” is part and parcel of the “shock doctrine” itself.
After all, crises are just as much rooted in some objective reality as they are the product of a narrative construction: An event only becomes a “crisis” once it is officially classified as such by authorities (hence a Chinese balloon straying into US airspace becomes a momentary crisis requiring the nation’s full attention, while a toxic cloud of cancer-causing chemicals enveloping a town in Ohio at first barely makes the headlines). In this sense, “crisis as a mode of government” shouldn’t be understood simply as the self-serving exploitation of crises, à la Klein, or as the result of the intrinsic crisis-generating tendencies of neoliberalism, à la Dardot and Laval, but also, and perhaps even more important, as the constant invoking of crisis itself, regardless of whether the situation at hand actually deserves the label or not.
In such a regime, “crisis” is the norm, the default starting point for all politics. Far from being a rational response to an objective reality, this narrative of permanent crisis or emergency should be understood as a way of shaping reality, and more specifically as one of the main tools through which Western ruling elites have attempted to overcome neoliberalism’s intrinsic tendencies toward stagnation and polarization, and its inability to generate societal consensus or hegemony in either material or ideological terms.
In his 2022 book, States of Emergency, the Dutch political scientist Kees van der Pijl traces the roots of this crisis-led, fear-based regime to the end of the Cold War. The disappearance of the Soviet bloc and communism—the adversary that had given unity and purpose to the West for more than 30 years—exposed a moral and ideological void that threw the Atlantic ruling class into an existential crisis, despite the triumphant proclamation of the end of history. Having abandoned the rational class compromise of the postwar “Keynesian” era in favor of an aggressive class war from above, and having no legitimating symbolic reservoir, or “secular theology,” to draw from beyond nihilistic consumerism and individualism, the question for Western elites arose as to how to get the majority to accept being ruled by an increasingly small minority.
The answer they came up with was fear. As the threat of a conventional or nuclear war between superpowers subsided—since now there was only one superpower left—Western strategists increasingly began to turn their attention to other, less conventional threats, ranging from terrorism to large-scale natural disasters. A new paradigm emerged, what we may call “full-spectrum preparedness,” which involved preparing for all sorts of high-impact, low-probability worst-case scenarios. As the French health historian Patrick Zylberman noted in his 2013 book, Tempêtes microbiennes (Microbial Storms), many of these scenarios brought the logic to such an extreme that they effectively went beyond the measurement of probability, veering off into the realm of fiction. What Zylberman calls a “world market of fantasy” was created, also thanks to the emergence, in the same period, of 24-hour news channels and the internet as unlimited information hubs.
While there were certainly various factors driving this trend, such as the expanding power of computing and the growing importance of mathematical modeling, in hindsight there is little doubt that, in the eyes of Western planners, the point of such worst-case scenarios wasn’t so much to prepare against a threat as to raise the specter of one—to create a situation in which citizens, gripped by fear, would submit in the belief that the only way to escape the looming disaster they are told hangs over their heads is to obey government instructions.
As Adam Curtis put it in his 2004 documentary, The Power of Nightmares: The Rise of the Politics of Fear, instead of delivering dreams, politicians now promised to protect us from (often fictional) nightmares. Worse still, to the extent that worst-case scenarios increasingly came to be seen as plausible reasons for making people give up their freedoms, they created perverse incentives for elites to turn such fantasies—or nightmares—into reality. If the future could be imagined, why couldn’t it be scripted?
Among the fantasies being dreamt up by Western security planners, the issue of “biological threats”—either natural or man-made, often in combination with terrorism (bioterrorism)—soon assumed a central role. Hence the American government’s massive preventative investment, beginning in the late 1990s, into research on the potential military use of biological agents such as anthrax and smallpox, referred to as “dual-use” pathogens. In 1997, the Clinton administration initiated the Anthrax Vaccine Immunization Program, under which active US service personnel were to be immunized with an anthrax vaccine (which, it would later emerge, caused serious side effects).
In a telling example of the blurring of the line between fiction and reality in the conceptualization of worst-case scenarios, the following year, Bill Clinton requested that his deputy secretary of defense, John Hamre, read the novel The Cobra Event, which describes a terror attack in which a genetically modified virus with horrendous symptoms is unleashed upon New York. The author, Richard Preston, had been invited, alongside experts in bioterrorism, to a conference held in 1997 by the Infectious Diseases Society of America.
The early 2000s saw a proliferation of “exercises” aimed at simulating the effects of a bioterror attack. The first of these, Exercise Top Officials 2000, held that year (many others would follow), was aimed at assessing the response to a series of geographically dispersed terrorist threats in the United States. It was followed in June 2001 by another bioterror-attack simulation, Operation Dark Winter, organized by the Johns Hopkins Center. Dark Winter already warned against disinformation about medications and other matters endangering public safety; as for the suspension of civil liberties, it recommended proclaiming a state of emergency and that the president make use of his prerogatives under the Insurrection Act.
On that occasion, prominent hawks predicted with great certainty that a real bioterror attack was imminent; Vice President Cheney even took anthrax medication after being briefed by the Dark Winter team. Sure enough, just a few months later came 9/11, which was followed by the anthrax attacks, when letters containing anthrax were sent to several media offices and two Democratic senators, killing five people. How the participants in Dark Winter could have uncannily anticipated an anthrax attack, we don’t know. What we do know is that, even though the attacks were initially attributed to Saddam Hussein, the anthrax most likely originated in America’s own biowarfare lab at Fort Detrick. Nonetheless, the decision was made to vaccinate all US military personnel against anthrax before sending them to Afghanistan or Iraq.
The post-9/11 War on Terror—which rested on the Hollywoodesque, largely fictional account of a powerful network of international terrorists out to destroy the West, with sleeper cells in countries across the world—represented the great debut of this new politics of fear, with its blurring of fact and fiction, forecast and script. In many ways, it provided a blueprint for this new authoritarian management of Western societies: With citizens scared into submission, governments faced relatively little opposition as they swept aside civil liberties and erected increasingly authoritarian national-security apparatuses; 9/11 and the anthrax attacks also seared into the public’s consciousness the notion of bioterrorism.
As the years went by, however, the threat of bioterrorism began to be overshadowed by the prospect of a new Spanish Flu-style pandemic. This purportedly stemmed from a growing concern over the link between the emergence of new infectious diseases in the natural world, environmental change (such as the industrialization of animal farming and deforestation), economic transformations (hence the resurgence of tuberculosis or cholera in developing countries), and growing bacterial resistance to antibiotics. This led to the emergence of concepts such as “health security,” “human security” and “biosecurity,” whereby the definition of security was expanded to include virtually every area of human life, and public health in particular, allowing the state to intervene wherever a biological threat might potentially exist—that is, everywhere.
This heightened attention on “health security” was accompanied by a push toward the internationalization and supranationalization of health policies, with the transformation of “public health” into “global health.” A crucial step in this direction was the third revision of the World Health Organization’s International Health Regulations in 2005, on the heels of the 2002-2004 SARS outbreak, which for the first time made the agreements, encompassing a wide range of rules for the management of epidemics and pandemics, binding for all member states. In hindsight, this transfer of national sovereignty to WHO in the name of biosecurity appears particularly concerning, given that the organization had, to a significant degree, fallen under the dominion of private capital (through the Gates Foundation, for example). Moreover, as Zylberman wrote, global health management also offered a convenient cover for the “imperial pretensions” of Europe and the United States, the dominant states within WHO.
Over the following years, simulations and exercises preparing for the arrival of new pandemics became increasingly common. A crucial turning point in global biosecurity politics, according to van der Pijl, was the 2008 financial crisis. The crisis, and subsequent global recession, brought into stark relief the extent to which decades of neoliberal policies had led to a colossal transfer of wealth into the hands of Western financial oligarchs, who, in turn, had all but captured the political process. The response to the crisis confirmed this—and the backlash was massive. In the years that followed, all indicators of social unrest around the world showed an upward trend: Strikes, riots, and anti-government demonstrations broke existing records in every category during this period. Trust in government, and even more in information, declined in all countries, leading to a “populist” anti-establishment wave that swept across several Western nations.
Western elites found themselves desperately looking for a way to restore order, and justify the increasingly authoritarian, repressive, and militaristic measures needed to remain in power and stifle the mounting challenges to their authority. A new “nightmare” was desperately needed, and it just so happened that after 2008, the results of previous exercises and simulations had begun to be elaborated into very detailed scenarios for a possible state of emergency due to a pandemic or “health crisis.”
Few authors foresaw the rise of “health security” or “biosecurity” as a new authoritarian paradigm and ideology of social control as presciently as Patrick Zylberman. In 2013’s Tempêtes microbiennes, he looked at the social-psychological consequences of the voluntary quarantines instituted in China and Canada in the early 2000s during the SARS-1 epidemic.
What he found surprised him. There was a new force at work, according to Zylberman: an extreme civic spirit, patriotism even. Even though people in quarantine suffered, they nonetheless accepted the measures, and often embraced them. The frequently contradictory instructions issued by the authorities aggravated frustration and unrest, and yet people by and large followed them. The SARS-1 epidemic showed, in microcosm, what is possible when a citizenry can be made to see it as its duty to enforce health measures. Could this have escaped the attention of those contemplating how to address global popular unrest?
It certainly wouldn’t seem so, judging by a report published in 2010 by the Rockefeller Foundation, which looks at various scenarios for the future of the world. Of particular interest is the scenario called “Lock Step,” which describes the arrival of “the pandemic that the world had been anticipating for years,” in response to which governments follow China’s lead in imposing “mandatory quarantine for all citizens” (i.e. lockdown), the mandatory wearing of face masks, the sealing of borders, and so on, leading to “a world of tighter top-down government control and more authoritarian leadership.” “At first,” the report notes of this hypothetical sequence of events, “the notion of a more controlled world gained wide acceptance and approval. Citizens willingly gave up some of their sovereignty—and their privacy—to more paternalistic states in exchange for greater safety and stability.” This echoes Zylberman’s conclusions.
The narrative in the Rockefeller report goes on: “Citizens were tolerant, and even eager, for top-down direction and oversight, and national leaders often had unexpected latitude to impose order in the ways they saw fit. In developed countries, this heightened oversight took many forms: biometric IDs for all citizens were among them.” Of particular interest is the fact that “even after the pandemic faded, this more authoritarian control and oversight of citizens and their activities stuck and even intensified.” Certainly, the report acknowledged that the population wouldn’t accept these restrictions indefinitely, but the changes would have become irreversible in the meantime, and the new regime would be so firmly entrenched that a return to the status quo ante would no longer be possible.
Reading all of that in light of the past three years, one can’t help but wonder: Was the future being hypothesized—or was it being scripted? In the years that followed the publication of that report, the “pandemic-preparedness” industry really took off. In 2016, at the World Economic Forum in Davos, the Innovation for Uptake, Scale and Equity in Immunization project, developed by GAVI, a vaccination pressure group of the Gates Foundation, was set up. The following year saw the launch at Davos of the Coalition for Epidemic Preparedness Innovations—an initiative aimed at securing vaccine supplies for global emergencies and pandemics, funded by government and private donors, including Gates.
In 2018, on the occasion of the centenary of the Spanish Flu, the WEF, along with a number of other relevant networks, organized a meeting dedicated to combating a pandemic that was “almost certain” to occur. A year later, in July 2019, a new transnational body, the Global Preparedness Monitoring Board, released its first annual report. This organization had been convened by the World Bank Group and WHO to prepare the world for “the specter of a general public-health emergency.” The board was mainly aimed at preparing the ground for an international state of emergency, with strict obligations for individual states. In September 2019, a “global vaccination summit” followed, organized by the European Commission together with WHO and featuring industry representatives such as Pfizer and Moderna.
Finally, one month later—just two months before the official start of the outbreak in Wuhan—the Gates Foundation, in collaboration with the Johns Hopkins Center for Health Security and the WEF, hosted an exercise called Event 201, which simulated “an outbreak of a novel zoonotic coronavirus transmitted from bats to pigs to people that eventually becomes efficiently transmissible from person to person, leading to a severe pandemic.” In the event of a pandemic, the organizers noted, national governments, international organizations and the private sector should provide ample resources for the manufacturing and distribution of large quantities of vaccines through “robust forms of public-private cooperation.” Moreover, great emphasis was placed on the need to combat mis- and disinformation by, for example, tightening censorship of social media.
Does this flurry of simulations and exercises in the years and months leading up to the Covid-19 outbreak, which imagined a pandemic very similar to the one that occurred (down to the smallest details, in some cases) and outlined a response very similar to the one that was then taken, indicate some level of foreknowledge on the part of Western elites? I will leave it to the reader to decide that.
What’s clear is that by 2020, the biosecurity complex had burgeoned into a mammoth of terrifying power and proportions, encompassing the world’s largest pharma and biotech companies, the biometrics industry, social-media giants, traditional media conglomerates, national security (military and intelligence) apparatuses, global and national public-health organizations such as WHO, the World Bank’s health division, the US Centers for Disease Control and Prevention and the National Institutes of Health and their equivalents in other countries, private philanthropies such as the Gates Foundation, and trans-Atlantic planning groups-cum-think tanks such as the WEF as important intermediaries between the various actors.
It’s clear that this complex had been preparing for an event such as the Covid-19 pandemic for quite some time, at least a decade, and was ready to lead the response once it happened. It’s also clear that its members understood very well that such an event was likely to lead to a rollback of democracy and civil liberties, and a restructuring of societies along more authoritarian lines—and openly called for this kind of response. Finally, it is clear that there were converging economic and political interests that stood to benefit greatly from an external shock of this kind, as the events of the past three years have borne out.
Ultimately, the issue of foreknowledge is overshadowed by an even more daunting truth: We have entered an era in which Western establishments need to constantly invent new nightmares to maintain their power; and when enough people with enough power start to dream the same nightmare, it’s only a matter of time before it comes true.