The notion of viewing the government as a corporation is foundational for NeoReaction and Absolutism, having first been proposed by Mencius Moldbug and presently being revisited by Imperial Energy. The government is in the security business: its customers (formerly known as “citizens”) pay a fee (formerly known as “taxes”) and the government provides internal security for property and person, and external security from, presumably, the other security corporations in the world—or, perhaps, from more primitive and therefore maybe more dangerous state organizations. The idea has its roots in libertarian thought. It might gain more support from the recent work of political scientist David Ciepley, who in one essay argues that the framers of the US Constitution very deliberately constructed the constitution as a “charter” and the government as a corporation. According to Ciepley, mainstream political thinkers through the 19th century were perfectly aware of all this, and used the words “charter” and “constitution” interchangeably. This argument regarding the USG is part of a larger argument Ciepley has been making, perhaps most prominently in an American Affairs essay, about the fundamentally anti-liberal character of the corporation. Contrary to liberal and libertarian accounts going back to Adam Smith, which see the economy in terms of contracts entered into by individuals (accounts more recently updated by Milton Friedman, who misrepresented the corporation as owned by its shareholders, causing all kinds of mischief), corporations are fundamentally public-private mixtures, established by the state and rooted in medieval social forms—and these institutions, not contractually based partnerships, dominate the modern economy.

Ciepley’s argument regarding the US founding is a complex one. He rejects the notion that founders like Madison and Hamilton had “social contract” theories, whether those positing a covenant among a people or those positing one between the people and a ruler, in mind in theorizing the new order they were establishing. They knew how preposterous such theories were. They were trying to establish a charter for a government, a corporate “person” that, like a corporation, would have powers limited to those enumerated in the charter. They modeled this new government on the state governments, all of which had, in fact, been corporations chartered by the British government. Like the shareholders of a corporation, the people could vote for officials filling the positions established by the charter, but would have no role in governing—and, if they were to make demands that violated the terms of the charter, those demands should be ignored. He even shows how the practice of judicial review evolved not out of some pure constitutional logic but out of the role of the sovereign in rejecting policies of the corporation that violate its charter. But this is where the problem for the founders lay—if the government was a chartered corporation, who chartered it? Corporations are chartered by the sovereign—but the sovereign, the British Parliament, had just been overthrown. The “people” had to be sovereign, but what did that mean? A kind of social contract theory gets snuck in through the back door here, as some constitution of the people as a people must be retrojected back into the distant past. Developments within ancient and medieval theory helped here, as the Roman emperors legitimated themselves by claiming a one-time (and of course irrevocable) donation of power to them by the “people”; this theory, mostly dormant in Roman history itself, was picked up and activated by those critical of the medieval European kings.

This opens all kinds of very interesting problems, because in this conception popular sovereignty is essentially a cipher—the sovereign is the original source of legitimacy, and the basis upon which the acts of the government can be criticized, but can’t actually do anything. It’s pure negation, which is the way imperium in imperio works. In a sense, all modern political theory is an attempt to give some content to what is almost a mathematical term introduced to make an equation work—it’s an ideal site for power conflicts because anyone can introduce anything into it they want. The American founders were acutely aware of these dangers (I don’t share Ciepley’s awe at their solution, but his argument is so powerful that his admiration for them rubs off), and tried to present the American people as a kind of instantaneously dissolving sovereign: they assembled in a formal, recognized manner, on the model of, say, a town hall meeting called by the local authorities (of course all this must be recognized after the fact), in order to establish the constitution, and then recede into quiescence and let the government do its work. Americans still participate in government, but as individuals voting, promoting candidates, arguing about ideas and policies, etc.—not as the sovereign. They can resume their sovereignty in a way accounted for by the Constitution itself—the amendment process—but is that really sovereignty? If the charter of a corporation contains a provision allowing the shareholders to modify some element of the charter, do the shareholders thereby become sovereign? Well, maybe, because if they can modify one element, they can modify two, and if two, three, and ultimately the entire charter. Eventually they would have to finish the “amending” process and become passive sovereigns once more. This is quite different, though, from a sovereign who has chartered the corporation from the outside, and who has chartered many other corporations besides this one.
The shareholders or citizens all benefit, or perceive themselves as benefiting, in different ways and degrees from the operation of the corporation. To get to the point of a constitutional convention or some other mechanism by which the charter is to be overhauled the divisions must be running very deep among the community—indeed, since everyone knows it can get to this point, the very possibility would be a source of division that many within the corporation would have an interest in inflaming. And this is for the reason I gave above: we are dealing with what is really a phantom sovereign, an empty center which those occupying different positions within the actual sovereign can struggle to fill. So, the process of everyone claiming diverse and incompatible forms of sovereignty while being unaccountable for the consequences of such claims in the actual operations of sovereignty never ends.

Any conceptualization of the government as a corporation, then, has to deal with the question of who has chartered the corporation—it’s enough for a business partnership to have customers, but a corporation, an institution that transcends the lives of those who run it and resists any effort by participants to fold it up by “exiting,” must have a charter, from a real, not notional, sovereign. This is why I think both that the corporate form is the ideal form for the absolutist state and that the state itself cannot be a corporation. (Ciepley points out that most of the European states were in fact corporations, but since that is what allowed the phantom sovereignty to be slipped in, they are not to be emulated in that regard.) Chartering corporations of all kinds—and here the medieval and even ancient roots of the corporation are important—religious, educational, scientific, exploratory and, of course, profit-making businesses—is the best way for the sovereign to recognize socially relevant and beneficial activities and scrutinize them in the most economical and non-intrusive way. And, as Ciepley points out, the corporate form itself is consistent with all kinds of internal governance—to his credit, it’s very hard to get a sense of Ciepley’s own politics, and I sense they wouldn’t fall very clearly in one place along the left-right axis, but he does acknowledge the viability of worker participation in some forms of corporate governance—as a way of helping keep the corporation focused on its long-term prosperity, rather than turning a quick profit for shareholders.

The corporate form has obviously lasted so long, through so many social transformations, because it is an extremely reasonable mode of organization. It is especially remarkable that the corporation has persisted in spite of its being in absolute contradiction to liberal principles—the Enlightenment liberals, and liberals since then, have wanted to get rid of or at least reduce to liberal imperatives the corporation, that remnant of feudal governance, with its fixed hierarchies, its being a quasi-law unto itself, its governance through “status” rather than “contract.” The Left has always been well aware of and suitably outraged by these features of the corporation—they’ve never quite been able to give the abolition of the atrocity of limited liability the high profile they had hoped to, but it’s still there, lurking in the shadows, although perhaps now more for purposes of blackmail than any real transformation, as the Left has learned to work its will very well through corporations. Ciepley in fact agrees with the left (and, in fact, some—especially pro-Trump, interestingly—sections of the right as well) in condemning the Citizens United decision. He thinks that, as entities chartered by the states, corporations should not have the rights given to natural persons. But perhaps the problem is that we still think in terms of “natural persons”—Ciepley doesn’t see any problem with the public-private distinction as such, he just thinks that corporations straddle the divide. He also thinks that corporations can be liberalized and democratized—for example, the free speech rights granted to citizens could be extended to employees of corporations. But this suggests some uneasiness on Ciepley’s part with the undemocratic character of corporations. We could more easily argue for pushing the needle in the other direction, toward the corporatization of the rest of social life.
While the whole notion of free speech, free assembly, religion, and so on, is becoming increasingly inapplicable in public life, it seems particularly ridiculous to try to impose it on corporations. You want your employees to speak freely about problems they notice in the engineering design of the latest product; and you want them to shut the hell up about gendered bathrooms. What do we need “people” in general to speak freely about? As chartered corporations, shouldn’t towns be allowed to prevent their public spaces from being taken over and defaced by “protesters”? If these public corporations need public input into their decision making, they can solicit it in their own way. Now, the interesting thing about Citizens United was that it wasn’t a business, but, rather, a corporation formed for the purpose of making a movie criticizing Hillary Clinton. Ciepley answers the question of why corporations like the New York Times should have free speech by noting that the Constitution explicitly establishes freedom of the press, but what is the press? Whatever the sovereign says it is, it seems to me—if I get together with a couple of friends and form a corporation to make gifs ridiculing prominent public figures, we’re the “media” just as much as the Times, NBC, CNN, and the rest—and our charter will reflect that our purpose is to enrich public life through satire. So, rather than saying that corporations should not participate in public life, because they are not “natural persons” with rights, we should say to “natural” persons to de-nature themselves, incorporate, get a charter, and enter public life on terms agreeable to and with rights granted by the sovereign.

Corporations have been so successful (“adaptable”) because they presuppose an absolutist ontology. They presuppose a structured hierarchy prior to the individuals that will enter that hierarchy. We can ordinarily assume that those who originate the corporation and first acquire the charter will themselves fill those roles—perhaps that would often be stipulated in the “application” (much like the US Presidency was designed with its first occupant in mind)—the corporation will be designed to perpetuate that originary relationship and purpose. That’s absolutist ontology: any enterprise has a founding and a founder; the founder has “seconds” of various kinds (a “board”); and the enterprise is then ready to mobilize people and resources. But in an incorporated world, what kind of organization will the sovereign preside over? What kind of non-corporate organization will even be conceivable? The corporation institutionalizes, rationalizes and “routinizes” the founding; the sovereign retains the “charisma” of the founding, and is staffed by those who prefer the “team” to the “roles,” the anomalous to the rule-governed. The sovereign would mostly be chartering and inspecting the conformity of corporations with the terms of the charter—he would need a team of “generalists.” How to select the sovereign himself is a problem, because there’s no reason to assume a hereditary monarch will be up to the task. Maybe some kind of rotation of the leading CEOs themselves, with each choosing his own team. Every corporation has those with abilities, ambitions and visions stifled by the institution—sometimes, of course, they should be stifled, but the sovereign would want to staff his own team with such “rogues,” who are more interested in innovation and excellence than “playing ball.” They must also be the people most interested in secure sovereignty.

Power, Media, and Counter-Algorithmic Praxis

Eric Gans published an essay titled “On the One Medium” in a book on Girard’s mimetic theory and media (Mimesis, Movies and Media, 2015) that I just had a chance to read and that is worth discussing here. Gans argues here that the internet is becoming the one medium that will subsume all others: text, video, cinema, music, etc. Other media may continue to exist for reasons of convenience, but everything will be convertible into the one medium, and will therefore be thought of and composed as convertible. This implies the erosion of the integrity of the other media, and their current modes of presentation: Gans gives the example of downloading and binge-watching a TV series, which makes it indistinguishable, other than in terms of time, from watching a movie. This erosion is furthered by the capacity, within the one medium, to modify and mix different products of different media—Gans alludes to the implications of this capacity indirectly by discussing an effort at UCLA to impose a licensing agreement on university journals allowing “users of those materials, once the original source is referenced, to ‘tweak, remix, and build upon’ the materials they contain.” It’s easy enough to imagine what might be done by integrating text or “performances” of scholarly articles into music videos—just remember that Hayek vs. Keynes rap contest that was current a couple of years ago. Gans also points out the fragility of the medium, based as it is upon advertising revenue and, even more importantly, like all markets, “on political systems, with peace enforced by arms.”


There remain two sets of phenomena that cannot be reduced to the One Medium because they depend upon an immediate relationship to their public: performances on the one hand, and art-objects on the other. Students of GA will recognize the two essential components of the human (cultural-representational) scene: the sacred central object and its sacrificial/alimentary substitutes, and the peripheral human group that surrounds the center, celebrates and consecrates it, and eventually, in a typical rite, takes nourishment from it.

Performances can be recorded, of course, and events can be hosted on the internet, but the point is that they can be recorded and are therefore “always already” recorded and therefore no longer dependent for their reality upon an original set of witnesses. But all of these recorded performances are still dependent upon an original live performance, or at least the existence of individuals capable of giving live performances—and if there are people capable of giving live performances, there will be a demand for such performances. So, there is something irreducible about performance, as we can see even more forcefully in the sphere of ritual. Could a baptism be performed online, with the priest in one place and the infant in another? Some actions, to become real, require something like the laying on of hands. Since we are mimetic beings, human interaction grounds our world in a way simulation can’t—Gans uses the example of chess, pointing out that we now have computers that can defeat any human in the game, and yet we still hold human tournaments while no one would have the slightest interest in a chess match between computer programs.

In his discussion of the art-object, Gans notes that the existence of the One Medium places a premium on work that takes up physical space—in economic terms, “real things,” made simply to be displayed in front of a live audience, become “scarce.” It seems to me a similar argument would hold for performance, although Gans doesn’t pursue this—that is, those performances that are likely to become the most privileged, the most esthetically pleasing, are those which are most resistant to reproduction. Going to a live concert and watching it later on the internet are not the same thing, but the performer may not do anything different in the live performance than he does in a performance directly recorded to be shown on the internet—and if the performers know that the concert is destined to end up online, they are likely to minimize the “liveness” of the performance. Unless they don’t, and decide to maximize the difference of each performance, and make the performance as dependent as possible on the live audience. Of course, the best way to do that is to erase the boundary between performer and audience, as in some forms of experimental theater. But in that case, why have a bounded, formal event in the first place—the logical conclusion of this line of reasoning is for performers to create scenes and events out of the material of everyday life, in the midst of everyday life, the purest example of which is the “happening,” a form of art developed by Allan Kaprow. These would be events that no one would know to record in the first place.

Now, to push things a little further, if “happenings” become the most valued performative esthetic, physical interactions between people, unrepeatable and unreproducible events, are going to more and more approximate happenings. In other words, we will more and more strive to give them a ritual character, by introducing constraints that operate as notes of deferral that are explicitly marked as such. The individual who disrupts a scene, however gently, implicitly makes himself a potential sacrifice or scapegoat—anything that goes wrong from here on in can easily be blamed on the disruptor. Your reason for wanting to attempt this, nevertheless, is that you detect some “imbalance” or latent and dangerous set of resentments in the scene; making yourself a potential target of those resentments is a way of defusing them. The risk may be worth it, because if you become a skilled “happener,” you can become a very valued person. You might be able to parlay such a skill and such a(n initially minor) celebrity into YouTube fame, thereby creating a dialectic between the simulacral internet and the irreducibly performative. The former tries to capture the latter, which develops new strategies of evasion and in turn informs new meming strategies.

We have a good reason to theorize such a dialectic. No one can be unaware now of the irrelevance of the supposed liberal freedoms such as speech, religion and assembly. Power is becoming more naked, and seemingly more desperate. It has always been the case that you’re not really allowed to be illiberal in a liberal order—a more secure liberalism could choose not to press the point, though. But the simulacral nature of liberalism itself has caught up with it—liberalism makes no sense outside of the liberal/illiberal binary. What is illiberal is centrality; what is liberal is resistance to centrality. There must be some law of acceleration determining the speed with which new modes of centrality and resistance to them are discovered. There must also be some way of predicting what will come next. “Lookism” as Nazism, with a ban on any reference to a person’s physical attractiveness that could even implicitly suggest the lesser attractiveness of others? Borders as apartheid? Both ideas have been floating around—maybe it’s time for one or both to catch fire. Does anyone dare laugh, or claim to foresee the limits of new victimary offensives? Liberalism has become compulsive: it must generate new offenses.

The internet and social media have accelerated the process by creating, as Gans pointed out not too long ago, a new and devastatingly effective mode of scapegoating; at the same time, the absolute binaries generated by victimary politics are fodder for the creation of new algorithms, which is the manifestation of power by the internet monopolies. Developing an algorithm for identifying “white supremacist” websites, blogs, videos, etc., is precisely the kind of task Silicon Valley is prepared for. You need input from organizations like the SPLC and ADL—they will help you develop keywords, phrases, verbal patterns and other markers to look for. But the smarter the computers get the dumber the people get. There have already been leftist websites complaining that they have gotten caught up in the censorious sweeps for heretical material by the Inquisition. The computers can’t distinguish between articles written by “white supremacists” and those written about “white supremacists” by their enemies. And, increasingly, neither can people. It is becoming less and less possible to depend upon people distinguishing between you saying something, and you reporting that someone else said it. If it’s coming out of your mouth it’s all the same. I’ve noticed the same kind of intellectual collapse in related areas: if you ask people about the legality or constitutionality of things like giving legal status to illegal aliens or legalizing gay marriage or transgender rights, they increasingly talk about how they care about illegal alien children, have gay friends, and think transgender people don’t bother anyone. In other words, admittedly counter-intuitive distinctions between what you are formally allowed to do and what you should do are becoming unintelligible. Just as, if the words come from you, they are your words, so you can now only mention an individual, practice or situation in order to approve or disapprove of it.
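
The use/mention collapse described above can be exhibited with even a toy filter. The sketch below is purely illustrative (the blocklist phrase and both sample texts are invented; no real organization’s blocklist or API is implied): a filter built on substring matching flags a document that advocates a blocked phrase and a document that merely reports on it in exactly the same way.

```python
# Hypothetical illustration of a naive keyword-based content filter.
# The phrase and sample texts below are invented for the sketch.

FLAGGED_PHRASES = ["heretical slogan"]  # stand-in for a supplied blocklist


def is_flagged(text: str) -> bool:
    """Flag any document containing a blocked phrase, regardless of context."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in FLAGGED_PHRASES)


advocacy = "We proudly chant the heretical slogan."
reporting = 'The group was condemned for chanting the "heretical slogan".'

# Both use and mention trip the same filter: the distinction between
# saying something and reporting that someone else said it is invisible
# to substring matching.
print(is_flagged(advocacy), is_flagged(reporting))  # True True
```

Distinguishing use from mention requires modeling the attributing context (quotation, reported speech, criticism), which is exactly what a marker-matching approach omits.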

This massive devastation, a kind of wiping out of volumes of mental “programs” which must end up leaving much of our cultural inheritance unintelligible, actually provides an opening for realist/formalist politics. We have no problem with simplicities: the media talks about Russia because they want to destroy Trump; the Democrats promote immigration because they hate white people; feminists talk about “rape culture” because they want revenge against men; blacks commit more crimes because there are more criminals among them, etc. If the media gets their way, it’s because they are more powerful than those who didn’t get their way; if some part of the US government subverts some institution or country it’s because they are screwing some other part of the government, or some other agency. We can hack away at all the vaporous talk of equality, democracy, freedom, etc., identify the workings of power, and presuppose the real binary of order vs. disorder. But it would be a very good idea to learn how to do all this under the radar of the algorithms of the Inquisition. We make texts and events say what needs to be said by introducing disruptions and interruptions into them. Some familiarity with Twitter makes it clear that lots of people are already getting very good at this kind of thing. It would be very hard to police all the variations of the “guy checking out girl”/”distracted boyfriend” meme. Steve Sailer is a masterful, subtle reader of mainstream texts, even if he himself is already too well known to fly under the radar.

The simplest way to develop such an algorithm-resistant praxis is to speak and write as if you are doing nothing more than taking orders from those in power, and explicitly pointing out that you are doing so. “In such a case, as we have been repeatedly warned, we should look out for someone who might notice…” A lot will depend on how you have described the “case.” Of course, we want to theorize openly and have more straightforward discussions, and we’re probably not all going to be rounded up right away, once and for all. But we can have these back-up, “happening” discourses, predicated on an analysis of what would subtly disrupt or interrupt a power-approved event, on or off line. Simply indicating, regularly and casually, that the instructions, while universally felt, are not all that clear, will have a powerful and cumulative effect. In this way we prepare the transition from mock (but actual) obedience to chaos generating sovereigns to real obedience to a patron capable of assessing value, and, finally, to a genuine sovereign. And in the process we will do our part to reintroduce humor, irony, complexity, self-reflexivity and distance back into the culture.

The Generativity of Deferral

The question it occurred to me someone might ask after reading my last post was, “can’t there be too much deferral?” After all, you eventually have to eat, or respond to a threat (or blow), right? You can’t commit to infinite deferral—the Hunger Artist of Kafka’s story dies at the end. Such questions emerge from an understandable misunderstanding of deferral, the more advanced forms of which allow for plenty of eating, drinking, lovemaking, fighting (where necessary) and anything else needed for a full human life. As I’ve mentioned, the immediate effect of deferral is not an intolerable feeling of privation, since deferral emerges in response to accumulating desire more than to need (it is not an increasingly imminent threat that makes you angrier as the argument with your spouse intensifies)—rather, the effect is of a new world opening up. Threats and rivals become collaborators and potential friends; the source of desire is transfigured. On the originary scene itself, according to Gans’s hypothesis, the object at the center is divinized: it has saved the community by “commanding” them to let it(self) be. A range of other possibilities emerge: the point of contention between friends, spouses or co-workers can become comical—how could we have gotten so angry over that! Humor, or anything else that enables us to convert a source of contention into a new way of looking at something, derives from that divinization on the originary scene. But if the telos of humanity, and therefore our highest priority, is to bring about such conversions, aren’t those who adopt that telos as their own at the mercy of those who defer only so much as is necessary to turn themselves into a cohesive production and fighting unit? Isn’t being willing to hit first an insuperable advantage?

Such a question easily emerges if we neglect, as I have in fact been doing, the function of the sign and of language more generally in “distributing” the world. How, indeed, does the new community on the originary scene get from the moment of deferral in which they stand in front of a divine benefactor, in a circle no one can break except on pain of restarting the violent convergence, to actually approaching and consuming the object? Here, I may see things somewhat differently than Gans. Gans hypothesizes a “sparagmos,” a violent tearing apart of the central object driven by resentment at the object for “refusing” itself, however momentarily, to the group’s appetite. In this, Gans stays very close to both Girard and traditional accounts of sacrificial rituals (the best known example of which is probably the Dionysian frenzy central to Euripides’s The Bacchae). Subsequent to the sparagmos is a repetition or replay of the originary scene making use of the remains of the devoured object—this is the origin of ritual. I see no basis or need for questioning any of this. I do wonder, though, what, in the course of dropping the restraint acquired in the moment of deferral in favor of a savagery that (because resentful) is more than animalistic, prevents the members of the group from turning on each other once again as they scramble for their respective pieces of the meal. It seems to me that the sign formed on the scene of deferral must play this role. Whenever the aggression toward the central object threatens to spill over into renewed intra-group aggression the members of the group would “flash” the sign, or in some manner signify a reversion to the restrained and pacific posture modeled on the originary scene.
This directs attention back to shared destruction and consumption of the object, and maintains some manner of “acceptable” distribution of the “proceeds” of the scene—not an equal share, surely (the larger and faster members would no doubt exploit their advantages), but close enough to prevent a breakdown of the accord accomplished. And this use of the sign then continues in the ritualistic phase of the entire event, making the re-enactment (which can subsequently become part of the sign preceding the sharing of the object) possible.

What this means is that extending the scene of deferral is not primarily a question of waiting longer to eat, or strike back, or enjoy anything. To some extent that’s necessary—you can’t have a family dinner if the kids are standing around in the kitchen grabbing food as soon as mother takes it out of the oven. They have to sit and wait at the table. But sitting and waiting until 7 doesn’t make them more civilized or moral than if they get served at 6. What is important is that the meal is properly “framed”: everyone is at the table, distractions are eliminated, some kind of sign (like saying grace) that it is time to begin is given, everyone uses their utensils, people don’t reach across the table to grab food off others’ plates, etc. The same holds for self-defense. The one who lets himself get hit ten times is not thereby more civilized or moral than the one who strikes back after the first blow, or even pre-empts that blow. The question is whether you act in accord with the social means, resources and rules for defending yourself. Your confrontation is framed through a narrative that will be reconstructed later, even if only by the participants in recounting their actions to themselves. Terms like “necessary,” “legitimate,” “excessive” among others then come into play, along with “fictional” narratives playing out alternative outcomes (could I have avoided the entire situation in the first place?). All these terms and mental acts are products of centuries and millennia of deferrals, and your ability to use these terms in a way that would meet with the approval of those in command of the social resources for managing these kinds of instances is evidence of your having “extended” the scene of deferral. Insofar as more restraint than you showed was, in fact, called for, that will be made evident by these narrative reconstructions, and then an extension of the scene of deferral is displayed in others’ and your own capacity for learning.

So, there can’t be too much deferral because there can’t be too much differentiation and attention control. Every time you don’t do something, especially the obvious and easy thing, you multiply the possibilities for doing other things, and imagining the consequences of doing them. I mentioned in my previous post “infinite” deferral, which is important, because that is implicitly distinguished from the extremely patient predator who nevertheless cannot be infinitely patient. There is an important moral distinction here. What can be infinitely deferred is the degradation of the center. This is in fact the foundational or constitutive deferral, and one that needs to be renewed with each institutional and civilizational transformation. There is the figure at the center and there is the center. The king is not God, but the king or sovereign is the earthly means by which deferral is maintained. To obey the king is to recall or to retrieve the founding of the kingdom, while the founding of the kingdom is retrieved in its differentiation from the founding of humanity. We defer our resentments towards the king and those who rule over us as we obey the king to defer our resentments towards those whom we might accuse of occupying our place. These deferrals are never accomplished once and for all, and are continually institutionalized as deferences we owe each other. Our relation to the figure at the center is our attempt to have that figure convey information from the center: to instruct us by example in the arts of deferral. The abstraction of the sovereign from rivalry provides a model of action free from resentment: regardless of the actual motives of actual sovereigns, to the extent that they are sovereign the absence of resentment can be imputed to them. In fact, obeying them as if they are free of resentment is the best means of encouraging them to become more sovereign.

The Modernity of Absolutism

The notion of sovereignty reaches back, in various forms, to distant antiquity, as does the assumption that the monarch exercises complete power, unlimited by law, but the absolute right of kings, in the Western tradition, is only explicitly stated and defended in early modern Europe, by apologists for the absolutist sovereigns then emergent. Kingship was, through the middle ages, bound up with a whole network of rights and responsibilities which served to limit its power both explicitly and implicitly. At the same time, the king was the king, to whom all owed loyalty and obedience, so we could say there was some confusion there. In that case, the establishment of absolute monarchies, along with theorists defending them, in particular Robert Filmer, served as a genuine clarification of sovereignty. You can define sovereignty in such a way as to subtract everything personal from it, and that may be how it looks from the outside, but rule, at any rate, must be personal. A decision can be disguised as a corporate affair, but ultimately someone has made it, and all human activities and institutions are the results of decisions. We could see all of the social sciences which replace decision with “process,” “structure,” “interaction,” and so on as evasions. And with good reason—to say that something happened as a result of a “process” means it’s out of our hands and we don’t have to fight about it. We could, then, see something moral in this evasion, insofar as it is a mode of deferral; but it is a marker of moral immaturity, like telling children the tooth fairy will come to make them stop crying—moral maturity would involve examining the ways we might best make our own decisions so as to preserve or reverse the decisions of the past.

But while Filmer’s argument for the absolute power of kings is really exemplary and a model of reasoning, contemporary absolutists find the modern absolutist monarchs to be highly problematic. They centralized power by using the “people” as a battering ram against the middle orders, the nobility that exercised countervailing power (they could withhold funds and soldiers needed for war) and the Church that considered itself entitled to determine the legitimacy of the king. In doing so they demolished the entire traditional moral order that situated individuals within institutions, with well-defined roles, and set us on the path where there is no public morality other than screaming for a larger and more intrusive state to grant more equality by punishing those who seem to believe that there is anything more important than more equality. Was there another way that the “clarification” of absolutism could have been accomplished, though? This is obviously a relevant question for those interested in a similar clarification today. Perhaps that’s the wrong question—after all, we can’t rewrite history. Maybe there was no other path then, but there are paths now. That would still mean we should learn from history, if for no other reason than to help us identify the preferred paths. What, exactly, do we think a more overt absolutist order would accomplish? If we could identify lots of things—ideas, institutions, practices—that are “in the way” of establishing absolutism, surely they are not all in the way in the same way, much less to be gotten out of the way in the same way. Absolutism implies some kind of centralization—what kind of centralization, then, does not require that all on the margins have exactly the same relation to the center? What kind of absolutism would preserve and even enhance differentiation and embeddedness?

Originary thinking provides us with a model for moral development. At the origin of humanity lies representation as the deferral of violence. There’s an object that everyone in the group wants; the fact that everyone wants it, and everyone knows that everyone wants it, makes everyone want it even more. They want it so much that the normal pecking order of the higher animal group breaks down—the alpha animal can fight off any single contender but is helpless against the simultaneous convergence of all upon the center. Some new means of restoring order is needed: that new means is the sign, in this case the gesture by which all members of the group come to “communicate” to one another that they will defer appropriation of the central object. We now have a configuration: we all pay attention to something at the center. We pay attention to it rather than trying to appropriate it, and language is our way of letting each other know that is what we are doing. We can imagine that the first, foundational, instance of deferral was very short—as Eric Gans suggests, maybe no more than a brief hesitation preceding a more orderly, or at least “framed,” shared consumption of the object. In that case, moral and human development would involve stretching out that moment of deferral: a group that could defer appropriation for a couple of minutes would be more “competent” than one that couldn’t defer for more than a few seconds. And then the group that could defer for an hour would be even more competent—and would find it easy to conquer the less continent groups.

This greater competence comes, in the first instance, from a greater control over reactions and the development of a greater range of responses to the actions of others: think about who would win a confrontation between someone who feels compelled to respond directly and completely to every insult, every slight, and someone capable of seeing those insults and slights as baits to which one is free to reserve a response. It also comes, though, from the greater differentiation of signs that results from sustaining, shifting and manipulating attention. Language is essentially us getting each other to pay attention to things. The group that can defer appropriation for an hour will use that time to talk about a lot of things—they will notice things about the object, about how it came into their possession, about one another’s relation, or mode of approach, to the object, about the difference between this scene and previous ones. The human vocation is to continue extending the act of deferral, ultimately until infinity. Remember the Greek proverb: call no man happy until he is dead. That itself memorializes a history of deferral, through which rather than seeing human life as bound up with the immediate mimetically generated fears of rivals and ancestors and the constantly shifting “scorecard” in one’s struggle with them, it becomes possible to see a life as an ethical whole. But we could just as easily say “call no man happy until all the possible ways of understanding happiness have been exhausted,” which is to say, never. It’s very funny to watch some online disputes, for example in the comments section of blogs, where commenters harangue, ridicule and sometimes even threaten each other, in a style of communication that has its roots in oral communication, where one side will best the other right now to the acclaim of an audience. 
I can’t say for sure what works best for what purposes here and now (and I like a good meme as much as the next man), but eventually people will start thinking in terms of using these very extended lines of communication to intervene in long term ways in broader communications and institutional networks. Some people are surely doing this already, and seemingly short term strategies (like memes meant to humiliate) may very well be part of longer term strategies. But that would mean you have trained yourself to not really believe in the meme you are deploying, except in the sense that you “believe” in the arsenal you are maintaining.

So, we can say, in a preliminary way, that the centralizing imperative of absolutism is better directed against the entrenchment of lesser modes of deferral and in favor of more extended forms. We can see evidence of the degree of deferral attained in the ways communities assign responsibility. A community that attributes a plague to a microbe that can be isolated in a pool of water used as a drinking source has attained a higher degree of deferral than a community that blames the plague on a priest’s failure to perform the prescribed ritual properly. This is not just a question of knowing that science provides us with truth and that rituals don’t really have any effect on the natural world. It’s a question of whether the communities involved have suspended their desire to assign responsibility so as to consider a range of possible “causes.” A community that blamed the plague on its own failure to maintain justice in its courts would be just as wrong as the community that blamed the priest, but it would be exhibiting a higher level of deferral because rather than directing attention in the least resistant and most “satisfying” way it would have thought in terms of distributing blame, and finding a cure not in murder but in institutional repair. (Such a community would probably be more likely to find its way to some notion of “public health.”) Now, this approach doesn’t necessarily make for easy decision making and the determination of moral distinctions (we could imagine a very—but not infinitely!—patient and very merciless predator, for example), but these are the terms on which a serious moral discussion can be had, and we could say that in all uncertainty over decisions we could sort out the imperatives for extending deferral from those for collapsing it.

Liberalism and progressivism also claim to enable improvements in human behavior and social arrangements, but they don’t purport to do so by extending human deferral capacities. Both ideologies assert the possibility of downloading human morality into institutions—so, the “checks and balances” of liberal government will themselves restrict the violent tendencies endemic to human beings, or the “market,” given sufficient prosperity, will have the same effect. But the implication in both cases is really that advanced civilization is compatible with a reversal of previous tendencies and a decline in the capacity to defer. If one’s desires can be rerouted to objects readily available on the market, then domestication probably would be fairly easy—and a lot of study can be put into this rerouting. If you want to render the human being a desert and call it peace, this is fine. The logical extension, as the social and medical sciences advance and intertwine, is to develop the optimal social and pharmaceutical “cocktails” to make the potentially problematic manageable. This process is obviously well under way. But this is also a kind of centralization, and it brings to bear social and medical developments that might have better uses. What would make the uses “better”? The only real argument is that since all actions, all scenes, involve someone occupying, albeit temporarily, the center, and others aligning themselves on the margins, which on closer look are little centers themselves, the absolutist wants everyone to be able to man their positions. Whatever enhances the ability of the individual to adopt a further increment of deferral—not take the quickest route to pleasure, not act out the most immediate resentment—is therefore to be preferred. Only in the course of making decisions within the fullest scope of your responsibility can you acknowledge the decisions made the same way up the chain of command.

The modernity of absolutism lies in the imperative to make delegation increasingly precise. Responsibility can always be more closely aligned with power. This involves the continual refinement of attention, the mark of a further increment in deferral. So, expecting the priest to stop the plague by carrying out a pre-determined ritual invites no refinement of attention. If the priest fails, that proves he is no longer worthy of being priest, and he should be replaced. Assuming the plague has a point of origin, and appointing someone to determine that point of origin, with that person in turn selecting those he wants to search, according to known criteria best identified by those trained by those who know, various sectors of the city, having them report periodically, pursuant to which he reassigns them, etc.—here we see attention continually refined. Let’s say the guy charged with searching for the origin of the contagion is required to place ads in the media, which restrict the job to those bearing specific credentials, state the equal opportunity character of the hiring process, and especially encourage minorities and women to apply; must meet environmental and labor safety standards in carrying out his charge; must respect the property rights of those who might refuse him entry. He is clearly no longer sovereign, or a bearer of sovereignty, but that doesn’t mean that to be sovereign he must only hire his friends and relatives, that he should trample all over the accumulated culture of the city in the course of his search, that he should behave obnoxiously and imperiously to those with interests in the city that will still be there once the contagion is over.
He’s sovereign because he knows the best people for the job and they know him and understand how important it is; because they all care about the city and are not just a bunch of hired hands who will get their paycheck and be gone tomorrow; because the people with property know their property best and want to help eliminate the contagion as much as anyone and so cooperate with the guy who has the job in hand.

The progressives want the equal opportunity employment requirement, the environmental standard, the labor law (and the media that can interview anyone inconvenienced by the search, the special prosecutor who can look into laws violated during the emergency, the licensing board that can take away the guy’s right to lead such a search, etc.) because they don’t want him to be sovereign, and they don’t want anyone to be sovereign over him—they want to be in the fight for power themselves, spreading it and regathering it. They deaden attention themselves, because you have to pay more attention to their rules, and therefore their power games, than to the task at hand—ultimately, it’s like dealing with the priest, as you have to figure out what kind of ritual performance will enable you to get to the next move. Insofar as that’s modernity, the absolutist is reactionary—the absolutist is ready even to see what the priest can contribute. I’ve been straw-manning the priest a bit—there was always a bit more to divination than carrying out a prescribed, mechanical ritual. The priest undoubtedly “read” the community, and not just the innards: his practices were in fact a form of deferral, a way of delaying panic and providing for solidarity. He may turn out to be an intractable obstacle, he may interfere with efforts to solve the problem that would discredit him, but why determine that in advance, just because he lacks credentials? He may know a lot more than he’s letting on. If you’re centralizing power, you should always start with and try to incorporate the existing chains of command. And you should always resist anyone clamoring for the removal of anyone from a position of power and authority for reasons other than their demonstrated inability to use that power to meet their responsibility (if the problem is that they need more power or less responsibility, you can see to that).
But what all this means is that absolutism, as a political project, depends upon enough people working consistently to align power and responsibility, for themselves and others. For those with more power than you, read back to them their responsibilities by further refining the attention their delegation to you requires; for those with less power than you, dole out more power with each advance in adopted responsibility; for yourself, show a concentration of your powers dedicated to everything within your sphere of responsibility along with an absolute respect for other spheres. How many is “enough”? That’s unknown, but fortunately far less than a majority, at least to start turning things around.

Trump’s Process of Inquiry

I think we’re all going to be talking a lot about fascism for the foreseeable future. Not Nazism, which is really a red herring here. Imperial Energy has been posting on fascism, presenting a definition of it as, essentially, a nation perpetually mobilized for war. I wouldn’t argue with that, but I think “fascism” means something a bit different in American political discourse. Perhaps we need to talk about “folk fascism.” For the left, “fascism” means a kind of extreme “law and order” stance, and that’s a helpful way to think about it (that’s what made, say, Nixon, a “fascist” to the New Left). The left thrives on division—their M.O. is to find some idea or institution that no one has given any thought to because it’s simply obvious, and turn it into a “controversy,” complete with irreconcilable divisions and ongoing moral emergencies. The left is an extortion racket, and you need to break a few windows to show the need for “protection.” The “windows” in this case are your peace and quiet and your assumption that any activity is outside of politics. The left’s spectral folk fascism is the countermovement of those with sovereign authority to “cauterize” the wounds opened by the left. Fascism in this sense shadows the left, and wherever the left incites division fascism comes right in to isolate, control and expel the source of division, and restore and strengthen the normal chain of command.

If we can’t call that “fascism” then we need another word for it, because it’s an essential practice, and one especially to be recommended to President Trump. On the one hand, it’s just a question of enforcing existing laws, within the framework of the legal order. Antifa could be shut down very easily using laws against property destruction, assault, racketeering, and so on. Illegal immigration, needless to say, can be treated as a law enforcement problem. But the truth is, law enforcement, to the exclusion of all other considerations, runs counter to American political traditions and cultural norms—look at how any film or TV show represents the “tight-ass” who insists that the rules be followed, that punishment be swift and sure. Vigilantes and rogue cops (like Paul Kersey and Dirty Harry) are far more popular than the straitlaced sheriff. Of course, Kersey and Harry are also part of folk fascism. “Real” folk fascism, then, would be the actual sovereign forces “cleaning up” like Kersey and Harry tried to do from the “outside” (or the outside of the inside in Harry’s case). If the feds crack down on Antifa, they will have to ignore calls to respect the “idealism” of the protesters, to take into account that there may be misguided young people among them, to keep the focus on the even worse targets of the Antifa—there will be stories of this promising young student and that naïve protester who got out of her depth, etc. And the same with illegal immigration—what about this mother devoted to her American children, that hard working father reaching retirement age, and so on. To dismiss all these appeals and cut through the administrative delays they exploit and err on the side of “over-enforcement”—that’s folk fascism. It’s what Slavoj Zizek might (and probably has) call the “real kernel” of fascism constitutive of even the most liberal society.
The real kernel displaces the liberal in military dictatorships, like those of Franco and Pinochet, but, strictly speaking, the military takeover shouldn’t be necessary. All that is necessary is that at every point where one might legitimately tilt toward the side of liberal rights or order, one tilts in favor of order. Liberals and leftists are right to fear that if acted on consistently, this approach would leave very little liberalism left in the state. I know it would be very bad “branding” to use the term “fascism” for this authority-over-liberalism approach, but what do we call it, then?

I’ll call it, for now, spectral fascism, or *Fascism*. President Trump may get to the point where he realizes he has to use all the (really quite considerable) legal means at his disposal to cauterize all the wounds being salted by the left or he will, eventually, be removed from power, one way or another, or at best neutralized (and it doesn’t look like his enemies are going to be too particular about the means). Trump’s form of learning or probing seems to be to make innocuous statements and introduce unexceptional initiatives (generally in favor of law and order, public safety, our unity as Americans, etc.), see who attacks them, and then polarize discourse around that enmity. It’s a good strategy—how can you tell what your enemies are up to without engaging them, stirring them up, setting them in motion? And it’s smart to do so in a way that forces them to show as much of their hand as possible. The next step, though, which Trump always seems to be on the threshold of, is to flip the means the enemy is using back against them. For example, there is a special prosecutor looking into non-existent Russian influence on the 2016 election. Why not, in the spirit of “both sides share the blame,” appoint special prosecutors to look into the funding of Antifa and BLM, both criminal enterprises? Use civil forfeiture laws to confiscate the assets of the foundations funding both? Why not special prosecutors and/or FBI investigations into groups that are inciting violence, like the Southern Poverty Law Center or, for that matter, the Anti-Defamation League? You could use the criteria of the left to support official inquiries into these and other groups. Or, for that matter, how about a special prosecutor looking into who started the “Russia hacked the election” hoax? (A special prosecutor to look into who pressed for the first special prosecutor.) Make liberal use of subpoenas, and find ways to ask all kinds of people, like journalists, questions under oath.
We all know the drill—the process is the punishment, make them all pay, expose the networks, bring in allies, deputize (or some equivalent) law firms and others to bring civil suits, maybe bring Julian Assange in from the cold, etc. Really, all he has to do is everything they are trying to do to him, and turn their cries of resistance into proof of their guilt.

With each move the President makes, we will see who doubles down and who backs down—make allies of those who back down by offering a piece of those who double down. Keep upping the ante—anti-trust suits against the major players in Silicon Valley (Google, Apple, Facebook, PayPal…) who are now arrogantly asserting control over political discourse in the country. (Isn’t that really an attempt to hack all the upcoming elections?) Who knows, maybe an inventive special prosecutor can put together some kind of racketeering or espionage case against CNN and other media organizations. Making the point that we all now know that the law is nothing other than what those who control the law say it is would be valuable in itself. Appoint one of his hotels to the Senate. (OK, I’m kidding about that one.) Show that he has learned just how liberally rights and procedures can be interpreted—what matters is cauterizing, suturing, protecting. Expose the networks, create a map of enemies of the people. Clearly immigration must be completely shut down until these matters can be sorted out. Lobbyists for foreign countries and companies might want to take a break for a while. While we’re at it, let’s plug all the leaks in sovereignty—otherwise, how can a new mode of legality be established? The only real question is whether Trump has the staff with which to do all this. As of now, my guess would be that he doesn’t—but the only way to generate the personnel is to initiate the process, open positions for men of ability, create hierarchies based largely on who came in first, and grant amnesty to those abandoning the sinking ships of the foundations and corporations (maybe a whole bunch of people about to be kicked off Twitter, Facebook, YouTube and other platforms will be free to pick up the slack). It can’t be for nothing that Trump’s cabinet is drawn so massively from the military.
They may be coming for someone you don’t like today, but they’ll be coming for you tomorrow, and I won’t let them get you, even if we have to set aside some constitutional and legal niceties (all those judges I have been appointing will understand).

That’s a *Fascism* we can get behind: staunch stanching, and nothing else. The universities go back to teaching, learning, researching; the internet companies go back to providing their services; city councils go back to deciding on the upkeep of parks and monuments; corporations go back to producing goods and services; the media learn how to report without relying on leaks, and so on. What has turned out to be incompatible with constitutional order is what Madison called “factions,” which he hoped would counter and balance each other across a heterogeneous country. A “faction” is any group that is against someone else, rather than simply for social order and the normal functioning of institutions. Any good government will support some kind of think tank devoted to the study of factionalism, especially to detecting its early signs. The roots of factionalism lie in the adversarial structure of liberal society itself, which promotes the assumption that no claim can be considered true unless it has conquered a counter-claim, which nevertheless lives on, chained up down below, spawning more counter-claims. In other words, liberalism builds Satanism into its order—someone is against me, therefore I am. The alternative is a center immune to factionalism. How did you contribute to the institution, and how were your contributions disabled, thereby compelling your contraversion of the center? You could never really prove that you have exhausted all avenues of improving the institution, of discovering what is required of you. You’re representing resentments widely held, not just your own? What have you done to dampen, rather than inflame, those resentments?

Perhaps the liberal horror of *Fascism* provides a clue to how the foundations approach things. They always start with some concept, like “democracy,” “freedom,” “the individual,” “peace,” etc., that’s considered central to modern liberal society. It’s always a contentious concept, born in contention, meant to produce more contention. So the foundation heads look around and see that there’s not nearly enough contention around that concept. Most people seem more or less satisfied with the inherited meanings of “democracy,” and so on. But that violates the very essence of the concept! The dissonance is unbearable—the society is not living up to its full potential, to the true meaning of its creed. So, you look for dissenters to fund—those who challenge the “complacency” of the majority, and create “real” democracy, individualism and all the rest. There is a felt need for full spectrum dissension—it’s like those activists (in favor of what, exactly, I’m not sure) who complain about uncontested legislative seats and won’t be satisfied until every election is 51-49%, with every community effectively polarized around every issue. Those who feel this need most strongly are the “forward looking” elites, those enhancing their power by distinguishing themselves from the “entrenched,” backward looking elites. And, of course, it makes sense that if you see your growing, spreading enterprise as requiring a more receptive sphere of circulation and consumption, you will see society in general as in need of being “opened up.” And once you start on this path, how do you stop? There can never be enough democracy, freedom, individualism, peace, tolerance—the concepts, unlike, say, sovereignty, are intrinsically open-ended and even infinite. They can’t stop themselves; they must be stopped. That’s what *Fascism* is for.