Mimetic Theory and High-Low v the Middle

Let’s imagine a scene, let’s say an accident on the side of the road: a few people rush to the scene and start helping the victims; if a few more come and there is nothing more for them to do for the victims, they call for help and help keep others from entering the primary scene; then, others come, with nothing much to do, but they serve as witnesses and remain on hand in case some instrument or specialty must be fetched (a mechanic or doctor; a first aid kit). I think this is the best way to think about social organization, as always centered on specific needs and dangers, and as set up to differentiate people in accord with the role they can best play in meeting those needs and facing those dangers. In the scene presented above, there is a bit of chance and a bit of natural difference: it may be that those first on the scene just happened to be closest, while some of those standing around later might have been just as qualified to help. Still, these things tend to sort themselves out—someone who happened to be first but is afraid to take responsibility (or is unqualified, which means that he has avoided such situations, and neglected preparing for them, in the past) is likely to slip back into the crowd, while someone among the later arrivals who is willing and qualified to help is likely to present and announce himself.

According to Eric Gans, the first human scene, upon which we can model later ones like that sketched above, is more precisely specified. Here we have a desirable object, presumably some food item, at the center of the not-yet-human group: these advanced, highly imitative apes have their appetite for that central object inflamed, made into desire, by the awareness of the desire of all the other members of the group. This intensifying desire overrides the animal pecking order that normally maintains peace within the group—the alpha animal eats first, the beta animal eats when the alpha is finished, and so on. The alpha could never withstand the force of the group as a whole, but animals never “organize” themselves as cooperative, coordinating groups. Now, as all start to rush to the center, the animal hierarchy is abolished. What takes its place, according to the originary hypothesis, is the sign—what Gans calls the “aborted gesture of appropriation.” Think about traditional gestures of greeting, like hand shaking—it’s a way for each side to show it is not holding any weapons. Stretching out your hand with a weapon in it would signal violence; here, the same physical gesture is converted into a renunciation of violence. Think, for that matter, of a threatening gesture (which I doubt anyone does any more), like shaking your fist at someone—by demonstratively withholding the act of violence, you actually provide a space of peace, even if coupled with a warning. The initial sign was the invention and discovery of this “method” of converting violent actions into gestures of deferral. 
The gesture is likely to be more effective and enduring the more it actually mimics and therefore evokes the violence deferred—when we shake hands now, we don’t do so (in civilized zones, at least) with a sense of relief that the hand coming towards us isn’t holding a knife—which is what makes the handshake an essentially empty gesture (it’s not good enough to seal a deal any more, that’s for sure).

The car accident seems like a very different scene—there’s no object of desire, and therefore no cause for conflict. Everyone can just focus on helping the victims. But that’s not the case—every human scene has an object of desire and hence contains within it potential conflict. Something goes wrong in the attempt to extricate the victim—wait a minute, whose idea was that!? The rescue effort can turn very quickly into an exercise in blame shifting and power struggles. There must be someone first on the scene in a more primary sense—someone who can command the gestures of deferral needed to prevent those resentments lying right beneath the surface from becoming manifest and distracting from the effort. Maybe everyone involved is good at that—like trained medics would probably be. But that’s the result of the institutionalization and trans-generational transmission of the necessary gestures. Someone, then, had to build and maintain those institutions, and doing so involved an analogous process of deferring the resentments inherent in any collaboration and creating the norms and models of leadership others can inherit.

I’ve explored in a couple of recent posts the problems involved in the process of institutionalization. There’s nothing new here—in one of the commemorations I’ve read recently for the just-deceased science fiction and military writer Jerry Pournelle, I saw attributed to him the observation that in every institution there are those who are concerned with the primary function of the institution, and those concerned with the maintenance of the institution itself. Anyone who has ever worked in any institution knows how true this is, except that plenty of institutions no longer have anyone concerned with (or even cognizant of) their primary function. Those concerned with the primary function should be making the most important decisions, but it will be those interested in institutional maintenance who will be most focused on and skilled at getting into the decision-making positions. But someone has to be concerned with the maintenance of the institution—those absorbed in its primary function consider much of the work necessary for that maintenance tedious and compromising. (The man of action vs. the bureaucrat is one of popular culture’s favorite tropes—in fairer representations, we are shown that sometimes the bureaucrat is needed to get the man of action out of holes of his own digging.)

If we go back to the simple scene outlined in the beginning, we can see this is a difference between those who are first on the scene, and those who are second—for simplicity’s sake, we can just call them “firsts” and “seconds.” The seconds establish the guardrails around the firsts as the latter do their work, and they make for the “interface” between the firsts and those who gather around the scene (the “thirds”). They will also decide which resources get called for and which get through to the firsts, who are too busy to see to such details. There is no inherent conflict between the firsts, seconds and thirds, but there is the potential for all kinds of conflict. The firsts (and the first among the firsts) should rule, and should be interested in nothing more than enacting all the signs of deferral that have been collected through successive acts of rule. Even defense against external enemies is really a function of enhancing the readiness of the defenders of the community, and the community as a whole, and doing that is a function of eliminating all the distractions caused by desires and resentments, with the most attention dedicated to where it matters most. The seconds should be filtering information coming from below, marshalling resources, and transmitting commands and exhortations from the ruler. And the thirds, the vast majority of the community, should be modeling themselves on and ordering their lives in accord with the hierarchy constitutive of the community. The problem of institutionalization is the problem of the relation between firsts and seconds, or firstness and secondness (since all of us occupy different “ordinal” positions in different settings).

But, of course, sometimes the first is not up to the task—maybe he once was, but no longer is, while being unwilling to cede power, without there being any definitive proof of his unfitness. And once there is a formalized form of firstness, the tradition or mechanism by which someone is placed in that role will sometimes elevate someone unworthy. In such cases, the seconds, who will be the first to notice, start to worry—they may start to think one of them should be in charge (but which one…?); or that they have to exercise power behind the scenes, reducing the person presently in charge, but very likely his successors as well, to a position of dependence. Under such conditions, the right thing to do is above all to preserve the ontology implicit in the originary scene, what some of us call an “absolutist ontology,” which should therefore be inculcated as part of the accumulated signs of deferral bred into the community. We all know that in an emergency, or in any really important situation, no one thinks in terms of democracy—everybody, except for saboteurs, thinks in terms of manning the stations each is best suited to man. But that also means taking the stations each is presently manning, or is accustomed to man, as the default. A reliable indicator of firstness is the ability to revise previous assessments and assignments and to formalize present fitness. If the first is not up to the task, the radical solution of removal must come very far down on the list of remedies—we must first of all carry on as if he is capable, and if the seconds have to lend some support that will go unnoticed and unacknowledged, so be it. (This is itself a form of firstness on their part.) It may even be necessary, after the fact, to narrate events in such a way as to attribute centrality to the designated first. 
Of course, if removal becomes absolutely necessary for the survival of the community, such practices will make it all the more difficult; this is a good thing, though, and these practices also ensure that any remove and replace actions will be carefully crafted so as to preserve absolutist ontology.

Absolutist ontology is rejected when these practices, these attempts to bring formalized roles and assessed capabilities into closer correspondence, are abandoned and some among the seconds start to exploit the gap between the attributed power and the actual power of the ruler. If the second’s efforts must sometimes go unacknowledged, the same goes for the first’s dependence on the second, and this can be a lever for increasing that dependence. Then a struggle, partly overt, partly covert, commences, and it is at this point that both parties (or all parties, because the seconds are likely to fall out amongst themselves under these conditions, while the king thereby surrenders his firstness) seek allies, or proxies, among the thirds. The king has been granted power, but he doesn’t really deserve or properly use that power; perhaps he doesn’t really exercise that power, which is in fact wielded by secret, insidious forces. The hierarchy inherent in absolutist ontology can in this case no longer serve as a model for the thirds to use in composing their lives—rather, it is a mere appearance, hiding a reality that the action proposed by one or another of the seconds (or the first himself, turning against what Imperial Energy calls his “essentials”) will unveil. Skepticism, pluralism, and all the rest follow, and here is where high-low vs. the middle (HLvM) has full sway. What has happened is that mimetic desire, that is, envy of the putative being possessed by the other, which the centuries or even millennia of accumulated deferral have converted into a complex array of signs assigning roles and duties, has now been introduced as a legitimate principle within the community (the king/your lord is keeping something from you, so, therefore, are his supporters, and maybe your neighbor as well)—and once this happens, mimetic desire, corrosive as it is, must become the dominating principle of the community. 
Then you have institutionalized civil war, and democracy is nothing other than this institutionalization, with voting blocs at most several steps away from dissolving into armed camps. The problem is how to avoid taking sides in this civil war, or at least not just taking sides; the only solution is to find ways of realigning ourselves as firsts, seconds and thirds in as many (and sufficiently visible) ways as possible, and thereby recovering and creating as many gestures of deferral (while marking them as such) as we can.

Power and Digital Order

Eric Gans has a compelling hypothesis regarding the form of our present disorder that I’d like to give more consideration than I have thus far. Gans has been emphasizing the enormous economic gulf created by the digital economy, run by those capable of sophisticated forms of symbolic manipulation, since the reduction of production processes to symbolic manipulation makes all those incapable of such intellectual work essentially economically obsolete. Gans has been connecting this development to the intensification of victimary cultural politics (wherein every “inequality” is reduced to the form of the Nazi-Jew binary), because such victimary politics becomes the only way of compelling some kind of moral reciprocity on the part of the elites. In his most recent Chronicle of Love & Resentment, “Common Sense,” he makes this connection even more forcefully:

To put the binary cultural hypothesis very simply, the more bytes required to organize the material economy, including the entertainment (thank you, Frankfurt School) that painlessly discharges our resentments and satisfies our appetites with fast food and eventually with self-driving cars and AI-enhanced sex robots (in the business section of the September 27 Los Angeles Times: “Silicone sex dolls get an AI makeover. These ‘girls’ will ‘have sensual conversations and tell naughty jokes’”), the fewer bytes needed to maintain cultural solidarity. The Internet is maintained with terabytes of know-how in order to allow people to Tweet the crudest obscenities.

Local versions of this dichotomy are everywhere. At the university, in the embarrassing contrast between highly sophisticated and theory-driven scientific research and near-universal ideological idiocy. Not to speak of GA’s difficulty in obtaining a hearing. For its complexity is based in ideas rather than algorithms, and it thereby falls between the cultural and technical stools. In a world run on big-data-based algorithms, when it comes to exercising the imagination enough to conceive an “originary hypothesis,” the response is one of intellectual panic: how can you speculate without data, on the basis of what you fancy to be our shared intuition? No one really “understands” particle physics or string theory, but these are not things to understand, merely equations to work out. “Cognitive theory” has equations; GA has only imagination, and there is no longer enough of a common symbolic world to allow sharing imaginary constructs as a mode of truth-seeking.

Once we have all become positivist creators and “trainers” of algorithms, we can no longer allow the kind of “gentlemen’s” criteria for success that still existed in my youth, which permitted the less favored both to resent the “caste system” yet be reassured by its authority. Today, all that counts is either knowing the right people, which is not the same as being part of a loosely aristocratic old-boy system, or getting a high score on an exam. For those who don’t know the right people, getting the score is all, whence the reign of disparate impact.

Economic productivity used to require a certain degree of cultural solidarity: bosses, managers and workers needed forms of extended cooperation; the educational and entertainment system had to enforce standardized cultural norms, so as to sustain cross-generationally the models of behavior required for an advanced workforce and citizenry. Meanwhile, I just googled to discover the number of Google employees: 72,053. (A much diminished Ford Motor Company still has 201,000.) The only cultural solidarity needed there is that of the graduates of the top dozen or so universities in the US, perhaps the world. As Gans notes, these elite workers will be able to produce substitutes for the satisfactions previously offered as inducements to participate in cultural reproduction: instead of a wife, increasingly realistic sex bots (will women want these as well?). Soon enough, people will forget what “wives” were. The often-cited Morlock vs. Eloi dichotomy is being realized. What to do with all that surplus population?

I want to address Gans’s reference to the reception (not) granted to GA more specifically, but first of all to note (as Gans himself indicates) that this observation holds for social and cultural theory as a whole. Here’s an interesting way to think about this. The linguist Anna Wierzbicka has developed what she calls a “Natural Semantic Metalanguage” comprised of all the words that are common to all the languages in the world. Along with this metalanguage, she has developed a method of translation, using the metalanguage to translate the various otherwise untranslatable concepts constitutive of each language. So, for example, the word “emotion,” which does not translate out of English, can be translated by reducing it to the words “feel” and “think,” which are part of the NSM. Wierzbicka’s method involves composing a series of sentences that are aimed very precisely at bringing out the specific meaning of any word. Now, in describing her method, Wierzbicka says there are essentially two ways of talking about any event: first, one could speak of the outcome or intention as “good” or “bad” (both words in the NSM); second, one could speak of the event as similar to another event. The latter approach opens the way to identifying prototypical events that would distinguish one culture from another and enable us to account for its language as allowing for ever more complex events modeled on while being differentiated from those prototypical ones. So, it is as if what has happened now is the complete collapse of all events into certain prototypical ones, with all of them summarily labeled “bad.” It is really a kind of cultural lobotomy. 
It may be that for the socially autistic digital elites and their political proxies and protectors, social spaces (the Humanities, entertainment, more and more often sports…) are set aside for the sub-elites drawn from the under-classes, who, in lieu of forming “cultural solidarity,” lead their charges in LARPing iconic events (the March on Selma, the liberation of Europe, the Algerian War, etc.) in real time.

GA, of course, has never had a particularly warm reception in the academy, and its emergence almost simultaneously with victimary thinking offers as good an explanation as any. GA is interested not primarily in labeling a particular social or cultural form good or bad, but in understanding it as modeled, however distantly, on an originary scene (the prototype of prototypes) defined by the deferral of collective violence. The implications of such an approach for making sense of inter-group and inter-sex relationships are simply too triggering—GA suppresses altogether the incredibly pleasurable retroactive accusation and self-congratulation that has driven most thinking in the Humanities and Social Sciences for quite a while. But it also, as Gans points out in the excerpt above, resists the supposedly more sophisticated and objective data-driven approaches to social order, because they can never ask the question, why is there social order (and therefore “data”) in the first place? The practitioners of such approaches cannot understand the paradoxical question, what must language be in order to be what it is?, because they have no way of initiating a data search or devising an algorithm to address it. But there was language before there was “data,” and language couldn’t have emerged out of some primordial “data-generating” process on the part of exceptionally intelligent organisms that somehow became a collective process—accounting for that “somehow” implicates the sober scientist in silly “just-so stories” and B-movie-quality creationist accounts of human origin.

Now, let’s take a look at Gans’s “final reflection”:

No one can dispute that making women wear veils or worse in public, let alone “honor-killing” them for speaking to strange men, does not rank very high on the scale of moral equality. But the White Guilt that tolerates it is not solely motivated by fear of “Islamophobia.” It reflects a guilty distortion of the healthy idea that women’s destiny, whatever else they elect to do, is to bear children; that, in other words, female biology, at least at the present stage of human technology, is still “in service” to the society as a whole. Women are not solely to blame for the West’s low birth rate, but in a world where women are not subordinate to men, ways must be found to encourage couples to reach a replacement level of population as we live ever longer as individuals, unless of course we would prefer to disappear.

However crude and barbaric these archaic customs may be, they are not simply “irrational.” Not everything that one dislikes can be understood as a variant of Nazism. The idea that the subordination of women, or slavery, or even human sacrifice, is simply “evil” does nothing to explain why it has existed, let alone why it has been abolished in societies that can afford to do so. And calling it “scapegoating” is just one more one-bit explanation.

Once you start along this line of thinking, there is no way of telling where it will end. If female biology is still “in service” to the society as a whole, we might discover that a lot of other things are as well. The reason for the virulence of victimary leftism is that its partisans know that if the male-female distinction can be institutionalized so as to maximize some social purpose, every other distinction can be as well. Some institutionalizations of these distinctions are abolished by “societies that can afford to do so.” But what is affordable at one point might turn out to be unaffordable after all at some later point; the judgment may even be made that it was never really affordable in the first place, but that reckless, wasteful people had gotten in charge of the social reserves. The “one-bit” thinking might go farther back than anyone thinks—was not liberalism, in fact, the first one-bit political theory (down with kings! up with the people!)? In that case, maybe it’s not a question of more or less rapidly tearing down social distinctions but of calibrating the ones that exist along with the emergence of new ones. Here, in fact, we have the difference between conservative and reactionary social thought: the conservative wants to make equality safe for the world, whereas the reactionary wants all inequalities recognized and formalized through reciprocal obligations. All that matters is holding the center.

Gans never does propose a way of genuinely countering the victimary, other than a (maybe not so) ambivalent endorsement of Trump’s “common sense,” i.e., open, confrontational, undeterred approach. But more important is the other problem he, along with everyone else so far, leaves unsolved—what to do about those who cannot be integrated into the digital order which, through automation, AI and algorithmic programming, is in the process of rendering virtually all means of acquiring virtue not merely degraded or abused but obsolete. At least Gans lays the problem down on the table, with all its moral and ethical perplexities. But maybe the two problems, as Gans seems to intuit, are one. In an overtly hierarchical order, the victimary, which depends upon the liberal’s sense that there’s always some unnoticed inequality he’s about to be called out for, would be impossible. In such an order, it would also be possible to ask, explicitly, what is the best way for humans to live, and how can we provide such a way? For example, what form of property ownership would promote self-sufficiency and authority in men, and devotion to family in women? Perhaps a return to homesteading would be best for some, and a case could be made for this on aesthetic as well as health grounds—a revival of craftsmanship and homegrown and hunted food. Maybe it’s hard for some to resist a smirk here, because homesteading as a “lifestyle choice” seems affected and “postmodern”—real homesteaders did it to survive, whereas this would have something of the Disney park to it. But if enough people turn to it, that would mean it is a question of survival, cultural and maybe physical, if the cities and suburbs become unlivable, or unaffordable for many. Maybe it will become the best way for those who are not rich to prevent obesity. 
Immigration can be essentially eliminated, and technological developments can be slowed down or even stopped or reversed for some purposes, in some areas—once we habituate ourselves to the sense that technology is a series of decisions, rather than an inexorable force, many things might be possible. There’s no reason to stand in a stupor and stare vacantly as millions of people are displaced by technology. Some as yet unanticipated technological and economic developments may take up some of the slack, but there’s no law saying how much.

But let’s return to Gans’s essay “On the One Medium,” which I discussed a couple of posts back, and which concludes as follows:

We may tentatively conclude that so far, at least, under the reign of the One Medium, if the periphery appears to be doing fine, the center seems to be increasingly less figurable, either as a god or as an artwork. This might be thought to signal the decline of the sacred, with as a result perhaps the impending end of humanity itself. But let us avoid apocalypse. A world where rocks and old furniture have taken the place of the works of the masters as the cultural “replacement” for traditional religion may just find that traditional religion does a better job. Certainly, as David P. Goldman (aka “Spengler”) likes to point out, religious people are greatly overrepresented among those who produce children beyond the replacement level, and who therefore guarantee their participation in future generations.


Religion too may be found on the Internet, and not only serving its more pernicious functions, such as the recruitment of jihadists. Do there exist the equivalent of MOOCs (Massive Open Online Courses) in religious services? Or should we rather learn to look at the Internet itself at a given moment as a MOO religious service, where virtual human togetherness replaces the central godhead with the figure of global humanity itself, nameless and figureless, existing by right of its ubiquity alone?


No, I rather think not. But our massive dissolution in the crowd may have for effect our enhanced attraction to the Subject, real or constructed, that we experience in its center: the One God, I AM WHO I AM.


Why would our massive dissolution in the crowd enhance our attraction to the Subject at its center? Because this dissolution presupposes the emergence of a new center. Similarly, the invention and dissemination of alphabetic writing can be causally linked to the emergence of ancient Hebrew monotheism and Greek metaphysics: in abstracting the word from any voice, the word is “anonymized,” seeming to come from everywhere and nowhere. This process is itself bound up with both imperial power and the resistance to it, with both the Greek city-states and the Jewish Commonwealth situated on the margins of, and threatened with assimilation to, the great empires of antiquity. Perhaps this too is a high-low vs the middle strategy, with God about as high as one can go, and the realization of justice on earth an open-ended project that can never be considered defeated once and for all. The development of writing from its origins as a bookkeeping device to the broader purposes of cultural transmission also follows the trajectory of the establishment of those empires, with alphabetic writing in particular—writing based on the analysis of speech down to the most elementary individual sounds—making available to the “low” (the general population, or much of it) a technology previously controlled exclusively by the specialized scribes of the empire, who monopolized the very intricate technique of hieroglyphic or syllabic writing.

The internet is not God, and we have become far more aware in recent months of the very direct control the quite visible and well-known masters of the supposedly ultra-liberal technologies exercise over their platforms. But if the invention of monotheism was an imagined high-low alliance, it certainly exceeded whatever political function it never actually performed anyway, at least not for the Jews. The revelation of the one God, I AM WHO (or THAT) I AM, is, we could say, an iteration of the originary scene: God gathers all his people together and speaks to them directly, providing moral dictates that render human sacrifices and God-Emperors irrelevant. Now, this has often been parlayed into various kinds of high-low alliances, rallying one “people” or another against those pretending to mediate between the people and the divine. That won’t stop, but no one can simply invent a new God either. We can counter the more earthly high-low alliance with the permanent one, though, insofar as the monotheistic iteration of the originary scene need mean nothing more than the general possibility of forming congregations around central objects, i.e., disciplines—even organized around rocks and old furniture, which have displaced the works of the masters precisely because the forms and terms of the congregation are more important than the pretext for it. The monotheistic God issued what Philip Rieff called the “absolute imperative,” and we can hear this imperative (to not usurp the center) renewed in the “one medium”: sacral kingship is replaced once and for all by the sovereign restoring the “middle” as the guarantor of the differentiated disciplinary social order for which the one medium is perfectly suited. 
One doesn’t need to be a believer in anything other than a center that will outlast any other center and will do so because we keep creating and obeying centers in the world that help pare down the sovereign center to its bare minimum while removing all obstructions to its operation.


Power entails, first, occupying the center and, second, using that occupation to direct attention to another center. It’s like a conversation where you first need to get someone to pay attention to you, and then you can get them to pay attention to what you really want them to. In the kind of power we are most used to talking about, political power, you can make people pay attention to you and then attend to what you wish by making them pay a very heavy price if they don’t. But in order to make them pay a heavy price, there must be lots of other people who pay attention to you and will attend to making sure that actual or potential dissident gets his mind right. For a good way up the ladder you can make them (e.g., a conscript) pay a heavy price for disobedience as well, and even very powerful people can be brought to heel if isolated, but at a certain point those obeying you (attending to you and to whatever you want them to attend) must have reasons other than fear for doing so. Potential conflicts, perceived to be more destructive than the consequences of obedience itself, are felt to be deferred through respect for the person and/or office. At the very least, then, whoever occupies the place of power must not be generating resentments more uncontrollable than those his presence in power contains. He must, in centering himself, be deferring conflicts by directing attention to a more permanent center, a model of order.

We can say, then, that centering is power. I have pointed out in previous posts (perhaps not for a while, though) that Eric Gans, in what we could call his originary history of humanity, locates the crucial turning point in the emergence of the “Big Man” who seizes the sacred center and takes charge of distribution. Up until this point, in small-scale, egalitarian, primitive communities, while of course some individuals are more central than others on all occasions, no one has permanent occupancy of the center, access to which is therefore controlled by a vast, sprawling and intricate array of (no doubt erratically enforced) tacit and explicit rules and prohibitions. Once the Big Man emerges, the general possibility of a single individual occupying THE CENTER becomes imaginable; once imaginable, such a possibility can be desired. The ramifications of this social transformation are tremendous—Gans himself traces a line from this transformation to the monotheistic revelations, which essentially forbid the individual from trying, or even desiring, to occupy the center. This doesn’t just mean that no individual should start a rebellion aiming at making himself king—such a prohibition would obviously be trivial, and already covered by the existence and power of the actual king. It means that every individual, king included, should remember that his occupancy implies an ongoing reference to the permanent center.

There are innumerable ways of placing oneself at the center, which is to say substitutions for and imitations of the centrality of sacral kingship. Power is centrality, and power is absolute. Any occupation of the center, then, means absolute power within the space of attention producing that center. Let’s take the apparently most powerless individual—the torture victim. Insofar as a specific response is desired by the torturer, i.e., as long as the torture is not akin to kicking a sack of potatoes, the tortured has the absolute power to satisfy or frustrate the torturer’s desire, and to that extent holds power “over” the torturer. Obviously the scope of this power is extremely limited, in space and time, but within those limitations, it is absolute. And, of course, the largest-scale power is also limited while being absolute within its sphere—governing is really a matter of retaining absolute power within that sphere while not (or by not) reaching for power outside of it: the sovereign will rule as long as he directs the attention he draws away from the signs that he causes the resentments he contains and towards the permanent center. More important for my purposes here is that we have a means of analyzing any social relationship in these terms, as an interplay of power (the torturer’s power sets the terms of the power of the tortured). We are always taking turns at the center, and we can therefore always desire to prolong our stay there, with there being no a priori limit on how long that stay might be.

All this is prefatory to initiating a dialogue between originary thinking (and absolutism) and Alasdair MacIntyre, maybe the most important moral philosopher of our time, and certainly the most important anti-modern moral philosopher. In his After Virtue, in developing a concept of virtue to counter the incoherence of liberal morality, MacIntyre begins with the concept of “practice”:


Any coherent and complex form of socially established cooperative human activity through which goods internal to that form of activity are realised in the course of trying to achieve those standards of excellence which are appropriate to, and partially definitive of, that form of activity, with the result that human powers to achieve excellence, and human conceptions of the ends and goods involved, are systematically extended.


I consider this notion of a practice very similar to what I have been calling a “discipline.” The practice must be social and cooperative, which is to say it involves shared attention; it is complex, which means it involves a hierarchical articulation of modes of attention, so that one pays attention to one element of the practice in order to direct attention to another, with the result of that act of attention determining the range of possibilities for the next one, and so on. There are standards of excellence, which is to say one could master certain elements of the practice and still be a novice or incompetent in other, higher elements of it: there is a pedagogical, initiatory component. All participants in the practice learn how to judge the practice along with participating in it, creating a shared space which one must enter in order to contribute; there couldn’t be any competent judgment from the outside. If “human powers” and “human conceptions” are “systematically extended,” this seems to me to suggest that a practice has a history to it, with models of excellence that can be studied, imitated and improved upon. It is really a question of increments of deferral, whereby letting some object be and transforming it into an object of contemplation and anthropomorphized presence generates new objects “framed” by that one, ultimately producing a “world” of cooperative relations between activities and objects. For MacIntyre, the practice generates constitutive virtues like integrity, honesty, and fairness—if you want to be the best chess player, you not only wouldn’t want to cheat in chess, but you would want a clean space in which chess competitions can take place without suspicion; also, if you are really motivated by love of the game, you will support institutions that nurture young chess players, you will mentor them, and so on.

MacIntyre goes on to point out that all of virtue can’t be contained in the practice because, for one thing, one might be committed to competing practices, and the basis for choosing between them (say, between excellence in chess and excellence as a father raising a family) can’t be contained within any of the practices themselves. It is here that MacIntyre (drawing heavily upon his recuperation of an Aristotelian ethics) introduces the notion of a telos of the individual life, grounded in the possibility, even necessity, of understanding ourselves in narrative terms. (Here, an understanding of the origin of language and the emergence of discourse, the subject of Eric Gans’s The Origin of Language, would enrich MacIntyre’s account considerably.) The narrative of one’s life as a telos doesn’t so much answer the moral question (practice chess for another hour or come home and tuck in your child…) as it articulates the conflicts between competing goods that are constitutive of a serious life. (Here, MacIntyre relies heavily upon the tragic view of life, as opposed to more philosophical views that believe one can discover the Good and subordinate all other goods to it.) In engaging the practices and searching for the telos of your life (living a life aimed at discovering what is the good life) you become the kind of person who will make mature, moral decisions. Finally, MacIntyre concludes that the narrative of one’s individual telos is always embedded in some tradition, and that part of one’s telos is participating in that tradition and contributing to the work of distinguishing what deserves to survive and be enhanced in it from what should be marginalized or discarded—all in terms of criteria generated within the tradition itself, of course.

I would say that the creation of a narrative form for the individual life is itself a discipline, or practice—so MacIntyre’s lower-level concept can be employed at the higher level. The arts and storytelling traditions are all disciplines/practices aimed at providing narrative forms that individuals will then adopt and revise for the dilemmas and conflicts their own trajectory generates. Similarly, the maintenance of tradition is a discipline/practice, with any complex community having its specialists in tradition maintenance but with any healthy community having all of its members become at least competent “amateurs.” We can talk about all this in terms of centering: at each level, from the practice to the telos to the tradition, attention is directed towards something irreducible to the individual: let’s say some model of action (or virtue) distilled through the tradition. Now, what originary thinking can add, and what only a properly anthropological inquiry can add, is an understanding of how all of this is grounded in the fundamental form of sociability, the deferral of violence through representation. A community aiming at the production of excellence (and, therefore, standards and judges of excellence) constructs a system whereby honors are conferred upon someone who occupies the center according to specific rules. The desire for centrality is thereby rerouted through a system that makes it serve the elevation of the community. A Freudian would call this “sublimation,” but originary thinking doesn’t approach practices in that way: more complex, learned forms of attention management avoid the bad, it is true, while also being a positive good and, more importantly, irreducible to the “evil” deferred.

How you narrate your life, or how you live your life in such a way as to be narrated, therefore involves a practice or discipline of self-centering. You understand that people are looking at you—people are looking at everyone, we are all looking at each other. You act, then, so as to attract attention, but specific kinds of attention. The problem of human centrality is the problem of resentment: the other has taken my place, and the big Other (the sacral king, before being divvied up into God, on the one hand, and the civil authorities, on the other) has allowed this to happen, at the very least by not recognizing and remedying the injury done me. MacIntyre doesn’t have a way of addressing the crisis inherent in this condition. The Western solution to this problem has been through the sacrifice of an exemplary individual who has attracted murderous attention by revealing, let’s say, the log in the eye of all those who see a mote in his. But the attention need not be murderous, and better not be if we want sustainable moral practices rather than emergency coups of a center in crisis. A moral life is one lived so as to attract and deflect resentment, ultimately to the benefit even of those possessed of that resentment. Look at how much of social media is consumed with taking down some Big Man (or Woman) or other, someone who has “illegitimately” claimed centrality. You can’t tell people not to do this, because if they weren’t drawn to such encircling, they wouldn’t be people; but you can respond to this resentment in a defusive rather than escalating way.

The most basic way of doing so is to occupy the center the resentment places you in, but in such a way as to show that it is the resentful attention itself that has placed you there. In a sense, you would be counter-mimicking or iterating the resentment directed toward you. Once we have moved past the “pure” scapegoating of the Girardian scene of mimetic crisis, there is always some institutional structure, some practice, that justifies resentment in the form of exclusion, punishment, marginalization or demotion—for example, tweeting that a journalist has “lost his credibility” with his latest story. The bar for what counts as “enough” credibility can always be raised or lowered as convenience dictates. The way to respond to such a charge is to raise the bar for everyone, including oneself along with the hostile tweeter. Of course, how well that will work will depend upon what kind of journalist one has been, what kinds of narratives one has lived one’s life so as to “fit.” So, how credible are you, anonymous tweeter, in determining the credibility of journalists, how credible can any of us be in this medium or elsewhere, where is the final court of appeals for establishing credibility, anyway? In making the accusation, is the tweeter not trying to establish his own credentials for joining the club the target of his accusation should presumably be ousted from? In other words, run “credibility” through the wringer, repeat it over and over again so as to drain it of all use as a portable cliché. Of course, you can do this so as to “discredit” all notions of truth and good faith inquiry and investigation, but it can also be a way of cleansing the words we use to talk about those things—what is it that we are actually talking about when we talk about “credibility”?
If you’re a real journalist, participating in a genuine tradition of exploration and exposure for the sake of public knowledge, you welcome the interruption; if not, you will mount a counter-attack to drive the accuser out of the public sphere. And then that will become part of your narratable life, leaving you in the hands or at the mercy of participants in the practice of studying the practice of “journalism.” The result will be what Gans has called “lowering the threshold of significance,” i.e., making things open to notice and meaning-making that previously weren’t, including regarding yourself as the one enabling the lowering. And that’s the most moral practice because it opens new modes of deferral. (What this implies for the practice of governing I will leave to another post.)

Distilling Sovereignty

Sovereignty is incompatible with democracy, but contemporary politics often make the most ardent advocates of majority rule the most insistent upon a reclaiming of sovereignty. This is most obviously the case in the US, where Trump was, if we sum up his central commitments (anti-immigration, anti-free trade, pro-swamp draining, America First), the candidate of both the restoration of American sovereignty and the redirection of government to the service and control of the American people. The apparent convergence of sovereignty and straightforward majority rule derives from the formidable opponents they share: transnational corporations with economic interests in the ability to move facilities around the world at will; corporations and activists who benefit from the continual migration of masses of Third Worlders; transnational progressives and foundations who find it more efficient to work through institutions less responsive to popular will, like the judiciary, the media and government bureaucracies. Both the strict sovereignist and the strict democrat will want these vectors of chaos closed off. The incommensurabilities will become clear eventually, but until then those democrats and populists concerned with the preservation of sovereignty might have a lot of interesting things to say, and even their contradictions and inconsistencies may be worth looking at, especially when they are both articulate and responsible members of a ruling coalition.

Good governance is not a blind force, certainly not a strong but silent engine… the ability to carry out goals in the way they have been defined is a prerequisite condition for good governance, but is far from being sufficient in itself: good governance is measured above anything else by the ability of government ministers to establish their own goals.

A politician who knows how to bring the train to its destination, but is unable to set the destination, as senior as he may be — is not governing but merely subcontracting; he may have been appointed Minister, and he may get to cut ribbons in the end, but he is nothing more than a contractor… To move down a track laid down by others does not require leaders; any driver could do it just fine. The essence of governance is always setting down directions and posting goals. This requires elected officials to lay down new tracks only after they have decided for themselves where they would like to take the train.

This is from the opening of an essay by Israeli Justice Minister Ayelet Shaked, “Tracks Toward Governing,” published in a new Hebrew journal last fall. The government of which Shaked is a part has been working on curtailing the activities of foreign-funded NGOs (largely by compelling them to reveal the amounts and sources of their funding—I don’t know where this effort stands); it is in the process of passing a law that will enable the elected government to override Supreme Court decisions; and it has built a border fence that has drastically reduced border crossings by African migrants, while making fresh commitments to remove the “infiltrators” (the word anti-illegal-immigration Israelis use) already there (this seems to be stalled by the problem of finding countries to take them, with their home countries apparently not eager to have them back), or at least to remove them from the population areas where they have significantly degraded the quality of life for Israelis. Shaked has been a leading force in these initiatives, and defends and explains them better than other Israeli politicians, so she is unsurprisingly the most vilified figure in Israeli public life today. Many on the alt-right propose, in what always seems to me a combination of bitter sarcasm and a genuine admiration for the way Israel addresses its own “national problem,” that we take Israel as a model—in the game spirit of “agree and amplify,” I’m going to take that injunction seriously in a discussion and assessment of Shaked’s very interesting essay.

As we can see from the passage quoted above, Shaked defines sovereignty very rigorously (if metaphorically), and her definition would hold for any kind of government, regardless of who the sovereign is. To be sovereign is to set the goals for governing and possess the means to fulfill them. You could ask someone who sees sovereignty this way whether she would adhere to this understanding even if it can be shown that sovereignty, in this sense, is impossible under either liberalism or democracy. Indeed, her whole train metaphor is illiberal. The notion of the government “setting the direction” and getting to the “destination” is incompatible with liberal government—if we are all free individuals embarking on subjectively chosen and unrelated life projects, how could we share a common destination that the government is to take us to? Shaked may mean “destination” in a weaker sense: if the government, say, decides to partner with an energy company to explore off-shore oil reserves (the very issue that is forefront in Shaked’s mind in her polemic against the Supreme Court), no higher authority should be able to disallow or “derail” that “trip.” But if that’s the only kind of “destination” the government is allowed to set, if it cannot, for example, set itself the task of suppressing de-moralizing cultural tendencies, or establishing harmonious terms of interaction between the country’s different ethnic groups, wouldn’t that make it nothing more than a “contractor,” since other forces will then be setting the agenda in those areas, and the government will have nothing more to do than “lay the rails” where they say? Indeed, this question raises itself forcefully in the one area where Shaked lapses into a tentativeness uncharacteristic of the rest of the essay (and, indeed, of Shaked’s political persona): Israel’s Jewishness.

Shaked’s leanings in political theory are a mixture of libertarian and neo-conservative lines of thought. It wouldn’t be too hard to figure out who she’s been exposed to (she draws heavily upon Milton Friedman in arguing for minimal government interference in the economy). She first attacks Israel’s Knesset (Parliament) for the cancerous growth of laws coming out of that body. She frames this in a very interesting way, although I don’t know if this is original to her: she says that every law the government passes is a vote of no confidence in its citizens, since it usurps from them the solution to some problem they might have worked out on their own. (She goes through quite a few particularly ridiculous and harmful laws, focused in particular on their effects on the ability of businesses to determine their own “destination” in their own sphere.) She attacks the Supreme Court by defining the function of the law in a very precise and minimal way: the law intervenes when there is a conflict, when the conflict may have led to some damage, and when at least one side of the conflict has standing to bring it before the judicial arbiter. The court is not simply to opine on the legality of laws on its own initiative. (Here she refers to Hamilton’s assessment of the judiciary as the weakest branch of government, controlling neither purse nor sword.) All this is really essential to any good government.

On to the Jewish Question. Israel is defined as a “Jewish State,” but it’s never been very clear exactly what that means, especially in law, beyond the right granted to all Jews worldwide to immigrate there. The Rabbinical establishment controls marriage law, but this is hotly contested and rarely put forward as an example of the Jewishness of the state—if it were possible to form a governing coalition without the religious parties, this arrangement might be eliminated overnight. There is some explicit, and much tacit, support for Israel’s Jewishness to be understood in ethno-statist terms—Jewishness as nationality, like Frenchness, Russianness, etc. Much of the activity of the left in Israel seeks to combat this tacit definition. No one wants to define Israel as a Judaic state, though, as one governed by Jewish law. Shaked certainly doesn’t. She wants to insist on the compatibility of the Jewish character of the state along with its democratic character (the “Jewish and democratic” formulation has long been canonized in Israeli law and culture, with political arguments focused on whether to exacerbate or paper over the contradictions). She refers to the Jewish heritage going back to the Old Testament prophets, who criticized kings in the name of justice and the people; she discusses the importance of Jewish history and sacred literature to the modern republicanism that emerged out of Protestantism and the return to an “unfettered” reading of the Hebrew Bible. Shaked is drawing upon some important recent scholarship here, but all this is really well-worn ideological territory and not very compelling, relying as it does on a very idealized and indeed modern understanding of the meaning of Judaism and Jewish history (she doesn’t go anywhere near the whole “Tikkun Olam” nonsense, fortunately).
She speaks of making the “Jewish” part of the state as substantial and interwoven with all its institutions as the “democratic” part, which for starters means the Jewish calendar being the state calendar, Jewish holidays being the state holidays, Hebrew being the dominant language, Jewish history being the history taught in school, etc., all of which is already the case.

Shaked does make one more interesting move, though: she recalls an argument made by the former Supreme Court Justice Menachem Elon that Jewish law should be used as a source of Israeli law (along with secular Israeli law, the British law in place during the Mandate, and even the Ottoman law preceding that). Depending on the composition of judges, this could involve a gradual incorporation of Jewish law into Israeli law, and then perhaps an awareness on the part of the rabbinical makers of Jewish law of their impact upon state law, leading them, in turn, to think of themselves as legislating indirectly for the state. This could de-ghettoize the legal thinking of at least some rabbis, and those rabbis could have their profile raised by the state, creating a virtuous circle. Since the “national religious” community is already the most powerful part of the electorate, and the fastest growing part of the population, with extensive cultural, religious and political institutions of their own, there is a constituency for these developments. A proposal for a return of the monarchy is not unthinkable under these conditions. How better to have a government that can determine its own destination, that is not just a “contractor”? In fact, on some authoritative accounts, the Jewish Messiah will simply be the king of the restored Jewish Commonwealth (he’s also supposed to be of the Davidic line, but perhaps that can be dealt with)—in other words, a completely rational “Messianic” politics is possible.

This clearly goes beyond anything Shaked, or any other contemporary public figure, would consider. But Shaked insists on foregrounding the Jewishness of Israel because only Israel’s Jewishness prevents its institutions from colluding in its being carved up by lushly funded international NGOs. She is right that this Jewishness needs to be given a content, and ethnicity won’t suffice because no political order can be derived from ethnicity. If one insists on sovereignty (the state as path-setter, and as possessing the means to follow that path) and Jewishness, if, in fact, the Jewishness of the state is perceived to come into contradiction with its democracy, the Israeli leadership would be faced with an interesting choice. What we see, in other words, is that once the question of sovereignty is made the highest priority, that question becomes a fulcrum that can turn around the rest of the social order, and initiate inquiries into possible reconstructions of the various traditions of that order. He who wants the end must want the means; so, what does he who wants sovereignty want?

I will add that Donald Trump’s speech to the UN on September 19 lays down a similar marker, making sovereignty central to international order. Of course, there are plenty of inconsistencies here as well, as Trump insisted that the US doesn’t impose its political form on other countries while calling for democracy all over the place. But the question is the same: if you have to choose sovereignty or democracy, which do you choose? In Trump’s foreign policy, he seems to be choosing sovereignty: as analyzed by the blogger Sundance, the proprietor of the Conservative Treehouse blog, that foreign policy involves Trump using economic levers to compel larger states to take “ownership” of their clients and rein them in—this in particular has been Trump’s approach to China vis-à-vis North Korea and Pakistan vis-à-vis Afghanistan. Implicit in this approach is a world order based on a hierarchy of sovereignties, with the higher-level sovereigns supporting and constraining sovereignty on the lower levels. It’s not too difficult to imagine a new international law being written that would take into account these power differentials, and give all the major powers an interest in neither interfering in the others’ spheres nor using their own clients to nurture hostilities against rivals. The recognition that a leak in sovereignty in one place is a leak elsewhere, and perhaps ultimately everywhere, would spread. Insisting that other states exercise sovereignty (as long as one is not simultaneously undermining their sovereignty) has a moral justification that insisting they become democracies or expand human rights doesn’t. Perhaps at some point there will be the equivalent of a reformist vs.
revolutionary debate within absolutism, or NRx more generally—the reformist (or gradualist) case will be that it is possible to form an elite constituency that takes sovereignty as the highest priority, and that such a constituency will find it necessary to become increasingly hostile to liberalism and democracy and will thereby decide what “they want.”


What, exactly, is power? Who obtains it, who holds it, how is it manifested and used, how is it transmitted, and why? Power, as de Jouvenel says, is credit, which suggests that the origin of power is in the ceding of the decision to one person, or at least a single will, when all have to adhere to the same decision. We can think of obvious examples where this would be the case—during a hunt, or when the community is under attack. Someone who has led successful hunts, or defenses against attacks, in the past, is obeyed in a similar situation now. This assumes, though, that there was a first hunt, or self-defense, in which the need for leadership was experienced in the very act of someone taking it. Instead of a disorganized chase after the prey, or a rout by some enemy, the group was given form by the credit granted to whoever was presumed to be, or was proving to be, most capable. But this can only describe the human group—a group of advanced apes wouldn’t have a “moment of decision.” But the first decision, we originary thinkers assume, was made on the originary scene, and that decision was to not succumb to uncontrolled violence against the others. All subsequent decisions are to be modeled on that one. Power, then, first of all means structuring attention so as to quell or preferably pre-empt the panic that results from a collapse of shared attention into upwardly spiraling rivalries. It is that structuring of attention that makes the hunt or self-defense successful.

What is the best way of accounting for the growth of power, its institutionalization, and perpetuation even long after those wielding power have ceased to earn any “credit”? This is a set of issues requiring clarity within absolutism. The uncertainty begins with de Jouvenel and readings of de Jouvenel, starting with Moldbug’s—de Jouvenel’s analyses certainly lend support to the “High-Low v. Middle” structure that has been constitutive and canonical in absolutism and neo-reaction. He consistently shows “Power” undermining the middle layers (the aristocracy in particular) so as to flatten out the social structure and rule directly over an equalized mass. But is this because Power was “insecure” or because Power insatiably seeks to grow and extend itself? If Power is insatiable, that implies that there is always something outside of Power, something evading its grasp: presumably some irreducible human freedom or spontaneity. But that itself would indicate something less than secure in Power, something registered as “anomalous” somewhere within the power system. But this also implies that the more secure Power becomes in fact, the more intolerable it will find even minimal gaps in the extent of its reach. “Secure” and “Unsecure” are relative terms. An early medieval king ruling over a territory the size of a small town may consider his power quite secure if he can on occasion rouse his lords to mobilize their soldiers to defend against the predations of a gang of nomadic looters; the modern state apparently feels its power is insecure if there is a single “white supremacist” who can hold down a job. Why, though, describe the purge of “white supremacists” in terms of “unsecure” power rather than simply power hunger? What is it the state wants to do that it perceives the “white supremacists” to interfere with? The reasoning can quite easily get circular here: the state wants everyone to feel “safe,” which the existence of white supremacists prevents. 
It seems that just about any case where power is extended could be described one way or the other. So, which way is better, and why? The determination can’t be made on empirical grounds, because the conceptual order will lead to the empirical observation.

Let’s continue with our originary power analysis. The leader of the hunt or self-defense team acquires his credit by securing rewards for those who obey him. In fact, at more primitive levels of social development, the followers may get a larger share than the leader—the prestige of leadership is more important than the material reward, and that prestige depends upon others being rewarded. But the structuring of attention precedes the reward. It’s not enough that the activity was successful—any member of the group can claim credit for the success—the leader didn’t necessarily throw the spear that killed the buffalo, or kill the most enemy combatants. If leadership depended on such crude quantitative measures, it would be impossible to sustain. The source of power is representational. The first attempt to take down a buffalo fails. One member of the group shouts at another, let’s say for throwing and missing with his spear. The member who has been shouted at shoves his accuser. Other members of the group move in, ready to join one side or the other. Whoever can step in the middle of this simmering brew of resentments, stand between the sides without taking sides, making sure that if he has to block a blow from one side he shows himself ready to block a blow from the other, and then points to where the buffalo were last seen headed—that’s the leader, that’s who has power. And only one person can have it, because once one person has resolved things, the situation doesn’t allow for anyone else to step in. Of course, the first person to try this might get his skull bashed in, which would just mean that he doesn’t have the power. 
Exercising power means being able to “dwell” within the situation itself (you have to be ready to parry and if necessary return blows, you need to know who is most likely to strike, whose potential dominance might need to be countered) while simultaneously standing outside of it and reframing it (this is a distraction, our dinner is still out there).

So, that leader becomes “chief,” a quasi-permanent position, with ritual honors and responsibilities. But it’s not so easy to intervene in every dispute, to calm every panic, in just the right way, by recognizing and deflecting the precise structure of resentments. A repertoire of “moves” effective in deferring resentments develops, but such moves become stereotyped rather than crafted so as to be appropriate to the situation; new acts, moves and postures are created so as to ensure that potential combatants never get to the point where the leader has to directly step in. Credit is extended, which makes it more deeply rooted but also more tenuous. There’s always an element of bluff in the exercise of power. This means that the mettle of the leader is tested less often, at least publicly and unequivocally—he has to rely on tales of former heroic acts, a general “sense” that things are going as they should, and the proper exercise of ritual responsibilities. So far we’re talking about securing power, and we’re talking about it as a very serious problem—only one person can be chief, so absolutist premises hold, but there might be several individuals who would be just as good at it, and it wouldn’t be so hard for one of them to mobilize enough supporters for a real contest. The chief will need to create or “certify” new positions: “head warrior,” “storyteller,” “hunt planner,” “seer,” etc. This in turn produces new tests of his leadership capacity—can he manage his subordinates? For a very long time, even into the early modern period, it was expected that kings lead their subjects in war—at a certain point this became unthinkable, which means that something crucial had changed in the meaning of Power. Even modern rulers run risks that give them a quasi-military status—JFK is buried at Arlington National Cemetery because he was killed in the service of his country. But there’s no reason for these risks other than the inevitable imperfections of secret service protection.

At a certain point the position of power to be filled becomes more important than the person filling it. This has to happen as sacrality drains from the central figure himself; or, conversely, the regularized delegation of “offices” is what drains that sacrality. This would seem to be the perfect security of power—if anyone can be president, then the office of the presidency can’t be damaged beyond repair by any single occupant. This is also what gives Power its remorseless, inexorable tendency toward growth: if the person inhabiting the office doesn’t personify or exemplify the office by retrieving the sources of power (by leading an army against Power), then the accrual of power to the office as such will be an end in itself, certainly for those filling permanent positions auxiliary to the elective one. But the fact that the office is always “empty,” insofar as its occupant is an exchangeable cog like anyone else, really means that it’s a site of endless power struggles. Everyone can imagine they can define the office with their own set of imperatives. These power struggles contribute to the growth of “Power,” because if everyone thinks they can use Power, everyone wants it larger. So, are the contestants trying to secure power? Or is Power just following its own growth imperative? At this point the best qualifications for filling the highest offices no longer include the charisma of leadership, or earned credit—rather, those functionaries are recruited from the broader cultural training grounds established so as to continually replenish the elites with facsimiles of the existing ones. And what the future elites are trained in is how to play the idealized “principles” of Power against Power, the equality reflected in the abstraction of all individuals before Power against the insufficient degree of equality presently presided over by that Power.

In the midst of this, someone must be trying to secure power. Everyone can’t be engaged in a perpetual and increasingly reckless power grab all the time. There must actually be some way of securing power, otherwise what would we be talking about? Maybe power isn’t secured all the time, but we must always assume the possibility. Even those engaged in subverting power, except under the most desperate conditions, must want the technological capacities that will help them rule if they get the chance, and must therefore limit political encroachments upon the space granted to scientific inquiry and production processes. But the failure to actually secure power might very well accelerate power struggles and hence the growth of Power—so the dialectic in which attempts to secure power lead to the destruction of middle layers of authority, and hence to more insecure power, holds true under these conditions. Those trying to secure power will often be the beneficiaries of a previous power grab, so it’s not surprising that they won’t have the institutional, intellectual or moral resources to stop subsequent ones. In order to secure power, there is no alternative to returning to the originary form of power, in which an individual occupies the center, defers some imminent crisis, and redirects attention to the permanent center. The permanent center is nothing more than the possibility that there will always be someone who can occupy the center when needed upon any scene, large or small, central or peripheral, and that each of us be ready to do so or to defer to whoever does. Deferring to the permanent center entails proper naming and seeing to the order of names. By “naming” here, I mean a designation of your place in relation to the center. Your name is your discipline and the articulation of your origin and telos—it is given to you by others, and others ascertain your embodiment of it, but only you can fill it.
And the better you fill it the better equipped you will be to recognize whoever can best establish and perpetuate the proper order of names.