Sovereign as Onomastician-in-Chief

To see yourself as an “individual” is to see yourself as a center of attention, with as many qualifications (titles, formal associations, histories) as possible obscured—the more stripped of qualifications, the more individualized. Liberalism projects the denuded individual back to the founding of society, but that individual is obviously a result of liberalism. In other words, liberalism’s self-legitimating misconception doesn’t detract from the reality of such an individual—but it has to change our assessment of its meaning. Individuals can be removed from their supporting and defining institutional dependencies, which means that the individual is defined against those institutions and dependencies. (Eric Gans sees this self-definition as the project of romanticism.) To be an individual is to be in a perpetual state of mutiny against whatever form of order most directly threatens to define one. Don’t look at me as a “_____,” the individual demands, look at me as… the other of “_____.” Individualism is a kind of negative gnostic theology.

David Graeber’s discussion in Debt: The First 5,000 Years emphasizes the violence intrinsic to this abstraction of individuals from their dependencies. Humanism posits the “human” as the highest value, and what makes anything a “value” is its commensurability and exchangeability with other values—and against what can human value be defined other than against other humans? Gans sees the romantic production of the individual as a means of enabling humans to participate in the market—the creation of an “anti-social” self-representation is a way of achieving value within society (Gans calls this the “constitutive hypocrisy of romanticism”). But in that case it is humans, rather than things, that are circulating on the market. We may not readily see or feel the violence of this competitive self-valuing, habituated as we are to it, but it becomes easier if we imagine removing the (also unnoticed) limits upon individualization that must still exist. What if we were actually to define ourselves constantly, indiscriminately, against every social dependency—friends, families, colleagues, acquaintances, etc.? Such behavior would be psychopathic. Moreover, defining yourself against dependencies doesn’t leave those dependencies unaffected—rather, it has a deeply corrosive effect. Our mutinies always target specific dependencies, and are aimed at extracting specific concessions—hence, they are best described as hostage taking. Not the market itself, but the “market economy,” is a system of hostage exchange, of more and less direct kinds. It is promoted by those with the most to gain by sowing discord and disorder.

Now, the expanded economy of hostage taking follows the discrediting of the restricted economy of human sacrifice constitutive of sacral kingship and ancient imperial orders. Since there is no way back to sacrificial order, even if we wanted it (which we can’t really want), the central problem for absolutism is a non-sacrificial recentering. Absolutism extends its basic principles—a rejection of divided power, or imperium in imperio; and the assumption that all that is said and done within a sovereign territory is commanded or permitted by the sovereign—to the entire social order. To give someone responsibility for a specific institution or task is to provide them with all the means for fulfilling that responsibility along with freedom from interference, as long as the responsibility is indeed fulfilled. As opposed to the abstractive process of liberalism, absolutism would involve a concentrative process—placing everyone within orders in which their responsibilities are made clear. All contemporary issues, such as technological development, “bioethics,” social media, etc., would be assessed in these terms: how does a particular possibility make it possible to concentrate rather than abstract? The elimination of the abstraction of “the human” removes all potential sacrificial targets. Imagine that instead of singling out individuals as celebrities or villains, or getting suckered by the mysticisms of “human rights,” we were to assign responsibility for the actions of individuals (whether praiseworthy or blameworthy) to the executive within the supervising institution. But it’s wrong to say “we” would do the assigning; rather, it would be the sovereign that treats any act that might turn an individual into a cynosure as a problem for the reform of some institution.

That, in fact, is the defining purpose of the sovereign: to maximize individual responsibility for the institutions that maximize the embeddedness of the individual in the institution. This process of individualization through embeddedness ramifies throughout each institution, and is the object of the discourses and dialogues comprising the life of the institution. What we would always be talking about is how to enhance each individual’s responsibility within an order that thereby comes to be defined by increasing degrees of responsibility, and in that sense complexity. Linguistically, this process takes the form of naming—baptizing, so to speak, new roles to be filled by individuals. To name is both to reify, to create a role independent of whoever fills it, and to singularize, insofar as we can always distinguish between those who more or less adequately or authoritatively “inhabit” that name. The reification is then less an alienation or objectification than the creation of a new set of capacities. Names are the most basic link between individuals and the social order—that’s why everyone must have one. (Try to imagine a social order in which most people have names, but there are quite a few without.) Intellectually, naming is aligned to conceptualization: concepts are names for previously unseen objects, actions and processes. Once such things are named we can predicate them in various ways; just as important is that we can receive commands from the name. The first command is to refer to the named object within the sovereign order of names.

A good way (there are quite a few) to think about names is as follows. A is the daughter of B and C; the sister of D and E; the grandchild of F, G, H and J; the cousin of K, L, M, N and O; the niece of… the great-granddaughter of…, and so on. The perfect name would reference all of these relations, in the relative importance they have in that social order (how distant from siblings are cousins considered to be, in marriage and inheritance law or custom, etc.); it would also reference revered ancestors, both familial and those of the community; it would affirm more recent heroes, like the general who won the last war (in both cases, really just more distant relatives, founders of lines, we might say). In giving actual names to children, parents select from among all these relations and references, and thereby position the child within the field of the system of names. To name the child after a pop star is to announce the priority of celebrity over reverence of ancestors—naming after an ancestor is a possibility that has been rejected. But the child will also be given a middle name, and might be called by a nickname, and might be drawn elsewhere into the naming field. Again, concepts operate the same way, reorganizing and centering a conceptual field in a way that gives even an apparently familiar concept a new force.

Naming is the way the sovereign and his delegates (those who have been named by him) incorporate and authenticate institutions, authorities and practices. This is also why names are so important politically—it has often been noted how many political movements and even individuals have been named by their enemies, converting names intended as insults into badges of honor. Contemporary meming is essentially naming—each side trying to make names stick on the other (think about the origin of the word “branding,” and how it has come to be used). Whether or not a name sticks, and whether or not you can appropriate it, provides a good metric for how likely your position is to endure. If your political enemies can shower you with insults that define you, and you’re not able to transform them into badges of honor, that’s a good sign either that you’re on the wrong side or that your side is lacking in conceptual force.

The more “anti-fragile” your own position, the more you will be able to inhabit the various ways you have named yourself and been named. This is all part of the process of “auditioning,” that is, performing in such a way as to attract power centers interested in restoring order. What could be more desired by those recruiting an onomastician-in-chief than one proven in the study and deployment of names? This is not a superficial discipline, even if it works on surfaces—naming goes all the way down. The center is always named, and there is always a center. As soon as you take on or are given a name you have a persona, even if that persona is defined by the repudiation of the name. The name plugs you into the command order. Thinking politically is to a great extent the ability to think within the names imposed upon one or adopted. Any designation (e.g., “racist”) mobilizes a whole regime of commands that includes the named and others (what they must do to the one so designated). Thinking politically involves figuring out which commands to obey and when—some immediately, some in modified form, some at a yet to be determined future time (commands themselves are time sensitive, but not always equally so). Obey the ones that enhance embeddedness and extend the constitutive traditions of the institution (e.g., “which understanding of ‘racism’ are we working with here…?”) and defer to the extent possible those subversive of articulated obligations (“apologize!”).

Saturating the world with names saturates the world with sovereignty. Whenever one inhabits a name that can spread its shoots through the field of names and anchor it, one imagines a sovereignty that would formalize that designation. Absolutism is interested in making dependencies and embedments explicit; liberalism wants to deploy designations as sites of conflict, which is to say inscribe them with loopholes providing for shirking and defection. The most formidable liberal names (like “racist”) are justifications for shirking, defection, and the parasitic blackmail one must live on as a result. Reactionary Future’s proxy theory, which designates political actors as proxies (“rebellious tools”) of some powerful actor, suggests the need to distinguish between titles that are, we might say, “pre-proxified,” and those that are proxy-resistant because they are located within the pyramid of commands. The pre-proxified have the loopholes; the proxy-resistant designations come with embedments built in and the means to create further embedments. It’s a difference between namings that demand further abstraction (disembed from your traditions, from the chain of command you find yourself in) and namings that command further concentration (clarify the chain of command, embed more explicitly in your traditions). Once we are saturated in names, there are no more abstract humans; there is the sovereign presiding over the field of names.

The Roots of Political Correctness

In a recent article, Terry Teachout shows that past members of two great orchestras, in Vienna and Berlin, acquiesced in, and in some cases participated in, anti-Semitism during the Nazi reign in Germany and Europe.

https://www.commentarymagazine.com/articles/orchestras-and-nazis/

We know that most Germans of the time were at least acquiescent in the attempted genocide of the Jews, but we may not have known that the accomplished musicians of these orchestras were as well. Hitler is famous as a failed painter, and this article is a valuable reminder that success as an artist does not preclude racism. We can assume that these musicians were cultured and intelligent people, well-respected, who presumably had what they thought were good reasons for acting, or failing to act, as they did. And this should serve as a warning to all of us that we remain susceptible to the rhetoric of prejudice and scapegoating.

The message here is important but capable of distortion. It has degenerated into the assumption that anytime we find one group that is less powerful or successful than another group in any given society, the less powerful group must be persecuted victims. Eric Gans views our current victimary culture, or Political Correctness, as a reaction to the Holocaust. And the message of the persecuted minority is not unique to the Holocaust. We find it in the Bible, with the persecution of the Israelites by the Egyptians, and the persecution of Jesus and his followers. In American history, prominent examples include Jim Crow laws and the McCarthy-era congressional investigations of suspected communists.

Narratives of persecution today are ubiquitous. Ethnic and racial minorities, females, LGBT, the 99% and so on are supposedly persecuted mercilessly today, even when they are prosperous and middle class. These groups are assumed to be the modern equivalent of the European Jews during the Holocaust. As a result, anyone who opposes illegal immigration is by definition a racist, and anyone who opposes gay marriage is a Nazi. Since protesting the Nazis during their reign by any means, including violence, was justified, today’s SJWs (Social Justice Warriors) feel justified in using any means for protesting today’s supposed Nazis. Ironically, Jews do not enjoy any protection from SJWs, simply because Israel is more powerful than the Palestinians, despite the fact that Israel has helped the Palestinians more than any Arab country has (not to mention the PLO and Hamas), and despite the fact that it is a tiny country surrounded by larger and more numerous hostile neighbors.

Here is a defense of violence and the violation of basic human rights from a liberal journalist:

While we may want to rely on a rights defense in court, where First Amendment activity is threatened, our defense of dissent outside the courts should not be limited by what the state deems defensible by metrics of human or civil rights. A rights discourse, for example, would not defend the deliverer of that glorious punch to neo-Nazi Richard Spencer—it would, in fact, defend Spencer.
https://thenewinquiry.com/know-your-rights/

Spencer is literally a Nazi, so he is, rhetorically, a safe target, but SJWs also use violence against innocuous figures like Charles Murray, or against anyone who doesn’t fit the SJW political agenda. Note the euphemizing of violence as a “defense of dissent.”

The message taken from the Holocaust is “don’t be on the wrong side of history.” The orchestra members who submitted to the Nazis and the American politicians who resisted Civil Rights legislation look today like idiots or worse. But how are we to know which group will turn out to be the Nazis and which group will turn out to have been unfairly persecuted? Rather than apply any moral or rational criteria, people today simply assume that the weaker group is always morally justified and that the stronger group must be attacked and brought down. But is the less powerful group always oppressed? In America, ethnic and racial minorities, females, and LGBT are all accorded extraordinary advantages in many respects. In academia, the courts, the job market, and the media they are the privileged class; any public expression of skepticism about their supposed victimization is punished mercilessly. If SJWs were truly concerned about empowering minorities, I would not have any problem with their words and actions. But there is a steadfast refusal to look at the real reasons why some groups of people are less successful than others. Instead, there is a blanket assumption that the less powerful must have been unfairly discriminated against. Any discussion of personal responsibility is dismissed as racist, sexist, classist, etc. This willed blindness makes the problem worse.

At the bottom of liberals’ political correctness is a vanity about their supposedly iconoclastic moral virtue, a selfish desire to be perceived as bravely defying “the man,” and a cowardly fear of being accused of racism or whatever. Questioning what actually constitutes racism in any particular case is taken as evidence of racism. The PC crowd enjoys a rhetorical advantage because we must admit that no instance of actual discrimination should be tolerated, but they refuse to admit any discussion of what constitutes actual discrimination in any given case. The SJWs have claimed the moral high ground, so that opponents of PC appear to be on the “wrong side of history”—which is really just the accepted media narrative of the moment.

Debts and Deferences

(For those who would like to comment on the GABlog specifically, I have set up reddit page: https://www.reddit.com/r/GABlog/comments/6kukdg/debts_and_deferences/)

David Graeber’s Debt: The First 5,000 Years adds a few decisive nails to the coffin of liberal economics and politics. Liberal economists imagine money and markets emerging out of barter; typically, they cannot show that anything like this ever happened, any more than social contract theorists can find an instance where that fictional event ever happened. Villager A has too many chickens, while villager B has too many potatoes, and so A and B exchange chickens for potatoes; villagers C, D, E… n then get in on the game, so that at a certain point all the bartering gets too confusing and all must agree on a currency into which all values can be converted. All of this is ahistorical nonsense. Markets have historically been created and managed by states, for the purpose of maintaining ritual and military institutions. A fully marketized order, meanwhile, involves the violent disruption of personal and moral economies of credit (largely conducted without currency or calculation) and their replacement by debt regimes in which all of an individual’s possessions and the individual him/herself are alienable. Traditional debt regimes, in which economies are always moral economies, presuppose the inclusion of everyone within the system—debts never completely expropriate the debtors. The market economy has everyone treating everyone else as outside of the system of obligations, as a potential adversary.

Graeber distinguishes between three forms of social organization. First, what he calls “communist,” using the definition from the Communist Manifesto: “from each according to his abilities, to each according to his needs.” Graeber sees this as a kind of originary form of social relations, which we all live according to for much of our everyday lives—such a relation treats the world as a single, eternal object/environment to which everyone contributes and from which everyone receives indiscriminately (if you’ve ever held open a door for an elderly woman, you acted like a communist). The second form of social organization is “exchange,” in which things are seen as commensurable. The third form is “hierarchy,” in which there is no commensurability between objects and individuals, and obligations are set by precedent. The exchange relation is really the focus of Graeber’s book. He traces the disembedding of exchange relations from “communist” ones, and this seems to take place through the intervention of hierarchy. Kings need armies, and so they need to pay their soldiers, so they produce coins in order to do so; those soldiers need to spend the money somewhere, so tradesmen surround the military. Kings need to tax their subjects, so some way of measuring wealth becomes necessary; taxes can be set high enough so that subjects have to go into debt, which in turn makes it easier to appropriate their property. We need currency in order to pay such “antagonistic” debts.
Now, part of what makes Graeber’s discussion especially interesting (in a way, it’s the starting point of his discussion) is the perplexing fact that not only is debt generally and unthinkingly taken to be a moral question (“we must repay our debts,” “everyone must get what is due him”) but that moral thinking more generally seems to operate primarily with a vocabulary drawn from that of debt (God has given us all kinds of things and we in turn are deeply obliged to Him; we seek redemption from the slavery of sin, etc.).

Graeber’s intention is primarily to debunk this language of debt, which he examines in a sustained way in his chapter on “Primordial Debt.” He discusses sacrifice, and makes the very interesting observation that in some conditions the main form of currency (the representation of value into which exchangeable objects can be converted) is some object or objects (like cattle) that are most commonly used for sacrifice. For Graeber, the moral discourse of debt is irrational, and the standard of rationality seems to come from “communist” morality. For Graeber, the communists he discusses are much more rational than those of us besotted by debt-talk, who imagine all kinds of unpayable and even unimaginable debts (with God, for example, who couldn’t possibly need anything from us) rather than simply recognizing the basic fact of our interdependence. It would complicate Graeber’s argument to acknowledge that some form of exchange, or debt, not to mention hierarchy, is constitutive of the communist community as well. (Graeber doesn’t see “communism,” “exchange” and “hierarchy” as different kinds of social orders, but as moral economies that co-exist within a single order—still, it’s clear that social orders are distinguished by the predominance of one over the others, and that the egalitarian communities from which Graeber draws his critiques of pathological exchange orders are the more reliable repositories of communist morality.) He focuses on intra-communal relations, not their relation to the sacred center (their ritual order), so the possibility that the notion of debt is indeed primordial, preceding the origin of human inequality, doesn’t arise. This makes it easy for him to ridicule the notion, which some researchers purport to see as fundamental in the ancient Middle East and India, that existence itself is a form of indebtedness, as a kind of state ideology, contending that rather than seeing these theological claims as supposing a (ridiculous!) “infinite” debt, we should rather interpret


this list [of escalating debts] as a subtle way of saying that the only way of “freeing oneself” from the debt was not literally repaying debts, but rather showing that these debts do not exist because one is not in fact separate to begin with, and hence that the very notion of canceling the debt, and achieving a separate, autonomous existence, was ridiculous from the start. Or even that the very presumption of positing oneself as separate from humanity or the cosmos, so much so that one can enter into one-to-one dealings with it, is itself the crime that can be answered only by death. Our guilt is not due to the fact that we cannot repay our debt to the universe. Our guilt is our presumption in thinking of ourselves as being in any sense an equivalent to Everything Else that Exists or Has Ever Existed, so as to be able to conceive of such a debt in the first place. (68)


Contrary to his normal procedure, though, Graeber doesn’t show that anyone, other than a present-day anarchist or communist, actually has interpreted these notions in this way. It’s understandable that Graeber would want to insist upon an originary debt-free condition, since the only other way out of the violence endemic to impersonalized debt relations would be through hierarchy. Interestingly, Graeber points out that ancient and more recent pre-modern history is replete with revolts against the expropriating consequences of debt, where there is an implicit equality between debtor and creditor (insofar as they engage in exchange), but almost none against caste systems and slavery, and I would add far fewer against monarchy, or military hierarchies, where social distinctions are non-negotiable and beyond appeal—but he doesn’t pursue the implications of this observation.

Graeber makes an argument intimately related to one of Marx’s central ones, and it is an argument that must be conceded. What, exactly, makes it possible to exchange one object with any other; what makes the objects commensurable? The objects must be abstracted from the network of relations in which they are embedded, and by “abstracted” Graeber means “violently ripped out.” This analysis, like Marx’s of “abstract labor,” implicates exchange and debt in sacrifice by focusing on the most exchangeable of all objects: human beings. Early forms of exchange between communities and families involved replacing people, and therefore establishing their value (as represented by other objects): brides, slaves, murder victims, and so on. Although Graeber doesn’t speak in these terms, the implication is that hostage taking is central to the earliest forms of exchange. (It is not clear to me whether, for Graeber, or in reality for that matter, the more localized and personalized forms of “credit” Graeber valorizes precede and are distorted by the pathological, hostage taking forms or, on the contrary, the personalized forms are reforms and curtailments of hostage taking, under a new mode of the sacred and new mode of sovereignty. I find myself assuming the latter is the case, since the establishment by sovereigns of markets must have always involved some violent abstraction, and early forms of exchange between tribes, families and communities must have always presupposed the possibility of violent escalation.) Now, as I argued in my post on sacral kingship, for human beings to have this extremely high “value,” it must be possible to place them at the center—which means that the center must have already been expropriated by the “Big Man” and eventually permanently occupied by the sacral king. Again, we see the inseparability of “humanization” and human sacrifice. Humanity cannot be the highest value without humans being the most valuable exchangeable and sacrificable object. 
Graeber is right to associate this economy of hostages with the honor culture, which he especially dislikes, seeing one’s honor as being defined by the stripping of another’s. Flinching at the brutality of such systems, especially when one would be unable to imagine a credible alternative under those conditions, is a serious analytical failure—honor culture must not only have suppressed forms of violence endemic to relations within and between more communist orders, but any replacement of honor culture must defer some critical mode of violence that can be recognized as communally destructive within such societies. And this kind of recognition comes, to quote Marx, under conditions not of one’s choosing.

Despite his ridicule of theologies of “infinite” and “existential” debt, Graeber implicitly concedes that the development of these (critical) modes of thought in the “Axial Age” (800 BC to 600 AD) of the great ancient empires led to the diminishment and ultimately the elimination of the most egregious practices of mass slavery and human sacrifice in those empires. Once debt is conceived in infinite, existential terms, defining one’s relation to the sacred, then it is the assumption that debts can be settled through the exchange of hostages that becomes vulnerable to irony, ridicule and denunciation. Whether it’s “rational” (according to what tradition of rationality? Developed how—by reference to what system of exchange?) is completely irrelevant to the ethical advance that Graeber sees from the Axial to the Middle Ages (600-1450 AD), an advance we must see as a result of the gradual assimilation of the transcendent forms of the sacred of Buddhism, Christianity and Islam. The sacral king is the earliest form of absolutism: the sacral king is the cynosure of the order, the mediator between divine and human, and also for this reason a possible sacrifice—the first form of human sacrifice. The ancient emperors retain this sacrality in an extended form (they cannot be violated under any conditions), but since they remove themselves from the position of sacrificial victim, they are sacrificed to, not sacrificed. The ancient empires were regimes of expanded sacrifice, or hostage taking, in which the abstraction and redistribution of individuals was routinely used to settle accounts. This accounts for the moral state of the axial empires that Graeber deplores, and which led to the more metaphorical and spiritual forms of sacrifice that provided for the moral revolution which restored a more reciprocal economy, based upon embedded debt networks and personal credit rather than currency, in the Middle Ages.

We can now focus on the relation between hostage taking, or the violent extraction of humans from relations of “communism,” “exchange” and “hierarchy” that define them, and sovereignty. The forms of holiness inherited from the Axial Age dissenters invalidate hostage taking: each human being has a unique relation to the divine, so humans can no longer be treated as commensurable with one another. Rather than a possible sacrifice or receiver of sacrifice, the sovereign’s role is now to suppress sacrifice. To sacrifice a human requires that all the attention of the community converge on the sacrificial figure. He or she must be seen as the repository of all desires and resentments, the origin of some proliferating criminality or plague, the cause of dashed hopes. The post-axial sovereign ensures that such attention can only be organized on the terms of the sovereign. Hostage taking implies an honor system, and the suppression of sacrifice means the suppression of the honor system, which is to say the vendetta. The sovereign must settle accounts between groups and individuals in such a way that grievances are satisfied sufficiently so as to make recourse to the vendetta unthinkable. Sovereignty must reach into and shape the social order so as to block the emergence of power centers interested in restoring the honor system. This means a system of deferences that interpose between the convergent attention of the many and any individual the question, “what would the sovereign do (and have me do)”? Which further means that the sovereign must construct a justice system that disseminates answers to those questions broadly and clearly, verbally and through institutionalized practices. When our attention converges on an individual—a celebrity, an infamous criminal or defendant, the victim of a Twitter mob—we may insult, ridicule, taunt, ostracize, but will stop short of appropriating the sovereign’s prerogative to imprison or kill.
At a certain point, our attention converges on those who seem more likely than us to appropriate that prerogative (to organize a lynch mob, for example).

This gradual incorporation of the norms of axial age transcendence into Middle Ages governance accounts for the moral, political and even economic and technological advances steadily gained in medieval Europe (I’m not going to try to include parallel developments in the Islamic world, India and China). But insofar as these terms of transcendence inform the state, they can be invoked against the state, especially when they are embodied in a powerful institution with sacral imperial pretensions of its own. It is, after all, possible to concede that central power should be exercised absolutely while still insisting that the occupant of that central power be subject to replacement. Any specific argument along these lines will be marked by inconsistencies, but so will arguments for sovereign-determined succession. And the criteria for replacement will most likely derive from the transcendent terms that are embedded in the sovereign itself. It’s then a few steps to modern democracy, which insists on institutionalizing a system of replacement so that the temporariness of his hold on power will always be present in the mind of the sovereign. It’s then barely a step at all to propose that counters to sovereign action be built into sovereignty itself, in the form of “checks and balances.” But this makes the modern executive perilously close to becoming a sacrificial object again—not just in the once and for all manner in which the absolutist monarchs were sacrificed to inaugurate the modern age, but as a routine, almost ritualized matter. To refer again to my post on sacral kingship, I am arguing for an understanding of modern history as the ongoing attempt to create a satisfactory replacement for sacral kingship—sovereignty as a non-sacrificial center of attention that, even more, deflects towards itself all other potentially sacrificial centers of attention.

What makes the consequences of the "always already" divided sovereignty of medieval Christianity even more destructive is the possibility of re-"abstracting" individuals from their social networks of obligation and reciprocity. The breaking up of the honor system, which gives the individual a direct relation to the sovereign, makes this abstraction a site of power struggles—the source of the high-low vs. the middle power blocs. I'm not going to work through Graeber's complex discussion of the rise of modernity, but he associates the rise of "capitalism" with a massive new abstraction of individuals—not so much as human hostages (although Graeber foregrounds the importance of world conquest and slavery by the West to this process) but as potential capitalists who see the world completely in terms of exchange. This self-capitalization respects the transcendent axial terms because in self-capitalizing, the subject is self-sacrificing through labor, discipline, and the exclusion or reduction of whole domains of what have always been considered essential human experiences. The asceticism of the capitalist subject is certainly in the Christian tradition. As long as this type of subject is privileged, the unification and securing of power is impossible—the self-sacrificing individuals will always be eager clients for sowers of dissension and division. The modern market is a product of power as much as markets ever were, with modern capitalists, as Graeber argues, the descendants of the military adventurers of the early modern age—but, by setting markets against the state, liberalism makes the market a multiplier and intensifier of divided power. If liberalism does not directly restore the honor system, it always incites and ultimately relies on its return—leftism is the institutionalization and infinitely varied refinement of the vendetta.
So, absolutism demands the re-embedding of individuals into “communistic,” “exchange” and “hierarchical” orders, but on terms that preclude reversion to the honor system and preserve the mass literacy and numeracy presupposed, if not quite accomplished, by contemporary social orders.

To an extent, absolutists stand with some elements of the contemporary left, those that still have abolishing the capitalist world order on their agenda—at the very least, we can notice some of the same things deliberately ignored by liberals. Only a very few on the left, and those very feeble in both power and intellectual acuity, have kept their eye on replacing the metastasized systems of exchange that have swallowed up all human relations and made us all hostage to globalizing economic, political and media regimes. Transnational human rights regimes and climate fanaticism, to take two examples (both providing legal and moral bases for "political correctness" and supply chains from transnational economic entities to your humble social justice warrior), tie the left irreversibly to capitalism. Blackmailing corporations and other large institutions, along with infiltrating the permanent state (which ensures the blackmailing will work), pretty much defines the left at this point. No one is more calculating and exchange-oriented than they are. And those on the left who wish to return to class, economic inequality and socialist transformation are completely unwilling to challenge the splintering of the leftist project along identity lines.

Graeber, to his credit, says little about the prospects of the left, refusing to feed his readership false optimism. To his discredit, while insisting on the permanence of the "communistic" dimension of human experience (we could hardly rid ourselves of it if we wished), and devoting the bulk of his attention to distinguishing productive from pathological modes of exchange, he says very little, especially by way of proposing new ways of thinking, about the "hierarchical" dimension. He concedes its necessity, but never offers even the most qualified praise for responsible uses of hierarchy, much less a rigorous distinction between positive and negative forms. I have to assume that, as a confirmed leftist speaking mostly to other leftists (Graeber has been an important figure in the "anti-globalization" movement [the ones who smashed up Seattle back in antiquity, i.e., 1999], which, insofar as it still exists, has become the alt-right movement), he has scruples about doing so. We, of course, have no such scruples—quite to the contrary! The articulation of "communism," "exchange" and "hierarchy" can probably be incorporated very nicely into absolutism. The most originary manifestation of hierarchy is naming: to name another being is to establish an origin and destiny, and thereby constitute it, bring it into existence. Delegating is itself a form of naming. Naming is performative, like christening a ship or marrying a couple, activities that manifest the most basic social traditions. In a sense, that is what a tradition entails—a reciprocally constituting system of names.

The political formalism instituted by Moldbug is also a form of naming—anonymous, and therefore apparently spontaneous, powers are incorporated and made subordinate to the sovereign through naming. The media are propaganda agencies of some power center or another—the blogger Sundance at the Conservative Treehouse asserts that the CIA leaks to the Washington Post and the FBI to the New York Times. No doubt we could create a more comprehensive map of affiliations. In the interests of transparency, we should not only have such a map but it should be used to centralize the information policy of the regime. Every piece of information comes from some specific place in the chain of command. That means all information purveyors are named by the sovereign. Moving beyond this specific example, we can see that sovereign naming prevents the abstraction of individuals in a way that conforms to a dynamic social order. Something new—a new enterprise, an invention—comes out of something existing, something with a name, and is itself named as soon as it comes to the attention of the sovereign (and the sovereign keeps getting better at noticing and assessing novel phenomena).

How do we devise and apply new names? Like Graeber's "communism," this practice is part of our most elementary relations to the world and each other. To point to something that hasn't been noticed is to name it, even if only as "today's hamburger," as opposed to all the other hamburgers we've all eaten previously. Sovereign naming produces new centers of attention that direct our attention back to the sovereign's naming capacity. Here's a way to think about how "naming" as a form of thinking and speaking happens. Gertrude Stein had a habit of naming the chapters in her books. One reads through Chapters 1-6 and then the next chapter is "Chapter 3." This arrests one's attention and directs it toward the meta-critical dimension of books, to things we don't ordinarily notice. After one has read a lot of books, one notices patterns—so, a "typical" novel might have, say, 15 chapters, and the different chapters develop a certain character, or "feel," because of the formulas of novel writing. So, in a 15-chapter book, chapter 7 has a "turning point" or "climax," and when the reader gets to Chapter 7 such an expectation is implicit. One notices these patterns and forgets them, as we simply plug new books into the formula. But if there is a character or feel to "Chapter 7," then other chapters can be Chapter 7-ish, say, in a book that reworks the formulas. You can let the reader notice the subversion of the formula, or you can explicitly identify the upcoming chapter as, "really," Chapter 7, even if it comes after Chapter 2 and before Chapter 3. Whatever is better for writers, it is better for authority to explicitly name the "emergent property," and to do so, also explicitly, in the only way one can—tropologically, that is, by violating some linguistic rule or expectation, using a word in a "wrong" way that is now made "right" by its authoritative application.
Sovereign naming is thus the ostensive dimension of social order, which allows for a coherent array of imperatives and therefore a clarified chain of command. Of course, subjects will themselves get into the habit of naming, of making explicit their relations to each other, their obligations and expectations, and also their disappointments and amendments of those relations. We would have the means to resist our “abstraction” by deferring to one another’s names.

Equality and Morality

I appreciate Eric Gans’s detailed response to my blog post (In)equality and (Im)morality, and am glad to respond to at least most of the issues he raises there. Part of the problem here is that, as pretty much everyone knows by now, “equality” is used in so many different ways that it would be futile to define it in a single, agreed upon way. Maybe it’s even useless, or should only be used in very restricted and precisely defined contexts (like “economic inequality,” by which we mean the highest salary is x times larger than the smallest, or whatever). That, of course, would remove it from moral discourse altogether, or at least make it subordinate to a moral discussion conducted on different grounds (high levels of economic inequality might indicate, but not demonstrate, some underlying moral issue). Would moral discourse suffer from this excision or derogation? Let’s look at one of Eric’s examples:

In spontaneously formed groups up to a certain size and in a context that makes the sheer exercise of force impossible (in contrast to the “savage” groups favored in apocalyptic disaster films), people tend to cooperate democratically, profiting when necessary from the specific skills of individuals but not choosing a “king,” and the same is true in juries, where the foreman is an officer of the group rather than its leader. Democracy in this sense doesn’t deny that some people may have better judgment than others, but it permits unanimous cooperation, and I venture to say, corresponds to “natural” human interaction since the originary scene.

The point here is to affirm the originary nature of equality, here defined in the sense of the voluntary and spontaneous quality of the cooperation and the fluidity of leadership changes. I think we can easily find other examples of small group formation, especially under more urgent conditions, where hierarchies are firmly established and preserved, without the application of physical force. Indeed, that is what takes place in most of the disaster films I've seen—you couldn't really force someone to follow you out of a burning building, or find the part of the ship that will sink last, or keep one step ahead of the aliens. In such cases, people follow whoever proves he (is it sexist that it is still usually a "he"?) is capable of overcoming obstacles, keeping cool, anticipating problems, calming the others, fending off challenges without undermining group cohesion, etc. In the case of a jury, we have one very clearly designed and protected institution (and hardly spontaneously formed)—but why, exactly, is the foreman necessary? Why do we take it for granted that the jury can't simply spontaneously run itself, with a democratic vote over which piece of evidence to discuss next, then a democratic vote to decide whether to take a preliminary vote, but first a vote to decide whether the other votes should be by secret ballot, etc.? It seems pretty obvious that the process will work better, and lead to a more just result, if someone sets the agenda—but why is it obvious? An even broader point here is that we have no way of determining, on empirical grounds, whether the cooperation involved is "spontaneous," "voluntary" and "unanimous." These are ontological questions, which enter into the selection of a model. Any case that Eric could describe as people organizing themselves spontaneously, I could describe as people following someone who has taken the initiative. The question, then, is which provides the better description?
I think that the absolutist ontology I propose does, because to describe any group as organizing itself spontaneously collapses into incoherence. They can’t all act simultaneously, can they? If not, one is in the lead at any moment, and the others are following, in some order that we could identify. (If they don’t follow, we don’t have a group, and the question is moot.)

Does talk of equality and inequality help us here? I don't see how. Let's say a particular member of the jury feels that his or her contributions to the discussion have been neglected, and he or she resents that. There are two possibilities—one, the contributions have been less useful than those of others, meaning the neglect was justified; two, the contributions have been unjustly neglected. In the first case the moral thing to do (a good foreman would take the lead here) is to explain to the individual juror what has been lacking in his contributions, and suggest ways to improve them as the deliberations proceed. In the second case, the moral thing to do is to realize that the foreman has marginalized contributions that would have enhanced the deliberative process, and, in the interest of improving that process, she should acknowledge the value of those contributions, try to understand why they went unappreciated, and be more attentive to the distinctive nature of those contributions in the future. The juror's resentment, in either case, is framed in terms of a resentment on behalf of the process itself or, to put it in originary terms, on behalf of the center. The assumption is that all want the same thing—a just verdict. Once the resentment is framed in terms of unequal treatment, to be addressed by the application of the norm of equal treatment (everyone's opinion must be given equal weight? everyone must speak for the same amount of time?), the deliberative process is impaired, and if that framing is encouraged, it will damage the process beyond repair. The moral thing to do, then, is to resist such a framing. Now, it may very well be that the juror has been marginalized for reasons such as racial prejudice (it's also possible that the juror is complaining for that reason), in which case the deliberative process should be corrected to account for that.
The point, though, is always to improve that process, not to eliminate that form of prejudice (and all of its effects) within the jury room. Even if the juror in question is trying to reduce the conflict to one of some difference extrinsic to the process, the foreman should reframe it in this way—that is the moral thing to do.

I think this ontological question, which turns into a question of framing, can be situated on the originary scene itself. What matters on the originary scene is that everyone defer appropriation, and offer a sign to the others affirming this. Everyone does something—should we call that "equality"? We can, I suppose, but why? There are plenty of cases where "everyone" plays their individual part in "doing something," while those parts are widely disparate in terms of difficulty and significance to the project. It's just as easy to imagine a differentiated originary scene, where, for example, some sign only after others have already set the terms, so to speak, as it is to imagine a scene in which everyone signs simultaneously and with equal efficacy. Easier, in fact, I think. What matters is that everyone is on the scene. The same is the case when it comes to dividing the meal—there's no need to assume that everyone eats exactly the same amount; all we have to assume is that everyone eats together (unlike the animal pecking order, where each has to wait until the higher ranking animal has finished). This is what I think the moral model requires: everyone affirms the scene, and their relations to all others on the scene; and everyone is at the "table" and receives a "piece." What this will mean in any given social order can't be determined in advance and therefore will be something we can always argue over (and any ruler will want to receive feedback on), but that is what makes it a basis for criticizing the existing order. If the individual juror's contribution never does get recognized and this was in fact to the detriment of the deliberations, then we could say she has done her part in affirming the scene but has not gotten her "piece," or has been kept away from the "table," thereby weakening the scene as a whole. Again, I don't see any point along the way here where the concept of "equality" clarifies anything.

Now, I do believe that primitive (let’s say, stateless and marketless) communities are highly egalitarian. Equality does mean something here—this is their interpretation of the originary scene, and they certainly have very good reasons for it. What equality means might be that no goods of any kind are saved, that no family is allowed a larger dwelling than any other, that anyone who gets too good at something be punished in some way, that no one speak to another member of the community in such a way as to imply a relation of indebtedness, and so on. Such an understanding of equality still prevails at times, even in much more advanced and complex societies—we see it in children, among colleagues in the workplace, family members, and so on. We are all at least a little bit communist. But there’s nothing inherently moral about this “communism.” Sometimes it might be moral, sometimes not. It’s immoral to destroy a common project because you’re afraid someone else will show you up; it might very well be moral for children to “enforce” (within bounds) equal treatment by the parents of all the siblings, because this insistence might help correct for favoritism of which the parents might not be aware, and therefore might help the family to flourish. Again, though, the question of morality comes down to whether you are contributing to the preservation and enhancement of an institution.

I do agree that "telling the truth about human difference" is a marginal issue, and not a moral position in itself. My only point in this regard is that, in this case, telling the truth is more moral than lying, and the victimary forces poisoning public life today give us no choice but to do one or the other. I think we could get along fine without dwelling on tables showing the relative IQs of all the ethnic and racial groups in the world, but we need such a reference point if we refuse to concede that the only explanation for disparate outcomes is racism/sexism/homophobia, etc. And, really, if the more moral thing, in this instance, is to tell the truth, then it's hard to fault those who do so with a bit of gusto. Those flinging accusations of racism are not exactly restrained in their "debating" tactics, after all. A bit of tit for tat can be moral as well, although whether it involves "equality" is also a matter of framing. If there's a more moral way of responding to those who, by now, are claiming that we want to kill millions of people and openly celebrate violence in the streets, I'd be very glad to hear it. In fact, as some of those most viciously accused of "white supremacy" among other thought crimes have pointed out quite cogently, if, in fact, it turns out that some groups are on average smarter than others (and some groups are better than others in other ways, and some groups are better in math and others in verbal skills, etc.), there is absolutely no reason why we still can't all get along perfectly well. After all, more and less intelligent and capable people get along within the same institution all the time, so the only thing that would prevent this from being the case throughout society is persistent equality-mongering. That's why I think the best way forward in terms of using the originary scene as a moral model is to focus on common participation in, contribution to, and recognition by social institutions.
And if we are to direct our attention to the preservation, enhancement and creation of institutions (if we want to be good teachers and students within functioning schools and universities rather than affirmatively acted upon experts in group resentment, if we want to be good husbands, wives and parents within a flourishing system of monogamy rather than feminists, etc.) then we want those institutions to be well run and considerately run. And if we want them run in these ways, we want to bring the power of those running them as closely in line with their accountability as we can. In other words, we want cooperation to be directed (to go back to those opening examples, no one is going to propose allowing a university to be run “spontaneously,” I assume) by those with initiative, experience, and responsibility, and we want them to be appointed and assessed in a like manner, by others competent to do so. And that, I think, would bring us to a much higher level of morality.

It seems to me that the problem Eric is trying to solve here is the following: in any minimally civilized or developed order, “inequality” has developed to the point that the moral model must be “translated” in some way so as to minimize the resentments generated by that inequality. The way he thinks the historical process has enabled this is through the emergence of the market and liberal democratic political processes. The “actual” inequality (the existence of both billionaires and those who sleep under bridges) is mitigated by the “formal” equality of the market (my dollar is worth as much as anyone else’s), the vote, various “rights,” and so on. How can we tell whether this “works”? We can point out that the US is still richer and more powerful than Russia or China, I suppose, but, leaving aside how certain we can be about the causes (and continuance) of this Western predominance, we certainly can’t see this as a moral argument. (There’s nothing particularly moral about bribing the lower classes to remain quiescent.) I think there is an unjustified leap of faith here. It may be true that these forms of formal (pretend?) equality have been granted for the purposes Eric suggests, but that doesn’t prove they have actually served that purpose—it might mean exactly the opposite, that the progress of “equality” has been a means of ensuring that the real inequalities (or structures of power) remain untouched.

I would push this further—there is no reason to assume that whatever we can call "inequalities" are themselves the source of any resentment that might threaten the social order. We could say, for example, that the 19th century European working class resented having its labor exploited, being underpaid, being subjected to unsafe conditions, and so on. Or, we could say they resented having their communities undermined, the network of relations in which they were embedded torn apart, and being driven off the land and into packed cities where they were stripped of any system of moral reciprocities. Interestingly, both the capitalist and the revolutionary have good grounds for preferring the first explanation—it presents the capitalist with a problem he can solve politically (labor unions, welfare, minimum wage, public housing, etc.) and the communist with leverage (in case the capitalist palliatives don't work). Neither wants to confront the implications of the second explanation, which would require preserving or reconstructing a moral order. This too is a question of ontology and framing. Maybe real reciprocity rather than formal equality is called for. One could now say "but these changes were inevitable," but that's what one says in abandoning responsibility. One could say, "still, overall, modernity is preferable," but can one make that argument on terms other than those of modernity itself? Has anyone actually made the argument that increasing wealth, developing technology and improving living conditions requires liberal democracy and ever expanding forms of formal equality? Once we step outside of the frame forcing us to see "modernity" as a single, inevitable, beneficial package, the connection is not obvious at all. (It's interesting that there's never been much of a push to democratize or liberalize the structure of corporations. The continued existence of such a creature as a CEO doesn't seem to trouble our moral model.
Even the left has learned to love the CEO.) Every form of cooperation has an end and a logic to it, an end and logic that we can always surface from the language we find ourselves using in discussing that form. Schools are for learning, commerce is for mutually beneficial exchange, militaries are for fighting other militaries, families are for channeling sexual desire into the raising of new generations, conversations are for creating solidarities, exchanging information, trying out new roles, etc. We can frame all resentments as indicating possible infirmities in these forms of cooperation, and then address those resentments by repairing those forms where necessary. And by “we,” I mean whoever has the most responsibility within those forms. This would involve far more moral seriousness than robotically translating each complaint into an accusation of inequality. In this way the moral model would be just as real now as it was on the originary scene (it is still being used to sustain the scene), rather than an abstraction uncomfortably fit onto what we have decided to see as a qualitatively different set of relations.

Sacral Kingship and After: Preliminary Reflections

Sacral kingship is the political commonsense of humankind, according to historian Francis Oakley. In his Kingship: The Politics of Enchantment, and elsewhere, Oakley explores the virtual omnipresence (and great diversity) of sacral kingship, noting that the republican and democratic periods in ancient Greece and Rome, much less our own contemporary democracies, could reasonably be seen as anomalies. What makes kingship sacral is the investment in the king of the maintenance of global harmony—in other words, the king is responsible not only for peace in the community but peace between humans and the world—quite literally, the king is responsible for the growth of crops, the mildness of the weather, the fertility of livestock and game, and more generally maintaining harmony between the various levels of existence. Thinking in originary anthropological terms, we can recognize here the human appropriation of the sacred center, executed first of all by the Big Man but then institutionalized in ritual terms. The Big Man is like the founding genius or entrepreneur, while the sacred king is the inheritor of the Big Man’s labors, enabled and hedged in by myriad rules and expectations. The Big Man, we can assume, could still be replaced by a more effective Big Man, within the gift economy and tribal polity. Once the center has been humanly occupied, it must remain humanly occupied, while ongoing clarification regarding the mode of occupation would be determined by the needs of deferring new forms of potential violence.

One effect of the shift from the more informal Big Man mode of rule to sacral kingship would be the elimination of the constant struggle between prospective Big Men and their respective bands. But at least as important is the possibility of loading a far more burdensome ritual weight upon the individual occupying the center. And if the sacral king is the nodal point of the community's hopes he is equally the scapegoat of its resentments. Sacral kings are liable for the benefits they are supposed to bring, and the ritual slaughter of sacral kings is quite common, in some cases apparently ritually prescribed. It's easy to imagine this being a common practice, since not only does the king, in fact, have no power over the weather, but a king elevated through ritual means will not necessarily carry out the normal duties of a ruler any better than anyone else. Indeed, some societies separated out the ritual from the executive duties of kingship, delegating the latter to some commander, and thereby instituting an early form of division of power—but these seem to have been more complex and advanced social orders, capable of living with some tension between the fictions and realities of power (medieval to modern Japan is exemplary here).

It seems obvious that sacral kings, especially the more capable among them, must have considered ways of improving their position within this set of arrangements. The most obvious way of doing so would be to conquer enough territories, introduce enough differentiations into the social order, and establish enough of a bureaucracy to neutralize any hope on the part of rivals of replacing him. (No doubt, the "failures" of sacral kings to ensure fertility or a good rainy season were often framed and broadcast by such rivals, even if the necessity of carrying out such power struggles in the ritualistic language of the community would make it hard to discern their precise interplay at a distance.) Once this has been accomplished, we have a genuine "God Emperor" who can rule over vast territories and bequeath his rule to millennia of descendants. The Chinese, ancient Near East and Egyptian monarchies fit this model, and the king is still sacred, still divine, still ensuring the happiness of marriages, the abundance of offspring, and so on. If it's stable, unified government we want, it's hard to argue with models that remained more or less intact in some cases for a couple of thousand years. Do we want to argue with them?

The arguments came first of all from the ancient Israelites, who revealed a God incompatible with the sacralization of a human ruler. The foundational story of the Israelites is, of course, that of a small, originally nomadic, then enslaved, people, escaping from, and then inflicting a devastating defeat upon, the mightiest empire in the world. The exodus has nourished liberatory and egalitarian narratives ever since. Furthermore, even a cursory, untutored reading of the history of ancient Israel as recorded in the Hebrew Bible reveals the constant, ultimately unresolved tension regarding the nature and even legitimacy of kingship, either for the Israelite polity itself or those who took over the task of writing (revising? inventing?) its history. On the simplest level, if God is king, then no human can be put in that role; insofar as we are to have a human king, he must be no more than a mere functionary of God's word (which itself is relayed more reliably by priests, judges and prophets). At the very least, the assumption that the king is subjected to some external measure that could justify his restraint or removal now seems to be a permanent part of the human condition. Even more, if the Israelite God is the God of all humankind, with the Israelites His chosen priests and witnesses, the history of that people takes on an unprecedented meaning. Under conditions of "normal" sacral kingship, the conquest and replacement of one king by another merely changes the occupant, not the nature, of the center. Strictly speaking, the entire history (or mythology) of the community pre-conquest is cancelled and can be, and probably usually is, forgotten—or, at least, aggressively translated into the terms of the new ritual and mythic order.
Not for the Israelites—their history is that of a kind of agon between the Israelites and, by extension, humanity, with God—the defeats and near obliteration of the Jews are manifestations of divine judgment, punishing the Jews for failing to keep faith with God’s law. Implicit in this historical logic is the assumption that a return to obedience to God’s will is to issue in redemption, making the continued existence of this particular people especially vital to human history as a whole, but just as significantly providing a model for history as such.

At the same time, Judaic thought never really imagines a form of government other than kingship. As has often been noted, the very discourse used to describe God in the Scriptures, and to this day in Jewish prayer, is highly monarchical—God is king, the king of kings, the honor due to God is very explicitly modeled on the kind of honor due to kings and the kind of benefits to result from doing God’s will follow very closely those expected from the sacral king. The covenant between the Israelites and God (the language of which determines that used by the prophets in their vituperations against the sinning community) is very similar to covenants between kings and their people common in the ancient Near East. And, of course, throughout the history of the diaspora, Jewish hopes resided in the coming of the Messiah, very clearly a king, even descended from the House of David—so deeply rooted are these hopes that many Jews prior to the founding of the State of Israel, and a tenacious minority still today, refuse to admit its legitimacy because it fails to fit the Messianic model. All of this testifies to the truth of Oakley’s point—so powerful and intuitive is the political commonsense of humankind that even the most radical revolutions in understandings of the divine ultimately resolve themselves into a somewhat revised version of the original model. Of course, slight revisions can contain vast and unpredictable consequences.

So, why not simply reject this odd Jewish notion and stick with what works, an undiluted divine imperium? For one thing, we know that kings can't control the weather. But how did we come to know this? If in the more local sacral kingships, the "failure" of the king would lead to the sacrificial killing of that king (on the assumption that some ritual infelicity on the part of the king must have caused the disaster), what happens once the God Emperor is beyond such ritual punishment? Something else, lots of other things, get sacrificed. The regime of human sacrifice maintained by the Aztec monarchs was just the most vivid and gruesome example of what was the case in all such kingdoms—human sacrifice on behalf of the king. One of Eric Gans's most interesting discussions in his The End of Culture concerns the emergence of human sacrifice at a later, more civilized level of cultural development—it's not the hunter-gatherer aboriginals who offer up their firstborn to the gods, but those in more highly differentiated and hierarchical social orders. If your god-ancestor is an antelope, you can offer up a portion of your antelope meal in tribute; if your god is a human king, you offer up your heir, or your slave, because that is what he has provided you with. This can take on many forms, including the conquest, enslavement and extermination of other peoples in order to provide such tribute. What the Judaic revelation reveals is that such sacrifice is untenable. What accounts for this revelation? (It's so hard for us to see this as a revelation because it is hard for us to imagine believing that the king, for example, provides for the orderly movements of the heavenly bodies. But "we" believed then, just as "we" believe now, in everything conducive, as far as we can tell (which is to say, as far as we are told by those we have no choice but to trust), to the deferral of communal violence.)
The more distant the sacred center, the more all these subjects' symmetrical relation to the center outweighs their differences, and the more it becomes possible to imagine that anyone could be liable to be sacrificed. And if anyone could be liable to be sacrificed, anyone can put themselves forward as a sacrifice, or at least demonstrate a willingness to be sacrificed, if necessary. One might do this for the salvation of the community, but this more conscious self-sacrifice would involve some study of the "traits" and actions that make one a more likely sacrifice; i.e., one must become a little bit of a generative anthropologist. The Jewish notion of "chosenness" is really a notion of putting oneself forward as a sacrifice. And, of course, this notion is completed and universalized by the self-sacrifice of Jesus of Nazareth who, as Girard argued, discredited sacrifice by showing its roots in nothing more than mimetic contagion. (What Jesus revealed, according to Gans, is that anyone preaching the doctrine of universal reciprocity will generate the resentment of all, because all thereby stand accused of resentment.) No one can any longer carry out human sacrifices in good faith; hence, there is no return to the order of sacral kingship—and, as a side effect, other modes of human and natural causality can be explored.

Oakley follows the tentative and ultimately unresolved attempts of Christianity to come to terms with this same problem—the incompatibility of a transcendent God with sacralized kingships. There is much to be discussed here, and much of the struggle between Papacy and the medieval European kings took ideological form in the arguments over the appropriateness of “worldly” kings exercising power that included sacerdotal power. But I’m going to leave this aside for now, in part because I still have a bit of Oakley to read, but also because I want to see what is involved in speaking about power in the terms I am laying out here. Here’s the problem: sacral kingship is the “political commonsense of humankind,” and indeed continues to inform our relation to even the most “secular” leaders, and yet is impossible; meanwhile, we haven’t come up with anything to replace it with—not even close. (One thing worth pointing out is that if, since the spread of Christianity, human beings have been embarked upon the task of constructing a credible replacement for sacral kingship, we can all be a lot more forgiving of our political enemies, present and past, because this must be the most difficult thing humans have ever had to do.)

Power, for originary thinking, ultimately lies in deferral and discipline, a view that I think is consistent with de Jouvenel's attribution of power to "credit," i.e., faith in someone's proven ability to step into some "gap" where leadership is required. To take an example I've used before, in a group of hungry men, the one who can abstain from suddenly available food in order to remain dedicated to some urgent task would appear, and therefore be, extremely powerful in relation to his fellows. The more disciplined you are, the more you want such discipline displayed in the exercise of power, whether that exercise is yours or another's. We can see, in sacral kingship, absolute credit being given to the king. Why does he deserve such credit? Well, who are you to ask the question—in doing so, don't you give yourself a bit too much credit? As long as any failures in the social order can be repaired by more or better sacrifices, such credit can continue to flow, and, if necessary, be redirected. But if sacrifice is not the cure, it's not clear what is. If the king puts himself forward as a self-sacrifice on behalf of the community in post-sacrificial terms, well, so can others—shaping yourself as a potential sacrifice, in your own practices and your relation to your community, is itself a capability, one that marks you as elite, i.e., powerful—especially if you inherit the other markers of potential rulership, such as property and bloodline (themselves markers of credit advanced by previous generations). Unsecure or divided power really points to an unresolved anthropological and historical dilemma. If the arguments about Church and Throne in the Middle Ages mask struggles for power, those struggles for power also advance a kind of difficult anthropological inquiry, in which we are still engaged. There's no reason to assume that the lord who put together an army to overthrow the king didn't genuinely believe he was God's "real" regent on earth. It's a good idea to figure out what good-faith reasons he might have had for believing this.

Now, Renaissance and Reformation thinkers had what they thought would be a viable replacement for sacral kingship (one drawn from ancient philosophy): "Nature." If we can understand the laws of nature, both physical and human, we can order society rightly. This would draw together the new sciences with a rational political order unindebted to "irrational" hierarchies and rituals. I want to suggest one thing about this attempt (which has reshaped social and political life so thoroughly that we can't even see how deeply embedded "Nature" is in our thinking about everything): "Nature" is really an attempt to create a more indirect system of sacrifice. The possibility of talking about modern society as a system of sacrifice is by now a well-established tradition, referencing the modern genocides and wars along with far more mundane economic practices. Indeed, it's very easy to see the valorization of "the market" as an indirect method of sacrifice: we know that if certain restrictions on trade, capital mobility, ownership, labor-capital relations, etc., are overturned, a certain amount of resources will be destroyed and a certain number of lives ruined. All in the name of "the Economy." We know it will happen, and we can participate in the purging of the antiquated and inefficient, but no one is actually doing it—no one is responsible for singling out another to be sacrificed for the sake of the Economy. The indirectness is not just evasiveness, though—it does allow for the actual causes of social events to be examined and discussed. It's just that they must be discussed in a framework that ensures that some power center will preside over the destruction of the constituents of another. One could imagine justifying the "natural" sacrifices of a Darwinian social order if it served as a viable, post-Christian replacement for a no longer acceptable sacrificial order—except that it no longer seems to be working.
We can think, for example, about Affirmative Action as a sacrificial policy: we place a certain number of less qualified members of "protected classes" into positions with the predictable result that a certain number of lives and a certain amount of wealth will be lost, and we do this to appease the furies of racial hatred that have led to civil war in the past. But the fact that the policy is sacrificial, and not "rational," is proven by the lack of any limits to the policy. No one can say when the policy will end, even hypothetically, nor can anyone say what forms of "inequality" or past "sins" it can't be used to remedy. All this is to be determined by the anointed priests and priestesses of the victimary order. We can just as readily talk about Western immigration policies as an enormous sacrifice of "whiteness," for the disappearance of which no one now feels they must hide their enthusiasm. The modern social sciences are for the most part elaborate justifications of indirect sacrifices.

So, the problem of absolutism is then a problem of establishing a post-sacrificial order. This may be very difficult but also rather simple. Absolutism privileges the more disciplined over the less disciplined, in every community, every profession, every human activity, every individual, including, of course, sovereignty itself. We can no longer see the king as the fount of spring showers, but we can see him as the fount of the discipline that makes us human and members of a particular order. We could say that such a disciplinary order has a lot in common with modern penology, with its shift in emphasis from purely punitive to rehabilitative measures; it may even sound somewhat "therapeutic." But one difference is that we apply disciplinary terms to ourselves, not just the other—we're all in training. Another difference is a greater affinity with a traditional view that sees indiscipline as a result of unrestrained desire—lust, envy, resentment, etc.—rather than (as modern therapeutic approaches insist) the repression of those desires. (Strictly speaking, therapeutic approaches see discipline itself as the problem.) But we may have a lot to learn from Foucault here, and I take his growing appreciation of the various "technologies of the self" that he studied, moving a great distance from his initial seething resentment of the disciplinary order, as a grudging acknowledgment of that order's civilizing nature. Absolutism might be thought of as a more precise panopticon: not every single subject needs to be in constant view, just those on an immediately inferior level of authority.
Discipline, in its preliminary forms, involves a kind of "self-sacrifice" (learning to forego certain desires), and a willingness to step into the breach when some kind of mimetically driven panic or paralysis is evident can also be described in self-sacrificial terms—in its more advanced forms, though, discipline means being able to found and adhere to disciplines, that is, constraint-based forms of shared practice and inquiry. Then, discipline becomes less self-sacrificial than generative of models for living—and, therefore, for ruling and being ruled.