GABlog: Generative Anthropology in the Public Sphere

June 15, 2017

Sacral Kingship and After: Preliminary Reflections

Filed under: GA — adam @ 4:49 pm

Sacral kingship is the political commonsense of humankind, according to historian Francis Oakley. In his Kingship: The Politics of Enchantment, and elsewhere, Oakley explores the virtual omnipresence (and great diversity) of sacral kingship, noting that the republican and democratic periods in ancient Greece and Rome, to say nothing of our own contemporary democracies, could reasonably be seen as anomalies. What makes kingship sacral is the investment in the king of the maintenance of global harmony—in other words, the king is responsible not only for peace in the community but for peace between humans and the world—quite literally, the king is responsible for the growth of crops, the mildness of the weather, the fertility of livestock and game, and more generally for maintaining harmony between the various levels of existence. Thinking in originary anthropological terms, we can recognize here the human appropriation of the sacred center, executed first of all by the Big Man but then institutionalized in ritual terms. The Big Man is like the founding genius or entrepreneur, while the sacral king is the inheritor of the Big Man's labors, enabled and hedged in by myriad rules and expectations. The Big Man, we can assume, could still be replaced by a more effective Big Man, within the gift economy and tribal polity. Once the center has been humanly occupied, it must remain humanly occupied, while ongoing clarification regarding the mode of occupation would be determined by the needs of deferring new forms of potential violence.

One effect of the shift from the more informal Big Man mode of rule to sacral kingship would be the elimination of the constant struggle between prospective Big Men and their respective bands. But at least as important is the possibility of loading a far more burdensome ritual weight upon the individual occupying the center. And if the sacral king is the nodal point of the community's hopes, he is equally the scapegoat of its resentments. Sacral kings are held liable for the benefits they are supposed to bring, and the ritual slaughter of sacral kings is quite common, in some cases apparently ritually prescribed. It's easy to imagine this being a common practice, since not only does the king, in fact, have no power over the weather, but a king elevated through ritual means will not necessarily carry out the normal duties of a ruler any better than anyone else. Indeed, some societies separated out the ritual from the executive duties of kingship, delegating the latter to some commander, and thereby instituting an early form of division of power—but these seem to have been more complex and advanced social orders, capable of living with some tension between the fictions and realities of power (medieval to modern Japan is exemplary here).

It seems obvious that sacral kings, especially the more capable among them, must have considered ways of improving their position within this set of arrangements. The most obvious way of doing so would be to conquer enough territories, introduce enough differentiations into the social order, and establish enough of a bureaucracy to neutralize any hope on the part of rivals of replacing oneself. (No doubt, the "failures" of sacral kings to ensure fertility or a good rainy season were often framed and broadcast by such rivals, even if the necessity of carrying out such power struggles in the ritualistic language of the community would make it hard to discern their precise interplay at a distance.) Once this has been accomplished, we have a genuine "God Emperor" who can rule over vast territories and bequeath his rule to millennia of descendants. The Chinese, ancient Near Eastern and Egyptian monarchies fit this model, and the king is still sacred, still divine, still ensuring the happiness of marriages, the abundance of offspring, and so on. If it's stable, unified government we want, it's hard to argue with models that remained more or less intact in some cases for a couple of thousand years. Do we want to argue with them?

The arguments came first of all from the ancient Israelites, who revealed a God incompatible with the sacralization of a human ruler. The foundational story of the Israelites is, of course, that of a small, originally nomadic, then enslaved, people, escaping from, and then inflicting a devastating defeat upon, the mightiest empire in the world. The exodus has nourished liberatory and egalitarian narratives ever since. Furthermore, even a cursory, untutored reading of the history of ancient Israel as recorded in the Hebrew Bible reveals the constant, ultimately unresolved tension regarding the nature and even legitimacy of kingship, either for the Israelite polity itself or for those who took over the task of writing (revising? inventing?) its history. On the simplest level, if God is king, then no human can be put in that role; insofar as we are to have a human king, he must be no more than a mere functionary of God's word (which itself is relayed more reliably by priests, judges and prophets). At the very least, the assumption that the king is subjected to some external measure that could justify his restraint or removal now seems to be a permanent part of the human condition. Even more, if the Israelite God is the God of all humankind, with the Israelites His chosen priests and witnesses, the history of that people takes on an unprecedented meaning. Under conditions of "normal" sacral kingship, the conquest and replacement of one king by another merely changes the occupant, not the nature, of the center. Strictly speaking, the entire history (or mythology) of the community pre-conquest is cancelled and can be, and probably usually is, forgotten—or, at least, aggressively translated into the terms of the new ritual and mythic order. Not so for the Israelites—their history is that of a kind of agon between the Israelites and, by extension, humanity, with God—the defeats and near obliteration of the Jews are manifestations of divine judgment, punishing the Jews for failing to keep faith with God's law. Implicit in this historical logic is the assumption that a return to obedience to God's will is to issue in redemption, making the continued existence of this particular people especially vital to human history as a whole, but just as significantly providing a model for history as such.

At the same time, Judaic thought never really imagines a form of government other than kingship. As has often been noted, the very discourse used to describe God in the Scriptures, and to this day in Jewish prayer, is highly monarchical—God is king, the king of kings; the honor due to God is very explicitly modeled on the kind of honor due to kings, and the kind of benefits to result from doing God's will follow very closely those expected from the sacral king. The covenant between the Israelites and God (the language of which determines that used by the prophets in their vituperations against the sinning community) is very similar to covenants between kings and their people common in the ancient Near East. And, of course, throughout the history of the diaspora, Jewish hopes resided in the coming of the Messiah, very clearly a king, even descended from the House of David—so deeply rooted are these hopes that many Jews opposed the founding of the State of Israel, and a tenacious minority still today refuses to admit its legitimacy, because it fails to fit the Messianic model. All of this testifies to the truth of Oakley's point—so powerful and intuitive is the political commonsense of humankind that even the most radical revolutions in understandings of the divine ultimately resolve themselves into a somewhat revised version of the original model. Of course, slight revisions can contain vast and unpredictable consequences.

So, why not simply reject this odd Jewish notion and stick with what works, an undiluted divine imperium? For one thing, we know that kings can't control the weather. But how did we come to know this? If in the more local sacral kingships the "failure" of the king would lead to the sacrificial killing of that king (on the assumption that some ritual infelicity on the part of the king must have caused the disaster), what happens once the God Emperor is beyond such ritual punishment? Something else, lots of other things, get sacrificed. The regime of human sacrifice maintained by the Aztec monarchs was just the most vivid and gruesome example of what was the case in all such kingdoms—human sacrifice on behalf of the king. One of Eric Gans's most interesting discussions in his The End of Culture concerns the emergence of human sacrifice at a later, more civilized level of cultural development—it's not the hunter-gatherer aboriginals who offer up their first-born to the gods, but those in more highly differentiated and hierarchical social orders. If your god-ancestor is an antelope, you can offer up a portion of your antelope meal in tribute; if your god is a human king, you offer up your heir, or your slave, because that is what he has provided you with. This can take on many forms, including the conquest, enslavement and extermination of other peoples, in order to provide such tribute. What the Judaic revelation reveals is that such sacrifice is untenable. What accounts for this revelation? (It's so hard for us to see this as a revelation because it is hard for us to imagine believing that the king, for example, provides for the orderly movements of heavenly bodies. But "we" believed then, just like "we" believe now, in everything conducive, as far as we can tell, which is to say as far as we are told by those we have no choice but to trust, to the deferral of communal violence.) The more distant the sacred center, the more all these subjects' symmetrical relation to the center outweighs their differences, and the more it becomes possible to imagine that anyone could be liable to be sacrificed. And if anyone could be liable to be sacrificed, anyone can put themselves forward as a sacrifice, or at least demonstrate a willingness to be sacrificed, if necessary. One might do this for the salvation of the community, but this more conscious self-sacrifice would involve some study of the "traits" and actions that make one a more likely sacrifice; i.e., one must become a little bit of a generative anthropologist. The Jewish notion of "chosenness" is really a notion of putting oneself forward as a sacrifice. And, of course, this notion is completed and universalized by the self-sacrifice of Jesus of Nazareth who, as Girard argued, discredited sacrifice by showing its roots in nothing more than mimetic contagion. (What Jesus revealed, according to Gans, is that anyone preaching the doctrine of universal reciprocity will generate the resentment of all, because all thereby stand accused of resentment.) No one can, any more, carry out human sacrifices in good faith; hence, there is no return to the order of sacral kingship—and, as a side effect, other modes of human and natural causality can be explored.

Oakley follows the tentative and ultimately unresolved attempts of Christianity to come to terms with this same problem—the incompatibility of a transcendent God with sacralized kingships. There is much to be discussed here, and much of the struggle between the Papacy and the medieval European kings took ideological form in the arguments over the appropriateness of "worldly" kings exercising power that included sacerdotal power. But I'm going to leave this aside for now, in part because I still have a bit of Oakley to read, but also because I want to see what is involved in speaking about power in the terms I am laying out here. Here's the problem: sacral kingship is the "political commonsense of humankind," and indeed continues to inform our relation to even the most "secular" leaders, and yet it is impossible; meanwhile, we haven't come up with anything to replace it with—not even close. (One thing worth pointing out is that if, since the spread of Christianity, human beings have been embarked upon the task of constructing a credible replacement for sacral kingship, we can all be a lot more forgiving of our political enemies, present and past, because this must be the most difficult thing humans have ever had to do.)

Power, for originary thinking, ultimately lies in deferral and discipline, a view that I think is consistent with de Jouvenel's attribution of power to "credit," i.e., faith in someone's proven ability to step into some "gap" where leadership is required. To take an example I've used before, in a group of hungry men, the one who can abstain from suddenly available food in order to remain dedicated to some urgent task would appear, and therefore be, extremely powerful in relation to his fellows. The more disciplined you are, the more you want such discipline displayed in the exercise of power, whether that exercise is yours or another's. We can see, in sacral kingship, absolute credit being given to the king. Why does he deserve such credit? Well, who are you to ask the question—in doing so, don't you give yourself a bit too much credit? As long as any failures in the social order can be repaired by more or better sacrifices, such credit can continue to flow, and if necessary be redirected. But if sacrifice is not the cure, it's not clear what is. If the king puts himself forward as a self-sacrifice on behalf of the community in post-sacrificial terms, well, so can others—shaping yourself as a potential sacrifice, in your own practices and your relation to your community, is itself a capability, one that marks you as elite, i.e., powerful—especially if you inherit the other markers of potential rulership, such as property and bloodline (themselves markers of credit advanced by previous generations). Unsecure or divided power really points to an unresolved anthropological and historical dilemma. If the arguments about Church and Throne in the Middle Ages mask struggles for power, those struggles for power also advance a kind of difficult anthropological inquiry, in which we are still engaged. There's no reason to assume that the lord who put together an army to overthrow the king didn't genuinely believe he was God's "real" regent on earth. It's a good idea to figure out what good faith reasons he might have had for believing this.

Now, Renaissance and Reformation thinkers had what they thought would be a viable replacement for sacral kingship (one drawn from ancient philosophy): "Nature." If we can understand the laws of nature, both physical and human nature, we can order society rightly. This would draw together the new sciences with a rational political order unindebted to "irrational" hierarchies and rituals. I want to suggest one thing about this attempt (which has reshaped social and political life so thoroughly that we can't even see how deeply embedded "Nature" is in our thinking about everything): "Nature" is really an attempt to create a more indirect system of sacrifice. The possibility of talking about modern society as a system of sacrifice is by now a well-established tradition, referencing the modern genocides and wars along with far more mundane economic practices. Indeed, it's very easy to see the valorization of "the market" as an indirect method of sacrifice: we know that if certain restrictions on trade, capital mobility, ownership, labor-capital relations, etc., are overturned, a certain amount of resources will be destroyed and a certain number of lives ruined. All in the name of "the Economy." We know it will happen, and we can participate in the purging of the antiquated and inefficient, but no one is actually doing it—no one is responsible for singling out another to be sacrificed for the sake of the Economy. The indirectness is not just evasiveness, though—it does allow for the actual causes of social events to be examined and discussed. It's just that they must be discussed in a framework that ensures that some power center will preside over the destruction of the constituents of another. One could imagine justifying the "natural" sacrifices of a Darwinian social order if it served as a viable, post-Christian replacement for a no longer acceptable sacrificial order—except that it no longer seems to be working. We can think, for example, about Affirmative Action as a sacrificial policy: we place a certain number of less qualified members of "protected classes" into positions with the predictable result that a certain number of lives and a certain amount of wealth will be lost, and we do this to appease the furies of racial hatred that have led to civil war in the past. But the fact that the policy is sacrificial, and not "rational," is proven by the lack of any limits to the policy. No one can say when the policy will end, even hypothetically, nor can anyone say what forms of "inequality" or past "sins" it can't be used to remedy. All this is to be determined by the anointed priests and priestesses of the victimary order. We can just as readily talk about Western immigration policies as an enormous sacrifice of "whiteness," for the disappearance of which no one now feels they must hide their enthusiasm. The modern social sciences are for the most part elaborate justifications of indirect sacrifices.

So, the problem of absolutism is then a problem of establishing a post-sacrificial order. This may be very difficult but also rather simple. Absolutism privileges the more disciplined over the less disciplined, in every community, every profession, every human activity, every individual, including, of course, sovereignty itself. We can no longer see the king as the fount of spring showers, but we can see him as the fount of the discipline that makes us human and members of a particular order. We could say that such a disciplinary order has a lot in common with modern penology, with its shift in emphasis from purely punitive to rehabilitative measures; it may even sound somewhat "therapeutic." But one difference is that we apply disciplinary terms to ourselves, not just the other—we're all in training. Another difference is a greater affinity with a traditional view that sees indiscipline as a result of unrestrained desire—lust, envy, resentment, etc.—rather than (as modern therapeutic approaches insist) the repression of those desires. (Strictly speaking, therapeutic approaches see discipline itself as the problem.) But we may have a lot to learn from Foucault here, and I take his growing appreciation of the various "technologies of the self" that he studied, moving a great distance from his initial seething resentment of the disciplinary order, as a grudging acknowledgement of that order's civilizing nature. Absolutism might be thought of as a more precise panopticon: not every single subject needs to be in constant view, just those on an immediately inferior level of authority. Discipline, in its preliminary forms, involves a kind of "self-sacrifice" (learning to forego certain desires), and a willingness to step into the breach when some kind of mimetically driven panic or paralysis is evident can also be described in self-sacrificial terms—in its more advanced forms, though, discipline means being able to found and adhere to disciplines, that is, constraint-based forms of shared practice and inquiry. Then, discipline becomes less self-sacrificial than generative of models for living—and, therefore, for ruling and being ruled.

June 4, 2017

Cognition as Originary Memory

Filed under: GA — adam @ 6:57 pm

This is the paper (leaving aside any last-minute editing) that I will be reading (via Skype) June 9 at the 11th annual GASC Conference in Stockholm.

Cognition as Originary Memory

 

The shift in focus, in cognitive theory, from the relation between mind and objects in the world to the relation between minds mediated by inter-subjectivity brings it into dialogue with originary thinking. Michael Tomasello's studies in language and cognition have become a familiar reference point in originary inquiries, which have drawn upon the deep consonance between his notion of "joint attention" and the originary hypothesis's scenic understanding of human origin. Peter Gardenfors, in his How Homo Became Sapiens, builds on the work of Tomasello and others so as to include the development of cultural and technological implements, in particular writing, in this social understanding of cognition. Much of the vocabulary of cognitive thinking, though, still retains the assumption of separate, autonomous selves: sensations, perceptions, ideas, thoughts, minds, feelings, knowledge, imagination and so on are all experiences or capacities that individuals have, even if we explain them in social and historical terms. My suggestion is that we think of cognition, of what we do when we think, feel, remember and so on, directly in linguistic terms, as operations and movements within language, in terms that always already imply shared intentionality. In this way we can grasp the essentially idiomatic character of human being.

 

Eric Gans's studies of the elementary linguistic forms provide us with an approach to this problem. His most extended study of these forms, of course, is in The Origin of Language, but he has shorter yet sustained and highly suggestive discussions of the relations between the ostensive, the imperative and the declarative in The End of Culture, Science and Faith, Originary Thinking, and Signs of Paradox. In The End of Culture Gans uses the succession of linguistic forms to account for the emergence of mythological thinking and social hierarchy, in Science and Faith to account for the emergence and logic of monotheism, in Originary Thinking, among other things, to propose a more rigorous theory of speech acts, and in Signs of Paradox to account for metaphysics and the constitutive paradoxicality of advanced thought. It makes sense to take what are in these cases historical inquiries and use them to examine individual or, to use the Girardian term, "interdividual," cognition, which is always bound up in anthropomorphizing our social configurations in terms of a center constituted out of our desires and resentments.

 

In The Origin of Language Gans shows how each new linguistic form maintains, or preserves, or conserves, the "linguistic presence" threatened by some limitation in the lower form. So, the emergence of the imperative is the making present of an object that an "inappropriate ostensive" has referred to. Bringing the object "redeems" the reference. The assumption here, it seems to me, is that the loss of linguistic presence is unthinkable—the most basic thing we do as language users is conserve linguistic presence. Another key concept put to use early on in The Origin of Language is the "lowering of the threshold of significance," which is to say the movement from one significant object in a world composed of insignificant ones to a granting of less and less significance to more and more objects. I think we could say that lowering the threshold of significance is the way we conserve linguistic presence: what threatens linguistic presence is the loss of a shared center that we could point to; by lowering the threshold of significance we place a newly identified object at that center. So, right away we can talk about "thinking" or "cognition" as the discipline of conserving linguistic presence by lowering the threshold of significance.

 

This raises the question of how we conserve linguistic presence by lowering the threshold of significance. If linguistic presence is continuous, then our relation to the originary scene is continuous—in a real sense, we are all, always, on the originary scene—it has never “closed.” In that case, a crisis in linguistic presence marks some weakening of that continuity with the originary scene—the crisis is that we are in danger of being cut off from the scene. But in that case, continuity with the scene must entail the repetition of the scene or, more precisely, its iteration. As long as we are within linguistic presence we are iterating the original scene, in all of our uses of signs. Any crisis must then be a failure of iteration, equivalent to forgetting how to use language. The conservation of linguistic presence, then, is a remembering of the originary scene. Our thinking always oscillates between a forgetting and remembering of the originary scene. But this oscillation must itself be located on the originary scene, which then must be constituted by a dialectic of forgetting and remembering, or repeating and iterating. For my purposes, the difference between “repeat” and “iterate” is as follows: repeating maps the sign onto the center; iterating enacts the center-margin relation.

 
Now, let's leap ahead to the linguistic form in which we do most of our thinking: the declarative. The declarative has its origins in the "negative ostensive," the response to the "inappropriate imperative," where the object cannot be provided, the imperative cannot be fulfilled, and linguistic presence is therefore threatened. But Gans is at pains to distinguish this "negation" from the logical negation that can come into being only with the declarative itself. He refers to the negation in the negative ostensive as the "operator of interdiction," which he further suggests must be rooted in the first proto-interdiction, the renunciation of appetite on the originary scene. This remembering of the originary scene further passes through other forms of interdiction which entail "enforcement" through what Gans calls "normative awaiting"—he uses examples like the injunction to children not to talk to strangers. As opposed to normal imperatives, these interdictions can never be fulfilled once and for all. Now, even keeping in mind the limited resources available within an imperative culture, this is not an obvious way to convey the information that the demanded object is not available. The issuer of the imperative is told not to do (something)+the object. Not to continue demanding, perhaps; not to do more than demand, i.e., not to escalate the situation. None of these alternatives, even combined with the repetition of the object's name, seems to communicate anything about the object itself. But we can read the operator of interdiction as referring to the object—the object is being told not to present itself. But by whom? Clearly not the speaker. I think the initial declarative works because both possibilities are conveyed simultaneously—the "imperator" is ordered to cease pursuing his demand, and the object is ordered, ultimately by the center, not to be present, which in turn adds force to the interdiction directed back at the imperator, who donates his imperative power to the center. In essence, the declarative restores linguistic presence by telling someone that they must lower their threshold of significance because the object of their desire, as they have imagined it, has been rendered unavailable by, let's say, "reality." The lowered threshold brings to attention a center yet to be figured by actions, from a direction and at a time yet to be determined.

 

Now, the embedding of the declarative in the imperative order would not be very important if, once we have the declarative, we simply have the declarative, i.e., a new linguistic form irreducible to the lower ones, in the way biology is irreducible to chemistry, and chemistry to physics. But biology is still constrained by chemistry, and chemistry by physics. So is the declarative constrained by the imperative order it transcends and, of course, the imperative by the ostensive. The economy of the dialectic of linguistic forms is conserved. Just as on the originary scene remembering the sign is a way of forgetting the scene, immersion in the declarative dimension of culture is a forgetting of the imperative and the ostensive. To operate, to think and communicate in declarative terms is to imagine oneself liberated from imperatives. This gets formulated, via Kant, in imperative terms: to be a "declarative subject" is to treat others as ends, never merely as means, and to will that your own actions embody a universal law binding on everyone. We could call this an ethics of the declarative. This imperative remembers the origin of the declarative in a kind of imperative from the center to suspend imperatives amongst each other. We could say that logic itself recalls an imperative for the proper use of declaratives, one that allows no imperatives to be introduced, even implicitly, into the discourse at hand—but, of course, this is accomplished in overwhelmingly imperative terms, as all manner of otherwise perfectly legitimate uses of language must be subjected to interdiction. Even more, these imperative uses of the declarative include the imperative not to rest content with any particular formulation of that imperative: what, exactly, does it mean to treat another as an end or a means, and how can you tell whether another is really taking your action as a law—what counts as adjudication here? If you undertake to treat others only as ends in consequence of your devotion to the categorical imperative, aren't you treating them as a means to that end? The paradoxes of declarative culture and subjectivity derive from the ineradicability of the absolute imperative founding them.

 

The most decisive liberation of the declarative from the imperative can be seen in the cognitive ramifications of writing, as explained most rigorously, I think, by David Olson in his The World on Paper. Olson argues that it is the invention of writing, alphabetic writing in particular, that turns language into an object of inquiry: something we can break down into parts that we then rearticulate synthetically. These parts are first of all the sounds to be represented by letters, but just as much the words, or parts of sentences, that are identified through writing for the first time. The grammatical analysis of the sentence treats the sentence as a piece of information, makes it possible to construct the scene of speech as a multi-layered dissemination of information about that scene, and thereby provides a model for treating the entire world as a collection of bits of information, ultimately of an event of origin through speech. We could see this as a declarative cosmology. In that case the world can be viewed as a constant flow of information conveyed through everything that could be an object of an ostensive, that is, everything that could effect some shift of attention. This declarative metaphysics only comes to fruition in the computer age. We keep discovering that each piece of information is in fact just a piece of a larger piece of information that perhaps radically changes the meaning of the piece we have just assimilated. This is an intrinsic part of scientific inquiry, but it subverts more local and informal inquiries with a much lower tolerance for novelty because of a greater reliance on ostensive and imperative culture. Declarative culture promises us we will only have to obey one imperative: the imperative of reality. In that case, we should be able to bracket and contain potentially subversive inquiries into reality by constructing institutions that introduce new increments of deferral and upward gradations of discipline, and therefore social integrity, facilitating the assimilation of transformative knowledge. Olson himself, in his Psychological Theory and Educational Reform, seems to think along similar lines by pointing to the intrinsic connection between a literate population and large-scale bureaucracies, which is to say hierarchical orders predicated upon the ongoing translation of language into disciplinary metalanguages that simultaneously direct inquiry and impose discipline. However, if we take declarative culture to provide a mandate, an imperative, to extirpate all imperatives that cannot present themselves as the precipitate of a declarative, then those flows of information come equipped with incessantly revised imperatives coming from no imperative and ostensive center, subjecting imperative traditions to constant assault from hidden and competing metaphysical centers.

 

There will always be imperatives that cannot be justified declaratively, because the lowering of the threshold of significance generates new regions of ostensivity, which generate imperatives in order to establish guardianship over those regions, in turn leading to requests for information, i.e., interrogatives, which themselves presuppose a cluster of demands that attention be directed in certain ways. In the long term most, maybe all, imperatives could be provided with a declaratively generated genealogy, but only if we for the most part obey them in the meantime. This constitutively imperative relation to a center could be called an "imperative exchange." I do what you, the center, the distillation of converging desires and shared renunciations, command, and you, the center, do what I request, that is, make reality minimally compliant. We must think in this way in most of our daily transactions—the alternative would be to be perpetually calculating, on the basis of extremely limited and uncertain data, the probabilities of the various possible consequences of this or that action. For the most part, we have to "trust the world," since we as yet have insufficiently advanced internal algorithms to operate coherently without doing so. The development of declarative, that is, literate, culture heightens this tension by establishing with increasing rigor both a comprehensive centralized, which is to say imperative, order and an interdiction on referring to that order too directly. The absolutized imperative founding the declarative order forbids us to speak and therefore think about it.

 

The revelation of the declarative sentence as the name of God, analyzed by Gans in Science and Faith, his study of the Mosaic revelation of the burning bush, cancels this imperative exchange, which leads one to place a figure at the disappointing center, and replaces it with the information that since God has given everything to you, you are to give everything to God, which is to say to the origin of and through speech. There is no more commensurability and therefore no more exchange. You are to embody the conversion of imperatives into declaratives through readiness to have those imperatives converge upon you. Imperative exchange is ancestor worship, and the absolute imperative embedded in I AM THAT I AM is to suspend ancestor worship and remember the originary scene—that is, remember that it is the participation of all in creating reciprocity that generated the sign, not the other way around. But imperative exchange cannot be eliminated—it is embedded in our habits, it is the form in which we remember the sign and forget the scene—if I do this, reality will supply that. Thinking begins with the failure of some imperative exchange—I did this, but reality didn’t supply that, and why in the world should I have expected it to, since it’s not subject to my commands or tied to me by any promise. The declarative sentence, then, is best understood as the conversion of a failed imperative exchange into a constraint—in thinking, you derive a rule from the failure of your obedience to some command to garner a commensurate response from reality. This rule ties some lowering of significance to the maintenance of linguistic presence, as this relationship requires less substantial or at least less immediate cooperation from reality. We get from the command to the rule by way of the interrogative, the prolongation of the command into a request for the ostensive conditions of its fulfillment. The commands we prolong are themselves embedded in the declaratives, the discourses, we circulate through—raising a question about a claim is tantamount to identifying an unavowed imperative, some attempt at word magic, that claim conveys. This is how we oscillate between the imperative and ostensive worlds in which we are immersed and the declarative order we extract from and use to remake those worlds. A good question prolongs the command directed at reality indefinitely, iterating it through a series of possible ostensive conditions of fulfillment, which can only be sustained by treating the declarative order as a source of clearer, more convertible commands.

 

June 2, 2017

(Im)morality and (In)equality

Filed under: GA — adam @ 9:49 am

I’d like to work with a few passages from Eric Gans’s latest Chronicle of Love & Resentment (#549) to address some critical questions regarding morality and equality in originary thinking. Needless to say, I share Gans’s “pessimism” regarding the future of Western liberal democracies while seeing (unlike Gans) such pessimism for liberal democracy as optimism for humanity.

What kind of state-level government is feasible in the Middle East?—and one could certainly include large areas of Africa in the question. The fact that we have no clear response suggests that the end of colonialism, however morally legitimate we may find it, did not resolve the difficulty to which colonization, both hypocritically and sincerely, had attempted to respond: how to integrate into the global economy of technologically advanced nation states those societies that remain at what we cannot avoid judging as a lower level of social organization.

So, the end of colonialism is morally legitimate, even though it has left vast swathes of the world increasingly ungovernable, and made it impossible to integrate them into the global economy. What kind of morality is this, then—what does it consider more important than maintaining a livable social order? A note of doubt is introduced here, though: "we may find" this to be morally legitimate, but presumably we may not. There is some straining against the anti-colonialist morality here. The morality that we may or may not consider legitimate is, I assume, that of judging some forms of social organization as lower than others. But what makes refraining from this judgment moral? Colonialism involved governing others according to norms different from those according to which the home country was governed, but unless we assume that this governing was done in the interests of the colonizer and against the interests of the colonized, and could only be so, the moral problem is not clear. These assumptions therefore get introduced into discussions of the colonial relation, but since those assumptions are as arbitrary regarding this form of governance as any other, there's clearly something else going on.

There is no “racism” here; on the contrary, by assuming that all human beings have fundamentally the same abilities, and that we owe a certain prima facie respect to any social order that is not, like Nazism, altogether pathological, we cannot help but note that some societies are less able than others to integrate the scientific and technological advances of modernity. Thus health crises in Africa continue to be dealt with in what can only be called a “neocolonial” fashion, however unprofitable it may be for the former colonizers, who send doctors, medicine, medical equipment, and food aid to nations suffering from epidemics of Aids or Ebola, or starving from drought or crop failure—or rebuilding from earthquakes, as in Haiti.

The most moral gestures of the modern West are, it seems, its most colonial ones. And what could more disastrously interfere with this moral impulse than the assumption that "all human beings have fundamentally the same abilities"? That assumption forces you to look for dysfunctions on a sociological and historical level—one must conclude it is colonialism itself that is responsible for the disasters of the undeveloped world. But if that is your assumption, you can only behave morally—i.e., actually treat other people as needing your help—by finding some roundabout way of claiming that that is not what you're doing. That's the best case scenario—the worst case is that you keep attacking the "remnants" of colonialism itself, even if they are the most functional part of the social order. Morality and immorality seem to have switched places.

For if we have indeed entered the “digital” age, implying an inalterable premium for symbol manipulation and hence IQ-type intelligence, then the liberal-democratic faith in the originary equality of all is no longer compatible with economic reality. Hence the liberal political system, as seems to be increasingly the case today, cannot simply continue to correct the excesses of the market and provide a safety net for the less able. Increasingly the market system seems to have only two political alternatives. It can be openly subordinated to an authoritarian elite, and in the best cases, as in China, achieve generally positive economic results. Or else, as seems to be happening throughout the West, it is fated to erect ever more preposterous victimary myths to maintain the fiction of universal political equality, rendering itself all but impotent against the “post-colonial” forces of radical Islam.

If vast inequalities based in part upon natural differences in ability are incompatible with the liberal democratic faith in the originary equality of all, then that faith was always a delusion. Some are arguing that the inequalities opening up now over the digital divide are the most massive ever, but who can really know? What are our criteria—are today's differences greater than those between medieval lords and serfs, or between 19th century industrialists and day laborers paid by piecework? There's no common measure, but every civilized society has highly significant inequalities, and today's is not qualitatively different in that regard. Perhaps there is now less hope that the inequalities can someday be overcome or lessened, but that hope is itself just a manifestation of liberal-democratic faith, so we are going in a circle. It would be more economical to see that loss of faith as an increase in clarity. But what does the increasing or more intractable inequality have to do with the diminishing legitimacy function of the welfare state—is it that the rich no longer have enough money to support it, or that the less able are no longer willing to accept the bribe (or have figured out that the bribe will continue even if legitimacy is denied)? The choice between an authoritarian China-style solution and the preposterous victimary imaginary of the West seems clear, but why be downcast about it? If China is the "best case" so far, presumably there can be yet better cases. Obviously creating myths so as to maintain fictions is unsustainable—what next, legends to preserve the myths that maintain the fiction?—and it might be a relief to engage reality. (In fact, if the welfare state no longer serves a legitimating function, that may be because yet another—let's just call it a—lie has been exposed, that of endless upward mobility and generational status upgrades.) But does not the discarding of lies and fantasies and the apprehension of reality represent greater morality, rather than immorality?

 

Victimary thinking is an ugly and dangerous business, but the inhabitants of advanced economies in their “crowd-sourced” wisdom appear to have determined so far that it is the lesser evil compared to naked hierarchy. The “transnational elite” imposes its own de facto hierarchy, but masks it by victimary virtue-signaling, more or less keeping the peace, while at the same time in Europe and even here fostering a growing insecurity.

We have the "crowd-sourced" wisdom of the inhabitants, but then the "transnational elite" and its hierarchy makes an immediate entrance. Has that elite not been placing its finger on the crowd-sourcing scale (so to speak)? Through which—through whose—sign exchange systems has the wisdom been crowd-sourced? So, let's translate: the transnational elite masks its hierarchy by imposing victimary virtue-signaling, but is now running into diminishing returns—the very method that has more or less kept the peace now generates insecurity. It remains only to add that the elites don't seem to have a Plan B, and appear determined to continue autistically doubling down on their masking and signaling.

But as the economy becomes ever more symbol-driven, these expedients are unlikely to remain sufficient. It would seem that unless science can find an effective way of increasing human intelligence across the board, with all the unpredictable results that would bring about (including no doubt ever higher levels of cybercrime), the liberal-democratic model will perforce follow the bellwether universities into an ever higher level of thought control, and ultimately of tyrannical victimocracy. At which point the “final conflict” will indeed be engaged, perhaps with nuclear weapons, between the self-flagellating victimary West and a backward but determined Third World animated by Islamic resentment…

Or not. Perhaps the exemplary conflict between Western-Judeo-Christian-modern-national-Israeli and Middle-Eastern-Islamic-traditional-tribal-Palestinian can be resolved, and global humanity brought slowly into harmony. Or perhaps the whole West will decline along with its periphery and our great-grandchildren will grow up peacefully speaking Chinese.

 

But is the China model exclusive to China? Can we not, in a moment of humility, study the China model, and the way it retrieves ancient Chinese traditions from the wreckage of communism? And, in a renewal of characteristic Western pride, adapt and improve upon the Chinese model? This would require a return to school regarding our own traditions, subjecting them to an unrestrained scrutiny that even their most stringent critics (Marx, Freud, Nietzsche, Heidegger, Derrida…) could never have imagined. But what's the point of a revolutionary and revelatory theory like GA if not to do exactly that? The first question to take up would have to be…

 

Human language was the originary source of human equality, and if our hypothesis is correct, it arose in contrast to the might-makes-right ethos of the animal pecking-order system. The irony would seem to be that the discovery of the vast new resources of human representation made possible in the digital age is in the process of reversing the residue of this originary utopia more definitively than all the tyrannies of the past. Indeed, we may now find in the transparent immorality of these tyrannies a model to envy, because it provided a fairly clear path to the “progress” that would one day overturn them. Whereas for the moment, no such “enlightened” path to the future can be seen.

 

That of the relation between morality and equality. This is the heart of the matter. Human equality is utopian, but then it couldn't be at the origin, because the origin couldn't be utopian. Morality has nothing, absolutely nothing, literally nothing, to do with equality. We should reverse the entire frame here and say there is no equality, except as designated for very specific circumstances using very specific measuring implements. It's an ontological question: deciding to call the capacity to speak with one another an instance of "equality" is to import liberal ontology into a mode of anthropological inquiry that must suspend liberal "faith" if it is to ask whether that faith is justified. We can then ask which description is better—people talking to each other as "equals" or people talking to each other as engaged in fine-tuning and testing the direction each wants to lead the other. Which description will provide more powerful insights into human interactions and social order? Determining that "equality" must be the starting assumption just leads you to ignore all features of the interaction that interfere with that assumption, which means it leads you to ignore everything that makes it an interaction—which, interestingly, in practice leads to all kinds of atrocities. What seems like equality is just an oscillation of hierarchies, within a broader hierarchy. In a conversation, the person speaking is for the moment in charge; in 30 seconds, the other person will be in charge. It would be silly to call this "inequality," even in its more permanent forms (like teacher and student), because it's simply devotion to the center—whoever can show the way to manifest this devotion points the way for others. And that's morality—showing others how to manifest devotion to the center. Nothing could more completely overturn the animal pecking order—a peasant can show a king how to manifest devotion to the center, but the king is still the king, because he shows lots of other people how to do it, in lots of situations well beyond the experience and capability of the peasant. Morality involves reciprocity, and reciprocity not only has nothing to do with equality, but is positively undermined by equality. There can only be reciprocity within accepted roles. Most of us don't go around slaughtering our fellow citizens, but that's not reciprocity, because such acts are unlawful and these laws at least are seriously enforced and, moreover, most of us don't want to do anything like that. When a worker performs his job competently and conscientiously, and the manager rewards the worker with steady pay increases, a promise of continued employment and safe, clean working conditions—that's reciprocity. Friends can engage in reciprocity with each other without any explicit hierarchy, but here we're talking about a gift economy with all kinds of implicit hierarchies. I wouldn't deny all reciprocity to market exchanges (overwhelmingly between gigantic corporations and individuals), but this kind of reciprocity is minimal and, as we can see, hardly sufficient to stake a social order on. Language makes it possible for us all to participate in social order, but inclusive participation is also not equality, nor is recognition or acknowledgement. In other words, morality (recognition, acknowledgement, reciprocity), yes; equality, no. Forget equality.
What, exactly, made those old tyrannies immoral, or even “tyrannies,” other than (tautologically) their failure to recognize equality?—their successes and our capacity to shape those models in new ways should not be disheartening. If there must be hierarchies and central power, then those things cannot be immoral, any more than hunger can be immoral. Morality enters into our engagement with these realities.

May 22, 2017

Absolutism: Some Clarifications

Filed under: GA — adam @ 7:26 am

It may be that for some, "absolutism" might simply be an argument for one form of government over others—as if an absolute monarch with complete sovereignty over a population with no power and no rights were "better" than a democracy, or a liberal oligarchy, or socialism, or anything else. But the argument for absolutism, compressed most economically in the principle "sovereignty is conserved," is more a tautological maxim than a preference based on some other ethical, moral, economic or aesthetic principle. The conservation of energy is what R.G. Collingwood called an "absolute presupposition," not a preference for saving energy over wasting it, and the same is true for the conservation of sovereignty. Everyone really agrees with this, because everyone knows that when we speak of "the United States" speaking with "Germany," this means Donald Trump, or someone appointed by Donald Trump, speaking with Angela Merkel, or someone appointed by her. We can argue over the real sovereign, and some Americans, for example, out of frustration, will claim that the Supreme Court really rules—but until Chief Justice Roberts starts issuing orders to the special forces I think I'll stick with the sovereignty of the President. Now, given that the President is sovereign, the arguments about better and worse forms of government begin when we start to ask whether the President should be chosen through an electoral process (and if so, which one), whether he should be replaced regularly, whether he should require authorization from other branches of government for certain actions, whether it should be possible to remove him (and if so, how), etc. Still, in a genuine emergency, everyone would look to the President to act, and unless all sense of national unity and purpose has been drained out of the country, the states and courts would defer to him, and Congress would facilitate his activity with enabling legislation.

Now, once we have established the ontological claim of absolutism, we can further point out that absolutism enables us to structure in very productive ways the debate over forms of government. If someone is to be sovereign, it were best that sovereignty be clear and secure. We can think about this by analogy with just about any other task we ask someone to perform. If we ask someone to coach the high school basketball team, he must be given power over everything pertaining to coaching the basketball team—if we introduce a rule that the players must vote on the starting line-up, then he isn't really the coach, and we are setting him up to fail by introducing permanent conflict between him and his players. If, on the other hand, he wants to give his players that power, it may be wise or unwise, but it is within the scope of his authority. The same with mechanisms for selecting leaders: the sovereign can allow for offices to be filled through election; indeed, through a supreme act of self-abnegation, he can place himself up for election and risk being removed, without thereby losing sovereignty. We can argue, and I think very convincingly, that this would be a serious mistake and a destructive way of selecting leaders, but that argument would then take place on absolutist terms: the argument against it is that it makes sovereignty less clear and secure. So, if we would all defer to the executive in a crisis, we should make that explicit and gear all institutions to readiness to be helpful in serving the executive in a crisis. We might as well take the next step and acknowledge that the executive will decide when there actually is a crisis, and that other institutions should therefore prepare themselves by providing ongoing feedback to the executive on the ways potential pre-crises are registering across the social order.

The sticking point for a lot of people seems to be the question of removing a clearly unfit leader, which a rigorous absolutism seems to preclude, because any such mechanism introduces division into sovereignty by making someone else sovereign—the doctor who determines the mental fitness of the ruler, the board of directors that gathers to assess his performance, the judges who would hear appeals regarding disqualifying acts of the president, the legislature that impeaches and removes him, etc. All the divisions and power plays that the clarification of sovereignty aims at eliminating would then rush in through this open door. But absolutism can answer the question of removing an unfit leader, even if it's not a very comforting answer. If a ruler's unfitness manifests itself in an incapacity to defend the country or maintain the conditions of law and order, he will be removed by whichever of his subordinates is in the best position to do so—the best positioned in terms of readiness to manage the emergency, rally the support of other power centers, and command the forces needed to rule. And that subordinate will then seek to return power as soon as possible either to the once again fit sovereign, or to whoever is next in line according to whatever tradition has been followed in ensuring the continuity of sovereignty. Maybe that subordinate will serve as sovereign temporarily or even permanently. And if he fails to remove the sovereign, and no one else can either, then that suggests either that the sovereign wasn't really unfit, or that sovereignty can no longer be sustained in that form on that territory—maybe it needs to be broken down into smaller units or aggregated into a larger one.

It would be easy to say that this is a recipe for instability, since any strongman can now come along and claim sovereignty if he can take it. But strongmen who violently seize power almost invariably do so in the name of some other, presumably more real sovereign, which legitimates the takeover. The strongman takes power in the name of the people, the working class, the dominant ethnic group, a restoration of the principles of some previous constitution, etc. In other words, he disclaims responsibility for sovereignty. Widely shared absolutist assumptions would make it impossible to get away with this—if you want to take power, you might be able to claim that a sense of duty impels you to it, but make no mistake—you are taking power, in your own name, under your own newly acquired authority, and you will be responsible for how you see it through. You can't fob it off on anyone else. Such widely shared assumptions would be highly discouraging to reckless adventurers and utopian ideologues. What's interesting here is that this supposedly most tyrannical approach to government would in fact rely more than any other on the thoughtfulness, knowledge, and clear-headedness of the people. If everyone understands that a particular interpretation of the constitution, or of the Bible, or a history of mistreatment, real or imagined, of the social or ethnic group you belong to, gives you absolutely no claim to power; that, on the contrary, power belongs to whoever can hold it within the political tradition of rule in that country, then there's no problem. But that means we're talking about a fairly sophisticated and disciplined people, capable of dismissing all kinds of flattering BS. Everyone would know that attempts to obligate the sovereign are attempts to weaken the sovereign, to subject the sovereign to the sway, not of "the people" in general, but of some very specific people with a very pressing desire for power, if not necessarily a clear idea of how to use it. All clamoring for "rights," "freedoms," a "voice," etc., would lead everyone to look around and discover who is most ready to use and benefit from those rights and freedoms. And to shut their ears to any remonstrance coming from that corner.

But there must be something that prevents the complete, unlimited power of the ruler from being exercised unchecked upon each and every member of society! If liberalism is part of your common sense, or even a little piece of it, it will be very difficult to get past this kind of reaction. Of course, the reaction itself, along with the pitiful devices put in place to calm anxieties, like "rights," "rule of law," "constitution," "checks and balances," etc., testifies to its own impotence and childishness. Who defends rights, maintains the rule of law, and protects the constitution, if not whoever has the power to do so? And whoever has the power to do so transparently has the power to violate and redefine rights, law, and the constitution. As for "checks and balances," what can that mean other than different institutions or power centers fighting each other to gain more power for themselves and stymie the others? Either one will succeed, or society will become one big bumper car ride, with everybody knocking everybody else into everybody else. And then you end up developing a social theory claiming all individuals are really out-of-control bumper cars.

All these devices seem to make sense because they presuppose a shared understanding of "rights," "laws," "constitution," and social ends (so the checking and balancing can all seem to be moving things in a more or less agreed upon direction). There can be a shared understanding of these concepts, and as long as that continues the harm done by their incoherence can be minimized. If several people are building a house together, and everyone knows that the roofer needs certain materials and a certain amount of time to work on the roof, it doesn't matter much if the roofer wants to insist he has a "right" to those things. But these concepts become important in proportion to the shrinking sense of shared purpose, and at a certain point they accelerate that decline in common goals. The builders come to work prepared to defend their rights rather than construct the building as well as they can. If the members of society are for the most part engaged in productive and rewarding activities, in which the contributions of each are valued, then we would be speaking about how to ensure this remains the case, and talk of "rights" and all the rest becomes irrelevant. What is experienced or seen as mistreatment or unfairness either is or is not interference with or impairment of the cooperation required for the task at hand. If someone could be contributing more than they are being allowed or enabled to, there is a problem, but one extremely unlikely to be solved by some outside adjudicator deploying concepts drawn from legalistic or political discourses. One must appeal to those familiar with, involved in, and interested in the success of the project. Absolutism in government supports a little absolutism in each sphere of authority. To modify the conservative maxim, everyone is absolutist in what they know best, and an absolutist ruler would find such local absolutists to be the best guarantee of good order.

The last clarification, for now, concerns the appearance that absolutism is a retrograde or nostalgic project, inapplicable to contemporary settings. Absolutism is actually a highly innovative and unprecedented mode of political thinking. In looking for genuine predecessors, we find few—Robert Filmer, Bertrand de Jouvenel (who, however, was a kind of conservative liberal in his own politics), Mencius Moldbug (whose rejection of "imperium in imperio," but not his "cameralism," is essential to absolutism), and that's about it. Everything—economics, science, technology, art, philosophy, anthropology, history, etc.—remains to be rethought and re-examined on these new premises. Absolutism is not utopian, though, because, as I suggested above, it is always in fact assumed in any discussion of politics, which suggests it is an unspoken desire of all political thinking. When "Germany" speaks with "the United States," there is really nobody who would prefer that whatever agreements "Germany" and "the United States" arrive at be irrelevant because those who represent either country haven't the power to enforce them. (And if they have the power to enforce those agreements, they must have the power to enforce much else.) Or, if you would prefer it, it's because you don't like either or both countries very much and want to see harm come to them—you certainly wouldn't prefer it for countries or institutions you care about. Just as it is always assumed, past governments have always approximated absolutism to some degree, especially when they most needed to, and they are therefore rich sources of insight for historical study. We have no desire to reproduce the ad hoc and unworkable array of "estates," institutions, and rituals of medieval Europe, or the oftentimes desperate absolutisms that tried to tame or abolish them, but we can certainly learn a lot from that history regarding the difficulties of re-unifying divided authority. Ancient peoples killed their kings for not ensuring a successful harvest, a practice we won't be reinstituting, but one displaying a very keen, if primitive, understanding of the centrality of power to any minimally complex social order. Contemporary absolutism wishes to learn from all this historical experience and deliberately establish an absolutist order for what will really be the first time.

May 16, 2017

The Attentional Structure of Sovereignty

Filed under: GA — adam @ 11:20 am

Considered at its most minimal, language is grounded, as Michael Tomasello along with Eric Gans has shown, in joint attention—the capacity to pay attention to the same thing at the same time, to know that we are doing it, and to know that we know (to let each other know). It should be possible, then, to analyze all human, which is to say social, phenomena in terms of forms of attention, articulated in ever more complex ways. I think we can reduce the basic attentional dispositions to three. First, one directs others' attention toward oneself as the center, and joins in that attention directed towards oneself. Second, one directs others' attention to something one has produced, and joins in that attention. Third, one directs the attention of others to something one is attending to and that neither party controls—which is both the originary disposition and, as I will suggest, a "late" one. Naturally, in each of these cases one could rewrite "one directs others' attention" as "one's attention is directed by another," as both must be happening simultaneously and are really almost indistinguishable in their elemental forms. The first two dispositions can readily transition into the third, and beauty and human accomplishments are still among the most compelling objects of attention.

It seems to me that making oneself the center of attention is the basic feminine disposition and making one's products the center the basic masculine one. These attentional dispositions can take many different forms and articulate and include each other in innumerable ways. The self-centering of the first mode can take forms ranging from frivolous, borderline hysterical narcissism to self-sacrificing martyrdom. The product-centering of the second mode can range from idle boasting and bullying to striving for excellence and even immortality as a creator. If we think in terms of sexual relations, the self-centering woman desires the product-centering man because attaching herself to him guarantees a perpetual source of potential attention to her; for the product-centering man, the woman best able to capture attention best reflects the value of his own products. (And, no doubt, this adds to their reciprocal desire for each other in intimate relations.) We could analyze all manner of group dynamics (all female, all male, mixed—mixed singles and couples, etc.) in these terms. What women want in spending time with each other and appearing together is a broadened center of attention which each of them could hope to occupy at any point; what men want from association is a competitive space in which their productive capacities can be tested and displayed, etc.

If we were to imagine a social order organized solely in terms of these dispositions, it would probably be a highly hierarchical, tribal, patriarchal order that adheres closely to the "social-sexual" hierarchy represented on Vox Day's Alpha Game blog. The "products" most valued would be weapons and fighting skills, along with organizational effectiveness and the domination and territory they would bring. No doubt many, maybe most, early societies did look something like this, which raises the question of how humans ever found a way to organize themselves differently. Here is where we must consider the third and also originary disposition, that of having attention directed towards something (here, the more passive formulation is more appropriate) that is attached to neither of the "attenders" in particular. There must often have been times when physical confrontations led to mutual destruction, or at least the loss of some of those goods (markers of status) that the confrontation was meant to preserve or add to. It may be obvious to us that such a result indicates that a different approach (retreat, surrender, negotiation) might sometimes be preferable, but it would certainly not be obvious to the fighting man himself, nor to his competitors within the order he dominates, whose response to a defeat would surely be to seize the opportunity to contest the alpha. The alpha, in turn, would have to turn his attention directly to defending his predominance. Remaining locked in a hierarchical combative stance has cognitive consequences.

Someone else in the social order would have to notice that the automatic response to physical confrontation leads to unwanted results. That someone would be significantly less alpha than the ruler or his main challengers, who would all be too focused on the struggle for power to think past it. That observer would combine the first two dispositions in order to direct the attention of others, and most especially of one of the primary contenders, to consequences of their actions they would not notice on their own. This figure would draw attention to himself in various ways—by having flamboyant "visions," or fits, or seizures, or by performing ascetic rituals that would mark him as being possessed by some being not subject to the control of those locked into the first two dispositions. He would also produce a kind of "work" worthy of attention—spells, stories, prophecies, etc. (There could be no other way of redirecting the attention of those locked into the first two dispositions—you couldn't just say, "hey, you know what's interesting about what you're doing…") This articulation of all three dispositions is the line leading from shamans, to holy men and saints, to philosophers and "intellectuals." (It's worth noting not only that such figures are often sexually ambiguous but that women, and especially women off the "market," such as old women, often play an important role in such proceedings.) The Big Man believes in the magic of words, because when he commands others, things happen; the shaman confirms, supplements, and exploits this faith by divining new commands when those issued by the ruler fail to transform reality in the desired manner.

Eventually, the Big Man will take the shaman figure to himself for counsel. In fact, despite the temporal order I've laid out for the purpose of exploring the relations between these dispositions, this "alliance" or synthesis would have been there from the beginning. There could never have been any "pure," Conan-style fighting men who knew nothing but slaughter. War and internal ranking would have had their rites from the beginning. The first kings were priests themselves, guarding the shrines to the ancestors, and kings eventually became gods. But the early king-priests were vulnerable, as they were responsible for everything that happened in the community, and this vulnerability would have required the support of shaman figures who could "read" the signs indicating whether the king's time had come. The far less vulnerable imperial god-kings would construct more elaborate systems of myth and ritual displaying and embedding their rule. Even more fundamentally, only as a result of the emergence of the human and of language could the differentiation into these primitive attentional dispositions take shape and thereby recuperate natural hierarchies and complementarities in specifically human forms. The basic configuration, then—the alignment of the exemplary figure of the second (attention to products) disposition with the exemplary figure of the third (shared attention) disposition, which articulates the first two in a more marginal way—is the "attentional" basis of sovereignty. If the sovereign, most fundamentally, commands and delegates, then his first command and delegation is to the counsel he trusts to draw his attention to consequences of his own actions, and even of his character, that his immersion in those actions might blind him to. The ruler commands the shaman/priest/prophet/philosopher/sage/scientist/intellectual: first of all, help me to clarify my commands.

The Big Man/Imperial order remains based on a "command economy" (I'm punning a bit here)—an exchange between the commands of the sovereign and the pleas of the subjects. This order is transcended once the representative of the third disposition is set against the sovereign and community as a sacrificial figure. The obvious examples here are Socrates and Jesus, and what they have in common is that the community as a whole sees that the centering of attention upon this figure reveals a violent resentment toward the center. Such figures reveal the foundations of social order; they remember the originary scene when the community is ready to iterate it, but the community can only iterate it by murdering the figure who reveals those foundations. (Think about what Jesus's impact would have been had he maintained the same teachings but died peacefully in old age as an honored member of the community.) Only in that way—through a community-shattering paroxysm—could this revelation of something or someone that cannot be commanded, and therefore of our reliance, for anything to be attended to at all, upon a shared renunciation, be made memorable. We see a similar configuration in Moses's relation to the Hebrews he led out of Egypt, even if it never led to actual violence against Moses (Freud, of course, would disagree, and one could see why). And, of course, the relation between the Hebrew prophets and the community and kings had a very similar structure. (As I've done before, I must confess my Western-centric bias here, and would be very interested in knowing how such relations have been historically articulated in China and India in particular. I hypothesize that every civilization has revered figures that spoke and acted so as to make themselves the center of attention in order to implicate the community in its desire to ignore the violent possibilities implicit in its participation in shared attention. But perhaps masculine figures who create enduring works synthesizing and de-ritualizing canonical modes of renunciation, and who deliberately eschew or minimize public reward or honor, can play an equivalent iconic, civilizing role.)

The sovereign, then, cultivates and institutionalizes this form of attention to that which transcends sovereignty. He does this in the interest of preserving his own rule, because otherwise the oscillation between reverence and hatred toward the figure at the center will always threaten to engulf him. The sovereign distinguishes himself, as the figure at the center, from the locus of the center (a distinction for which I am indebted to Eric Gans, if it's worth singling out one debt among all the others), a locus that backgrounds him and will outlast him. And the sovereign himself takes counsel from those "third persons" who have committed themselves to exploring that disposition. To a great extent, the pre-modern history of the West is a series of attempts to make sense of the sovereign's accountability to God. It's "logical" to say that the king cannot be his own judge in assessing this accountability, but it's equally logical to say that no one else can judge him without being sovereign himself, which would lead us to an infinite regress. The way of squaring the circle is to direct attention to the ongoing elevation of subjects to third persons who present themselves as offering a kind of tacit counsel to the sovereign by being the kinds of subjects receptive to sovereign will. Not exactly the "nation of priests" of Scripture, or the "nation of philosophers" of some modern utopians, but a nation of seekers after God's will as mediated by the sovereign's consular relation to God. Each fulfills, to the best of his or her knowledge, the will of the sovereign as embedded in the entire chain of command directed towards oneself; and each prepares oneself and one's works as possible centers of attention that will mitigate the damaging and amplify the promising consequences of those commands, within the margins for choice that commands always leave. And each stands ready to be corrected in this regard. You could say that an absolutist ethics entails "indwelling," to use Michael Polanyi's term for the participatory attention of the inquirer, within the consular relation between sovereign and center.

The relationship between the sovereign and the representative of third personhood is the most important and requires the most attention—we could say that all the devastating diremptions of modernity result from misbegotten forms of this relationship, one in which the sovereign is irremediably dependent. How can you know whether your advisor is giving you bad advice? Especially since his advice might almost always be good, and a little bad advice here and there might be enough to make things go off the rails. And if he is giving you bad advice, how can you know why? Maybe he's just wrong about something, but maybe he's conducting the ambitions of another power center. There certainly can't be any formula here, and the sovereign is sovereign in his choice of advisors as in all things. The only way of mitigating the dangers here is to turn attention to the process of production of advisors, which is to say a system of education, i.e., of the labeling of powers, that increases the likelihood that advisors who gain access to the sovereign will dwell within the consular relation between the sovereign and God.
