
Cognition as Originary Memory

This is the paper (leaving aside any last-minute editing) that I will be reading (via Skype) on June 9 at the 11th annual GASC Conference in Stockholm.

Cognition as Originary Memory

 

The shift in focus, in cognitive theory, from the relation between mind and objects in the world to the relation between minds mediated by inter-subjectivity, brings it into dialogue with originary thinking. Michael Tomasello’s studies in language and cognition have become a familiar reference point in originary inquiries, which have drawn upon the deep consonance between his notion of “joint attention” and the originary hypothesis’s scenic understanding of human origin. Peter Gardenfors, in his How Homo Became Sapiens, builds on the work of Tomasello and others so as to include the development of cultural and technological implements, in particular writing, in this social understanding of cognition. Much of the vocabulary of cognitive thinking, though, still retains the assumption of separate, autonomous selves: sensations, perceptions, ideas, thoughts, minds, feelings, knowledge, imagination and so on are all experiences or capacities that individuals have, even if we explain them in social and historical terms. My suggestion is that we think of cognition, of what we do when we think, feel, remember and so on, directly in linguistic terms, as operations and movements within language, in terms that always already imply shared intentionality. In this way we can grasp the essentially idiomatic character of human being.

 

Eric Gans’s studies of the elementary linguistic forms provide us with an approach to this problem. His most extended study of these forms, of course, is in The Origin of Language, but he has shorter yet sustained and highly suggestive discussions of the relations between the ostensive, the imperative and the declarative in The End of Culture, Science and Faith, Originary Thinking, and Signs of Paradox. In The End of Culture Gans uses the succession of linguistic forms to account for the emergence of mythological thinking and social hierarchy, in Science and Faith to account for the emergence and logic of monotheism, in Originary Thinking, among other things, to propose a more rigorous theory of speech acts, and in Signs of Paradox to account for metaphysics and the constitutive paradoxicality of advanced thought. It makes sense to take what are in these cases historical inquiries and use them to examine individual or, to use the Girardian term, “interdividual,” cognition, which is always bound up in anthropomorphizing our social configurations in terms of a center constituted out of our desires and resentments.

 

In The Origin of Language Gans shows how each new linguistic form maintains, or preserves, or conserves, the “linguistic presence” threatened by some limitation in the lower form. So, the emergence of the imperative is the making present of an object that an “inappropriate ostensive” has referred to. Bringing the object “redeems” the reference. The assumption here seems to me to be that the loss of linguistic presence is unthinkable—the most basic thing we do as language users is conserve linguistic presence. Another key concept put to use early on in The Origin of Language is the “lowering of the threshold of significance,” which is to say the movement from one significant object in a world composed of insignificant ones to a granting of less and less significance to more and more objects. I think we could say that lowering the threshold of significance is the way we conserve linguistic presence: what threatens linguistic presence is the loss of a shared center that we could point to; by lowering the threshold of significance we place a newly identified object at that center. So, right away we can talk about “thinking” or “cognition” as the discipline of conserving linguistic presence by lowering the threshold of significance.

 

This raises the question of how we conserve linguistic presence by lowering the threshold of significance. If linguistic presence is continuous, then our relation to the originary scene is continuous—in a real sense, we are all, always, on the originary scene—it has never “closed.” In that case, a crisis in linguistic presence marks some weakening of that continuity with the originary scene—the crisis is that we are in danger of being cut off from the scene. But in that case, continuity with the scene must entail the repetition of the scene or, more precisely, its iteration. As long as we are within linguistic presence we are iterating the original scene, in all of our uses of signs. Any crisis must then be a failure of iteration, equivalent to forgetting how to use language. The conservation of linguistic presence, then, is a remembering of the originary scene. Our thinking always oscillates between a forgetting and remembering of the originary scene. But this oscillation must itself be located on the originary scene, which then must be constituted by a dialectic of forgetting and remembering, or repeating and iterating. For my purposes, the difference between “repeat” and “iterate” is as follows: repeating maps the sign onto the center; iterating enacts the center-margin relation.

 

 

Now, let’s leap ahead to the linguistic form in which we do most of our thinking: the declarative. The declarative has its origins in the “negative ostensive,” the response to the “inappropriate imperative,” where the object cannot be provided, the imperative cannot be fulfilled, and linguistic presence is therefore threatened. But Gans is at pains to distinguish this “negation” from the logical negation that can come into being only with the declarative itself. He refers to the negation in the negative ostensive as the “operator of interdiction,” which he further suggests must be rooted in the first proto-interdiction, the renunciation of appetite on the originary scene. This remembering of the originary scene further passes through other forms of interdiction which entail “enforcement” through what Gans calls “normative awaiting”—he uses examples like the injunction to children not to talk to strangers. As opposed to normal imperatives, these interdictions can never be fulfilled once and for all. Now, even keeping in mind the limited resources available within an imperative culture, this is not an obvious way to convey the information that the demanded object is not available. The issuer of the imperative is told not to do (something) + the object. Not to continue demanding, perhaps; not to do more than demand, i.e., not to escalate the situation. None of these alternatives, along with repeating the name of the object, seems to communicate anything about the object itself. But we can read the operator of interdiction as referring to the object—the object is being told not to present itself. But by whom? Clearly not the speaker. I think the initial declarative works because both possibilities are conveyed simultaneously—the “imperator” is ordered to cease pursuing his demand, and the object is ordered, ultimately by the center, not to be present, which in turn adds force to the interdiction directed back at the imperator, who donates his imperative power to the center. In essence, the declarative restores linguistic presence by telling someone that they must lower their threshold of significance because the object of their desire, as they have imagined it, has been rendered unavailable by, let’s say, “reality.” The lowered threshold brings to attention a center yet to be figured by actions, from a direction and at a time yet to be determined.

 

Now, the embedding of the declarative in the imperative order is not very important if once we have the declarative, we have the declarative, i.e., a new linguistic form irreducible to the lower ones, in the way biology is irreducible to chemistry, and chemistry to physics. But biology is still constrained by chemistry, and chemistry by physics. So is the declarative constrained by the imperative order it transcends and, of course, the imperative by the ostensive. The economy of the dialectic of linguistic forms is conserved. Just as on the originary scene remembering the sign is a way of forgetting the scene, immersion in the declarative dimension of culture is a forgetting of the imperative and the ostensive. To operate, to think and communicate in declarative terms is to imagine oneself liberated from imperatives. This gets formulated, via Kant, in imperative terms: to be a “declarative subject” is to treat others as ends, never as means, and to will that your own actions embody a universal law binding on everyone. We could call this an ethics of the declarative. This imperative remembers the origin of the declarative in a kind of imperative from the center to suspend imperatives amongst each other. We could say that logic itself recalls an imperative for the proper use of declaratives, one that allows no imperatives to be introduced, even implicitly, into the discourse at hand—but, of course, this is accomplished in overwhelmingly imperative terms, as all manner of otherwise perfectly legitimate uses of language must be subjected to interdiction. Even more, these imperative uses of the declarative include the imperative not to rest content with any particular formulation of that imperative: what, exactly, does it mean to treat another as an end or means, and how can you tell whether another is really taking your action as a law—what counts as adjudication here? If you undertake to treat others only as ends in consequence of your devotion to the categorical imperative, aren’t you treating them as a means to that end? The paradoxes of declarative culture and subjectivity derive from the ineradicability of the absolute imperative founding them.

 

The most decisive liberation of the declarative from the imperative can be seen in the cognitive ramifications of writing, as explained most rigorously, I think, by David Olson in his The World on Paper. Olson argues that it is the invention of writing, alphabetic writing in particular, that turns language into an object of inquiry: something we can break down into parts that we then rearticulate synthetically. These parts are first of all the sounds to be represented by letters, but just as much the words, or parts of sentences, that are identified through writing for the first time. The grammatical analysis of the sentence treats the sentence as a piece of information, makes it possible to construct the scene of speech as a multi-layered dissemination of information about that scene, and thereby provides a model for treating the entire world as a collection of bits of information, ultimately of an event of origin through speech. We could see this as a declarative cosmology. In that case the world can be viewed as a constant flow of information conveyed through everything that could be an object of an ostensive, that is, effect some shift of attention. This declarative metaphysics only comes to fruition in the computer age. We keep discovering that each piece of information is in fact just a piece of a larger piece of information that perhaps radically changes the meaning of the piece we have just assimilated. This is an intrinsic part of scientific inquiry, but it subverts more local and informal inquiries with a much lower tolerance for novelty because of a greater reliance on ostensive and imperative culture. Declarative culture promises us we will only have to obey one imperative: the imperative of reality. In that case, we should be able to bracket and contain potentially subversive inquiries into reality by constructing institutions that introduce new increments of deferral and upward gradations of discipline and therefore social integrity, facilitating the assimilation of transformative knowledge. Olson himself, in his Psychological Theory and Educational Reform, seems to think along similar lines by pointing to the intrinsic connection between a literate population and large-scale bureaucracies, which is to say hierarchical orders predicated upon the ongoing translation of language into disciplinary metalanguages that simultaneously direct inquiry and impose discipline. However, if we take declarative culture to provide a mandate, an imperative, to extirpate all imperatives that cannot present themselves as the precipitate of a declarative, then those flows of information come equipped with incessantly revised imperatives coming from no imperative or ostensive center, subjecting imperative traditions to constant assault from hidden and competing metaphysical centers.

 

There will always be imperatives that cannot be justified declaratively because the lowering of the threshold of significance generates new regions of ostensivity that generate imperatives in order to establish guardianship over those regions, in turn leading to requests for information, i.e., interrogatives, which themselves presuppose a cluster of demands that attention be directed in certain ways. In the long term most, maybe all, imperatives could be provided with a declaratively generated genealogy, but only if we for the most part obey them in the meantime. This constitutively imperative relation to a center could be called an “imperative exchange.” I do what you, the center, the distillation of converging desires and shared renunciations, command, and you, the center, do what I request, that is, make reality minimally compliant. We must think in this way in most of our daily transactions—the alternative would be to be perpetually calculating, on the basis of extremely limited and uncertain data, the probabilities of the various possible consequences of this or that action. For the most part, we have to “trust the world,” since we as yet have insufficiently advanced internal algorithms to operate coherently without doing so. The development of declarative, that is, literate, culture heightens this tension by establishing with increasing rigor both a comprehensive centralized, which is to say imperative, order and an interdiction on referring to that order too directly. The absolutized imperative founding the declarative order forbids us to speak and therefore think about it.

 

The revelation of the declarative sentence as the name of God, analyzed by Gans in Science and Faith, his study of the Mosaic revelation of the burning bush, cancels this imperative exchange, which leads one to place a figure at the disappointing center, and replaces it with the information that since God has given everything to you, you are to give everything to God, which is to say to the origin of and through speech. There is no more commensurability and therefore no more exchange. You are to embody the conversion of imperatives into declaratives through readiness to have those imperatives converge upon you. Imperative exchange is ancestor worship, and the absolute imperative embedded in I AM THAT I AM is to suspend ancestor worship and remember the originary scene—that is, remember that it is the participation of all in creating reciprocity that generated the sign, not the other way around. But imperative exchange cannot be eliminated—it is embedded in our habits, it is the form in which we remember the sign and forget the scene—if I do this, reality will supply that. Thinking begins with the failure of some imperative exchange—I did this, but reality didn’t supply that, and why in the world should I have expected it to, since it’s not subject to my commands or tied to me by any promise? The declarative sentence, then, is best understood as the conversion of a failed imperative exchange into a constraint—in thinking, you derive a rule from the failure of your obedience to some command to garner a commensurate response from reality. This rule ties some lowering of the threshold of significance to the maintenance of linguistic presence, as this relationship requires less substantial or at least less immediate cooperation from reality. We get from the command to the rule by way of the interrogative, the prolongation of the command into a request for the ostensive conditions of its fulfillment. The commands we prolong are themselves embedded in the declaratives, the discourses, we circulate through—raising a question about a claim is tantamount to identifying an unavowed imperative, some attempt at word magic, that the claim conveys. This is how we oscillate between the imperative and ostensive worlds in which we are immersed and the declarative order we extract from and use to remake those worlds. A good question prolongs the command directed at reality indefinitely, iterating it through a series of possible ostensive conditions of fulfillment, which can only be sustained by treating the declarative order as a source of clearer, more convertible commands.

 

(Im)morality and (In)equality

I’d like to work with a few passages from Eric Gans’s latest Chronicle of Love & Resentment (#549) to address some critical questions regarding morality and equality in originary thinking. Needless to say, I share Gans’s “pessimism” regarding the future of Western liberal democracies while seeing (unlike Gans) such pessimism for liberal democracy as optimism for humanity.

What kind of state-level government is feasible in the Middle East?—and one could certainly include large areas of Africa in the question. The fact that we have no clear response suggests that the end of colonialism, however morally legitimate we may find it, did not resolve the difficulty to which colonization, both hypocritically and sincerely, had attempted to respond: how to integrate into the global economy of technologically advanced nation states those societies that remain at what we cannot avoid judging as a lower level of social organization.

So, the end of colonialism is morally legitimate, even though it has left vast swathes of the world increasingly ungovernable, and has made it impossible to integrate them into the global economy. What kind of morality is this, then—what does it consider more important than maintaining a livable social order? A note of doubt is introduced here, though: “we may find” this to be morally legitimate, but presumably we may not. There is some straining against the anti-colonialist morality here. The morality that we may or may not consider legitimate, I assume, is that of judging some forms of social organization as lower than others. But what makes refraining from this judgment moral? Colonialism involved governing others according to norms different from those according to which the home country was governed, but unless we assume that this governing was done in the interests of the colonizer and against the interests of the colonized, and could only be so, the moral problem is not clear. These assumptions therefore get introduced into discussions of the colonial relation, but since those assumptions are as arbitrary regarding this form of governance as any other, there’s clearly something else going on.

There is no “racism” here; on the contrary, by assuming that all human beings have fundamentally the same abilities, and that we owe a certain prima facie respect to any social order that is not, like Nazism, altogether pathological, we cannot help but note that some societies are less able than others to integrate the scientific and technological advances of modernity. Thus health crises in Africa continue to be dealt with in what can only be called a “neocolonial” fashion, however unprofitable it may be for the former colonizers, who send doctors, medicine, medical equipment, and food aid to nations suffering from epidemics of Aids or Ebola, or starving from drought or crop failure—or rebuilding from earthquakes, as in Haiti.

The most moral gestures of the modern West are, it seems, its most colonial ones. And what could more disastrously interfere with this moral impulse than the assumption that “all human beings have fundamentally the same abilities”? That assumption forces you to look for dysfunctions on a sociological and historical level—one must conclude it is colonialism itself that is responsible for the disasters of the undeveloped world. But if that is your assumption, you can only behave morally—i.e., actually treat other people as needing your help—by finding some roundabout way of claiming that that is not what you’re doing. That’s the best case scenario—the worst case is that you keep attacking the “remnants” of colonialism itself, even if they are the most functional part of the social order. Morality and immorality seem to have switched places.

For if we have indeed entered the “digital” age, implying an inalterable premium for symbol manipulation and hence IQ-type intelligence, then the liberal-democratic faith in the originary equality of all is no longer compatible with economic reality. Hence the liberal political system, as seems to be increasingly the case today, cannot simply continue to correct the excesses of the market and provide a safety net for the less able. Increasingly the market system seems to have only two political alternatives. It can be openly subordinated to an authoritarian elite, and in the best cases, as in China, achieve generally positive economic results. Or else, as seems to be happening throughout the West, it is fated to erect ever more preposterous victimary myths to maintain the fiction of universal political equality, rendering itself all but impotent against the “post-colonial” forces of radical Islam.

If vast inequalities based in part upon natural differences in ability are incompatible with the liberal democratic faith in the originary equality of all, then that faith was always a delusion. Some are arguing that the inequalities opening up now over the digital divide are the most massive ever, but who can really know? What are our criteria—are today’s differences greater than those between medieval lords and serfs, or between 19th century industrialists and day laborers paid by piecework? There’s no common measure, but every civilized society has highly significant inequalities and today’s is not qualitatively different in that regard. Perhaps there is now less hope that the inequalities can someday be overcome or lessened, but that hope is itself just a manifestation of liberal-democratic faith, so we are going in a circle. It would be more economical to see that loss of faith as an increase in clarity. But what does the increasing or more intractable inequality have to do with the diminishing legitimating function of the welfare state—is it that the rich no longer have enough money to support it, or that the less able are no longer willing to accept the bribe (or have figured out that the bribe will continue even if legitimacy is denied)? The choice between an authoritarian China-style solution and the preposterous victimary imaginary of the West seems clear, but why be downcast about it? If China is the “best case” so far, presumably there can be yet better cases. Obviously creating myths so as to maintain fictions is unsustainable—what next, legends to preserve the myths that maintain the fiction?—and it might be a relief to engage reality. (In fact, if the welfare state no longer serves a legitimating function, that may be because yet another—let’s just call it a—lie has been exposed, that of endless upward mobility and generational status upgrades.) But does not the discarding of lies and fantasies and the apprehension of reality represent greater morality, rather than immorality?

 

Victimary thinking is an ugly and dangerous business, but the inhabitants of advanced economies in their “crowd-sourced” wisdom appear to have determined so far that it is the lesser evil compared to naked hierarchy. The “transnational elite” imposes its own de facto hierarchy, but masks it by victimary virtue-signaling, more or less keeping the peace, while at the same time in Europe and even here fostering a growing insecurity.

We have the “crowd-sourced” wisdom of the inhabitants, but then the “transnational elite” and its hierarchy makes an immediate entrance. Has that elite not been placing its finger on the crowd-sourcing scale (so to speak)? Through which—through whose—sign exchange systems has the wisdom been crowd-sourced? So, let’s translate: the transnational elite masks its hierarchy by imposing victimary virtue-signaling, but is now running into diminishing returns—the very method that has more or less kept the peace now generates insecurity. It remains only to add that the elites don’t seem to have a Plan B, and appear determined to continue autistically doubling down on their masking and signaling.

But as the economy becomes ever more symbol-driven, these expedients are unlikely to remain sufficient. It would seem that unless science can find an effective way of increasing human intelligence across the board, with all the unpredictable results that would bring about (including no doubt ever higher levels of cybercrime), the liberal-democratic model will perforce follow the bellwether universities into an ever higher level of thought control, and ultimately of tyrannical victimocracy. At which point the “final conflict” will indeed be engaged, perhaps with nuclear weapons, between the self-flagellating victimary West and a backward but determined Third World animated by Islamic resentment…

Or not. Perhaps the exemplary conflict between Western-Judeo-Christian-modern-national-Israeli and Middle-Eastern-Islamic-traditional-tribal-Palestinian can be resolved, and global humanity brought slowly into harmony. Or perhaps the whole West will decline along with its periphery and our great-grandchildren will grow up peacefully speaking Chinese.

 

But is the China model exclusive to China? Can we not, in a moment of humility, study the China model, and the way it retrieves ancient Chinese traditions from the wreckage of communism? And, in a renewal of characteristic Western pride, adapt and improve upon the Chinese model? This would require a return to school regarding our own traditions, subjecting them to an unrestrained scrutiny that even their most stringent critics (Marx, Freud, Nietzsche, Heidegger, Derrida…) could never have imagined. But what’s the point of a revolutionary and revelatory theory like GA if not to do exactly that? But the first question to take up would have to be…

 

Human language was the originary source of human equality, and if our hypothesis is correct, it arose in contrast to the might-makes-right ethos of the animal pecking-order system. The irony would seem to be that the discovery of the vast new resources of human representation made possible in the digital age is in the process of reversing the residue of this originary utopia more definitively than all the tyrannies of the past. Indeed, we may now find in the transparent immorality of these tyrannies a model to envy, because it provided a fairly clear path to the “progress” that would one day overturn them. Whereas for the moment, no such “enlightened” path to the future can be seen.

 

That of the relation between morality and equality. This is the heart of the matter. Human equality is utopian, but then it couldn’t be at the origin, because the origin couldn’t be utopian. Morality has nothing, absolutely nothing, literally nothing, to do with equality. We should reverse the entire frame here and say there is no equality, except as designated for very specific circumstances using very specific measuring implements. It’s an ontological question: deciding to call the capacity to speak with one another an instance of “equality” is to import liberal ontology into a mode of anthropological inquiry that must suspend liberal “faith” if it is to ask whether that faith is justified. We can then ask which description is better—people talking to each other as “equal” or people talking to each other as engaged in fine tuning and testing the direction each wants to lead the other. Which description will provide more powerful insights into human interactions and social order? Determining that “equality” must be the starting assumption just leads you to ignore all features of the interaction that interfere with that assumption, which means it leads you to ignore everything that makes it an interaction—which, interestingly, in practice leads to all kinds of atrocities. What seems like equality is just an oscillation of hierarchies, within a broader hierarchy. In a conversation, the person speaking is for the moment in charge; in 30 seconds, the other person will be in charge. It would be silly to call this “inequality,” even in its more permanent forms (like teacher and student), because it’s simply devotion to the center—whoever can show the way to manifest this devotion points the way to others. And that’s morality—showing others how to manifest devotion to the center. Nothing could more completely overturn the animal pecking order—a peasant can show a king how to manifest devotion to the center, but the king is still the king because he shows lots of other people how to do it, in lots of situations well beyond the experience and capability of the peasant. Morality involves reciprocity and reciprocity not only has nothing to do with equality, but is positively undermined by equality. There can only be reciprocity within accepted roles. Most of us don’t go around slaughtering our fellow citizens, but that’s not reciprocity because such acts are unlawful and these laws at least are seriously enforced and, moreover, most of us don’t want to do anything like that. When a worker performs his job competently and conscientiously, and the manager rewards the worker with steady pay increases, a promise of continued employment and safe, clean working conditions—that’s reciprocity. Friends can engage in reciprocity with each other without any explicit hierarchy, but here we’re talking about a gift economy with all kinds of implicit hierarchies. I wouldn’t deny all reciprocity to market exchanges (overwhelmingly between gigantic corporations and individuals), but this kind of reciprocity is minimal and, as we can see, hardly sufficient to stake a social order on. Language makes it possible for us to all participate in social order, but inclusive participation is also not equality, nor is recognition or acknowledgement. In other words, morality (recognition, acknowledgement, reciprocity), yes; equality, no. Forget equality. 
What, exactly, made those old tyrannies immoral, or even “tyrannies,” other than (tautologically) their failure to recognize equality? Their successes, and our capacity to shape those models in new ways, should not be disheartening. If there must be hierarchies and central power, then those things cannot be immoral, any more than hunger can be immoral. Morality enters into our engagement with these realities.

Absolutism: Some Clarifications

It may be that for some, “absolutism” is simply an argument for one form of government over others—as if an absolute monarch with complete sovereignty over a population with no power and no rights is “better” than a democracy, or a liberal oligarchy, or socialism, or anything else. But the argument for absolutism, compressed most economically in the principle “sovereignty is conserved,” is more a tautological maxim than a preference based on some other ethical, moral, economic or aesthetic principle. The conservation of energy is what R.G. Collingwood called an “absolute presupposition,” not a preference for saving energy over wasting it, and the same is true for the conservation of sovereignty. Everyone really agrees with this, because everyone knows that when we speak of “the United States” speaking with “Germany,” this means Donald Trump, or someone appointed by Donald Trump, speaking with Angela Merkel, or someone appointed by her. We can argue over who the real sovereign is, and some Americans, for example, out of frustration, will claim that the Supreme Court really rules—but until Chief Justice Roberts starts issuing orders to the special forces I think I’ll stick with the sovereignty of the President. Now, given that the President is sovereign, the arguments about better and worse forms of government begin when we start to ask whether the President should be chosen through an electoral process (and if so, which one), whether he should be replaced regularly, whether he should require authorization from other branches of government for certain actions, whether it should be possible to remove him (and if so, how), etc. Still, in a genuine emergency, everyone would look to the President to act, and unless all sense of national unity and purpose has been drained out of the country, the states and courts would defer to him, and Congress would facilitate his activity with enabling legislation.

Now, once we have established the ontological claim of absolutism, we can further point out that absolutism enables us to structure in very productive ways the debate over forms of government. If someone is to be sovereign, it were best that sovereignty be clear and secure. We can think about this by analogy with just about any other task we ask someone to perform. If we ask someone to coach the high school basketball team, he must be given power over everything pertaining to coaching the basketball team—if we introduce a rule that the players must vote on the starting line-up, then he isn’t really the coach, and we are setting him up to fail by introducing permanent conflict between him and his players. If, on the other hand, he wants to give his players that power, it may be wise or unwise, but it is within the scope of his authority. The same with mechanisms for selecting leaders: the sovereign can allow for offices to be filled through election; indeed, through a supreme act of self-abnegation, he can place himself up for election and risk being removed, without thereby losing sovereignty. We can argue, and I think very convincingly, that this would be a serious mistake and a destructive way of selecting leaders, but that argument would then take place on absolutist terms: the argument against it is that it makes sovereignty less clear and secure. So, if we would all defer to the executive in a crisis, we should make that explicit and gear all institutions to readiness to be helpful in serving the executive in a crisis. We might as well take the next step and acknowledge that the executive will decide when there actually is a crisis, and that other institutions should therefore prepare themselves by providing ongoing feedback to the executive on the ways potential pre-crises are registering across the social order.

The sticking point for a lot of people seems to be the question of removing a clearly unfit leader, which a rigorous absolutism seems to preclude, because any such mechanism introduces division into sovereignty by making someone else sovereign—the doctor who determines the mental fitness of the ruler, the board of directors that gathers to assess his performance, the judges who would hear appeals regarding disqualifying acts of the president, the legislature that impeaches and removes him, etc. All the divisions and power plays that the clarification of sovereignty aims at eliminating would then rush in through this open door. But absolutism can answer the question of removing an unfit leader, even if it’s not a very comforting answer. If a ruler’s unfitness manifests itself in an incapacity to defend the country or maintain the conditions of law and order, he will be removed by whichever of his subordinates is in the best position to do so—the best positioned in terms of readiness to manage the emergency, rally the support of other power centers, and command the forces needed to rule. And that subordinate will then seek to return power as soon as possible either to the once again fit sovereign, or to whoever is next in line according to whatever tradition has been followed in ensuring the continuity of sovereignty. Maybe that subordinate will serve as sovereign temporarily or even permanently. And if he fails to remove the sovereign, and no one else can either, then that suggests either that the sovereign wasn’t really unfit, or that sovereignty can no longer be sustained in that form on that territory—maybe it needs to be broken down into smaller units or aggregated into a larger one.

It would be easy to say that this is a recipe for instability, since any strongman can now come along and claim sovereignty if he can take it. But a strongman who violently seizes power almost invariably does so in the name of some other, presumably more real sovereign, which legitimates the takeover. He takes power in the name of the people, the working class, the dominant ethnic group, a restoration of the principles of some previous constitution, etc. In other words, he disclaims responsibility for sovereignty. Widely shared absolutist assumptions would make it impossible to get away with this—if you want to take power, you might be able to claim that a sense of duty impels you to it, but make no mistake—you are taking power, in your own name, under your own newly acquired authority, and you will be responsible for how you see it through. You can’t fob it off on anyone else. Such widely shared assumptions would be highly discouraging to reckless adventurers and utopian ideologues. What’s interesting here is that this supposedly most tyrannical approach to government would in fact rely more than any other on the thoughtfulness, knowledge, and clear-headedness of the people. If everyone understands that a particular interpretation of the constitution, or of the Bible, or a history of mistreatment, real or imagined, of the social or ethnic group you belong to, gives you absolutely no claim to power; that, on the contrary, power belongs to whoever can hold it within the political tradition of rule in that country, then there’s no problem. But that means we’re talking about a fairly sophisticated and disciplined people, capable of dismissing all kinds of flattering BS. Everyone would know that attempts to obligate the sovereign are attempts to weaken the sovereign, to subject the sovereign to the sway, not of “the people” in general, but of some very specific people with a very pressing desire for power, if not necessarily a clear idea of how to use it. All clamoring for “rights,” “freedoms,” a “voice,” etc., would lead everyone to look around and discover who is most ready to use and benefit from those rights and freedoms. And to shut their ears to any remonstrance coming from that corner.

But there must be something that prevents the complete, unlimited power of the ruler from being exercised unchecked upon each and every member of society! If liberalism is part of your common sense, or even a little piece of it, it will be very difficult to get past this kind of reaction. Of course the reaction itself, along with the pitiful devices put in place to calm anxieties, like “rights,” “rule of law,” “constitution,” “checks and balances,” etc., testifies to its own impotence and childishness. Who defends rights, maintains the rule of law, protects the constitution, if not whoever has the power to do so? And whoever has the power to do so transparently has the power to violate and redefine rights, law and the constitution. As for “checks and balances,” what can that mean other than different institutions or power centers fighting each other to gain more power for themselves and stymie the others? Either one will succeed, or society will become one big bumper car ride, with everybody knocking everybody else into everybody else. And then you end up developing a social theory claiming all individuals are really out-of-control bumper cars.

All these devices seem to make sense because they presuppose a shared understanding of “rights,” “laws,” “constitution” and social ends (so the checking and balancing can all seem to be moving things in a more or less agreed upon direction). There can be a shared understanding of these concepts, and as long as that continues, the harm done by their incoherence can be minimized. If several people are building a house together, and everyone knows that the roofer needs certain materials and a certain amount of time to work on the roof, it doesn’t matter much if the roofer wants to insist he has a “right” to those things. But these concepts become important in proportion to the shrinking sense of shared purpose, and at a certain point they accelerate that decline in common goals. The builders come to work prepared to defend their rights rather than construct the building as well as they can. If the members of society are for the most part engaged in productive and rewarding activities, in which the contributions of each are valued, then we would be speaking about how to ensure this remains the case, and talk of “rights” and all the rest becomes irrelevant. What is experienced or seen as mistreatment or unfairness either is or is not interference with or impairment of the cooperation required for the task at hand. If someone could be contributing more than they are being allowed or enabled to, there is a problem, but one extremely unlikely to be solved by some outside adjudicator deploying concepts drawn from legalistic or political discourses. One must appeal to those familiar with and involved and interested in the success of the project. Absolutism in government supports a little absolutism in each sphere of authority. To modify the conservative maxim, everyone is absolutist in what they know best, and an absolutist ruler would find such local absolutists to be the best guarantee of good order.

The last clarification, for now, concerns the appearance that absolutism is a retrograde or nostalgic project, inapplicable to contemporary settings. Absolutism is actually a highly innovative and unprecedented mode of political thinking. In looking for genuine predecessors, we find few—Robert Filmer, Bertrand de Jouvenel (who, however, was a kind of conservative liberal in his own politics), Mencius Moldbug (whose rejection of “imperium in imperio,” but not his “cameralism,” is essential to absolutism), and that’s about it. Everything—economics, science, technology, art, philosophy, anthropology, history, etc.—remains to be rethought and re-examined on these new premises. Absolutism is not utopian, though, because, as I suggested above, it is always in fact assumed in any discussion of politics, which suggests it is an unspoken desire of all political thinking. When “Germany” speaks with “the United States” there is really nobody who would prefer that whatever agreements “Germany” and “the United States” arrive at be irrelevant because those who represent either country haven’t the power to enforce them. (And if they have the power to enforce those agreements, they must have the power to enforce much else.) Or, if you would prefer it, it’s because you don’t like either or both countries very much and want to see harm come to them—you certainly wouldn’t prefer it for countries or institutions you care about. Just as it is always assumed, past governments have always approximated absolutism to some degree, especially when they especially needed to, and they are therefore rich sources of insights for historical studies. We have no desire to reproduce the ad hoc and unworkable array of “estates,” institutions and rituals of medieval Europe, or the oftentimes desperate absolutisms that tried to tame or abolish them, but we can certainly learn a lot from that history regarding the difficulties of re-unifying divided authority. Ancient peoples killed their kings for not ensuring a successful harvest, a practice we won’t be reinstituting, but one displaying a very keen, if primitive, understanding of the centrality of power to any minimally complex social order. Contemporary absolutism wishes to learn from all this historical experience and deliberately establish an absolutist order for what will really be the first time.

The Attentional Structure of Sovereignty

Considered at its most minimal, language is grounded, as Michael Tomasello along with Eric Gans has shown, in joint attention—the capacity to pay attention to the same thing at the same time, to know that we are doing it, and to know that we know (to let each other know). It should be possible, then, to analyze all human, which is to say social, phenomena in terms of forms of attention, articulated in ever more complex ways. I think we can reduce the basic attentional dispositions to three. First, one directs others’ attention toward oneself as the center, and joins in that attention directed towards oneself. Second, one directs others’ attention to something one has produced, and joins in that attention. Third, one directs the attention of others to something one is attending to and neither controls—which is both the originary disposition and, as I will suggest, a “late” one. Naturally, in each of these cases one could rewrite “one directs others’ attention” as “one’s attention is directed by another,” as both must be happening simultaneously and are really almost indistinguishable in their elemental forms. The first two dispositions can readily transition into the third, and beauty and human accomplishments are still among the most compelling objects of attention.

It seems to me that making oneself the center of attention is the basic feminine disposition and making one’s products the center the basic masculine one. These attentional dispositions can take many different forms and articulate and include each other in innumerable ways. The self-centering of the first mode can take forms ranging from frivolous, borderline hysterical narcissism to self-sacrificing martyrdom. The product-centering of the second mode can range from idle boasting and bullying to striving for excellence and even immortality as a creator. If we think in terms of sexual relations, the self-centering woman desires the product-centering man because attaching herself to him guarantees a perpetual source of potential attention to her; for the product-centering man, the woman best able to capture attention best reflects the value of his own products. (And, no doubt, this adds to their reciprocal desire for each other in intimate relations.) We could analyze all manner of group dynamics (all female, all male, mixed—mixed singles and couples, etc.) in these terms. What women want in spending time with each other and appearing together is a broadened center of attention which each of them could hope to occupy at any point; what men want from association is a competitive space in which their productive capacities can be tested and displayed, etc.

If we were to imagine a social order organized solely in terms of these dispositions, it would probably be a highly hierarchical, tribal, patriarchal order that adheres closely to the “social-sexual” hierarchy represented on Vox Day’s Alpha Game blog. The “products” most valued would be weapons, fighting skills, along with organizational effectiveness and the domination and territory they would bring. No doubt many, maybe most, early societies did look something like this, which raises the question of how humans ever found a way to organize themselves differently. Here is where we must consider the third and also originary disposition, that of having attention directed towards something (here, the more passive formulation is more appropriate) that is attached to neither of the “attenders” in particular. There must have often been times when physical confrontations led to mutual destruction, or at least the loss of some of those goods (markers of status) that the confrontation was meant to preserve or add to. It may be obvious to us that such a result indicates that a different approach (retreat, surrender, negotiation) might sometimes be preferable, but it would certainly not be obvious to the fighting man himself, nor to his competitors within the order he dominates, whose response to a defeat would surely be to seize the opportunity to contest the alpha. The alpha, in turn, would have to turn his attention directly to defending his predominance. Remaining locked in a hierarchical combative stance has cognitive consequences.

Someone else in the social order would have to notice that the automatic response to physical confrontation leads to unwanted results. That someone would be significantly less alpha than the ruler or his main challengers, who would all be too focused on the struggle for power to think past it. That observer would combine the first two dispositions in order to direct the attention of others, and most especially one of the primary contenders, to consequences of their actions they would not notice on their own. This figure would draw attention to himself in various ways—by having flamboyant “visions,” or fits, or seizures, or ascetic rituals that would mark him as being possessed by some being not subject to the control of those locked into the first two dispositions. He would also produce a kind of “work” worthy of attention—spells, stories, prophecies, etc. (There could be no other way of redirecting the attention of those locked into the first two dispositions—you couldn’t just say, “hey, you know what’s interesting about what you’re doing…”) This articulation of all three dispositions is the line leading from shamans, to holy men and saints, to philosophers and “intellectuals.” (It’s worth noting not only that such figures are often sexually ambiguous but that women, and especially women off the “market,” such as old women, often play an important role in such proceedings.) The Big Man believes in the magic of words, because when he commands others, things happen; the shaman confirms, supplements and exploits this faith by divining new commands when those issued by the ruler fail to transform reality in the desired manner.

Eventually, the Big Man will take the shaman figure to himself for counsel. In fact, despite the temporal order I’ve laid out for the purpose of exploring the relations between these dispositions, this “alliance” or synthesis would have been there from the beginning. There could never have been any “pure,” Conan-style fighting men who knew nothing but slaughter. War and internal ranking would have had their rites from the beginning. The first kings were priests themselves, guarding the shrines to the ancestors, and kings eventually became gods. But the early king-priests were vulnerable, as they were responsible for everything that happened in the community, and this vulnerability would have required the support of shaman figures who could “read” the signs indicating whether the king’s time had come. The far less vulnerable imperial god-kings would construct more elaborate systems of myth and ritual displaying and embedding their rule. Even more fundamentally, only as a result of the emergence of the human and language could the differentiation into these primitive attentional dispositions take shape and thereby recuperate natural hierarchies and complementarities in specifically human forms. The basic configuration, then—the alignment of the exemplary figure of the second (attention to products) disposition with the exemplary figure of the third (shared attention) disposition (which articulates the first two in a more marginal way)—is the “attentional” basis of sovereignty. If the sovereign, most fundamentally, commands and delegates, then his first command and delegation is to the counsel he trusts to draw his attention to consequences of his own actions and even character that his immersion in those actions might blind him to. The ruler commands the shaman/priest/prophet/philosopher/sage/scientist/intellectual to, first of all, “help me to clarify my commands.”

The Big Man/Imperial order remains based on a “command economy” (I’m punning a bit here)—an exchange between the commands of the sovereign and the pleas of the subjects. This order is transcended once the representative of the third disposition is set against the sovereign and community as a sacrificial figure. The obvious examples here are Socrates and Jesus, and what they have in common is that the community as a whole sees that the centering of attention upon this figure reveals a violent resentment toward the center. Such figures reveal the foundations of social order; they remember the originary scene when the community is ready to iterate it, but the community can only iterate it by murdering the figure who reveals those foundations. (Think about what Jesus’s impact would have been had he maintained the same teachings but died peacefully in old age as an honored member of the community.) Only in that way—through a community-shattering paroxysm—could this revelation of something or someone that cannot be commanded, and therefore of our reliance, for anything to be attended to at all, upon a shared renunciation, be made memorable. We see a similar configuration in Moses’s relation to the Hebrews he led out of Egypt, even if it never led to actual violence against Moses (Freud, of course, would disagree, and one could see why). And, of course, the relation between the Hebrew prophets and the community and kings had a very similar structure. (As I’ve done before, I must confess my Western-centric bias here, and would be very interested in knowing how such relations have been historically articulated in China and India in particular. I hypothesize that every civilization has revered figures that spoke and acted so as to make themselves the center of attention in order to implicate the community in their desire to ignore the violent possibilities implicit in their participation in shared attention. But perhaps masculine figures who create enduring works synthesizing and de-ritualizing canonical modes of renunciation and deliberately eschew or minimize public reward or honor can play an equivalent iconic, civilizing role.)

The sovereign, then, cultivates and institutionalizes this form of attention to that which transcends sovereignty. He does this in the interest of preserving his own rule, because otherwise the oscillation between reverence and hatred toward the figure at the center will always threaten to engulf him. The sovereign distinguishes himself, as the figure at the center, from the locus of the center (a distinction for which I am indebted to Eric Gans, if it’s worth singling out one debt among all the others), which will outlast him and which backgrounds him. And the sovereign himself takes counsel from those “third persons” who have committed themselves to exploring that disposition. To a great extent the pre-modern history of the West is a series of attempts to make sense of the sovereign’s accountability to God. It’s “logical” to say that the king cannot be his own judge in assessing this accountability, but it’s equally logical to say that no one else can judge him without being sovereign himself, which would lead us to an infinite regress. The way of squaring the circle is to direct attention to the ongoing elevation of subjects to third persons who present themselves as offering a kind of tacit counsel to the sovereign by being the kinds of subjects receptive to sovereign will. Not exactly the “nation of priests” of Scripture, or the “nation of philosophers” of some modern utopians, but a nation of seekers after God’s will as mediated by the sovereign’s consular relation to God. Each fulfills, to the best of his or her knowledge, the will of the sovereign as embedded in the entire chain of command directed towards oneself; and each prepares oneself and one’s works as possible centers of attention that will mitigate damaging and amplify promising consequences of those commands in the margins for choice which commands always leave. And one stands ready to be corrected in this regard. You could say that an absolutist ethics entails “indwelling,” to use Michael Polanyi’s term for the participatory attention of the inquirer, within the consular relation between sovereign and center.

The relationship between the sovereign and the representative of third personhood is the most important and requires the most attention—we could say that all the devastating diremptions of modernity result from misbegotten forms of this relationship, one in which the sovereign is irremediably dependent. How can you know whether your advisor is giving you bad advice? Especially since his advice might almost always be good, while a little bad advice here and there might be enough to make things go off the rails. And if he is giving you bad advice, how can you know why? Maybe he’s just wrong about something, but maybe he’s conducting the ambitions of another power center. There certainly can’t be any formula here, and the sovereign is sovereign in his choice of advisors as in all things. The only way of mitigating the dangers here is to turn attention to the process of production of advisors, which is to say a system of education, i.e., of the labeling of powers, that increases the likelihood that advisors who gain access to the sovereign will dwell within the consular relation between the sovereign and God.

Absolutism and History

Modern history begins with the first elites to turn the high-low vs. the middle logic, first deployed by the king, against the legitimacy of the monarchy itself. The absolutist monarch consolidated power by reducing all subjects to equidistance from his own central power; the next, fairly obvious, step is to ask why we need the king to establish this equidistance from the center. Wouldn’t it be better to have a center actually chosen on the terms of, and thereby confirming, the a priori (and not merely bestowed) equidistance of all subjects from the center? This step, which introduces the public-private, state-citizen distinction (and all the others that follow, such as economics-politics, culture-religion, impartial-partisan, etc.), is also the beginning of the dissimulation of power. To be a private entity is to be officially bereft of any formal power, and hence free of responsibility for the power one exercises. We must see things this way if we see individuals as the basic units of society, in which case all private power is vaguely illegitimate while only being liable to criticism in terms of improper access to and use of state power—which is easier to discover or construct, the more powerful the actor. (A major exception that proves the rule here is anti-discrimination law, which criminalizes unapproved forms of association—but which has set in motion the implosion of the private-public distinction itself, because in the end there is no area of life where we don’t “discriminate.” For example, could anyone provide, in terms of anti-discrimination law, a convincing reason why marriage certificates shouldn’t only be granted to those who marry “others,” however defined?) In this way we all join in the modernizing project of trying to raze all “cabals” to the ground so as to release the free, self-determining, powerless and power-free individuals somehow enchained within them. Prior to this modern project of concealing and dissimulating power, though, the monarchies of Europe had sabotaged themselves by diluting power, granting titles to individuals who benefited the throne rather than to those who had proven themselves worthy of what should have remained hard-won and rarely granted privileges.

So, re-starting the absolutist project means naming powers properly. This imperative unites our historical accounts, our analyses of contemporary politics, our ongoing political projects, and a summative ontology and ethics of sovereignty. An absolutist history identifies the dilution and then the dissimulation of names for power, while seeking out the actions and accounts of those who, in the midst of the corruption of names, sought to reattach them to their proper objects—those people are our precursors and models, our “fathers” you might say. Political analysis involves tracing the relations between formal, political powers and informal, secondary, and therefore unnamed and dissimulated powers. This is complicated because informal powers preserve their power by remaining informal. We might say, in good formalist/realist fashion, that the New York Times was the press agency of the Obama Administration, and we would be largely right—but if the New York Times admitted that that was what it was, much less if the Obama Administration had officially delegated such duties to it, it would have been completely unable to fulfill them, and hence disempowered. Similarly, if the Ford Foundation stopped sponsoring activist groups, funding academic organizations, various legal defense organizations, think tanks writing up reports on the future of democracy, etc., and called a news conference in which its leadership openly “owned” its power and declared its intention to start exercising it openly, it would lose all of that power. So, we must name the New York Times and the Ford Foundation as delegated powers (looking to the laws and political protections that enable their functioning), powers that can be exercised (and used to exploit and subvert the delegating sovereign) only so long as they are dissimulated as informal. The ultimate purpose of the analysis is to show how these delegated powers muddy the chain of command constitutive of sovereignty and, here as well, to identify the kinds of actions and inactions that could help clarify the chain of command.

But what most interests me here is the final question, that of the ethics and ontology of absolutism, which can now be seamlessly integrated into history, contemporary analysis, and political projects. The starting point of this post was the inaugural post of the post-Reactionary Future blog Neoabsolutism, entitled “Neoabsolutism as a Contender for the Title of the Fourth Political Theory.” The post is a review of Dugin’s book, in which Dugin distinguishes between the “subjects” of the three main political theories of modernity: the liberal “individual” subject, the communist “class” subject, and the fascist/Nazi nation/race subject. It’s not clear whether Dugin is proposing a new subject for his “fourth political theory,” and if so who it would be, but what is important here is whether neoabsolutism is proposing a new political subject as part of its contention for the fourth political theory, and, if so, what that subject would be. After some give and take on our reddit page, I concluded that neoabsolutism (I still prefer “absolutism,” being somewhat allergic to “neos”) is a radical break from modern political theories insofar as, among other things, it eschews the nomination of a historical subject. The political subjects of the other theories are all constituted by some desire for “liberation” from some form of “subjugation,” along a line of “progress” that can never really be accomplished and that ultimately serves as a pretext for piling up body counts. The point of reactionary, and certainly absolutist, thinking is to be rid of all that world-destroying resentment, along with the illusion that such resentment can be harnessed for beneficial social purposes.

Part of the purpose of a historical subject is to generate a historical narrative that one can then enter—the individual struggles against the chains of censorship, persecution, and superstition, then against repressive norms of sexuality, against racial prejudice, against the belief in binary genders, etc.; the working class struggles against the capitalist class and its state, and then against imperialist encirclement; the nation struggles against formal or informal imperial power, against internal divisions and inherited backwardness; the race struggles against inferior races and the Jews, etc.—very compelling stories can be told using these templates. So, what’s the story of absolutism? It seems to me that what happens in absolutism is that tacit powers, and the traditions they bear, are explicitly recognized and titled. In a sense this is the fundamental attribute of sovereignty, since a precondition of its primary function of protecting the realm is designating and nominating subordinate powers to assist in doing so. The sovereign names powers and “seals” traditions by authenticating their transfer from previous or other sovereigns and their incorporation into his own sovereignty. Rather than a historical subject, there is an asymmetrically reciprocal exchange between sovereign and subjects, in which subjects seek further recognition and incorporation, and the sovereign recognizes value and power legitimately acquired within the approved institutions by designating it and providing it with formal access and audience. This interaction addresses the fundamental anthropological question of resentment, which is always resentment toward the center (if another humiliates me, it is still the central power that allowed it to happen, and thereby failed to give me my due), by providing for public and controlled competition and ambition. So, our present-day auditioning and requests for clarification regarding commands and the command structure transition into a proper order in which such clarification, through an articulation of sovereign designations, is what sovereignty openly consists of. There is no “progress” here, and there are no historical guarantees—nothing but continuing attempts to become worthier and to make actual hierarchies explicitly acknowledged ones, along with a cultivation of readiness for exceptional action when it becomes possible. No doubt there are and will be compelling stories to tell in accord with this template, however much we may have to rewire our narrative apparatus to tell them.