GABlog: Generative Anthropology in the Public Sphere

June 25, 2017

Equality and Morality

Filed under: GA — adam @ 6:23 am

I appreciate Eric Gans’s detailed response to my blog post (Im)morality and (In)equality, and am glad to respond to at least most of the issues he raises there. Part of the problem here is that, as pretty much everyone knows by now, “equality” is used in so many different ways that it would be futile to define it in a single, agreed-upon way. Maybe it’s even useless, or should only be used in very restricted and precisely defined contexts (like “economic inequality,” by which we mean the highest salary is x times larger than the smallest, or whatever). That, of course, would remove it from moral discourse altogether, or at least make it subordinate to a moral discussion conducted on different grounds (high levels of economic inequality might indicate, but not demonstrate, some underlying moral issue). Would moral discourse suffer from this excision or derogation? Let’s look at one of Eric’s examples:

In spontaneously formed groups up to a certain size and in a context that makes the sheer exercise of force impossible (in contrast to the “savage” groups favored in apocalyptic disaster films), people tend to cooperate democratically, profiting when necessary from the specific skills of individuals but not choosing a “king,” and the same is true in juries, where the foreman is an officer of the group rather than its leader. Democracy in this sense doesn’t deny that some people may have better judgment than others, but it permits unanimous cooperation, and I venture to say, corresponds to “natural” human interaction since the originary scene.

The point here is to affirm the originary nature of equality, here defined in the sense of the voluntary and spontaneous quality of the cooperation and the fluidity of leadership changes. I think we can easily find other examples of small group formation, especially under more urgent conditions, where hierarchies are firmly established and preserved, without the application of physical force. Indeed, that is what takes place in most of the disaster films I’ve seen—you couldn’t really force someone to follow you out of a burning building, or find the part of the ship that will sink last, or keep one step ahead of the aliens. In such cases, people follow whoever proves he (is it sexist that it is still usually a “he”?) is capable of overcoming obstacles, keeping cool, anticipating problems, calming the others, fending off challenges without undermining group cohesion, etc. In the case of a jury, we have one very clearly designed and protected institution (and hardly spontaneously formed)—but why, exactly, is the foreman necessary? Why do we take it for granted that the jury can’t simply spontaneously run itself, with a democratic vote over which piece of evidence to discuss next, then a democratic vote to decide whether to take a preliminary vote, but first a vote to decide whether the other votes should be by secret ballot, etc.? It seems pretty obvious that the process will work better, and lead to a more just result, if someone sets the agenda—but why is it obvious? An even broader point here is that we have no way of determining, on empirical grounds, whether the cooperation involved is “spontaneous,” “voluntary” and “unanimous.” These are ontological questions, which enter into the selection of a model. In any case that Eric could describe as people organizing themselves spontaneously, I could describe them as following someone who has taken the initiative. The question, then, is which provides the better description? I think that the absolutist ontology I propose does, because to describe any group as organizing itself spontaneously collapses into incoherence. They can’t all act simultaneously, can they? If not, one is in the lead at any moment, and the others are following, in some order that we could identify. (If they don’t follow, we don’t have a group, and the question is moot.)

Does talk of equality and inequality help us here? I don’t see how. Let’s say a particular member of a jury feels that his or her contributions to the discussion have been neglected, and he or she resents that. There are two possibilities—one, the contributions have been less useful than those of others, meaning the neglect was justified; two, the contributions have been unjustly neglected. In the first case the moral thing to do (a good foreman would take the lead here) is to explain to the individual juror what has been lacking in his contributions, and suggest ways to improve them as the deliberations proceed. In the second case, the moral thing to do is to realize that the foreman has marginalized contributions that would have enhanced the deliberative process, and, in the interest of improving that process, she should acknowledge the value of those contributions, try to understand why they went unappreciated, and be more attentive to the distinctive nature of those contributions in the future. The juror’s resentment, in either case, is framed in terms of a resentment on behalf of the process itself or, to put it in originary terms, on behalf of the center. The assumption is that all want the same thing—a just verdict. Once the resentment is framed in terms of unequal treatment, to be addressed by the application of the norm of equal treatment (everyone’s opinion must be given equal weight? Everyone must speak for the same amount of time?), the deliberative process is impaired, and if that framing is encouraged, it will impair the process beyond repair. The moral thing to do, then, is to resist such a framing. Now, it may very well be that the juror has been marginalized for reasons such as racial prejudice (it’s also possible that the juror is complaining for that reason), in which case the deliberative process should be corrected to account for that. The point, though, is always to improve that process, not to eliminate that form of prejudice (and all of its effects) within the jury room. Even if the juror in question is trying to reduce the conflict to one of some difference extrinsic to the process, the foreman should reframe it in this way—that is the moral thing to do.

I think this ontological question, which turns into a question of framing, can be situated on the originary scene itself. What matters on the originary scene is that everyone defer appropriation, and offer a sign to the others affirming this. Everyone does something—should we call that “equality”? We can, I suppose, but why? There are plenty of cases where “everyone” plays their individual part in “doing something,” while those parts are widely disparate in terms of difficulty and significance to the project. It’s just as easy to imagine a differentiated originary scene, where, for example, some participants sign only after others have already set the terms, so to speak, as it is to imagine a scene in which everyone signs simultaneously and with equal efficacy. Easier, in fact, I think. What matters is that everyone is on the scene. The same is the case when it comes to dividing the meal—there’s no need to assume that everyone eats exactly the same amount, all we have to assume is that everyone eats together (unlike the animal pecking order, where each has to wait until the higher ranking animal has finished). This is what I think the moral model requires: everyone affirms the scene, and their relations to all others on the scene; and everyone is at the “table” and receives a “piece.” What this will mean in any given social order can’t be determined in advance and therefore will be something we can always argue over (and any ruler will want to receive feedback on), but that is what makes it a basis for criticizing the existing order. If the individual juror’s contribution never does get recognized and this was in fact to the detriment of the deliberations, then we could say she has done her part in affirming the scene but has not gotten her “piece,” or has been kept away from the “table,” thereby weakening the scene as a whole. Again, I don’t see any point along the way here where the concept of “equality” clarifies anything.

Now, I do believe that primitive (let’s say, stateless and marketless) communities are highly egalitarian. Equality does mean something here—this is their interpretation of the originary scene, and they certainly have very good reasons for it. What equality means might be that no goods of any kind are saved, that no family is allowed a larger dwelling than any other, that anyone who gets too good at something is punished in some way, that no one speaks to another member of the community in such a way as to imply a relation of indebtedness, and so on. Such an understanding of equality still prevails at times, even in much more advanced and complex societies—we see it in children, among colleagues in the workplace, family members, and so on. We are all at least a little bit communist. But there’s nothing inherently moral about this “communism.” Sometimes it might be moral, sometimes not. It’s immoral to destroy a common project because you’re afraid someone else will show you up; it might very well be moral for children to “enforce” (within bounds) equal treatment by the parents of all the siblings, because this insistence might help correct for favoritism of which the parents might not be aware, and therefore might help the family to flourish. Again, though, the question of morality comes down to whether you are contributing to the preservation and enhancement of an institution.

I do agree that “telling the truth about human difference” is a marginal issue, and not a moral position in itself. My only point in this regard is that, in this case, telling the truth is more moral than lying, and the victimary forces poisoning public life today give us no choice but to do one or the other. I think we could get along fine without dwelling on tables showing the relative IQs of all the ethnic and racial groups in the world, but we need such a reference point if we refuse to concede that the only explanation for disparate outcomes is racism/sexism/homophobia, etc. And, really, if the more moral thing, in this instance, is to tell the truth, then it’s hard to fault those who do so with a bit of gusto. Those flinging accusations of racism are not exactly restrained in their “debating” tactics, after all. A bit of tit for tat can be moral as well, although whether it involves “equality” is also a matter of framing. If there’s a more moral way of responding to those who, by now, are claiming that we want to kill millions of people and openly celebrate violence in the streets, I’d be very glad to hear it. In fact, as some of those most viciously accused of “white supremacy” among other thought crimes have pointed out quite cogently, if, in fact, it turns out that some groups are on average smarter than others (and some groups are better than others in other ways, and some groups are better in math and others in verbal skills, etc.), there is absolutely no reason why we still can’t all get along perfectly well. After all, more and less intelligent and capable people get along within the same institution all the time, so the only thing that would prevent this from being the case throughout society is persistent equality-mongering. That’s why I think the best way forward in terms of using the originary scene as a moral model is to focus on common participation in, contribution to, and recognition by social institutions. And if we are to direct our attention to the preservation, enhancement and creation of institutions (if we want to be good teachers and students within functioning schools and universities rather than affirmatively acted upon experts in group resentment, if we want to be good husbands, wives and parents within a flourishing system of monogamy rather than feminists, etc.) then we want those institutions to be well run and considerately run. And if we want them run in these ways, we want to bring the power of those running them as closely in line with their accountability as we can. In other words, we want cooperation to be directed (to go back to those opening examples, no one is going to propose allowing a university to be run “spontaneously,” I assume) by those with initiative, experience, and responsibility, and we want them to be appointed and assessed in a like manner, by others competent to do so. And that, I think, would bring us to a much higher level of morality.

It seems to me that the problem Eric is trying to solve here is the following: in any minimally civilized or developed order, “inequality” has developed to the point that the moral model must be “translated” in some way so as to minimize the resentments generated by that inequality. The way he thinks the historical process has enabled this is through the emergence of the market and liberal democratic political processes. The “actual” inequality (the existence of both billionaires and those who sleep under bridges) is mitigated by the “formal” equality of the market (my dollar is worth as much as anyone else’s), the vote, various “rights,” and so on. How can we tell whether this “works”? We can point out that the US is still richer and more powerful than Russia or China, I suppose, but, leaving aside how certain we can be about the causes (and continuance) of this Western predominance, we certainly can’t see this as a moral argument. (There’s nothing particularly moral about bribing the lower classes to remain quiescent.) I think there is an unjustified leap of faith here. It may be true that these forms of formal (pretend?) equality have been granted for the purposes Eric suggests, but that doesn’t prove they have actually served that purpose—it might mean exactly the opposite, that the progress of “equality” has been a means of ensuring that the real inequalities (or structures of power) remain untouched.

I would push this further—there is no reason to assume that whatever we can call “inequalities” are themselves the source of any resentment that might threaten the social order. We could say, for example, that the 19th century European working class resented having its labor exploited, being underpaid, being subjected to unsafe conditions, and so on. Or, we could say they resented having their communities undermined, the network of relations in which they were embedded torn apart, and being driven off the land and into packed cities where they were stripped of any system of moral reciprocities. Interestingly, both the capitalist and the revolutionary have good grounds for preferring the first explanation—it presents the capitalist with a problem he can solve politically (labor unions, welfare, minimum wage, public housing, etc.) and the communist with leverage (in case the capitalist palliatives don’t work). Neither wants to confront the implications of the second explanation, which would require preserving or reconstructing a moral order. This too is a question of ontology and framing. Maybe real reciprocity rather than formal equality is called for. One could now say “but these changes were inevitable,” but that’s what one says in abandoning responsibility. One could say, “still, overall, modernity is preferable,” but can one make that argument on terms other than those of modernity itself? Has anyone actually made the argument that increasing wealth, developing technology and improving living conditions requires liberal democracy and ever expanding forms of formal equality? Once we step outside of the frame forcing us to see “modernity” as a single, inevitable, beneficial package, the connection is not obvious at all. (It’s interesting that there’s never been much of a push to democratize or liberalize the structure of corporations. The continued existence of such a creature as a CEO doesn’t seem to trouble our moral model. Even the left has learned to love the CEO.) Every form of cooperation has an end and a logic to it, an end and logic that we can always surface from the language we find ourselves using in discussing that form. Schools are for learning, commerce is for mutually beneficial exchange, militaries are for fighting other militaries, families are for channeling sexual desire into the raising of new generations, conversations are for creating solidarities, exchanging information, trying out new roles, etc. We can frame all resentments as indicating possible infirmities in these forms of cooperation, and then address those resentments by repairing those forms where necessary. And by “we,” I mean whoever has the most responsibility within those forms. This would involve far more moral seriousness than robotically translating each complaint into an accusation of inequality. In this way the moral model would be just as real now as it was on the originary scene (it is still being used to sustain the scene), rather than an abstraction uncomfortably fit onto what we have decided to see as a qualitatively different set of relations.

June 15, 2017

Sacral Kingship and After: Preliminary Reflections

Filed under: GA — adam @ 4:49 pm

Sacral kingship is the political commonsense of humankind, according to historian Francis Oakley. In his Kingship: The Politics of Enchantment, and elsewhere, Oakley explores the virtual omnipresence (and great diversity) of sacral kingship, noting that the republican and democratic periods in ancient Greece and Rome, not to mention our own contemporary democracies, could reasonably be seen as anomalies. What makes kingship sacral is the investment in the king of the maintenance of global harmony—in other words, the king is responsible not only for peace in the community but also for peace between humans and the world—quite literally, the king is responsible for the growth of crops, the mildness of the weather, the fertility of livestock and game, and, more generally, for maintaining harmony between the various levels of existence. Thinking in originary anthropological terms, we can recognize here the human appropriation of the sacred center, executed first of all by the Big Man but then institutionalized in ritual terms. The Big Man is like the founding genius or entrepreneur, while the sacred king is the inheritor of the Big Man’s labors, enabled and hedged in by myriad rules and expectations. The Big Man, we can assume, could still be replaced by a more effective Big Man, within the gift economy and tribal polity. Once the center has been humanly occupied, it must remain humanly occupied, while ongoing clarification regarding the mode of occupation would be determined by the needs of deferring new forms of potential violence.

One effect of the shift from the more informal Big Man mode of rule to sacral kingship would be the elimination of the constant struggle between prospective Big Men and their respective bands. But at least as important is the possibility of loading a far more burdensome ritual weight upon the individual occupying the center. And if the sacral king is the nodal point of the community’s hopes, he is equally the scapegoat of its resentments. Sacral kings are liable for the benefits they are supposed to bring, and the ritual slaughter of sacral kings is quite common, in some cases apparently ritually prescribed. It’s easy to imagine this being a common practice: not only does the king, in fact, have no power over the weather, but a king elevated through ritual means will not necessarily be more capable than anyone else of carrying out the normal duties of a ruler. Indeed, some societies separated out the ritual from the executive duties of kingship, delegating the latter to some commander, and thereby instituting an early form of division of power—but these seem to have been more complex and advanced social orders, capable of living with some tension between the fictions and realities of power (medieval to modern Japan is exemplary here).

It seems obvious that sacral kings, especially the more capable among them, must have considered ways of improving their position within this set of arrangements. The most obvious way of doing so would be to conquer enough territories, introduce enough differentiations into the social order, and establish enough of a bureaucracy to neutralize any hope on the part of rivals of replacing him. (No doubt, the “failures” of sacral kings to ensure fertility or a good rainy season were often framed and broadcast by such rivals, even if the necessity of carrying out such power struggles in the ritualistic language of the community would make it hard to discern their precise interplay at a distance.) Once this has been accomplished, we have a genuine “God Emperor” who can rule over vast territories and bequeath his rule to millennia of descendants. The Chinese, ancient Near Eastern, and Egyptian monarchies fit this model, and the king is still sacred, still divine, still ensuring the happiness of marriages, the abundance of offspring, and so on. If it’s stable, unified government we want, it’s hard to argue with models that remained more or less intact in some cases for a couple of thousand years. Do we want to argue with them?

The arguments came first of all from the ancient Israelites, who revealed a God incompatible with the sacralization of a human ruler. The foundational story of the Israelites is, of course, that of a small, originally nomadic, then enslaved, people, escaping from, and then inflicting a devastating defeat upon, the mightiest empire in the world. The exodus has nourished liberatory and egalitarian narratives ever since. Furthermore, even a cursory, untutored reading of the history of ancient Israel as recorded in the Hebrew Bible reveals the constant, ultimately unresolved tension regarding the nature and even legitimacy of kingship, either for the Israelite polity itself or for those who took over the task of writing (revising? inventing?) its history. On the simplest level, if God is king, then no human can be put in that role; insofar as we are to have a human king, he must be no more than a mere functionary of God’s word (which itself is relayed more reliably by priests, judges and prophets). At the very least, the assumption that the king is subject to some external measure that could justify his restraint or removal now seems to be a permanent part of the human condition. Even more, if the Israelite God is the God of all humankind, with the Israelites His chosen priests and witnesses, the history of that people takes on an unprecedented meaning. Under conditions of “normal” sacral kingship, the conquest and replacement of one king by another merely changes the occupant, not the nature, of the center. Strictly speaking, the entire history (or mythology) of the community pre-conquest is cancelled and can be, and probably usually is, forgotten—or, at least, aggressively translated into the terms of the new ritual and mythic order. Not for the Israelites—their history is that of a kind of agon between the Israelites (and, by extension, humanity) and God—the defeats and near obliteration of the Jews are manifestations of divine judgment, punishing the Jews for failing to keep faith with God’s law. Implicit in this historical logic is the assumption that a return to obedience to God’s will is to issue in redemption, making the continued existence of this particular people especially vital to human history as a whole, but just as significantly providing a model for history as such.

At the same time, Judaic thought never really imagines a form of government other than kingship. As has often been noted, the very discourse used to describe God in the Scriptures, and to this day in Jewish prayer, is highly monarchical—God is king, the king of kings, the honor due to God is very explicitly modeled on the kind of honor due to kings, and the kind of benefits to result from doing God’s will follow very closely those expected from the sacral king. The covenant between the Israelites and God (the language of which determines that used by the prophets in their vituperations against the sinning community) is very similar to covenants between kings and their people common in the ancient Near East. And, of course, throughout the history of the diaspora, Jewish hopes resided in the coming of the Messiah, very clearly a king, even descended from the House of David—so deeply rooted are these hopes that many Jews opposed the founding of the State of Israel, and a tenacious minority still today refuses to admit its legitimacy because it fails to fit the Messianic model. All of this testifies to the truth of Oakley’s point—so powerful and intuitive is the political commonsense of humankind that even the most radical revolutions in understandings of the divine ultimately resolve themselves into a somewhat revised version of the original model. Of course, slight revisions can contain vast and unpredictable consequences.

So, why not simply reject this odd Jewish notion and stick with what works, an undiluted divine imperium? For one thing, we know that kings can’t control the weather. But how did we come to know this? If in the more local sacral kingships, the “failure” of the king would lead to the sacrificial killing of that king (on the assumption that some ritual infelicity on the part of the king must have caused the disaster), what happens once the God Emperor is beyond such ritual punishment? Something else, lots of other things, get sacrificed. The regime of human sacrifice maintained by the Aztec monarchs was just the most vivid and gruesome example of what was the case in all such kingdoms—human sacrifice on behalf of the king. One of Eric Gans’s most interesting discussions in his The End of Culture concerns the emergence of human sacrifice at a later, more civilized level of cultural development—it’s not the hunter-gatherer aboriginals who offer up their firstborn to the gods, but those in more highly differentiated and hierarchical social orders. If your god-ancestor is an antelope, you can offer up a portion of your antelope meal in tribute; if your god is a human king, you offer up your heir, or your slave, because that is what he has provided you with. This can take on many forms, including the conquest, enslavement and extermination of other people, in order to provide such tribute. What the Judaic revelation reveals is that such sacrifice is untenable. What accounts for this revelation? (It’s so hard for us to see this as a revelation because it is hard for us to imagine believing that the king, for example, provides for the orderly movements of heavenly bodies. But “we” believed then, just like “we” believe now, in everything conducive, as far as we can tell, which is to say as far as we are told by those we have no choice but to trust, to the deferral of communal violence.) The more distant the sacred center, the more all these subjects’ symmetrical relation to the center outweighs their differences, and the more it becomes possible to imagine that anyone could be liable to be sacrificed. And if anyone could be liable to be sacrificed, anyone can put themselves forward as a sacrifice, or at least demonstrate a willingness to be sacrificed, if necessary. One might do this for the salvation of the community, but this more conscious self-sacrifice would involve some study of the “traits” and actions that make one a more likely sacrifice; i.e., one must become a little bit of a generative anthropologist. The Jewish notion of “chosenness” is really a notion of putting oneself forward as a sacrifice. And, of course, this notion is completed and universalized by the self-sacrifice of Jesus of Nazareth who, as Girard argued, discredited sacrifice by showing its roots in nothing more than mimetic contagion. (What Jesus revealed, according to Gans, is that anyone preaching the doctrine of universal reciprocity will generate the resentment of all, because all thereby stand accused of resentment.) No one can, any more, carry out human sacrifices in good faith; hence, there is no return to the order of sacral kingship—and, as a side effect, other modes of human and natural causality can be explored.

Oakley follows the tentative and ultimately unresolved attempts of Christianity to come to terms with this same problem—the incompatibility of a transcendent God with sacralized kingships. There is much to be discussed here, and much of the struggle between the Papacy and the medieval European kings took ideological form in the arguments over the appropriateness of “worldly” kings exercising power that included sacerdotal functions. But I’m going to leave this aside for now, in part because I still have a bit of Oakley to read, but also because I want to see what is involved in speaking about power in the terms I am laying out here. Here’s the problem: sacral kingship is the “political commonsense of humankind,” and indeed continues to inform our relation to even the most “secular” leaders, and yet is impossible; meanwhile, we haven’t come up with anything to replace it with—not even close. (One thing worth pointing out is that if, since the spread of Christianity, human beings have been engaged in the task of constructing a credible replacement for sacral kingship, we can all be a lot more forgiving of our political enemies, present and past, because this must be the most difficult thing humans have ever had to do.)

Power, for originary thinking, ultimately lies in deferral and discipline, a view that I think is consistent with de Jouvenel’s attribution of power to “credit,” i.e., faith in someone’s proven ability to step into some “gap” where leadership is required. To take an example I’ve used before, in a group of hungry men, the one who can abstain from suddenly available food in order to remain dedicated to some urgent task would appear, and therefore be, extremely powerful in relation to his fellows. The more disciplined you are, the more you want such discipline displayed in the exercise of power, whether that exercise is yours or another’s. We can see, in sacral kingship, absolute credit being given to the king. Why does he deserve such credit? Well, who are you to ask the question—in doing so, don’t you give yourself a bit too much credit? As long as any failures in the social order can be repaired by more or better sacrifices, such credit can continue to flow, and if necessary be redirected. But if sacrifice is not the cure, it’s not clear what is. If the king puts himself forward as a self-sacrifice on behalf of the community in post-sacrificial terms, well, so can others—shaping yourself as a potential sacrifice, in your own practices and your relation to your community, is itself a capability, one that marks you as elite, i.e., powerful—especially if you inherit the other markers of potential rulership, such as property and bloodline (themselves markers of credit advanced by previous generations). Unsecure or divided power really points to an unresolved anthropological and historical dilemma. If the arguments about Church and Throne in the Middle Ages mask struggles for power, those struggles for power also advance a kind of difficult anthropological inquiry, in which we are still engaged. There’s no reason to assume that the lord who put together an army to overthrow the king didn’t genuinely believe he was God’s “real” regent on earth. It’s a good idea to figure out what good-faith reasons he might have had for believing this.

Now, Renaissance and Reformation thinkers had what they thought would be a viable replacement for sacral kingship (one drawn from ancient philosophy): “Nature.” If we can understand the laws of nature, both physical and human nature, we can order society rightly. This would draw together the new sciences with a rational political order unindebted to “irrational” hierarchies and rituals. I want to suggest one thing about this attempt (which has reshaped social and political life so thoroughly that we can’t even see how deeply embedded “Nature” is in our thinking about everything): “Nature” is really an attempt to create a more indirect system of sacrifice. The possibility of talking about modern society as a system of sacrifice is by now a well-established tradition, referencing the modern genocides and wars along with far more mundane economic practices. Indeed, it’s very easy to see the valorization of “the market” as an indirect method of sacrifice: we know that if certain restrictions on trade, capital mobility, ownership, labor-capital relations, etc., are overturned, a certain amount of resources will be destroyed and a certain number of lives ruined. All in the name of “the Economy.” We know it will happen, and we can participate in the purging of the antiquated and inefficient, but no one is actually doing it—no one is responsible for singling out another to be sacrificed for the sake of the Economy. The indirectness is not just evasiveness, though—it does allow for the actual causes of social events to be examined and discussed. It’s just that they must be discussed in a framework that ensures that some power center will preside over the destruction of constituents of another. One could imagine justifying the “natural” sacrifices of a Darwinian social order if it served as a viable, post-Christian replacement for a no longer acceptable sacrificial order—except that it no longer seems to be working. We can think, for example, about Affirmative Action as a sacrificial policy: we place a certain number of less qualified members of “protected classes” into positions with the predictable result that a certain number of lives and a certain amount of wealth will be lost, and we do this to appease the furies of racial hatred that have led to civil war in the past. But the fact that the policy is sacrificial, and not “rational,” is proven by the lack of any limits to the policy. No one can say when the policy will end, even hypothetically, nor can anyone say what forms of “inequality” or past “sins” it can’t be used to remedy. All this is to be determined by the anointed priests and priestesses of the victimary order. We can just as readily talk about Western immigration policies as an enormous sacrifice of “whiteness,” for the disappearance of which no one now feels they must hide their enthusiasm. The modern social sciences are for the most part elaborate justifications of indirect sacrifices.

So, the problem of absolutism is then a problem of establishing a post-sacrificial order. This may be very difficult but also rather simple. Absolutism privileges the more disciplined over the less disciplined, in every community, every profession, every human activity, every individual, including, of course, sovereignty itself. We can no longer see the king as the fount of spring showers, but we can see him as the fount of the discipline that makes us human and members of a particular order. We could say that such a disciplinary order has a lot in common with modern penology, with its shift in emphasis from purely punitive to rehabilitative measures; it may even sound somewhat “therapeutic.” But one difference is that we apply disciplinary terms to ourselves, not just the other—we’re all in training. Another difference is a greater affinity with a traditional view that sees indiscipline as a result of unrestrained desire—lust, envy, resentment, etc., rather than (as modern therapeutic approaches insist) the repression of those desires. (Strictly speaking, therapeutic approaches see discipline itself as the problem.) But we may have a lot to learn from Foucault here, and I take his growing appreciation of the various “technologies of the self” that he studied, moving a great distance from his initial seething resentment of the disciplinary order, as a grudging acknowledgment of that order’s civilizing nature. Absolutism might be thought of as a more precise panopticon: not every single subject needs to be in constant view, just those on an immediately inferior level of authority. Discipline, in its preliminary forms, involves a kind of “self-sacrifice” (learning to forego certain desires), and a willingness to step into the breach when some kind of mimetically driven panic or paralysis is evident can also be described in self-sacrificial terms—in its more advanced forms, though, discipline means being able to found and adhere to disciplines, that is, constraint-based forms of shared practice and inquiry. Then, discipline becomes less self-sacrificial than generative of models for living—and, therefore, for ruling and being ruled.

June 4, 2017

Cognition as Originary Memory

Filed under: GA — adam @ 6:57 pm

This is the paper (leaving aside any last-minute editing) that I will be reading (via Skype) June 9 at the 11th annual GASC Conference in Stockholm.

Cognition as Originary Memory

 

The shift in focus, in cognitive theory, from the relation between mind and objects in the world to the relation between minds mediated by inter-subjectivity, brings it into dialogue with originary thinking. Michael Tomasello’s studies in language and cognition have become a familiar reference point in originary inquiries, which have drawn upon the deep consonance between his notion of “joint attention” and the originary hypothesis’s scenic understanding of human origin. Peter Gardenfors, in his How Homo Became Sapiens, builds on the work of Tomasello and others so as to include the development of cultural and technological implements, in particular writing, in this social understanding of cognition. Much of the vocabulary of cognitive thinking, though, still retains the assumption of separate, autonomous selves: sensations, perceptions, ideas, thoughts, minds, feelings, knowledge, imagination and so on are all experiences or capacities that individuals have, even if we explain them in social and historical terms. My suggestion is that we think of cognition, of what we do when we think, feel, remember, and so on, directly in linguistic terms, as operations and movements within language, in terms that always already imply shared intentionality. In this way we can grasp the essentially idiomatic character of human being.

 

Eric Gans’s studies of the elementary linguistic forms provide us with an approach to this problem. His most extended study of these forms, of course, is in The Origin of Language, but he has shorter yet sustained and highly suggestive discussions of the relations between the ostensive, the imperative and the declarative in The End of Culture, Science and Faith, Originary Thinking, and Signs of Paradox. In The End of Culture Gans uses the succession of linguistic forms to account for the emergence of mythological thinking and social hierarchy, in Science and Faith to account for the emergence and logic of monotheism, in Originary Thinking, among other things, to propose a more rigorous theory of speech acts, and in Signs of Paradox to account for metaphysics and the constitutive paradoxicality of advanced thought. It makes sense to take what are in these cases historical inquiries and make use of them to examine individual or, to borrow the Girardian term, “interdividual,” cognition, which is always bound up in anthropomorphizing our social configurations in terms of a center constituted out of our desires and resentments.

 

In The Origin of Language Gans shows how each new linguistic form maintains, or preserves, or conserves, the “linguistic presence” threatened by some limitation in the lower form. So, the emergence of the imperative is the making present of an object that an “inappropriate ostensive” has referred to. Bringing the object “redeems” the reference. The assumption here seems to me to be that the loss of linguistic presence is unthinkable—the most basic thing we do as language users is conserve linguistic presence. Another key concept put to use early on in The Origin of Language is the “lowering of the threshold of significance,” which is to say the movement from one significant object in a world composed of insignificant ones to a granting of less and less significance to more and more objects. I think we could say that lowering the threshold of significance is the way we conserve linguistic presence: what threatens linguistic presence is the loss of a shared center that we could point to; by lowering the threshold of significance we place a newly identified object at that center. So, right away we can talk about “thinking” or “cognition” as the discipline of conserving linguistic presence by lowering the threshold of significance.

 

This raises the question of how we conserve linguistic presence by lowering the threshold of significance. If linguistic presence is continuous, then our relation to the originary scene is continuous—in a real sense, we are all, always, on the originary scene—it has never “closed.” In that case, a crisis in linguistic presence marks some weakening of that continuity with the originary scene—the crisis is that we are in danger of being cut off from the scene. But in that case, continuity with the scene must entail the repetition of the scene or, more precisely, its iteration. As long as we are within linguistic presence we are iterating the original scene, in all of our uses of signs. Any crisis must then be a failure of iteration, equivalent to forgetting how to use language. The conservation of linguistic presence, then, is a remembering of the originary scene. Our thinking always oscillates between a forgetting and remembering of the originary scene. But this oscillation must itself be located on the originary scene, which then must be constituted by a dialectic of forgetting and remembering, or repeating and iterating. For my purposes, the difference between “repeat” and “iterate” is as follows: repeating maps the sign onto the center; iterating enacts the center-margin relation.

 

Now, let’s leap ahead to the linguistic form in which we do most of our thinking: the declarative. The declarative has its origins in the “negative ostensive,” the response to the “inappropriate imperative,” where the object cannot be provided, the imperative cannot be fulfilled, and linguistic presence is therefore threatened. But Gans is at pains to distinguish this “negation” from the logical negation that can come into being only with the declarative itself. He refers to the negation in the negative ostensive as the “operator of interdiction,” which he further suggests must be rooted in the first proto-interdiction, the renunciation of appetite on the originary scene. This remembering of the originary scene further passes through other forms of interdiction which entail “enforcement” through what Gans calls “normative awaiting”—he uses examples like the injunction to children not to talk to strangers. As opposed to normal imperatives, these interdictions can never be fulfilled once and for all. Now, even keeping in mind the limited resources available within an imperative culture, this is not an obvious way to relate the information that the demanded object is not available. The issuer of the imperative is told not to do (something) + the object. Not to continue demanding, perhaps; not to do more than demand, i.e., not to escalate the situation. None of these alternatives, along with repeating the name of the object, seems to communicate anything about the object itself. But we can read the operator of interdiction as referring to the object—the object is being told not to present itself. But by whom? Clearly not the speaker. I think the initial declarative works because both possibilities are conveyed simultaneously—the “imperator” is ordered to cease pursuing his demand, and the object is ordered, ultimately by the center, to not be present, which in turn adds force to the interdiction directed back at the imperator, who donates his imperative power to the center. In essence, the declarative restores linguistic presence by telling someone that they must lower their threshold of significance because the object of their desire, as they have imagined it, has been rendered unavailable by, let’s say, “reality.” The lowered threshold brings to attention a center yet to be figured by actions, from a direction and at a time yet to be determined.

 

Now, the embedding of the declarative in the imperative order is not very important if once we have the declarative, we have the declarative, i.e., a new linguistic form irreducible to the lower ones, in the way biology is irreducible to chemistry, and chemistry to physics. But biology is still constrained by chemistry, and chemistry by physics. So is the declarative constrained by the imperative order it transcends and, of course, the imperative by the ostensive. The economy of the dialectic of linguistic forms is conserved. Just as on the originary scene remembering the sign is a way of forgetting the scene, immersion in the declarative dimension of culture is a forgetting of the imperative and the ostensive. To operate, to think and communicate in declarative terms is to imagine oneself liberated from imperatives. This gets formulated, via Kant, in imperative terms: to be a “declarative subject” is to treat others as ends, never as means, to will that your own actions embody a universal law binding on everyone. We could call this an ethics of the declarative. This imperative remembers the origin of the declarative in a kind of imperative from the center to suspend imperatives amongst each other. We could say that logic itself recalls an imperative for the proper use of declaratives, one that allows no imperatives to be introduced, even implicitly, into the discourse at hand—but, of course, this is accomplished in overwhelmingly imperative terms, as all manner of otherwise perfectly legitimate uses of language must be subjected to interdiction. Even more, these imperative uses of the declarative include the imperative to not rest content with any particular formulation of that imperative: what, exactly, does it mean to treat another as an end or means, how can you tell whether another is really taking your action as a law—what counts as adjudication here? If you undertake to treat others only as ends in consequence of your devotion to the categorical imperative, aren’t you treating them as a means to that end? The paradoxes of declarative culture and subjectivity derive from the ineradicability of the absolute imperative founding them.

 

The most decisive liberation of the declarative from the imperative can be seen in the cognitive ramifications of writing, as explained most rigorously, I think, by David Olson in his The World on Paper. Olson argues that it is the invention of writing, alphabetic writing in particular, that turns language into an object of inquiry: something we can break down into parts that we then rearticulate synthetically. These parts are first of all the sounds to be represented by letters, but just as much the words, or parts of sentences, that are identified through writing for the first time. The grammatical analysis of the sentence treats the sentence as a piece of information, makes it possible to construct the scene of speech as a multi-layered dissemination of information about that scene, and thereby provides a model for treating the entire world as a collection of bits of information, ultimately of an event of origin through speech. We could see this as a declarative cosmology. In that case the world can be viewed as a constant flow of information conveyed through everything that could be an object of an ostensive, that is, effect some shift of attention. This declarative metaphysics only comes to fruition in the computer age. We keep discovering that each piece of information is in fact just a piece of a larger piece of information that perhaps radically changes the meaning of the piece we have just assimilated. This is an intrinsic part of scientific inquiry, but subverts more local and informal inquiries with a much lower tolerance for novelty because of a greater reliance on ostensive and imperative culture. Declarative culture promises us we will only have to obey one imperative: the imperative of reality. In that case, we should be able to bracket and contain potentially subversive inquiries into reality by constructing institutions that introduce new increments of deferral and upward gradations of discipline and therefore social integrity, facilitating the assimilation of transformative knowledge. Olson himself, in his Psychological Theory and Educational Reform, seems to think along similar lines by pointing to the intrinsic connection between a literate population and large-scale bureaucracies, which is to say hierarchical orders predicated upon the ongoing translation of language into disciplinary metalanguages that simultaneously direct inquiry and impose discipline. However, if we take declarative culture to provide a mandate, an imperative, to extirpate all imperatives that cannot present themselves as the precipitate of a declarative, then those flows of information come equipped with incessantly revised imperatives coming from no imperative and ostensive center, subjecting imperative traditions to constant assault from hidden and competing metaphysical centers.

 

There will always be imperatives that cannot be justified declaratively because the lowering of the threshold of significance generates new regions of ostensivity that generate imperatives in order to establish guardianship over those regions, in turn leading to requests for information, i.e., interrogatives, which themselves presuppose a cluster of demands that attention be directed in certain ways. In the long term most, maybe all, imperatives could be provided with a declaratively generated genealogy, but only if we for the most part obey them in the meantime. This constitutively imperative relation to a center could be called an “imperative exchange.” I do what you, the center, the distillation of converging desires and shared renunciations, command, and you, the center, do what I request, that is, make reality minimally compliant. We must think in this way in most of our daily transactions—the alternative would be to be perpetually calculating, on the basis of extremely limited and uncertain data, the probabilities of the various possible consequences of this or that action. For the most part, we have to “trust the world,” since we as yet have insufficiently advanced internal algorithms to operate coherently without doing so. The development of declarative, that is, literate, culture heightens this tension by establishing with increasing rigor both a comprehensive centralized, which is to say imperative, order and an interdiction on referring to that order too directly. The absolutized imperative founding the declarative order forbids us to speak and therefore think about it.

 

The revelation of the declarative sentence as the name of God, analyzed by Gans in Science and Faith, his study of the Mosaic revelation of the burning bush, cancels this imperative exchange, which leads one to place a figure at the disappointing center, and replaces it with the information that since God has given everything to you, you are to give everything to God, which is to say to the origin of and through speech. There is no more commensurability and therefore no more exchange. You are to embody the conversion of imperatives into declaratives through readiness to have those imperatives converge upon you. Imperative exchange is ancestor worship, and the absolute imperative embedded in I AM THAT I AM is to suspend ancestor worship and remember the originary scene—that is, remember that it is the participation of all in creating reciprocity that generated the sign, not the other way around. But imperative exchange cannot be eliminated—it is embedded in our habits, it is the form in which we remember the sign and forget the scene—if I do this, reality will supply that. Thinking begins with the failure of some imperative exchange—I did this, but reality didn’t supply that, and why in the world should I have expected it to, since it’s not subject to my commands or tied to me by any promise. The declarative sentence, then, is best understood as the conversion of a failed imperative exchange into a constraint—in thinking, you derive a rule from the failure of your obedience to some command to garner a commensurate response from reality. This rule ties some lowering of significance to the maintenance of linguistic presence, as this relationship requires less substantial or at least less immediate cooperation from reality. We get from the command to the rule by way of the interrogative, the prolongation of the command into a request for the ostensive conditions of its fulfillment. The commands we prolong are themselves embedded in the declaratives, the discourses, we circulate through—raising a question about a claim is tantamount to identifying an unavowed imperative, some attempt at word magic, that the claim conveys. This is how we oscillate between the imperative and ostensive worlds in which we are immersed and the declarative order we extract from and use to remake those worlds. A good question prolongs the command directed at reality indefinitely, iterating it through a series of possible ostensive conditions of fulfillment, which can only be sustained by treating the declarative order as a source of clearer, more convertible commands.

 

June 2, 2017

(Im)morality and (In)equality

Filed under: GA — adam @ 9:49 am

I’d like to work with a few passages from Eric Gans’s latest Chronicle of Love & Resentment (#549) to address some critical questions regarding morality and equality in originary thinking. Needless to say, I share Gans’s “pessimism” regarding the future of Western liberal democracies while seeing (unlike Gans) such pessimism for liberal democracy as optimism for humanity.

What kind of state-level government is feasible in the Middle East?—and one could certainly include large areas of Africa in the question. The fact that we have no clear response suggests that the end of colonialism, however morally legitimate we may find it, did not resolve the difficulty to which colonization, both hypocritically and sincerely, had attempted to respond: how to integrate into the global economy of technologically advanced nation states those societies that remain at what we cannot avoid judging as a lower level of social organization.

So, the end of colonialism is morally legitimate, even though it has left vast swathes of the world increasingly ungovernable, and made it impossible to integrate them into the global economy. What kind of morality is this, then—what does it consider more important than maintaining a livable social order? A note of doubt is introduced here, though: “we may find” this to be morally legitimate, but presumably we may not. There is some straining against the anti-colonialist morality here. The morality that we may or may not consider legitimate, I assume, is that of judging some forms of social organization as lower than others. But what makes refraining from this judgment moral? Colonialism involved governing others according to norms different from those according to which the home country was governed, but unless we assume that this governing was done in the interests of the colonizer and against the interests of the colonized, and could only be so, the moral problem is not clear. These assumptions therefore get introduced into discussions of the colonial relation, but since those assumptions are as arbitrary regarding this form of governance as any other, there’s clearly something else going on.

There is no “racism” here; on the contrary, by assuming that all human beings have fundamentally the same abilities, and that we owe a certain prima facie respect to any social order that is not, like Nazism, altogether pathological, we cannot help but note that some societies are less able than others to integrate the scientific and technological advances of modernity. Thus health crises in Africa continue to be dealt with in what can only be called a “neocolonial” fashion, however unprofitable it may be for the former colonizers, who send doctors, medicine, medical equipment, and food aid to nations suffering from epidemics of Aids or Ebola, or starving from drought or crop failure—or rebuilding from earthquakes, as in Haiti.

The most moral gestures of the modern West are, it seems, its most colonial ones. And what could more disastrously interfere with this moral impulse than the assumption that “all human beings have fundamentally the same abilities”? That assumption forces you to look for dysfunctions on a sociological and historical level—one must conclude it is colonialism itself that is responsible for the disasters of the undeveloped world. But if that is your assumption, you can only behave morally—i.e., actually treat other people as needing your help—by finding some roundabout way of claiming that that is not what you’re doing. That’s the best-case scenario—the worst case is that you keep attacking the “remnants” of colonialism itself, even if they are the most functional part of the social order. Morality and immorality seem to have switched places.

For if we have indeed entered the “digital” age, implying an inalterable premium for symbol manipulation and hence IQ-type intelligence, then the liberal-democratic faith in the originary equality of all is no longer compatible with economic reality. Hence the liberal political system, as seems to be increasingly the case today, cannot simply continue to correct the excesses of the market and provide a safety net for the less able. Increasingly the market system seems to have only two political alternatives. It can be openly subordinated to an authoritarian elite, and in the best cases, as in China, achieve generally positive economic results. Or else, as seems to be happening throughout the West, it is fated to erect ever more preposterous victimary myths to maintain the fiction of universal political equality, rendering itself all but impotent against the “post-colonial” forces of radical Islam.

If vast inequalities based in part upon natural differences in ability are incompatible with the liberal-democratic faith in the originary equality of all, then that faith was always a delusion. Some are arguing that the inequalities opening up now over the digital divide are the most massive ever, but who can really know? What are our criteria—are today’s differences greater than those between medieval lords and serfs, or between 19th century industrialists and day laborers paid by piecework? There’s no common measure, but every civilized society has highly significant inequalities, and today’s is not qualitatively different in that regard. Perhaps there is now less hope that the inequalities can someday be overcome or lessened, but that hope is itself just a manifestation of liberal-democratic faith, so we are going in a circle. It would be more economical to see that loss of faith as an increase in clarity. But what does the increasing or more intractable inequality have to do with the diminishing legitimating function of the welfare state—is it that the rich no longer have enough money to support it, or that the less able are no longer willing to accept the bribe (or have figured out that the bribe will continue even if legitimacy is denied)? The choice between an authoritarian China-style solution and the preposterous victimary imaginary of the West seems clear, but why be downcast about it? If China is the “best case” so far, presumably there can be yet better cases. Obviously, creating myths so as to maintain fictions is unsustainable—what next, legends to preserve the myths that maintain the fiction?—and it might be a relief to engage reality. (In fact, if the welfare state no longer serves a legitimating function, that may be because yet another—let’s just call it a—lie has been exposed: that of endless upward mobility and generational status upgrades.) But does not the discarding of lies and fantasies and the apprehension of reality represent greater morality, rather than immorality?


Victimary thinking is an ugly and dangerous business, but the inhabitants of advanced economies in their “crowd-sourced” wisdom appear to have determined so far that it is the lesser evil compared to naked hierarchy. The “transnational elite” imposes its own de facto hierarchy, but masks it by victimary virtue-signaling, more or less keeping the peace, while at the same time in Europe and even here fostering a growing insecurity.

We have the “crowd-sourced” wisdom of the inhabitants, but then the “transnational elite” and its hierarchy make an immediate entrance. Has that elite not been placing its finger on the crowd-sourcing scale (so to speak)? Through which—through whose—sign exchange systems has the wisdom been crowd-sourced? So, let’s translate: the transnational elite masks its hierarchy by imposing victimary virtue-signaling, but is now running into diminishing returns—the very method that has more or less kept the peace now generates insecurity. It remains only to add that the elites don’t seem to have a Plan B, and appear determined to go on autistically doubling down on their masking and signaling.

But as the economy becomes ever more symbol-driven, these expedients are unlikely to remain sufficient. It would seem that unless science can find an effective way of increasing human intelligence across the board, with all the unpredictable results that would bring about (including no doubt ever higher levels of cybercrime), the liberal-democratic model will perforce follow the bellwether universities into an ever higher level of thought control, and ultimately of tyrannical victimocracy. At which point the “final conflict” will indeed be engaged, perhaps with nuclear weapons, between the self-flagellating victimary West and a backward but determined Third World animated by Islamic resentment…

Or not. Perhaps the exemplary conflict between Western-Judeo-Christian-modern-national-Israeli and Middle-Eastern-Islamic-traditional-tribal-Palestinian can be resolved, and global humanity brought slowly into harmony. Or perhaps the whole West will decline along with its periphery and our great-grandchildren will grow up peacefully speaking Chinese.


But is the China model exclusive to China? Can we not, in a moment of humility, study the China model, and the way it retrieves ancient Chinese traditions from the wreckage of communism? And then, in a renewal of characteristic Western pride, adapt and improve upon it? This would require a return to school regarding our own traditions, subjecting them to an unrestrained scrutiny that even their most stringent critics (Marx, Freud, Nietzsche, Heidegger, Derrida…) could never have imagined. And what’s the point of a revolutionary and revelatory theory like GA if not to do exactly that? The first question to take up, though, would have to be…


Human language was the originary source of human equality, and if our hypothesis is correct, it arose in contrast to the might-makes-right ethos of the animal pecking-order system. The irony would seem to be that the discovery of the vast new resources of human representation made possible in the digital age is in the process of reversing the residue of this originary utopia more definitively than all the tyrannies of the past. Indeed, we may now find in the transparent immorality of these tyrannies a model to envy, because it provided a fairly clear path to the “progress” that would one day overturn them. Whereas for the moment, no such “enlightened” path to the future can be seen.


That of the relation between morality and equality. This is the heart of the matter. Human equality is utopian, but then it couldn’t be at the origin, because the origin couldn’t be utopian. Morality has nothing, absolutely nothing, literally nothing, to do with equality. We should reverse the entire frame here and say there is no equality, except as designated for very specific circumstances using very specific measuring implements. It’s an ontological question: deciding to call the capacity to speak with one another an instance of “equality” is to import liberal ontology into a mode of anthropological inquiry that must suspend liberal “faith” if it is to ask whether that faith is justified. We can then ask which description is better—people talking to each other as “equals,” or people talking to each other as engaged in fine-tuning and testing the direction each wants to lead the other. Which description will provide more powerful insights into human interactions and social order? Determining that “equality” must be the starting assumption just leads you to ignore all features of the interaction that interfere with that assumption, which means it leads you to ignore everything that makes it an interaction—which, interestingly, in practice leads to all kinds of atrocities. What seems like equality is just an oscillation of hierarchies, within a broader hierarchy. In a conversation, the person speaking is for the moment in charge; in 30 seconds, the other person will be in charge. It would be silly to call this “inequality,” even in its more permanent forms (like teacher and student), because it’s simply devotion to the center—whoever can show the way to manifest this devotion points the way for others. And that’s morality—showing others how to manifest devotion to the center. Nothing could more completely overturn the animal pecking order—a peasant can show a king how to manifest devotion to the center, but the king is still the king, because he shows lots of other people how to do it, in lots of situations well beyond the experience and capability of the peasant. Morality involves reciprocity, and reciprocity not only has nothing to do with equality, but is positively undermined by equality. There can only be reciprocity within accepted roles. Most of us don’t go around slaughtering our fellow citizens, but that’s not reciprocity, because such acts are unlawful, these laws at least are seriously enforced, and, moreover, most of us don’t want to do anything like that. When a worker performs his job competently and conscientiously, and the manager rewards the worker with steady pay increases, a promise of continued employment, and safe, clean working conditions—that’s reciprocity. Friends can engage in reciprocity with each other without any explicit hierarchy, but here we’re talking about a gift economy with all kinds of implicit hierarchies. I wouldn’t deny all reciprocity to market exchanges (overwhelmingly between gigantic corporations and individuals), but this kind of reciprocity is minimal and, as we can see, hardly sufficient to stake a social order on. Language makes it possible for us all to participate in social order, but inclusive participation is also not equality, nor is recognition or acknowledgement. In other words, morality (recognition, acknowledgement, reciprocity), yes; equality, no. Forget equality.
What, exactly, made those old tyrannies immoral, or even “tyrannies,” other than (tautologically) their failure to recognize equality? Their successes, and our capacity to reshape those models in new ways, should not be disheartening. If there must be hierarchies and central power, then those things cannot be immoral, any more than hunger can be immoral. Morality enters into our engagement with these realities.
