GABlog Generative Anthropology in the Public Sphere

July 2, 2017

Debts and Deferences

Filed under: GA — adam @ 10:33 am

(For those who would like to comment on the GABlog specifically, I have set up a reddit page: https://www.reddit.com/r/GABlog/comments/6kukdg/debts_and_deferences/)

David Graeber’s Debt: The First 5,000 Years adds a few decisive nails to the coffin of liberal economics and politics. Liberal economists imagine money and markets emerging out of barter; typically, they cannot show that anything like this ever happened, any more than social contract theorists can find an instance where that fictional event ever took place. Villager A has too many chickens, while villager B has too many potatoes, and so A and B exchange chickens for potatoes; villagers C, D, E… n get in on the game, so that at a certain point all the bartering becomes so confusing that all must agree on a currency into which all values can be converted. All of this is ahistorical nonsense. Markets have historically been created and managed by states, for the purpose of maintaining ritual and military institutions. A fully marketized order, meanwhile, involves the violent disruption of personal and moral economies of credit (largely conducted without currency or calculation) and their replacement by debt regimes in which all of an individual’s possessions, and the individual him/herself, are alienable. Traditional debt regimes, in which economies are always moral economies, presuppose the inclusion of everyone within the system—debts never completely expropriate the debtors. The market economy has everyone treating everyone else as outside the system of obligations, as a potential adversary.

Graeber distinguishes between three forms of social organization. First, what he calls “communist,” using the definition from the Communist Manifesto: “from each according to his abilities, to each according to his needs.” Graeber sees this as a kind of originary form of social relations, which we all live according to for much of our everyday lives—such a relation treats the world as a single, eternal object/environment to which everyone contributes and from which everyone receives indiscriminately (if you’ve ever held open a door for an elderly woman, you acted like a communist). The second form of social organization is “exchange,” in which things are seen as commensurable. The third form is “hierarchy,” in which there is no commensurability between objects and individuals, and obligations are set by precedent. The exchange relation is really the focus of Graeber’s book. He traces the disembedding of exchange relations from “communist” ones, and this seems to take place through the intervention of hierarchy. Kings need armies, and so they need to pay their soldiers, so they produce coins in order to do so; those soldiers need to spend the money somewhere, so tradesmen surround the military. Kings need to tax their subjects, so some way of measuring wealth becomes necessary; taxes can be set high enough that subjects have to go into debt, which in turn makes it easier to appropriate their property. We need currency in order to pay such “antagonistic” debts. Now, part of what makes Graeber’s discussion especially interesting (in a way, it’s the starting point of his discussion) is the perplexing fact that not only is debt generally and unthinkingly taken to be a moral question (“we must repay our debts,” “everyone must get what is due him”) but that moral thinking more generally seems to operate primarily with a vocabulary drawn from that of debt (God has given us all kinds of things and we in turn are deeply obliged to Him; we seek redemption from the slavery of sin, etc.).

Graeber’s intention is primarily to debunk this language of debt, which he examines in a sustained way in his chapter on “Primordial Debt.” He discusses sacrifice, and makes the very interesting observation that under some conditions the main form of currency (the representation of value into which exchangeable objects can be converted) is some object or objects (like cattle) most commonly used for sacrifice. For Graeber, the moral discourse of debt is irrational, and the standard of rationality seems to come from “communist” morality. For Graeber, the communists he discusses are much more rational than those of us besotted by debt-talk, who imagine all kinds of unpayable and even unimaginable debts (with God, for example, who couldn’t possibly need anything from us) rather than simply recognizing the basic fact of our interdependence. It would complicate Graeber’s argument to acknowledge that some form of exchange, or debt, not to mention hierarchy, is constitutive of the communist community as well. (Graeber doesn’t see “communism,” “exchange” and “hierarchy” as different kinds of social orders, but as moral economies that co-exist within a single order—still, it’s clear that social orders are distinguished by the predominance of one over the others, and that the egalitarian communities from which Graeber draws his critiques of pathological exchange orders are the more reliable repositories of communist morality.) He focuses on intra-communal relations, not their relation to the sacred center (their ritual order), so the possibility that the notion of debt is indeed primordial, preceding the origin of human inequality, doesn’t arise. This makes it easy for him to ridicule, as a kind of state ideology, the notion, which some researchers claim to find fundamental in the ancient Middle East and India, that existence itself is a form of indebtedness. He contends that, rather than seeing these theological claims as positing a (ridiculous!) “infinite” debt, we should interpret


this list [of escalating debts] as a subtle way of saying that the only way of “freeing oneself” from the debt was not literally repaying debts, but rather showing that these debts do not exist because one is not in fact separate to begin with, and hence that the very notion of canceling the debt, and achieving a separate, autonomous existence, was ridiculous from the start. Or even that the very presumption of positing oneself as separate from humanity or the cosmos, so much so that one can enter into one-to-one dealings with it, is itself the crime that can be answered only by death. Our guilt is not due to the fact that we cannot repay our debt to the universe. Our guilt is our presumption in thinking of ourselves as being in any sense an equivalent to Everything Else that Exists or Has Ever Existed, so as to be able to conceive of such a debt in the first place. (68)


Contrary to his normal procedure, though, Graeber doesn’t show that anyone, other than a present-day anarchist or communist, has actually interpreted these notions in this way. It’s understandable that Graeber would want to insist upon an originary debt-free condition, since the only other way out of the violence endemic to impersonalized debt relations would be through hierarchy. Interestingly, Graeber points out that ancient and more recent pre-modern history is replete with revolts against the expropriating consequences of debt, where there is an implicit equality between debtor and creditor (insofar as they engage in exchange), but almost none against caste systems and slavery—and, I would add, far fewer against monarchy or military hierarchies, where social distinctions are non-negotiable and beyond appeal—but he doesn’t pursue the implications of this observation.

Graeber makes an argument intimately related to one of Marx’s central ones, and it is an argument that must be conceded. What, exactly, makes it possible to exchange one object with any other; what makes the objects commensurable? The objects must be abstracted from the network of relations in which they are embedded, and by “abstracted” Graeber means “violently ripped out.” This analysis, like Marx’s of “abstract labor,” implicates exchange and debt in sacrifice by focusing on the most exchangeable of all objects: human beings. Early forms of exchange between communities and families involved replacing people, and therefore establishing their value (as represented by other objects): brides, slaves, murder victims, and so on. Although Graeber doesn’t speak in these terms, the implication is that hostage taking is central to the earliest forms of exchange. (It is not clear to me whether, for Graeber, or in reality for that matter, the more localized and personalized forms of “credit” he valorizes precede and are distorted by the pathological, hostage-taking forms or whether, on the contrary, the personalized forms are reforms and curtailments of hostage taking, under a new mode of the sacred and a new mode of sovereignty. I find myself assuming the latter, since the establishment of markets by sovereigns must have always involved some violent abstraction, and early forms of exchange between tribes, families and communities must have always presupposed the possibility of violent escalation.) Now, as I argued in my post on sacral kingship, for human beings to have this extremely high “value,” it must be possible to place them at the center—which means that the center must have already been expropriated by the “Big Man” and eventually permanently occupied by the sacral king. Again, we see the inseparability of “humanization” and human sacrifice. Humanity cannot be the highest value without humans being the most valuable exchangeable and sacrificable object. Graeber is right to associate this economy of hostages with the honor culture, which he especially dislikes, in which one’s honor is defined by the stripping of another’s. Flinching at the brutality of such systems, especially when one would be unable to imagine a credible alternative under those conditions, is a serious analytical failure—honor culture must not only have suppressed forms of violence endemic to relations within and between more communist orders, but any replacement of honor culture must defer some critical mode of violence that can be recognized as communally destructive within such societies. And this kind of recognition comes, to quote Marx, under conditions not of one’s choosing.

Despite his ridicule of theologies of “infinite” and “existential” debt, Graeber implicitly concedes that the development of these (critical) modes of thought in the “Axial Age” (800 BC to 600 AD) of the great ancient empires led to the diminishment and ultimately the elimination of those empires’ most egregious practices of mass slavery and human sacrifice. Once debt is conceived in infinite, existential terms, defining one’s relation to the sacred, then it is the assumption that debts can be settled through the exchange of hostages that becomes vulnerable to irony, ridicule and denunciation. Whether it’s “rational” (according to what tradition of rationality? Developed how—by reference to what system of exchange?) is completely irrelevant to the ethical advance that Graeber sees from the Axial Age to the Middle Ages (600-1450 AD), an advance we must see as a result of the gradual assimilation of the transcendent forms of the sacred of Buddhism, Christianity and Islam. The sacral king is the earliest form of absolutism: the sacral king is the cynosure of the order, the mediator between divine and human, and also for this reason a possible sacrifice—the first form of human sacrifice. The ancient emperors retain this sacrality in an extended form (they cannot be violated under any conditions), but since they remove themselves from the position of sacrificial victim, they are sacrificed to, not sacrificed. The ancient empires were regimes of expanded sacrifice, or hostage taking, in which the abstraction and redistribution of individuals was routinely used to settle accounts. This accounts for the moral state of the Axial empires that Graeber deplores, and for the more metaphorical and spiritual forms of sacrifice that provided for the moral revolution which restored, in the Middle Ages, a more reciprocal economy, based upon embedded debt networks and personal credit rather than currency.

We can now focus on the relation between hostage taking, or the violent extraction of humans from the relations of “communism,” “exchange” and “hierarchy” that define them, and sovereignty. The forms of holiness inherited from the Axial Age dissenters invalidate hostage taking: each human being has a unique relation to the divine, so humans can no longer be treated as commensurable with one another. Rather than a possible sacrifice or receiver of sacrifice, the sovereign’s role is now to suppress sacrifice. To sacrifice a human requires that all the attention of the community converge on the sacrificial figure. He or she must be seen as the repository of all desires and resentments, the origin of some proliferating criminality or plague, the cause of dashed hopes. The post-axial sovereign ensures that such attention can only be organized on the sovereign’s terms. Hostage taking implies an honor system, and the suppression of sacrifice means the suppression of the honor system, which is to say the vendetta. The sovereign must settle accounts between groups and individuals in such a way that grievances are satisfied sufficiently to make recourse to the vendetta unthinkable. Sovereignty must reach into and shape the social order so as to block the emergence of power centers interested in restoring the honor system. This means a system of deferences that interposes, between the convergent attention of the many and any individual, the question, “what would the sovereign do (and have me do)?” Which further means that the sovereign must construct a justice system that disseminates answers to those questions broadly and clearly, verbally and through institutionalized practices. When our attention converges on an individual—a celebrity, an infamous criminal or defendant, the victim of a Twitter mob—we may insult, ridicule, taunt, ostracize, but we will stop short of appropriating the sovereign’s prerogative to imprison or kill. At a certain point, our attention converges on those who seem more likely than us to appropriate that prerogative (to organize a lynch mob, for example).

This gradual incorporation of the norms of Axial Age transcendence into Middle Ages governance accounts for the moral, political and even economic and technological advances steadily gained in medieval Europe (I’m not going to try to include parallel developments in the Islamic world, India and China). But insofar as these terms of transcendence inform the state, they can be invoked against the state, especially when they are embodied in a powerful institution with sacral imperial pretensions of its own. It is, after all, possible to concede that central power should be exercised absolutely while still insisting that the occupant of that central power be subject to replacement. Any specific argument along these lines will be marked by inconsistencies, but so will arguments for sovereign-determined succession. And the criteria for replacement will most likely derive from the transcendent terms that are embedded in the sovereign itself. It’s then a few steps to modern democracy, which insists on institutionalizing a system of replacement so that the temporariness of his hold on power will always be present in the mind of the sovereign. It’s then barely a step at all to propose that counters to sovereign action be built into sovereignty itself, in the form of “checks and balances.” But this brings the modern executive perilously close to becoming a sacrificial object again—not just in the once-and-for-all manner in which the absolutist monarchs were sacrificed to inaugurate the modern age, but as a routine, almost ritualized matter. To refer again to my post on sacral kingship, I am arguing for an understanding of modern history as the ongoing attempt to create a satisfactory replacement for sacral kingship—sovereignty as a non-sacrificial center of attention that, even more, deflects towards itself all other potentially sacrificial centers of attention.

What makes the consequences of the “always already” divided sovereignty of medieval Christianity even more destructive is the possibility of re-“abstracting” individuals from their social networks of obligation and reciprocity. The breaking up of the honor system, which gives the individual a direct relation to the sovereign, makes this abstraction a site of power struggles—the source of the high-low vs. the middle power blocs. I’m not going to work through Graeber’s complex discussion of the rise of modernity, but he associates the rise of “capitalism” with a massive new abstraction of individuals—not so much as human hostages (although Graeber foregrounds the importance of world conquest and slavery by the West to this process) but as potential capitalists who see the world completely in terms of exchange. This self-capitalization respects the transcendent Axial terms because, in self-capitalizing, the subject is self-sacrificing through labor, discipline, and the exclusion or reduction of whole domains of what have always been considered essential human experiences. The asceticism of the capitalist subject is certainly in the Christian tradition. As long as this type of subject is privileged, the unification and securing of power is impossible—the self-sacrificing individuals will always be eager clients for sowers of dissension and division. The modern market is a product of power as much as markets ever were, with modern capitalists, as Graeber argues, the descendants of the military adventurers of the early modern age—but, by setting markets against the state, liberalism makes the market a multiplier and intensifier of divided power. If liberalism does not directly restore the honor system, it always incites and ultimately relies on its return—leftism is the institutionalization and infinitely varied refinement of the vendetta. So, absolutism demands the re-embedding of individuals into “communistic,” “exchange” and “hierarchical” orders, but on terms that preclude reversion to the honor system and preserve the mass literacy and numeracy presupposed, if not quite accomplished, by contemporary social orders.

To an extent, absolutists stand with some elements of the contemporary left, those that still have abolishing the capitalist world order on their agenda—at the very least, we can notice some of the same things deliberately ignored by liberals. There are actually very few on the left, and those very feeble (in power and intellectual acuity), who have kept their eye on replacing the metastasized systems of exchange that have swallowed up all human relations and made us all hostage to globalizing economic, political and media regimes. Transnational human rights regimes and climate fanaticism, to take two examples (both providing legal and moral bases for “political correctness” and supply chains from transnational economic entities to your humble social justice warrior), tie the left irreversibly to capitalism. Blackmailing corporations and other large institutions, along with infiltrating the permanent state (which ensures the blackmailing will work), pretty much defines the left at this point. No one is more calculating and exchange-oriented than they are. And those on the left who wish to return to class, economic inequality and socialist transformation are completely unwilling to challenge the splintering of the leftist project along identity lines.

Graeber, to his credit, says little about the prospects of the left, refusing to feed his readership false optimism. To his discredit, while insisting on the permanence of the “communistic” dimension of human experience (we could hardly rid ourselves of it if we wished), and devoting the bulk of his attention to distinguishing productive from pathological modes of exchange, he says very little, especially by way of proposing new ways of thinking, about the “hierarchical” dimension. He concedes its necessity, but never offers even the most qualified praise for responsible uses of hierarchy, much less a rigorous distinction between positive and negative forms. I have to assume that, as a confirmed leftist speaking mostly to other leftists (Graeber has been an important figure in the “anti-globalization” movement [the ones who smashed up Seattle back in antiquity, i.e., 1999] which, insofar as it still exists, has become the alt-right movement), he has scruples about saying anything good about hierarchy. We, of course, have no such scruples—quite to the contrary! The articulation of “communism,” “exchange” and “hierarchy” can probably be incorporated very nicely into absolutism. The most originary manifestation of hierarchy is naming: to name another being is to establish an origin and destiny, and thereby constitute it, bring it into existence. Delegating is itself a form of naming. Naming is performative, like christening a ship or marrying a couple, activities that manifest the most basic social traditions. In a sense, that is what a tradition entails—a reciprocally constituting system of names.

The political formalism instituted by Moldbug is also a form of naming—anonymous, and therefore apparently spontaneous, powers are incorporated and made subordinate to the sovereign through naming. The media are propaganda agencies of some power center or another—the blogger Sundance at the Conservative Treehouse asserts that the CIA leaks to the Washington Post and the FBI to the New York Times. No doubt we could create a more comprehensive map of such affiliations. In the interests of transparency, we should not only have such a map but use it to centralize the information policy of the regime. Every piece of information comes from some specific place in the chain of command. That means all information purveyors are named by the sovereign. Moving beyond this specific example, we can see that sovereign naming prevents the abstraction of individuals in a way that conforms to a dynamic social order. Something new—a new enterprise, an invention—comes out of something existing, something with a name, and is itself named as soon as it comes to the attention of the sovereign (and the sovereign keeps getting better at noticing and assessing novel phenomena).

How do we devise and apply new names? Like Graeber’s “communism,” this practice is part of our most elementary relations to the world and each other. To point to something that hasn’t been noticed is to name it, even if only as “today’s hamburger,” as opposed to all the other hamburgers we’ve all eaten previously. Sovereign naming produces new centers of attention that direct our attention back to the sovereign’s naming capacity. Here’s a way to think about how “naming” as a form of thinking and speaking happens. Gertrude Stein had a habit of playfully naming the chapters in her books. One reads through Chapters 1-6 and then the next chapter is “Chapter 3.” This arrests one’s attention and directs it toward the meta-critical dimension of books, to things we don’t ordinarily notice. After one has read a lot of books, one notices patterns—so, a “typical” novel might have, say, 15 chapters, and the different chapters develop a certain character, or “feel,” because of the formulas of novel writing. So, in a 15-chapter book, chapter 7 has a “turning point” or “climax,” and when the reader gets to Chapter 7 such an expectation is implicit. One notices these patterns and forgets them, as one simply plugs new books into the formula. But if there is a character or feel to “Chapter 7,” then other chapters can be Chapter 7-ish, say, in a book that reworks the formulae. You can let the reader notice the subversion of the formula, or you can explicitly identify the upcoming chapter as, “really,” Chapter 7, even if it comes after Chapter 2 and before Chapter 3. Whatever is better for writers, it is better for authority to explicitly name the “emergent property,” and to do so, also explicitly, in the only way one can—tropologically, that is, by violating some linguistic rule or expectation, using a word in a “wrong” way that is now made “right” by its authoritative application. Sovereign naming is thus the ostensive dimension of social order, which allows for a coherent array of imperatives and therefore a clarified chain of command. Of course, subjects will themselves get into the habit of naming, of making explicit their relations to each other, their obligations and expectations, and also their disappointments and amendments of those relations. We would then have the means to resist our “abstraction” by deferring to one another’s names.

June 25, 2017

Equality and Morality

Filed under: GA — adam @ 6:23 am

I appreciate Eric Gans’s detailed response to my blog post (In)equality and (Im)morality, and am glad to respond to at least most of the issues he raises there. Part of the problem here is that, as pretty much everyone knows by now, “equality” is used in so many different ways that it would be futile to define it in a single, agreed-upon way. Maybe it’s even useless, or should only be used in very restricted and precisely defined contexts (like “economic inequality,” by which we mean that the highest salary is x times larger than the smallest, or whatever). That, of course, would remove it from moral discourse altogether, or at least make it subordinate to a moral discussion conducted on different grounds (high levels of economic inequality might indicate, but not demonstrate, some underlying moral issue). Would moral discourse suffer from this excision or derogation? Let’s look at one of Eric’s examples:

In spontaneously formed groups up to a certain size and in a context that makes the sheer exercise of force impossible (in contrast to the “savage” groups favored in apocalyptic disaster films), people tend to cooperate democratically, profiting when necessary from the specific skills of individuals but not choosing a “king,” and the same is true in juries, where the foreman is an officer of the group rather than its leader. Democracy in this sense doesn’t deny that some people may have better judgment than others, but it permits unanimous cooperation, and I venture to say, corresponds to “natural” human interaction since the originary scene.

The point here is to affirm the originary nature of equality, here defined in the sense of the voluntary and spontaneous quality of the cooperation and the fluidity of leadership changes. I think we can easily find other examples of small group formation, especially under more urgent conditions, where hierarchies are firmly established and preserved without the application of physical force. Indeed, that is what takes place in most of the disaster films I’ve seen—you couldn’t really force someone to follow you out of a burning building, or find the part of the ship that will sink last, or keep one step ahead of the aliens. In such cases, people follow whoever proves he (is it sexist that it is still usually a “he”?) is capable of overcoming obstacles, keeping cool, anticipating problems, calming the others, fending off challenges without undermining group cohesion, etc. In the case of a jury, we have one very clearly designed and protected institution (and hardly a spontaneously formed one)—but why, exactly, is the foreman necessary? Why do we take it for granted that the jury can’t simply run itself spontaneously, with a democratic vote over which piece of evidence to discuss next, then a democratic vote to decide whether to take a preliminary vote, but first a vote to decide whether the other votes should be by secret ballot, etc.? It seems pretty obvious that the process will work better, and lead to a more just result, if someone sets the agenda—but why is it obvious? An even broader point here is that we have no way of determining, on empirical grounds, whether the cooperation involved is “spontaneous,” “voluntary” and “unanimous.” These are ontological questions, which enter into the selection of a model. In any case that Eric could describe as people organizing themselves spontaneously, I could describe them as following someone who has taken the initiative. The question, then, is which provides the better description. I think that the absolutist ontology I propose does, because to describe any group as organizing itself spontaneously collapses into incoherence. They can’t all act simultaneously, can they? If not, one is in the lead at any moment, and the others are following, in some order that we could identify. (If they don’t follow, we don’t have a group, and the question is moot.)

Does talk of equality and inequality help us here? I don’t see how. Let’s say a particular member of a jury feels that his or her contributions to the discussion have been neglected, and he or she resents that. There are two possibilities: one, the contributions have been less useful than those of others, meaning the neglect was justified; two, the contributions have been unjustly neglected. In the first case the moral thing to do (a good foreman would take the lead here) is to explain to the individual juror what has been lacking in his contributions, and suggest ways to improve them as the deliberations proceed. In the second case, the moral thing to do is to realize that the foreman has marginalized contributions that would have enhanced the deliberative process, and, in the interest of improving that process, she should acknowledge the value of those contributions, try to understand why they went unappreciated, and be more attentive to the distinctive nature of those contributions in the future. The juror’s resentment, in either case, is framed in terms of a resentment on behalf of the process itself or, to put it in originary terms, on behalf of the center. The assumption is that all want the same thing—a just verdict. Once the resentment is framed in terms of unequal treatment, to be addressed by the application of the norm of equal treatment (everyone’s opinion must be given equal weight? everyone must speak for the same amount of time?), the deliberative process is impaired, and if that framing is encouraged, it will impair the process beyond repair. The moral thing to do, then, is to resist such a framing. Now, it may very well be that the juror has been marginalized for reasons such as racial prejudice (it’s also possible that the juror is complaining for that reason), in which case the deliberative process should be corrected to account for that. The point, though, is always to improve that process, not to eliminate that form of prejudice (and all of its effects) within the jury room. Even if the juror in question is trying to reduce the conflict to one of some difference extrinsic to the process, the foreman should reframe it in this way—that is the moral thing to do.

I think this ontological question, which turns into a question of framing, can be situated on the originary scene itself. What matters on the originary scene is that everyone defer appropriation, and offer a sign to the others affirming this. Everyone does something—should we call that “equality”? We can, I suppose, but why? There are plenty of cases where “everyone” plays their individual part in “doing something,” while those parts are widely disparate in terms of difficulty and significance to the project. It’s just as easy to imagine a differentiated originary scene, where, for example, some sign only after others have already set the terms, so to speak, as it is to imagine a scene in which everyone signs simultaneously and with equal efficacy. Easier, in fact, I think. What matters is that everyone is on the scene. The same is the case when it comes to dividing the meal—there’s no need to assume that everyone eats exactly the same amount; all we have to assume is that everyone eats together (unlike the animal pecking order, where each has to wait until the higher ranking animal has finished). This is what I think the moral model requires: everyone affirms the scene, and their relations to all others on the scene; and everyone is at the “table” and receives a “piece.” What this will mean in any given social order can’t be determined in advance and therefore will be something we can always argue over (and something any ruler will want to receive feedback on), but that is what makes it a basis for criticizing the existing order. If the individual juror’s contribution never does get recognized and this was in fact to the detriment of the deliberations, then we could say she has done her part in affirming the scene but has not gotten her “piece,” or has been kept away from the “table,” thereby weakening the scene as a whole. Again, I don’t see any point along the way here where the concept of “equality” clarifies anything.

Now, I do believe that primitive (let’s say, stateless and marketless) communities are highly egalitarian. Equality does mean something here—this is their interpretation of the originary scene, and they certainly have very good reasons for it. What equality means might be that no goods of any kind are saved, that no family is allowed a larger dwelling than any other, that anyone who gets too good at something be punished in some way, that no one speak to another member of the community in such a way as to imply a relation of indebtedness, and so on. Such an understanding of equality still prevails at times, even in much more advanced and complex societies—we see it in children, among colleagues in the workplace, family members, and so on. We are all at least a little bit communist. But there’s nothing inherently moral about this “communism.” Sometimes it might be moral, sometimes not. It’s immoral to destroy a common project because you’re afraid someone else will show you up; it might very well be moral for children to “enforce” (within bounds) equal treatment by the parents of all the siblings, because this insistence might help correct for favoritism of which the parents might not be aware, and therefore might help the family to flourish. Again, though, the question of morality comes down to whether you are contributing to the preservation and enhancement of an institution.

I do agree that “telling the truth about human difference” is a marginal issue, and not a moral position in itself. My only point in this regard is that, in this case, telling the truth is more moral than lying, and the victimary forces poisoning public life today give us no choice but to do one or the other. I think we could get along fine without dwelling on tables showing the relative IQs of all the ethnic and racial groups in the world, but we need such a reference point if we refuse to concede that the only explanation for disparate outcomes is racism/sexism/homophobia, etc. And, really, if the more moral thing, in this instance, is to tell the truth, then it’s hard to fault those who do so with a bit of gusto. Those flinging accusations of racism are not exactly restrained in their “debating” tactics, after all. A bit of tit for tat can be moral as well, although whether it involves “equality” is also a matter of framing. If there’s a more moral way of responding to those who, by now, are claiming that we want to kill millions of people and openly celebrate violence in the streets, I’d be very glad to hear it. In fact, as some of those most viciously accused of “white supremacy” among other thought crimes have pointed out quite cogently, if it turns out that some groups are on average smarter than others (and some groups are better than others in other ways, and some groups are better in math and others in verbal skills, etc.), there is absolutely no reason why we still can’t all get along perfectly well. After all, more and less intelligent and capable people get along within the same institution all the time, so the only thing that would prevent this from being the case throughout society is persistent equality-mongering. That’s why I think the best way forward in terms of using the originary scene as a moral model is to focus on common participation in, contribution to, and recognition by social institutions. And if we are to direct our attention to the preservation, enhancement and creation of institutions (if we want to be good teachers and students within functioning schools and universities rather than affirmatively acted-upon experts in group resentment, if we want to be good husbands, wives and parents within a flourishing system of monogamy rather than feminists, etc.), then we want those institutions to be well run and considerately run. And if we want them run in these ways, we want to bring the power of those running them as closely in line with their accountability as we can. In other words, we want cooperation to be directed (to go back to those opening examples, no one is going to propose allowing a university to be run “spontaneously,” I assume) by those with initiative, experience, and responsibility, and we want them to be appointed and assessed in a like manner, by others competent to do so. And that, I think, would bring us to a much higher level of morality.

It seems to me that the problem Eric is trying to solve here is the following: in any minimally civilized or developed order, “inequality” has developed to the point that the moral model must be “translated” in some way so as to minimize the resentments generated by that inequality. The way he thinks the historical process has enabled this is through the emergence of the market and liberal democratic political processes. The “actual” inequality (the existence of both billionaires and those who sleep under bridges) is mitigated by the “formal” equality of the market (my dollar is worth as much as anyone else’s), the vote, various “rights,” and so on. How can we tell whether this “works”? We can point out that the US is still richer and more powerful than Russia or China, I suppose, but, leaving aside how certain we can be about the causes (and continuance) of this Western predominance, we certainly can’t see this as a moral argument. (There’s nothing particularly moral about bribing the lower classes to remain quiescent.) I think there is an unjustified leap of faith here. It may be true that these forms of formal (pretend?) equality have been granted for the purposes Eric suggests, but that doesn’t prove they have actually served that purpose—it might mean exactly the opposite, that the progress of “equality” has been a means of ensuring that the real inequalities (or structures of power) remain untouched.

I would push this further—there is no reason to assume that whatever we can call “inequalities” are themselves the source of any resentment that might threaten the social order. We could say, for example, that the 19th century European working class resented having its labor exploited, being underpaid, being subjected to unsafe conditions, and so on. Or, we could say they resented having their communities undermined, the network of relations in which they were embedded torn apart, and being driven off the land and into packed cities where they were stripped of any system of moral reciprocities. Interestingly, both the capitalist and the revolutionary have good grounds for preferring the first explanation—it presents the capitalist with a problem he can solve politically (labor unions, welfare, minimum wage, public housing, etc.) and the communist with leverage (in case the capitalist palliatives don’t work). Neither wants to confront the implications of the second explanation, which would require preserving or reconstructing a moral order. This too is a question of ontology and framing. Maybe real reciprocity rather than formal equality is called for. One could now say “but these changes were inevitable,” but that’s what one says in abandoning responsibility. One could say, “still, overall, modernity is preferable,” but can one make that argument on terms other than those of modernity itself? Has anyone actually made the argument that increasing wealth, developing technology and improving living conditions require liberal democracy and ever expanding forms of formal equality? Once we step outside of the frame forcing us to see “modernity” as a single, inevitable, beneficial package, the connection is not obvious at all. (It’s interesting that there’s never been much of a push to democratize or liberalize the structure of corporations. The continued existence of such a creature as a CEO doesn’t seem to trouble our moral model. Even the left has learned to love the CEO.) Every form of cooperation has an end and a logic to it, an end and logic that we can always surface from the language we find ourselves using in discussing that form. Schools are for learning, commerce is for mutually beneficial exchange, militaries are for fighting other militaries, families are for channeling sexual desire into the raising of new generations, conversations are for creating solidarities, exchanging information, trying out new roles, etc. We can frame all resentments as indicating possible infirmities in these forms of cooperation, and then address those resentments by repairing those forms where necessary. And by “we,” I mean whoever has the most responsibility within those forms. This would involve far more moral seriousness than robotically translating each complaint into an accusation of inequality. In this way the moral model would be just as real now as it was on the originary scene (it is still being used to sustain the scene), rather than an abstraction uncomfortably fit onto what we have decided to see as a qualitatively different set of relations.

June 15, 2017

Sacral Kingship and After: Preliminary Reflections

Filed under: GA — adam @ 4:49 pm

Sacral kingship is the political commonsense of humankind, according to historian Francis Oakley. In his Kingship: The Politics of Enchantment, and elsewhere, Oakley explores the virtual omnipresence (and great diversity) of sacral kingship, noting that the republican and democratic periods in ancient Greece and Rome, much less our own contemporary democracies, could reasonably be seen as anomalies. What makes kingship sacral is the investment in the king of the maintenance of global harmony—in other words, the king is responsible not only for peace in the community but for peace between humans and the world: quite literally, the king is responsible for the growth of crops, the mildness of the weather, the fertility of livestock and game, and, more generally, for maintaining harmony between the various levels of existence. Thinking in originary anthropological terms, we can recognize here the human appropriation of the sacred center, executed first of all by the Big Man but then institutionalized in ritual terms. The Big Man is like the founding genius or entrepreneur, while the sacred king is the inheritor of the Big Man’s labors, enabled and hedged in by myriad rules and expectations. The Big Man, we can assume, could still be replaced by a more effective Big Man, within the gift economy and tribal polity. Once the center has been humanly occupied, it must remain humanly occupied, while ongoing clarification regarding the mode of occupation would be determined by the needs of deferring new forms of potential violence.

One effect of the shift from the more informal Big Man mode of rule to sacral kingship would be the elimination of the constant struggle between prospective Big Men and their respective bands. But at least as important is the possibility of loading a far more burdensome ritual weight upon the individual occupying the center. And if the sacral king is the nodal point of the community’s hopes, he is equally the scapegoat of its resentments. Sacral kings are liable for the benefits they are supposed to bring, and the ritual slaughter of sacral kings is quite common, in some cases apparently ritually prescribed. It’s easy to imagine this being a common practice, since not only does the king, in fact, have no power over the weather, but a king elevated through ritual means will not necessarily be more capable of carrying out the normal duties of a ruler than anyone else. Indeed, some societies separated out the ritual from the executive duties of kingship, delegating the latter to some commander, and thereby instituting an early form of division of power—but these seem to have been more complex and advanced social orders, capable of living with some tension between the fictions and realities of power (medieval to modern Japan is exemplary here).

It seems obvious that sacral kings, especially the more capable among them, must have considered ways of improving their position within this set of arrangements. The most obvious way of doing so would be to conquer enough territories, introduce enough differentiations into the social order, and establish enough of a bureaucracy to neutralize any hope on the part of rivals of replacing oneself. (No doubt, the “failures” of sacral kings to ensure fertility or a good rainy season were often framed and broadcast by such rivals, even if the necessity of carrying out such power struggles in the ritualistic language of the community would make it hard to discern their precise interplay at a distance.) Once this has been accomplished, we have a genuine “God Emperor” who can rule over vast territories and bequeath his rule to millennia of descendants. The Chinese, ancient Near Eastern and Egyptian monarchies fit this model, and the king is still sacred, still divine, still ensuring the happiness of marriages, the abundance of offspring, and so on. If it’s stable, unified government we want, it’s hard to argue with models that remained more or less intact, in some cases, for a couple of thousand years. Do we want to argue with them?

The arguments came first of all from the ancient Israelites, who revealed a God incompatible with the sacralization of a human ruler. The foundational story of the Israelites is, of course, that of a small, originally nomadic, then enslaved, people escaping from, and then inflicting a devastating defeat upon, the mightiest empire in the world. The exodus has nourished liberatory and egalitarian narratives ever since. Furthermore, even a cursory, untutored reading of the history of ancient Israel as recorded in the Hebrew Bible can see the constant, ultimately unresolved tension regarding the nature and even legitimacy of kingship, either for the Israelite polity itself or for those who took over the task of writing (revising? inventing?) its history. On the simplest level, if God is king, then no human can be put in that role; insofar as we are to have a human king, he must be no more than a mere functionary of God’s word (which itself is relayed more reliably by priests, judges and prophets). At the very least, the assumption that the king is subjected to some external measure that could justify his restraint or removal now seems to be a permanent part of the human condition. Even more, if the Israelite God is the God of all humankind, with the Israelites His chosen priests and witnesses, the history of that people takes on an unprecedented meaning. Under conditions of “normal” sacral kingship, the conquest and replacement of one king by another merely changes the occupant, not the nature, of the center. Strictly speaking, the entire history (or mythology) of the community pre-conquest is cancelled and can be, and probably usually is, forgotten—or, at least, aggressively translated into the terms of the new ritual and mythic order. Not for the Israelites—their history is that of a kind of agon between the Israelites (and, by extension, humanity) and God: the defeats and near obliteration of the Jews are manifestations of divine judgment, punishing the Jews for failing to keep faith with God’s law. Implicit in this historical logic is the assumption that a return to obedience to God’s will is to issue in redemption, making the continued existence of this particular people especially vital to human history as a whole, but just as significantly providing a model for history as such.

At the same time, Judaic thought never really imagines a form of government other than kingship. As has often been noted, the very discourse used to describe God in the Scriptures, and to this day in Jewish prayer, is highly monarchical—God is king, the king of kings; the honor due to God is very explicitly modeled on the kind of honor due to kings, and the kind of benefits expected to result from doing God’s will follow very closely those expected from the sacral king. The covenant between the Israelites and God (the language of which determines that used by the prophets in their vituperations against the sinning community) is very similar to covenants between kings and their people common in the ancient Near East. And, of course, throughout the history of the diaspora, Jewish hopes resided in the coming of the Messiah, very clearly a king, even descended from the House of David—so deeply rooted are these hopes that many Jews prior to the founding of the State of Israel refused, and a tenacious minority still today refuses, to admit its legitimacy because it fails to fit the Messianic model. All of this testifies to the truth of Oakley’s point—so powerful and intuitive is the political commonsense of humankind that even the most radical revolutions in understandings of the divine ultimately resolve themselves into a somewhat revised version of the original model. Of course, slight revisions can contain vast and unpredictable consequences.

So, why not simply reject this odd Jewish notion and stick with what works, an undiluted divine imperium? For one thing, we know that kings can’t control the weather. But how did we come to know this? If, in the more local sacral kingships, the “failure” of the king would lead to the sacrificial killing of that king (on the assumption that some ritual infelicity on the part of the king must have caused the disaster), what happens once the God Emperor is beyond such ritual punishment? Something else, lots of other things, get sacrificed. The regime of human sacrifice maintained by the Aztec monarchs was just the most vivid and gruesome example of what was the case in all such kingdoms—human sacrifice on behalf of the king. One of Eric Gans’s most interesting discussions in his The End of Culture concerns the emergence of human sacrifice at a later, more civilized level of cultural development—it’s not the hunter-gatherer aboriginals who offer up their first born to the gods, but those in more highly differentiated and hierarchical social orders. If your god-ancestor is an antelope, you can offer up a portion of your antelope meal in tribute; if your god is a human king, you offer up your heir, or your slave, because that is what he has provided you with. This can take on many forms, including the conquest, enslavement and extermination of other peoples, in order to provide such tribute. What the Judaic revelation reveals is that such sacrifice is untenable. What accounts for this revelation? (It’s so hard for us to see this as a revelation because it is hard for us to imagine believing that the king, for example, provides for the orderly movements of heavenly bodies. But “we” believed then, just as “we” believe now, in everything conducive, as far as we can tell, which is to say as far as we are told by those we have no choice but to trust, to the deferral of communal violence.) The more distant the sacred center, the more all these subjects’ symmetrical relation to the center outweighs their differences, and the more it becomes possible to imagine that anyone could be liable to be sacrificed. And if anyone could be liable to be sacrificed, anyone can put themselves forward as a sacrifice, or at least demonstrate a willingness to be sacrificed, if necessary. One might do this for the salvation of the community, but this more conscious self-sacrifice would involve some study of the “traits” and actions that make one a more likely sacrifice; i.e., one must become a little bit of a generative anthropologist. The Jewish notion of “chosenness” is really a notion of putting oneself forward as a sacrifice. And, of course, this notion is completed and universalized by the self-sacrifice of Jesus of Nazareth who, as Girard argued, discredited sacrifice by showing its roots in nothing more than mimetic contagion. (What Jesus revealed, according to Gans, is that anyone preaching the doctrine of universal reciprocity will generate the resentment of all, because all thereby stand accused of resentment.) No one can, any more, carry out human sacrifices in good faith; hence, there is no return to the order of sacral kingship—and, as a side effect, other modes of human and natural causality can be explored.

Oakley follows the tentative and ultimately unresolved attempts of Christianity to come to terms with this same problem—the incompatibility of a transcendent God with sacralized kingships. There is much to be discussed here, and much of the struggle between the Papacy and the medieval European kings took ideological form in the arguments over the appropriateness of “worldly” kings exercising power that included sacerdotal power. But I’m going to leave this aside for now, in part because I still have a bit of Oakley to read, but also because I want to see what is involved in speaking about power in the terms I am laying out here. Here’s the problem: sacral kingship is the “political commonsense of humankind,” and indeed continues to inform our relation to even the most “secular” leaders, and yet it is impossible; meanwhile, we haven’t come up with anything to replace it with—not even close. (One thing worth pointing out is that if, since the spread of Christianity, human beings have been embarked upon the task of constructing a credible replacement for sacral kingship, we can all be a lot more forgiving of our political enemies, present and past, because this must be the most difficult thing humans have ever had to do.)

Power, for originary thinking, ultimately lies in deferral and discipline, a view that I think is consistent with de Jouvenel’s attribution of power to “credit,” i.e., faith in someone’s proven ability to step into some “gap” where leadership is required. To take an example I’ve used before, in a group of hungry men, the one who can abstain from suddenly available food in order to remain dedicated to some urgent task would appear, and therefore be, extremely powerful in relation to his fellows. The more disciplined you are, the more you want such discipline displayed in the exercise of power, whether that exercise is yours or another’s. We can see, in sacral kingship, absolute credit being given to the king. Why does he deserve such credit? Well, who are you to ask the question—in doing so, don’t you give yourself a bit too much credit? As long as any failures in the social order can be repaired by more or better sacrifices, such credit can continue to flow, and if necessary be redirected. But if sacrifice is not the cure, it’s not clear what is. If the king puts himself forward as a self-sacrifice on behalf of the community in post-sacrificial terms, well, so can others—shaping yourself as a potential sacrifice, in your own practices and your relation to your community, is itself a capability, one that marks you as elite, i.e., powerful—especially if you inherit the other markers of potential rulership, such as property and bloodline (themselves markers of credit advanced by previous generations). Unsecure or divided power really points to an unresolved anthropological and historical dilemma. If the arguments about Church and Throne in the Middle Ages mask struggles for power, those struggles for power also advance a kind of difficult anthropological inquiry, in which we are still engaged. There’s no reason to assume that the lord who put together an army to overthrow the king didn’t genuinely believe he was God’s “real” regent on earth. It’s a good idea to figure out what good faith reasons he might have had for believing this.

Now, Renaissance and Reformation thinkers had what they thought would be a viable replacement for sacral kingship (one drawn from ancient philosophy): “Nature.” If we can understand the laws of nature, both physical and human nature, we can order society rightly. This would draw together the new sciences with a rational political order unindebted to “irrational” hierarchies and rituals. I want to suggest one thing about this attempt (which has reshaped social and political life so thoroughly that we can’t even see how deeply embedded “Nature” is in our thinking about everything): “Nature” is really an attempt to create a more indirect system of sacrifice. The possibility of talking about modern society as a system of sacrifice is by now a well-established tradition, referencing the modern genocides and wars along with far more mundane economic practices. Indeed, it’s very easy to see the valorization of “the market” as an indirect method of sacrifice: we know that if certain restrictions on trade, capital mobility, ownership, labor-capital relations, etc., are overturned, a certain amount of resources will be destroyed and a certain number of lives ruined. All in the name of “the Economy.” We know it will happen, and we can participate in the purging of the antiquated and inefficient, but no one is actually doing it—no one is responsible for singling out another to be sacrificed for the sake of the Economy. The indirectness is not just evasiveness, though—it does allow for the actual causes of social events to be examined and discussed. It’s just that they must be discussed in a framework that ensures that some power center will preside over the destruction of constituents of another. One could imagine justifying the “natural” sacrifices of a Darwinian social order if it served as a viable, post-Christian replacement of a no longer acceptable sacrificial order—except that it no longer seems to be working. We can think, for example, about Affirmative Action as a sacrificial policy: we place a certain number of less qualified members of “protected classes” into positions with the predictable result that a certain number of lives and certain amount of wealth will be lost, and we do this to appease the furies of racial hatred that have led to civil war in the past. But the fact that the policy is sacrificial, and not “rational,” is proven by the lack of any limits to the policy. No one can say when the policy will end, even hypothetically, nor can anyone say what forms of “inequality” or past “sins” it can’t be used to remedy. All this is to be determined by the anointed priests and priestesses of the victimary order. We can just as readily talk about Western immigration policies as an enormous sacrifice of “whiteness,” for the disappearance of which no one now feels they must hide their enthusiasm. The modern social sciences are for the most part elaborate justifications of indirect sacrifices.

So, the problem of absolutism is then a problem of establishing a post-sacrificial order. This may be very difficult but also rather simple. Absolutism privileges the more disciplined over the less disciplined, in every community, every profession, every human activity, every individual, including, of course, sovereignty itself. We can no longer see the king as the fount of spring showers, but we can see him as the fount of the discipline that makes us human and members of a particular order. We could say that such a disciplinary order has a lot in common with modern penology, with its shift in emphasis from purely punitive to rehabilitative measures; it may even sound somewhat "therapeutic." But one difference is that we apply disciplinary terms to ourselves, not just the other—we're all in training. Another difference is a greater affinity with a traditional view that sees indiscipline as a result of unrestrained desire—lust, envy, resentment, etc., rather than (as modern therapeutic approaches insist) the repression of those desires. (Strictly speaking, therapeutic approaches see discipline itself as the problem.) But we may have a lot to learn from Foucault here, and I take his growing appreciation of the various "technologies of the self" that he studied, moving a great distance from his initial seething resentment of the disciplinary order, as a grudging acknowledgment of that order's civilizing nature. Absolutism might be thought of as a more precise panopticon: not every single subject needs to be in constant view, just those on an immediately inferior level of authority. Discipline, in its preliminary forms, involves a kind of "self-sacrifice" (learning to forego certain desires), and a willingness to step into the breach when some kind of mimetically driven panic or paralysis is evident can also be described in self-sacrificial terms—in its more advanced forms, though, discipline means being able to found and adhere to disciplines, that is, constraint-based forms of shared practice and inquiry. Then, discipline becomes less self-sacrificial than generative of models for living—and, therefore, for ruling and being ruled.

June 4, 2017

Cognition as Originary Memory

Filed under: GA — adam @ 6:57 pm

This is the paper (leaving aside any last-minute editing) that I will be reading (via Skype) June 9 at the 11th annual GASC Conference in Stockholm.

Cognition as Originary Memory

The shift in focus, in cognitive theory, from the relation between mind and objects in the world to the relation between minds mediated by inter-subjectivity brings it into dialogue with originary thinking. Michael Tomasello's studies in language and cognition have become a familiar reference point in originary inquiries, which have drawn upon the deep consonance between his notion of "joint attention" and the originary hypothesis's scenic understanding of human origin. Peter Gardenfors, in his How Homo Became Sapiens, builds on the work of Tomasello and others so as to include the development of cultural and technological implements, in particular writing, in this social understanding of cognition. Much of the vocabulary of cognitive thinking, though, still retains the assumption of separate, autonomous selves: sensations, perceptions, ideas, thoughts, minds, feelings, knowledge, imagination and so on are all experiences or capacities that individuals have, even if we explain them in social and historical terms. My suggestion is that we think of cognition, of what we do when we think, feel, remember, and so on, directly in linguistic terms, as operations and movements within language, in terms that always already imply shared intentionality. In this way we can grasp the essentially idiomatic character of human being.

Eric Gans’s studies of the elementary linguistic forms provide us with an approach to this problem. His most extended study of these forms, of course, is in The Origin of Language, but he has shorter yet sustained and highly suggestive discussions of the relations between the ostensive, the imperative and the declarative in The End of Culture, Science and Faith, Originary Thinking, and Signs of Paradox. In The End of Culture Gans uses the succession of linguistic forms to account for the emergence of mythological thinking and social hierarchy, in Science and Faith to account for the emergence and logic of monotheism, in Originary Thinking, among other things, to propose a more rigorous theory of speech acts, and in Signs of Paradox to account for metaphysics and the constitutive paradoxicality of advanced thought. It makes sense to take what are in these cases historical inquiries and make use of them to examine individual or, to make use of the Girardian term, “interdividual,” cognition, which is always bound up in anthropomorphizing our social configurations in terms of a center constituted out of our desires and resentments.

 

In The Origin of Language Gans shows how each new linguistic form maintains, or preserves, or conserves, the "linguistic presence" threatened by some limitation in the lower form. So, the emergence of the imperative is the making present of an object that an "inappropriate ostensive" has referred to. Bringing the object "redeems" the reference. The assumption here seems to be that the loss of linguistic presence is unthinkable—the most basic thing we do as language users is conserve linguistic presence. Another key concept put to use early on in The Origin of Language is the "lowering of the threshold of significance," which is to say the movement from one significant object in a world composed of insignificant ones to a granting of less and less significance to more and more objects. I think we could say that lowering the threshold of significance is the way we conserve linguistic presence: what threatens linguistic presence is the loss of a shared center that we could point to; by lowering the threshold of significance we place a newly identified object at that center. So, right away we can talk about "thinking" or "cognition" as the discipline of conserving linguistic presence by lowering the threshold of significance.

This raises the question of how we conserve linguistic presence by lowering the threshold of significance. If linguistic presence is continuous, then our relation to the originary scene is continuous—in a real sense, we are all, always, on the originary scene—it has never "closed." In that case, a crisis in linguistic presence marks some weakening of that continuity with the originary scene—the crisis is that we are in danger of being cut off from the scene. But in that case, continuity with the scene must entail the repetition of the scene or, more precisely, its iteration. As long as we are within linguistic presence we are iterating the original scene, in all of our uses of signs. Any crisis must then be a failure of iteration, equivalent to forgetting how to use language. The conservation of linguistic presence, then, is a remembering of the originary scene. Our thinking always oscillates between a forgetting and remembering of the originary scene. But this oscillation must itself be located on the originary scene, which then must be constituted by a dialectic of forgetting and remembering, or repeating and iterating. For my purposes, the difference between "repeat" and "iterate" is as follows: repeating maps the sign onto the center; iterating enacts the center-margin relation.

Now, let’s leap ahead to the linguistic form in which we do most of our thinking: the declarative. The declarative has its origins in the “negative ostensive,” the response to the “inappropriate imperative,” where the object cannot be provided, the imperative cannot be fulfilled, and linguistic presence is therefore threatened. But Gans is at pains to distinguish this “negation” from the logical negation that can come into being only with the declarative itself. He refers to the negation in the negative ostensive as the “operator of interdiction,” which he further suggests must be rooted in the first proto-interdiction, the renunciation of appetite on the originary scene. This remembering of the originary scene further passes through other forms of interdiction which entail “enforcement” through what Gans calls “normative awaiting”—he uses examples like the injunction to children not to talk to strangers. As opposed to normal imperatives, these interdictions can never be fulfilled once and for all. Now, even keeping in mind the limited resources available within an imperative culture, this is not an obvious way to relate the information that the demanded object is not available. The issuer of the interdiction is told not to do (something)+the object. Not to continue demanding, perhaps; not to do more than demand, i.e., not to escalate the situation. None of these alternatives, along with repeating the name of the object, seems to communicate anything about the object itself. But we can read the operator of interdiction as referring to the object—the object is being told not to present itself. But by whom? Clearly not the speaker. I think the initial declarative works because both possibilities are conveyed simultaneously—the “imperator” is ordered to cease pursuing his demand, and the object is ordered, ultimately by the center, to not be present, which in turn adds force to the interdiction directed back at the imperator, who donates his imperative power to the center. In essence, the declarative restores linguistic presence by telling someone that they must lower their threshold of significance because the object of their desire, as they have imagined it, has been rendered unavailable by, let’s say, “reality.” The lowered threshold brings to attention a center yet to be figured by actions, from a direction and at a time yet to be determined.

 

Now, the embedding of the declarative in the imperative order is not very important if, once we have the declarative, we have the declarative, i.e., a new linguistic form irreducible to the lower ones, in the way biology is irreducible to chemistry, and chemistry to physics. But biology is still constrained by chemistry, and chemistry by physics. So is the declarative constrained by the imperative order it transcends and, of course, the imperative by the ostensive. The economy of the dialectic of linguistic forms is conserved. Just as on the originary scene remembering the sign is a way of forgetting the scene, immersion in the declarative dimension of culture is a forgetting of the imperative and the ostensive. To operate, to think and communicate in declarative terms, is to imagine oneself liberated from imperatives. This gets formulated, via Kant, in imperative terms: to be a "declarative subject" is to treat others as ends, never as means, and to will that your own actions embody a universal law binding on everyone. We could call this an ethics of the declarative. This imperative remembers the origin of the declarative in a kind of imperative from the center to suspend imperatives amongst each other. We could say that logic itself recalls an imperative for the proper use of declaratives, one that allows no imperatives to be introduced, even implicitly, into the discourse at hand—but, of course, this is accomplished in overwhelmingly imperative terms, as all manner of otherwise perfectly legitimate uses of language must be subjected to interdiction. Even more, these imperative uses of the declarative include the imperative not to rest content with any particular formulation of that imperative: what, exactly, does it mean to treat another as an end or a means, and how can you tell whether another is really taking your action as a law—what counts as adjudication here? If you undertake to treat others only as ends in consequence of your devotion to the categorical imperative, aren't you treating them as a means to that end? The paradoxes of declarative culture and subjectivity derive from the ineradicability of the absolute imperative founding them.

The most decisive liberation of the declarative from the imperative can be seen in the cognitive ramifications of writing, as explained most rigorously, I think, by David Olson in his The World on Paper. Olson argues that it is the invention of writing, alphabetic writing in particular, that turns language into an object of inquiry: something we can break down into parts that we then rearticulate synthetically. These parts are first of all the sounds to be represented by letters, but just as much the words, or parts of sentences, that are identified through writing for the first time. The grammatical analysis of the sentence treats the sentence as a piece of information, makes it possible to construct the scene of speech as a multi-layered dissemination of information about that scene, and thereby provides a model for treating the entire world as a collection of bits of information, ultimately of an event of origin through speech. We could see this as a declarative cosmology. In that case the world can be viewed as a constant flow of information conveyed through everything that could be an object of an ostensive, that is, that could effect some shift of attention. This declarative metaphysics only comes to fruition in the computer age. We keep discovering that each piece of information is in fact just a piece of a larger piece of information that perhaps radically changes the meaning of the piece we have just assimilated. This is an intrinsic part of scientific inquiry, but it subverts more local and informal inquiries with a much lower tolerance for novelty because of a greater reliance on ostensive and imperative culture. Declarative culture promises us we will only have to obey one imperative: the imperative of reality. In that case, we should be able to bracket and contain potentially subversive inquiries into reality by constructing institutions that introduce new increments of deferral and upward gradations of discipline, and therefore social integrity, facilitating the assimilation of transformative knowledge. Olson himself, in his Psychological Theory and Educational Reform, seems to think along similar lines by pointing to the intrinsic connection between a literate population and large-scale bureaucracies, which is to say hierarchical orders predicated upon the ongoing translation of language into disciplinary metalanguages that simultaneously direct inquiry and impose discipline. However, if we take declarative culture to provide a mandate, an imperative, to extirpate all imperatives that cannot present themselves as the precipitate of a declarative, then those flows of information come equipped with incessantly revised imperatives issuing from no imperative and ostensive center, subjecting imperative traditions to constant assault from hidden and competing metaphysical centers.

There will always be imperatives that cannot be justified declaratively, because the lowering of the threshold of significance generates new regions of ostensivity, which generate imperatives in order to establish guardianship over those regions, in turn leading to requests for information, i.e., interrogatives, which themselves presuppose a cluster of demands that attention be directed in certain ways. In the long term most, maybe all, imperatives could be provided with a declaratively generated genealogy, but only if we for the most part obey them in the meantime. This constitutively imperative relation to a center could be called an "imperative exchange." I do what you, the center, the distillation of converging desires and shared renunciations, command, and you, the center, do what I request, that is, make reality minimally compliant. We must think in this way in most of our daily transactions—the alternative would be to be perpetually calculating, on the basis of extremely limited and uncertain data, the probabilities of the various possible consequences of this or that action. For the most part, we have to "trust the world," since we as yet have insufficiently advanced internal algorithms to operate coherently without doing so. The development of declarative, that is, literate, culture heightens this tension by establishing with increasing rigor both a comprehensive centralized, which is to say imperative, order and an interdiction on referring to that order too directly. The absolutized imperative founding the declarative order forbids us to speak, and therefore think, about it.

The revelation of the declarative sentence as the name of God, analyzed by Gans in Science and Faith, his study of the Mosaic revelation of the burning bush, cancels this imperative exchange, which leads one to place a figure at the disappointing center, and replaces it with the information that since God has given everything to you, you are to give everything to God, which is to say to the origin of and through speech. There is no more commensurability and therefore no more exchange. You are to embody the conversion of imperatives into declaratives through readiness to have those imperatives converge upon you. Imperative exchange is ancestor worship, and the absolute imperative embedded in I AM THAT I AM is to suspend ancestor worship and remember the originary scene—that is, remember that it is the participation of all in creating reciprocity that generated the sign, not the other way around. But imperative exchange cannot be eliminated—it is embedded in our habits, it is the form in which we remember the sign and forget the scene—if I do this, reality will supply that. Thinking begins with the failure of some imperative exchange—I did this, but reality didn't supply that, and why in the world should I have expected it to, since it's not subject to my commands or tied to me by any promise? The declarative sentence, then, is best understood as the conversion of a failed imperative exchange into a constraint—in thinking, you derive a rule from the failure of your obedience to some command to garner a commensurate response from reality. This rule ties some lowering of the threshold of significance to the maintenance of linguistic presence, as this relationship requires less substantial, or at least less immediate, cooperation from reality. We get from the command to the rule by way of the interrogative, the prolongation of the command into a request for the ostensive conditions of its fulfillment. The commands we prolong are themselves embedded in the declaratives, the discourses, we circulate through—raising a question about a claim is tantamount to identifying an unavowed imperative, some attempt at word magic, that the claim conveys. This is how we oscillate between the imperative and ostensive worlds in which we are immersed and the declarative order we extract from and use to remake those worlds. A good question prolongs the command directed at reality indefinitely, iterating it through a series of possible ostensive conditions of fulfillment, which can only be sustained by treating the declarative order as a source of clearer, more convertible commands.

June 2, 2017

(Im)morality and (In)equality

Filed under: GA — adam @ 9:49 am

I’d like to work with a few passages from Eric Gans’s latest Chronicle of Love & Resentment (#549) to address some critical questions regarding morality and equality in originary thinking. Needless to say, I share Gans’s “pessimism” regarding the future of Western liberal democracies while seeing (unlike Gans) such pessimism for liberal democracy as optimism for humanity.

What kind of state-level government is feasible in the Middle East?—and one could certainly include large areas of Africa in the question. The fact that we have no clear response suggests that the end of colonialism, however morally legitimate we may find it, did not resolve the difficulty to which colonization, both hypocritically and sincerely, had attempted to respond: how to integrate into the global economy of technologically advanced nation states those societies that remain at what we cannot avoid judging as a lower level of social organization.

So, the end of colonialism is morally legitimate, even though it has left vast swathes of the world increasingly ungovernable and made it impossible to integrate them into the global economy. What kind of morality is this, then—what does it consider more important than maintaining a livable social order? A note of doubt is introduced here, though: "we may find" this to be morally legitimate, but presumably we may not. There is some straining against the anti-colonialist morality here. The morality that we may or may not consider legitimate, I assume, is that of judging some forms of social organization as lower than others. But what makes refraining from this judgment moral? Colonialism involved governing others according to norms different from those according to which the home country was governed, but unless we assume that this governing was done in the interests of the colonizer and against the interests of the colonized, and could only be so, the moral problem is not clear. These assumptions therefore get introduced into discussions of the colonial relation, but since those assumptions are as arbitrary regarding this form of governance as any other, there's clearly something else going on.

There is no “racism” here; on the contrary, by assuming that all human beings have fundamentally the same abilities, and that we owe a certain prima facie respect to any social order that is not, like Nazism, altogether pathological, we cannot help but note that some societies are less able than others to integrate the scientific and technological advances of modernity. Thus health crises in Africa continue to be dealt with in what can only be called a “neocolonial” fashion, however unprofitable it may be for the former colonizers, who send doctors, medicine, medical equipment, and food aid to nations suffering from epidemics of Aids or Ebola, or starving from drought or crop failure—or rebuilding from earthquakes, as in Haiti.

The most moral gestures of the modern West are, it seems, its most colonial ones. And what could more disastrously interfere with this moral impulse than the assumption that "all human beings have fundamentally the same abilities"? That assumption forces you to look for dysfunctions on a sociological and historical level—one must conclude that it is colonialism itself that is responsible for the disasters of the undeveloped world. But if that is your assumption, you can only behave morally—i.e., actually treat other people as needing your help—by finding some roundabout way of claiming that that is not what you're doing. That's the best-case scenario—the worst case is that you keep attacking the "remnants" of colonialism itself, even if they are the most functional part of the social order. Morality and immorality seem to have switched places.

For if we have indeed entered the “digital” age, implying an inalterable premium for symbol manipulation and hence IQ-type intelligence, then the liberal-democratic faith in the originary equality of all is no longer compatible with economic reality. Hence the liberal political system, as seems to be increasingly the case today, cannot simply continue to correct the excesses of the market and provide a safety net for the less able. Increasingly the market system seems to have only two political alternatives. It can be openly subordinated to an authoritarian elite, and in the best cases, as in China, achieve generally positive economic results. Or else, as seems to be happening throughout the West, it is fated to erect ever more preposterous victimary myths to maintain the fiction of universal political equality, rendering itself all but impotent against the “post-colonial” forces of radical Islam.

If vast inequalities based in part upon natural differences in ability are incompatible with the liberal-democratic faith in the originary equality of all, then that faith was always a delusion. Some are arguing that the inequalities opening up now over the digital divide are the most massive ever, but who can really know? What are our criteria—are today's differences greater than those between medieval lords and serfs, or between 19th century industrialists and day laborers paid by piecework? There's no common measure, but every civilized society has highly significant inequalities, and today's is not qualitatively different in that regard. Perhaps there is now less hope that the inequalities can someday be overcome or lessened, but that hope is itself just a manifestation of liberal-democratic faith, so we are going in a circle. It would be more economical to see that loss of faith as an increase in clarity. But what does the increasing or more intractable inequality have to do with the diminishing legitimating function of the welfare state—is it that the rich no longer have enough money to support it, or that the less able are no longer willing to accept the bribe (or have figured out that the bribe will continue even if legitimacy is denied)? The choice between an authoritarian China-style solution and the preposterous victimary imaginary of the West seems clear, but why be downcast about it? If China is the "best case" so far, presumably there can be yet better cases. Obviously creating myths so as to maintain fictions is unsustainable—what next, legends to preserve the myths that maintain the fiction?—and it might be a relief to engage reality. (In fact, if the welfare state no longer serves a legitimating function, that may be because yet another—let's just call it a—lie has been exposed, that of endless upward mobility and generational status upgrades.) But does not the discarding of lies and fantasies and the apprehension of reality represent greater morality, rather than immorality?

Victimary thinking is an ugly and dangerous business, but the inhabitants of advanced economies in their “crowd-sourced” wisdom appear to have determined so far that it is the lesser evil compared to naked hierarchy. The “transnational elite” imposes its own de facto hierarchy, but masks it by victimary virtue-signaling, more or less keeping the peace, while at the same time in Europe and even here fostering a growing insecurity.

We have the "crowd-sourced" wisdom of the inhabitants, but then the "transnational elite" and its hierarchy make an immediate entrance. Has that elite not been placing its finger on the crowd-sourcing scale (so to speak)? Through which—through whose—sign exchange systems has the wisdom been crowd-sourced? So, let's translate: the transnational elite masks its hierarchy by imposing victimary virtue-signaling, but is now running into diminishing returns—the very method that has more or less kept the peace now generates insecurity. It remains only to add that the elites don't seem to have a Plan B, and appear determined to continue autistically doubling down on their masking and signaling.

But as the economy becomes ever more symbol-driven, these expedients are unlikely to remain sufficient. It would seem that unless science can find an effective way of increasing human intelligence across the board, with all the unpredictable results that would bring about (including no doubt ever higher levels of cybercrime), the liberal-democratic model will perforce follow the bellwether universities into an ever higher level of thought control, and ultimately of tyrannical victimocracy. At which point the “final conflict” will indeed be engaged, perhaps with nuclear weapons, between the self-flagellating victimary West and a backward but determined Third World animated by Islamic resentment…

Or not. Perhaps the exemplary conflict between Western-Judeo-Christian-modern-national-Israeli and Middle-Eastern-Islamic-traditional-tribal-Palestinian can be resolved, and global humanity brought slowly into harmony. Or perhaps the whole West will decline along with its periphery and our great-grandchildren will grow up peacefully speaking Chinese.

But is the China model exclusive to China? Can we not, in a moment of humility, study the China model, and the way it retrieves ancient Chinese traditions from the wreckage of communism? And, in a renewal of characteristic Western pride, adapt and improve upon the Chinese model? This would require a return to school regarding our own traditions, subjecting them to an unrestrained scrutiny that even their most stringent critics (Marx, Freud, Nietzsche, Heidegger, Derrida…) could never have imagined. But what's the point of a revolutionary and revelatory theory like GA if not to do exactly that? The first question to take up would have to be…

Human language was the originary source of human equality, and if our hypothesis is correct, it arose in contrast to the might-makes-right ethos of the animal pecking-order system. The irony would seem to be that the discovery of the vast new resources of human representation made possible in the digital age is in the process of reversing the residue of this originary utopia more definitively than all the tyrannies of the past. Indeed, we may now find in the transparent immorality of these tyrannies a model to envy, because it provided a fairly clear path to the "progress" that would one day overturn them. Whereas for the moment, no such "enlightened" path to the future can be seen.

That of the relation between morality and equality. This is the heart of the matter. Human equality is utopian, but then it couldn't be at the origin, because the origin couldn't be utopian. Morality has nothing, absolutely nothing, literally nothing, to do with equality. We should reverse the entire frame here and say there is no equality, except as designated for very specific circumstances using very specific measuring implements. It's an ontological question: deciding to call the capacity to speak with one another an instance of "equality" is to import liberal ontology into a mode of anthropological inquiry that must suspend liberal "faith" if it is to ask whether that faith is justified. We can then ask which description is better—people talking to each other as "equals," or people talking to each other as engaged in fine-tuning and testing the direction each wants to lead the other. Which description will provide more powerful insights into human interactions and social order? Determining that "equality" must be the starting assumption just leads you to ignore all features of the interaction that interfere with that assumption, which means it leads you to ignore everything that makes it an interaction—which, interestingly, in practice leads to all kinds of atrocities. What seems like equality is just an oscillation of hierarchies, within a broader hierarchy. In a conversation, the person speaking is for the moment in charge; in 30 seconds, the other person will be in charge. It would be silly to call this "inequality," even in its more permanent forms (like teacher and student), because it's simply devotion to the center—whoever can show the way to manifest this devotion points the way for others. And that's morality—showing others how to manifest devotion to the center. Nothing could more completely overturn the animal pecking order—a peasant can show a king how to manifest devotion to the center, but the king is still the king, because he shows lots of other people how to do it, in lots of situations well beyond the experience and capability of the peasant. Morality involves reciprocity, and reciprocity not only has nothing to do with equality but is positively undermined by equality. There can only be reciprocity within accepted roles. Most of us don't go around slaughtering our fellow citizens, but that's not reciprocity, because such acts are unlawful, these laws at least are seriously enforced, and, moreover, most of us don't want to do anything like that. When a worker performs his job competently and conscientiously, and the manager rewards the worker with steady pay increases, a promise of continued employment, and safe, clean working conditions—that's reciprocity. Friends can engage in reciprocity with each other without any explicit hierarchy, but here we're talking about a gift economy with all kinds of implicit hierarchies. I wouldn't deny all reciprocity to market exchanges (overwhelmingly between gigantic corporations and individuals), but this kind of reciprocity is minimal and, as we can see, hardly sufficient to stake a social order on. Language makes it possible for us all to participate in social order, but inclusive participation is also not equality, nor is recognition or acknowledgement. In other words, morality (recognition, acknowledgement, reciprocity), yes; equality, no. Forget equality.

What, exactly, made those old tyrannies immoral, or even "tyrannies," other than (tautologically) their failure to recognize equality? Their successes, and our capacity to shape those models in new ways, should not be disheartening. If there must be hierarchies and central power, then those things cannot be immoral, any more than hunger can be immoral. Morality enters into our engagement with these realities.
