GABlog Generative Anthropology in the Public Sphere

November 7, 2017

Felicity

Filed under: GA — adam @ 7:10 am

J.L. Austin, in originating the concept of “performative” speech acts, considered such acts to be “felicitous” or “infelicitous.” Performative speech acts effect some change in the world, rather than saying something “about” something, and therefore either “work” or don’t “work,” as opposed to being true or false. The canonical example is the words spoken in the marriage ceremony: “I do”; “I now pronounce you man and wife.” In this case, the groom and bride are not describing how they feel about each other, nor is the pastor describing their relationship—all three are participating in creating a new relationship between the two. Such speech acts are felicitous if carried out under the proper, ritual, ceremonial, sanctioned conditions: if I happen to hear, in a store, one customer say to a salesman, “I do” (when asked, say, if he would like to look at another pair of pants) and another customer say “I do” (“do you like that perfume?”), and I shout out “I now pronounce you man and wife,” nothing has happened, even if the two might appreciate my quick wit. The problem for speech act theory or philosophy has always been where and how to draw the line between performative speech acts and what Austin called “constative,” or referential, speech acts (which can be judged true or false). As is often the case, what seems to be a simple and intuitively obvious distinction gets bogged down in “boundary cases” the more closely we examine it. Even a scientific claim, with its proof replicated numerous times, requires its felicity conditions: a “sanctioned” laboratory, a scientific journal, an established discipline, etc. Genuine theoretical advances always come from cutting such Gordian knots by subordinating one concept to the other, with the subordinate concept (like Newtonian physical laws within Einsteinian physics) becoming a limiting case of the dominant one.
Within the disciplinary space created by the originary hypothesis, in which the first speech act was undeniably performative (creating humans, God, and a world of objects that could be referred to), the decision is an easy one: all uses of language are to be understood as performative, with the constative as the limiting case.

Seeing language as performative is easy in the case of the lower speech acts theorized by Gans in The Origin of Language; the ostensive and the imperative are, from any perspective, acts which do something in their saying: such acts only make sense if they work, i.e., change something in the world. The problem comes with the speech act traditionally defined in terms of truth conditions, the declarative. Declarative sentences are, first of all, true or false; reducibility to truth or falsity seems almost to be the definition of the declarative sentence. So, what do declaratives do? Well, for starters, they answer questions. As R.G. Collingwood pointed out, any sentence can answer, at a minimum, one of two questions: a question about the subject or a question about the predicate. If I say “John is home,” I can be answering a question about John’s whereabouts or about who is home. Introducing modifiers increases the number of (quite possibly mutually inclusive) questions that might be answered by the sentence: “John is safe at home” answers, along with at least one of the previously mentioned questions, a question about some danger presumably or imaginably faced by John. We might say that a good sentence is one that maximizes the questions it elicits and answers. And a good question would be answerable by a declarative sentence. Of course, what makes a question answerable, and which questions a sentence might be answering, depends upon the space, ultimately a disciplinary space of historical language users, within which the sentence is uttered, written and/or read; and sentences provide us with evidence, perhaps the best we can have, regarding the constitution of those spaces. Our sentences are informed by tacit, unasked questions.

But what are questions? The fact that any question can easily be re-written in the form of “tell me…” indicates the interrogative’s dependence upon the imperative. If you look at it from the other side, we can imagine the process of transition from imperative to interrogative: get that! Go ahead, get it! Come on, get it already! Get it, please! Will you get it? Could you get it? Will you let me know whether you might be willing to get it? If the shared focus is maintained, an unfulfilled (either refused or impossible) command turns into a request for the performed action or object, and finally a request for information regarding its possibility. Imperatives themselves, meanwhile, are an immensely complicated and varied batch—from plea and prayer on one side to command and directive on the other, with summons, requests, instructions and much else in between. I have focused, perhaps inordinately, upon the imperative, and intend to continue to do so, because very few people like to talk too much about it. The reason is obvious: imperatives are intrinsically asymmetrical, implying some difference in power or access, even if momentary—if I tell you to pass the salt because it’s at your end of the table, neither of us thereby has more power, but it is precisely that kind of relation—one person in possession of something others need—that makes a more structural imperative relation possible. Linguistically speaking, the liberal fantasy is for a world without imperatives: the mere statement of facts and description of realities would be sufficient to get us all doing what we should. But what is the dominant means of production in the contemporary world, the algorithm, if not a series of imperatives strung together declaratively (if A, then implement B; if C or D, implement E…)?
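The point about the algorithm as imperatives strung together declaratively can be made concrete with a minimal sketch. Every name here (the state keys, conditions, and commands) is invented purely for illustration: each rule is written as a declarative condition (“if A…”), but what it yields is a command (“…implement B”).

```python
# A minimal sketch of an algorithm as "imperatives strung together
# declaratively": each rule *states* a condition, but what it issues
# is a command. All rules, state keys, and command names are
# hypothetical, chosen only to illustrate the if-A-implement-B form.

def make_rules():
    # Each (condition, command) pair reads declaratively ("if A...")
    # while carrying an imperative ("...implement B").
    return [
        (lambda s: s["temperature"] > 30, "turn_on_fan"),
        (lambda s: s["temperature"] < 10, "turn_on_heater"),
        (lambda s: s["door_open"], "sound_alert"),
    ]

def run(state):
    """Return the commands the declaratively stated rules issue for a state."""
    return [command for condition, command in make_rules() if condition(state)]

commands = run({"temperature": 35, "door_open": True})
print(commands)  # -> ['turn_on_fan', 'sound_alert']
```

The conditionals never address anyone, yet executing them is nothing but the discharge of the imperatives they encode.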

And, finally, what is an imperative? It has its origins in an infelicitous ostensive—the ostensive involves shared pointing at something, for which the verbal equivalents are naming and exclamations (“What a beautiful day!” doesn’t make an empirical claim but rather assumes the listener will join in appreciation of the day). The infelicitous ostensive that leads to the imperative is naming—what happens if someone, out of ignorance, impatience, desire or naughtiness, names an object that’s not there? If it happens to be nearby, someone might just go and get it, and we have a new speech act. All these speech acts, then, from pointing to the most convoluted sentence, emerge from the Name-of-God directed at the object at the center of the originary scene. Now that we have brought the center into play, we can work our way back in the other direction. The imperative, according to Gans, would have been invented (or discovered—the line between the two is very thin here) on the margins—the (ritual) activity at the center among these earliest humans would not have allowed for such mistakes (or at least would not have allowed for them to be acknowledged). But it would quickly come to be applied to the center. The basic relation between humans and deities is a reciprocally imperative one: we pray to God and God issues commands to us. This is what I elsewhere called an “imperative exchange”: if we do what God says, we can expect our requests to Him to be honored. But the imperative exchange accounts for our immediate relation to the world more generally. In originary terms, the world consists of nameable objects—not everything in the world is named, but anything could be. Those names are all derivative of the center, the source of nameability itself.
We engage in imperative exchanges with all named objects, all objects that are “invested” linguistically: we accept commands from them that require us to “handle” them in specific ways, and in return they yield to our own demands that they nourish us, guide us, refrain from harming us, or otherwise aid us. We of course have little crises of faith all the time in this regard. One thing we do in response is firm up the world of things, make it more articulated, make the chain of commands issuing from it more hierarchical and regular. In other words, a technological understanding of the world is essentially the ordering of all the imperative exchanges in which we participate. A very powerful theory of technology in general, and of contemporary technological developments in particular, will follow from this.

Now, Gans provides for a complex derivation of the declarative from the failed (infelicitous) imperative, and I would like to preserve that complexity—this is no place for shortcuts. (In my reading, despite its natural relation to the imperative, the interrogative actually emerges after the more primitive declarative forms, filling in a gap between the imperative and declarative.) Someone in the community makes some demand or issues some command and you either refuse or (more likely) are unable to comply—the object is unavailable, the act cannot be performed. This must have happened often in the purely imperative community, but it must also have been resolved fairly quickly, because we have, of course, no record of any human community that stopped at the imperative. The problem is how to communicate, how to find the resources for communicating, the infelicity of the imperative. We have to imagine a kind of brief equilibrium—the “imperator” is not withdrawing his command, but is presumably not proceeding to act directly and violently on its “refusal”; the recipient of the command is presumably standing his ground, but also not eager to initiate violence; there’s some danger, therefore, enough to make some innovation necessary, but not enough to make it impossible—there’s a need to think and some space to do so.

In Gans’s construction of this (let’s say, proto-declarative) scene, the target of the imperative repeats the name of the object requested along with an “operator of interdiction.” The operator of interdiction is an imperative forbidding, in an open-ended way, some action: examples would be “don’t cross at the red light”; “don’t smoke”; “don’t eat fatty foods,” etc. This imperative seems closer than any other to the originary sign itself, which is essentially an interdiction on appropriating the central object. The operator of interdiction must have emerged when one member of the community needed to bring another member into a familiar form of shared attention or “linguistic presence” in which others were already participating—think of situations where it’s enough to say “don’t” for the other to understand what they shouldn’t do. It would subsequently have been used repeatedly in cooperative contexts, when impatience or imminent conflict threatened to undermine the group’s goal: a gesture meaning “don’t move” or “don’t make a sound” would be readily intelligible in situations where it was evident that that is precisely what someone was about to do. The interdiction is a slightly asymmetrical ostensive and a very gentle imperative. The linguistic form of the interdiction would have gradually been extended over longer periods of cooperation, where dense tacit understandings unite the participants, until the form became generally available.

Its meaning, though, juxtaposed to the repeated name of the object, in this novel context, seems multidirectional: what is the “imperator” being told to refrain from? Issuing the imperative itself? Proceeding from the infelicitous imperative to violent retaliation? Desiring the object altogether? The imperator will recognize an interdiction being imposed upon him, but why should he obey it? What makes it convincing? Only a realization of the absence of the object. The problem, though, is that it is on this scene that the means for communicating the absence of the object are created. If the operator of interdiction is also directed toward the object, though, that is, if the object itself is being commanded to “refrain” (from being present and available), then the two-pronged imperative can have the necessary effect. So, in this primitive declarative—the operator of interdiction is the first “predicate”—the imperator is told to cease and desist “because” the object has been ordered away. And the only possible source for the imperative issued to the object is the center itself, or God. But in that case, the interdiction issued by the speaker must have the same source, since it is intrinsically connected to that issued to the object. The declarative sentence, then, opens us up to imperatives from (to mangle Spinoza) “God, which is to say, reality.” Declarative sentences respond to or anticipate the failure of some imperative exchange by conveying a command from the center to lower or redirect our expectations, which involves redistributing our attention. Unlike the ostensive and the imperative, the declarative establishes a linguistic reality that does not depend upon the presence of any particular object or person in the world: it creates and sustains, in the face of the constant force of imperative realities, a model of the world that allows more of the world to be named. Declarative sentences utter the Name-of-God outside of any ritual context.
That is what declarative sentences do; that is their performative effect.

This language-centered discourse needs to be put to work, and that will be done. For starters, consider the following: why do you, or any of us, do what we do? We can always ascribe rational motives to ourselves by retrojecting a chain of reasoning for what we have done, but obviously there wasn’t a chain of reasoning that got you started on that chain of reasoning in the first place. Why were you interested in the thing you started thinking about, and interested in the way that started that particular line of thought? We can give psychological and even biological explanations, but there is ultimately a gap between some purported internal “mechanism” and language that can’t be bridged. No, you do what you do because you are obeying a command. Where in “reality” (material exigencies; tradition, or a long chain of commands) that command comes from, how it has been reshaped in the process of arriving at you, how you have to modify it in order to fulfill it, when its authority lapses and that of another imperative takes its place—these are all among the most interesting questions. But we are command-obeying beings.

A final, ethical conclusion. How are we to find felicity, that is, a general felicitousness of our speech acts? In the continual clarification of each of them in themselves and in their relations to each other. In the ostensive domain, we engage perpetually in the Confucian “rectification of names.” In the imperative domain, we clarify the commands we heed (and those we in turn transmit), trace them back to a larger chain of commands, and cleanse them of reactive, resentful, prideful counter-commands (the commands we heed themselves provide the resources for this). Our questions should be grounded in some imperative “blockage,” and made answerable (if not necessarily once and for all) by declaratives. And our declaratives should be decomposable into such questions while letting through higher, more central imperatives, commanding us to renounce stalled imperative exchanges and the resentment towards the center they generate.

November 21, 2017

Originary Grammar and Political Grammar

Filed under: GA — adam @ 6:34 am

The highest purpose of political discourse is to expose the political imaginaries of everyone participating on the scene. How do you solicit someone’s political imaginary? Very simple—ask them what they want, perhaps in commonsensical political terms (“I want universal healthcare”), but not necessarily. If you can determine what kind of sovereign would have to be in place for them to get what they want, you have constructed their political imaginary. The process is much like that new makeapp that subtracts the effect of makeup on a photographed face: everything existing that interferes with the political desire gets subtracted. “I want a world without racism.” OK, how would our raceapp approach this? We would have to identify all the “markers” of racism, about which we now have an enormous wealth of information thanks to the “it’s not ok to be white” movement: we can, in great detail, itemize the differences in wealth and power, the choices in mates, friends and even children, the intellectual proclivities (do you like math?), the gestures, the neighborhood you live in, and so on. So, we must imagine all that eliminated (which means we must imagine those who will perform, and those who will suffer, the elimination)—which further means we have to inquire into what other social relations support all of that, determine the various causal linkages tying the supports to the markers to be expunged, and then imagine a process by which those supports are re-engineered into supports for a world in which all of our abilities, our sexual desires, our sense of humor, our sense of beauty, our bank accounts, living arrangements, posture and much else are radically transformed—and all this as required in vastly differing ways for each individual, as we are all unique carriers of racism. What kind of sovereign are you then imagining?
One who commands a vast guerilla army of mindless, heartless human-resources drones following a rigid playbook that gets rewritten constantly, leaving their present efforts obsolete as they are expended, so that the next wave of race drones comes after them, until the prototypical racist/target is distilled from the continuous investigation. But we’re not done, not by a long shot—how does the never-again-racism sovereign incorporate the never-again-sexism, never-again-homophobia, etc., modes of sovereignty?

“I want a world without racism” is just a subroutine of the apparently more moderate “I want everyone to be treated equally.” Here, as well, one imagines a sovereign with knowledge of the infinite number of markers of “unequal” treatment, or, more precisely, a sovereign constantly engaged in collecting and punishing examples of “unequal” treatment, identified by a previous, so far rough, estimate of those markers of unequal treatment that most need to be addressed, leading to the constant accumulation of knowledge of more and more kinds and indicators of unequal treatment, many of them products of previous attempts to remedy some form of unequal treatment. A sovereign, in other words, which is the enemy of all the people it governs (in differing degrees, at different times). You would have to be constantly enraged, inhabiting such a vicious imaginary. This imaginary could be considered liberal, it could be considered statist or totalitarian, but, despite the seeming paradox (or because of it), it is best seen as anarchist: it presupposes a circulation of equal units prior to any authority, and the job it assigns to the state, to restore that original anarchy by slicing through layers of inegalitarian accretion, is enough to drive anyone mad. By contrast with the anarchist imaginary, the absolutist imaginary is a thing of simple, almost tautological beauty: all of our wants translate into a desire for a sovereign that is sovereign. We imagine a sovereign commanding subordinates to command their subordinates to fulfill the purpose of their institutions as he does with his own. Institutions have purposes we can discern because all human interactions serve some purpose, which is to say they serve the center that has constituted them. Our absolutistapp erases everything intervening between sovereign decision, its implementation, and the feedback required to ensure the next decision is similarly unobstructed. 
I think these are really the only two political imaginaries worth considering today—all others would resolve themselves back into one of these two.

The anarchist imaginary only makes sense as a form of resentment towards the absolutist imaginary. Historically, of course, this is the case: liberalism is a process of defectors from monarchy trying to find space within monarchy, to influence monarchy, to transform monarchy, and ultimately to destroy monarchy. The point of attack is always the command structure: no one in a position of command can ever give a completely satisfactory account of why it should be him giving the command, and why he gave this command. On the question, why him?, the only real answer is that I inherited, seized or was delegated this power, which really just sends the question back into an infinite regress. Regarding the “this,” an imperative is always irreducible to declarative explanation (even though, of course, such explanations can be given), since it depends upon circumstances and exigencies that could always be reconstructed after the fact in a way they couldn’t have been in making the decision itself. And even such after-the-fact reconstructions will send us back to inheritances and traditions that can never be fully excavated. The absolutist imaginary attributes a good faith faithfulness to the best of those traditions to the decision maker; the anarchist imaginary replaces this with a bad faith faithlessness.

The anarchist imaginary introduces declarative criteria into the selection of responsible agents and into the process of decision. It does this not to provide feedback to those making such decisions, but to establish a perpetual show trial of the imperative as such by demonstrating that it must always fall short of declarative criteria. Whatever names and attributes are given to the leader are translated into a series of predicates that can be subjected to inquiry one by one, according to criteria that could never be stated in advance because the declarative is itself first of all the interdiction on issuing some imperative, in this case the one issued by the sovereign. Is the king the “protector of his people”? But what counts as “protection,” and are his people really more protected under his rule than they might be under some other possible one? (A series of questions is always the wedge displacing the imperative and introducing declarative rule.) In what sense are they “his” people—how do they come into his possession? For that matter, are they even “a” people—what constitutes a people? Etc. The same goes for decisions actually made, which can always be compared with plausible alternatives with better outcomes which could never be conclusively dismissed. Such criticism after the fact can be very useful if undertaken from the standpoint of the actor, but that is not the purpose of the declarative coup, which seeks to discredit the structure of command and temporal chain of imperatives altogether. Any “given” can be further dissolved into presumably free agents that have somehow been welded together in a hierarchy. The free individual, conceptually, is the precipitate of the erosion of sovereign command—the most free individual is whoever can be posited as most resistant to the current sovereign command.

In its fully developed form, liberalism posits the agreement of solitary, ahistorical, self-interested individuals as the original basis or cause of social order; somehow, this original agreement was usurped, and history can then be read as a continual process of its recovery. This means reading history as a sequence of events in which explicit agreements between individuals subject to no command serve (or fail) to overthrow orders predicated on an inherited structure of command, i.e., imperatives derived from accepted names. The declarative condition is the explicit agreement that does not depend upon the individuals entering into it, because conformity with the agreement will be judged by those whom the agreement itself legitimates as judges, according to protocols that can be read out of, or into, the agreement. Why did you do______? Because I was authorized by an agreement arrived at through free deliberation by all concerned parties and publicly recorded. This declarative politics swallows its own tail, because its inheritors can always come along and play the same game as its initiators: what made the deliberations “free”? Who was counted as a “concerned party”? Some already existing authority must have made such determinations. And such agreements in practice must present themselves as pledges and promises, i.e., ostensives: you have to swear loyalty; you can’t just claim that your objective analysis of conditions accounts for the extreme likelihood that you will be loyal—because everyone knows that analysis will be conducted in order to justify your continued loyalty or defection. But that just means that what makes declarativity a powerful weapon against the imperative order keeps it a powerful weapon against the inevitable recrudescence of imperativity within the declarative order.

Absolutism defends the imperative order within the present declarative one, operating under the assumption that the imperative order, and the ostensive order (the network of names upon which it rests), can never be utterly eradicated. Everyone giving orders and everyone taking orders wants orders to be clear; everyone who begs, solicits, summons, requests, forbids, suggests, demands, or prays wants, if not necessarily for every one of these imperatives to be obeyed, at least for us to know whether they are or not, and to be certain we could tell. Everyone has an interest in clarifying their felicity conditions. (Such declarative defenses of the imperative order should be kept to a minimum.) We have seen the advantages declarativity has long exploited in subverting the imperative order, but the imperative order has its advantages as well. Not only can the declarative order never separate itself from its imperative substratum, but that imperative order is inscribed within the declarative itself. If we conclude a meeting and someone says, “good, then we’re all agreed,” it does not need to be stated explicitly that this agreement commands each participant to act their respective part in seeing it fulfilled. Separating imperative from declarative is as impossible as separating fact from value, and for the same reason: every declarative, even the most neutral-sounding description or explanation, commands some response. “It’s going to rain tomorrow”=“bring your umbrella.” “City x is located at __ degrees longitude and ___ degrees latitude”=“set your navigating instruments accordingly”; “remember to write that for your exam tomorrow.” So, in listening to any sentence, your question should always be: what is this sentence demanding of me?

In the first instance, it’s demanding that you reassess something it presumes you want. It’s interrupting some demand it takes you to be making upon reality. Which means it’s also disrupting the fabric of your imaginary, either to destroy it or to enable you to immunize it against some threat. (You can, of course, turn attempts at the former into instances of the latter.) It’s throwing a shadow of doubt on the conditions of some imperative exchange you are in the middle of—it’s encouraging you not to hold up your side of the exchange, not to obey the command directed your way, because the other side will break faith. It’s demanding that you look at, and look to, something you have neglected, or have been unaware of. As my examples above indicate, we can often restate in declarative terms the tacit, constitutive imperative of some declarative. But that ultimately entraps you within the declarative order. So, for example, arguing over who is the “real racist,” or “what racism really is,” is simply a way of surrendering in the war on imperativity. Even making a clear argument about how evil and ridiculous it is to desire a “world without racism” is feeble—the conditions of declarative felicity will always leave open the possibility of retrieving “hope” of such a world. The more all-encompassing approach is to strive to obey the imperatives, to perform the deferral the sentence implicitly demands of you. Acting as someone set out on the hunt by the declaration of the need to abolish racism short-circuits the declarative-imperative wiring far more effectively. Even the most hardened (or softened) SJW hasn’t really taken in what it would mean to take their tacit imperatives literally.

Situating yourself thus on the border between imperative and declarative is not just a way of counter-culturally subverting the Cathedral (although it is that, and I do think it provides excellent formulas for memeing). The practice I’m proposing serves a winnowing purpose. Seeking to obey all the imperatives coming our way is the only way of finding out which can really be obeyed, and obeyed without contradicting other imperatives that, taken alone, could also be obeyed. In other words, these are the means by which the imperative order can be recovered and restored. And while we extract imperatives to obey from the sentences/discourses surrounding us, we comment on them declaratively—the most powerful political discourse today would probably be a kind of traveler’s account of one’s attempts to obey the imperatives lodged in the most widely circulated declaratives. In that way, the desires instigated by those declaratives can be put on display and thereby deferred, the liberal political imaginary exposed and the absolutist imaginary summoned from its cracks and crevices.

June 15, 2017

Sacral Kingship and After: Preliminary Reflections

Filed under: GA — adam @ 4:49 pm

Sacral kingship is the political commonsense of humankind, according to historian Francis Oakley. In his Kingship: The Politics of Enchantment, and elsewhere, Oakley explores the virtual omnipresence (and great diversity) of sacral kingship, noting that the republican and democratic periods in ancient Greece and Rome, much less our own contemporary democracies, could reasonably be seen as anomalies. What makes kingship sacral is the investment in the king of the maintenance of global harmony—in other words, the king is responsible not only for peace in the community but peace between humans and the world—quite literally, the king is responsible for the growth of crops, the mildness of the weather, the fertility of livestock and game, and more generally maintaining harmony between the various levels of existence. Thinking in originary anthropological terms, we can recognize here the human appropriation of the sacred center, executed first of all by the Big Man but then institutionalized in ritual terms. The Big Man is like the founding genius or entrepreneur, while the sacred king is the inheritor of the Big Man’s labors, enabled and hedged in by myriad rules and expectations. The Big Man, we can assume, could still be replaced by a more effective Big Man, within the gift economy and tribal polity. Once the center has been humanly occupied, it must remain humanly occupied, while ongoing clarification regarding the mode of occupation would be determined by the needs of deferring new forms of potential violence.

One effect of the shift from the more informal Big Man mode of rule to sacral kingship would be the elimination of the constant struggle between prospective Big Men and their respective bands. But at least as important is the possibility of loading a far more burdensome ritual weight upon the individual occupying the center. And if the sacral king is the nodal point of the community’s hopes, he is equally the scapegoat of its resentments. Sacral kings are liable for the benefits they are supposed to bring, and the ritual slaughter of sacral kings is quite common, in some cases apparently ritually prescribed. It’s easy to imagine this being a common practice, since not only does the king, in fact, have no power over the weather, but a king elevated through ritual means will not necessarily carry out the normal duties of a ruler better than anyone else. Indeed, some societies separated out the ritual from the executive duties of kingship, delegating the latter to some commander, and thereby instituting an early form of division of power—but these seem to have been more complex and advanced social orders, capable of living with some tension between the fictions and realities of power (medieval to modern Japan is exemplary here).

It seems obvious that sacral kings, especially the more capable among them, must have considered ways of improving their position within this set of arrangements. The most obvious way of doing so would be to conquer enough territories, introduce enough differentiations into the social order, and establish enough of a bureaucracy to neutralize any hope on the part of rivals of replacing oneself. (No doubt, the “failures” of sacral kings to ensure fertility or a good rainy season were often framed and broadcast by such rivals, even if the necessity of carrying out such power struggles in the ritualistic language of the community would make it hard to discern their precise interplay at a distance.) Once this has been accomplished, we have a genuine “God Emperor” who can rule over vast territories and bequeath his rule to millennia of descendants. The Chinese, ancient Near Eastern and Egyptian monarchies fit this model, and the king is still sacred, still divine, still ensuring the happiness of marriages, the abundance of offspring, and so on. If it’s stable, unified government we want, it’s hard to argue with models that remained more or less intact in some cases for a couple of thousand years. Do we want to argue with it?

The arguments came first of all from the ancient Israelites, who revealed a God incompatible with the sacralization of a human ruler. The foundational story of the Israelites is, of course, that of a small, originally nomadic, then enslaved, people escaping from, and then inflicting a devastating defeat upon, the mightiest empire in the world. The exodus has nourished liberatory and egalitarian narratives ever since. Furthermore, even a cursory, untutored reading of the history of ancient Israel as recorded in the Hebrew Bible reveals the constant, ultimately unresolved tension regarding the nature and even legitimacy of kingship, whether for the Israelite polity itself or for those who took over the task of writing (revising? inventing?) its history. On the simplest level, if God is king, then no human can be put in that role; insofar as we are to have a human king, he must be no more than a mere functionary of God’s word (which itself is relayed more reliably by priests, judges and prophets). At the very least, the assumption that the king is subjected to some external measure that could justify his restraint or removal now seems to be a permanent part of the human condition. Even more, if the Israelite God is the God of all humankind, with the Israelites His chosen priests and witnesses, the history of that people takes on an unprecedented meaning. Under conditions of “normal” sacral kingship, the conquest and replacement of one king by another merely changes the occupant, not the nature, of the center. Strictly speaking, the entire history (or mythology) of the community pre-conquest is cancelled and can be, and probably usually is, forgotten—or, at least, aggressively translated into the terms of the new ritual and mythic order.
Not for the Israelites—their history is that of a kind of agon between the Israelites and, by extension, humanity, with God—the defeats and near obliteration of the Jews are manifestations of divine judgment, punishing the Jews for failing to keep faith with God’s law. Implicit in this historical logic is the assumption that a return to obedience to God’s will must issue in redemption, making the continued existence of this particular people especially vital to human history as a whole, but just as significantly providing a model for history as such.

At the same time, Judaic thought never really imagines a form of government other than kingship. As has often been noted, the very discourse used to describe God in the Scriptures, and to this day in Jewish prayer, is highly monarchical—God is king, the king of kings, the honor due to God is very explicitly modeled on the kind of honor due to kings and the kind of benefits to result from doing God’s will follow very closely those expected from the sacral king. The covenant between the Israelites and God (the language of which determines that used by the prophets in their vituperations against the sinning community) is very similar to covenants between kings and their people common in the ancient Near East. And, of course, throughout the history of the diaspora, Jewish hopes resided in the coming of the Messiah, very clearly a king, even descended from the House of David—so deeply rooted are these hopes that many Jews prior to the founding of the State of Israel, and a tenacious minority still today, refuse to admit its legitimacy because it fails to fit the Messianic model. All of this testifies to the truth of Oakley’s point—so powerful and intuitive is the political commonsense of humankind that even the most radical revolutions in understandings of the divine ultimately resolve themselves into a somewhat revised version of the original model. Of course, slight revisions can contain vast and unpredictable consequences.

So, why not simply reject this odd Jewish notion and stick with what works, an undiluted divine imperium? For one thing, we know that kings can’t control the weather. But how did we come to know this? If in the more local sacral kingships, the “failure” of the king would lead to the sacrificial killing of that king (on the assumption that some ritual infelicity on the part of the king must have caused the disaster), what happens once the God Emperor is beyond such ritual punishment? Something else, lots of other things, get sacrificed. The regime of human sacrifice maintained by the Aztec monarchs was just the most vivid and gruesome example of what was the case in all such kingdoms—human sacrifice on behalf of the king. One of Eric Gans’s most interesting discussions in his The End of Culture concerns the emergence of human sacrifice at a later, more civilized level of cultural development—it’s not the hunter and gatherer aboriginals who offer up their first born to the gods, but those in more highly differentiated and hierarchical social orders. If your god-ancestor is an antelope, you can offer up a portion of your antelope meal in tribute; if your god is a human king, you offer up your heir, or your slave, because that is what he has provided you with. This can take on many forms, including the conquest, enslavement and extermination of other people, in order to provide such tribute. What the Judaic revelation reveals is that such sacrifice is untenable. What accounts for this revelation? (It’s so hard for us to see this as a revelation because it is hard for us to imagine believing that the king, for example, provides for the orderly movements of heavenly bodies. But “we” believed then, just like “we” believe now, in everything conducive, as far as we can tell, which is to say as far as we are told by those we have no choice but to trust, to the deferral of communal violence.)
The more distant the sacred center, the more all these subjects’ symmetrical relation to the center outweighs their differences, and the more it becomes possible to imagine that anyone could be liable to be sacrificed. And if anyone could be liable to be sacrificed, anyone can put themselves forward as a sacrifice, or at least demonstrate a willingness to be sacrificed, if necessary. One might do this for the salvation of the community, but this more conscious self-sacrifice would involve some study of the “traits” and actions that make one a more likely sacrifice; i.e., one must become a little bit of a generative anthropologist. The Jewish notion of “chosenness” is really a notion of putting oneself forward as a sacrifice. And, of course, this notion is completed and universalized by the self-sacrifice of Jesus of Nazareth who, as Girard argued, discredited sacrifice by showing its roots in nothing more than mimetic contagion. (What Jesus revealed, according to Gans, is that anyone preaching the doctrine of universal reciprocity will generate the resentment of all, because all thereby stand accused of resentment.) No one can, any more, carry out human sacrifices in good faith; hence, there is no return to the order of sacral kingship—and, as a side effect, other modes of human and natural causality can be explored.

Oakley follows the tentative and ultimately unresolved attempts of Christianity to come to terms with this same problem—the incompatibility of a transcendent God with sacralized kingships. There is much to be discussed here, and much of the struggle between Papacy and the medieval European kings took ideological form in the arguments over the appropriateness of “worldly” kings exercising power that included sacerdotal power. But I’m going to leave this aside for now, in part because I still have a bit of Oakley to read, but also because I want to see what is involved in speaking about power in the terms I am laying out here. Here’s the problem: sacral kingship is the “political commonsense of humankind,” and indeed continues to inform our relation to even the most “secular” leaders, and yet is impossible; meanwhile, we haven’t come up with anything to replace it with—not even close. (One thing worth pointing out is that if, since the spread of Christianity, human beings have been embarked upon the task of constructing a credible replacement for sacral kingship, we can all be a lot more forgiving of our political enemies, present and past, because this must be the most difficult thing humans have ever had to do.)

Power, for originary thinking, ultimately lies in deferral and discipline, a view that I think is consistent with de Jouvenel’s attribution of power to “credit,” i.e., faith in someone’s proven ability to step into some “gap” where leadership is required. To take an example I’ve used before, in a group of hungry men, the one who can abstain from suddenly available food in order to remain dedicated to some urgent task would appear, and therefore be, extremely powerful in relation to his fellows. The more disciplined you are, the more you want such discipline displayed in the exercise of power, whether that exercise is yours or another’s. We can see, in sacral kingship, absolute credit being given to the king. Why does he deserve such credit? Well, who are you to ask the question—in doing so, don’t you give yourself a bit too much credit? As long as any failures in the social order can be repaired by more or better sacrifices, such credit can continue to flow, and if necessary be redirected. But if sacrifice is not the cure, it’s not clear what is. If the king puts himself forward as a self-sacrifice on behalf of the community in post-sacrificial terms, well, so can others—shaping yourself as a potential sacrifice, in your own practices and your relation to your community, is itself a capability, one that marks you as elite, i.e., powerful—especially if you inherit the other markers of potential rulership, such as property and bloodline (themselves markers of credit advanced by previous generations). Unsecure or divided power really points to an unresolved anthropological and historical dilemma. If the arguments about Church and Throne in the Middle Ages mask struggles for power, those struggles for power also advance a kind of difficult anthropological inquiry, upon which we are still engaged. There’s no reason to assume that the lord who put together an army to overthrow the king didn’t genuinely believe he was God’s “real” regent on earth.
It’s a good idea to figure out what good faith reasons he might have had for believing this.

Now, Renaissance and Reformation thinkers had what they thought would be a viable replacement for sacral kingship (one drawn from ancient philosophy): “Nature.” If we can understand the laws of nature, both physical and human nature, we can order society rightly. This would draw together the new sciences with a rational political order unindebted to “irrational” hierarchies and rituals. I want to suggest one thing about this attempt (which has reshaped social and political life so thoroughly that we can’t even see how deeply embedded “Nature” is in our thinking about everything): “Nature” is really an attempt to create a more indirect system of sacrifice. The possibility of talking about modern society as a system of sacrifice is by now a well-established tradition, referencing the modern genocides and wars along with far more mundane economic practices. Indeed, it’s very easy to see the valorization of “the market” as an indirect method of sacrifice: we know that if certain restrictions on trade, capital mobility, ownership, labor-capital relations, etc., are overturned, a certain amount of resources will be destroyed and a certain number of lives ruined. All in the name of “the Economy.” We know it will happen, and we can participate in the purging of the antiquated and inefficient, but no one is actually doing it—no one is responsible for singling out another to be sacrificed for the sake of the Economy. The indirectness is not just evasiveness, though—it does allow for the actual causes of social events to be examined and discussed. It’s just that they must be discussed in a framework that ensures that some power center will preside over the destruction of constituents of another. One could imagine justifying the “natural” sacrifices of a Darwinian social order if it served as a viable, post-Christian replacement of a no longer acceptable sacrificial order—except that it no longer seems to be working. 
We can think, for example, about Affirmative Action as a sacrificial policy: we place a certain number of less qualified members of “protected classes” into positions with the predictable result that a certain number of lives and certain amount of wealth will be lost, and we do this to appease the furies of racial hatred that have led to civil war in the past. But the fact that the policy is sacrificial, and not “rational,” is proven by the lack of any limits to the policy. No one can say when the policy will end, even hypothetically, nor can anyone say what forms of “inequality” or past “sins” it can’t be used to remedy. All this is to be determined by the anointed priests and priestesses of the victimary order. We can just as readily talk about Western immigration policies as an enormous sacrifice of “whiteness,” for the disappearance of which no one now feels they must hide their enthusiasm. The modern social sciences are for the most part elaborate justifications of indirect sacrifices.

So, the problem of absolutism is then a problem of establishing a post-sacrificial order. This may be very difficult but also rather simple. Absolutism privileges the more disciplined over the less disciplined, in every community, every profession, every human activity, every individual, including, of course, sovereignty itself. We can no longer see the king as the fount of spring showers, but we can see him as the fount of the discipline that makes us human and members of a particular order. We could say that such a disciplinary order has a lot in common with modern penology, with its shift in emphasis from purely punitive to rehabilitative measures; it may even sound somewhat “therapeutic.” But one difference is that we apply disciplinary terms to ourselves, not just the other—we’re all in training. Another difference is a greater affinity with a traditional view that sees indiscipline as a result of unrestrained desire—lust, envy, resentment, etc., rather than (as modern therapeutic approaches insist) the repression of those desires. (Strictly speaking, therapeutic approaches see discipline itself as the problem.) But we may have a lot to learn from Foucault here, and I take his growing appreciation of the various “technologies of the self” that he studied, moving a great distance from his initial seething resentment of the disciplinary order, as a grudging acknowledgment of that order’s civilizing nature. Absolutism might be thought of as a more precise panopticon: not every single subject needs to be in constant view, just those on an immediately inferior level of authority.
Discipline, in its preliminary forms, involves a kind of “self-sacrifice” (learning to forego certain desires), and a willingness to step into the breach when some kind of mimetically driven panic or paralysis is evident can also be described in self-sacrificial terms—in its more advanced forms, though, discipline means being able to found and adhere to disciplines, that is, constraint-based forms of shared practice and inquiry. Then, discipline becomes less self-sacrificial than generative of models for living—and, therefore, for ruling and being ruled.

March 27, 2016

Trumpism

Filed under: GA — adam @ 7:05 pm

Eric Gans, in his most recent Chronicle, made an argument for considering Donald Trump a “metaconservative,” concerned, albeit perhaps not explicitly, with restoring the structure of compromise and deal-making between left and right by converting the left’s struggle for justice back into a defense of group interests. Until such a structure is restored, formulating the most brilliant conservative policies in the most prestigious think tanks will be irrelevant because, as conservatives themselves may have forgotten, such policy proposals are themselves merely opening bids in the negotiation, a negotiation that by definition requires a good faith partner.

If this is indeed what Trump is doing, and through “embodiment” more than “articulation,” how, exactly, is he doing it? The flip side of deal-making is tit-for-tat responses to attacks by others—in both cases, a kind of reciprocity is established. And if we follow the logic of Trump’s behavior, he seems to treat tit-for-tat responses to insults and offenses as a principle of virtually religious sanctity. Much of what seems bizarre in Trump’s actions can be explained in this way—the recent dust-up over Trump’s and Cruz’s respective wives, completely ridiculous in any rational terms, makes perfect sense if Trump’s logic is, “if you attack my wife I’ll attack yours.” Of course, what counts as an “attack” on Trump’s wife by Cruz is rather subjective—in this case, a photo from Melania Trump’s modeling days was tweeted by an anti-Trump (not, as I understand it, pro-Cruz) PAC, with the suggestion that voting for Cruz would be the best way of avoiding the presumed scandal of such a first lady. Perhaps this hurt Trump in Utah, but probably not much anywhere else—on balance, an attractive wife might be a plus for a Presidential candidate, and Mrs. Trump comports herself with dignity. But all these are details—all that matters is that someone, according to some reasoning, wanted this to hurt Trump and help Cruz, so a response was necessary. What kind of response? Here as well, it seems the details get worked out on the fly—first, a threat to “spill the beans” about Mrs. Cruz and then a retweet of matching photos of the two wives, Melania at her sultry best and Heidi at her harried worst. (No beans have yet been spilt, to my knowledge.) How does this help Trump, who may already be fairly unpopular with normal women unlikely to appreciate being reminded of the disparity between them, after a day of work and chasing the kids around, and your average supermodel?
But that doesn’t seem to enter Trump’s calculations either—he struck back, however scattershottedly, and that’s an end to it until the next attack. If there is no direct counter-attack, all seems to be forgotten, which may explain Trump’s penchant for denying he said things that he said very famously and is, of course, caught on video saying. What he said were not declaratives to be judged according to their truth value but performatives to be judged according to their “felicity” at each occasion.

The broader, meta-conservative effect of this honor system is to suggest powerfully to supporters that Trump will defend the interests of those supporters the same way he defends his own interests, and will defend the United States in that way as well—if someone screws us, we screw them right back. And the notions of payback and deterrence have been thoroughly delegitimated under the Obama regime (even though that regime practices retaliation against its domestic enemies far more systematically than any other since Nixon’s), at least as an openly acknowledged principle of governmental and, indeed, human, behavior. What Obama’s supporters celebrate as “cerebral” and “non-reactive” is precisely an unwillingness to demand satisfaction from those who insult America, and therefore to give satisfaction to those who identify with America as an honor-seeking entity in the world. Indeed, victimary thinking is predicated upon the suspension of honor as a reciprocal principle, demanding honor for the designated victim but guilt and shame for the oppressor.

Tit-for-tat in private and business life is inherently limited, but in public life it’s hard to see where the limits are. Hundreds of claims are made about a political candidate, let alone an office holder, every day that might easily be taken as “insults.” But more specific, formidable, and dangerous opponents emerge, opponents whom it is necessary that one be seen engaging and defeating. That seems to be Trump’s method—make a “provocative” statement, i.e., one that many people will find offensive, and let a hierarchy of enemies emerge in the course of a general taking of the bait. Nor are the provocations random—they generally involve some national point of honor, some instance or relationship in which America has been insulted or exploited by another nation. The enemies he attracts, then, are those interested in de-escalating conflicts with other countries (but, also, with others within this country who gravitate toward a transnational economic, political, and/or cultural sphere of activity) but, paradoxically, are willing to be drawn into an escalating antagonism with Trump himself. If my analysis is right, we can expect a kind of stabilization of the Trump phenomenon (assuming his continued success) as those heavily dependent upon transnational progressivism or transnational corporatism and/or finance line up against him with ever more intense paroxysms of denunciation while those more flexible in their affiliations and commitments find ways of coming to terms—either Trump will be swept away by the opposition or we will, as Gans suggests, find ourselves in a new, more unpredictable era as responsible agencies (e.g., corporations and other states) come to the table, and conflicts become more explicit but maybe also more manageable and transparent.

But I doubt very much that this will be the case with the left. The American left has apparently decided that they are going to try to shut Trump down, as if he were a conservative speaker invited to a college campus—staging riots at his events with the explicit purpose of making it impossible to hold them. A smaller scale version of this practice—sending protestors to Trump rallies and having them disrupt the event—has led to the manifestation of Trumpism that has perhaps made some of his potential supporters most uneasy: the encouragement of physical violence, by both security and police, and by attendees at the rallies themselves, encouragement which has already yielded some more or less serious scuffling. This is bound to continue, as it’s hard to imagine Trump allowing such a provocation to go unanswered. And the left must, as Vox Day in his analysis of SJWs contends, continue to “double down,” and drag the official Democratic party along with it—already, to use Gans’s terms, Democrats treat Trump’s campaign as a blatant instance of injustice, rather than the representation of a legitimate, competing interest. It’s hard to see how they can do otherwise: can they really allow themselves to get into an argument with Trump about the proclivity of Mexican immigrants to rape and murder, or about how severely to restrict Muslim immigration? We will see a real crystallization of forces around the question of American sovereignty (tit-for-tat/deal-making on the national level), in all its dimensions. This is a showdown that Trump is initiating and propelling forward through a subjective dynamic all his own, but that Trumpism will continue, without him if necessary.

November 23, 2009

Political Marginalism, Originary Grammar, Cultural Generativity

Filed under: GA — adam @ 11:46 am

A marginalist politics begins with the observation that any situation can be reduced to a binary:  do a or b.  Even if there are, in principle, many choices, as soon as you inch closer to one the world splits into that or not-that—and you are always inching.  Self-reflection upon any situation reduces itself to such a binary—I am this, not that, here, not there, etc.  Similarly, the binary situation immediately confronted is the product of a long series of bifurcations—my choice is now a or b, because previous choices have eliminated c, d, e, and so on.  This binarism derives from the binary on the originary scene:  to continue reaching for the central object (to pursue the mimetic path of least resistance) or to imitate the newly formed sign and withhold one’s grasp.  Since the right choice was made on the scene, it is impossible for us not to think of ourselves as making the right choice now:  even if I egregiously violate the terms of the scene I am on I will reconstruct another scene upon which no such violation took place—yes, I cheated, but everyone cheats; or, my situation was different than others’; or that was wrong but it wasn’t really who I am, etc.  And if I fully confess my inexcusable violation, I can only do that because I am now on some other scene, whose terms I can represent my choice to confess (rather than further dissimulate) as confirming.  Indeed, I can reconstruct any scene, any time, on the spot, reconfiguring the binary choice, from say, cheating/not cheating to maintaining the harmony of the scene/disrupting the scene by letting my cheating be discovered.  But binary there will always be.

Each binary retrojects the series of bifurcations it has emerged out of—if I now determine that effecting change by peaceful means is impossible, I reference and construct a history in which violence has been rejected many times, and earlier choices in which violence didn’t even appear as one of the alternatives, and so the current choice is the distillation of that entire series of resentments (resentment is itself essentially binary—he shouldn’t be there, I should, or someone else more deserving, but first of all him or not-him).  Criteria for choosing one way or another are always embedded in the binary situation, but only become explicit after the fact, once the act has disclosed the scene I am on now.  I have “inched” before I realize I have done so.  Leading up to the event, the criteria are tacit—I will feel at a certain point that I can’t go on the old way anymore, but trying to explain why I now, all of a sudden, feel that way could only lead me to reference some other experience whose roots would be tacit—say, for the first time I noticed how demoralized my fellow citizens seem to be, but what changed among my fellow citizens or in my own attentiveness that led me to notice that?  There is some threshold that has been crossed—from beaten down but not hopeless to thoroughly demoralized—that I detect before I am able to explain how I detected it.  I could, of course, be wrong, in which case I didn’t “really” detect it—but realizing that I was wrong must also be an event articulated through a binary point wherein I located that threshold elsewhere, which in turn confirms the possibility of such a threshold, or the real threshold which was concealed behind the one I imagined and has now achieved such a threshold of presence as to be revealed to me.  And continuing in my wrongness will simply exemplify that threshold in my own failure to observe it.
There must always be such thresholds—for there to be a scene is for the scene to be capable of collapse into the desires and resentments it has deferred; and for it to contain the resources to transition into a new scene that extends the prevailing sign.  And, of course, noticing a threshold is part of my being on a scene as well—I am drawn along with others pointing to that threshold, or my identification of that threshold is part of my recoil from others, who seem to me unwilling to notice something, even something they and I know not. 

The politics that follows from marginalism is the creation of new binary “forks” out of any situation.  On the one hand, of course, any course of action produces new “forks” in the road all by itself; on the other hand, though, one can either continually narrow the area in which forkings become possible, or one can widen the area, increasing the visibility of the series of choices embedded in any event.  Even if one chooses violence, schism, or secession, for example, one can fight or sever ties in such a way as to preserve conditions for a possible peace and for others to register their own choices in ways that may lead more quickly to a cessation of violence or new associations.  The premium, in other words, is on practicing freedom in such a way as to invite others to do the same; to make the consequences of choices as visible as possible, because this is the best way of placing the full range of available resentments on display, and putting that full range on display is the best way of inviting everyone to propose ways of channeling those resentments in the interests of the center.

Now, we have two questions:  first, how to describe these bifurcations, or choices; second, how to describe the threshold in which we are suspended, infinitesimally, before each one?  My answer is with originary grammar.  The basic structure of the declarative sentence, the topic/comment relationship Gans works with in The Origin of Language, is the record of such a completed choice, or branching off:  the topic, deriving ultimately from a name, represents the object of a demand, or a proposed replacement for such an object, a demand that, through some possible series of concatenations (refusals and counter-demands), could lead to the unraveling of the signs constituting the community; the comment, meanwhile, places the topic beyond reach, at least for the present, embedding it in some reality that resists our imperatives.  So, a choice has been made to defer imperatives and a further choice has been made to defer imperatives in this particular way—as opposed to some other sentence which, presumably, would have been more likely to inflame rather than quell the upsurge in “demand” (perhaps by dangling the topic in front of some part of the audience, rather than removing it from the reach of all).  A discourse, then, is the articulation of a whole series of such choices and, of course, with political documents, especially founding ones, people will argue over every single sentence, every single word and punctuation mark.  
The grammar of the sentence, furthermore, iterates the “grammar” of the originary scene, where my choice to imitate the aborted gesture rather than the gesture itself is “predicated” upon everyone else doing the same—in that case, using grammatical terms to structure the scene for us, the one aborting his gesture gives us the “topic” and those who imitate him in turn are “commenting.”  Similarly, “understanding” a sentence means knowing how to restore or maintain a proper relation between declaratives and imperatives:  where and how to match the declarative with a symmetrical declarative, where and how to take the declarative as an occasion to reframe the imperative.  So, the relation between a sentence and succeeding sentences is itself one between “topic” and “comment.”

Complying, for now, with traditional grammar, we can reduce all sentence types to four:  the declarative, the interrogative, the imperative and the exclamatory.  The exclamatory is what I propose to represent the ostensive on the grammatical level, so the entire sequence from ostensive to declarative can be represented grammatically, and each sentence analyzed as some articulation of all types.  What a beautiful day!  How I love you!  These are the prototypical exclamations, and I think we could usefully annex to the exclamation on one side what would ordinarily be classified as interjections (oh my God!), and on the other side what might be classified as ostensive or deictic references (in declarative sentences)—there it is!  That’s it!  It’s a boy!  The exclamation calls the attention of the interlocutor to some present object and both embodies and proposes some attitude attached to attending to that object.  In that case, “thank you,” “I promise” and other “ostensive” (in the originary sense) expressions can join the category as well.

 

Each kind of sentence has a range of possible responses and extensions built in:  the declarative can lead to other declaratives, it can transition imperceptibly into imperatives (the door is still open… ok, I’ll get it), it can call forth questions and exclamations, and we could analyze any discourse in terms of which possibilities get actualized.  Imperatives get obeyed, more or less precisely, more or less sincerely, or they are refused, with greater or less power; imperatives transition into interrogatives, and we could trace any interrogative back to an imperative that has been prolonged, suspended, and converted into a more or less open field.  The grammar of the exclamation is to evoke a matching exclamation:  Yes!  And I you! So it is…  And, of course, one sentence type can easily stand in for others:  “you’re kidding!” is often an exclamation masked as a declarative, while “are you out of your mind?” is one masked as an interrogative—and in each case the masking is possible because the expressions are impossible if taken literally.  It also seems to me that the exclamation has a special relationship to the first person, the imperative and interrogative (more obviously) to the second person, and the declarative to the third person.  I won’t explore this now, but Eugen Rosenstock-Huessy has an analysis of the differences, which can best be called “grammatical,” between the statements “I love you,” “you love her,” and “she loves him” that transcend the fact that all happen to be, formally, declarative sentences.  In my terms, the disclosure “I love you” functions much more like an exclamation, calling for a matching or symmetrical response confirming the shared reality; “you love her” is as impertinent and intrusive as any unauthorized imperative, and translates easily into “admit it, already”; while “she loves him” is the only properly declarative of the three, with its topic’s presumed distance from either of the interlocutors. 

 

I plan to return to this extremely rich field of speculation, of course, but my point here is that thresholds and bifurcations in the social world can best be registered grammatically.  A while back, after I mentioned to a friend of mine (with whom, for reasons that will become evident, I rarely stray into political discussions) my admiration for Frederick Kagan (the main intellectual architect of the so-called “Surge” in Iraq in 2007), he responded in the following manner:  “if you think it’s ok to send kids to war while you stay safe.”  Now, the argument here, such as it is, doesn’t interest me much—it’s the standard “chicken-hawk” accusation (although, incidentally, the infelicity of so many of the Left’s insults—from “chicken hawk,” which is of course an actual bird that eats chickens, not a chicken that pretends to be a hawk; to the idiotic title of Michael Moore’s “Fahrenheit 9/11”—“9/11,” needless to say, can’t be a temperature, and the reference to a book on book burning is oddly connected to a political crisis which had little to do with censorship, etc.; to the current slur, “teabagger,” which “plays” with remarkable clumsiness on “tea party,” while for indiscernible reasons associating those protestors with practitioners of an obscure sexual (usually homosexual, I have heard) practice—would be a fascinating subject to study:  in other words, what does it say that the Left can’t really work with language, that it seems to rely on a deeply embedded system of allusions that couldn’t really be articulated explicitly if it tried?).  What do we make of the grammar, though, which I take to be very typical?
On the one hand, it could be a subordinate clause, with the main clause (“I could see admiring him…”) elided, but that doesn’t really work, since no one actually contemplating such admiration would phrase its precondition in this way; you could say that the subordinate clause comments ironically on the main one, but the accusation is too thick, leaden and literalistic to qualify as “ironic.”  The expression strikes me, rather, as an exclamation, but one that can’t present itself as one to an interlocutor who won’t “match” it (to a fellow leftist it would be easy enough to just say something like “sending more kids off to get killed!” at the mention of Kagan’s name).  Which is to say that it’s a founding exclamation that can’t really take on public, “declarative” form.  Nor can it lead to any imperative:  “what a beautiful day!” leads naturally into “let’s go out and enjoy it!” or “get out and play!”; “sending kids off to die!” can only lead to an imperative like “let’s stop it!,” but to whom is that imperative addressed, outside of a quasi-ritualized sphere in which it is associated with constant affirmations, dedications, oaths, etc., to “do something”?  So, in the masking and grammatical isolation of this particular phrase, its self-cornering, we can identify the shape and position of a corresponding configuration of resentments.  Which is not to say (obviously!) that such resentments, expressed in such mangled grammatical forms, can’t be highly successful politically—that too would be subject to grammatical analysis.  And so would, or could, any counter-analysis to my own.  I think such an approach is much more promising than “logical,” “rhetorical,” or “ideological” modes of analysis.

 

So, at the point of any bifurcation stands an exclamation, expressing a revelation of some new reality and its attendant possibilities; then comes the imperative, determining which path to take; followed by the inflection of the imperative into interrogatives, probing the various by-ways of the path; and by the time the declarative comes along, the choice has already been made and the speaker is in the process of inscribing that choice in reality.  Of course, how the choice gets inscribed in reality is extremely important—indeed, it is an intrinsic element of reality itself and lays the groundwork for upcoming bifurcations.  I would even say that the declarative sentence essentially articulates a series of exclamations and imperatives, presents them after the event of their interference in reality, and thereby packages, preserves and re-circulates what would otherwise have been lost in the event itself.  When we argue about a text, we are arguing about what it is asking us to wonder at and what it is telling us to do.

 

“White guilt is the guilt of the unmarked toward the marked.”  I confess that there is a lot in this definition of Gans’s that I haven’t sufficiently attended to in my own thinking on White Guilt—in particular, the notion of being either marked or unmarked, and the relation between the two.  To be marked is to be identified as a potential victim, as someone who could be violated with impunity or whose violation may even be the subject of an imperative.  In principle, one could be marked either from “above” or from “below”—indeed, if scapegoating originally targeted the “Big Man,” then marking was originally a source of privileges as well as victimage, presumably in some equilibrium. How, then, did victimage become exclusively associated with the “lower orders,” even though we still scapegoat our Big Men and Women (celebrities, political and business leaders) all the time?  I think the answer lies in the way we have managed to defer scapegoating, and make it less deadly when it occurs, in the modern world.  Rather than ritual rules for marking scapegoats, we have devised juridical, administrative and medical procedures for determining who is to be marked.  On the one hand, then, the “higher” orders are far better able to avail themselves of these processes of deferral, which in turn tend to add stigma to the lower orders, who are likely to look “guilty,” “sick,” or “unauthorized” in all kinds of ways.  On the other hand, these procedures make the powerful more predictable and therefore less frightening (indeed, rhetorical attacks on the powerful are celebrated, without necessarily having much effect), while the powerless or excluded, attended to anxiously in all kinds of ways by our institutions, appear even more mysterious and potentially disruptive.

 

The scapegoating of the powerless, then, was a result of the modern attempt to unmark everyone—an attempt which paradoxically made the resulting marks all the more indelible.  It’s probably a lot harder to resist being marked with “a genetic and environmental propensity to criminal behavior” than the charge of poisoning wells.  At least one could disprove the latter—who, though, could so remake the “science” involved in the former as to invalidate the label?  The guilt towards the marked thus reflects the realization that any of us could be marked, and that this modern form of deferral could engulf modern society in more hideous forms of violence than we have known.  The form taken by this guilt is, interestingly, not to continue the thankless and hopeless task of a general unmarking (perhaps we should use the term “bleaching” to describe the goal of a “color-blind” society); rather, it is to seek to establish an orthodox, ritualized system of marking, in which markers of exclusion are both tabooed and assiduously collected, and in turn reversed into markers of privilege—the easily parodied and inevitably rough attempts to arrive at a hierarchy of victimage are the result.
The consequent scapegoating of the gift of firstness which, in a sense, restores the old scapegoating of the powerful to its originary position, reflects the realization that the capacity for freedom, for starting over, continually threatens to undo what has become a system of insurance (chock-full of mandates, naturally), of reciprocal indemnification from risk:  we have almost, in the minds of those self-appointed to construct the rituals of White Guilt, arrived at a new social contract everyone could sign onto (the unmarked are ready to follow the new rules of marking and the marked are willing to accept the payment of victimary blackmail in exchange for a relief from their infinite demands), and only the permanence of the capacity for freedom and responsibility threatens to undermine all that labor.

 

The only solution is to mark everyone, over and over again.  Not by some kind of essential characteristic (race, class, gender, sexual orientation, religion, etc.), but by their idioms.  Jean-Francois Lyotard’s definition of the differend as a claim or contention expressed in the idiom of only one of the interlocutors is of great value to us today.  Lyotard yoked this notion to victimary imperatives, but he also knew that it exceeded such easily formulated asymmetries.  Idioms are what resist translation—they require that you enter the grammar of another, the characteristic way in which they articulate exclamations, imperatives, interrogatives and declaratives.  But the mistranslations of idioms are just as interesting, and increasingly common in a world made up of niche markets that overlap one another in thousands of ways.  “You hit that one out of the park!” is perfectly intelligible to anyone familiar with baseball; I can barely imagine what it would sound like to someone who isn’t, or how it would get iterated further and further away from its point of origin.  Idiomatic marking would both enter others’ idioms and mistranslate, or inflect, or, simply, mistake them—make explicit the imperative implicit in someone’s declarative (by obeying or disobeying it), supply the exclamation missing in someone’s imperative, or the line of questioning that might have led from the embrace of an imperative to its declarative, doxic, forms (and do so by exclaiming, by questioning), render a demand in the declarative form of its fulfillment, etc.  Everybody is vulnerable in this way, but not too vulnerable, and in ways that are not easily predictable or controlled; idiomatic marking would also allow for new forms of generosity, as idioms can just as easily be interpreted “up” as “down.”

 

The problem with this, as with other radical proposals, is:  who wants to go first?  On the one hand, what I am describing already happens all the time—it’s a large part of the way in which friends and family relate to each other:  teasing one another about each one’s idiosyncrasies, but in such a way as to make those idiosyncrasies a source of love as well as resentment.  But it rarely happens outside of such safe spaces and, indeed, would have to take on very different forms in public life.  It seems to me that the rise of the “Tea Party” movement and Sarah Palin will give us a chance to see what that might look like—a commentary I recently read (one hostile to Palin’s influence with the Republican Party) said (I’m quoting from memory) that the Republicans “need someone familiar with all the B.S. of politics, which Palin speaks like a tourist carrying around a phrase book.”  This gets both sides of the equation right:  contemporary political discourse is all “B.S.”—does anyone really believe that phrases like “he’s going to move to the center, pick up some moderates, and then shore up his base in time for the next election” mean anything anymore?  And Palin does, indeed, try to speak it, with an intensified sincerity that exposes it as a patchwork of empty phrases, while at the same time generating the elements of a new idiom.  And as much as anything else, Obama’s unspeakably boring (except, I imagine, to listeners of NPR) fluency in a particular set of “progressive” commonplaces is likely to sink his Presidency.

 

There is a space here for some rigor as well, though.  For those so interested, I would suggest the methods of the Oulipo literary group, the possible applications of which to public life have been so far unexplored (to my knowledge)—although there is the amusing homophonic bumper sticker, “Visualize Whirled Peas,” and perhaps others I’m forgetting.  I would love to see the results of the application of the N+7 method to one of Obama’s speeches—maybe I’ll do it myself.  Harry Mathews, the only American member of the group, has invented what he calls “perverbs”—statements created by attaching the second part of a proverb or maxim to the first part of another one.  So, for example, from the hybridization of “Too many cooks spoil the broth” and “Let the dead bury their dead,” we get “Too many cooks bury their dead.”  Mathews then writes a little story that makes sense of the new phrase, which leads to some hilarious results (how could we get from there being too many cooks to those cooks burying someone’s—the cook’s own?—dead, etc.?) but also suggests an excellent way to puncture and disable clichés and, in the process, transform them into the material for new idioms.  The Oulipo methods elevate form and rules over substance and thereby make it easy to see how much of “substance” is simply sedimented forms and rules.
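For anyone who wants to experiment with perverbs mechanically, here is a minimal Python sketch; the little proverb list and the crude midpoint split are my own assumptions (Mathews, of course, split his proverbs by ear, not by word count):

```python
# A sketch of Harry Mathews's "perverb" construction: split each proverb
# at a pivot and attach the head of one to the tail of another.
PROVERBS = [
    "Too many cooks spoil the broth",
    "Let the dead bury their dead",
    "A rolling stone gathers no moss",
    "The early bird catches the worm",
]

def split(proverb):
    """Split a proverb into a (head, tail) pair at its midpoint."""
    words = proverb.split()
    mid = len(words) // 2
    return " ".join(words[:mid]), " ".join(words[mid:])

def perverb(head_source, tail_source):
    """Combine the head of one proverb with the tail of another."""
    head, _ = split(head_source)
    _, tail = split(tail_source)
    return f"{head} {tail}"

print(perverb(PROVERBS[0], PROVERBS[1]))
# → "Too many cooks bury their dead"
```

Running every ordered pair through `perverb` would yield the full stock of hybrids from which Mathews then spins his stories.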

 

Just for fun, let’s try something with this little snippet of President Obama’s speech to Congress on health care, given in September:

 

Well, the time for bickering is over. The time for games has passed. (Applause.) Now is the season for action. Now is when we must bring the best ideas of both parties together, and show the American people that we can still do what we were sent here to do. Now is the time to deliver on health care. Now is the time to deliver on health care.

 

I propose that we borrow another of Mathews’s ideas, his “Algorithm,” in which (I’m simplifying enormously) a particular word or phrase in each sentence is moved down to replace the word in that position in the next sentence.  In these remarks of Obama, the key word or phrase in each sentence seems to me to be the objects of auxiliary verbs and prepositions:  “bickering,” “games,” “action,” “bring,” “do,” and “deliver”—that’s where the real political distinctions are made.  So, let’s give it a try, making the necessary adjustments for grammatical correctness:

 

Well, the time for delivering is over.  The time for bickering has passed.  Now is the season for games.  Now is when we must act the best ideas of both parties and show the American people that we can still bring what we were sent here to bring.  Now is the time to do health care.  Now is the time to do health care. 

 

I will just say that this idiomatic marking seems to me truer than the original:  the time for delivering is certainly over; leaving the “bickering” sandwiched between the first and third sentences brings out better what is menacing in that assertion; is it ever the season for games!; “acting” the best ideas is certainly as close as they are coming to any ideas; what, indeed, have they been sent to “bring,” and to whom? (and by now there are plenty of new idiomatic, in particular taunting and boasting, uses of “bring,” like “bring your best game”); and who can deny they are “doing health care,” with all the rich idiomatic implications, often threatening, of “do”?
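The simplified “Algorithm” applied above can itself be sketched in a few lines of Python. The key words and the slotted sentences come from the excerpt; the numbered-placeholder templating is my own device, and the grammatical smoothing done by hand above (“delivering” for “deliver,” “act” for “action”) is not automated here:

```python
# Simplified Mathews "Algorithm": the key word of each sentence moves down
# to fill the key-word slot of the next sentence, and the last wraps
# around to the first.
keys = ["bickering", "games", "action", "bring", "do", "deliver"]

# Each {n} marks where key word n stood in the original snippet.
templates = [
    "Well, the time for {0} is over.",
    "The time for {1} has passed.",
    "Now is the season for {2}.",
    "Now is when we must {3} the best ideas of both parties together, "
    "and show the American people that we can still {4} what we were "
    "sent here to {4}.",
    "Now is the time to {5} on health care.",
]

rotated = keys[-1:] + keys[:-1]  # rotate the key words down one slot

text = " ".join(templates).format(*rotated)
print(text)
# Begins: "Well, the time for deliver is over. The time for bickering
# has passed. Now is the season for games. ..."
```

A reader could substitute any speech and any choice of key words; the political work, as noted above, lies entirely in deciding which words carry the “real political distinctions.”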

 

Idiomatic markings are perfect for a de-centralized popular culture, and for an intelligent one.  A lot of blows will be struck, but very few of them deadly—Obama will survive even much more artfully done and politically biting algorithmic permutations of his discourse than the one I have produced.  But some of these permutations will turn out to be very memorable, even if we could never predict which ones in advance.  And what we might come to share, what might be a “game-changer,” what might “transcend partisanship” (or “game partisanship” and “transcend change”) is our participation in remaking and rejuvenating our common linguistic material.
