GABlog Generative Anthropology in the Public Sphere

October 23, 2010

Self-Evidency

Filed under: GA — adam @ 7:05 pm

When we speak about the "arbitrariness" of the sign, someone usually hastens to add that what is meant by that is, of course, its conventionality. "Arbitrary" is the right word, though, for what is assumed: that the sounds we make in speaking the languages we speak could just as easily be any other sounds, with the evidence of this being the obvious fact that words for the same things are different across languages, not to mention the enormous differences in grammar. The more you think about it, though, the more problematic the claim is—how in the world could we imagine everyone in a community agreeing to confer meaning upon a particular sound that in itself has nothing to do with the meaning it bears? The political implications of "arbitrariness," which we rightly associate with tyranny, are therefore relevant here: if the sign is indeed arbitrary, it could only be because it was imposed upon everyone by some oppressor. In this assumption about the sign, then, we can see the trajectory from Lockean social contract theory (Locke was a firm believer in the arbitrariness of the sign) to the contemporary Left—the arbitrariness of the sign, starting with medieval nominalism, and, indeed, social contract theory itself, were weapons against the assumptions about natural social order and natural law constitutive of Western Christendom. The arbitrariness of the sign is liberalism in linguistics and, in the end, liberalism (in the classical sense) has shown itself to share enough genetic material with the Leftism that succeeded it so as to leave it almost devoid of antibodies to fight the Leftist infection.

There is another liberalism, another Enlightenment, and another way of thinking about social agreement, though, which has been severely marginalized by the line leading from Locke through Hume and then Kant and Hegel (even the individualism of John Stuart Mill is ultimately derived from the German romanticism he imbibed through Coleridge). This other liberalism starts from the common sense philosophy of Thomas Reid, and can be followed through the American pragmatism of, at least, Charles Sanders Peirce, and is then strongly represented in the 20th century (in very different ways) by Hannah Arendt, Friedrich Hayek, Ludwig Wittgenstein and Michael Polanyi. The basic assumption shared by all of these thinkers is that we know far more than we know we know; that is, our knowledge is to a great extent, to use Polanyi's term, tacit—and not merely because we haven't yet brought it up to consciousness, but constitutively so. As the novelist Ronald Sukenick once wrote, "the more we know the less we know"—not only because knowledge continually opens up new vistas of possible knowledge, but more importantly because the ways we know what we know cannot be made part of the knowledge we make present to ourselves. Any "language game," disciplinary space, or idiom takes a great deal for granted in addressing itself to a particular, emergent corner of reality; if it tries to bring that taken-for-granted bedrock into sight (and we do this all the time) it can only do so in terms of everything else that is still taken for granted, including some new things that enabled us to turn toward this new corner. Do you know for sure that you are at this moment present on the planet earth, that you are surrounded by buildings, streets, other people, etc.? "Know" is a very strange word to use here, which is not to say that we can't really be sure—rather, what would be taken as "proof" that we are on the planet earth, surrounded by all those things? What would be better proof of this reality than the reality itself, as Wittgenstein liked to say? The question of how we know we are here, that we are ourselves, that we have bodies, that our senses integrate us into our surroundings, etc., is a very artificial one, but it's that kind of question that modernity (and the dominant strand of liberalism) started with—most explicitly in Descartes, but Locke's empiricism is ultimately no less corrosive of such self-evidency, as Hume revealed and Reid so forcefully demonstrated.

I have been writing much lately of mistakenness as constitutive of our linguistic and therefore social being, but it is equally true that there can be no mistakenness without certainty. I can only be mistaken in my articulation of an English sentence because I am certain that I am speaking English, not Chinese. If I'm completely out of place or out of line, it can only be because there is indubitably a place or line to be out of. Mistakes disrupt a scene because there is a scene to be disrupted, and we are certain that it has been disrupted; and while we can't be certain that it will be restored, we can be certain about scenicity, without which there would be no mistakes. My argument has been that rather than evidence of the fragility of our worlds, mistakenness can be treated as evidence of their solidity. Assuming the arbitrariness of the sign intensifies the sense of fragility—if our common use of signs has just been imposed through some kind of force, human or natural, and, therefore, must continually be re-imposed, then of course deviation is dangerous. (For leftists, meanwhile, the consequence is that the arbitrariness of the sign encourages one to see reality as "constructed," and hence infinitely malleable, in particular by those best at managing signs.) If signs, though, have an irreducibly iconic dimension, an iconic dimension that pervades every level of language, including semantics and grammar, then we just need to uncover the iconic meaning of a given mistake so as to bring it back within a reformed linguistic fold.

Isaiah Berlin, in his study of the determinist theories of history that undergirded socialist and communist politics in particular, made the point that you simply can't remove the terms referring to human intentionality and, therefore, responsibility, from social and political discourse without making it impossible to refer to anything intelligibly at all. "He killed them" can't be the same kind of statement as, nor can it be assimilated to, "E=mc²" or "historical development is determined by the forces of production." It's not just that such ways of talking are immoral or unjust; rather, it's that they are not really "ways of talking" at all, and therefore can't sustain themselves without inventing all kinds of crazy agents (like "history" and "society") which perform "actions" which no one has ever seen or would recognize if they saw them. As originary thinkers, we can now say that this is because declarative, propositional meaning is rooted in the ostensive and imperative domains. We notice mistakes, in fact, because we can notice that our attention has been misdirected, which in turn reminds us that our attention is always being directed by everything we experience in reality.

The iconicity of meaning can be traced back to the gesture. The originary sign had to be a gesture—it couldn't be imagined as a sound, or a line drawn on the ground. Gesture is embedded in what we call "context," that catch-all phrase we use when we reach the limits of our capacity to describe why something means how and when it does. A joke's funniness might depend upon one of the listeners being where he is, and not a couple of steps to the left—that's the kind of "contextual" effect we sum up with the phrase "you had to be there." Gestures are also self-evident, in the sense that unlike propositional discourse, they cannot be replaced by their definition or explanation because they require the entirety of their "context." The self-evidency of gestures also means that any normal human, initiated into any linguistic system whatsoever, would be able to make sense, on a gestural level, of the actions of any other human, from any other linguistic system—at least insofar as the gestures of the other are directed toward her. On the most basic level, even though the meaning of gestures of course varies widely across cultures, we could recognize signs of aggression or good will directed towards ourselves, even if those signs could also be used to deceive us.

Self-evidency, though, provides no support for Enlightenment optimism regarding universal communicability and amity. Indeed, self-evidency is also what radically divides us. The members of another culture who deceive me by exploiting my awareness of the meaning of their signs of peace are able to do so because within their own gestural system, inaccessible to me, they can signify that this naïve hick is ripe for the plucking. That is, to act in concert against me they need no dark conspiracy, no secret agreement—they know each other, and they know when one of them is welcoming an other with an excessiveness that communicates irony to them but not to me; they know how to follow each other’s lead in ways that I won’t figure out until it is too late, they know that anyone who might object to their scheme is far away at the moment, and so on. Of course, once they are through with me, I will be able to understand what they have done, if I am still around to do so. All self-evidency “proves” is that any attempt to impose a common idiom will generate idiomatic sub-systems resistant to control, understanding, or even detection.

What we can do is enhance and elaborate upon overlapping idioms and habits so as to create broader spaces of attenuated self-evidency—the fact that we can do so is what makes human equality self-evident, even while the attenuation of iconicity is what introduces what is called (by Michael Tomasello, among others) the drift toward the arbitrariness of the sign. The self-evidence of human equality lies in our ability to complement the inclusive drift toward arbitrariness with new modes of iconicity, within language and in our social relations. It is such a process that has brought us from the egalitarian distribution of the most primitive communities to the more expansive gift economy and ultimately to the market economy where the need for a single measure for value leads us to the relatively arbitrary universal equivalent of the precious metals—and, yet, what could be more iconic than gold, signifying wealth? (The arbitrariness of fiat currency, meanwhile, is arbitrary in the bad sense—it measures nothing but the will of the central banker.) It is also such a process through which we can try to move conflicts from the category of exterminationist opposition to that of war with rules and some notion of honor; from war to arbitration—or from criminality to civil law, and from civil law to friendly disagreements settled informally. And we can engage in such civilizing processes without succumbing to the delusion that any of these categories will ever disappear once and for all.

People only support icons, not arbitrary signs—an argument in favor of human equality in general is meaningless; what can be meaningful is a particular example of human equality at stake. (Which is why we will never get past the "distractions" and focus on the "real issues"; but, there's no need to worry because the real issues get addressed, always imperfectly and so as to produce new, and equally real, issues, through the distractions.) And icons can be incommensurable with each other, which is why there will always be conflict. Successful icons are those that provide a new ground for the struggle between icons, and those icons will have the character of rules in relation to the lower-level ones; or, more precisely, they will embody the kind of deferral and intellectual flexibility associated with rule-following behavior, while still being exemplified by individuals acting alone and together. How can we support egalitarian distribution in sites like the family or other institutions devoted to close bonding and comradeship, while ensuring that any individual within that compact group is free to enter the market society; how can we sustain the norms of honor and shame needed to produce individuals ready to protect market societies from the enemies they will always produce in abundance, without nurturing fatal resistance to market society within its very bosom—the answer to such questions will always come, if they do come, in the form of some representative of a provisional, partial solution.

But let's come back to the obvious: "dog" is "perro" in Spanish and "calev" in Hebrew; ergo, the word can't have any intrinsic relation to the referent—the sign is arbitrary, case closed. Things must look this way for the linguist, with single systems of language, and the amazing diversity of the world's languages, laid out in front of them; and to the naïve language user, compelled by such examples to take the linguistic perspective. The fact that when a speaker of English says "dog" it rather self-evidently refers to the animal in question, that "doggy" seems to "fit" the specific animal we feel affectionately towards, seems to be a pretty slim counter-argument. But there could never have been a point at which the word "dog" was imposed upon an acquiescent community of language users; the word was always firmly embedded with all the other words in the English language, and the languages English in turn evolved from, and if there was a first time the word's ancestor was used (there must have been, right?—we are committed to at least that assumption), then it was used in such a way as to best ensure its referential capacity and memorability; or, if the choice was random, if it worked, it was remembered in such a way as to do so. And there never could have been a time when it exited that orbit of self-evidency. The systematicity of language—the fact that words don't stand alone, but take on their "value" from all their interrelationships with other words (so, "dog" takes on its meanings from its distinction from "cat" and "mouse" on the one hand, from "wolf" and "fox" on the other, and from more specific terms like "poodle" and "German Shepherd" on yet another)—makes the point even stronger—at no time was any word or "lexical unit" outside of the linguistic world experienced as a whole, a linguistic world itself always in direct contact with the real one, via the ostensives and imperatives which embed us in that world—and, anyway, the sound symbolism of language can be every bit as complex as the semantic and grammatical systems: we can assume here as well, not a one-to-one correspondence between single sounds and dictionary-style meanings, but overlapping and interconnected connotations, which in turn interact with semantics and grammar in various ways. To address the argument for arbitrariness head on, the claim that linguistic signs would imitate, in their formal character (the articulation of sounds comprising them) those things they refer to or those events they aim at generating doesn't imply that there should be only one language—why wouldn't there be as many ways of "interpreting" what "sounds like" "dog" as there are ways of interpreting any complex text? It would be better to speak of a drift towards abstraction, rather than toward arbitrariness: the sign is abstract, even the first one, which had to be normed in such a way as to supplement its self-evidence from the very beginning precisely because there was no single way of conveying the intent to cease and desist. But even in this case the abstractions we speak of are marked by the drift, by the disciplinary spaces that have constructed them: in other words, abstraction involves accentuation and abbreviation, which is necessitated by the entrance of outsiders for whom the particular version of the sign currently in use is not self-evident, while at the same time making the sign even more difficult for the next outsider to grasp. The abstraction of the sign, then, represents the disciplinary space (the shared inquiry into how to modify the sign so as to fit it for its new purpose) iconically, creating privileged and typical (unmarked) users, enabling the sign to attain self-evidency throughout the community.

I feel a strong need for a name for the politics of this marginalized liberal tradition, and the word "liberal" is not worth fighting over any more—especially since you'd have to fight the leftists who still use the term, the rightists who won't give up on using the term to describe the leftists, and the libertarians who are very interesting but ineffectual semi-anarchists. The term I have been using on and off, "marginalism," isn't bad, but it sounds vaguely "oppositional," and suggests a reactive rather than comprehensive politics. I would like to derive a name from the rereading of "arbitrariness" I am proposing here, which sees the arbitrariness of the sign as a kind of secondary iconicity, a commitment to the iconicity of the sign that realizes that we can only rely upon the icons generated through the scenes we constitute. Icons lose their primary self-evidency when outsiders arrive who don't use the sign properly, because it isn't self-evident to them (they have their own self-evident semiotic system), and because ensuring the self-evidency of the sign to the primary community has made it idiosyncratic, or idiomatic. It is precisely this idiosyncrasy or idiomaticity that is, simultaneously and paradoxically, the ground of self-evidence: the shaped, complexly marked nature of the idiomatic sign is what makes it learnable through immersion in the scene. Common sense is, in turn, the meeting ground of these idioms, the discovery of overlappings.

I have thought about "plurality," not in the sense of a diversity of ideas and lifestyles (pluralism) but in the sense of fundamental incommensurabilities in any community which tempt us toward violence but can facilitate rather than interfere with living together. I want the sense of "sampling" that Charles Sanders Peirce associates with inquiry (any knowledge is knowledge of the relation between the proportions in a sample and the proportions in a whole)—the notion one can derive from the icon (not necessarily Peirce's) of a continuous sampling of possibilities in any event (when you try something the first time what's the proportion of visible supporters and opponents; and then the second, third and fourth times?) can ultimately lead to the conclusion that the generation of samples is itself the event. Politics in this case is about thinking and knowledge, but not knowledge which then guides politics—instead, the politics generates knowledge which can only be used within political action, as the provisional articulation of our tacit knowing. Alongside "sampling," I considered "a politics of proportion," which shares with "sampling" the relation between parts and the whole, while including the word "portion," which reminds us of politics' relation to dividing and sharing in some "equal" manner, and suggests a notion of politics as balancing and inclusive while still being interested, inevitably, in one's "portion." But "plurality" seems like a way of describing politics from the outside, from within thinking, and sampling is too "experimental," by itself, suggesting the progressive sense of a "scientific" politics; "proportion" has the same problems, while another idea, "partiality" or "particularity," evokes partisanship and identity politics rather than the notion of a whole that not only exceeds but can only be grasped through the parts which we are.

What I have for now, and will try out, is the neologistic (according to Merriam-Webster, neologism is either "a new word, expression or usage" or "a meaningless word coined by a psychotic") "anyownness"—any, or "one-y," evokes (for me, via Gertrude Stein) singularity but also plurality, since anyone is as any as anyone else; "own" replaces "one" (which is redundant here anyway), and can suggest one's property, one's ownership of oneself prior to and as a basis for property, the opacity of any's "ownness" to others; I hope it can suggest that one's ownness, one's singularity and property, is ("constitutively") bound up with that of others, hence maintaining the notions of proportionality, sampling—and marginality, in the specifically economic sense, i.e., that infinitesimal point at which one's (or anyone's) "weight" on a particular "scale" tips that scale in the opposing direction. A politics of anyownness, or of the anyown, then, is a politics of motivatedness: nothing is arbitrary, nothing is simply imposed, everything is exemplary and abstract, anyone can be the marginal representative of idiomatic common sense.

So, Next: The right of the anyown

October 10, 2010

A Sapir-Katz Hypothesis

Filed under: GA — adam @ 5:05 pm

We all know about the Sapir-Whorf hypothesis (and if you don't, you can google it)—it's really Whorf, who was a student of Sapir's and greatly expanded a couple of much more tentative suggestions from Sapir regarding the relations between language, thought and culture, who is responsible for the notion that grammatical structures influence thought and culture to the extent that incommensurability arises between different languages and, through them, between ways of thinking and cultures. Contemporary linguists seem to treat Whorf's hypothesis as a kind of piñata, as if to see who can smash it most decisively, and it's easy to see how vulnerable the once thrilling idea was: supposedly, the Hopi had no grammatical means of distinguishing tenses, and therefore, rather than sharply distinguishing, as we linearly minded Westerners do, between past, present and future, they see reality as an ongoing "process"—a perspective which, Whorf went on to claim, made their way of thinking marvelously compatible with the space-time of relativity theory. Linguists, I think, like the fun you can have with this, drawing upon their knowledge of the remarkable diversity of grammatical structures and peculiarities of the world's languages to suggest various bizarre cultural and intellectual forms by way of refuting Whorf. I can play too: English verbs have no future tense—we "normally" use "will" as an auxiliary with the verb we wish to place in the future but we also often place a time designator in a regular present tense sentence to indicate futurity (I arrive tomorrow; I'm going to be there soon; we meet at 7, etc.)—so English speakers must be incapable of thinking coherently about the future: we are, depending upon your cultural tastes, doomed to be improvident wastrels, or happy-go-lucky, live-for-the-present types. But, of course, you can reverse all this, and say that precisely because we have no future tense, complacency about the future is forbidden us—we are more mindful of the various ways the future impends upon the present because we must devise all kinds of novel ways of referring to it. Or, how about the fact that in English the present tense doesn't really refer to the present, that is, to something that is happening right now—at this moment, I do not "write" this sentence, I "am writing" it—that is, we use the present continuous. So, are English speakers more conscious of the incomplete nature of the present, or of the distinction between things we do habitually and what we are doing at the moment? Where would one go to even begin to explore such "hypotheses"? While a science fiction writer might be able to do wonders with this kind of thing, it doesn't take us, as cultural theorists, very far.

But language must be bound up with thought and culture, and we must be able to describe thought and culture with linguistic and semiotic vocabularies—what else are thought and culture comprised of if not words, sentences and signs? You won't get anywhere exploring these relationships if you are focused on what obsessed liberal intellectuals from the turn against imperialism and the (re)discovery of native peoples early in the 20th century (itself a victimary development of Romantic theories of nationality and ethnicity) until today: asymmetrical differences between cultures. But how about differences within languages? Any idiom creates a new way of thinking, a way of thinking possible only within that idiom—until the idiom is normalized and made readily convertible into other elements within the language. In other words, the point is not the inherent properties of language; rather, it is, first, the possibilities for invention inherent in language, and the certainty that new desires, resentments and loves will demand new idioms of expression; and second, the incessant change undergone by language, which normalizes idioms and idiomizes norms, thereby creating new resources for expression. Slang words like "cool" (amazingly still going strong) and "groovy" (hermetically sealed within the idioms of the 60s and very early 70s, and incapable of revival due to the demise of the technology it was predicated upon) are obvious examples: for at least a cultural moment, in a particular cultural space, these words enabled people to say, and therefore think, something that couldn't be said or thought any other way. But then they become subject to mockery ("groovy" is more likely to make people think of The Brady Bunch than of Woodstock) and extension ("cool" seems to me to have become, in many instances, more or less synonymous with "OK"), and easily translated into other terms. At this point, there's nothing you could say or think with "cool" that couldn't be said or thought more inventively or nimbly in many other ways; and you couldn't speak or think with "groovy" at all.

Once we say that idioms provide a new way of thinking, we can say the reverse: to create a new way of thinking is to construct an idiom. Ethically and intellectually, this would mean that my obligation in some new situation is to construct an idiom adequate to it: a means of mediating the articulation of desire and resentment especially threatening in that situation. Idioms construct habits: the best idioms construct what Peirce called the "deliberately formed, self-analyzing habit." Idioms and habits refine and direct resentments: let's say that you decide, in a particularly tense social setting that you can't avoid, that you will counter every expression of hostility you encounter by restating it in literal, atonal indicative sentences: in other words, you will "translate" rude imperatives, hostile rhetorical questions, interjections, sarcasm, etc., into something like: I understand that you would like to take a break now. That would be an idiom—it would get noted, ridiculed, admired, imitated (perhaps involuntarily), revised, elaborated, and so on—others would have to respond to it in some way, leading to further developments within the idiom. They may want to speak with you about your idiom—can your idiom handle that conversation? Will you draw them in or will they draw you out? It may not work—it may send resentments spiraling out of control by appearing robotic, or deeply sarcastic itself—but then some other idiom will, or nothing will (some situations are beyond saving). The point is that you would be thinking in terms of inventing and experimenting with idioms, with rules that could be at least tacitly recognized. The more deliberately you construct idioms, the more attentive you become to potential materials for such construction: accidents, mistakes, surprises, on the one hand; places where communication and amity seem to be breaking down on the other. After a while an inventory of possible idioms evolves, and the ability to improvise, to redefine an idiom in the middle of things, emerges.

In fact, I have been experimenting with such an "indicative" idiom for a while—I first discovered its ancestor many years ago, in a situation where I had to provide academically acceptable answers in a highly politicized and hostile (and, for me, rather high stakes) setting—what I discovered is, no matter how snide and sarcastic your questioners are, in the end they need to ask a question; you can then carve that question out of the fog of vicious innuendo, restate it, and answer it. A primitive version of the idiom has helped me often since, but lately I have been working on systematizing it: writing without interrogatives or imperatives, or even disguised imperatives like those lurking within words like "should," "must," and so on. This forces you, then, to rewrite a sentence like "we should do that" in a way that commits you to representing an actual event: not doing that will likely involve us in the following difficulties. We can try out other rules, perhaps in controlled ways: staying within the present tense leads us to fold all consequences into their present possibility; eliminating conjunctions takes away additive and oppositional habits of thought; or, eliminating conjunctions turns additive thinking into a search for degrees and thresholds; such elimination simultaneously tends to make opposites mere differences. And every few paragraphs, or according to some other division, suspend all the rules (why not?), because you must let go on occasion and you create a veritable carnival of forbidden terms.

I have referred in previous posts, a couple of times already, to one of Marshall McLuhan's axioms that I find compelling: the content of any medium is another medium. Meanwhile, a reading of G.A. Wells's The Origin of Language: Aspects of the Discussion from Condillac to Wundt (a book I happened to come across in a used bookstore) crystallized for me the assumption that a meaningful world of ostensive gestures must have preceded speech (I am still thinking about whether one can imagine imperatives and even declaratives emerging within a purely gestural world, but for now I am assuming a realm of ostensivity). Language is, then, primarily iconic, as gestures would mostly be, as was the first gesture: aborted actions, and then exaggerated actions, simulated actions responding to other simulated actions, and so on. From the beginning, though, we can assume a drift toward the arbitrary, as there are always many ways of conveying an incomplete action, and gestures would take shape in accord with the habits of a community, and groups within communities—outsiders would not be able to treat them as self-evident and would need to be taught how to use the signs. There is even an irreducible element of arbitrariness on the originary scene itself, as the sign that prevails will be the one that works, not necessarily the one that is closest to a Platonic ideal of a gesture of aborted appropriation.

If human beings are deliberately and increasingly skillfully imitating each other in meaningful ways—ways that create new shared objects and means of appropriating and distributing them—then it seems to me reasonable to assume they will be imitating other things in the world as well. Once we admit this assumption, then all those ridiculous theories of the origin of language that have long ago been dismissed, from onomatopoeia, to imitating the cries of animals or the blowing of the wind, to stylized cries of pain and pleasure, become a lot more plausible—as the origin of speech within an already existing gestural world. The sounds that ultimately get combined into words would also, then, have iconic roots, which would support arguments for "phonosemantics," or "sound symbolism"—the argument made most audaciously by Margaret Magnus (http://www.trismegistos.com/MagicalLetterPage/) that the meaning of words is tied to their sounds. Sounds would initially be made to accentuate a gesture, and then to supplement it when the gesture could not be seen—aiming, then, at the same effect as the gesture. In that case, the content of speech is gesture, just as the content of writing is speech. Speech would take over vast domains of human communication first covered by gesture, while at the same time incorporating, embedding itself within and expanding the realm of gesture—and, in the end, only being meaningful in terms of gesture. By gesture, I mean all the ways human beings coordinate with each other spatially—architecture is gesture, the fact that we face each other when we talk, and generally stand a few feet apart and never, say, three inches apart (unless we are lovers)—all this, and much more, is gesture. Speech is always about the possibility that we could have something in front of us that we could orient ourselves towards together.

But what is the relation between form and content other than one of inquiry—in the sense that the "content" of the originary scene is the repellent power of the object, and the "formal" gesture is eliciting that power, which is to say seeking it out, distinguishing it from everything else in the world, and "measuring" and "broadcasting" its effects? Roman Jakobson makes the argument upon which I am modeling this one: he contends (like David Olson) that the invention of writing reified speech and "language," turning it into an object of inquiry—in the case of alphabetical writing, an inquiry into which were the smallest representable "units" of language. Jakobson then suggests that this linguistic "atomism" was the source of the scientific atomism that predominated in Greek philosophy—if language could be, why couldn't anything be subdivided into its most minimal units? For Olson, the problem of writing is to supplement all the elements of speech that make understanding possible—gesture, of course, but also intonation and other elements of the speech situation. So, whole new vocabularies emerge as a result of writing—a word like "assume," for example, as in "he assumed they were lying" would be unnecessary in speech, because there would be other ways of showing someone's attitude in reporting their speech in a spoken manner—most obviously, imitating the way they spoke (in a questioning manner, say). The word "assume," then, like a word such as "suggest," is the means and result of an inquiry into linguistic interaction that is prompted by the invention of writing. Speech, then, is likewise a mode of inquiry into gesture, as gesture is itself a mode of inquiry into "elemental" desires and resentments.

I have also applied McLuhan's axiom to the elementary speech forms, and would like to update that account. An imperative, then, is a mode of inquiry into ostensivity—not only that, of course, because if you are issuing an imperative you do want the thing done (just as if you are writing you are writing about something and not just inquiring into the operations of speech)—but an imperative attends from the absence of the object to the possibility of its being made present. An imperative is also an inquiry into the effects of tone and gesture (it needs to be loud enough, but not too loud, "authoritative," it's better to be standing or leaning forward, but sitting back in a chair might be a way of testing the intangibles of authoritativeness as well…), all elements of ostensivity. Indeed, the imperative might be seen as inquiry into the iconicity of the person. And like any inquiry, it originated in some uncertainty regarding the object in question. Similarly, the interrogative is an inquiry into the imperative—it marks the unfilled character of some demand or command, and unmarks the possibility that it will be fulfilled; the question attends from the expectation of a demand supplied to the disappointment of that expectation, and then from the prolongation of that demand to some anticipated location in reality whence the reformed demand might yet be satisfied. Inquiry is an act of marking and unmarking—when we are converging on the object, the object is marked for destruction, but once the sign is issued we attend, first, from the sign to the object, unmarking the formal sign and sharing our marking of the object; and then, second, we attend from the object to each other, thereby unmarking the object (which is to say unmarking everyone's defense of, resentment on behalf of, the object) and marking our own now evident, because naked, desire for the object and resentment toward the others. Signs are unmarked insofar as they single out portions of a reality that in turn marks those singlings out as partial. Just as portions of reality can be marked by signs, signs can internally mark parts of themselves, which really involves marking some prior use of the sign while unmarking the sign itself. Sign use, language, is always inquiry insofar as it is always prompted by some portion of reality, by the signs which have zoned off that portion, having moved from an unmarked into a marked state, and by the need to restore the relation of (un)markedness.

The declarative, then, is an inquiry into the resolution of the state of uncertainty (and “patience”) unmarked by the question, marking its continuance and unmarking what would ultimately be the articulation of imperatives and ostensives that would resolve it. The sentence, then, unmarks whatever the question marks, a reality that exceeds the scope of the question: if this one were to move a bit this way, and the other a bit that way, and another were to look over there and promise not to move, etc., the uncertainty would be resolved—all those acts marked as uncertain by the question are unmarked as embedded in reality, as commanded by reality, in retrievable ostensive-imperative articulations; and the sentence can, in turn, mark and return to the domain of the question any of those articulations, which is to say, who observed and did what to make the event represented in the sentence and the event of the sentence itself possible. Inquiry, then, is the process of allowing anything on the scene to be marked or unmarked; representation is a solid state of un/markedness. The sentence articulates an event by mapping another event: where before there were increasingly marked (or potentially increasingly marked) convergences of desire and resentment, questions in danger of relapsing into commands, commands into the attempt to grab something, even if not what was originally desired, there is now an event with participants upon a scene everyone can identify and inhabit, however tacitly or indirectly. They can attend from their own scene of tribulated conversation to the scene presented by the sentence, and from the scene represented by the sentence to their own participation on the scene of speech, a participation now framed in terms of words that might match desires and resentments.

An idiom, then, creates a space of inquiry, and spaces of inquiry let things be, and suspend us in observance of those things; an idiom allows us to negotiate its own terms, guaranteeing that we will share the same space as we do so. The fleshing out of an idiom will entail its embodiment in gesture, speech and writing, and allow for certain norms regarding the issuing of ostensives and imperatives. The indicative idiom I have presented may be more weighted towards writing, but for that reason might have striking effects in speech situations; it might suggest minimal gesturality, but minimal gesturality might be maximal in its meaningfulness. Imperatives would be left largely implicit in such an idiom—an overt imperative would be heavily marked—but since the imperative space will be fulfilled one way or another, learning such an idiom would mean deducing imperatives from representations drained as much as possible of all resentments other than those directed against over-invested representations of reality. Above all (an indicative idiom would rule out phrases like “above all,” which tell—command—the communicant how much importance they “should” give to one claim over another) idioms inspire the invention of other idioms, in this case perhaps an imperative centered one that introduces equivocation into explicit imperatives.

A sign presents, bears with it, involves a scene; a sign also represents the results of a completed scene to those who weren’t on it. You might think about the difference between the working out of a shared sign on the spot, and the teaching of that sign to others, once a consensus on its shape and use has been decided upon. Each sign contains both dimensions, but in differing proportions. In presenting, in inquiry, the preliminary marking of the ultimately unmarked is enacted; in representing, that preliminary marking is unremarked upon, and the (un)markedness of the system and its elements appears ready made. The generation of idioms aims to tilt the proportion more towards presenting than is ordinarily the case, to mark more elements of language so as to make them available for future unmarkings.

Along with formalizing our own incessant idiom generation we can construe others in terms of their tacit idioms. Insofar as you can work with someone's idiom, obeying and extending its rules, you have granted them a right to speak within a particular discursive space. There is no reason to tamper with the basic rights conveyed to us from Enlightenment politics and, in the U.S., the U.S. Constitution—free speech, free assembly, the right to due process, to bear arms, and so on. But rather than reducing all political discussions to these rights, which means they either get stretched and distorted or become irrelevant, and rather than leaving talk of rights behind and allowing bureaucratic expansion to proceed by way of "non-ideological problem solving," we can grant a kind of pragmatic, subsidiary right to idioms. Instead, for example, of a Supreme Court delivered "right to privacy" based upon an incoherent reading of the 4th Amendment with the penumbras of a couple of others thrown into the mix, why not recognize the idioms in which women speak with and about their relations with their doctors, bodies and intimates, and identify (and argue about identifying) some boundary beyond which laws shouldn't pass—and then, rather than forbidding all laws that transgress that boundary, bring that argument into the debate over laws? We would then be using "right" in a more informal way, in the way you say to someone, "you have no right to speak to me like that!" (like what?: in some idiom, no doubt), but the use of the same word can ensure continuity with more "fundamental" uses of the concept. Such idiomatic uses of "rights" recall the origin of the term in the more medieval notion of "privileges," which associates rights with honor within a gift and Big Man economy—and something like honor is what is usually involved when we say "you have no right to speak to me/treat me like that!" We can give linguistic if not legal heft to our intuitions that the media, for example, have no "right" to investigate the children or cousins of candidates for office, and we can embed impoverished contemporary shibboleths like "privacy" with articulations of right and obligation implicit in terms like "modesty," "reticence," "shame," "respect" and other terms reflecting our tacit knowledge of social boundaries and the individual attitudes and aptitudes required to preserve them. There is a kind of extremism, found in some versions of libertarianism in particular, that sees other modes of exchange as competitors to the market mode, and it is that kind of extremism (reinforcing the leftist extremism that wants a reduction to a bureaucratic reinterpretation of "rights") that wants to drive out all ways of adjudicating conflicts other than through "rights"—but a healthy free market would be based upon a healthy informal gift economy, and allow for transit back and forth between the two—and even encourage us to go back to the primitive egalitarian distribution found in families and other groupings (like sports teams, for example). People with a complex sense of "their own," and with sophisticated idioms for parsing "ownness," will be all the better prepared to enter the global market economy.

Anyway, why "Sapir-Katz"? Partly for the symmetrical displacement of "Sapir-Whorf," but that is only possible because Edward Sapir did, in fact, have a more subtle understanding of the relations between language, thought and culture than Whorf, and has helped to suggest, for me, the possibility that the construction, through various means, of idiomatic shifts within the language provides new pathways for thought and culture. But that's enough for now.

September 19, 2010

Islamovictimism

Filed under: GA — adam @ 6:42 pm

I’ve opened this post to discussion of Chronicles 399 & 400.

http://www.anthropoetics.ucla.edu/views/vw399.htm

September 5, 2010

The Right of the Idiom, Yet Again

Filed under: GA — adam @ 5:26 pm

A few months ago I saw a student wearing a t-shirt with the words "Us vs. Them" on a solid background (I don't remember the color of the shirt or the lettering). It seems to me an example of minimalist brilliance. It first of all must be read ironically, as criticizing all the ultimately "arbitrary" divisions in the world, all of which would reduce to this single, "irrational" or "primordial" gesture of demonizing some other. But such a critique proves too much—if that is, indeed, what we are doing all the time, isn't the implication that we can't do otherwise? Even more, what else is the wearer of the t-shirt herself (it was a young woman) doing other than constructing an "us" (those aware of the arbitrariness of conflicts) and "them" (those who actually believe in the causes of the conflicts)? I think we can work with the assumption that the irony is meant to bounce back upon the wearer of the shirt in this way, but that doesn't end things: there is still some marginal difference between the one who stays completely and uncritically invested in his community's battles and the one capable of stepping back, however momentarily and provisionally, and attaining a more anthropological insight into the sources of those battles. In that case, the second "us vs. them," that of the anthropologist vs. the merely mimetic human, which is in fact a division within each of us rather than between some of us and others, interferes, however weakly, with the rush toward the center—it is, that is, a kind of originary gesture, all the more meaningful for acknowledging its own implication in the anthropological truth it reveals.

An idiom is this articulation of group membership, the sharing of a sacred center, and its anthropological “surplus,” or awareness that the signs designating that center might be otherwise and in fact are otherwise, having their equivalents in every other group. The preservation of an idiom, moreover, depends upon sharpening the differences between equivalents rather than ironing them out—the attempt to create more general signs that would smooth out idiomatic differences is really just the process of creating new groups, albeit ones that claim to speak (and may do so more or less meaningfully) in the name of, say, “humanity.” One sign of an autonomous idiom is the proliferation of individual styles, as the idiom becomes rich enough to gather influences from a range of other idioms as a way of enhancing its own distinctiveness—a very good example is the copious wealth of Renaissance English, with its avaricious devouring of Latin, Italian and French influence, its engagement with the emergent sciences, and the problems of translating the Bible, establishing national unity and devising a specifically Anglican form of Christianity.

There are three ways in which human beings share things: we can divide them literally and materially; we can exchange them through a gift economy; we can exchange them on the market through the universal medium of money. Historically, we have moved, unevenly, from the first to the last, but just as Eric Gans acknowledged recently that the gift economy still pervades our market one, we should further acknowledge that we are never done, once and for all, with any of these economic systems. A family sitting down for dinner will cut up a single piece of meat and distribute sections to each member, according to some convention or the urgency of individual requests (no problem if there is enough, but possible problems if there isn’t); a group of college kids will pass around the bong, each taking one “hit” (if people still do this); a baseball team gives each batter a certain number of hits during batting practice (and this may be done on the basis of equal number of chances or need), and so on. As Gans noted, if you invite me to dinner this Friday night, I’m expected to invite you, not Saturday night (which would make it look too much like a payback, or like I’m trying to free myself of an onerous obligation) but, perhaps, next Friday night—but gift exchanges go well beyond this into emotional, cultural and intellectual areas of existence: where else but in a gift economy does my writing of this blog entry belong, as the only exchange I can hope for is a comment, a reference on some other blog, or a more general diffusion of my ideas? Within any large company, for that matter, employee survival and advancement depends largely upon the effective calculation of whom and how much one should gift—helping someone else develop an idea, taking someone else’s place when they’re sick, doing a bit of overtime for the boss, etc.

If material division, gift economy and market all co-exist, it stands to reason that some kind of healthy co-existence is possible and necessary, along with more unhealthy varieties, in which one economy interferes with the workings of another. Indeed, what is socialist economics other than an imposition of the economy of material division upon the market and gift economies, treating the total social product as if it were one big "piece" and distributing it according to some notion of need or desert? Less crudely, Keynesian economics does this by expanding the money supply, which Mises saw as benefiting those who received the money first, a principle which can be applied to all forms of government regulation, which favor, or direct money towards, those currently best equipped to comply with them and hence gain advantage over their competitors. Gifting introduces firstness into economics, as primitive egalitarian division depends upon some kind of ritual principle (even in the modern examples I gave above, where the solidarity of the group is foremost)—someone has to give the gift first and impose an obligation on others, an obligation which can be accepted and converted into a new mode of firstness with greater or lesser grace. The modes of firstness developed within the gift economy are too intense and unstable, and ultimately give way to "Big Man" modes of social organization and a tributary mode of social distribution, with the genuine market emerging on the margins of despotic empires and corroding their authority. The "Big Man" is, for that matter, still with us, and always will be, and the return to the more primary, "authentic" and "rational" strict division or allocation by need, along with the central bureaucratic authority needed to make that happen, are, in the market context, resentful parodies of the more genuine firstness finally created in the form of the entrepreneur, who creates new desires and thereby transforms the social division of labor, but does so by submitting himself to consumer "sovereignty." I think we can assume that this resentful counter-firstness, or secondness, will be a permanent feature of free societies.

The violation of the norms of primitive division produces defilement within the community, and the only response is expulsion and/or some form of ritual purgation—the more modern and less destructive form of this is embarrassment, which I have situated within what I call originary mistakenness. Within the gift economy and the hierarchies that begin to emerge within it, we start to see honor and shame as governing principles—honor and shame are the only ways of enforcing group norms and the authority of the Big Man without legal sanctions and an impersonal governing authority. As we know, the more interiorizing and individualizing concepts of sin and guilt come later, but we are never rid of defilement, honor and shame either. It is fascinating to note how regularly polemicists against the horrors following from Islamic notions of honor and shame (in particular in the form of violence against women) appeal to the honor of their readers as citizens of a democratic society and attempt to shame them out of their passivity—as with the various economies, the problem is always one of articulation and conversion, rather than the elimination of previous cultural forms. An enormous amount of destruction has resulted from attempts to utterly eliminate more primitive norms, and by now we should be able to see that a purely "enlightened" or "modern" notion of reason or rights has nothing to replace them with in the vast majority of everyday social settings. Indeed, how much of contemporary politics is driven by the sense of defilement, shame and honor on the part of the "enlightened" as they seek to impose their own idiom on the rest of us?

Parallel to the distinctions I have just explored, historians of literacy like Eric Havelock, Walter Ong and David Olson (Tom Bertonneau, well known to those familiar with GA, has written some excellent essays developing these arguments) have described the invention and spread of writing as a watershed in human consciousness and therefore society. To sum up the point succinctly, what writing makes possible is, first, an understanding first of language but ultimately of reality as something which can be broken down into ever smaller parts (words, syllables, phonemes; molecules, atoms, quarks…); second, a coherent, linear and therefore causal representation of events (one of which follows another just like one word, one sentence, one page is seen to follow the other); third, the distinction between (deceptive) appearance and essence which founds epistemology (and between signifier and signified which founds linguistics, as particular signs can be seen as windows to particular sounds and meanings). In this case, even more than in the others, it is clear that any healthy social and intellectual order will find ways of articulating these elements, rather than trying to privilege one over the other—the ways of thinking made possible by writing are, of course, to be preserved, disseminated and enhanced, but who would want to argue that we are not, nevertheless, thoroughly immersed in orality through much of our existence—an orality that has, of course, been shaped by our history of literacy, but which shapes the latter in turn and, in fact, makes it possible in the first place: in the end, even when we read silently to ourselves we are experiencing the words as sounds.

This account of language, though, is far from complete without taking into account gesture as well. We assume that the first sign was a gesture, which means that it was iconic or self-evident—not only are we just as immersed in gesturality as we are in orality (orality is itself unthinkable without gestures), but gesturality, in the broader sense of the alignment of human bodies with each other, is itself embedded in the physical structures in which we house ourselves and provide access to one another. Gesturality is also embedded in language in some very fundamental ways—most obviously in deictics, but more subtly in our prepositions: we can't use language, which means we can't think, without being inside or outside, above or below, before or behind, near or far, etc., etc. These terms all derive from fundamental spatial orientations, and however abstract prepositions become, I would defy anyone to suggest where else they could come from; and, those of us familiar with GA in particular should be aware of the importance of words like "above," "inside," "central," and so on in installing a basically scenic human reality within language. Even beyond that, it's very probable that most if not all words can be traced back to basic experiential distinctions between well and ill, large and small, strong and weak, straight and bent, and so on. The gestural or iconic elements of language pervade writing as well as speech (as the innovative writer Ronald Sukenick once remarked, if you change the traditional Gutenbergian make-up of the written page people go berserk—what is, then, the iconicity of that homogeneous line of print going predictably from top to bottom of page after page?), and the explosion of new media over the past century should be accounted for as a resurgence of gesturality as well as orality within a world presumably conquered by the printed word.

Going even further into a specifically originary idiom, the ostensive, imperative and interrogative elements of language are built into declarative culture—I would say no declarative could make sense that didn’t accommodate its conversion into imperatives and ostensives via a series of gradations—in a sense, what else could any statement mean other than some version of “attend to this and that will be brought to your attention”? Obviously such formulations can become extremely complex—after that is brought to your attention you will in turn need to attend to the other thing, and so on, and a very simple sentence may map out such a string of attending to each other’s attending to. An originary “parsing” of a sentence would be breaking it down into the various ostensives and imperatives it might contain, such as the indications, promises, oaths, prayers, and hypotheses (questions) embedded within it.

My plan, finally, is to treat the notion of "rights," or the word "right," as a thread going through all these fields. "Right" is a modern notion (more precisely, perhaps, its spread maps the transition from medieval to modern life), but it registers and translates the insistence upon respect and access that constitutes any idiom; "right" began its career as a narrow political concept, but now pervades the language—it is quite common to assert, for example, that "you have no right to speak to me like that," in which "right" has collapsed back into a more colloquial notion of "honor"; rights are generally asserted in declarative form (we hold these rights to be self-evident) but have a strong imperative and ostensive component—they forbid all kinds of encroachments, and reveal an inviolable integrity; and, to return to the starting point of my discussion of the "right of the idiom," the integration of rights within a legal and political system presupposes the existence of writing, at least for the keeping of records and forging of agreements, while sustaining a respect for rights as something other than markers of bureaucratic power requires the convertibility of rights (or "rights") within the gift economy (where claims will be deeply rooted in orality and gesturality) into rights within a market economy. This last point is especially important politically: only in this way can we imagine those countries with stunted (or worse) market systems being transformed so as to succeed within the global market and, I would say, only in that way will we be able to think through the extraordinarily complicated issues of property rights in an information economy that furthermore transforms much that has been natural (like our DNA) into information that could be traded and used.

Idioms distribute rights internally—to speak within an idiom is to have a place to speak within it, and therefore a right to that place; and speakers of idioms insist upon their rights as speakers of that idiom within other idioms. This formulation brings us up against the dilemma Jean-Francois Lyotard called the "differend," wherein the two parties to a dispute occupy incommensurable idioms and the decision therefore will be made in an idiom alien to at least one of them. One example he gave, not very surprisingly, was that of indigenous land rights claims within a modern settler colonial society, like those of North America or Australia. The native can't produce a land deed or any proof of occupation or ownership—the myths which account for their belonging to the land (the very notion of "belonging" to the land) are "inadmissible" within a modern court of law. There seems to me no reason why a modern legal system can't address such issues in a way similar to Hernando de Soto's proposal for legalizing the informal property held by migrants to the margins of so many of the world's great cities—give credibility to the oral traditions and actual circuits of exchange visible within the community in question. A right, in the end, is others granting you an unmarked position within their idiom, and the way you secure that is by acknowledging that you might be marked or "markable" in ways analogous to the one asserting the right, making that common markedness a center of joint attention and thereby unmarking it.

More precisely, to propose a common markedness is to propose a mistakenness that contaminates you both and, through you, the entire community, even the world. Mistakenness implies the violation of some rule, which can be a tacit one. Let's say that a "rule" is a boundary between a field of ostensives and a field of imperatives—you do these things and you see those things; the doing with an eye to that particular revelation and the observance contingent upon the faithful fulfillment of the prescribed act. (I suppose we might think of this like opening a box in which sunlight enters at a particular angle at a particular time of day and produces a very specific shadow or reflection.) Failing to follow the rule means you obey an imperative but see something unrecognizable; or you see something that seems unmoored from any imperative that might have placed you before it. Such mistakenness marks you as a dangerous site of infinite desire—after all, the channeling of imperatives and ostensives into one another is aimed at checking desires which can't be contained within the ritual space. Unmarking the other involves finding a way to ostensively verify the imperative they have presented themselves as following, or supplying the imperative which might account for the ostensive they have presented. Marking yourself, meanwhile, or implicating yourself in mistakenness, involves providing some ostensive sign that one of the imperatives you habitually obey has proven inadequate to this instance. It is always possible to do this, because we are always mistaken and we can always see this if we widen our sense of the scene a bit. And this approach will work with enemies as well as friends, or potential friends—to see the other as mistaken is not to eliminate the idioms of guilt or shame; to see oneself as mistaken is not to surrender one's power of judgment—rather, mistakenness gives the other the right to speak openly of the imperatives they follow and gives you the right to present your imperatives before them. Addressing a jihadist as the "infidel" of their discourse, or as a rebellious "dhimmi," while inviting them to convert to what for them is the religious other, for example, might contain more possibilities than the legal and political terms we are currently working with. In other words, it is still us vs. them, but with that minimal anthropological surplus.

August 25, 2010

Anti-Semitism

Filed under: GA — adam @ 7:56 pm

Below is the paper I read today at the Yale Initiative for the Interdisciplinary Study of Anti-Semitism's conference on "Modernity." It was a rather interesting conference, which I will perhaps feel moved to comment on at some point. For now, though, I'd like to state the central conclusion I arrived at from the proceedings. There are now two terms in play, each seeking to name the source of global violence and possible breakdown: "anti-semitism" and "Islamophobia." For reasons I will perhaps expand upon, I am convinced that these two concepts cannot co-exist–one will disappear or be significantly marginalized, and one will at least have the chance to organize a new mode of politics. I hope that "anti-semitism" is the survivor, since it names something real and the concept might help us advance the cause of civilization. But I wouldn't bet on its chances. At any rate, I would suggest that the fate of the opposed terms will serve as a clear index of where things are headed in the years ahead.

Anyway, here’s the paper:

Anti-Semitism and the Victimary Era

Adam Katz
Quinnipiac University

In this paper I will offer an account of contemporary anti-semitism in terms of Eric Gans’s “originary hypothesis” regarding the origin of language and culture. The originary hypothesis extends and revises Rene Girard’s analysis of mimetic rivalry: according to the originary hypothesis, the first sign emerged in a single event, a mimetic crisis in which the (proto) human group arrested their common and self-destructive convergence upon a common object by putting forward what Gans calls an “aborted gesture of appropriation.” Representation, then, is the deferral of violence, as is, therefore, all of culture. History is the ongoing process of preserving and, where necessary and possible, replacing such means of deferral (languages, rituals, beliefs, moralities, art, and so on) which are intrinsically fragile and under constant threat from mimetic desire, rivalry and violence.

In a series of books, beginning with The Origin of Language in 1980, through The End of Culture, Science and Faith, and Signs of Paradox, to mention a few, and his on-line column, Chronicles of Love and Resentment on his Anthropoetics website, Gans's "new way of thinking" has developed an account of history according to which the market system, and now the world market system, best realizes the reciprocity achieved on the originary scene. History is the liberation of humanity from attachment and "enslavement" to the singular object on the originary scene towards the universal exchange of objects within the market system. It is in the context of the market system that Gans first situates anti-semitism:

The Jew is not in some undefined sense a scapegoat for the larger society’s frustrations. He serves as a model of the inexistent and unfigurable center of the market system… the Jew, having rejected the incarnation, incarnates the truly unincarnable—mediation… In the postritual world of market exchange, the Jew is a paradoxical construction who regulates the self-regulating market, who fixes the prices determined by the interaction of supply and demand; we must eliminate him to gain control over this “inhuman” mechanism. (Chronicle 74, 1997)

Gans’s allusion to the Jews’ rejection of the incarnation already suggests that the suitability of the Jews for such a “model” of the unfigurable center of the market has roots that precede modernity. Anti-semitism, for Gans, is ultimately predicated upon the paradox of the Jewish discovery of monotheism: the Judaic revelation presented knowledge of a single God beyond the means of control of totemic religions and a single humanity whose knowledge of God is most profoundly revealed in the reciprocal relations between humans; at the same time, this very revelation is granted to a single people, “chosen” to work out before the world the implications of this understanding of the divine. The spread of monotheism, already inscribed in its universalistic origin, could hardly take place other than through resentment towards those who both gave this God to humanity and “selfishly” claimed an exclusive relation to Him.

What Gans calls Jewish “narrative monotheism” lays the groundwork for the eventual emergence of the modern market not only by de-fetishizing local totems but by separating faith in God and the obligation to follow the law from the national power and success of the Jewish people. If the defeats and even destruction of the nation are given meaning by demands and promises that transcend those temporal events, then moral meaning can be found in the contingencies of history, rather than the maintenance of a closed ritual space. But this contribution of Judaism to modernity collides with the more specifically Christian contribution or, rather, the revision of Christianity constitutive of modernity. According to Gans, “[w]here Jews had understood that the real center was inhabited by the Being of the sign, the Christians realized that this Being was generated, and could be generated anew, by an act interpretable as a victimization.” In other words, while Jewish victimization was already a sign of Jewish chosenness, this was a burden borne by Jews alone; for Christianity, the persecution of Jesus is imitable and identification with it the source of salvation. But this also meant that Christianity provides the model for anti-semitism: “[t]he anti-semite compels the Jew to enter the infernal circle of rivalry and persecution in order to reenact his own Christian conversion: he is the new Paul, and the Jew is the Saul he used to be.” (Chronicle #207)

The consequence of this privileging of victimization and identification with it as a moral model is clarified by Gans’s account of the role of Romanticism in the development of the modern market. Gans speaks of the “constitutive hypocrisy of Romanticism,” wherein the Romantic individual performs his rejection of the market system and proclaims his persecution by all those situated within that system only in order, ultimately, to create a compelling self capable of circulating effectively within the market. In abiding tension with this individualistic gesture is the formation of nationalism along analogous lines, through the martyrdom of the nation and its heroes at the hands of its oppressors; oppressors that are, of course, simultaneously mimetic models. So, Gans argues,

anti-Semitism intensifies in the bourgeois era because it is at this point in history that persecution, which grants significance, comes to be preferable to indifference… At this point the Jews' indifference to Jesus is no longer a veil covering his guilt for the Crucifixion; it is itself the ultimate persecution. To opt out of the theater of national life is ipso facto to operate in the hidden realm of conspiracy. The Jew is the ultimate dandy whose detachment from society—in principle, regardless of fact—is the sign of his omnipotence. The anthropological meaning of anti-Semitism may be expressed in terms of the market, but only insofar as the lesson of the modern market is itself understood as a transhistorical revelation concerning human exchange. The Jew is designated the "subject" of the market because, faithful to the empty center revealed by the burning bush, he remains in principle indifferent to the object—whether of persecution or adoration—that he finds there. (Chronicle #207)

The fury of the Nazis' assault against the Jews gathered together all these threads of the anti-market revolt within a desperate attempt to displace the primacy of the Jews and "falsify" their narrative: "[e]nraged at the Jews' monotheistic equanimity in defeat and disaster, the Nazis hoped to inflict on them a catastrophe so great that it could not be understood as a message of God to His people."

The ultimately omnicidal potential for human violence revealed by the Holocaust introduces something new into this equation. The Holocaust marks the beginning of the victimary era, in which we are now living. The virulent hatred of the Nazis towards the Jews drew the world into a cataclysmic struggle, the like of which we will not survive again in the nuclear age. The eschewing of such hatred must be the center of the new system of deferral constructed after the war: whatever “looks like” the Nazi-Jew relation must be uncompromisingly proscribed. This, of course, creates an incentive to make one’s own grievance fit that model: post-colonial, anti-racist, feminist, environmentalist and so on struggles are all cast in terms of the perpetrator/victim/bystander configuration extracted from the Holocaust.

The Jews are once again placed in a paradoxical position. First of all, the response on the part of the Jews to the consequences of their utter defenselessness in the Holocaust is to create and, with growing unanimity, support a Jewish nation-state. But the nation-state, with its ethnic exclusivity, preparedness for belligerency and narrow self-interest, is one of those things that "looks" very much "like" Nazism. Second, the victimary principle can only be universalized if the Jewish monopoly on Holocaust guilt is broken—the best way to do so is to present the Jews as oppressors, at best just like the rest of us, at worst uniquely so, insofar as they have exploited the world's guilt so as to perpetuate the very conditions that enabled their own victimization, only this time at the expense of others. Finally, then, the emergence of a new victim, the Palestinians, the victim of the Jews, completes the victimary metaphysics first set in motion by the essentially theological response to the Holocaust. The victimary system, then, depends upon this new, expanded anti-semitism, in which the Jews are scapegoated for the crimes of the West as well as for the intensifying resentments toward the West, coming now, in particular, from the most bitter if not the oldest of those resentments: that of Islam.

It was the Israeli victory in the 1967 war that made it possible to maneuver the Jews, ideologically, out of the victimized and into the victimizer position. But this maneuvering might have gone no further than the kind of standard anti-colonial critique applied to the US in Vietnam or to the European powers without the increasing abandonment of nationality on the part of Western Europeans and the rise of radical Islam. In this context, as Gans says, we are, first of all,

struck by the similarity between medieval and modern Christian antisemitism. In both cases, the Jew is accused of remaining behind in the “old” Israel rather than entering the New Israel of Christianity. It is by this suspicious archaism that he betrays his immoral preference for honoring the historical memory of his monotheistic discovery over its inherent promise of universality. Whether well-poisoner or Protocol-worshiper, the Jew is accused of refusing to “love his [non-Jewish] neighbor” as himself. (Chronicle #301)

Earlier, I suggested that we could attribute to the modern market a "Jewish" and a "Christian" component: the former being the location of meaning in one's "patient" action within history and the latter in the processes of individual singularization of the player on the market. It would, in that case, be the "Jewish" component that insists upon the regularization of exchange by the rule of law within what would inevitably be a national framework—which is to say the same paradox of universality and exclusivity long associated with the Jewish place in the world. Only the U.S. has fully embraced this paradox and the burdens it implies, which accounts not only for the alliance between the US and Israel, but also for that of anti-Semitism and anti-Americanism. In that case, the contemporary European attempt to transcend nationality is not so much a rejection of the modern market in the manner of Nazi and Communist totalitarianism as it is a rejection of one of the critical elements of the market, the nation-state under the rule of law, and an evasion of the paradoxes and resentments involved in the articulation of nationality and the world market.

With the most politically influential currents of contemporary Islam, meanwhile, we do most emphatically see a rejection of the market. Gans sees Islam, in its origins and today, as the monotheism of an "excluded majority," forged out of resentment against the first monotheism and the prevailing, dominant one: "the Hebrews discovered monotheism as the source of communal harmony independent of political power; the Muslims discovered it as a means for mobilizing the margins of the decaying imperial provinces to overpower them" (Chronicle 301, "Anti-Semitism from a Judeocentric Perspective I"). Hence the Islamic notion of the "uncreated Koran," a direct rebuke to the potential for interpretation and supersessionism ("distortion") built into the Jewish and Christian scriptures. Today, though, this resentment places Muslims at the margins of the global market, which they cannot avoid and in which, indeed, they participate substantially through the oil-producing states, but in such a way as to minimize the transformations in the division of labor that would reflect genuine cultural and ethical integration. The identification of Jews with the Subject controlling the uncontrollable marketplace, inherited from modern Western anti-Semitism, is in a sense radicalized in the Muslim world, which can create a political identity against the market itself from the outside. In the course of an analysis of a 2004 speech by former Malaysian Prime Minister Mahathir Mohamad, Gans contrasts modern European anti-Semitism, which sees itself as occupying the same world as the Jews, with

Mahathir's world [where], on the contrary, the Jews occupy a different world from us, and their hidden domination of that world is the root of that world's open domination of Islam. By setting up the Jews as the all-powerful enemy, he is encouraging Muslims to forget their military and economic inferiority to the West and focus on the infinitesimal number of their "real" masters. The only thing our billions need in order to vanquish these few million Jews is a collective will to power. (Chronicle #302)

Gans focuses more on the global Muslim "umma" in these reflections I am working with than on Muslims living within the Western countries, but following the line of his argument one could suggest that the convergence of this mutated form of Islamic anti-semitism and the revival of anti-Semitism in the West, along with the consolidation of White Guilt, is creating a particularly intractable new strain. As Gans says, the anti-Israel contingent in the West doesn't distribute copies of The Protocols of the Elders of Zion, but it respects the right of Muslims to do so. We might say that the Western Left plays the role of defense attorney to Islamic terrorism: it doesn't approve, but it is determined to see that the accused receive due process. "International law," as the latest supersessionist project of the West, thereby becomes a vehicle for this new brand of anti-semitism: as reflected in the Goldstone Report in particular, postcolonial, postmodern international law can readily be interpreted in such a way as to render any conceivable form of Israeli self-defense illegitimate; how else can we translate this project than in terms of a simple imperative: die!

The conclusion, I think, is that we cannot effectively address this emergent anti-semitism without addressing the pathologies surrounding the global market. On the one hand, the form taken on the marketplace by what Gans calls Jewish "firstness" is that of the centrality of the entrepreneur, who organizes capital, introduces a new division of labor and creates new desires. Despite claims of consumer supremacy, one source of the mysteriousness of the market's workings is precisely that new products enter the market before anyone has asked for, or even thought of, them—tales of consumer manipulation take on their plausibility from this fact. Similarly, the solicitation of investment capital, from the outside, inevitably looks conspiratorial, especially when heavily regulated markets require political maneuvering before new projects can get off the ground. We can see exploitative and deceptive entrepreneurial practices as exceptions to the rule in a fundamentally beneficial market process; or we can see the honest worker and consumer as, a priori, the victims of malevolent and unaccountable market players: which perspective we adopt will determine the way we think about regulating economic institutions, and only a fundamentally benevolent view will make it possible to accept the basic asymmetry between producers and consumers, capital and labor, and resist the search for scapegoats for our disappointments in the market.

Second, though, as I suggested earlier, Jewish firstness is represented by a willingness to endure historical contingency, adhere to the moral law (even if no one else does), and ask for no recognition or "proof" of election. I should make it clear that even if this possible relation between law, morality and history was invented by the Jews, it can, of course, be adopted by anyone (as, for example, in "American Exceptionalism"). At any rate, this form of firstness takes the form of an embrace of normalcy—not at the expense of eccentricity, innovativeness or otherness in general, but certainly as a rejection of the a priori victimary stance which artificially inflates the value of alterity. The location of cultural exemplars among the upholders of everyday middle-class values and common-sense patriotism, and the social prioritization of such values, might prove even more difficult than rehabilitating the figure of the entrepreneur. Without such a cultural turn, though, in which we come to see entrepreneurialism and normalcy as the modes of deferral they are rather than as exploitation and indifference to the other, anti-semitism will continue to attract and direct the resentments generated by the world market.
