GABlog Generative Anthropology in the Public Sphere

October 23, 2010

Self-Evidency

Filed under: GA — adam @ 7:05 pm

When we speak about the "arbitrariness" of the sign, someone usually hastens to add that what is meant by that is, of course, its conventionality. "Arbitrary" is the right word, though, for what is assumed: that the sounds we make in speaking the languages we speak could just as easily be any other sounds, with the evidence of this being the obvious fact that words for the same things differ from language to language, not to mention the enormous differences in grammar. The more you think about it, though, the more problematic the claim is—how in the world could we imagine everyone in a community agreeing to confer meaning upon a particular sound that in itself has nothing to do with the meaning it bears? The political implications of "arbitrariness," which we rightly associate with tyranny, are therefore relevant here: if the sign is indeed arbitrary, it could only be because it was imposed upon everyone by some oppressor. In this assumption about the sign, then, we can see the trajectory from Lockean social contract theory (Locke was a firm believer in the arbitrariness of the sign) to the contemporary Left—the arbitrariness of the sign, starting with medieval nominalism, and, indeed, social contract theory itself, were weapons against the assumptions about natural social order and natural law constitutive of Western Christendom. The arbitrariness of the sign is liberalism in linguistics and, in the end, liberalism (in the classical sense) has shown itself to share enough genetic material with the Leftism that succeeded it so as to leave it almost devoid of antibodies to fight the Leftist infection.

There is another liberalism, another Enlightenment, and another way of thinking about social agreement, though, which has been severely marginalized by the line leading from Locke through Hume and then Kant and Hegel (even the individualism of John Stuart Mill is ultimately derived from the German romanticism he imbibed through Coleridge). This other liberalism starts from the common sense philosophy of Thomas Reid, and can be followed through the American pragmatism of, at least, Charles Sanders Peirce, and is then strongly represented in the 20th century (in very different ways) by Hannah Arendt, Friedrich Hayek, Ludwig Wittgenstein and Michael Polanyi. The basic assumption shared by all of these thinkers is that we know far more than we know we know; that is, our knowledge is to a great extent, to use Polanyi's term, tacit—and not merely because we haven't yet brought it up to consciousness, but constitutively so. As the novelist Ronald Sukenick once wrote, "the more we know the less we know"—not only because knowledge continually opens up new vistas of possible knowledge, but more importantly because the ways we know what we know cannot be made part of the knowledge we make present to ourselves. Any "language game," disciplinary space, or idiom takes a great deal for granted in addressing itself to a particular, emergent corner of reality; if it tries to bring that taken-for-granted bedrock into sight (and we do this all the time) it can only do so in terms of everything else that is still taken for granted, including some new things that enabled us to turn toward this new corner. Do you know for sure that you are at this moment present on the planet earth, that you are surrounded by buildings, streets, other people, etc.? "Know" is a very strange word to use here, which is not to say that we can't really be sure—rather, what would be taken as "proof" that we are on the planet earth, surrounded by all those things? What would be better proof of this reality than the reality itself, as Wittgenstein liked to say? The question of how we know we are here, that we are ourselves, that we have bodies, that our senses integrate us into our surroundings, etc., is a very artificial one, but it's that kind of question that modernity (and the dominant strand of liberalism) started with—most explicitly in Descartes, but Locke's empiricism is ultimately no less corrosive of such self-evidency, as Hume revealed and Reid so forcefully demonstrated.

I have been writing much lately of mistakenness as constitutive of our linguistic and therefore social being, but it is equally true that there can be no mistakenness without certainty. I can only be mistaken in my articulation of an English sentence because I am certain that I am speaking English, not Chinese. If I'm completely out of place or out of line, it can only be because there is indubitably a place or line to be out of. Mistakes disrupt a scene because there is a scene to be disrupted, and we are certain that it has been disrupted; and while we can't be certain that it will be restored, we can be certain about scenicity, without which there would be no mistakes. My argument has been that rather than evidence of the fragility of our worlds, mistakenness can be treated as evidence of their solidity. Assuming the arbitrariness of the sign intensifies the sense of fragility—if our common use of signs has just been imposed through some kind of force, human or natural, and, therefore, must continually be re-imposed, then of course deviation is dangerous. (For leftists, meanwhile, the consequence is that the arbitrariness of the sign encourages one to see reality as "constructed," and hence infinitely malleable, in particular by those best at managing signs.) If signs, though, have an irreducibly iconic dimension, an iconic dimension that pervades every level of language, including semantics and grammar, then we just need to uncover the iconic meaning of a given mistake so as to bring it back within a reformed linguistic fold.

Isaiah Berlin, in his study of the determinist theories of history that undergirded socialist and communist politics in particular, made the point that you simply can't remove the terms referring to human intentionality and, therefore, responsibility, from social and political discourse without making it impossible to refer to anything intelligibly at all. "He killed them" can't be the same kind of statement as, nor can it be assimilated to, "E=mc²" or "historical development is determined by the forces of production." It's not just that such ways of talking are immoral or unjust; rather, it's that they are not really "ways of talking" at all, and therefore can't sustain themselves without inventing all kinds of crazy agents (like "history" and "society") which perform "actions" which no one has ever seen or would recognize if they saw them. As originary thinkers, we can now say that this is because declarative, propositional meaning is rooted in the ostensive and imperative domains. We notice mistakes, in fact, because we can notice that our attention has been misdirected, which in turn reminds us that our attention is always being directed by everything we experience in reality.

The iconicity of meaning can be traced back to the gesture. The originary sign had to be a gesture—it couldn't be imagined as a sound, or a line drawn on the ground. Gesture is embedded in what we call "context," that catch-all phrase we use when we reach the limits of our capacity to describe why something means how and when it does. A joke's funniness might depend upon one of the listeners being where he is, and not a couple of steps to the left—that's the kind of "contextual" effect we sum up with the phrase "you had to be there." Gestures are also self-evident, in the sense that, unlike propositional discourse, they cannot be replaced by their definition or explanation because they require the entirety of their "context." The self-evidency of gestures also means that any normal human, initiated into any linguistic system whatsoever, would be able to make sense, on a gestural level, of the actions of any other human, from any other linguistic system—at least insofar as the gestures of the other are directed toward herself. On the most basic level, even though the meaning of gestures of course varies widely across cultures, we could recognize signs of aggression or good will directed towards ourselves, even if those signs could also be used to deceive us.

Self-evidency, though, provides no support for Enlightenment optimism regarding universal communicability and amity. Indeed, self-evidency is also what radically divides us. The members of another culture who deceive me by exploiting my awareness of the meaning of their signs of peace are able to do so because within their own gestural system, inaccessible to me, they can signify that this naïve hick is ripe for the plucking. That is, to act in concert against me they need no dark conspiracy, no secret agreement—they know each other, and they know when one of them is welcoming an other with an excessiveness that communicates irony to them but not to me; they know how to follow each other’s lead in ways that I won’t figure out until it is too late, they know that anyone who might object to their scheme is far away at the moment, and so on. Of course, once they are through with me, I will be able to understand what they have done, if I am still around to do so. All self-evidency “proves” is that any attempt to impose a common idiom will generate idiomatic sub-systems resistant to control, understanding, or even detection.

What we can do is enhance and elaborate upon overlapping idioms and habits so as to create broader spaces of attenuated self-evidency—the fact that we can do so is what makes human equality self-evident, even while the attenuation of iconicity is what introduces what is called (by Michael Tomasello, among others) the drift toward the arbitrariness of the sign. The self-evidence of human equality lies in our ability to complement the inclusive drift toward arbitrariness with new modes of iconicity, within language and in our social relations. It is such a process that has brought us from the egalitarian distribution of the most primitive communities to the more expansive gift economy and ultimately to the market economy where the need for a single measure for value leads us to the relatively arbitrary universal equivalent of the precious metals—and, yet, what could be more iconic than gold, signifying wealth? (The arbitrariness of fiat currency, meanwhile, is arbitrary in the bad sense—it measures nothing but the will of the central banker.) It is also such a process through which we can try and move conflicts from the category of exterminationist opposition to war with rules and some notion of honor; from war to arbitration—or from criminality to civil law, and from civil law to friendly disagreements settled informally. And we can engage in such civilizing processes without succumbing to the delusion that any of these categories will ever disappear once and for all.

People only support icons, not arbitrary signs—an argument in favor of human equality in general is meaningless; what can be meaningful is a particular example of human equality at stake. (Which is why we will never get past the "distractions" and focus on the "real issues"; but, there's no need to worry because the real issues get addressed, always imperfectly and so as to produce new, and equally real, issues, through the distractions.) And icons can be incommensurable with each other, which is why there will always be conflict. Successful icons are those that provide a new ground for the struggle between icons, and those icons will have the character of rules in relation to the lower-level ones; or, more precisely, they will embody the kind of deferral and intellectual flexibility associated with rule-following behavior, while still being exemplified by individuals acting alone and together. How can we support egalitarian distribution in sites like the family or other institutions devoted to close bonding and comradeship, while ensuring that any individual within that compact group is free to enter the market society; how can we sustain the norms of honor and shame needed to produce individuals ready to protect market societies from the enemies they will always produce in abundance, without nurturing fatal resistance to market society within its very bosom—the answer to such questions will always come, if it comes at all, in the form of some representative of a provisional, partial solution.

But let's come back to the obvious: "dog" is "perro" in Spanish and "kelev" in Hebrew; ergo, the word can't have any intrinsic relation to the referent—the sign is arbitrary, case closed. Things must look this way for the linguist, with single systems of language, and the amazing diversity of the world's languages, laid out in front of them; and to the naïve language user, compelled by such examples to take the linguistic perspective. The fact that when a speaker of English says "dog" it rather self-evidently refers to the animal in question, that "doggy" seems to "fit" the specific animal we feel affectionately towards, seems to be a pretty slim counter-argument. But there could never have been a point at which the word "dog" was imposed upon an acquiescent community of language users; the word was always firmly embedded among all the other words in the English language, and the languages English in turn evolved from, and if there was a first time the word's ancestor was used (there must have been, right?—we are committed to at least that assumption), then it was used in such a way as to best ensure its referential capacity and memorability; or, if the choice was random, if it worked, it was remembered in such a way as to do so. And there never could have been a time when it exited that orbit of self-evidency. The systematicity of language—the fact that words don't stand alone, but take on their "value" from all their interrelationships with other words (so, "dog" takes on its meanings from its distinction from "cat" and "mouse" on the one hand, from "wolf" and "fox" on the other, and from more specific terms like "poodle" and "German Shepherd" on yet another)—makes the point even stronger—at no time was any word or "lexical unit" outside of the linguistic world experienced as a whole, a linguistic world itself always in direct contact with the real one, via the ostensives and imperatives which embed us in that world—and, anyway, the sound symbolism of language can be every bit as complex as the semantic and grammatical systems: we can assume here as well, not a one-to-one correspondence between single sounds and dictionary-style meanings, but overlapping and interconnected connotations, which in turn interact with semantics and grammar in various ways. To address the argument for arbitrariness head on: the claim that linguistic signs would imitate, in their formal character (the articulation of sounds comprising them), those things they refer to or those events they aim at generating doesn't imply that there should be only one language—why wouldn't there be as many ways of "interpreting" what "sounds like" "dog" as there are ways of interpreting any complex text? It would be better to speak of a drift towards abstraction, rather than toward arbitrariness: the sign is abstract, even the first one, which had to be normed in such a way as to supplement its self-evidence from the very beginning precisely because there was no single way of conveying the intent to cease and desist. But even in this case the abstractions we speak of are marked by the drift, by the disciplinary spaces that have constructed them: in other words, abstraction involves accentuation and abbreviation, which is necessitated by the entrance of outsiders for whom the version of the sign currently in use is not self-evident, while at the same time making the sign even more difficult for the next outsider to grasp. The abstraction of the sign, then, represents the disciplinary space (the shared inquiry into how to modify the sign so as to fit it for its new purpose) iconically, creating privileged and typical (unmarked) users, enabling the sign to attain self-evidency throughout the community.

I feel a strong need for a name for the politics of this marginalized liberal tradition, and the word "liberal" is not worth fighting over any more—especially since you'd have to fight the leftists who still use the term, the rightists who won't give up on using the term to describe the leftists, and the libertarians who are very interesting but ineffectual semi-anarchists. The term I have been using on and off, "marginalism," isn't bad, but it sounds vaguely "oppositional," and suggests a reactive rather than comprehensive politics. I would like to derive a name from the rereading of "arbitrariness" I am proposing here, which sees the arbitrariness of the sign as a kind of secondary iconicity, a commitment to the iconicity of the sign that realizes that we can only rely upon the icons generated through the scenes we constitute. Icons lose their primary self-evidency when outsiders arrive who don't use the sign properly, both because it isn't self-evident to them, having their own self-evident semiotic system, and because ensuring the self-evidency of the sign to the primary community has made it idiosyncratic, or idiomatic. It is precisely this idiosyncrasy or idiomaticity that is, simultaneously and paradoxically, the ground of self-evidence: the shaped, complexly marked nature of the idiomatic sign is what makes it learnable through immersion in the scene. Common sense is, in turn, the meeting ground of these idioms, the discovery of overlappings.

I have thought about "plurality," not in the sense of a diversity of ideas and lifestyles (pluralism) but in the sense of fundamental incommensurabilities in any community which tempt us to violence but can facilitate rather than interfere with living together. I want the sense of "sampling" that Charles Sanders Peirce associates with inquiry (any knowledge is knowledge of the relation between the proportions in a sample and the proportions in a whole)—the notion one can derive from the icon (not necessarily Peirce's) of a continuous sampling of possibilities in any event (when you try something the first time, what's the proportion of visible supporters and opponents; and then the second, third and fourth times?) can ultimately lead to the conclusion that the generation of samples is itself the event. Politics in this case is about thinking and knowledge, but not knowledge which then guides politics—instead, the politics generates knowledge which can only be used within political action, as the provisional articulation of our tacit knowing. Alongside "sampling," I considered "a politics of proportion," which shares with "sampling" the relation between parts and the whole, while including the word "portion," which reminds us of politics' relation to dividing and sharing in some "equal" manner, and suggests a notion of politics as balancing and inclusive while still being interested, inevitably, in one's "portion." But "plurality" seems like a way of describing politics from the outside, from within thinking, and sampling is too "experimental," by itself, suggesting the progressive sense of a "scientific" politics; "proportion" has the same problems, while another idea, "partiality" or "particularity," evokes partisanship and identity politics rather than the notion of a whole that not only exceeds but can only be grasped through the parts which we are.

What I have for now, and will try out, is the neologistic (according to Merriam-Webster, neologism is either "a new word, expression or usage" or "a meaningless word coined by a psychotic") "anyownness"—"any," or "one-y," evokes (for me, via Gertrude Stein) singularity but also plurality, since anyone is as any as anyone else; "own" replaces "one" (which is redundant here anyway), and can suggest one's property, one's ownership of oneself prior to and as a basis for property, the opacity of any's "ownness" to others; I hope it can suggest that one's ownness, one's singularity and property, is ("constitutively") bound up with that of others, hence maintaining the notions of proportionality, sampling—and marginality, in the specifically economic sense, i.e., that infinitesimal point at which one's (or anyone's) "weight" on a particular "scale" tips that scale in the opposing direction. A politics of anyownness, or of the anyown, then, is a politics of motivatedness: nothing is arbitrary, nothing is simply imposed, everything is exemplary and abstract, anyone can be the marginal representative of idiomatic common sense.

So, Next: The right of the anyown

October 10, 2010

A Sapir-Katz Hypothesis

Filed under: GA — adam @ 5:05 pm

We all know about the Sapir-Whorf hypothesis (and if you don't, you can google it)—it's really Whorf, who was a student of Sapir's and greatly expanded a couple of much more tentative suggestions from Sapir regarding the relations between language, thought and culture, who is responsible for the notion that grammatical structures influence thought and culture to the extent that incommensurability arises between different languages and, through them, between ways of thinking and cultures. Contemporary linguists seem to treat Whorf's hypothesis as a kind of piñata, as if to see who can smash it most decisively, and it's easy to see how vulnerable the once thrilling idea was: supposedly, the Hopi had no grammatical means of distinguishing tenses, and therefore, rather than sharply distinguishing, as we linearly minded Westerners do, between past, present and future, they see reality as an ongoing "process"—a perspective which, Whorf went on to claim, made their way of thinking marvelously compatible with the space-time of relativity theory. Linguists, I think, like the fun you can have with this, drawing upon their knowledge of the remarkable diversity of grammatical structures and peculiarities of the world's languages to suggest various bizarre cultural and intellectual forms by way of refuting Whorf. I can play too: English verbs have no future tense—we "normally" use "will" as an auxiliary with the verb we wish to place in the future, but we also often place a time designator in a regular present tense sentence to indicate futurity (I arrive tomorrow; I'm going to be there soon; we meet at 7, etc.)—so English speakers must be incapable of thinking coherently about the future: we are, depending upon your cultural tastes, doomed to be improvident wastrels, or happy-go-lucky live-for-the-present types. But, of course, you can reverse all this, and say that precisely because we have no future tense, complacency about the future is forbidden us—we are more mindful of the various ways the future impends upon the present because we must devise all kinds of novel ways of referring to it. Or, how about the fact that in English the present tense doesn't really refer to the present, that is, to something that is happening right now—at this moment, I do not "write" this sentence, I "am writing" it—that is, we use the present continuous. So, are English speakers more conscious of the incomplete nature of the present, or of the distinction between things we do habitually and what we are doing at the moment? Where would we go to even begin to explore such "hypotheses"? While a science fiction writer might be able to do wonders with this kind of thing, it doesn't take us, as cultural theorists, very far.

But language must be bound up with thought and culture, and we must be able to describe thought and culture with linguistic and semiotic vocabularies—what else are thought and culture comprised of if not words, sentences and signs? You won't get anywhere exploring these relationships if you are focused on what obsessed liberal intellectuals from the turn against imperialism and the (re)discovery of native peoples early in the 20th century (itself a victimary development of Romantic theories of nationality and ethnicity) until today: asymmetrical differences between cultures. But how about differences within languages? Any idiom creates a new way of thinking, a way of thinking possible only within that idiom—until the idiom is normalized and made readily convertible into other elements within the language. In other words, the point is not the inherent properties of language; rather, it is, first, the possibilities for invention inherent in language, and the certainty that new desires, resentments and loves will demand new idioms of expression; and second, the incessant change undergone by language, which normalizes idioms and idiomizes norms, thereby creating new resources for expression. Slang words like "cool" (amazingly still going strong) and "groovy" (hermetically sealed within the idioms of the 60s and very early 70s, and incapable of revival due to the demise of the technology it was predicated upon) are obvious examples: for at least a cultural moment, in a particular cultural space, these words enabled people to say, and therefore think, something that couldn't be said or thought any other way. But then they become subject to mockery ("groovy" is more likely to make people think of The Brady Bunch than of Woodstock) and extension ("cool" seems to me to have become, in many instances, more or less synonymous with "OK"), and are easily translated into other terms. At this point, there's nothing you could say or think with "cool" that couldn't be said or thought more inventively or nimbly in many other ways; and you couldn't speak or think with "groovy" at all.

Once we say that idioms provide a new way of thinking, we can say the reverse: to create a new way of thinking is to construct an idiom. Ethically and intellectually, this would mean that my obligation in some new situation is to construct an idiom adequate to it: a means of mediating the articulation of desire and resentment especially threatening in that situation. Idioms construct habits: the best idioms are what Peirce called the "deliberately formed, self-analyzing habit." Idioms and habits refine and direct resentments: let's say that you decide, in a particularly tense social setting that you can't avoid, that you will counter every expression of hostility you encounter by restating it in literal, atonal indicative sentences: in other words, you will "translate" rude imperatives, hostile rhetorical questions, interjections, sarcasm, etc., into something like: I understand that you would like to take a break now. That would be an idiom—it would get noted, ridiculed, admired, imitated (perhaps involuntarily), revised, elaborated, and so on—others would have to respond to it in some way, leading to further developments within the idiom. They may want to speak with you about your idiom—can your idiom handle that conversation? Will you draw them in or will they draw you out? It may not work—it may send resentments spiraling out of control by appearing robotic, or deeply sarcastic itself—but then some other idiom will, or nothing will (some situations are beyond saving). The point is that you would be thinking in terms of inventing and experimenting with idioms, with rules that could be at least tacitly recognized. The more deliberately you construct idioms, the more attentive you become to potential materials for such construction: accidents, mistakes, surprises, on the one hand; places where communication and amity seem to be breaking down, on the other. After a while an inventory of possible idioms evolves, and the ability to improvise, to redefine an idiom in the middle of things, emerges.

In fact, I have been experimenting with such an "indicative" idiom for a while—I first discovered its ancestor many years ago, in a situation where I had to provide academically acceptable answers in a highly politicized and hostile (and, for me, rather high stakes) setting—what I discovered is that, no matter how snide and sarcastic your questioners are, in the end they need to ask a question; you can then carve that question out of the fog of vicious innuendo, restate it, and answer it. A primitive version of the idiom has helped me often since, but lately I have been working on systematizing it: writing without interrogatives or imperatives, or even disguised imperatives like those lurking within words like "should," "must," and so on. This then forces you to rewrite a sentence like "we should do that" in a way that commits you to representing an actual event: not doing that will likely involve us in the following difficulties. We can try out other rules, perhaps in controlled ways: staying within the present tense leads us to fold all consequences into their present possibility; eliminating conjunctions takes away additive and oppositional habits of thought; or, eliminating conjunctions turns additive thinking into a search for degrees and thresholds; such elimination simultaneously tends to make opposites mere differences. And every few paragraphs, or according to some other division, suspend all the rules (why not?), because you must let go on occasion, creating a veritable carnival of forbidden terms.
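The rules of such an idiom are mechanical enough that one could imagine a toy checker for them. The following is only a minimal sketch (in Python; the word list and function name are illustrative assumptions, not any existing tool) of what flagging interrogatives and the "disguised imperatives" named above might look like:

import re

# "Disguised imperatives" in the sense used above: words that smuggle a command
# into an apparently indicative sentence. The list is illustrative, not exhaustive.
DISGUISED_IMPERATIVES = ("should", "must", "ought", "need to", "have to")

def check_indicative_idiom(text: str) -> list[str]:
    """Return a list of rule violations for the 'indicative idiom' sketched above."""
    violations = []
    # Naive sentence split on terminal punctuation; good enough for a sketch.
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        if not sentence:
            continue
        if sentence.endswith("?"):
            violations.append(f"interrogative: {sentence!r}")
        lowered = sentence.lower()
        for marker in DISGUISED_IMPERATIVES:
            if re.search(rf"\b{marker}\b", lowered):
                violations.append(f"disguised imperative ({marker}): {sentence!r}")
    return violations

if __name__ == "__main__":
    sample = "We should do that. Not doing that will likely involve us in difficulties. Why not try it?"
    for violation in check_indicative_idiom(sample):
        print(violation)

Run on the sample above, the first and last sentences get flagged; what passes unflagged is precisely the sentence that commits itself to representing an actual event.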

I have referred in previous posts, a couple of times already, to one of Marshall McLuhan's axioms that I find compelling: the content of any medium is another medium. Meanwhile, a reading of G.A. Wells's The Origin of Language: Aspects of the Discussion from Condillac to Wundt (a book I happened to come across in a used bookstore) crystallized for me the assumption that a meaningful world of ostensive gestures must have preceded speech (I am still thinking about whether one can imagine imperatives and even declaratives emerging within a purely gestural world, but for now I am assuming a realm of ostensivity). Language is, then, primarily iconic, as gestures would mostly be: aborted actions, as was the first gesture; and, then, exaggerated actions, simulated actions responding to other simulated actions, and so on. From the beginning, though, we can assume a drift toward the arbitrary, as there are always many ways of conveying an incomplete action, and gestures would take shape in accord with the habits of a community, and groups within communities—outsiders would not be able to treat them as self-evident and would need to be taught how to use the signs. There is even an irreducible element of arbitrariness on the originary scene itself, as the sign that prevails will be the one that works, not necessarily the one that is closest to a Platonic ideal of a gesture of aborted appropriation.

If human beings are deliberately and increasingly skillfully imitating each other in meaningful ways—ways that create new shared objects and means of appropriating and distributing them—then it seems to me reasonable to assume they will be imitating other things in the world as well. Once we admit this assumption, then all those ridiculous theories of the origin of language that have long ago been dismissed, from onomatopoeia, to imitating the cries of animals or the blowing of the wind, to stylized cries of pain and pleasure, become a lot more plausible—as the origin of speech within an already existing gestural world. The sounds that ultimately get combined into words would also, then, have iconic roots, which would support arguments for "phonosemantics," or "sound symbolism"—the argument made most audaciously by Margaret Magnus (http://www.trismegistos.com/MagicalLetterPage/) that the meaning of words is tied to their sounds. Sounds would initially be made to accentuate a gesture, and then to supplement it when the gesture could not be seen—aiming, then, at the same effect as the gesture. In that case, the content of speech is gesture, just as the content of writing is speech. Speech would take over vast domains of human communication first covered by gesture, while at the same time incorporating, embedding itself within and expanding the realm of gesture—and, in the end, only being meaningful in terms of gesture. By gesture, I mean all the ways human beings coordinate with each other spatially—architecture is gesture, the fact that we face each other when we talk, and generally stand a few feet apart and never, say, three inches apart (unless we are lovers)—all this, and much more, is gesture. Speech is always about the possibility that we could have something in front of us that we could orient ourselves towards together.

But what is the relation between form and content other than one of inquiry—in the sense that the "content" of the originary scene is the repellent power of the object, and the "formal" gesture is eliciting that power, which is to say seeking it out, distinguishing it from everything else in the world, and "measuring" and "broadcasting" its effects? Roman Jakobson makes the argument upon which I am modeling this one: he contends (like David Olson) that the invention of writing reified speech and "language," turning it into an object of inquiry—in the case of alphabetical writing, an inquiry into what the smallest representable "units" of language were. Jakobson then suggests that this linguistic "atomism" was the source of the scientific atomism that predominated in Greek philosophy—if language could be divided into minimal units, why couldn't anything be subdivided into its most minimal units? For Olson, the problem of writing is to supplement all the elements of speech that make understanding possible—gesture, of course, but also intonation and other elements of the speech situation. So, whole new vocabularies emerge as a result of writing—a word like "assume," for example, as in "he assumed they were lying," would be unnecessary in speech, because there would be other ways of showing someone's attitude in reporting their speech in a spoken manner—most obviously, imitating the way they spoke (in a questioning manner, say). The word "assume," then, like a word such as "suggest," is among the means and results of an inquiry into linguistic interaction that is prompted by the invention of writing. Speech, then, is likewise a mode of inquiry into gesture, as gesture is itself a mode of inquiry into "elemental" desires and resentments.

I have also applied McLuhan's axiom to the elementary speech forms, and would like to update that account. An imperative, then, is a mode of inquiry into ostensivity—not only that, of course, because if you are issuing an imperative you do want the thing done (just as if you are writing you are writing about something and not just inquiring into the operations of speech)—but an imperative attends from the absence of the object to the possibility of its being made present. An imperative is also an inquiry into the effects of tone and gesture (it needs to be loud enough, but not too loud, "authoritative," it's better to be standing or leaning forward, but sitting back in a chair might be a way of testing the intangibles of authoritativeness as well…), all elements of ostensivity. Indeed, the imperative might be seen as inquiry into the iconicity of the person. And like any inquiry, it originated in some uncertainty regarding the object in question. Similarly, the interrogative is an inquiry into the imperative—it marks the unfilled character of some demand or command, and unmarks the possibility that it will be fulfilled; the question attends from the expectation of a demand supplied to the disappointment of that expectation, and then from the prolongation of that demand to some anticipated location in reality whence the reformed demand might yet be satisfied. Inquiry is an act of marking and unmarking—when we are converging on the object, the object is marked for destruction, but once the sign is issued we attend, first, from the sign to the object, unmarking the formal sign and sharing our marking of the object; and then, second, we attend from the object to each other, thereby unmarking the object (which is to say unmarking everyone's defense of, resentment on behalf of, the object) and marking our own now evident, because naked, desire for the object and resentment toward the others. Signs are unmarked insofar as they single out portions of a reality that in turn marks as partial those acts of singling out. Just as portions of reality can be marked by signs, signs can internally mark parts of themselves, which really involves marking some prior use of the sign while unmarking the sign itself. Sign use, language, is always inquiry insofar as it is always prompted by some portion of reality, and the signs which have zoned off that portion, having moved from an unmarked into a marked state, and the need to restore the relation of (un)markedness.

The declarative, then, is an inquiry into the resolution of the state of uncertainty (and “patience”) unmarked by the question, marking its continuance and unmarking what would ultimately be the articulation of imperatives and ostensives that would resolve it. The sentence, then, unmarks whatever the question marks, a reality that exceeds the scope of the question: if this one were to move a bit this way, and the other a bit that way, and another were to look over there and promise not to move, etc., the uncertainty would be resolved—all those acts marked as uncertain by the question are unmarked as embedded in reality, as commanded by reality, in retrievable ostensive-imperative articulations; and the sentence can, in turn, mark and return to the domain of the question any of those articulations, which is to say, who observed and did what to make the event represented in the sentence and the event of the sentence itself possible. Inquiry, then, is the process of allowing anything on the scene to be marked or unmarked; representation is a solid state of un/markedness. The sentence articulates an event by mapping another event: where before there were increasingly marked (or potentially increasingly marked) convergences of desire and resentment, questions in danger of relapsing into commands, commands into the attempt to grab something, even if not what was originally desired, there is now an event with participants upon a scene everyone can identify and inhabit, however tacitly or indirectly. They can attend from their own scene of tribulated conversation to the scene presented by the sentence, and from the scene represented by the sentence to their own participation on the scene of speech, a participation now framed in terms of words that might match desires and resentments.

An idiom, then, creates a space of inquiry, and spaces of inquiry let things be, and suspend us in observance of those things; an idiom allows us to negotiate its own terms, guaranteeing that we will share the same space as we do so. The fleshing out of an idiom will entail its embodiment in gesture, speech and writing, and allow for certain norms regarding the issuing of ostensives and imperatives. The indicative idiom I have presented may be more weighted towards writing, but for that reason might have striking effects in speech situations; it might suggest minimal gesturality, but minimal gesturality might be maximal in its meaningfulness. Imperatives would be left largely implicit in such an idiom—an overt imperative would be heavily marked—but since the imperative space will be fulfilled one way or another, learning such an idiom would mean deducing imperatives from representations drained as much as possible of all resentments other than those directed against over-invested representations of reality. Above all (an indicative idiom would rule out phrases like "above all," which tell—command—the communicant how much importance they "should" give to one claim over another), idioms inspire the invention of other idioms, in this case perhaps an imperative-centered one that introduces equivocation into explicit imperatives.

A sign presents, bears with it, involves a scene; a sign also represents the results of a completed scene to those who weren’t on it. You might think about the difference between the working out of a shared sign on the spot, and the teaching of that sign to others, once a consensus on its shape and use has been decided upon. Each sign contains both dimensions, but in differing proportions. In presenting, in inquiry, the preliminary marking of the ultimately unmarked is enacted; in representing, that preliminary marking is unremarked upon, and the (un)markedness of the system and its elements appears ready made. The generation of idioms aims to tilt the proportion more towards presenting than is ordinarily the case, to mark more elements of language so as to make them available for future unmarkings.

Along with formalizing our own incessant idiom generation, we can construe others in terms of their tacit idioms. Insofar as you can work with someone's idiom, obeying and extending its rules, you have granted them a right to speak within a particular discursive space. There is no reason to tamper with the basic rights conveyed to us from Enlightenment politics and, in the U.S., the U.S. Constitution—free speech, free assembly, the right to due process, to bear arms, and so on. But rather than reducing all political discussions to these rights, which means they either get stretched and distorted or become irrelevant, and rather than leaving talk of rights behind and allowing bureaucratic expansion to proceed by way of "non-ideological problem solving," we can grant a kind of pragmatic, subsidiary right to idioms. Instead, for example, of a Supreme Court-delivered "right to privacy" based upon an incoherent reading of the 4th Amendment with the penumbras of a couple of others thrown into the mix, why not recognize the idioms in which women speak with and about their relations with their doctors, bodies and intimates, and identify (and argue about identifying) some boundary beyond which laws shouldn't pass—and then, rather than forbidding all laws that transgress that boundary, bring that argument into the debate over laws? We would then be using "right" in a more informal way, in the way you say to someone, "you have no right to speak to me like that!" (like what?: in some idiom, no doubt), but the use of the same word can ensure continuity with more "fundamental" uses of the concept. Such idiomatic uses of "rights" recall the origin of the term in the older, medieval notion of "privileges," which associates rights with honor within a gift and Big Man economy—and something like honor is what is usually involved when we say "you have no right to speak to me/treat me like that!" We can give linguistic if not legal heft to our intuitions that the media, for example, have no "right" to investigate the children or cousins of candidates for office, and we can embed impoverished contemporary shibboleths like "privacy" with articulations of right and obligation implicit in terms like "modesty," "reticence," "shame," "respect" and other terms reflecting our tacit knowledge of social boundaries and the individual attitudes and aptitudes required to preserve them. There is a kind of extremism, found in some versions of libertarianism in particular, that sees other modes of exchange as competitors to the market mode, and it is that kind of extremism (reinforcing the leftist extremism that wants a reduction to a bureaucratic reinterpretation of "rights") that wants to drive out all ways of adjudicating conflicts other than through "rights"—but a healthy free market would be based upon a healthy informal gift economy, and allow for transit back and forth between the two—and even encourage us to go back to the primitive egalitarian distribution found in families and other groupings (like sports teams, for example). People with a complex sense of "their own," and with sophisticated idioms for parsing "ownness," will be all the better prepared to enter the global market economy.

Anyway, why "Sapir-Katz"? Partly for the symmetrical displacement of "Sapir-Whorf," but that is only possible because Edward Sapir did, in fact, have a more subtle understanding of the relations between language, thought and culture than Whorf, and has helped to suggest, for me, the possibility that the construction, through various means, of idiomatic shifts within the language provides new pathways for thought and culture. But that's enough for now.
