GABlog Generative Anthropology in the Public Sphere

April 7, 2009

Basics

Filed under: GA — adam @ 12:05 pm

We are in the steepest economic downturn since the 1930s, in or verging on another Great Depression; and, yet, there are signs daily that we might be starting to emerge from the recession by Spring 2010.  I wonder if the Left, now that they are in power, want to play the same kind of game they accused Bush of playing with the War on Terror:  it’s an emergency, requiring vast expansions of government power, and yet daily life goes on as normal, aside from the occasional color-coded alerts, which I am certain not a single person paid the slightest attention to.  And, to tell the truth, they had a point:  if the War on Terror, or against Islamic supremacism or radicalism, was indeed the highest calling of our generation, why are the Iranian mullahs not only still in power but about to obtain a nuclear weapon?  Why is Pakistan likely to become the first nuclear-armed jihadist state–or, at least, no less likely now than before 9/11?  Why are the Saudis still riding high, receiving obsequious bows from our new President?  In the end, did Bush take all this any more seriously than the Democrats take the financial crisis, which is clearly nothing more than an opportunity for them to pass a wish list of Great Society programs along with a lot of good old-fashioned graft which they had kept on hand for the moment when they would finally be free of the dead end of Republican rule?  “Subjectively” maybe he did, but wouldn’t that simply mean that, with all the good will in the world, an assertive strategy of expanding freedom throughout the world is simply impossible, for reasons I am not able completely to explain?  Even more, one would have to say that things like policies and strategies are really impossible–what is impossible, that is, is anything that would subordinate procedures and the news cycle to some externally determined purpose.

I wonder if I am the only one dissatisfied with the thinness of accounts of the economic “crisis”–it is almost as if no one considers themselves obliged to explain exactly what the “freezing of credit” or whatever is anticipated would mean to billions of people.  My own credit is perfect–so, will no one issue me a credit card, or give me a mortgage or car loan?  Nobody?  Are they sure?  How can they be?  Will no one loan to anyone else?  No one?  To anyone?  I think we really need to ask the questions in this way if we are ever to get straight answers; or, failing the straight answers, awkward, obfuscatory answers that are relatively easy to decode.  Because I suspect what they really mean is that only people with genuine reserves will be able to lend, and only to people with proven track records of making money and paying back loans; and, who knows, maybe only to those people with a reasonable business plan or documentable source of income.  No one, it seems to me, can prove that such activity won’t continue, and in that case the real “crisis” is that things can’t go on the way they have.  To put it bluntly, people will no longer be able to lend money they don’t have to people who won’t be able to pay it back, and people won’t be able to buy, again with money they don’t have, “assets” that represent only possibilities based on speculative accounts of future economic developments.  As far as I’m concerned, that’s a blessing, not a crisis.

GA supports, as we all know, the free market as the most effective mechanism for deferring and recirculating resentments yet invented by human beings–indeed, we can’t really see anything beyond the free market, which I imagine is why Eric Gans finds Fukuyama’s End of History Thesis so compelling.  But the market GA supports is one in which everyone takes on their share of deferral, participating in the Weberian “Protestant ethic”–only with such ethical support will the market economy not end up generating more resentments than it can bear, in turn calling into being monstrous leftist agglomerations of resentment that will take down the system.  But the cultural contradictions of capitalism leave their marks on any originary analysis as well:  what if the intensification of social antagonisms requires that we abandon the Gold Standard, institute the welfare state, lower interest rates so as to artificially sustain growth, etc.–on what grounds can GA argue against any one of those resentment-lowering innovations, and in favor of a “pure” market system?  People have to defer the rivalries gathering force around them, and they can’t leave those immediate conflicts untended in the name of the abstract system which we are one day sure to find is best for deferring conflicts.

But perhaps we can make an argument within GA for a return to beginnings, for periodic refoundings, re-constitutions of our own singular versions of the originary scene.  The century of Progressive depredations upon our constitutional order (which have been legitimated through association with the one genuine improvement to that order, the inclusion of black citizens) has given a great deal of credit to the notion put forth most forcefully by Marxism–that capitalism produces social conflicts beyond the capacity of the minimal state and individualist culture to handle–the assumption, that is, that the lower orders will have to be bought off perpetually in the name of social peace.  Even now, one can sense the terror in our rulers lest a real recession ruffle the surface of our social life for a couple of years–something, anything, must be done to prevent that, or at least make its effects tolerable through redistribution or the flat creation of money out of nothing.

So, who is willing to bet that if unemployment goes up to 10%, to 15% or higher, if most of us have to give back our second cars and not buy that third TV set or PC, if many families will have to live on one income for a while, etc., we will, nevertheless, not start slitting each other’s throats or forming militias and laying siege to the capital or the home office of Citibank?  Who is willing to bet, indeed, that local lending institutions, mutual assistance organizations, patronage of local businesses, charities, and other spontaneous forms of self and other help, will fill in the gap?  This would be a version of the market as well, if a more embedded one.  Indeed, could it be that that possibility is just as frightening to our rulers as the nightmarish visions of social collapse?

In the interest of bi-partisanship, I will apply the same logic to the threat of Islamic terror–why not, since I am forced to conclude that we (the collective “we” of our state institutions) are incapable of addressing victimary blackmail outside of obeisance to collective international norms which have themselves been corrupted beyond repair by such blackmail.  I must painfully acknowledge that the approach I will recommend won’t work for the Israelis–we may be close to the point where the attitude towards Israel of its friends will take on the character of rescue rather than support.  Unfortunately, the Israelis themselves, albeit in enormously difficult circumstances, have not shown much capacity of late for avoiding the suicidal paralysis we have succumbed to–they can much less afford it, though.

So, let’s accept Obama’s understanding of the US as one nation among many, with nothing exceptional about it at all.  The protection and freedom of others is their problem, not ours.  Indeed, perhaps we are returning to the natural state of the American republic, a state interrupted by the exceptional threat posed by world communism.  In that case, our political energies should be directed towards a withdrawal of American troops from the rest of the world–let’s begin with the places where they obviously don’t serve any purpose anyway, and where it is not at all clear they are wanted:  Europe, Japan and S. Korea, for starters.  We should work on withdrawing from the UN as well, on having that wretched institution removed from American soil, and on repudiating all international agreements that might infringe on our sovereignty.  The world market will no longer have its policeman, and will no doubt fall prey to all kinds of pirates; we, though, can build carefully constructed bi-lateral relations with specific nations–relations outlining very clear reciprocal duties and benefits, both economic and security-related.  Let anyone who does want our protection request it and offer something tangible (not “stability”) in exchange.  Similarly, we should outline very clear forms of deterrence, also on a nation-to-nation basis.  Perhaps such nation-to-nation alliances will lead to networks of alliances, new collaborative institutions, with the obligations of all involved to be carefully clarified at each stage.  (At the same time, private associations of individuals might form their own alliances with citizens of other lands, friendly or hostile, willingly taking the risk that the US government will not be able to back them if they get into trouble, and hoping to convince their fellow citizens to take that risk.)  And, since the world will clearly become a much more dangerous place, border security must become an absolute priority, one which we will now have the military resources to attend to.  Such a political program would intersect with a movement to restore our constitution, and would cut against the grain of the ruling Left’s transnationalism in some very effective ways.  We would consistently be on the side of austere, focused, fair and accountable policies against flabby, diffuse, easily corruptible ones.  We could constantly be exposing and explaining, very clearly tagging particular policies as in accord with or opposed to some definable American interest–most obviously wealth-generating activities that also increase our energy security, like drilling for oil and building nuclear power plants.  And we could keep things very simple–for this, against that; for the local, the national, the productive, the friend; against the transnational, the parasitical, anything that fetters free activity, the duplicitous pseudo-ally.

The market is always tainted by thousands of political decisions, attending to this or that resentment that has emerged through the marketplace or the latest adjustment made in response to a previously expressed resentment.  It gets to the point where the sign is obscured beyond recognition–what does it mean to be a participant in the marketplace, once any economic decision is bounded on all sides by government moralizing, hectoring, bullying, helping?  At that point politics has to be about clarifying what we are all loyal to:  what are our basic signs and events?  Certainly individual citizens should avoid at all costs confrontations with Leviathan–but I don’t think that bureaucracies are going to get any smarter, and there must be all kinds of ways of outwitting and neutralizing them, of privatizing what the state would like to control, even in foreign policy.  The politics I am arguing for here is one of going first–asserting the reality of some sign by acting on it and inviting others to gather around it, working on several levels–local self-help, movement for constitutional amendment, lobbies for attacking the most unpopular and vulnerable entangling alliances with transnational bureaucracies and regulations, actual friendships with lovers of freedom abroad, radical and yet reasonable claims for abolishing dysfunctional institutions like the CIA and State Department–while being prepared to work small, under the radar, or go mainstream in the midst of the chain of crises that is surely coming, that has perhaps already begun.

March 28, 2009

Originary Grammar and Post-Sacrificial Semiotic Agency

Filed under: GA — adam @ 3:40 pm

Here’s a paper I just read at the American Comparative Literature Association conference:

If the post-colonial is located within the common, if asymmetrical, mimetic space including colonizer and colonized, then we can speak about the post-colonial as the post-sacrificial.  Once mimicry rather than violence characterizes asymmetrical social and cultural forces, and the practices of hybridization supplant the absoluteness of violence, neither dominant nor resistant forces can coalesce around a single, central figure whose death would provide transcendent meaning.  Sacrifice marks the limits of mimesis:  some figure can be presented as the origin of mimetic contagion and crisis at the boundaries of the normalizing practices of a community.

 

By “sacrifice” I mean, for my purposes here, human sacrifice, that is, scapegoating.  For Rene Girard’s mimetic theory, scapegoating lies at the origin of language and the human:  in Girard’s originary scene, a mimetic crisis within the group is resolved by singling out, arbitrarily, some member of the group, who is then “lynched” and subsequently deified as the one who has brought peace to the group.  According to Eric Gans’ reconfiguration of the originary scene, though, the mimetic crisis Girard identifies need not be resolved violently—in fact, the originary sign which founds the community must be a deferral of violence, even on Girard’s terms, since the lynching itself would only resolve the crisis through some shared sign that could in that case have come in place of the violence.

 

For Gans, indeed, scapegoating and human sacrifice are introduced much later, in the wake of the organization of communities around the so-called “Big Man,” who centralizes the distribution of resources.  Once the origin of goods is centralized, in other words, so can be the origin of contagion.  The Big Man both defers violence by deflecting resentments within the community and becomes a pole of mimetic attraction, and hence of a more focused violence.  This origin of scapegoating accounts for its operation:  an intensified attention directed toward some figure, with the more comprehensive mapping of that figure identifying ever more intentional, malevolent, insidious and systematic betrayal of the norms of the community.  For Girard’s Christo-centric version of mimetic theory, the sacrifice of Jesus exposes the falsity of scapegoating along with our universal implication in the practice:  in an analysis not at odds with Gans’ version, Jesus teaches the primacy of defense of those who would be scapegoated, and for this he is scapegoated by all.

 

Modernity inherits, while substantially modifying, this essentially victimary understanding of sacrifice and violence:  violence against the “other” is understood as a kind of scapegoating that implicates us all, and that will ultimately destroy us all if not deferred in some way.  The anti-colonial theory pioneered by Fanon, Cesaire and others is ambivalent insofar as it calls both for a kind of purifying, sacrificial violence predicated upon an absolute asymmetry and for modes of symmetrical symbolic interaction that would negate the conclusiveness of such sacrificial violence.  The post-colonial theory of Bhabha, Spivak and others makes this ambivalence central to and constitutive of theory.  If we are post-colonial, then, it is because we recognize and resist asymmetries without claiming we can register or transform them in any systemic way; because we are implicated in resentments implying the possibility of what would now be monstrous sacrifices without believing in the apocalyptic or millennial resolution promised by those sacrifices.

 

The emergence of writing, while of course not coinciding with the creation of the stark social divisions of colonialism, can nevertheless be usefully mapped onto those divisions as follows.  The logocentrism Derrida associated with Western metaphysical modes of thinking, along with all of its distinctions between rational and irrational, civilized and barbaric, and so on, can indeed be predicated upon modes of thinking produced by writing.  David Olson, in The World on Paper, contests the commonsensical assumption that writing was invented as a way to record speech.  This assumption itself presupposes that speech was already understood as made up of the units—separable sounds, syllables and words—that writing then simply reproduced.  Olson rejects this assumption, contending instead that writing began as a separate sign system of its own—for example, in the use of tallies for record keeping—and then became applicable to the recording of speech once it acquired its own “syntax”:  for example, one sign for “3” and another sign for “sheep.”  Writing, rather than being modeled on speech, itself becomes the model upon which speech is then understood.  The separation of language into discrete and combinable units is, then, the product of the application of writing as a model to speech.

 

For Olson, writing is from the start an imperfect representation of speech because it cannot capture the entire speech situation—what Olson refers to as the “illocutionary force” of the utterance.  In speech situations, the mutual understanding of the speakers represents the success of the speech act; writing, meanwhile, draws attention to the text.  Writing, for Olson, is about managing the illocutionary force of the utterance represented on the page through, for example, the invention of meta-critical terms like “assumed,” “asserted,” “insisted,” “suggested,” “inferred,” the expanded use of connectives and punctuation to distinguish and articulate different utterances, etc.   The history of writing can then be understood as a series of attempts to successfully manage illocutionary force.  The expedient discovered to accomplish this was to anchor the text in the intention of the author, whose illocutionary aims towards his own audience we, as readers, seek to understand—and, by implication, seek to eliminate the effects of other reading practices.   According to Olson, the emergence of the Reformation and the scientific revolution and, therefore, modern Europe, can be explained in terms of the logic of this kind of reading strategy.  The saturation of the modern world with literacy in turn produced the incomprehension of modes of language use in which, for example, the distinction between literal and figurative, or between syllogistic reasoning and the pragmatic situation of utterances, is not foundational—and, therefore, produced the assumption that intellectual and cultural inferiority attached to those modes of language use. 

 

To the extent that we can register hierarchies, both within and between societies, in the establishment and enforcement of normative language usage made possible by literacy, we can propose the following hypothesis regarding the convergence of writing with sacrifice and scapegoating.  As opposed to the speech situation, where the rough edges of claims are smoothed out pragmatically, with the written text the author is the source of both truth and error, and so is the interpreter with regard to the truth of the text.  The attention we direct toward the author is analogous to the attention we direct to the scapegoat:  he or she is the source of meaning and potentially of the destruction of meaning.  The naïve insistence upon a return to literal scriptural meaning in late medieval Christian Europe and through the Reformation, along with the Church’s insistence upon the heretical nature of such efforts, would seem to reinforce this connection.  And the martyrdom of such figures, along with the first scientists who insisted upon a “literal” reading of the “Book of Nature,” at the hands of the Church, a martyrdom understood on the model of the Christian sacrifice itself, provides the model for the modern victimary problematic in terms of which we have made sense of the asymmetries of the colonial relation.  Once the privileged victim is the one victimized by the forces of superstition and state-Church “complexes,” cultural positions legible as “superstitious” and “theocratic” (that do not recognize the distinction between literal and figurative, factual and speculative) will be designated as “other.”

 

In modern pedagogies organized in terms of textual clarity and the literality of meaning, the text becomes a model for pedagogy:  the teacher can direct the student toward the features of textuality which traditions of attentiveness have catalogued, and determine the degree to which the student conforms.  Error is the mark of the scapegoat, and any organized mimetic practice will produce a lot of it.  And this can be error on any level:  errant interpretations as well as grammatical mistakes and incorrect usage.  The class is organized around the convergence upon error—no one is lynched, but ostracism and exclusion certainly result.  And writing maintains its function as a sorting system enabling ascension into one or another of the modern elites.

 

The point is not to reverse this process and begin privileging error as a kind of populist resistance—which is to say, the point is not to scapegoat in turn the modern pedagogue and treat the students as their martyred victims.  Not only have we no power to do this in the classroom, but the association of error with processes of marginalization by no means proves that grammatical and rhetorical norms are simply arbitrary or only oppressive—privileging error would both betray our students by making them pawns in our own intra-elite battles and grossly simplify the process by which the normalization and standardization of language takes place.  The far better approach is to follow up on the theory of error pioneered by Mina Shaughnessy and developed further by David Bartholomae.  Their basic insight is that error is not merely negative:  when the writer violates some rule, it is because they are following some other rule, or some idiosyncratic hybridization of the rule in question.

 

This insight directs our attention toward rules and habits, and the reciprocal interference of rules and habits.  We can, along with our students, make error into an object of inquiry so that we can ask what rule or habit a particular writer is following, and how following that rule or habit leads to that collision with another rule that we notice as “error.”  This would mean that writing itself is constructed as a mode of inquiry into the articulation of habits and rules in language.  Establishing the composition classroom as a site of inquiry into the workings of writing as a mode of inquiry integrates composition into the university as an institution dedicated to inquiry—composition, even more, is placed in a questioning, critical relationship with the other disciplines, which are always liable to allow received content to trump the mode of inquiry, especially when it comes to pedagogy.

 

Constructing the classroom as a space of inquiry further requires that our assignments and practices generate the required objects of inquiry in a controlled manner.  In other words, we try to set up the reciprocal interference between habits and rules I was referring to before.  Setting up the classroom in this way involves, first of all, using texts as models of an alien grammar and vocabulary of inquiry that students are required to construct—and, inevitably, to err in the construction of.  The students’ habits of making sense confront this unfamiliar way of making sense, and two things happen:  first, they normalize the text, reducing it to their own vocabulary and grammar; second, their own vocabulary and grammar are “infected” by the text, and errors that they would not make under ordinary conditions enter their language.  Their resentment toward the text and classroom that thus undermines their certainties is deflected by the interest the class as a whole takes in the emergence of new linguistic forms, the starting point of new idioms.

 

The text, in other words, becomes a pretext for the process of modeling modes of inquiry into language, rather than a comforting and/or menacing center.  Error in such a classroom is produced but not exactly encouraged:  part of the inquiry conducted in class concerns the differences between normative and idiosyncratic forms, and students learn the normative forms along with their rationale and limits.  But error is no longer a singling out of the student as measured against the model of textuality being violated:  indeed, all errors are different, and it is hard to tell which are “better” or “worse,” harder or easier to correct, or even what counts as a single error, as opposed, say, to part of a pattern of errors which can all be addressed simultaneously.  Attention to error is no longer a form of punishment, or a participation in punitive raids upon other students—rather, it is one’s ticket into the space of inquiry.  And the role of the teacher is different in such a classroom—no longer hunting down error, but rather pointing towards innovative ways of accounting for it—ways that exceed one’s own ready vocabulary for discussing such issues.  The teacher is inside the space of inquiry, in other words.

 

According to Olson, modern thought is thought attending to the categories which emerge in the history of attempts to manage illocutionary force, or how an utterance is to be taken, which has produced the various gradations distinguishing assertion, hypothesis, inference, suggestion, etc., from each other. In that case, we might take the next step and see the author of the written text as intending to produce models for the modeling of language.  This revision would in turn lead us to privilege texts, for pedagogical purposes, which stage anomalies in the habits deployed to manage illocutionary force.  Such innovative texts constitute the boundary between originary force and error, and lead us to create new maxims and practices for the remaking of our semiotic habits.

 

If the written text is now understood to be a model for representing not only speech but semiotic practices generally, then semiosis can be viewed as most fundamentally an inquiry into its own ongoing emergence, what we might call originary grammar.  Originary grammar involves a reading of signs as serving to sustain the semiotic process itself—at the very least we can agree that there would be no point to my saying anything if I didn’t presuppose that someone could iterate my sign:  so, we can inquire into signifying practices in terms of their iterability.  Furthermore, my sign becoming iterable in fact constitutes it as sign in the first place, which means that the initial sign in any series must both err as a modification of an undifferentiated set of mimetic practices and “norm” those practices, that is, situate them equidistant from some center.  Norm and error emerge together, that is, along with the iterability of the sign.  All signs within the semiotic field can be read in these terms; all are recognizable as deviation and possible norm.

 

If norm and error emerge simultaneously along with the sign, then any iteration of the sign produces a field of norming and error.  And if the most fundamental social and cultural praxis is the iteration of signs, a greater social and cultural complexity emerges from more deliberate iterations conjoined with more transparent norming practices aimed at detecting and shaping patterns of error.  I am describing a more specifically pedagogical practice as well as a broader cultural one, a cultural practice with a claim to be considered post-colonial.  The most basic assignment, in the classroom or the culture, is to create a space for the deliberate iteration of models placed at the center of the activity in question.  The process of attending to, in ever greater detail and complexity, the model, along with mapping it as an array of moves one could go on to enact and articulate, is analogous to the attention paid to the scapegoat—only in this case, the model undergoes a process of constitution and revision, and can therefore never stabilize as an object of appropriation and violence. 

 

Iteration produces error or difference, and the norming process proceeds by integrating those errors and differences into a field where attention oscillates back and forth between the original model and the economy of practices the iteration has issued into.  For such purposes, I would propose various “normalization” practices, concerned with explicitly “managing illocutionary force,” and involving the use of “pre-declarative” linguistic practices to surround the iteration with various conditions and consequences:  with interrogatives (what question was the original model trying to answer, what problem was it trying to solve; how did these questions and problems get taken up by the iteration); with imperatives (what, insofar as we take either model or iteration as mimetic object, is it telling us to do; what are we telling it to do so as to serve as our model); and with ostensives (what can we point to in the model and then in the iteration that marks the latter as an iteration).  Such assignments will direct our attention to the interferences of convergent habits, and integrate error as a series of questions, orders and indexes we can take responsibility for as the materials for new practices.

 

And these new practices, ultimately, are the generation of idioms.  As new differences emerge within and among signs, and as these differences in turn get taken up as the resource out of which new signs are elaborated, the threshold of meaning is continually lowered:  we notice more differences, which also means we notice more desires and resentments, including those that might turn dangerous; but this lowered threshold also registers as the discovery of new materials for semiosis.  The perpetual generation of idioms might then become the cultural norm, a process, once again, we can model in both classrooms and everyday life.  A growing attentiveness to the range of grammatical possibilities, produced by the incorporation of error as habit and rule, opens up grammatical innovations as modes of expressiveness.  The full range of inventiveness that characterizes the evolution of languages can be transformed into experiments in grammar, as our pedagogical and cultural praxis becomes the generation of idioms of inquiry.

March 9, 2009

originary grammar

Filed under: GA — adam @ 2:37 pm

For anyone interested in what my originary grammar is doing at the moment, here is my latest post on JCRT Live:

http://jcrt.typepad.com/jcrt_live/2009/03/originary-grammar-part-2.html

March 2, 2009

Habits, Error, Assignments

Filed under: GA — adam @ 4:15 pm

When institutions fail, we must direct our attention toward habits–habits are the foundation of institutions, which essentially codify and police habits, and which must fail when habits degenerate; but the degeneration of habits really means the interference of one set of habits with another.  Thinking in terms of habits adds greatly to our clarity, because there are only positive terms:  habits are “bad” only from the standpoint of some other set of habits, and norms are nothing more than the winnowing process of distinguishing what is shared in our habits from the idiosyncratic deviations, or errors, that each of us imparts to our own enactment of shared habits.

I turn to habits because I can’t find anything to say about politics in the normal sense of public, representative events that transcend resentment.  The trajectory of the Obama Presidency and the Democratic Congress seems very clear to me:  interference in the economy to the point where there is no “economy,” but rather a “political economy” in which one would need knowledge of imponderables–like the way Obama’s wise men are reading whatever tea leaves they are reading these days; or which interest group needs to be stroked by Congress this month–in order to chart out the direction of the economy.  This interference will be combined with bouts of scapegoating of the “rich,” who will be blamed for continuing decline.  I have already written pretty extensively on their plans for foreign policy.  Finally, I believe they plan to stay in power for as long as they can, regardless of the means (and there are many available to them)–the Democrats had a near-death experience between 2000 and 2006, when they were shut out of power at all levels of the federal government for what I assume was the first time since the founding of the party; they have never seen the Republican revolt against the New Deal State as legitimate, and they are determined not to go through all that again.  The Global Intifada is in the ascendancy and normalcy is on the defensive–who knows, maybe on its last legs.  Of course, it will all crash, but when and how can’t be predicted, nor can the precise shape and size of the pieces that will need to be picked up–much less who will be around with the capacity to start putting them back together.  So why talk about it?

So, habits.  Habits emerge from the modeling of our behavior in accord with events which have revealed some new sign to us.  Imitation of a model is (I don’t have to tell you) a very complicated affair.  Taking someone, in one of their incarnations, or in an averaging out of their incarnations, as a model, is in large part tacit–indeed, being impressed by an event is to initiate the process of modeling.  At the same time, in order to do something “like” someone else does it I need to derive imperatives from their activity:  I need to tell myself, no, he does it this way; more of that; no, that’s more like the way x does it…  In order to derive imperatives from someone’s activity or, more precisely, their being as it is made present in their activity, I must suspend all criticism of that figure–indeed, defending the model against criticism is part of the process of “internalizing” it–his rivals become mine.  A couple of years back, I think, Eric Gans wrote a Chronicle of Love & Resentment on celebrity, arguing for the way in which celebrity functions to limit rivalries by measuring all contestants in a given arena against some model elevated above and hence unattainable to all of them.  One kid’s jump shot might be better than another’s while the other might be a better dribbler, but it all gets evened out when we keep in mind that they share Michael Jordan as a model.  The implication for my discussion here will be that if we want to restore habits, we will need to restore unquestionable models, something which may not be as impossible as it sounds.

Watching children who, as Gans says somewhere else (in a discussion of the Harry Potter phenomenon, I believe) are completely untroubled by the mimetic origin of their desires, is particularly instructive in this regard.  Children are simply a melange of habits cobbled together from their parents, friends, siblings, celebrities, fictional characters, and so on.  And, as Gertrude Stein says somewhere (as I mix up my own incompatible models), we repeat what we love and we love what we repeat.  And cause and effect get confused beyond distinction here–in the end, we love our repeating and repeat our loving and our models become our habits.  Others can see the rules we are implicitly following–the results of whatever consistency we have been able to create among all the orders we have been giving ourselves–but to the extent that they have become our habits, we don’t.  We love the new habit region we have created. 

Of course, our models are not responsible for the habit regions we create, and serial killers love their habits too.  By the time they have become our habits, they have traveled a long way from our originally taking them up as models.  But let’s go back to the beginning, to the originary scene, and to the question, what makes an imitation an imitation in the first place?  Who says that what I do is “like” what you do, and how do they know?  What would it mean to argue over whether something is a genuine imitation or not–what are we pointing to as the decisive markers of a “real imitation”?  Indeed, the more aware you are that you are imitating, the less you actually are, because a consciousness of your distance from your model is animating your mapping of that model.  This is why I believe that whoever first put forward the aborted gesture of appropriation on the originary scene could not have known what he was doing until he saw its imitations come back to him in the iterations of others–the gesture probably “improved” as it coursed through the emergent community, which means that the others “mistook” the gesture but in doing so constituted it as gesture.  Even more, that first gesturer must have been attending to the especially aggressive grasp of one of his fellows, and that surprised attention led to his own “mistaken,” hesitant imitation in deference to that model.

In other words, we never get imitation right, and this is the open secret at the heart of all culture.  Nothing is more shameful and embarrassing to witness, not to say experience, than a patently failed imitation–whether it is the kid trying to be “cool” or the graduate student mimicking too closely the prose of his teacher or favored scholar.  This doesn’t make such attempts any less imitative–to the contrary, it is the accumulation of such errors and the emergence of a norm which the shameful or embarrassing moment then validates that confirms our mimetic being.  What accounts for error, and makes it extremely interesting, is that some other habit, deriving from some other model, interferes with the imitation.  The kid trying to be cool is still marked by the habits of studiousness; the graduate student is marked by the habits of someone who needs to “prove himself” for some other reason. 

That moment of error, where everyone validates their own belonging to the mimetic space by noting that someone else is doing some other thing, whatever it is, is undoubtedly a major source of scapegoating–once someone finds herself outside of the circle in this way, it is only by the good graces of others that she will find her way back in.  But we can also treat error as generative, as the starting point of a new, eccentric habit region that arrests the habits of convergence of the rest of the group and emerges as a new, idiosyncratic model through a series of refinements, defilements, caricatures, and loving revelations.  Indeed, maybe our YouTube, Reality TV culture is making us more open to the generativity of error–but even such regions are careful not to dwell too much on the shame of error which draws us all in, and in public life we see mostly opportunistic reactions to error in the form of pious calls for “competence” which somehow no one is really able to define or describe in a convincing way in any concrete instance–they know it when they don’t see it.  There is less and less tolerance for the “gaffe,” however harmless–this is the bullying of the media, which everyone complies with for reasons I’m not completely sure I understand.

The originary scene itself provides us with two models for the construction of habits in the concepts of “firstness” and “lastness”–the two, of course, imply each other:  once we reject the simultaneous emission by all on the scene of the sign, then someone must have gone first; and if someone must have gone first, someone, or some several, must have gone last.  As I just suggested, the first signifier sees his sign taken up by others and thereby recognizes it as sign, ultimately participating more deliberately in its dissemination–he conceives both his courage and his convictions in the process, and becomes invested in the sign’s successful circulation throughout the group.  The scene is fundamentally contingent for him, and the various errors in emission upon the scene are smoothed out or “normed”; nor does he have any thought as to what will come after the scene–he will participate in the sparagmos like everyone else, but he has been assimilated to the group, which can take care of itself, by that point.

The last, meanwhile, has already watched the scene take shape, and in a sense it pre-exists him.  He has imitated an already rather fixed or “standardized” sign, and the stakes of his participation are lower–he joins with the combination of cynicism and fear of exclusion which marks one who does what he “has to” while viewing the rules he must follow as rather arbitrary and probably serving other, mysterious, purposes.  His sign is of high value to the group, which will cohere much better in a unanimous gesture; and yet it is rather cheap because the group can after all do without his assent.  He feels the power of the sign primarily through the shaping of the scene, the coordinated movements he witnesses, and his own bargaining power is derivative in turn of that social more than divine power.

The habits of the first, then, involve modeling beginnings in the middle of things–the first works in the midst of error and norming and sharpens canons of recognition, judgment and acknowledgement (what counts as “x”?) that sustain the present itself as a model; the habits of the last, meanwhile, keep extending the completed scene as a model indefinitely into the future–the errors of representatives of the center are deviations from the perfected model for which dependents on the scene must be compensated.  The last also expects his own errors to be treated mercilessly, and therefore has no hesitation in using that weapon against others.  The last assesses the sign with one eye on the coming sparagmos, since he is never completely sure of his place at the table.

The first takes risks, but never everything he has except in an emergency, and certainly never what others have–the first needs his credibility so as to see the circulation of the sign through to the end.  The last eschews risks, but this might take various forms:  the discipline imposed by the poor parent–the unwavering insistence upon the exact imitation of the best models–upon the children who might, in a reasonably open environment, do better, initiate something of their own; or it might be a parasitic set of demands upon “society,” which, after all, is rich enough to support anyone.  The most productive errors in an open society where rituals have been mostly replaced by habits (that’s another story, isn’t it?) are precisely those where the lasts make their bid for firstness, and get the model all wrong–thereby transforming it into new models.  Meanwhile, the terror of error is reinforced by the alliance between those legatees of firstness who blame firstness for lastness and the worst habits of the last–both participants in this alliance collude in confirming for each other that their errors are nothing of the sort, but an arbitrary exclusion mechanism deployed by the firsts. 

So, extricating ourselves from the conjoined and mutually reinforcing crises of the Global Intifada and the financial meltdown, and getting the process started without any expectation of help or good models from our mainstream institutions, involves creating sites of generative interaction between the habits of those who are first on the current scene and those who are last but would be first on some future one.  The way to set these (or any) divergent sets of habits into productive interaction is to create assignments–minimal tasks and rules in the following of which the limits of each set of habits open it to the other set.  A good assignment generates productive errors around which we then gather so as to turn them into a sign.  The best way to approach it is to establish a model, transcending both the first and the last, as absolute–our common starting point is then delineating its distinctiveness and establishing a “perimeter” distancing the model from criticism (resentment).  We all–first and last alike–then commence to iterate that model, knowing we shall all err, and committing ourselves to the creation of new models out of that array of errors.  The beauty of this approach is that we can all follow our own habits slavishly (a key ingredient of happiness) in perfectly good conscience, because doing the same thing over and over again (iterating my own eccentric appropriation of the model) keeps making everything different–that is, generating new signs.  Together we gather maxims–which I am coming to see as the highest form of thinking–from the process:  maxims are translations of the interference of one set of habits with another into rules (both in the sense of discerning regularities and in the sense of obeying a series of imperatives) and are generative of new habits in turn–ultimately, what Charles Sanders Peirce called the “deliberately formed, self-analyzing habit.”

January 22, 2009

Assignments for President Obama

Filed under: GA — adam @ 9:15 am

I’d like, briefly, to propose a way to think about President Obama.  I would first like to summarize or reaffirm (insist upon?) the argument I have made so far regarding Obama’s rise and the wild, cult-like following he has acquired.  In my view, Obama is a transcendent figure, a political celebrity/demi-god, whose quasi-divinity, for his believers (right now commanding a sizable majority of American public opinion), consists in his completion of the public ritual of scapegoating George Bush.  Whatever the scapegoated Bush is, Obama is the “other”; what Bush is not, Obama incarnates:  from biography, to verbal facility, to manners, to associates, and, of course, race, to mention just a few.  Nobody speaks about Obama without an explicit or implicit gesture toward Bush (“we finally have a President who…”).

Such an analysis sufficed (for me at least) to understand Obama’s rise; but it is just a preliminary analysis for engaging his mode of governance.  I have also suggested that Obama is aware of his transcendence and has actively cultivated it; which also means that he is aware of the need to preserve, harvest and carefully deploy this transcendence.  This awareness, I believe, accounts for the studiousness with which he has distanced himself from the more rabid elements of the Left which facilitated his rise, and made some overtures to conservative figures.  Still, all this is just sparring–the bell for the first round has just rung. 

The way in which some of Obama’s policies, rhetoric and appointments appear to be Bush-lite has already captured the attention of some of our leading comedians.  I don’t share the optimism of “center-right” people, though, that Obama recognizes the success of many of Bush’s (especially national security) policies and will simply continue them in a more rhetorically “effective” form.  I think the decisions he has been making are more telling of the kind of moves a well-practiced transnational progressive makes to transform the remaining liberal elements of our order into a bureaucratic, quasi-feudal order based on international law.  The transnational progressivists are minimalists and in their own way are better anthropologists than the right.  If you want to shift more power in society to unaccountable bureaucracies, the judiciary and, more specifically, transnational bureaucracies and legal forums governed by postmodern international law (the traveling war-crimes and human rights tribunals comprised of Western media, leftist lawyers, celebrities, academics and discarded political leaders), you cannot try to install such an order all at one time.  You must pay token homage to the reality in front of you, and find a margin of difference between that reality and the reality that would be revealed under the proper “lighting,” i.e., under the gaze of the Human Rights World Picture.  The American occupation of Iraq and the government we have nurtured there can now, for example, be found to be tied to all kinds of legitimating international forms, forms we should adhere to more obediently; at the same time, those very forms will “tell” us when we need to leave Iraq, and will afterward “tell” us to do or not do many other things as well.  That is, one works with events, events which make visible the boundary line between nationalistic, bourgeois, imperialistic, militaristic, racist, etc., motives and actions, and the legitimating frame which now christens those actions anew by attaching them to a new set of motives; or, alternatively, allows for penance to be done for those actions and reparations to be paid and reforms introduced and supervised.  And you choose events where popular opinion is already on your side, putting to work figures (“dissidents”) of the ancien regime who are willing to play along.  Those transitional figures can then be discarded.

So, that’s my hypothesis:  Obama will husband his transcendence by representing himself as the connecting link between American interests and the emerging international order and realities, choosing to focus on those acts and deeds that can simultaneously improve America’s “image” before the “world” and make the world’s judgment seem less intimidating and more inevitable to Americans.  All the mythic events of the Bush years will thereby be cancelled out and replaced by new myths.  So, here’s what would falsify the hypothesis:

1)  As I have suggested before, one way in which Obama could genuinely risk his transcendence and become a real chief executive would be to evince an unmistakable willingness to use military force in a situation involving American interests alone, at odds with international opinion and even agreements, and requiring “follow through” past the original phase of popularity or at least understanding.

2)  A second way involves the domestic situation.  An obvious, but I think relatively easy, move for Obama to make would be to defy Congressional Democrats on the “stimulus” package–that is, to rebuke them and send them back to the drawing board to compose a less pork-ridden, more austere bill clearly aimed at the most urgent business.  More important, though, would be a recognition that one essential element of lifting ourselves out of whatever we are sinking into is the generation, rapidly, of new sources of wealth; and that the most readily available source lies in energy–while new forms of energy production will simultaneously have very healthy effects upon our relation to the rest of the world.  There are really only two quick ways of dramatically increasing energy production and changing energy markets:  oil drilling in areas previously forbidden (for environmentalist reasons) and the construction of nuclear power plants.  Both would activate the defense of long-standing taboos among Obama’s main constituencies.  Promoting, unambiguously, a move in this direction would be the second way Obama could spend his transcendence in such a way as to transition into becoming a genuine Chief Executive.
