GABlog Generative Anthropology in the Public Sphere

September 16, 2009

Hunters and Craftsmen

Filed under: GA — adam @ 6:22 am

I’ve just finished reading Thorstein Veblen’s The Theory of the Leisure Class. Obviously, I can’t claim that this puts me in the vanguard of anything, but I found his organization of economic analysis around the categories of, on one side, “invidious distinction,” and, on the other side, the “instinct of workmanship,” very provocative.  Economic life is organized around “invidious” distinctions when human life is predatory:  based on hunting, war and conquest.  Under such conditions, some men gain possessions and reputations that place them in a superior position to other men, and the way they manifest this superiority is through conspicuous leisure:  doing lots of things that serve no utilitarian purpose and, indeed, flaunt their contempt for utilitarian purpose.  For me, the analysis gets interesting when Veblen associates players on the market, or those driven by “pecuniary” interests, with the class of “predators,” and hence “archaic” by the standards of a “modern industrial” society.  He thereby places the entrepreneur, banker, broker, etc., at odds with those driven by the instinct of workmanship, who are interested in working out and applying causal relations: scientists, engineers, etc.  The economic figures driven by pecuniary interests are, then, simply hunters and warriors in a new, quasi-peaceful guise.  As, of course, are “administrators,” i.e., bureaucrats and the government.  

 

I suspect that a quick look at a transcript of some casual conversation among Wall Street brokers would confirm the plausibility of this classification, as does our use of terms like “robber barons” to describe the great corporate founders of the nineteenth century, idioms like “make a killing,” etc.  For Veblen, exchanges on the market are indistinguishable from fraud, an essentially predatory relation to others—all lines separating fraudulent from legitimate exchange are essentially contingent and pragmatic.  Are we so sure we could say he is wrong about that?  He also associates gambling with the predatory disposition, and it is easy to wonder how much of our current economic crisis, especially that part attributable to the mysterious “derivatives,” is a result of nothing more than very high-stakes gambling (with other people’s money, of course).  What is further interesting in Veblen’s account is his classification of Christianity as a religion grounded in the leisure class:  in this case, God is king/conqueror, and worship of Him, with its incessant emphasis on His infinite power, is the vicarious leisure of the servant class.  Veblen has quite a bit of fun with the clothing worn by priests, the architecture and decoration of Churches, and so on, in the process of establishing this claim.  Charity and philanthropy, further, fit into this characterization:  they are more conspicuous leisure, dedicated to promoting the honor and value of the benefactor warrior/king. 

 

Of course, those familiar with GA will notice several things here, which might be invisible to others.  First of all, we know that all human existence is based on “invidious distinction,” which enables us to reverse Veblen’s hierarchy of the two economic types.  Veblen argues that the “instinct for workmanship” is the more originary trait, characterizing human existence at a more primitive and peaceful stage, while the predatory element in human existence comes much later, and is ultimately a mere variant of the former.  For us, social relations based on invidious distinction are also based on the shaping of such distinctions into such forms as mitigate the inter-communal violence they would otherwise incite—if there were nothing but invidiousness, there would be no community at all.  The instinct of workmanship, meanwhile, we can easily locate in the esthetic element of the originary gesture, which is there from the beginning, as the gesture needs to “propose” some symmetry or harmonization of the group in order to take hold, but is nevertheless secondary to the felt need to interrupt the imminent violence itself.  This also means that the instinct for workmanship involves, first of all, a social relation between the maker and his/her fellows, rather than the direct relation to his/her materials and the manipulation of the causal relations articulating them, as Veblen would have it.

 

We could, further, identify Veblen’s account of the predatory/pecuniary interest with what we can call the “Big Man” stage of history—a stage of history which we have by no means exited (indeed, modern constitutionalist politics and the free market aim at harnessing Big Men more than at eliminating them), as Veblen, along with so many others, fervently hoped we would.  His discussion of Christianity and monotheism more generally is illuminating in this connection, since both Judaism and Christianity are invented as responses to the unimpeded rule of Big Men and the imperial moralities such rule generated.  If, as Eric Gans has argued, the centrality of scapegoating to social order only holds true for communities thusly organized, then faiths predicated upon a repudiation of the scapegoating morality of the Big Man presuppose his continued existence (and periodic chastisement).  If the total replacement of the Big Man as a social phenomenon by esthetic, conciliatory gestures and the reality revealed by norms of scientific inquiry (the instinct for workmanship) were to occur, then it would make perfect sense to assume that the monotheistic faiths would fade into oblivion. 

 

I’m not going to argue for the impossibility of such a development here—I’ll just say that the invention of the Big Man as an occasionally necessary medium of social deferral (albeit elected and subject to recall and liable to criticism and disobedience, or subject to the discipline of the market and the threat of bankruptcy) can no more be revoked than the invention of nuclear weapons.  I’m more interested in the implications of Veblen’s classification of entrepreneurial and financial activity for the mode of economic theory I’m most interested in now, the Austrian theory of Mises and Hayek.  I assume that these thinkers, and those of their “school,” would vigorously repudiate Veblen’s claim:  for these free market thinkers and advocates, there is nothing more peaceful than the activity of exchange:  indeed, exchange is the antithesis of violence; it is what we do once we have successfully suppressed violence as a factor in human relations.

 

I want to explore the possibility that Veblen is right, and they are wrong—and the plausibility of this hypothesis lies not only in the very structure of competition, in which you can win just as easily by disabling your opponent as by improving yourself, and not only in the enormous destruction which can be deliberately wrought in the financial arena, but also in the very evident attitude of those who operate there, which seems to be one of obligatory triumphalism, machismo, threat, bluff, swagger, etc.  (Here, we would have to distinguish between those entrepreneurs who are closer to the workmanlike aspects of the job and those closer to the financial dimension—but no entrepreneur could indefinitely avoid the latter aspect.)  (In a similar vein, the Austrians like to believe that private property rights derive from occupancy and/or use of a territory or object—but doesn’t it make more sense to say that property was first of all what one could take, defend, and persuade others to accept as a fait accompli—and that rights then emerged to mediate between property owners?) I also reject Veblen’s assumption that this position is obsolete.  So, if the pecuniary/predatory is here to stay, and is inseparable from a proper understanding of freedom, how do we incorporate that into our economic, ethical and cultural analyses?

 

Every commercial community must come to terms with the distinction between fraud and fair exchange—it is inevitable that such a distinction be made simply because even dealing among merchants would become impossible otherwise.  Even for purposes of fraud, reputation as a fair dealer is essential (Veblen associates “honor” with predatory/pecuniary fields; indeed, of what relevance is “honor” to an engineer, architect, dentist or plumber, except insofar as we confront them as merchants—we can see their work for ourselves); and you can only gain such a reputation if “fair dealer” has some shared meaning.  Such a distinction is inevitably rough and relative—there are a lot of things that could interfere with the fulfillment of a contract that couldn’t have been anticipated, whereas the parameters of expectations for the “workman” I just mentioned parenthetically can be much more tightly drawn.  The levels of required trust and acceptable risk will be drawn differently under different conditions—again, most unlike the standards of good workmanship:  the good dentist or carpenter is good in Boston or in Moscow, and their clients will be able to distinguish their work from more shoddy varieties.

 

This line will be drawn, like all lines, by events:  in the midst of a commercial culture given over, or in danger of falling into, general fraudulence and corruption, someone and/or some group will come to exemplify fair practices.  The establishment of fair practices would first of all be negative—we don’t do all the things our competitors do.  But it would eventually become subject to verifiable norms, and embedded in relatively transparent practices, and advertised as an intrinsic part of your experience, as a customer, with that business.  The fair dealers would seek each other out and, I think, would be genuinely “authenticated” by the business community and circle of customers once they had weathered some storm—once they had, for example, refused the compromise involved in obtaining some government sponsored monopoly, or abstained from participating in some boom or panic that wiped out other businesses, and ended up intact, perhaps even stronger, precisely due to the values implicit in their “fairness.”  Again unlike workmanship, though, where skills may deteriorate, but in fairly predictable ways, the “capital” of “fairness” can erode rapidly, and often as a result of what seemed at the time to be inconsequential decisions (cutting a corner here, lobbying the government there, when things got a little rough…). 

 

It further seems to me that the creation of such a capital fund of fairness will be in inverse proportion to government involvement in establishing and enforcing norms.  The government’s secondary function, indeed (second only to preventing violent assaults on citizens’ rights), is the prevention of fraud, which violates the sanctity of contracts.  But such a task would prove impossible to perform, or at least perform adequately, if standards of fairness had not already evolved within the commercial community itself, so that the government is essentially policing the margins of the community in accord with the norms of the community itself—it’s very hard to see on what basis the government (government lawyers, to be more precise—yet another set of predators who would need to establish a set of internal norms) could generate such norms in a non-arbitrary way.  But, of course, the government’s role will also be established through events—for example, through its protection of some “fair dealer” in danger of being scapegoated within the commercial community.  The relationship between business norms and the legal system, then, is an index of the moral health of the economy; and the moral health of the economy is itself an economic “factor”:  certainly, much wealth is lost to corruption and fraud, and gained by fair dealing.

 

In this way, the Christian morality that has emerged and sustained itself as a check on predatory Big Men (think of how focused both Judaism and Christianity are on the “haughty”) could become an economic value in its own right—perhaps one we could even learn to calculate.  Surely some economist could invent (or, for all I know, already has invented) a formula for determining the value of the moral economy (of course, we would need to be anthropologists to devise measures for the moral economy).  What are x number of people willing to leave cutthroat firms once those firms cross the line, and to become, not “community organizers,” but founders of more honest versions of the businesses they have “exodused” from, “worth”?  Or x number of individuals willing to form companies in which their own money is at stake, instead of playing only with others’?  These are challenging questions, because below a particular threshold (the point at which there would be enough such firms to survive and affect the economy) their worth would be zero.  Even more challenging is determining which other, only indirectly economic elements of the culture would comprise a moral economy making such thresholds attainable.  We’re not just talking about honesty or altruism here—rather, “fair dealing” involves the ability to create, revise, and continually re-interpret, on the ground, in conjunction with others, sets of rules that are largely tacit.  Distinguishing between those who preserve and adhere to the rules so as to skew them in their direction and those whose actions always preserve a residue aimed at enhancing and refining the rules is a skill acquired, like any skill, through practice. 

 

The grammar of rules might be sought in a seemingly strange location.  Rules are difficult to describe—even the ones we follow flawlessly and thoughtlessly.  Indeed, the thoughtlessness is the problem—analytically, not necessarily morally. Rules always have a tacit dimension—if you ask someone (or yourself) how you follow the myriad rules you do follow to mediate all your daily interactions, you must either simply “point” to what you do and rely upon your interrogator’s own intuitions as a rule follower to understand, or find a way to point to another set of (meta) rules which tell you how to follow the rules in question—but, then, how do you follow those rules?

 

A few posts back, I defined imitation as the derivation of imperatives from a model.  Iteration, meanwhile, derives from the response you get when you issue an imperative to your model in return, demanding that he/she show or tell you how to obey the previous imperative, subsequent to an inevitably failed attempt.  That initial attempt must fail because you will still be insufficiently like the model, hence indicating some portion of the imperative left unfulfilled.  To demand of the model another imperative, now part of a series (his implicit one to you, yours in return, and now his again), is to now treat the model as him/herself subject to imperatives, which he/she could convey intelligibly.  In that case, the two of you share the same source of imperatives; but this further means that part of the imperatives this newly revealed shared center issues involves articulating the imperatives each of you receives with those the other receives.  Hence, the birth of rules, which call upon one to act in such a way as to coordinate unknown acts along with everyone else. 

 

One is always within rules, but one becomes aware of the rules when they become problematic, and they become problematic when one must narrow them down, in a single case, to an unambiguous imperative—what must I do right here and now?  It is then that the origin of rules in a center issuing imperatives that must be shared becomes evident because one must then ask the center for guidance.  This, it seems to me, is the structure of prayer, which would mean that learning how to follow the “spirit” of rules means learning how to pray.  (“God, give me the wisdom to understand your will…”) (For Veblen, this is the kind of situation the “instinct for workmanship” could never lead us into.)  And in the monotheistic or, perhaps (I’m not sure where Islam is on this), anti-“haughty” faiths, such prayers would take on the greatest urgency in situations where one’s desire is to abuse the position of the “Big Man,” usurp that position, elevate oneself by discrediting the existing one, fantasize oneself as Big Man, or create a negative Big Man who will serve as the “cause” of some present crisis; but, also, where one’s desire is intertwined with the emptiness of the Big Man space, or the inadequacy of its current occupant—where one may need to help prop it up, in other words, but where such a need edges imperceptibly into these more sinful desires. 

 

Humbly demanding that the center, the iterable source of rules, or the “central intelligence,” come through with a clear imperative at such moments is the heart of the proper creed of our commercial civilization.  If we recognize that our entrepreneurial class is composed, not of pacific servants of others unreasonably harassed by the predatory state but, for all the good they do, of actual and budding Big Men (who, of course, seek commerce with Big Men in other realms), thereby adding a political component to the economy, then we can find the economic value in the prayerful state that seeks a middle between haughtiness and debasement.  This middle would also turn out to exist between other poles inevitable in an increasingly sophisticated rule-based culture:  between the “letter” and “spirit” of the law; between the rules’ tacit and explicit dimensions; between preservation and innovation, and so on.  Such prayer is itself a kind of thinking, and I’m even thinking of considering prayer as the origin of the declarative sentence.  In another post.

September 1, 2009

Popular Culture

Filed under: GA — adam @ 7:37 pm

I take Eric Gans’ distinction between popular and high culture as axiomatic:  in popular culture, the audience identifies with the lynch mob, while in high culture they identify with the victim.  It seems to me, further, that this distinction manifests itself as one between two modes of reading, or “appropriation” or “consumption” of cultural materials:  in engaging high culture, one is enjoined to preserve the text or artifact as a whole—this means examining the parts and the text/artifact as a whole “in context,” with an eye towards its unity and purposefulness, as well as the accumulated historical labor expended on its production.  This also implies a hierarchy of interpreters and commentators and the institutionalization of the materials (museums, literature departments, etc.).  With popular texts and artifacts, meanwhile, elements of the cultural product can freely be iterated in contexts chosen by the user, without regard to the “intentions” of the producer.  We have no compunction about repeating catch phrases from a sitcom or movie in ways that show no respect at all to the way that phrase functioned in its “original” context.

 

Now, of course high cultural texts get treated in this way as well, but this just testifies to the dominance of popular culture in the contemporary world—that is, we are talking about ways of treating texts and artifacts as much as (or more than?—that’s part of the issue) about the texts and artifacts themselves; and, of course, putting it that way further testifies to the decline of high culture and the ascendancy of the popular.  We can also take as given the convergence of popular culture with the rise of the victimary:  the high cultural texts are themselves viewed as oppressors, and by “appropriating” them “violently” we take our justified revenge upon them for their presumption of centrality.  And we can also stipulate that the mass market and the “age of mechanical reproduction” have been central to this process. So far, nothing I have said takes us much beyond discussions of postmodernism going back to the 70s and 80s, which also highlighted the collapse of the high/popular boundary as well as the intensified “citationality” and “cannibalistic” nature of contemporary culture. 

 

But we can go quite a bit beyond those discussions, I believe, in particular in trying to figure out the consequences of these developments.  Left cultural theorists have tied themselves up in knots trying to convince themselves of the potentially “progressive” character of the rise of the popular, with results that have been brilliantly lampooned in a couple of essays on cultural studies by John O’Carroll and Chris Fleming.  Somewhat more serious, or at least earnest, approaches, like that of Gerald Graff, make another, in my view equally flawed, attempt to find something hopeful in our students’ attraction to popular culture.  For Graff, instead of trying to get students to engage thoughtfully with the products of high culture that we professors value, in order to develop and put to work their interpretive faculties, their ability to see things from different points of view and in “depth,” etc., we should recognize that students are really doing all these things already when they argue about their favorite sports teams, or the movie they saw last night, or the latest music video by their favorite artist.  Get students speaking about what they already know, already interpret, already canonize, already debate in more sophisticated ways than outsiders realize, and they will come to see that they are already something like “scholars” or “academics” (or “critical thinkers,” or “interpretive agents,” or whatever you like).  What happens then seems to me less clear—if they are already engaged in serious discussions over esthetic and moral values, why do they need our high cultural texts, or the means of interpretation that have evolved in the history of responses to them?  On the other hand, if those discussions are not genuinely about such values, and the means of interpretation at work in them not comparable to institutionalized ones, then, in fact, they are not really doing what “academics” supposedly (or hopefully) do.  Nor do we have any reason to assume that having them attend to what they already do will get them one step closer to that goal. 

 

Defining popular culture as the free iteration of bits of models helps us to account for why these attempts to “redeem” popular culture can’t accomplish what the redeemers would like.  High culture is intrinsically totalizing, centralistic or holistic, whether it be the Marxist theory of history or the New Critical sacralization of the literary text—the idea from the start is to resist the fragmentation so celebrated by apologists for the popular.  The assumption is that some transcendent reality, embodied, albeit partially, in the most accomplished products of culture, is what militates against the scapegoating of those figures who stand out against ritual, tribal culture, figures ultimately modeled on Socrates or Jesus.  No coherent political ethic can emerge from immersion in soap operas, Madonna videos or comic books; nor can any consistent and arguable esthetic stance be elaborated out of one’s baseball card collection, pornography addiction, or experimentation with shocked hair and body rings, because the entire notion of coherent and consistent ethics and stances derives from a different set of assumptions and practices.  At the same time, though, I don’t think there is any way of returning to the notion of high culture that presided up until, say, the Second World War—not only has that notion of transcendence been displaced irrevocably, but it was flawed in important ways from the beginning, however great its service to the advance of humanity and however many the staggering accomplishments we owe it.  In that case, the problem with the cultural studies people (of whom Graff is one, of course, even if one of the moderate political center) is that they aren’t radical enough.

 

After all, the originary hypothesis confirms the central claim made by avatars of the 20th century’s “linguistic turn”:  human reality, at the very least, is indeed constituted by the way signs reveal relations between us through the things we move to appropriate, and not by the referential relation between language and a higher reality.  This must also mean that when we account for the human condition, we must do so in language and are therefore always further and newly constituting it—this “Heisenbergian” reflection irremediably undercuts any pretensions to knowledge of a permanent “human nature.”  Mimetic desire, rivalry and crisis will always be with us, and the bet made on traditional high culture is that that permanence renders different modes of deferral  secondary, so many “epiphenomena,” if you will—but if we reverse that claim, as I believe we must do as we become more conscious that we ourselves, everyday, are responsible for inventing such modes of deferral, then even those enduring traits of human reality are relativized by ever changing sign systems which not only resolve them in limited ways but shape their terms of emergence as well. 

 

And yet the paragraph I just wrote was, or so I would like to believe, composed on the terms of high culture—I am certainly aiming for the kind of “density” or “depth” in my discussion here that would mark this argument as one that would interrupt the prevailing modes of scapegoating.  And, of course, the theoretical and esthetic rebellions that have provided a vocabulary for the privileging of the free iteration of bits of models took place completely within high culture as well.  Indeed, notions of “depth,” “density,” “textual autonomy” and so on refer to our willingness, or our felt compulsion, to take the object on “its own terms,” to assume, as Leo Strauss put it, that its author knew more than us and was providing us with knowledge or an experience that was both valuable and one we couldn’t have procured or even thought to pursue on our own.  If we approach cultural objects with such an attitude, they become inexhaustible, but we will only do so as long as we believe the inexhaustibility lies in the object, not in our attitude towards it—once we assume there is no “text in this class,” to refer to Stanley Fish’s famous phrase, the sheer proliferation and ingenuity of interpretative strategies that have been accumulated over the past couple of millennia will not be able to sustain our interest for long.  The initial burst of enthusiasm deriving from the sudden sense that “hey, we’re really the ones who ‘made’ these texts!” will quickly dwindle into a deflated “you mean, it was just us all along?” 

 

The initial result of “unregulated iteration,” in both popular and high culture, was the creation of the celebrity—from the modernist writers and painters in the 1920s to the postmodern theorists of the 70s and 80s in the world of high culture, and from newly famed athletes, singers, actors, along with seemingly randomly elevated members of the idle rich, the scandalous, etc., also starting in the 20s, through the movie stars and rock stars, on into the 1980s.  Perhaps this age, if the title of Eric Gans’ recent Chronicle on Michael Jackson is correct, will in retrospect be known as the “Age of Celebrity” as we move on to something else.  Maybe “celebrity” filled the space of sacrality previously filled by the Platonism of both the guardians of culture and the people, and now vacated, most immediately due to the historical catastrophe of the First World War; maybe it also fit an early stage in technological reproduction and the market, where such processes were far more centralized and monopolized than they are likely to be from here on in.  It seems to me that the precipitous decline in the power of celebrity which we are witnessing (and which is perhaps best testified to by the openly staged, publicly “participatory,” “auditioning” for celebrity in shows like “American Idol”—the aura essential to celebrity cannot survive the public’s freedom to elect and depose celebrities at will, and with such naked explicitness) is more in accord with the logic of unregulated iteration, as well as healthier.  (It is noteworthy that while there may very well be something cultic in the devotion millions of people express towards political leaders like Obama and Palin, the nomination of these figures as “celebrities” was premature, as celebrity cannot survive the harsh criticism on inevitably divisive matters of public substance any political figure must endure—if an author touted by Oprah turns out to be a fraud, she apologizes publicly and has him come on the show and do the same; there is no analogous mode of “redemption” if, say, Obama’s leftist agenda crashes or Palin runs for President in 2012 and is thrashed in the Republican primaries.)  At any rate, though, one could imitate Babe Ruth’s swing or swagger in the playground, or Jordan’s moves in the gym; one could sing a Beatles tune or mimic some of Michael Jackson’s moves without having to have a “reading” of the “text as a whole”—while the celebrity of these figures, one might say, helped guarantee a unity and hierarchy of focus that could be shared nationally and sometimes globally, sustaining the type of community previously preserved through more transcendent means.  If celebrity is on its way out, we will have overlapping and often mutually uninterested, even repellent communities, sometimes aggregating into something larger but not in any predictable way.

 

If the generation of models in a period that is both post-transcendent and post-celebrity does not require a focus on “complete,” or “fleshed out” figures (about whom a story could be told, through whom a meaningful sacrifice could be performed), if they don’t have to conform to existing narratives so precisely (in part because the media, or means of establishing celebrity, are themselves increasingly decentralized and evanescent), it may be that the eccentric and idiosyncratic will come to the fore—not just any idiosyncrasy or eccentricity (and not necessarily the depraved or cartoonish) but, I would hypothesize, those that make the figure in question just as plausible a figure of ridicule as of emulation.  Those who organize a space around a particular figure would do so with an awareness of this two-sidedness, which would in turn provide a basis for dialogue, friendly and hostile, with other groups—that is, “we” would organize ourselves around emulating a particular somebody and therefore knowingly organize ourselves against those dedicated to his ridicule; and vice versa.  (It seems to me that something like this is already happening with Sarah Palin, who, despite what I said before, might, if she avoids putting herself in situations where her power of presence must be directly repudiated or ratified, become an example of this new kind of…well, what would it be?)  What looks to one group like an accomplishment looks to the other like a botched job, what looks to one beautiful is grotesque to the other, a pathetic mistake to one is an innovation to another and so on—and, in the best of cases, each side will be able to see what the other is seeing.

 

In this case (to continue hypothesizing), popular culture will be performing what high culture might become increasingly interested in—that boundary between error and innovation, where rules get followed in ways that create “exceptions,” where the strictest literalism produces the wildest metaphors, where models get both emulated and mocked and it can be hard to tell which is which, where we find ourselves in the position of figuring and trying out ways of seeing others and objects as beautiful or repulsive, instead of simply being “struck” one way or another, where no one has proprietary rights in the line between “mainstream” and “extreme,” etc., but where one still has to come down on one side or another, at least at a particular moment.  High culture, whether carried out in the theoretical or artistic realms, would increasingly become so many branches of semiotic anthropology, interested in the way in which avatars of the “human” keep coming to bifurcating paths (do nothing but keep coming before such bifurcations), going in one direction or another for reasons we could guess at but with consequences we can identify and judge according to their irenic effects.  It’s not too difficult to imagine texts and performances being composed with this problem in mind, and critical and appreciative canons emerging to meet those texts and performances.  (Just think of the intellectual challenges imposed by the determination to write a text in which every phrase is a “taking” [an iteration or appropriation] as well as a “mistaking”—and think of how revelatory such an effort might be regarding idiomatic usage.)  (I suspect one could already construct a “genealogy” of such texts that have been classified as “modernist” or “postmodernist” while nevertheless sticking out as anomalies.) I think high and popular culture would thereby become less hostile to each other, and both might become less sacrificial.

August 29, 2009

Why the Law is Enough

Filed under: GA — adam @ 7:54 am

As readers of my blogging (here and at the JCRT Live blog) and my most recent essay in Anthropoetics (“Marginalist Politics, Originary Grammar”) are aware, I have been compelled to address the issue of imperatives—in ethics, in economics, in politics and in thinking.  This is part of my project of generating a grammatical conceptual vocabulary, and the next step I would like to take along that line is to make my exploration of the imperative (as exemplary of everything that actually happens within the frame or space constituted ostensively) more complex and articulate; and, at the same time, to bring it more clearly into accord with other terms that have been important to my political thinking, in particular, covenant—which at first glance is located at the antipodes from what I have been calling the imperative order. Finally, these questions have converged, for me, with the polemic, which seems to me as important and underdeveloped as ever, between the Christian and Judaic revelations, which has in turn become urgent to me due to my growing attention to, and admiration for, Christianity.  I hope my framing of that polemic will take my originary grammar in new, productive, and more evidently “relevant” directions.

 

First of all, I confess to having neglected the rich terrain of the imperative itself, which ranges from the brute command—which could be issued to an animal (“fetch!”)—to God’s command to the not yet existing world in Genesis:  “let there be…” (really, just “Be…!”).   “Have it your way” is, grammatically, an imperative, as are most forms of granting permission; while being “charged” with a task is somewhat different from “obeying” orders.  Think, also, about tellingly obsolete words like “heed” and “hearken,” which are used to frame imperatives but call for something much more than “obedience.”  So, how to organize this field?  The distinction that presents itself here regards the relation between imperatives and their accompanying ostensives.  Any imperative requires an ostensive signifying the fulfillment of the imperative (someone has to attest that I did as I was bid); but some imperatives require, in addition, an ostensive signifying acceptance of the imperative in advance of its fulfillment—an endorsement or acknowledgement, as opposed to mere a posteriori verification.  An imperative requiring acknowledgement presupposes two separate and autonomous persons, whereas one calling only for verification after the fact implies complete domination, whether of one individual by the other, or of both by some exigent circumstances (“let’s get out of here!”—the imperative seems to come from the reality itself, with the individual conveying it the equivalent of a ventriloquist’s dummy, however unjust the comparison to the person with the wits to respond to the emergency).  The acknowledged imperative implies a minimal equality (even, say, in the soldier’s “Yes sir!”), while equality is either absent or beside the point in unacknowledged ones (as it would be as meaningless to speak about the “equality” of scattered masses fleeing a storm or a massacre as to speak of the equality of sheep in a flock). 

 

There are other important distinctions to be made.  For example, among acknowledged imperatives, we might distinguish between those whose acknowledgement affirms the particular imperative and those whose acknowledgement affirms its source. (“Yes sir!” affirms the source of authority, and is the same form used to reply to all particular imperatives, while less formalized responses would limit obedience to the specific task—“I’ll get right on that,” with the emphasis on “that.”)  We might distinguish between the various periods allowed between an imperative and its fulfillment, between different protocols for verification (must someone other than the source of the imperative be involved in the verification of its fulfillment?), and so on.  But I suspect we would be able to present all these distinctions as differentiations among acknowledged and unacknowledged imperatives—for example, a very prolonged period between imperative and fulfillment would seem to require acknowledgement, and the distinction between acknowledging the source and acknowledging the purpose of the imperative would really be a distinction between affirming a prior acknowledgement of an imperative order (an authority) and an acknowledgment concerned only with this particular imperative. 

 

So, I have consented when I have acknowledged the imperative before fulfilling it and when such acknowledgment is expected by the giver of the imperative.  It seems to me reasonable to assume that the notion of “consent” would have evolved out of asymmetrical situations involving imperatives that could no longer simply be imposed.  The broader sense of consent, say in the exchange of goods or promises, breaks with the simple asymmetry of the imperative not by transcending that asymmetry but by introducing a model that all parties are obeying equally:  the model of he-who-refuses-to-participate-in-scapegoating, or, even more, who-is-willing-to-take-the-scapegoat’s-place.  We can only have freedom, a free society, equality or isonomy, once that model is in place and we are deriving the imperatives of our being from it. 

 

The Jewish discovery, formulation and resolution of this “problematic” remains unparalleled in its radicalism:  it insists upon the minimality of the model of God at the center of the human scene.  The Bible provides models of God’s actions and God’s Being—we could write a “biography” of God drawing upon Biblical materials, and hence we could imitate Him—but always as a concession to present human capacities, and always as a way of drawing God’s people closer to that which makes specific models of God less important:  the law.  By following the law, Israel is to become a model to humanity for living without representable models:  that is, models so minimal that they offer only general imperatives (do justice, choose life, etc.) that preclude (like the American Constitution’s prohibition on Bills of Attainder) singling out individuals and which each recipient must take upon him/herself.  This is only possible through the abolition of human sacrifice or scapegoating, through the felt need for a mode of divinity upon which hands could not be laid. 

 

The Judaic revelation, then, insists that once the law is revealed, no further revelations are necessary—the working out of the law is the realization and further perfection of the revelation.  The Christian objection to this argument, as I understand it, is that the law inevitably loses contact with its source and becomes formulaic, faithless and, in perhaps the most charged accusation in the New Testament, “hypocritical”:  your punishment of those outside of the law just reflects your satisfaction at being inside it.  In privileging “faith” over law, Christianity obviously isn’t promoting lawlessness; rather, it is arguing that only faith can give “spirit” to the “body” of the law—you must obey the law, surely, but not grudgingly and with an eye towards the approval of others; rather, you should, in your obedience to the law, fully put forth a sign of your acknowledgement of Him who stands behind and transcends the law, and transcends your own attempts to fulfill it.  Indeed, this means (and here is where it seems to me Christianity is really at odds with Judaism) enacting the limits of the law through “faithful” actions the law couldn’t have anticipated and has no authority to forbid.

 

The Jewish counter-argument, it seems to me, lies in its sacralization of language—Hebrew is the holy tongue, while prohibitions against translating the Bible in Christian countries concerned the authority of the clergy and not the authenticity of the original language.  (Correct me if I’m wrong, but it seems to me that Christianity’s privileging of the signified over the signifier is almost total.)  And this sacralization is inseparable from writing, while the covenant is inseparable from the written text.  Written language makes language available as an object, divisible and given to various articulations—it is not only letters that we can only talk about as a result of writing, but syllables, words and sentences as well.  David Olson argues that the logic governing writing in its representation of speech is to control the illocutionary force of the utterance recorded, which is to say to reduce the utterance’s repetition by readers to the original linguistic event.  But it’s easy enough to turn this logic around and suggest that eventually writers would discover that this also meant the possibility of multiplying without limit the linguistic events generated by the text.  And the writers of the Talmud certainly did discover this.

 

The distinction between “oral” and “written” law in Jewish tradition paradoxically privileges the oral while acknowledging that what has been written down has been most worth preserving, and therefore the core of the oral law itself.  This concession to necessity also licenses a writing that mimics orality, in its dialogic and digressive character, while exploiting the full resources of the written text—sight puns, the possibilities of removing a single letter from a word, starting a sentence at various points, the numerical values assigned to letters and so on.  This honors the law by continually enhancing it and keeping us within its text.  And the other critical distinction of Rabbinic method, between “halakha” and “aggadah,” or law and story, does the same—the aggadah narrativizes the law, not only by playing out scenarios predicated upon one or another interpretation, but by transforming its progenitors, the post-exilic Rabbis, into heroes of almost Biblical stature, and transforming the actual heroes of the Bible into Rabbis, arguing the finer points of the law.  Indeed, God Himself often enters the scene, sometimes in familiar and even homely roles, other times in more menacing forms, but always in the manner necessary to hypothesize an origin of the law that sanctions both the law’s irrevocable nature and the legitimacy of endless discussion of its application.  The law is sufficient, that is, to continually generate hypotheses of the law’s emergence and revised terms of its evolution, to maintain its divine sanction while reducing that sanction to maintaining the collegiality and accountability of its interpreters and the inexhaustibility of the shared text.

 

If consent is when we endorse or affirm the imperative we have received, covenant is when we endorse or affirm the model that serves as a source of imperatives:  to treat each other as caretakers of the law.  The step from consent to covenant lies in our demand that our imperator and model instruct us in fulfilling his commands subsequent to our own, inevitably failed attempt to do so.  In making such a demand upon our model, we realize the inadequacy of the model to the demand, and the need for a formal model we can all share.  The model for the shared source of imperatives can’t be a super-imperator, because such a model would shut down the demands that called him into being; the model we are looking for must be one upon whom we have in our turn imposed impossible imperatives, and whom we would destroy in insisting upon their fulfillment.  That is, it is a negative model, a potential victim, which regulates all imperatives. 

 

The argument between Judaism and Christianity, then, involves how to construct this negative model.  For Christianity, it has to be someone who exposes our hypocrisy in treating the law as if it were just a set of automatic commands.  For Judaism, it is someone who asked for nothing more than the full measure of the law, and failed to receive it—maybe because of the hypocrisy of the law’s guardians, but maybe due to the political “sin” of factionalism, or the lapse into mimeticism the Bible refers to as “wishing to be like all the other nations,” or some other form of idolatry.  Christianity would have us embrace imperatives that we have not the power to obey—our sinfulness interferes with the faith we are commanded to have in God and the love we are commanded to live by.  A Christian society would therefore have us honor models that are incommensurable with the compromises of daily life, such as celibacy and monasticism.  For Judaism, everyone can attend to the law at least a bit, and that means Judaism allows us to protect ourselves from and direct our anger toward conscious and calculated enemies of the law, an important category of social being that Christianity would easily group with “sinners” more generally or even sympathize with as the victims of “Pharisees.”  As models for modernity, then, Christianity proposes Romanticism, also a scourge of hypocrisy and inauthenticity; Judaism proposes constitutionalism, founding as law, writing, power and the limits of power. 

 

In the end, we need both sides of this polemic—we need for them to remain separate, irreconcilable, and reciprocally admiring.  One way of articulating the relationship I am proposing is through an examination of the category of “righteous gentile,” invented to honor those members of “unmarked” groups who risked themselves, their families and their communities to save the “marked” during the Nazi genocide.  No doubt many of the righteous were Christians, performing what they saw as their Christian duty, and I obviously have no quarrel with this reading of the Christian revelation (for that matter, many were probably secular humanists close enough to humanism’s origin in the further “universalization” of Christianity).  But the “logic” of the category, if not the action, seems to me Jewish—what is Jewish, that is, is the codification of this action as a world-changing category of which the law can take cognizance.  As a legal and political category which, for example, one nation might recognize in citizens of another, hostile nation, the notion of the “righteous gentile” might support a worldly, even “realistic” politics that would prevent atrocities.  At the same time, though, does the centrality of communal self-preservation to the Jewish revelation make Jews qua Jews less likely to put themselves and everyone surrounding them at risk in this way for an Other threatened by some third party?  (I am too ignorant to know whether Jewish law accounts for such a possibility—say, protecting a Christian “heretic” who is seeking refuge from the Inquisition—but I suspect it is ill-prepared for it.) (But what about those secularized Jews who have helped extend Christian principles to public life, thereby accelerating modernity?  Would this not have been necessary for the universalization of the “righteous gentile”?) Maybe we need the singularity of Christian actions along with the systematization of Jewish codification.

 

I’m not sure who would be interested in this argument today.  It’s a shame it has never actually been had, except perhaps subterraneously, in the complementary emergence of Christianity and Talmudic Judaism in the early centuries of the Common Era.  The Christians demonized the Jews and the Jews pretended to ignore the Christians, but one suspects they were watching and listening to each other a lot more closely than that.  At any rate, this polemic would provide a better frame for handling our political and ethical discourse than any that we presently have—and might add some new dimensions to the polemics that have become canonical, like “Athens” vs. “Jerusalem.”

August 14, 2009

The Economic Imperative

Filed under: GA — adam @ 4:59 am

Post-gift economy, there are two ways of organizing economic relations:  through the free market, or bureaucratically.  Bureaucratic economics, the “command economy,” organizes distribution of labor and resources through a hierarchical series of imperatives; it is either a parasitic excrescence (even if serving otherwise indispensable purposes) upon the market, or it is constructed in the ruins of the market, and leaves nothing but ruin in its own wake.  All this is well known by now.  But there are some paradoxes to unpack here.  The free market emerged as a concept and rallying cry against the privileges of aristocracy, monarchy and Church, as part of the call for universalism against particularism.  The actually existing market itself has no such unanimous support, though—everyone has some particular interest in manipulations of the market in their favor, in rent-seeking.  At a certain point, we could imagine, the competition to achieve rents through government granted privileges, explicit or implicit (say, in the way in which regulations favor larger businesses capable of paying the costs of compliance), would choke off the market altogether.  What blocks this outcome, we can further assume, is expansion and innovation—through the 19th century, the creation and discovery of new markets (in the US West and, for European countries, through imperialism) and, increasingly important, technological transformation and the creation of new needs and desires.  The rent-seekers obstruct innovation, but could never anticipate all the possible channels it might take, and the innovators will defend their place on the market until competitors emerge and they become rent-seekers in turn.

 

At any particular moment, then, even while one producer may use free market rhetoric to chip away at the privileges of another, the consistent and at least partially conscious defenders of the market will be few and not coordinated with each other:  some small businesses, innovators with a head start on potential competitors, risk-takers who would like rewards to match risk, migrant or in some way “sub-standard” labor that relies upon enterprises where minimum-wage, union and other labor regulations are overlooked.  There is one other “class” with an interest in preserving the market—the consumers.  The availability of choices on the marketplace, or the decrease in the number of choices, is an unmistakable marker of the quality of life.  Even here, though, this interest is inconsistently advanced—prices, after all, can be lowered by “command,” choices reduced through regulation and privileges granted to one producer over others, and these privileges are often granted due to the health and safety (and, now, environmental) consciousness of consumers.  The benefits of economic command are immediately and intensely felt by very specific economic actors; but we never know what we have lost due to restrictions on freedom.  In the end, it is perhaps pragmatic politicians, who would know from personal experience how dependent their own pet projects are on wealth creation, who more than anyone else are responsible for our having as much of a free market as we have had so far.

 

Even more, the generalization of the free market requires a class of “protectors,” located within the imperative order, whose values cannot be squared with the market.  Soldiers can’t be given economic incentives to kill more of the enemy.  Most social orders probably have a separate class of “armed men,” but in a market order no political superiority can be granted to those who put themselves on the line to protect the rights of everyone else.  The reciprocal resentment thereby bred will only in very extreme conditions be a threat to social order, but it is permanent and consequential nevertheless—those living on the market don’t want to think too much about those “rough men” who keep them safe at night and would certainly prefer not to encounter them in their daily lives, while a certain tribalism is probably inevitable for the latter.  This is worth mentioning here because the values of the imperative order shape attitudes more generally—whatever the economic effects of the loyalty of some to American car companies, or the insistence that no immigrant be allowed in until all Americans have jobs, these are not economic attitudes.  And it is also true that one of the most formidable obstacles to the establishment of market relations and its normative supports is the persistence of social relations based on honor and kinship, or residual forms of the “big man”—whether in slums in Western countries or the Muslim world.

 

It seems to me obvious, then, that we still need a political economy—we need to think politics and economics in an integrated way, otherwise we are likely to make one of the following errors:  one, seeing politics as an arena where we guide, fix, organize, reconcile, etc., an economic system that goes off track, gets broken, and is continually getting caught in its “internal contradictions”; and, two, seeing government intervention as an arbitrary interference with natural economic laws.  I’m certainly much more sympathetic to error number two, but my answer to calls for laissez faire is to call attention to how much political action would be necessary to approximate it—there would have to be forms of collective action that in a very sustained, persistent and sophisticated way counter—by getting officials elected, by maintaining pressure on them, through targeted policy proposals, grassroots organization, at times civil disobedience, etc.—the events constantly generated by the rent-seekers.  Those who think that the welfare and regulatory state could simply be rolled back through persuasion of our fellow citizens and we could all return to our private pursuits haven’t really thought it through.  Even leaving aside the perpetual resentments underlying rent-seeking, a free market politics would have to support ongoing debates over what would inevitably be enormously complex questions regarding the reshaping of contract law as the state’s reach receded.  Also, the cultural politics of free marketers will face its own complications:  we know very well that certain habits are required for participation in the free market, but if we cede areas of life like education to the private sphere we accept the risk that anti-capitalist forces might be favorably positioned to conquer substantial cultural terrain. (And that’s leaving aside, for now, the problems of a pro-capitalist, pro-freedom foreign policy.)

 

I would like to see if originary grammar can help us with political economy.  I would offer the following formulation:  the economic imperative is to arrange the imperatives one obeys so as to maximize ostensivity.  On one level, this is a phrasing, in terms of originary grammar, of a basic understanding of “economy” as presupposing scarcity:  we must (we are compelled, we are “commanded” to) gather our resources, use our skills, refine our skills, invent modes of cooperation, convert all of our limitations into positives to the extent possible in order to meet our needs, preserve our ability to meet tomorrow’s needs, and so on. 

 

We have a market economy when others’ actions are inextricable from my assemblage of imperatives—if lots of people want something, that becomes an imperative for me, and it affects the hierarchy of all the imperatives compelling me.  Unlike the gift economy, in the market economy the imperatives are impersonal and incalculable; but also more contingent and harder to norm.  And we should use the word “imperative” literally—when someone says “I have to have that!” they really mean it, even if it turns out there are other, overriding compulsions.  Meanwhile, let’s use the notion of “ostensivity” in its most precise, originary sense—not merely referring to something, but bringing into being a world by deferring some crisis through a gesture.  Wealth is a sign—for oneself, for others.  My desires model a certain kind of subjectivity predicated upon possession—possessing wealth, displaying wealth, viewing the wealth of others, always conveys meanings, in the kind of intuitive, immediate and often unassailable (until it is too late, anyway) sense we associate with the ostensive.  What I am calling “maximizing ostensivity,” then, could be considered “ostentation,” and middle class frugality is as ostentatious as the conspicuous consumption of billionaires—it communicates discipline, concern for the next generation, belief in the rules of the game, etc., and the imperatives pressing upon the economic subject are articulated for the sake of ostentation. 

 

Labor is still problematic, insofar as it is driven, for most people, by overwhelming imperatives with limited opportunities for ostentation.  For the most part, people have much less choice in the kind of work they do than in their consumption practices.  Labor is, literally, "meaningless":  it is rarely set up so as to put forth signs.  Hopefully this will change, but only slowly, I suspect—the ideal, probably never to be reached, would be that everyone be entrepreneurial, self-employed, and creative.  The abolition of wage labor is an admirable goal, even if making everyone employees of the state won't get us there.  In other words, the more the desire for one job or line of work or another enters into one's imperative space on other than sheer financial grounds, the better.

 

Once all the imperatives are placed in the same space for each individual, we can map economic activity in much more complex ways.  Family, habits and location emit pertinent imperatives, but we already knew that (even if economists don't quite know what to do with it)—so do ethics and morality.  A lot of government intervention in the economy is premised on the assumption that it is better for people to choose some commodities over others, and that people don't always know which are better; this is obviously true, and the only problem is with the assumption that we can know who will know better.  But there are better ways to "politicize" and "moralize" the economy.  Here, I would like to draw upon the notion of "originary advertising" that Chris Fleming and John O'Carroll suggested at the latest GA conference.  The only real contribution made by the Left to contemporary politics has been in its pioneering use of boycotts—whether it be the strike, the Montgomery bus boycott, the boycott of South Africa in the 80s, or, more recently (and, of course, far less obviously virtuous), attempts to gin up shunning campaigns against "socially irresponsible" companies like Wal-Mart. 

 

Whatever one thinks of any particular cause, one can't deny that the boycott is a completely voluntary and non-coercive form of political action—it may be experienced as coercive by its targets, but that just means that a new set of imperatives has been introduced into your "table."  If you wish to sacrifice sales in order to continue with practices you consider necessary and justified, that's up to you.  (You can market yourself as a company willing to stand up to unwarranted intimidation—buy our products and stand alongside us!)  My point here, though, is that advertising, that practice wherein the seller presents potential buyers with a model of what it would mean to possess the commodity or, to put it another way, where the producer or seller thinks about how its products and organization take shape in others' self-representations, is where boycotts would show their results.  More common and skillful uses of boycotts might lead to all kinds of economic "irrationalities" (according to what model of rationality, though?), but it might be that a richer sense of the assemblage of imperatives one articulates with each new sale and purchase would create a more rational system overall.  When some powerful activist group targets a corporation, there appears to be a conflict between the company's duties to its shareholders and to some notion of social responsibility, but if ignoring the demands of that group ends up reducing sales, those duties are no longer competing.  Nor need things end there—other groups are free to weigh in on the other side, and the company itself is free to make its case to the public; others can propose boycotts of companies that cave in to the noxious activist group, etc.  Boycotts can get more sophisticated and targeted (new companies would spring up to consult on them), and companies will more and more market themselves as "pro-family," "pro-community," or anything else.  Of course companies do this now, but given the kind of development I am proposing, these claims would come under closer scrutiny all the time, and branding would become an activity carried out by consumers as much as producers.
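
(A back-of-the-envelope sketch, with entirely made-up numbers, of the point about competing duties: once the expected loss of sales from ignoring a boycott exceeds the cost of complying with its demand, the duty to shareholders and the "socially responsible" course simply point the same way.  Nothing below is meant as a model of real firms; it just restates the paragraph's decision logic in Python.)

    # Invented figures only: a boycott enters the company's "table" of
    # imperatives by changing its expected sales.
    def expected_profit(base_sales, margin, boycott_loss, compliance_cost, comply):
        """Expected profit if the firm complies with, or ignores, a boycott's demand."""
        if comply:
            return base_sales * margin - compliance_cost
        return (base_sales - boycott_loss) * margin

    ignore = expected_profit(base_sales=1000, margin=2.0, boycott_loss=200,
                             compliance_cost=300, comply=False)
    comply = expected_profit(base_sales=1000, margin=2.0, boycott_loss=200,
                             compliance_cost=300, comply=True)
    # With these numbers the duties no longer compete: complying is also
    # the more profitable course.
    print(ignore, comply)  # 1600.0 vs. 1700.0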

 

The moral imagination might think it needs to discipline the market, but the opposite is likely to be the case more often—we will become more conversant in the economics of morality.  Indeed, we could imagine getting to the point where no moral claim for reform will be taken seriously without at least a hypothetical proposal, as a kind of metric, of a boycott that would likely do more good than harm.  And perhaps this is the kind of vocabulary that we would need in order to speak seriously about regulation, including in the financial system.  In other words, before we could expect serious answers to the question of what kind of regulation we need to prevent crises similar to the one we are witnessing today from occurring in the future, we should be asking about the moral economy we would have to share, at least minimally, before the other question would become meaningful.  The moral economy, then, the mapping of our imperative space upon declaratives, would have to become part of economics.  (To think about it grammatically, we would be moving from "I want x," which is technically a declarative but just barely [if x were in view, you wouldn't need the sentence], to "I would compose myself x-ly," which might open a multilayered ethical and esthetic discussion rather than prompting a rapid-fire comparison of preferences.)

 

Perhaps the assumption that certain moral and ethical dispositions (certain patterns in the relations between ostensives, imperatives and declaratives) are required for a healthy political economy would help account for, and would benefit from exploring, the one time and place in history, so far as I know, that genuinely approximated a free market:  the 19th century Anglosphere, the U.S. and Great Britain (and Canada?) in particular.  One of the greatest accomplishments of early modern bourgeois culture was the conversion of aristocratic into republican values, as notions like "nobility" and "virtue" came to be attached to action and character as opposed to being markers of social class.  The "gentleman" and the "lady" were critical results of this process, and these figures eased the transition from status to individuality, maintaining their currency until very recently—only the cultural revolution of the 60s decisively dealt them their death blow (how long before the terms no longer even grace our public restrooms?).  The gentleman and the lady domesticated ancient notions of "honor," directing them away from violence perpetrated in the name of tribal and patriarchal prerogatives and protection and towards a harmonious balance between public and private life, centered on the division of sexual roles in the nuclear family.  My point here is not that we can revive ladies and gentlemen, but simply that no account of free market economics would be complete without them—without the assumptions of upward mobility and generational transmission through discipline and effort, including female responsibility for sexual deferral and "manly" self-reliance, implicit in these "categories," the daunting rigors of Victorian laissez-faire economics would be unthinkable.  An originary political economy today, then, would likewise have to study the novel forms of individuality and family life emergent today.  An unsentimental and disinterested observation of today's children and youth—if we can impose upon ourselves the discipline restraining us from either marveling at their supposedly splendid new qualities or flunking them due to their deviation from a more familiar model—would certainly be a good place to start, especially given the almost absolute independence and simulated internal coherence accredited to the world of teenagers in particular by the contemporary market.  Maybe the representation of children holds at least one key to unlocking today's political economy.

August 5, 2009

Beginnings in the Middle: Presence and the Infinitesimal

Filed under: GA — adam @ 11:44 am

Transcendence suggests something outside of us sustaining us; presence involves all of us sustaining the same object of attention.  This mutual attending is overlapping and continuous—your attention attracts mine, which takes on a different shape and intent, which attracts a third in some new manner, which finally comes back to you as you take a new look at the object in question.  What keeps this attention chain going?  We want to keep things going—we occupy a scene jointly, and we want to remain on the scene because if we are not on a scene we are nowhere.  This absolute need for scenicity accounts for the ecstasy of the mystic and the teenager driven by boredom to do just about anything.  We are always complementing a scene, completing it, creating a scene within a scene, entering a meta-scene purporting to include the scene we are on—drawing upon the resources of the scene so as to remedy some felt deficiency.  Indeed, any scene requires some feeling of deficiency; otherwise there’d be no need to keep it going.  Transcendence has us protect the separateness of the object; presencing is interested in the continuity of the scene—the object, then, would tend to devolve into a series of more or less premeditated pretexts for doing so.

 

We keep a scene going by iterating the sign which constitutes it—there are so many ways of doing this that they couldn't be catalogued in advance; indeed, any iteration only discovers what it is doing in the midst of doing it.  Fulfilling an order iterates a sign, as does defying it; answering a question or asking one; redirecting attention from speaker to statement, or statement to speaker; introducing or subtracting irony; shifting the distribution of silence and speech among the participants in a conversation, etc., etc.  All that matters is that each element of the scene can be related to every other element in however roundabout a manner—if there's cross-referencing, there must be something getting crossed in the references, and we could call that something the articulation of sign and object providing the scene's "texture."  Of course, all this is extraordinarily complicated, as complicated as we want or need to make it.  On the most elemental level, though, one scene is always passing out of existence and a new one coming into being.  Indeed, how would we know when a scene has ended if not from within a new scene?  How, then, did we transition from one into another?  That a scene must be organized around some mimetic crisis—actual, imminent, anticipated, simulated as a kind of rehearsal—which the sign constitutive of the scene frames and defers sharpens the question:  how and when do we know that a scene has been closed, and what does this knowledge consist of?

 

We must posit, I would suggest, a third scene, a disciplinary scene constituted so as to identify the boundary between the two scenes; to identify the boundary is also to identify the transition from one to the other, because it is the transition that creates the boundary.  Let’s say any scene has a beginning, middle and end.  Our problem is to get from an end to a new beginning.  We could say that a scene ends when a sign is generally shared, which will then set the terms for the restarting of mimetic desire and rivalry—we could posit a clean break between any two scenes.  Of course, I am proposing an ideal reconstruction here—there are millions of scenes passing through each other all the time.  That obvious observation doesn’t help us, though, if we consider the scene the basic “unit” of social, cultural and historical analysis.  If we want, for example, to treat the Holocaust as a scene, we must assume it began and ended, and we could argue about where to place those dates.  Or, we can say that in a sense it hasn’t ended, and that the sign that emerged in its wake is still active, still tenuous, and has not given way to a new one.  We could argue over this as well and, for that matter, develop a mode of analysis that compares differing ways of circumscribing the scene; but, again, these arguments and analyses only make sense if we assume it would be meaningful to posit a beginning and end.  And we can’t help but do so—it is built into our language.

 

The third, disciplinary, scene, then, has its beginning in the middle of the old scene, its middle on the boundary between the end of the old and the beginning of the new scene, and its end in the middle of the new scene.  In the middle of the first scene, the sign has begun to circulate and divergences in its emission have emerged, making an inquiry into its modes of iteration possible; the middle of the disciplinary scene is the midst of its own (reflexive) process of iteration and norming, and in that light the boundary between the two scenes can appear as a distribution of sign users normalizing the previous sign and sign users issuing the new one.  To put it another way, when we are single-mindedly focused, as artists are in making the minutest and most crucial marks or scientists in detecting the slightest shifts, on figuring out what counts as the sign we are ourselves iterating, then we are prepared to see the new sign emerge against its background. 
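
(A minimal sketch of the purely structural point, with invented numbers: if we picture the old and new scenes as adjacent intervals, then a disciplinary scene running from the middle of the one to the middle of the other has its own middle exactly on their boundary.  The representation of scenes as intervals is my own convenience here, not part of the argument.)

    # Scenes pictured, purely for convenience, as intervals on a line.
    from dataclasses import dataclass

    @dataclass
    class Scene:
        start: float
        end: float
        def middle(self):
            return (self.start + self.end) / 2

    old = Scene(0.0, 10.0)
    new = Scene(10.0, 20.0)   # the boundary between them sits at 10.0

    # The disciplinary scene begins in the middle of the old scene and ends
    # in the middle of the new one, so its own middle falls on the boundary.
    disciplinary = Scene(old.middle(), new.middle())
    print(disciplinary.start, disciplinary.middle(), disciplinary.end)  # 5.0 10.0 15.0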

 

The point of these “methodological” speculations is to provide a model for dealing with infinitesimals in originary thinking.  I know that infinitesimals are an important topic in mathematics, but I don’t really understand any of that.  What I mean by infinitesimals is boundaries and thresholds, where we must account for the emergence of something qualitatively different, the emergence of which, then, cannot be completely accounted for in terms of what came before.  Between the mimetic crisis and the sign is an infinitesimal—the crisis is itself insufficient to account for the emergence of the sign.  The infinitesimal is inexhaustible—if I were to hypothesize, as the boundary between crisis and sign, a relation between figures on the scene, one of whom is accelerating his grasp and the other recoiling, so as to posit a “turning point”—well, within each of those figures we could likewise posit a boundary, locating someone accelerating his own grasping in response to another’s more intense acceleration but nevertheless slowing his rate of acceleration, and so on, ad infinitum.  The infinitesimal must be felt at the time but could only be represented after the fact; moreover, representations of the infinitesimal keep producing more, including within our representations.  I am proposing, I suppose, albeit in a very different sense than some theologians, a God of the gaps.  Insofar as our conflicts always involve a relatively stable object of desire at some measurable distance from us, the infinitesimal interrupts our rush towards the object by, in the manner of Zeno’s paradox, always introducing intervening steps conditioning our possession.
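
(For readers who do want the Zeno reference spelled out, the textbook illustration is simply that the intervening steps multiply without end while the distance they cover stays finite, as in the geometric series

    \sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^{n} = \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots = 1.

This is only the standard mathematical picture; nothing in the originary use of "infinitesimal" here depends on it.)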

 

If there is a way of revering the infinitesimal, it is through intensified attention to boundaries and thresholds, including viewing all events and objects as constituted through boundaries and thresholds.  Grammatical analysis is especially well suited for such reverence for the infinitesimal.  The imperative emerges out of the "inappropriate" ostensive—there's a boundary; the interrogative emerges out of a margin of uncertainty in the interlocutor's obedience to the imperative—another boundary; the negative ostensive barely modifies the interrogative—ditto; finally, I believe the verb emerges as an imperative attached to the negative ostensive in the event of the former's failure and consequent reversion to an imperative crisis—which would mean that all of the aforementioned boundaries reside in the declarative as well. 
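
(The chain of boundaries just listed can be written out flatly, which is all the following sketch does; the wording of each boundary is lifted from the paragraph above, and the little function only gathers the boundaries that would, on this account, reside in any given speech form.)

    # The derivation chain as stated above: each speech form emerges at a
    # boundary out of the previous one, and the declarative inherits them all.
    DERIVATION = [
        ("ostensive", "imperative", "the 'inappropriate' ostensive"),
        ("imperative", "interrogative", "a margin of uncertainty in obedience"),
        ("interrogative", "negative ostensive", "a bare modification of the question"),
        ("negative ostensive", "declarative", "an imperative attached to the negative ostensive"),
    ]

    def boundaries_residing_in(form):
        """Collect every boundary crossed on the way to the given speech form."""
        collected = []
        for source, target, boundary in DERIVATION:
            collected.append((source, target, boundary))
            if target == form:
                break
        return collected

    for src, tgt, b in boundaries_residing_in("declarative"):
        print(f"{src} -> {tgt}: {b}")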

 

We could note the infinitesimal on the boundaries between these different modes of utterance.  An ostensive that “presents” as an imperative (or vice versa); an imperative that presents as an interrogative (and vice versa); the same with interrogatives and declaratives; imperatives embedded at different “levels” within declaratives, and so on.  Even more interesting is to treat these boundary manifestations as presenting differently for different interlocutors and readers; even more, to treat these different presentations, and the way they would come together to compose a scene, as maximally consequential (the smallest change that would make the biggest difference is always, it seems to me, what we are looking for as theorists).  And then we can iterate those sentences, to test out those consequences.  The sentences we work with should be exemplary ones, upon which we can hang larger pieces of text, and entire texts.

 

So, we can read declaratives as deferrals of imperatives (dangerous, or insistent and impossible, or incompatible ones), deferrals effected by extending those imperatives into interrogatives (just letting an imperative sit for a moment sets this conversion in motion); the articulation of noun and verb then extends the interrogative to the point where a new imperative set is created:  an imperative to iterate the noun, or name, generated in this new linguistic event—an iteration that can involve assent to the "proposition," its modification or qualification, practical implications, etc.  All of these processes are reversible intellectually—such reversals are also iterations—and so we have the makings of a very simple mode of thinking for analytical, interpretative and esthetic purposes.  We can treat a question like an imperative and see what follows; or we can posit and examine a hypothetical array of imperatives assimilated to a declarative.  And any utterance would be bracketed by an ostensive-imperative articulation on one end and an imperative-ostensive articulation on the other, each with its own set of boundaries (when, exactly, can we say an imperative has been obeyed?)—in other words, any sentence can be resolved into a kind of "exclamation" that opens it and leads into an imperative, and an ostensive that would "verify" or "authenticate" that the imperative to iterate the sentence has been obeyed.  Very often these analyses or iterations will involve little more than minor word additions and subtractions—"He will come here" can be resolved into "Will he come?" "Come!" (but also "Make him come!," among other possibilities) and "Here!"  The imperatives embedded in sentences can, with little more difficulty, be articulated in various ways:  "I will wait" makes sense differently if we see it as a command to "Stay here with me!" or "Go ahead without me!" or some oscillation between the two.
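
(Purely to fix the worked example in place, here is a toy "resolver" in Python: it knows only the two hypothetical sentences discussed above and simply returns the resolutions proposed for them, since the actual work of resolution is done sentence by sentence rather than by rule.)

    # A lookup table, not an analyzer: the resolutions are the ones proposed above.
    RESOLUTIONS = {
        "He will come here": {
            "interrogative": "Will he come?",
            "imperatives": ["Come!", "Make him come!"],
            "ostensive": "Here!",
        },
        "I will wait": {
            # The same declarative can embed opposed imperatives.
            "imperatives": ["Stay here with me!", "Go ahead without me!"],
        },
    }

    def resolve(sentence):
        """Return the ostensive/imperative/interrogative articulation of a
        declarative, if one has been worked out by hand."""
        return RESOLUTIONS.get(sentence, {})

    print(resolve("He will come here"))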

 

As an example of what can be disclosed through the inquiry into the grammatical infinitesimal:  much of the leftist turn in the academy (from “ideology critique” to “cultural studies”) can be reduced to the following, simple imperative:  reduce declaratives to imperatives.  More expansively, reduce the presumably innocent and apparently ennobling declaratives central to bourgeois life to a series of insidiously concealed imperatives—imperatives to accept your lot, do what you are told, blame the wrong people for your problems, etc., etc.  It seems to me we could “demystify” a lot of victimary studies in this way, simply by pointing out that of course declaratives embed imperatives, and they operate much more complexly than dominant assumptions about “dominant assumptions” tend to assume.  (On the other hand, Louis Althusser’s notion, from his essay “Ideology and Ideological State Apparatuses,” of “interpellation” as a “mechanism” by which we are made recognizable within the social order might become interesting in a new way.)  And if we were to treat these leftist theses as the command manuals they also are, what might we reveal?  Such an approach can include complex, detached analyses, but also the kinds of performative gestures the Left has gotten much better at than conservatives.  And, as I have already suggested, this “method” would rival “ordinary language” and “speech act” theories in drawing upon any language user’s tacit understanding of the way language works:  we all know when, to take just one example, in hearing a simple declarative sentence, we feel like we have been given an order or ultimatum.  And we are all capable of becoming much more attentive to such things, in ways and with results that would utterly confound any assumptions about “power relations.”

 

Now, if we convert these terms, as I suggested in my previous post, into a conceptual vocabulary capable of registering all social relations, we see the significance of the infinitesimal on another level.  If we can see politics as the compulsion to ensure the convertibility of imperatives and declaratives, through the formulation of declaratives that can include incompatible imperatives, then we can scrutinize political discourse very closely in terms of which of our imperatives are convertible and which aren't—we could assume that any political principle would reconcile only the most urgent imperatives, leaving political discourse frayed around the edges.  The main tasks of politics—the generation of new declaratives, or "principles"—would involve tying up those loose ends without letting the already established ones come undone.  "Health care is a right" is a declarative, and it must bear some relation to the declarative "all men are created equal"—what relation?  If we could find exemplary imperatives that could be "backed" by one and not the other, or that could be backed by both, we would have answers, or at least sites of discussion.  Perhaps new formulations of either or both of these declaratives would embed the imperatives that don't seem to be indicated by both—we could treat such problems as assignments, very literally:  compose a declarative sentence that would lead to this set of imperatives, or that would accommodate these several; we can then impose further rules, limiting the length of the sentence, or insisting it include certain words or kinds of words, based upon an esthetics and history of the political sentence, etc.  Thus would political discourse meet grammatical analysis, as the "middle" of our grammatical analysis would produce new political "beginnings." 
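
(The "assignment" can even be stated mechanically, which is all the following sketch attempts: given a candidate declarative, a set of imperatives it is supposed to back, and some formal rules about length and required words, check whether the candidate passes.  The criterion for "backing" used here, shared vocabulary, is a crude placeholder of my own, and the example sentence is likewise hypothetical; the real judgment would be the ethical and esthetic discussion itself.)

    # A toy checker for the assignment: does this declarative plausibly back
    # these imperatives, under formal rules about length and required words?
    def check_assignment(declarative, imperatives, max_words=12, required_words=()):
        words = declarative.lower().split()
        if len(words) > max_words:
            return False, "sentence too long"
        for w in required_words:
            if w.lower() not in words:
                return False, f"missing required word: {w}"
        # Crude placeholder: the declarative must share at least one word
        # with each imperative it claims to back.
        for imp in imperatives:
            if not set(words) & set(imp.lower().strip("!").split()):
                return False, f"does not obviously back: {imp}"
        return True, "candidate passes the formal rules"

    ok, msg = check_assignment(
        "Everyone is entitled to basic health care",
        ["Fund basic health care!", "Do not exclude anyone from care!"],
        required_words=["health"],
    )
    print(ok, msg)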

 
