GABlog Generative Anthropology in the Public Sphere

October 11, 2009

Common Sense

Filed under: GA — adam @ 7:10 pm

The originary hypothesis can yield for us a phenomenology and poetics of everyday life, and perhaps it can even do so in a manner respectful of reality, which is to say one that doesn’t complain about the ways in which people don’t correspond to one or another “model” we have arbitrarily established for them.  Now, the sentence I just wrote is a manifestation of resentment (that doesn’t mean I’m taking it back!)—my description of my resentment would be that it is a counter-resentment to the resentment of elites bent on “improving” their fellow humans, i.e., making them more like the improver.  But, of course, we’d all like to say our resentments are mere “counter”-resentments, evening out the scales that have been placed out of balance by some previously manifested resentment.  And, fortunately, we can all say that, and we would all be right, because all resentments are countering another one, and resentment is nothing more than the imperative to even something out, to give something its “due.”  The sentence I just wrote, for example, is a resentful attempt to counter any resentment that claims to transcend resentment, and it anticipates its vulnerability to the same charge because, indeed, that charge will also always be both true and false:  Every resentment, insofar as it is given shape, does represent, in however small or imaginary a space, an infinitesimal balancing out that sustains some presence and to that extent can be shared and “transcendent.”

 

If we can speak of resentment as an “evening out,” creating “planes” along which other resentments can be laid, then we can also speak about “common sense” as a kind of calculus of resentment—each of us has to figure out ways of “fitting” our resentments within a present configuration that always threatens, however implicitly or distantly, to exclude our own.  One of the (in my view) great, and still neglected (toward what and whom is that resentment directed?), modern Western philosophies is the “common sense” thinking founded by Thomas Reid and sustained and transmuted by American pragmatism (at least Peirce—who at times referred to pragmatism as “critical commonsensism”—and James), the ordinary language philosophy of Wittgenstein and Cavell, and the “personal knowledge” or “tacit dimension” of Michael Polanyi. Reid’s common sense philosophy was arguably the philosophical foundation for the Declaration of Independence’s assertion of “self-evident” truths, because that is, indeed, Reid’s central claim:  that our fundamental modes of experiential access to reality are grounded in axioms that cannot be denied, or even questioned, without thereby undermining the experiential basis we would need to question those axioms in the first place.  So, for example, one couldn’t deny that we can understand each other when we speak, because, before whom is that denial made?  Clearly someone assumed capable of understanding it.  And, even referring to the endless litany of actual misunderstandings assumes that we know what it would mean for us not to misunderstand each other.  We can understand such axiomatic access to reality (which Reid assumed couldn’t be explained, just accepted, and which Eric Gans in Science and Faith refers to as “auto-probatory”: something which could not be said without having had the experience it refers to) in terms of the articulation of resentments embedded in language.
Indeed, resentment itself is the most immediate auto-probatory experience—everyone has experienced resentment, and everyone can acknowledge anyone else’s resentment (however odd the object of that resentment might appear) and to deny this would be to affirm it because denying one’s participation in the universal experience of resentment would be the most transparently resentful stance imaginable.

 

So, we can account for every scene in terms of the interactions between various calculi of resentment—I resent A because he got the job I wanted last year but B outwardly at least admires A (shares some of his resentments) and I can’t bear to have both B and A resenting me so I moderate my resentment toward A into a mild irony that can be recalibrated depending on the possibility of B no longer caring about maintaining appearances, or some C coming along who could absorb some of the resentment directed towards me, or who may take A’s job making it possible for my resentment towards A to be converted to a shared resentment towards C, etc.  Involved in all of this is a profound, and largely tacit, anthropological knowledge which manifests itself in all the maxims of everyday life that we all iterate constantly, and which are all pragmatic ways of measuring degrees and distinguishing modalities of resentment:  “give him an inch, he’ll take a mile,” “what goes around comes around,” etc.  Some of us, at least, resent the “clichés,” as there is always some felt sense that they conceal a more differentiated reality that we might attain privileged access to, and that is also true (and also very easily converted into a set of maxims/clichés), but I believe there are very few concrete interactions between individuals that don’t require the buffering mechanism of these anthropological maxims; or, in compensation, the creation of new ones. 

 

It is very important that resentment keep getting circulated like this because the alternative is the truly deadly resentment against reality as such that is characteristic of Gnosticism.  In more linguistic terms, we might see Gnosticism as an uncompromising abhorrence of maxims, of any sign that conceals or moderates rather than fully embodying the infinitely differentiated reality that we all intuit in our “best” or most “intense” moments.  This global resentment can’t be countered by more local ones—rather, it can only be dissolved by the most fundamental of all ostensive dispositions, gratitude.  A sheer gratitude for reality neutralizes resentment towards reality, and is therefore also a critical component of common sense.  The syntactic form that corresponds to the ostensive is, I would say, the exclamation:  “what a lovely day!” expresses that originary sense of gratitude as does “how awful!” because the latter expression equally presupposes some non-awful condition that allows us all to immediately recognize how awful the one indicated is.  And, of course, “thank you!” is an exemplary exclamation, one which simply does what it says, and does it only in that specific instance.  I wonder whether one might say that Gnostics are likely to find the exclamation (and above all thanking) especially obnoxious, in its call for immediate assent and suspension of any “critical” sense of, or suspicion towards, reality.

 

If common sense is composed out of a symmetrical adjustment of resentments grounded in gratitude toward reality and manifested in maxims, then we can point to something universally “self-evident” in common sense.  Clearly, the arrangement and dispersal of resentments will vary from place to place and time to time, sometimes widely, sometimes so much so as to be incommensurable.  But we have, and can devise, maxims to account for these variations and to adjust for them; this may be an expression of faith, but I am certain that anyone would be able to piece together a workable sense of a configuration of resentments bounded by gratitude wherever they go.  Anthropologists do it with “primitive” societies, and members of those societies are able to do it when they wind up in ours.  We can’t know in advance what will count as abuse or a violation of norms, but we know that something will; the same goes with expressions of affection, vows, promises, and so on.

 

I am borrowing a bit from Hannah Arendt in this discussion, and one of Arendt’s concerns regarding common sense in the modern world was that it can be obliterated by ideology and, at the most extreme, totalitarianism—manifestations of that resentment toward reality I just associated with Gnosticism.  Common sense is strikingly unable to defend itself against charges that it is “naïve,” “irrational,” “hide-bound,” “unthinking,” “complacent,” and, of course, today all that also means “racist,” “sexist,” “homophobic,” “fascist” and so on.  The only defense common sense has is that of the hedgehog, although in a somewhat (but not completely?) different context than that in which that creature stands in as a mascot for GA:  all common sense can do is roll itself up in a ball and let its needles protect it from the ideological foxes.  The “needles” are its maxims, and the most privileged and central of those maxims are what we call “principles.”

 

Here is Friedrich Hayek on principles:

 

“From the insight that the benefits of civilization rest on the use of more knowledge than can be used in any deliberately concerted effort, it follows that it is not in our power to build a desirable society by simply putting together the particular elements that by themselves appear desirable. Though probably all beneficial improvements must be piecemeal, if the separate steps are not guided by a body of coherent principles, the outcome is likely to be a suppression of individual freedom.

The reason for this is very simple though not generally understood. Since the value of freedom rests on the opportunities it provides for unforeseen and unpredictable actions, we will rarely know what we lose through a particular restriction of freedom. Any such restriction, any coercion other than the enforcement of general rules, will aim at the achievement of some foreseeable particular result, but what is prevented by it will usually not be known. The direct effects of any interference with the market order will be near and clearly visible in most cases, while the more indirect and remote effects will mostly be unknown and will therefore be disregarded. We shall never be aware of all the costs of achieving particular results by such interference.

And so, when we decide each issue solely on what appears to be its individual merits, we always overestimate the advantages of central direction. Our choice will regularly appear to be one between a certain known and tangible gain and the mere probability of the prevention of some unknown beneficial action by unknown persons. If the choice between freedom and coercion is thus treated as a matter of expediency, freedom is bound to be sacrificed in almost every instance. As in the particular instance we hardly ever know what would be the consequences of allowing people to make their own choice, to make the decision in each instance depending only on the foreseeable particular results must lead to the progressive destruction of freedom. There are probably few restrictions on freedom which could not be justified on the ground that we do not know the particular loss it will cause.

That freedom can be preserved only if it is treated as a supreme principle which must not be sacrificed for particular advantages was fully understood by the leading liberal thinkers of the nineteenth century, one of whom (B. Constant) described liberalism as “the system of principles.” Such also is the burden of the warnings concerning “What is Seen and What is Not Seen in Political Economy” (F. Bastiat) and of the “pragmatism that contrary to intentions of its representatives inexorably leads to socialism” (C. Menger).

All these warnings were, however, thrown to the wind, and the progressive discarding of principles and the increasing determination during the last hundred years to proceed pragmatically is one of the most important innovations in social and economic policy. That we should foreswear all principles of “isms” in order to achieve greater mastery over our fate is even now proclaimed as the new wisdom of our age. Applying to each task the “social techniques” most appropriate to its solution, unfettered by any dogmatic belief, seems to some the only manner of proceeding worthy of a rational and scientific age. “Ideologies,” i.e., sets of principles, have become generally as unpopular as they have always been with aspiring dictators such as Napoleon or Karl Marx, the two men who gave the word its modern derogatory meaning.

If I am not mistaken this fashionable contempt for “ideology,” or for all general principles or “isms,” is a characteristic attitude of the disillusioned socialists who, because they have been forced by the inherent contradictions of their own ideology to discard it, have concluded that all ideologies must be erroneous and that in order to be rational one must do without one. But to be guided only, as they imagine it to be possible, by explicit particular purposes which one consciously accepts, and to reject all general values whose conduciveness to particular desirable results cannot be demonstrated (or to be guided only by what Max Weber called “purposive rationality”) is an impossibility. Though admittedly, ideology is something which cannot be “proved” (or demonstrated to be true), it may well be something whose widespread acceptance is the indispensible condition for most of the particular things we strive for.

Those self-styled modern “realists” have only contempt for the old-fashioned reminder that if one starts unsystematically to interfere with the spontaneous order of the market there is no practicable halting point, and that it is therefore necessary to choose between alternative systems. They are pleased to think that by proceeding experimentally and therefore “scientifically” they will succeed in fitting together in piecemeal fashion a desirable order by choosing for each particular desired result what science shows them to be the most appropriate means of achieving it.”

I’ll just mention that the contempt for “ideology” here is for “ideology” in a different sense than that in which Arendt sees the danger for common sense—Arendt sees ideologies as “scientific,” totalizing explanations that claim to account for and guide all human affairs, and that mark those outside their terms as “retrograde” and ultimately superfluous.  Leaving that aside, the respective arguments of the two great anti-victimary thinkers converge.  Common sense can only protect itself by defending, “unreasonably,” its maxims:  “keep your nose out of my business,” “live and let live,” and, more politically, “there’s no such thing as a free lunch,” to mention a few.  If you tell me that you need to mind my business, just this once, because there’s an emergency, I might be able to see the immediate benefit or necessity, but I will never know what I have lost by letting you do so—I won’t know, first of all, what immediate solutions I might have improvised on my own and, more importantly, what capacities and possibilities I will have surrendered by losing the habit of minding my own business.  Similarly, we will never know what we have lost by letting our fear of unemployment or a credit freeze lead us to give politicians the right to determine terms of trade, to benefit one market competitor over others, to regulate the internal operations of businesses, and so on.

The relevance of this discussion to, say, the current health care debates, is obvious.  Sarah Palin’s warning about “death panels” was simply the stance of common sense:  if the state takes more control of health care, then the state will end up making more and more life and death decisions for us, to the point of determining whether saving or improving one’s life fits a cost-benefit analysis established by experts.  The defenders of Obamacare, meanwhile, claim to be guided by “purposive rationality,” and to “proceed experimentally” (if you don’t want the “public option,” we’ll try “co-ops”!), realizing, some consciously, others partially, others not at all, that the more the state interferes in the workings of a particular segment of the “spontaneous order of the market,” the more any future “problems” will automatically be framed so that only the state (and its experts) can have the “solutions.”  “Death panels” is just a common sense way of compressing this understanding into maxims—and I, for one, couldn’t care less what the Democratic legislators (or, really, some combination of their aides, lobbyists, assorted activist groups, etc.) really “meant” when they put a particular provision in the 1,000-page-long bill (a provision that will, later on, be interpreted by one of their experts).  And we don’t know what innovations in the complex relations between patients/consumers, care givers, insurance companies, medical technology, etc., will not take place because of this dramatic shift towards central planning.

The survival of free citizens depends upon strict, unyielding, “dogmatic” adherence to the fundamental, common sense, maxims of a free society:  rewarding failure gets you more of it (no bailouts!); wealth results from production, not expenditure (no stimulus!); enemies are to be fought, allies supported (no appeasement!); rights are what you can do without government interference, not what the government gives you (health care is not a right!), and many more.  Notice how different these maxims are from, say, “everyone should have health care” or “gay marriage is a right”—the maxims of freedom articulate power and accountability, the slogans of soft tyranny demand provisions and donations without recompense or corresponding responsibility.  Now, needless to say, our elected officials will very often go right ahead and do these things we insist they resist; occasionally, they will be right and responsible to do so (sometimes one really does have to allow for exceptions), and more than occasionally we will, “hypocritically,” re-elect them when they do so, whether they are right or not.  But none of that matters—politicians can corrupt themselves and our principles (they have risks and benefits to weigh, and we can’t expect them to have interests higher than their own professional survival, and when they do they also expect to take the hit for betraying principles in the name of our collective survival), and our principles will survive.  What our principles can’t survive is the failure of a solid majority of citizens to insist upon their application in undiluted form, spontaneously, reflexively, unambiguously and insistently.  And in that way, when our common sense enables us to see that their violation has been a bit more egregious than usual this time, so egregious that maybe common sense will no longer help us to navigate a new world of arbitrary regulations and authorities, that common sense can become revolutionary.

Common sense is the possession of the man in the middle—not the Big Man, with wealth or power, or those living on the margins of society.  The cultivation of common sense requires you to confront limits regularly, but also to have some capacity to shape and maneuver within those limits; it requires you to see the consequences of your actions, and not be able to project those consequences onto the “long term,” or lose them in the tangled webs of unintended consequences and intersecting intentions.  Maintaining your common sense when you get too high or too low calls for extra doses of discipline, and perhaps some continuity with a previous condition (such as friends and family who knew you when you were in the middle).

In a less grave way than totalitarian rule, I wonder whether today’s victimary popular culture impairs common sense.  A critic whom I admire, James Bowman, writes often of the dominance of fantasy in today’s popular culture, and the way this dominance has seeped into public and political life.  Bowman finds it disturbing that even science fiction films like the recent Star Wars don’t feel obliged to play by the rules of the “reality” they construct for themselves; one might suggest that the Obama cult has been a result of this privileging of fantasy over reality.  The recent awarding of the Nobel Peace Prize to Obama is an example, something I wouldn’t have accepted as a premise for a Saturday Night Live skit, and yet it happened—the award committee has made a lot of mistakes (and worse) before, but this must be the first time the award was granted based on what the committee imagines all of us are imagining the recipient might accomplish (and perhaps it’s the first time a President was ever elected on a similar basis).

It is also fascinating how the new fragmented media environment allows large groups of people to see those on the other side through hand-picked fragments aimed at reducing them to familiar stereotypes, but the enduring political and economic institutions serve as a check here.  Indeed, the widespread opposition to Obamacare, whatever it actually is, suggests to me that when it comes to your own private sphere of existence, the skepticism and shrewdness we associate with common sense are still intact.  Still, I can’t help but see some fragility here, simply due to repeated violations of the common sense maxims I mentioned earlier, over many decades by now—so that it actually makes sense to a lot of people to say that the government wasting a trillion dollars will return us to prosperity.  A new reality has been constructed through the articulation of the welfare-warfare-regulatory-media-academic state (even though I think a good bit of the warfare part was necessary), and while one can’t just say that it’s an artificial reality, it is predicated upon the possibility of deferring payment and consequences indefinitely.  A Ponzi scheme is also real for the people first in, who do get paid.  Popular culture erodes common sense by valorizing Ponzi-scheme models of reality, including the valorization of esthetically appealing and successful (i.e., unpunished) criminals.

Still, it seems to me commonsensical to insist upon the self-evidence of optimism.  No matter how much I despair, no matter how unlikely it seems that a disastrous course will be arrested, the very articulation of that despair (even just to oneself) implies the possibility that it will reverberate with another.  And if with another, why not yet another?  If I bewail the coming fall of this civilization, that very complaint, precisely to the extent that it is true and prophetic, implies that the principles of civilization need not disappear along with this particular one—human beings have suffered such catastrophes and recovered and renewed, and they might do so again.  If I am speaking, even if I disavow any communication with any of my contemporaries, I implicitly assert the possibility of communicating with some kindred spirits yet to be born, maybe centuries hence, maybe mediated by layers of interlocutors and interpreters who understood me only partially, but enough to pass my words along—and why should that communication be any less valuable?  To put it simply, putting forth a sign entails faith in someone receiving and disseminating it in turn.  Anyone without such minimal optimism (itself a form of “gratitude”) would not bother to speak at all, and anyone who does speak while denying that minimal optimism is to that degree dishonest—indeed, culpably ungrateful—in his or her speaking.

 

 

October 5, 2009

The Political Economy of Freedom and Sovereignty

Filed under: GA — adam @ 7:39 am

The far Left and the Libertarian Right converge on the same enemy:  the unholy alliance of the State and Big Business.  On what victory in the struggle would mean they diverge:  the Left, of course, ultimately wants Big Business swallowed up in the rational and humanitarian State, while the Libertarians want the state abolished (they distinguish “state” from “government,” supporting a minimal version of the latter—there seems to be a small anarchist contingent, though), in which case businesses might become big but not Big—they would assume their own risks and receive no protection, direct or indirect, from their competitors.  Marx had an explanation for this increasingly intricate and essential alliance:  the state never was anything other than a “general committee of the ruling class,” which under capitalism meant the protection of bourgeois private property; so, when capitalism moves into its more advanced stage, and must confront deadly new resentments (the proletariat) and dangers (the threat—and promise—of military competition between capitalist states), the state must itself expand so as to take on these tasks—and the “monopoly capitalists” will be happy to let it do so, even if they grouse occasionally.  And the libertarian explanation is… well, other than some vague references to our having forgotten our principles, it doesn’t seem to me they really have one—which would be why someone like Ron Paul exceeds even the most fevered Leftists in his conspiracy-mongering.  Someone must have made a dirty deal behind closed doors.

 

If entrepreneurs are essentially a predatory class, as I hypothesized in my “Hunters and Craftsmen” post, then the explanation is not that difficult.  Indeed, Libertarians are well aware, going back to Adam Smith, that any time businessmen get a chance to receive some privilege or monopoly from the state they grab it, the free market be damned.  Of course, entrepreneurs are a very peaceful predatory class, for the most part, and are themselves always vulnerable to expropriation—hence their alliance with the state is fruitful in many ways.  But predation within the peaceful space created by stable state power is still predation, and we must distinguish the small marketplaces that spring up when the division of labor has expanded enough so as to make everyone dependent upon others (even allowing for merchants to mediate between communities, including distant ones) from the power of money within a system of trade and ultimately a fully developed market system.  A baker or carpenter who brings his goods to market is still just a baker or carpenter, but moving capital around requires no “instinct of workmanship” at all.  The difference is between a stable division of labor and one that is in continual upheaval.

 

I hope I don’t need to, but just in case I hasten to add that there is no critique of capitalism or the market, or the entrepreneur here, either explicit or implicit.  “Predator” is just another way of speaking about the “Big Man”; without the Big Man, there would have been no way of centralizing resources needed to move humanity beyond the level of egalitarian hunter-gatherer tribes; civilization itself is predicated upon turning this predatory figure away from preying upon the weak of his own group toward defending that group against external predators (and this shift is predicated upon a truce between all the contending Big Men within the group); all I am adding is that the Big Man “function” continues to this day and that—this in my view reveals the Libertarian mindset, in all its provocative brilliance, to be utopian—we can’t imagine civilization without it.  For all our egalitarianism (which, I also hasten to add, is in its own way absolutely real, and a powerful check upon predation), there is almost never (I’m not sure I need the qualifier “almost”) a situation involving a group of people of any size that doesn’t generate a center of gravity—someone dominates the conversation; someone’s words or deeds were more memorable afterwards; someone’s judgment was deferred to; someone had to make the “call,” and in the end someone did; someone had to be blamed, and they were, etc.  It may be paradoxical, but precisely in free associations, hierarchies, however informal and provisional, become indispensable.

 

Whenever such hierarchies are made quasi-permanent and ritualized, we have sovereignty.  And sovereignty is the opposite of freedom.  But we can’t do without sovereignty—it meets some very definite human needs, and is, in fact, what people usually mean when they speak about “human nature.”  Sovereignty provides identity, which is first of all self-sovereignty, and, again, is inimical to freedom, as identity is just another set of shackles.  Sovereignty also provides recognition, which is impossible if we, as free beings, transmute ourselves continually.  Sovereignty is the source of pride and honor.  It provides continuity, security and protection.  And in its communal function it stabilizes the volatile system of mimetic rivalry.  Sovereignty is involved in Isaiah Berlin’s “negative” as well as “positive” freedom—it is the answer to the question of “how far should rule extend” (up until it meets my private sovereignty) and of “who should rule” (those who allot me a piece of their sovereignty so as to help me ensure my own).   And property is the form of economic sovereignty.  Freedom (freedom “of presence,” to make a conceptual distinction), meanwhile, is the act and process of becoming sign, and that can’t be represented or guaranteed in sovereign terms. 

 

So, an originary political economy would study the intersection of freedom and sovereignty in the way each of us articulates the imperatives sent our way by every other one of us.  I think here of the simple account of the workings of the free market as both the best means of satisfying needs and as a discovery procedure:  I have a certain amount of money, and I invest it in the raw materials and technology I need to produce a certain number of a particular kind of good, continually adjusting the price I ask until I have sold as many of them as I can within the period of time I can allow myself before I must reinvest or, perhaps, repay my creditors.  If I don’t manage to sell enough to recoup my original investment, or come close enough to reinvest, then I fail, we learn that there is insufficient demand for the product I was selling (there are enough of them out there already, or enough of a sufficiently close version, or it’s simply unneeded and unwanted), and someone else will invest in the technology I had used, ultimately putting it to better use.  There is no other way to find out what people want, or how resources should be allocated, than this one.

 

If I am successful, then I have expanded, however slightly, the social division of labor; or, in more anthropological terms, social differentiation.  If consumers are buying my product because it is cheaper than what they have been buying, then resources are freed up to buy other products; if they are buying my product because it is superior technologically or esthetically, then whatever skill or knowledge went into the innovation I have introduced to the world has been affirmed as a source of value, and will inspire various iterations; and, of course, if they are buying it because it does something new, then work that was previously done privately and/or less efficiently is now embedded in the new division of labor, or wholly new faculties and desires have been created, which are sure to lead to new demands and new innovations.  My interference in the existing social division of labor stimulates others to take advantage of the possible alignments now opened up, no less than the conquest of a part of a weakened state inspires other states to participate in re-dividing the state and redrawing existing borders—and this process could also be described in “law-like” terms.  The difference, and it is a big one, in economic conquest is less in the dispositions of the players than in the fact that social rather than physical terrain is at stake, and social terrain is both inexhaustible and subject to much more limited control.  (To extend the idea slightly, doesn’t advertising make perfect sense in these terms, as camouflage, bluff and feint, warnings to a population about to be besieged, pronouncements on the current status of operations, announcements of new imperial projects, etc.?)

 

George Gilder argued in Wealth and Poverty that far from being selfish, we should see the entrepreneur as remarkably altruistic, giving his time, energy and resources to help others.  Ultimately, there may not be so much difference between this claim and Ayn Rand’s harangues on the virtues of selfishness.  They are both the dispositions of the sovereign, who does favors for whom he will do favors and ill to whom he disfavors.  With all the current talk about how much regulation we need and what kind, it seems obvious to me that regulation is almost always beside the point because any new innovation and the subsequent reorganization of the division of labor will render the existing rules obsolete.  Regulations are always attempts to fight the last war, and arguments in favor of more of them are almost invariably obscenely oblivious to the advantages of hindsight—everything that seems to us to be a cause of whatever crisis or scandal occupies us should, as all reasonable people can agree, have been outlawed.  It might be more useful to think of entrepreneurship as—as I believe many entrepreneurs, in fact, do—a kind of war-making, maybe in conventional terms, with large, well-stocked armies and a long-term battle plan; maybe a kind of guerilla warfare; at times even a kind of terrorism.  The enemy varies—it may be those representing the existing division of labor, supported by state subsidies direct and indirect; or, it may be those instigating disruptions of the status quo—but I don’t see how one could deny that, in addition to producing, improving and disseminating their products, businesses spend quite a bit of time addressing the various fronts on which they fight:  labor, the state, or this or that agency, and their competitors.  (And even warriors, in the literal sense, must give quite a bit of attention to the production and distribution of goods, services, and the enforcement of the rights of various officials and “property” owners.)

 

If reasonable rules for waging war can’t be composed in the course of the battle itself, the various agreements forged going into and following battles (truces, alliances) can be enforced—that is, contracts.  There is even something a little irrational about this, as contracts must always presuppose a continuous state of affairs that makes their fulfillment possible, but the promise to abide by such shared hypotheses, even or especially when realities emerge which undermine them, is ultimately far more rational, because continuities can only be carved, to some extent arbitrarily, out of discontinuity.  In fact, all of the attention of government should be directed towards the strict enforcement of contracts, if only to give the signatories powerful incentives to construct their contracts carefully and make their reciprocal obligations as transparent as possible.  And this answers the question of how big the government should be:  as big as necessary to arbitrate effectively, indeed, unquestioningly, between the largest of the economic barons.  But not big enough to help any one of them if they lose their fiefdom. 

 

Consumer sovereignty is a nice slogan but unsupportable as an empirical claim.  The relation between consumers and companies is analogous to that between voters and political parties:  the organizations propose, and the consumers and voters dispose.  (Or, more provocatively, between occupied populations and their conquerors, taking into account the desire for an extremely gentle occupation regime, including one that realizes the benefits of recruiting its administrators from the population itself.)  That is, the final purchase validates or invalidates a particular use of capital within a generally valid field; consumers regularly bring down empires, but the imperial system itself remains.  In case it’s necessary, I’ll make it clear that this is not a critique—I see no reason to assume that consumers (or voters) should weigh in any more heavily than this.  But the capacity to redirect the channels through which capital flows plays a very important role morally, and in providing the tacit rules under which the system operates.  It certainly makes a big difference whether the most unhealthy fast food restaurants or diversified, and increasingly tasty, health food alternatives prevail; or whether the main streets of medium-sized cities are littered with strip clubs.  Such redirections of capital in turn depend upon, and register, the degree of thriving of families, churches and other neighborhood institutions.  Indeed, I think those political movements likely to produce the most lasting effects will be those which focus on modifying consumer behavior, directly (through boycotts and savvy ad campaigns) and indirectly (by strengthening civil society).

 

The tension between the entrepreneur and the “craftsman” so evident in Veblen’s The Theory of the Leisure Class lies, I think, in the way outlays of capital continually upend—indeed, have their very significance in upending—the existing division of labor.  Veblen associates the instinct for workmanship with knowledge of causal relations in nature (as opposed to the superstitious nature of the “predatory” classes), which makes sense, but equally important here are traditional methods, guild-style relations, and an esthetic sense.  The most virulent opposition to capitalism has often come from those pushed out of their artisan status by mass production—much of the rhetoric, if not the reality, of anti-capitalist politics derives from this kind of complaint, with which it is easy enough to sympathize.  But knowledge of causal relations, that is, the application of science to production, has a more complex relationship to the entrepreneur.  For a long time, in Marxist circles, there were arguments regarding the long-term effects of capitalism on scientific “labor”—the most politically appealing argument was that scientists would increasingly be reduced to wage laborers and supervisors of wage laborers, with intensifying specialization making it impossible for them to protect their interests as a group or as individuals, leaving perhaps a few very elite scientists who essentially join the “ruling class.”  And, certainly, scientists, and especially those responsible for important technological innovations, have been among the most important new members of the economic “aristocracy” over the past few decades.  But if traditional educational institutions are eroded (or continue in their present course of erosion), can the free market be counted on to produce the number of scientists and engineers needed to keep de- and re-stabilizing the division of labor?  
The declining math and science proficiency of American students, and the increasing dependence of American companies upon foreigners for high-tech positions (while we seem to do just fine in producing all the lawyers we need), make this a serious question.

 

A good way to start to tie all these issues together is by reflecting upon another issue where the far Left and Libertarian right converge (and where I have come, conveniently, to agree with both)—the illegitimacy, and the need for abolition, of intellectual property.  Intellectual property is a state-granted monopoly over the uses others can make of their private property—the state can prohibit me from using my own printer and paper to copy something and distribute it, or from using my own raw materials of any kind to replicate a physical or chemical structure.  The argument against intellectual property is most potent in dealing with patents, I think, given how arbitrary the distinction is between a real invention and some tweak of a method or process that is already well known; it is most problematic, even distressing, in dealing with copyright, when we know very well who authored, painted or composed that original and irreplaceable novel, painting or symphony, and it seems only just that they benefit financially from it.

 

Either way, I don’t see how intellectual property can possibly be maintained into the future:  can all personal computers be checked for illegal downloads?  Can we make China protect Disney’s copyrights?  Will India deny its poor knock-off medicines based on those created by American pharmaceutical companies?  So, it may be better to speculate on a world without it.  This might be a good time to remind ourselves that the origin of creation lies in freedom, the kind of freedom that has its telos in the “discipline,” or a conversation aimed at soliciting revelations from some shared object or, in more anthropological terms, at making some object an inexhaustible source of signification.  This is done by iteration for its own sake, and I’ll update my definition of iteration here as obeying the imperative to apply the rule to the infinitesimal—that is, discovering what you are doing in some space where the making of rules and the interchange of tacit and explicit rules is generating transformations, and applying the rules of what you are doing to some as yet tacit dimension of it.  So, for example, I realize that I organize my thinking into a certain pattern that I hadn’t recognized previously, or that has just emerged as distinct, and I apply the rule constitutive of that pattern to elements of my thinking that run in more established or random routines.

 

Inventions for use follow this logic, but are ultimately incidental to disciplinary habits and desires.  If authors and creators are denied the monopoly on the right to use their work for profit (a right more often exploited by entertainment and other corporations anyway, often at the expense of their hired creators), they might use their talents to invite people into unique disciplinary spaces that transcend the reproduction of an object.  That is, creation will become more pedagogical, organized around websites, public appearances, and other mediated events that take the created object as a changing center, one which the audience pays for the right to help change and witness in its successive metamorphoses.  New drugs might come to be invented in hospitals and other health care sites, and be administered as part of a total care experience; new technological innovations in other fields might also become embedded in a holistic set of service relations, as already seems to be happening with computers.  This denial of a state monopoly to the giant companies best able to exploit it might, in turn, lead to a push for the government to stay outside of the company-consumer relationship, which would now require constant and far more subtle fine-tuning and communication between the parties, irreducible to external regulation.  And the instinct for workmanship might revive as well in such integrated work environments, and marginalist political activities like civil disobedience and boycotts might take on more precise objects—defending loved and needed “customized” institutions against state depredations.  (The laws against fraud, though, might get some creative workouts if more people think they can get away with claiming to be the producer or author of another’s work, as opposed to just using or disseminating it.)

 

So, perhaps we can locate a new political economic lawfulness in the degree of faith we find in our society and ourselves that creative activity unsanctioned, unprotected and uncredentialed by the predatory alliance of Big Business and the State (they’ve earned their capital letters!) can thereby generate even more creative activity and social and cultural good.  The less faith, the more government regulation, the more business takes on static, administrative, imperial roles; the more faith, the more sovereignty learns to embed itself in, rather than prey upon, freedom—and the more social health and prosperity.  We might even develop an appreciation for the contribution to this lawfulness made by the disciplines organized around the praxical study of risk, like hedge funds, and other inquirers into the myriad ways the miracle of making money out of money takes place.  (Yes, the warriors are themselves ultimately driven by freedom, their actions an adventure in exploration and hence a mode of inquiry.)  Such scouts in the world of economic warfare are among the most faithful in their own intuitions and abilities, and in the tacit rules of the system to sustain them—and they test out which battle plans are real, and which will dissolve upon contact with the enemy.  

 

September 24, 2009

The Human Condition: A Commentary on Originary Signification

Filed under: GA — adam @ 9:46 am

Any functional sign must involve the following:

 

1)  The possibility of being a lie (I borrow this from Umberto Eco’s A Theory of Semiotics).  There are better ways of putting this, as “lie” presupposes a declarative, an assertion about something in the world independent of the person making the claim.  So, when I shake someone’s hand, I am not exactly telling the truth or lying; the affirmation or gesture precedes the proposition.  But in a sense I am—my handshake can be sincere, or I could be proffering my hand so as to disguise my irreconcilable enmity towards you.  The originary scene itself is, indeed, beyond truth and falsehood—that some central object is indicated is simply constitutive of the scene; to put it another way, no object, no convergence of attention, no scene, and therefore no lie.  But that being beyond truth and falsehood will never happen again, precisely because of the scene—any future gesture could be a deception.  And the deception could only work because of the absolute trust that must have prevailed on the originary scene because there, in the phrase I consider prior to the truth/lie binary, one and then each stood as surety for the presence of the object.  Every sign, to be meaningful, or to work, must have its audience presuppose someone to stand in surety for some material or immaterial object of the sign.  Not necessarily a referent, or even a signified, but the possibility of a gathering of attention around some “thereness” to attend to.

 

2)  A prayer to the central presence or intelligence.  A prayer is an imperative, however politely or supinely put, to the central intelligence—most elementarily, to save or protect the supplicant.  But this demand implies the duty to obey the center; so, the subsequent prayer or, really, continuation of this same one, is a demand that the central presence provide guidance in fulfilling a divine command.  This more articulate prayer recognizes a dominion under divine sovereignty, wherein the divine command must be shared, applied and interpreted.  In claiming the invocation as a condition of intelligibility, I am pointing to the regular, or grammatical element of semiotics.  Whatever the rules in any language or idiom, I must follow them; but what are rules other than the way a particular interplay of imperatives back and forth from the center has unwound?  If I am on the verge, say, of coveting something of my neighbor’s, and I hear God telling me not to, and I beseech God to give me guidance, and I discover a way of redirecting my attention so that I covet no more, a habit and therefore a preliminary grammar is in place.  If someone then trusts me enough to ask me to help them find the way in a similar circumstance, I can present my discovery, and they will have to implore God to help them find their own way, analogous to mine—and my grammar has been transmitted, which is what really makes it a grammar in the first place.  I don’t think it’s any different with things like word order, conjugation, inflection, etc., in words and sentences—they are all habits by which imperatives have been moderated and woven into a transactional fabric where they intersect with other, often contrary, imperatives.  The equivalent on the originary scene is each of us looking at all the rest of us and ascertaining that a rule of interaction supplanting the uncontrolled surge toward the center has emerged.  
To put it simply:  conscientiously following the rules, including the construction of ideal or model modes of rule following, is a form of prayer and faith that the right or needful thing to do when the rules fall short will be made present to me.  And it is a reasonable faith, because when the rules fall short, the tacit rules undergirding the overt ones, which are the imperatives we have so thoroughly embedded as to have forgotten, and which have been preserved in the overt ones, are there as back-up.

 

3)  A hypothesis regarding how my audience or interlocutor will respond to my sign.  This is my misreading of C.S. Peirce, whom I take to be claiming that the meaning of a sign is all of those consequences you can imagine following your issuance of the sign.  This hypothesis must be internal to the sign itself, it must emerge with the sign.  That is, I don’t hypothesize and then issue the sign, or issue the sign and then hypothesize (one could only hypothesize with signs, after all).  The hypothesis is the sign:  whatever presence needed to be filled (signs wouldn’t be issued if some presence did not need to be filled, because sheer absence can only mean terror and extinction, whether experienced on a personal or collective level), I first put forth my sign with an inchoate sense of attempting to fill it, and as the sign is composed, and I get glimpses of its reverberations and possible mistaking, it seems to be more or less likely to provide that space with presence, to indicate that the need was in fact other than I took it to be and so my sign must be redirected or to the extent possible withdrawn, or that the sign will require supplementation which it must somehow be composed so as to solicit, and so on.  The “proof” of this hypothetical element of the sign is that when I “understand” a sign, what someone tells me, I am aware that I have been inscribed within it, that it has anticipated me and that it requires something of me.  It is up to me to render it meaningful or meaningless—it has predicted, or bet, that I would make it real.

 

If we have no “human nature,” then, we can have, as Hannah Arendt (who contended that for us to claim to know our own nature would be like trying to leap over our own shadow) asserted, a human condition—a set of possibilities and limits, always distributed differentially across individuals and history.  We must guarantee, and demand guarantees of, reality; we must follow (and be followed by) rules, more or less “religiously,” and insist that others do so as well; and we must anticipate, speculate, project and hope, while interfering with such anticipations and speculations on the part of others.  At our best, we preserve, within our signs, these diverse vocations, and occasionally even repair the damage that is constantly done them through resentment of our humanness; at our worst, we arbitrarily assign one vocation priority over the others, or even betray any or all of them.

September 16, 2009

Hunters and Craftsmen

Filed under: GA — adam @ 6:22 am

I’ve just finished reading Thorstein Veblen’s The Theory of the Leisure Class. Obviously, I can’t claim that this puts me in the vanguard of anything, but I found his organization of economic analysis around the categories of, on one side, “invidious distinction,” and, on the other side, the “instinct of workmanship,” very provocative.  Economic life is organized around “invidious” distinctions when human life is predatory:  based on hunting, war and conquest.  Under such conditions, some men gain possessions and reputations that place them in a superior position to other men, and the way they manifest this superiority is through conspicuous leisure:  doing lots of things that serve no utilitarian purpose and, indeed, flaunt their contempt for utilitarian purpose.  For me, the analysis gets interesting when Veblen associates players on the market, or those driven by “pecuniary” interests, with the class of “predators,” and hence “archaic” by the standards of a “modern industrial” society.  He thereby places the entrepreneur, banker, broker, etc., at odds with those driven by the instinct of workmanship, who are interested in working out and applying causal relations: scientists, engineers, etc.  The economic figures driven by pecuniary interests are, then, simply hunters and warriors in a new, quasi-peaceful guise.  As, of course, are “administrators,” i.e., bureaucrats and the government.  

 

I suspect that a quick look at a transcript of some casual conversation among Wall Street brokers would confirm the plausibility of this classification, as does our use of terms like “robber barons” to describe the great corporate founders of the nineteenth century, idioms like “make a killing,” etc.  For Veblen, exchanges on the market are indistinguishable from fraud, an essentially predatory relation to others—all lines separating fraudulent from legitimate exchange are essentially contingent and pragmatic.  Are we so sure we could say he is wrong about that?  He also associates gambling with the predatory disposition, and it is easy to wonder how much of our current economic crisis, especially that part attributable to the mysterious “derivatives,” is a result of nothing more than very high stakes gambling (with other people’s money, of course).  What is further interesting in Veblen’s account is his classification of Christianity as a religion grounded in the leisure class:  in this case, God is king/conqueror, and worship of Him, with its incessant emphasis on His infinite power, is the vicarious leisure of the servant class.  Veblen has quite a bit of fun with the clothing worn by priests, the architecture and decoration of churches, and so on, in the process of establishing this claim.  Charity and philanthropy, further, fit into this characterization:  they are more conspicuous leisure, dedicated to promoting the honor and value of the benefactor warrior/king. 

 

Of course, those familiar with GA will notice several things here, which might be invisible to others.  First of all, we know that all human existence is based on “invidious distinction,” which enables us to reverse Veblen’s hierarchy of the two economic types.  Veblen argues that the “instinct for workmanship” is the more originary trait, characterizing human existence at a more primitive and peaceful stage, while the predatory element in human existence comes much later, and is ultimately a mere variant of the former.  For us, social relations based on invidious distinction are also based on the shaping of such distinctions into such forms as mitigate the inter-communal violence they would otherwise incite—if there were nothing but invidiousness, there would be no community at all.  The instinct of workmanship, meanwhile, we can easily locate in the esthetic element of the originary gesture, which is there from the beginning, as the gesture needs to “propose” some symmetry or harmonization of the group in order to take hold, but is nevertheless secondary to the felt need to interrupt the imminent violence itself.  This also means that the instinct for workmanship involves, first of all, a social relation between the maker and his/her fellows, rather than the direct relation to his/her materials and the manipulation of the causal relations articulating them, as Veblen would have it.

 

We could, further, identify Veblen’s account of the predatory/pecuniary interest with what we can call the “Big Man” stage of history—a stage of history which we have by no means exited (indeed, modern constitutionalist politics and the free market aim at harnessing Big Men more than at eliminating them), as Veblen, along with so many others, fervently hoped.  His discussion of Christianity and monotheism more generally is illuminating in this connection, since both Judaism and Christianity were invented as responses to the unimpeded rule of Big Men and the imperial moralities such rule generated.  If, as Eric Gans has argued, the centrality of scapegoating to social order only holds true for communities thusly organized, then faiths predicated upon a repudiation of the scapegoating morality of the Big Man presuppose his continued existence (and periodic chastisement).  If the total replacement of the Big Man as a social phenomenon by esthetic, conciliatory gestures and the reality revealed by norms of scientific inquiry (the instinct for workmanship) were to occur, then it would make perfect sense to assume that the monotheistic faiths would fade into oblivion. 

 

I’m not going to argue for the impossibility of such a development here—I’ll just say that the invention of the Big Man as an occasionally necessary medium of social deferral (albeit one elected and subject to recall, liable to criticism and disobedience, or subject to the discipline of the market and the threat of bankruptcy) can no more be revoked than the invention of nuclear weapons.  I’m more interested in the implications of Veblen’s classification of entrepreneurial and financial activity for the mode of economic theory I’m most interested in now, the Austrian theory of Mises and Hayek.  I assume that these thinkers, and those of their “school,” would vigorously repudiate Veblen’s claim:  for these free market thinkers and advocates, there is nothing more peaceful than the activity of exchange:  indeed, exchange is the antithesis of violence, it is what we do once we have successfully suppressed violence as a factor in human relations.

 

I want to explore the possibility that Veblen is right, and they are wrong—and the plausibility of this hypothesis lies not only in the very structure of competition, in which you can win just as easily by disabling your opponent as by improving yourself, and not only in the enormous destruction which can be deliberately wrought in the financial arena, but also in the very evident attitude of those who operate there, which seems to be one of obligatory triumphalism, machismo, threat, bluff, swagger, etc.  (Here, we would have to distinguish between those entrepreneurs who are closer to the workmanlike aspects of the job and those closer to the financial dimension—but no entrepreneur could indefinitely avoid the latter aspect.)  (In a similar vein, the Austrians like to believe that private property rights derive from occupancy and/or use of territory or object—but doesn’t it make more sense to say the property was first of all what one could take, defend, and persuade others to accept as a fait accompli—and that rights then emerged to mediate between property owners?) I also reject Veblen’s assumption that this position is obsolete.  So, if the pecuniary/predatory is here to stay, and is inseparable from a proper understanding of freedom, how do we incorporate that into our economic, ethical and cultural analyses?

 

Every commercial community must come to terms with the distinction between fraud and fair exchange—it is inevitable that such a distinction be made simply because even dealing among merchants would become impossible otherwise.  Even for purposes of fraud, reputation as a fair dealer is essential (Veblen associates “honor” with predatory/pecuniary fields; indeed, of what relevance is “honor” to an engineer, architect, dentist or plumber, except insofar as we confront them as merchants—we can see their work for ourselves); and you can only gain such a reputation if “fair dealer” has some shared meaning.  Such a distinction is inevitably rough and relative—there are a lot of things that could interfere with the fulfillment of a contract that couldn’t have been anticipated, whereas the parameters of expectations for the “workman” I just mentioned parenthetically can be much more tightly drawn.  The levels of required trust and acceptable risk will be drawn differently under different conditions—again, most unlike the standards of good workmanship:  the good dentist or carpenter is good in Boston or in Moscow, and their clients will be able to distinguish their work from more shoddy varieties.

 

This line will be drawn, like all lines, by events:  in the midst of a commercial culture given over, or in danger of falling into, general fraudulence and corruption, someone and/or some group will come to exemplify fair practices.  The establishment of fair practices would first of all be negative—we don’t do all the things our competitors do.  But it would eventually become subject to verifiable norms, and embedded in relatively transparent practices, and advertised as an intrinsic part of your experience, as a customer, with that business.  The fair dealers would seek each other out and, I think, would be genuinely “authenticated” by the business community and circle of customers once they had weathered some storm—once they had, for example, refused the compromise involved in obtaining some government sponsored monopoly, or abstained from participating in some boom or panic that wiped out other businesses, and ended up intact, perhaps even stronger, precisely due to the values implicit in their “fairness.”  Again unlike workmanship, though, where skills may deteriorate, but in fairly predictable ways, the “capital” of “fairness” can erode rapidly, and often as a result of what seemed at the time to be inconsequential decisions (cutting a corner here, lobbying the government there, when things got a little rough…). 

 

It further seems to me that the creation of such a capital fund of fairness will be in inverse proportion to government involvement in establishing and enforcing norms.  The government’s secondary function, indeed (second only to preventing violent assaults on citizens’ rights), is the prevention of fraud, which violates the sanctity of contracts.  But such a task would prove impossible to perform, or at least perform adequately, if standards of fairness had not already evolved within the commercial community itself, so that the government is essentially policing the margins of the community in accord with the norms of the community itself—it’s very hard to see on what basis the government (government lawyers, to be more precise—yet another set of predators who would need to establish a set of internal norms) could generate such norms in a non-arbitrary way.  But, of course, the government’s role will also be established through events—for example, through its protection of some “fair dealer” in danger of being scapegoated within the commercial community.  The relationship between business norms and the legal system, then, is an index of the moral health of the economy; and the moral health of the economy is itself an economic “factor”:  certainly, much wealth is lost to corruption and fraud, and gained by fair dealing.

 

In this way, the Christian morality that has emerged and sustained itself as a check on predatory Big Men (think of how focused both Judaism and Christianity are on the “haughty”) could become an economic value in its own right—perhaps one we could even learn to calculate.  Surely some economist could (or, for all I know, already has) invent a formula for determining the value of the moral economy (of course, we would need to be anthropologists to devise measures for the moral economy).  What is x number of people willing to leave cutthroat firms when they cross the line and become, not “community organizers,” but more honest versions of the business they have “exodused” from, “worth”?  Or x number of individuals willing to form companies in which their own money is at stake, instead of playing only with others’?  These are challenging questions, because below a particular threshold (the point at which there would be enough of such firms to survive and impact the economy), their worth would be zero.  Even more challenging is determining which other, only indirectly economic, elements of the culture would comprise a moral economy making such thresholds attainable.  We’re not just talking about honesty or altruism here—rather, “fair dealing” involves the ability to create, revise, and continually re-interpret, on the ground, in conjunction with others, sets of rules that are largely tacit.  Distinguishing between those who preserve and adhere to the rules so as to skew them in their direction and those whose actions always preserve a residue aimed at enhancing and refining the rules is a skill acquired, like any skill, through practice. 

 

The grammar of rules might be sought in a seemingly strange location.  Rules are difficult to describe—even the ones we follow flawlessly and thoughtlessly.  Indeed, the thoughtlessness is the problem—analytically, not necessarily morally. Rules always have a tacit dimension—if you ask someone (or yourself) how you follow the myriad rules you do follow to mediate all your daily interactions, you must either simply “point” to what you do and rely upon your interrogator’s own intuitions as a rule follower to understand; or, find a way to point to another set of (meta) rules which tell you how to follow the rules in question—but, then, how do you follow those rules?

 

A few posts back, I defined imitation as the derivation of imperatives from a model.  Iteration, meanwhile, derives from the response you get when you issue an imperative to your model in return, demanding that he/she show or tell you how to obey the previous imperative, subsequent to an inevitably failed attempt.  That initial attempt must fail because you will still be insufficiently like the model, hence indicating some portion of the imperative left unfulfilled.  To demand of the model another imperative, now part of a series (his implicit one to you, yours in return, and now his again), is to now treat the model as him/herself subject to imperatives, which he/she could convey intelligibly.  In that case, the two of you share the same source of imperatives; but this further means that part of the imperatives this newly revealed shared center issues involves articulating the imperatives each of you receives with those the other receives.  Hence, the birth of rules, which call upon one to act in such a way as to coordinate unknown acts along with everyone else. 

 

One is always within rules, but one becomes aware of the rules when they become problematic, and they become problematic when one must narrow them down, in a single case, to an unambiguous imperative—what must I do right here and now?  It is then that the origin of rules in a center issuing imperatives that must be shared becomes evident because one must then ask the center for guidance.  This, it seems to me, is the structure of prayer, which would mean that learning how to follow the “spirit” of rules means learning how to pray.  (“God, give me the wisdom to understand your will…”) (For Veblen, this is the kind of situation the “instinct for workmanship” could never lead us into.)  And in the monotheistic or, perhaps (I’m not sure where Islam is on this), anti-“haughty” faiths, such prayers would take on the greatest urgency in situations where one’s desire is to abuse the position of the “Big Man,” usurp that position, elevate oneself by discrediting the existing one, fantasize oneself as Big Man, or create a negative Big Man who will serve as the “cause” of some present crisis; but, also, where one’s desire is intertwined with the emptiness of the Big Man space, or the inadequacy of its current occupant—where one may need to help prop it up, in other words, but where such a need edges imperceptibly into these more sinful desires. 

 

Humbly demanding that the center, the iterable source of rules, or the “central intelligence,” come through with a clear imperative at such moments is the heart of the proper creed of our commercial civilization.  If we recognize that our entrepreneurial class is composed, not of pacific servants of others unreasonably harassed by the predatory state but, with all the good they do, of actual and budding Big Men (who, of course, seek commerce with Big Men in other realms), thereby adding a political component to the economy, then we can find the economic value in the prayerful state that seeks a middle between haughtiness and debasement.  This middle would also turn out to lie between other poles inevitable in an increasingly sophisticated rule-based culture:  between the “letter” and “spirit” of the law; between the rules’ tacit and explicit dimensions; between preservation and innovation; and so on.  Such prayer is itself a kind of thinking, and I’m even considering prayer as the origin of the declarative sentence.  In another post.

September 1, 2009

Popular Culture

Filed under: GA — adam @ 7:37 pm

I take Eric Gans’ distinction between popular and high culture as axiomatic:  in popular culture, the audience identifies with the lynch mob, while in high culture they identify with the victim.  It seems to me, further, that this distinction manifests itself as one between two modes of reading, or “appropriation” or “consumption” of cultural materials:  in engaging high culture, one is enjoined to preserve the text or artifact as a whole—this means examining the parts and the text/artifact as a whole “in context,” with an eye towards its unity and purposefulness, as well as the accumulated historical labor expended on its production.  This also implies a hierarchy of interpreters and commentators and the institutionalization of the materials (museums, literature departments, etc.).  With popular texts and artifacts, meanwhile, elements of the cultural product can freely be iterated in contexts chosen by the user, without regard to the “intentions” of the producer.  We have no compunction about repeating catch phrases from a sitcom or movie in ways that show no respect at all to the way that phrase functioned in its “original” context.

 

Now, of course high cultural texts get treated in this way as well, but this just testifies to the dominance of popular culture in the contemporary world—that is, we are talking about ways of treating texts and artifacts as much as (or more than?—that’s part of the issue) about the texts and artifacts themselves; and, of course, putting it that way further testifies to the decline of high culture and the ascendancy of the popular.  We can also take as given the convergence of popular culture with the rise of the victimary:  the high cultural texts are themselves viewed as oppressors, and by “appropriating” them “violently” we take our justified revenge upon them for their presumption of centrality.  And we can also stipulate that the mass market and the “age of mechanical reproduction” have been central to this process. So far, nothing I have said takes us much beyond discussions of postmodernism going back to the 70s and 80s, which also highlighted the collapse of the high/popular boundary as well as the intensified “citationality” and “cannibalistic” nature of contemporary culture. 

 

But we can go quite a bit beyond those discussions, I believe, in particular in trying to figure out the consequences of these developments.  Left cultural theorists have tied themselves up in knots trying to convince themselves of the potentially “progressive” character of the rise of the popular, with results that have been brilliantly lampooned in a couple of essays on cultural studies by John O’Carroll and Chris Fleming.  Somewhat more serious, or at least earnest, approaches, like that of Gerald Graff, make another, in my view equally flawed, attempt to find something hopeful in our students’ attraction to popular culture.  For Graff, instead of trying to get students to engage thoughtfully with the products of high culture that we professors value, in order to develop and put to work their interpretive faculties, their ability to see things from different points of view and in “depth,” etc., we should recognize that students are really doing all these things already when they argue about their favorite sports teams, or the movie they saw last night, or the latest music video by their favorite artist.  Get students speaking about what they already know, already interpret, already canonize, already debate in more sophisticated ways than outsiders realize, and they will come to realize that they are already something like “scholars” or “academics” (or “critical thinkers,” or “interpretive agents,” or whatever you like).  What happens then seems to me less clear—if they are already engaged in serious discussions of esthetic and moral values, why do they need our high cultural texts, or the means of interpretation that have evolved in the history of responses to them?  On the other hand, if those discussions are not genuinely about such values, and the means of interpretation at work in them not comparable to institutionalized ones, then, in fact, they are not really doing what “academics” supposedly (or hopefully) do.
Nor do we have any reason to assume that having them attend to what they already do will get them one step closer to that goal. 

 

Defining popular culture as the free iteration of bits of models helps us to account for why these attempts to “redeem” popular culture can’t accomplish what the redeemers would like.  High culture is intrinsically totalizing, centralistic or holistic, whether it be the Marxist theory of history or the New Critical sacralization of the literary text—the idea from the start is to resist the fragmentation so celebrated by apologists for the popular.  The assumption is that some transcendent reality, embodied, albeit partially, in the most accomplished products of culture, is what militates against the scapegoating of those figures who stand out against ritual, tribal culture, figures ultimately modeled on Socrates or Jesus.  No coherent political ethic can emerge from immersion in soap operas, Madonna videos or comic books; nor can any consistent and arguable esthetic stance be elaborated out of one’s baseball card collection, pornography addiction, or experimentation with shocked hair and body rings, because the entire notion of coherent and consistent ethics and stances derives from a different set of assumptions and practices.  At the same time, though, I don’t think there is any way of returning to the notion of high culture that presided up until, say, the Second World War—not only has that notion of transcendence been displaced irrevocably, but it was flawed in important ways from the beginning, however great its service to the advance of humanity and however many the staggering accomplishments we owe it.  In that case, the problem with the cultural studies people (of whom Graff is one, of course, even if one of the moderate political center) is that they aren’t radical enough.

 

After all, the originary hypothesis confirms the central claim made by avatars of the 20th century’s “linguistic turn”:  human reality, at the very least, is indeed constituted by the way signs reveal relations between us through the things we move to appropriate, and not by the referential relation between language and a higher reality.  This must also mean that when we account for the human condition, we must do so in language and are therefore always further and newly constituting it—this “Heisenbergian” reflection irremediably undercuts any pretensions to knowledge of a permanent “human nature.”  Mimetic desire, rivalry and crisis will always be with us, and the bet made on traditional high culture is that that permanence renders different modes of deferral  secondary, so many “epiphenomena,” if you will—but if we reverse that claim, as I believe we must do as we become more conscious that we ourselves, everyday, are responsible for inventing such modes of deferral, then even those enduring traits of human reality are relativized by ever changing sign systems which not only resolve them in limited ways but shape their terms of emergence as well. 

 

And yet the paragraph I just wrote was, or so I would like to believe, composed on the terms of high culture—I am certainly aiming for the kind of “density” or “depth” in my discussion here that would mark this argument as one that would interrupt the prevailing modes of scapegoating.  And, of course, the theoretical and esthetic rebellions that have provided a vocabulary for the privileging of the free iteration of bits of models took place completely within high culture as well.  Indeed, notions of “depth,” “density,” “textual autonomy” and so on refer to our willingness, or our felt compulsion, to take the object on “its own terms,” to assume, as Leo Strauss put it, that its author knew more than us and was providing us with knowledge or an experience that was both valuable and one we couldn’t have procured or even thought to pursue on our own.  If we approach cultural objects with such an attitude, they become inexhaustible, but we will only do so as long as we believe the inexhaustibility lies in the object, not in our attitude towards it—once we assume there is no “text in this class,” to refer to Stanley Fish’s famous phrase, the sheer proliferation and ingenuity of interpretative strategies that have been accumulated over the past couple of millennia will not be able to sustain our interest for long.  The initial burst of enthusiasm deriving from the sudden sense that “hey, we’re really the ones who ‘made’ these texts!” will quickly dwindle into a deflated “you mean, it was just us all along?” 

 

The initial result of “unregulated iteration,” in both popular and high culture, was the creation of the celebrity—from the modernist writers and painters in the 1920s to the postmodern theorists of the 70s and 80s in the world of high culture, and from newly famed athletes, singers, and actors, along with seemingly randomly elevated members of the idle rich, the scandalous, etc., also starting in the 20s, through the movie stars and rock stars, also into the 1980s.  Perhaps, in retrospect, if the title of Eric Gans’ recent Chronicle on Michael Jackson is correct, this age will be known as the “Age of Celebrity” as we move on to something else.  Maybe “celebrity” filled the space of sacrality previously filled by the Platonism of both the guardians of culture and the people, and now vacated, most immediately due to the historical catastrophe of the First World War; maybe it also fit an early stage in technological reproduction and the market, when such processes were far more centralized and monopolized than they are likely to be from here on in.  It seems to me that the precipitous decline in the power of celebrity which we are witnessing (perhaps best testified to by the openly staged, publicly “participatory” “auditioning” for celebrity in shows like “American Idol”—the aura essential to celebrity cannot survive the public’s freedom to elect and depose celebrities at will, and with such naked explicitness) is more in accord with the logic of unregulated iteration, as well as healthier.
(It is noteworthy that while there may very well be something cultic in the devotion millions of people express towards political leaders like Obama and Palin, the nomination of these figures as “celebrities” was premature, as celebrity cannot survive the harsh criticism on inevitably divisive matters of public substance any political figure must endure—if an author touted by Oprah turns out to be a fraud, she apologizes publicly and has him come on the show and do the same; there is no analogous mode of “redemption” if, say, Obama’s leftist agenda crashes or Palin runs for President in 2012 and is thrashed in the Republican primaries.)  At any rate, though, one could imitate Babe Ruth’s swing or swagger in the playground, or Jordan’s moves in the gym; one could sing a Beatles tune or mimic some of Michael Jackson’s moves without having to have a “reading” of the “text as a whole”—while the celebrity of these figures, one might say, helped guarantee a unity and hierarchy of focus that could be shared nationally and sometimes globally, sustaining the type of community previously preserved through more transcendent means.  If celebrity is on its way out, we will have overlapping and often mutually uninterested, even repellent communities, sometimes aggregating into something larger but not in any predictable way.

 

If the generation of models in a period that is both post-transcendent and post-celebrity does not require a focus on “complete,” or “fleshed out,” figures (about whom a story could be told, through whom a meaningful sacrifice could be performed), if they don’t have to conform to existing narratives so precisely (in part because the media, or means of establishing celebrity, are themselves increasingly decentralized and evanescent), it may be that the eccentric and idiosyncratic will come to the fore—not just any idiosyncrasy or eccentricity (and not necessarily the depraved or cartoonish) but, I would hypothesize, those that make the figure in question just as plausible a figure of ridicule as of emulation.  Those who organize a space around a particular figure would do so with an awareness of this two-sidedness, which would in turn provide a basis for dialogue, friendly and hostile, with other groups—that is, “we” would organize ourselves around emulating a particular somebody and therefore knowingly organize ourselves against those dedicated to his ridicule; and vice versa.  (It seems to me that something like this is already happening with Sarah Palin, who, despite what I said before, might, if she avoids putting herself in situations where her power of presence must be directly repudiated or ratified, become an example of this new kind of…well, what would it be?)  What looks to one group like an accomplishment looks to the other like a botched job, what looks to one beautiful is grotesque to the other, a pathetic mistake to one is an innovation to another, and so on—and, in the best of cases, each side will be able to see what the other is seeing.

 

In this case (to continue hypothesizing), popular culture will be performing what high culture might become increasingly interested in—that boundary between error and innovation, where rules get followed in ways that create “exceptions,” where the strictest literalism produces the wildest metaphors, where models get both emulated and mocked and it can be hard to tell which is which, where we find ourselves in the position of figuring and trying out ways of seeing others and objects as beautiful or repulsive, instead of simply being “struck” one way or another, where no one has proprietary rights in the line between “mainstream” and “extreme,” etc., but where one still has to come down on one side or another, at least at a particular moment.  High culture, whether carried out in the theoretical or artistic realms, would increasingly become so many branches of semiotic anthropology, interested in the way in which avatars of the “human” keep coming to bifurcating paths (do nothing but keep coming before such bifurcations), going one direction or another for reasons we could guess at but with consequences we can identify and judge according to their irenic effects.  It’s not too difficult to imagine texts and performances being composed with this problem in mind, and critical and appreciative canons emerging to meet those texts and performances.  (Just think of the intellectual challenges imposed by the determination to write a text in which every phrase is a “taking” [an iteration or appropriation] as well as a “mistaking”—and think of how revelatory such an effort might be regarding idiomatic usage.)  (I suspect one could already construct a “genealogy” of such texts, ones classified as “modernist” or “postmodernist” while nevertheless sticking out as anomalies.)  I think high and popular culture would thereby become less hostile to each other, and both might become less sacrificial.
