Liberal democracy is predicated upon the severing of liberty and equality. For liberty and equality to be sustainable, they must be reciprocally defining and supporting: I can only be free together with my equals; I can only be equal with others within a reciprocal respect for each other’s freedom. The supposed tension between liberty and equality is a fabrication of liberal democracy, which has the state protect individual pursuits and equal outcomes, thereby limiting one in the name of the other. The outcomes must become ever more equal and the pursuits protected ever more marginal and inconsequential—economic freedom long ago dropped out of sight as a fundamental freedom, leaving sexuality as the main arena of protected freedoms (there will never be a law against “hate sex”).
The unsustainability of liberal democracy is becoming more evident, almost daily. It seems to me a good idea, then, to begin exploring what might come after liberal democracy and, more broadly, the single-scenism of modernity—and to do so with as few prejudices (and, hopefully, resentments) inherited from that historical project as possible. I would begin by conjoining, once again, liberty and equality, under the following principles: the basic individual right is to leave a community, and the basic communal right is to determine whom to allow in. The terms of entrance into a community in turn define the specific rights that will be protected within that space.
All cultivated or occupied land, any established institution, actual or virtual, is owned by someone—even if ownership is shared and informal, in the end some will be let in and others kept out. Within any such space, there’s no point in speaking about free speech: I can throw someone out of my house, or fire them from or refuse them entrance to my business, because I don’t like what they say. They are free to speak on the street, but that’s because the street is owned by the government, which has committed itself to the defense of certain rights within spaces it owns. The same goes for, say, voting rights, which are granted on the agreed-upon terms of the establishment of the community and its political institutions: a corporation can establish voting rights based on the number of shares you own. But the principle that I can’t force any person to stay in my house or business or country can be universally defended.
Defended by whom? If egress from any space is a fundamental right, it is because whoever leaves one space hopes that doing so will enable entrance into, or the creation of, another space, in agreement with others. Postmodern communities, then, will ultimately trace themselves back (in the best Western tradition of the Exodus and the Aeneid) to exiles and refugees who have left one place to establish a new order; such communities will be competing for the best immigrants, and communities will also be distinguished between the more dynamic and the more static—that is, those that want more immigrants and those that want fewer—with, perhaps, some communities wanting none. The communities that want fewer immigrants are more likely to be those that imprison their own population or some portion of it. Precisely because freedom of movement is essential to the more dynamic communities, communities that so imprison those unfortunate enough to find themselves there will be treated with hostility: run your internal affairs however you like, but we consider your refusal to let individuals leave a threat to us, or at least an inimical act. And making things very uncomfortable for the imprisoning communities will probably be enough to lead them to abandon their restrictions on emigration, which, in turn, is likely to open up those communities. A relatively closed community might still be very attractive for some, and emigration can be strongly discouraged by, say, educational practices that make the citizens of that community unsuited for life elsewhere, but such barriers will always be relative and, of course, any adult individual from even the most closed community can, with sufficient talent and effort, assimilate into more open ones.
We can use the word “community” very loosely here, to include neighborhoods and federations of neighborhoods (and federations of federations, etc.), schools and school systems, businesses and networks of businesses, online and intellectual communities, military alliances, and so forth: any grouping with terms of membership. The terms of association would constantly be subject to negotiation, and each community might allow for various levels of citizenship and rights, as individuals decide where they would like to commit the most resources. A neighborhood might permit families to buy homes and offer them basic property rights (otherwise who would buy homes there?) but only allow voting rights to those who join some association that helps keep the neighborhood running (neighborhood watch, PTA, volunteer fire department, etc.). (Free speech rights may have to defer to the increasingly difficult-to-control communicative capacities of individuals—but the right to speak in specific, consequential ways could certainly be calibrated.) A federation of neighborhoods and the businesses operating in them might establish various categories of aliens, for those whose work is valued but who have families and friends in other communities, established upon different values, whom they don’t wish to disown, and who therefore can’t be completely trusted. Federations can be as large as they need to be, as large as today’s nation-states or larger, and the issues of self-defense, military establishments and war might not be all that different from now, except that the demands and conditions different political entities seek to place upon each other are likely to be much more minimal and transparent. Boundary disputes will involve conflict between competing plausible claims, or between those with claims to a particular property and those who, for whatever reason, have come to occupy it—as always, there will be those interested in aggravating those conflicts just as there will be those interested in resolving them—but as long as it is possible to impose upon aggressors the principle of non-imprisonment (if that’s where we’re all ready to make a stand, even if against the odds), then aggression will be ineffective or irrelevant, because the more peaceful and productive communities will always exercise a gravitational pull upon communities that wish to live parasitically off of others. The rules regarding federations and rights would constantly be subject to negotiation, including the rules regarding who can participate in such negotiations, how often they are to take place, where, etc.
The only dilemma I see confronting such an order is what to do with the misfits, those who are unable or unwilling to abide by the terms of membership in any community and will therefore be unwanted by all. We could say that if you don’t allow misfits from other communities into your own, you tacitly consent to those communities’ treatment of them, in which case the problem must be solved, in its own way, by the communities where the misfits happen to be found—presumably, most often, where they happen to have been born and raised. But those who are misfits in one place might flourish elsewhere, and one could imagine great cooperation among communities in trying to address this issue: perhaps communities would offer grants to other communities to take in misfits, maybe on a trial basis; maybe some communities would specialize in the treatment or accommodation of certain kinds of misfits; etc. We would have to imagine overlapping communities negotiating procedures for discovering the truth regarding alleged transgressions, means of enforcement and modes of punishment without any overarching structure of rights, with nothing more than the pragmatic needs and conscience of the community, and the example of other communities, to guide them. That is, it’s easy enough to imagine someone who refuses to do the kind of work the community demands or to live in accord with the norms of any of the available neighborhoods, and yet hasn’t committed any crimes—on my account, the communal order would be free to expel him (or simply refuse him access to any necessary means of life), even if no other community wants him, and no one in the community would be obliged to take responsibility for him. Perhaps the leadership of the community will set aside some land, buildings, and minimal subsidies for the hopefully very few who simply can’t or won’t fit in—sort of like ancient sanctuary cities.
In order to create what I would call practices of minimal civilization that might, as peacefully as possible, enable us to transition to such a new order, we will need innovative defenses of and uses for private property. A small example of what we will be up against was given during Rand Paul’s Senate campaign in 2010, when his critique of the 1964 Civil Rights Act made him vulnerable to the charge of wanting to return to the segregated order of yesterday—the gravest political sin in the church of American politics. Indeed, if we argue for the right of any community to define its terms of entrance and membership, we have to accept that some communities will adopt what we consider to be unsavory terms. The main political task for advocates of a post-liberal democratic federated order would be the development of a political vernacular within which very unfamiliar ways of speaking about rights, responsibilities, obligations, and tolerance are created. More bluntly: we would need to remove the sting from victimary thinking, whose roots reach all the way down to the bottom of liberal democracy—less, I think, by arguing against it than by refusing to participate in it, by circumventing it, taking up residence in all those literal and metaphorical locations where victimary rhetoric loses its hold on reality to such an extent that even its adherents must recognize it, in their deeds if not their words.
It might help to consider that any community must be modeled on disciplinarity, i.e., a scene of joint attention, itself ultimately modeled on the originary scene: to use Michael Polanyi’s terms, we attend from something to something else—we place something on the margin, unnoticed, and see the center which the margins are all pointing toward. But when the center fails to hold our attention, we notice those margins, which are us—that is, we now attend to what we were attending from. Whatever gesture from the margins redirects our attention in some shared way establishes a new center, and that’s the disciplinary space: the scene of worship, or of inquiry, or love—all different sides of the same thing. The margins will be where new actor/spectator scenes emerge as sites of contest and sources of value interpreted in terms of the founding center. The global market might be seen as such a founding center, emerging from the ruins of the epic battles of the 20th century, but values can only be created, tested and secured on marginal scenes which refer indirectly to that center. Community can only be sustained if people who participate in common practices daily also see most of the same contests and performances as being about the same thing, and often enough share the values disclosed by those performances and contests. So, for the mini-secessions I am proposing, what is necessary is that new performances and contests, as independent as possible from existing forms of authority, be created that refer in positive and illuminating ways to the global marketplace—bypassing as much as possible the noisy and acrimonious contests that obscure the global market, place it in doubt, and generate hostility towards it.
September 8, 2011
Exodus from the Dead End of History
August 6, 2011
A Little Social Theory
There are a few categories central to originary thinking: center-margin, vertical-horizontal, sign-object. We can multiply such categories, adding rather obvious ones which are probably already there, like inside/outside, and others, as necessary, perhaps less obvious: concealed/open, before/behind, amongst/amidst, visible/invisible, whole/rent, and so on. These are all very basic experiential categories, deeply embedded in language, often rooted in metaphorical extensions of human body parts and basic physical orientations to the world, and therefore rooted in the world of gesture. They are all extraordinarily rich in their conceptual possibilities—we can only face one way at a time, and as I face “forward” someone could be “backing me up” or “sneaking up from behind me.” In their metaphorical extensions, these categories can all be made to cover different ground, and will overlap each other in various ways: we can draw a line between those inside and those outside, but we will do so while we, on the margin, are facing the center. It might be that language is little more than such combinations and extensions. My first suggestion is that social thinking have recourse to these basic experiential terms, since these are the terms in which we necessarily think already.
The basic experiential terms are our only source for describing the sacred and transcendental realm, and consequently the terms in which we sacralize and describe reality. If it makes sense to speak about God as “high” (on the vertical/horizontal axis), then it makes sense that metaphors of height, elevation, and so on will be used to talk about value hierarchies (“hierarchy” itself, of course, a metaphor of “height”) and social ones—and spiritual, value and social hierarchies will all overlap, refer to, and reinforce each other. These terms also become the coin of public dialogue and discussion: you can point out, for example, that those who are “highest” socially are not the highest ethically, and this will inevitably appear scandalous. It is also the case that the creation of new social hierarchies will generate new ethical and spiritual vocabularies—the creation of vast empires, with extensive gradations, was likely necessary for it to become possible to think of God as the “most high,” the “King of Kings,” etc. And if the king is the highest, then whatever qualities can be attributed to the king can be applied to others who are thereby king-like, and elevated above their apparent status. If external or visible elevation can conceal internal or invisible degradation, then external or visible degradation can conceal internal or invisible elevation. So, the highest of the high can be within the lowliest. And we can seek out, look for visible signs of, that highest, as a way of ordering our souls (our invisible portion) and as a means of palliating external, visible disorder (the distinction between order and disorder itself derivative of the distinction between “whole” and “rent” or “broken”). All these discussions are carried out by moving these different experiential categories around in relation to each other, and I don’t think that anything has or can change in this regard. What Eric Voegelin called “differentiation” as opposed to “compactness” is, I think, nothing more than this continual involution and articulation of these categories, creating new levels of reality.
We would then speak about any social or intellectual change in terms of some transformation in one of these categories, and in their relative prominence in discourse. To stick with the example I have just given, the various terms involved in verticality would become more important in an imperial order, and new possibilities of “heights” would become imaginable. And post-imperial societies would be especially sensitive to the abuse of “heights,” as are the “anti-haughty” religions of Judaism and Christianity, both of which took shape under imperial orders and the crisis of such orders (the same is true of Greek philosophy, an indispensable component of Christianity). Contemporary victimary discourse follows in this tradition, “heightening” our sensitivity to any “elevated” figures. Such sensitivities probably account for why those who occupy the “heights” economically (the rich, corporate executives) are almost always assumed to be “lower” ethically—as if we cannot accept that too many heights should be so close, reinforcing and “legitimating” each other.
This hypothesis, that our social relations are bound up with the various metaphorical extensions and articulations of the basic experiential categories, can be tested by anyone, in daily discussions about anything. Are we actually so concerned with “heights”—do notions of elevation work their way regularly into our hopes and complaints, do we argue about what should be seen as high or low, do we follow these terms across various levels of metaphoricity, do they impinge upon other terms, like boundaries between inside and outside, and so on? You can easily check for yourself, by paying a bit more attention to the way you and others speak and write. (Are “softness” and “flexibility,” as opposed to “hardness,” derivative of the inside/outside distinction, with a bit of whole/broken mixed in?) And I think it is an advantage for social thinking if we can trace our more theoretical discussions back to, and present them as modifications of, the ways society already “thinks itself.”
A test of this theoretical approach would be seeing how it helps us to describe the transitions from the egalitarian primitive community to the gift economy and, finally, the market economy. I have already pointed to a way of doing so in my previous post, where I suggested that we can see the emergence of the market economy in the not quite basic experiential category of actor/spectator—not quite basic, since even if the participants on the originary scene are watching each other (and therefore performing for each other), the categories certainly don’t differentiate themselves there. But the actor/spectator differentiation does build upon the center/margin distinction, perhaps using that experiential category to further differentiate the reciprocal observing on the originary scene—maybe the emergence of the actor/spectator distinction further differentiates the inside/outside boundary which is the concern of those looking at each other on the originary scene. This differentiation out of the actor/spectator distinction would first of all happen on the margins of the single “compact” scene; it would then get bounded (those permitted in, those kept out) or institutionalized; then there might be lesser scenes and “higher” ones; all the while new and unauthorized (eccentric, invisible to the center) scenes emerge which prove to be more inclusive. Essential to the actor/spectator setup is the affirmation on the part of the spectator of a winner and a loser, a better and a worse, a more or less preferable. (One is left standing, hand held up; the other falls, is in the dust, dejected, etc.—could we talk about winning and losing, success and failure, without metaphors of standing erect, kneeling, and lying?) On the compact scene, meanwhile, no such distinctions are allowed—the object attracts and commands all equally.
In that case, we have a tension between two scenes: the originary scene, upon which the entire community acts in relation to a single object, both whole and transcendent, on the one hand, and divisible among the members, on the other hand; and the platformed scene, where some act in competition with each other, and others observe and judge. These two types of scene persist until now: when we speak of the “public interest” or the “common good,” we are referring to social wealth and the sources of social decision making as a single thing, which constitutes us as a “public,” and which is divisible in some “fair” way. It might be that society is impossible, that we can’t think or speak intelligibly, without being able to make such a reference or gesture—this tacit mapping of “it” and “us” upon one another might be the unmarked condition enabling language. Those located upon the “it/us” scene would resent those on the platformed scene for introducing division or, more precisely, interfering with the symmetry of “us” and “it.” And those on the platformed scene would resent those on the it/us scene taking away from individuals the opportunity to represent unique values. But those on the it/us scene would also desire the liberation of the platform, and those on the platform the security of it/us. Finally, of course, it should hardly be necessary to mention that we are all always on both scenes, at certain moments committing ourselves a bit more one way or the other.
There is an ambiguity in the platformed scene itself, where, from the very beginning, there is both a performance and a product. This would also go back to the very beginning: the better hunter brings back more prey, more food for the community, but he also probably puts on a better show, is a better source of stories, a more sought-after pedagogue and mate, etc. We can see this split in the development of the market, showing the common root of warriors and entrepreneurs, and also the common origin of celebrity and consumer culture. We could hypothesize that in the gift economy and the honor/shame moral economy that goes along with it, it is competition among the performers, whose reliability is at stake at every instant, that is the issue; in a fully fledged market economy, it is competition among objects, and we don’t even have to see the performers. These are tendencies: what we now refer to as “branding” directs our attention to reliable actors in the marketplace, and even the most honor-besotted warlord had to make sure the goods got home; still, there is a fairly radical break between the predominance of one or the other tendency. We also see a split in politics, between the tendency to emphasize and publicize performers, and even to produce, within a free society, simulacra of the monarchs and aristocracy of the previous political economy, on the one hand, and, on the other, the tendency towards sortition, or the rotation in roles among essentially anonymous individuals, which gets resolved in the rights to speech and assembly and the selection of ruling elites.
The relation between these two types of scenes would generate the transformations in the basic experiential categories—the two scenes would have to be adjusted to each other regularly, and those categories would provide the resources for doing so. Performances get integrated on the primary ritual scene concerned with reinforcing the it/us. The goods procured by the “champion” are, of course, brought back to the central scene. It is much easier to accept the extravagances of the super-rich and their implication of all of us in their risk-taking if we believe that they thereby add to the total social wealth, and are following their “human nature” (the “us” in perfect conformity to the “it”). But the platformed scene will be perpetually available for localizing conflicts within a rule-governed arena and thus for framing the it/us scene, and it will itself be differentiated—elevated into a hierarchy of scenes, with rules for ingress and egress at each level, with further differentiations between and amongst actors and spectators, respectively: the knowledge of the spectator might be the “highest,” accessible to only a few; or action might be the highest, especially when associated with insights (unconcealments) possible only on the spot, or when being first on the scene constitutes the scene. Social inquiry treats all of society as a set of reciprocally embedded and referring scenes, ultimately bounded by the imperative to maintain the it/us scene in some form. And, when we think, we might be said to be internal spectators of our own external and internal actions.
On the it/us scene, language is the attempt to map us onto it—there is a premium on transparency, that is. We want to be able to follow each other’s gesture to the object and back from the object to the gesture, and to ensure that the other can do the same with our own gesture. The sign, ideally, is the effluvium of the object, and the notion that the object is in some sense the product of the sign would be extremely dangerous. The shift from an object-centered to a sign-centered reality would follow the emergence of the various market scenes: if, as spectators, we can judge performances, as sign users we can perform as well, first of all in imitating and reporting the performance we judge. Rhetoric in the sense of persuading is present on the originary scene, but not the possibility of rhetoric as deception, distraction, or invention. The centralizing of individuals on the successive and ramified market scenes would set eloquence in tension with transparency. Signs and language themselves become part of the economy, i.e., a site of competition and exchange: as soon as we are doing more than affirming some shared attention, what each of us says provides some value added to the audience—that is, it is an object on the market, and storytellers will of course be in competition. When what I have to say provides you with access to a unique scene, it is a gift; when it provides you with a means for circulating among other scenes, it is more of a commodity; when it repeats a commonplace, it refers us back to common belonging on the ongoing originary scene. As language enters and permeates the market, the object, the it, itself becomes language, our shared linguistic being: an explicitly shared linguistic being, the acknowledgement that we are mutually creating reality by breaking up language and restoring it through linguistic exchanges, would be the epitome of a marketized existence. Transparency in language would never go away, but it would now take the form of our playing on the same field, however contingent any particular ludic interval might be.
What “History” does is map the emergent political economic scenes onto the originary scene: that is, the actor/spectator platforms are modeled on the it-us scene. Every new scene can be placed in a sequence of scenes or within a hierarchy of scenes that “manifests” or “embodies” the originary condition in which mankind stands collectively before a single object—they are all just examples of the originary it-us scene. That the object is both material and transcendent, that mankind is both desiring and deferring, allows for change, but it’s always change that further realizes or falls away from the initial conditions. Once you have packed all the diverse scenes recorded and imagined onto a single model scene, you have your theory of history. This is the residue of imperial history, and the longstanding, dialectically entwined, resistance to it. (Imperial history is motivated by the desire to have the contest over and the winner declared; anti-imperial history by the desire to have that decision overruled, equally decisively.)
For a different way of looking at history, we can begin by examining the final sentence in the previous paragraph: “This is the residue of imperial history, and the longstanding, dialectically entwined, resistance to it.” That right there is the basis of a theory of history, and maybe the resentment towards theories of history is the source of theories of history, as the resentment towards imperial orders seems to generate new, more encompassing and monstrous ones. But there is plenty that escapes these centralizing scenes, and a way of attending to those other scenes might be to adhere to the simple rule that all historical innovation results from being situated on more than one scene simultaneously. I may have no choice, I may be obeying the imperative to resist imperial scenes, especially in the deceptive forms in which they have come down to us today, but perhaps a still small voice, or a little social theory, also tells me that on other scenes life goes on without reference to the succession of imperial orders (or, for that matter, that on occasion the imperial position will remain the default one and can be sufficiently and even benevolently blended with other orders). So, the scene I am now impelled to construct is one which I offer as a gift to all those wishing to enter the space I am trying to open up, and anyone else entering pays me back in kind by “fleshing out” the scene or adding to the maxims governing it.
These scenes get constructed as pragmatic paradoxes, which is the way that mistakes (breaks in the originary body of language) get re-set and incommensurables commensurated. “The meek will inherit the earth” simply substitutes the extremely counter-intuitive “meek” for its opposite; “the last will be first and the first will be last” just reverses the two terms by implicitly positing two scenes: the visible one, on which the first will be first and the last last; and an invisible one, where the seemingly impossible, indeed, definitionally impossible, will take place. We can generate such paradoxes all day long: not until you reach the depths will you find yourself in the heavens; only among enemies will you discover your real friends, etc.
I don’t believe that the philosophical paradoxes are made up of anything more than such transitional scenes or events. A pragmatic resolution of, for example, the Cretan liar’s paradox would be to say that the Cretan has renounced his Cretanness, has converted to some more truthful identity, by exploiting his identity as a Cretan to confirm the difference of Cretans from the group to which his interlocutors belong (we can assume that he isn’t saying this to an audience of Cretans). We might see this resolution as self-deluded or opportunist—it would have a lot in common with, say, medieval Jewish converts to Christianity who then acted as experts on the Talmud for their new Christian colleagues—but it’s a good example for that very reason. The paradox lies in the terms of inside/outside relations: you can’t be an insider if every attempt to acquire the signs of belonging marks you as a mere imitator, i.e., even more starkly as an outsider; the resolution of the paradox lies in converting the markers of outsiderness into markers of an even deeper insiderness than those possessed by the insiders: as markers, for example, of one’s closeness to a newly discovered center or newly reached height, of obedience to the unity of symbolic and social verticality, of one’s skill in policing the border between inside and outside. Whether you happen to like a particular resolution or not, the fact remains that it is through such means that vagueness in the relation between symbolic and social verticality, the implementation of the imperatives of symbolic verticality, the precise border between inside and outside, and so on, gets clarified or exploited. The resolution of pragmatic paradoxes, which is all we ever do as thinkers, is the product of inquiry, of disciplinary spaces, which emerge any time we find that what some of us are attending from can in fact be attended to (we are on different scenes, even if we are in the same location) and that we therefore can and must create a new medium for joint attention.
So, expanding the fully marketized political economic order entails restoring the originary it-us scene in terms of language and originary mistakenness; and restoring originary mistakenness implies the creation of pragmatic paradoxes that issue in maxims offered as gifts upon new, hybrid scenes. Now, not all paradoxes and maxims have the same relation to reality: the Keynesian “spend in order to save” has the same look as the maxims I just tried out, and has probably been attractive for that reason; we might simply say that it is false nevertheless. Still, Keynesianism was an attempt at a leap in being, perhaps following the failure of certain forms of the gift economy to back up the market economy. And maybe it had to be tried out in order to discover its limits, in the battle of maxims within history; while, perhaps, it still has some local, limited validity. All such leaps in being, formulated through paradoxes, concern boundaries and thresholds between the different interdependent political economic forms. If the market economy fails, it is indeed because the market, gift and primitive egalitarian elements of the political economy are at odds with each other in some way. To frame the problem in linguistic terms, each level of the political and moral economy must be robust enough to provide a vocabulary and grammar for the other levels: family life needs to be unproblematic enough to provide ways of speaking about the budget, habits of self-reliance (people “standing up” for themselves) within neighborhoods sufficiently shared so as to leaven our discussions of challenge and response in foreign policy, and so on. On the more pragmatic level, if people are not willing to take complete responsibility for a particular parcel of earth, for a specific organization of people, that is, to stake their honor and be willing to suffer shame on behalf of that parcel and that group (what we often call “families”), the market economy won’t work either; if people are not willing to volunteer, to offer gifts that might not be reciprocated, things would collapse pretty quickly.
And the same is true if people are not willing to use each other’s words. The surest sign today that we all participate in shared being, upon some it-us scene, is the use and misuse of language in common use—linguistic cannibalism, to put it simply. The more rapidly and energetically we gobble up, spit out, digest, expel, chop up, vomit, etc., each other’s words, the more we have such a scene. I don’t know how one would prove or dispute this, but it seems to me that phrases and statements circulate through all the media—from politicians to pop stars to reality stars to newscasters, to Facebook pages to everyday discourse and back again—far more rapidly, and with much greater shifts in meaning and perspective (irony, parody, literal to metaphor, metaphor to literal), than ever before. Effective discourse finds a way inside this circulation and takes a phrase hostage so as to reveal some boundary or threshold by manipulating the basic experiential concepts in a new way. (The notion of hostage taking, which rightly evokes so much horror today—except when leftists accuse Republicans of doing it to the economy—is actually a central part of the gift economy and, more broadly, the gift/honor/shame moral and political economy. The reason is the same as the centrality of other horrors, like honor killings of girls who have been raped. In the gift/honor/shame economy everyone is responsible for everything, regardless of the intentions behind any act, while, unlike in the primitive egalitarian community, individualization (“big-manness”) has proceeded far enough so that the contagion can be localized and a single individual or part made to bear it. This is a part of the gift economy that we can’t do without, even if we can eliminate every part of it at odds with justice under the law—we are beyond ambivalent by now about phrases such as “a credit to his race,” which of course implies he could have been a discredit, and Jews in particular, I think, still worry about whether the actions of Jews like, say, Bernie Madoff will “rub off” on them. The unease is easy to understand, since any such hostage taking—for that’s what it is, when I reserve judgment on an entire group pending my assessment of one person’s character—violates the sanctity of the individual; but if we can’t find ways to preserve this form of hostage taking, which is still intact in some clannish immigrant groups, we will lose a necessary support of the market economy.) Of course, this process is also a highly competitive one, but without final winners and losers—the best re-engineering of a phrase for a very specific situation might exclude all other possible inventions for that situation, but it will inspire others for other, even very similar, ones.
In this way, we keep moving at diagonals from each other, moving forward, but in such a way that any future scene is unthinkable within any present one, in a negative theology of history which simply negates any populated future scene by acting in such a way as to not fit on it. I might mistake Freud’s terms and call this dispersal of the dots “parapraxis.” My resistance to a uni-scenic humanity might seem to counter the fundamental insight into our shared origin offered by GA, but any scene could only branch off of another, which in turn branches off of another, and no scene could be outside of this vast ramification of overlappings. Anyone else out there could attend to something I am attending from, and vice versa—what is transparent to one is opaque to the other, what is a means for one is an end for the other, what is explicit for one is tacit for the other, and we could reverse these relations for each other. And that’s all the common humanity we need.
July 19, 2011
Post-millennialism and Originary Grammar
Eric Gans noted a while back that the first “market” was war, insofar as value is established through competition in a public space. According to that criterion the market can be traced further, to the most primitive hunting and gathering societies: one hunter would prove himself more proficient than others and the subsequent recognition, emulation, envy and resentment can be easily imagined. The seeds of the market are therefore planted in the earliest human societies, as Gans elsewhere suggests that private property can be sited on the originary scene itself, as in any scene of shared consumption—at the very least, that bit of food I am about to put into my mouth must be mine. Even more, the rough egalitarianism of the originary scene and early human societies would always have been countered by the marginal inequality that would result from some kind of proficiency, even if just speed and ruthlessness in the process of division.
In that case, the progression from egalitarian hunter/gatherer societies to the gift economy, via the prolonged and enormously important Big Man stage of social development and ultimately to the modern market economy can be described in terms of a dynamic between domination by the object (with guaranteed access to the object for all members of the community and absolute fealty on the part of all members of the community to the God immanent in it) and differentiation through competition on some public scene—by “public,” I mean no more than some spectator and a judgment. Resentment towards whoever so differentiates himself “too much” would be conveyed in the name of the center, and resentment towards the homogenizing effect of the centripetal pull of the object would be advanced in the name of some site of differentiation.
For most of human history the market, or public competition, is on the margins of the social order and strictly regulated—even more, such competition is channeled into the devotion to the center constitutive of the order. The gift economy would always be threatening to spiral out of control because competition between rival families and patriarchs would always be on the verge of violent eruptions—the honor/shame moral economy must be maintained so fanatically for precisely this reason. The single attempt to combine, in a balance, an open marketplace of goods and ideas with a slaveholding, patriarchal, sacrificial order, the ancient Greek city-state, failed miserably, if spectacularly. The solution to the problem of transcending the tribal social order was the Big Man, and its further extension into imperial order. There is still a gifting economy here, but a permanently asymmetrical one—the people pay tribute to the king who, in turn, as god or representative of god, gives life to the people. Society is still sustained by a single, central scene, and the most pertinent competition, actual or potential, would be over occupancy of the center. The splendor of the center overawes any potential rival to the throne—the center is now embedded in a cosmological order: the hierarchical structure of reality guarantees the social hierarchy.
Both metaphysics and monotheism sustain that central scene by revising it fundamentally: the inquiry into the higher order, for Plato, enables us to hierarchically order our own souls so as to submit to and participate in a social order ideally governed by the best; God as king supplants Pharaoh, and the promised apocalypse involves the establishment of theocracy on earth. Of course, earthly kings are abolished or substantially downgraded, but the entire world remains a single scene—the only possible resistance to what Heidegger called the onto-theo-logical order is existential rebellion along with the forces of evil. The “aristocratic” and tribal values—striving for glory, for greatness, but also revenge and recognition by inferiors—are sharply restricted and, by now, close to extinct. (Part of the historical task of feminism was to chase these values out of the few communal and private spaces where they still flourished.) Keeping these values down has required the assertion of the single central scene—first of all, in each nation-state, but more generally in the scene of “history,” in which “great deeds” had their meaning extracted from them, becoming part of a process that would make further such acts unnecessary. The project of transnational progressivism, to subject all of humanity to a single (anonymous, as in the EU) global authority under international and human rights law, should be seen, at least in part, as an attempt to make the recrudescence of the martial values unthinkable. We would have, at that point, less a single scene than a single scenelessness. Fortunately, that’s not possible.
The modern market emerged under the protection of the modified, singular, central scene, and it needed it, resented it, and revised it. The great sweep of the unfettered market in the Anglo-Saxon world proceeded under extremely minimal central governments, but also in close association with equally grand imperial projects—the subjugation of a large part of the world in one case, and the conquest of a continent in the other (or “others,” including Canada, Australia and South Africa). The creation of great fortunes looked very much like new monarchies, and the political economy justifying the great economic expansion spoke, up until the end of the 19th century at least (when the marginalist school of dissenters emerged), of “society” as comprised of a massive accumulation of “wealth” created by the totality of social “labor”—I think that Hannah Arendt was essentially right when she claimed that Adam Smith’s political economy was already implicitly communist, assuming as it did that the total social product could be calculated in terms of the total social labor—and Marx thought so too, turning Smith, Ricardo and the others back “on their feet” as he did with Hegel’s dialectic of “history” (wherein the Communist revolution would be the final singular scene). In that case, when the totalitarian upheavals of the 20th century licensed themselves with scientific and technological tales of an inevitable historical process, they were not essentially at odds with the theoretical cover under which the free market became dominant in its 19th century heyday. The often vibrant competition in the political realm has long been firmly subordinated to debates over how to grow the total social product or distribute it more equally—no politician, in an actual election campaign, would let the electorate know that a particular proposal might lower our standard of living a bit for a while but is nevertheless the right thing to do. Even more, I think it is a universally shared tacit assumption that, however honest, open and non-intrusive our governments might be, the consequence of even the tiniest decline in living standards would be a complete delegitimation of the political system, unless all involved declared the unequivocal moral equivalent of war against said decline. In other words, the Western ruling class has long shared with Marxists the assumption that politics is just a reflex of economics, and with vulgar Marxists that economics is a question of who gets how much.
The post-millennial would mean that we no longer believe in an event to end all events. That would also have to mean no more singular central scene, no more “history,” and no more total social product. Instead, society would be composed of a vast array of overlapping scenes, some of them provisionally elevated above others—those that, for the moment, are getting more “hits.” We already have the makings of such an order, but the obstacles to its free development are formidable: if there is no more total social wealth, then we can’t demand that someone give us more of it, or guarantee that there be more of it; if there is no central government, we would have to rely upon each other in our daily transactions; who would make sure that new dictatorships don’t emerge, that new enemies of market society aren’t empowered, and that new tribal wars don’t break out; and, moreover, we would all have to live peacefully with very different people, free to express abhorrence of our own way of life—all these possibilities are simmering right underneath the placid surface of liberal democracy, and releasing our grip on the central scene and the object at its center might be just as likely to set them loose as to render them obsolete once and for all. At any rate, how could we know one way or the other? And the stakes are as high as they get.
It would be counter to the entire idea of the post-millennial, though, to have a grand strategy for accomplishing it. Rather, helping to usher in the post-millennial would have to mean learning to attend to the kinds of practices, attitudes and relationships that would render the total social product and history unreal. These practices, attitudes and relationships would involve the firm, even absolute, assertion of binary oppositions (true/false, good/evil, friend/enemy, etc.) while simultaneously highlighting the paradoxical character of all such distinctions and the extreme contingency of any consequences we can imagine descending from any particular choice at a particular fork in the road. For example: I am pretty certain that we in the US will not resolve our current debt crisis, that our system of entitlements will become unsustainable, that neither our political class nor our citizenry is anywhere near capable of addressing these issues and therefore preventing a social collapse within a couple of decades, and that long term demographic trends will overwhelm us regardless. My thinking on all this is, in other words, apocalyptic—to quote John Derbyshire, we are all doomed. And I am willing to take the extreme political stance commensurate with the assumption that only the slightest sliver of possibility can prevent such doom: no increase in the debt limit, no way. Let’s decide right here and now what we really want to spend the money we actually have on.
Of course, having said this, there’s not much more to say. To paraphrase Gertrude Stein’s thoughts on the atomic bomb, either this social collapse will just be a somewhat worse version of the economic recessions and depressions, and the social crises (increases in lawlessness, etc.), that we are already familiar with; or it will be something totally different. If it will be the same, only worse, well, then, we will manage to muddle through it (well, maybe not all of us), in ways that we couldn’t predict or prepare for right now anyway; if it’s totally different, then we can’t have anything to say about it. Either way, it’s not “interesting.” What’s interesting is what we can only prepare for by ceding imaginary control over any central scene: how goodness will emerge out of evil, how the truth will assert itself amidst falsehood, how enemies will become friends, how small, marginal, neglected possibilities will create new centers, and so on. So, the assertion of good against evil slips into solicitation of good out of evil, which also means exploring the ways good turns into evil and evil into good—but this is essentially a grammatical problem, or at least can be treated as such. “Pursue good and flee evil” > “Evil pursues and good flees you” > “Flight is good, pursuit evil” > “Good is the flight from your pursuits.” That’s one line of grammatical inquiry, anyway, leading to potentially interesting questions. Flight from your pursuits to what? Do your pursuits in turn pursue you? What is involved in the literalization of these metaphors? What would be yielded by reversing the terms, designating flight as evil, pursuit good? These questions could only be answered in specific situations—originary grammar can only give us the form.
If metaphysics is the belief in the primacy of the declarative sentence (over the more dangerous and sacrificial ostensives and imperatives), and the God of Judaism adopted by the Christian world is named by the declarative sentence (a generic template of such a sentence, as Gans has recently noted), then the culture of the West created by the convergence between Athens and Jerusalem could be seen as a sustained defense of the declarative: “dedicated to the proposition,” to truncate a sentence from one of the greatest products of this convergence. In Western culture, nothing is acceptable or legitimate that can’t be “deduced” from a shared proposition. But while the metaphysical culture that prevailed through the early to middle stages of market society wishes to deduce all imperatives and ostensives from declaratives, a more modest post-millennial “indicativity” is content to be able to translate all other speech acts into the indicative mood.
I’d like to push this a bit further. Why this intrinsic connection between the declarative sentence and the cultural preliminaries of market society: the God who cannot be summoned by name and the space of open discussion? The connection, I think, is between the declarative sentence, which creates a reality resistant to our respective imperatives, and the spectacle created by the public competition I earlier suggested can be traced all the way back to human origins and which is the basic cell of the marketplace. One wouldn’t have needed a declarative sentence to indicate, affirm and name the central object, but one would need it to point out who won a competitive event. Peacefully concluded contests between “champions” would be the first events to direct our attention away from the central object and set us on the road to secular narrative. The power of Western—onto-theo-logical—culture lies in its assertion of a central stage and event that transcends such contests while bestowing meaning upon them: the struggle for holiness, to spread the gospel, to accomplish freedom and equality, or whatever; but this is because it recognizes the destructive force of rivalries, between bearers of absolute truths and gods who are absolute rulers. If we just take one step further and directly sacralize the creation of unique sentences, rather than the God or Truth named by such sentences, we can simply accept that everyone is following ostensives and imperatives they could hardly identify, much less “justify,” making the work of cultural pedagogy not the extirpation of unacceptable ostensives and imperatives but their framing as indicatives we could work on.
If there is no singular central scene, no historical unfolding, but rather a lot of “diagonal” movement in different directions, then we can’t recognize any scene that we don’t enter and constitute with our own words and gestures. I enter a scene you have started by following some ostensive-imperative chain and rendering the ostensive-imperative I see you following in a declarative form that can include the one I am following. To put it simply, I put what is incommensurate in our respective volitions into a sentence including them both and invite you to add another incommensurable and include that. If there is no total social product, no measurable accumulation of material wealth, but rather an ever changing global division of labor to which we attune ourselves, then economic productivity will migrate increasingly to enhanced social interactions—to people getting better at distinguishing themselves from each other and reading each other.
Here I’ll conclude in an anecdotal manner by noting that young people today seem to me to be highly self-reflexive, in a way consistent with the constant self-reference imposed by the various social media. You can call this “narcissism,” but why? Or, rather, so what—or then what? You will hear college and high school students say things like “can we get to the point where we… already?” Or: “this is the part where you…” In other words, they directly frame their actions in terms of formulaic narrative structures. They are constructing themselves as potentially shared centers. They are also, I think, compensating for the radical absence of legitimate social models—they are constructing models for themselves out of the material—the debris, if you like—of a now centrifugal Western culture. We could describe this absence of models in ways that could allow us to blame a lot of people if we like, all of us who should have been better models, I suppose (we mocked the establishment, undermined authority, broke up families, etc., etc.)—but, again, so what? Maybe the models we have inherited have just exhausted themselves; maybe the only models are the ones we now construct, partially retroactively. And through language, as we are certainly in the middle of a sea change in the linguistic possibilities of English unlike any we have seen for centuries—you just need to look at text-messaging, or consider the globalization of English, to see that. All these new means for distinguishing themselves, for making themselves the center of narratives they live and recite and pass on to each other, remain a dedication to the proposition. The best thing to do, as far as I can see (which is not very far, I admit), is to keep generating new sentence templates, and templates for generating templates (which, I suppose, would then be algorithms). It might be the only road away from serfdom.
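If a template for generating templates would, as supposed above, be an algorithm, the idea can be made concrete. Here is a minimal sketch in Python (every name and the template format are my own inventions, purely illustrative, not anything the post itself proposes): a sentence template as a function from terms to a sentence, and a higher-order generator that produces the kind of term-reversals played with earlier (“the last will be first,” “pursue good and flee evil”):

```python
# A toy sketch, not part of the original post: a "sentence template" is a
# function from terms to a sentence, and a "template for templates" is a
# function that produces new templates from old ones. All names invented.

def maxim(verb_a: str, verb_b: str):
    """Build a sentence template with the verbs fixed and the terms open."""
    def template(x: str, y: str) -> str:
        return f"{verb_a.capitalize()} {x} and {verb_b} {y}."
    return template

def with_reversal(template):
    """A template-generating template: the original plus its term-reversal."""
    def reversed_terms(x: str, y: str) -> str:
        return template(y, x)  # "the last will be first": swap the terms
    return [template, reversed_terms]

pursue_flee = maxim("pursue", "flee")
for t in with_reversal(pursue_flee):
    print(t("good", "evil"))
# Prints:
#   Pursue good and flee evil.
#   Pursue evil and flee good.
```

Chaining such generators (reversal of reversals, substitution of new terms, swapping of subject and object roles) would be a literal-minded version of a template for generating templates; the interesting work, as the post suggests, lies in which generated sentences turn out to be usable in specific situations.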
May 22, 2011
Health Care
Health care, as we speak about it today, is a completely modern phenomenon. Hippocrates aside, if you go back maybe 150 years, doctors had no effect on their patients: your chances of recovery if you did see a doctor were identical to your chances if you didn’t. “Health care,” or the medical profession, emerges along with modern science and the application of the sciences to everyday life in the forms of hygiene and nutrition. And medicine has been a rich source of tropes for the framing of modern dilemmas, as recognized by the very widespread claim that, in our thinking about moral, political and ethical issues, “therapy,” along with the associated categories of “healthy/sick,” “normal/pathological,” etc., has displaced notions of sin and guilt, good and evil.
It makes perfect sense, then, that progressive politics has always seen the incorporation of health care into the cradle-to-grave welfare system of the modern state as the jewel in the crown of the expert-centered organization of life central to such politics. The nationalization of health care makes state power potentially unlimited: not only do directly medical issues (coverage, treatment, the price of medical services, the training of practitioners, research and innovation, etc.) come directly within reach, but so do all questions bearing indirectly upon health. And which questions don’t bear indirectly upon health? What parents tell their children about homosexuality, the hamburger you had for dinner, the availability of birth control and, increasingly, social situations such as bullying and shyness—all affect health, all impose potential costs on the system, all sprout new forms of expertise and regulation. To use a medical metaphor, whether health care is centralized or decentralized is a life-or-death question for the free society.
The libertarian answer, to privatize medicine and insurance and render them sets of voluntary exchanges, is good as far as it goes. Libertarians rightly argue that what we call health insurance today is not really insurance in any meaningful sense—it is simply a way of pooling costs in government-mandated ways, and in ways that make the real costs of medical procedures inscrutable. Health insurance should be like car or home insurance: a premium in exchange for coverage for specified health care needs. But this analogy is limited—the sum total of bad things that can happen to your car or house is known in advance: if your house is worth $300,000, then the insurance company knows that no catastrophe can exceed that. But there is no such ceiling when it comes to your body—if your insurance company agrees to cover “cancer treatments,” must that include a decade of increasingly expensive treatments with ever-diminishing effect? Who decides? A court—according to what criteria? The doctor—which one? It seems that at some point, some irreconcilable disagreement between the parties is very likely, generating enormous resentment and terror as our media-saturated society is flooded with images of beloved parents and grandparents cut off from their treatments, either by evil insurance companies or by daughters and sons afraid of going broke. Politics is sure to channel such resentments, compromising the independence of the arbiters of insurance contracts.
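To make the bounded/unbounded contrast concrete, here is a toy calculation in Python (all numbers and names are invented for illustration): the home insurer’s worst case is capped by the value of the asset, while an open-ended commitment to “cancer treatments” has no agreed ceiling, and even its time horizon is a variable.

```python
# Toy illustration (invented numbers): why a house is insurable in a way
# an open-ended medical commitment is not.
home_value = 300_000  # the worst case is bounded by the asset's value

def capped_claim(loss: float) -> float:
    """Home insurance: the insurer's exposure can never exceed the asset."""
    return min(loss, home_value)

def open_ended_claims(yearly_cost: float, growth: float, years: int) -> float:
    """Treatments whose cost grows each year, with no contractual ceiling."""
    return sum(yearly_cost * (1 + growth) ** t for t in range(years))

print(capped_claim(10_000_000))              # 300000: bounded, hence priceable
print(open_ended_claims(100_000, 0.15, 10))  # ~2,030,000, and 'years' itself is open
```

The first risk can be priced against a known maximum; the second has no maximum to price against, which is exactly where the courts, the doctors and the resentments come in.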
Such a system could only work if a significant majority of the members of society could openly accept the basic unfairness of life chances and death. We would have to be able to look on, with equanimity, as insurance companies withdraw support from dying patients, including those we love and ultimately ourselves; as grown children decide that funding their children’s education is more important than a few more years of life for their own parents; and so on. And, of course, such equanimity would have to coincide with an acute awareness of the unprecedented character of all this, including the heart-wrenching possibility that a few more years of medical progress might have lessened or even eliminated your particular dilemma. We don’t have to go back further than the lifetimes of many living today to recall when “health care” involved very few decisions, and certainly not the impossible ethical ones we are constantly confronted with today: you accepted your fate, you made people comfortable as they accepted the inevitable. Even as some reliable treatments became widespread and childhood mortality was almost eliminated, aging, sickness and death still provided the proverbial contours of our existence—the problem is, they still do.
Here, it seems to me that the much-maligned (especially by conservatives) “therapeutic culture” might come to our aid. Despite the vituperation and ridicule heaped upon the therapeutic, is there any reason to assume that the distinction between, say, “good” and “evil” is any more originary than that between “healthy” and “sick”? If we take the most basic distinction to be the one distinguishing sacred from profane, why is that distinction more adequately modeled on one binary rather than the other? They are just different ways of framing the more inclusive distinction between whole and rent—integrity vs. corruption, working vs. impaired, fixed vs. broken, etc., being other versions. To be healthy is to be whole, to retain one’s integrity, to be articulated, symmetrical—all near-synonyms for wholeness, which means to have a formal reality embodied in your physical one—just like the central object once we have all pointed to it and agreed to let it be.
The therapeutic culture, by way of its victimary turn, has also, it seems, created our ability to confer healthiness upon ourselves and each other. Perhaps the one product of the victimary culture that deserves to survive is our sensitivity to the ways we describe “disabilities” (I, like most of us I suspect, cringe upon hearing—or remembering hearing, since you never do anymore—an older term, unmarked in my parents’ generation, like “crippled,” much less the brutal terms for mental disability: “moron,” “idiot,” even “retarded,” the more humane replacement for the preceding terms and currently the object of a vigorous campaign across college campuses to proscribe “the ‘R’ word”). It is really marvelous to see what people confined to wheelchairs (and the blind and deaf) are often able to do now, and our Gnostic, often cloying insistence that they can do it has certainly supplemented the prodigious technological innovations we must credit. We have also seen the emergence of an entire culture concerned with ways of coming to terms with disease, decline and death, and with the ability to turn, once all resources have been exhausted, from attributing responsibility to others (the doctor, the insurance company, the hospital, the state…) to simply seeing to the integrity and dignity of the patient and her loved ones. There is, we might say, a “healthy” way to finally let go.
The individualization of the sickening, recovering and dying processes thus introduced will not only guarantee our constant chafing at the restrictions and cookie-cutter categories of homogenized health care systems but will further facilitate another process which I believe is inevitable, indeed already well underway: the pluralization of therapies. Why shouldn’t the government or insurance company pay for, say, Native American cures? Because they haven’t been scientifically verified? You would have to have a very naïve faith in public confidence in the modern cult of professionalism and expertise to imagine that that answer will hold the fort for long. There will be more and more things government and insurance companies will be expected to pay for and can’t—but, at least, it’s possible to imagine the emergence of insurance companies that cater to the eccentric and the desperate. So, as government presence recedes, health care decisions will devolve to the individual, producing more flexible norms of expertise. Does someone really need six years of medical school and ten years of internship and residency to help me with my aching back or cough? I doubt it and, more importantly, more and more people will come to doubt it, especially when they are the ones weighing costs. In the end it will be obvious that our health care needs are better met in this more differentiated manner, and on the open market, with practitioners, inventors of medical technologies and promoters of new methods engaged in competition with a close eye on the actual costs of skills and procedures.
At the same time, such a process will generate, in the short term and perhaps longer, inequalities and mistakes that will seem monstrous to many. There will be plenty of cases of people purporting to fix backs breaking them instead, of con men hawking fake treatments without fear of the regulator or licensing board, of new, prohibitively expensive treatments conspicuously available for a while (a while long enough to count the dead resulting from “health apartheid”) only to the very wealthy. And the question for us, as a civilization, will be: can we abide that? Health problems, today, have come to be experienced less as “acts of God” or the inevitable workings of Nature than as a kind of violence, uniquely, unpredictably and terrifyingly directed at individuals, violence to which we are all ultimately equally vulnerable—violence from private and public greed and callousness (insurance companies, doctors driving Mercedes, companies pumping carcinogens into the environment, pencil-pushing bureaucrats putting rules over compassion, etc.). The demand for universal health care, or at least universal coverage, taps into a kind of originary terror. To make ourselves whole, we would have to be able to suspend that attribution of violence, and learn to use our greater powers of physical healing as metaphors enabling healing of a more transcendent kind.
March 3, 2011
Madison, not Cairo
I’m much more interested in what is going on in Wisconsin than in the Middle East. The Middle East is the business of Middle Easterners now—America gave up its pretensions as a superpower, or leader of the Free World, or hegemon, or whatever, with the election of Barack Obama. Who knows—maybe it’s for the best. If we get serious at some point and start electing real rather than vanity presidents, we’ll have to start from scratch in designating allies and enemies, and maybe we’ll be less bogged down by whatever balancing act the State Department thinks it has been performing for the past few decades. The likely outcomes in the region: some unstable, very flawed, but perhaps workable Islamic Republics (with the emphasis probably swinging back and forth between noun and adjective); or straight-out Islamic (or, much less likely, secular) totalitarian regimes; or failed states with genocidal militias settling scores. Totalitarian regimes can stagger along for a couple of decades, and sooner or later the scores get settled and everyone gets tired, and things revert to an Islamic Republic with the emphasis on one word or the other. In the meantime, it’s likely that the empowered elements in the region will be less capable of waging serious war, certainly against us but even against Israel—although Israel certainly needs to be ready to go it alone, and to drop the rules imposed by the international media/human rights community, or the continuous simulated Nuremberg Trial directed at Israel. They’ll probably need to sell us oil anyway, and if they want to throw acid on their own faces to spite the Great Satan, maybe it will shock us into seriousness, i.e., drilling and building nuclear power plants.
Wisconsin, though—the fate of Western society is at stake there. Public employee unions have the government extract dues from their members; they use those dues to fund Democratic Party politicians, who “negotiate” fat benefit packages with those same unions, packages that promise future bankruptcy, but only long after the politicians who gave away the store can be held accountable for it. Even more: in its most advanced form, public employee unionism creates little one-party states where, even if you elect budget-cutting, tax-cutting Republicans, the public employees have already been given the store, and can shut down the entire government in response to any attempt at breaking those promises. So it’s better to keep electing Democrats, who at least might spread the largesse around. Even more: public employee unions have a monopoly within a monopoly—not only is there no competition for government-provided services, not only does the government not have to worry about making a profit, but now the government can’t even exercise its power as monopolist to impose reasonable terms upon its workforce. And there’s even more… But you get the point.
Now, though, after the disappointing collapse of Arnold Schwarzenegger’s governorship at the first serious battle with those same public employee unions, a series of Republican governors has emerged who do seem serious: Chris Christie in New Jersey, John Kasich in Ohio, somewhat more equivocally Mitch Daniels in Indiana—and, of course, most prominently right now, Scott Walker of Wisconsin. Walker in particular is getting at the heart of the problem, negotiating not only the benefits themselves but the conditions that make it possible for the public employees to hold their “employers” hostage. His bill is every bit as radical as his enemies say—it seems to me that it would have been enough simply to make the payment of union dues optional: that in itself would collapse the racket, and the Democrats’ money-laundering scheme. And he could have presented such a plan as a pure defense of the rights of individual union members, without mentioning collective bargaining at all. But I’m sure he and his fellow Republicans have their reasons for attacking the problem comprehensively, and it does allow for an all-out fight with everyone fully aware of the consequences.
The Left certainly understands the consequences, and they are enraged and desperate. If the Left can indeed be defeated decisively, even destroyed as a force in modern society, the path lies through Wisconsin. Take away public employee unions and you see a drastic decline in Democratic and Left money more generally; you will see far more sparsely populated Leftist demonstrations; you will no longer see the intimidating enforcers at so many demonstrations of both the Left and the Right; the Democrats start losing elections regularly, and then who gives them money or votes for them, since all their support is predicated upon their being in power and able to give some people other people’s (or imaginary) money? The outcome in Wisconsin, let alone the rest of the country, is still in doubt—we are at the beginning of the beginning. But the survival of the market economy and its political order right now depends a lot more upon what happens in Madison than on what happens in Cairo.