GABlog: Generative Anthropology in the Public Sphere

January 6, 2013

Notes on Equality

Filed under: GA — adam @ 9:01 pm

“The Muslim position is a powerful attraction for the marginal (collectively and individually) and the disaffected. What it lacks, in its obliteration of the anthropological connection between God and humanity, is a way of theorizing the deferred equality inherent in firstness. The Islamist insistence on Sharia is a clear demonstration of the non-reciprocal nature of Islam. Sharia demands submission; ‘Islam’ means submission. We have all heard conservative complaints that feminists in the West find every straw in our own eyes but ignore the beam in that of Islamic societies. But there is a reason beyond political expediency for the difficulty in attacking, for example, Islam’s unequal treatment of women. Whatever the disparities in Islamic society between men and women, or even free persons and slaves, they exist on a base of firstness-free equality. Sharia is ‘the same for everyone,’ as though Islam effectively imposed the ‘veil of ignorance’ that defines John Rawls’ ‘original position.’ Sharia’s defender might well say: ‘Yes, Sharia distinguishes between men and women. But we all obey it equally. I obey Sharia as a man, but if I were a woman, I would submit to its rules in the same manner.’” Eric Gans, “Abraham’s Three Firsts,” Chronicle 435

Effacing the deferred equality inherent in firstness produces equality in the face of Sharia, which is to say, equality in the face of the destruction of deferred equality. With regard to both deferred equality and its demolition, there is a kind of inequality: in the first case, an inequality compared to the equality yet to come; in the second case, an inequality due to the arbitrary nature of the rules needed to abolish deferred equality. The equality, in both cases, though, is not an “objective” one, based upon some (which?) universally shared measure (the impossibility of which cannot, I think, be demonstrated any more effectively than Marx does in his Critique of the Gotha Program)—rather, in each case, equality means equality before God, or, more generally, some sacred center. This equality is reciprocally constitutive with whatever inequality it seeks to mitigate or reframe. Think of how easy it would be, in Gans’s example, to replace “men” and “women” with “king” and “subject,” or “master” and “slave”—given the right sacred center, these could also be seen as instances of equality—“I obey as a master, but if I were a slave I would submit to its rules in the same manner.” Master and slave, monarch and subject, would, indeed, be equal, if the sacred center established so as to defer some more terrible violence decreed the necessity of such positions. Any affirmation of equality singles out that feature which positions each one equidistant from some sacred center (with the measuring rod being a product of the center itself), and that form of equality defers the violence implicit in the remaining inequalities by providing some kind of access to some fundamental social good and meaning—including on the originary scene, where relations of leadership of some kind or another will be generated out of the results of the scene. That is why what often look to us like astonishing inequalities are enshrined in religious doctrines and rituals predicated upon equality before God without any sense of incongruity (even if such always exists somewhere on the margin)—what some mode of “framing” presents as inequities must be reframed as instances of the central equality. Otherwise, what good would that mode of equality be doing?

The first conclusion I see following from this formulation of (in)equality is that the modern notion of equality, which seeks equality outside of and even against any sacred center, is both incoherent and insatiable. It will always be possible to identify some new form of inequality and render it intolerable, and there is no reason to assume that a corresponding and mitigating form of equality will always be imaginable. It might be simpler to say that the modern view of equality is simply insane. The American, not quite modern, understanding of equality in classical Christian and Judaic terms, as all men being created equal by God, is far less so, but the boundary separating equality before God from unbounded equality is not all that thick. All men (and women) can only be equal before the God who displaces a global (at least in principle) imperial center: without some God-man claiming a right to the lives, possessions and devotion of his subjects, the one God before whom we are equal evaporates, and with it our equality. All modernity did was take the anti-imperialism of Judaism and Christianity seriously and direct its attention to overthrowing emperors. But the unanimous anti-imperialism (in the broader sense of anti-state) constitutive of revolutionary modernity requires a new empire, a more terrible empire claiming the right to shape its subjects and eliminate the misfits who unsettle that unanimity. All political talk now, on the left and the right, presupposes such a unanimous rejection of some form of tyranny with which the opposing party is complicit—the notion that liberal democracy has ushered in a new era of decision by open discussion is not only an illusion but conceptually incoherent because, in the end, one is modern or not, democratic or not, free or not. Liberal democracy absorbed the civil wars constitutive of the entrance into modernity and is now dissolving back into them—anyone who listens closely to even the more moderate Democrats and Republicans, even when they are speaking to the center, can see that in the end neither side can really grant the legitimacy of the other. You can’t enter a discussion with those who will not change their views via that discussion, or who believe that the discussion itself is not decisive, and presenting your opponents as those who fit precisely that profile has proven irresistible and, ultimately, reasonable. The fit between democracy and the rule of law was always a rough one at best, but in the end why should the people accept the rule of law if it interferes with their desires?

The only sane alternative is to say that we are equal with those whom we reciprocally treat as equals. Each of us is then equal with many others, in many different senses, and at many different levels—I am equal with the friend with whom I share confidences, his and mine; I am equal with my children insofar as I fulfill the role of “father,” in which I deposit a certain sacrality that binds me to them as “children”; I am equal to my coworkers insofar as we all expect more or less defined fair treatment from our employer, who will not treat any of us as either slaves or cronies; I am equal to the vendor on the street from whom I buy a giant pretzel insofar as we each part with something we desire less for something we desire more and thereby better each other’s condition; and, beyond that, I feel a kind of liminal, potential equality with anyone whom I might someday encounter and try to engage in some way. Anyone can embrace a democratic spirit and seek to expand the circles of equality in which one participates, and the intensity of the equalities one already enjoys, but to pretend to equality beyond those circles, where there is no shared center, is utopian, savage, or both. Sometimes equality emerges out of the clash of incommensurables, and/or the decisive defeat of one by the other, as has happened with the warring parties of WWII, but that doesn’t mean one can elevate that possibility into a rule or method.

Once one form of equality is established, it is likely that others will be forthcoming—economic exchange can lead to political alliance and vice versa. But it is just as likely that one form of equality will reveal barriers to further engagement. At any rate, there is a problem of inequality, but it lies in some violation of the rules articulating all in relation to the shared center. Inequality is essentially a question of cheating—the rules are the “deferred equality inherent in firstness.” In that case, though, the solution is to rework the rules and/or their enforcement, or to accept that that particular form of association has been exhausted; it is also the case, then, that to foreground inequality itself as the problem is to poison the rules, because then the rules become nothing more than a means to reduce or eliminate inequality, which is to say that the rules become nothing more than a weapon used by one side against the other, which is by definition outside of the rules.

The only possible politics in the ongoing self-dismantling of modernity is one that seeks to clarify the rules according to which we are playing. If the rules can’t be clarified in a way that satisfies all parties, then there is no “game” and the only reasonable and honorable alternative is to leave. The other side will do what they can to you and you will do what you can to stop them. I would be very curious to hear anyone try to clarify the rules according to which our federal government currently plays—I don’t think it can be done. The government simply rewards its friends and, if they are lucky, ignores its enemies, like any powerful patron or protection racket. Referring to the rules, like the law or constitutional principles, is futile because all of those rules have been weaponized. All that can be done is to avoid drawing the attention of the state, and, more importantly, to maintain the games one plays and the rules they rely upon, while preserving as much of their autonomy from state and society as possible. Study those rules, and divine the tacit agreements in which they are embedded—those tacit agreements, our idioms, will in turn reveal other possible agreements. One’s chosen equalities with others are simply external to the state—seeking to use them as levers to overthrow the state would reinstate the same totalitarian anti-imperialism that has brought us here. The state has not usurped some position of originary justice which it is now up to the people to retrieve and restore—the state is just the largest property owner, as the kings were, and even if it now invites a few citizens to help in the management of that property, that doesn’t change the fact that your property is ultimately on loan from the state and that you are equal until it’s your property that the state sets its eyes on. Getting rid of the people presently managing the common realm will not solve the problem of how to manage or distribute it afterward. It’s better to prepare for that time when it might be possible to buy up bits and pieces of the state, maybe at bargain prices, when it starts to fall into pieces. Nothing, that is, prevents us from creating alternatives to the state, or from picking up the pieces of the relationships it destroys. And there will be a lot of pieces.

December 17, 2012

After Memory

Filed under: GA — adam @ 6:09 pm

“An act of pure attention, if you are capable of it, will bring its own answer. And you choose that object to concentrate upon which will best focus your consciousness. Every real discovery made, every serious and significant decision ever reached, was reached and made by divination. The soul stirs, and makes an act of pure attention, and that is a discovery.” D.H. Lawrence

The notion of having God’s will, ideas, or natural law “engraved” or “inscribed” on the heart or mind has been a constant of Western thought from the Hebrew scriptures through the founders of modernity like Locke and Kant. (There may be nothing inscribed on Locke’s “blank slate,” but where did the notion of a blank slate, prepared to take inscription, come from?) The metaphor obviously depends upon writing, and presupposes a process of inculcating a sense of duties radically at odds with those of an oral culture. In an oral culture one’s primary obligation is to know all the names (of ancestors and divine beings) constitutive of the web of existence, or to know who knows them. Such ostensive knowledge has an imperative component—one tries to find out what the named beings want, and then one does it. With writing comes a transcendent voice that says the same thing to everyone and comes from everywhere or nowhere. Anyone can repeat the Word over and over again, inscribing it internally. While it exists objectively and can be checked when needed, the written word is only effective if memorized—while the prodigious feats of memory of the epic poets of oral cultures are no longer necessary, the book could not stay with one without at least some degree of memorization, of key passages, of general themes, and so on—after all, the written word was not readily available (who could afford to possess books?) and was often accessible only through public readings and sermons, and in educational settings. God’s word is then written on the heart through constant oral repetition, and is embedded in culture through its transformation of the language—in the same way in which our own contemporary English is still, unbeknownst to most English speakers, saturated with phrases from the King James Bible and Shakespeare.

Even as books became readily accessible, this relation between the written word and its “inscription” on our minds and hearts has remained remarkably constant, I suspect. I remember, as a graduate student, even though I could have dozens or hundreds of books, privately owned and borrowed from the university library, constantly trying to inscribe on my mind passages from books I had read and, even more, cross references from one book to another. If a critic I happened to be reading made a reference to, say, something D.H. Lawrence said about Christianity, I would try to call to mind what I had read by or about Lawrence that might frame that reference, and chastise myself for, inevitably, not having “inscribed” on my mind what I now needed; I would then inscribe what that critic said and draw upon whatever traces of my previous reading of Lawrence might enable me to locate that framing reference. Indeed, even finding a passage that I wanted to quote for a paper required some prior inscription—it was somewhere in chapter 2, or sandwiched in between two other discussions which I had inscribed in broad strokes. Ultimately, what marks one as a worthy scholar is being able, much like the first users of texts, to take a written text, available to all, as a prompt for a dialogue with others or an internal dialogue with oneself that, in the end, would produce a new text—a process that requires some ongoing retention.

Joshua Foer, in “The End of Remembering,” explores one central consequence of the displacement of print by electronic culture—the fact that one needs to memorize less and less. One striking example he gives is phone numbers, even one’s own (I don’t, in fact, remember my cell phone number). For me, and I am sure for many others, a critical rite of maturation was being able to remember my phone number—it meant I could be trusted to go out on my own. Now, a small child can have a cell phone but doesn’t need to remember his home number. I think that electronic culture is having a related effect on scholarly work and education as well—you don’t need to remember where a passage in a text is, or where to go back and find a particular comment by D.H. Lawrence on Christianity, because all you need to do is compose a search term, which I suppose still requires some memory but very little, since you can try out a whole series of search terms (D.H. Lawrence critical Christianity… D.H. Lawrence hate Christianity… D.H. Lawrence Christianity eternity…) in all of 20 seconds. There are lots of accounts of the changes in consciousness under way as a result of the emergence of electronic or digital culture (inquiries generally modeled on the studies by Eric Havelock, Walter Ong, Jack Goody and others on the transition from oral to literate culture). I think that Foer is right that we might advance such inquiries considerably by focusing on this singular fact of the obsolescence of memory.

Well, you still need to know why you would be looking for that Lawrence passage, and what to do with it once you retrieve it—but we can imagine such search and deploy missions taking on a very different character from traditional scholarship. If I am working with a text—and, as a student, when we first learn to work with texts, I am working with it because I have been told to, or because it’s the text everyone is working with (and, in fact, for the vast majority of working scholars, this changes very little—one reads what is read)—and I encounter a name or word that I don’t know, and I feel I need to know to make sense of that sentence or paragraph, I can do a quick Google search that tells me who the name refers to or gives me a definition of the word, taking, perhaps, the first several items—the answers I get won’t provide me with the kind of context that having inscribed texts in my heart and mind would have done, but how would I know that? If I then have to write about that text, I will use the names and words I have retrieved in what would look to a traditional scholar like semi-literate ways (probably both overly literal and a bit random), but the more people do it that way, constructing hybrid figures and meanings, the more that will be the kind of work done in the academy and elsewhere. One would simply fill in the gaps in one’s knowledge as they appear, and they would be considered “filled” insofar as they enable you to get to the next gap. Better work would be distinguished from worse in the patience that has been taken to construct links across a range of texts and the consistency with which one has used and cross-referenced those links—and, in exceptional cases, the ingenuity with which one has provided unexpected links for others to follow up on. You really wouldn’t have to remember anything—you would only need to have acquired the habit of searching and articulating links whenever confronted with a text and a task. It is quite possible to imagine a whole new kind of intellectual work emerging out of the process, one which applies across the disciplines, including the sciences, which are probably already closest to this model—after you’ve put together all the links everyone you have “read” has read, there will be certain gaps in knowledge (possible but unmade links)—you just go ahead and fill in one of those gaps.

Indeed, college instructors should be avoiding standard topics like “D.H. Lawrence and Religion” precisely because of the ease with which one can patch together a series of passages and critical comments through the internet. Instead it might be better to imagine unprecedented topics, like, for example, selecting a particular word or phrase that recurs in an author’s work or a particular text, gathering up all the instances of that word or phrase, checking the rate of its recurrence across the writer’s work and in comparison with its occurrence in other authors, and using the findings to challenge some established critical views of that writer (one could make such tasks increasingly complex, as necessary—one could form new search terms for the use of the word or phrase in specific contexts, in proximity to other words and phrases, etc.). Culture, in that case, will tend to be experienced as the distribution of probabilities with which commonplaces, and differing modes and degrees of variants of those commonplaces, appear. More conservative interventions would seek to stabilize the most relied-upon commonplaces, while radical ones would seek wider distribution of more “deviant” variants. Entertainment would continue on its current path of arranging in different but not too different ways common scenes, narratives, catchphrases, etc. We would almost literally be going with the flow—the flow of information regarding how distant from the norm our current array of ready-to-go phrases and gestures is at the moment; freedom would involve determining how distant we want it to be.
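
To make the kind of assignment described above concrete, here is a minimal sketch in Python of counting a word or phrase across authors' works and normalizing the rate for comparison; the file names and the per-10,000-words measure are illustrative assumptions, not anything the assignment as stated prescribes.

```python
import re

def rate_per_10k(path, phrase):
    """Count occurrences of a word or phrase in a plain-text file
    and normalize to a rate per 10,000 words (a crude substring
    count; word-boundary matching would refine it)."""
    text = open(path, encoding="utf-8").read().lower()
    words = re.findall(r"[a-z']+", text)
    hits = len(re.findall(re.escape(phrase.lower()), text))
    return 10000 * hits / max(len(words), 1)

# Hypothetical corpus files: compare how often a phrase recurs in
# Lawrence's work against another author's.
for path in ["lawrence_complete.txt", "hardy_complete.txt"]:
    print(path, round(rate_per_10k(path, "eternity"), 2))
```

One could then form narrower search patterns, for the phrase in specific contexts or in proximity to other words, exactly as the parenthesis above suggests.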

The features of digitality more commonly discussed, like social networking, seem to have the same effect of rendering memory obsolete. If someone puts photos of himself with his girlfriend on Facebook, he has no need to remember the experiences recorded in the photos—here, the public nature of the exposure is what makes the difference: the photos represent his relationship to her for those who have access to his page, and that is their meaning. If they break up and he changes his status, the pictures can come down and be disappeared. Maybe such an individual, today, has the same regrets, nostalgia, hopes for reconciliation, reconstructed memories and so on as a normal modern person, who has inscribed his feelings upon his heart and mind, would—but I don’t see any need to assume that this will continue to be the case. Loves and friendships may be more and more reduced to the needs of public display, and more and more people will be their Facebook page (and whatever networking forms emerge in the years to come) and therefore memoryless. Some form of sexual desire can be taken for granted, but romantic love centered upon monogamous long-term relationships, relationships dependent upon both memory and forward looking narratives, certainly cannot be. Emotional life might take on shapes drastically unfamiliar to us.

What kind of people would these memoryless, or “dis”-membered beings be? It’s easy to assume the worst. With the obsolescence of memory, what would promises be worth? Would anything we (“we” being traditionally humanist thinkers) recognize as “thinking” be possible? If the past is constantly disappeared how would the future be conceptualized? Would people save money? Have children? Be capable of any kind of sustained attention whatsoever?

Marshall McLuhan, of course, raised these kinds of questions half a century ago, and his notion of a “global village,” meaning both instant connection across the globe and a return to the kind of oral culture focused on spectacular events, driven by rumor, gossip, moral contagions and celebrity (a kind of mini-divinity), seems as relevant as ever. As does Jean Baudrillard’s vision of a society of simulacra, in which we are ourselves the models out of which we construct ourselves. The viability of such a society, with minor as well as major powers possessed of nuclear weapons and the rise of a global Islamic fanaticism, not to mention the problems involved in managing a complex global economy, would be dubious.

If signs are not to be inscribed in hearts and minds, what does understanding signs amount to? Nothing more, maybe, than the most originary of understandings—the capacity to iterate the sign, to maintain and extend the joint attention (to follow a line of attention) it has initiated and which has drawn you in. The capacity to iterate the sign involves the knowledge of the difference between a broken scene and an intact one—which is to say, knowledge of what kind of gesture is likely to get the desired response—or, at least, a response one would know how to respond to in turn. I would think about this as a kind of sincere pretending in which individuals try not so much to be like other individuals, as to approximate a kind of projected or imagined norm. But it is not easy to imagine and approximate such a norm, especially since its formation is constantly in flux, and what is normative or average in one site might be on the fringes in another. There will always be cases in which the projected norm is in fact an extreme anomaly or, to put it positively, sheer possibility.

This is the form thinking may take, or already is taking, as we move into the order of electronic communication: the generation of possibilities, the more sheer, the more barely possible, the better. Start with the assumption that anything is possible, anyone is capable of anything, and modify that assumption as the scene takes shape. Quite a few postmodern thinkers have already pointed in this direction. I will put it this way: modernity continues metaphysics, which sought out the ultimate reality in a higher, hierarchically organized, intellectual and spiritual order, by shifting our attention to the ultimate reality to be found in lower, unseen forces: material interests, sexual drives, the unconscious, etc. What comes after modernity is “patatiquity,” with the “pata” from pataphysics, the science of the exceptional invented by Alfred Jarry, and “tiquity” a temporal suffix modeled on “antiquity.” Patatiquity is the age in which possibility is more real than reality. Research conducted through Google constructs a possible array of links; social interaction carried out through networking online constructs a possible community. In both cases, the possibility is “real” insofar as others iterate the signs of possibility one puts forth, and these iterations in turn generate new possibilities (like a new hierarchy of Google links).

So, is patatiquity sustainable? On the face of it (and both conservative and postmodern critics, with differing evaluations, agree here), patatiquity seems to herald an era of irresponsibility and carelessness we can ill afford—isn’t the Obama cult exemplary of patatiquity, with its investment in the sheer possibility of hope and change; isn’t endless debt, both personal and national, equally patatiquital (or, perhaps, continuing with the model of antiquity, “patacient”)? Maybe—that’s certainly one possibility. But it might also turn out that the most avid explorers of and investors in possibility will insist on the kind of minimal reality that makes possibility possible: to take just one example, real money. The modern attempts to control the economy and regulate habits through the money supply just inhibit possibility by governing according to the norm extracted by experts. The more we insist on unequivocal laws governing distinctive areas of human action, taken as literally as possible, the more is left over for possibility. In fact, Gertrude Stein’s political conservatism seems to be based on a similar line of thinking: in a series of articles written for The Saturday Evening Post in the 1930s she argued for the necessity that money be “real” (i.e., not fiat) and for the government’s approach to requests for further spending to be that of the stingy patriarchal father (a stock figure Stein otherwise tended to despise); more generally, the intersection of habits that generates infinitely varied human interactions and idioms can only do so if minimal, but strict, rules are taken as given.

Under such conditions, the law can function more as constraint than restraint: restraint seeks to hold back while constraint seeks to channel, like the rules of a game that enable a wide range of moves displaying an equally wide range of intellectual and/or physical capacities. Out of a set of constitutive rules—those rules that make the game a game—emerge all of the regulative rules determining strategies. But patatiquity suggests something more: the regulative rules reveal more constitutive ones. The right to property is a constitutive rule of a free society, but there are many ways of enforcing that right, and each one of them—protecting one’s property oneself through arms and security systems, a public police force, a private security force, etc.—reveals something about the right to property itself (what kinds of ethics and capacities it requires and evokes, where it stands in relation to other rights). Just so does the elevation of possibilities involve an ongoing revelation of a community’s constitutive rules. Agreements would be made explicit and their limits clarified, and norms and assumptions about rights would emerge from those agreements; longer-term institutions, most importantly the family, that transcend individualized, explicit agreements might very well change dramatically, becoming, as is already the case, more contingent and mixed—how to ensure the care of children will be a real problem. On the other hand, there will probably be far fewer children, but, contra Mark Steyn, that may not be socially fatal—at the very least it will impose some very difficult choices: for one thing, it will become increasingly obvious that we can’t keep both our commitments to present-day middle-class entitlement programs and the regulations and tax policies that cripple the kind of productivity required to provide the excess wealth needed to subsidize those ever more bloated programs.

In patatiquity the sheerly possible can reveal constitutive rules that a more normative, declarative culture conceals. Imagine writing according to the following rule: each word aims at being the least predictable, given the surrounding words, for the normal reader. Your writing, then, is first of all a study of your possible readers, in an attempt to give those readers an opportunity to study themselves. Following this rule (which will not be easy) you will produce the sheerest of possibilities, the possibility left after all the others have been exhausted. And to read such a work would be to start exhausting those inexhaustible possibilities—all the clichés, commonplaces, formulas, chunks and constructions in one’s linguistic inventory. If the first word of the sentence is a personal pronoun, the next is most likely a verb, and a verb referring to an action carried out by humans; then, adding in the context, your own personal proclivities and some guesswork, you anticipate one “kind” of word with a 63% probability and another “kind” with a 37% probability; given that next word, the same process starts up for the next one, and so on all the way through; and you could do this backwards and forwards or starting in the middle, and over and over again. This is the way we always use language—someone starts to speak, or to gesture, or we start reading the first line, and each sign plugs into an array of constructions and possible relations between constructions we are familiar with in varying degrees. So, pataphysical writing makes visible the constitutive rules of language use, precisely by loosening those rules as much as is humanly possible. And now you can read anything in those terms, as a certain degree off-center, as containing anomalies; even the most predictable text will, then, embody pure possibility—perhaps especially the most predictable text, if we consider what an odd thing that is.
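
The 63%/37% anticipation described above is, in effect, a next-word probability estimate, and the least-predictable-word rule can be sketched against such an estimate. A toy version in Python, assuming a bigram model trained on a reference corpus that stands in for the normal reader's expectations (the corpus, the candidate words, and the smoothing constant are all invented for illustration):

```python
from collections import Counter, defaultdict

def train_bigrams(tokens):
    """Count how often each word follows each preceding word."""
    follows = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1
    return follows

def least_predictable_next(follows, prev, candidates):
    """Return the candidate the model rates least likely to follow
    `prev` -- the 'least predictable word' writing rule."""
    counts = follows[prev]
    total = sum(counts.values())
    def prob(w):
        # Tiny additive smoothing so unseen words aren't all tied at zero.
        return (counts[w] + 0.001) / (total + 0.001 * len(candidates))
    return min(candidates, key=prob)

# A toy reference corpus standing in for the normal reader.
corpus = "i need a cup of coffee and i need a cup of tea".split()
model = train_bigrams(corpus)
print(least_predictable_next(model, "cup", ["of", "tea", "running"]))
```

Reading a text "in those terms" would then amount to scoring each actual word against the probabilities such a model assigns, and noticing how far off-center it sits.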

De-memorization would then leave us with nothing but memory of the constitutive rules, and a desire to rediscover those constitutive rules over and over again through “acts of pure attention,” or “divination.” So, if we return to my first example, of thinking as linking, then the most compelling texts, scholarly, popular, or esthetic, would be those that articulate the most probable links in the most improbable ways, grounding them in sheer possibility. Elaborate, counter-productive rules like those promulgated and incompetently enforced by government bureaucracies would be discarded as requiring too much “self-inscription”—too much remembering of specific rules and their normative “meaning.” Very simple things, like acquiring the most useful skills, and saving as much money, or real wealth, as possible, would be preferred—you could always check the status of those things daily on the market. The future can be divined in signs of the present, while the firm fact that (real) money will always be useful allows for the future to be otherwise completely open, populated only by sheer possibility that one need barely adumbrate.

Once we realize that our selves are possible, not actual, our energies will be devoted to the creation of plausible possibilities and spaces where implausible ones can be safely engaged; even more, our assessment of institutions will turn on our assessment of their ability to enhance our creation of possibilities. One’s own economic possibilities—and more and more professions—will focus on creating possibilities for others—helping others be imagined as they imagine themselves being imagined. PR will become the queen of the sciences. If you want to construct a representation that will have effects on a particular audience in a particular way, you must study the desires and habits of that audience; even more, you must treat those desires and habits as malleable, within limits. You will game it out—someone who says x will be likely to want to hear or see y; someone who does x everyday will be happy to be given a chance to do y; someone who has bought a, b and c will like something like d (note that in each case there is no reason to assume that the audience actually wants, or has ever imagined, the y or d in question—the marketer is filling an imagined gap in their experience, a gap opened by the inquiry itself). Already, more and more selling of products involves selling such simulated images, filling such gaps, and telling the consumer of the gap and that it is being filled. This is objectionable from various enlightenment and romantic perspectives assuming the uniqueness of the individual and the integrity of the thought process, but if we set those objections aside we can see that a mode of “critical” thought and “high” culture is already implicit in this very model: opening up new spaces or gaps between the normalized experiences and those experiences which yet lie immanent in them.
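
The "bought a, b and c, will like something like d" inference above can be read as simple item co-occurrence mining. A minimal sketch of that reading in Python, with the purchase histories invented for illustration (this is not any real marketer's system, just the gap-filling logic made explicit):

```python
from collections import Counter

# Invented purchase histories; each set is one customer's basket.
baskets = [
    {"a", "b", "c", "d"},
    {"a", "b", "d"},
    {"b", "c", "e"},
    {"a", "c", "d"},
]

def suggest(owned, baskets, k=1):
    """Score items by how often they co-occur with what the
    customer already owns, and return the top k suggestions."""
    scores = Counter()
    for basket in baskets:
        overlap = len(owned & basket)
        for item in basket - owned:
            scores[item] += overlap  # the imagined 'gap' being filled
    return [item for item, _ in scores.most_common(k)]

print(suggest({"a", "b", "c"}, baskets))  # -> ['d']
```

The suggested 'd' is not something the customer has asked for or imagined; it is the model's projected gap in their experience, which is precisely the point made above.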

Finally, this turn toward the trivial, or a continual lowering of the threshold of significance (more things becoming less significant), would lead to a very strong desire to reduce violence. We already see increasing distaste for sustained confrontations and enmities—maybe they require too much memory. There is a preference for constructing defenses that make confrontation unnecessary. The free and more advanced societies will be able to create sophisticated defenses that make them impregnable vis-à-vis the failed, resentful societies surrounding them, while sparing them the pangs of white guilt involved in retaliation—Israel’s missile defense, which costs far more than an all-out war to destroy their Palestinian enemies would, is an obvious example here. A premium will be put on keeping people out, except under precisely defined circumstances—once someone is in, you need to deal with them, so a lot of intellectual energy will be invested in determining who can be let in. If governments don’t defend their borders, communities and businesses will do their job; and people will shape themselves so as to be acceptable members of the communities they wish to enter (as I suggested earlier, much business will be generated in helping them to do so). These strategies of avoidance might impoverish social interactions by ruling out a wide range of possibilities from the start; but they might enrich the relations that remain by making them more meaningful in the literal sense: all signs among those who have been properly vetted, and who therefore already give forth much information, would come to contain layers of significance.

And, anyway, no one will remember what they are missing—there will be students of history, but I think the idea that there are lessons to be learned from it will disappear, and rightfully so because history is nothing but the history of struggles to own the asymmetrical gift economy centered on the state, and patatiquity can only come into being by putting all that aside.

November 26, 2012

Victimocracy

Filed under: GA — adam @ 8:05 pm

For a while I have tried to figure out how to define Barack Obama politically. “Socialist” is not quite right—he and his party are much more likely to coopt corporations than to take over ownership (and responsibility) for them. But he’s not a typical European social democrat either—how could he be, given that he only barely includes the industrial working class as part of his coalition? He’s not a New Deal liberal, or even a McGovernite—he’s not just to the far left on particular American concerns like racial justice, individual liberty, civil rights, social welfare, etc.—he is too interested in seemingly odd cultural issues, like sticking it to the Catholic church, gay marriage and defending Mohammed from “defamation.”

The problem is that Obama is ushering in a new form of rule, which we can call “victimocracy.” Rights, under this regime, are defined by one’s claim to victimization, or by having oneself deemed an honorary victim, and legitimate arguments are those which defend some approved victim (not, for example, Coptic Christians in Egypt) against an officially designated oppressor group. The “race, class and gender” mantra that has been parodied for decades now as a symbol of the excesses of the academic left has been completely and unironically mainstreamed—indeed, the President’s successful re-election campaign was run according to that template and had no other content. The victimocrat regime is currently holding in jail Nakoula Basseley Nakoula, a man who has committed no crime, for the sole purpose of committing the US to a victimary narrative of the Islamic war against the West—well, against everyone other than them. An organ of the mainstream left, the Washington Post, has gotten on board with the “argument” that all criticism of Susan Rice, the UN Ambassador and possibly the next Secretary of State, is racist and sexist, perhaps unless proved otherwise, and that, by extension, the views of white men (especially from the South, that land of quintessential white maleness) are a priori discredited. “White male” (or “old white male”) is code for conservative—George Soros and Warren Buffett have exemptions from white maleness, because they have (I hope I am remembering the phrase correctly) “renounced their privilege.” But the code is interesting as, unlike other demonizations, like “bourgeois,” or even “Jew,” this one takes in the whole of what has been taken as normal and normative in our social order.

This seems to be the “revenge” that Obama offered his voters and, in truth, it might, for a while, provide for a very focused and consistent style of governance. Fiscal policy (to which groups and industries to direct loans, bailouts and subsidies), foreign policy (do I need to specify?) and law enforcement can all easily be run victimocratically. It might be a stable and somewhat less than totalitarian rule—the government need only appoint (many are already in place) official guardians of the interests of each and every designated victim group. As for who will guard the guardians?—asking such a question, I imagine, would just be a sign that your white maleness is showing.

Well, I’m working on another post now, and I just wanted to unveil this new category for the political scientists to mull over. I would assume that on some level, the American people have come to realize that victimary discourse must be allowed to play itself out until the end, which may or may not match the 70 years it took Communism. There is no resisting victimocracy—in the name of what—equal rights? Patriotism? Social peace? American interests? Prosperity?—only the white males/whales of the left’s Ahabist imaginary could possibly imagine that any of these categories contain other than victimary content. Whose interest? Whose prosperity? Whose peace (sans justice, no less)? Etc. In the end it must all crash, but in the meantime and in the aftermath, there is only one plausible response (actually, it’s just the least implausible): exodus.

November 9, 2012

After Democracy

Filed under: GA — adam @ 7:29 pm

If the determinist hypothesis were true, and adequately accounted for the actual world, there is a clear sense in which… the notion of human responsibility, as ordinarily understood, would no longer apply to any actual, but only to imaginary or conceivable, states of affairs. I do not here wish to say that determinism is necessarily false, only that we neither speak nor think as if it could be true, and that it is difficult, and perhaps beyond our normal powers, to conceive what our picture of the world would be if we seriously believed it; so that to speak… as if one might… accept the determinist hypothesis, and yet to continue to think and speak much as we do at present, is to breed intellectual confusion. If the belief in freedom—which rests on the assumption that human beings do occasionally choose, and that their choices are not wholly accounted for by the kind of causal explanations which are accepted in, say, physics or biology—if this is a necessary illusion, it is so deep and so pervasive that it is not felt as such. No doubt we can try to convince ourselves that we are systematically deluded; but unless we attempt to think out the implications of this possibility, and to alter our modes of thought and speech accordingly, this hypothesis remains hollow; that is, we find it impracticable even to entertain it seriously, if our behavior is to be taken as evidence of what we can and what we cannot bring ourselves to believe or suppose not merely in theory but in practice… it is not much easier to begin to think out in real terms, to which behavior and speech would correspond, what the universe of the genuine determinist would be like, than to think out, with the minimum of indispensable concrete detail… what it would be like to be in a timeless world, or one with a seventeen-dimensional space. Let those who doubt this try for themselves; the symbols with which we think will hardly lend themselves to the experiment; they, in their turn, are too deeply involved in our normal view of the world, allowing for every difference of period and clime, to be capable of so violent a break. (Isaiah Berlin, Four Essays on Liberty, 71-72)

Gertrude Stein mentioned that she likes having habits, but she’s not a utopian because she doesn’t like other people talking about her habits. This seems to me a better starting place for inquiring into basic human rights than those grounded in either natural law (God-given rights based on the divine image in each of us and contingent upon the use of the protected liberties to serve God) or natural right (the most basic right to protect oneself, as a lone, rational beast, against threats to one’s life). First of all, Stein’s observation is just as universal as those of natural law or natural right; second, it doesn’t require belief in the utopian fictions of a divine image or a lone, pre-social proto-human. We all have habits, regardless of “period and clime,” and it is a fact well worth noting—animals certainly have repeated patterns of behavior, but habits shape a human reality. Through our habits we carve out a space; you could probably learn more about a person through sustained study of their habits than through sustained exposure to their speech, much less a recitation of their beliefs; indeed, one’s speech is itself a set of habits, replete with variations on widely shared formulas, chunks and grammatical constructions, accent, intonation, gesture and so on; and beliefs are just more specialized habits of speech, the way we answer certain kinds of questions when others need to know whether to include or trust us. And we do observe each other’s habits, with the same range of deliberate and focused to deeply unconscious attention as constitutes the habits themselves. Habits are, on one level, private rituals, the creation of sacred spaces; on another level they are the internalization of the complex set of traumas (which habits apotropaically ward off) and moments of ecstatic bliss (which habits seek to recall in the manner of a cargo cult) which shape us all. Habits range from the highly intimate, even shameful, to the broadly public and contagious. We all like having habits—the notion of a free, rational individual on the Enlightenment model is utopian insofar as we would have to imagine beings without habits, habits which we can only with great effort wrench ourselves out of by giving ourselves repeated imperatives to construct practices which directly and usually painfully counter some habit until the point where the new practice becomes a habit itself. The “symbols with which we think will hardly lend themselves to the experiment” of imagining a single individual freely and rationally making it through a single day, or even a single hour, rational choice by rational choice.

Even more, none of us likes others speaking about our habits—or, at least, we would each get to our own point where direct reference to and examination of our habits would generate enormous, even panicky, resistance. That is, most of us (who knows, maybe Stein as well) would have little problem with playful satirizing of our insistence on a particular dish being made just right, or our over-reliance upon a particular expression, our “addiction” to some TV show or (in more intimate relations) lovemaking script—but it will not take very long before such probing makes further conversation simply impossible. Even more unbearable is other people talking amongst themselves about our habits, even more if their talk involves reforming our habits, even more if such reformation is to be carried out insidiously, by working on those habits themselves, and, most of all, if it is to be attempted on a large scale, by authorized pseudo-expert elites, with the aim of making us fit into some scheme of social betterment. And that is the essence of utopianism, along with the source of the social determinisms which, as Berlin notes, are impossible to imagine or live by while being real enough to wreck entire societies and hundreds of millions of lives during the 20th century. To plan a utopia you need exact knowledge of the human material you need to rearrange, knowledge of what has made it what it is and how it can be remade. For making the revolution, vaguer knowledge of how large social masses move in response to certain events and processes may suffice; to sustain the revolution once made you need knowledge of habits. Knowledge which you can never have, because habits will evolve in response to your attempt to track and re-train them.

Berlin’s (and not only his) critique of determinism and its link to totalitarianism is well known, and so, thanks especially to George Orwell, is his insight into the need for deterministic totalitarian movements to directly assault the common language shared by humans. But I don’t know of anyone who has grounded that common language in habit, or stated the corollary that not liking others’ talking about your habits is a basic, let’s say the basic, human right. Now, habits, like language, change, in superficial and more wide-ranging ways. But that you have a right to interrupt others when they speak about your habits wouldn’t change. And, unlike abstract rights to life, speech, property, religion, etc., which tacitly, and ultimately fantastically, presuppose some third agency who will be there to prevent someone else from taking your property, burning down your house of worship, threatening your life if you don’t shut up or, for that matter, just taking your life already, the right not to like others talking about your habits presupposes something much more realistic: people will, after all, talk about your habits, but you won’t like it, and the only way that talking about your habits can continue despite your interruptions is by shutting you up or by your shutting up. There are all kinds of ways of shutting up and being shut up: establishing an independent board empowered to determine whether, say, allocating resources to treat diseases characteristic of a particular demographic with identifiable life-styles is cost-beneficial is a way of shutting you up. And that marks such a board as utopian, which means that we can’t imagine, in ordinary language, the world that would match its deliberations any more than we could imagine a world with “seventeen dimensions.” The right to not like others talking about your habits doesn’t and can’t mean that some super-agency will prevent that board, established by some hypothetical health care law, from doing a cost-benefit analysis of your habits—it just means that you don’t recognize a political world in which that happens as anything other than a violent imposition on you. What that means practically is as hard to say as what it means to insist upon free speech rights under a tyrannical regime, but your defense of your right not to like others talking about your habits (and it’s a right that can only exist in its defense) would be speaking, and continually learning to speak, and learning how to speak only with others in a language presupposing freedom and responsibility and, to add to Berlin’s analysis, idiosyncrasy and mistakenness. Obviously, I could not consider giving a set of speech rules for freedom, but the reason why that can’t be done is rather interesting—as soon as anyone were to say that anyone who speaks in such and such a way speaks in a way inimical to freedom, we would realize that someone speaking in that way in order to parody it would be speaking in a way that epitomizes freedom, and that one could never establish meta-rules for distinguishing one way of speaking from the other. This is just to say that the margin of freedom lies in the possibility that one might be mistaken—another might take my parody of totalitarian speech as threatening, or vice versa. The way that margin works in ordinary language is to open up language onto not so much the abyss post-structuralism liked to invoke as a different rail.
Habits of speech are idioms, and all habits require ongoing tending, because habits are intimately dependent upon some parts of the environment while being highly resistant to other parts, and there can’t be a general theory that will determine which is which for any particular habit, and environments are always changing—language goes off one rail and onto another when idioms are dislocated and there is a discrepancy among the interlocutors over which presuppositions must be true for a particular statement to be understood. You say you really need a cup of coffee, and the context and everything I know about you leads me to assume that you are struggling with your attempt to give up caffeine, while you are in fact mimicking what I would expect you to say and thereby signaling your transcendence of that craving and foiling of my expectations. You can triumphantly laugh off my gesture of sympathy. I can then join in your laughter or be offended because the joke seems to be on me, while if I join in you can leave it at that or tease me for my gullibility, and if I take offense you can try to appease me or get offended in turn by my elevation of my own vanity over your life-changing accomplishment. And so on—that is the rule for speaking freely and responsibly: each meeting of intersecting habits opens up ever ramifying binary choices, each of which is ultimately whether to more fully engage the scene and continue the ramifications, on the one hand, or to withdraw and put them to an end, on the other. That is normal speech, what human beings do, and what political and cultural theorists can do is expand our sense of the possible bifurcations by pointing out where habits lead anyone to see a straight, pre-determined path instead. When you make visible anyone’s habits in this way they won’t like it, which is their right, but attentiveness to the ethics and esthetics of the situation will make it possible for their resentment to be complemented by gratitude; if you are really attentive, even their expression of resentment will get incorporated into a learning habit, and once speaking about your habits becomes a habit then you won’t want to give that up. And once we get the habit of not liking others speaking about our habits of speaking about our habits, then all the other rights—speech, property, religion and so on—rights which presuppose an individualized world worthy of protection—will come firmly into place.

This is all after democracy because democracy, with its ever growing pantheon of rights, is utopian. Once the rage against hierarchies begins, it will race right past the hierarchies you happen to dislike and not stop until it’s attacking the hierarchies you are unknowingly implicated in—in the end, there is no criterion other than the appropriation of signs of antinomic agency, also known as “cool.” The endpoint of liberal democracy is that the satisfaction of the rights of one requires a great deal of talking about the habits of everyone. Putatively racist, sexist and homophobic impulses need to be programmed out of the population, but what counts as racist, sexist or homophobic is resistance or even indifference to the need for programming. We are all collectively responsible for everyone’s health, so we must all be concerned with everyone’s taking care and being insured. It isn’t often noticed how much can’t be discussed within liberal democracy, and it would be hard to tell how much that we can’t discuss goes unnoticed—and more will go unnoticed, since that of which we can’t speak can’t be noticed. The most obvious example to me is gender difference—everyone knows that there are differences between men and women, important and trivial, always with plenty of exceptions, capable of misunderstanding and misuse, but undeniably there—yet, except for the pervasive, smug, maternalistic assumption of female superiority spread through the advertising and entertainment industries, it is virtually impossible to discuss them, especially in mixed company. And this in spite of the fact that the topic is very interesting in sheer intellectual terms (especially given women’s now extensive participation in all areas of social life, which reveals all these differences in very diverse ways as well as removing the issue of distorting, imposed social roles from the equation) as well as being of vital interest in everyone’s personal life. Equally obvious is the taboo on discussing racial differences, which genetic science is sure to disclose in years to come, probably in ways that bear little resemblance to the stereotypes that terrify us. In short, any expression that might by any chain of events conjurable by the imagination bear upon anyone’s exercise of their rights is beyond discussion. Also unspeakable is the fact that immigrants to the US have always brought the socialism and cronyism of their native countries and implanted them here—not just the Hispanics today but the Italians and Jews of yesteryear—the only difference is that previously the culture of Anglo-Saxon liberty was robust enough to contain the damage for a while and allow for at least a sizable proportion of those groups to expend their energies more productively. Anyone saying such things now would find himself banished to the far reaches of crankdom. What can be discussed are the supposed pathologies inimical to rights, and what can be analyzed and dissected are the bearers of those pathologies—white racists, male sexists, heterosexual homophobes, Christian “haters,” the 1%, etc.

Utopianism breeds irreconcilable contradictions and makes it impossible to discuss them: the rights of seniors to free health care and undiminished pensions, health care coverage for everyone, increased spending on education, increased workplace, financial and environmental regulation, promises of jobs or some type of perpetual support for all can’t all continue forever, and it was fascinating to see the Republican candidate for President this year go out of his way to avoid giving the impression that he would infringe upon any of these “rights.” The left can at least claim that government will give all this to you—since they want to run the government, at least you can imagine them doing it. Conservatives, though, have to make the more counter-intuitive and equally utopian claim that the free market will provide all of these goodies, and it’s easy to see why people would be skeptical, since on some level they know that getting these things on their own on the free market requires hard work and risk taking, and even then can’t be guaranteed. To say that no other way of life is remotely plausible than one in which our habits brush up against the rough edges of others’ habits, that a social order in which some rough relation between desert and outcome is discernible is the only endurable reality, is to mark oneself as consumed with hatred for those who, according to your own model, will fall short in some respects. It’s at least as bad when it comes to foreign policy, but why bother going into that, since the U.S. will not have any coherent foreign policy for some time to come. The point is that the most basic observations about undeniable social realities are unspeakable, and any hint of them simply calls forth a barrage of aspersions on the habits of those who make the observations. Liberal democracy is not totalitarian, so we can of course speak about such things amongst ourselves (as long as we carefully choose “ourselves”), and write about them in marginal arenas like this blog. But the safeguarding of public discourse from such discussions is made ever more complete.

This utopianism cannot be attacked from within, so we will have to wait until it collapses, due to the contradictions I just pointed out and even more profound and unspeakable reasons like the demographic ones often discussed by Mark Steyn (an aging population, a declining birthrate, the need to import foreign workers to support increasing benefits that those workers have no reason to expect to see themselves, and therefore no reason to work to support for long—and, yes, Steyn is a fairly well known writer and so these ideas are not censored, but outside of conservative circles Steyn is demonized as a racist, war-mongering madman, to the extent that he is taken note of at all). When it does collapse, I think that “not liking others talking about my habits” or something close to it (I wouldn’t quibble over the wording) will provide a “remnant” with a way of restoring normal speech to public and even private discourse. There is no better way of refreshing one’s relation to reality than committing oneself to recognize, work around, subtly reshape, occasionally comment on and refrain from too openly examining others’ habits. And this can only happen publicly when people join together by explicit agreement to accomplish something, where they all have something to gain and something to lose, where the success of the work depends on them alone, and where they therefore allot to everyone specific responsibilities and rights while developing an “oral law” or set of idioms (normal, infinitely ramifying speech, sedimenting tacit agreements in the explicit ones) that keeps the project open. And that can only happen when enough people accept, to quote Gertrude Stein again, that “the most important thing is knowing what is your business and what is not your business.”

For now, though, it is at least possible to stop speaking democratese, predicated upon the supposedly compatible faiths in the individual and the people and the concomitant hatred of everything undermining such faith. A good starting point is the default assumption that no one can have any idea of what another might be capable of, for good or evil; but each person right away puts forth signs that break that unknown “anything” down into two broad possibilities, to which one can tentatively assign probabilities; each new engagement leads to adjustments in the probabilities and further bifurcations within the original possibilities. Your own habits and idioms converge and diverge with theirs in particular ways, as you read your habits and idioms off their response to you and you assume they are doing the same—this is the establishment of joint attention. Entering any discourse involves alienating one’s desires and resentments; otherwise every conversation would involve each interlocutor simply listing things he/she loved and hated. The problem with democratese is that it cuts off the pathways from desires and resentments, which, cumulatively, are habits and idioms, to the shared public discourse. For some people this is certainly liberating—rights talk, like theory talk for academics, can be very empowering. Others see too many of the things they might like to say, or hear said, denied utterance. If you are one of those “others,” then I would propose taking sides with the lesser probability at each bifurcation, because if you inculcate that pataphysical habit you will at the very least create a lot of new ways of having your habits and idioms intersect with others’.
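Purely as an illustration—the post proposes no formalism, so the hypothesis names, numbers and update rule below are my own hypothetical inventions—the process of tentative assignment, adjustment and bifurcation just described can be sketched as a toy belief-updating routine in Python:

# A toy sketch of reading a new acquaintance by bifurcating probabilities.
# Entirely hypothetical: nothing here is prescribed by the post itself.

def normalize(beliefs):
    """Rescale the tentative probabilities so they sum to 1."""
    total = sum(beliefs.values())
    return {h: p / total for h, p in beliefs.items()}

def update(beliefs, likelihoods):
    """Adjust each possibility by how well it predicted the latest engagement."""
    return normalize({h: p * likelihoods.get(h, 1.0) for h, p in beliefs.items()})

def bifurcate(beliefs, branch, split):
    """Break one possibility down into finer sub-possibilities, conserving its mass."""
    mass = beliefs.pop(branch)
    for sub, share in split.items():
        beliefs[branch + "/" + sub] = mass * share
    return beliefs

# Start from near-total ignorance: the unknown "anything" broken into two broad possibilities.
beliefs = {"good": 0.5, "evil": 0.5}

# A first engagement: the other's signs fit "good" somewhat better.
beliefs = update(beliefs, {"good": 0.7, "evil": 0.3})

# A later engagement bifurcates "good" into finer readings of their habits.
beliefs = bifurcate(beliefs, "good", {"reliable": 0.6, "merely polite": 0.4})

# The "pataphysical habit" proposed above: take sides with the lesser probability.
underdog = min(beliefs, key=beliefs.get)
print(beliefs, underdog)

The numbers are arbitrary; the point is only the shape of the process—tentative assignment, adjustment with each engagement, and ever finer bifurcation within the original possibilities.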

In more straightforward political terms, that means looking for the signs of secession and nullification—for the free associations liberal democracy can’t digest. Obtaining exemptions and waivers for alternative markets and even currencies will be the most worthwhile expenditure of political energy in the coming years. Instead of speaking democratese, it is possible to push back the frontiers of the state by imagining private ways of performing many, most, eventually all state functions, until references to statist abstractions are resolved into descriptions of possible yet denied agreements, tacit and explicit.

October 30, 2012

Jewish GA

Filed under: GA — adam @ 8:04 pm

The Jewish revelation establishes the principle that all humans are created equal because and insofar as they are created in God’s image. We are all like God because God is equidistant from all of us and from Himself: the “I am that I am/I will be what I will be” removes from God the self-sameness that would enable us to figure out how to get Him to do what we want Him to do. But God did make a covenant with a people, in which He promised to protect them and have them prosper if they would obey His law. Judaism has an account of how that came to be. The immediate presence of God to man, from Adam and Eve to the generation of the flood, was a failure, as God Himself acknowledged. God cannot walk among men: His presence becomes the source of murderous envy, and of tyrannical and mob-like emulation, until God is finally utterly repudiated and the world filled with violence, necessitating the destruction of His creation. I imagine that we see here a history of humanity subsequent to the emergence of the Big Man, the patriarch, the tribal leader and ultimately the emperor, which would have been the first forms in which the human being was divinized. Maybe Christianity’s re-divinization of man is predicated upon the judgment that Judaism had not settled that question in its absolute separation of God and man. So, we could ask whether Judaism might, in fact, have settled it, if only there had been sufficient patience and ingenuity to spread its more rigorous word of God throughout the world; or whether Christianity re-opened the question only to make it worse; or whether both have failed, or reached their limits, which is to say have not exhausted the possible models through which we can think the originary scene. The means by which God proposes the separation of God from humanity is the covenant that founds a people governed under the law. Already subsequent to the flood this approach had been introduced, as God imposed a code of seven laws on Noah and his descendants (the “Noachide code”) as a means of distancing the divine from the human while leaving the latter with an “image” of the former (God, then, is conceived less as a source of sustenance than as the origin of our capacity to covenant with each other and live under publicly shared laws). At any rate, we end up with the paradox that Eric Gans and others have noticed: the universal principle of human equality under God can only be proclaimed and instantiated in a single, small, weak people, in the midst of imperial orders predicated upon man creating himself as God. Judaism later (most powerfully in the thought of Maimonides) devised a perfectly plausible theory of how this paradox was to be resolved: the Jews, by living a well-ordered and holy life in observance of God’s law, would be a “light unto the nations,” living propaganda for the superiority of living under God’s word over the merely man-made laws of other nations. Perhaps some peoples would convert to Judaism, but for the most part one could readily imagine other peoples treating the Bible’s history of humanity as their own and revising the law it provides according to their own national peculiarities and, finally, their own “oral law.” But this seems like a recipe for anti-Semitism, doesn’t it?
The Jews would still be at the center of this monotheistic world, they would still be responsible for preserving and exemplifying the law, any success they enjoyed would be taken as a sign that they were still God’s privileged children, and any catastrophes suffered by others would be blamed on the false promises of the Jews. In other words, we get the same problem of God’s too-present presence, because Judaism cannot shed the signs of its birth in a world of violent God-men. It can’t be originary enough. As Eric Gans argued (or I, at least, took him to be arguing) in Chronicle 432, we should have some respect for our egalitarian ancestors, and not only because of their blissful lack of knowledge of fixed hierarchies but because of their richer and more diverse experience of the divine. All contemporary attempts, most egregiously through the UN, to create a global law under which we could all live as equals can only lead to monstrous tyrannies or, perhaps, disorder. Even national systems of equality under the law are seriously fraying, as no one has yet found a way to reconcile the twinned freedoms of economic initiative and of generating resentment towards the results of that first freedom. We then end up with endless, fruitless and acrimonious disputes over the real “meaning” of the law, our founding principles, freedom and equality and so on. The only spaces in which we can be free and equal are those where we multiply endlessly, rather than withhold, the names of God. I don’t expect colloquies with insects and mountains, or that every spring will have its own nymph, or that we will “Have sight of Proteus rising from the sea; / Or hear old Triton blow his wreathed horn.” The gods are already separate from humanity under the original egalitarian order—the entire problem posed by the Bible doesn’t exist in equality under the sign rather than under the law. The gods are in language—any possibly meaningful utterance at least slows down our rush to appropriation, enough to make sense before proceeding. We can always pry open utterances and the materials of language so as to hear from the divine—struggle with a mistake, pretending that it makes sense, and, miraculously, it will; add or subtract a letter from every other word of a sentence and you create a new sentence, in an oddly revelatory dialogue or argument with the original; the same if you reverse elements of a sentence—the declarative sentence is still the name of God, albeit in infinite manifestations. And maybe this is Jewish after all—the Judaism, most famously of the Kabbalah but certainly going all the way back to the Rabbis and no doubt beyond, to the very beginnings of the divinization of God’s word through the alphabet—the Judaism which contends that God looked into the Torah and created the world out of its linguistic materials.
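Since the post specifies no exact procedure for these language games, here is a playful sketch—every interpretive choice (which letter to subtract, what counts as an “element”) is my own guess, meant only to show the kind of transformation involved:

# A playful sketch of the sentence games described above; the exact operations
# are hypothetical interpretations of the post's suggestions, not its method.

def drop_letter_from_every_other_word(sentence):
    """Subtract the last letter from every second word."""
    words = sentence.split()
    return " ".join(w[:-1] if i % 2 else w for i, w in enumerate(words))

def reverse_elements(sentence):
    """Reverse the order of the words (one reading of 'elements')."""
    return " ".join(reversed(sentence.split()))

original = "The declarative sentence is still the name of God"
print(drop_letter_from_every_other_word(original))
print(reverse_elements(original))

Each output then stands in that oddly revelatory dialogue with the original sentence.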
