GABlog: Generative Anthropology in the Public Sphere

May 22, 2011

Health Care

Filed under: GA — adam @ 7:04 pm

Health care, as we speak about it today, is a completely modern phenomenon. Hippocrates aside, if you go back maybe 150 years, doctors had no effect on their patients: your chances of recovery if you did see a doctor were identical to your chances if you didn’t. “Health care,” or the medical profession, emerges along with modern science and the application of the sciences to everyday life in the forms of hygiene and nutrition. And medicine has been a rich source of tropes for the framing of modern dilemmas, as recognized by the very widespread claim that, in our thinking about moral, political and ethical issues, “therapy,” and the associated categories of “healthy/sick,” “normal/pathological,” etc., has displaced notions of sin and guilt, good and evil.

It makes perfect sense, then, that progressive politics has always seen the incorporation of health care into the cradle-to-grave welfare system of the modern state as the jewel in the crown of the expert-centered organization of life central to such politics. The nationalization of health care makes state power potentially unlimited: not only do directly medical issues, involving coverage, treatment, price of medical services, training of practitioners, research and innovation, etc., come within reach, but all questions indirectly bearing upon health do as well. And which questions don’t bear indirectly upon health? Whether it’s what parents tell their children about homosexuality, the hamburger you had for dinner, or the availability of birth control and, increasingly, social situations such as bullying, shyness, etc.—all affect health, all impose potential costs on the system, all sprout new forms of expertise and regulation. To use a medical metaphor, whether health care is centralized or decentralized is a life or death question for the free society.

The libertarian answer, to privatize medicine and insurance and render them sets of voluntary exchanges, is good as far as it goes. Libertarians rightly argue that what we call health insurance today is not really insurance in any meaningful sense—it is simply a way of pooling costs in government-mandated ways, and in ways that make the real costs of medical procedures inscrutable. Health insurance should be like car or home insurance: a premium in exchange for coverage for specified health care needs. But this analogy is limited—the sum total of bad things that can happen to your car or house is known in advance: if your house is worth $300,000, then the insurance company knows that no catastrophe can exceed that. But there is no such ceiling when it comes to your body—if your insurance company agrees to cover “cancer treatments,” must that include a decade of increasingly expensive treatments with ever-diminishing effect? Who decides? A court—according to what criteria? The doctor—which one? It seems that at some point, some irreconcilable disagreement between the parties is very likely, generating enormous resentment and terror as our media-saturated society is flooded with images of beloved parents and grandparents cut off from their treatments either by evil insurance companies or by daughters and sons afraid of going broke. Politics is sure to channel such resentments, compromising the independence of any arbiters of insurance contracts.

Such a system could only work if a significant majority of the members of society could openly accept the basic unfairness of life chances and death. We would have to be able to look on, with equanimity, as insurance companies withdraw support from dying patients, including those we love and ultimately ourselves; as grown children decide that funding their children’s education is more important than a few more years of life for their own parents, etc. And, of course, such equanimity would have to coincide with an acute awareness of the unprecedented character of all this, including the heart-wrenching possibility that a few more years might have lessened or even eliminated your particular dilemma. We don’t have to go back further than the lifetimes of many living today to recall when “health care” involved very few decisions, and certainly not the impossible ethical ones we are constantly confronted with today: you accepted your fate, you made people comfortable as they accepted the inevitable. Even as some reliable treatments became widespread and childhood mortality almost eliminated, aging, sickness and death still provided the proverbial contours of our existence—the problem is, they still do.

Here, it seems to me that the much maligned (especially by conservatives) “therapeutic culture” might come to our aid. Despite the vituperation and ridicule heaped upon the therapeutic, is there any reason to assume that the distinction between, say, “good” and “evil” is any more originary than that between “healthy” and “sick”? If we take the most basic distinction to be the one distinguishing sacred from profane, why is that distinction more adequately modeled on one binary rather than the other? They are just different ways of framing the more inclusive distinction between whole and rent—integrity vs. corruption, working vs. impaired, fixed vs. broken, etc., being other versions. To be healthy is to be whole, to retain one’s integrity, to be articulated, symmetrical—all are near synonyms for wholeness, which means to have a formal reality embodied in your physical one—just like the central object once we have all pointed to it and agreed to let it be.

The therapeutic culture, by way of its victimary turn, has also created our ability to, it seems, confer healthiness upon ourselves and each other. Perhaps the one product of the victimary culture that deserves to survive is our sensitivity to the ways we describe “disabilities” (I, like I suspect most of us, cringe upon hearing—or remembering hearing, since you never do anymore—an older one, the unmarked term of my parents’ generation—like “crippled,” much less the brutal terms for mental disability: moron, idiot, even “retarded,” the more humane replacement for the preceding, which is currently the object of a vigorous campaign across college campuses to proscribe “the ‘R’ word”). It is really marvelous to see what people confined to wheelchairs (and the blind and deaf) are often able to do now, and our Gnostic, often cloying insistence that they can do it has certainly supplemented the prodigious technological innovations we must credit. We have also seen the emergence of an entire culture concerned with ways of coming to terms with disease, decline and death and the ability to turn, once all resources have been exhausted, from attributing responsibility to others (the doctor, the insurance company, the hospital, the state…) to simply seeing to the integrity and dignity of the patient and her loved ones. There is, we might say, a “healthy” way to finally let go.

The individualization of the sickening, recovering and dying processes thus introduced will not only guarantee our constant chafing at the restrictions and cookie-cutter categories of homogenized health care systems but further facilitate another process which I believe is inevitable, indeed, already well underway: the pluralization of therapies. Why shouldn’t the government or insurance company pay for, say, Native American cures? Because they haven’t been scientifically verified? You would have to have a very naïve faith in public confidence in the modern cult of professionalism and expertise to imagine that answer will hold the fort for long. There will be more and more things government and insurance companies will have to and can’t pay for—but, at least, it’s possible to imagine the emergence of insurance companies which cater to the eccentric and desperate. So, as government presence recedes, health care decisions will devolve to the individual, producing more flexible norms of expertise. Does someone really need 6 years of medical school, 10 years of internship and residency, to help me with my aching back or cough? I doubt it and, more importantly, more and more people will come to doubt it, especially when they are the ones weighing costs. In the end it will be obvious that our health care needs are better met in this more differentiated manner, and on the open market, with practitioners, inventors of medical technologies and promoters of new methods engaged in competition with a close eye on the actual costs of skills and procedures.

At the same time, such a process will generate, in the short term and perhaps longer, inequalities and mistakes that will seem monstrous to many. There will be plenty of cases of people purporting to fix backs breaking them, of con men hawking fake treatments without fear of the regulator or licensing board, of new, prohibitively expensive treatments conspicuously available for a while (a long enough while to count the dead resulting from “health apartheid”) to only the very wealthy. And the question for us, as a civilization, will be: can we abide that? Health problems, today, have come to be experienced less as “acts of God” or the inevitable workings of Nature than as a kind of violence, uniquely, unpredictably and terrifyingly directed at individuals, violence to which we are all ultimately equally vulnerable—violence from private and public greed and callousness (insurance companies, doctors driving Mercedes, companies pumping carcinogens into the environment, pencil-pushing bureaucrats putting rules over compassion, etc.). The demand for universal health care, or at least coverage, taps into a kind of originary terror. We would have to be able, to make ourselves whole, to suspend that attribution of violence, and learn to use our greater powers of physical healing as metaphors to enable healing of a more transcendent kind.

March 3, 2011

Madison, not Cairo

Filed under: GA — adam @ 6:49 pm

I’m much more interested in what is going on in Wisconsin than in the Middle East. The Middle East is the business of Middle Easterners now—America gave up its pretensions as a superpower, or leader of the Free World, or hegemon, or whatever, with the election of Barack Obama. Who knows—maybe it’s for the best. If we get serious at some point and start electing real rather than vanity presidents we’ll have to start from scratch in designating allies and enemies, and maybe we’ll be less bogged down by whatever balancing act the State Department thinks they’ve been performing for the past few decades. What will emerge will be either some unstable, very flawed, but perhaps workable Islamic Republics (with the emphasis probably swinging back and forth between noun and adjective); or straight out Islamic (or, much less likely, secular) totalitarian regimes; or failed states with genocidal militias settling scores. Totalitarian regimes can stagger along for a couple of decades and sooner or later the scores get settled and everyone gets tired, and things revert back to Islamic Republic or Islamic Republic (the emphasis falling on one word or the other). In the meantime, it’s likely that the empowered elements in the region will be less capable of waging serious war, certainly against us but even against Israel—although Israel certainly needs to be ready to go it alone, and to drop the rules imposed by the international media/human rights community, or the continuous simulated Nuremberg Trial directed at Israel. They’ll probably need to sell us oil anyway, and if they want to throw acid on their own faces to spite the Great Satan, maybe it will shock us into seriousness, i.e., drilling and building nuclear power plants.

Wisconsin, though—the fate of Western society is at stake there. Public employee unions have the government extract dues from their members; they use those dues to fund Democratic Party politicians who “negotiate” fat benefit packages with those same unions, packages which promise future bankruptcy, but only long after the politicians who gave away the store can be held accountable for it. Even more, in its most advanced form, public employee unionism creates little one-party states where, even if you elect budget-cutting, tax-cutting Republicans, the public employees have already been given the store, and can shut down the entire government in response to any attempt at breaking those promises. So, it’s better to keep electing Democrats, who at least might spread the largesse around. Even more: public employee unions have a monopoly within a monopoly—not only is there no competition for government-provided services, not only does the government not have to worry about making a profit, but now the government can’t even exercise its power as monopolist to impose reasonable terms upon its workforce. And there’s even more… But you get the point.

Now, though, after the disappointing collapse of Arnold Schwarzenegger’s governorship at the first serious battle with those same public employee unions, a series of Republican governors have emerged who do seem serious: Chris Christie in New Jersey, John Kasich in Ohio, and, somewhat more equivocally, Mitch Daniels in Indiana—and, of course, most prominently right now, Scott Walker of Wisconsin. Walker in particular is getting at the heart of the problem, and is negotiating not only the benefits themselves but also the conditions that make it possible for the public employees to hold their “employers” hostage. His bill is every bit as radical as his enemies say—it seems to me that it would have been enough to simply make the payment of union dues optional: that in itself would collapse the racket, and the Democrats’ money-laundering scheme. And he could have presented such a plan as a pure defense of the rights of individual union members, without mentioning collective bargaining at all. But I’m sure he and his fellow Republicans have their reasons for attacking the problem comprehensively, and it does allow for an all-out fight with everyone fully aware of the consequences.

The Left certainly understands the consequences, and they are enraged and desperate. If the Left can indeed be defeated decisively, even destroyed as a force in modern society, the path lies through Wisconsin. Take away public employee unions and you see a drastic decline in Democrat money and in left money more generally; you will see far more sparsely populated Leftist demonstrations; you will no longer see the intimidating enforcers at so many demonstrations of both the Left and Right; the Democrats start losing elections regularly, and then who gives them money or votes for them, since all their support is predicated upon their being in power and able to give some people other people’s (or imaginary) money? The outcome, in Wisconsin, let alone the rest of the country, is still in doubt—we are at the beginning of the beginning. But the survival of the market economy and its political order right now depends upon what happens in Madison a lot more than on what happens in Cairo.

January 14, 2011

Obama and Palin: Opposing Anthropologies

Filed under: GA — adam @ 8:47 am

Two speeches given the same day in response to the shootings in Tucson: one, by all accounts, brilliant, Presidential, conciliatory; the other, by most accounts, petty, small-minded and self-serving. And I don’t find too much to object to in President Obama’s platitudinous remarks. But in each speech there is a certain logical tension worth exploring. Obama says, in the line that has probably received the most attention:

“And if, as has been discussed in recent days, their death helps usher in more civility in our public discourse, let us remember it is not because a simple lack of civility caused this tragedy — it did not — but rather because only a more civil and honest public discourse can help us face up to the challenges of our nation in a way that would make them proud.”

A more civil and honest discourse—either civility and honesty are complementary (if not synonymous) or we would have to choose one over the other, in some cases. The context makes it clear, I think, that Obama would prefer civility, or a particular understanding of civility, over honesty:

“we can question each other’s ideas without questioning each other’s love of country and that our task, working together, is to constantly widen the circle of our concern so that we bequeath the American Dream to future generations.”

So, questions about others’ patriotism are declared out of bounds, even if we honestly come by them. Widening our circle of concern is to be preferred over, say, clarifying and performing more diligently our existing duties and obligations, even if we honestly believe that too many people have widened their circle of concern so as to infringe upon others’ rights to determine the boundaries of their own “circle.” We are to “expand our moral imaginations” and “sharpen our instincts for empathy,” even if we think it’s enough to be moral, love our friends and families, and follow, intelligently, the rules of a spontaneous market order.

President Obama wants to tell us how to think, feel, and act—we must “thrive together,” as the t-shirt distributed at the speech he gave exhorts. Sarah Palin, meanwhile, unequivocally chooses honesty over civility: “Public discourse and debate isn’t a sign of crisis, but of our enduring strength. It is part of why America is exceptional”:

“No one should be deterred from speaking up and speaking out in peaceful dissent, and we certainly must not be deterred by those who embrace evil and call it good. And we will not be stopped from celebrating the greatness of our country and our foundational freedoms by those who mock its greatness by being intolerant of differing opinion and seeking to muzzle dissent with shrill cries of imagined insults.”

This is an extremely defiant repudiation (or “refudiation,” if we like) of the entire left wing argument regarding the sources of violence such as we saw in Tucson, an argument affirmed in general by Obama even if he rejected, at least implicitly, the more obscene particulars that have dominated the media. Obama wants more speech rules, more guardrails; Palin wants more arguments, more debates, more primaries. Her only rule for “civility” is the founding liberal one: “we must condemn violence if our Republic is to endure.”

There are opposing anthropologies here. For Obama, speech and violence lie on a continuum, and only carefully composed and tightly monitored speech can be removed from a vicious circle of speech in which marking others in virtually any way initiates the descent into scapegoating itself. “Civility” is the name of the process by which elites do the monitoring. For Palin, speech, vigorous, unregulated, “passionate” speech, unafraid of being “mocked” by the guardians of “civility,” is the antidote to violence. Indeed, it may be that the more the speech draws upon metaphors from violence, the more it models the transcendence of violence: “As I said while campaigning for others last March in Arizona during a very heated primary race, ‘We know violence isn’t the answer. When we ‘take up our arms’, we’re talking about our vote’.”

Palin’s speech reaches its logical paradox in its reference to the assault or, as she says, “blood libel” directed against her:

“Acts of monstrous criminality stand on their own. They begin and end with the criminals who commit them, not collectively with all the citizens of a state, not with those who listen to talk radio, not with maps of swing districts used by both sides of the aisle, not with law-abiding citizens who respectfully exercise their First Amendment rights at campaign rallies, not with those who proudly voted in the last election…But, especially within hours of a tragedy unfolding, journalists and pundits should not manufacture a blood libel that serves only to incite the very hatred and violence they purport to condemn. That is reprehensible.”

Well now, how, one might ask, can Palin say that acts of violence begin and end in themselves while, in the very next breath, accusing her opponents of inciting violence? The answer may lie in some implicit distinction between “monstrous” and more “ordinary” forms of criminality, in which case Palin is making a very local point about this particular incident, not about the relation between words and deeds more generally. But that wouldn’t be very helpful—would she then be saying that the Left’s argument about uncivil discourse might hold in other cases? It may be that Palin doesn’t yet have a way of talking about what is central to the kinds of verbal attacks directed most recently against her, but which we can easily recognize as those of White Guilt. Palin, for the Left, represents all that is unmarked in American society—she must be marked. Her argumentative strategy is to recognize this marking, but doing so in the same terms her opponents are using leaves her open to the charge that she also sees “incivility” as a “danger,” and in that case is no better than the Left insofar as she defines “incivility” as partisan attacks against her. Why isn’t blaming the shootings in Tucson on Palin just as “metaphorical” and therefore harmless as asserting that you want citizens to be “armed and dangerous” when they confront their elected officials with facts and arguments? Why isn’t it even beneficial, and to be celebrated as any other discourse driven by what Palin calls our “imperfect passions”? In other words, Palin seems to be tempted (as I think she has been at other times) to play along with the Left’s massive inflation of the notion of “incitement” (along with “defamation” and other once strictly legal terms) which has culminated in contemporary hate speech laws.

At this point, the only answer is to look at her much-criticized use of the term “blood libel,” and realize that we have a simple question of fact here. Was it a blood libel, or not? If she was accused of having innocent blood on her hands, then there’s the line at which discourse threatens to pass over into violence, because accusing someone of thereby stepping outside of the boundaries of legality and non-violence does lead to the conclusion that only answering the guilty in kind can restore those boundaries. Scapegoating should not be criminalized, but it is wrong; and, Palin would implicitly be asserting, we can recognize it if we are being “honest” (although perhaps not if we are merely worried about being “civil”). Is someone really dishonest enough to say that calling Obama a “socialist,” or saying that the health care law is “job-killing,” or even questioning Obama’s place of birth or religion, does the same—that is, accuses him of having innocent blood on his hands? I’m sure the answer is “yes”—and that’s a good starting point for vigorous debate that would still eschew the “dueling pistols” Palin refers to in mocking the nostalgia for more civil days. In fact, focusing arguments on what counts as scapegoating, and striving for a minimal account of the same, would provide for an ongoing inquiry into, and performance of, “imperfect passion.”

Addendum, 1/15

It seems to me the concluding argument here can be clarified by applying the distinction between metaphor and reference to political discourse.  Whatever plausibility the argument against “heated” rhetoric has derives from the sense that violent metaphors (shooting, killing, targeting, blowing up, attacking, etc., etc.) in political speech have some correlation and, therefore, at least possible causal relationship to actual violence.  I can make my position simpler by saying, as I think is already implicit in my post, that I believe there is no such correlation, much less causation:  zero.  In fact, as I suggested as well, it is more likely that, as I think Palin implies, the relation can be reversed:  the transformation of words denoting violence into metaphors referring to political competition defers political violence, by making the political arena a richer and freer “combat zone.”  That is, you don’t need to step outside of it in order to express your “imperfect passions.” 

In that case, to return to the example I conclude with, the difference between holding Palin responsible for murder, and calling Obama a Muslim, is that the former makes a referential claim, one which could presumably be proven or disproved evidentially or through a demonstrable causal chain; the latter, meanwhile, as a question of faith and therefore, in American public discourse, an inherently “internal” and private issue, is subject to neither proof nor disproof. Therefore, however vicious the intention behind the claim, however much an attempt to make Obama appear the usurping alien, the claim that Obama is a Muslim functions more as a metaphor than an accusation. The only thing that would change if one were to make the metaphorical dimension explicit and say, for example, “it’s like we had a Muslim President,” or, as Rush Limbaugh already does, call him “Imam Obama,” would be a loss of the sense that he is concealing his true faith. But, while I am no expert in “Obama is a Muslim” political culture, it seems to me that this element, the years-long deception which would have to be involved, and which would make Obama’s Muslimness truly scandalous, never seems to be the emphasis. This is why the “charge” against Obama is subject to ideological revision in a way that the charge against Palin isn’t—one could say, how great it is that we have our first “Muslim President” (just as Clinton was our first “black President”) in a way that one could never say, “it’s great that Palin is a murderer.” (Indeed, other than of dishonesty, of what, exactly, would one be “accusing” Obama the Muslim?) So, aside from the extremely relevant fact that no major media outlet or elected official has made this claim (unlike the blood libel on Palin), the respective allegations are qualitatively different from one another. Even if Obama were a Muslim, or if he really wasn’t born in the U.S., the proper response would still be voting him out of office or, at most, impeaching him; if Palin has been inciting murder, and in a way that makes her untouchable legally, the commensurate responses are very different.

December 20, 2010

Language, Inquiry

Filed under: GA — adam @ 10:55 am

Only after reading Eric Gans’s recent Chronicle (#403, “Heuristic Necessity”) did it strike me how obviously relevant Gans’s definition of God, as that word whose signified and referent are indistinguishable, is to originary linguistics and grammar. But, then, since, as Gans also notes elsewhere, ultimately every word (but also, then, sentence, discourse, etc.) is the name of God, the indistinguishability of signified and referent is definitive or constitutive of meaning as such. This indistinguishability applies, in other words, to ostensivity—which is when our pointing to something, or directing attention to something, right here and now, is what we “mean”—and all meaning is ostensive insofar as it’s impossible to imagine gesturing, speaking or writing without wanting to direct someone’s attention from one’s gesture, speech or text to something else.

The distinguishability of signified and referent, then, poses the real problem. That distinction must have been necessary for sign use to have moved beyond the ostensive or to create the “portable” and “reassemblable” ostensives that we could say constitute semiosis. My solution to that problem explains, for me, why Gans’s definition of God didn’t connect with my linguistic and grammatical thinking until just now—I haven’t really been using the traditional linguistic terms of “signifier/signified” and “referent.” First, I worked on developing a way of talking about language drawn exclusively from the succession of emergences of the ostensive, imperative, interrogative and declarative speech forms; more recently, I have been trying out ways of using the notion of unmarked/marked derived from the Prague School of linguistics, and to assimilate that distinction to the more originary one of norm/mistake. But we can articulate these various distinctions through another one I have worked with on occasion: that between the exchange of signs between participants upon a scene and the exchange of signs between a participant upon one scene and another, a “stranger” to the scene to whom one presents the results of the scene: I call the first scene the scene of presencing and the other the scene of representation. The best example of the scene of representation would be summary, which generally serves the purpose of providing another what they need to know so as to save them the trouble of reading the text itself.

These two scenes (and a third, constitutive scene that articulates them) are folded into the originary scene itself, insofar as that scene must involve the initial forming of the sign and the iterated circulation of the formed sign, and it is this doubled scene that then mediates the transition to the sparagmos: in the sparagmos the imminently chaotic devouring of the object threatens to overwhelm the agreement established by the shared sign, a tendency which can be mitigated only by the repetition of the sign throughout the consumption of the object at every indication of overreach on the part of a neighbor. In this case, the participants in the sparagmos are essentially summarizing the scene to each other. It is this latter use of the sign that distinguishes the signified from the referent: the signified, in other words, is the sign in its capacity to articulate a scene by composing the elements of an emerging scene on the model of and out of the remains of previous scenes; the sign as referent directs one’s attention to an object upon a scene already in place. The signified is the object as deferring and powerful; the referent is the object as available for orderly appropriation.

When we “understand” each other, what happens, then, is that the signified and referent coincide for us—if I say “it’s late,” you understand me insofar as you acknowledge my reference to this particular lateness, the one that, say, conveys a shared sense that whatever we have been doing has proven to be more important than something else we had planned to do, and has therefore carried us away, resituated and redefined us, etc., and in a way we can realize right here and now. To the extent that signified and referent don’t coincide, our understandings are overlapping and, of course, that is always the case, even while the overlap implies a zone of coincidence. Your remark points out the “lateness” to me and then I might make a remark that shows I have observed that particular lateness and that establishes a site of “joint attention,” or “disciplinary inquiry,” or “presencing”—or, what I will try calling an “anythisness agreement.” At the same time the “lateness” is not quite the same, at exactly the same time, for both of us, but we now have some sense of how to identify that signified/referent, how to look for it, how to conjure it: the imperative steps in to supplement the ostensive, as the object now tells us how to go about tracking, appropriating and preserving it, and we tell the object to appear before us. These imperatives, emerging from the ostensive, become the rules, or grammar, of the object and the convergences and divergences of the object as signified and referent—rules are imperatives marked by ostensivity. Part of this grammar is the prolongation of the imperatives into interrogatives, as the object doesn’t appear as commanded, and we fail to obey the object, and as we command the object to renew its commands, and extend and mitigate our demands upon it, these interrogatives take on the declarative form of hypotheses—all declaratives, indeed, are intrinsically hypotheticals.

Now, I would like to overlay the vocabulary of “markedness” on this succession of speech forms. Just as I think Gans’s analysis of the succession of speech forms opens up so far unimagined areas of inquiry into grammar, it seems to me that setting the unmarked/marked distinction upon the originary scene helps us to use that distinction to tie together various cultural, semiotic, social and anthropological levels. The unmarked is what you attend from to the marked—it is the water the fish swims in, pervasive, normal, and unattended to. But the originary hypothesis shows, I think, that only through a very extreme form of marking could unmarkedness emerge: first of all, the central object is marked as desirable, as we attend to it from everyone else’s approach to it; this markedness then spreads to the other participants on the scene, as we attend from the object to these obstacles to our possession of it. Markedness, first of all, then, has the meaning of targeted, and targeted for destruction. In attending back and forth from object to rivals, the hesitation or gesture is put forth, and we now attend from the sign (unmarking the sign-giver, first of all oneself) to the object, from which we can now attend to the shared cessation constituting the scene, thereby unmarking the object. What is now marked is any break from “protocol,” that is, any slide from gesture back into grasping, including any mistake indicating such a slide—and as I suggested before, the transition back into appropriation is mediated by the “referential” sign, assuring each other that one is taking only one’s share and warning each other to do the same. Everyone can now attend from the sign to transgressions which confirm it, so the marked now becomes the abnormal, anomalous, transgressive, idiosyncratic. (This, by the way, is why the politics of White Guilt—indelibly marking the unmarked, as in “white male knowledge”—will have so many unanticipated consequences: the unmarking of “knowledge” and, indeed, of those European males who constructed the term along with it, saved us all from much worse forms of violence than is represented by the imposition of an unmarked, and extendible, “knowledge.” Once “knowledge,” “truth,” “justice,” “reality” and so on are irremediably marked we will find a catastrophic decline in our ability to talk about all kinds of things.) Subsequently, the unmarked/marked distinction can itself be unmarked, in this case de-escalated, so as to become a means of generating the distinctions needed by the signifying system—phonetic distinctions, word-type distinctions, tense distinctions, etc., etc. And this can all happen without the originary distinction being overturned—even now, as much as ever, being marked is being placed in some kind of danger.

The unmarked, or abstract sign, the “version” that has survived the norming process on the originary scene, and has received, so to speak, the full faith and backing of the central object, is the site for what I would call “everythatness agreements” upon the scene of representation: we move from “any,” or singularizing, to “every,” or spreading and eternalizing; and from “thisness,” or presence or firstness, to “thatness,” or reportage or secondness. Both dimensions are present whenever we use signs so as to make meaning, and they are present on the third, or constitutive scene, or semiotic use proper, where one or the other dimension is accentuated in constituting a field of semblances (the population of the world by object/signs). The third, or constitutive scene is where we use signs to make a difference by creating a new ostensive, and we do that by marking and then re-un-marking a particular use of an unmarked sign, thereby modifying the field of relationships between the marked and unmarked. To return to the succession of speech forms, we make meaning by turning an ostensive dimension of sign use into an imperative one (shifting register from an anticipated “I see” to “show me”), from imperative to interrogative (“look at this”—“where?”), and so on, or vice versa (treating declaratives as interrogatives and imperatives, etc.). In each case, one marks the unmarked, treating a portion of the scene assumed in any agreement or joint attention as defective, but ultimately not irreparable (even the most radical critique implies the possibility of some other scene, composed out of the elements—out of what else, after all?—of the present scene).

Markedness provides an excellent frame for conducting inquiry, not only because it allows us to travel from the phonetic way up to the highest cultural levels, but because it combines invariance (we all, all cultures, all individuals, make sense of things by (un)marking them, with plenty of striking cross-cultural similarities) and great variability: to take a simple example, in the word “nurse,” the feminine is unmarked; but that is just another way of saying that “nurse” is marked female. In other words, what is marked and unmarked depends upon the question being asked—meanwhile, even though this means the application of the terms requires judgment and involves disagreements and arguments (good things for any mode of inquiry), proof is often readily available in fairly convincing forms: we will never say “female nurse,” while “male nurse” is so commonplace as to be an easy punchline (especially since “nurse” also tends to be marked not only “female,” but “sexy female” in certain commonplace fantasies—which is why a markedly unattractive nurse also functions as an easy punchline). And, needless to say, all this can easily change rapidly, and undoubtedly already is doing so, as women become doctors in numbers almost equal to men and men (probably, but interestingly this change doesn’t seem so rapid) migrate into nursing in growing numbers.

Any mode of inquiry, then, unmarks some newly marked object, and singles out the rules according to which that object works as a “constituent element” of some structure at a particular level of inquiry. Again, it seems to me as if we can work completely within the terms of the successive emergence of the speech forms here, since the ostensive, the imperative, the interrogative and the declarative comprise distinctly different and complementary elements in the process of inquiry. An object becomes marked because it no longer works according to its normal rules, which means we can no longer attend from that object to others—our attention is now drawn, imperatively, towards the object, as we attend from the now failed rules of its operation within a system to the rules of its own constitution, from one of its constituent elements to another, and so on. We see what seems to still fit together—we command the object to compose itself in such a manner that we could again attend from it to other things—and what refuses our command leaves our command prolonged, hanging, so to speak, converting it into a question: what should be re-positioned so as to make things fit again?

All language is inquiry, and markedness allows us to see the stakes of the inquiry in a way that is made invisible in those spaces we have explicitly set aside and unmarked (made safe) for inquiry—an object that doesn’t work according to its normal rules, for example, might be a good friend or loved one, who is behaving “suspiciously” (to suspect someone is, obviously, to mark him or her). That person has so far acted as a sign for me, allowing me to attend from him or her to a range of other things in the world, a set of habits which the person is him or herself also a part of—that the person is unmarked doesn’t mean I neglect him or her, just that I can unproblematically enjoy the pleasures he or she brings me (and unproblematically soften and contextualize the pains). Once he or she becomes marked I will not be satisfied until I have succeeded in unmarking him or her once again, by reducing the intrinsically anomalous “suspicious” behavior to some new set of rules, to which I can assign a formula (“she’s having trouble at work”) which identifies a new constituent element of a modified reality (the relation between home life and work has shifted) and which is in turn accommodated to a modified set of habits, which means I have unmarked him or her once more.

Terms like “constituent element,” “component part,” and “rules” are also, of course, both indeterminate enough to be used in many different ways and on every level of reality, and precise enough to produce the definitions and delimitations we need to hold the things in place long enough to get a good look at them. We seek clarification along these lines all the time (were you referring to x or y?) and often enough get it. I want to conclude by making another point, though, which is in fact what I wanted to get to all along. It seems to me that the mark or measure of a strong language and, by implication, a healthy culture and civilization, is that it allows for the simultaneous existence of varied and incommensurable “constituent elements” (identified within distinct idioms, each with its own “rights”). The mistake of modern scientism was to insist upon a single vocabulary to describe all of reality, which leads one to ruthlessly extirpate all other vocabularies, as they can only appear as obfuscating competitors. What I have in mind is a society in which we could analyze the psyche by, for example, breaking it down into “ego,” “id” and “superego,” or even a complex tree of stimuli and responses, without thereby disabling a word like “soul,” which would identify a “constituent element” within an integral structure every bit as real as “ego.” We would, then, be mature enough to live with “soul” being marked as unscientific in some discourses, while, say, “damaged soul” remains operative (unmarked) for marking certain sources of evil in other, moral and spiritual discourses. (The best example I have of such a richly plural and yet coherent linguistic reality, and which I hope to find a way of exploring in this connection, is the English of Tudor and Elizabethan England, the language of Tyndale, Cranmer and Hooker, culminating, of course, in Shakespeare’s language and the King James Bible.) At this point, originary grammar comes into its own as a mode of cultural and social criticism, one which enables us to attend to the unmarked without feeling compelled to mark the unmarked permanently, in revenge for hiding itself. And, finally, the use of constraints or deliberately formulated rules so as to govern one’s own analytical discourse becomes a way of finding and generating new constituent elements, of shaking them loose, so to speak, from the unmarked formulas embracing us.

December 3, 2010

Sarah Palin, Anyown, and the Constitutional Reformation

Filed under: GA — adam @ 8:48 pm

I will lay down a marker right away—for me, the main criterion for supporting a Presidential candidate is that he or she knows what the Left is; anyone who thinks that a Republican president will be able to settle into the White House in 2013, put on the green eyeshades, and start balancing the budget in a sober, bipartisan manner is criminally naïve, and I don’t want anyone like that anywhere near the Presidency. Normal America and free America are at war with the Left, and anyone who is not ready to fire back when fired at need not apply. Sarah Palin seems to know what the Left is, and none of her potential contenders seems to have a clue. At this moment, the ability to create and run a political and economic media empire is more pertinent to presidential aspirations than the ability to balance a budget with your bare hands, which you can hire someone to do anyway.

But leaving that aside, Palin, and the Palin phenomenon, are intrinsically interesting—there seems to be widespread agreement on that, at any rate. She, in her public persona, seems to me an almost perfect complement to Barack Obama, and the Obama phenomenon—she seems destined to be his nemesis, a role she seems to relish and which she plays very well. I think an Obama v. Palin race in 2012 would dramatize all the post-Bush, indeed, all the post-9/11 conflicts; even more, it would finally bring the entire Progressive Era in our politics, dating back to the turn of the 20th century, onto the stage—and I think this would be both very healthy and incredibly exciting. We desperately need such a polarization now, and it would be nice to deal a blow to the illusions of the “fiscally conservative, socially liberal center” of the country. I don’t doubt that there are many Americans, maybe, depending upon definitions, a majority, who can be described as “fiscally conservative, socially liberal”; nor do I doubt that in a certain sense they are the “center,” picking and suturing together the least antagonistic items of both right and left. It’s an empty center, though, and a campaign that showed as much by forcing the “centrists” to choose would be healthy as well—if you support the kind of judicially driven federal government needed to push through and sustain the “socially liberal” agenda, then you can forget about fiscal conservatism. Fiscal conservatism would mean federalism and expanded property rights, both of which, as the politically savvy know, mean death to “social liberalism,” i.e., abortion on demand, gay marriage and religion out of the public sphere. And I might as well also say that I can’t say the word “gravitas” without, at the very least, smiling. I think that things are going to get rough, especially if the prerogatives of those plugged into the victimary public arena are even mentioned, much less trespassed upon—we need someone whose first instinct isn’t to placate the New York Times.

Even leaving Palin aside for the moment, it seems to me (I would be surprised if no one else has used the analogy) that the Tea Party movement is equivalent to a kind of Constitutional Reformation. The liberal judiciary, like the Catholic Church, has been, for the past 80 years, interpreting the holy text for the rest of us, and according to arcane and esoteric methods that ordinary citizens can’t penetrate. If you were to ask a member of the priesthood what the Constitution said about x or y, they would gesture towards piles of unintelligible commentary which it takes many years of training to navigate. Terms like “the Commerce Clause” have taken on a magical significance, changing the citizens’ property into the state’s. The Tea Partiers have simply insisted on reading the document for themselves (unfortunately there was no way of forbidding its translation into the vernacular). But the analogy extends further—just as the return to the biblical text itself, and an insistence on the individual’s right to interpret it himself, has led to more Protestant Churches than anyone can count (unless someone actually has counted them), so will the opening of the Constitution lead to many different, and often idiosyncratic, versions of the same. Not that many—the Constitution is a lot shorter and simpler than the Bible, and there is a tradition of rational argumentation and precedents prior to its appropriation by the advocates of the Living Constitution—but quite a few more than I imagine most originalists imagine. (Maybe they do imagine it and don’t mind—I certainly hope so.) There is plenty of room for idiosyncrasy, in other words, in this return to the real center, the founding events of the nation, just as there is plenty of idiosyncrasy in Sarah Palin, who also deliberately roots herself in that very center. It is this combination or “simultaneity” of centrality and idiosyncrasy, of the general “any” and the singular “one-y,” that I have in mind when I use the term “anyown.”

This mixture of the originary and idiosyncratic is best found, I think, in one of our most basic rights as Americans, the right to bear arms—number two, right after speech and religion, but arguably more fundamental, since how could we protect those rights without the right to bear arms? (I know, the order of the amendments was not meant to imply any order of rank—and yet they do often seem to be ranked this way.) And yet, as far as I know, the right to bear arms holds a comparable rank in no other national or international charter of rights—it is a distinctively American “universal” right. The centrality of the right to bear arms can be traced back to founding liberal theorists like Hobbes, who considered the right to protect your own life prior to, and unaffected by, your obligations to the state, but for this very reason it is very difficult to integrate it coherently with the more peaceably exercised rights which we expect the state to guarantee for us. Indeed, the main rationale, at least among its most fervent defenders, of the right to individual ownership of firearms, is precisely that it turns the citizen into an effective barrier to the establishment of a tyranny. How, though, can the state protect such a right unambivalently, since there can be no pre-established or agreed-upon rules for what, exactly, would constitute that tipping point at which legitimate government turns into tyranny? The best or most convenient definition, I suppose, would be the point at which the government starts rounding up all the guns; but such an action might indicate that, for the government, the tipping point at which citizen vigilance becomes rebellion has been reached.

Also, would anyone want to say there is no limit to the right to bear arms? I can own a pistol, a shotgun, a machine gun—how about a basement full of dynamite? Anti-aircraft missiles? What about the first billionaire who decides he wants his own nuclear warhead? If the real purpose of the right to bear arms is to deter tyrannical tendencies in government, wouldn’t we insist that citizens arm themselves in a manner commensurate with the power of the contemporary state—the contemporary American state? After all, what good would even “assault weapons” be against the tanks rolling into New Jersey and the planes strafing Manhattan? You could say that other rights have their limits in the infringement upon the rights of others—so, my right to free speech doesn’t permit me to stand in front of my neighbor’s house with a bullhorn berating him for his leftist politics. But what is the equivalent here? My stockpile disturbs no one, and by the time my basement full of explosives violates your private property rights by blowing up the block on which both our houses stand, it will do you little good to sue me.

But there is another way of interpreting the right to bear arms that preserves its idiosyncratic centrality. The government can’t be everywhere to protect everyone, and we wouldn’t want it to be; where it can’t be, armed citizens can, and can serve, while protecting themselves, as a kind of informal militia or posse, making it clear to criminals that they are safe to commit their crimes nowhere. This implies complementarity between government and people and, at its outer limits, a near merger of the former into the latter. The deterrence of tyranny can itself thereby be pre-empted by the shared obligation to secure the order whose breaches provide the very invitation needed by the tyrant to exceed constitutional boundaries. The right to bear arms in this way involves the citizen in the preservation of ordered liberty, and can be detached from that utopian resentment implicit in indiscriminate “anti-government” sentiments. At the same time, though, the boundaries separating vigilance, vigilantism and criminality are not always bright and clear, and will take different shapes across and within communities, based as they must be upon shared tacit understandings which overlap with other understandings and constantly require adjustment. The more deeply rooted the right, the more inadequate the merely legal attempts to adjudicate it, i.e., the more idiosyncratic.

Anyway, here is Palin’s forceful and borderline incoherent response to Barbara Bush’s patrician cruelty (“I once sat down next to her. Thought she was beautiful. She seems to love it in Alaska. I hope she stays there”), which wishes Palin not only out of Presidential politics but out of public discourse altogether:

“I don’t want to sort of concede that we have to get used to this kind of thing because I think the majority of Americans don’t want to put up with the blue bloods — and I say it with all due respect because I love the Bushes — but the blue bloods who want to pick and choose their winners instead of allowing competition to pick and choose the winners.”
She then invoked the economic crisis to explain her point.
“They [blue bloods] kind of do some of this with the economic policies that were in place that got us into these economic woeful times, too,” Palin said. “So I don’t know if that kind of stuff is planned out but it is what it is. We deal with it, and we forge ahead and we keep doing what we’re doing.”

The Bushes are blue bloods (ok, so far, so good), but she still loves them—nothing wrong with blue bloods except for when they try to “pick and choose their winners.” Palin has a response to Bush here, but she has cut and pasted into that response her own political “idiom” of the moment—a very helpful idiom, which has put into practice the excellent idea of changing the terms of Republican politics through primary challenges. The idiom doesn’t really work so well here, though, because wouldn’t the Bushes saying who they prefer for President be part of that open, competitive process? After all, that helps those who respect or despise the Bushes sort out their own views of the candidates. But Palin doesn’t want to come out and suggest that Barbara Bush is a spiteful old shrew, representing the retrograde wing of the party, and I think she has imposed upon herself the kind of discipline which ensures that you don’t say anything in response to new situations which hasn’t been “piloted,” so we see the limits of her repertory here. The connection to “these economic woeful times” (as I’ve mentioned before, Palin’s grammatical choices can be fascinating—recently, she responded to a reporter trying to spring a question on her at a book signing with something like “can’t we get that good enthusiasm back,” in this case using a favorite adjective of hers with a favorite noun with which that adjective just happens not to go) is even more of a reach, but, paradoxically, she is getting at something here because there is a real connection between the “elites” (what Angelo Codevilla calls the “Ruling Class”) and the kinds of political-economic machinations that led to the Wall Street meltdown. Palin knows this, and has posted cogently on it on her Facebook page, but what I think we can see in this instance is an imperfect intuition regarding how to stitch together the various arguments, slogans and commonplaces at her disposal—especially since in this case getting too explicit would also be getting far more polemical regarding the Republican “establishment” than Palin wants, and can just barely avoid (which means that she is also very aware of the political boundaries she is operating within). We see this all the time with Palin, and it’s why she can, in fact, look stupid sometimes—she doesn’t know how to weave all the clichés together in a seamless manner as do most politicians operating at her level of exposure. But that’s also a way of saying she’s not very good at saying nothing. And in that way, more than any other, she is more grounded than anyone or anyown else in the emergent idiosyncratic center.
