GABlog: Generative Anthropology in the Public Sphere

July 17, 2014

After Liberalism 2

Filed under: GA — adam @ 8:02 am

The left’s propaganda offensive in the wake of the Supreme Court’s Hobby Lobby decision, which affirmed the religious right of business owners not to subsidize forms of birth control that violate their convictions, involves arguing, as blatantly as they feel they can, that the Supreme Court (or, better: 5 men; or, even better, bleaching Clarence Thomas: 5 white men) has outlawed birth control. It’s easy to treat this as crazy, or as breathtaking brazenness, a desperate bid to boost voter turnout amongst the stupidest elements of the Democratic base. The 2012 election, though, after months in which I assured myself that no one, of course, could be stupid enough to believe that “War on Women” actually meant something, much less that the Republicans were waging one, taught me to take such assertions very seriously, because clearly many others do. The assertion that “5 guys” have outlawed birth control is very similar to the insistence that not only must “marriage equality” be immediately, universally and uniformly imposed, but that no decent person could bear to be exposed to anyone whose attitude towards it is anything other than acclamatory (dissenting opinions seem to have the status of second-hand smoke in such discourses). The logic is the same in both cases: I am allowed to do something only if everyone supports and celebrates my doing it.

And that is not, in fact, illogical at all. Only liberalism finds it outrageous. By “liberalism,” of course, I mean the traditional variety, which starts political reflection with the assumption that there is something pre-politically inviolable in the individual, that this inviolability implies a series of rights that the individual bears with him or her in entering political society, and that the main business of politics is cataloguing those rights, setting up hierarchies amongst them, figuring out how best to protect them, to prevent their exercise from leading to one colliding with another, and so on. Only a liberal in that sense can say “I may disagree with what you say but will defend to the death your right to say it,” even if very few liberals have ever gone anywhere near death defending ideas they consider obnoxious (even the ACLU tends to protect only those ideas that the middle class finds obnoxious).

But maybe liberalism is wrong. Or was right, for a limited time, in certain places, among certain sectors of the population. And maybe is no longer. The abstract freedoms advanced by liberalism suited the rising middle classes in their struggle against feudalism (and slavery and absolutism) perfectly, and then gained new life in the struggle against Communist and fascist totalitarianism. Liberalism’s victory in these struggles enabled it to be sold as a set of eternal principles (and to disguise its basic emptiness), but maybe these enemies were very contingent and time-bound. I’m not sure that the “great debates” of the liberal order ever amounted to more than liberal ridicule of, and conservative prudential or sentimental defense of, some element of pre-modernity that persisted into modernity. Pre-modern elements having been thoroughly routed, liberalism no longer seems to provide a frame for the main disagreements in today’s social order. This would mean that liberalism has done its work. But the work of liberalism would, then, have been a very localized one, which was to fend off outmoded and especially dysfunctional alternatives to capitalist modernity—absolutism, slavery and the varieties of totalitarianism, which can be shamed merely by being brought into open debate among mobile, self-reliant people. Once all the pre-modern forms have been abolished and the more genocidal forms of totalitarian rule discredited, what, exactly, remains of liberalism? Does anyone close to power today propose a model of governance beyond local technocratic fixes to increasingly dysfunctional systems?

The issues that we have today don’t seem to lend themselves to the debating society model of liberalism. We don’t, for example, seem to have a vocabulary for discussing the rights and wrongs of the kind of statistical surveillance the NSA has been conducting since 9/11: on the one hand, developing algorithms for determining that certain individuals should be probed more closely (e.g., someone who has called Pakistan, Afghanistan and Saudi Arabia hundreds of times over the past few months, to numbers that dozens of other people have called hundreds of times) seems reasonable; on the other hand, it’s very hard to fit this practice into any traditional notion of a court-sanctioned search; and, beyond that, accepting it requires a reservoir of trust in the government’s genuine interest in protecting us from attack and nothing more, a trust which few feel and fewer will admit—in large part because the government itself has abandoned even a show of liberal neutrality. But, of course, when there is a terrorist attack, most people will blame the government, the pendulum will swing wildly in the other direction, and objections will be vehemently dismissed, at least until the attacks become a distant memory (which seems to happen increasingly quickly). Or, maybe, we will stay steadfast in our libertarian and victimary convictions and absorb attack after attack, continuing to demonize anyone who suggests even the most general connection between Islam and terror. Either way, we have nothing resembling a traditional liberal “conversation” (and one starts to wonder to what extent we ever really did) over great questions of freedom and authority, war and peace, and so on. In the case of the NSA surveillance, it seems that either the government will do what it needs to do in order to fulfill responsibilities it claims have been delegated to it, regardless of how such actions can be squared with rights talk; or, there will be sufficient pushback, which will simply mean that “we” have rejected the government’s assertion of responsibility and have chosen to distribute it in a new way, or to just be irresponsible. More precisely, in the end the government will find a way to operate in secrecy, because bureaucrats and elected officials prefer the fleeting obloquy of exposure to the delegitimation and loss of power actual attacks will bring; or, on the other hand, hackers and leakers and their friends in the media will make secrecy impossible—either way, these decisions are not being made in any recognizable liberal or democratic way. Other issues regarding privacy, innovations in health care and the biological sciences more generally, and intellectual property, for starters, seem equally immune to classically conducted “debates”—these differences over the ambivalences of what is usually blandly called universal “interconnectedness” (but might better be considered universal contagion or hostage-taking) seem more likely to be decided by unilateral initiatives which create irreversible facts on the ground, followed by ratcheting effects of one kind or another. For individuals and groups the choice will be stark: be inside or outside, and if you want to be inside play by the rules or find yourself outside; and you’d better not be caught outside unless you can manage to create a new inside.
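To make the shape of such an algorithm concrete, here is a minimal sketch in Python of the call-pattern flagging heuristic the example above describes. Everything in it (the watchlist, the thresholds, the data layout) is invented for illustration; it reflects only the example in this paragraph, not any actual NSA criteria:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Call:
    caller: str
    number: str    # destination number
    country: str   # destination country code

# Hypothetical parameters, loosely matching the example in the text.
WATCH_COUNTRIES = {"PK", "AF", "SA"}   # Pakistan, Afghanistan, Saudi Arabia
CALLER_THRESHOLD = 100                 # "hundreds of times"
SHARED_CALLERS_THRESHOLD = 24          # "dozens of other people"

def flag_callers(calls: list[Call]) -> set[str]:
    """Flag callers whose heavy traffic to watchlisted countries goes to
    numbers that many distinct other callers also ring."""
    # Count each caller's calls into watchlisted countries.
    watch_calls = Counter(c.caller for c in calls if c.country in WATCH_COUNTRIES)
    # Map each destination number to the set of distinct callers ringing it.
    callers_by_number: dict[str, set[str]] = {}
    for c in calls:
        callers_by_number.setdefault(c.number, set()).add(c.caller)
    return {
        c.caller
        for c in calls
        if c.country in WATCH_COUNTRIES
        and watch_calls[c.caller] >= CALLER_THRESHOLD
        and len(callers_by_number[c.number]) >= SHARED_CALLERS_THRESHOLD
    }
```

The point of the sketch is only that the criterion is aggregate and statistical, which is precisely what makes it so hard to square with the individualized, court-sanctioned search of traditional rights talk.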

That there is only approved behavior and disapproved-of behavior seems much less counter-intuitive than the liberal claim that there is disapproved-of behavior that we nevertheless allow, i.e., approve of. Less counter-intuitive and undoubtedly far more universal. The problem is that liberal society has upended the clear boundaries between approved and disapproved. What we are seeing now may be an attempt to restore those boundaries, which might be necessary, in the sense that human life is ultimately untenable without them. The difficulty lies in the lack of any consensus over what is to be approved. The solution is simple, if difficult—secession, partition, into smaller communities which can arrive at such a consensus. The only meaningful conversations we might be able to have in the near future will be over the terms of such a partitioning, and if there is anything to hope for it is that the last act in the liberal order will be to partition out of it into a new anti-federalism with some modicum of grace and a minimum of violence. (The problem of maintaining the viability of overlapping local communities would presumably then generate a new politics.) (I am encouraged by the news that California will be voting this year on a proposition to break the state up into 6 states. I assume it will fail, and Congress has to approve any such move even if Californians vote for it, but I believe once the idea is out there, and secession is de-stigmatized, we will see much more of it.)

Capitalist modernity! I used the term with ease a couple of paragraphs back, as a convenient other to feudalism, absolutism, slavery, communism and fascism. Capitalism has really only been tried in a few places, for brief periods, and most people seem to have found it terrifying—what we have had mostly is corporatism. As soon as one starts to say that capitalism is the true way, we just haven’t gotten it right, one starts to hear echoes of identical arguments made in the name of socialism and communism. The free market is real, grounded in the reciprocity constitutive of the originary scene, and we can study its operations and promote its spread, but it would probably be realistic to resign ourselves to the fact that there is only sufficient popular support for the free market in carefully regulated and administered doses (and, regardless of popular imagination, it more often comes in arbitrarily and crookedly regulated and administered doses), even though, fortunately, important innovations sometimes sneak through before the bureaucrats have a chance to figure out what happened. Maybe what we have come to call modernity is the less grandiose fact that, for some time, there has always been some faction (and sometimes several at cross-purposes) that finds it in its interest to support the free market to some degree. The end of liberalism might also be the end of at least “Enlightenment” modernity, whose slogans were always just a battering ram to use against feudalism, and which has gradually lost its legitimacy as the self-proclaimed moderns continued to find more and more “pre-moderns” to denounce, hector and, when possible, outlaw. The only thing, though, that would prevent the upcoming partitioning from becoming a new dark age would be sufficient (define “sufficient”! I confess, I can’t, not sufficiently) recognition within and between communities of the need for free markets. But while some modernities have touted markets as vehicles of freedom and prosperity, the disciplinary order would equally stress the market as disciplinary agent, practically inculcating the realization that nothing will come from nothing. Indeed, an index of the health of any social order is the number of people who oppose restrictions on free exchange even if doing so benefits neither them nor anyone in particular as far as anyone can tell. If you want a genuine “veil of ignorance,” there you have it, and a very practical one available at any time: no one can tell who will benefit beyond the very short term by removing obstacles to free trade. Those who willingly reside behind that veil are the ballast of social order.

The way to act within the new, “disciplinary,” order, then, is as the representative of a discipline, which one advances unwaveringly and unquestioningly, at least to those outside the discipline; guarding the boundaries of the discipline, though, makes one a better participant in the market by leading one to respect all the other disciplines, which is to say to withdraw behind the veil of ignorance of the broader conditions of possibility of one’s disciplinary activity—a veil of ignorance which is also the condition of possibility of the local knowledge of surrounding disciplines constitutive of one’s own. Of course, everyone participates in several disciplines, which overlap and perhaps antagonize each other in varying degrees. This will be the source of ethical dilemmas in the disciplinary order. But it will be silly to complain of violations of rights which are not simultaneously rights of the discipline, just as it would be ridiculous for a doctor to complain that his free speech rights are violated by the fact that no hospital will allow him to carry out an unvetted form of surgery that strikes his colleagues as bizarre—the doctor proposing something new only has the right to complain that members of the profession fail to follow or reasonably revise their own protocols for approving new procedures. Disciplines are radically different from tribes, insofar as they are less exclusivist, make variable claims on the individual’s loyalty, and make claims to knowledge and institute procedures for arbitrating and encouraging such claims; but they are more like tribes than they are like the polity of the liberal modernist imaginary insofar as they recognize no rights that are not constitutive of the discipline itself. And that, in fact, is the only coherent way of thinking about rights.

June 7, 2014

After Liberalism

Filed under: GA — adam @ 5:05 pm

If we can’t distinguish between defending, or at least accepting, someone’s right to say something, on the one hand, and agreeing with them, on the other, then liberalism, in the classic Enlightenment sense, no longer exists. This seems to be, increasingly, the case—Marxists and other antiliberals have long argued that the “bourgeois” freedoms are disguises for bourgeois privileges (the poor and the rich equally forbidden to sleep under the bridge, etc.), but it seems to me that something a little different is happening now. It’s not so much that people argue against the distinction (while implicitly acknowledging that under new, more just conditions, it would be legitimate), but that it is simply unintelligible to more and more people. If you say that those opposed to same-sex marriage should have the right to voice their opinion, or that people who have expressed views that many or a majority would deem racist do not thereby surrender their property and other rights, the increasingly likely response is: why do you hate gays? Why do you support racism? There are many reasons for this development, which, not surprisingly, seems especially common among the young, and is complemented by the libertarian inability to distinguish between what is permitted and what is good (if, for example, you wonder about the effects of excessive consumption of pornography, the most likely response will be an indignant insistence on one’s right to do what one wants with one’s computer screen and body, along with aspersions about one’s own presumed puritanism, etc.). In short, more and more people want to do what they want to do, and to (not) have done to them what they (don’t) want done to them, and the whole question of grounds for doing one or another thing, and of legitimate grounds for doing or not doing, is something we no longer seem to have the language for. The problem, as always, is that what some want to do is what others don’t want done to them, and the always largely fictional discourse of “rights” was there to adjudicate the competing claims—or, at least, establish an equilibrium as both sides use rights talk to entrench their interests within the state apparatus. If the means of adjudication collapse, and it’s too tedious to try and retrieve them from the 19th century, what happens? We have a standoff between the victimary and the libertine (I know that not all libertarians, maybe not most, are libertine—but the libertines have the most powerful cultural and political presence, since arguments in favor of drug legalization and against the regulation of various pleasures draw far more supporters than arguments about the evils of central banking), both equally plausible and legitimate children of modernity.

Let’s further factor into this Eric Gans’s most recent Chronicle (#463: More on the Victimary), which advances the discussion into what Gans seems ready to concede is likely the predictable result of the market system: a polarization in wealth that doesn’t produce immiseration at the bottom, but rather eviscerates the “middle” where normal forms of recognition (“respectability”) can reasonably be expected. With the respectable middle cut out, what are left are pathological forms of “theater”—mass killings and other forms of cheaply and/or viciously acquired celebrity. The victimary, in this context, seems less a driving force in history than a rather feeble “sacrificial substitute” for this more devastating form of inequality that is beyond repair. To continue my own discussion, one might say that the victimary symbolically rebels against this polarization while the libertines vicariously identify with it. Both sides live in fantasy worlds.

But even if Piketty’s analysis, referenced by Gans, to the effect that today’s polarization is more representative of the trajectory of the market economy than the rough equality of the post-War years, is accurate, the two questions, distribution of wealth and distribution of recognition, can be delinked. It is not the emergence of billionaires that has destroyed “Fishtown,” or the “middle.” Even the much bemoaned decline in manual labor has been exaggerated—plumbers, roofers, contractors, carpenters and others employed in improving and fixing up can still make a very good living. In fact, there seem to be too few of them. Government-enforced unionization destroyed Detroit, not the greed of the Big Three automakers. The welfare state’s assault on the family and victimary eruptions in the inner cities that crippled law enforcement and education destroyed wide swathes of black America (and ever larger pockets of white America), not the desire of corporations for cheaper labor overseas.

But there’s a limit to such socio-economic and cultural explanations, a limit evident in the problem of explaining the explanations. Things were gradually improving on all fronts during the 1950s and into the 1960s, and yet the 1960s proved the most disruptive period in American history since the Civil War. Nor is this anomalous—the same was true of the years leading up to the French Revolution. Why were the massive social experiments of the 1960s deemed necessary? The steady, if uneven and unbalanced, improvement of living conditions wrought by the expanded market has been unsatisfactory for many people—and, in particular, for those people who make things happen, the politically astute, the “cool,” the ambitious, the well-connected. Why? What is missing? The editor of the conservative journal The American Spectator, Emmett Tyrrell, often says, perhaps tongue in cheek, that we tremendously underestimate the effect of boredom on world affairs. Serious or not, he has a point—boredom, what sociologists might call “anomie” and theologians “despair,” must be given its due. Boredom is part of the structure of addiction that is so prevalent in (not only) contemporary life—the addict wants to recover a novel and exhilarating experience, and rather than realizing that such experiences must be granted by immersion in reality, seeks it out, and seeks to secure it, in the identical form in which it was first experienced. As we all know, larger and larger doses are needed to attain a less and less satisfying approximation of that original experience. And in the ever more vast in-between, there is nothing but boredom, an itching for the next, inevitably disappointing, fix.

An important element of illiberal critiques of liberalism has been the observation that liberal rights—to speech, religion, association, etc.—really imply reciprocal indifference more than reciprocal recognition. The peace of the late medieval religious wars turned into the grave of meaning—without the (exhilarating) possibility of martyrdom, without the urgency of universal salvation, “belief” doesn’t amount to much. In that case, maybe the battle between the victimocracy and the libertine will have salutary effects. There might be a real stake in the libertine’s insistence, against feminist objections, on his right to play a sociopathic pimp in Grand Theft Auto. The libertines can try to secede from the victimocracy, and they may succeed, certainly to some extent (e.g., the “man-cave”). But insofar as they must operate on victimocratic terrain, they will have to subvert it from within, thereby revealing victimocracy’s many antinomies and anomalies. Perhaps the question of recognition can be addressed in new, fresh ways. (The victimocracy is of course inherently paradoxical—what they do with more power can’t possibly be what they want to do.) One thing we can thank the victimocrats for is intensifying the question of the relations between representation and reality. The victimocrats have not answered it and, indeed, it is one of those great questions that can never be answered definitively. Kevin Williamson, the National Review columnist to whom Gans has been referring repeatedly of late, recently published an article on transgenderism in which he insisted that “Laverne Cox [a well-known “transgender” actor] is not a Woman.” At latest count, the number of comments is 8,195. The most interesting passage, for us Generative Anthropologists, is, I think, the following:

The phenomenon of the transgendered person is a thoroughly modern one, not in the sense that such conditions did not exist in the past — Cassius Dio relates a horrifying tale of an attempted sex-change operation — but because we in the 21st century have regressed to a very primitive understanding of reality, namely the sympathetic magic described by James George Frazer in The Golden Bough. The obsession with policing language on the theory that language mystically shapes reality is itself ancient — see the Old Testament — and sympathetic magic proceeds along similar lines, using imitation and related techniques as a means of controlling reality. The most famous example of this is the voodoo doll. If an effigy can be made sufficiently like the reality it is intended to represent, then it becomes, for the mystical purposes at hand, a reality in its own right. The infinite malleability of the postmodern idea of “gender,” as opposed to the stubborn concreteness of sex, is precisely the reason the concept was invented. For all of the high-academic theory attached to the question, it is simply a mystical exercise in rearranging words to rearrange reality. Facebook now has a few score options for describing one’s gender or sex, and no doubt they will soon match the number of names for the Almighty in one of the old mystery cults.

Williamson’s position is the classically modern, Enlightenment one: the point of language is to represent reality accurately. We can see here the privileging of the declarative sentence over the ostensive and imperative that Gans has associated with Western metaphysics. (Williamson even alludes to the displacement of pagan polytheism by the one “Almighty” discovered/invented by the ancient Hebrews.) One’s genitals, and, perhaps, one’s hormonal and chromosomal structure, which can be observed by everyone according to shared clinical and experimental criteria, determine one’s gender—not more amorphous and “unfalsifiable” criteria like what one feels fated to be. Of course there are anomalies—the rare individual with an extra chromosome, extremely unusual hormones, or under- or over-developed genitals. But these don’t upset the basic classification, which can be justified by reference to broader biological assumptions: the anomalies can be safely sequestered because they don’t contribute to the reproduction of the species, the meta-criterion for biology. (But, of course, the discipline of biology evolves—some of the commenters on Williamson’s article make biological claims, with what plausibility I can’t say, for the “reality” of transgenderism. Even if they’re right, though, the cultural and political consequences are not obvious, or unambiguous.)

But the originary hypothesis allows us to at least entertain the possibility that these modernist assumptions are the anomaly, and perhaps not relevant beyond the specific disciplines whose ongoing inquiries they support, since we know that language has, in fact, created the most astounding reality of all—the human reality. We can, in fact, argue about how many genders there are, what they should be called, how they should be represented, what the possible relations between them are, and such arguments will change the way we live, love and reproduce. At the very least, such discussions focus our attention unwaveringly on signs, not on some utopia beyond representation—even if utopian fantasies got the discussions started in the first place. There is no ultimate transcending of biology, or other “material” realities, but all this means is that biology will always resist and deflect our attempts to represent it. In the end, maybe we will find ourselves with a comfortable middle or norm of the familiar two genders, with a bunch of unmolested, more or less interesting or annoying outliers; maybe not. Once tacit assumptions get excavated, they cannot be made tacit again—the historical function of the libertines may be to exaggerate and caricature and in this way, paradoxically, re-normalize the traditionally normal, this time as play and games (It’s worth keeping in mind that feminism, and certainly gay liberation, have had their libertine factions, now largely kept under wraps in the interest of political unanimity and momentum). And in the reciprocally aggravating chafing at constraints into which the victimocrats and libertines will hurl each other (the libertines wanting to express all the possibilities of a polymorphously perverse nature, the victimocrats demanding the uprooting and revision of all spontaneous desires) there may be space for the originary thinker to reflect upon a reality replete with examples of why we need constraints in the first place.

Instead of adjudication, and the increasingly encumbered and arbitrary discourses of “rights,” maybe there will be a space to treat culture as play and games. Back in the 80s, Jean-Francois Lyotard extended Wittgenstein’s notion of “language games” to propose an ethics of political culture, guided by the principle that one doesn’t try to eliminate a fellow player from the game. We certainly can’t count on such comity now, and, to be honest, I would not agree to play by such rules myself—nor do I think they can be made coherent, as cultural “pieces” are not as stable as those in chess. But we can certainly think in terms of “moves” rather than positivist, metaphysical or historicist truths, and of provisional, emergent rules that don’t presuppose some kind of transhistorical Truth Commission that in the end is sure to ratify one’s own truth claims. Play presupposes a field, a constitution of a portion of reality, or reality itself, that is to be governed by the rules of the game. A good player doesn’t want to “win” (any victory being very temporary anyway) so much as to keep remaking the field so as to multiply the number and variety of moves that might be made (first of all by the player himself, but how could opening avenues for oneself not do the same for others as well?). The minimal ethics governing the field is that we all take turns going first, as going first almost inevitably confers an advantage in any game—sometimes you speak in my terms, sometimes I speak in yours. Unless we’re really bent on mutual extermination, we should be able to manage that. We’ll see what the libertines and victimocrats make of each other’s playbook and field position.

In the end, I think both sides will undergo significant shock and stress, because, in the end, I think that the Jewish revelation is right in one crucial respect. The Jewish name of God, revealed to Moses at the burning bush, “I am/shall be that I am/shall be,” cannot be said by the believing Jew because you can’t say it without claiming to be God. This is my one, marginal, addition to Eric Gans’s extensive analyses of this revelatory event—if the name of God explicitly defers the desire to proclaim oneself God, I take this to be because that desire must have emerged in a powerful way in the ancient world of God-emperors. It is an enduring desire, manifested in the belief that following nature or reason will provide moral truths no less than in totalitarian attempts to remake the entire fabric of human relations. Indeed, the ideology of modernity is that we are all gods to ourselves. Descartes’s “I think, therefore I am” is another manifestation, and so I am much less ready, I think, than Gans, or than I once was, to celebrate the centering of each individual in his own desires. Only shared revelations, on particular scenes, of our reciprocal being-hostages-for-one-another (to borrow a term important to Emmanuel Levinas, and then Derrida), or what I have been calling “disciplines,” can create legitimate centers. No one can know how many such centers, or of what duration or quality, are necessary, but once there are enough of them, “inequality” won’t matter. And if there aren’t enough—well, the catastrophes that will result will make the symbolic holocausts of the victimocracy seem like so much windmill tilting. But getting enough of them can only be a learning process and, as the pedagogical cliché has it, you have to start with where the learner is.

May 30, 2014

Psychogeography

Filed under: GA — adam @ 7:55 am

The line of inquiry I have undertaken in the past few posts, which I suppose could be called “psychological,” but perhaps would better be called, using a term I have come across in some radical writers, “psychogeographical,” seems increasingly important to me. I find myself in a position analogous to that of the Western Marxists following the failure of proletarian revolution in Western Europe during the 1920s—in response to the failed historical logic, according to which the proletariat would inevitably be propelled into revolutionary confrontations with the ruling class, theorists like Lukacs, Gramsci, Horkheimer and Adorno directed their attention to culture, aesthetics, and the unconscious. My own analyses of victimary discourse post-9/11 led me, not to a revolutionary, but to a restorative hypothesis: now that we were at war with a privileged victim class, victimary assumptions could be made self-cancelling by conducting that war in the name of the victims of the putative victims—oppressed women and religious minorities in the Muslim world, peaceful Muslims, and freedom-seeking democrats in majority Muslim countries, for starters. This is an idea continually pursued by the right in various arenas—anti-abortion as defending the victim of the victimized woman; school choice and enterprise zones in the inner cities to defend the black poor against the immiseration caused by their own leftist leadership; anti-union policies defending the individual worker against the authoritarian union bosses, etc. The idea is very good, but it almost never works—one could take the classical revolutionary position and insist that it just hasn’t been done the right way yet, but I prefer to cut my losses and proceed under the assumption that it can’t be done. (Of course, all the policies I just mentioned could be pursued on their own merits—my only point is that it’s an illusion to expect them to break the stranglehold of victimary thinking on our politics.) The reason for the impotence of such an approach is clear—only the victims of the center register for victimary discourse, while, first, victimizations carried out by the victims of the center only further indict the center (now for so brutalizing its victims as to turn them into oppressors), and, second, any victims who support the center (breaking solidarity, in Uncle Tomish manner, with their fellow victims) only prove the corrupting effects of the center—thereby more decisively disqualifying representatives of the center from any claim to liberationist credentials.

This further means (as perhaps should have been obvious all along) that the victimary goes well beyond politics, striking deep roots in the culture and the psyche. Ultimately, it raises questions of fundamental “scenicity,” which is to say of the sacred. I have often thought about and discussed the victimary in terms of the sacred, but always in terms of a public, political sacred, or in terms derivative of Voegelin’s analysis of modernity as “Gnostic.” I don’t repudiate any of those arguments, but they simply raise the question, why are so many in the modern world vulnerable to gnostic faiths? Voegelin’s answer is, essentially, that the differentiations introduced by Christianity into the West are simply too demanding for too many—which is not a bad analytical starting point, but simply raises another question, i.e., how to make the needed upward moral innovations possible?

I have worked, recently, on two concepts that, in conjunction, seem useful here. Most recently, in my “Selfy” post, I introduced the notion of a “constitutive fantasy,” a concept which has a history in psychoanalytic and postmodern discourse that I wouldn’t be interested in tracing (it has been important to Slavoj Zizek, for one), but that I think can be given an originary meaning in a fairly precise way. I used the term in the process of working through Andrew Bartlett’s exploration of the erotic dimension of “personhood”—Bartlett starts by imagining someone imagining their own erotic centrality, as the sole and unwavering object of desire of every other individual on the scene. Bartlett goes on to say that this situation would not be desirable in reality (and to analyze the more realizable retreat of the couple who accord each other reciprocally exclusive erotic attention), but that would of course be the case for many fantasies, and this does not derogate from its power as fantasy; indeed, what makes a fantasy constitutive is that its unrealizability provides the measure for every actual experience. Let’s think about this in scenic terms—desire does not aim just at possession of the object, but at the occupation of a position on the scene. If I’m just hungry and go into my kitchen, open a can of beans, scoop the beans out with my hand, and eat them right out of the can, there is no desire involved worth speaking of, just the brute satisfaction of appetite; if I go out to a restaurant, I want to see and be seen, and not just satisfy my appetite (even if only in the negative sense of not drawing unwanted attention to myself). In the latter case, joint attention is involved, and one wants to shape and direct that attention in specific ways. It follows that the convergence of attention I am aiming at has a “vanishing point” at which desire would be satisfied because all attention would be distributed in the optimal way. In the case of the restaurant, that might mean all eyes on my companion and myself, with my companion noticing that attention, joining and basking in it, while further noticing my own Olympian indifference to it (an indifference which intensifies it), etc. That would be my constitutive fantasy of the scene, and its relation to the actual scene may take many different forms—a source of frustration, of ironic amusement, of pleased surprise at how many elements of the scene seem to be in place, of self-skepticism as to whether my fantasy is in fact projecting those elements into the scene, and so on. This is the “reality testing” Freud saw as the source of the “Ego,” and which I would now speak of in terms of “ostentation,” another concept I have been using to denote binding up the various vectors of attention into self-presentation on a scene. The constitutive fantasy must have had its place on the originary scene, as desire multiplied by the desires of others, and is implicated in any scene; it is both individual (no one else can have quite my place on the scene, or my history of participating in relevant scenes) and shared (the “vectors” of attention one’s desire retrojects back to the origin are necessarily drawn from the scene itself, and from the previous scenes mapped onto this one).

In an earlier post, I developed the concept of a “violent imaginary,” to account, first, for the fact that the collective self-immolation the originary hypothesis assumes is averted by the sign could only be imagined by the participants on the scene, being in fact very unlikely (the melee following the rush to the center would be disorganized, flailing and aimless, and would probably break up quickly with little permanent harm done to anyone); and, second, to suggest that any subsequent scene is similarly informed by a more or less dimly apprehended “worst possible scenario” that grips what Coleridge called the “primary” imagination and thereby shapes the meaning that will be conferred on the scene. The worst case scenario can take various forms, as many as all the possible configurations of the scene and its breakdown—into a many-on-one assault, into group clashes, into one-on-one stand-offs, etc. As new configurations of the scene are evoked with historical developments, violent imaginaries are varied and enriched in new ways—antisemitism would involve a particular violent imaginary (clever operators behind the scenes, etc.), and anti-communism another (and I use these examples to make the point that justified as well as unjustified fears all have their violent imaginaries woven into them). Modern violent imaginaries seem to oscillate back and forth between fear of a monstrous Big Man and fear of a monstrous anonymous mob.

It follows that the constitutive fantasy would itself evoke a particular violent imaginary, as the idealized alignment of vectors of attention also produces the target around which the violent imaginary is articulated, with the subsequent sign or ostentation including the deferral of the specific mode of violence imagined. This would in turn involve the abandonment, but not forgetting, of the constitutive fantasy. One could only access the constitutive fantasy/violent imaginary through the signs put forth, through the ostentation—you have to learn the language by which the fantasy/imaginary is conveyed. I think this analysis can generally take the form of a kind of reverse engineering through negation—for example, if one argues for “discipline,” we can assume that “indiscipline” is central to one’s violent imaginary, and an orderly allocation of the object central to one’s constitutive fantasy. Constellations of fantasy/the imaginary are extremely difficult to recognize (especially one’s own) and even more difficult to dislodge or modify. One could only do so by locating oneself within the narratives through which the fantasy/imaginary is played out. The constitutive fantasy and violent imaginary can be synthesized into what I called in my latest essay in Anthropoetics (“Attentionality and Originary Ethics: Upclining”) the “attentional loop,” or that moment in any scene in which the participant draws the attention of the others and must put forth his/her version of the sign that will redirect attention back to the center. The attentional loop is resolved into ostentation, which results from the submission of the originary fantasy and violent imaginary to constraints, or deferral—when these constraints break down, phenomena like paranoia and sociopathy, or the reduction of all scenes to one’s own, result. If we are to speak of an internal scene of representation, it must be composed like any scene—by means of a sign of deferral of some appropriation that would, if attempted, destroy the scene. That appropriation would be the attempt to realize the closed circle of one’s constitutive fantasy/violent imaginary.

I will now try to move these abstract concepts closer to contemporary cultural experience by pointing to what seems to me an interesting and increasingly important problem in contemporary narratives: how do you construct a compelling narrative when the decisions and actions of the characters involved are determined by officially (i.e., expertly) labeled pathologies, rather than desires and resentments one could imagine to be universally shared, even if accentuated and articulated uniquely in the narrative agent? The movie critic James Bowman points to an interesting example of this phenomenon in his review of “Silver Linings Playbook”:

In order to like David O. Russell’s Silver Linings Playbook as it ought to be liked, it helps to see it as a movie about jealousy, even though that’s not quite the obvious way to see it. We learn in flashback that, when Patrick Solatano (Bradley Cooper) came home unexpectedly one day and found his wife, Nikki (Brea Bee), in the shower with another man, he beat the guy so severely that he had to be sent away to a mental hospital for eight months, as he was deemed to be suffering from “undiagnosed bipolar disorder.” The movie doesn’t make a big deal out of it, but it tells you something significant both about Pat’s subsequent history and about the state of our culture that the obvious cause of his behavior was seen as something to be ignored or rejected in order that it might be dignified, or made more socially and legally more acceptable, as a clinical condition — and that Pat himself accepts this medicalization of a moral matter as the only feasible way for him to make sense of his life.

We can readily understand a narrative logic by which a man who commits a violent act while overcome with jealousy might, say, refuse to recognize his own moral flaw and continue on a downward spiral towards greater violence, the repetition of the same pattern with other women, etc.; or, on the contrary, learn to distinguish between genuine love and possessiveness. Either way, we remain within a scene that anyone can imagine sharing. But if the character is “suffering” from a diagnosed “disorder,” he is by that fact segregated from scenes in which anyone might participate, and narrative alternatives seem to be limited to him either following the approved therapeutic instructions towards greater health (as has been the potted plot of many edifying films on alcohol and drug abuse), or ignoring them—only in the latter case do we have the chance for an interesting narrative, because the protagonist might be rebelling, noir-style (or Cuckoo’s Nest style), against some repressive authority. But the interest of such a narrative depends upon the audience’s suspicion (at least) of the therapeutic order that is being challenged, which would in turn require some residual “humanism”; what, though, if the therapeutic order is completely accepted (which would really just signify the complete victory of the victimary order—racism, sexism, homophobia, etc., are already understood to be pathologies, and as more and more attitudes and actions—as is now the case for more and more male-female attraction—are grouped under these categories, there will be nothing but pathology, trauma and healing)? As Bowman says, this particular film doesn’t make a big deal out of it, but no doubt more and more films, novels, TV shows and so on will. And the difficulty is further complicated once we take into account all the mood-transforming (legal) drugs now available—in what sense do their alterations of one’s character affect our understanding of and identification with another’s weaknesses and strengths, faults and merits, responsibility for actions, and so on? Is one deemed “flawed” for becoming dependent upon a substance that, according to “objective” measures, “improves” one’s “performance” in important areas?

It may be that the universalization of the therapeutic provides a kind of solution: the division of the world into therapists and patients and, indeed, each individual into therapist and patient, might create a new kind of scenicity. This would be Freud’s revenge upon us for having reduced him to the status of a crank or fraud in recent years, because this is pretty much how he envisioned the long term impact of psychoanalysis. But the process of coming to realize that what one took to be a normal desire is in fact a virulent pathogen infecting the social body might be of considerable interest (“shame” would clearly not be the appropriate response to such a discovery—rather, we would expect the protagonist to gradually come to replace his native vocabulary with one or another normalizing procedure through various social mismatches); as might the process of resisting the “mimetic” impulse to respond in kind to another’s actions and coming to adopt the proper therapeutic response. These would be the kinds of disciplinary shifts we would all be undergoing all the time; more complex narratives would have characters taking on the “patient” role in one disciplinary setting and the “therapist” role in another, the diagnostician (ironically?) becoming the one most in need of diagnosis, etc.

This transformation would redeem the post-humanist argument against any human essence residing “in” each individual, in favor of the claim that we are all constituted by historically specific discourses, power relations and so on. Post-victimary thinkers can contemplate such a change without the fantasy of “resistance” that still clings to what remains of “cultural studies” style analyses—Foucault did eventually come to realize that to point to “uneven power relations” is not the same as identifying self-evident injustice, and we can certainly further that recognition by distinguishing, in any normalizing order, between those drawn into the normalizing whirlpool and those on the margins who, whether they call it “resistance” or not, define themselves not as outside of those normalizing systems but as the other of those systems. The best example I can think of is one I have seen in social media and public discourse: the young woman who is exquisitely aware of all the “strategies” used by the media to normalize women’s bodily appearance (thin models, photoshopping, ads for dieting, “fat-shaming,” etc.) and not only nevertheless “inhabits” those implicit models (judges herself as inadequate at every point in relation to them) but acquires and maintains her critique of them by doing so. The next step is to forge a new style that rewards those sophisticated enough to dis-identify with the model one cannot help but inhabit—a style that then enters and is (to use a perennial term of radical frustration) “domesticated” by those normalizing systems. (A mediating step here is probably to inhabit some therapeutic discourse encouraging positive body image, and then to dis-identify with that discourse through a recognition of its own normalizing paradoxes.)

In the terms I set up earlier, this would mean that meaningful cultural and political communication requires that we inhabit one another’s originary fantasies and violent imaginaries, and work with our interlocutors towards reciprocal dis-identifications. To return to my “Selfy” post, it seems to me that in this context the “self” is more important than the “person,” or the “soul” (the immortal part of one’s being)—“personhood,” in Bartlett’s account, involves the withdrawal of the lovers from the social into a private scene; the self, which I suggested is best understood as the reflexive assertion of sameness in the semio-social flux, can more easily be seen as something one shapes and deploys in various ways. We find it very easy to speak of having various “selves”—a work self, a family self, a being-with-friends self, etc. It is a short step from there to think of the self as a kind of “probe” that one uses to elicit and frame the violent imaginaries and originary fantasies of others by positioning it at the convergent lines of the fantasy and imaginary. At the same time, the materials of one’s own fantasies and imaginaries would necessarily be put to use—the problem is one of finding a line of symmetry between the emergent scenes the interlocutors, respectively, bear. Sometimes it will be my unconscious scene that fails the reality test and needs your intervention to find another avenue towards the construction of a mode of ostentation; sometimes yours will rely on mine. At the same time, we can operate on varying scales, from the intimate to the global, from the highly idiosyncratic scene to those seized upon, exaggerated and intensified through refined and cynical propaganda techniques. Freedom would result from the study of the means of normalization/subjectification, a study that frees one, provisionally, from submitting to the violent imaginary of being normalized out of existence, and from indulging the originary fantasy of possessing a self outside of those processes.

The therapeutic order supersedes the victimocracy, but what supersedes the therapeutic order is the disciplinary-bureaucratic order. One thing that has been noted, but not often enough, is that the modern administrative state represents a gradual abolition of the liberal democratic order. Equality under the law is meaningless when there really is no law, but only grants of bureaucratic power to intervene without limits in the most private domains of everyday life; nor do elections mean anything if the permanent bureaucracy rules anyway; nor can basic freedoms of speech, worship and assembly be guaranteed (or even taken seriously) when any action carried out individually or collectively can be deemed a threat to some bureaucratic agenda (maybe those who now treat Presidential elections as a festival for celebrating our victimary bona fides have a prescient understanding of the increasingly symbolic meaning of such rituals). The boundary between speech and action has been abolished—there is no way to distinguish a protest against some EPA agent commandeering your backyard from a threat to him.

Even more, the post-humanist understanding of the self as constituted by a constitutive fantasy and violent imaginary demolishes the philosophical foundations of the liberal democratic order, which presupposes a kind of blank slate equality in all individuals. Liberalism and modern democracy wish to represent the individual as he or she enters the market, or the ballot box, with no relevant pre-existing characteristics that might qualify one’s status or right to enter either. But if we come to read any individual as marked by some distinct form of normalization and counter-normalization, as having a constitutive fantasy and violent imaginary that the therapeutic/corporate/consumerist order always already has designs on and, in fact, has to a great extent designed, we cannot help but assign and constantly revise probabilities of dangerous or costly action to each individual as they enter any institution (nor will there be any way of reversing the obsolescence of any institution that doesn’t allow for a sufficiently thorough and expeditious risk-benefit analysis of any individual).
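As a toy illustration of what “assigning and constantly revising probabilities” might look like in practice, here is a minimal Bayesian-update sketch in Python. The base rate, the signals, and the likelihood ratios are all invented for illustration; nothing here is proposed as an actual institutional practice:

```python
def update_risk(prior: float, likelihood_ratio: float) -> float:
    """Revise a probability in light of one new signal,
    applying Bayes' rule to the odds: posterior odds = LR * prior odds."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical: an institution starts every entrant at a 1% base rate,
# then revises as each observed signal arrives (a likelihood ratio > 1
# makes the entrant look riskier, < 1 less risky).
risk = 0.01
for lr in (3.0, 0.5, 8.0):
    risk = update_risk(risk, lr)
print(f"revised risk: {risk:.3f}")  # ~0.108
```

The mechanics matter less than the consequence the paragraph draws: under such a regime no one enters any institution as a blank slate, since every signal one emits is already a revision of one’s profile.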

In other words, we could play at a kind of originary equality as long as shared norms regarding morality, law and political legitimacy were intact (at least among social elites, and those who aspired to be such), but no longer. The victimocrats are so afraid of profiling because they know what an obviously effective practice it is, how closely it parallels their own violent imaginary, and how their own attacks on a justice system aimed, above all, at deterring victims from enacting their own private revenge make it the only remaining plausible means of self-protection. At any rate, if everyone is to be profiled constantly, each must carefully self-profile, and since strict adherence to normalizing discourses is by definition more available to those in the fat part of the curve, many will need to compose self-profiles, or, simply, selves, that promise to maximize benefits over risks in new ways—and hence to refer more explicitly to the constitutive fantasies and violent imaginaries, to figure them so as to figure out new ways of deferring them.

The only viable political response to these developments that I can imagine is to form disciplines (for our purposes, let’s take this to mean any kind of association that people form in order to explore something, get good at something, or identify themselves in a particular way) and use the antinomies of the bureaucratic state to defend those disciplines and render the repressive vehicles of central power more incoherent. The victimocracy, by definition, chafes at constraints—it is driven by increasingly urgent resentments. The impatience of those who must have same-sex marriage, or have the Washington Redskins change their name, right now, resembles nothing so much as the temper tantrum of a 3-year-old. Discipline means constraints—it means we don’t seek to realize our constitutive fantasies, or treat our violent imaginaries as realities. Discipline is a commitment to reality testing. The therapeutic is part of the emergence of the disciplinary order, but its outsized prominence up until now can be attributed to its ability to usurp the disciplines concerned with countering social pathologies, making its usefulness undeniable. There are disciplines other than the therapeutic, though—the therapeutic itself can be annexed to a wider disciplinary order concerned with our scenic nature, and therefore with pedagogy and the perpetual orientation of human interaction around some center, which is to say some kind of property.

And disciplines give way to bureaucracy—a bureaucracy is nothing more than a discipline that must account for itself in relation to some public, or other disciplines—those of us in the academy might like to do nothing but inquire and teach, but we have to concern ourselves (or we have to siphon off resources to those who so concern themselves) with accreditation, student admission, maintenance of the grounds, federal anti-discrimination laws, etc.; the entrepreneur might just want to invent, innovate, and spread the results of his inventions and innovations, but he must deal with tax codes, employment law, all manner of federal regulation, etc.—in both cases a substantial bureaucratic exoskeleton is secreted. At a certain point the exoskeleton becomes too heavy and crushes the body; meanwhile, less encumbered disciplinary forms emerge to recover the original impulse to create and build. But the bureaucratized institutions have all the political advantages, as they can pressure the government to make demands that favor their own strengths, and they can make gifts to a wide array of constituents (unions, suppliers and distributors, local governments, etc.).

Perhaps the above account exposes something of my own constitutive fantasy and violent imaginary. Even so, it seems to me that the relation between highly formalized, politicized and risk-averse bureaucracies on the one hand, and various “nomadic” figures (lone entrepreneurs, but also various “rogue” disciplines, artistic and political, that mock and subvert the bureaucracies from within) on the other, provides a more accurate accounting of the contemporary socio-political field than the categories of liberal democracy (equality, voting, rights, freedom, etc.). To allude too briefly to Eric Gans’s latest Chronicle on the Cartesian iteration of the Hebraic Declarative Sentence-as-the-Name-of-God in the foundation of the modern internal scene of representation, the “self” seems to me to be this nomadic figure: if the “I” is because it thinks, the self is because it was, with each self-reference iterating and distancing the self from prior iterations. The self can inhabit the marvels and pathologies of the surrounding world without claiming to have any substance outside of those marvels and pathologies; the self can process the normalizing discourses and institutions that average out while reproducing those marvels and pathologies; and the self can replicate or clone itself in novel forms that elicit and display the fantasies and imaginaries those discourses and institutions and the selves inhabiting them seek to manage. At any rate the hierarchies are not going anywhere for the foreseeable future, even though both left and right have their respective fantasies regarding how they might be mitigated or even abolished; the best we can do, and perhaps the basis of a kind of left-right alliance broached by both Glenn Beck and Rand Paul, is to force the corporate order off its dependence on state largesse and on the state’s absorption of its socialized costs, so that it has to stand on its own, without exceptional legal protections or economic subsidies (what the state should be doing, once its umbilical cord to the corporate order is cut, reopens the right-left abyss). The self as brand/anti-brand/re-brand is probably the most productive assumption for now.

It is best to remember, though, that the self is a scavenger, gathering nourishment for the person and the soul, and whether the nomadic self I am describing here can find such nourishment is an open question.

April 25, 2014

Mimetic Culture, Liminal Culture

Filed under: GA — adam @ 9:39 am

There are two kinds of moral innovations: one, upward, in which more distance is created between desire and appropriation; and the other, downward, in which that distance is shrunk by the violation of some prohibition with impunity (the innovation lies in the intimation of unlimited possibility, which mimics the generation of human possibility by the originary act of deferral). The great "axial age" moral innovations upward took place during the period of manuscript culture, when writing (and alphabetic writing, in particular, at least in the West) had been invented and was in use among a scribal elite and/or a small reading public sharing rare texts—manuscript culture was still deeply embedded in orality (texts were used to facilitate oration, or memorization), while making it possible to memorialize oral scenes and confer upon them the prestige and permanence of the written word—it is telling that the figures of Moses and the Hebrew prophets, Socrates, Confucius, Jesus and the Buddha are all very often situated within "quiet" scenes, dialogues with a few participants, or God, bordering on and often entering a silent inner dialogue with(in) the self. Words are inscribed in one's heart, and can be recited exactly as they were originally said as many times as desired, enhancing the sacrality of those particular words, enabling the construction of communities devoted to their preservation and effectuation.

Print culture (McLuhan’s “Gutenberg Galaxy”) spreads the results of manuscript culture far more widely, while introducing the capacity and compulsion to fragment and reassemble, and therefore criticize, parody, and re-contextualize those results. Manuscript culture strives to approximate writing not just to speech, but to speech between co-participants in discussions over what is worthy to be preserved; print culture strives to make speech more like writing—normative, widely intelligible, uniform. (Part of the prodigious fertility of the Renaissance period lies in the interplay of the norms of manuscript and print culture, and of expanding literacy and the more varied layers of orality brought within the orbit of the written word.) Certainty, rather than proximity to the origin, becomes the primary value of reason, actions start to seek out widespread publicity rather than recognition as an enduring model, and thought aims at material transformation rather than contemplation. This transformation involves significant moral innovations, in particular those associated with rigors of life in the modern marketplace: punctuality, frugality, patience, politeness, respect for rules, large scale coordination, etc., along with a much less widely shared, but at least generally valued, fearlessness before the unknown and untried. It has also abetted new and unprecedentedly brutal forms of violence and empire, as control from the center was eased considerably, and made difficult to resist by the increasing specialization at the margins.

What about our emerging electronic and, especially, transparent and algorithmic culture? The intensified culture of celebrity and publicity thereby generated most obviously privileges the transgressive over the continent, the brash and boastful over the modest—the invisibility of the virtues of manuscript culture is intensified by the demand that everything be made visible, literal and blatant. The brazenness and self-exemption from morality that print made available to the inventors and adventurers of the modern period are now available to anyone, and it is hard to see any reason why one should display even the most minimal patience. Most people, whether they realize it or not, assume that every individual is a god unto him (or her) self. At the same time, practical learning and participation are strongly encouraged, and can curb the excesses of self-idolatry. I will return to the question of the actual and possible upward innovations native to our now-native culture.

Let’s imagine, as a conceptual baseline, a near absolute mimeticism. That is, imagine that every desire is immediately and comprehensively expressed in posture, gesture and word, and every posture, gesture and word is in turn immediately and comprehensively responded to by whomever it is directed towards, and whoever witnesses it. Such an order would involve constant mimetic contagion and hence aggression and violence; it could build no institutions and have no learning. Not exactly none, though, because insofar as it is a human community, the mimeticism could only be near absolute—our barely human community is at least able to restore if not maintain order through the emergence of spontaneous forms of unanimity, in which mimeticism is transformed momentarily into a stabilizing force, directed at more or less arbitrarily chosen targets of discipline and punishment (a very Girardian model, but I don’t assume that actual scapegoating, in the sense of human sacrifice, is necessarily the primary institution).

Something of this absolute mimeticism still resides in every human, and we still respond automatically to a smile or frown, a hint of aggression, a subtle offer of reconciliation, etc. But, of course, these spontaneous reactions are already highly mediated, as there would be no “hints” or “subtle offers” in the originary human community I have hypothesized—everything would be directly out in the open. The point of the originary barely human model is to provide us with a way of measuring moral innovation. The first step beyond near absolute mimeticism would have to be someone not responding immediately, repeating the originary hesitation, allowing an aggressor to have his way, while signaling (and having that signal received) that he will not continue to have his way indefinitely. Upward moral innovations are always of this kind: a new hesitation, but one that organizes posture, gesture and word together in a new way so as to present an imitable mode of hesitation. And downward moral innovations recognize the fragility of such ascents, and recover and display against them the sheer power of a more direct action-reaction cycle. We could see human history as the fluctuation and dueling of upward and downward innovations.

So, what replaces, in the upward moral innovation, the direct, automatic, spontaneous, full and commensurate response to an other’s expression of desire or resentment? It would be trivial to say, “an indirect response,” as that would beg the question—we must imagine, then, an equally direct, automatic, spontaneous, full and commensurate response, but to the other’s expression of desire or resentment as a sign, rather than appropriative act. A sign is, in the first instance, a truncated act; to treat the other’s act as a sign is to treat it as a truncated version of a larger act, an act that entails consequences signified even if not materialized in the act itself. Treating an act as such an exemplary sign involves an audience other than the actor himself—the third person we now assume on the scene is part of the shaping of the act, one that the potential respondent, but not the actor, accounts for in his response. The act would set in motion a chain of consequences that would require for its closure the intervention of the third and perhaps other parties; that future closure is what makes it possible to treat the act as a sign. Treating the act as a sign is an attempt to obtain the closure without the consequences. And in turn, the respondent becomes an exemplary sign.

To paraphrase Aimé Césaire, Western men and women speak all the time of freedom but never cease to stamp out freedom wherever they find it. The current rampage of the victimocracy is no accident—demands for freedom on the liberal and democratic models are really demands for revenge against those who, one imagines, have expropriated one's freedom. But the first freedom is the freedom from one's own desires and resentments, and only in the most extreme instances is the acquisition of such freedom not within one's own grasp (one just has to stop grasping at something else); at the same time, such freedom is always provisional, always suffused with doubts, always needs to be recovered, and can have no external guarantees. Demands for economic and political freedom are only sustainable insofar as they aim at the space needed to practice and exemplify that first freedom. Has a single modern political theorist ever said that? Maybe—I haven't read them all—but it's certainly not any part of our liberal democratic commonsense—even the awareness one finds in thinkers like de Tocqueville and the American founders, to the effect that moral responsibility must attend the individual freedom democracy unleashes, sees such responsibility as a concession to reality by enlightened self-interest—in other words, a more effective way of getting what one wants (or, in more theological terms, of imposing one's own law on reality). (Only high manuscript culture, forged in self-adopted or embraced exilic relation to monstrous imperial orders and broader social decadence [by prophets, monks, small communities of teachers and disciples, self-lacerating disaffected elites], has ever understood this first freedom—which is no doubt the source of its continuing power today.)

Environmentalism admonishes us to shrink our "footprint"—they mean carbon, a trivial matter, but the metaphor is a nice one for thinking through the possible moral innovations enabled by the transparent and algorithmic. It does seem to me that a highly moral way of passing through this life is to leave only the slightest traces of footprints, i.e., identifying markers that can be definitively traced back to one's own intentions and efforts. Rather than clearly demarcated and strategically located footprints, better to do something to reveal the world as a world of signs, and oneself as just another one of the signs, one that has lowered the threshold of significance for yet to be revealed signs. Revealing the world to be a world of signs is to reveal the world as composed of truncated, fractured, fragmented actions unmoored from the desires and resentments that originally motivated them (a radical de-mimeticization) and arriving far away from their intended destinations. Even those bits and pieces of actions can be broken down further—excessive exposure to them would restore their wholeness and render them sentimental and sensationalistic, assimilating them to one or another "classical" model—as can the very act of breaking them down. This is not just a contemplative position within our transparent and algorithmic reality, in which everything already tends to get reduced to a gesture to everything else—it is always possible to withhold the mimetic response and represent the other's act as an incomplete one and hence a sign, a sign of which one tacitly pledges to be the bearer. The algorithm makes it possible to project hypothetical transformations across unlimited, virtual fields—the fall of a sparrow can be aligned with various possible initial conditions to produce mappings far into the future and across vastly divergent causal chains, the point being to facilitate the reduction of any act to a fluctuating data point, one radically uncertain in its effects but maximally significant in its articulations with other signs. This moral innovation would install, there where mimetic culture presently is, liminal culture, a culture that continuously lowers the threshold at which we perceive, feel, and intuit emergent meanings. Old cultural forms like the maxim and the epigram might make a comeback, as such literary forms can be put on a t-shirt or a web page, or tattooed on one's skin—but maxims and epigrams that subvert and invert some vapid or bullying slogan or public imperative.

Such a moral innovation would follow in the footprints of the print revolution, with its privileging of what Walter Benjamin called "mechanical reproduction"; but, well beyond that, it reaches back to the originary scene, where the sign was created through the truncation of an act, rendering it available for reproduction, segmentation and new articulations. Remembering forward, further de-mimeticization requires further specializations, specializations that lead, not to the mutilation of the individual but to participation in a culture of overlapping disciplinary spaces. Take, for example, the operative imperative for "Seinfeld," "no hugging, no learning," a slogan Eric Gans discusses in one of his Chronicles on the show. "Seinfeld" is often taken as accelerating a shift towards a more thoroughgoing irony in American popular culture, marking the point at which nothing is free from irony, i.e., the point of "cynicism." And it is true that if you watch pre-Seinfeld sitcoms, even the "boundary pushing" ones like "All in the Family," there is always some sentimental, preachy substratum to the humor—in the end, some things remain off-limits to laughter. To see this as a shift toward a general cultural cynicism is to miss the point, I think—it would make more sense to see this development as a form of social specialization. The point of a TV comedy is to make you laugh—it should be judged according to some measure of quality laughs per 23 minutes, not the "lessons" it teaches. Why would anyone turn on a TV show to learn about life or morality? If we really did so, that would be an alarming sign of cultural decay. You turn on a TV show (at least a comedy) to get something you couldn't have otherwise: pieces of the world turned around so that situations that are not ordinarily funny become so. Once you realize that, attempts by the entertainment industry to tend to your character become ludicrous and insulting, and, anyway, the point of gesturing to moral pieties was always to avoid professional death by "controversy," and was therefore always cynical itself—and, indeed, despite "Seinfeld" and all its would-be imitators, earnestness abounds in American culture. And specializing in comedy is very different from specializing in one stage in the production of pins, as it relies upon anthropological, historical and sociological intuitions—what is funny today is not what was funny 5 years ago, or, often, 5 days ago.

A similar development in higher education would be welcome, particularly in the humanities—rather than going to a literature or philosophy class in order to (at its best) enter the ongoing conversation over which works and ideas should be preserved, wouldn't it be better for your literature or philosophy professor to provide you with a form of literacy, a way of working with language so as to generate new meanings out of existing ones that you could only with significantly greater labor and a lot of luck acquire for yourself? As with the specialist in generating laughter, the algorithmic (or what is coming to be called "digital") humanities would enable the student to reveal new fields of signs as mutations of more familiar ones. On the level of scholarship, while mimetic theories ask what is "literature," or "reason," or "meaning," or "humanity," or "society," and so on, liminal theories would ask, where is the boundary between all of these categories and whatever their "others" might be at a given moment—this kind of inquiry would also involve learning new modes of literacy, insofar as the boundaries are always shifting, in part as a result of the inquiries themselves. (In a sense, this would make all pedagogy and even all scholarship "remedial"—part of the problem with the traditional humanities, or at least an increasingly unavoidable part of the problem, is that students can't really "read" Plato, Shakespeare, Joyce or any of the other "great books"—they can, at best, mimic their teacher's reading of the texts as already read, which they must be insofar as they have already been designated "great." Providing students with reading practices that would reveal these texts to them in their otherness, with all the messiness and stupidity that is sure to follow, might lead to something interesting, even if it's not likely that many instructors will know what to do with it.)

I suppose this would mean that originary thinking is itself a new specialization, a discipline focused on revealing the consequences and implications of the maxim “representation is the deferral of violence.” Our project would be to show what difference this maxim makes in all of the disciplines with which ours does or could overlap. What does the originary hypothesis enable us to see that we wouldn’t otherwise? Does that mean that one doesn’t claim that the originary hypothesis is true, or gets us closer to the truth of human being than other ways of thinking? Well, to the extent that we are invested in or converted to originary thinking we have concluded that it is more revelatory than other ways of thinking available to us, which is pretty much synonymous with “truer”; but insofar as there is no neutral set of intellectual standards by which the relative truth of theories in the human sciences can be determined authoritatively, I would say we let the “long run” settle the question of truth and attend to our business of lowering the threshold of human things we can make new sense of.

To return to the concepts examined in my previous post ("Selfy"), it seems to me that the kind of disciplinary inquiry I am proposing as a moral innovation requires self-control, self-abolition and self-creation: the disciplinary self is a creation of the inquiry itself, much like the "narrator" of a novel, who is neither the author nor a character (and where the narrator is a character, most obviously in first person narration, the reader posits another narrator behind the "I"), who exists only so long as the novel does, and is obliged to follow the rules of coherence and consistency constitutive of the narrative. Likewise, the disciplinary self is created by some boundary question or anomaly, and must remain the "same" insofar as questions raised must be answered or questioned in turn, and rigorous controls must be in place to ensure that the "real self" external to the inquiry, with its resentments and desires, does not interfere—even if those resentments and desires might, properly treated (again, like the relation between author and narrator), inform the disciplinary self. And into what does the disciplinary self inquire: well, among other things, the slippages within and between "identities," a central cause of "threshold" questions in the modern world; and "personhood," perhaps first of all the boundary between the constitutive fantasy of personhood (one's own absolute erotic centrality) and its never-completed reality of shared erotic centrality. (I refer, again, to my previous post, and in particular my reading of Andrew Bartlett's originary analysis of personhood.)

April 14, 2014

Selfy

Filed under: GA — adam @ 7:55 pm

Everyone is taking selfies, but does that mean that no one is selfy, that is, self-like, anymore? It's a serious question, even if it is prompted by the hilarious new song (I suppose that's what it is) titled "Selfie," which features a young woman with an attention span of approximately 3 seconds, whose only anchor in a stable reality seems to be the compulsion to take a selfie (and announce that she is doing so) every 10 seconds or so. The song, which, like so many other products of contemporary culture, is a parody so immersive in its object as to blur the boundary between parody and celebration, seems to suggest a direct correspondence between the ubiquity of the external and ultra-literally named marker of "selfiness" and the absence of any inner experience of the same.

Freud's "Copernican turn" was his claim that human consciousness was on the margin, not at the center—the margin, more specifically, of immense and obscure unconscious processes that we could only ever know very imperfectly, and only affect minimally. Freud used the term "Ego," which is not necessarily a close synonym for "self," but we have already introduced the term "consciousness," and cultural Marxists following in Freud's footsteps (Lacanianly mediated) introduced the term "subjectivity," to cover conceptual territory aimed at including and usurping that covered by "Ego," "self," "consciousness," and others, like "individual," "person" and "identity" (not to mention "soul"). The notion of "subjectivity" aims at greater precision, drawing on phenomenology to conceptualize the subject "constitutively" embedded in a world of objects and inter-subjectively mediated intentions, but also contains an implicit taunt in its allusion to subjection. No theorist of subjectivity will admit to being only a subject herself. All these terms, except for subjectivity, are used so widely and have been used for so long that it would be ridiculous to dismiss them as "mystifications" (and not only would I not dismiss "subjectivity," either, but I will further explore the term's implicit argument that modern society has progressively marginalized, impaired, diminished and even shattered what was once taken to be the human center). The task for originary thinking is to explore the overlapping terrains covered by this sprawling vocabulary, and to make sense of them as so many ways of being signifying beings.

Andrew Bartlett gets us off to a very good start in his “Originary Human Personhood” in the Fall 2011 issue of Anthropoetics. I will be more interested in the “self” than the person in this discussion, but Bartlett’s originary analysis of personhood, and the distinction he draws between “person” and “self,” suggests a way of starting to see these concepts in relation to each other. Starting from Eric Gans’s contention that the originary “person” was God, Bartlett proposes that the appropriation of a kind of derived divinity in the constitution of the human “person” takes place through the mediation of the private, erotic center. To be a person is to be lovable and to love (to confirm the lovability of another)—to be an inexhaustible source of desire for another who is in turn such a source for oneself; to be aligned with another as reciprocally orbiting centers of meaning and concern capable of shutting out the world. Meanwhile, Bartlett distinguishes the “person” from the “self” as follows:

To be a self is not quite yet to be a person. The self designates rather a denuded, anesthetic entity lacking both the concrete bodily vulnerability and the power to create meaning that belongs to the person. “He is a wonderful person” sounds fine; “he is a wonderful self,” awkward. “She is a giving person” makes sense; “she is a giving self” rings oxymoronic. The undesirability of the reputation of “selfish person” tells all: the self is not the person. To have achieved personhood and to have personality, to be personable, to have personal relationships–those are goods. But to have a self–well, we all have one of those, it takes no work to have one of those; having a self makes no distinction–what can one do with oneself? The erotic self–especially–knows that what it can do with itself is limited. (The erotic person, however, may seem limitlessly beautiful.) In the originary event, the moment of consciousness of self is the moment of resentment. In resenting the sacred center, we first experience ourselves as violently dispossessed by it. Originary selfhood would thus be resentfully but not interpersonally human. In naming the sacred Object only as object of resentment, we are not yet naming God as a person: the sacred Other whom we selfishly name in resentment is not the divine Person whom we name in love. By contrast, to love God as originary Person is to love something of the way the sacred central Object has moved and moves us. Likewise in human exchange, the self-dispossession of resentment opposes love. We cannot have true love for the one against whom we feel real resentment. These contrasting associations of the self with resentment and the person with love, it seems to me, are worth preserving.
And yet there is value in owning the mere originary self as a kernel of sign-using consciousness prerequisite to personhood. Individual agency, free will, moral responsibility: several founding texts of Generative Anthropology affirm the value of the contributions made by these categories to the project of our self-understanding. Acclamations of even a resentful free will are a valuable counterweight to the post-structuralist denials of agency that would sever the connection between our internal scenes of representation (i.e., our imaginations), and the many external worlds, local and global, where exchanges of signs and things produce concrete results and where ethical performances have often incalculable consequences for good and evil. Anybody who uses language is a self endowed with free will; to use the sign on the scene of representation is to be a human self. My first qualification aims simply to spotlight the fact that a self consumed by resentment militates self-defeatingly against the openness to exchange of others’ personhood, and therefore against its own. Resentfulness is parasitic on love. The totally resentful self is not yet a person because such a self must abolish without loving the otherness of the center, and the desire to abolish the center makes exchange with others as centers, as persons, impossible. Distinguishing between selfhood and personhood may, therefore, illuminate the boundaries between originary resentment and originary love. If I am consumed by resentment of the other, I have not stepped back from myself to recognize the otherness in myself. I have not learned to imitate the sacred central Other withdrawing itself in the founding move of erotic activity from which human personhood is derived.

Bartlett's analysis explains (to follow in the tracks of his own linguistic observations) why "selflessness" is praised, and why the extinguishing of the self (as in Buddhism) can be a transcendent project—none of which would apply to "personhood." If the self is a "prerequisite" of personhood, then the purely resentful, self-protecting self must be a kind of "skeleton" supporting the fully "embodied" person. Implicit in this argument, it seems to me, are a couple of other consequences: first, that the self can survive the obliteration of the "person" (Bartlett does not say so, but how could we deny that one's erotic centrality could be demolished under certain conditions?); and second, that as long as the self persists, the reconstitution of the person remains possible, while the pulverizing of the self, if we imagine that to be possible, would make any such restoration impossible.

This identification of the self with resentment also provides insight into the grammar of "self," in particular its use in reflexive pronouns, which itself derives from the ancient identity of meaning of "self" with "same"—when we say "itself," we mean the same "it" that was just referred to. In that case, the self is the sheer sameness of the individual, whatever it is that makes the individual that individual from moment to moment, year to year, decade to decade. Originary resentment is what makes us our "selves," while I suppose the originary love of the person is ecstatic, taking us outside of the continuous flow of the self-same. Would that then mean that feelings of guilt and shame (i.e., conscience) are attributes of the self, insofar as those emotions are experienced when we have not been self-same, have broken the line of continuity (maintained through promises to self and others) that makes commitment possible? And the "sovereign subjectivity" so despised by post-humanist theories would, then, also reside in the self, or would rather be the self, which, like an ever-vigilant government, is constantly policing its own borders, keeping out intruders and keeping intact the needed defense mechanisms. Paranoia would also be an attribute of the self, and schizophrenia its breakdown.

More interesting even than all of that is the light shed by Bartlett's analysis on the particular vulnerabilities of both person and self in a decentered, centripetal modern world. I have wondered for a while why the sexual revolution has been such an obsession of liberatory movements (political and artistic) from the Romantic period on, and why modern means of mass manipulation target the erotic so relentlessly. In other words, if Bartlett is right, then a possible strategy of assault and domination becomes visible. The specific articulation of self and person Bartlett outlines would be the basis for an individual who can think for him/herself, resist illegitimate demands, live within his/her means, recognize human limitations, and so on. If the erotic can be plugged into broader circuits of desire driven by commodity production, then personhood can be kept under constant pressure—the fantasy Bartlett outlines in his essay as the basis of the erotic imaginary ("You find yourself surrounded and alone in the center and you notice that all the people on the periphery–who knew? — suddenly "want" you erotically. They all want consummation with you, the person…"), only to dismiss it as unrealistic and undesirable, would be the source of one's vulnerability to mass-produced erotic fantasies, only in this case without any place to withdraw to (such withdrawal being, in Bartlett's model, the way one transitions to a more mature eroticism).

Another prong of this assault would target the self. We could see all the normalization processes of modern societies, in which disciplines like medicine, psychiatry, sociology, economics and so on become disciplinary practices aimed at homogenizing and regulating millions of individuals circulating through modern institutions (first of all by teaching them reading, writing and arithmetic), as directed at the self first of all. All these practices can be reduced to devising and enforcing the procedures needed to maintain "sameness" across a bewildering array of institutions, situations, obligations, norms, etc. We could see the early modern period studied by Michel Foucault, in which these institutions were set up and given their legal and political foundations, as excessive, often brutal, ad hoc and easily exploited by charlatans and power-hungry psychopaths, and yet, for all that, necessary and largely successful. But with the myriad tentacles of the marketplace (most obviously, the massive explosion of pornography in recent years) undoing the erotic foundations of personhood, the processes of self-regulation may be getting more desperate and haphazard, drawing upon the new bio-political disciplines (drug therapies, gene research, etc.). It may be that more and more selves can only remain the same insofar as they adhere to increasingly arbitrary and rigid regimes of regulation.

Obviously, it would be impossible to quantify or be certain about any of these claims—what would it mean to say fewer people are less completely persons or selves now than was the case, say, half a century ago? Maybe what looks (inevitably) like disintegration to those embedded in a particular mediated mode of being is simply a transformation into terms of personhood or selfhood we are not able to recognize. Maybe, more radically, the entire vocabulary of human self-reference is being remade, to the point that somewhere down the road people won't really understand what we once meant by things like persons and selves. To take just one example, the fact that the genre of romantic comedy in the movies is just about defunct suggests that certain key elements of erotically mediated personhood are no longer operative—the movie critic James Bowman associates the esthetic power of the genre with the belief on the part of the couple (and the audience) that the two people were "made for each other," were "destined to be together." Such a belief seems to me a "necessary appearance" (to use Arendt's term for beliefs about reality that survive all attempts at demystification) for the closed erotic circle Bartlett identifies as the source of personhood—if such a belief is becoming as alien to our sensibilities as tragedy has long been, then we are indeed witnessing a sea-change in the self-person configuration.

At this point I don't want to pursue this analysis further; I just want to suggest that originary thinking should pursue such questions as the contemporary state (or states) of the "person," the "self," and other originary elements of the human; and we should do so in a way that is as divested as possible from, or that defers, any desire for any particular outcome. That is, no apotropaic invocations of the preferability of market society to other forms, or of the superiority or inevitability of liberal democracy—or, for that matter, any denunciations of the market or prophecies of doom regarding liberal democracy. I would recommend refusing the use of a particular historical form of personhood or selfhood as an invariant model against which we find contemporary forms to be degraded versions; or using an idealized model of the self or person in order to condemn contemporary institutions for "distorting" that model. Of course we must be interested in the outcome of any moment in the unending process of hominization; but the clarity of analysis will benefit from our keeping that interest as minimal as possible, limiting it to simply identifying the threshold at which new modes of signifying emerge. What are the new modes of attentionality; how are we seeing and giving ourselves to be seen (and heard and felt and imagined) in new ways?

OK, I'll pursue it just a little further. It seems to me that what is central to modernity is something that Marshall McLuhan associated with print culture—the capacity and compulsion to analyze phenomena into ever smaller fragments that can in turn be recombined and disseminated in new ways that bear less and less trace of their origins. The more the person and the self can be reduced to a set of fragmented, stereotyped gestures that can be turned into esthetic formulas and models of imitation aimed at directing the "subject's" attention in pre-programmed ways (what Judith Butler, following Derrida, once called "citationality," referring to the fact that we are always citing and quoting others, even or especially when we believe we are most "ourselves"), the less we are persons and selves. Restoring, re-imagining or instituting new forms of personhood and selfhood, or imagining forms of individuality or "agency" irreducible to those terms (we could just become indeterminate processes of semiosis, for example) would then depend upon entering, interfering with and commandeering, when possible, that process of analysis and composition of the elements. The skeptical, suspicious resentment of the self would be needed here, as would the ecstatic, even if fleeting, enthusiasms of the person. The problem would be to acknowledge that one is always taking on others' words, down to one's inmost being, while remembering that they are, even when most our own, in the end still others' words.
