GABlog: Generative Anthropology in the Public Sphere

July 2, 2014

Brain as computer

Filed under: GA — Q @ 9:22 am

The basic premise of much current brain research seems to be that the brain is a biological computer and evolution is the programmer. Theoretically, then, we should be able to find the codes and understand the working of the brain. According to a 2010 article on CNET:

Researchers at the Stanford University School of Medicine have spent the past few years engineering a new imaging model, which they call array tomography, in conjunction with novel computational software, to stitch together image slices into a three-dimensional image that can be rotated, penetrated and navigated. Their work appears in the journal Neuron this week. To test their model, the team took tissue samples from a mouse whose brain had been bioengineered to make larger neurons in the cerebral cortex express a fluorescent protein (found in jellyfish), making them glow yellow-green. Because of this glow, the researchers were able to see synapses against the background of neurons. They found that the brain’s complexity is beyond anything they’d imagined, almost to the point of being beyond belief, says Stephen Smith, a professor of molecular and cellular physiology and senior author of the paper describing the study: One synapse, by itself, is more like a microprocessor–with both memory-storage and information-processing elements–than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth. (Elizabeth Armstrong Moore CNET).

A high-end computer chip such as a quad-core Intel i7 has 731 million transistors, which act as switches. The human brain, on the other hand, has an estimated 86 billion neurons and 1,000 trillion synapses. “In a related finding there was a new article that suggests the difference between human and other primates is the space between neurons in the prefrontal cortex, with humans having more space, which is speculated to allow more connections” (Ward Plunet).
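A rough back-of-the-envelope calculation makes the scale gap concrete. The sketch below simply divides the figures quoted above; all of the numbers are order-of-magnitude estimates rather than exact counts, and the variable names are my own, introduced only for illustration.

    # Rough comparison of "switch" counts, using the estimates quoted above.
    # All figures are order-of-magnitude approximations, not exact counts.
    chip_transistors = 731e6       # transistors in a quad-core Intel i7 (per the post)
    brain_neurons = 86e9           # estimated neurons in a human brain
    brain_synapses = 1000e12       # estimated synapses: 1,000 trillion
    switches_per_synapse = 1000    # Smith: "on the order of 1,000 molecular-scale switches"

    molecular_switches = brain_synapses * switches_per_synapse

    print(f"neurons per transistor:            {brain_neurons / chip_transistors:,.0f}")
    print(f"synapses per transistor:           {brain_synapses / chip_transistors:,.0f}")
    print(f"molecular switches per transistor: {molecular_switches / chip_transistors:,.0f}")
    # Roughly 120 neurons, 1.4 million synapses, and 1.4 billion molecular-scale
    # "switches" for every transistor on the chip.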

The fact that the brain is many times more complicated than a computer does not, by itself, refute the analogy. It does seem significant, however, that no computer yet devised has any degree of consciousness.

Scientists have been very successful of late in manipulating living cells, especially in tinkering with DNA, to create new plants and so on. But they have not yet been able to create life in the laboratory, starting from non-living compounds.

June 27, 2014

Further Reflections, Consciousness & Free Will

Filed under: GA — Q @ 8:57 pm

On one hand, nothing is more familiar to us than our own consciousness, which we can safely assume is essentially similar to that of other humans. It seems equally obvious that we have free will. I make decisions constantly, and I change my mind just as frequently. And I can see that others are not able to predict, reliably, what I will do or say next; nor can I predict what others will do. Furthermore, we can observe very clearly that animals share many if not all of the characteristics of human consciousness. We may never know what it’s like to be a bat, but more familiar animals like deer or cats are obviously aware of their environment in basically the same way that I am aware of mine. Humans are aware of different things than other animals (notably, right and wrong), but animals may be aware of things that I can’t perceive, like the cat who refused to board the ship destined to sink (as I learned at the Victoria Maritime Museum).

In any case, we are surrounded by living organisms capable of greater or lesser degrees of consciousness. Life is almost omnipresent on this earth, even in places that might seem very inhospitable. So consciousness is the plainest empirical fact in the world, perhaps, as Descartes observed, the only indubitable fact, the one thing we can’t doubt. There is nothing we know better. And we see conscious beings being born, growing, developing, reproducing, and eventually dying all around us. From this perspective, there is no mystery of consciousness, nor of free will. Consciousness is simply the nature of my existence. Arguably, then, “the burden of proof,” so to speak, should be on those who wish to question the possibility of consciousness. It’s an artificial question without any pragmatic consequences. If the sciences can’t explain the physical basis of consciousness, then so much the worse for them. They either aren’t posing the right question, or their methodology is inadequate.

On the other hand, consciousness and free will are completely anomalous in our universe. The physical sciences tell us beyond any reasonable doubt that our planet is 4.5 billion years old, while humans have only been around for about 2.5 million years. And for at least a billion years, earth harbored no forms of life at all. Multicellular forms appeared only in the last billion years. Furthermore, there is no evidence of life on other planets, within or without our solar system. Given the vast size and age of our universe, it is more economical to assume that we are not unique; but the fact remains that as far as we can see or recover, life on earth is anomalous, and human life even more so. From this perspective, the existence of life on earth appears nothing less than miraculous. That, by some completely random process, some mud should get up and start walking around appears highly unlikely, even impossible. We can only wonder, with Blake,

Tiger, tiger, burning bright

In the forests of the night,

What immortal hand or eye

Could frame thy fearful symmetry?

In what distant deeps or skies

Burnt the fire of thine eyes?

On what wings dare he aspire?

What the hand dare seize the fire?

And what shoulder and what art

Could twist the sinews of thy heart?

And when thy heart began to beat,

What dread hand and what dread feet?

What the hammer? what the chain?

In what furnace was thy brain?

What the anvil? What dread grasp

Dare its deadly terrors clasp?

June 14, 2014

Reflections on reading Raymond Tallis, Aping Mankind

Filed under: GA — Q @ 11:50 am

The basic problem addressed by Tallis, it seems to me, is how matter becomes subjectively conscious. I say “subjectively” because we can’t directly observe the consciousness of another living being, and as Tallis points out, even the most advanced brain scans do not help us to understand human consciousness.

There are two basic approaches to the problem of consciousness. The first is to say that consciousness is only possible for soul or spirit. This approach may or may not rely on a creator God, and it may or may not insist on a sharp dualism between matter and spirit; the two can also be understood as two aspects of one living being.

The second is evolutionary. Once life develops, organisms evolve nerve responses that allow them to find food and mates and avoid predators. These responses are programmed into the DNA and are comparable to computer programs. In the case of extremely simple organisms, the nerve responses involved are also simple. The responses of more advanced animals are more complicated but directed to the same goals.

For some animals, notably the hominid line, flexibility in behavior, presumably involving some choice between alternative ways of responding to events, is an adaptive strategy. Consciousness can be understood, in Darwinian terms, as the ability to evaluate alternatives and adapt one’s behavior to different circumstances. While the neuro-biological basis of ape and chimp consciousness is still not well understood, this is arguably a problem of complexity. The principles are well known; and their behavior, while more flexible than that of other species, is still, arguably, wholly the product of their instincts, conditioning, and learning (by imitation), and as a result is very predictable. Some chimps are presumably smarter than others, and thus better able to evaluate alternatives or invent solutions to problems, but intelligence is a genetic variable within the scope of an evolutionary paradigm.

It’s not clear that chimps have what we call free will. Significantly, everyone, even animal rights activists, recognizes that we can’t hold animals morally responsible for their behavior.

Human consciousness is in many ways comparable to chimps’—subject to instinct, conditioning, and learning—but in addition we have the subjective experience of free will. And objectively, humans are much more unpredictable than any other species. So in addition to consciousness, we have the philosophical problem of how a material organism, whose atoms and molecules individually are subject to all the laws of physics, is capable of free will, acts which seemingly cannot be explained in terms of physical causation, even given the vast complexity of the human body.

Tallis and others observe that humans are conscious of other humans in ways that other animals are not of their fellows. Human consciousness is somehow tied up with our relations with other humans—relations which by definition are not contained within the brain. They are relations, not physical objects or neural events. This approach fits in well with Generative Anthropology and the Originary Hypothesis. But the question still remains: is the resistance of human consciousness to scientific explanation basically a problem of complexity? If so, then the mysteries of human consciousness and free will are in principle capable of scientific explanation, and existing studies of human evolution and neuro-phenomena are at least on the right track, even if still largely unfruitful, as Tallis argues.

Even when the dimensions of social relations and language are added in, we are still dealing with beings composed of molecules subject to the laws of physics. While this adds a layer of complexity, it doesn’t refute the proposition that human behavior and the subjective experience of consciousness are ultimately reducible to physics, in the form of evolutionary processes and the neuro-phenomena of individuals in groups. It doesn’t make sense to say that consciousness is not a physical phenomenon, since the only place it’s found is with physical bodies. We should also remember that neuro-phenomena are already well recognized as responsive to the environment, so the social nature of non-human consciousness is a given.

It’s also possible that the problem is somehow not ultimately reducible to physical processes. But if human consciousness is not so reducible, then the philosophical problem of explaining how material beings can experience consciousness and free will remains. Saying that human consciousness and free will are a function of our unique cultural/social “nature” may well be true, but it doesn’t seem to answer the philosophical problem of how exactly this is possible.

June 7, 2014

After Liberalism

Filed under: GA — adam @ 5:05 pm

If we can’t distinguish between defending, or at least accepting, someone’s right to say something, on the one hand, and agreeing with them, on the other, then liberalism, in the classic Enlightenment sense, no longer exists. This seems to be, increasingly, the case—Marxists and other antiliberals have long argued that the “bourgeois” freedoms are disguises for bourgeois privileges (the poor and the rich equally forbidden to sleep under the bridge, etc.), but it seems to me that something a little different is happening now. It’s not so much that people argue against the distinction (while implicitly acknowledging that under new, more just conditions, it would be legitimate), but that it is simply unintelligible to more and more people. If you say that those opposed to same-sex marriage should have the right to voice their opinion, or that people who have expressed views that many or a majority would deem racist do not thereby surrender their property and other rights, the increasingly likely response is: why do you hate gays? Why do you support racism? There are many reasons for this development, which, not surprisingly, seems especially common among the young, and is complemented by the libertarian inability to distinguish between what is permitted and what is good (if, for example, you wonder about the effects of excessive consumption of pornography, the most likely response will be an indignant insistence on one’s right to do what one wants with one’s computer screen and body, along with aspersions about one’s own presumed puritanism, etc.). In short, more and more people want to do what they want to do, and to (not) have done to them what they (don’t) want done to them, and the whole question of grounds for doing one or another thing, and of legitimate grounds for doing or not doing, is something we no longer seem to have the language for. The problem, as always, is that what some want to do is what others don’t want done to them, and the always largely fictional discourse of “rights” was there to adjudicate the competing claims—or, at least, establish an equilibrium as both sides use rights-talk to entrench their interests within the state apparatus. If the means of adjudication collapse, and it’s too tedious to try to retrieve them from the 19th century, what happens? We have a standoff between the victimary and the libertine (I know that not all libertarians, maybe not most, are libertine—but the libertines have the most powerful cultural and political presence, since arguments in favor of drug legalization and against the regulation of various pleasures draw far more supporters than arguments about the evils of central banking), both equally plausible and legitimate children of modernity.

Let’s further factor into this Eric Gans’s most recent Chronicle (#463: More on the Victimary), which advances the discussion into what Gans seems ready to concede is likely the predictable result of the market system: a polarization in wealth that doesn’t produce immiseration at the bottom, but rather eviscerates the “middle” where normal forms of recognition (“respectability”) can reasonably be expected. With the respectable middle cut out, what are left are pathological forms of “theater”—mass killings and other forms of cheaply and/or viciously acquired celebrity. The victimary, in this context, seems less a driving force in history than a rather feeble “sacrificial substitute” for this more devastating form of inequality that is beyond repair. To continue my own discussion, one might say that the victimary symbolically rebels against this polarization while the libertines vicariously identify with it. Both sides live in fantasy worlds.

But even if Piketty’s analysis, referenced by Gans, to the effect that today’s polarization is more representative of the trajectory of the market economy than the rough equality of the post-War years, is accurate, the two questions, distribution of wealth and distribution of recognition, can be delinked. It is not the emergence of billionaires that has destroyed “Fishtown,” or the “middle.” Even the much-bemoaned decline in manual labor has been exaggerated—plumbers, roofers, contractors, carpenters and others employed in improving and fixing up can still make a very good living. In fact, there seem to be too few of them. Government-enforced unionization destroyed Detroit, not the greed of the Big Three automakers. The welfare state’s assault on the family and victimary eruptions in the inner cities that crippled law enforcement and education destroyed wide swathes of black America (and ever larger pockets of white America), not the desire of corporations for cheaper labor overseas.

But there’s a limit to such socio-economic and cultural explanations, a limit evident in the problem of explaining the explanations. Things were gradually improving on all fronts during the 1950s and into the 1960s, and yet the 1960s proved the most disruptive period in American history since the Civil War. Nor is this anomalous—the same was true of the years leading up to the French Revolution. Why were the massive social experiments of the 1960s deemed necessary? The steady, if uneven and unbalanced, improvement of living conditions wrought by the expanded market has been unsatisfactory for many people—and, in particular, for those people who make things happen, the politically astute, the “cool,” the ambitious, the well-connected. Why? What is missing? The editor of the conservative journal The American Spectator, Emmett Tyrrell, often says, perhaps tongue in cheek, that we tremendously underestimate the effect of boredom on world affairs. Serious or not, he has a point—boredom, what sociologists might call “anomie” and theologians “despair,” must be given its due. Boredom is part of the structure of addiction that is so prevalent in (not only) contemporary life—the addict wants to recover a novel and exhilarating experience, and rather than realizing that such experiences must be granted by immersion in reality, seeks it out, and seeks to secure it, in the identical form in which it was first experienced. As we all know, larger and larger doses are needed to attain a less and less satisfying approximation of that original experience. And in the ever more vast in-between, there is nothing but boredom, an itching for the next, inevitably disappointing, fix.

An important element of illiberal critiques of liberalism has been the observation that liberal rights—to speech, religion, association, etc.—really imply reciprocal indifference more than reciprocal recognition. The peace that ended the religious wars turned into the grave of meaning—without the (exhilarating) possibility of martyrdom, without the urgency of universal salvation, “belief” doesn’t amount to much. In that case, maybe the battle between the victimocracy and the libertine will have salutary effects. There might be a real stake in the libertine’s insistence, against feminist objections, on his right to play a sociopathic pimp in Grand Theft Auto. The libertines can try to secede from the victimocracy, and they may succeed, certainly to some extent (e.g., the “man-cave”). But insofar as they must operate on victimocratic terrain, they will have to subvert it from within, thereby revealing victimocracy’s many antinomies and anomalies. Perhaps the question of recognition can be addressed in new, fresh ways. (The victimocracy is of course inherently paradoxical—what they do with more power can’t possibly be what they want to do.) One thing we can thank the victimocrats for is intensifying the question of the relations between representation and reality. The victimocrats have not answered it and, indeed, it is one of those great questions that can never be answered definitively. Kevin Williamson, the National Review columnist to whom Gans has referred repeatedly of late, recently published an article on transgenderism, in which he insisted “Laverne Cox [a well-known “transgender” actor] is not a Woman.” At latest count, the number of comments is 8,195. The most interesting passage, for us Generative Anthropologists, is, I think, the following:

The phenomenon of the transgendered person is a thoroughly modern one, not in the sense that such conditions did not exist in the past — Cassius Dio relates a horrifying tale of an attempted sex-change operation — but because we in the 21st century have regressed to a very primitive understanding of reality, namely the sympathetic magic described by James George Frazer in The Golden Bough. The obsession with policing language on the theory that language mystically shapes reality is itself ancient — see the Old Testament — and sympathetic magic proceeds along similar lines, using imitation and related techniques as a means of controlling reality. The most famous example of this is the voodoo doll. If an effigy can be made sufficiently like the reality it is intended to represent, then it becomes, for the mystical purposes at hand, a reality in its own right. The infinite malleability of the postmodern idea of “gender,” as opposed to the stubborn concreteness of sex, is precisely the reason the concept was invented. For all of the high-academic theory attached to the question, it is simply a mystical exercise in rearranging words to rearrange reality. Facebook now has a few score options for describing one’s gender or sex, and no doubt they will soon match the number of names for the Almighty in one of the old mystery cults.

Williamson’s position is the classically modern, Enlightenment one: the point of language is to represent reality accurately. We can see here the privileging of the declarative sentence over the ostensive and imperative that Gans has associated with Western metaphysics. (Williamson even alludes to the displacement of pagan polytheism by the one “Almighty” discovered/invented by the ancient Hebrews.) One’s genitals, and, perhaps, one’s hormonal and chromosomal structure, which can be observed by everyone according to shared clinical and experimental criteria, determine one’s gender—not more amorphous and “unfalsifiable” criteria like what one feels fated to be. Of course there are anomalies—the rare individual with an extra chromosome, extremely unusual hormones, or un- or over-developed genitals. But these don’t upset the basic classification, which can be justified by reference to broader biological assumptions: the anomalies can be safely sequestered because they don’t contribute to the reproduction of the species, the meta-criterion for biology. (But, of course, the discipline of biology evolves—some of the commenters on Williamson’s article make biological claims, with what plausibility I can’t say, for the “reality” of transgenderism. Even if they’re right, though, the cultural and political consequences are not obvious, or unambiguous.)

But the originary hypothesis allows us to at least entertain the possibility that these modernist assumptions are the anomaly, and perhaps not relevant beyond the specific disciplines whose ongoing inquiries they support, since we know that language has, in fact, created the most astounding reality of all—the human reality. We can, in fact, argue about how many genders there are, what they should be called, how they should be represented, what the possible relations between them are, and such arguments will change the way we live, love and reproduce. At the very least, such discussions focus our attention unwaveringly on signs, not on some utopia beyond representation—even if utopian fantasies got the discussions started in the first place. There is no ultimate transcending of biology, or other “material” realities, but all this means is that biology will always resist and deflect our attempts to represent it. In the end, maybe we will find ourselves with a comfortable middle or norm of the familiar two genders, with a bunch of unmolested, more or less interesting or annoying outliers; maybe not. Once tacit assumptions get excavated, they cannot be made tacit again—the historical function of the libertines may be to exaggerate and caricature and in this way, paradoxically, re-normalize the traditionally normal, this time as play and games (It’s worth keeping in mind that feminism, and certainly gay liberation, have had their libertine factions, now largely kept under wraps in the interest of political unanimity and momentum). And in the reciprocally aggravating chafing at constraints into which the victimocrats and libertines will hurl each other (the libertines wanting to express all the possibilities of a polymorphously perverse nature, the victimocrats demanding the uprooting and revision of all spontaneous desires) there may be space for the originary thinker to reflect upon a reality replete with examples of why we need constraints in the first place.

Instead of adjudication, and the increasingly encumbered and arbitrary discourses of “rights,” maybe there will be a space to treat culture as play and games. Back in the 80s, Jean-Francois Lyotard extended Wittgenstein’s notion of “language games” to propose an ethics of political culture, guided by the principle that one doesn’t try to eliminate a fellow player from the game. We certainly can’t count on such comity now, and, to be honest, I would not agree to play by such rules myself—nor do I think they can be made coherent, as cultural “pieces” are not as stable as those in chess. But we can certainly think in terms of “moves” rather than positivist, metaphysical or historicist truths, and of provisional, emergent rules that don’t presuppose some kind of transhistorical Truth Commission that in the end is sure to ratify one’s own truth claims. Play presupposes a field, a constitution of a portion of reality, or reality itself, that is to be governed by the rules of the game. A good player doesn’t want to “win” (any victory being very temporary anyway) so much as to keep remaking the field so as to multiply the number and variety of moves that might be made (first of all by the player himself, but how could opening avenues for oneself not do the same for others as well?). The minimal ethics governing the field is that we all take turns going first, as going first almost inevitably confers an advantage in any game—sometimes you speak in my terms, sometimes I speak in yours. Unless we’re really bent on mutual extermination, we should be able to manage that. We’ll see what the libertines and victimocrats make of each other’s playbook and field position.

In the end, I think both sides will undergo significant shock and stress, because, in the end, I think that the Jewish revelation is right in one crucial respect. The Jewish name of God, revealed to Moses at the burning bush, “I am/shall be that I am/shall be,” cannot be said by the believing Jew because you can’t say it without claiming to be God. This is my one, marginal, addition to Eric Gans’s extensive analyses of this revelatory event—if the name of God explicitly defers the desire to proclaim oneself God, I take this to be because that desire must have emerged in a powerful way in the ancient world of God-emperors. It is an enduring desire, manifested in the belief that following nature or reason will provide moral truths no less than in totalitarian attempts to remake the entire fabric of human relations. Indeed, the ideology of modernity is that we are all gods to ourselves. Descartes’s “I think, therefore I am” is another manifestation, and so I am much less ready, I think, than Gans, or than I once was, to celebrate the centering of each individual in his own desires. Only shared revelations, on particular scenes, of our reciprocal being-hostages-for-one-another (to borrow a term important to Emmanuel Levinas, and then Derrida), or what I have been calling “disciplines,” can create legitimate centers. No one can know how many such centers, or of what duration or quality, are necessary, but once there are enough of them, “inequality” won’t matter. And if there aren’t enough—well, the catastrophes that will result will make the symbolic holocausts of the victimocracy seem so much windmill tilting. But getting enough of them can only be a learning process and, as the pedagogical cliché has it, you have to start with where the learner is.

May 30, 2014

Psychogeography

Filed under: GA — adam @ 7:55 am

The line of inquiry that I have undertaken in the past few posts, which I suppose could be called “psychological” but perhaps would better be called, to use a term I have come across in some radical writers, “psychogeographical,” seems increasingly important to me. I find myself in a position analogous to those Western Marxists following the failure of proletarian revolution in Western Europe during the 1920s—in response to the failed historical logic, according to which the proletariat would inevitably be propelled into revolutionary confrontations with the ruling class, theorists like Lukacs, Gramsci, Horkheimer and Adorno directed their attention to culture, aesthetics, and the unconscious. My own analyses of victimary discourse post-9/11 led me not to a revolutionary but to a restorative hypothesis: now that we were at war with a privileged victim class, victimary assumptions could be made self-cancelling by conducting that war in the name of the victims of the putative victims—oppressed women and religious minorities in the Muslim world, peaceful Muslims, and freedom-seeking democrats in majority Muslim countries, for starters. This is an idea continually pursued by the right in various arenas—anti-abortion as defending the victim of the victimized woman; school choice and enterprise zones in the inner cities to defend the black poor against the immiseration caused by their own leftist leadership; anti-union policies defending the individual worker against the authoritarian union bosses, etc. The idea is very good, but it almost never works—one could take the classical revolutionary position and insist that it just hasn’t been done the right way yet, but I prefer to cut my losses and proceed under the assumption that it can’t be done. (Of course, all the policies I just mentioned could be pursued on their own merits—my only point is that it’s an illusion to expect them to break the stranglehold of victimary thinking on our politics.) The reason for the impotence of such an approach is clear—only the victims of the center register for victimary discourse, while, first, victimizations carried out by the victims of the center only further indict the center (now for so brutalizing its victims as to turn them into oppressors), and, second, any victims who support the center (breaking solidarity, in Uncle Tomish manner, with their fellow victims) only prove the corrupting effects of the center—thereby more decisively disqualifying representatives of the center from any liberationist credentials.

This further means (as perhaps should have been obvious all along) that the victimary goes well beyond politics, striking deep roots in the culture and the psyche. Ultimately, it raises questions of fundamental “scenicity,” which is to say of the sacred. I have often thought about and discussed the victimary in terms of the sacred, but always in terms of a public, political sacred, or in terms derivative of Voegelin’s analysis of modernity as “Gnostic.” I don’t repudiate any of those arguments, but they simply raise the question, why are so many in the modern world vulnerable to gnostic faiths? Voegelin’s answer is, essentially, that the differentiations introduced by Christianity into the West are simply too demanding for too many—which is not a bad analytical starting point, but simply raises another question, i.e., how to make the needed upward moral innovations possible?

I have worked, recently, on two concepts that, in conjunction, seem useful here. Most recently, in my “Selfy” post, I introduced the notion of a “constitutive fantasy,” a concept which has a history in psychoanalytic and postmodern discourse that I wouldn’t be interested in tracing (it has been important to Slavoj Zizek, for one), but that I think can be given an originary meaning in a fairly precise way. I used the term in the process of working through Andrew Bartlett’s exploration of the erotic dimension of “personhood”—Bartlett starts by imagining someone imagining their own erotic centrality, as the sole and unwavering object of desire of every other individual on the scene. Bartlett goes on to say that this situation in reality would not be desirable (and to analyze the more realizable retreat of the couple who accord each other reciprocally exclusive erotic attention), but that would of course be the case for many fantasies, and this does not derogate from its power as fantasy; indeed, what makes a fantasy constitutive is that it is its unrealizability that provides the measure for every actual experience. Let’s think about this in scenic terms—desire does not aim just at possession of the object, but at the occupation of a position on the scene. If I’m just hungry and go into my kitchen, open a can of beans, scoop the beans out with my hand, and eat them right out of the can, there is no desire involved worth speaking of, just the brute satisfaction of appetite; if I go out to a restaurant, I want to see and be seen, and not just satisfy my appetite (even if only in the negative sense of not drawing unwanted attention to myself). In the latter case, joint attention is involved, and one wants to shape and direct that attention in specific ways. It follows that the convergence of attention I am aiming at has a “vanishing point” at which desire would be satisfied because all attention would be distributed in the optimal way. In the case of the restaurant, that might mean all eyes on my companion and myself, with my companion noticing that attention, joining and basking in it, while further noticing my own Olympian indifference to it (an indifference which intensifies it), etc. That would be my constitutive fantasy of the scene, and its relation to the actual scene may take many different forms—a source of frustration, of ironic amusement, of pleased surprise at how many elements of the scene seem to be in place, of self-skepticism as to whether my fantasy is in fact projecting those elements into the scene, and so on. This is the “reality testing” Freud saw as the source of the “Ego,” and which I would now speak of in terms of “ostentation,” another concept I have been using to denote binding up the various vectors of attention into self-presentation on a scene. The constitutive fantasy must have had its place on the originary scene, as desire multiplied by the desires of others, and is implicated in any scene, and is both individual (no one else can have quite my place on the scene, or my history of participating in relevant scenes) and shared (the “vectors” of attention one’s desire retrojects back to the origin are necessarily drawn from the scene, and the previous scenes mapped onto this one, itself).

In an earlier post, I developed the concept of a “violent imaginary,” to account, first, for the fact that the collective self-immolation the originary hypothesis assumes is averted by the sign could only be imagined by the participants on the scene, being in fact very unlikely (the melee following the rush to the center would be disorganized, flailing and aimless, and would probably break up quickly with little permanent harm done to anyone); and, second, to suggest that any subsequent scene is similarly informed by a more or less dimly apprehended “worst possible scenario” that grips what Coleridge called the “primary” imagination and thereby shapes the meaning that will be conferred on the scene. The worst case scenario can take various forms, as many as all the possible configurations of the scene and its breakdown—into a many-on-one assault, into group clashes, into one-on-one stand-offs, etc. As new configurations of the scene are evoked with historical developments, violent imaginaries are varied and enriched in new ways—antisemitism would involve a particular violent imaginary (clever operators behind the scene, etc.), and anti-communism another (and I use these examples to make the point that justified as well as unjustified fears all have their violent imaginaries woven into them). Modern violent imaginaries seem to oscillate back and forth between fear of a monstrous Big Man and fear of a monstrous anonymous mob.

It follows that the constitutive fantasy would itself evoke a particular violent imaginary, as the idealized alignment of vectors of attention also produces the target around which the violent imaginary is articulated, with the subsequent sign or ostentation including the deferral of the specific mode of violence imagined. This would in turn involve the abandonment, but not forgetting, of the constitutive fantasy. One could only access the constitutive fantasy/violent imaginary through the signs put forth, through the ostentation—you have to learn the language by which the fantasy/imaginary is conveyed. I think this analysis can generally take the form of a kind of reverse engineering through negation—for example, if one argues for “discipline,” we can assume that “indiscipline” is central to one’s violent imaginary, and an orderly allocation of the object central to one’s constitutive fantasy. Constellations of fantasy/the imaginary are extremely difficult to recognize (especially one’s own) and even more difficult to dislodge or modify. One could only do so by locating oneself within the narratives through which the fantasy/imaginary is played out. The constitutive fantasy and violent imaginary can be synthesized into what I called in my latest essay in Anthropoetics (“Attentionality and Originary Ethics: Upclining”) the “attentional loop,” or that moment in any scene in which the participant draws the attention of the others and must put forth his/her version of the sign that will redirect attention back to the center. The attentional loop is resolved into ostentation, which results from the submission of the originary fantasy and violent imaginary to constraints, or deferral—when these constraints break down, phenomena like paranoia and sociopathy, or the reduction of all scenes to one’s own, result. If we are to speak of an internal scene of representation, it must be composed as any scene is—by means of a sign of deferral of some appropriation that would, if attempted, destroy the scene. That appropriation would be the attempt to realize the closed circle of one’s constitutive fantasy/violent imaginary.

I will now try to move these abstract concepts closer to contemporary cultural experience by pointing to what seems to me an interesting and increasingly important problem in contemporary narratives: how do you construct a compelling narrative when the decisions and actions of the characters involved are determined by officially (i.e., expertly) labeled pathologies, rather than desires and resentments one could imagine to be universally shared, even if accentuated and articulated uniquely in the narrative agent? The movie critic James Bowman points to an interesting example of this phenomenon in his review of “Silver Linings Playbook”:

In order to like David O. Russell’s Silver Linings Playbook as it ought to be liked, it helps to see it as a movie about jealousy, even though that’s not quite the obvious way to see it. We learn in flashback that, when Patrick Solatano (Bradley Cooper) came home unexpectedly one day and found his wife, Nikki (Brea Bee), in the shower with another man, he beat the guy so severely that he had to be sent away to a mental hospital for eight months, as he was deemed to be suffering from “undiagnosed bipolar disorder.” The movie doesn’t make a big deal out of it, but it tells you something significant both about Pat’s subsequent history and about the state of our culture that the obvious cause of his behavior was seen as something to be ignored or rejected in order that it might be dignified, or made socially and legally more acceptable, as a clinical condition — and that Pat himself accepts this medicalization of a moral matter as the only feasible way for him to make sense of his life.

We can readily understand a narrative logic by which a man who commits a violent act while overcome with jealousy might, say, refuse to recognize his own moral flaw and continue on a downward spiral towards greater violence, the repetition of the same pattern with other women, etc.; or, on the contrary, learn to distinguish between genuine love and possessiveness. Either way, we remain within a scene that anyone can imagine sharing. But if the character is “suffering” from a diagnosed “disorder,” he is by that fact segregated from scenes in which anyone might participate, and narrative alternatives seem to be limited to him either following the approved therapeutic instructions towards greater health (as has been the potted plot of many edifying films on alcohol and drug abuse), or ignoring them—only in the latter case do we have the chance for an interesting narrative, because the protagonist might be rebelling, noir-style (or Cuckoo’s Nest style), against some repressive authority. But the interest of such a narrative depends upon the audience’s suspicion (at least) of the therapeutic order that is being challenged, which would in turn require some residual “humanism”; what, though, if the therapeutic order is completely accepted (which would really just signify the complete victory of the victimary order—racism, sexism, homophobia, etc., are already understood to be pathologies, and as more and more attitudes and actions—as is the case now for more and more male-female attraction—are grouped under these categories, there will be nothing but pathology, trauma and healing)? As Bowman says, this particular film doesn’t make a big deal out of it, but no doubt more and more films, novels, TV shows and so on will. And the difficulty is further complicated once we take into account all the mood-transforming (legal) drugs now available—in what sense do their alterations of one’s character affect our understanding of and identification with another’s weaknesses and strengths, faults and merits, responsibility for actions, and so on? Is one deemed “flawed” for becoming dependent upon a substance that, according to “objective” measures, “improves” one’s “performance” in important areas?

It may be that the universalization of the therapeutic provides a kind of solution: the division of the world into therapists and patients and, indeed, each individual into therapist and patient, might create a new kind of scenicity. This would be Freud’s revenge upon us for having reduced him to the status of a crank or fraud in recent years, because this is pretty much how he envisioned the long term impact of psychoanalysis. But the process of coming to realize that what one took to be a normal desire is in fact a virulent pathogen infecting the social body might be of considerable interest (“shame” would clearly not be the appropriate response to such a discovery—rather, we would expect the protagonist to gradually come to replace his native vocabulary with one or another normalizing procedure through various social mismatches); as might the process of resisting the “mimetic” impulse to respond in kind to another’s actions and coming to adopt the proper therapeutic response. These would be the kinds of disciplinary shifts we would all be undergoing all the time; more complex narratives would have characters taking on the “patient” role in one disciplinary setting and the “therapist” role in another, the diagnostician (ironically?) becoming the one most in need of diagnosis, etc.

This transformation would redeem the post-humanist argument against any human essence residing “in” each individual, in favor of the claim that we are all constituted by historically specific discourses, power relations and so on. Post-victimary thinkers can contemplate such a change without the fantasy of “resistance” that still clings to what remains of “cultural studies” style analyses—Foucault did eventually come to realize that to point to “uneven power relations” is not the same as identifying self-evident injustice, and we can certainly further that recognition by distinguishing, in any normalizing order, between those drawn into the normalizing whirlpool and those on the margins who, whether they call it “resistance” or not, define themselves not as outside of those normalizing systems but as the other of those systems. The best example I can think of is one I have seen on social media and public discourse: the young woman who is exquisitely aware of all the “strategies” used by the media to normalize women’s bodily appearance (thin models, photoshopping, ads for dieting, “fat-shaming,” etc.) and not only nevertheless “inhabits” those implicit models (judges herself as inadequate at every point in relation to them) but acquires and maintains her critique of them by doing so. The next step is to forge a new style that rewards those sophisticated enough to dis-identify with the model one cannot help but inhabit—a style that then enters and is (to use a perennial term of radical frustration) “domesticated” by those normalizing systems. (A mediating step here is probably to inhabit some therapeutic discourse encouraging positive body image, and then to dis-identify with that discourse through a recognition of its own normalizing paradoxes.)

In the terms I set up earlier, this would mean that meaningful cultural and political communication requires that we inhabit one another’s originary fantasies and violent imaginaries, and work with our interlocutors towards reciprocal dis-identifications. To return to my “Selfy” post, it seems to me that in this context the “self” is more important than the “person,” or the “soul” (the immortal part of one’s being)—“personhood,” in Bartlett’s account, involves the withdrawal of the lovers from the social into a private scene; the self, which I suggested is best understood as the reflexive assertion of sameness in the semio-social flux, can more easily be seen as something one shapes and deploys in various ways. We find it very easy to speak of having various “selves”—a work self, a family self, a being-with-friends self, etc. It is a short step from there to think of the self as a kind of “probe” that one uses to elicit and frame the violent imaginaries and originary fantasies of others by positioning it at the convergent lines of the fantasy and imaginary. At the same time, the materials of one’s own fantasies and imaginaries would necessarily be put to use—the problem is one of finding a line of symmetry between the emergent scenes the interlocutors, respectively, bear. Sometimes it will be my unconscious scene that fails the reality test and needs your intervention to find another avenue towards the construction of a mode of ostentation; sometimes yours will rely on mine. At the same time, we can operate on varying scales, from the intimate to the global, from the highly idiosyncratic scene to those seized upon, exaggerated and intensified through refined and cynical propaganda techniques. Freedom would result from the study of the means of normalization/subjectification, a study that frees one, provisionally, from submitting to the violent imaginary of being normalized out of existence, and from indulging the originary fantasy of possessing a self outside of those processes.

The therapeutic order supersedes the victimocracy, but what supersedes the therapeutic order is the disciplinary-bureaucratic order. One thing that has been noted, but not often enough, is that the modern administrative state represents a gradual abolition of the liberal democratic order. Equality under the law is meaningless when there really is no law, but only grants of bureaucratic power to intervene without limits in the most private domains of everyday life; nor do elections mean anything if the permanent bureaucracy rules anyway; nor can basic freedoms of speech, worship and assembly be guaranteed (or even taken seriously) when any action carried out individually or collectively can be deemed a threat to some bureaucratic agenda (maybe those who now treat Presidential elections as a festival for celebrating our victimary bona fides have a prescient understanding of the increasingly symbolic meaning of such rituals). The boundary between speech and action has been abolished—there is no way to distinguish a protest against some EPA agent commandeering your backyard from a threat to him.

Even more, the post-humanist understanding of the self as constituted by a constitutive fantasy and violent imaginary demolishes the philosophical foundations of the liberal democratic order, which presupposes a kind of blank slate equality in all individuals. Liberalism and modern democracy wish to represent the individual as he or she enters the market, or the ballot box, with no relevant pre-existing characteristics that might qualify one’s status or right to enter either. But if we come to read any individual as marked by some distinct form of normalization and counter-normalization, as having a constitutive fantasy and violent imaginary that the therapeutic/corporate/consumerist order always already has designs on and, in fact, has to a great extent designed, we cannot help but assign and constantly revise probabilities of dangerous or costly action to each individual as they enter any institutions (nor will there be any way of reversing the obsolescence of any institution that doesn’t allow for a sufficiently thorough and expeditious risk-benefit analysis of any individual).

In other words, we could play pretend at a kind of originary equality as long as shared norms regarding morality, law and political legitimacy were intact (at least among social elites, and those who aspired to be such), but no longer. The victimocrats are so afraid of profiling because they know what an obviously effective practice it is, how closely it parallels their own violent imaginary, and how their own attacks on a justice system aimed, above all, at deterring victims from enacting their own private revenge make it the only remaining plausible means of self-protection. At any rate, if everyone is to be profiled constantly, each must carefully self-profile, and since strict adherence to normalizing discourses is by definition more available to those in the fat part of the curve, many will need to compose self-profiles, or, simply, selves, that promise to maximize benefits over risks in new ways—and hence to refer more explicitly to the constitutive fantasies and violent imaginaries, to figure them so as to figure out new ways of deferring them.

The only viable political response to these developments that I can imagine is to form disciplines (for our purposes, let’s take this to mean any kind of association that people form in order to explore something, get good at something, or identify themselves in a particular way) and use the antinomies of the bureaucratic state to defend those disciplines and render the repressive vehicles of central power more incoherent. The victimocracy, by definition, chafes at constraints—it is driven by increasingly urgent resentments. The impatience of those who must have same-sex marriage, or have the Washington Redskins change their name, right now, resembles nothing so much as the temper tantrum of a 3-year-old. Discipline means constraints—it means we don’t seek to realize our constitutive fantasies, or treat our violent imaginaries as realities. Discipline is a commitment to reality testing. The therapeutic is part of the emergence of the disciplinary order, but its outsized prominence up until now can be attributed to its ability to usurp the disciplines concerned with countering social pathologies, making its usefulness undeniable. There are disciplines other than the therapeutic, though—the therapeutic itself can be annexed to a wider disciplinary order concerned with our scenic nature, and therefore with pedagogy and the perpetual orientation of human interaction around some center, which is to say some kind of property.

And disciplines give way to bureaucracy—a bureaucracy is nothing more than a discipline that must account for itself in relation to some public, or to other disciplines—those of us in the academy might like to do nothing but inquire and teach, but we have to concern ourselves (or we have to siphon off resources to those who so concern themselves) with accreditation, student admission, maintenance of the grounds, federal anti-discrimination laws, etc.; the entrepreneur might just want to invent, innovate, and spread the results of his inventions and innovations, but he must deal with tax codes, employment law, all manner of federal regulation, etc.—in both cases a substantial bureaucratic exoskeleton is secreted. At a certain point the exoskeleton becomes too heavy and crushes the body; meanwhile, less encumbered disciplinary forms emerge to recover the original impulse to create and build. But the bureaucratized institutions have all the political advantages, as they can pressure the government to make demands that favor their own strengths, and they can make gifts to a wide array of constituents (unions, suppliers and distributors, local governments, etc.).

Perhaps the above account exposes something of my own constitutive fantasy and violent imaginary. Even so, it seems to me that the relation between highly formalized, politicized and risk-averse bureaucracies on the one hand, and various “nomadic” figures (lone entrepreneurs, but also various “rogue” disciplines, artistic and political, that mock and subvert the bureaucracies from within) on the other, provides a more accurate accounting of the contemporary socio-political field than the categories of liberal democracy (equality, voting, rights, freedom, etc.). To allude too briefly to Eric Gans’s latest Chronicle on the Cartesian iteration of the Hebraic Declarative Sentence-as-the-Name-of-God in the foundation of the modern internal scene of representation, the “self” seems to me to be this nomadic figure: if the “I” is because it thinks, the self is because it was, with each self-reference iterating and distancing the self from prior iterations. The self can inhabit the marvels and pathologies of the surrounding world without claiming to have any substance outside of those marvels and pathologies; the self can process the normalizing discourses and institutions that average out while reproducing those marvels and pathologies; and the self can replicate or clone itself in novel forms that elicit and display the fantasies and imaginaries those discourses and institutions and the selves inhabiting them seek to manage. At any rate, the hierarchies are not going anywhere for the foreseeable future, even though both left and right have their respective fantasies regarding how they might be mitigated or even abolished; the best we can do, and perhaps the basis of a kind of left-right alliance broached by both Glenn Beck and Rand Paul, is to force the corporate order off its dependence on state largesse and on the state’s absorption of its socialized costs, so that it has to stand on its own, without exceptional legal protections or economic subsidies (what the state should be doing, once its umbilical cord to the corporate order is cut, reopens the right-left abyss). The self as brand/anti-brand/re-brand is probably the most productive assumption for now.

It is best to remember, though, that the self is a scavenger, gathering nourishment for the person and the soul, and whether the nomadic self I am describing here can find such nourishment is an open question.

