GABlog Generative Anthropology in the Public Sphere

May 4, 2020

Aspiration

Filed under: GA — adam @ 6:03 am

One broadly, maybe even universally held agreement among “postliberals” is that, contra liberalism, a properly ordered polity would have a unified project that would command unanimous consent, even enthusiasm. Society should be a team, not a collection of individuals. Indeed, even liberalism itself does not escape this compulsion, even if it gets obeyed indirectly, through imperatives like spreading equality and liberty. If the highest human aspiration is to “realize every individual’s potential,” wouldn’t that have to be a cooperative endeavor as well? As with much else in postliberalism, the problem is to make explicit obvious truths that liberalism obscures. In a sacral order, the community is established so as to serve the sacred being; the problem of social organization around a shared project only emerges in post-sacral orders. But if the question becomes, “to what should we, as a society, aspire,” we already border on the ridiculous—it sounds like we’re filling a slot in a questionnaire, rather than pursuing something organically grounded in our practices and institutions.

My starting point here is the deferral of appropriation on the originary scene. Deferral on the face of it is a “negative” act—something we don’t do. But it’s immediately positive and creative as well: we see and hear something new as a result of our deferral—the carcass we were about to fight over becomes a god, transforming us into a community, initiating morality, ritual and aesthetics. Any way that we find to talk about creation or invention will involve some permutation of this deferral: it will, that is, involve something like “standing back and observing the whole,” or “identifying an emergent pattern,” or some other intellectual act predicated upon suspending some immediate ambition and “reconfiguring” the desire that led us to it. If we want to pursue this in a more deliberate way, we would pay much more attention to ourselves as mimetic beings: every act that we carry out, indeed, every “sub-act,” or gesture, is modeled on some other’s practice. If we want to be original, we must first divest ourselves of our presumptions of originality.

Imitation is really a fascinating business. No one has thought this through as radically as Marcel Jousse, whose anthropology of “mimism” would have us look steadfastly at the mimetic construction of everything we do. In other words, it’s not as if we imitate for a while until we’re mature, and then we’re “ourselves” and we can think more conventionally in terms of individuals as self-contained, coherent psychological and moral beings. But at the same time, imitation is never “perfect,” since any act or gesture is embedded in a particular scene, and its imitation will take place on another scene, giving it a different set of meanings, even if the act or gesture itself, from a purely physical perspective (maybe we could prove it through a video recording) is identical. There could never be an end to the excavation of our acts—and desires, thoughts, resentments, judgments, etc.—in layers of mimetic articulation. But we don’t have to become full-time archaeologists of ourselves as mimetic constructs. We just have to learn how to notice the one thing Jousse neglects—the ways our mimisms intersect with their progenitors and derivatives in such a way as to cancel themselves. In other words, at a certain point, imitation becomes impossible because everyone doing the same thing makes it impossible for anyone to do that thing anymore. The “imagination” entails looking at a particular mimism on a particular scene and expanding that scene to the point of “mimic” self-cancellation. And any creative or original act will be one that modifies that scene so that the “mimism” can convert itself in such a way that imitative “drift” provides for the flourishing of the mimism. At least for a while.

This still seems rather centrifugal, though—so far, the discussion is too individualized. The problem is that there isn’t yet a coherent order that allows us to think in terms of consistently centered mimisms. That’s what our thinking has to anticipate and prepare for. With the breakdown of the sacral order, every individual can become a center, and this is what makes mimeticism uniquely uncontrolled and destructive in the modern world. The privileged position is to attract scapegoat-level attention to oneself so as to leverage that attention into an immunity to persecution, thereby liberating and capitalizing on one’s desires. Imagination under liberal capitalism, then, involves a constant oscillation between these two poles, which means the ongoing depletion of the moral “capital,” inherited from Christianity, that made the oscillation possible in the first place.

The crucified Jesus has operated so powerfully as a model through these developments as to become invisible—the “atheists,” whose entire cultural position is predicated upon their potential persecution by ignorant believers, are as steeped in Christian culture and morality as anyone else. It’s easy enough to see, given the iconography of Christianity, why the sacrifice of Jesus would become a template for seeing in a new way the treatment of the poor, the marginalized, the oppressed. But Jesus was not scapegoated for being powerless; quite to the contrary, the fear was that he was powerful enough to overturn all of humanity’s teachings regarding the divine and the moral order. The most important argument for mimetic theorists who wish to challenge liberalism to make, in fact, is that nothing in the condition of the powerless triggers scapegoating tendencies; quite to the contrary, it is always those who have or are believed to have “too much” power who are scapegoated. Even when we can observe instances of scapegoating targeting objectively powerless groups, it will always be because some kind of power is being attributed to that group, or to exemplary members of it. They are taken to represent some hidden, and therefore all the more dangerous, power. In that case, resistance to scapegoating is defense of the social center; indeed, even if one’s concern is with the marginalized, the case to make is that only confusion regarding the articulation and exercise of power at the center makes it possible to project sinister and occult powers onto the marginalized (or, for that matter, to force some of the powerful to operate through the sponsorship of marginalized agencies).

The defense of the center provides the key to the articulation of mimetics and therefore desire under desacralized conditions. To defend the center is to anticipate opposition to, subversions of, even indifference to, the center. The way to anticipate anti-centrism is through the study of mimisms in all of their forms, from the tiniest twitch to the grandest project; and, not just the study of, but the reconstruction of mimisms by teaching and learning how to anticipate the self-cancelings of those mimisms, now as these self-cancelings pertain to participation in the center. It is astonishing that, as far as I know, in spite of the basic assumption of mimetic theory that we learn through imitation, none of the major mimetic theorists—not Girard, not Gans—has paid the slightest attention to pedagogy (either as a limited practice in educational institutions or as a broader social modality). But that is where the answer must lie: if the problem is that we blindly enter into conflicts with models we refuse to acknowledge are models, then treat every situation as one in which someone learns from, and someone teaches, someone else. Even if we disagree about who is learning from whom at a given point, we can at least agree about the general “settings” of the encounter, and give each other the opportunity to learn and teach in turn—we can therefore work towards clarifying rather than obfuscating our relations. On desacralized terrain, the replacement for the archaic formal hierarchies—explicit distinctions in rank—must be the more “fractal” hierarchies of pedagogical relations. In fact, those formal hierarchies were always, at bottom, pedagogical relations as well, most obviously in perhaps the most fundamental—the parental relation, and the initiation of the young into the community.

Every social encounter is a pedagogical relation and is to be made more overtly so. This doesn’t mean we should become irritating didacts—relations can be made overt through an accentuated gesture as much as through words. The social order as pedagogical order implies a significant moral transformation: in every encounter, each one of us must either submit to the authority of the other or step forward and assert authority in setting the terms of the encounter and revealing its pedagogical dimension. We are all doing this already—as soon as you speak, you monopolize the field, however small, and on what authority do you dare to do that? But actually describing our social interactions in these terms would be impossible under liberalism, because acknowledging pervasive, systematic hierarchy on the micro-level leads us to look for more stable and formalized forms on the macro level. Now, to assert pedagogical authority is to invite scapegoating, but not in order to exploit the tendency while backed by broader social prohibitions; rather, it is to elicit the “mimic” structures that must be made productive, on the model of the political-pedagogical engagement I examined a couple of posts ago, in order to study them in their self-canceling logic.

So, let me return to my starting question—to what shall we aspire?—a question prompted by the various “prometheanisms” and “faustianisms” that have become mimetically constructive on the postliberal right, with an accompanying futurist aesthetic, a focus on space travel, and so on. The most fundamental aspiration is to render the imagination productive. Work toward the abolition of resentment toward those who try to earn pedagogical authority and accountability, and thereby help those aspirants earn it by participating in the conversion of mimisms: if we look closely at anything anyone wants, we can see that it will interfere with and be interfered with by what others want, and out of this prospectively antagonistic modeling what everyone wants can be transformed. Authority will be asserted in the process, because, unless we collapse back into a liberal frame, we have to acknowledge that there must be a component of “this is what you should want” in any genuine pedagogy. But it’s only pedagogy if the “should” is derived from an extended display of the desire in question being transfigured.

This still seems very formal, and at a certain point one wants “content.” What should we want, then—to conquer space? Terraform the earth itself? Make the depths of the ocean a new home? Eliminate disease? Liberate the human body from its own limitations? Abolish death? Maybe any and all of these—they’re not incompatible with each other, after all. And if there are going to be “factions” of the postliberal right, these would be good ways of self-distinguishing from others—these would be good arguments to have (they would provide startling and encouraging introductions for “normies” discovering these new political arenas), and would incidentally serve as an ongoing revelation of the squalidness of liberalism. Substantiating any of these aspirations, though, would entail turning all of us into the kind of people who could enthusiastically and competently contribute to such projects and set aside all desires that would interfere with doing so—and that more fundamental project is what I just referred to as making the imagination productive.

But maybe we can take this a little further. One implication of the “originary grammar” I have developed but have not yet explored very deeply is that we receive, quite literally, imperatives from objects. On the originary scene, the first humans were told something like “stop!”—strictly speaking, this is not yet an imperative, which emerges later, but that distinction is not important now. What matters is that the first compelling “word” we “hear” is from an object. In that case, we can learn how to “listen” to objects, to heed their imperatives. An act of deferral lets some object be—that act of deferral is iterated each time I “stop and look” or “inquire” rather than consume an object, or ignore it, or put it to some direct use. The object itself “catches my eye” and “tells” me to “hold on a minute.” So, all our inquiries, whether of the universe or the atom, are solicitations of imperatives from objects (even if the notion of “objects” becomes inadequate here). Those imperatives come through the very instruments we use to perceive, sense and measure phenomena (as Benjamin Bratton has pointed out, to “sense” is already to “measure”). The scientist wants to continually refine those instruments so as to “hear” more from the things, but this also means we want to further refine ourselves so as to build more sensitive instruments, and “build” people who can build and, first of all, want, more refined instruments—which means building institutions that can house such relations between people and instruments. So, all the things in the world, along with us and our instruments, are one. We want more of the world and more of the universe because it keeps telling us to do things we could never have imagined otherwise but can now see allow us to let in more of the world and less of the delusory desires and resentments that keep the world out because they compel us to demand our “part” of it. What all the things of the universe will tell us we cannot know until the refined instruments and those capable of using them do the necessary recording, but we can think in terms of making ourselves “part,” rather than demanding our part—that is, we can look for ways to participate in the unfolding of our relation to everything else.


April 25, 2020

Nominalization, Imperativity and Reading, Quick or Patient

Filed under: GA — adam @ 7:50 am

David Olson makes nominalization, in particular the nominalization of verbs, central to his theory of the metalanguage of literacy. To review: the metalanguage of literacy supplements those elements of the speech scene that cannot be directly represented by writing. These elements can be helpfully reduced to all those that involve the relation between the speaker and the speech of another which he is reporting. Whatever could be enacted in a speech scene—anything on the continuum from reverence to mockery—through tone, body language, etc., must be represented lexically. (This means that the spread of writing means the partial neutralization of mimesis—or its transformation into a more mediated, long-term affair.) Most of our conceptual vocabulary is comprised of variants of “say,” with “think,” “know,” “do,” “want,” “happen,” and “feel” taking up the rest of the space. (A lot of what is usually associated with thinking, and even knowing, I would suggest, is really concerned with the supplementation of “saying.”) It’s also worth pointing out that studies of academic discourse have shown that what distinguishes academic, or disciplinary, discourse from “ordinary” discourse is the pervasiveness of nominalizations and noun phrases, especially operating as the subjects of sentences. Anyone who reads an article from any academic journal can easily verify this.

It seems, then, that nominalization must be central to the generative logic of translation I started working on a few posts back. Wierzbicka’s primes are very verb heavy: we have say, want, do, happen, feel, think, see, hear, know, and touch. The nouns, meanwhile, are very generic: someone, people, all, something, thing, this, one, I, you—mostly pronouns, along with a few others like “place” and “time.” This makes sense, as neither names of people, names of places, nor names of gods could be in the primes, even though they must have been part of human language from very early on. Still, it’s interesting that even very simple nouns, like “food,” or “ground,” or “sky,” or others one might imagine (body parts that all humans have, for example) are absent. Perhaps they were too particularized through being associated with deities or some kind of divine presence; perhaps what seems to us to be, self-evidently, a separate “thing” can in fact be thingified in various ways. Verbs for various bodily functions, like breathing, are sources for many words denoting invisible objects (like “spirit”), but the invisible world is overwhelmingly populated by nominalizations of these prime verbs and those invented to supplement them.
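
Laid out as a small data structure, the imbalance described above looks like this; a minimal sketch in Python, where the grouping labels are mine, not Wierzbicka's own taxonomy.

# A minimal sketch of the prime inventory discussed above; the grouping
# labels are mine, not Wierzbicka's taxonomy.
PRIME_VERBS = ["say", "want", "do", "happen", "feel", "think",
               "see", "hear", "know", "touch"]
PRIME_NOMINALS = ["someone", "people", "all", "something", "thing",
                  "this", "one", "I", "you", "place", "time"]

if __name__ == "__main__":
    # The point made above: a rich verbal side against a generic,
    # mostly pronoun-like nominal side.
    print(f"{len(PRIME_VERBS)} prime verbs, {len(PRIME_NOMINALS)} generic nominals")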

The process of nominalization precedes its acceleration under writing—all of the verbs included in the primes have nominal forms, mostly deriving from some conjugation of the verb itself (at least in English, but I would assume this is generally the case); at most requiring a prefix or suffix (which perhaps marks that nominal as later)—thought, knowledge, feeling, want, doings or deeds, touch, hearing, sight, etc. What is the ontological status of these entities? We can trace a progression (relying, of course, on English grammar) from nominalizations that would be at one remove from the scene first represented: for example, if someone thought something, it is not a leap to refer to “a” thought, “the” thought, or “that” thought that they had; to the removal of determiners, getting us to “thought” as such. As the word is stripped of determiners, it becomes intelligible as the “object” of a specialized discipline. An oral world could have “thoughts,” but not “thought,” I assume.

Now, once we have a noun like “thought,” we can start attaching adjectives to it: political thought, cultural thought, critical thought, etc.; we can turn the noun into an adjective, like “thoughtful,” or use it as a modifier, as in “thought leader.” The center of gravity of the noun is the name, and so the tendency to generate nominalizations is rooted in the need to generate names, or centers (“nominalization” itself, of course, ultimately means “make into a name”). Verbs can’t be referred to, or placed at the center, in the same way—the verb “think,” by itself, can only be an imperative. So, nominalization, which is the linguistic and therefore more fundamental form of “reification” and “hypostatization,” is a descendant of the originary sign. A nominalization is “redeemed” in the same way, through the organization of “congregants” who can generate ostensives singularizing the nominalization. This is what a disciplinary space is: everyone in that space can say something about “thought” that everyone else in the space will recognize as saying that thing about “thought.” It’s circular, but so is all of sign use, and the circle is complete when the participation of our nominalization in events gives us something to point to, like, say, a text or “way of saying things” that we could describe as an “instance” of “thought transcending itself,” or something along those lines. (Attaching verbs to nominalizations so that they can start “doing” things is a further step toward entrenching them. The similarity to mythical beings is transparent, and has been noted many times. Maybe this is what at least some mythical beings originally were.)

A generative logic of translation wants to target nominalizations, then—not to destroy or discredit them, but to produce new disciplinary spaces out of them. You want to turn references to “thought” into practices of thinking, which you do before, but manifest in, saying—perhaps so that saying can also become knowing. Constructing relations between the primes keeps us focused on the “destiny” of these verbs. The reference to thought is telling you to think, to think about thinking, in fact, which is to say it is issuing an imperative. I want to be very practical here—I am thinking pedagogically, like a teacher, who wants to help others learn how to do things they didn’t know how to do before. In this case, I’m also learning (a teacher who doesn’t learn things in teaching is a bad teacher). A generative logic of translation is a reading and writing practice; more succinctly, a practice of literacy (“literacy” is ultimately a “re-nominalization” of “letter”). As a teacher I’m very hostile to references to “understanding” things, which usually means something like “talking about them more or less the same way I do” (people tend to think others “understand” them when those others speak of them more or less the way that person would). I want to provide means for rewriting, or translating—from one statement to another, or to a question or command. I much prefer a translation practice that “misunderstands,” that gets things completely wrong, to a show of “understanding” that flatters the teacher but really privileges the more mimetic student, the one who has learned how to game the system—a translation that gets it “wrong” might at least initiate a series of responses that creates a new space, while “understanding” wraps things up like a test question. You understand things when you want to prove how smart you are; learning (or teaching) something plunges you into imitation and the risk of being stupid.

So, I propose deriving imperatives from nominalizations—references to “thought” are, quite literally, telling you to think about thinking. Discourses about “thought” or, more modernly, “cognition” in general, which have these nominalizations “doing” all kinds of things and possessing all kinds of capacities and characteristics, are houses of cards—but the point is not so much to collapse them as to locate the point at which they would collapse most readily at the slightest movement. These points, I would hypothesize, are where “think” or “know” most closely bounds on, to the point of partly overlapping with, the other primes—think and know with each other, or either of them with want, say, feel, see, hear, do, or can. This provides us with a fulcrum. Of course, the cognitive sciences can speak of the relation between “cognition,” “desire,” “discourse,” “action,” and “sensation,” but only to show how they “influence” or “distort,” not constitute, each other—some pressure on those terms will fold them back into think or know, want, say, do and feel. You will always be able to locate some point where, for example, what one knows is “contaminated” by what one wants or, for that matter, what one hears, or touches. Again, this doesn’t necessarily mean one doesn’t “really” know that thing; it just means that knowing is a different kind of thing than we might have imagined.
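
As a toy illustration of this proposal, here is a minimal Python sketch that traces a nominalization back to a prime verb and emits the imperative it quietly issues; the little lexicon is hypothetical, a stand-in for the much fuller mapping the practice would require.

# A toy sketch of deriving imperatives from nominalizations: trace each
# nominalization back to a prime verb, then emit a reflexive imperative.
# The lexicon is hypothetical and deliberately tiny.
NOMINAL_TO_PRIME = {
    "thought": "think",
    "cognition": "think",
    "knowledge": "know",
    "desire": "want",
    "discourse": "say",
    "action": "do",
    "sensation": "feel",
}

def imperative_from(nominalization: str) -> str:
    """Fold a nominalization back into the imperative it issues."""
    prime = NOMINAL_TO_PRIME.get(nominalization.lower())
    if prime is None:
        return f"no prime found for '{nominalization}'; excavate further"
    # "References to 'thought' are, quite literally, telling you to
    # think about thinking."
    return f"{prime} about {prime}ing"

print(imperative_from("thought"))    # think about thinking
print(imperative_from("cognition"))  # think about thinking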

I want to again insist on putting “say” at the center here. There is something irreducible and irreplaceable about the exact words someone says, here and now—not a paraphrase, not “well the point was,” not “what they were really getting at,” but what they actually said when the moment called for words. All these are translations, which can be interesting as translations, not “clarifications.” And there is something infinitely generative about taking that utterance and translating it into all the idioms, contexts and media in which one has some degree of fluency. All thinking, doing, wanting and so on find their way through what someone says. Every nominalization can be folded back into something like “when (if) X happens, someone says (can say) Y.” To ask someone speaking of human “behavior” in some conceptual framework something like “what would this figure of the human you are constructing say if…” is a way of demanding the further minimalization of the hypothetical underpinnings of the claim. It’s very productive to force the transition from a statement like, say, “denial of emotional investment in relationship double binds is highly characteristic of…” to “what would one of these individuals in such a double bind say if someone else said…?” This creates a space for the oral within the literate while making explicit the scenic accretions effected by literacy. This fulfills one of the dreams of postmodernism, to make all writing “literature,” while also making “literature” a rigorous vehicle—to capture the entire meaning of a highly nominalized statement would require the construction of a complex scene, and perhaps alternate scenes, perhaps a long novel. But when time is an issue, one can write flash fiction.
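
The fold-back template itself is almost mechanical, which a few lines of Python can show; the sample happening and utterance below are invented, since the point is the form of the rewriting, not its content.

# A sketch of the fold-back template: "when (if) X happens, someone
# says (can say) Y". The example scene and utterance are invented.
def fold_back(x_happens: str, y_said: str) -> str:
    """Rewrite a nominalized claim as a hypothetical speech scene."""
    return f"When {x_happens}, someone can say: '{y_said}'"

print(fold_back(
    "someone names his emotional investment in the double bind",
    "That has nothing to do with it.",
))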

These would be scenes, moreover, on which both interlocutors are present, even if through an interface. What kind of double bind is constructed in the sample statement provided in the previous paragraph, and what is your investment in it, one might ask the author (for whom, in one’s own discourse, one would have to hypothesize answers). What would you, gentle author, say if… Now that you mention it, maybe this author has already said it—let’s look around. And then you’re reading the text in a very directed, “interested” way. You’re not writing 19th-century-style realist fiction, but a self-referential text exposing its own “devices”—but always to get at that question—what would someone say if…. What could someone say? What do I say? What will you say when…? And when you say that, what do you think others will do? What will you be able to say after they do it? For that matter, what are you saying right now? We always return to the crucible of the primes. We have a logic that moves from the highest level of generality and enters the most complex discourse, while remaining rooted in the most elementary relationships, where the primes overlap, interrupt and supplement each other. A reading and writing practice that can facilitate a quick intervention in a Twitter burst or the writing of a doctoral dissertation. With plenty of room for playfulness, experimentation, and extending the margin of error while developing new ways to test for error. The purpose of logic is to anticipate and address objections in advance; even better, though, is to have possible (and even impossible?) objections converted into broadcasters of what you say.

April 13, 2020

The Pursuit of Appiness

Filed under: GA — adam @ 6:38 am

Think about how medical treatment currently works—it invariably involves some kind of intervention from the outside. Of course, there’s preventive care, or simply taking care of yourself, by paying attention to diet, exercise and so on—taking care of yourself relies upon the body’s own metabolic interactions: ingesting too much sugar induces certain biochemical reactions that ultimately lead to weight gain or diabetes (which in turn affects the operation of certain organs such that…). Intervention, whether surgical or pharmaceutical, starts with the assumption that the body can no longer manage itself—decisions about lifestyle can no longer prevent certain metabolic interactions, or the failure of certain organs to “process” the results of such interactions (which itself would be a metabolic failure). And, so, some “cause” is introduced from the outside that targets certain metabolic interactions in order to suppress or enhance them. A lot of this seems to be ad hoc—quite often, it seems that no one is sure why a particular treatment works as it does—we can just verify, within some margin of error, that it does usually have the desired effect. And then it becomes necessary to monitor the body and run separate tests checking for “side effects,” i.e., consequences for metabolic processes other than the one being targeted for suppression or enhancement.

In other words, there is, in current medical research and practice, no totalizing engineering approach to human health, an approach that would transcend the natural/artificial distinction and make the organic metabolisms self-regulating even in response to breakdowns in normal metabolic operations. In fact, if we had such an approach, there wouldn’t really be “breakdowns”—the “mechanism” introduced into or, better, elicited from, the metabolic organization one is born with would include sensors that detect, well in advance, the signs that such “breakdowns” were likely in a given organ or process—and trigger, automatically, counteracting metabolic activity that, furthermore, is coordinated with metabolic activity throughout the body so that effects somewhere don’t disrupt satisfactory operations elsewhere. Such mechanisms would most likely be “placed” or “induced” near the genetic level of human functioning, somewhere along the line where genotype produces phenotypes. Maybe these mechanisms would work, in part, by leading the human organism to spontaneously reject the unhealthy and embrace the healthy—for example, by inducing disgust at those foods it would be worst for you to eat right now and hunger for those foods that would be most beneficial.

The kind of mechanism I am discussing here, and which is really not all that hard to imagine becoming reality in the coming decades, would essentially be an “app.” An app is an interface that creates a relation between a user and the cloud. The kind of biological app that is the object of my speculations here would place the individual human body in relation to the entire, continually updated database of biological, chemical and medical research produced globally; included in this database would be the archived information about all human bodies, past and present, upon which information (gathered by the very apps that are plugged into our bodies, which also become part of the archive) the individual app would draw in controlling the totality of internal metabolic activity. It’s hard to see how one could be against such developments—both in the sense that it’s hard not to see it as a tremendous improvement in human well-being and in the sense that it’s hard to imagine what could stop it. We might still die, because organs and functions might still “wear down” beyond the capacity of our total health app to completely reverse, but even death would be modulated so that we “ease into it” (as people occasionally do now, after a long, well-lived life) in a relatively painless, predictable way.
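
The loop this paragraph imagines has a simple shape: sense, consult the archive-backed cloud, counteract drift before it becomes a breakdown, and feed the reading back into the archive. A minimal Python caricature, in which every name, threshold and number is invented:

import random

# A caricature of the "total health app" as a closed control loop.
# All sensor names, thresholds and numbers here are invented.

def sense_marker() -> float:
    """Stand-in for a metabolic sensor reading (some blood marker, say)."""
    return random.gauss(1.0, 0.2)

def cloud_predicts_breakdown(reading: float, archive: list) -> bool:
    """Stub for the cloud-side model trained on the archive of all bodies."""
    baseline = sum(archive) / len(archive) if archive else 1.0
    return abs(reading - baseline) > 0.3  # invented early-warning threshold

def counteract(reading: float) -> None:
    """Stub for an induced correction (suppress or enhance a process)."""
    print(f"counteracting drift at reading {reading:.2f}")

archive = []
for _ in range(20):
    r = sense_marker()
    if cloud_predicts_breakdown(r, archive):
        counteract(r)      # act well in advance; no outside intervention
    archive.append(r)      # every reading also joins the cloud archive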

We could say that things get more complicated when we take into account that “health” includes “mental health,” and “mental health” is always going to be assessed by criteria that are at least in part historical, cultural and therefore political. But it may be that advances in research connecting brain states to mental conditions can help limit the abuse of treating people outside of the norm in terms of taste, interests or opinions as thereby “abnormal.” We do, apparently (suspending, for now, the justified skepticism regarding what anyone claims to know to a scientific certainty right now), know that schizophrenia, for example, is very directly correlated with, and therefore likely “caused by,” identifiable abnormal brain states. (Otherwise, how could we have drugs that modify the experience of schizophrenics?) Anyway, it’s hard to imagine resisting developments in this area either. I recently had occasion to read a paper, written by a student at a fairly elite liberal arts college, in which the issue of mental illness came up, and noticed that where the word “normal” (or “healthy”) would previously have been, the word “neuro-typical” now is. One can see how the distinction between “neuro-typical” and “neuro-atypical” would replace the distinction between “normal” and “abnormal” (or “healthy” and “sick”) in a victimary as well as a more strictly medical framework. If we locate different behaviors within a range of brain activity somewhere upon a bell curve, then judgment is removed while the question of treatment can be made more “consensual.” Perhaps a highly neuro-atypical individual can be made cognizant of how his brain activity contributes to his idiosyncratic behavior or thinking and, as long as that individual is not unduly disruptive or dangerous, he might not only be permitted to “embrace” his neuro-atypicality but compel others to respect it as well. It’s easy to see the emergent ethic here: if you’re not ready to enforce medical intervention or if, after the total health app has been installed, the cloud finds this person to be safe enough to leave to his habitual functioning, then you need to adjust to him just as much as you expect him to adjust to you.

All questions about “the good,” then, would become questions of designing apps that would “materialize” or “concretize” the cloud in a particular way. I’ve been using health care as a particularly illustrative example, but all of human life is taking shape along these lines: all practices are becoming apps, or, at least are, or could easily be, “appified.” Perhaps there will be decisions made on the cloud level and those made on the app level. At the cloud level, it would be determined, say, how neuro-atypical people can be permitted to be (depending upon a whole range of “factors”); on the app level, individuals would coordinate their neuro-atypicality with other individual types and institutional imperatives. Of course, there are apps that establish a very simple relation between the user and his environment—e.g., an app that lets you know the nutritional content of all the food in your refrigerator. But a lot of apps take the form of social experiments. Traffic apps fall into this category: if you use something like Waze to navigate, you not only rely on other people (who must be incentivized in some way) to warn you of speed traps, accidents up ahead, road work, potholes, etc., but you participate in a kind of paradoxical activity because the more people that are aware of present traffic conditions the more their knowledge will transform those conditions. So, a good traffic app would have to know how many people use that app, how they adjust their driving behavior accordingly, and what are the consequences of a certain number of people coming to learn that traffic will be lighter along one path than another an hour from now. Won’t that make traffic heavier along that path, and wouldn’t the app need to account for that?
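
The paradox in the traffic example can be made concrete with a toy simulation: if the app always broadcasts the lighter of two routes and most users comply, the broadcast falsifies itself each round. A minimal sketch, with invented numbers (Waze's actual routing logic is of course far more sophisticated):

# Toy model of the feedback paradox: the app reports the lighter of two
# routes, drivers follow the report, and the report thereby falsifies
# itself. All numbers are invented.
load = {"A": 600, "B": 400}

for step in range(6):
    lighter = min(load, key=load.get)     # what the app broadcasts
    heavier = max(load, key=load.get)
    switchers = int(0.8 * load[heavier])  # 80% on the heavier route comply
    load[heavier] -= switchers
    load[lighter] += switchers
    print(step, load)  # the recommended route oscillates into the congested one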

Cloud policies will be determined by those in the clouds, but we can think about app policy as providing the feedback any cloud policy would depend on. What I want to bring out here is the difference between app practices and “normal” political practices. Normal politics can be seen, by analogy, as a very crude version of the interventionist practices characteristic of contemporary medicine. Something has “gone wrong,” and you try to apply some arbitrary principle (“equality,” “democracy,” “freedom,” “balance of powers,” some institutional “best practice,” whatever) to “fix” it. Needless to say, the situation in politics is far worse than in medicine—no one really knows the cause and effect relations, along with all the “side effects,” of any policy (always introduced, it should be noted, in a highly compromised, proviso-ridden way—almost as if you couldn’t prescribe a drug to lower cholesterol without that drug also including some mood-adjustment “amendment”)—certainly not beyond the very short term (if you give people money, they will have that money, at least for 5 minutes; if you bomb a building, you will destroy the building, etc.).

“Appy” practices, meanwhile, would mimic the kind of self-regulation we could see to be already at work or possible within existing practices in some enhanced and explicit form. The goal is to act on the “genetic,” or generative, or scenic level and help bring order into the institutional “stack.” Much of this kind of work will have a satiric character. Take, for example, the way interventionist politics deals with the media—both sides, left and right, complain that it is “biased” and influenced by (or “in the pocket” of) “special interests” of one kind or another. Of course, there’s a lot of truth in such accusations. But they always are predicated upon a fantasy of a disinterested press serving a general public sharing the same perspectives and interests. The more media companies and institutions are independent centers of power, the less they can be anything other than information laundering extortion rackets whose sole purpose is to wield power over and on behalf of selected enemies and friends. Direct mouthpieces, whether of the government or specific institutions, would be more reliable, because at least you could imagine why that source wants to provide you with this information. But there’s little point to simply saying this, either—it doesn’t really help in filtering the vast swarm of information and disinformation swirling around us.

A more “appy” approach would be to attribute a plausible purpose to the media—say, to help people decide more intelligently whom they should vote for—and then translate all media pronouncements into something like “this source says that you do—or should—vote for politicians for X reason.” Let’s say some cable news anchor accuses or purports to prove that a particular politician has “lied” (always at some specific distance from some “truth”—attested to by someone, whom we either know or don’t know much about, who has “attested” to other things of varying degrees of truthfulness; always in a specific context, always in a relation to other things said which may be more or less true, always given certain assumptions about what the “liar” actually knows and doesn’t; always with specific presumed consequences, and so on). Well, then, that source is telling you that you’re the type of person who is less likely to vote for someone who lies in that way, to that degree, in that context, with those consequences, and so on.

Our “app,” then, would generate a mapping of all politicians (maybe past as well as present) who have told that precise “type” of lie (of course, what counts as a particular “type” of lie is the kind of question the app would draw upon the cloud to answer), along with, perhaps, politicians who have told “types” of lies that more and less closely approximate that type, with varying criteria introduced to determine which kinds of “lies” are “worse” (in which conditions, etc.); along with which “types” of voters voted for and against all those different “types” of lying politicians. Now, the point of an “appy” practice of political pedagogy is not to produce such a map but to allude to or indicate or enact its possibility and its necessity if that particular report of that politician’s “lie” is to mean something. In this way we learn to introduce, to use the idiom of contemporary media and politics, as “poisonous” a “pill” as possible into the relations between rulers, media and public. In the end, we would want to narrow down the question into a very specific “slice” of the “stack”: what are people with specific responsibilities saying, how and why, and how are those of us further downstream of those responsibilities listening to what they say—and how can we do so in a way that brings the way we listen into the closest possible conformity with what our own modicum of power (the reach of our apps) best enables us to do? This is the app we try to install; or, this is the mode of being as an app we wish to install. The politics of planetary computation is the politics of converting users into interfaces. I’m a political app insofar as, in listening to me, you become an interface yourself by creating a slice of the stack as a grammatical stack: a failed imperative, prolonged into a question, issuing in a declarative revealing a new ostensive from which you derive a new imperative. At the very least you can generate a wider range of responses (hear a broader range of imperatives) to being told a politician has lied (why was he obliged to tell the truth on that occasion, anyway?) well beyond the reactive interventionist one, demanding he be “held accountable”—and into appier regions, as we start to build up self-regulatory inhibitors and activators that come to take in more of the system, in its totality of utterances.
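
One way to picture the mapping such an app would generate, as a hedged Python sketch; every field, record and threshold below is invented, since the paragraph's point is the possibility of the map, not its contents.

from dataclasses import dataclass

# A sketch of the hypothetical lie-mapping app: archived "lies" typed by
# invented criteria, queried for cases of roughly the same "type".
@dataclass
class LieRecord:
    politician: str
    context: str      # the setting of the "lie"
    distance: int     # invented 1-5 scale: distance from some "truth"
    consequences: str

ARCHIVE = [
    LieRecord("Politician A", "budget", 2, "minor"),
    LieRecord("Politician B", "budget", 4, "major"),
    LieRecord("Politician C", "war", 5, "major"),
]

def similar_lies(case: LieRecord) -> list:
    """Return archived lies approximating the reported one's 'type'."""
    return [r for r in ARCHIVE
            if r.context == case.context and abs(r.distance - case.distance) <= 1]

reported = LieRecord("Politician D", "budget", 3, "unknown")
for match in similar_lies(reported):
    print(match.politician, match.distance, match.consequences)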

April 2, 2020

Conversivity

Filed under: GA — adam @ 9:09 am

Hamlet, before following his father’s ghost’s demand to avenge his death, decides to put on a play. The play is to reproduce the event of Claudius’s murder of his brother, Hamlet’s father, and the reasoning is that if Claudius is indeed guilty he must betray that guilt in watching a performance in imitation of the murder he committed. It works even better than Hamlet had expected—not only is Claudius visibly disturbed by the performance, but it sends him to church in a repentant mood, where Hamlet hears him virtually confess to the murder. In fact, for the Shakespeare scholar Harold Goddard, the real tragedy of the play is that Hamlet does not continue to pursue this so far successful method of working through Claudius’s conscience to weaken his will to persist in enjoying the fruits of his crime. Perhaps Hamlet fears that having to confront a penitent Claudius and then decide what to do would leave him even more paralyzed than we see him being throughout the play.

Hamlet’s abandoned method is a model of political-pedagogical engagement—a much more effective one than accusations of some kind of betrayal, or attempts through argument to convince the other with lists of pros and cons or some kind of proof. Accusations and arguments work on the margins, when much is already agreed upon and we are confronting, together, a decision that has to be made. A general who wants to win a war, or a surgeon general who wants to stop an epidemic, can find the evidence provided by one subordinate supporting one path of action more convincing than the evidence provided by another subordinate for another path because they are all on the same page, they all know what the goal is, what success would look like, what counts as a reasonable risk assessment, and so on. Within those parameters, you can expect an objective case to be heard fairly. Similarly, accusations are effective motivators when we are committed to making the same sacrifices in the name of a shared objective—it would obviously be ridiculous to accuse your enemy or a neutral of betraying you. Hamlet’s aesthetic approach, though, can be made to work under any conditions, for any audience, even if Hamlet’s own version of this approach is itself limited—it would have been much less effective, we must assume, if Claudius had been aware of what Hamlet was up to; and it would be even less effective for audiences less naively willing to suspend disbelief for fictional representations.

The aesthetic-political pedagogy involved, then, doesn’t necessarily involve putting on a literal reproduction of the failings or crimes of your antagonist or interlocutor—we’re all too suspicious of such transparent attempts at manipulation, anyway. Rather, it involves soliciting and representing the other’s sovereign imaginary. There’s never any neutral engagement—the other doesn’t address you “individual to individual”; you are always addressed as a friend or enemy, collaborator or potential collaborator or obstacle, leader or follower, etc. Furthermore, you are always addressed on a particular scene, in a particular medium, with a particular actual or possible audience—even a private conversation is likely to be repeated and ramify in various ways, in various settings. The aesthetic-pedagogical stance is to accentuate the mode of address—to make what is implicit in it a bit more explicit. You may be wrong—we can easily misread each other—but even then the other is solicited to represent the scene in another way, producing a new mode of address, and you can go from there. You need to accentuate the mode of address enough so that it can be noticed, but not enough to collapse the scene—the point is to exhaust the implications of the scene.

To turn an implicit role into an explicit one is to foreground the mimetic and scenic character of all social activity. There’s always a scene but there’s no set script, just fragments derived from previous, “similar” scenes—so, it’s not a question of line reading but of constructing the scene together. You do this by formalizing moves made as explicitly as possible—explicitly, not necessarily literally (but there might be quite a bit of literalness as well—we tend to feel stupid when we ask for things to be made literal, but sometimes it’s the most intelligent thing to do). Any scene is a descendant of a long line of previous scenes, and is nested in a vast complex of other scenes. One could try and step outside of the scene and provide a “history” or “sociology” of the scene. Nothing wrong with that, as long as you know that this involves stepping out of one scene onto another, not a transcendence of scenicity itself. But that’s not what interests me here. The kind of aesthetic-pedagogical practice I’m proposing here involves soliciting the boundaries of the scene within the scene itself.

Another model: Freud’s therapeutic practice of transference. We can leave aside what we think about Freud’s psychology or the efficacy of psychoanalysis—Freud’s theory of transference, contrary to much of his own theory, is part of the 20th-century turn away from metalanguages and towards an understanding of knowledge as participation. Freud realized that if you told, say, some young man that his inability to (for example) accept authority figures is a result of his love for his mother and hatred of his father (etc.), you won’t get anywhere. In fact, he might “agree” with you, and it still wouldn’t make any difference—the agreement would simply be recuperated as part of his “repression” and “resistance.” (You will find exactly the same thing if you explain to some conventional conservative the real power relations producing the concept of “freedom” he takes for granted.)

What does work is eliciting a response to yourself that is really meant for the resented authority figure. When the analysand starts accusing the analyst of constantly demeaning him, of deliberately frustrating his ambitions, of never really wanting him to succeed, and so on, then we’re getting somewhere. The analyst clearly can’t be doing any of these things, which means these accusations aren’t meant for him, and the analysand can be allowed to get to the point where the discrepancy between the accusations and any possible response to them on the part of the analyst becomes so obvious as to be inescapable. The unthought mimetic structure of the analysand’s resentments can be laid out on the table. Knowledge can then take the form of a(n ostensive) revelation rather than a (declarative) proof. The disinterested (although not exactly, because Freud also came to theorize a “counter-transference” on the part of the analyst) analyst is in a position to present the “blank” surface upon which the analysand can project repressed scenes and desires; the equivalent of that surface in the kinds of encounters and performances I’m suggesting models for here would vary, but the need for a kind of carefully prepared “trolling” is implicit. The point isn’t to generate outrage, but the possibility of a revelation of some disproportion between the resentment expressed by the other and any possible responsibility for generating that resentment on the part of oneself. The goal is to be able to say something like, “you can’t be this angry—or angry like this—with me; you’re imagining yourself on some other scene.” That scene can then be unfolded, in a spirit of inquiry—if the interlocutor wants to turn around and suggest you’re carrying some scenic baggage around with you as well, then, fine—we can open that up as well.

It seems to me that a very close examination of and engagement with language as the form of events is being marginalized today. Benjamin Bratton likes to reverse the Derridean slogan: “almost everything is outside of the text.” The outside of the text is everything that can be handled mathematically and “materially”—engineering, computing, design. These are all languages one can speak. It’s possible to lose patience with the history and forms of appearance of words and other pieces of language, and just say, “but the point is…” The “point” is our entry into a metalanguage whose a priori clarity we must pretend to in order to enter—often presented as the “common sense” we all know. But people always say things one way rather than another, and words, phrases and constructions have acquired specific centering powers for a reason. Bratton’s own style is one we might call “ultra-declarative”—every word in every sentence can be traced to some metalanguage, some discipline, creating a kind of forbidding inter-discipline—there is nothing “ostensive” or inviting, no privileged experience being appealed to, no “we.” He doesn’t “touch base” with you. If you read Buckminster Fuller you’ll see something similar. This is a form of writing with its own power, and it produces a kind of utopian or perhaps “heterotopian” effect. But it’s definitely a form of writing, one that implicitly asks you whether you’d like to be addressed as a “user” or a “designer”—if the former, we’re talking about you, not with you.

It’s in the language that we use that the boundaries of the scene are constituted and made evident. It’s always possible to try and contain the scene by making explicit rules about what can be said here. Boundaries need to be set, but if they’re set defensively they’re more likely to fail, because such attempts are always, like old generals, fighting the last war. It is other scenes that make you a delegate on the present scene. Your responsibility to share some task distributed across contemporaneous scenes, or to continue some project sent to you from previous scenes, is what constitutes the boundary of the scene. Maximizing your responsibility for the things you can be responsible for (because you have the “quantum” of power enabling you to enact such responsibility) and treating others as co-responsible in accord with their powers is what creates the boundaries of the scene. Maximum distinction from other scenes is also maximum embedment of the present scene in those scenes. Self-presentation relies upon the possibility of such a moral relation with the other, while surfacing and representing the interference other scenes exercise upon that relation.

So, the role-playing or enactment I began by talking about ultimately aims at maintaining the boundary of the scene as maximal distinction and/as embedment. This involves ordering what we might call the “grammatical stack”: the articulation of ostensive-imperative-interrogative-declarative. Ostensives generate imperatives; but not all of those imperatives are heard, and therefore many go unheeded—we could say they never “make a sound.” Imperatives extend themselves into interrogatives, but here, too, there is much leakage, as plenty of imperatives trail off into oblivion. And interrogatives are converted into declaratives, but not all of them can be at a given moment. There are stray linguistic acts scattered around, but they’re all there in some form. Your speech (or media enactment, or interfaciality) is good when it articulates a form of the grammatical stack: your declaratives answer the most precise questions that emerge from the most urgent imperatives that were generated by the most anomalous ostensives. This is how one acts appropriately, as needed, “in the moment.” You can only do this by reaching into others’ declaratives, though. It is in representing the mismatches of the other’s articulation of the stack that your own stack takes shape. And there’s always some mismatch, even if only because the other’s stack has generated new ostensives that you now can, but the other couldn’t have, draw imperatives from.
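
The articulation and its leakage can be caricatured as a pipeline in a few lines of Python; the attrition ratios below are invented placeholders for the "leakage" described above.

# A caricature of the grammatical stack as a leaky pipeline: ostensives
# generate imperatives, imperatives extend into interrogatives,
# interrogatives convert into declaratives, with attrition at each step.
# The filtering ratios are invented.
ostensives = [f"ostensive-{i}" for i in range(8)]
imperatives = [f"imperative<{o}" for o in ostensives]
heard = imperatives[::2]                   # many imperatives never "make a sound"
interrogatives = [f"question<{i}" for i in heard]
declaratives = [f"answer<{q}" for q in interrogatives[:2]]  # not all convert now

for level in (ostensives, imperatives, heard, interrogatives, declaratives):
    print(len(level), level[0] if level else "-")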

This means that we have to be readers of texts of all kinds, including the texts of each other’s self-presentation. I’m defending “close reading,” but I want a form of close reading that travels a bit more lightly than the kind I learned as a graduate student. The closeness of your reading is manifested in the way you accentuate the role the other attributes, not completely knowingly, to you—getting it “right,” or approximating and translating more precisely as a scene unfolds. There’s still very much a place for detailed readings of complex texts—it’s becoming a lost art, and fewer and fewer people know the power of this practice. But a future post will lay the groundwork for a practice of close reading that will also be a quick and selective, “on the ground,” reading. (Somehow I have come to imagine myself on a scene where I am the target of the accusation of defending an antiquated form of linguistic practice, and have elected to plead guilty with circumstances so extenuating that they invalidate the accusation.)

So, we don’t need to implicate one another in murder—just in being less “present” than we might be to the traditions, obligations and discourses we participate in. One refuses the imperative exchange offered up—competing claims to centrality, whether personal, ideological or moral, that can’t be settled. So much current discussion is modeled on the debate and courtroom forms of contestation, as if some transcendent judge will step in and declare us victorious, rather than on an inquiry model. In the former you look for weak points, while in the latter you look for anomalies and paradoxes. After all, someone very interesting, with new things to say, might contradict himself more than someone who only makes safe and boring statements. There’s always some hypothesis implicit in someone’s discourse—really, anyone’s discourse can be articulated as a “stack” of hypotheses, in various relations of dependence upon each other—that’s being stretched in any utterance. If you derive some such hypothesis from the other’s discourse, including the other’s “accusations” directed at you (we are all stacks of walking, talking hypotheses), then your response can transform the scene by offering a test of that hypothesis.

March 24, 2020

The Stack, the City

Filed under: GA — adam @ 8:13 am

This will be my first run at Benjamin Bratton’s The Stack (2016), a book that is extremely interesting in its own right (and likely to continue to be so) while also representing a new area of inquiry—familiar with postmodern theory, and drawing heavily upon thinkers like Foucault and Deleuze, while taking full account of all the implications of “planetary-wide computation.” As I mentioned a little while back, while Bratton, together with his colleagues at the Moscow Strelka Institute (from which much more is promised) and the e-flux journal, is certainly “leftist,” he can barely be bothered to even pay lip service to the trendy race, gender, sexuality issues, or gesture toward “power and wealth disparities.” Rather, his politics is almost exclusively concerned with climate change, and in reading Bratton and some of his colleagues it becomes fairly obvious that what most fascinates them in ecology is the pretext it provides for design projects that would match the scope of the supposed problem and draw upon the resources available through planetary computation. In fact, if, rather than obsessing over trying to minimize (and even shrink) the amount of carbon in the environment, we were to say, “well, why don’t we just accept that all the things the climate changers say are going to happen—melting polar caps, flooded coastlines, super-storms and the rest—will happen, and redesign our human habitat in response,” we’d have an “absolutist” or “autocratic” project precisely parallel to Bratton’s in scope, ambition, and disregard for present political pieties.

Bratton sees planetary scale computation as a challenge, not necessarily insurmountable, to existing forms of sovereignty. He shifts Schmitt’s “nomos” from the earth to the “cloud,” as in cloud computing. The “stack” is the vertical and “accidental” articulation of different “layers”: the Cloud layer, the Earth layer, the City layer, the Address layer, the Interface layer, and the User layer. This model is clearly meant to replace or significantly “update” our outdated models of nations, sovereignty, citizenship, rights, and all the rest—but the problem of articulating all these levels coherently leaves open the possibility that some kind of traditionally conceived sovereignty (political will) might be beneficial or even necessary to help create the “stack of the future.” This opens the possibility for very interesting discussions. Before saying a little about each of these layers, and zeroing in on one in particular, I want to point out that, with the exception, I suppose, of the “Cloud” and “Earth” layers, which seem to be clearly the highest and lowest, respectively, the layers seem to me to be less piled on top of than wedged (in very complicated and uneven ways) into each other.

The Cloud is the layer of the accumulation and processing of the massive amounts of data now produced, intentionally and inadvertently, through all of our daily activities. The Cloud sovereigns are Google, Apple, Amazon, Facebook (I’m not sure whether Bratton would—or should—put Twitter, or others, into this pantheon). Google seems to be primus inter pares here. I don’t think anyone needs to be convinced that whether and how these “polities” transcend and subordinate (or eliminate), on the one hand, or are integrated into, on the other hand, traditional forms of sovereignty, is one of the more pressing medium-term questions of the present order. The Earth is the earth as the source of the massive ongoing extraction of raw materials required to keep the Cloud going—the entire earth being scoured for minerals and power sources, in the use of which planetary-scale computation dwarfs all other forms of power use by a great deal. And, of course, the Earth absorbs all the consequences of this enormous burning of energy. Needless to say, all kinds of questions of economic and political control enter into ensuring continual access to (and responsibility for) Earth. The Address layer is where institutions and individuals (the latter increasingly through institutions) gain access and make themselves accessible to the Cloud; as an Address, each of us is entered into the Cloud in various ways, from various points of entry. The Interface layer is the ways in which users are provided access to the Cloud and through it to institutions. There is always an Interface, and the Interface level is the one where the vocabulary of the Stack most overlaps with more familiar vocabularies—we start to notice that every human interaction involves (or can be described in terms of) some kind of “interface,” which is probably going to replace the older, more philosophical term “mediation.” The Interface is a site of interesting design problems—the way the website looks and works, the series of clicks one must employ to “enter” some online enclave, is enormously consequential for the shape of the subsequent “exchange.” And we all know what the “User” is, since we are all users, all day long, at various sites. Bratton seems to me to be suggesting pretty strongly that “User” (with its, as I’ve seen others point out, connotations of addiction and dependency) is coming to replace “citizen” as the way we are all identified within and participate in the Stack.
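
For reference, the six layers can be listed as a simple ordered structure; the one-line glosses below paraphrase the paragraph above, and the bottom-to-top ordering follows Bratton's book.

# The six layers of Bratton's Stack, bottom to top, with one-line glosses
# paraphrased from the discussion above.
STACK_LAYERS = [
    ("Earth",     "raw materials and energy extracted to keep the Cloud going"),
    ("Cloud",     "planetary accumulation and processing of data; the Cloud sovereigns"),
    ("City",      "conglomerations of users and architectural interfaces"),
    ("Address",   "points at which institutions and individuals become accessible"),
    ("Interface", "designed means of access; successor to the term 'mediation'"),
    ("User",      "human and non-human agents, increasingly replacing 'citizen'"),
]

for name, gloss in STACK_LAYERS:
    print(f"{name:>9}: {gloss}")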

Furthermore, Bratton makes it clear that Users are not necessarily human—in fact, the vast majority of them are not—or, at least, that will eventually be the case. Companies and institutions can set up proxy users, automated users with addresses through which business can be transacted. And this brings us to another aspect of what, for now, I'll call "the thought of the Stack"—its development of tendencies within posthuman and postmetaphysical discourses that relativize or, better, "relationalize" the human in relation to the non-human—the mechanical and algorithmic as well as the animal, vegetable and mineral. To put it simply, humans are not the only agents—although the question seems to be left open (Bratton often seems ready to close it, though) as to whether humans are a particularly important or special kind of agent. The transcendence of liberalism would be the transcendence of humanism as well, so there are legitimate questions for postliberals here—certainly, if we assume that desire and resentment are always of the center, that we only have being in and through the center, we're not exactly "humanists" either, insofar as humanism means putting humans at the center. I would insist on the distinctiveness of joint attention, but animals certainly exercise attention, the metabolics and chemical composition of other materials can be said to have some form of "tendency" analogous to attention (we could invoke Aristotle here, or point out that "attention" might be on a continuum with something like "responsiveness"), and our machines have simulations of attention and intention programmed into them—so, humanity's "specialization" within the Stack can be acknowledged while we see a continuum along various "layers" of being. Anyway, I just mark these as questions to be taken up as more of us, I hope, familiarize ourselves with Bratton's and his colleagues' work.

This brings us to the City layer, the one that I think really stands out here—all the rest of the layers have come into existence over the past few decades, but there have been cities for 10,000 years. The city is, by definition and etymology, a political entity. Bratton, it seems to me, ultimately wants to see the city insofar as it is integrated into the other layers—as a conglomeration of users and architectural interfaces that allow the Cloud nomos to organize production, circulation and consumption. But it's impossible to avoid questions of power here, and Bratton draws upon Deleuze's concept of a "society of control," which Deleuze saw as replacing Foucault's "disciplinary society"—whereas the disciplinary society, through institutions like schools, prisons, militaries, factories, etc., worked directly on the bodies of its subjects, the society of control "modulates" the interfacial means providing ingress and egress to various institutions and interactions. This distinction has always seemed to me overstated, insofar as Foucault's notion of "panopticism" already includes the idea of self-regulation in response to anticipated responses to one's possible behavior, but we don't need to "relitigate" this debate within postmodern theory here (or anywhere else, probably). Either way, controlling behavior by making it clear that certain kinds of decisions will give you a bad credit rating a decade down the line is far more effective than constantly punishing or shaming people for trivial purchases—at least on a systemic, if not always on an individual, level. (Distinguishing between those who need constant "stimuli" and those who can find patterns and anticipate is also a good way of sorting people out.)

Bratton's discussion of the City layer, like the rest of his discussion, is complex, interesting and rather breathless—he refers back to the ancient city as the city of temples, sacrifice, and distribution (not much, if anything, on palaces and kings, though), discusses airports as a model for thinking the contemporary city, and much else. Still, the fact that the capitals of countries, where governments are seated, are cities seems to interest him less, as does the imperial nature of at least the major cities. Cities are the center. Like markets and money, to which cities are constitutively related, cities seem to have generally (if not invariably) been created by the imperial center. Jane Jacobs makes a very interesting, counter-intuitive argument in her The Economy of Cities, to the effect that the urban precedes the rural—that, in fact, agricultural communities were established to feed the city, rather than, as seems more "natural," cities being a result of the development of farming to the point where extensive exchange became possible (this seemingly natural assumption is strikingly and suspiciously similar to the seemingly natural assumption of barter growing to the point where money became necessary to mediate the sheer volume of exchanges). At any rate, the better we get at discussing "the City," the better we will be able to argue that the agency needed to make all the layers of the Stack more consistent, internally and with each other, will come from within the City layer. And, unless you believe in the possibility of technocracy (as Bratton does), that is the kind of argument you will need to make.

Needless to say, there have been lots of cities and many different kinds of cities. But perhaps we can say that cities are where individuals are abstracted from kinship and cult relations and related directly to an at least potentially desacralized authority. Even when there's a cult of the city, it's a cult abstracted from and shared by the separate tribes and families with their traditional cults. The city is where divine kingship replaces sacral kingship, and where the mobilization of masses of instrumentalized and de-socialized slave laborers is initiated. The city is therefore the site of intensified and distributed mimetic activity, of endless mimetic crises and deferrals, which are in turn converted into models of governance. The pastoral, the aesthetic mode that counterposes the natural and virtuous countryside to the artificial and vicious city, is itself a product of and reflection upon urban life—the "artificial" city is the source of "Nature" (part of Bratton's project is to eliminate the entire notion of "nature," as well as "culture," by acknowledging the artificiality of everything—a development heralded by the city). The city is the cynosure and produces cynosures ("celebrities"). Cities are modeled on other cities and are modeled and remodeled on themselves, or some imaginary project of themselves. To capture the city is at least a precondition to capturing the entire country—sometimes it actually seems to be a sufficient condition.

Cities have an egalitarian tendency, due to their abstractness, but they are above all centers generating satellites: other cities, suburbs and countryside, geopolitical peripheries. It is only from the standpoint of the city—Washington, D.C. in relation to New York and LA, in relation to Des Moines, Dallas and Orlando, in relation to the "heartland"; in relation to London, Paris, Beijing, Moscow, Dubai, Jerusalem, Cairo, and so on, and through these centers to other peripheries (and feel free to contest my American-centrism if you think another order is emerging)—and only by subordinating the Stack to a coherent ordering of these center-periphery relations, that the Stack can be integrated into the human order, rather than the reverse. But these reflections are, I emphasize, by way of laying the groundwork for engaging these new disciplinary spaces.

Addendum:


After writing this post, I happened to come across an essay (“On Anthropolysis,” published in 2018) by Bratton that touches on the question of human origins. Here are the first two paragraphs:


Anthropogeny is the study of human origins, of how something that was not quite human becomes human. It considers what enables and curtails us today: tool-making and prehensile grasp, the pre-frontal cortex and abstraction, figuration and war, mastering fire and culinary chemistry, plastics and metals, the philosophical paths to agricultural urbanism and more. Given that Darwinian biology and Huttonian geology are such new perspectives, we may say that anthropogeny, in any kind of scientific sense, is only very recently possible. Before, human emergence was considered from the distorting perspective of local folklores. Creation myths, sacred and secular, have been placeholders for anthropogeny, and still now defend their turf. When Hegel was binding the history of the world to the history of European national self-identity, it was assumed among his public that the age of the planet could be measured in a few millennia (10^3 or 10^4 years), not aeons (10^9 years). The fabrication of social memory and the intuition of planetary duration were thought to operate in closely paired natural rhythms. While the deep time of the genomic and geologic record shows that they do not, the illusion of their contemporaneity also brought dark consequences that, strangely enough, would actualize that same illusion. In the subsequent era, the meta-consequence of this short-sighted conceit is the Anthropocene itself, a period in which local economic history has in fact determined planetary circumstances in its own image. The temporal binding of social and planetary time has been, in this way, a self-fulfilling superstition.

As such, how is the anthropos of anthropogeny similar to or different from the anthropos of the Anthropocene? Are they correspondent? Does the appearance of the human lead inevitably toward, if not this particular Anthropocene, then an Anthropocene, and some eventual strong binding of social and geologic economies? Whether the two anthropoi are alike or unlike in origin, can they converge or diverge? Instead of becoming human, does a sharp temporal linking also speak to becoming something else? That is, in what ways is a post-Anthropocene—a geo-historical era to come, eventually—aligned with "anthropolysis" or the inverse of anthropogeny—a becoming inhuman, posthuman, unhuman, or at least a very different sort of human?


The Anthropocene is that period in the history of the earth in which the earth is decisively marked, even made over, by human activity. There is some interesting equivocation in Bratton's discussion here. On the one hand, human origins can only be seriously explored after the scientific innovations of Darwinian theory and modern geology—prior to that, there was plenty of talk of human origins, but all of it mythical and folkloric (Bratton's Voltairean contempt for anything smacking of religion or myth comes out especially strongly in this essay). In other words, only in the Anthropocene could a plausible account of human origins emerge—even if Bratton doesn't consider the question important enough to do more than gesture towards brain development, war, fire and food. What we discover in and through the Anthropocene is that the earth and its history have no regard for human scale. At the same time, the delusional belief that the history of the earth was tailored to human needs and purposes, and was therefore to be mastered, was the very attitude that, in a "self-fulfilling prophecy," produced the Anthropocene, the age in which the human transforms and even endangers the earth. It then makes sense for Bratton to ask whether "the appearance of the human lead[s] inevitably toward, if not this particular Anthropocene, then an Anthropocene."

We are in the middle of some very interesting paradoxes here. What kind of being must this human be if it was "destined" to produce some Anthropocene? Presumably a being compelled to see itself as essential to the world, to see the world as created for its own sake. Why should developments in the cortex, the mastering of fire, and so on create such a being? That the human leads to the Anthropocene, and the Anthropocene leads to Anthropolysis, the "breaking up" of the human into the "inhuman, posthuman, unhuman, or at least a very different sort of human," is very suggestive. But most of the rest of the essay is an attack on contemporary "reactionaries," who wish to return to national, ethnic, religious, etc., fairy tales and reject the science that will remake humans into—what, exactly, and why?—the essay finally drifting in and out of various science fiction visions. The limits of Bratton's anthropolysis lie in his refusal to take seriously the question of anthropogenesis. But he does end with the following thought:

If the Anthropocene binds social time to planetary time, then let the former scale up to the latter, not the latter down to the former. With maximum demystification, make human economies operate according to the geologic scale we found hiding under the rocks. This inversion of the temporal binding we have is the kind of good definition of the post-Anthropocene that we need, and the inversion of the humanist position and perspective it would require is the anthropolysis we want.

In a way, this formulation parallels that of the inherently anthropocenic human—in both cases, it seems essential that the human scale match the planetary scale. The human must make itself a match for the planetary; or, to put it in terms that might repel Bratton, the human has to make the planet a home. I'll appeal here to Walter Ong, who, in his posthumously published Language as Hermeneutic: A Primer on the Word and Digitization, argues that the ongoing "analysis" of reality through the process of breaking it up into smaller and smaller "bits" in fact raises more questions of "interpretation" at each point along the way. Similarly, the process of anthropolysis, of becoming a "very different sort of human" (aren't we always becoming a different sort of human?), raises questions of anthropogenesis. GA has not, perhaps, paid enough attention to the human as a world maker, but the originary hypothesis has us zero in on the human as a scene maker, or, we might say, a stage designer. Bratton is right: our stage is now the planet, and we will be designing it, one way or another. Bratton, though, seems to want to clear the stage of the clutter caused by those who still want to reduce the planetary to their all-too-human scale. The anthropomorphic way of thinking about it is to see our discoveries regarding the materials with which we are to design, and the spaces upon which we have to stage our shows, as, simultaneously, revelations regarding the new roles we and our fellow players might inhabit. We can be patient as we (diligently) elicit these possibilities and try out different ways of scaling things up, or restaging—and, really, Bratton can afford to be patient too, because whatever sponsors he might have in mind are not going to scale up to the dimensions of his project anytime soon.
