GABlog
Generative Anthropology in the Public Sphere

March 26, 2019

The Central Imaginary

Filed under: GA — adam @ 12:33 pm

A while back I formulated the concept of the “sovereign imaginary.” This concept names the assumption, made by anyone who expresses a desire or some resentment by saying “we should…” or “someone should…,” that there is some authority who could do the thing “we should” do. If you say “Medicare for all,” you imply a model of a state that would implement Medicare for all and would do so in the way you intend. If you say “Medicare for all,” you’re not thinking of the frauds that will be parasitic on it, the bureaucrats who will make cruel and capricious decisions, the drug companies that will donate to politicians who will push to have their drugs purchased at high prices, etc. In other words, you airbrush out of the picture all of the crisscrossing powers that would make the reality of Medicare for all far different from the intentions of its supporters. You imagine a unitary executive power, who issues orders that will be obeyed by subordinates, who will in turn issue orders obeyed by their subordinates; you imagine competent people with integrity placed where they belong and allowed to do their jobs. Even if you say, yes, I know there is corruption and incompetence and that bureaucracies develop their own interests, etc., you are still assuming that these are marginal to the sovereign power you imagine—if not, you wouldn’t be able to say “we should…” This seems to me a very useful observation to make because, if it is accepted by an interlocutor (and it’s very hard to deny), the following conclusion is also very hard to evade: such a sovereign power might have very different ideas on how to handle the health care system, and, freed of all the interfering powers (all the conflicting “we shoulds”), would have very little reason to care what you think. So, implicit in your political desire is its cancellation. Even better, the same must be true for me, and for anyone else participating in the conversation. So, instead of arguing about Medicare for all vs. private insurance vs. treatment for cash, we can talk amicably about something upon which we have just found we agree: there “should be” a central authority that can carry out policies unhindered by interest groups, nosy NGOs, bureaucratic factions, and so on.

Now, we are no more in a position to institute our desire for clear and secure central authority than we are to implement our version of Medicare for all, and so arguments over how to do this are equally pointless. We don’t need to imitate the pathetic revolutionary movements that split into a dozen factions over how to define a particular institutional reality or assess a particular event. But we are now listening to the center, and we can ask: what kind of practices will enable us to project possible paths towards clarified and secure central authority, to prioritize among those paths, to invest what energy and resources we have in the most favorable paths, all the while maintaining our initial agreement that all our desires and resentments indeed point in that direction? Even more, the projection of possible paths and the prioritization among them should be guided by the need to maximize that agreement, to spread it, and to ground it more thoroughly in the disciplinary spaces we enter and sustain. That is the kind of activity that will let us see the possible paths when they take shape, and distinguish among the opportunities they offer. What is essential here is that this imaginary is in our language—no one expressing a political desire can be exempt. So, every conversation about every policy or every social evil (poverty, “racism,” etc.) can be directly converted into a conversation about the kind of authority you seem to be imagining as capable of eliminating or mitigating that evil (or perhaps redefining it as not-evil) or implementing that policy, and the kind of anthropological, epistemological, ontological, and other assumptions you must be making so as to consider such an authority worth considering. And this approach can be applied to all of culture, not just narrowly political discussions—a movie will represent a particular sovereign imaginary, as will a dispute between parent and child, a conversation between friends, a psychological theory, and so on. Even if you want to argue for democracy or liberalism, you have to be imagining a sovereign that can protect “free speech,” the integrity of elections, whatever you imagine to be the role of the media in informing the public, and all the rest. Even a globalist, even an anarchist, inhabits a sovereign imaginary, whether it be international human rights courts and organizations mediating trade disputes, on the one hand, or spontaneously formed agreements between unbound individuals, on the other hand. The sovereign imaginaries, when acknowledged, are the starting point of needed conversations; when unacknowledged, they are the sources of all the conflicts over the actual sovereign. So, making them explicit is the first step toward ending those conflicts, towards overcoming ideology.

Needless to say, I continue to consider this concept essential and unimpeachable. But I formulated it before I had sufficiently thought through the consequences of installing, so to speak, the concept of sovereignty at the heart of absolutist theory. The concept is itself ultimately a liberal one, assuming a “natural” condition of violence among abstract individuals that can only be quelled by a sovereign with direct power over each individual. It is better to see ruling as helping to maintain and enhance the peace preserved throughout the social order by its various corporate powers and governors. I have come to use the term “central authority” rather than “sovereign,” and so I will now speak in terms of a “central imaginary.” This should also help to conserve, within this concept, the concept of “listening to the center,” which I have discussed in quite a few recent posts. We listen to the center as the center speaks through our central imaginary. Once we have identified the central imaginary as the “topic” of our conversation, we can start to seek out imperatives from the center. Once I find in my desire for Medicare for all a faith in the possibility of clear and secure authority, I cannot but start to think of what I must do to increase the future possibility of such an authority. This imperative, and the subsidiary ones it generates, concerning maximizing agreement on this point, now restructures all my transactions with the world. And I start working on reshaping the declarative order by composing declarative sentences that answer the questions those imperatives turn into as the means of fulfilling them are exhausted in a particular case.

So the problem then becomes, how to think of that theoretical, declarative practice. I must engage with a particular person who happens to be crossing my path in some consequential way so as to enhance our agreement regarding the existing traces and elements of a clear and secure order. But I’m talking with this person and no way of fulfilling that imperative fills me with confidence at the moment, so the imperative converts itself into an interrogative, which is a way of demanding information rather than commanding a specific act. How do I arrive at a “good” answer to that question? How do we talk about doing so productively? The disciplinary space we need here should be filled with thought experiments. In devising such thought experiments, we can derive inspiration from the kinds of experiments cognitive psychologists devise in order to identify the various cognitive thresholds children pass through. David Olson made much use of these experiments in order to determine the cognitive consequences of literacy. So, you want to see whether children of a certain age, or certain degree of exposure to literacy, are capable of understanding the concept of having been wrong—that is, of realizing that they know something now that they didn’t before. You can give them a box with pictures of candies on the cover and ask them what they think is in it. Candies, of course. You open the box, and it’s something quite different—say, pencils. What is in the box? Pencils. What did you think was in it before? Pencils! The children are incapable of grasping the concept of moving from one state of mind to another in response to changes in observed reality. Or, you show children a man opening a drawer and finding something in it—neckties, say. The man leaves the room. The children see someone else come in and replace the neckties with bowties. You then ask the children: when the man comes back, what will he expect to find in the drawer? Bowties, they say. They can’t separate what they know from what someone else knows—they don’t yet have a “theory of mind.”
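
The structure these experiments probe can be made concrete in a toy sketch (entirely my own illustration; the class names and protocol here are hypothetical, not drawn from Olson or the experimental literature): the threshold is whether the child keeps a record of belief states separate from the current state of the world.

```python
# Toy model of the candy-box experiment described above. An agent without
# a record of its past beliefs answers every question from the current
# world state; an agent that tracks belief history can recover what it
# thought before the box was opened.

class NaiveAgent:
    """Answers only from the present state of the world."""
    def __init__(self):
        self.world = {}

    def observe(self, key, value):
        self.world[key] = value

    def what_did_i_think(self, key):
        # No record of prior states: the present is projected backward.
        return self.world[key]

class BeliefTrackingAgent(NaiveAgent):
    """Also keeps the history of beliefs, in order of formation."""
    def __init__(self):
        super().__init__()
        self.history = []  # (key, value) pairs, oldest first

    def observe(self, key, value):
        self.history.append((key, value))
        super().observe(key, value)

    def what_did_i_think(self, key):
        # The first belief formed about this key, not the current state.
        return next(v for k, v in self.history if k == key)

for child in (NaiveAgent(), BeliefTrackingAgent()):
    child.observe("box", "candies")  # the pictures on the cover
    child.observe("box", "pencils")  # the box is opened
    print(type(child).__name__, "->", child.what_did_i_think("box"))
# NaiveAgent -> pencils (cannot grasp having been wrong)
# BeliefTrackingAgent -> candies (remembers the prior state of mind)
```

The necktie/bowtie experiment has the same shape: a “theory of mind” amounts to maintaining such a belief record per observer, indexed by what each observer has actually seen.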

The question then is, what distinction or threshold do we want to uncover when our imperatives from the center turn into interrogatives that “command” us to compose a declarative response? What opens space to hear the imperatives of the center is deferral, which also means deferring to the other and waiting for a reciprocal gesture of deferral in turn. This is not a question of politeness or considerateness (not that there’s anything wrong with them) but of developing a discovery procedure: from another’s ability, or lack thereof, to defer the imposition of a compelling resentment, we discern the extent to which he is ready to open inquiry into future imperatives from the central authority.

Derrida associated “defer” with “differ,” and the two words are really the same. If we want to “assess” deferral, we can therefore do it through practices of differentiation. The originary scene ends, and, more importantly, is remembered as ending, with everyone putting forth an identical gesture: hence the strictness with which ritual (not to mention grammatical conventions) is enforced. But on the originary scene there would have had to have been significant differentiation: not everyone’s sign was issued simultaneously, was held equally long, was equally well-formed, was equally responsive to the preliminary gestures and potential lapses of others. Future instances of deferral will require someone to re-activate the generative scene out of the ritualized and habitualized one. Tacitly acknowledged elements of social practices need to be turned into signs. This is not a difference between “progressive” and “reactionary”: even the preservation of traditional practices requires this kind of renewal.

The practices of literacy constellated in “classical prose” are similarly homogenized through the supplementation of an imagined represented speech scene by metalinguistic devices aimed at placing all readers with each other and the writer on that imagined speech scene. The study of language required to create classical prose makes language into the center of a disciplinary scene; the generation of future disciplinary scenes will depend upon turning those metalinguistic devices into centers of new disciplinary scenes. If you take metalinguistic representations as referring to “objects” (“beliefs,” “assumptions,” etc.) your discipline will simply reiterate the construction of “prosaic” metalinguistic literacy itself. Of course, it’s necessary to hold some concepts constant while you work with others, but that’s just a question of the degree of “flux” you want in the disciplinary space so as to conduct a particular inquiry, not of more “constant” vs. more “variable” objects or domains of reality.

You use a concept (like “assumptions”) in a particular way for a particular inquiry, which differentiates that use of the concept from the history of usage it derives from. That history of usage has taken shape in a normative center of usage, which your own usage will to some degree imperfectly iterate. Now, we can assume that the speaker’s meaning is the same as the word’s meaning, which collapses the very distinction introduced by literacy. In this case, our meta-inquiry will concern itself with guaranteeing the normativity of the use of the concept. This is a way of resisting any differentiation beyond that required to ensure the continuation of the discipline itself. In a sense, you are then like the child who thinks he always thought there were pencils in the box, and like the participant on the originary scene who forgets the event itself in its ritualization. Or, you can accentuate the difference, indeed the differences, between your use of the concept and those circulating around the normative center. The purpose of doing this is not a romantic attempt to make yourself a center of hostile attention in what is really just another ritualized form of modernist individualization. Rather, you want to ensure the commensurability of speaker’s meaning and word’s meaning—not by policing deviations from the latter, though, but by making explicit the supplementary character of the meta-concepts of literacy. We have concepts like “assumptions” and “beliefs” because the iteration of the sign is always problematic, and always requires a disciplinary “gathering” or “assembly.” In a sense, the most fundamental human question, the question upon which our emergence depended, is whether this sign that we use now is indeed the same sign as when we used it before. Literacy makes this question explicit, and all subsequent media do so as well in different ways. We need for the sign to be the same in its different uses, but if we imagine that it simply is the same, we commit ourselves to resisting differentiation. The only thing that makes the sign the same is that members of a disciplinary space establish continuity with other uses of the sign, which means with other disciplinary spaces. Accentuating the difference of the sign is taking on the responsibility of the disciplinary community: it is the way we make this concept our own and potentially others’ out of all the other ways other disciplinary spaces have done this. Only in this way can you remember that you first thought there was candy in the box, and what changed your mind; and you can remember the bustling, hypothetical scene even as you fill your allotted role in the rituals and habits in which its residue is deposited.

Perhaps the whole of a “centered” social and political practice lies in devising experiments for determining how a given practice establishes the sameness of the sign it is centered on.

March 19, 2019

The Worlding Event

Filed under: GA — adam @ 7:21 am

I have argued previously for the priority of “attentionality” over “intentionality”—attention must precede intention, and “intention” individualizes what is “joint” in attention, making it more of a declarative than an ostensive concept. We can trace the emergence of intentionality from attentionality, whether by “intentionality” we mean the more philosophical notion of constituting an object or the more everyday use of the term as meaning to do something. On the originary scene, all participants attend to the central object, and attend to each other attending; the sign, as the gesture of aborted appropriation, is really nothing more than the demonstration of this reciprocal attending to their joint attention. Self-referentiality, then, is built into the originary scene. Even more, what is action if not a prolongation of attention? I see the other attending to me, which becomes a kind of self-attending, as I can single out whatever in my gesture might be articulated in the other’s attention, and in that way move myself so as to fit the shifting attentional structure of the other. My movements, and therefore my actions, enter into and are supported by the attentional space I have co-created with others. In all of our actions, then, we are tacitly referring to this attentional space, of which we are mostly unaware at any moment. As Michael Polanyi says, we know more than we can say. But we can say more and more of what we know, in the process producing more knowledge we can’t yet say—becoming a representation of this state of affairs is what ethical action entails.

For originary thinking, the human being has a telos: to speak and act along with the center; to enter the history of deferral in such a way as to construct the world as the effect of and continuation of that history. We assume everyone is trying to do that as well, which is why we know every utterance includes a sovereign imaginary eliciting commands from the center. Traditional ethical thinking will start to speak in terms of will, judgment, capacities, desire and its education, and so on, and all of that is fine, but we can just speak of the center one becomes as soon as one is amongst people, a center both actual and possible, one that each of us constructs through the ways we want attention drawn to or deflected from us. You can compete with other centers within the economy of attention, or you can redirect attention from yourself to the center enabling you to so redirect attention. Sometimes the very competition with other centers can be turned towards that end.

Performing the paradox of self-reference is the highest good for originary thinking. Turn every reference to something else into a reference to you, and every reference to you into a reference to something else. You can never run out of things to do this with because everything is marked by the history of such reciprocal reference, and so keeps becoming something new. In this way you keep turning the world into a completely internalized self-referential system. This would seem to be a completely closed, and therefore dead, system, but in relation to the center this self-referentializing system is itself just a thing composed of references to the center. You point to something, enabling others to see it, which enables it to be, but its being in turn enables you to see it and to point to yourself seeing it along with others—the center makes its appearance in this layering of the scene and the impossibility of determining whether new things are coming into view or we are sharing attention so thoroughly that we’re not sure where your seeing begins and mine ends. The center tells us to sustain that, by constructing institutions out of sites where the articulation of shared reference and self-reference (where we find a way of saying to each other, “here’s how we’re making sense of each other”) can become a model of deferral.

We don’t need to invent clever ways of enacting the paradox of self-reference, like saying “I am lying.” “I see that” is quite paradoxical enough, because “I” can only see that because “you” and “others” are at least potentially able to do so (and have therefore “always already” done so) as well; “that” is that only because I am seeing it; and I “see” that because our deferral, our holding back from appropriation, lets that object, like all objects since the first object, set itself off against a background—seeing is always a refrained touching and tasting. The disciplined forms of literacy try to suppress the paradoxicality of the declarative by supplementing sentences within imaginary scenes whose parameters are set by those defining the abstractions used to perform the supplementation. To define “perception” in terms of physiological structures and learned Gestalts is to try to abolish the paradoxicality of “I see that.” But, of course, we have to say things like that, so it’s best to say them in the manner of little satires on these suppressive supplementations, reintroducing the paradoxes they hope to avoid. Eventually, these running satiric digressions become indistinguishable from the primary discourse itself. If you can find ways of iterating this digression-within-the-discourse in new variations within emergent events, so as to have each variant naming the previous ones, you enable others to join in self-referential centering.

One way of breaking with Western metaphysics is by acknowledging the traditional character of all thought. The concepts you are working with have been worked with in other contexts, and are conversions of earlier concepts, which solved problems within a now extinct paradigm which has nevertheless bequeathed to us some of its problems and some of its materials for solutions. But this means that the more we shape these concepts to our own purposes the more we are participating in an ongoing inquiry with those who did so earlier, and had no idea we were coming along. But since the most fundamental and universal tradition is language itself, it seemed to me that the self-aware participation in traditions of thought could more simply be understood as a form of language learning. When you learn a new language, or when children learn language, the process involves imitating chunks of discourse in ways that are inevitably mistaken because you must intuit their uses in unanticipated contexts—how else could anyone learn? In the process, you generate new idioms, and this is how language changes—enough people take the mistake, or even a shift in emphasis, as “correct.” We never stop learning, so we’re always students, but we also have to step outside of the flow of learning in order to teach people whom we see falling into what we fear (but we could be wrong) are less productive patterns of error. Here, we have, broadly, two choices: one, we situate ourselves within a more or less institutionally protected orthodoxy, and correct those whose language usage doesn’t conform. The advantage here is that you guarantee you’ll always be right and smarter than anyone who comes along. Or, two, you re-use the misused idiom with some of the weight of inherited uses, which the newcomer might be less aware of, and thereby incorporate the mistakes into a regenerated tradition of discourse. Here, authority has to prove itself by showing itself capable of allowing digressions to flow back into a larger current. You keep emulative mimesis in play by allowing that play to construct the very space in which the implications of language usages can be explicitly hypothesized.

Many years ago I started working on what I called “originary grammar” because I felt that GA needed to be more than just another “theory,” one that offered its own “readings” of texts and “explanations” of social structures and historical events. I thought it needed to generate its own comprehensive vocabulary—a language others would have to and want to learn—rather than just saying something like, “here’s how we think it all began” and then proceeding to talk about ideas and interpretations and principles and beliefs and arguments and proving things like everyone else. And the way to do that was out of the dialectic of linguistic forms Gans worked through in the first work in GA, The Origin of Language (the new edition of which is of course available, and the Amazon page for which is still sadly bereft of comments). I was encouraged in this by the fact that Gans used a kind of grammatical approach to defining the two key intellectual and cultural transformations constitutive of the West: he defined “metaphysics” as taking the declarative sentence as the primary speech act; and he defined Judaic (I think “Israelite” is better) monotheism as “the name of God as the declarative sentence.” In both cases, the post-sacral or imminently modern world is constructed in terms of some tension between the declarative, on the one hand, and the imperative, or, more broadly, the entire ostensive-imperative network, on the other hand. Wouldn’t anything we would want to talk about be included in this field of tension?

Originary grammar should supersede scientism while preserving all the intellectual advances of science. Instead of “facts,” we have what is known ostensively: what could become an object of shared attention. Something could only become an object of shared attention on a scene, which cannot itself be prepared ostensively: we are driven to create new scenes by the breakdown of a previous scene, whose central object eventually generated new desires it could no longer defer. (Of course, the new scene could feature the “same” central object in a different way.) If the scene is not simply to break down, if a transition to a new scene is to be achieved, asymmetry must enter the arena in the form of an imperative: someone issuing an “inappropriate” ostensive regarding a new or old/new object. Here, the preservation of presence on the scene can be united with maximum innovation on the scene: we allow a space for inappropriate ostensives, to see which might work as imperatives. Finally, we can bind declaratives to the scene by allowing the declarative field maximum freedom to explore all the complexities of declarative possibilities (to cross over time and space, to organize all of reality around one center or another) on the condition that it represent actual and possible ostensive-imperative articulations. The declarative sentence constructs a linguistic present, the present in which you can utter the sentence, that, unlike the ostensive and imperative, can be separated from any particular scenic present—but that means that the “vocation” of the declarative sentence is to keep restoring the continuity and extension of the trillions of human scenes, each of which threatens in a new way to break that continuity. The declarative would be most interested in suggesting ways of preparing us, or issuing imperatives, to share new ostensives.

In this way we would have a completely self-contained and completely open system in which we would always be talking about what we’re doing in the language through which we are doing it. The content of our declarative sentences would be the way other declarative sentences have commanded us to draw lines connecting objects around a centerized one. So, discussions would take something like the following form: “you say I’ve been looking at things in such a way that others see what I don’t and this is because of where and how I stand and in saying this you are telling me to be led by the configuration which I have not yet identified as a configuration and thereby to see and lean toward something that would compel others to join me in reconfiguring it…” The specific details of any particular scene at the center of an array of scenes would be inserted.

We would be more precise than this sample indicates because each sentence modifies in some way inherited chunks of language, and meaning is thus generated by the modification itself—in a language user’s noticing that you have eschewed the expression that 87.8% of listeners would have expected to come at that point in your discourse in favor of a rarely or never before used one, because you want that point in the discourse to operate as a center that has you reworking language along with perception, intention and intuition. And the next declarative in the discussion could point that out or, even better, iterate it in a new modification that the language learners around you would be able to iterate in turn so as to open new fields of objects. So, we’d be talking about things in the world while talking about how we talk about things in the world while talking about how we can rework the way we and others talk about things in the world, and it’s all really one “talking.” This still seems to me to be the imperative.
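
That tongue-in-cheek 87.8% can be given a minimal formal gloss (my own sketch, with invented numbers and phrases): if listeners’ expectations are modeled as a probability distribution over continuations, the rarely or never before used expression carries the most information precisely because it was not expected.

```python
# A minimal sketch of the point above: the less probable a continuation,
# the more information (surprisal, in bits) its choice conveys. The
# distribution and phrases here are invented for illustration.

import math

# Hypothetical probabilities of continuations at one point in a discourse.
continuations = {
    "as a matter of fact": 0.878,      # what most listeners expect
    "as a matter of course": 0.120,
    "as a matter of deferral": 0.002,  # the rarely or never before used one
}

def surprisal(p: float) -> float:
    """Information content, in bits, of an outcome with probability p."""
    return -math.log2(p)

for phrase, p in continuations.items():
    print(f"{phrase!r}: p={p:.3f}, surprisal={surprisal(p):.2f} bits")
# The expected phrase carries ~0.19 bits; the rare modification ~8.97 bits.
```

Nothing here depends on the invented numbers; the point is only that deviation from the normative center is itself a signifying act.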

March 12, 2019

Dialectics

Filed under: GA — adam @ 7:24 am

Dialectics is the rendering of paradox pragmatic. There are two ways of thinking about dialectics. One is as a mode of generating new ideas through probing, critical dialogue, in which each side tries to make explicit the assumptions underlying the other’s discourse. This notion of dialectics goes back to Socrates, and a particularly interesting modern example can be found in R.G. Collingwood’s understanding of dialectics as the attempt to find agreement underlying disagreement. The agreement, which, in Derridean terms, was “always already” there (insofar as argument was possible in the first place), is nevertheless, once explicated, a position that neither side knew they held in advance. In other words, something both originary and new emerges.

The other way of thinking about dialectics is as a way of understanding a historical process, or even as that process itself, whereby events are generated by contradictions in an existing social form, so new configurations emerge which both fulfill and confute the intentions of the actors who initiated them. Historical dialectics acquired a bad name as a result of its association with orthodox Marxism, which used “dialectical materialism” as a “guarantee” of both the inevitability and justice of its own victory, but Eric Gans employs a much subtler version in his account of the emergence of the imperative speech form from the ostensive and then the declarative speech form from the imperative (by way of the interrogative). Here, the shared intentionality bound up in a particular sign is put to the test (“contradicted”) by an “inappropriate” use of that sign; the tension is resolved as the desire to maintain shared intention (“linguistic presence”) generates a new speech form, “recouping” the “mistake.”

Unlike Marxist dialectics, this Gansian version allows for all the times when this “transcendence” of the previous form would fail to take place—linguistic presence can be broken, and some form of violence and social crisis ensues. The result of a dialectical process, then, can only be assured once the new form has been spread through imitation sufficiently so that it has proven itself capable of deferring the antagonisms those failures would have aggravated. In other words, “historical dialectics” proceeds in a manner beyond the intentions of any participant, but must be “authenticated” by shared intentionality at each point along the way and eventually yield a higher level of shared intentionality. But this also means that the two meanings of “dialectic” are one: the emergence of new historical forms is a process of more advanced dialogues taking place at the margins and gradually providing the means of deferral that enable a reconstructed center to resolve some crisis. Thomas Kuhn’s notion of scientific revolution provides us with the best model for understanding this process: the margins where the more advanced, “disciplinary” dialogues are taking place are where those who have perceived the anomalies of the existing social order in such a way as to doubt whether they can be “recouped” within that order produce questions invisible within that order. Their work is then focused on developing and trying out possible paradigms that might replace the prevailing one.
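
The dialectic described in the last two paragraphs, in which an “inappropriate” use of a sign either breaks linguistic presence or is recouped as a new speech form, can be rendered schematically (a sketch of my own devising, not Gans’s formalism; the sequence of forms follows The Origin of Language as summarized above):

```python
# Schematic sketch of the Gansian dialectic described above: an
# "inappropriate" use of the current speech form either breaks linguistic
# presence (crisis) or is recouped by the emergence of the next form.

from enum import Enum

class SpeechForm(Enum):
    OSTENSIVE = 1
    IMPERATIVE = 2
    INTERROGATIVE = 3
    DECLARATIVE = 4

# Order of emergence: each form recoups a "mistaken" use of its
# predecessor (the declarative arriving by way of the interrogative).
EMERGENCE = list(SpeechForm)

def recoup(current: SpeechForm, presence_maintained: bool):
    """Resolve an inappropriate use of `current`: return the newly
    generated form, or None if linguistic presence is broken."""
    if not presence_maintained:
        return None  # breakdown: violence and social crisis ensue
    nxt = current.value  # index of the next form in EMERGENCE
    if nxt < len(EMERGENCE):
        return EMERGENCE[nxt]  # the "mistake" is recouped as a new form
    return current  # the declarative field absorbs further misuses

# An ostensive issued in the object's absence, if presence holds, is
# heard as a demand for the object: the imperative.
print(recoup(SpeechForm.OSTENSIVE, presence_maintained=True))   # SpeechForm.IMPERATIVE
print(recoup(SpeechForm.OSTENSIVE, presence_maintained=False))  # None
```

The point of the sketch is only the asymmetry it makes visible: breakdown is always available, while “transcendence” requires that presence hold at each step.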

We could see the emergence of Generative Anthropology itself in just such dialectical terms. At the center, according to the originary hypothesis, sits a potential victim. It is in designating this potential victim, and refraining from victimizing it, that the sign emerges and the group is formed. But how did this clear, minimal insight become possible? If the making of victims is a matter of course, whether it be through conquest, those in power destroying those who might pose even a distant threat, sacrifice, mass slavery, and so on, one would never consider that the production of victims could be a source of any significant insights. In fact, I wonder whether a word equivalent to “victim” would even have been used (the word “victim” itself, according to the Online Etymology Dictionary, comes from the creature brought as a sacrifice). Certainly those whom we would today consider victims, like conquered, displaced and massacred populations, would not have thought of themselves in those terms: they would know, of course, that they had been bereft of their gods, rituals, territory, wealth, kinsfolk, institutions, and so on, and they would mourn all this and bemoan their destruction or enslavement, but this would be a source of shame and loss of faith more than of a complaint anyone would be expected to attend to. Our gods have failed us, or we failed our gods; what else is there to say?

Only with the emergence of justice systems can the notion of a “victim” be conceptualized—that is, once wrongs are not addressed directly through a vendetta but through some socially sanctioned process of determining punishment. This indicates an added degree of deferral, which opens a new realm of paradoxes. The law is established so as to do justice, because “justice” by definition is the proper allotment as determined by anyone who is in the “right” position to determine it—so, something we could call “law,” even if that means the sifting through, by legal professionals, of privileged precedents, rather than a written code, will emerge with the concept of “justice.” But, then, isn’t “justice” merely an effect of what the law, with its own institutional history, has decided? In that case how do we determine whether the law has been rightly decided? For this, we must step outside of the system, to reclaim its origin, but this stepping outside is a dialectical process which requires the model of the exemplary victim of the justice system itself. At that point, the concept of the victim becomes increasingly central culturally until, in Christianity, we have the worship of the exemplary victim. As Christianity permeates all cultural sites to the extent that it can be detached from its origins and its victimolatry separated from the carefully demarcated exemplary victim defining it, all of culture comes to be obsessed with the search for victims and self-representation as victims. The history of democracy, liberalism and romanticism traces this negation of Christianity from within Christianity. With post-structuralism, even language becomes grounded in victimization. Victimary thinking becomes so central as to have destroyed any “other” it could distinguish itself from for some moral purpose. Once this ontological colonialism has proceeded to a certain point, it becomes possible to consider that it is not victimization that is at the origin, but a refusal to victimize. And then it becomes possible to think the originary hypothesis.

We can posit a related dialectic as the form of modern politics. Eric Gans speaks of an oscillation between “firstness” and “reciprocity” as constitutive of liberal democracy, but this can’t be a dialectic because nothing new can come out of it. The distributive demands of the moral model will always be assailing the innovators and merit-based hierarchical structures that make those demands for equality possible in the first place. The only thing that could keep the pendulum swinging back and forth is a sufficient degree of cynicism on the part of the redistributors—they must know, as the Schumers and Pelosis surely do, that the “eat the rich” and “get whitey” talk is just to keep the contributions flowing and the voters and activists mobilized—they know better than to actually kill the goose laying the golden eggs. But their successors, like AOC, Ilhan Omar, Rashida Tlaib and others, don’t know this. They’ve grown up saturated in the political simulacra of Media Matters, and take all the egalitarian talk quite literally. Even if they “grow in office” and realize what the progressive ideals are really for, we wouldn’t really have a dialectic: the increasing disparity between ideals and the cynicism with which they are advanced can’t lead to anything new. Even if the pendulum keeps swinging, all it can lead to is more corruption and more advanced degeneration.

We could, though, speak of a dialectic between the model of the originary scene and the model of the “second revelation,” that of the Big Man. Here we have a genuine dialectic that has always produced cultural novelties. Ancient Israelite monotheism—the name of God as the declarative sentence—is itself a product of this dialectic: a retrieval of the originary relation to a shared center on the terrain created by the ancient empires, heirs of the Big Men. Rather than a figurable center, like a sacrificial animal, a non-figurable God; rather than a sacred grounded in ritual specific to a closed community, a relation to the center any people could imitate; rather than a deity with whom to engage in imperative exchange, a God who commands reciprocity with our neighbor. But neither Israelite monotheism nor its Christian and Islamic successors reject monarchy—rather, they seek to constrain and edify it. Nor do any of these faiths recommend a universally shared relation to the center that would override all hierarchical political institutions: the imperative to seek the peace of the kingdom where you live is always intact—and, of course, the Israelite God is Himself, paradoxically and scandalously, national as well as universal. As with any dialectic, new problems are generated out of the solutions of old ones.

Liberalism might be seen as an attempt to stall this dialectic by internalizing it within the economy, producing a pseudo-dialectic between expanded production and expanded consumption. This also cannot create anything new. But if we see the adherence to the model of the originary scene as itself a product of struggles between hierarchs seeking to efface their descent from the Big Man, we can set the dialectic in motion again. The logical endpoint of victimocratization would be the direct branding, as with sports stadiums, of groups demanding absolute, genuflecting respect from anyone marginally more normal than them by corporations defending their fiefdoms within the global distribution process. Facebook’s Women’s March; Amazon’s Black Lives Matter; Google’s Committee to End Transphobia, etc. The “antithesis” to this WokeCapital hearkening back to the emergent originary scene is, first, that the position of the hierarch is left unclaimed; and, second, that the originary scene as configured around a center has also been abandoned. Pretty much anyone who asserts the right to issue commands, and the grace to obey them, simply because there has to be a social center, is an avatar of autocracy, and heir to the Big Man, consciously or not. And virtually anyone who gathers others together to study some thing unresentfully, letting the object speak or, in Heideggerese, “be,” has created a direct line back to the originary scene. The “synthesis” comes when those forming disciplinary spaces turn their attention to the emergent autocrats, and those autocrats revise their command structures upon receiving feedback from the disciplinary spaces.

This “synthesis” can only take place in the middle, in the meeting of those upholding the normal and some “allotment,” and those marginal to the official disciplines. Together they will have to form a “spine” which can act once enough elites realize that their role is to govern in their own name rather than to gin up the mobs in whose name they can then claim to govern. But this involves keeping a kind of double dialectic at work. On the one hand, there is the dialectic between WokeCapital and the disciplinary/disciplined, as the latter learn from the negative example of the former how to disentangle the command structure from the demand for sparagmos now. On the other hand, there is an ongoing dialectic between the disciplined and disciplinary themselves, as the former imbibe modes of moral and ethical prescription from the latter, while the latter learn from the former to be more pragmatic and pedagogical, to be that hardest thing of all for thinkers—useful. The norm-setting distinction of the victim currently situated most antipodally to the normal can then be met by the re-marking of the normal as the vertex of convergent resentments.

March 5, 2019

Identities

Filed under: GA — adam @ 7:20 am

We are all products of the center; we all want to participate in the center. Any discussion of who any “I” or “we” is had better take that as its starting point. Any individual life can be traced from center to center: the parent(s) at the center of the family, the teacher at the center of the classroom, the principal at the center of the school, the cool kid at the center of the peer group, the boss at the center of the workplace, and many more. These are the centers from which imperatives are issued, and which impose a nomos on the scene: the “fair” or “just” division of goods, attention, sympathy, protection amongst siblings, classmates, co-workers. In the modern world, there are centers backing these, centers with which we are in direct contact: corporations, media, the state. These larger centers support the local ones, or encourage us to resist them, or some complicated combination of both that individuals need to figure out. The local centers, meanwhile, may support or subvert each other—the cool kid implicitly or explicitly encourages us to defy our parents and teachers. And, of course, the power of the cool kid might be enforced by entertainment media while the authority of parents might be reinforced by the state. Whatever goes into making up individuals will be the “processing” of all these articulated centers in tension with each other within a more or less stable and dynamic structure of desires, resentments and imperatives. (I don’t deny the importance of biological and ultimately genetic make-up to the formation of individuals, but I don’t have anything to say about that, and any genetic predispositions would still get “processed” through the structures outlined above.)

We will find that all of these local and intermediary centers are supported by the central authority—family, school, workplace, even informal groupings like clubs, leagues and associations are ultimately legitimated by the state. The most informal of these groupings, such as friendships and romances, are not, but are closely supervised by these other structures. One of the most powerful fantasies of the modern world is that of forming an intimate bond with another that is outside of, and transcends, all formal authorities—“us against the world.” But it’s a powerful fantasy because it is produced so often in art and entertainment, and it is part of the long-term political process of demolishing intermediary authorities and leaving each individual face to face with the central authority. The production of this fantasy draws heavily upon the Christian iconography foundational to Western culture (and I wonder whether other cultures even have something like this): central to the “us against the world” narrative is the martyrdom of one or both of the couple, who somehow evoke the mimetic resentment of the authorities and those who accept them unquestioningly, and whose actual or social death reveals the violence behind the apparently placid normalcy of everyday life. I wonder if it would be possible to test the hypothesis that to be a fully participating member of Western culture, one must have experienced oneself as the victim within this narrative at some point in one’s life. Perhaps that is what makes one “interpellatable.”

One’s relation to the state could be seen as an articulation of one’s relations to actual, possible, and residual sovereignties. An Irish-American, for example, is first of all subject to the American state, while having more or less distant ancestors who were subject to Irish sovereignty or, more likely, Irish potential sovereignty in more or less open rebellion against the British. This residual allegiance might subside into irrelevance and be subsumed into a new mixture of lapsed allegiances; but it might also be leveraged against the American state or other groups (others with analogous more or less phantom allegiances). This play of identities we can also see as ultimately an effect of the degree of unity of the central authority: the more pluralistic the state, that is, the more it invites different elites to levy sections of the population to vie for control over an increasingly centralized state, the more sharply defined and reciprocally antagonistic (with various shifting alliances) these groups will be. But there’s no reason to assume that the absorption of all these residual and possible allegiances into a single homogeneous identity subordinate to the state is the privileged model, either—in fact, even the most fractious state will have to recur to that centralizing identity on occasion, making it simply part of a larger system of domination: a proxy of some kind. Where there are residual and possible allegiances (which exist even in non-immigrant societies, where nations are formed out of tribes or regions once subordinate to local kingdoms or aristocratic families), partial and local forms of responsibility can be delegated. Everyone should be grouped up, and groups should be allowed to exercise the executive and judicial powers needed to maintain themselves as such. What about individuals who want to escape their groups? Like quitting a job, you’d have to find another group willing to “adopt” outsiders, which they might have all kinds of reasons for doing. Think of the self-exiled black American artists who became, essentially, honorary Frenchmen and women over the course of the early to mid-20th century.

Liberalism abstracts, ruthlessly; counter-liberalism should concretize. The central authority wants all forms of authority to flow into its own, and that might involve inheriting residual and possible modes of authority borne by its people. A great deal of the ruler’s activity should involve issuing and supervising charters: for corporations, for townships, for local forms of authority, for associations of various kinds. If you want a recognized identity, apply to the central authority for a charter, or apply to subscribe to an authority that has already been chartered. The more generic, and potentially disruptive, identities, like those promoted by feminism, can be broken down and sorted out: women’s groups focused on the education of women, on moral improvement, on counseling wives, on the preservation of traditional ceremonies and customs, and so on. If a woman wants to experience womanhood in sisterly relations with other women, there can be plenty of opportunities for that in non-antagonistic forms. If an identity can’t really be chartered for some purpose the central authority can acknowledge, then it’s best to let it dissolve.

Through all of this, to be an individual is to be a morally responsible person. This involves, not imagining oneself outside of, and victimized by, “society,” but establishing practices that defer centralizing violence. What is important is the character of the violent intention one resists, not its target: seditious violence against the state can display the same mimetic contagion as the merciless bullying of an unpopular child, and both cases require an inspection of the authorities that have allowed the contagion to grow to the point where outside intervention has become necessary. But moral action can only be carried out through the identities. The media strategy of dispersal and incorporation involves providing models of victimary self-centering and transgressive charisma. You can put yourself on the market as a victim of the normal, or as a defender of such victims who exposes the oppressive underbelly of the normal. It’s a way of taking yourself hostage and demanding the ransom payment. This media strategy works because it managed to plug into the dominant, pre- and non-Christian, heroic narratives of mass culture, which always involve a single man or small group of men defeating many more or much more powerful enemies in defense of the victims of those evil enemies. Reprogramming that narrative is a simple matter because the vicarious pleasure taken by the viewer is too obvious and too obviously exploited and hence somewhat shameful when exposed explicitly—so, the pleasure can only be preserved if the evil enemy is turned into some “exemplary deviation” from the cultural source of the heroic narrative itself. So, Captain America has to fight, not the Nazis, but the Nazi within all of us, embedded in what we take to be normal. His charisma becomes transgressive; but, as I said, this is not so difficult to accomplish, because constructing perfectly evil villains already elicits a kind of guilty transgressive pleasure—unrestrained violence is allowed where it normally wouldn’t be.

Centralizing violence, then, is primarily directed against the normal, or what we could call the normalest normal, the exemplary normal. The normal that’s so normal it has no idea how oppressive it really is. Obviously, in today’s culture this means white+male+Christian+straight+conservative+middle class. Moral action then needs to deflect this centralizing violence from the normal, but this is no easy matter—defending some ordinary guy against a virulent hate campaign because he said something currently deemed racist or sexist invites comparisons between his suffering and all the suffering that has been experienced historically, even if not in this particular case, by those who might, following some familiar if far-fetched chain of consequences, be possibly victimized through the racist or sexist statement. And there’s no transhistorical frame for determining the right terms of comparison. How do you weigh the humiliation and economic deprivation experienced by some middle-class white guy against hundreds of years of violence done to black bodies, etc.? To defend someone is to enter the legalistic game of attack and defend, and even if you can occasionally manage to turn the tables, the prosecutorial initiative always lies with the defenders of the marked on the market against the unmarked.

The normal is the unmarked, and the postmodern critique that norms produce their own deviations is self-evidently true. The lives saved and improved, the cultural “equipment” made possible, because of the restraints placed on desires and resentments so as to reinforce the most local centers, are all invisible; those chafing under those restraints, unable to comply with them through, arguably, no fault of their own, are highly visible. The long term horizon of liberalism is that we will all be unmarked; until then we must keep up the war against the unmarked, who by definition, “structurally,” mark the others. If we are to get to the condition of universal unmarkedness, then, that means the most marked of today (the transgendered handicapped Somali refugee…) will someday become the norm. But does it not follow, then, that at the origin of any norm is the most marked? There is nothing more marked than inhabiting the name others ostensively designate you with because that’s who you in fact have turned out to have been. To be marked is to perform the paradox of self-reference—to be both liberated and constrained by the name. Everyone’s mimetic rivalry circles around this marked one, and mimetic violence is always just below the threshold of convergence upon him while he manages to expose the potential violence, make a nomos out of it, and recruit everyone to defer early on future signs of such violence. This is where a new norm comes from.

Moral action, then, entails performing the hypothetical origin of the norm. This involves opening a disciplinary space within the disciplines—it is the disciplines that control the system of naming. The disciplines can say, “X is Y,” or someone characterized by this feature is going, according to some probability we are competent to establish, to have this other feature. Go ahead and treat X as if he has this other feature, then—the burden of proof is on him. This organization of reality is inevitable, and only immoral if a space is not left open for that burden of proof to be met. Moral action is meeting that burden of proof while imposing a like burden on the disciplinary agents who establish it—what, exactly, do sociologists, psychologists, economists, etc., and the activists mimicking them at a distance, “tend” to do? The terms establishing burdens of proof all come from the nominalizations resulting from the supplementations of literacy, upon which the disciplines are founded. A word like “legitimacy” will have been derived from precise rituals and ceremonies that would have once served to mark one as institutionally recognized; now, it’s an abstract concept manipulated by those in the disciplines taking sides in power struggles. In that case, there’s a kind of moral “arbitrage” that can be enacted by referring the competing nominalizations in any confrontation back to these power struggles. Attaching various “qualities” (the “Ys” mentioned above) to, say, “white males,” indicates some power differential—the “accuser” thinks this will be effective in some way. What power does it enact? Well, “history,” or “equality,” or “morality”—OK, but name some people, institutions, powerful figures embodying this power. Whom are they contending with, and for which discernible stakes? What will the victor be able to determine? Sure, in placing a burden of proof back on you (“people who believe in ‘racism’ are…”), I’m also hoping to leverage one power against another. In that case, no one is unmarked. That must mean we all want everyone to be marked in such a way as to defer, rather than incite, centralizing violence against them. The power struggles circulating through us make that impossible—each power can contend against the other only by means of incitement. The most moral thing to do, then—to sound Kantian—is to act as if my act will increase the likelihood of an orderly arrangement of power that will mark (“(re)deem”) everyone accordingly—even though I can’t know in advance where I might fall within that order (a little bit of Rawls there as well). I’m a sign of disorder if that prospect repels you (and you need your dose of centralizing violence), and of order if you can imagine a complementary relinquishment on your part. In that way—to sound Nietzschean—we forge new norms. We return the disciplinary nominalizations back into acts conferring faith, trust and loyalty. The markings of racist/sexist/homophobic/transphobic/… are converted into notations on the accomplishments and responsibilities those charges aim at dispersing.
