March 19, 2019

The Worlding Event

Filed under: GA — adam @ 7:21 am

I have argued previously for the priority of “attentionality” over “intentionality”—attention must precede intention, and “intention” individualizes what is “joint” in attention, making it more of a declarative than an ostensive concept. We can trace the emergence of intentionality from attentionality, whether by “intentionality” we mean the more philosophical notion of constituting an object or the more everyday use of the term as meaning to do something. On the originary scene, all participants attend to the central object, and attend to each other attending; the sign, as the gesture of aborted appropriation, is really nothing more than the demonstration of this reciprocal attending to their joint attention. Self-referentiality, then, is built into the originary scene. Even more, what is action if not a prolongation of attention? I see the other attending to me, which becomes a kind of self-attending, as I can single out that in my gesture which might be articulated in the other’s attention, and in that way move myself so as to fit the shifting attentional structure of the other. My movements, and therefore my actions, enter into and are supported by the attentional space I have co-created with others. In all of our actions, then, we are tacitly referring to this attentional space, of which we are mostly unaware at any moment. As Michael Polanyi says, we know more than we can say. But we can say more and more of what we know, in the process producing more knowledge we can’t yet say—becoming a representation of this state of affairs is what ethical action entails.

For originary thinking, the human being has a telos: to speak and act along with the center; to enter the history of deferral in such a way as to construct the world as the effect of and continuation of that history. We assume everyone is trying to do that as well, which is why we know every utterance includes a sovereign imaginary eliciting commands from the center. Traditional ethical thinking will start to speak in terms of will, judgment, capacities, desire and its education and so on and all of that is fine, but we can just speak of the center one becomes as soon as one is amongst people, a center both actual and possible, and that each of us constructs as the ways we want attention drawn to or deflected from us. You can compete with other centers within the economy of attention, or you can redirect attention from you to the center enabling you to so redirect attention. Sometimes the very competition with other centers can be turned towards that end.

Performing the paradox of self-reference is the highest good for originary thinking. Turn every reference to something else into a reference to you and every reference to you into a reference to something else. You can never run out of things to do this with because everything is marked by the history of such reciprocal reference, and so keeps becoming something new. In this way you keep turning the world into a completely internalized self-referential system. This would seem to be a completely closed, and therefore dead, system, but in relation to the center this self-referentializing system is itself just a thing comprised of references to the center. You point to something, enabling others to see it, which enables it to be, but its being in turn enables you to see it and to point to yourself seeing it along with others—the center makes its appearance in this layering of the scene and the impossibility of determining whether new things are coming into view or we are sharing attention so thoroughly that we’re not sure where your seeing begins and mine ends. The center tells us to sustain that, by constructing institutions out of sites where the articulation of shared reference and self-reference (where we find a way of saying to each other, “here’s how we’re making sense of each other”) can become a model of deferral.

We don’t need to invent clever ways of enacting the paradox of self-reference, like saying “I am lying.” “I see that” is quite paradoxical enough, because “I” can only see that because “you” and “others” are at least potentially able to do so (and have therefore “always already” done so) as well; “that” is that only because I am seeing it; and I “see” that because our deferral, our laying back from appropriation, lets that object, like all objects since the first object, set itself off against a background—seeing is always a refrained touching and tasting. The disciplined forms of literacy try to suppress the paradoxicality of the declarative by supplementing sentences within imaginary scenes whose parameters are set by those defining the abstractions used to perform the supplementation. To define “perception” in terms of physiological structures and learned Gestalts is to try to abolish the paradoxicality of “I see that.” But, of course, we have to say things like that, so it’s best to say them in the manner of little satires on these suppressive supplementations, reintroducing the paradoxes they hope to avoid. Eventually, these running satiric digressions become indistinguishable from the primary discourse itself. If you can find ways of iterating this digression-within-the-discourse in new variations within emergent events so as to have each variant naming the previous ones, you enable others to join in self-referential centering.

One way of breaking with Western metaphysics is by acknowledging the traditional character of all thought. The concepts you are working with have been worked with in other contexts, and are conversions of earlier concepts, which solved problems within a now extinct paradigm which has nevertheless bequeathed to us some of its problems and some of its materials for solutions. But this means that the more we shape these concepts to our own purposes the more we are participating in an ongoing inquiry with those who did so earlier, and had no idea we were coming along. But since the most fundamental and universal tradition is language itself, it seemed to me that the self-aware participation in traditions of thought could more simply be understood as a form of language learning. When you learn a new language, or when children learn language, the process involves imitating chunks of discourse in ways that are inevitably mistaken because you must intuit their uses in unanticipated contexts—how else could anyone learn? In the process, you generate new idioms, and this is how language changes—enough people take the mistake, or even a shift in emphasis, as “correct.” We never stop learning, so we’re always students, but we also have to step outside of the flow of learning in order to teach people who we see falling into what we fear (but we could be wrong) are less productive patterns of error. Here, we have, broadly, two choices: one, we situate ourselves within a more or less institutionally protected orthodoxy, and correct those whose language usage doesn’t conform. The advantage here is that you guarantee you’ll always be right and smarter than anyone who comes along. Or, you re-use the misused idiom with some of the weight of inherited uses which the newcomer might be less aware of and thereby incorporate the mistakes into a regenerated tradition of discourse.
Here, authority has to prove itself by showing itself capable of allowing digressions to flow back into a larger current. You keep emulative mimesis in play by allowing that play to construct the very space in which the implications of language usages can be explicitly hypothesized.

Many years ago I started working on what I called “originary grammar” because I felt that GA needed to be more than just another “theory,” one that offered its own “readings” of texts and “explanations” of social structures and historical events. I thought it needed to generate its own comprehensive vocabulary—a language others would have to and want to learn—rather than just saying something like, “here’s how we think it all began” and then proceeding to talk about ideas and interpretations and principles and beliefs and arguments and proving things like everyone else. And the way to do that was out of the dialectic of linguistic forms Gans worked through in the first work in GA, The Origin of Language (the new edition of which is of course available, and the Amazon page for which is still sadly bereft of comments). I was encouraged in this by the fact that Gans used a kind of grammatical approach to defining the two key intellectual and cultural transformations constitutive of the West: he defined “metaphysics” as taking the declarative sentence as the primary speech act; and he defined Judaic (I think “Israelite” is better) monotheism as “the name of God as the declarative sentence.” In both cases, the post-sacral or imminently modern world is constructed in terms of some tension between the declarative, on the one hand, and the imperative, or, more broadly, the entire ostensive-imperative network, on the other hand. Wouldn’t anything we would want to talk about be included in this field of tension?

Originary grammar should supersede scientism while preserving all the intellectual advances of science. Instead of “facts,” we have what is known ostensively: what could become an object of shared attention. Something could only become an object of shared attention on a scene, which cannot itself be prepared ostensively: we are driven to create new scenes by the breakdown of a previous scene, whose central object eventually generated new desires it could no longer defer. (Of course, the new scene could feature the “same” central object in a different way.) If the scene is not simply to break down; if a transition to a new scene is to be achieved, asymmetry must enter the arena in the form of an imperative: someone issuing an “inappropriate” ostensive regarding a new or old/new object. Here, the preservation of presence on the scene can be united with maximum innovation on the scene: we allow a space for inappropriate ostensives, to see which might work as imperatives. Finally, we can bind declaratives to the scene by allowing the declarative field maximum freedom to explore all the complexities of declarative possibilities (to cross over time and space, to organize all of reality around one center or another) on the condition that it represent actual and possible ostensive-imperative articulations. The declarative sentence constructs a linguistic present, the present in which you can utter the sentence, that, unlike the ostensive and imperative, can be separated from any particular scenic present—but that means that the “vocation” of the declarative sentence is to keep restoring the continuity and extension of the trillions of human scenes, each of which threatens in a new way to break that continuity. The declarative would be most interested in suggesting ways of preparing us, or issuing imperatives, to share new ostensives.

In this way we would have a completely self-contained and completely open system in which we would always be talking about what we’re doing in the language through which we are doing it. The content of our declarative sentences would be the way other declarative sentences have commanded us to draw lines connecting objects around a centerized one. So, discussions would take something like the following form: “you say I’ve been looking at things in such a way that others see what I don’t and this is because of where and how I stand and in saying this you are telling me to be led by the configuration which I have not yet identified as a configuration and thereby to see and lean toward something that would compel others to join me in reconfiguring it…” The specific details of any particular scene at the center of an array of scenes would be inserted.

We would be more precise than this sample indicates because each sentence modifies in some way inherited chunks of language and meaning is thus generated by the modification itself—in a language user’s noticing that you have eschewed the expression that 87.8% of listeners would have expected to come at that point in your discourse in favor of a rarely or never before used one because you want that point in the discourse to operate as a center that has you reworking language along with perception, intention and intuition. And the next declarative in the discussion could point that out or, even better, iterate it in a new modification that the language learners around you would be able to iterate in turn so as to open new fields of objects. So, we’d be talking about things in the world while talking about how we talk about things in the world while talking about how we can rework the way we and others talk about things in the world and it’s all really one “talking.” This still seems to me to be the imperative.

March 12, 2019


Filed under: GA — adam @ 7:24 am

Dialectics is the rendering of paradox pragmatic. There are two ways of thinking about dialectics. One is as a mode of generating new ideas through probing, critical dialogue, in which each side tries to make explicit the assumptions underlying the other’s discourse. This notion of dialectics goes back to Socrates, and a particularly interesting modern example can be found in R.G. Collingwood’s understanding of dialectics as the attempt to find agreement underlying disagreement. The agreement, which, in Derridean terms, was “always already” there (insofar as argument was possible in the first place), is nevertheless, once explicated, a position that neither side knew they held in advance. In other words, something both originary and new emerges.

The other way of thinking about dialectics is as a way of understanding a historical process, or even as that process itself, whereby events are generated by contradictions in an existing social form, so new configurations emerge which both fulfill and confute the intentions of the actors who initiated them. Historical dialectics acquired a bad name as a result of its association with orthodox Marxism, which used “dialectical materialism” as a “guarantee” of both the inevitability and justice of its own victory, but Eric Gans employs a much subtler version in his account of the emergence of the imperative speech form from the ostensive and then the declarative speech form from the imperative (by way of the interrogative). Here, the shared intentionality bound up in a particular sign is put to the test (“contradicted”) by an “inappropriate” use of that sign; the tension is resolved as the desire to maintain shared intention (“linguistic presence”) generates a new speech form, “recouping” the “mistake.”

Unlike Marxist dialectics, this Gansian version allows for all the times where this “transcendence” of the previous form would fail to take place—linguistic presence can be broken, and some form of violence and social crisis ensues. The result of a dialectical process, then, can only be assured once the new form has been spread through imitation sufficiently so that it has proven itself capable of deferring the antagonisms those failures would have aggravated. In other words, “historical dialectics” proceeds in a manner beyond the intentions of any participant, but must be “authenticated” by shared intentionality at each point along the way and eventually yield a higher level of shared intentionality. But this also means that the two meanings of “dialectic” are one: the emergence of new historical forms is a process of more advanced dialogues taking place at the margins and gradually providing the means of deferral that enable a reconstructed center to resolve some crisis. Thomas Kuhn’s notion of scientific revolution provides us with the best model for understanding this process: the margins where the more advanced, “disciplinary” dialogues are taking place are where those who have perceived the anomalies of the existing social order in such a way as to doubt whether they can be “recouped” within that order produce questions invisible within that order. Their work is then focused on developing and trying out possible paradigms that might replace the prevailing one.

We could see the emergence of Generative Anthropology itself in just such dialectical terms. At the center, according to the originary hypothesis, sits a potential victim. It is in designating this potential victim, and refraining from victimizing it, that the sign emerges and the group is formed. But how did this clear, minimal insight become possible? If the making of victims is a matter of course, whether it be through conquest, those in power destroying those who might pose even a distant threat, sacrifice, mass slavery, and so on, one would never consider that the production of victims could be a source of any significant insights. In fact, I wonder whether a word equivalent to “victim” would even have been used (the word “victim” itself, according to the Online Etymological Dictionary, comes from the creature brought as a sacrifice). Certainly those whom we would today consider victims, like conquered, displaced and massacred populations, would not have thought of themselves in those terms: they would know, of course, that they had been bereft of their gods, rituals, territory, wealth, kinsfolk, institutions, and so on, and they would mourn all this and bemoan their destruction or enslavement, but this would be a source of shame and loss of faith more than of a complaint anyone would be expected to attend to. Our gods have failed us, or we failed our gods; what else is there to say?

Only with the emergence of justice systems can the notion of a “victim” be conceptualized—that is, once wrongs are not addressed directly through a vendetta but through some socially sanctioned process of determining punishment. This indicates an added degree of deferral, which opens a new realm of paradoxes. The law is established so as to do justice, because “justice” by definition is the proper allotment as determined by anyone who is in the “right” position to determine it—so, something we could call “law,” even if that means the sifting through, by legal professionals, of privileged precedents, rather than a written code, will emerge with the concept of “justice.” But, then, isn’t “justice” merely an effect of what the law, with its own institutional history, has decided? In that case how do we determine whether the law has been rightly decided? For this, we must step outside of the system, to reclaim its origin, but this stepping outside is a dialectical process which requires the model of the exemplary victim of the justice system itself. At that point, the concept of the victim becomes increasingly central culturally until, in Christianity, we have the worship of the exemplary victim. As Christianity permeates all cultural sites to the extent that it can be detached from its origins and its victimolatry separated from the carefully demarcated exemplary victim defining it, all of culture comes to be obsessed with the search for victims and self-representation as victims. The history of democracy, liberalism and romanticism traces this negation of Christianity from within Christianity. With post-structuralism, even language becomes grounded in victimization. Victimary thinking becomes so central as to have destroyed any “other” it could distinguish itself from for some moral purpose. Once this ontological colonialism has proceeded to a certain point, it becomes possible to consider that it is not victimization that is at the origin, but a refusal to victimize.
And then it becomes possible to think the originary hypothesis.

We can posit a related dialectic as the form of modern politics. Eric Gans speaks of an oscillation between “firstness” and “reciprocity” as constitutive of liberal democracy, but this can’t be a dialectic because nothing new can come out of it. The distributive demands of the moral model will always be assailing the innovators and merit-based hierarchical structures that make those demands for equality possible in the first place. The only thing that could keep the pendulum swinging back and forth is a sufficient degree of cynicism on the part of the redistributors—they must know, as the Schumers and Pelosis surely do, that the “eat the rich” and “get whitey” talk is just to keep the contributions flowing and the voters and activists mobilized—they know better than to actually kill the goose laying the golden eggs. But their successors, like AOC, Ilhan Omar, Rashida Tlaib and others, don’t know this. They’ve grown up saturated in the political simulacra of Media Matters, and take all the egalitarian talk quite literally. Even if they “grow in office” and realize what the progressive ideals are really for, we wouldn’t really have a dialectic: the increasing disparity between ideals and the cynicism with which they are advanced can’t lead to anything new. Even if the pendulum keeps swinging, all it can lead to is more corruption and more advanced degeneration.

We could, though, speak of a dialectic between the model of the originary scene and the model of the “second revelation,” that of the Big Man. Here we have a genuine dialectic that has always produced cultural novelties. Ancient Israelite monotheism—the name of God as the declarative sentence—is itself a product of this dialectic: a retrieval of the originary relation to a shared center on the terrain created by the ancient empires, heirs of the Big Men. Rather than a figurable center, like a sacrificial animal, a non-figurable God; rather than a sacred grounded in ritual specific to a closed community, a relation to the center any people could imitate; rather than a deity with whom to engage in imperative exchange, a God who commands reciprocity with our neighbor. But neither Israelite monotheism, nor its Christian and Islamic successors, reject monarchy—rather, they seek to constrain and edify it. Nor do any of these faiths recommend a universally shared relation to the center that would override all hierarchical political institutions: the imperative to seek the peace of the kingdom where you live is always intact—and, of course, the Israelite God is Himself, paradoxically and scandalously, national as well as universal. As with any dialectic, new problems are generated out of the solutions of old ones.

Liberalism might be seen as an attempt to stall this dialectic by internalizing it within the economy, producing a pseudo-dialectic between expanded production and expanded consumption. This also cannot create anything new. But if we see the adherence to the model of the originary scene as itself a product of struggles between hierarchs seeking to efface their descent from the Big Man, we can set the dialectic in motion again. The logical endpoint of victimocratization would be the direct branding, like with sports stadiums, of groups demanding absolute, genuflecting respect from anyone marginally more normal than them by corporations defending their fiefdoms within the global distribution process. Facebook’s Women’s March; Amazon’s Black Lives Matter; Google’s Committee to End Transphobia, etc. The “antithesis” to this WokeCapital hearkening back to the emergent originary scene is, first, that the position of the hierarch is left unclaimed; and, second, that the originary scene as configured around a center has also been abandoned. Pretty much anyone who asserts the right to issue commands, and the grace to obey them, simply because there has to be a social center, is an avatar of autocracy, and heir to the Big Man, consciously or not. And virtually anyone who gathers others together to study some thing unresentfully, letting the object speak or, in Heideggerese, “be,” has created a direct line back to the originary scene. The “synthesis” comes when those forming disciplinary spaces turn their attention to the emergent autocrats, and those autocrats revise their command structures upon receiving feedback from the disciplinary spaces.

This “synthesis” can only take place in the middle, in the meeting of those upholding the normal and some “allotment,” and those marginal to the official disciplines. Together they will have to form a “spine” which can act once enough elites realize that their role is to govern in their own name rather than ginning up the mobs in whose name they can then claim to govern. But this involves keeping a kind of double dialectic at work. On the one hand, there is the dialectic between WokeCapital and the disciplinary/disciplined, as the latter learn from the negative example of the former how to disentangle the command structure from the demand for sparagmos now. On the other hand, there is an ongoing dialectic between the disciplined and disciplinary themselves, as the former imbibe modes of moral and ethical prescription from the latter, while the latter learn from the former to be more pragmatic and pedagogical, to be that hardest thing of all for thinkers—useful. The norm-setting distinction of the victim currently situated most antipodally to the normal can then be met by the re-marking of the normal as the vertex of convergent resentments.

March 5, 2019


Filed under: GA — adam @ 7:20 am

We are all products of the center; we all want to participate in the center. Any discussion of who any “I” or “we” is had better take that as its starting point. Any individual life can be traced from center to center: the parent(s) at the center of the family, the teacher at the center of the classroom, the principal at the center of the school, the cool kid at the center of the peer group, the boss at the center of the workplace, and many more. These are the centers from which imperatives are issued, and which impose a nomos on the scene: the “fair” or “just” division of goods, attention, sympathy, protection amongst siblings, classmates, co-workers. In the modern world, there are centers backing these with which we are in direct contact: corporations, media, the state. These larger centers support the local ones, or encourage us to resist them, or some complicated combination of both that individuals need to figure out. The local centers, meanwhile, may support or subvert each other—the cool kid implicitly or explicitly encourages us to defy our parents and teachers. And, of course, the power of the cool kid might be enforced by entertainment media while the authority of parents might be reinforced by the state. Whatever goes into making up individuals will be the “processing” of all these articulated centers in tension with each other within a more or less stable and dynamic structure of desires, resentments and imperatives. (I don’t deny the importance of biological and ultimately genetic make-up to the formation of individuals, but I don’t have anything to say about that and any genetic predispositions would still get “processed” through the structures outlined above.)

We will find that all of these local and intermediary centers are supported by the central authority—family, school, workplace, even informal groupings like clubs, leagues and associations are ultimately legitimated by the state. The most informal of these groupings, such as friendships and romances, are not, but are closely supervised by these other structures. One of the most powerful fantasies of the modern world is that of forming an intimate bond with another that is outside of, and transcends, all formal authorities—“us against the world.” But it’s a powerful fantasy because it is produced so often in art and entertainment, and it is part of the long term political process of demolishing intermediary authorities and leaving each individual face to face with the central authority. The production of this fantasy draws heavily upon the Christian iconography foundational to Western culture (and I wonder whether other cultures even have something like this): central to the “us against the world” narrative is the martyrdom of one or both of the couple, who somehow evoke the mimetic resentment of the authorities and those who accept them unquestioningly, and whose actual or social death reveals the violence behind the apparently placid normalcy of everyday life. I wonder if it would be possible to test the hypothesis that to be a fully participating member of Western culture, one must have experienced oneself as the victim within this narrative at some point in one’s life. Perhaps that is what makes one “interpellatable.”

One’s relation to the state could be seen as an articulation of one’s relations to actual, possible, and residual sovereignties. An Irish-American, for example, is first of all subject to the American state, while having more or less distant ancestors who were subject to Irish sovereignty or, more likely, Irish potential sovereignty in more or less open rebellion against the British. This residual allegiance might subside into irrelevance and be subsumed into a new mixture of lapsed allegiances; but it might also be leveraged against the American state or other groups (others with analogous more or less phantom allegiances). This play of identities we can also see as ultimately an effect of the degree of unity of the central authority: the more pluralistic the state, that is, the more it invites different elites to levy sections of the population to vie for control over an increasingly centralized state, the more sharply defined and reciprocally antagonistic (with various shifting alliances) these groups will be. But there’s no reason to assume that the absorption of all these residual and possible allegiances into a single homogeneous identity subordinate to the state is the privileged model, either—in fact, even the most fractious state will have to recur to that centralizing identity on occasion, making it simply part of a larger system of domination: a proxy of some kind. Where there are residual and possible allegiances (which exist even in non-immigrant societies, where nations are formed out of tribes or regions once subordinate to local kingdoms or aristocratic families), partial and local forms of responsibility can be delegated. Everyone should be grouped up, and groups should be allowed to exercise the executive and judicial powers needed to maintain themselves as such. What about individuals who want to escape their groups?
Like quitting a job, you’d have to find another group willing to “adopt” outsiders, which they might have all kinds of reasons for doing. Think of the self-exiled black American artists who became, essentially, honorary Frenchmen and women over the course of the early to mid-20th century.

Liberalism abstracts, ruthlessly; counter-liberalism should concretize. The central authority wants all forms of authority to flow into its own, and that might involve inheriting residual and possible modes of authority borne by its people. A great deal of the ruler’s activity should involve issuing and supervising charters: for corporations, for townships, for local forms of authority, for associations of various kinds. If you want a recognized identity, apply to the central authority for a charter, or apply to subscribe to an authority that has already been chartered. The more generic, and potentially disruptive identities, like those promoted by feminism, can be broken down and sorted out: women’s groups focused on the education of women, on moral improvement, on counseling wives, on the preservation of traditional ceremonies and customs, and so on. If a woman wants to experience womanhood in sisterly relations with other women, there can be plenty of opportunities for that in non-antagonistic forms. If an identity can’t really be chartered for some purpose the central authority can acknowledge, then it’s best to let it dissolve.

Through all of this, to be an individual is to be a morally responsible person. This involves, not imagining oneself outside of, and victimized by, “society,” but establishing practices that defer centralizing violence. What is important is the character of the violent intention one resists, not its target: seditious violence against the state can display the same mimetic contagion as the merciless bullying of an unpopular child, and both cases require an inspection of the authorities that have allowed the contagion to grow to the point where outside intervention has become necessary. But moral action can only be carried out through the identities. The media strategy of dispersal and incorporation involves providing models of victimary self-centering and transgressive charisma. You can put yourself on the market as a victim of the normal, or as a defender of such victims who exposes the oppressive underbelly of the normal. It’s a way of taking yourself hostage and demanding the ransom payment. This media strategy works because it managed to plug into the dominant, pre- and non-Christian, heroic narratives of mass culture, which always involve a single man or small group of men defeating many more or much more powerful enemies in defense of the victims of those evil enemies. Reprogramming that narrative is a simple matter because the vicarious pleasure taken by the viewer is too obvious and too obviously exploited and hence somewhat shameful when exposed explicitly—so, the pleasure can only be preserved if the evil enemy is turned into some “exemplary deviation” from the cultural source of the heroic narrative itself. So, Captain America has to fight, not the Nazis, but the Nazi within all of us, embedded in what we take to be normal. 
His charisma becomes transgressive; but, as I said, this is not so difficult to accomplish, because constructing perfectly evil villains already elicits a kind of guilty transgressive pleasure—unrestrained violence is allowed where it normally wouldn’t be.

Centralizing violence, then, is primarily directed against the normal, or what we could call the normalest normal, the exemplary normal. The normal that’s so normal it has no idea how oppressive it really is. Obviously, in today’s culture this means white+male+Christian+straight+conservative+middle class. Moral action then needs to deflect this centralizing violence from the normal, but this is no easy matter—defending some ordinary guy against a virulent hate campaign because he said something currently deemed racist or sexist invites comparisons between his suffering and all the suffering that has been experienced historically, even if not in this particular case, by those who might, following some familiar if far-fetched chain of consequences, be possibly victimized through the racist or sexist statement. And there’s no transhistorical frame for determining the right terms of comparison. How do you weigh the humiliation and economic deprivation experienced by some middle-class white guy against hundreds of years of violence done to black bodies, etc.? To defend someone is to enter the legalistic game of attack and defend, and even if you can occasionally manage to turn the tables, the prosecutorial initiative always lies with the defenders of the marked on the market against the unmarked.

The normal is the unmarked, and the postmodern critique that norms produce their own deviations is self-evidently true. The lives saved and improved, the cultural “equipment” made possible, because of the restraints placed on desires and resentments so as to reinforce the most local centers, are all invisible; those chafing under those restraints, unable to comply with them through, arguably, no fault of their own, are highly visible. The long-term horizon of liberalism is that we will all be unmarked; until then we must keep up the war against the unmarked, who by definition, “structurally,” mark the others. If we are to get to the condition of universal unmarkedness, then, that means the most marked of today (the transgendered handicapped Somali refugee…) will someday become the norm. But does it not follow, then, that at the origin of any norm is the most marked? There is nothing more marked than inhabiting the name others ostensively designate you with because that’s who you have, in fact, turned out to be. To be marked is to perform the paradox of self-reference—to be both liberated and constrained by the name. Everyone’s mimetic rivalry circles around this marked one, and mimetic violence is always just below the threshold of convergence upon him while he manages to expose the potential violence, make a nomos out of it, and recruit everyone to defer, early on, future signs of such violence. This is where a new norm comes from.

Moral action, then, entails performing the hypothetical origin of the norm. This involves opening a disciplinary space within the disciplines—it is the disciplines that control the system of naming. The disciplines can say, “X is Y,” or someone characterized by this feature is going, according to some probability we are competent to establish, to have this other feature. Go ahead and treat X as if he has this other feature, then—the burden of proof is on him. This organization of reality is inevitable, and only immoral if a space is not left open for that burden of proof to be met. Moral action is meeting that burden of proof while imposing a like burden on the disciplinary agents who establish it—what, exactly, do sociologists, psychologists, economists, etc., and the activists mimicking them at a distance, “tend” to do? The terms establishing burdens of proof all come from the nominalizations resulting from the supplementations of literacy, upon which the disciplines are founded. A word like “legitimacy” will have been derived from precise rituals and ceremonies that would have once served to mark one as institutionally recognized; now, it’s an abstract concept manipulated by those in the disciplines taking sides in power struggles. In that case, there’s a kind of moral “arbitrage” that can be enacted by referring the competing nominalizations in any confrontation back to these power struggles. Attaching various “qualities” (the “Ys” mentioned above) to, say, “white males,” indicates some power differential—the “accuser” thinks this will be effective in some way. What power does it enact? Well, “history,” or “equality,” or “morality”—OK, but name some people, institutions, powerful figures embodying this power. Whom are they contending with, and for which discernible stakes? What will the victor be able to determine? Sure, in placing a burden of proof back on you (“people who believe in ‘racism’ are…”), I’m also hoping to leverage one power against another.
In that case, no one is unmarked. That must mean we all want everyone to be marked in such a way as to defer, rather than incite, centralizing violence against them. The power struggles circulating through us make that impossible—each power can contend against the other only by means of incitement. The most moral thing to do then—to sound Kantian—is to act as if my act will increase the likelihood of an orderly arrangement of power that will mark (“(re)deem”) everyone accordingly—even though I can’t know in advance where I might fall within that order (a little bit of Rawls there as well). I’m a sign of disorder if that prospect repels you (and you need your dose of centralizing violence), and of order if you can imagine a complementary relinquishment on your part. In that way—to sound Nietzschean—we forge new norms. We return the disciplinary nominalizations back into acts conferring faith, trust and loyalty. The markings of racist/sexist/homophobic/transphobic/… are converted into notations on the accomplishments and responsibilities those charges aim at dispersing.

February 26, 2019


Filed under: GA — adam @ 7:24 am

It’s common to hear some event or discussion denounced as a “distraction.” A distraction, presumably, from what is really important. A distinction between what is more and what is less important is essentially a distinction between what is more real and what is less real. What is more real, it will always turn out, is what better fits the model of reality you presuppose—and wish to impose on others. So, people pay more attention to some lurid scandal manufactured by media outlets than to the latest study showing a decline in the wealth of the middle class. Clearly, the latter is more real, more important, because it is a sign of other things that are real and important: a decline in consumption, leading to a recession; growing dysfunction among members of the affected group, leading in turn to growing rates of dropout, drug addiction, and crime, with the potential for a riskier and less stable society; the possible emergence of new political forces trying to represent the dispossessed, with the possibility of upsetting the existing establishment, and so on. Meanwhile, what follows from the scandal? Nothing real—one corrupt politician gets replaced by another, maybe a new rule, soon to be forgotten, gets imposed—no one will remember it a few years down the road.

But in attributing such a higher degree of reality to certain processes, a further assumption is made: that those who are enjoined to pay attention to those processes in proportion to their reality can also affect the event, or its subsequent consequences, in proportion to the attention paid to it. Why criticize or ridicule others for being “distracted” or “distracting” if distributing their attention in a more appropriate way is not going to pay off in commensurate power over what one pays attention to? Otherwise, why not just pay attention to local, everyday, “petty” events and issues that one might be able to influence; or, to what one finds amusing or exciting? The one criticizing the distraction and the distracted, then, is the one out of touch with reality: more people paying attention to the latest economic developments does not add up to more people having intelligent, informed discussions about those developments, which would not, anyway, in turn, lead to a shift in the commitments of policymakers, such that they would now start formulating and implementing policy in accord with the presumably coherent and essentially unanimous conclusions drawn by those intelligent and informed discussions. The pathways from events, to reporting of those events, to taking in that reporting, to public opinion, to official political responses to public opinion are all cut in a manner unrecognizable to one who takes the model of the public-spirited, informed citizen seriously.

Well, then, how should one organize one’s attention? In such a way as to find vehicles for thinking through the anomalies and paradoxes that most forcefully present themselves to you. If there really are intrinsically more important or more real realities, that’s the way you’re going to find them anyway. This means we’re always working with what’s “at hand”—even when we want to be important and talk about important things we end up carving our own little niche within them, like arguing some technical point that hardly anyone else considers important. The desire to pitch one’s tent at the realest of the realities is the desire to have a commanding metalanguage that enables you to give orders, at least in your own mind, or the space you share with others, to those who actually command, who occupy centers. When a pundit or resentful intellectual says that some politician did this or that in order to distract us from what he’s really doing, resentment is expressed in a satisfying way insofar as one feels superior to those so easily distracted and to the politician who thinks he can hoodwink you. You can construct a pleasing image of the political leader who will come along and carry out your instructions to the letter.

A better criterion for determining relevance and reality is to employ as much as possible of the signifying means available on the scene where you find yourself. You’re on a scene—you’re thinking about things, which is to say rehearsing potential future scenes; you’re observing something; you’re speaking with people, even if mediated through a screen. The scene has props and supports; it has a history. The participants have entered this scene from other scenes. All of this leaves traces on their posture, gestures, tone, words on the scene. All of it can be elicited. How much, and what, exactly, of it, should be elicited? Well, this is at least a much better way of posing the question of relevance than looking for an objective hierarchy of importance. Elicit whatever can be turned into a sign of the center of the scene. Any scene falls prey to mimetic rivalry: one actor tries to one-up or indebt the other, maybe even without realizing it. Everyone involved wants to be at the center, which might very well mean subverting another’s bid for centrality. It certainly means evincing resentment towards whatever keeps us all on the scene in the first place—even if, in fact, we’re all there to see that person, her attempts to usurp others’ attempts to be at least the centers of their own sites of observation are a form of resentment. And, of course, pointing these things out on the spot leaves one, justifiably, in fact, vulnerable to charges of deploying an escalatory form of resentment oneself.

Any sign of resentment toward the center is also a sign of genuflection before it. You can always show another their resentment by simultaneously showing their worship of what they resent, and of whatever it is that counts as a center upon the scene. The resented/worshiped figure, itself, points to some other center: whatever we deem to be “in” or “of” the resented object is also elsewhere, in whatever allows that object to carry on in such an offensive way. If your argument with someone escalates, it gets to the point where it becomes “excessive”—but what does that mean? Excessive according to what measure? Well, the argument started with an “issue,” but the stakes have now risen to the point where the “issue” has become secondary—the confrontation becomes the thing itself. Like it or not, your resentment toward the other is a form of worship: you devote attention to him, and attribute to him power over your own actions (he’s making you angry). But this means that the original “issue” hasn’t been left behind—it turns out that that issue was a mere proxy for this new one, this new form of devotion. And who’s to say it’s less relevant? But what form of worship will this turn out to be? If he kicks your ass, it ends anti-climactically, and you return to your own group in shame. If you kick his, well, maybe it’s the same, because it turns out he wasn’t a worthy adversary, which is also a bit shameful and not very “relevant”; but if you return in triumph, you install him as a kind of permanent deity, whose prowess proves your own. You construct an idol, and will require ritual repetitions of the same battle.

But it’s also possible that the two warriors will discover that they worship, not the other, but something that is neither of them. Whatever allows them to make peace with honor intact, whatever they can swear by together—that is what they worship. But now every sign put forth by the other—every sign of fear overcome by courage, all evidence of training, sacrifice, self-denial, skill—that one can emulate, that one has put forth oneself, is a sign of devotion to that center. All the words that the two will henceforth speak to each other, and that others, telling their story, will speak of them, testify to that center. If one gets unreasonably angry with the other they can both laugh, because that resurgent resentment recalls the scene upon which its predecessor was transcended, and therefore becomes a sign of that transcendence. The show of resentment is just a demonstration of the gift of vigor given by the center.

This brings us to critical ontological and epistemological questions. We’ve already dealt with the question of “reality,” that is, whatever is inexhaustibly signifying. It’s also a question of truth, which, in social and cultural terms, can only mean the eliciting of signs of one another’s relation to the center. One central principle of modernist art is that aesthetic value lies not in what a given work represents (ideas, a social reality, etc.) but in the extent to which it makes full use of its materials—colors and shapes on a flat surface, words on a page, and so on. Modern art and its theoretical defenders were right to defend art against its social utility, which in practice means kitsch, but were mistaken in thinking that rigorous artistic practices meant eliciting desires concealed or suppressed by the civilized social order. The materials of art are the materials of other areas of life, which also use colors, shapes, surfaces, words, sounds, etc. The vocation of art is to retrieve those materials from the disciplines, which use them to establish the hierarchies of relevance through which they hope to subordinate those who occupy the center. To some extent this always means the disciplinary establishments of the arts themselves.

Whatever is presented as relevant in itself is to be presented anew as a product of a scene. This includes all the aesthetic materials that, in a disimperativized declarative, disciplined order, are set up for purposes of control—for the anticipatory capture and sequestering of resentments generated by the carousel of rotating power itself. The more you can event-ize and scenicize the conceptual hierarchies streaming toward you, the more reality and the closer to truth you are getting. These conceptual hierarchies always stream toward you through other people, people mediated by scenes and media. The conceptual hierarchies, then, need to be performed along with them—one needs to help elicit from them this performance, and help them elicit it from oneself. When the conceptual hierarchies dissolve, the real hierarchies that don’t need their support become more visible. The concepts can then be put to use discerning what the real hierarchies demand of us.

Here’s another way to think about it. Our lives are increasingly run by algorithms, which are really just a technological extension of the desire to predict what others will do. If I’m in a difficult situation, and I can predict what others will do for the next 5 minutes, I might get out of it; if I have machines that can predict what pretty much everyone will do over the course of their entire lives, I can dominate them fairly easily. Two things are necessary to build such machines: first, humans must be analyzed, broken down, into parts (fears and desires, primarily) that make them predictable; second, a social world must be built that constantly elicits those anthropological mechanisms. It’s a bit too science-fictiony to say we currently live in such a world, but it’s obvious that we are governed by those who are trying very hard to construct something along these lines. If you want to approach this in libertarian terms, you could say freedom depends upon being anti-algorithmic; in autocratic terms, you could say that clarity in the command structure requires it. The ideal of the algorithm is to separate the declarative order from the ostensive and imperative worlds once and for all. In the perfected algorithmic order no one need ever command because everyone would always already be guided spontaneously upon the path that maximizes frictionless coordination.

It’s pointless to ask, well, what would be so terrible about that (even if we could answer the question), because the ideal of the algorithmic order is really the opposite of what it appears to be. It’s just a war machine. It grinds you up to generate the inputs it needs. The victimary left thinks it opposes the algorithmic order because it reproduces the hierarchies resulting from behavioral differences—but the left just wants to control the machine. Which just proves human decisions are necessarily made to determine what counts as “inputs.” So, the left can’t have a counter-algorithmic program. Countering the algorithm would involve asking, what would be predicted of me now? And then confounding the prediction. I don’t mean that, if your “profile” suggests that you will behave compliantly in a given situation, you should instead kill a bunch of people. Indeed, a slightly modified algorithm could predict that. It means looking at the markers of compliance, as many of them as one can in a given scene, and delineating their imperative structure. We’re following orders here—we can all see this, right (look at what that guy just did… why do you think this space is arranged in just this way?… did you notice how that gesture made her nervous?…)? The algorithm can’t account for an ongoing exposure of the terms of obedience. There’s no telling where it will lead—not necessarily to disobedience; maybe to subtle shifts in obedience that might eventually add up to decisive ones. The algorithm can’t account for someone seeking out an other worthy of being obeyed, or trying to become worthy of being obeyed oneself. The algorithm can’t account for the irreducible determination of relevance, of centrality, on the scene. It can’t account for the reading and writing, literal and figurative, of all the signs of the sign as signs of centrality and marginality—and therefore of relevance.

February 19, 2019

Can Networks Crowd Out Markets?

Filed under: GA — adam @ 7:24 am

When I go to the store to buy a loaf of bread, I have to pay the supermarket because I am not performing any equivalent service for them, or because, as is the case in David Graeber’s “communism,” we are not part of a community in which it is a matter of course that each takes what he needs and contributes what he can. So, how does the supermarket know how much to charge me? They buy the bread from a baker, and they have to charge enough beyond what they pay the baker to cover the costs of the building, the store technology, salaries and wages, etc., and still make a profit. The baker, meanwhile, needs ingredients, equipment, a building, employees, etc., and that determines how much he has to charge for the bread. And likewise for those who sell the baker the ingredients, etc. If we start at the other end, the supermarket charges what consumers will pay, which comes down to how much consumers prefer buying bread here as opposed to some other place, how much they prefer bread to possible substitutes (e.g., fajita wraps), how willing they are to forego bread compared to other types of food if they have less money to spend, etc. Still, at any point in time, there must be some minimum cost of making bread, and if it isn’t sold at that cost, bakers will simply cease to exist.

The standard argument for markets against planning is that no one can know how many people over, say, the next month, will want how many loaves of bread, and, then, what the operating costs of the supermarket, the ingredients and equipment of the baker, the machines that make the equipment for the baker, the machines that make those machines, etc., will be over the next month. The only way to find out is to let supermarkets sell as much bread as they can, then have the bakers buy the ingredients they need to meet the demands of the supermarket, and so on, until we end up oscillating around an average amount of bread sold per month—sometimes the store will be out for a little while, sometimes it will have to throw away some extra, but that then gets worked into the cost. And, of course, with some products (probably not bread, though), the fluctuations can get really dramatic, to the point of forcing retailers and manufacturers out of business.

But, what if this all worked on something like a subscriber system? I, along with many others, sign up for a certain amount of bread (or a “basket” of goods, within a prescribed range of variation); the supermarket, along with other shops, subscribes with a “co-op” of bakers, who in turn subscribe with producers of wheat and ovens, and so on, who in turn subscribe with those they need to procure materials from. This could only work, or—not to get ahead of ourselves—be imagined to be possible, if everyone joined various co-ops and virtually all economic activity took place through them. Those who make the machines to make the ovens sold to the bakers must ultimately, through a vast network of subscribers, subscribe those who employ the ones buying the bread. Obviously, I’m trying to imagine an advanced economy without prices and therefore without money, but also without central planning, and if that’s utopian, so be it, but I at least want to think it through in absolutist terms, and maybe someone else could incorporate these reflections into more sustainable ones. The question is whether the more advanced form of coordination that the elimination of liberalism, democracy, which is to say, “politics,” might make possible, would in turn make cooperation through a vast network of agreements between directors of enterprises possible.

It is already the case that much, if not most, economic activity is organized through networking: if you’re starting a new company, you look for customers and suppliers through established channels, including friendships and other informal associations; also, you will prefer to work with more stable companies with good reputations, in communities with reasonably favorable and predictable government, etc. You will prefer customers and suppliers you expect to be around for years over those who might leave or go under next month. None of this is directly priced. But all this means that the most basic value is an institution in which the high, the middle, and the low, cooperate rather than struggle against each other, in an area where other institutions are not trying to subvert the ones you want to work with. Companies that are well established and well run, and that value their reputations, will, of course, charge one another for goods and services, but they might also be more open to various forms of cooperation and exchange where payment can be waived or indefinitely deferred. With enough companies like this, you have your network of subscribers, and below a top-tier of networks, you could have lower tiers that would be more unstable and might need further supervision, or might even rely on money, but the damage done by their failures might also be contained.

Two big problems arise (at least that I can think of now). First, wouldn’t this be a very static system? How would innovation be possible—how could anyone break into the existing networks, and why should the top-tier companies feel any need to improve their products and services? Second, within this vast network of agreements that all depend on each other within an enormously complex system (the baker can promise x amount of loaves because the farmer has promised y amount of wheat, which he can do because the manufacturer has promised z amount of replacement tractor parts, with everyone bought in all the way across the social order), how can self-interested defections (cheating, strikes, adulterations of materials, side deals, black markets, etc.) be prevented? The eligibility of subscribers would have to be regularly assessed. We must assume the highest value of all is a unified and coherent command structure stemming from a central authority and reaching into all institutions. Individuals reporting directly to authorities would be in each company, and everyone in each company would be expected to consider himself a “delegate” of the government. There would be social pressures from other subscribers, who would stand to be disqualified if some of their members were found to be defectors. (If a particular subscriber can no longer fulfill its responsibilities, the central authority’s responsibility is to replace or reconstruct its governing structure.) We really have to assume the individualist, utilitarian ethics and morality of liberal society can be eliminated, and people can think of themselves as directly social. While it doesn’t quite prove anything, doesn’t the fact that so many millions of people can be recruited into an increasingly apocalyptic leftism that in many cases at least cuts against individual economic self-interests suggest that for many, if not most, utilitarian ethics are extremely unsatisfying?
Doesn’t the fact that so many wish to resist the new order, without any guarantee that it will benefit them materially, indicate the same?
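The second problem above, checking that every promise in the chain is backed by promises further down (bread from wheat, wheat from tractor parts), can be pictured with a toy sketch. Everything here, the class names, the quantities, and the recursive capacity check, is a hypothetical illustration of the subscriber idea, not a design:

```python
# Toy model of a subscriber network: each producer promises output to its
# subscribers and depends on promised inputs from its own suppliers.
# All names and numbers below are illustrative assumptions.

class Producer:
    def __init__(self, name, output_per_input):
        self.name = name
        self.output_per_input = output_per_input  # units produced per unit of input
        self.suppliers = []       # list of (supplier, promised_units) pairs
        self.promised_out = 0     # total units promised to subscribers

    def subscribe_to(self, supplier, units):
        # Subscribing registers a promise on both sides of the agreement.
        self.suppliers.append((supplier, units))
        supplier.promised_out += units

    def capacity(self):
        # A producer with no suppliers is treated as a raw source with
        # ample capacity; otherwise capacity propagates down the chain.
        if not self.suppliers:
            return float("inf")
        total_input = sum(min(units, s.capacity()) for s, units in self.suppliers)
        return total_input * self.output_per_input

    def can_fulfill(self):
        return self.capacity() >= self.promised_out

farm = Producer("wheat farm", output_per_input=1)
baker = Producer("baker", output_per_input=2)   # 2 loaves per unit of wheat
store = Producer("store", output_per_input=1)

baker.subscribe_to(farm, 100)   # baker subscribes for 100 units of wheat
store.subscribe_to(baker, 150)  # store subscribes for 150 loaves

print(baker.can_fulfill())  # → True: 100 wheat yields 200 loaves, 150 promised
```

In this picture, a shortfall or defection anywhere in the chain surfaces as a failed capacity check upstream, which is exactly the point at which the central authority's role of replacing or reconstructing a failed subscriber's governing structure would come into play.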

Innovation is really the bigger problem, since any innovation would have to, it seems, disrupt an extremely complexly integrated set of networks. How would R&D be conducted, and by whom? Here, I would follow up on some of my earlier “social market” posts and emphasize the centrality of the state to economic activity. This has always been the case, and is certainly so today—research is overwhelmingly shaped by state investment and subsidies, starting with the state’s monopoly on military equipment (which, of course, involves myriad spin-offs), but including quite a bit of investment in medicine, environmental technology, computing, communications, infrastructure, space exploration and so on. We can now add to that social network and surveillance technology. The state, then, perhaps in the form of its various agencies, would be a major subscriber within the networks. The state would be a major “consumer” of technological innovations and the co-ops who want to subscribe to certain forms of technological innovations could do so. (What does subscribing to the state entail? What does the state provide in return? Land, corporate and monopoly rights, airwaves, electronic networks, etc.) The co-op would then be able to provide a better product to its subscribers, and could solicit more subscribers as a result; as a subscriber to its suppliers, meanwhile, that co-op could be more demanding—its greater influence would enable it to get its employees into better co-ops or subscription lists.

I know it sounds crazy to speak of an advanced, civilized social order without money, in which everyone asks another for what they need, and in turn gives others what they ask for. Maybe it is crazy, but I think it’s worth the speculation if we are to think beyond liberal fetishizations of the market. Almost everyone will concede that markets require some state support and regulation, but such concessions almost always assume that such support and regulation is the “lesser evil,” and so encourage us to constantly chafe against it, and assume it could always be reduced. Nationalist economics imagine a more positive role for the state, but that still involves intervening in already existing markets in very targeted ways—the basic liberal anthropology is not challenged. “Big government” left-liberal political economy, meanwhile, always presupposes an adversarial relationship between agents in the private economy—all the state does is take sides in that struggle, or sometimes act as a referee. But if we are genuinely to see the central authority as the source of social organization, as, essentially, the owner of the entire territory over which it rules, with subordinate agencies having delegated quasi-managerial powers over the “productive forces,” then we should try to formulate that relationship prior to any mediation, like the various departments of a corporation. A corporation has external constraints, of course, but, then again, so does a government: it must show itself a respected and responsible actor on the international scene, whatever its place among an international hierarchy of states. But more important than all this is how to think about the creation, or re-creation, of an ethics, morality and aesthetics that transcends liberal ontology and anthropology.

I think the conventional view that sees pre-modern peoples as more “spiritual” and less “selfish” than moderns has it completely wrong. With all of the adherence to ritual and belief in supernatural agencies, pre-modern peoples are driven by the most material interests—fear and need. If one sacrifices regularly to the gods and is careful not to violate any ritual prescriptions, one will be provided for—one will have victory against enemies, a good hunt, rain for the crops. In a post-sacrificial order, there are no more exchanges with the gods, or even God: what God has given us is everything and incommensurable with any return; what each of us gives to God is also everything, all of us, even in the knowledge of its utter inadequacy. This desire is no less powerful in the atheist. The post-sacrificial epoch would better (and more positively) be called the epoch of the absolute imperative, a concept I take from Philip Rieff. The absolute imperative is absolute because no imperative can be issued in return by the commanded (no “God get me out of this and I’ll never…”). The absolute imperative is to stand in the place of whomever is violently centralized, i.e., scapegoated. The absolute imperative has its corollaries, enjoining us to construct and preserve justice systems that place accused individuals at the center in a way that defers, delays and ultimately transforms the scapegoating compulsion, or to represent actions and uses of language that reveal the scapegoating compulsion in less than obvious places. Obviously, we (at least in the West) “hear” the absolute imperative because of the Judaic and Christian revelations, but it can certainly be made “audible” in other ways. But all the radicalisms and “holiness spirals” of the modern world, however “puppetized” and “proxified,” are set in motion by an attempt to obey the absolute imperative.
Despite the best and worst intentions of economistic thinkers (who are really obeying the absolute imperative in their own way), human beings will not be satisfied with an affluent society, not even if we make it a little bit more affluent. Or, at least, not enough humans will be so satisfied as to allow affluenza to settle in, undisturbed, once and for all.

What, then, are the economic consequences of the absolute imperative? Eric Gans—while not speaking of an “absolute imperative”—sees the economic consequences of the Christian revelation to be, precisely, the free market, where exchanges are voluntary and non-violent and the natural world can be exploited in increasingly productive ways. This may be part of it: exchanges mediated through money are a huge moral advance over such economic practices as pillage and slavery. But if the most powerful players on the market simultaneously centralize and destabilize central authority, the ethical and moral advantages of both the market and central authority are compromised beyond repair. The two must be kept separate: money must be kept out of politics, but once the money is out, what is left of the politics? What, indeed, is left of the money? If money is first created by central authorities in order to enable individuals to purchase their own animals for sacrifice, then it is, from the beginning, a consequence of the derogation of authority over a shared sacrificial scene. The same is the case when money and markets are created by the imperial state to provide for soldiers in conquered territories—here, as well, money is a marker of the limits of authority, which means that what money really measures, more than anything else, is the degree of pulverization of central authority. A secured central authority would, then, have to contain the market within its embedding, enabling, moral and ethical (disciplinary) boundaries. The use of money as an abstract sign of the goods, services, and capacities to be commanded by its possessor would necessarily dwindle: the uses of money would be qualified in many ways. How much could that use dwindle? If it dwindled to nothing, wouldn’t that mean that economic activity had been wholly re-embedded in a thoroughly articulated, self-referential social order devoted to ensuring the institutionalization of the absolute imperative?
That, at any rate, is the thinking behind the thought experiment attempted here.
