GABlog Generative Anthropology in the Public Sphere

September 11, 2018

Signing Up

Filed under: GA — adam @ 7:42 am

The human is that being for whom repetition is problematic. A sign has meaning insofar as it can be repeated, which is to say, repeated as the same sign. We can go further and say that the meaning of a sign is precisely the various ways and occasions upon which it can be repeated. One’s understanding of a sign is demonstrated by the ways one is able to repeat it and have it accepted as that sign. But since a sign refers to a center shared with others, whose cooperation, or even attention, cannot be ensured, meaning can never be guaranteed in advance, just as you can never be sure whether a joke will fall flat. It is conflicting desires and the resulting resentment that make signs possible, necessary, and problematic. All culture is created so as to defer violence and protect the center that enables us to do so, but in that case it might be more minimal to say that all culture is concerned with making repetition as certain as it needs to be—to ensure that this sign remains this sign for as long as and for whom it is necessary.

The originary hypothesis assumes an event at the end of which one thing is significant (the gesture of aborted appropriation) and one thing is sacred (the central object)—meaning is completely concentrated in that gesture—the rest of the world is (now) “meaningless.” If meaning is articulated by the ostensive gesture, though, it must articulate the entire bodily posture of the individuals involved. Pointing toward the central object while standing still, and bending slightly backwards, would help endow the sign with meaning in a way that pointing toward the object while leaning, or creeping, forward would not. There would be a tendency to saturate the human bodies with meaning as the originary event is repeated, both in ritual and in its extension to other practices. Certain situations would call for certain kinds of accentuation, and certain kinds of downplaying, of rendering “null,” certain components of the sign. We learn to calibrate the accentuation and nullification—any English speaker can make out, for the most part, the same sentence as spoken by an American Southerner, a New Yorker, a Midwesterner, a Brit, an Australian, etc., which is to say we can control for accent when we’re interested in a specific kind of meaning (semantic); at the same time, the accent can at times become an important part of the meaning.

I would assume that the ability to exercise such control is a consequence of millennia of learning and the development of media that singled out specific features of meaning, the most important of which media I would consider to be writing. The earliest sign users must have moved quickly from that one, single meaning to a world bursting with meaning. The slightest move by another member of the group would take on the form of some kind of menace, suggestion or invitation, which must be directly responded to, because there could be no question as to the meaning of the movement, and that meaning must be “verified” or extended, which is to say, repeated, by a complementary sign/movement. The surrounding world would also be replete with meaning—every animal, plant, change in the weather, etc., would be saying something. And those signs would be repeated as well. A “system” would develop, but it would look nothing like Saussure’s “system of differences,” or grammatical or logical systems; rather, it would be a system of what Marcel Jousse called “gestes,” in which sound/gesture/posture articulations, each with its own balance and rhythm and communal meaning, would complement other such articulations, in a never-ending process of generating social coherence. Coherence would result, which is to say repetition would be relatively ensured, by making this gestural-oral system finite and exhaustive, such that every sign can be seen as indirectly referring to all the others. It is the textual reduction of meaning that creates the infinite system. (Is there, then, a tendency in the “secondary orality” of electronic communication to return us to exhaustive finitude?)

In a finite, exhaustive, or ergodic system, all signs must ultimately be coming from the same place: the center. All individuals are mouthpieces, or enactments, of the center—this doesn’t imply a lack of individualization on the part of early, gestural-oral communities; in fact, there are many reports that the more primitive communities contain a greater richness of individual differences than our more civilized ones, and I think this is credible because trying to speak for the center might easily generate far more diversity than striving to distinguish oneself from it, which actually gets monotonous pretty quickly. It is also the center that would enable a hierarchy of significance, making it possible to distinguish between higher and lower stakes events—every gesture by any other member of the group that suggests in the slightest one’s lesser value within the group doesn’t necessarily have to be answered with maximum and immediate force. But you know that because the center, in the form, say, of a ritually consecrated ancestor, who is at the same time you, tells you so. This is to say that the most insistently and carefully repeated signs create a tissue and texture that helps ensure consistent repetition all around—unvarying repletion of meaning would be extremely wasteful.

The most consistently and completely meaningful system, then, would have been sacral kingship—there, all meaning flows from the center and is directed back to the center, with that center claiming its centrality by way of its descent from the origin of the group, which is to say humanity, itself. After sacral kingship, we no longer speak from within the center. I hope it’s needless to say that no nostalgia for sacral kingship is implicit here: the point of remembering it is to explain why the reductions of meaning that followed sacral kingship tended to assume that the surest way of maintaining stable repetition was by distinguishing the individual from the center. Just as the distribution of money to buy victims for the sacrifice replaced the collective presence of the group on the scene, the distributability of the center leads one to focus on the rules for distributing the pieces. This involves a diminishment of meaning, or “disenchantment.” It’s logical to assume that the trajectory to be followed here is to keep “clarifying” meaning, and distributing it in discrete, measurable chunks. There will always be significant power centers that find this trajectory convenient, and those power centers will have large constituencies, precisely because it produces conveniences more broadly.

The counter to this process is a re-embedding of meaning irreducible to its calculated distribution. Clearly, a return to pagan ritual and sacral kingship is not an option here. In this case one must accept the cliché that in order to get out one must go through. The new mode of thinking initiated by the originary hypothesis provides us with a conceptual vocabulary for describing, with great detail and accuracy, every single desire and resentment, in terms that would not be chosen by the bearers of those desires and resentments but that would be simultaneously very difficult for them to deny. This kind of “parrhesia” provides for a convergence of GA with much of the alt-right and neo-reaction, both of which similarly wish to map out, openly and honestly, the “mechanics” and rules of interaction between individuals and groups. It is only such a peeling back of illusions and ideologies that can make a “formalist” political project, in which actual power relations are formalized, possible. (Without a disciplinary space trained on all the various articulations of power, how could the actual relations be formalized?) Pursuing such an inquiry is the highest vocation of the human sciences.

The question, then, is how does this contribute to the re-embedding, the re-repletion of meaning? First of all, I will note that an indication of how depleted meaning is for us is that the most meaningful thing one can do today is mock, ruthlessly, the circulation of the clichés and commonplaces that have hidden large chunks of reality for decades. What is eminently mockable is diminishing in meaning (the mocking accelerates this process), which means that a process of diminishing returns is in play here. It’s very interesting to consider that, for example, as Jean Baudrillard proclaimed long ago (and Slavoj Zizek, among other postmodern thinkers, has a good sense of this as well), we can all cease to believe in any of the propositions of the “dominant ideology”—we can all come to realize that liberalism, equality, democracy, rule of law, etc., are all jokes—and that the system can go on, because of as much as in spite of this. But Baudrillard, Zizek and the others don’t know that meaning is deferral.

To describe desire and resentment in “long form” is to make explicit much of what is usually left tacit; it is to put what usually remains on the ostensive and imperative levels into declarative sentences. X resents Y at work for getting the better office. It would take quite a few sentences to unpack this resentment into a series of explicitly stated relations of difference, power, signs of status, the limits of possible responses to this “injustice,” the concept of “injustice” itself, the reasons for the limitations on possible responses, and so on. Think about explaining things like desire and resentment to intelligent, non-human beings. This has always been the goal of metaphysics, and more narrowly, the human sciences, even if mathematics replaces some of the propositions that would be required. But metaphysics and the human sciences have done so to reduce to a minimum the hold ostensives and imperatives have on us—the liberal millennium, which not coincidentally looks a lot like the singularity, would be the complete replacement of the ostensive and imperative realm by the declarative. That would inaugurate the “Age of Reason,” but we would find all those declaratives themselves devoid of meaning, since they would never actually be referring to anything; or rather, they would have pure power meanings, as they would be built to subjugate anyone insufficiently proficient in their articulation.

But if we are creating a new human science that has a different goal, which is to use declaratives to study the intricate networks of ostensive, imperative and interrogative sentences that in fact make them possible and are inscribed within them, we are free to note the inherently parodic results of precisely the most accurate and detailed transcriptions of desires and resentments. There is a good reason that pretty much all good modern literature is in one way or another a satire of disciplinary or, more broadly, “hyper-declarative,” thinking. The explosion of language generated by the human sciences can so easily be used to show the desires and resentments of the human scientists themselves. This satiric take on the disciplines is effortlessly included in the new, originary, human science, which defuses desires and resentments by exposing them, while also revealing the social relations we assume and therefore the obligations we take on in nevertheless experiencing slightly more deferred desires and resentments.

So, the “red-pilled” or “uncucked” right, whatever it will be called and whatever it will be, is inherently a satiric operation (perhaps the first constitutively satiric politics ever). Not “satiric” in the narrower sense of criticizing present day norms, mores, and “follies,” but more like what Wyndham Lewis called “metaphysical satire,” one directed at humans as repeating beings who never quite get repetition right. Satire, more than other literary forms, is based on repetition—it purports, unlike “realism,” to represent actual and not merely possible actions (for satire, even when fictionalized, to work we need to have specific targets in mind), and to do so in a way that is “distorted” from the standpoint of the target but truer for the satirist. It is therefore also the most responsible form because its goal is to help continue to check and improve our iterative capacities (is that portrait like so-and-so or not? How can we tell?). It is therefore well suited for the project of replenishing the world with meaning again, as it implicates the fundamentally paradoxical nature of our being as sign users (signers?). With its help, we can see signs of the origin of our human being everywhere.

September 4, 2018

Moral, Ethical Governance

Filed under: GA — adam @ 8:26 am

No theory of government could be more insistent than liberalism that government must be morally neutral, and not choose between different versions of the “good life.” And no form of government is more perpetually frenzied by moral panics than liberalism. The contradiction in the liberal stance is obvious at first sight, has been often pointed out, and need not detain us too long. If your theory of government is that the government is to remain neutral between different versions of the good life, then the full fury of that government must be brought to bear on whoever would put forth their practice of the good life as superior to others. Now, in a certain sense, everyone does this, making liberalism especially incoherent on this point—even your live-and-let-live guy is presenting living and letting live as a superior version of the good life to be protected by the state over others, with its own privileged attitudes, legal regime, and so on. But the real force of the liberal argument is against a state religion, since that provides for the most systematic imposition of a notion of the good life, so the state religion of liberalism is the transgression of all religions that would purport, even gently, even tacitly, to represent the good life, while the state theology of liberalism is to find a state religion lurking in the doctrine and even daily habits of your political enemies.

But liberalism’s frenzied anti-moral moralism has made the consideration of some very simple and basic questions almost impossible. Could anyone deny that the ruler or the state could act in ways that tend to make its subjects better people? At the very least we should be able to get some agreement that it could make its subjects worse people—by compelling them to engage in vicious acts, for example. And if it can make them worse it can make them better. Shouldn’t it be doing that, then? At this point in the conversation, the liberal, at any point on the spectrum, from libertarian or American conservative to Antifa communist, is hurling nasty epithets at you. You are getting called totalitarian, authoritarian, socialist, communist; you are being told that “this” didn’t work in a long list of places, you are being asked “who are you to say…,” etc. In other words, you are being dragged into a LARPing of WWII and the Cold War. Not a single liberal has ever genuinely scrubbed the idea of the state as the “night watchman” out of his head, or failed to add all of the state enemies of the past century to the list of examples of violations of this norm.

In some recent posts I have laid the groundwork for addressing this issue. From my discussion of morality in “Fraud and Force,” we could say that responsible parties should establish institutions that extract from a situation in which some violent centering is possible the facts of the situation and the mode and degree of responsibility of all involved, freed from the logic of the vendetta. These institutions can be local, private, even “vigilante” ones—the point is not proceduralism, but disciplinary spaces where authoritative individuals are entrusted with the search for the truth, the best remedy, and the preservation of the peace and cohesion of the community—since these different aims can be at odds in some cases, those entrusted to arrive at coherent decisions are given a great deal of leeway—if they are to be judged, it is to be after the fact, and with an eye toward improving the system. Those entrusted, to be more precise, must be those willing to draw some of the violent centering toward themselves, if necessary. Furthermore, we can say that the government has an interest in every individual having opportunities to reduce the “meaning gap” I identified between “speaker’s meaning” and “sentence’s meaning,” between the way one sees oneself and all the ways one is seen by others. “Does this mean the government must guarantee everyone a meaningful life?” interjects the fuming liberal. First of all, it means the government wishes to see the entirety of the social order bound up in disciplinary spaces, or what Alasdair MacIntyre calls “practices,” in which forms of human excellence are made possible, even created, by constrained, systematic and cooperative activities.
Finally, what binds these moral and ethical imperatives together is the existence, discussed in “Way, Way, After Sacral Kingship,” of a similar gap in the issuance of any imperative—a gap between the imperative issued and the imperative obeyed and enacted, always under conditions not completely accounted for by the “imperator.”

Moral, ethical governance involves protecting existing practices and disciplines, helping them to become more practice-like and disciplinary, and providing the conditions for other activities to become practices and disciplines. The simplest way of doing this is by establishing constraints to be adhered to by any organization or institution incorporated by the government, which is to say any organization or institution. These constraints would range from institution-specific to society-wide; from purposeful and efficiency-oriented to aesthetic and even arbitrary. All buildings, or all buildings of a certain kind, might be required to include a particular design; how they have to include it might be loosely or tightly prescribed (some may have it prominently exposed, others may hide it in some corner). Of course, similar to the safety regulations in modern societies, all buildings might be required to be prepared for fires or other emergencies, but it’s important that “regulation” not be solely functional. Some constraints need to be devised through collaboration with representatives of the institution itself, but others must solicit the contributions of surrounding and interconnected institutions, while yet others must be the government imposing the stamp of the social order itself on all institutions. There has to be a game-like or play-like structure to these meta-constraints, because otherwise the government is reduced to liberal utilitarianism, leaving itself to be assailed constantly for providing less than maximum happiness for the greater number, rather than interwoven into the entire social fabric as the guiding thread.

Rather than public discourse getting obsessed with rights, needs, inequalities, inequities, etc., it would always be framed by discussions of the state of constraint. More important than whether injustices are allowed in or committed by a particular institution is addressing the anomalous nature of rule. Any system must have its anomalies, because any system must have some element within it that is simultaneously outside of it and therefore can’t be completely assimilated to the terms of the system itself. More precisely, the founding, or chartering, element of the system is anomalous insofar as it judges everything else within the system but can itself be judged, indeed, is judged, at least tacitly, by every action taken within that system. This is the permanent anomaly of any system—the incommensurability of responsibility and power that can be minimized (the discipline and practice of government is concerned with nothing more than minimizing it) but can never be abolished. You can’t know whether someone can do something until he tries to do it, and once he gets started it will become something at least a bit different than he set out to do. The purpose of constraints is to “thematize” the anomalous inside/outside position of the one in charge, so that any judgment assumes he is intrinsically part of the game, and not an imposition on some spontaneous order.

This anomaly pervades all systems, right up to the top, and it must be faced, because any attempt to abolish this anomaly will merely be an attempt to conceal it under some procedural “plug.” So, there must be an inviolability granted to those in charge—to put it simply, someone charged with doing or running some collaborative effort must be given every benefit of the doubt by everyone he must ask to help him do it. This may mean making the wildest excuses for the most evident failures, anticipating failures based on the ruler’s previous performances and trying to prevent and minimize them in advance without attempting to take any credit that would reflect discredit on the ruler. Any possible judgment is displaced onto the implementation of imperatives in such a way as to reconcile whenever necessary their authoritative source with their benefit to the institution. The supreme ruler, we may assume, has agents in every institution providing him with accurate information regarding the performance of his subordinates; moral and ethical participation in an institution means being simultaneously ready to become such an agent upon request and never acting like one unless requested. To arrogate to oneself such a position is to point fingers outside of any established framework, i.e., it is to violently centralize others; it is also to try and control the meanings one gives off, rather than allowing one’s practice to speak for itself, to ramify among the responses of others. In other words, it is immoral and unethical upon the terms laid out in “Fraud and Force.”

Installing this inviolability confronts what may be the most pernicious and tenacious element of modernity. Rene Girard, in his account of mimetic desire, distinguishes between “external” and “internal” mediation. In external mediation, the model one (along with others) imitates is outside of the system, and therefore beyond reach of any rivalrous claims. Obvious examples here would be gods and kings, but it would include any model separated by a formalized distinction from his imitators, such as a member of a higher class or caste—one peasant cannot compete with another to become a noble. With internal mediation, our models are not fundamentally different from ourselves, which means there are no limits to competition, and no established models that could put an end to it. Girard argues that with modernity, external mediation has been completely replaced by internal. My argument here implies the need, in the face of deeply entrenched commitments to internal mediation, to restore external mediation. The responsibility of the ruler, sane, moral and ethical government, requires placing certain people, in certain positions, beyond criticism. (In an interesting discussion, in the wake of the Michael Jackson trial which, following the O.J. Simpson trial, raised the question of whether it was possible to convict a celebrity of a crime, Eric Gans contended that the modern celebrity system serves a purpose similar to external mediation—but in democratic, liberal market terms, that can only have whatever effectiveness it does precisely because celebrities aren’t really responsible for anything.)

What this renewed external mediation might look like must remain an open question. It’s hard to imagine saying that this individual, who is now being appointed school principal, cannot be criticized—even though, yesterday, when he was a teacher, he could be mocked ruthlessly, and once he steps down he can be endlessly attacked for his decisions as principal. (In highly functional organizations and enterprises, things do work this way, so it may be possible to have a social order as a whole do so as well, eventually—but even then we could never assume this to be the case once and for all.) In other words, some kind of aura would presumably have to surround the individual preceding and following his tenure in a particular position. That is, the recreation of external mediation seems to imply the establishment of something like permanent class or status distinctions. The model of rule I am proposing is to a great extent a “team” model—someone who wants to get something done appoints someone he knows is best able to do it, and that person in turn assembles a team of the best he can find—who will be absolutely loyal to him because they also want the job done and they know he is the best person to lead them; while he, in turn, knows that since they are “on board,” he barely has to issue them orders, much less boss them around. All organizations and institutions approximate this model as best they can. Now, families are also teams, as are neighborhoods and towns, and these kinds of teams, if they attain leadership positions, will make it part of the game to continue to deserve it. We have never, in fact, had any kind of society in which family names didn’t mean something, and that’s in large part because in any society families can and do invest a great deal of energy into ensuring they do.
A genuine aristocracy would probably have to be landed, but an absolutist order would have to at least be at ease with informal aristocracies, which get formalized in various provisional ways (the ruler might give a particular family firm an established position in some industry, or establish a university under the aegis of an especially accomplished academic family). Members of such families would then have a kind of penumbra of inviolability, a benefit of the doubt, before entering the specific positions where they will really need it. Needless to say, preserving the positions of such families once they have entered decline would allow the institution in question to be pervaded by practices and disciplines incompatible with its own. In the end, we can never outrun the anomaly, or the paradox of power, but it can be made generative: the study of the temporality of imperatives (at what point have they actually been obeyed or disobeyed?) feeds back to those issuing imperatives, helping them to defer, hopefully indefinitely, the becoming-crisis of the anomaly.

August 28, 2018

Money and Capital as Media and Power

Filed under: GA — adam @ 9:06 am

Let’s begin with some of what we know about money from Richard Seaford’s Money and the Early Greek Mind and David Graeber’s Debt: The First 5,000 Years: it doesn’t emerge out of barter, through a gradual selection of one particularly apt commodity to serve as a universally exchangeable object. Money is introduced by the state, or the ruler: according to Seaford, as a way of replacing a more archaic and egalitarian form of sacrifice, where all members of the community were present on the scene, with one in which sacrifice is organized by a warrior “Big Man” (although Seaford doesn’t use the term) who distributes the offering according to merit (as judged, of course, by the Big Man or chief); according to Graeber, in order to pay soldiers in wars of conquest where the soldiers are away from home and need to supply themselves where they are stationed—this, for Graeber, is also the origin of markets, which emerge in order to supply the soldiery. I would give Seaford’s account primacy, because he is further accounting for how the Greeks became the first fully monetized society, and he also shows how the Greeks systematically connected money to the instability of power relations. Of course, money could have been introduced both ways, and other ways, in differing times and places. What we could say at this point, though, is that money displaces the sacrificial scene, but without abolishing the center-margin configuration constitutive of that or any scene. Insofar as money is established and controlled by the ruler, and so is the market wherein money circulates, we could say that a sum of money represents access to a piece of the center, whatever that entails in a particular place and time. At the same time, we can see that money is certainly a significant factor in undermining sacral kingship, because it creates centers of power and competencies that a sacral king cannot control. Along with the circulation of money comes the circulation of power.

But Seaford and Graeber also show that money generates new ways of seeing and thinking, which is to say, it is a medium, every bit as much as writing or electronic communication—as has often been noted, money is a sign system. Seaford shows how metaphysical binaries like ideal and material, mind and nature, universal and particular, individual and society, and others, are products of the form of money (he is of course dependent on the history of reflections on these matters within Marxist theory). Graeber, meanwhile, shows how debt, which comes into being along with money and markets, and is in fact central to their creation, generates entire theologies regarding the relation between humans and the gods, or God. This is a good time to point out that, as linguistic beings, we are always within some medium, so it’s not as if the media distort some natural, non-mediated perspective—in fact, one could say that the very notion of a natural, non-mediated perspective is a product of money, which creates the possibility of thinking about an individual directly confronted with “nature.” Within, or perhaps “tilted” towards, one medium we can see and say something about our relation to others. Power, of course, would be another medium, with the central power a sign of the center. Money, we could say, clouds our view of the medium of power, because it represents power as something to be bought and sold and as a way of utilizing others and oneself for some immediate advantage. Power, meanwhile, interferes with the way money would “like” to “present itself”: rather than a representation of the intrinsic properties of its possessor, through the medium of power we can see money as commanding a certain “portion” of the center (and hence as reducing the center to “portions”).

Capital, at the very least, is something that makes it possible to produce other things. But while it may be possible to project the concept back to early ages (the native’s spear is capital, etc.), the concept really has its meaning in a social order in which the “economy” has been removed from ritual and political control and is therefore governed by the power of money. Capital is most fundamentally mobile and therefore expressed through money—the most advanced technology is useless without the power of money to command labor, knowledge, resources, supply chains and end consumers. Within capital as a medium, all human activity is homogeneous and exchangeable, while any particular activity presents itself as subject to never-ending growth. I would say that if money represents power over a piece of the center, capital represents power over the disciplines. Once capital acquired the capacity to outlast labor in any class battle, it also acquired a power independent of either state established discipline or traditional social orders—indeed, the prerogatives of all such became impediments to this new power. And once capital acquired the power to remove itself from one community and move to another, and then from one country to another, it acquired powers that didn’t quite eliminate those of states but certainly penetrated deeply into state power. As theorists like Baudrillard (and many others) pointed out, everything must prove itself before capital, everything must show its usefulness and exchangeability. The sense of one’s own body as a set of parts that might be replaced, repaired, sold, or junked, which is implicit both in much “end of life” discussion and in AI fantasies, comes very easily within the medium of capital.

Capital is so all-embracing and all-penetrating that it’s hard to imagine what it would mean to think outside of, or beyond, it. For Marx, this is what the proletariat was for. The Italian Marxism pioneered by Antonio Negri, which has branched out and continues strongly in Europe, at least, and which also informed the anti-globalization and “Occupy” movements (look through a book list of Autonomedia or Semiotext[e]), transfers this revolutionary power to the “social labor” exercised by an increasingly highly trained and intellectual working class. This collectivized, global social labor overcomes all attempts to confine it within capital’s boundaries or those of the state—some future anarchistic order is more or less explicitly prophesied here, with “self-management” as the highest value. More important to me than the political prospects is the mediumistic claim, i.e., the claim to be able to see through capital and not fall prey to its machinations. The “hermeneutics of suspicion” here trains its vision on the way everything presented as a “value” by the capitalist order is in fact a stratagem for disciplining, confining, and controlling social labor. As Deleuze pointed out in his “Postscript on the Societies of Control,” all this has also been autonomized, an argument that the algorithms of Google, Facebook and Twitter make very visible. There has been much news lately about how these companies have been manipulating these algorithms to suppress right-wing perspectives; while obviously true, from the “autonomista” approach, this is beside the point and not really necessary. Once these companies overcome this momentary panic, the argument will go, they will see that letting the algorithms work on their own will provide all the social control necessary for capital.

The problem for the autonomistas is that without the arbitrary assumption of some inherently free laboring subject, it would be hard for them to say what, exactly, is wrong with the algorithmic order. Not that it’s so much easier for anyone else, once all naturalistic conceptions of freedom are set aside. A good ruler would want as much information as possible about his people; he would want such information gathered, compiled and analyzed by competent and trustworthy sources; and he would want such information to be put to use to anticipate future possibilities and pre-empt potential problems. Why wouldn’t he want this, and why wouldn’t his subjects want this, other, again, than in the name of some fetishized notion of freedom? The algorithm is the best way of doing this right now. But rather than some temporary, subjectivizing distortion of the algorithmic medium, the jiggering of algorithms by the big tech companies demonstrates that the setting of the algorithms is never automatic or neutral. That good ruler would have to determine, or have determined, what counts as “information.” Within the algorithmic medium each individual is thinking about how one action or even thought serves as an indicator of the relative probabilities of other actions and thoughts. The main source of income for the social media corporations is advertising, and what advertisers want is knowledge of how likely someone interested in one thing is to buy another thing—a detailed profile of consumer habits is immensely valuable. So, that’s going to be one vector of algorithmism. The question for the algorithms constructed by the social media companies, though, is a bit different—they want to keep you within their system, and the way to do that is to make the system consistent on the terms on which you enter it. 
Even when we’re obsessed with buying things, we don’t really see ourselves as “consumers”—rather, we see ourselves as interested in certain things—sports, and specific kinds of sports, specific discussions about sports; books and intellectual “ideas” and “topics,” and specific discussions within these spheres; family, friends, members of our social group; and so on. The social media company wants to help us find our way from something we are interested in to something we might be interested in, and out of that network can be carved various consumer profiles useful to other companies.
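The movement from “something we are interested in” to “something we might be interested in” can be sketched as a simple co-occurrence calculation. A minimal sketch, with entirely made-up interest sets; the user data and the `p_next` helper are illustrative assumptions, not any company’s actual algorithm:

```python
from collections import Counter
from itertools import combinations

# Hypothetical users: each is just the set of things they follow.
histories = [
    {"sports", "grilling", "trucks"},
    {"sports", "trucks"},
    {"books", "sports"},
    {"books", "ideas"},
]

single = Counter()  # how many users follow each interest
pair = Counter()    # how many users follow each pair of interests
for h in histories:
    single.update(h)
    pair.update(frozenset(p) for p in combinations(sorted(h), 2))

def p_next(interest_a, interest_b):
    """Estimate P(user follows interest_b | user follows interest_a)."""
    return pair[frozenset((interest_a, interest_b))] / single[interest_a]

# The "consumer profile" sold to advertisers is, in effect, this table
# of conditional probabilities carved out of the interest network.
print(p_next("sports", "trucks"))  # 2 of the 3 sports users also follow trucks
```

The point of the sketch is only that nothing in the table refers to “consumers” as such: it is built entirely out of the interests on whose terms users entered the system.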

So, the algorithmic is the way this “stage” of capital leverages the disciplines for its own purposes. But, to use another Marxist term (it’s not my fault if no one has given capital as a whole way of life, a medium, as much thought as the Marxists), capital must grant the disciplines some “relative autonomy” in doing so. It must allow us to pursue our interests if only in order to capitalize on those interests; within the more paranoiac streams of “oppositional” thought we could imagine that capital has “always already” channeled those interests in ways guaranteed to flow back to capital in full, but how could capital know how to do that without granting its knowers some leeway in the first place? Someone must plug the variables into the algorithm. Now, liberalism can only accelerate capital’s “logic” by trying to access some level of freedom yet unpenetrated by capital. If the medium of capital can be interfered with, it will be through the power medium, first of all by pointing to capital as a power, or a network of powers, rather than an amorphous monster. Power is more of a retardant than an accelerant. Working to see every decision you make as commanded might seem terribly constraining and oppressive, but you’re the one working to see it that way, and it is at least more truthful than seeing everything you do as a result of your unconstrained will. Capital really does attempt the ultimate decentering, but it cannot accomplish it—if it were to “succeed,” it would produce catastrophe, requiring the re-establishment of order (if still possible) from the remains.

If I say something, I mean what I say, but what I say has a meaning beyond what I myself mean. The modern subject, or subject of capital, wants to control the meaning of what he says by making it correspond to what he means. We still see this all the time—as soon as something one says gets out of that person’s control there is a furious reaction, whether it be denouncing those who “distorted” what he “really” said (taking it “out of context”) or apologizing, reframing, “walking back,” etc., so that what I mean can be revised to conform to what I turned out to have meant. It’s incredibly hard to let go of one’s meaning, because that is all that protects one from complete subsumption in the machinery of capital. But the very idea of a meaning fully intended by the speaker or writer is itself a product of money and capital—it is within their media (one is thinking in terms of copyright). Self-control, or discipline, is central, but desperate attempts to claim one’s own meaning subvert it. The distinction between what David Olson calls “speaker’s meaning,” on the one hand, and “sentence’s meaning,” on the other (drawing upon Frege’s distinction between “meaning” and “sense”), can be played out otherwise. My “own” meaning is in fact the probabilistic range of all the meanings my sentences, my discourse, might have in one “context” or medium after another, with some of the contexts and media registering meanings constructed in previous contexts and media, and so on; and it iterates previous sentences and discourses (said by others as well as myself), with their entire “range.” What “I” want is not so much for others to hear from me as for all of us to hear from the center.
We could imagine an “average” of all the possible meanings of what one has said, but we can also imagine a centering of them: if we’re all focused on the “same” meaning, then that meaning has been detached from any of us and we’re trying to figure out what the center is saying through us; we are obeying the imperative to derive further meaning from the center. We keep showing differences between speaker meaning and sentence meaning, between the speaker meaning and the sentence meaning of the one who shows the difference, and so on. The center speaks through these differences: the more what any “I” says generates a range of meaning different from that “I”’s, the more what that “I” says is the discourse of the center. The disciplinary space that can singularize any speaker’s meaning while treating it as a product of all the ways it has been taken up is the discipline training itself to listen to the center. The discipline of the discourse of the center sustains a medium irreducible to capital, and it is within this medium that the power medium can be seen as distinguished from capital as well.

August 21, 2018

Fraud and Force

Filed under: GA — adam @ 7:24 pm

We can consider the emergence of the Big Man out of the primitive egalitarian community as the beginning of civilization. With civilization comes the placing of some individual at the center of the community, as the source of power (this could just as easily be described as some individual appropriating the center). A new moral order is thereby initiated. With the central object, prey animal, ancestor/icon at the center, the overriding moral principle is precisely preventing anyone from seizing the center—the ritual means of distributing food, mates and other goods follows from that imperative. Once a human occupies the center, that human can be held responsible for everything attributed to the center, which is everything required for the well-being of the community. Sacrificial morality involves adhering to the rules surrounding the worship and eventual sacrifice of the central figure. These rules are already a deferral of the immediate killing of the central figure as soon as some failure in his mediation of the cosmos for his people is revealed. The most moral one can be in the sacrificial community is to increase this space of deferral, by attributing as much of the responsibility as possible to the ruler for actions one might imagine he could actually have carried out otherwise, or left undone. But under sacrificial conditions there is no way of consistently isolating which actions might fall into this category.

Post-sacrificial civilization (accomplished via the Judaic, Biblical, and Christian revelations in the West and otherwise—via Buddhism, Taoism, Confucianism, etc., elsewhere) is the ongoing effort to bring that category into focus. Once the individual occupying the center can be blamed for anything that goes wrong (because everything, good or bad, comes from the center), then any individual, occupying any central position, can also be blamed equally indiscriminately. And the erection of one center leads to the proliferation of other, orbiting centers, so the resistance to potentially unlimited scapegoating becomes the moral problem. How is such resistance possible, and how could the “immunity” in question be built up? There are two ways, which are not mutually exclusive and can even support each other, but one of which will nevertheless be dominant in a given case. The first way is the insight gathered by rulers that perpetual scapegoating, as well as its more organized form in the vendetta, is socially destructive and a threat to the sovereign itself, and must be suppressed. Part of the suppression involves the inculcation of self-control, which means refraining from acting upon your resentment in other than socially approved ways, and also means constructing the means of social approval, i.e., some kind of justice system, that will ensure such restraint has the desired effect. Here we get, I assume, the Confucian (for example) model of the wise man, who respects authority, doesn’t act upon impulse, pursues moderation, places the family and tradition first, and so on. Such a man will defer to the authorities and not act out vengefully. But there is some question as to whether such resistance to scapegoating is more than just “pragmatic,” with the resentment being enacted in some ritual or aesthetic manner.

The other way, which is more transformative but also leads to more vulnerabilities, is modeled by what Gans, following Girard fairly closely here, calls the “Christian revelation.” The Christian revelation confronts each one of us with the bad faith implicit in our impulse to scapegoat, to pile unlimited responsibility on some other placed at the center. Jesus proclaims a universal moral reciprocity, retrieving the symmetry of the originary scene, but this time in a way that forces constant confrontation with sacrificial institutions, even the more deferred, mediated and “symbolic” sacrificial culture of Rabbinic Judaism from which Jesus emerged, and from which he adopted and adapted the call for reciprocity. The center demands, first of all, refraining from violence, including revenge, against your neighbor; but sacrificial religions and polities displace this violence by establishing the proper form and rationale of sacrificial violence, and they can’t tolerate the exposure of the emptiness of those rationales. It is for this reason that Jesus is sacrificed, i.e., in the name of preserving sacrificial violence and the institutions and resentments predicated upon it. Since Jesus has done nothing wrong, and in a sense nothing at all other than expose these institutions and resentments, our universal complicity in his murder reveals our own implication in sacrificial violence. Any time we find ourselves starting to put someone at the center, then, a move which always implies the possibility of a violent outcome, we are to question our sacrificial investments in doing so. Since we must put individuals in the center, we must continually disinvest our resentments in the process, and reduce centrality to the barest necessity; the construction of institutions and culture is all directed towards identifying, tagging, studying those sacrificial investments and building regulated forms of interaction that systematize this moral imperative. 
This moral form is more transformative, because it is reflected back to us in all our engagements with the other, and not just in our acknowledgement of authority; it is more vulnerable because, while not incompatible with authority, the particular form taken by our restraint of the tendency to violently centralize the other can never be set once and for all. We can always identify yet a further, previously unnoticed, incitement to sacrificial resentment and, even more important, can always find grounds for condemning authorities for not protecting the victims of that resentment.

All of this is really by way of review. The further analytical step I want to take here is to explore a substantive ethical account to supplement these post-sacrificial moral forms. Morality involves the “thou shalt nots,” open-ended imperatives, what Gans in The Origin of Language calls the “operator of interdiction.” Ethics concerns self-shaping, bringing one’s actions, and therefore one’s intellectual and emotional prompts to action, into a hierarchical order directed towards a center. No sustainable ethics can be immoral, but morality can’t dictate the content of ethics. There are a lot of different ways, corresponding to different historical situations and individual capacities, of restraining one’s sacrificial resentments. For some people basic self-control, the reminder that they will be “bad people” if they commit certain transgressions, may be enough. For others, the imperative to refrain from sacrifice includes nothing less than world-building. It is also the case that ethical failure, the confirmed sense that one has fallen short of the model one has generated or adopted for oneself, that is, the inescapable feeling of being a fraud, is a prominent, I am even tempted to say the only, source of lapses into immorality.

In a ritualistic culture, one cannot be a fraud—one fulfills or violates what is required. Before we can speak of fraud, we have to have disciplines. Someone purporting to be a doctor, lawyer, banker, investor, soldier, teacher, etc., can be a fraud. All these professions are products of literacy (even soldiers who can’t read and write are part of a literate culture, which makes their discipline and method possible). The authenticity of one’s professionalism, or participation in a discipline, then, depends upon one’s relation to the metalinguistics of literacy. To review: writing, presenting itself as reported speech, supplements the elements of the speech act that are lexically inaccessible (tone, body language, etc.). The proliferation of metalinguistic terms supplementing primes like “think,” “say,” “see,” “feel,” “know,” “want,” and “do” follows. And then the nominalization of those supplementing terms. The imperative of the written text, codified in “classic prose,” is to “saturate” the speech scene, to place the reader there with the writer in his imagination. This imperative to saturate the scene is the source of the easily ridiculed and despised “jargon” so prevalent in the disciplines—the psychologist can’t simply say his patient “thinks” and “feels” certain things—those feelings and thoughts need to be made more precise (their scenic preconditions made explicit) and they need to be given a location (e.g., in the “unconscious”).

The question for the disciplines, then, is what particular thoughts, feelings, etc. (even to nominalize “think” and “feel” is to situate us within a discipline) mean. To say what something means is to refer it to something else, which is to make it less than “prime” and auto-intelligible. Wierzbicka doesn’t find “mean” or “meaning” among the primes; Olson does, interestingly, locate it in the pre-literate English vocabulary he compiles, along with a list of post-literacy equivalents and supplements, in his The World on Paper. “Mean” is a borderline word/concept, then (the Online Etymological Dictionary seems to give it virtually “prime” status, as it doesn’t present it as emerging from a metaphorical transformation of another word). Olson himself may provide the explanation for this in The Mind on Paper, when he points out that literacy introduces the distinction between “speaker’s meaning” and “sentence’s meaning,” which is itself rooted in an older distinction between “say” and “mean.” Once language, through writing, becomes an object of inquiry, words, sentences and the grammatical rules that get us from one to the other are objectified and standardized, which means that we can judge what an individual speaker says against those standards. So “meaning,” which first marks the observation that something is concealed by the speaker, comes to refer to what is concealed from the speaker in his own speaking. There’s no reason both senses of the word couldn’t be represented by different words, so “mean” might be more “elemental” in some languages than others. “Mean” as a metalinguistic concept refers to the always existing discrepancy between speaker’s meaning and sentence’s meaning—there is always something in the words and sentences we utter that is irreducible to whatever we thought we were doing with them.

The imperative to saturate the scene constitutive of classic prose, then, is also an imperative to abolish the distance between speaker’s meaning and sentence’s meaning. Think about how much effort is put into avoiding misunderstandings, fending off misinterpretations, attacking “distortions” and “de-contextualizations” of one’s words. All this is an attempt to fold sentence meaning back into speaker meaning. This is the central ethical problem, because all sustainable self-shaping depends upon accepting and living within that distance: what you are for yourself can never be quite what you are to others, and we all need to find ways to have our representations of ourselves to ourselves be complementary to all the representations we “give off” to others. To refuse to accept that, whether by completely identifying oneself with the successive representations we give off, or (far more commonly) by trying to control one’s self-representations so as to rule out meanings other than one’s own, is to be a fraud. On the one hand, one is a Don Juan or con man; on the other hand, one is a bureaucrat of the self, or hypocrite. Either way, one is likely to assuage the sense of shame by assuming everyone else falls into the same category, in which case suspending all moral obligations to others is the sensible course. Violent resentment and projected accusation are directed towards whoever re-opens the difference within meaning.

A sustainable ethics would have to place speaker’s meaning in the midst of the multitude of actual and possible sentence meanings. We have a definition of “competence” and “virtue” (and perhaps “phronesis”) here—neither competence nor virtue is about who you “really” are, or about what you can induce others to believe about you. Both involve a kind of constant interplay in which one keeps refining one’s meaning by soliciting feedback from the ramifications of the meanings of one’s sentences. The first thing one is inclined to do upon being called, implicitly or explicitly, a fraud, at least if one suspects some truth in the accusation, is to lash out at the accuser, centralize him, and prep him for a symbolic lynch mob. Here is where the ethical problem slides into the moral one. In order to violently centralize the other you will have to “saturate” yourself—the other is “that” because you are “this.” Building and shaping oneself while and by refraining from such violence involves creating spaces that bring other speakers’ meanings into proximity with your sentence’s meaning. As others repeat, in different contexts, for differing purposes, your sentences (and you can of course join in as well), they keep exposing the distance between the two meanings, for themselves as well as for you. With this in mind, you would already write and therefore think differently, more hypothetically—if writing is always implicitly a record of speech (even speech one has with oneself), it makes more sense to explore the various settings in which that speech could have been uttered than to try to reproduce a full, present speech situation that is by definition absent. In that case, the distance is addressed from the beginning, not “patched up” afterwards. That distance, and the imperative to make it oscillatory and therefore a disciplinary space of inquiry, resisting the imperative to shut it down and resolve the discrepancy antagonistically, is the articulation of the moral and the ethical.

August 14, 2018

Narrative

Filed under: GA — adam @ 8:09 am

Like other formerly arcane theories that have now become part of everyday political discourse (e.g., the multiplicity of gender, the pervasiveness of implicit racism), “narratology” had a long incubation period in the academy. In this case, the “breakout” is a good thing, and the current “framing” of political rhetoric in terms of “narrative” has more promise than cruder concepts like “ideology,” “propaganda,” “manufacturing consensus,” and so on. The assertion that some event has to “fit the narrative” to be made visible recognizes both that all events are framed narratively and that narratives have “laws,” or at least constraints, governing them. There is still some room for improvement here, as the concept of “narrative” tends to be used fairly loosely—for example, “Republicans are racists” is not a narrative. It is, though, a character description, and character descriptions imply a range of narrative options. You can then go on to shape events in such a way that, at the end of the narrative, the “moral” will be that “Republicans are racist.” But there are more and less effective ways of doing this and there are more and less effective ways of constructing counter-narratives and infiltrating the dominant narrative with the counter ones.

Narratives, by definition, have beginnings, middles and ends. They have characters, or agents—usually in some hierarchy of importance (main character, supporting character, etc.). They have events: things happen. The things that happen propel the narrative forward. Narratives are generally set in motion by some problem, or conflict, and what keeps the narrative going are the attempts to solve the problem or resolve the conflict. The end of a narrative generally involves a solution or resolution; ongoing narratives sustained by the media posit, explicitly or implicitly, some resolution to which events are tending. If you want the narrative to sustain interest you introduce counter-agents who prevent the main agent from solving the problem—the closer the main agent comes to solving the problem, without quite doing so until the end, which is to say the more evenly matched the antagonists, the more compelling the narrative. Simplistic narratives are set up in terms of a good vs. evil conflict: we root for the good guy against a powerful bad guy—to keep things interesting, the bad guy has an advantage precisely because he is bad and is willing to do things the good guy won’t. The interest in such a narrative is in the revelation of the resources of goodness—being good must, in the end, provide some advantage that makes a successful resolution possible. Meanwhile, more complex narratives make the evaluation of the antagonists subtler and ambiguous—the good guy carries out actions that make him not so unequivocally good, while we are shown things about the bad guy that qualify our condemnation. Good and evil might switch sides, or the distinction be completely blurred.

This is all simple and obvious enough but it’s simple and obvious enough because narrative is the primary way of exploring and representing mimetic desire. Whatever kinds of “communication” can be attributed to animals, what is certain is that they don’t tell each other stories. Hitchcock’s dismissive reference to the goal sought by the protagonist as the “MacGuffin” is correct, because the object is less important than the structure of rivalry itself. I think everyone has had the experience of choosing a side, in politics or any other form of competition, for what seems like a good, justifiable, limited, reason, and then finding that the act of choosing sides and engaging in the competition itself generated goals that seem urgent but would not have even seemed important without that initial act of taking sides. A narrative “hooks” us by getting us to take sides, to see the agent’s actions and goals as our own. But, looked at this way, narratives generate delusions by inflaming and providing new pretexts for our mimetic desires and resentments. We can easily see how this is the case with political narratives, where people can find themselves convinced that the future of the republic depends on whether some tax bill passes, or an executive order is overturned.

If we don’t want to just get jerked around, then, that is, become bit players in someone else’s narrative (someone much richer, more powerful and in the know than us), we need to be able to resist the narrative structures imposed on us. Hopefully, no one who has read a few of my posts will be surprised when I reject what might seem the obvious solution: don’t think narratively; think “logically,” or “analytically” instead. There are, indeed, on some authoritative accounts, these two kinds of thinking: narrative vs. abstract. So, if I’m thinking abstractly and probabilistically, I can see that this tax bill or executive action will have specific effects, some of which I can anticipate within a certain range of predictability, others within a wider range, others not at all—in fact, it will probably have all kinds of effects I can’t even imagine. And they may be very small and irrelevant ones. This is all fine, but when we think abstractly we’re not really doing anything more than widening the narrative field upon which we work. The one who takes the apocalyptic view of the tax bill does so because he sees the possibility of some evil agent (“the rich” or “the bureaucracy”) being dealt a fatal blow (or sees the “little guy” or “private initiative” as the one being dealt that blow). The more abstract approach just means we get more discerning about whom we designate agents—there are lots of different rich people, and some of them might be dealt blows while others might find new opportunities to get richer as a result of the bill; indeed, if we back up and take a wider view of the protagonists in our narrative, perhaps some of those initiating the bill are quite rich.
When we think abstractly, the one big narrative is broken down into lots of little narratives, all of them interfering with the others—in narrative terms, a main character in one narrative is a sidekick in another, the good character in one narrative is evil in another, there are narratives within narratives, and so on. (Even if we try to work with “pure data,” how do we determine what we are to gather data on, if not the figures in some narrative we are constructing?)

All of which means that the way to resist narrative, or disable the delusional investments in narrative that help make one a dispensable extra (like those guys who get vaporized in the first scene of the old Star Trek show), is not to try and get out of the narrative but to have other ways of getting into it. (Indeed, trying to get out of it immediately generates a narrative logic of its own—trying to escape the clutches of the evil dominant ideology, etc.) As I’ve been doing in recent posts, I’m, to some extent, giving a more abstract formulation to what lots of people are already doing. So, for example, the writers on the Power Line blog have a kind of running gag where they point out references in the Minnesota media to “Minnesota men” who commit some kind of crime or are arrested for some terrorist plot. Invariably, the “Minnesota man” is a Somali Muslim immigrant, who, indeed, most likely has a Minnesota address, driver’s license, etc.—but that’s not what the headline means to suggest. The website VDARE plays a similar kind of “find the hidden immigrant” game in media references to criminal activities. What they are doing is interfering with the narrative by looking a little more carefully at how the main character is constructed. The mainstream media outlets want to control who gets to be the good guys and the bad guys by proxy. The point of having a more general formulation of these practices is, of course, to make them more readily replicable.

Self-referential narrative strategies have been more widely exploited in modernist and postmodernist literature than previously, but such strategies go way back (e.g., the 18th-century British novel Tristram Shandy), probably back to the beginnings of narrative itself, because they exploit such an obvious feature of narrative—the fact that telling a story, and, even more, creating a story, takes on a narrative structure itself. Such metafictional strategies provide what is probably the most comprehensive way of engaging politics narratively without simply accepting the terms of another’s narrative. Again, part of what I’m doing here is bringing more abstract theory to bear on what has become a fairly common memeing strategy. To point out that reporter X is referring to the criminal as, say, a “Texan” rather than a “Mexican” in order to manipulate the reader is to compose a meta-narrative in which the reporter is playing a part. It’s better to have your enemy in your narrative than them having you in theirs. And once they are in your narrative, all kinds of narrative and “generic” possibilities open up: you can provide a hypothetical “back story” to the “moves” you show X to be making. You can suggest possible satiric outcomes, point to various dead ends this storyline “typically” leads to, “intercut” other popular narratives and narrative clichés, and so on. You can get more abstract and stretch out further narrative lines in the past and projected into the future—X is really a “puppet” in some larger historical narrative. And you are yourself now in the narrative, giving you a kind of pedagogical responsibility—you are showing your reader, here’s how you do this, and then you might try that, and you can invite your reader to join you in some new storylines as well. You may even start to think about ways to turn your narratives into edifying performance art, like Pax Dickinson’s spectacular trolling of reporter Amanda Robb.
We could even say that the winning side, politically, is the one that keeps the other side in its narrative.

We all have, at some level of generality and provisionality, what we take to be an "end game" of our own practices: if pressed, each of us could say, more or less vaguely or hesitantly, "this is where I want things to end up." Of course, the ending up would be the beginning of a new narrative. But the point here is that even if abstract thinking and meta-narrative interference tend to multiply the narrative lines, we still have "grand narratives" that we see working themselves out historically. So, what is the relation between the two narrative levels? It's really a question of the relation between probability and reality: we can identify a series of possible paths from A to B and give each of them a probability (path 1 = 15%, path 2 = 30%, path 3 = 1%, and so on). We do this regularly even without attaching numerical values: there's a slight chance that this idea will get me fired, but I feel really good about the possibility that it will get me a promotion, etc. One of the paths will become the real one, of course, and sometimes it is a very "unlikely" one; maybe the guy will get canned (of course, we might have been wrong about its being unlikely, but does the fact that it happened prove that it wasn't, in fact, unlikely?). (Point B could be the same endpoint, e.g., the many different ways one side in a war might win, or a set endpoint we are trying to predict, like what US demographics will look like in 2040.) All the micro-narratives we generate by acting meta-narratively are the "paths." Enacting the various paths as richly as possible, while also allowing the narrative materials to crystallize into highly unlikely paths, ones you couldn't have imagined without opening things up meta-narratively: that's the way we surface, test, and refine the "grand" or "master" narrative that we always have going, that is always guiding us, even if tacitly, in the way it points us toward designating certain agents, noticing certain actions, being alert to certain conflicts, etc.

Narrative does have its limit, even if that limit isn’t abstract thinking. That limit is the present. Everything that has happened in the past is past because it has led up to now, where its meaning is revealed to us in a certain way; everything that is going to happen in the future will happen in now’s future, and every future we project narratively is a construct of the narrator’s relation to everyone else now. We see, or imagine we see, things finishing up, things gaining momentum, things slowing down, things starting to emerge, right now. We can see this vast, sprawling tableau of the present insofar as we carry out acts of deferral, stepping outside of whatever narrative commands us to take a role right now. The beginning of one narrative is the middle of another and the end of yet another—in situating ourselves at that point we exempt ourselves, presently at least, from all of them. It’s like removing yourself from the force of a vortex by placing yourself at its center. Such presenting eventually gives way to resistance to the most malignant narrative one is able to resist, the one with the too-convenient bad guy, the too-predictable plot, the too-heroic good guy, the too-satisfying payoff, etc. Then you can work on constructing narratives that include the narrative of you placing your finger on the scales, which can itself be converted into you constructing and enacting the narrative of the center, which is the narrative of the ongoing exposure of all resentments that interfere with the order issuing from the center.
