August 3, 2020

The Model of Data

Filed under: GA — adam @ 8:19 am

In my latest post on self-appification, I proposed that algorithmic inquiry begins with a model event on one side, and actual events on the other side, with the subsequent inquiry identifying markers of the actual event as a sample of whatever the model event is a model of. The example I used there was “racism,” and the point is that no one thinks about “racism” in propositional terms and then tries to “apply” all the “features” of “racism” explicit and implicit in its “definition”—rather, one works with what we can call an “originary” event of “racism,” and then calls subsequent acts “racist” insofar as they are “similar to” the originary event. The originary event must be an event that “woke us up” to the phenomenon in question, or is retroactively posited as having done so (there’s no clear line between these two possibilities). I now want to explore this example, and bring it to the foreground as a way of engaging our present immersion in data without any nostalgia for a more “human” mode of being—which is to say, for a mode of the human modeled on earlier forms of media.

The very words used in political attacks make evident their reliance upon model events—“fascism,” “white supremacist,” “Nazi,” etc. Whatever “idea” the people issuing such epithets have of the model event in question, it is obvious that there is some historical referent behind them. The Holocaust, of course, in Eric Gans’s analysis (to which I’ve made my own contributions), is the ur-event of the modern victimary position. Godwin’s law parodies the rules of the game whereby whoever makes the most forceful identification of some current event with the Nazi genocide of the Jews “wins.” We can imagine a method for working out and developing an algorithm for determining the similarity of any modern event to the Holocaust. Is there a “player” who is unalterably opposed to the very existence of all those playing the role of a member of a group identified according to some immutable characteristic, and are there “bystanders” who allow the event to proceed because it’s “convenient” or because they don’t want to make trouble? The criteria for determining each of these roles, and the acts that count as inhabiting them, would have to be made explicit so that search instructions can be designed. Here, we have the perpetrator—but he’s not an obvious perpetrator, because maybe he’s doing things other than trying to destroy all the members of some other group, and maybe he’s not even doing that unambiguously at all. But we need to shape the scene, because this model event is all we have, so we need to identify markers that would allow us to establish his perpetrator status, and to construct the event in such a way that those markers are more predictive of his actions, and of the actions of those who would take him as a model, than are the statements and actions that would indicate other trajectories. And the other roles, along with the events actualizing those roles, would have to be similarly specified.
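To make the shape of such a procedure concrete, the marker-matching just described can be sketched as a weighted similarity score. This is a minimal sketch under invented assumptions: the role names, markers, and weights below are hypothetical placeholders of my own, not anything derived from actual data or from the argument itself.

```python
# Hypothetical sketch: score a candidate event's similarity to a model event
# by checking which of the model's role-markers it exhibits. The role names,
# markers, and weights are invented placeholders, not derived from any data.

MODEL_EVENT_MARKERS = {
    "perpetrator": {"declared_intent": 3.0, "organized_apparatus": 2.0},
    "victim_group": {"immutable_characteristic": 2.0},
    "bystander": {"knew_and_abstained": 1.0},
}

def similarity_score(candidate: dict) -> float:
    """Sum the weights of model-event markers observed in the candidate.

    `candidate` maps role names to the set of markers observed for that role.
    """
    score = 0.0
    for role, markers in MODEL_EVENT_MARKERS.items():
        observed = candidate.get(role, set())
        score += sum(w for m, w in markers.items() if m in observed)
    return score

event = {"perpetrator": {"declared_intent"}, "bystander": {"knew_and_abstained"}}
print(similarity_score(event))  # 4.0
```

A higher score marks a closer match to the model event; the weights encode which markers the inquiry treats as most telling, and revising them is one way of “thickening” or “thinning” the model.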

Now, my claim is not that victimary agents have constructed and follow such a method with such self-awareness. Obviously, like Beria, Stalin’s NKVD chief (a model event for me to work with), they’re looking for crimes to fit the people they want to eliminate. If you ask them to explain what makes this or that “racist,” they will rarely be able to tell you, and almost never convincingly. (This is why it’s a good idea to ask.) Framing their actions and agendas algorithmically, though, does two things: first, it helps us to display how automatic and programmed their actions are—it provides us with a satiric grammar; and, second, it allows us to construct hypothetical rule-following processes that help us intervene and interfere, where possible, with the transfer and translation of their words and actions. But beyond all that, we have here a method that serves our own purposes in remaking declarative culture so that it is directed towards filling the imperative gap rather than inventing outlandish “authorities” meant to generate imperatives that subvert the command structure.

A lot follows from the realization that we’re always working with model events rather than propositional “principles,” “beliefs,” “ideas” and quasi-mathematical representations of reality. Originary models, for one thing, give us something we can always talk about, and return to, in order to revise and extract more “data” from. We can always “thicken” or “thin out” the model event as necessary. We can test its plausibility, and make it more plausible when necessary—or use it to test various norms of plausibility. Take the “American founding,” for example. Like any historical event, it has given rise to many interpretations—it was based on certain principles, it advanced the interests of a particular class, it continued precedents set by earlier historical events, and so on. But how would any of these interpretations be pared down to an “event,” rather than some broader historical “process” or “idea”? Where did, say, the “emergent merchant class” as a coherent, intentional agent, do X or Y? How did the “desire for freedom” manifest itself in this or that meeting among revolutionary leaders? (What else manifested itself in that meeting?) We can disqualify any model that can’t be represented as an event, and while the questions I just posed are not merely rhetorical, representing those processes as events in which specific people do and say things would turn them into different originary models.

Here’s what can be represented as an event: the designing, in the composition of the US Constitution, of the executive branch with the knowledge that the first occupant of that branch would be George Washington. I’ve mentioned this in several places, and I would have to do a search to find where I might have come across this (intuitively obvious) claim of seemingly marginal importance, but taking this as our originary model of the American order has some interesting consequences. It would mean that, whatever else they were doing, the leading figures of the American revolution were watching and weighing one another and with a great deal of precision identifying those who performed best in the most important roles in the most trying times. It means they carried over these assessments into their thinking about a government structure, especially in the wake of the failure of the Articles of Confederation, which failed to create a central office that gave due weight to the most important qualities of human leadership. It also means that they thought of the forging of the Constitution not just as a formal document laying down rules for the passing and implementation of laws, the transition of power, the division of offices, and so on, but as a model to be filled in with certain kinds of characters. They hoped that Washington’s performance in office would shape the performance of future presidents. Washington was elected unanimously in the electoral college; perhaps the framers of the constitution hoped every president would be elected unanimously, through sheer and undisputed recognition of the superior quality of the man.

Of course, it didn’t quite work out that way—the “factionalist” disease of left and right was immediately imported from the French Revolution coming in the wake of the American (and, of course, it was really there already). But the design of the highest office with the greatest man in mind can, first, be constructed as an event—we can work with records of the thinking of the delegates to the Constitutional Convention, and we can easily imagine how they might have discussed amongst themselves making the match of man and office as close as possible. There’s no plausibility problem here. And we can use this model event as a measure of the defects of their design as they have been manifested throughout American history—but also as a measure of American strengths, almost all of which can be attributed to “energy in the executive” in one form or another. And it’s a good model—that is, a model of how things should be ordered—so, in measuring other events against it, we are thinking in an intrinsically normative way, and one we can make fairly transparent: government office should be shaped so as to amplify the highest forms of leadership. I think that if we devise an algorithm for oscillating back and forth between subsequent events and this originary one we would develop an increasingly rich critique of the American social order and one that would preserve everything great about it. Nor need we ignore everything else the founders were doing other than designing the office to match the man. Everything else they were doing was either tributary to or interfering with this, what we hypothesize to be their central project. In this way the originary model is an inexhaustible source of normatively molded data.

A large part of the power of models figuring an “exemplary victim” is precisely the plausibility and richness of the events they are drawn from, with Jesus on the cross, or Socrates sitting with his students waiting to drink his hemlock among the most obvious examples. Most events promoted by liberals and leftists to this day take these events (especially the crucified Jesus) as their model, as a brief look at the iconography emerging after George Floyd’s death will confirm. Martyrs have been central to anti-tyrannical political practices from the beginning, as such practices are only barely intelligible without them: a broad generalization about “police brutality” would get lost in the weeds of statistics, the vast diversity of situations in which police encounter civilians, the difficulties of working out the intent of people involved in any situation, and so on. A display of physical force ending in death dispenses with all that, once it’s inserted into the right spot within an algorithmic process matching that event with others. What we’re witnessing in such cases, as gets noted occasionally, is a subtle form of human sacrifice—where martyrs are needed, they will be produced (the whole point of Antifa riots—of terrorism in general—is to produce usable martyrs).

The aesthetico-ethical problem of the ve/orticist app, then, is to construct model events that can withstand scrutiny as to their particulars, and can, without denying that victims can be framed in any event, replace the victimary with the authoritative center as the data source. It is certain that a structure of centered ordinality can be extracted from any event, and the process of production of victims can be interfered with by pointing out that even in scenes focused on the victim someone had to construct that focus. In cases where the victimization is real in accord with widely accepted frames (that is, when we’re not simply dealing with a hoax), there is always someone, before or after the fact, who tried to close the breach that led to the act of victimization. These instances can provide extremely compelling narratives. The polemical counter to the extension of victimage from the more egregious to the more implicit (from the macro to the micro to the nano-aggression) is the construction of events of representation that enact a centered ordinality that points to a structure of centered ordinality in the very events adduced.

July 26, 2020

The Imperative Constant

Filed under: GA — adam @ 6:50 am

A practice, as I have been using the term, is doing something so that something happens as a result of what you have done. The better the practice, the more it is reduced to only those means that produce these results, and the more it can be ascertained that these results can be attributed completely and only to that practice. Since a given practice relies upon other practices in order to have the necessary means available, practices also convert other practices into auxiliary practices, or practices to which the practice in question is auxiliary. What I will be more explicit about now is that part of the practice is narrating the history, outcome and purpose of the practice, all of which derive from the limitations of other practices—this part of the practice can also, of course, be perfected. Since all of human life is composed of practices, articulated with each other in various ways, and at various stages of perfection, the theory of practice is the theory of society and culture.

The power of this theoretical approach can be demonstrated by the similarity between practices and rituals. A ritual also aims at producing a result that can be completely and reliably attributed to the shared act. The ritual arranges a gifting to the being at the center in such a way as to obey the imperative attached to the gift expected in return from the being at the center. A good ritual would be one that excludes all acts, thoughts or gestures that don’t contribute to the devotion to the center enacted by the ritual. The outcome of the ritual is, of course, uncertain—burning the choice chunk of meat to the god, or renewing one’s “membership” in the “clan” led by the antelope-god will not in itself make the upcoming hunt more successful. But if we keep in mind that rituals are collective acts, aimed at increasing mutual trust and cooperation amongst the congregants, it may very well be the case that a ritual can be deemed successful. This is especially so since the tightly scripted and choreographed ritual can be replicated in other activities, further enhancing effectivity—which is in turn enhanced through the retelling in legend and lore of hunting expeditions, wars, and other shared projects, through which the discovery and deployment of techniques become ritualized. Ritual and myth shape the “soul” so as to encourage imitation of the models considered most admirable in the community, which also is part of their “perfection.”

This similarity between ritual and practice means that we could think about post-sacrificial history as the imperative to participate in the ongoing conversion of rituals into practices. Without a shared sacrificial center, ritual cannot survive, and without ritual, myth becomes detached stories rather than communication with the founders of the community. So, there’s no way to return to a ritual order, but this doesn’t make the Enlightenment approach to “demythification” any less delusional and destructive. If a particular ritual/myth nexus is dismantled, it has to be replaced by an equivalent—the most historically important project of demythification, Christianity, understood this. If a particular ritual/myth nexus is not replaced by a higher form of sacrality and a new integration of ritual and “theology,” it will be replaced by lower forms of centralized violence, or scapegoating. What anyone says and “believes,” and their enactment of their priorities and commitments, is an account of their relation to the center, and that relation to the center must be revised, not “debunked.” And that goes for any of us, as the center is always taking on new “data” that changes our relation to it, making the narratives we rely on at least partially “mythical,” insofar as they rest (as so many contemporary deperfecting practices do) on fantasies of return to a shared sacrificial center.

The conversion of ritual into practice provides us with a practice of history. What is the relationship between what anyone says and does, on the one hand, and the expected vs. the actual outcome, on the other hand? This is always a very interesting topic of conversation! What, exactly, are you trying to do here? And, assuming you manage to do it, then what? These questions can never get old. Whatever is ritualistic and mythical in your practice is that which “serves” a particular figure in your narrative (the “free” man, the “anti-racist” man/woman, etc.) but can’t be shown to actually follow from your practice. We can perfect the practice of zeroing in on this ritualistic and mythical residue by oscillating between macro and micro frames. So, for example, you want a “more equal” society and so you go to this demonstration, hold up this sign, shout at this counter-demonstrator, argue with these less doctrinaire comrades, etc. What path do you see from doing all these things to a “more equal” (in what sense, measured how?) society? What other practices would need to be constructed so as to actualize that path? How would the construction of those practices follow from the practice you are currently engaged in? How would you know those practices when you see them? These are very good questions for anyone. At every point along the path, then, you construct hypothetical practices, keep perfecting them as practices by fitting them to other practices, and the chances are very good that the “path” ends up looking very different than was originally imagined.

The conversion of ritual into practice follows the imperative of the center to construct an iterable scene around an object. On the originary scene, a gesture must have been “perfected,” at least “sufficiently.” The “outcome” of the gesture was the repetition of the gesture on the part of the others in the group, as a “marker” of each member’s refraining from advancing unilaterally toward the center. All subsequent actions are to be coordinated, and any “unilateralism” is to lead to a distribution including all within the group. The gesture both says and does this. The construction of practices that identify and preempt violent centralization is identical to the construction of practices that transform the social order into reciprocally supporting practices. So, in trying to hear the imperative of the center, you take whatever command you do hear (from your boss, your parent, your priest, your conscience, your president…) and convert it into a practice that identifies, translates and where necessary discards whatever is ritualistic and mythical in it. This is what it means to resolve the ambiguities of any command by following that command back to an earlier, and then yet earlier one—each command you seek the traces of is ordering you to bring what you do and what happens into greater conformity, both with each other and with what others do and make happen. Like on the originary scene, the aim is a sign everyone can say is “the same.”

We can call this practice of converting ritual into practice the imperative constant, which makes all practices increasingly consonant with each other. Only an imperative from the center can make the perfection of practices more than a kind of professional scruple. Even the professional scruple presupposes a social order in which such scruples can be formulated and protected, and the resulting work properly distributed and appreciated. Wanting to become the best teacher, doctor, welder, landscaper, or writer I can be only makes sense if all of these skills are integrated into a community in which people can tell and value the difference between good and bad teaching, welding, and so on. And if they can’t, I can note that as a measure of social dysfunction or decline. I can’t possibly want anything other than the internal anomalies of the practice and its intersection with other practices to enter into my work on perfecting the practice. And this means I want the imperative from the center—to do nothing other than convert rituals into practices—to remain constant, for each member of society to be able to show any other member how he is doing this. From a total ritual society to a total disciplinary society—whatever we do is perfected so as to make the relation between doing it and this transformation happening more consistent and iterable.

This practice is a performative one—practices are always on display, even if in different ways and to differing extents for different “audiences.” And it is always moral, even when seemingly primarily or even completely technical. There is always resistance to the perfection of practice and relapse into more ritualistic practices and mythical narratives. Here is where we can locate such “mimetic structures” as desire, envy and resentment. The perfection of the practice always rubs up against existing habits, relations and hierarchies—it always threatens to shift existing relations to the center. Even under the most collaborative conditions, with a group of dedicated practitioners wholly in accord regarding the shared end, to propose some further perfection is always to appear a bit of a usurper of the center. Registering these appearances as they are distributed across the group, giving them representation, and converting them to proposals for distinctive contributions by those less central is itself a practice that one perfects.

All of the historical and conceptual material I’ve been generating and gathering—the “exemplary victim,” the “metalanguage of literacy,” more recently, the “ve/orticist app” and so on—should be used to bring about the conversion of rituals into practices. These are features of discourses to be surfaced and identified, with the language in which they are articulated subjected to translation practices. It is quite remarkable that it’s almost impossible to speak about politics without some victim of the other’s practices to gather around—the whys and the hows of victim selection and promotion and insertion into practices are always a productive site of attention (George Floyd died, therefore we…. What, exactly, and why?). The problem with the metapolitical concepts generated in opposition to “tyranny”—justice, freedom, equality, democracy, the republic, etc.—is that they are resistant to being converted into practices, which marks them as ritualistic and mythical.

At the same time, though, we can’t simply discard all these words and expect others to do so as well. They must be turned into transitional concepts as they are stripped of their mythical content and the victimologies through which they cohere subjected to the pressure of more perfect practices. New concepts derived from the “stack” and the data-driven algorithmically articulated reality are themselves meaningful as parts of practices that break up oralizing fantasies of community and distribute signs and discourses across practices within disciplines that can accordingly be infiltrated and their practices perfected. The exemplary victim and the metalanguage of literacy allow us to construct model scenes and narratives against which we can generate various algorithmic paths to the scenes and models constructed by the media—and transitioning to the defense of the center (the perfecting practice) and infralanguages of literacy help us to block those paths. All metalinguistic concepts are aimed at obstructing some “tyranny” and thereby indirectly indicate some possible executive action—to put it simply, what should be done is something the anti-tyranny metaconcept enjoins. Convergence upon a victim is a ritual and mythical practice, but it signifies, not a specific form of “tyranny” targeting that victim, but disordered power to which the specificities of the victim are incidental. Infralinguistic centering practices follow the imperative constant to disable the victimary-metalinguistic link. “This violence against this victim means that the system is guilty of this form of tyranny which we must devote our entire being to overthrowing” always needs to be translated into “this anomaly (whether in a law enforcement, or economic, or reporting, or educational practice) indicates the need to perfect this cluster of practices.” This would be the continual conversion of ritual into practice.

July 16, 2020

Truth and Practice

Filed under: GA — adam @ 10:15 am

What is true? Whatever enables you to further perfect your practice. You have a practice when you can point to something that happens that could only have been a result of something you did. So, if I drop a glass and it shatters, something that only happened because I dropped it, do I have a practice? Yes, but a limited one—you could expand it by, for example, dropping other things and seeing that they too shatter; or by hitting the glass with a hammer or throwing it against the wall, and seeing that these actions also lead to its shattering; but that’s about it. On the other hand, if you drop the glass on a pillow, and see that it doesn’t shatter, and then try it out on surfaces intermediate in hardness between floor and pillow, we might start getting somewhere interesting. You have a practice that we might describe as testing the resilience of various substances under various conditions, and that could get very sophisticated and be extremely useful. (Obviously, it is.) Every time you identify some correlation between resiliency and constraining conditions you have said or recorded something true.

Part of having a practice is reframing things you were doing previously as imperfect versions of that practice. Looking back, the scientist testing the resiliency of objects can say that’s what he was “really” doing when he dropped the glass. And that’s also true, if you can trace that accident to your present practices—no one can know everything about one’s attentional state in some prior event, and very often later actions in a sequence are needed to bring out the truth of earlier actions. That’s part of the practice—deriving the elements of your practice from previous, maybe “unconscious” attempts at it. This is also a helpful way to remind yourself that you don’t know everything you’re doing right now, and to conduct your present practices in such a way that subsequent versions will reveal what presently remains obscure. The more inclusive of past practices and the more anticipatory of future ones they are, the more truth your present practices generate.

This doesn’t mean, though, that the earlier practices “contain,” or inexorably lead to, the later practices—nothing about breaking a glass sets you on the path to create technologies to test the effects of various temperatures on specific objects. Social and technological histories have to intervene. Various substances come to be used for various purposes; a social space must be created in which people have the time to “specialize” in certain modes of production; and this means certain kinds of violence must be minimized and certain kinds of authority constructed. We can leave scientific practices to the scientists, except for when those practices cross over into other domains; what we can focus on, though, is the practical structures of those other domains, which “receive” the results of scientific practices and provide the conditions for them. And in the domains of human interaction in its various media, the question of what counts as a practice, or as the perfection of practices within a system of practices, is more complex. When I speak with someone, what makes that a practice? What happens, and happens in such a way that I can point to it so that others can see it only as a result of what I say? How do I conduct my speech so as to make things happen so that their effects can be singled out in this way?

It’s good to be both matter of fact and revelatory at the same time—you’re doing something that can be repeated, i.e., made routine and practicable for anyone, while you’ve designed that practice, and determined the site of its use, in such a way as to produce some knowledge that wouldn’t have existed otherwise. The construction of a practice is simultaneous with realizing that you’ve already been constructing a practice. The starting point is always an anomaly or a mistake—someone does or says something that doesn’t fit the frame of expectations that enable us to make sense of something. The first step is to suppress your impulse to “harmonize” the anomaly with the field of expectations or correct the mistake, and in the latter case to suppress your shame if it happens to be your own. It has to become interesting. The breaking of a frame makes you realize there was a frame; since a mistake or anomaly is essentially the collision of some other frame with the one determining your expectations, you now have two frames that happened to interfere with each other. Such interference is what brings the newly recognized frame into view.

You now have a question around which to organize your emergent practice: what does each frame include and exclude? To answer this question, you have to run tests: repeat the mistake or anomaly, and see how the frame responds. But this raises a question—what counts as a “repeat”? No gesture or utterance can simply be repeated, because part of the gesture or utterance is the context, which has been transformed by the gesture, or utterance, or sample in question. You need to single out what, exactly, in that sample you are identifying as iterable. Since what we’re interested in is the frame which has been disrupted, what needs to be singled out is a particular form of disruption of that frame, or that “kind” of frame. What makes it that kind of frame is the practice that initiated it, and the way it draws in the elements and means from the whole. We can converge the two: the evidence of the practice that initiated a given frame or field is in the way it appropriates and converts elements from the surrounding fields. Something in those surrounding fields will resist incorporation, and the attempt to subdue or ignore this resistance will generate anomalies.

If we know the starting point of inquiry is the anomaly or mistake, we can refine our attention so as to lower the threshold for what we count as an anomaly or mistake. We can do this by imagining the contexts, actual or potential, in which some sample would appear anomalous. There’s a short step from such refinements to adopting the perpetual disciplinary stance that is always on the lookout for what might be anomalous or mistaken in any sample we come across—always looking at things askew, we might say. In this way we see the possibilities of cultural innovation everywhere, because, as we know from Eric Gans’s study of the succession of speech forms in The Origin of Language, the new cultural form emerges from the treatment of the mistake as the creation of a new form of presence, if only one can find a way to turn it into a practice others might repeat. So, as we’re lowering the threshold for the identification of mistakes, and widening the hypothetical fields in which those samples would be mistakes, we are also, in the very act of ostensively identifying these mistakes, modeling a way of turning them into the origin of new practices.
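This “lowering of the threshold” for what counts as an anomaly can be given a toy statistical form: flag any sample whose frequency under the baseline frame falls below a cutoff, then widen the cutoff so that progressively less obvious samples count as anomalous. The corpus and counts here are invented placeholders, not real data.

```python
# Hypothetical sketch: a sample counts as anomalous when its relative
# frequency under the baseline "frame" falls below a cutoff. Raising the
# numeric cutoff here corresponds to "lowering the threshold" for what
# counts as an anomaly: more samples get flagged. All counts are invented.

from collections import Counter

baseline = Counter({"the": 500, "of": 300, "practice": 40, "vorticist": 1})
total = sum(baseline.values())

def anomalies(cutoff: float) -> list:
    """Return tokens whose relative frequency is below `cutoff`."""
    return [tok for tok, n in baseline.items() if n / total < cutoff]

print(anomalies(0.005))  # ['vorticist']
print(anomalies(0.06))   # ['practice', 'vorticist']
```

Widening the cutoff is the mechanical analogue of the disciplinary stance that looks at every sample askew: each newly flagged item is a candidate origin for a new practice rather than a defect to be corrected.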

The path, then, toward perfecting our practices, lies in the ongoing surfacing of the mistake/practice interfaces all across the field of the present. This involves iterating samples in the closest possible way, making our iterations as indistinguishable as possible from the original; while at the same time revealing everything mistaken or anomalous in the “original” and producing the practice that would make up the difference. It’s a kind of infiltration that’s right out in the open and transforms the space being infiltrated so that it’s no longer an infiltration. The practice is the transposition of a sample from one field to another, in such a way that the fields are converted into elements appropriable by the practice. One would never say anything other than what is being said, but in such a way as to summon everything that makes it sayable.

We can frame this in algorithmic terms. What we notice are low probability events. We notice them against the background of high probability events, which can be held constant. The paradox here is that the low probability event, if it happened, was in fact very high probability—100%, in fact. What made it seem low probability, then, were precisely all the other events that were being held constant. (The originary hypothesis is a very helpful model here: there’s nothing that we find ourselves more entitled to hold constant than our existence as human beings, whatever we take that to entail—but holding all that constant makes the emergence of the human itself very low probability, since not having the center is unimaginable.) Whatever our system of measurement is equipped to detect made it incapable of detecting whatever pointed to the emergence of what actually happened. So, we work backwards from the supposedly low-probability event to the system of measurement and we identify everything that pointed to the surprising occurrence, and set it alongside what was actually noticed instead. That’s the instruction that sets the algorithm to work: find all the markers of the event’s emergence, from beginning to end (the parameters for all this would have to be determined), and determine the threshold of detectability of those markers within the existing system of measurement. 
An obvious example here would be the 2016 election: an intellectually honest prognosticator who was 99% sure Hillary Clinton would win might want to do a study of the forms of attention that led to that conclusion. Part of doing that would be to go back and look for all the things you could have noticed and articulated into a better prediction, but never saw because you disdained the source (as opposed to more “reliable” ones); or saw but relegated to the irrelevant because they conflicted with other information you held constant as relevant; or noticed and found curious or troubling but never pieced together because they didn’t fit a paradigm that had been successful in the past; and so on. You could imagine this being done through a continual refinement of search terms taking you through the archives, guided by the feedback you received from the previous search. The algorithm would be the formula or set of instructions enabling the computer to do this on its own, producing various models for you of reconfigured attentional structures that would have led to different results.
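A crude sketch of that feedback loop, with the archive reduced to sets of terms and the “search engine” to set intersection; everything here is a stand-in rather than a real search API:

```python
def refine_query(archive, query, rounds=3):
    """archive: list of documents, each a set of terms; query: set of terms."""
    for _ in range(rounds):
        # Retrieve every document sharing at least one term with the query.
        hits = [doc for doc in archive if doc & query]
        # Feedback step: fold the hits' terms into the next round's query.
        for doc in hits:
            query = query | doc
    return query

archive = [
    {"polls", "clinton"},
    {"polls", "turnout"},
    {"turnout", "rustbelt"},
]
print(refine_query(archive, {"polls"}))
```

Each round widens the query with terms it could not have started from: "rustbelt" is only reachable through the intermediate document about turnout, which is the toy version of noticing what a disdained source was connected to.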

So, right now, spreading out to the fringes of your awareness and beyond, there are emergent events one outcome of which, were the event brought to your attention, would seem 73% likely, another outcome 17%, another 5%, and so on, until we get to an outcome that seems .000001% likely to happen. Of course, this breakdown will be wrong in some ways, and it will be wrong in more ways as the predictions get more “granular.” (Someone was right in predicting Trump would win, but did they predict he’d win Wisconsin, etc.?) You would then want an ongoing thought process that looks into all the ways you might be wrong and refines your explicit and implicit predictions, not so much to be right more often (this is actually not particularly important) but so as to continually lower the threshold at which you notice things. What, exactly, is an “implicit prediction”? That’s everything you’re paying attention to and not paying attention to, everything you’re hoping and fearing, all the people and institutions you rely on to show you things—every move you make presupposes a structure of predictions.
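One hedged way to render that ongoing revision is simple Bayesian updating over a handful of outcomes; the outcomes and likelihood numbers below are invented, not a claim about any real election:

```python
def update(prior, likelihoods):
    """prior: dict outcome -> probability; likelihoods: dict outcome ->
    P(evidence | outcome). Returns the normalized posterior."""
    unnorm = {o: prior[o] * likelihoods[o] for o in prior}
    total = sum(unnorm.values())
    return {o: p / total for o, p in unnorm.items()}

prior = {"A": 0.73, "B": 0.17, "C": 0.05, "other": 0.05}
# A piece of evidence that fits outcome B far better than outcome A:
posterior = update(prior, {"A": 0.1, "B": 0.9, "C": 0.5, "other": 0.5})
print(posterior)
```

The interesting quantity is not whether B overtakes A but how small a likelihood difference suffices to move the distribution, which is the numerical analogue of lowering the threshold at which you notice things.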

There’s a question of whether probabilities are “real,” or just a method of thinking: we can’t help but consider one outcome more likely than another; and once we assign greater likelihood to one outcome, we can consider how much greater likelihood to attribute to it, and so on. This presents itself as reality to us. But whatever happens, actually happens. Rather than enter this debate, I will say that what is processed by us, more or less formally, as probabilities, can be resolved into who we think is where, doing what. If I think Trump has a 25% chance of winning the election, then I’m attributing to his supporters, opponents and neutrals a “mass,” a set of motivations and capacities, very different from those I’d attribute if I thought he had a 75% chance of winning. The same goes for all the social institutions that are facilitating one outcome rather than another. The distribution of probabilities is really a distribution of “anthropomorphized” people, ranging themselves in relation to each other. To clarify the point, at the risk of caricaturing it: there’s some guy in Michigan upon whom my system of measurement hangs who “must” be accessing certain sources of information, have a certain circle of friends, be ready to argue with others and help campaign to a specific extent, be annoyed or outraged when he sees and hears certain things, and so on—we could construct a detailed profile, which is much of what present-day algorithms do. My thinking, we might say, is entangled with the existence of this guy, as a kind of tipping point. We can, then, people the probabilities, which involves peopling ourselves as well—who I am is the sum total of everyone out there I imagine manning their stations, or not. “Peopling yourself” is therefore a practice of distributing the present: present who you imagine as your tipping point for whatever event, and you begin to elicit a model of the entire social order as others do the same.
We are all of us tipping points for some set of events and you find out which those are by peopling yourself.

July 7, 2020

The V(e/o)rticist App

Filed under: GA — adam @ 11:54 am

This post continues the thinking initiated in “The Pursuit of Appiness” several posts back. What I want to emphasize is the importance of thinking, not in terms of external attempts to affect and influence others’ thinking and actions, but in terms of working within the broader computational system so as to participate in the semiotic metabolism which creates “belief,” “opinions,” “principles” and the rest further downstream. The analogy I used there was prospective (and, for all I know, real) transformations in medical treatment where, instead of counter-attacking some direct threat to the body’s integrity, like bacteria, a virus, or cancerous cells, the use of nanobots informed by data accessed from global computing systems would enable the body to self-regulate so as to be less vulnerable to such “invasions” in the first place. The nanobots in this case would be governed by an App, an interface between the individual “user” and the “cloud,” and part of the “exchange” is that the bots would be collecting data from your own biological system so as to contribute to the ongoing refinement of the organization of globally collected and algorithmically processed data. The implication of the analogy is that as social actors we ourselves become “apps,” and, to continue the analogy a bit further, these apps turn the existing social signs into “bots.”

This approach presupposes that we are all located at some intersection along the algorithmic order—our actions are most significant insofar as we modify the calculation of probabilities being made incessantly by computational systems. Either we play according to the rules of some algorithm or we help design their rules—and “helping design” is ultimately a more complex form of “playing according to.” The starting point is making a decision as to how to make what is implicit in a statement explicit—that is, making utterances or samples more declarative. Let’s take a statement like “boys like to play with cars.” Every word in that sentence presupposes a great deal and includes a great deal of ambiguity. “Boys” can refer to males between the ages of 0 and 18—for that matter, sometimes grown men are referred to, more or less ironically, as “boys.” Do “liking” and “playing” mean the same thing for a 4-year-old as for a 14-year-old male? How would we operationalize “like”? Could that mean anything from being obsessed with vintage cars to having some old toy hot rods around that one enjoys playing with when there’s nothing else to do? Does “liking” place a particular activity on a scale with other activities, like playing football, meeting girls, bike riding, etc.? Think about how important it would be to a toy car manufacturer to get the numbers right on this. We could generate an at least equally involved “explicitation” for a sentence like “that’s a dangerous street to walk at night.” What counts as a danger, as different levels of danger, as various sources of danger, what are the variations for different hours of the night, what are the different kinds and degrees of danger for different categories of pedestrians at different hours of the night, and so on.
Every algorithm starts out with the operationalization of a statement like this, which can now be put to the test and continually revised—there are various ways of gathering and processing information regarding people’s walks through that street at night and each one would add further data regarding forms and degrees of dangers. Ultimately, of course, we’d be at the point where we wouldn’t even be using a commonsensical word like “danger” anymore—we’d be able to speak much more precisely of the probability of suffering a violent assault of a specific kind given specific behavioral patterns at a specific location, etc. Even words like “violent assault,” while legal and formal, might be reduced to more explicit descriptions of unanticipated forcible bodily contact, and so on.
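The dissolution of “danger” into explicit conditional rates might be caricatured like this; the street name, hour bands, and incident rates below are fabricated for the sketch:

```python
# (location, hour_band, on_foot_alone) -> incidents per 10,000 visits
incident_rate = {
    ("elm_street", "night", True):  12.0,
    ("elm_street", "night", False):  4.0,
    ("elm_street", "day",   True):   1.5,
}

def risk(location, hour_band, alone, default_per_10k=0.5):
    """Probability per visit of the specific, operationalized event."""
    per_10k = incident_rate.get((location, hour_band, alone), default_per_10k)
    return per_10k / 10_000

print(risk("elm_street", "night", True))  # 0.0012
```

Nothing in the table uses the word “dangerous” anymore; each new walk down the street would, in principle, revise one cell of it.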

All this is the unfolding of declarative culture, which aims at the articulation of virtualities at different levels of abstraction. There are already apps (although I think they were “canceled” for being “racist”) that would warn you of the probability of a particular kind of danger at a particular place at a particular time. And, again, you being there produces more data that will be part of the revision of probabilities provided for the next person to use the app there. But there is an ostensive dimension to the algorithm as well, insofar as the algorithm begins with a model: a particular type of event, which must itself be abstracted from things that have happened. When you think of a street being dangerous, you think in terms of specific people, whose physical attributes, dress, manners and speech you might imagine in pretty concrete terms, doing specific things to you. You might be wrong about much of the way you sketch it out, but that’s enough to set the algorithm in motion—if you’re wrong, the algorithm will reveal that through a series of revisions based on data input determined by search instructions. The process involves matching events to each other from a continually growing archive, rather than a purely analytical construction drawing upon all possible actors and actions. The question then becomes how similar one “dangerous” event is to others that have been marked as “dangerous,” rather than an encyclopedia style listing of all the “features” of a “dangerous” situation, followed by the establishment of a rule for determining whether these features are observed in specific events. Google Translate is a helpful example here. The early, ludicrously bad attempts to produce translation programs involved using dictionaries and grammatical rules (the basic metalanguage of literacy) to reconstruct sentences from the original to the target language in a one-to-one manner. 
What made genuine translation possible was to perform a search for previous instances of translation of a particular phrase or sentence, and simply use that—even here, of course, there may be all kinds of problems (a sentence translated for a medical textbook might be translated differently in a novel, etc.), but, then, that is what the algorithm is for—to determine the closest match, for current purposes (with “current purposes” itself modeled in a particular way), between original and translation.
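A miniature of the match-to-precedent approach: rather than applying rules, find the stored case most similar to the new one and reuse what was done with it. Jaccard similarity over word sets stands in, very loosely, for whatever measure a real translation system uses, and the “memory” entries are invented:

```python
def jaccard(a, b):
    """Similarity of two phrases as overlap of their word sets."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def closest_precedent(memory, phrase):
    """memory: dict of stored source phrase -> how it was handled."""
    best = max(memory, key=lambda src: jaccard(src, phrase))
    return memory[best]

memory = {
    "the patient presents with acute pain": "medical register",
    "a dull ache gnawed at him all night": "novelistic register",
}
print(closest_precedent(memory, "patient presents with chronic pain"))
```

The medical phrasing wins not because any rule defines “medical” but because the new sentence shares more of its surface with that precedent, which is the archive-matching logic described above.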

Which kind of event you choose as the model is crucial, then, as is the way you revise and modify that event as subsequent data comes in. To be an “app,” then, is to be situated in that relationship between the original (or, “originary,” a word that is very appropriate here) event and its revisions. For example, when most Americans think of “racism,” they don’t think of a dictionary or textbook definition (which they could barely provide, if asked—and which are not very helpful, anyway), much less of the deductive logic that would get us from that definition to the act they want to stigmatize—they think of a fat Southern sheriff named Buford, sometimes with a German Shepherd, sneering or barking at a helpless black guy. This model has appeared in countless movies and TV shows, as well as in footage from the civil rights protests of the 50s and 60s. So, the real starting point of any discussion or representation of “racism” is the relation between Buford and, say, some middle-aged white woman who gets nervous and acts out when a nearby black man seems “menacing.” The “anti-racist” activist wants to line up Buford with “Karen,” and so we can imagine and supply the implicit algorithm that would make the latest instance a “sample” derived from the model “source”; the “app” I’m proposing “wants” to interfere with this attempt, this implicit algorithm, to scramble the wires connecting the two figures. This would involve acting algorithmically—making explicit new features of either scene and introducing new third scenes that would revise the meaning of both of our starting ones. There’s a sliding scale here, which allows for differing responses to different situations—one could “argue” along these lines, if the conditions are right; or, one could simply introduce subversive juxtapositions, if that’s what the situation allows for.
Of course, the originary model won’t always be so obvious, and part of the process of self-appification is to extract the model from what others are saying. In this way, you’re not only in the narrative—you’re also working on the narrative, from within.

Working on it toward what end? What’s the practice here? You, along with your interlocutor or audience, are to be placed in real positions on virtual scenes. We all know that the most pointless way of responding to, say, an accusation of racism, is to deny it—if you’re being positioned as a racist on some scene, the “appy” approach is to enact all of the features of the “racist” (everything Buford or Karen-like in your setting) minus the one that actually marks you as “racist.” What that will be requires a study of the scene, of course, but that’s the target—that’s what we want to learn how to do. And the same thing holds if you’re positioned as a victim of a “racist” act, or as a “complicit bystander.” If you construct yourself as an anomaly relative to the model you are being measured against, the entire scene and the relation between models need to be reconfigured. The goal is to disable the word “racist” and redirect attention to, say, the competing models of “violence” between which the charge of “racism” attempts to adjudicate: for example, a model of violence as “scapegoating” of the “powerless,” on the one hand, as opposed to a model of violence as the attack on ordered hierarchy (which is really a case of scapegoating “up”), on the other. If we’re talking about “violence,” then we’re talking about who permits, sponsors, defines and responds to “violence.” We’re talking about a central authority whose pragmatic “definition” of “violence” will not depend upon what any of us think, but which nevertheless can only “define” through us.

This move to blunt and redirect the “horizontalism” of charges of tyrannical usurpation so as to make the center the center of the problematic of the scene is what we might call “verticism.” The vertical looks upward, and aims at a vertex, the point where lines intersect and create an angle. The endpoint of our exchange is for all of our actions to meet in an angle, separate from all, from which someone superintends the whole. Moreover, verticism is generated out of a vortex, an accelerating whirlpool that provides a perfect model for the intensification of mimetic crisis—and a vorticism aligned with verticism also pays homage to the artistic avant-garde movement created by Wyndham Lewis. “Vertex” and “vortex” are ultimately the same word, both deriving from the word for “turn”—from the spiraling, dedifferentiating and descending turns of the vortex to the differentiating and ascending turns of the vertex. The “app” I have in mind finds the “switch” (also a “turn”) that turns the vortex into a vertex. From “everyone is Buford” to “all the events you’re modeling on Buford are so different from each other that we might even be able to have a couple words with Buford himself.” So, I’m proposing The V(e/o)rticist App as the name for a practice aimed at converting the centering of the exemplary victim into the installation of the occupant of the center.

June 29, 2020

Toward a Media-Moral Synthesis

Filed under: GA — adam @ 12:27 pm

Haun Saussy, in an excellent book on the relation between orality and literacy (and media history more generally), suggests a way of thinking about orality that reframes the whole question. Rather than trying to define empirically how to sort out what in (or “how much” of) a community is constituted through orality, what we are to count as “writing,” what criteria we are going to have for “literacy,” and so on, he suggests thinking about orality as ergodic in its constitution. Here’s the online dictionary definition of “ergodic”:

relating to or denoting systems or processes with the property that, given sufficient time, they include or impinge on all points in a given space and can be represented statistically by a reasonably large selection of points.

With regard to language, this means a signifying system that is finite: given enough time, all the different “elements” of the system will be used. This view of language runs counter to the assumption shared, I think, by all schools of modern linguistics, which is that language is constituted by a set of combinatorial rules that make unlimited utterances possible—new things can always be said in the language, and always are said, and not necessarily by language users who are particularly creative or inventive. Language is intrinsically generative and therefore infinite. If we follow up on Saussy’s suggestion, though, this is in fact only the case for written languages. Languages in a condition of orality are constituted by a finite number of “formulas,” or “commonplaces,” or “clichés,” or “chunks,” that are not infinitely recombinable.
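The contrast can be simulated in a few lines: a finite stock of formulas is exhausted by enough speech, while even a two-rule combinatorial grammar defines a space of sentences rather than a list. The formulas and rules below are, of course, toy inventions:

```python
import itertools
import random

formulas = ["swift-footed Achilles", "wine-dark sea", "rosy-fingered dawn"]
random.seed(0)
used = set()
while used != set(formulas):  # with a finite stock, full coverage is certain
    used.add(random.choice(formulas))

# A two-rule "grammar" already defines a space rather than a list:
subjects, verbs = ["the bard", "the scribe"], ["sings", "records"]
sentences = {f"{s} {v}" for s, v in itertools.product(subjects, verbs)}
print(len(sentences))  # 4
```

The first half is the ergodic picture: speech as sampling from a closed repertoire that time alone suffices to traverse. The second half only hints at the generative picture, since adding rules multiplies the space rather than extending the list.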

This new way of framing the question could raise a whole series of questions. One could say that language was always “potentially” infinite, and so modern linguistics would still be essentially right—and there must be some sense in which this is true. One could say that it is the specifically metalinguistic concepts introduced in order to institutionalize writing (and writing was institutionalized from the beginning), like the “definition” of words, and, especially, grammatical “rules,” that introduced the infinitization of language. One might even want to argue that, perhaps, we are wrong in thinking languages even in their literate form are inexhaustible—after all, how could we really know? What I will do is follow up on some hypotheses I’ve taken over from thinkers of orality/literacy like David Olson and Marcel Jousse and explore the relation between the emergence of literacy and Axial Age moral innovations.

Remember that for Olson the entry point into the oral/literate distinction is the problem of reported speech—telling someone what someone else said. Under oral conditions, the tag “X said” would be used (which reminds us that “say” is one of Wierzbicka’s primes), but the reporting of speech would be performed mimetically—the one reporting the speech not only wouldn’t paraphrase or summarize, but would say the exact same thing in the exact same way. That’s the presumption, at least, even if an outside observer might notice discrepancies. What is said is shared by the two speakers, and this presumption is strengthened by the ergodic nature of language under orality, which means that no one can say anything that hasn’t already been said, and won’t be said again. Individual speakers are conduits of a language that flows through them, and that they are “within”—and the language of ritual and myth would, further, be the model and resource for everyday speech, as everyone inhabits traditionally approved roles. Everyone is a_________, with the blank filled in by some figure of tradition.

When writing, you can’t imitate the way someone said something, so everything apart from the actual words needs to be represented lexically. This leads to the metalanguage of literacy, involving the vast expansion of words representing variations on, first of all, “say,” and “think.” You can’t reproduce the excited manner in which someone said something, so you say “he exclaimed.” This is, of course, an interpretation of how it was said, and so, one could say, was the imitation, but this difference in register makes it harder to check the interpretation against the original—it would be easier for a community to tell whether you provide a plausible likeness of some other member than to sort out whether he indeed “exclaimed”—rather than, for example, simply “stating.” Proficiency in the metalanguage provides authority—you own what the other has said—which is why an exact replication of the original words would become less important.

What is happening here is that while a difference is opening up between the original speaker and the one reporting the speech, differences are also opening up between the reporter and the audience and, eventually, within the speaker himself. This is the creation of “psychological depth.” Did he “exclaim” or “state”? Or, for that matter, “shriek”? That would depend on the context, which could itself be constructed in various ways, and never exhaustively. The very range of possible descriptions opened up by the metalanguage of literacy generates disagreements—defenders of the original speaker would “insist” he simply firmly “stated,” while his “critics” would “counter” that he, in fact, was losing it. It then becomes possible to ask oneself whether one wants to be seen as stating or exclaiming, to examine the “markers” of each way of “saying,” and to put effort into being seen as a “stater” rather than as “exclamatory.” Which then opens up further distinctions, between how one appears, even to oneself, and what one “really” is. On the surface I’m stating, clearly and calmly, but am I exclaiming “deep down”? (Of course, the respective values of “exclaiming” and “stating” can be arranged in other ways—what matters is that the metalanguage of literacy necessarily implies judgments regarding the discrepancy between what someone says and what they “really mean,” whether or not they are aware of that “real meaning.”)

Oral accounts involve people doing and saying things; the oral accounts preserved most tenaciously are those in which what people do and say place the center in some kind of crisis, a crisis that is then resolved. Such narratives will remain fairly close to what can be performed in a ritual, and thereby re-enacted by the community. Writing is neither cause nor effect of a distancing of the community from a shared ritual center, but it broadly coincides with it. Writing begins as record-keeping, which right away presupposes transactions not directly mediated by a common sacrifice. Record-keeping implies both hierarchy—a king separated from his subject by bureaucratic layers—and “mercurial” agents, merchants, who move across different communities, sharing a ritual order with none of them. The earliest form of literacy is manuscript culture, where a written text serves to aid the memory in oral performances. The very fact that such an aid is necessary and possible, though, means we have moved some distance from the earliest “bardic” culture.

Where things get interesting is where the manuscripts start to proliferate, as they surely will, and differ from each other. Members of an oral culture might enforce certain kinds of conformity very strictly, but could hardly keep track of “deviations” from an original text, especially since such a text doesn’t exist. Diverse written accounts would make divergences unavoidable and consequential, because the very fact that a text was found worthy of committing to the permanence of writing (an expensive and time-consuming process) would add a sacred aura to it. As we move into a later form of manuscript culture, in which commentaries, oral but also sometimes written as well, are added to the texts, these differences would have to be reconciled—generating, in turn, more commentary. This is an early version of what Marcel Jousse called “transfer translations,” i.e., translations into the vernacular of a sacred text preserved in an archaic language—according to Jousse, the inevitable discrepancies between the translation and original, due to the differing formulas in each, respectively, generates commentary aimed at reconciling them.

Reconciling such discrepancies could involve nothing more than “smoothing out” while keeping the narrative and moral lessons essentially intact. There will be times, though, when the very need to address discrepancies allows for, and even attracts, complicating elements. Let’s say the prototypical oral, mythical narrative involves some agent transgressing against or seeking to usurp the center in a way that disrupts the community and then being punished (by the center or the community) in a way that restores the community. If there’s no longer a shared ritual space, such narratives are less likely to be so unequivocal. To transgress against the center is now to transgress against a human occupant of the center. It is possible to refer to a discrepancy between that occupant and the permanent, or signifying center. There can be a discrepancy between human and divine “accounting” or “bookkeeping,” in which sins and virtues, crime and punishment, must be balanced. The discrepancies between “accounts” will attract commentaries exploring this discrepancy. The injustice suffered, the travails undergone, perhaps the triumphs, real or compensatory, experienced by the figure of such a discrepancy will come to be incorporated into a text that is, we might say, “always already” commented upon—that is, such a more complex story will include, while keeping implicit, the accretion of meanings to the “original” narrative. This is what gets us out of the ergodic, and into the vertiginous world of essences (new centers) revealing themselves behind appearances, as well as historical narratives modeled on such ambivalent relations to the center.

Once such a text, or mode of textuality, is at the center of the community, we are on the way to a more complete form of literacy, in which the metalanguage of literacy overlays and incorporates originally oral discourses. Literacy is crucially involved in the shift in the heroic narrative from the “Promethean” (and doomed) struggle against the center to the victim who exemplifies what we can now see as the unholy, even Satanic, violence of the imperial center. This means that the figure of the “exemplary victim,” that is, the victim of violence by the occupant of the center, a violence that transgresses the imperative of the signifying center, is simply intrinsic to advanced literacy. Our social activity is therefore a form of writing the exemplary victim. Liberal culture has its own way of doing so—the exemplary victim is the victim of some form of “tyranny” and demonstrates the need for a super-sovereign-approved form of rule that bypasses or eliminates that tyranny. It’s almost impossible to speak in terms other than “resisting” some “illegitimate” power in the name of someone’s “rights” (as defined by the disciplines—law, philosophy, sociology, psychiatry, etc.).

If “postliberalism,” or what we could call “verticism,” is genuinely “reactionary,” I would say it is in redirecting attention from the exemplary victim back to the occupant of the center, highlighting that occupant’s inheritance of sacral kingship and therefore vulnerability to scapegoating and sacrifice. The exemplary victim could emerge in the space opened by the ancient empires, where the ruler was too distant from the social order to be sacrificed, but post-Roman European kings never definitively achieved this distance, and liberalism is predicated upon putting the center directly at stake, predicating the center’s invulnerability so as to exacerbate its vulnerability. All scapegoating attributes some hidden power to the victim, which is to say, places the victim at the center; all scapegoating of figures at the margin, then, is a result and measure of unsecure power at the center; so, refusal to participate in scapegoating, or violent centralization, is really bound up with the imperative to secure the center. This means treating the victim as a sign of derogation of central authority, rather than levying the victim against that authority. So, it’s not that we can ignore the exemplary victim; rather, we must “unwrite” the exemplary victim. This may be the hardest thing to do—to renounce martyrdom, to acknowledge victims but deny their exemplarity in order to “read” them as markers of the center’s incoherence—while representing that incoherence in order to remedy it. The very fact that we are drawn to one victim rather than another—this “racist” who has been canceled, that website that has been de-platformed or de-monetized—itself tends to make that victim “exemplary,” and we do have to pay attention. Nor do we want to “victim-blame” (if only they had been more careful, etc.), even if discussions of tactics and strategy are necessary.

Insofar as we inherit the European form of the Axial Age moral acquisition, we can’t help but see through the frame of the exemplary victim—even a Nietzschean perspective which purports to repudiate victimary framings and claim an unmediated agency is the adoption of a position shaped by Romantic claims to subjective centrality and therefore sacrificiability (Nietzsche’s own “tragic” end reinforces this). The exemplary victim is constitutive of our language and narratives, which is why it needs to be “unwritten.” The whole range of exemplary victims produced across the political spectrum constitutes our “alphabet” (or, perhaps, “meme factory”). The most direct way to unwrite might be to follow up on the observation that the function of the disciplinary deployments of the exemplary victim is to plug executive power into the disciplines, which then can turn the switch on and off. But these detourings of centered ordinality nevertheless anticipate some use of the executive—those most deeply invested in declarative cultures like the law want the executive to crack down on their enemies as much as anyone else. So, it’s always possible to cut to the chase and propose, and where possible embody, that use of executive power which would most comprehensively make future instances of that form of victimage as unlikely as possible. One proposes, that is, some increased coherence in the imperatives coming from the center (and, by implication, in the cultivation of those dispositions necessary to sustain that coherence). If we did X, this victim over whom we are agonizing would be irrelevant—we could forget all about him. One result would be the revelation of how dependent liberal culture is upon its martyrs—so much so that they’d rather preserve their enshrinement than solve the supposed problem and thereby write them off.
In the meantime, we’d be embarking upon a rewriting of moral inheritances that would erase the liberal laundering of scapegoating through the disciplines once and for all.
