This post continues the thinking initiated in “The Pursuit of Appiness” several posts back. What I want to emphasize is the importance of thinking, not in terms of external attempts to affect and influence others’ thinking and actions, but in terms of working within the broader computational system so as to participate in the semiotic metabolism which creates “belief,” “opinions,” “principles” and the rest further downstream. The analogy I used there was prospective (and, for all I know, real) transformations in medical treatment where, instead of counter-attacking some direct threat to the body’s integrity, like bacteria, a virus, or cancerous cells, the use of nanobots informed by data accessed from global computing systems would enable the body to self-regulate so as to be less vulnerable to such “invasions” in the first place. The nanobots in this case would be governed by an App, an interface between the individual “user” and the “cloud,” and part of the “exchange” is that the bots would be collecting data from your own biological system so as to contribute to the ongoing refinement of the organization of globally collected and algorithmically processed data. The implication of the analogy is that as social actors we ourselves become “apps,” and, to continue the analogy a bit further, these apps turn the existing social signs into “bots.”
This approach presupposes that we are all located at some intersection along the algorithmic order—our actions are most significant insofar as we modify the calculation of probabilities being made incessantly by computational systems. Either we play according to the rules of some algorithm or we help design its rules—and “helping design” is ultimately a more complex form of “playing according to.” The starting point is making a decision as to how to make what is implicit in a statement explicit—that is, making utterances or samples more declarative. Let’s take a statement like “boys like to play with cars.” Every word in that sentence presupposes a great deal and includes a great deal of ambiguity. “Boys” can refer to males between the ages of 0 and 18—for that matter, sometimes grown men are referred to, more or less ironically, as “boys.” Do “liking” and “playing” mean the same thing for a 4-year-old as for a 14-year-old male? How would we operationalize “like”? Does that mean anything from being obsessed with vintage cars to having some old toy hot rods around that one enjoys playing with when there’s nothing else to do? Does “liking” place a particular activity on a scale with other activities, like playing football, meeting girls, bike riding, etc.? Think about how important it would be to a toy car manufacturer to get the numbers right on this. We could generate an at least equally involved “explicitation” for a sentence like “that’s a dangerous street to walk at night.” What counts as a danger, as different levels of danger, as various sources of danger; what are the variations for different hours of the night; what are the different kinds and degrees of danger for different categories of pedestrians; and so on.
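As a minimal sketch of this explicitation, the vague sentence can be replaced by a declarative predicate with every presupposition exposed as a parameter. Everything here (the age bounds, the idea of a 0-to-1 preference score, the threshold for “liking”) is an invented assumption, not a claim about how any real system does it:

```python
# A minimal sketch of "explicitation": turning the vague claim
# "boys like to play with cars" into explicit, testable parameters.
# Every name, age bound, and threshold below is an illustrative assumption.

def operationalize(age_min=4, age_max=14, liking_threshold=0.6):
    """Return a declarative predicate standing in for the vague claim."""
    def holds_for(person):
        # "liking" is operationalized as a 0-1 preference score for car
        # play relative to other activities (football, biking, etc.)
        return (person["sex"] == "male"
                and age_min <= person["age"] <= age_max
                and person["car_play_preference"] >= liking_threshold)
    return holds_for

claim = operationalize()
print(claim({"sex": "male", "age": 9, "car_play_preference": 0.8}))   # True
print(claim({"sex": "male", "age": 30, "car_play_preference": 0.9}))  # False
```

The point of the sketch is that once the claim takes this form, every parameter becomes something a toy car manufacturer could argue about, measure, and revise.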
Every algorithm starts out with the operationalization of a statement like this, which can then be put to the test and continually revised—there are various ways of gathering and processing information regarding people’s walks through that street at night, and each one would add further data regarding forms and degrees of danger. Ultimately, of course, we’d reach the point where we wouldn’t even be using a commonsensical word like “danger” anymore—we’d be able to speak much more precisely of the probability of suffering a violent assault of a specific kind given specific behavioral patterns at a specific location, etc. Even words like “violent assault,” while legal and formal, might be reduced to more explicit descriptions of unanticipated forcible bodily contact, and so on.
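A toy version of that revision loop can make the dissolution of “danger” into a revisable probability concrete. The street names, hours, and counts below are invented purely for illustration:

```python
# A toy sketch of the revision loop: "danger" dissolves into a revisable
# probability of a specific event type at a specific place and hour.
# Streets, hours, and counts are invented for illustration.

from collections import defaultdict

class DangerEstimator:
    def __init__(self):
        self.walks = defaultdict(int)      # (street, hour) -> walks observed
        self.incidents = defaultdict(int)  # (street, hour) -> assaults of one kind

    def record(self, street, hour, incident=False):
        # every walk through the street is itself new data
        self.walks[(street, hour)] += 1
        if incident:
            self.incidents[(street, hour)] += 1

    def probability(self, street, hour):
        walks = self.walks[(street, hour)]
        if walks == 0:
            return None  # no data yet: nothing to call "dangerous"
        return self.incidents[(street, hour)] / walks

est = DangerEstimator()
for _ in range(9):
    est.record("Elm St", 23)
est.record("Elm St", 23, incident=True)
print(est.probability("Elm St", 23))  # 0.1
```

Each recorded walk revises the estimate for the next person who asks, which is exactly the exchange described above: being there produces the data that reorganizes the probabilities.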
All this is the unfolding of declarative culture, which aims at the articulation of virtualities at different levels of abstraction. There are already apps (although I think they were “canceled” for being “racist”) that would warn you of the probability of a particular kind of danger at a particular place at a particular time. And, again, you being there produces more data that will be part of the revision of probabilities provided for the next person to use the app there. But there is an ostensive dimension to the algorithm as well, insofar as the algorithm begins with a model: a particular type of event, which must itself be abstracted from things that have happened. When you think of a street being dangerous, you think in terms of specific people, whose physical attributes, dress, manners and speech you might imagine in pretty concrete terms, doing specific things to you. You might be wrong about much of the way you sketch it out, but that’s enough to set the algorithm in motion—if you’re wrong, the algorithm will reveal that through a series of revisions based on data input determined by search instructions. The process involves matching events to each other from a continually growing archive, rather than a purely analytical construction drawing upon all possible actors and actions. The question then becomes how similar one “dangerous” event is to others that have been marked as “dangerous,” rather than an encyclopedia style listing of all the “features” of a “dangerous” situation, followed by the establishment of a rule for determining whether these features are observed in specific events. Google Translate is a helpful example here. The early, ludicrously bad attempts to produce translation programs involved using dictionaries and grammatical rules (the basic metalanguage of literacy) to reconstruct sentences from the original to the target language in a one-to-one manner. 
What made genuine translation possible was to perform a search for previous instances of translation of a particular phrase or sentence, and simply use that—even here, of course, there may be all kinds of problems (a sentence translated for a medical textbook might be translated differently in a novel, etc.), but, then, that is what the algorithm is for—to determine the closest match, for current purposes (with “current purposes” itself modeled in a particular way), between original and translation.
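A bare-bones sketch of this “closest match, for current purposes” search might model the memory of previous translations as a list of entries and “current purposes” crudely as a domain tag. The memory entries, placeholder renderings, and scoring weights below are all invented for illustration:

```python
# Sketch of translation by precedent: instead of dictionary-plus-grammar
# rules, search a memory of previous translations for the closest match,
# with "current purposes" modeled crudely as a domain tag.
# Entries, placeholder targets, and weights are invented for illustration.

MEMORY = [
    {"source": "the subject presents with pain", "target": "medical rendering", "domain": "medical"},
    {"source": "the subject presents with pain", "target": "literary rendering", "domain": "novel"},
    {"source": "good morning", "target": "guten Morgen", "domain": "general"},
]

def overlap(a, b):
    # word-set Jaccard similarity between two phrases
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def translate(phrase, domain):
    """Pick the remembered translation that best fits phrase and purpose."""
    def score(entry):
        bonus = 0.5 if entry["domain"] == domain else 0.0
        return overlap(phrase, entry["source"]) + bonus
    return max(MEMORY, key=score)["target"]

print(translate("the subject presents with pain", "medical"))  # medical rendering
print(translate("the subject presents with pain", "novel"))    # literary rendering
```

The same source sentence retrieves different translations depending on the modeled purpose, which is the medical-textbook-versus-novel problem in miniature.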
Which kind of event you choose as the model is crucial, then, as is the way you revise and modify that event as subsequent data comes in. To be an “app,” then, is to be situated in that relationship between the original (or, “originary,” a word that is very appropriate here) event and its revisions. For example, when most Americans think of “racism,” they don’t think of a dictionary or textbook definition (which they could barely provide, if asked—and which are not very helpful, anyway), much less of the deductive logic that would get us from that definition to the act they want to stigmatize—they think of a fat Southern sheriff named Buford, sometimes with a German Shepherd, sneering or barking at a helpless black guy. This model has appeared in countless movies and TV shows, as well as in footage from the civil rights protests of the 50s and 60s. So, the real starting point of any discussion or representation of “racism” is the relation between Buford and, say, some middle-aged white woman who gets nervous and acts out when a nearby black man seems “menacing.” The “anti-racist” activist wants to line up Buford with “Karen,” and so we can imagine and supply the implicit algorithm that would make the latest instance a “sample” derived from the model “source”; the “app” I’m proposing “wants” to interfere with this attempt, this implicit algorithm, to scramble the wires connecting the two figures. This would involve acting algorithmically—making explicit new features of either scene and introducing new third scenes that would revise the meaning of both of our starting ones. There’s a sliding scale here, which allows for differing responses to different situations—one could “argue” along these lines, if the conditions are right; or, one could simply introduce subversive juxtapositions, if that’s what the situation allows for.
Of course, the originary model won’t always be so obvious, and part of the process of self-appification is to extract the model from what others are saying. In this way, you’re not only in the narrative—you’re also working on the narrative, from within.
Working on it toward what end? What’s the practice here? You, along with your interlocutor or audience, are to be placed in real positions on virtual scenes. We all know that the most pointless way of responding to, say, an accusation of racism is to deny it—if you’re being positioned as a racist on some scene, the “appy” approach is to enact all of the features of the “racist” (everything Buford or Karen-like in your setting) minus the one that actually marks you as “racist.” What that will be requires a study of the scene, of course, but that’s the target—that’s what we want to learn how to do. And the same thing holds if you’re positioned as a victim of a “racist” act, or as a “complicit bystander.” If you construct yourself as an anomaly relative to the model you are being measured against, the entire scene and the relation between models need to be reconfigured. The goal is to disable the word “racist” and redirect attention to, say, the competing models of “violence” between which the charge of “racism” attempts to adjudicate: for example, a model of violence as “scapegoating” of the “powerless,” on the one hand, as opposed to a model of violence as the attack on ordered hierarchy (which is really a case of scapegoating “up”), on the other. If we’re talking about “violence,” then we’re talking about who permits, sponsors, defines and responds to “violence.” We’re talking about a central authority whose pragmatic “definition” of “violence” will not depend upon what any of us think, but which nevertheless can only “define” through us.
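The logic of that anomalous self-construction can be caricatured in code: if the implicit algorithm marks a scene by its feature overlap with a model above some threshold, then enacting every feature of the model minus the one that does the marking drops the match below threshold and forces a reconfiguration. The feature names and the threshold here are invented for illustration:

```python
# Caricature of the "anomaly" move: a scene is matched to a model by
# feature overlap; enacting all of the model's features except the one
# that actually does the marking drops the match below threshold.
# The features and the threshold are invented for illustration.

MODEL = {"drawl", "sneer", "uniform", "german_shepherd", "bars_the_door"}

def matches_model(features, model=MODEL, threshold=0.9):
    """Does this set of enacted features get matched to the model?"""
    return len(features & model) / len(model) >= threshold

full_enactment = set(MODEL)
anomaly = full_enactment - {"bars_the_door"}  # everything minus the marking feature

print(matches_model(full_enactment))  # True
print(matches_model(anomaly))         # False: 4/5 falls below threshold
```

The caricature only shows the shape of the move; which feature is the marking one is, as the paragraph above says, what a study of the actual scene has to determine.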
This move to blunt and redirect the “horizontalism” of charges of tyrannical usurpation so as to make the center the center of the problematic of the scene is what we might call “verticism.” The vertical looks upward, and aims at a vertex, the point where lines intersect and create an angle. The endpoint of our exchange is for all of our actions to meet at an angle, separate from all, from which someone superintends the whole. Moreover, verticism is generated out of a vortex, an accelerating whirlpool that provides a perfect model for the intensification of mimetic crisis—and a vorticism aligned with verticism also pays homage to the artistic avant-garde movement created by Wyndham Lewis. “Vertex” and “vortex” are ultimately the same word, both deriving from the word for “turn”—from the spiraling, dedifferentiating and descending turns of the vortex to the differentiating and ascending turns of the vertex. The “app” I have in mind finds the “switch” (also a “turn”) that turns the vortex into a vertex. From “everyone is Buford” to “all the events you’re modeling on Buford are so different from each other that we might even be able to have a couple words with Buford himself.” So, I’m proposing The V(e/o)rticist App as the name for a practice aimed at converting the centering of the exemplary victim into the installation of the occupant of the center.