GABlog

July 7, 2020

The V(e/o)rticist App

Filed under: GA — adam @ 11:54 am

This post continues the thinking initiated in “The Pursuit of Appiness” several posts back. What I want to emphasize is the importance of thinking, not in terms of external attempts to affect and influence others’ thinking and actions, but in terms of working within the broader computational system so as to participate in the semiotic metabolism which creates “belief,” “opinions,” “principles” and the rest further downstream. The analogy I used there was prospective (and, for all I know, real) transformations in medical treatment where, instead of counter-attacking some direct threat to the body’s integrity, like bacteria, a virus, or cancerous cells, the use of nanobots informed by data accessed from global computing systems would enable the body to self-regulate so as to be less vulnerable to such “invasions” in the first place. The nanobots in this case would be governed by an App, an interface between the individual “user” and the “cloud,” and part of the “exchange” is that the bots would be collecting data from your own biological system so as to contribute to the ongoing refinement of the organization of globally collected and algorithmically processed data. The implication of the analogy is that as social actors we ourselves become “apps,” and, to continue the analogy a bit further, these apps turn the existing social signs into “bots.”

This approach presupposes that we are all located at some intersection along the algorithmic order—our actions are most significant insofar as we modify the calculation of probabilities being made incessantly by computational systems. Either we play according to the rules of some algorithm or we help design their rules—and “helping design” is ultimately a more complex form of “playing according to.” The starting point is making a decision as to how to make what is implicit in a statement explicit—that is, making utterances or samples more declarative. Let’s take a statement like “boys like to play with cars.” Every word in that sentence presupposes a great deal and includes a great deal of ambiguity. “Boys” can refer to males between the ages of 0 and 18—for that matter, sometimes grown men are referred to, more or less ironically, as “boys.” Do “liking” and “playing” mean the same thing for a 4-year-old as for a 14-year-old male? How would we operationalize “like”? Does that mean anything from being obsessed with vintage cars to having some old toy hot rods around that one enjoys playing with when there’s nothing else to do? Does “liking” place a particular activity on a scale with other activities, like playing football, meeting girls, bike riding, etc.? Think about how important it would be to a toy car manufacturer to get the numbers right on this. We could generate an at least equally involved “explicitation” for a sentence like “that’s a dangerous street to walk at night.” What counts as a danger, as different levels of danger, as various sources of danger, what are the variations for different hours of the night, what are the different kinds and degrees of danger for different categories of pedestrians at different hours of the night, and so on.
Every algorithm starts out with the operationalization of a statement like this, which can now be put to the test and continually revised—there are various ways of gathering and processing information regarding people’s walks through that street at night and each one would add further data regarding forms and degrees of dangers. Ultimately, of course, we’d be at the point where we wouldn’t even be using a commonsensical word like “danger” anymore—we’d be able to speak much more precisely of the probability of suffering a violent assault of a specific kind given specific behavioral patterns at a specific location, etc. Even words like “violent assault,” while legal and formal, might be reduced to more explicit descriptions of unanticipated forcible bodily contact, and so on.
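The kind of operationalization described here can be sketched in code: the vague predicate “dangerous” becomes a per-street, per-hour probability of a specific incident type, revised as each new walk contributes data. This is only an illustrative sketch under stated assumptions—the class name, the feature keys, and the prior counts are all hypothetical, not drawn from any actual app.

```python
from collections import defaultdict

class DangerModel:
    """Toy operationalization of 'that street is dangerous at night':
    the commonsense word 'danger' is replaced by an estimated probability
    of a specific incident type, conditioned on explicit features
    (street, hour band), and revised with every logged walk."""

    def __init__(self, prior_incidents=1, prior_walks=100):
        # Weak prior (an assumption): roughly 1 incident per 100 walks
        # until actual data revises the estimate.
        self.incidents = defaultdict(lambda: prior_incidents)
        self.walks = defaultdict(lambda: prior_walks)

    def log_walk(self, street, hour_band, incident=False):
        # Each user's presence generates data that revises the estimate
        # offered to the next user.
        key = (street, hour_band)
        self.walks[key] += 1
        if incident:
            self.incidents[key] += 1

    def risk(self, street, hour_band):
        key = (street, hour_band)
        return self.incidents[key] / self.walks[key]

model = DangerModel()
for _ in range(200):
    model.log_walk("Elm St", "22-02")            # 200 uneventful walks
model.log_walk("Elm St", "22-02", incident=True) # one incident
print(model.risk("Elm St", "22-02"))             # estimate drops below the prior
```

Note how the design mirrors the argument: nothing in the model mentions “danger” at all—only counts of explicitly described events, which is the endpoint the paragraph above projects.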

All this is the unfolding of declarative culture, which aims at the articulation of virtualities at different levels of abstraction. There are already apps (although I think they were “canceled” for being “racist”) that would warn you of the probability of a particular kind of danger at a particular place at a particular time. And, again, you being there produces more data that will be part of the revision of probabilities provided for the next person to use the app there. But there is an ostensive dimension to the algorithm as well, insofar as the algorithm begins with a model: a particular type of event, which must itself be abstracted from things that have happened. When you think of a street being dangerous, you think in terms of specific people, whose physical attributes, dress, manners and speech you might imagine in pretty concrete terms, doing specific things to you. You might be wrong about much of the way you sketch it out, but that’s enough to set the algorithm in motion—if you’re wrong, the algorithm will reveal that through a series of revisions based on data input determined by search instructions. The process involves matching events to each other from a continually growing archive, rather than a purely analytical construction drawing upon all possible actors and actions. The question then becomes how similar one “dangerous” event is to others that have been marked as “dangerous,” rather than an encyclopedia style listing of all the “features” of a “dangerous” situation, followed by the establishment of a rule for determining whether these features are observed in specific events. Google Translate is a helpful example here. The early, ludicrously bad attempts to produce translation programs involved using dictionaries and grammatical rules (the basic metalanguage of literacy) to reconstruct sentences from the original to the target language in a one-to-one manner. 
What made genuine translation possible was to perform a search for previous instances of translation of a particular phrase or sentence, and simply use that—even here, of course, there may be all kinds of problems (a sentence translated for a medical textbook might be translated differently in a novel, etc.), but, then, that is what the algorithm is for—to determine the closest match, for current purposes (with “current purposes” itself modeled in a particular way), between original and translation.
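The example-based approach can be sketched as a tiny “translation memory”: instead of applying dictionary and grammar rules, we retrieve the closest previously translated sentence and reuse its translation, with “current purposes” crudely modeled by a domain tag. The memory entries, the German renderings, and the domain bonus are illustrative assumptions, not a description of how Google Translate actually works.

```python
import difflib

# Hypothetical translation memory: (source sentence, stored translation, domain).
memory = [
    ("the patient presents with acute pain",
     "der Patient zeigt akute Schmerzen", "medical"),
    ("she felt a sharp pain in her chest",
     "sie fühlte einen stechenden Schmerz in der Brust", "fiction"),
]

def translate(sentence, domain=None):
    """Return the stored translation of the closest previous instance."""
    def score(entry):
        src, _, dom = entry
        # String similarity stands in for whatever matching the real
        # algorithm performs.
        sim = difflib.SequenceMatcher(None, sentence.lower(), src.lower()).ratio()
        # 'Current purposes' modeled crudely: prefer same-domain matches.
        if domain is not None and dom == domain:
            sim += 0.1
        return sim
    best = max(memory, key=score)
    return best[1]

print(translate("the patient presents with severe pain", domain="medical"))
# -> der Patient zeigt akute Schmerzen
```

The same sentence queried with a different domain tag could retrieve a different stored translation, which is exactly the medical-textbook-versus-novel problem noted above.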

Which kind of event you choose as the model is crucial, then, as is the way you revise and modify that event as subsequent data comes in. To be an “app,” then, is to be situated in that relationship between the original (or, “originary,” a word that is very appropriate here) event and its revisions. For example, when most Americans think of “racism,” they don’t think of a dictionary or textbook definition (which they could barely provide, if asked—and which are not very helpful, anyway), much less of the deductive logic that would get us from that definition to the act they want to stigmatize—they think of a fat Southern sheriff named Buford, sometimes with a German Shepherd, sneering or barking at a helpless black guy. This model has appeared in countless movies and TV shows, as well as footage from the civil rights protests of the 50s and 60s. So, the real starting point of any discussion or representation of “racism” is the relation between Buford and, say, some middle-aged white woman who gets nervous and acts out when a nearby black man seems “menacing.” The “anti-racist” activist wants to line up Buford with “Karen,” and so we can imagine and supply the implicit algorithm that would make the latest instance a “sample” derived from the model “source”; the “app” I’m proposing “wants” to interfere with this attempt, this implicit algorithm, to scramble the wires connecting the two figures. This would involve acting algorithmically—making explicit new features of either scene and introducing new third scenes that would revise the meaning of both of our starting ones. There’s a sliding scale here, which allows for differing responses to different situations—one could “argue” along these lines, if the conditions are right; or, one could simply introduce subversive juxtapositions, if that’s what the situation allows for.
Of course, the originary model won’t always be so obvious, and part of the process of self-appification is to extract the model from what others are saying. In this way, you’re not only in the narrative—you’re also working on the narrative, from within.

Working on it toward what end? What’s the practice here? You, along with your interlocutor or audience, are to be placed in real positions on virtual scenes. We all know that the most pointless way of responding to, say, an accusation of racism, is to deny it—if you’re being positioned as a racist on some scene, the “appy” approach is to enact all of the features of the “racist” (everything Buford or Karen-like in your setting) minus the one that actually marks you as “racist.” What that will be requires a study of the scene, of course, but that’s the target—that’s what we want to learn how to do. And the same thing holds if you’re positioned as a victim of a “racist” act, or as a “complicit bystander.” If you construct yourself as an anomaly relative to the model you are being measured against, the entire scene and the relation between models needs to be reconfigured. The goal is to disable the word “racist” and redirect attention to, say, the competing models of “violence” between which the charge of “racism” attempts to adjudicate: for example, a model of violence as “scapegoating” of the “powerless,” on the one hand, as opposed to a model of violence as the attack on ordered hierarchy (which is really a case of scapegoating “up”), on the other. If we’re talking about “violence,” then we’re talking about who permits, sponsors, defines and responds to “violence.” We’re talking about a central authority whose pragmatic “definition” of “violence” will not depend upon what any of us think, but which nevertheless can only “define” through us.

This move to blunt and redirect the “horizontalism” of charges of tyrannical usurpation so as to make the center the center of the problematic of the scene is what we might call “verticism.” The vertical looks upward, and aims at a vertex, the point where lines intersect and create an angle. The endpoint of our exchange is for all of our actions to meet in an angle, separate from all, from which someone superintends the whole. Moreover, verticism is generated out of a vortex, an accelerating whirlpool that provides a perfect model for the intensification of mimetic crisis—and a vorticism aligned with verticism also pays homage to the artistic avant-garde movement created by Wyndham Lewis. “Vertex” and “vortex” are ultimately the same word, both deriving from the word for “turn”—from the spiraling, dedifferentiating and descending turns of the vortex to the differentiating and ascending turns of the vertex. The “app” I have in mind finds the “switch” (also a “turn”) that turns the vortex into a vertex. From “everyone is Buford” to “all the events you’re modeling on Buford are so different from each other that we might even be able to have a couple words with Buford himself.” So, I’m proposing The V(e/o)rticist App as the name for a practice aimed at converting the centering of the exemplary victim into the installation of the occupant of the center.

June 29, 2020

Toward a Media-Moral Synthesis

Filed under: GA — adam @ 12:27 pm

Haun Saussy, in an excellent book on the relation between orality and literacy (and media history more generally), suggests a way of thinking about orality that reframes the whole question. Rather than trying to define empirically how to sort out what in (or “how much” of) a community is constituted through orality, what we are to count as “writing,” what criteria we are going to have for “literacy,” and so on, he suggests thinking about orality as ergodic in its constitution. Here’s the online dictionary definition of “ergodic”:

relating to or denoting systems or processes with the property that, given sufficient time, they include or impinge on all points in a given space and can be represented statistically by a reasonably large selection of points.

With regard to language, this means a signifying system that is finite: given enough time, all the different “elements” of the system will be used. This view of language runs counter to the assumption shared, I think, by all schools of modern linguistics, which is that language is constituted by a set of combinatorial rules that make unlimited utterances possible—new things can always be said in the language, and always are said, and not necessarily by language users who are particularly creative or inventive. Language is intrinsically generative and therefore infinite. If we follow up on Saussy’s suggestion, though, this is in fact only the case for written languages. Languages in a condition of orality are constituted by a finite number of “formulas,” or “commonplaces,” or “clichés,” or “chunks,” that are not infinitely recombinable.
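The contrast can be made concrete with a toy sketch: a finite repertoire of formulas is exhausted given enough time (the ergodic case), while even a tiny combinatorial grammar produces a set of outputs that grows without bound (the generative case). The formulas and the grammar here are toy assumptions chosen only to exhibit the distinction.

```python
import itertools
import random

# Ergodic case: a finite stock of formulas. Given enough "speech events,"
# every element of the system gets used.
formulas = ["rosy-fingered dawn", "wine-dark sea", "swift-footed Achilles"]

random.seed(0)
heard = set()
for _ in range(1000):          # given sufficient time...
    heard.add(random.choice(formulas))
print(heard == set(formulas))  # ...the repertoire is fully covered: True

# Generative case: combinatorial rules. The output set grows exponentially
# with the allowed phrase length and is never exhausted in principle.
adjectives = ["swift", "dark", "rosy"]
nouns = ["sea", "dawn", "hero"]

def phrases(max_len):
    for n in range(1, max_len + 1):
        for combo in itertools.product(adjectives, repeat=n):
            for noun in nouns:
                yield " ".join(combo) + " " + noun

print(sum(1 for _ in phrases(3)))  # 117 phrases already at length 3
```

The point of the sketch is not linguistic realism but the structural difference: the first system can be “represented statistically by a reasonably large selection of points,” as the dictionary definition has it, while the second cannot.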

This new way of framing the question could raise a whole series of questions. One could say that language was always “potentially” infinite, and so modern linguistics would still be essentially right—and there must be some sense in which this is true. One could say that it is the specifically metalinguistic concepts introduced in order to institutionalize writing (and writing was institutionalized from the beginning), like the “definition” of words, and, especially, grammatical “rules,” that introduced the infinitization of language. One might even want to argue that, perhaps, we are wrong in thinking languages even in their literate form are inexhaustible—after all, how could we really know? What I will do is follow up on some hypotheses I’ve taken over from thinkers of orality/literacy like David Olson and Marcel Jousse and explore the relation between the emergence of literacy and Axial Age moral innovations.

Remember that for Olson the entry point into the oral/literate distinction is the problem of reported speech—telling someone what someone else said. Under oral conditions, the tag “X said” would be used (which reminds us that “say” is one of Wierzbicka’s primes), but the reporting of speech would be performed mimetically—the one reporting the speech not only wouldn’t paraphrase or summarize, but would say the exact same thing in the exact same way. That’s the presumption, at least, even if an outside observer might notice discrepancies. What is said is shared by the two speakers, and this presumption is strengthened by the ergodic nature of language under orality, which means that no one can say anything that hasn’t already been said, and won’t be said again. Individual speakers are conduits of a language that flows through them, and that they are “within”—and the language of ritual and myth would, further, be the model and resource for everyday speech, as everyone inhabits traditionally approved roles. Everyone is a _________, with the blank filled in by some figure of tradition.

When writing, you can’t imitate the way someone said something, so everything apart from the actual words needs to be represented lexically. This leads to the metalanguage of literacy, involving the vast expansion of words representing variations on, first of all, “say,” and “think.” You can’t reproduce the excited manner in which someone said something, so you say “he exclaimed.” This is, of course, an interpretation of how it was said, and so, one could say, was the imitation, but this difference in register makes it harder to check the interpretation against the original—it would be easier for a community to tell whether you provide a plausible likeness of some other member than to sort out whether he indeed “exclaimed”—rather than, for example, simply “stating.” Proficiency in the metalanguage provides authority—you own what the other has said—which is why an exact replication of the original words would become less important.

What is happening here is that while a difference is opening up between the original speaker and the one reporting the speech, differences are also opening up between the reporter and the audience and, eventually, within the speaker himself. This is the creation of “psychological depth.” Did he “exclaim” or “state”? Or, for that matter, “shriek”? That would depend on the context, which could itself be constructed in various ways, and never exhaustively. The very range of possible descriptions opened up by the metalanguage of literacy generates disagreements—defenders of the original speaker would “insist” he simply firmly “stated,” while his “critics” would “counter” that he, in fact, was losing it. It then becomes possible to ask oneself whether one wants to be seen as stating or exclaiming, to examine the “markers” of each way of “saying,” and to put effort into being seen as a “stater” rather than as “exclamatory.” Which then opens up further distinctions, between how one appears, even to oneself, and what one “really” is. On the surface I’m stating, clearly and calmly, but am I exclaiming “deep down”? (Of course, the respective values of “exclaiming” and “stating” can be arranged in other ways—what matters is that the metalanguage of literacy necessarily implies judgments regarding the discrepancy between what someone says and what they “really mean,” whether or not they are aware of that “real meaning.”)

Oral accounts involve people doing and saying things; the oral accounts preserved most tenaciously are those in which what people do and say place the center in some kind of crisis, a crisis that is then resolved. Such narratives will remain fairly close to what can be performed in a ritual, and thereby re-enacted by the community. Writing is neither cause nor effect of a distancing of the community from a shared ritual center, but it broadly coincides with it. Writing begins as record-keeping, which right away presupposes transactions not directly mediated by a common sacrifice. Record-keeping implies both hierarchy—a king separated from his subject by bureaucratic layers—and “mercurial” agents, merchants, who move across different communities, sharing a ritual order with none of them. The earliest form of literacy is manuscript culture, where a written text serves to aid the memory in oral performances. The very fact that such an aid is necessary and possible, though, means we have moved some distance from the earliest “bardic” culture.

Where things get interesting is where the manuscripts start to proliferate, as they surely will, and differ from each other. Members of an oral culture might enforce certain kinds of conformity very strictly, but could hardly keep track of “deviations” from an original text, especially since such a text doesn’t exist. Diverse written accounts would make divergences unavoidable and consequential, because the very fact that a text was found worthy of committing to the permanence of writing (an expensive and time-consuming process) would add a sacred aura to it. As we move into a later form of manuscript culture, in which commentaries, oral but also sometimes written as well, are added to the texts, these differences would have to be reconciled—generating, in turn, more commentary. This is an early version of what Marcel Jousse called “transfer translations,” i.e., translations into the vernacular of a sacred text preserved in an archaic language—according to Jousse, the inevitable discrepancies between the translation and original, due to the differing formulas in each, respectively, generate commentary aimed at reconciling them.

Reconciling such discrepancies could involve nothing more than “smoothing out” while keeping the narrative and moral lessons essentially intact. There will be times, though, when the very need to address discrepancies allows for, and even attracts, complicating elements. Let’s say the prototypical oral, mythical narrative involves some agent transgressing against or seeking to usurp the center in a way that disrupts the community and then being punished (by the center or the community) in a way that restores the community. If there’s no longer a shared ritual space, such narratives are less likely to be so unequivocal. To transgress against the center is now to transgress against a human occupant of the center. It is possible to refer to a discrepancy between that occupant and the permanent, or signifying center. There can be a discrepancy between human and divine “accounting” or “bookkeeping,” in which sins and virtues, crime and punishment, must be balanced. The discrepancies between “accounts” will attract commentaries exploring this discrepancy. The injustice suffered, the travails undergone, perhaps the triumphs, real or compensatory, experienced by the figure of such a discrepancy will come to be incorporated into a text that is, we might say, “always already” commented upon—that is, such a more complex story will include, while keeping implicit, the accretion of meanings to the “original” narrative. This is what gets us out of the ergodic, and into the vertiginous world of essences (new centers) revealing themselves behind appearances, as well as historical narratives modeled on such ambivalent relations to the center.

Once such a text, or mode of textuality, is at the center of the community, we are on the way to a more complete form of literacy, in which the metalanguage of literacy overlays and incorporates originally oral discourses. Literacy is crucially involved in the shift in the heroic narrative from the “Promethean” (and doomed) struggle against the center to the victim who exemplifies what we can now see as the unholy, even Satanic, violence of the imperial center. This means that the figure of the “exemplary victim,” that is, the victim of violence by the occupant of the center, a violence that transgresses the imperative of the signifying center, is simply intrinsic to advanced literacy. Our social activity is therefore a form of writing the exemplary victim. Liberal culture has its own way of doing so—the exemplary victim is the victim of some form of “tyranny” and demonstrates the need for a super-sovereign-approved form of rule that bypasses or eliminates that tyranny. It’s almost impossible to speak in terms other than “resisting” some “illegitimate” power in the name of someone’s “rights” (as defined by the disciplines—law, philosophy, sociology, psychiatry, etc.).

If “postliberalism,” or what we could call “verticism,” is genuinely “reactionary,” I would say it is in redirecting attention from the exemplary victim back to the occupant of the center, highlighting that occupant’s inheritance of sacral kingship and therefore vulnerability to scapegoating and sacrifice. The exemplary victim could emerge in the space opened by the ancient empires, where the ruler was too distant from the social order to be sacrificed, but post-Roman European kings never definitively achieved this distance, and liberalism is predicated upon putting the center directly at stake, predicating the center’s invulnerability so as to exacerbate its vulnerability. All scapegoating attributes some hidden power to the victim, which is to say, places the victim at the center; all scapegoating of figures at the margin, then, is a result and measure of unsecure power at the center; so, refusal to participate in scapegoating, or violent centralization, is really bound up with the imperative to secure the center. This means treating the victim as a sign of derogation of central authority, rather than levying the victim against that authority. So, it’s not that we can ignore the exemplary victim; rather, we must “unwrite” the exemplary victim. This may be the hardest thing to do—to renounce martyrdom, to acknowledge victims but deny their exemplarity in order to “read” them as markers of the center’s incoherence—while representing that incoherence in order to remedy it. The very fact that we are drawn to one victim rather than another—this “racist” who has been canceled, that website that has been de-platformed or de-monetized—itself tends to make that victim “exemplary,” and we do have to pay attention. Nor do we want to “victim-blame” (if only they had been more careful, etc.), even if discussions of tactics and strategy are necessary.

Insofar as we inherit the European form of the Axial Age moral acquisition, we can’t help but see through the frame of the exemplary victim—even a Nietzschean perspective which purports to repudiate victimary framings and claim an unmediated agency is the adoption of a position shaped by Romantic claims to subjective centrality and therefore sacrificiability (Nietzsche’s own “tragic” end reinforces this). The exemplary victim is constitutive of our language and narratives, which is why it needs to be “unwritten.” The whole range of exemplary victims produced across the political spectrum constitutes our “alphabet” (or, perhaps, “meme factory”). The most direct way to unwrite might be to follow up on the observation that the function of the disciplinary deployments of the exemplary victim is to plug executive power into the disciplines, which then can turn the switch on and off. But these detourings of centered ordinality nevertheless anticipate some use of the executive—those most deeply invested in declarative cultures like the law want the executive to crack down on their enemies as much as anyone else. So, it’s always possible to cut to the chase and propose, and where possible embody, that use of executive power which would most comprehensively make future instances of that form of victimage as unlikely as possible. One proposes, that is, some increased coherence in the imperatives coming from the center (and, by implication, in the cultivation of those dispositions necessary to sustain that coherence). If we did X, this victim over whom we are agonizing would be irrelevant—we could forget all about him. One result would be the revelation of how dependent liberal culture is upon its martyrs—so much so that they’d rather preserve their enshrinement than solve the supposed problem and thereby write them off.
In the meantime, we’d be embarking upon a rewriting of moral inheritances that would erase the liberal laundering of scapegoating through the disciplines once and for all.

June 19, 2020

The Imperative of the Occupant of the Center

Filed under: GA — adam @ 6:55 pm

To my knowledge, no one has ever placed the transition of power at the center of political theory—neither as an explanatory principle distinguishing regime forms from each other, nor in normative terms, as a way of accounting for what makes a form of government good or just. Propagandists for democracy like to talk about the “peaceful transfer of power,” but generally in the context of fearing it might not take place—never as a defining feature of the regime itself. Such propagandists are savvy enough to know it isn’t a particularly strong selling point—indeed, defenders of democracy know better than to claim their favored form of regime even provides for the best governance; they know better than to direct inquiry in that direction. But even monarchy hasn’t approached the question in this way (at least as far as I know)—maybe because there is no single monarchical method of transitioning from one occupant of the center to the next. Primogeniture is, I suppose, the most common monarchical method of succession, and one can see how it would minimize conflicts over succession, but the weaknesses of this approach are obvious, and history is full of its consequences—kings without sons, or with idiot or wicked sons, open up the power struggle the system was designed to prevent, without any clear way of closing it up again (once the chain is broken, there can always be questions about the legitimacy of the monarchy). So, maybe no one has wanted to center political thinking on the question of succession because no one has ever felt confidence in any answer. But it really is the best way into theorizing governance: any regime that could present its form of succession as representing a form of continuity that could be traced back with as little question as possible to the origin of the social order itself would surely be the best possible regime. This is a very economical approach.

Anthropomorphics presents a solution: the present occupant of the center chooses his successor. This follows from the rejection of any form of imperium in imperio, or “super-sovereignty”: if there is some rule of succession independent of the ruler, then the interpreter of that rule is sovereign. And, of course, the ruler could choose his son, or a family member—and that would sometimes be the best choice. But sometimes it wouldn’t be, and we can therefore derive a rule for selecting a successor: whoever is going to succeed as ruler must have the character to set aside his personal and familial interests for the sake of the country. This is not a rule that could be imposed on the present occupant of the center (it couldn’t even be formulated coherently enough for that), but one that would be part of the education of the ruler, instituted by the first ruler to choose a successor outside of his family, if not earlier. Anthropomorphics lays out a series of such “rules,” again, understood as optimal cultural and pedagogical conditions sure to be discovered from the first principle of selection of a successor. Here, I’d like to hypothesize regarding the necessary character of a ruler under the kind of post-sacral, post-liberal conditions we have to imagine to conduct our political thinking, and draw the implications of that for our political thinking.

Let’s continue with the selection, education and sequestering of the successor by the current occupant of the center and draw out the implications for actual occupancy from that. The question of succession being central, the entire social order would be oriented towards the process. Competitive academies for training the next generation of governing elites would solicit applications from across the country, giving each community a stake in seeing its native sons and daughters “fast-tracked” to those academies. At a certain level, a small number of students are put on the rulership track, to undergo more specialized training in occupying the center. In being selected for this track, the participants forego other ambitions, for the sake of a much grander ambition which, however, the odds are against them ever fulfilling. The highest-level candidates—say, a couple of dozen, from among whom the current governor would always select one—cannot exercise power themselves. They cannot be permitted either to become associated with a particular location or institution, or to build a separate power base. They would live their lives publicly, as the succession game would be fascinating to follow, as the current governor could change his mind regarding his successor, and so the prospective successors would have a kind of celebrity, like a royal family, but would have to comport themselves so as to use that celebrity to model lives of pure service. This would be a continual test, and a candidate who tries to become a “star” would be immediately and permanently removed from consideration. While not exercising any direct power, the candidates would “shadow” the ruler, learning the ins and outs of governing, making “sample” decisions, allowing the governor to study their abilities.
The candidates would live separately, and rarely if ever see each other or interact; and I think it would have to be considered a gross breach of protocol for them to refer to each other, especially in the presence of the governor. Those candidates who are not chosen to succeed may be kept in the pool by the new governor when the time comes, or they might be removed and sent back to ordinary life, without any prejudice, of course, but having squandered at least some of their prime years that could have been spent on building some other career.

So, we would have rulers with a strong sense of discretion and modesty, a capacity for solitariness, a sense of having been chosen, to a great extent due to their own merit but, at the same time, with a sense of having given over their lives to their country with the possibility of a “reward” that is at least to some extent arbitrary, or at least unknown—it would be impossible to know completely why the ruler decided to place the bet of the country on you, specifically. Each ruler would be aware of being undergirded by powerful institutional and cultural supports which pave the way for clear rule from the center, but without having the support of a powerful family or institutional clique to lean back on, or operate informally through. The success of his rule will depend very largely upon his ability to promote, directly and indirectly, the smoothly functioning practices of the major social institutions. He would have a family, and, as I suggested earlier, might very well build what might become a dynasty (we could imagine a strong presumption that a child of his would have to go through the normal process, but this would be within his prerogative)—anti-monarchical prejudices would be ridiculous under such conditions—but it would be very difficult under advanced technological conditions to use the office to acquire the kind of wealth and institutional power that could guarantee its permanency—only a sequence of good rulers could do so. In that case, the normal process could be retained as a back-up, which would surely be needed at some point—the demands of social command would be rigorous, and eventually there would be either no heir, or one whom the ruler would have to concede is not up to the job. But the responsibility that comes with knowing that, even if it is your own son, you have chosen your successor, would temper any temptation to do more than bend the established protocol.

For social theory, we can use the following means of regulation, or “quality control,” what we might call anthropomorphics’ six imperatives from the center. First, power and responsibility are to be matched as closely as possible—it’s immoral for someone to have power without uses of that power flowing back to communal goods, or for someone to be given the responsibility to perform some task without being provided the means to do it. Second, “from each according to his abilities, to each according to his needs,” as long as we keep in mind the needs of the able, which might be considerable if they are to give in accord with their abilities. Third, while all scapegoating, or violent centralizing, obfuscates and produces regrettable actions, the most dangerous violent centralizing, the type to which all others tend, is that of the occupant of the social center: the usurpatory motives we might attribute to the occupant of the center, motives which serve as an anchor giving pattern to facts and events, are to be converted into imperatives from the center that we make as consistent as possible. Fourth, we are to continually work on articulating the traces of previous scenes into the elements of practices, as argued for in my previous post. Fifth, the mimetic dimension of practices, our reliance on models and previous practices, is to be made more explicit as an ongoing socially bonding pedagogical order. And, sixth, the social order is to be seen as a project, with “society” treated as a team of teams directed toward that project—entering any institution is joining a team, and therefore learning its rules, taking up established (or creating new) roles, and respecting the “captain” and associated hierarchies.

All of these imperatives overlap with each other and none of them provide the basis for any kind of super-sovereignty because they are all immanent to an existing order and paradoxical. There’s no external point from which needs and abilities can be articulated—any attempt to do so would be employing some theoretical or managerial ability which would already be relying upon certain needs being met. Similarly, power and responsibility can only be matched in relation to some ongoing exercise of power or claim of responsibility—again, to try and stand outside and “measure” power and responsibility would itself be an attempt to take responsibility on the basis of some actual or aspired-to power. Violent centralizing is always very precise and context-specific and can only be detected on the spot, in its emergence, by someone positioned so as to either accelerate or decelerate the process. Even a social project is more something that is pointed to, abstracted from, and turned into a model for transforming, an existing hierarchy of practices. All these imperatives provide entry points into extant practices, which are entered so as to make them more thoroughly and coherently practices.

A good ruler promotes, enforces, exemplifies and obeys these imperatives. The best way to examine how this will shape his character would be to start with number three. The ruler is aware that all resentments can ultimately be channeled his way, especially once the democratic alibi of pretending that his decisions and authority are not really his own is rejected. The ruler is above all a specialist in formulating and issuing commands—this is his discipline, his practice, his pedagogy. There is always an “imperative gap” between the command issued and the command obeyed—no order can be obeyed without at least some degree of discretion being exercised. The practice of commanding is both to minimize this gap and to fill it with preceding exemplars, previous decisions, and previous exercises of discretion which can be translated for current purposes, along with an entire sensory and investigatory apparatus to follow up on and therefore inform obedience to the imperative. Every command issued by the occupant of the center refers back to the mode of occupation intrinsic to that command, while simultaneously grounding that occupation in all the positions, subsidiary centers, occupied throughout the social order. The decision is represented as both as minimal and as consequential as possible: in an enormously complex and intricate order, one tiny “switch” is turned; that one tiny switch is chosen precisely where the choice between bifurcating paths would make the most difference. The command has an economy to it: no more and no less is said than necessary; commands are issued only to those who need to obey them; and this economy models the way further commands for implementing the prime one are to be issued. The ruler both disappears into his commands and stands outside of them. 
Any complaint directed to the occupant of the center becomes a question—an extension of the command which one delays obeying by complaining—regarding the economy with which one has situated oneself at a bifurcation. The character of the good ruler is one that can always say, I’m doing at my point at a particular bifurcation nothing more and nothing less than what I’m asking you to do at yours.

June 11, 2020

Recirculating the Center

Filed under: GA — adam @ 5:08 am

The ether is replaced by the constancy of the speed of light; phlogiston is replaced by oxygen; and, of course, geocentrism is replaced by heliocentrism. In each case, critical experimental results effected the scientific revolution, but what I’m interested in here is how the logic of scientific revolution can be applied to the revolution in the human sciences I take the originary hypothesis to initiate—a scientific revolution that is qualitatively different because the scientist is part of the phenomenon under study, and must study that phenomenon by acting within and therefore changing it. Scientific revolution is not only a valid, but an essential model here, because what both levels of inquiry have in common is what Gaston Bachelard called “epistemological obstacles,” which is to say, concepts grounding a process of inquiry that are themselves ungrounded in anything other than inherited institutional and what we could call “mythical” imperatives. The theological and therefore moral implications of the displacement of geocentrism by heliocentrism are well known, as is the “trauma” of Darwin’s hypothesis regarding the origin of species. I don’t know of any equivalent investments in phlogiston and the ether, but there were certainly intellectual and perhaps aesthetic investments—such concepts presumably provided a kind of apparent coherence that would have been lacking otherwise. Meanwhile, moralized resentments against the decentering of the conscious, self-centered human subject brought about by modern theorists like Marx, Nietzsche and Freud were also for quite a while grist for highbrow ruminations. The continuity between the natural and human sciences, then, is that the replacement of one disciplinary center by another requires the reordering of an entire constellation organized around that center, and such an event is always consequential.

As in my previous post, I want to bring the model of scientific revolution, or center replacement, from the level of the one or two in a lifetime event to our day to day thinking or “signifying” (or “sampling”). In a way, the problem gets much more interesting on this level. Once astronomy rejects geocentrism, or chemistry phlogiston, those paradigms are gone because inquiry now proceeds on the transformed terrain; but everyday discourse throws up new epistemological obstacles regularly, because ongoing events always need to be thought through on terms that can’t be completely given in advance. There are always assumptions in place that make it possible to see some things and impossible to see others. Moreover, in human affairs, not everything can be made explicit—indeed, with everything we do make explicit, more implicit assumptions are generated. There is always what Hannah Arendt called a “necessary appearance.” (Her example was that, however up to date my cosmology, the sun still looks like it is rising in the morning.) On the originary scene, it “appears” that the central object is holding the assembled in place. The same is true every time we attend to something—I’m already looking at something or thinking about something before I can ask why I’m doing so. I’m always being “held” in some way before “reflection” kicks in and, in fact, reflection tightens the grip of whatever holds me because my reflections find it to be necessary, or motivated, or rooted in something “deeper” that holds me, or an entry point into some network that encloses me, or a malevolent spirit that must be combatted, etc.

The structure of a scientific experiment is similar to that of a sacred ritual insofar as in both cases we have a closed space from which external effects are excluded, and we have a precisely organized practice aimed at generating an event with a specific range of expected effects, as a result of which something will be revealed. “Scientific” thinking, in the sense of a practice organized so as to produce a revelatory event, was obviously “applied” to the human community well before it was applied to things. In that case, all human practices must have this structure—we are always assembling our body as a system of signs, conjoined with the mediatory and technological signs across which our attention and its effects are distributed, in order to reveal something: this something will always be some center, which will tell us what we need to do to be “held” by it. When a practice fails, which is to say that the center does not extend us an answer we can “process,” we draw upon our relation to the center as a model for a narrative that will re-position us in relation to the center. We can then translate that narrative into new practices, aimed at revelation. Of course, this process, taken on its own, is just as likely to lead to further obfuscation as clarification. And that’s really the question—how do we distinguish one from the other, and generate practices, narratives and translations that allow us to make this distinction regularly and in a controlled manner? Without the controlled scientific space, we must ourselves be both subjects and objects of virtual experiments that never leave the realm of the hypothetical. So, what makes for a “good,” or “generative,” hypothesis in the human realm?

It’s one that makes the practice generating it more of a practice. The simplest way to think about a practice is that as a result of some performance, something comes into existence that wouldn’t have come into existence without that performance, and this emergence produces a new scene onto which a performer of practice could enter and perform anew. Games provide good examples of this kind of thing—a good move in chess sets up a subsequent move, etc.—but we could think in terms of asking someone a question. A good question is one that elicits a statement that wouldn’t have been made without that question, and that will now enable a new question that itself wouldn’t be possible without the previous question-answer sequence—that allows the questioner to continue as questioner in an unanticipated way that the previous sequence nevertheless prepared him for. So, you could think in terms of continually becoming a better questioner, or interviewer, as a practice. As this happens, you will discover that both you as the questioner, and the one being questioned, however important or interesting, recede into the background of the event of questioning itself. The more you focus on specific things you yourself would want to know, or imagine a reader or hearer would want to know, the less perfect your practice; the same with a focus on the interviewee as the center—you and the interviewee are nothing but the preconditions of this particular practice of questioning. Let’s say you have to keep the focus on the interviewee, and the specific things people want to hear from him, because those conditions are what made the questioning possible in the first place—in that case, those would have to become further preconditions of a more constrained but still potentially excellent practice of questioning. (Of course, the constraints could become such as to make anything approaching a genuine practice impossible, in which case one might be ethically obliged to decline the assignment.)

What we see here is an act of decentering and then recentering: from the interviewer or interviewee being the center, which in a sense is the natural situation in a conversation, the process of questioning itself becomes the center, which the individuals involved merely serve. With one of the individuals as the center, the oscillations of desire and resentment generate the scene—the interviewer humbly defers to the great man, but also hopes to catch him out in some remark that will diminish him, so he projects onto the great man the intentions and qualities corresponding to his own imperfect practice—the great man is arrogant, or insincere, or indeed great beyond all comprehension, etc.—all the narratives of a failed practice. The perfection of the practice purges such narratives and translations—insofar as both are being constructed and constituted in this space, through this event, as figures or subjects of this singular line of questioning, all those projections are dispersed. If you think about, or come to narrate, your life as a sequence of practices, and your life as a whole as a practice of practices, within a social order in which those practices are situated and which is continually reconstituted by and as those practices, then the problem of the continual replacement of the center comes into focus.

The mythical narrative interferes with the perfection of practice. It keeps in place a failed practice. This happens because a failed practice at one point must have been successful, or at least seemed more likely to be successful than alternatives. It relies on a narrative whose exhaustion has not been acknowledged, and a relation to some center that seems to have no alternative other than “chaos.” The only way out of a mythical narrative and a center that can no longer keep its “satellites” in “orbit” is to continue in the path of perfection of that practice. First, though, you need to understand that what you’re doing is a practice, even if only the decaying remains of one. This means directing your attention to whatever you are doing that you are not incorporating into some practice. When faced with some problem, or encounter, or confrontation, there is probably something in your engagement that you can’t situate within a practice—something that indicates the remains of some gesture that, you imagine, once “worked.” There might be many such things; perhaps there’s nothing you can see in what you do that is the product of a practice. What you are noticing are many at least partially failed practices, and the corresponding narratives and translations of narratives into new practices will to that extent deserve to be called “mythical.” There is some event with a center that you are faithful to but, rather than constructing a practice that allows for continual recenterings of the center of that event, you resist anything that interferes with attempts at reconstituting the entire scene that seems inseparable from the event. The mythical narrative and its practical translations are essentially cargo-culting.

Even more: whatever in your own doings and thinking you can’t represent as a practice is by virtue of your inability a part of others’ practices. If you’re thinking of yourself as an individual, with a conscience and consciousness, with character traits, a personality, beliefs, likes and dislikes, and so on, without being able to represent all of this within your practice of your life as a practice of practices, then there can’t be any doubt that all of these things are the results of practices of education, public relations, propaganda, entertainment, the social sciences, and so on that others have constructed for you. The perfection of practices always involves inhabiting all these practices produced for you, decentering the desire for recognition, the fear of public rejection, the immersion in thoughtless narratives and all the other centers created by those disseminated practices which provide prepared scripts for the repetition of familiar revelations—and recentering the composition of practices shared by others that treat the practices circulating through as practices rather than pre-given scenes. The good hypothesis, then, is the one that proposes a possible structure as a practice for some experiential given that has been revealed as an indication of a failed practice. Say you feel impotent rage at some failure or humiliation, or betrayal at what has turned out to be misplaced trust. Bound up in these feelings is a narrative involving characters with certain rights, possibilities and responsibilities, and somewhere in that you placed yourself on a scene just because it conformed to a model of experience of some other scene. There’s something in there that hasn’t been constructed as a practice, some form of mediation between you and others that just seemed inherent in the scene. 
That experience indicative of a failed practice and pointing to the need to incorporate hitherto unnoticed practices into your own is the moral equivalent of the scientific “anomaly” that calls for a new “paradigm”—a paradigm in which others would be invited to co-construct practices with you, rather than reinforce a relation of “co-dependency.”

I approached, in this post, a question very similar to the one I approached in a very different way in the previous post. They’re in different languages, you might say, and we should all be multilingual. I think they are completely translatable into each other without loss, but I’ll think about it. If a practice is fundamentally making oneself over as a “sample,” then I think the crossover becomes easy.

May 31, 2020

Deriving the Sample to its Source

Filed under: GA — adam @ 12:11 pm

When you “signify” in any way, there are two ways of thinking about what you have done: first, you have conveyed or communicated some meaning, or content, in a package, so to speak, to be delivered to some recipient; second, you are modifying the mass of signifying material transmitted to and circulating around you from the totality of language users. The problem with the first way of thinking about it is that whatever content you believe yourself to be transmitting is not something outside of language but is, rather, made up of transmitted and circulating signifying material, which references, then, other “contents,” which are themselves comprised of… leading us to an infinite regress. The problem with the second way of thinking about the process is similar: that great mass of signifying material is signifying material because it is signifying “something,” something that is presumably not reducible to the signifying material itself. Here, again, we are led into infinite regress, as we can only track the various paths taken by signifying chains by referring to their at least to some extent extra-linguistic referents (i.e., “content”).

This antinomy is a metaphysical one, insofar as it presupposes the primacy of declarative culture, where we need to keep providing content for sentences but the content can only be more sentences. The originary hypothesis transforms this antinomy into a generative paradox by positing the ostensive sign as the first sign, so that the sacred object at the center is also the first “content,” but content only made available through the act of signification itself. So, there is indeed some “content” “outside” of any act of signification, but it is a content that is the content of that particular act of signification, under those conditions of signification, within a specific event of signification, which thereby produces that content. Since that act, event, and those conditions must be the performance of positions, rules and possibilities created by the entire history of language and humanity, the creation of that content could just as easily and accurately be described as a modification of signifying material transmitted and circulating—kind of like pulling a switch that directs a chain of signification from one path onto another.

There is “content,” then, because we can use the “same” sign pointing, or providing a kind of map enabling us to point, to the “same” thing. This is really a single problem, because the “same” sign is the same because it is pointing to the “same” object. What makes this possible is what I call a “disciplinary space,” but it would be more precise to say that this is what a disciplinary space is. But we can just as readily use Eric Gans’s terms from The Origin of Language: “linguistic presence,” which is maintained or restored by “lowering the threshold of significance.” The only really satisfying answer to the question, “what do you mean by that?,” is some version of “look at this.” The whole problem then resides in being in the same “place,” “facing” the same “direction,” undistracted by other things one might look at which might obscure “this,” and so on. And this is a problem that can only be solved within some practice, a practice constructed at least in part in order to solve it, here and now. (What “here and now” means is also determined by a disciplinary space: there can be a “here and now” stretching across the earth and the millennia—we can share a disciplinary space with the “recipient” of an ancient divine revelation.) All of our conversations are shaped by some form of the question the novice asks the expert when told to look through some specialized device of observation: “what am I looking at here?”

The implications of the paradigm-specific nature of knowledge have been studied extensively, by Gaston Bachelard, Ludwik Fleck, Thomas Kuhn and others—the Marxist philosopher Louis Althusser has some interesting things to say about Bachelard’s notion of an “epistemological break” separating one paradigm from another—Kuhn’s “scientific revolution.” For a contemporary thinker who goes over this material in an informed and thorough (and accessible) way, I would recommend Hans-Jorg Rheinberger, whom I just came across myself. But my own ambition is to bring “paradigm dependency” into the realm, not only of the human sciences, but that of normal and idiosyncratic signifying activity, which is to say, social interaction. This would also bring the question into the moral and ethical fields: it would be immoral to ignore the “anomalies” that bring an established “paradigm” into “crisis,” because in doing so you would be abetting the crisis. But what this means in, say, a conversation between two people, or a political debate, must be very different from what it means in an established scientific discipline. The trick of a certain kind of progressive is to ignore these differences so as to license themselves to harangue their political enemies with what might at best be slightly more “qualified” claims from some “expert” domain. But if you ask such a progressive for the theory of social interaction and signifying activity informing such bullying, you’re very likely to draw a blank. He won’t be able to tell you what he’s doing, and what could be more important to thinking about what is “good” and what is “bad” than being able to say what you are doing?

The best way of infiltrating all discourse with some translation of paradigm dependency is to articulate all the speech forms identified in The Origin of Language, and explored in various directions in Anthropomorphics: An Originary Grammar of the Center. Any speech act, in any medium, articulates the ostensive, imperative, interrogative and declarative levels of discourse. We can borrow from Benjamin Bratton’s The Stack, and, as I have been doing already without remarking upon it, refer to the “grammatical stack”—those levels of discourse are articulated in what we could call a particular “slice” of the stack (one might say a “cross-section”) in any utterance. The “meaning” of an utterance (and, like “here and now,” what we mean by “utterance” is determined within a disciplinary space: an epic poem, even an entire tradition, can be treated as an utterance) is the way it slices the stack. And another language user or (risking confusion) “signifier” acknowledges this meaning by slicing the stack in a way that is possible only because of the previous slice. We all can tell the difference between meaningful and meaningless statements. For example, a billionaire insisting on the need for greater “equality” while ordering his sub-minimum wage illegal alien domestic worker to scrub a stain is not really making a “meaningful” statement: “hypocrisy” is the ready at hand word for this kind of meaninglessness. But what does anyone mean when they “call” for greater “equality”—where is the route from that declarative statement to a set of ostensives and imperatives that would lead to a result we could point to together and say, “yes, that’s what ‘greater equality’ looks like”? If you can’t answer that kind of question, what you say is just as meaningless as the virtue-signaling of the most transparent hypocrite. And if this doesn’t strike you as an important problem, your pretensions to being a moral actor are perfunctory, at best.

I propose approaching this by treating signifying acts or utterances as samples. “Sample” might seem like a narrowly scientific term, of dubious application when applied to humans, but the word has a richer history than that—it is really a “spin-off” of “example,” which means it carries the meaning of a “model,” or “match,” and is part of a family of Indo-European words with the root “em,” which means “to take, distribute” (from the online etymological dictionary, of course). So, when we “sample,” we’re passing around parts of the whole, not, in this case, to consume, but to use them to figure out what the whole consists of. Any use of language is a sample of language, and its relation to language as a whole is precisely what is in question. Here I will invoke, as I have done many times, Peirce’s assertion that knowledge involves determining the relation between some proportion between elements in the sample and the proportion between those same elements in the whole. If you could represent the whole you wouldn’t need samples, and there’s no doubt that with language you can never have the whole. So, when I say something, I’m presenting not only a sample of language (and myself as a sample of language users), but a(n intrinsically open) hypothesis regarding the relation between that sample and the whole (the hypothesis being that the study and iteration of my sample will enable you to generate samples that better approximate the whole than would otherwise be the case). This hypothesis is far more often than not implicit, but it’s definitely there insofar as my sample, or part, or slice, is a “response” to others (rather than the feeble “response,” we can say that my sample repairs a break in linguistic presence threatened by a previous sample, using the reparative means provided by that sample). One sample includes, via allusion, impersonation, citation and translation, others, and thereby proposes a better match between sample and whole.
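The Peircean idea invoked here, that knowledge involves relating the proportion among elements in a sample to the proportion among those same elements in the whole, can be given a loose computational gloss. This is a sketch only, under my own assumptions; the function names and toy data are illustrative, not anything from the post:

```python
from collections import Counter

def element_proportions(sample):
    """Proportion of each element within a sequence of tokens."""
    counts = Counter(sample)
    total = len(sample)
    return {element: count / total for element, count in counts.items()}

def mismatch(sample, whole):
    """Total absolute difference between sample and whole proportions.

    0.0 means the sample's internal proportions perfectly match the
    whole's; larger values mean a poorer match between part and whole.
    """
    s, w = element_proportions(sample), element_proportions(whole)
    elements = set(s) | set(w)
    return sum(abs(s.get(e, 0.0) - w.get(e, 0.0)) for e in elements)

# A hypothetical "whole" and two samples drawn from it
whole = ["a"] * 50 + ["b"] * 30 + ["c"] * 20
good_sample = ["a"] * 5 + ["b"] * 3 + ["c"] * 2   # same proportions as the whole
poor_sample = ["a"] * 9 + ["c"] * 1               # skewed proportions

assert mismatch(good_sample, whole) == 0.0
assert mismatch(poor_sample, whole) > mismatch(good_sample, whole)
```

The point of the analogy is only that a sample's adequacy is never given in the sample itself: it is a hypothesis about a whole that can never be fully represented, and can only be tested against further samples.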

What makes for a better match is that some “same” sign is now seen to be marked by difference as a consequence of a new same sign (or sample). We could say that the origin of the declarative is iterated: an ostensive is shown to be lacking, or referentless, or distributed among so many referents as to be inoperative; while a new ostensive realigns the field. This can be seen as a scientific practice—multiplying anomalies until the new paradigm can be constructed—but it can also be seen as a moral and ethical practice of reparation, and an aesthetic practice of framing. The more we move away from established scientific disciplines and toward “everyday life” or, more precisely, more open-ended scenes, the more the latter aspects of the practice become the decisive ones. The “anomaly,” in moral and aesthetic terms, is the break in linguistic presence. It is a breach one steps into. Your sample has to be a sample of the missing layer of the stack presented by the other’s sample. This is the kind of practice I have discussed many times before: you might take the other’s declarative as an imperative, thereby revealing the contrary or inoperable imperatives implicit in it; one might take oneself to be named in some “meaningless” reference in another’s discourse, and act out that absence; one might repeat another’s declarative in a series of declaratives, each producing a word or phrase in the other’s sentence, thereby laying bare what we are expected to think here. Of course, this need not be antagonistic—one could use these kinds of practices to amplify another’s discourse, to accentuate the fullness of meaning. In fact, one is always doing a bit of both, because even the meaningless discourse must be acknowledged as enabling the breach one can now step into.

I’m always trying to introduce further gradients of differentiation and deferral into these hypothetical renderings of linguistico-moral-aesthetic practices. We can’t get to the point of writing all-purpose pedagogical scripts (but that may be an imperative from the center that can’t be unheard), but we can clarify an imperative and create a vocabulary for naming its “stations.” We keep putting forward samples with a relationship to the whole that is indeterminate and nevertheless more closely matched than another sample to be included within our own. Every sample is distinct—distinctiveness is the relation between the “elements” in the sample and the relation between those same elements within the whole. The sample is the same as itself, as “verified,” “confirmed,” or “acknowledged” by the other samples it generates. (You could say the determination that any sign or sample is the same is a “fiction,” but as opposed to what reality?) Insofar as the sample can be “authenticated,” though, this sample iterates and is therefore the “same” as a whole series or “sprawl” of samples.

So, you can always locate any sample at some point on a continuum where at one end we identify everything that makes the sample the same as lots of other samples, all that reduces it to a “stereotype”: the use of words and phrases in the same way, the reliance on grammatical constructions and rhetorical commonplaces, the deployment of familiar tropes, the reliance on the affordances of the media employed, and so on: this brings into focus the tasks of “media studies.” At the other end of the continuum, meanwhile, we identify everything that distinguishes this sample from any other, including time, place, audience and the various possible modifications of inherited means of expression. The breach is where you accentuate both, or represent an oscillation between the two, showing how accentuating one end of the continuum lands you back at the other end—where the most insistent adherence to fixed models produces the greatest originality. The title of this post has the inappropriate “to” instead of “from” so as to accentuate the simultaneity of discovering and constructing the source of any sample. “Derive to” is a sample of mistakenness, interfering with the linearity implicit in the notion of “derivation.” Maybe a good sample, maybe not. Leaving your sample to simultaneously be an absolute novum and a complete copy is language learning as the definitive moral act—you discover what you “mean” by minimally but systematically differentiating your utterance from others. Anything we would take to be moral, above all refraining from projecting your own mimetic crises onto the background of others so we might see them as following the same imperative as us, follows from the derivation of the sample to its source.
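The continuum described here, between what reduces a sample to a stereotype and what makes it distinctive, can likewise be given a rough quantitative gloss. As an illustrative sketch only (the function and the toy corpus are my own assumptions, not the post's), one might score a sample by the fraction of its distinct words already circulating in a reference corpus:

```python
def continuum_position(sample_text, corpus_text):
    """Fraction of the sample's distinct words already found in the corpus.

    1.0 would be pure "stereotype" (every word repeats circulating
    material); 0.0 would be pure novelty. Real samples sit between
    the two poles of the continuum.
    """
    sample_words = set(sample_text.lower().split())
    corpus_words = set(corpus_text.lower().split())
    if not sample_words:
        return 0.0
    shared = sample_words & corpus_words
    return len(shared) / len(sample_words)

corpus = "the cat sat on the mat"
assert continuum_position("the cat sat", corpus) == 1.0        # all repetition
assert continuum_position("quantum flux", corpus) == 0.0       # all novelty
assert 0.0 < continuum_position("the quantum cat", corpus) < 1.0
```

A crude measure like this captures only the "media studies" end of the analysis, of course; time, place and audience, which the post insists also distinguish every sample, are exactly what such a count leaves out.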
