GABlog: Generative Anthropology in the Public Sphere

June 29, 2020

Toward a Media-Moral Synthesis

Filed under: GA — adam @ 12:27 pm

Haun Saussy, in an excellent book on the relation between orality and literacy (and media history more generally), suggests a way of thinking about orality that reframes the whole question. Rather than trying to define empirically how to sort out what in (or “how much” of) a community is constituted through orality, what we are to count as “writing,” what criteria we are going to have for “literacy,” and so on, he suggests thinking about orality as ergodic in its constitution. Here’s the online dictionary definition of “ergodic”:

relating to or denoting systems or processes with the property that, given sufficient time, they include or impinge on all points in a given space and can be represented statistically by a reasonably large selection of points.

With regard to language, this means a signifying system that is finite: given enough time, all the different “elements” of the system will be used. This view of language runs counter to the assumption shared, I think, by all schools of modern linguistics, which is that language is constituted by a set of combinatorial rules that make unlimited utterances possible—new things can always be said in the language, and always are said, and not necessarily by language users who are particularly creative or inventive. Language is intrinsically generative and therefore infinite. If we follow up on Saussy’s suggestion, though, this is in fact only the case for written languages. Languages in a condition of orality are constituted by a finite number of “formulas,” or “commonplaces,” or “clichés,” or “chunks,” that are not infinitely recombinable.
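To make the contrast concrete, here is a minimal sketch in Python (the inventory, the Homeric-sounding tags, and the function names are all invented for illustration; nothing here is drawn from Saussy): an “ergodic” language is a finite stock of formulas that sufficient use will, in principle, exhaust, while a single recursive rule makes the stock of possible sentences unbounded.

```python
import random

# A toy "ergodic" language: a fixed, finite inventory of formulas.
# Given enough time, every element of the system will have been used.
FORMULAS = [
    "sing, goddess, the wrath",
    "rosy-fingered dawn",
    "the wine-dark sea",
    "swift-footed Achilles",
]

def oral_speech(n_utterances):
    """Sample from the finite inventory: use exhausts the system
    rather than extending it."""
    return [random.choice(FORMULAS) for _ in range(n_utterances)]

def generative_speech(depth):
    """A single recursive rule (clause embedding) licenses unboundedly
    many sentences that no one need ever have said before."""
    sentence = "the bard sang"
    for _ in range(depth):
        sentence = f"someone said that {sentence}"
    return sentence

# Enough sampling covers the whole "ergodic" inventory...
assert set(oral_speech(10_000)) == set(FORMULAS)
# ...while the recursive rule never runs out of new sentences.
print(generative_speech(3))
```

The point of the sketch is only the asymmetry: sampling can cover the whole of the first system, while no amount of use exhausts the second.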

This new way of framing the question could raise a whole series of questions. One could say that language was always “potentially” infinite, and so modern linguistics would still be essentially right—and there must be some sense in which this is true. One could say that it is the specifically metalinguistic concepts introduced in order to institutionalize writing (and writing was institutionalized from the beginning), like the “definition” of words, and, especially, grammatical “rules,” that introduced the infinitization of language. One might even want to argue that, perhaps, we are wrong in thinking languages even in their literate form are inexhaustible—after all, how could we really know? What I will do is follow up on some hypotheses I’ve taken over from thinkers of orality/literacy like David Olson and Marcel Jousse and explore the relation between the emergence of literacy and Axial Age moral innovations.

Remember that for Olson the entry point into the oral/literate distinction is the problem of reported speech—telling someone what someone else said. Under oral conditions, the tag “X said” would be used (which reminds us that “say” is one of Wierzbicka’s primes), but the reporting of speech would be performed mimetically—the one reporting the speech not only wouldn’t paraphrase or summarize, but would say the exact same thing in the exact same way. That’s the presumption, at least, even if an outside observer might notice discrepancies. What is said is shared by the two speakers, and this presumption is strengthened by the ergodic nature of language under orality, which means that no one can say anything that hasn’t already been said, and won’t be said again. Individual speakers are conduits of a language that flows through them, and that they are “within”—and the language of ritual and myth would, further, be the model and resource for everyday speech, as everyone inhabits traditionally approved roles. Everyone is a _________, with the blank filled in by some figure of tradition.

When writing, you can’t imitate the way someone said something, so everything apart from the actual words needs to be represented lexically. This leads to the metalanguage of literacy, involving the vast expansion of words representing variations on, first of all, “say” and “think.” You can’t reproduce the excited manner in which someone said something, so you say “he exclaimed.” This is, of course, an interpretation of how it was said, and so, one could say, was the imitation, but this difference in register makes it harder to check the interpretation against the original—it would be easier for a community to tell whether you provide a plausible likeness of some other member than to sort out whether he indeed “exclaimed” rather than, for example, simply “stated.” Proficiency in the metalanguage provides authority—you own what the other has said—which is why an exact replication of the original words would become less important.

What is happening here is that while a difference is opening up between the original speaker and the one reporting the speech, differences are also opening up between the reporter and the audience and, eventually, within the speaker himself. This is the creation of “psychological depth.” Did he “exclaim” or “state”? Or, for that matter, “shriek”? That would depend on the context, which could itself be constructed in various ways, and never exhaustively. The very range of possible descriptions opened up by the metalanguage of literacy generates disagreements—defenders of the original speaker would “insist” he simply firmly “stated,” while his “critics” would “counter” that he, in fact, was losing it. It then becomes possible to ask oneself whether one wants to be seen as stating or exclaiming, to examine the “markers” of each way of “saying,” and to put effort into being seen as a “stater” rather than as “exclamatory.” Which then opens up further distinctions, between how one appears, even to oneself, and what one “really” is. On the surface I’m stating, clearly and calmly, but am I exclaiming “deep down”? (Of course, the respective values of “exclaiming” and “stating” can be arranged in other ways—what matters is that the metalanguage of literacy necessarily implies judgments regarding the discrepancy between what someone says and what they “really mean,” whether or not they are aware of that “real meaning.”)

Oral accounts involve people doing and saying things; the oral accounts preserved most tenaciously are those in which what people do and say place the center in some kind of crisis, a crisis that is then resolved. Such narratives will remain fairly close to what can be performed in a ritual, and thereby re-enacted by the community. Writing is neither cause nor effect of a distancing of the community from a shared ritual center, but it broadly coincides with it. Writing begins as record-keeping, which right away presupposes transactions not directly mediated by a common sacrifice. Record-keeping implies both hierarchy—a king separated from his subjects by bureaucratic layers—and “mercurial” agents, merchants, who move across different communities, sharing a ritual order with none of them. The earliest form of literacy is manuscript culture, where a written text serves to aid the memory in oral performances. The very fact that such an aid is necessary and possible, though, means we have moved some distance from the earliest “bardic” culture.

Where things get interesting is when manuscripts start to proliferate, as they surely will, and differ from each other. Members of an oral culture might enforce certain kinds of conformity very strictly, but could hardly keep track of “deviations” from an original text, especially since such a text doesn’t exist. Diverse written accounts would make divergences unavoidable and consequential, because the very fact that a text was found worthy of committing to the permanence of writing (an expensive and time-consuming process) would add a sacred aura to it. As we move into a later form of manuscript culture, in which commentaries, oral but sometimes written as well, are added to the texts, these differences would have to be reconciled—generating, in turn, more commentary. This is an early version of what Marcel Jousse called “transfer translations,” i.e., translations into the vernacular of a sacred text preserved in an archaic language—according to Jousse, the inevitable discrepancies between translation and original, due to the differing formulas in each, generate commentary aimed at reconciling them.

Reconciling such discrepancies could involve nothing more than “smoothing out” while keeping the narrative and moral lessons essentially intact. There will be times, though, when the very need to address discrepancies allows for, and even attracts, complicating elements. Let’s say the prototypical oral, mythical narrative involves some agent transgressing against or seeking to usurp the center in a way that disrupts the community and then being punished (by the center or the community) in a way that restores the community. If there’s no longer a shared ritual space, such narratives are less likely to be so unequivocal. To transgress against the center is now to transgress against a human occupant of the center. It is possible to refer to a discrepancy between that occupant and the permanent, or signifying, center. There can be a discrepancy between human and divine “accounting” or “bookkeeping,” in which sins and virtues, crime and punishment, must be balanced. The discrepancies between “accounts” will attract commentaries exploring them. The injustice suffered, the travails undergone, perhaps the triumphs, real or compensatory, experienced by the figure of such a discrepancy will come to be incorporated into a text that is, we might say, “always already” commented upon—that is, such a more complex story will include, while keeping implicit, the accretion of meanings to the “original” narrative. This is what gets us out of the ergodic, and into the vertiginous world of essences (new centers) revealing themselves behind appearances, as well as historical narratives modeled on such ambivalent relations to the center.

Once such a text, or mode of textuality, is at the center of the community, we are on the way to a more complete form of literacy, in which the metalanguage of literacy overlays and incorporates originally oral discourses. Literacy is crucially involved in the shift in the heroic narrative from the “Promethean” (and doomed) struggle against the center to the victim who exemplifies what we can now see as the unholy, even Satanic, violence of the imperial center. This means that the figure of the “exemplary victim,” that is, the victim of violence by the occupant of the center, a violence that transgresses the imperative of the signifying center, is simply intrinsic to advanced literacy. Our social activity is therefore a form of writing the exemplary victim. Liberal culture has its own way of doing so—the exemplary victim is the victim of some form of “tyranny” and demonstrates the need for a super-sovereign-approved form of rule that bypasses or eliminates that tyranny. It’s almost impossible to speak in terms other than “resisting” some “illegitimate” power in the name of someone’s “rights” (as defined by the disciplines—law, philosophy, sociology, psychiatry, etc.).

If “postliberalism,” or what we could call “verticism,” is genuinely “reactionary,” I would say it is in redirecting attention from the exemplary victim back to the occupant of the center, highlighting that occupant’s inheritance of sacral kingship and therefore vulnerability to scapegoating and sacrifice. The exemplary victim could emerge in the space opened by the ancient empires, where the ruler was too distant from the social order to be sacrificed, but post-Roman European kings never definitively achieved this distance, and liberalism is predicated upon putting the center directly at stake, predicating the center’s invulnerability so as to exacerbate its vulnerability. All scapegoating attributes some hidden power to the victim, which is to say, places the victim at the center; all scapegoating of figures at the margin, then, is a result and measure of unsecure power at the center; so, refusal to participate in scapegoating, or violent centralization, is really bound up with the imperative to secure the center. This means treating the victim as a sign of derogation of central authority, rather than levying the victim against that authority. So, it’s not that we can ignore the exemplary victim; rather, we must “unwrite” the exemplary victim. This may be the hardest thing to do—to renounce martyrdom, to acknowledge victims but deny their exemplarity in order to “read” them as markers of the center’s incoherence—while representing that incoherence in order to remedy it. The very fact that we are drawn to one victim rather than another—this “racist” who has been canceled, that website that has been de-platformed or de-monetized—itself tends to make that victim “exemplary,” and we do have to pay attention. Nor do we want to “victim-blame” (if only they had been more careful, etc.), even if discussions of tactics and strategy are necessary.

Insofar as we inherit the European form of the Axial Age moral acquisition, we can’t help but see through the frame of the exemplary victim—even a Nietzschean perspective which purports to repudiate victimary framings and claim an unmediated agency is the adoption of a position shaped by Romantic claims to subjective centrality and therefore sacrificiability (Nietzsche’s own “tragic” end reinforces this). The exemplary victim is constitutive of our language and narratives, which is why it needs to be “unwritten.” The whole range of exemplary victims produced across the political spectrum constitutes our “alphabet” (or, perhaps, “meme factory”). The most direct way to unwrite might be to follow up on the observation that the function of the disciplinary deployments of the exemplary victim is to plug executive power into the disciplines, which can then turn the switch on and off. But these detourings of centered ordinality nevertheless anticipate some use of the executive—those most deeply invested in declarative cultures like the law want the executive to crack down on their enemies as much as anyone else. So, it’s always possible to cut to the chase and propose, and where possible embody, that use of executive power which would most comprehensively make future instances of that form of victimage as unlikely as possible. One proposes, that is, some increased coherence in the imperatives coming from the center (and, by implication, in the cultivation of those dispositions necessary to sustain that coherence). If we did X, this victim over whom we are agonizing would be irrelevant—we could forget all about him. One result would be the revelation of how dependent liberal culture is upon its martyrs—so much so that they’d rather preserve their enshrinement than solve the supposed problem and thereby write them off. In the meantime, we’d be embarking upon a rewriting of moral inheritances that would erase the liberal laundering of scapegoating through the disciplines once and for all.

June 19, 2020

The Imperative of the Occupant of the Center

Filed under: GA — adam @ 6:55 pm

To my knowledge, no one has ever placed the transition of power at the center of political theory—neither as an explanatory principle distinguishing regime forms from each other, nor in normative terms, as a way of accounting for what makes a form of government good or just. Propagandists for democracy like to talk about the “peaceful transfer of power,” but generally in the context of fearing it might not take place—never as a defining feature of the regime itself. Such propagandists are savvy enough to know it isn’t a particularly strong selling point—indeed, defenders of democracy know better than to claim their favored form of regime even provides for the best governance; they know better than to direct inquiry in that direction. But even monarchy hasn’t approached the question in this way (at least as far as I know)—maybe because there is no single monarchical method of transitioning from one occupant of the center to the next. Primogeniture is, I suppose, the most common monarchical method of succession, and one can see how it would minimize conflicts over succession, but the weaknesses of this approach are obvious, and history is full of its consequences—kings without sons, or with idiot or wicked sons, open up the power struggle the system was designed to prevent, without any clear way of closing it up again (once the chain is broken, there can always be questions about the legitimacy of the monarchy). So, maybe no one has wanted to center political thinking on the question of succession because no one has ever felt confidence in any answer. But it really is the best way into theorizing governance: any regime that could present its form of succession as representing a form of continuity that could be traced back with as little question as possible to the origin of the social order itself would surely be the best possible regime. This is a very economical approach.

Anthropomorphics presents a solution: the present occupant of the center chooses his successor. This follows from the rejection of any form of imperium in imperio, or “super-sovereignty”: if there is some rule of succession independent of the ruler, then the interpreter of that rule is sovereign. And, of course, the ruler could choose his son, or a family member—and that would sometimes be the best choice. But sometimes it wouldn’t be, and we can therefore derive a rule for selecting a successor: whoever is going to succeed as ruler must have the character to set aside his personal and familial interests for the sake of the country. This is not a rule that could be imposed on the present occupant of the center (it couldn’t even be formulated coherently enough for that), but one that would be part of the education of the ruler, instituted by the first ruler to choose a successor outside of his family, if not earlier. Anthropomorphics lays out a series of such “rules,” again, understood as optimal cultural and pedagogical conditions to be derived from the first principle of the selection of a successor. Here, I’d like to hypothesize regarding the necessary character of a ruler under the kind of post-sacral, post-liberal conditions we have to imagine in order to conduct our political thinking, and to draw out the implications of that character.

Let’s continue with the selection, education and sequestering of the successor by the current occupant of the center and draw out the implications for actual occupancy from that. The question of succession being central, the entire social order would be oriented towards the process. Competitive academies for training the next generation of governing elites would solicit applications from across the country, giving each community a stake in seeing its native sons and daughters “fast-tracked” to those academies. At a certain level, a small number of students would be put on the rulership track, to undergo more specialized training in occupying the center. In being selected for this track, the participants forego other ambitions, for the sake of a much grander ambition which, however, the odds are against them ever fulfilling. The highest-level candidates—say, a couple of dozen—from whom the current governor would always select one, cannot exercise power themselves. They cannot be permitted either to become associated with a particular location or institution, or to build a separate power base. They would live their lives publicly, as the succession game would be fascinating to follow and the current governor could change his mind regarding his successor; the prospective successors would thus have a kind of celebrity, like a royal family, but would have to comport themselves so as to use that celebrity to model lives of pure service. This would be a continual test, and a candidate who tries to become a “star” would be immediately and permanently removed from consideration. While not exercising any direct power, the candidates would “shadow” the ruler, learning the ins and outs of governing, making “sample” decisions, allowing the governor to study their abilities. The candidates would live separately, and rarely if ever see each other or interact; and I think it would have to be considered a gross breach of protocol for them to refer to each other, especially in the presence of the governor. Those candidates who are not chosen to succeed may be kept in the pool by the new governor when the time comes, or they might be removed and sent back to ordinary life, without any prejudice, of course, but having squandered at least some of their prime years that could have been spent building some other career.

So, we would have rulers with a strong sense of discretion and modesty, a capacity for solitariness, a sense of having been chosen, to a great extent due to their own merit but, at the same time, with a sense of having given over their lives to their country with the possibility of a “reward” that is to some extent arbitrary, or at least unknown—it would be impossible to know completely why the ruler decided to place the bet of the country on you, specifically. Each ruler would be aware of being undergirded by powerful institutional and cultural supports which pave the way for clear rule from the center, but without having the support of a powerful family or institutional clique to lean back on, or operate informally through. The success of his rule will depend very largely upon his ability to promote, directly and indirectly, the smoothly functioning practices of the major social institutions. He would have a family, and, as I suggested earlier, might very well build what could become a dynasty (we could imagine a strong presumption that a child of his would have to go through the normal process, but this would be within his prerogative)—anti-monarchical prejudices would be ridiculous under such conditions—but it would be very difficult under advanced technological conditions to use the office to acquire the kind of wealth and institutional power that could guarantee its permanency—only a sequence of good rulers could do so. In that case, the normal process could be retained as a back-up, which would surely be needed at some point—the demands of social command would be rigorous, and eventually there would be either no heir, or one whom the ruler would have to concede is not up to the job. But the responsibility that comes with knowing that, even if it is your own son, you have chosen your successor, would temper any temptation to do more than bend the established protocol.

For social theory, we can use the following means of regulation, or “quality control”: what we might call anthropomorphics’ six imperatives from the center. First, power and responsibility are to be matched as closely as possible—it’s immoral for someone to have power without uses of that power flowing back to communal goods, or for someone to be given the responsibility to perform some task without being provided the means to do it. Second, “from each according to his abilities, to each according to his needs,” as long as we keep in mind the needs of the able, which might be considerable if they are to give in accord with their abilities. Third, while all scapegoating, or violent centralizing, obfuscates and produces regrettable actions, the most dangerous violent centralizing, the type to which all others tend, is that of the occupant of the social center: the usurpatory motives we might attribute to the occupant of the center, motives which serve as an anchor giving pattern to facts and events, are to be converted into imperatives from the center that we make as consistent as possible. Fourth, we are to continually work on articulating the traces of previous scenes into the elements of practices, as argued for in my previous post. Fifth, the mimetic dimension of practices, our reliance on models and previous practices, is to be made more explicit as an ongoing socially bonding pedagogical order. And, sixth, the social order is to be seen as a project, with “society” treated as a team of teams, directed toward that project—entering any institution is joining a team, and therefore learning its rules, taking up established (or creating new) roles, and respecting the “captain” and associated hierarchies.

All of these imperatives overlap with each other and none of them provides the basis for any kind of super-sovereignty, because they are all immanent to an existing order and paradoxical. There’s no external point from which needs and abilities can be articulated—any attempt to do so would be employing some theoretical or managerial ability which would already be relying upon certain needs being met. Similarly, power and responsibility can only be matched in relation to some ongoing exercise of power or claim of responsibility—again, to try and stand outside and “measure” power and responsibility would itself be an attempt to take responsibility on the basis of some actual or aspired-to power. Violent centralizing is always very precise and context-specific and can only be detected on the spot, in its emergence, by someone positioned so as to either accelerate or decelerate the process. Even a social project is more something that is pointed to, abstracted from, and turned into a model for transforming, an existing hierarchy of practices. All these imperatives provide entry points into extant practices, which are entered so as to make them more thoroughly and coherently practices.

A good ruler promotes, enforces, exemplifies and obeys these imperatives. The best way to examine how this will shape his character would be to start with number three. The ruler is aware that all resentments can ultimately be channeled his way, especially once the democratic alibi of pretending that his decisions and authority are not really his own is rejected. The ruler is above all a specialist in formulating and issuing commands—this is his discipline, his practice, his pedagogy. There is always an “imperative gap” between the command issued and the command obeyed—no order can be obeyed without at least some degree of discretion being exercised. The practice of commanding is both to minimize this gap and to fill it with preceding exemplars, previous decisions, and previous exercises of discretion which can be translated for current purposes, along with an entire sensory and investigatory apparatus to follow up on and therefore inform obedience to the imperative. Every command issued by the occupant of the center refers back to the mode of occupation intrinsic to that command, while simultaneously grounding that occupation in all the positions, subsidiary centers, occupied throughout the social order. The decision is represented as simultaneously as minimal and as consequential as possible: in an enormously complex and intricate order, one tiny “switch” is turned; that one tiny switch is chosen precisely where the choice between bifurcating paths would make the most difference. The command has an economy to it: no more and no less is said than necessary; commands are issued only to those who need to obey them; and this economy models the way further commands for implementing the prime one are to be issued. The ruler both disappears into his commands and stands outside of them. Any complaint directed to the occupant of the center becomes a question—an extension of the command which one delays obeying by complaining—regarding the economy with which one has situated oneself at a bifurcation. The character of the good ruler is one that can always say: I’m doing, at my point at a particular bifurcation, nothing more and nothing less than what I’m asking you to do at yours.

June 11, 2020

Recirculating the Center

Filed under: GA — adam @ 5:08 am

The ether is replaced by the constancy of the speed of light; phlogiston is replaced by oxygen; and, of course, geocentrism is replaced by heliocentrism. In each case, critical experimental results effected the scientific revolution, but what I’m interested in here is how the logic of scientific revolution can be applied to the revolution in the human sciences I take the originary hypothesis to initiate—a scientific revolution that is qualitatively different because the scientist is part of the phenomenon under study, and must study that phenomenon by acting within and therefore changing it. Scientific revolution is not only a valid, but an essential model here, because what both levels of inquiry have in common is what Gaston Bachelard called “epistemological obstacles,” which is to say, concepts grounding a process of inquiry that are themselves ungrounded in anything other than inherited institutional and what we could call “mythical” imperatives. The theological and therefore moral implications of the displacement of geocentrism by heliocentrism are well known, as is the “trauma” of Darwin’s hypothesis regarding the origin of species. I don’t know of any equivalent investments in phlogiston and the ether, but there were certainly intellectual and perhaps aesthetic investments—such concepts presumably provided a kind of apparent coherence that would have been lacking otherwise. Meanwhile, moralized resentments against the decentering of the conscious, self-centered human subject brought about by modern theorists like Marx, Nietzsche and Freud were also for quite a while grist for highbrow ruminations. The continuity between the natural and human sciences, then, is that the replacement of one disciplinary center by another requires the reordering of an entire constellation organized around that center, and such an event is always consequential.

As in my previous post, I want to bring the model of scientific revolution, or center replacement, from the level of the once-or-twice-in-a-lifetime event to our day-to-day thinking or “signifying” (or “sampling”). In a way, the problem gets much more interesting on this level. Once astronomy rejects geocentrism, or chemistry phlogiston, those paradigms are gone, because inquiry now proceeds on the transformed terrain; but everyday discourse throws up new epistemological obstacles regularly, because ongoing events always need to be thought through on terms that can’t be completely given in advance. There are always assumptions in place that make it possible to see some things and impossible to see others. Moreover, in human affairs, not everything can be made explicit—indeed, with everything we do make explicit, more implicit assumptions are generated. There is always what Hannah Arendt called a “necessary appearance.” (Her example was that, however up to date my cosmology, the sun still looks like it is rising in the morning.) On the originary scene, it “appears” that the central object is holding the assembled in place. The same is true every time we attend to something—I’m already looking at something or thinking about something before I can ask why I’m doing so. I’m always being “held” in some way before “reflection” kicks in and, in fact, reflection tightens the grip of whatever holds me, because my reflections find it to be necessary, or motivated, or rooted in something “deeper” that holds me, or an entry point into some network that encloses me, or a malevolent spirit that must be combatted, etc.

The structure of a scientific experiment is similar to that of a sacred ritual insofar as in both cases we have a closed space from which external effects are excluded, and a precisely organized practice aimed at generating an event with a specific range of expected effects, as a result of which something will be revealed. “Scientific” thinking, in the sense of a practice organized so as to produce a revelatory event, was obviously “applied” to the human community well before it was applied to things. In that case, all human practices must have this structure—we are always assembling our body as a system of signs, conjoined with the mediatory and technological signs across which our attention and its effects are distributed, in order to reveal something: this something will always be some center, which will tell us what we need to do to be “held” by it. When a practice fails, which is to say that the center does not extend us an answer we can “process,” we draw upon our relation to the center as a model for a narrative that will re-position us in relation to the center. We can then translate that narrative into new practices, aimed at revelation. Of course, this process, taken on its own, is just as likely to lead to further obfuscation as to clarification. And that’s really the question—how do we distinguish one from the other, and generate practices, narratives and translations that allow us to make this distinction regularly and in a controlled manner? Without the controlled scientific space, we must ourselves be both subjects and objects of virtual experiments that never leave the realm of the hypothetical. So, what makes for a “good,” or “generative,” hypothesis in the human realm?

It’s one that makes the practice generating it more of a practice. The simplest way to think about a practice is that, as a result of some performance, something comes into existence that wouldn’t have come into existence without that performance, and this emergence produces a new scene onto which a performer of the practice could enter and perform anew. Games provide good examples of this kind of thing—a good move in chess sets up a subsequent move, etc.—but we could think in terms of asking someone a question. A good question is one that elicits a statement that wouldn’t have been made without that question, and that will now enable a new question that itself wouldn’t be possible without the previous question-answer sequence—that allows the questioner to continue as questioner in an unanticipated way that the previous sequence nevertheless prepared him for. So, you could think in terms of continually becoming a better questioner, or interviewer, as a practice. As this happens, you will discover that both you as the questioner, and the one being questioned, however important or interesting, recede into the background of the event of questioning itself. The more you focus on specific things you yourself would want to know, or imagine a reader or hearer would want to know, the less perfect your practice; the same with a focus on the interviewee as the center—you and the interviewee are nothing but the preconditions of this particular practice of questioning. Let’s say you have to keep the focus on the interviewee, and the specific things people want to hear from him, because those conditions are what made the questioning possible in the first place—in that case, those would have to become further preconditions of a more constrained but still potentially excellent practice of questioning. (Of course, the constraints could become such as to make anything approaching a genuine practice impossible, in which case one might be ethically obliged to decline the assignment.)

What we see here is an act of decentering and then recentering: from the interviewer or interviewee being the center, which in a sense is the natural situation in a conversation, the process of questioning itself becomes the center, which the individuals involved merely serve. With one of the individuals as the center, the oscillations of desire and resentment generate the scene—the interviewer humbly defers to the great man, but also hopes to catch him out in some remark that will diminish him, so he projects onto the great man the intentions and qualities corresponding to his own imperfect practice—the great man is arrogant, or insincere, or indeed great beyond all comprehension, etc.—all the narratives of a failed practice. The perfection of the practice purges such narratives and translations—insofar as both are being constructed and constituted in this space, through this event, as figures or subjects of this singular line of questioning, all those projections are dispersed. If you think about, or come to narrate, your life as a sequence of practices, and your life as a whole as a practice of practices, within a social order in which those practices are situated and which is continually reconstituted by and as those practices, then the problem of the continual replacement of the center comes into focus.

The mythical narrative interferes with the perfection of practice. It keeps in place a failed practice. This happens because a failed practice at one point must have been successful, or at least seemed more likely to be successful than alternatives. It relies on a narrative whose exhaustion has not been acknowledged, and a relation to some center that seems to have no alternative other than “chaos.” The only way out of a mythical narrative and a center that can no longer keep its “satellites” in “orbit” is to continue in the path of perfection of that practice. First, though, you need to understand that what you’re doing is a practice, even if only the decaying remains of one. This means directing your attention to whatever you are doing that you are not incorporating into some practice. When faced with some problem, or encounter, or confrontation, there is probably something in your engagement that you can’t situate within a practice—something that indicates the remains of some gesture that, you imagine, once “worked.” There might be many such things; perhaps there’s nothing you can see in what you do that is the product of a practice. What you are noticing are many at least partially failed practices, and the corresponding narratives and translations of narratives into new practices will to that extent deserve to be called “mythical.” There is some event with a center that you are faithful to but, rather than constructing a practice that allows for continual recenterings of the center of that event, you resist anything that interferes with attempts at reconstituting the entire scene that seems inseparable from the event. The mythical narrative and its practical translations are essentially cargo-culting.

Even more: whatever in your own doings and thinking you can’t represent as a practice is, by virtue of your inability, a part of others’ practices. If you’re thinking of yourself as an individual, with a conscience and consciousness, with character traits, a personality, beliefs, likes and dislikes, and so on, without being able to represent all of this within your practice of your life as a practice of practices, then there can’t be any doubt that all of these things are the results of practices of education, public relations, propaganda, entertainment, the social sciences, and so on that others have constructed for you. The perfection of practices always involves inhabiting all these practices produced for you, decentering the desire for recognition, the fear of public rejection, the immersion in thoughtless narratives and all the other centers created by those disseminated practices which provide prepared scripts for the repetition of familiar revelations—and recentering the composition of practices shared by others that treat the practices circulating through them as practices rather than as pre-given scenes. The good hypothesis, then, is the one that proposes a possible structure as a practice for some experiential given that has been revealed as an indication of a failed practice. Say you feel impotent rage at some failure or humiliation, or betrayal at what has turned out to be misplaced trust. Bound up in these feelings is a narrative involving characters with certain rights, possibilities and responsibilities, and somewhere in that you placed yourself on a scene just because it conformed to a model of experience of some other scene. There’s something in there that hasn’t been constructed as a practice, some form of mediation between you and others that just seemed inherent in the scene. That experience, indicative of a failed practice and pointing to the need to incorporate hitherto unnoticed practices into your own, is the moral equivalent of the scientific “anomaly” that calls for a new “paradigm”—a paradigm in which others would be invited to co-construct practices with you, rather than reinforce a relation of “co-dependency.”

In this post, I approached a question very similar to the one I approached, in a very different way, in the previous post. They’re in different languages, you might say, and we should all be multilingual. I think they are completely translatable into each other without loss, but I’ll think about it. If a practice is fundamentally making oneself over as a “sample,” then I think the crossover becomes easy.
