GABlog

January 9, 2018

Absolutism, the Axial Age and the Laboratory

Filed under: GA — adam @ 6:30 am

The moral and intellectual innovations of the Axial Age—from Confucianism and Buddhism in the East to philosophy and monotheism in the West—create an interesting dilemma in thinking through the implications of the abolition of imperium in imperio, or divided sovereignty. Under sacral kingship, the centrality of the king involves not just rule but also ritual duties and ensuring the connection between the community and the cosmos. It may be that the occupant of the position wasn’t very secure (there would be many ways one could be found to have failed), but the position itself was. Under god-imperial rule, the occupant of the position becomes far more secure, while the sacral efficacy of the position becomes thinner and molded more precisely to the functions of rule itself—in other words, rationalized. Local and ancestral forms of worship continued to operate more directly on communities. On both levels, though, the sacred is essentially sacrificial: the origin of all benefits is identified, and a commensurate return of some part of those benefits must be made to that origin or its representative. The greater the benefit, the greater the obligation, which means that the sacrificial is always tending towards human sacrifice as its telos. Hence slavery, mass armies and the conscription of large populations for imperial labor projects. As David Graeber has pointed out, these developments coincided with the introduction of coinage and debt that involved the “abstraction” of individuals from the communal and ritual forms in which they were embedded—“abstraction” through enslavement and dispossession.

At the same time, this abstraction and the development of markets on which abstracted individuals can engage in exchange leads to systems of justice: the measurement of acts against promises and obligations. So, we have two interrelated processes: one, the disappearance of situated individuals into anonymous masses; two, the singling out of individual rights and wrongs against a background of precedents and oaths, with judgment carried out by a specialized class of professionals. When “injustice” was done, it would likely appear as if the former process was impinging upon the latter: as if the individual treated unjustly were being sacrificed for some mass, impersonal, mindless purpose. The emergence of exemplary victims of sacrificial injustice would lead to the clarification of this appearance, and its articulation in legal, political and sacral discourses. It would be possible to look for such victims, and see them as implicit indictments turned back against the supposed justice system itself; more articulate victims would come to frame their plight in these terms. Critics of the justice system would come to see themselves as potential victims, and develop moral discourses of anticipatory victimage; they would gather around themselves a following, including many from among disaffected elites; and their victimization (which they would more or less deliberately be courting) would be revelatory. We would have cases in which the exemplary sacrifice would, in fact, be guilty according to the prevailing and perhaps rather sophisticated and indulgent political and legal norms; and, nevertheless, legible in their execution would be the implication of even a healthy justice system in sacrificial practices—remember, the mass sacrifice and the concept of justice have a common origin. 
The subsequent intellectual and moral revolution would play out differently under different conditions, but in all cases a new problem has been created: it is now possible to imagine a law that is “higher” than the law presided over by the monarch, and therefore a sacrality that supersedes that of the God-Emperor.

So, this is the problem that has gone unsolved until this day. Some Christian kingdoms of medieval Europe seemed to come close for a while, but those efforts didn’t last. We can blame competing elites for exploiting the opportunities afforded by the very concept of a “higher law” to introduce a wedge between that higher law and the “earthly” one, but the problem nevertheless remains, unless one believes it possible to dispossess ourselves of the acquisitions of the Axial Age—and no conceivable power center could do that, because so dispossessing itself would make it not only too evil but too stupid to rule. In moral terms, the “axial” involves a prohibition on scapegoating: on reviving and reversing the logic of sacral kingship by imposing responsibility for the evils and ills of the community on some marginal individual or group. The way to realize that prohibition is by building and fortifying institutions that ensure punishment is monopolized by accountable institutions and reserved for offenses that have been named for the harm they do to the community and the higher law. The implication is to confer a kind of sacrality on the individual: to collectively lay hands on an individual is to threaten to introduce uncontrolled violence into the community. This horror is the ancestor of today’s victimary discourses, but even before that of liberalism and democracy, with their elevation of the individual and the common man, regardless of the intellectually confused ways in which this elevation has been asserted. Now, while the implication of axial morality has been to confer a kind of sacrality on the individual (at least in the West—but could that be because it is in the West that axial logics have been vigorously pursued beyond elite circles?), that does not mean it is the only, or only possible, implication.

Originary thinking, or anthropomorphics, helps us out here because it provides us with the hypothesis that the axial is in fact a recovery of the originary scene, in which the newly human community all participated in “addressing” a shared center. Such a recovery was needed amid the massive dislocations, brought about at a high level of civilization, that led to the Axial Age. One way of superimposing the model of the originary scene on imperial civilization is to imagine a single human center: Truth, or God, toward which all can orient themselves and of which all can partake. These are the interrelated paths the West, in pushing axial logics as far as possible, has taken. But within the assumption of the global or universal center there is also the realization that the center can only be discerned within what we could call a “congregation of inquiry.” Christianity started out with small groups testifying to Christ revealing himself to them; philosophy and the ancient sciences likewise started out with small groups of adepts or inquirers who separated themselves from the confining ritual practices of the community. The “universal” radiates outward from such congregations, and can only be preserved by recreating them over and over again.

At some point power at higher levels must support and incorporate these congregations—that is ultimately the only way they could actually be “universalized.” But this also seems to be the starting point of all those conflicts between higher and secular law. The solution must lie in the incorporation of the congregation of inquiry into the very form of sovereignty. In universalism, the individual is imagined as a potential victim of overweening power, and the solution is for that individual to be ever further abstracted so as to be acted upon by an even overweenier power. By contrast, the individual within the corporate congregation is imagined in his service to the sovereign, in exemplifying and further perfecting the sovereign’s identification with the higher law. The corporate congregants permeate the social order, bringing their more specialized inquiry into the originary-within-the-sovereign to bear on other areas of life. The missionary or evangelical goes out among men, preaching the word, living the word, and doing so, as much as possible, within the lives and languages of those amongst whom he moves. The undercover police agent represents the law within the lawless, and must pass as the lawless, while never forgetting his loyalty to the law, lower and higher, and to his law-preserving brethren. In both cases we have the enactment of the tension between the lower and higher, but the undercover agent is the better example for us now because it is impossible for that police officer, as long as he remains honest, to do anything other than serve the sovereign. He cannot rebel against, or resist, the sovereign, other than by becoming a criminal himself, which is not really rebellion or resistance; moreover, he serves as a harmless but potentially powerful corrective to misuses of power within the sovereign order itself, misuses that the sovereign would want to know about.
This is especially the case because we can have undercover agents not only in lawless groupings but in organizations where the lawlessness would be a deviation, but with potentially devastating consequences. The undercover agent within the normal institution, or, each of us acting as if there are undercover agents within the institutions where we congregate, or, even more, as if we might have to take on, maybe even unsolicited, that role, represents the complete assimilation of the axial acquisition to the sovereign order. Disciplinary groupings or social “skunkworkers” permeating and infiltrating all institutions by naming their relation to the sovereign center is the form taken by the retrieval of the originary scene within advanced, civilized social orders.

We can think about this in terms of the apparently very different institution of the laboratory—perhaps the highest and most consequential result of “axialism.” The laboratory constructs a space in which all possible physical interactions are excluded except for the one we want to study. Often this is done hypothetically, by randomizing the selection of subjects for the study, or by introducing probability calculations to eliminate the effects of processes that can’t be physically excluded. In fact, this is the kind of thing you do anytime you are seriously thinking about what the best thing to do is; in other words, moral inquiry involves setting aside one’s own resentments and desires, “controlling” for them. The mode of thought is equally applicable to religious and secular, social and physical sciences—part of the laboratory model is to think “experimentally” about these very differences. If you are thinking experimentally you are retrieving the originary scene and representing it within the actual scene, because you are asking: what act would introduce another degree of deferral into this congregation, and make us more focused on whatever our object is? As long as you are thinking of a bounded scene, freed as much as possible from obscuring interferences, you cannot possibly think of mobilizing a mob or identifying a possible sacrifice. If such practices are being endorsed, wittingly or not, by the sovereign, you can only stand as an example against it—not as a counter-power, because your very centrality in this case depends upon your eschewing any higher order centrality, which could only introduce interference into your scene. Once the higher law is made immanent to, constitutive of, and constrained by sovereign law, all the imperium in imperio problems invented by liberalism disappear.
In assessing institutions and judging actors, we always look to the corporate congregants in those institutions—if we watch and listen to them, we will learn what is going on and what needs to be done.
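The laboratory logic described above, excluding interferences physically where possible and statistically where not, is the logic of the randomized experiment. A minimal simulation can suggest why randomization counts as a form of exclusion; the variable names, effect sizes, and the self-selection story here are invented purely for illustration:

```python
import random
import statistics

def trial(randomize: bool, n: int = 1000) -> float:
    """Estimate a treatment effect with and without random assignment.

    A hidden confounder boosts the outcome; without randomization it also
    drives selection into the treatment group, biasing the estimate.
    The true treatment effect is set to 1.0.
    """
    random.seed(42)  # fixed seed for reproducibility
    treated, control = [], []
    for _ in range(n):
        confounder = random.gauss(0, 1)
        if randomize:
            # randomization "excludes" the confounder from group assignment
            in_treatment = random.random() < 0.5
        else:
            # self-selection: high-confounder subjects seek out treatment
            in_treatment = confounder > 0
        outcome = 1.0 * in_treatment + 2.0 * confounder + random.gauss(0, 1)
        (treated if in_treatment else control).append(outcome)
    return statistics.mean(treated) - statistics.mean(control)

biased = trial(randomize=False)    # confounded estimate, far above 1.0
unbiased = trial(randomize=True)   # close to the true effect of 1.0
```

With self-selection the hidden confounder inflates the apparent effect several-fold; random assignment spreads it evenly across both groups, so approximately only the true effect survives. The "hypothetical exclusion" the post describes is exactly this: the interfering process is still physically present, but its influence on the comparison has been designed away.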

October 13, 2020

Scenic Design Practices

Filed under: GA — adam @ 5:59 am

It seems to me that we’ve gotten to the point, with the emergence of computing as a “metamedium,” where there’s no basis for distinguishing between “media” and “technology.” All media, including the most basic, such as speech, gesture and writing, are implicated in the vast array of recording and monitoring technologies, and are as subject to algorithmic governance or “planetary scale computation” as anything else; while all technology is now dependent on code (itself a form of literacy) and contributes to the spatial arrangement of human activity no less than media such as buildings and cinema. And we can include “infrastructure” in this as well. Drawing, then, upon the understanding of “media” I proposed in Anthropomorphics, as all the means of constituting scenes, and the understanding of “technology” I proposed, that is, as the articulation of desacralized and abstracted human practices, I would synthesize it all as “scenic design practices.” Every practice is designing a scene; or, really, redesigning a scene, or some portion of a scene, with the techno-media (this term to be revised presently) available. This would include all practices, large and small, carried out by the powerless as well as the powerful, obviously with different effects and constraints across the spectrum.

I’m a little, and mostly indirectly, familiar with some of the more significant contemporary theorists of technology, such as Gilbert Simondon, Bernard Stiegler and Friedrich Kittler, each of whom reaches back into other traditions of thought, and I’ll be doing more reading here, but I’ve already got some sense of where anthropomorphics will differ from and offer something unthought by these other approaches, so I’ll lay down some of that thinking here. The first “medium,” of course, was the scene constructed and embodied as the originary scene, part and parcel of the emergence of the sign, which I would see expansively as the posture as well as gesture of an entire human body conforming itself in real time to the postures and gestures of an emergent community of other humans. (One could always draw the boundary between sign and scene/media differently.) The first tool, or implement, meanwhile, would have had to be some ritual-enhancing “device,” that is to say, something that would materialize the memory of the originary scene by drawing or enhancing the boundary between center and periphery. A circle of bones, perhaps, marking the ritual space, eventually something like an altar, and whatever materials might have gone into constructing a permanent likeness of the central figure. Maintaining the means of ritual would involve the development of “skills,” which is to say repeatable movements that can be transmitted to others and perfected, and the creation of increasingly differentiated and specialized “tools,” which would always be bound up with sacred purposes. The real “instructor” here would be the central figure itself, who teaches the community how to create and use the tools in accord with ritual prescriptions. And, of course, myths would be generated explaining how some sacred figure provided the community with these tools and prescriptions.
Since each new element of ritual must have displaced a previous one, it makes sense that such myths are often associated with some kind of transgression, an aura which until modern times has always surrounded knowledge or technical innovation.

This would all hold true for the development of weapons used for hunting and war, all of which would involve the same intimate, imperative relation to the gods and ancestors and the prescriptions and tools for building altars and conducting rituals. I’m assuming that nothing can be outside of the ritual-mythic nexus, a very tightly bound up system of imperative exchange, until the emergence of the ancient empires and their serial destruction of local communities with the mass enslavement of their populations (of course, as Engels observed a long time ago, enslavement was an alternative to extermination, once the ruler became powerful enough to make use of subjugated populations). We now have vast populations excluded from the ritual center, which means that anything can be done to or with them. It is at this point, I’m hypothesizing, that we can start to speak of “technology,” as the direct, i.e., de-sacralized and de-ritualized aggregations of humans who can be combined in an orderly way for concentrated purposes involving the use and transformation of “nature.” As Nitzan and Bichler point out, referring to Mumford in Capital as Power, the very conception of technology is an effect of power: seeing all this “labor power” at your disposal would inflame the imagination.

This imaginative impulse is both constructive and pulverizing. Not only does it afford massive and complex structures, but also a continual grinding up into ever more minute particles. “Analysis” is a result of this. So, we can take any scene and treat it as a media apparatus which is simultaneously one small piece of much larger apparatuses, but can itself be broken down into any number of mini-apparatuses. I use the word “treat” here rather than, say, “view,” because to treat something is to change it in some very specific way, to prepare it for a particular use. Insofar as we treat the scenes we inhabit or infiltrate as an articulation of design practices, we are participating in those practices. We’re designing the scenic features that will more or less eventually go into the production of some utterance, or sample. If you’re rich and powerful, this might entail organizing a studio, hiring specific kinds of directors, getting certain performers under contract, and hiring publicity people to create and maintain a brand, so that you get to the point where your design issues in a movie in which a particular actor stands in a particular way, on a particular scene, and says something, which a designed-for audience will hear and pass on in various ways to secondary and tertiary audiences. Designing algorithmic orders that make it more probable that certain items turn up in a search is an even more obvious example. And this entire configuration can continually be redesigned. If you’re essentially powerless, like most of us, the appropriation of design practices nevertheless gives you a way of thinking about how to spend your time and energy—in various ways, you’re contributing to the construction and maintenance of a range of scenes, and the more precise you get about the samples you’d most like to generate, the more effectively you can think about where to replace certain scenic elements with others so as to bring about resonant attentional shifts.
The powerless can do this because the mega-machines into whose service we have been pressed require the “pieces” to take some initiatives and assume some responsibilities.

The design of assignments for students in pedagogical situations provides as good an example of design practices as anything else I can think of. (I’ll issue my customary broadside against my academic colleagues and point out that, even though this is their main job, in my experience very, very few of them give it any disciplined thought at all. They ask students to do what they imagine themselves doing, thereby favoring the students best able to mimic their own gestures.) The purpose of an assignment is to have the students learn how to do something that, without the assignment, they could only have learned laboriously or even serendipitously, and something they will not only have to do very often in other situations, but that will also be a condition for doing a lot of other things. This is what makes an assignment, to use a little bit of contemporary pedagogical jargon, “high-impact.”

For one thing (I’m speaking about my own field, “freshman comp,” here), this requires breaking down “academic reading and writing,” i.e., a particular advanced form of literacy that is best defined in terms of the continuous production of nominalizations that function as subjects and objects of sentences in a hierarchical system of distributed citationality. (In “academic discourse,” “a hierarchical system of distributed citationality” readily becomes the subject of a sentence—so, what has to be learned, for example, is what does “a hierarchical system of distributed citationality” do; that is, what verbs does one put after it? What is it like, i.e., which adjectives modify it, etc.?) More simply, academic discourse is characterized by vast reserves of implicit references to disciplinary conversations in the abbreviated form of their stored conceptual innovations. You break this down by defining the practice you want rehearsed (say, distinguishing “distributed citationality” from something significantly other to it) against the “epistemological obstacles” that interfere with performing it. Those obstacles are located in the mythical language of everyday life, where “Big Scenic thinking” prevails, and one relies on the dictionary meaning of words and imagines oneself “agreeing” and “disagreeing” with discrete statements, and therefore having “opinions,” “viewpoints,” and so on, rather than working out the implications of a concept. So, the assignment stages confrontations between the epistemological obstacles of mythical everyday discourse, on the one hand, and the language of some disciplinary space, on the other. This produces an “inter-language,” where we find the learner using everyday vocabulary in the grammar of the disciplinary discourse and the vocabulary of disciplinary discourse in the grammar of everyday language.
The inter-language now becomes the center of the disciplinary space, as the students construct a vocabulary and grammar to describe and analyze it, thereby preparing themselves to move self-reflexively into other disciplinary spaces.
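The role of nominalizations in academic discourse can be made concrete with a crude counting heuristic. This is only a sketch: the suffix list, length cutoff, and sample sentences are my own assumptions, not a serious linguistic analysis:

```python
import re

# Crude heuristic: common nominalizing suffixes in English academic prose.
NOMINAL_SUFFIXES = ("tion", "sion", "ment", "ity", "ness", "ance", "ence")

def nominalization_density(text: str) -> float:
    """Fraction of words bearing a typical nominalizing suffix."""
    words = re.findall(r"[A-Za-z]+", text.lower())
    if not words:
        return 0.0
    # length cutoff filters short words like "mention"-unrelated noise ("ness" alone)
    hits = [w for w in words if w.endswith(NOMINAL_SUFFIXES) and len(w) > 6]
    return len(hits) / len(words)

everyday = "I think he was wrong and I disagree with what he said."
academic = ("The hierarchical system of distributed citationality governs "
            "the production and legitimation of disciplinary statements.")

assert nominalization_density(academic) > nominalization_density(everyday)
```

Even this blunt instrument separates the two registers, which suggests why an assignment can usefully target the production and manipulation of such forms rather than the trading of "opinions" about discrete statements.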

Now, all this is on a small scale—a class with one teacher and 15-20 students occupying the same space. But the practice of staging confrontations between Big Scenic, mythical thinking and emergent disciplinary spaces which expose the limits of Big Scenic thinking can be scaled up as large as one likes. This is the way to think about tweeting, blogging, constructing websites, publishing, even the creation of new currencies, political organization, or anything else we might be doing. Everything is the staging of pedagogical hypotheses that will in turn generate resonant, ramified hypotheses of and against the Big Scene. This entails hypothesizing the vast range of possible responses to what you might do, and how you might interfere with that field of possible responses so as to facilitate more encounters between the Big Scene and disciplinary practices. This is no doubt why trolling has become such a prominent, almost all-inclusive feature of social media and more traditional practices that have been transformed by the social media ecology—trolling is aimed at generating responses you can then use so as to compose an utterance that will generate more responses you can use… The problem with trolling is that it locks everyone into their initial positions, whereas it is better to open up new positions.

You can, then, imagine yourself not so much as giving an assignment across the techno-media or field of scenic design practices (after all, who are you to give others assignments?) but as performing some assignment that provides an example for others. This is the only way through the “technological world picture,” and through Capital, for that matter, which also depends upon Big Scenic mythology. The iterative center assigns to each of us the practice that will distinguish it from the sacrificial remnants chewed over in Big Scenic thinking insofar as you specifically complete the practice. And that practice is some scenic design practice that will translate that same assignment into some other institutionally, infrastructurally mediated scene. The translation will be the creation of the scene, the transference of some vocabulary within some grammar to another technological idiom. You do this by translating the constraints and affordances of platforms into imperatives and questions to be redesigned as assignments. Such practices entail inquiring into the scenic design practices that have produced each and every one of us. A constant pulverizing has been going on since the ancient empires—the Axial Age was a limited response to that, but we still haven’t seen anything better. What would be better is not trying to pick up all the pieces and put them back together again, but treating the pieces (habits that program us to try and get our rightful slice of the central object) as materials for scenic design that will have us all looking to help one another find our respective names and places. We all have some place we should be, and some name we should inhabit, and we can only find our name and place within a social order designed to produce scenes that afford such findings.

Maybe we could call the “techno-media” the field of “mimological impressments” upon which scenic design practices operate. “Mimological” derives from Marcel Jousse’s “mimisms,” which is a concept that enables us to identify any human action as an articulation of infinitesimal gestures rooted in imitation; the word “impressment” means coerced recruitment into some military or industrial mega-machine, but property can also be impressed, and therefore so can anything in the natural world, all of which is pressed into service; but, even more, if we abstract the word “impress” from impressment, we have the (admittedly secondary) meaning of making a mark on something, with the something being the resources of the world subject to (transformed into resources by) the operations of the sciences, and the mark being made by some mimological arrangement. All of “nature” is made to imitate the forms of human activity, and that activity is further pulverized into new mimisms in imitation of the new impressments. Drawing on my previous post, I’ll suggest that the “assignment” here is to treat our entanglements with mimological impressments as both liminally obsolete and still under construction. This allows us to defer the demands of imperative exchanges mimological impressments impose on us—it provides a space for studying what it means to be “on” Twitter, or Facebook, or Google, or a blog and turning the demands of these spaces into platforms for staging pedagogical scenes that show where the boundaries between one scene and others lie. (The same holds, even if more indirectly, and through a different kind of operational chain, for building bridges and roads, or weapons, or buildings—it’s all scenic constitution through mimological impressments.) Once you identify a boundary, you can imaginatively, thought-experimentally, place some thing in an oscillatory relation to the respective sides of the boundary. 
So you take the same sample (utterance, statement, meme, icon, imperative…) and treat it as belonging to different spaces. (The simplest example: some statement or action that would be a sign of madness on one scene but of genius on another—this allows you to assess the statement, and the respective scenes, and to respond to the statement or one “similar” to it so as to further test the hypothesis.) Ultimately, if you do enough of this, and bring enough of the right others into doing enough of this, you’ve established a discovery procedure for fulfilling the imperative of the center. Everyone would just be interested in compiling sample utterances in such a way that all participants find themselves maximally addressed by them.

September 6, 2020

Hypothesis/Practice Vs. Narrative: The Iterative Center

Filed under: GA — adam @ 6:38 am

In my previous post I found myself in possession of a neat and very promising distinction between the ritual/myth nexus, on the one hand, and the practice/hypothesis nexus, on the other. This means that the hypothesis, or, more precisely, hypothesizing the present, has the same relation to practice as myth does to ritual. Myth provides a narrative explanation for the vagaries of the imperative exchange that is ritual: the community gives what is prescribed to the central being, who, in return, ensures that the community will have more of the sustenance out of which a portion is returned to the central being. This exchange is, of course, not 100% predictable and successful, and if the central being doesn’t provide what it has promised, some accounting needs to be provided: perhaps the community didn’t, or didn’t “really,” conform to divine instructions; perhaps the divine being has some lesson to teach, or some longer game in mind. These possibilities provide a rich source of narrative material, all of which must ultimately reconstruct an origin of the ritual itself—and therefore of the exchange arrangement which is constantly being revised and examined. This can work for a very long time because what the ritual and myth actually do is produce communal coherence along with a set of practices and discursive rules for negotiating differences and sustaining coherence. Along the way, we can assume, ritual and myth reciprocally inform and transform each other. All this presupposes, though, that at the end there is a central object to be divided and consumed equally, in the sense of including all members.

A practice, meanwhile, is stripped of any pretensions to imperative exchange: its relation to the center involves a higher form of reciprocity. A practice iterates the originary scene as a whole—the possibility of refining and perfecting our scenic aptitude is itself the gift from the center, and the commitment to refinement and perfection is the return. A practice aims at producing an ostensive sign: as a result of the practice we can all see something that is there only because of the practice, and we see it insofar as we participate in the practice—even as an observer, which is a role many practices provide for. We can speak of scientific practices, or ludic practices (games, sports), or artistic practices, but every time we try to clarify or agree upon the meaning of a word we construct a practice. Let’s say we want to define “courage,” which is really to say we want a model of courage. But we must want a model of what courage means here and now, an example of a contemporary possibility of courageousness. We can then construct a practice which treats other practices, whatever their aims, as exemplifying either courage or cowardice. If we want the best example, we would seek out one where the boundary between courage and cowardice is thinnest—where some action that seems like cowardice from the outside, or to those with undeveloped practices of discrimination, is in fact the most courageous for this very reason. We are practicing seeing and hearing, and directing attention to what normally goes undetected. If we turn out to be wrong in a particular case, the results of the study remain applicable; indeed, realizing our mistake would be a result of the further perfection of the practice. Getting clearer about what you’re looking for is more important than finding it in any instance.

Such a practice contains and generates within itself the hypotheses which it also tests. The hypothesis is the generation of minimal possible differences in the course of conducting the practice. “I thought that guy was courageous but then he goes ahead and does X.” Here, one’s practice of courage detection has produced evidence of imperfection. Something I took to signify courage must have signified cowardice; or, something I’m taking now to signify cowardice in fact signifies courage. There’s your hypothesis: what does this action, or word, or gesture, actually mean? If he now does Y, it means cowardice; if Z, courage. And then he does something that’s not quite Y or Z, and you refine the hypothesis. The analogy with the articulation of myth and ritual is very precise. For the sake of the hypotheses, the whole world becomes nothing but possible signs of courage and cowardice—that’s the practice of transformation, one which can, of course, carry over into your interactions, as you test this or that individual in order to elicit such signs. Any argument can be reframed this way: should we “take action” now or lay the groundwork for when “conditions are ripe”? Here’s a hypothetical approach: what could you be doing right now that would be equally meaningful—more meaningful than anything else, even—whether the possibility of contributing directly to the kind of transformation you want arises 5 days or 50 years from now?

It is now clear to me that the conversion of ritual/myth into practice/hypothesis implies the opposition to narrative, which is really the continuation of myth. Narrative is always sacrificial, regardless of the best efforts of its most sophisticated practitioners. Proof of this lies not only in the invariability with which the moral truth of narrative can still only be proven through the trial of the protagonist, but in the very fact that there is a protagonist along with other, dispensable characters who are essentially props, butts of jokes, and so on. A narrative in which all the characters are equally important, and in which actions and events are so open-ended as to make it impossible to draw “repeatable” conclusions from the consequences of those actions and events, would not be recognizable as a narrative. This is no less true of high cultural narratives than of popular or mass cultural ones. The point is not that the sacrificial character of narratives makes them “bad,” or not worth preserving and enjoying—I’m not interested in that question at all. The point is that in a post-sacrificial order and for a post-sacrificial practice, such narratives suffer a credibility defect—as myths do, once the rituals to which they are adjunct fall into disuse. We no longer have a sacrificial center which can be shared and devoured, and about the distribution of whose parts we can therefore argue meaningfully. But we do have a center, which is occupied, which cannot be sacrificed, and through which we also cannot sacrifice ourselves by opposing it. The center we have can only be perfected through the perfection of our practices, by iterating the originary scene in the creation of ostensives—which is to say, names. The center we have abolishes sacrifice along with the vendetta, and replaces them with an articulation of practices that can be entered into through other practices. A world of disciplines and practices cannot be interested in narrative.

But wait! How can we do without narratives? I mean, things happen, and we have to recount them, don’t we? First A happened, then as a result B, and then as a result of that, C, etc. Yes, but what makes a narrative a narrative is the skeleton it provides for hanging “attributes” on characters, fleshing them out in order to produce the sacrificial moral lesson. We can recount events without that. And it’s true that causality will get thinned out along the way—indeed, causality is reduced to those charged with specific responsibilities and allotted specific powers doing what they can and should, or not, through some imperfection in their practices. But those occupying delegated centers and sustaining or derogating them in some manner is really nothing more than the iteration of the scene itself. Everything that happens answers to a particular hypothesis regarding the constitution of the scene. An instance of violent centralization directs our attention to a lapse in responsibility or a misallocation of power somewhere on the scene, not to the trials and agony of the victim. So, if someone in a position of authority delegates power to a subordinate because that subordinate has displayed the requisite mode and degree of courage in previous assignments, the hypothesis constructed above regarding the meaning of “courage” finds its place within a practice. We can become students of courage in order to formulate and test such hypotheses as effectively as possible, but we will never exhaust all the causes of courage and cowardice and so we will always have to restrict our hypotheses to the fitness of this person for this task. Maybe, in fact, his relation to his father (for example), or some childhood trauma, is relevant here; but, maybe not, and, at any rate, no possible “causes” can become independently interesting in relation to determining the meaning of “courage” in this case. 
(Of course, as a kind of data, it can be preserved and might become relevant for later hypothesis formation.)

So, what was once narrative becomes scenic intelligence. As we come to know that we are intrinsically scenic beings, we aim at making our scenicity more overt and subject to practices—which counters the reliance on narrative formulas. I’ve drawn before on a model of scenic temporality derived from Charles Sanders Peirce, and it seems helpful here. How can you determine the borderline between the inside and outside of an object, or between two objects? Everyplace you try to draw it, some of the outside is inside and some of the inside outside. So, Peirce says, the border is where there are an equal number of particles of both objects, or of inside and outside. We look toward a distribution rather than an ontologically replete object. The equivalent of this for time, Peirce says, and the way we can therefore distinguish when one event is over and another has begun, is as follows. The beginning of any event is the middle of another event and the end of yet another (to just stick to the strict, narrative, which is also to say scenic, beginning-middle-end parameters). So, the end of one event is identified as the beginning of another and the middle of a third event—all within the same space. In Kafka’s The Metamorphosis, Gregor’s death coincides with his sister’s rebirth, and marks the resumption of his parents’ normal middle-class life, once they’ve cleaned up Gregor’s room. Kafka’s novella is certainly a narrative, but perhaps it contains something anti-narrative as well, and that’s something we can learn to look for.

So, our hypothetical lapse in the exercise of authority is a kind of end, as the authority becomes no longer authoritative; but we can locate in it the beginning, somewhere on the scene, of a more adequate form of authority (maybe even the same person in a different incarnation), in the midst of more and less adequate exercises of authority in adjacent practices distributed across the scene—the beginning and middle “measure” the end. In this scenic auto-generation, the scene reconfiguring itself out of its own (a)symmetries, we have the source of our hypotheses. Whether or not the authority in question has in fact become less authoritative, whether what looked like courage is in fact cowardice, is going to be measured by the middles and beginnings stretched out across this end. We have the elements of a narrative, but we stay in the present, as those middles are supporting and auxiliary practices of the supposed end, and the beginnings are its continuations, refoundings, sproutings, or repudiations. Our focus is not narrative, even if it’s temporal, invoking overlapping temporalities—our only interest is in perfecting our practices of inquiry by introducing hypotheses into the practices we examine.

Narrative, with its mythical, sacrificial roots, is a kind of addiction. Everyone speaks of “The Narrative,” and the need to have a counter-narrative. The practice/hypothesis nexus will prove to be more powerful than narrative. Narratives make people hysterical and pump them up like a drug because people don’t really believe them, since they have no ritual grounding; and attempts to provide, e.g., demonstrations with a ritual form are just as pathetic as the narratives themselves. Large scale, “big scenic” narratives that have generalized agents (unified groups with coherent motivations) are just preparations for lynch mobs, whether the agent in question represents potential perpetrators or victims. Rather than counter-narratives, hypotheses should be used to dismantle narratives and to show that people are capable of things no one has yet seen. And we actually have a model of this in President Trump, whose presidency continues to astonish despite all the carping by people who supplement dubious news stories with the narrative fleshing out they crave. Whatever happens in what remains of his presidency, whether it’s 5 months or 5 years, will continue to be a rich source of models for hypothetical interventions in practices. A practice of studying Trump under the assumption that he knows what he’s doing better than you do would yield far more than turning Trump into a prop in your own narrative. It’s amazing how few people with even the most marginal public persona are capable of admitting that they are learning from someone else.

So, instead of narratives, we have the generation of scenes out of scenes, as a beginning is treated as a middle, a middle as an end, and so on. The originary scene must have taken a while to pervade all human activities, but once it did, it became possible to think about one scene only from within another scene, and that other scene must more and more come to be just the foregrounding of the scenicity of the scene itself. But we could have only arrived at this point through the emergence of expanded scenes that completely absorbed and demolished local scenes. I’m referring to the ancient empires, which no doubt destroyed thousands of little worlds, a process that has continued in various forms since then. Without the imperial scene of demolition, there would be no “meta” scene. The meta derives from the imperial external position. It also derives from the “minor” scenes that preserved their scenic memory and posited a “meta” that transcended the imperial. From the Axial Age moral acquisitions, in other words.

The meta is only fully accomplished once the fantasy of installing a “genuine” center to monitor and control the occupied center has been relinquished. (Perhaps there are various layers of trauma that are being worked through in this connection.) At that point, and in a sense this is that point, we can speak of the “iterative center,” and not merely the “post-sacrificial.” Instead of trying to institute a global scene modeled on the originary one, with the inevitably apocalyptic consequences, we can accept the imperative to iterate the originary scene in practices where new ostensives can be affirmed, and practices named. Such practices don’t stand alone—they overlap with each other, and report upward and downward in other, pedagogical practices. We accept the center acting at a distance, because the ostensives it allows us to generate also replenish the center in ways we can hypothesize as samples. The point, again, is not to think small, but to ask the biggest questions of the center in a way commensurate with our practices at a certain level of perfection.

June 29, 2020

Toward a Media-Moral Synthesis

Filed under: GA — adam @ 12:27 pm

Haun Saussy, in an excellent book on the relation between orality and literacy (and media history more generally), suggests a way of thinking about orality that reframes the whole question. Rather than trying to define empirically how to sort out what in (or “how much” of) a community is constituted through orality, what we are to count as “writing,” what criteria we are going to have for “literacy,” and so on, he suggests thinking about orality as ergodic in its constitution. Here’s the online dictionary definition of “ergodic”:

relating to or denoting systems or processes with the property that, given sufficient time, they include or impinge on all points in a given space and can be represented statistically by a reasonably large selection of points.

With regard to language, this means a signifying system that is finite: given enough time, all the different “elements” of the system will be used. This view of language runs counter to the assumption shared, I think, by all schools of modern linguistics, which is that language is constituted by a set of combinatorial rules that make an unlimited number of utterances possible—new things can always be said in the language, and always are said, and not necessarily by language users who are particularly creative or inventive. Language is intrinsically generative and therefore infinite. If we follow up on Saussy’s suggestion, though, this is in fact only the case for written languages. Languages in a condition of orality are constituted by a finite number of “formulas,” or “commonplaces,” or “clichés,” or “chunks,” that are not infinitely recombinable.
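The contrast between an ergodic, finite system and a generative, infinite one can be sketched in a few lines of Python. This is strictly an illustrative toy, not anything from Saussy or from linguistics proper: the formula inventory and the clause-embedding function are invented for the example. The point is only that a finite inventory is eventually exhausted (every element gets used, per the dictionary definition above), while even one recursive rule yields a new, never-before-produced utterance at every depth.

```python
import itertools

# An "ergodic" language as a finite inventory of formulas (hypothetical
# examples): given enough time, every element of the system gets used.
oral_formulas = {"rosy-fingered dawn", "wine-dark sea", "swift-footed Achilles"}

spoken = set()
for utterance in itertools.islice(itertools.cycle(sorted(oral_formulas)), 100):
    spoken.add(utterance)
assert spoken == oral_formulas  # the whole system is eventually covered

# A generative grammar, by contrast, can always produce a sentence never
# produced before: one recursive embedding rule makes the set unbounded.
def embed(depth):
    """Build a sentence with `depth` levels of clause embedding."""
    s = "the sea is dark"
    for _ in range(depth):
        s = f"she said that {s}"
    return s

sentences = {embed(d) for d in range(50)}
assert len(sentences) == 50  # each depth yields a new, distinct sentence
```

The design choice to model the oral case as a fixed set and the literate case as a recursive function is just a convenient shorthand for the distinction the post draws: finite recombination versus rule-governed infinitization.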

This new way of framing the question could raise a whole series of questions. One could say that language was always “potentially” infinite, and so modern linguistics would still be essentially right—and there must be some sense in which this is true. One could say that it is the specifically metalinguistic concepts introduced in order to institutionalize writing (and writing was institutionalized from the beginning), like the “definition” of words and, especially, grammatical “rules,” that brought about the infinitization of language. One might even want to argue that, perhaps, we are wrong in thinking that languages even in their literate form are inexhaustible—after all, how could we really know? What I will do is follow up on some hypotheses I’ve taken over from thinkers of orality/literacy like David Olson and Marcel Jousse and explore the relation between the emergence of literacy and Axial Age moral innovations.

Remember that for Olson the entry point into the oral/literate distinction is the problem of reported speech—telling someone what someone else said. Under oral conditions, the tag “X said” would be used (which reminds us that “say” is one of Wierzbicka’s primes), but the reporting of speech would be performed mimetically—the one reporting the speech not only wouldn’t paraphrase or summarize, but would say the exact same thing in the exact same way. That’s the presumption, at least, even if an outside observer might notice discrepancies. What is said is shared by the two speakers, and this presumption is strengthened by the ergodic nature of language under orality, which means that no one can say anything that hasn’t already been said, and won’t be said again. Individual speakers are conduits of a language that flows through them, and that they are “within”—and the language of ritual and myth would, further, be the model and resource for everyday speech, as everyone inhabits traditionally approved roles. Everyone is a_________, with the blank filled in by some figure of tradition.

When writing, you can’t imitate the way someone said something, so everything apart from the actual words needs to be represented lexically. This leads to the metalanguage of literacy, involving the vast expansion of words representing variations on, first of all, “say,” and “think.” You can’t reproduce the excited manner in which someone said something, so you say “he exclaimed.” This is, of course, an interpretation of how it was said, and so, one could say, was the imitation, but this difference in register makes it harder to check the interpretation against the original—it would be easier for a community to tell whether you provide a plausible likeness of some other member than to sort out whether he indeed “exclaimed”—rather than, for example, simply “stating.” Proficiency in the metalanguage provides authority—you own what the other has said—which is why an exact replication of the original words would become less important.

What is happening here is that while a difference is opening up between the original speaker and the one reporting the speech, differences are also opening up between the reporter and the audience and, eventually, within the speaker himself. This is the creation of “psychological depth.” Did he “exclaim” or “state”? Or, for that matter, “shriek”? That would depend on the context, which could itself be constructed in various ways, and never exhaustively. The very range of possible descriptions opened up by the metalanguage of literacy generates disagreements—defenders of the original speaker would “insist” he simply firmly “stated,” while his “critics” would “counter” that he, in fact, was losing it. It then becomes possible to ask oneself whether one wants to be seen as stating or exclaiming, to examine the “markers” of each way of “saying,” and to put effort into being seen as a “stater” rather than as “exclamatory.” Which then opens up further distinctions, between how one appears, even to oneself, and what one “really” is. On the surface I’m stating, clearly and calmly, but am I exclaiming “deep down”? (Of course, the respective values of “exclaiming” and “stating” can be arranged in other ways—what matters is that the metalanguage of literacy necessarily implies judgments regarding the discrepancy between what someone says and what they “really mean,” whether or not they are aware of that “real meaning.”)

Oral accounts involve people doing and saying things; the oral accounts preserved most tenaciously are those in which what people do and say place the center in some kind of crisis, a crisis that is then resolved. Such narratives will remain fairly close to what can be performed in a ritual, and thereby re-enacted by the community. Writing is neither cause nor effect of a distancing of the community from a shared ritual center, but it broadly coincides with it. Writing begins as record-keeping, which right away presupposes transactions not directly mediated by a common sacrifice. Record-keeping implies both hierarchy—a king separated from his subject by bureaucratic layers—and “mercurial” agents, merchants, who move across different communities, sharing a ritual order with none of them. The earliest form of literacy is manuscript culture, where a written text serves to aid the memory in oral performances. The very fact that such an aid is necessary and possible, though, means we have moved some distance from the earliest “bardic” culture.

Where things get interesting is where the manuscripts start to proliferate, as they surely will, and differ from each other. Members of an oral culture might enforce certain kinds of conformity very strictly, but could hardly keep track of “deviations” from an original text, especially since such a text doesn’t exist. Diverse written accounts would make divergences unavoidable and consequential, because the very fact that a text was found worthy of committing to the permanence of writing (an expensive and time-consuming process) would add a sacred aura to it. As we move into a later form of manuscript culture, in which commentaries, oral but sometimes written as well, are added to the texts, these differences would have to be reconciled—generating, in turn, more commentary. This is an early version of what Marcel Jousse called “transfer translations,” i.e., translations into the vernacular of a sacred text preserved in an archaic language—according to Jousse, the inevitable discrepancies between the translation and the original, due to the differing formulas in each, generate commentary aimed at reconciling them.

Reconciling such discrepancies could involve nothing more than “smoothing out” while keeping the narrative and moral lessons essentially intact. There will be times, though, when the very need to address discrepancies allows for, and even attracts, complicating elements. Let’s say the prototypical oral, mythical narrative involves some agent transgressing against or seeking to usurp the center in a way that disrupts the community and then being punished (by the center or the community) in a way that restores the community. If there’s no longer a shared ritual space, such narratives are less likely to be so unequivocal. To transgress against the center is now to transgress against a human occupant of the center. It is possible to refer to a discrepancy between that occupant and the permanent, or signifying center. There can be a discrepancy between human and divine “accounting” or “bookkeeping,” in which sins and virtues, crime and punishment, must be balanced. The discrepancies between “accounts” will attract commentaries exploring this discrepancy. The injustice suffered, the travails undergone, perhaps the triumphs, real or compensatory, experienced by the figure of such a discrepancy will come to be incorporated into a text that is, we might say, “always already” commented upon—that is, such a more complex story will include, while keeping implicit, the accretion of meanings to the “original” narrative. This is what gets us out of the ergodic, and into the vertiginous world of essences (new centers) revealing themselves behind appearances, as well as historical narratives modeled on such ambivalent relations to the center.

Once such a text, or mode of textuality, is at the center of the community, we are on the way to a more complete form of literacy, in which the metalanguage of literacy overlays and incorporates originally oral discourses. Literacy is crucially involved in the shift in the heroic narrative from the “Promethean” (and doomed) struggle against the center to the victim who exemplifies what we can now see as the unholy, even Satanic, violence of the imperial center. This means that the figure of the “exemplary victim,” that is, the victim of violence by the occupant of the center, a violence that transgresses the imperative of the signifying center, is simply intrinsic to advanced literacy. Our social activity is therefore a form of writing the exemplary victim. Liberal culture has its own way of doing so—the exemplary victim is the victim of some form of “tyranny” and demonstrates the need for a super-sovereign-approved form of rule that bypasses or eliminates that tyranny. It’s almost impossible to speak in terms other than “resisting” some “illegitimate” power in the name of someone’s “rights” (as defined by the disciplines—law, philosophy, sociology, psychiatry, etc.).

If “postliberalism,” or what we could call “verticism,” is genuinely “reactionary,” I would say it is in redirecting attention from the exemplary victim back to the occupant of the center, highlighting that occupant’s inheritance of sacral kingship and therefore vulnerability to scapegoating and sacrifice. The exemplary victim could emerge in the space opened by the ancient empires, where the ruler was too distant from the social order to be sacrificed, but post-Roman European kings never definitively achieved this distance, and liberalism is predicated upon putting the center directly at stake, predicating the center’s invulnerability so as to exacerbate its vulnerability. All scapegoating attributes some hidden power to the victim, which is to say, places the victim at the center; all scapegoating of figures at the margin, then, is a result and measure of unsecure power at the center; so, refusal to participate in scapegoating, or violent centralization, is really bound up with the imperative to secure the center. This means treating the victim as a sign of derogation of central authority, rather than levying the victim against that authority. So, it’s not that we can ignore the exemplary victim; rather, we must “unwrite” the exemplary victim. This may be the hardest thing to do—to renounce martyrdom, to acknowledge victims but deny their exemplarity in order to “read” them as markers of the center’s incoherence—while representing that incoherence in order to remedy it. The very fact that we are drawn to one victim rather than another—this “racist” who has been canceled, that website that has been de-platformed or de-monetized—itself tends to make that victim “exemplary,” and we do have to pay attention. Nor do we want to “victim-blame” (if only they had been more careful, etc.), even if discussions of tactics and strategy are necessary.

Insofar as we inherit the European form of the Axial Age moral acquisition, we can’t help but see through the frame of the exemplary victim—even a Nietzschean perspective which purports to repudiate victimary framings and claim an unmediated agency is the adoption of a position shaped by Romantic claims to subjective centrality and therefore sacrificiability (Nietzsche’s own “tragic” end reinforces this). The exemplary victim is constitutive of our language and narratives, which is why it needs to be “unwritten.” The whole range of exemplary victims produced across the political spectrum constitutes our “alphabet” (or, perhaps, “meme factory”). The most direct way to unwrite might be to follow up on the observation that the function of the disciplinary deployments of the exemplary victim is to plug executive power into the disciplines, which can then turn the switch on and off. But these detourings of centered ordinality nevertheless anticipate some use of the executive—those most deeply invested in declarative cultures like the law want the executive to crack down on their enemies as much as anyone else. So, it’s always possible to cut to the chase and propose, and where possible embody, that use of executive power which would most comprehensively make future instances of that form of victimage as unlikely as possible. One proposes, that is, some increased coherence in the imperatives coming from the center (and, by implication, in the cultivation of those dispositions necessary to sustain that coherence). If we did X, this victim over whom we are agonizing would be irrelevant—we could forget all about him. One result would be the revelation of how dependent liberal culture is upon its martyrs—so much so that they’d rather preserve their enshrinement than solve the supposed problem and thereby write them off. 
In the meantime, we’d be embarking upon a rewriting of moral inheritances that would erase the liberal laundering of scapegoating through the disciplines once and for all.

January 19, 2020

Design and the Attentional Economy

Filed under: GA — adam @ 6:55 am

I’ve been working for a while with the assumption that the “Axial Age” created the conditions for the generation of a new, post-sacrificial morality. Sacrificial morality relies, ultimately, on human sacrifice: someone is put in the place of the sacral king, who served as the target of the mimetic crises that plague any human community. Girard called this “scapegoating,” and I have been calling it “violent centralization,” and I have been following Girard, and then Gans, in attributing to the Christian scriptural tradition the revelation of the “bad faith” of sacrifice—the members of the community must blind themselves to the fact that what they see as an act of deserved retribution (the victim must always be rendered “guilty” in some way) really has nothing to do with the victim and everything to do with their own internal relations as a group. Calling the social orders marked by this revelation “post-sacrificial” is not to argue that such bad faith centering of the other no longer takes place—obviously, it’s quite common—but that everyone knows it’s wrong, can see it in others, and requires elaborate rationalizations to carry it out. When we do it, we must insist it’s something else—and, of course, sometimes it really is.

I believe that, so far, I share this understanding of what Gans calls the “Christian revelation” with just about everyone who has been working in GA since, say, the 90s. In other words, it’s “canonical,” or “orthodoxy.” There is a seemingly obvious corollary that is equally canonical or orthodox, but which I reject. This corollary is that a certain understanding and reality of the “individual” results from the transcendence of scapegoating: the individual who is “equal” to other individuals, within the framework of what gets called “moral equality.” I’ve criticized this concept before, but my recent thinking about design provides it with a larger frame. My initial claim is that the social injunction to refrain from scapegoating implies nothing, and need imply nothing, regarding the “being” of the potential victim. In order to justify and reinforce that injunction, or the prohibition on scapegoating, it might indeed be helpful to project onto those not to be sacrificed the qualities which make them undeserving of such treatment. So, for example, if human beings all inherently somehow possess something we can call “dignity,” then it is because of that dignity that they must be treated in certain ways. The same goes for things like “consciousness,” “conscience,” and what Gans has always called an “internal scene of representation.” Rather than such projections, all we need to be able to say about the self is that it is continually constructed as a sustainable center of attention, that of others and of the self itself. These qualities and entities, along with the aforementioned “moral equality,” and notions of the “soul,” are all, that is, parts of a mythology of the individual, a way of invoking the center (drawing from it imperatives) to match the imperative to refrain from marking individuals in ways that have proven communally destructive.

It would be at least as easy to say that this prohibition on “marking” the other as victim (or “stigmatizing”) leads us not to an ontology of the “individual,” but to a semiotics of marking. So, we could say, if you frame this kind of behavior in this way, it is likely to incite this kind of response from a particular audience, and so on. A cataloguing of such “markings” would tell us nothing about individuals, but only about possible social constructions of them. And which markings needed to be attended to, and cautioned about, in different cases would differ considerably—in other words, the prohibition on scapegoating could just as easily lead to an insistence on attending to lots of differences among individuals. Such an approach would be far more effective than the one based on “moral equality,” which leads us to scapegoat anyone who notices anything that might make us skeptical of that moral equality, and the way it is enforced under any given regime, and therefore leads straight to our current victimary order, which has significant sacrificial elements. It would be more effective because it would direct attention where it needs to be, on the proclivities of the community and the various fluctuations in mimetic tensions, rather than upon the imaginary qualities of potential victims and potential perpetrators.

If our only interest is in “marking,” then, we need no ontology of the individual—nothing, no consciousness, no soul, inner being, free will, nothing. But people would, naturally, construct their behaviors in ways that make the markings most potentially relevant to them as irrelevant (or “counter-relevant”) as possible—to put it simply, they would both be aware of the way certain stereotypes might apply to them, and do what they could to disrupt the application of those stereotypes—which, in turn, would make things easier for those who don’t want their thinking to be in the grips of such stereotypes, but also don’t want to censor themselves for noticing differences. In fact, we would be finding ways to take the sting out of stereotypes, for ourselves and others, by making them explicit and thereby making it possible to modify behaviors, even by turning “negative” stereotypes into “positive” ones. All this would obviously be very different from the way we go about things now, and, I’ll repeat, requires no projection of an ontology onto the “individual” nor any assumptions of “equality.”

What it will do, though, is turn individuals into designers—of practices and institutions. I’ve been doing some reading in contemporary design theory, of the kind that is very cognizant of postmodern thought (I’ll mention briefly the work of Benjamin Bratton, especially his The Stack, and his colleagues at the Strelka Institute in Moscow), and one can see the tendency towards a very promising post-humanism. The notion that individuals were “constructed” was once a fairly esoteric theoretical speculation, but how does one deny it now that our whole lives are very tightly governed by algorithms under the control of corporations and states that now, between them, regulate all social interactions? Now, this intellectual tendency is very clear about how the complex of systems constructing our lives—which they are sure to do far more intensively, down to the molecular level, as technology improves—practically dismantles the mythology of the individual I’ve been referring to (where does one find “freedom” or “conscience” in all of this?—assertions of such qualities are themselves programmed gestures). But the same does not hold for the prohibition on scapegoating. I would say, counter-intuitively but in agreement with Girard’s claim that Europe didn’t stop burning witches because it became scientific but, rather, became scientific because it stopped burning witches, that the prohibition on scapegoating has made all of modern technology possible—even more so its current, scary, intrusive, seemingly uncontrollable social media technology.

It’s not hard to find people with complaints about the totalitarian nature of social media and the forms of government surveillance and information gathering and keeping that work seamlessly with them. But, despite the very serious criminality of sections of the American government that has been revealed through inquiries into the Russia collusion hoax, a criminality almost universally shared with the major American media (which is really nothing more, and probably never has been anything more, than a racket trafficking in information and what we could call “information laundering”), it is still worth pointing out that, for example, these ubiquitous means of social monitoring and control have not led, say, to the isolation and targeting for elimination of large social groups. You could say I’m setting a low bar, but if it were the case that this thoroughgoing construction of the individual revealed morality to be a myth concealing sheer utilitarian power struggles or the conveyance of collective resentments, such things would be happening (as they seem to be in China). Meanwhile, if it’s the case that it’s the origin of these technological capacities in the study of the various “dangerous” markings that the prohibition on scapegoating calls for, then the evidence of clear moral limits on the use of this immense power is no surprise. In fact, if we set aside the dominance of much of social media by the “wokeratti,” what this media mostly does is provide security and enhance knowledge dissemination. It’s actually much easier to use it to exonerate rather than frame the innocent.

A lot of scapegoating takes place on social media—at times it seems like little else goes on there. My claim here is that the nature of social media is more to be used to design social interactions or “interfaces” that foreground dangerous markings along with ways of deferring their danger. I’m obviously also saying that those who want to abolish victimary practices should be using social media in this way. Also, I’m just using social media as an example here—post-liberalism should be a project of design across the board. The human sciences should be practices of design—mimetic theory channeled through the originary hypothesis allows us to diagnose institutional dysfunction in terms of ineffectively designed modes of deferral caused by undetected modes of mimetic rivalry; and such diagnoses would lead to proposed designs that would acknowledge the rivalry and re-set the modes of deferral.

You could say that this leads to a practice, if not ontology, of the individual—the individual as designer of social interactions. Again, nothing needs to be projected onto individuals—we don’t need to say that humans are “by nature” designers, that it is their telos to design, that they are genetically determined to be designers, etc.—it’s enough that we are designers as a result of the ways our ancestors and predecessors designed the institutions producing us. We don’t all need to be equally good at it. Those who are better at it have an interest in helping the less skilled; indeed, they have an interest in designing institutions and practices that will make people better designers. Making design the definitive neo-absolutist practice supports the kind of dedifferentiated disciplinary spaces I’ve argued for elsewhere. We’re always starting with a practice, which we can assume fits a model, and has therefore been designed more or less directly. We can start right where we are, in other words, in improving the design of our own practices and interactions so as to minimize the damage unthinking mimesis does to them. Once we’re committed to a particular practice, we become interested in organizations and institutions that can house and support it. This, in turn, generates new design projects. Designs can be made across the moral, aesthetic, pedagogical and political spheres—we design assignments to enhance learning; we design impossible objects, like perpetual motion machines or Rube Goldberg-style devices, to satirically expose failing institutions and unconsidered assumptions; we can design inspiring utopian visions in the great tradition of such visions; we can unite the infinite with the infinitesimal in our designs; we can design projects for social reform for potential patrons (indeed, wouldn’t they demand it?).
In this way, any discussion can be put on an entirely new footing, and piles of ideological baggage swept away—we can be designing to make sure that happens as well.

Design involves translation: a problem into a confluence of reciprocally counter-acting designs; desires into a project; a territory into a map; a map into directions; patterns of social interactions into accumulations of reciprocal mimetic modellings; declaratives into an imperative meeting an absolute imperative; imperatives into extended ostensives; any utterance into spread out presuppositions and implications of that utterance; oral into written. Measuring is translating; money is a medium of translation. Any two terms you could put an “=” sign between involve a translation. Even more, then: the use of words and phrases at different times involves what we could call a translation of a term into itself, insofar as it becomes different over time. The designing frame entails looking at everything as problems of translation (and, if we want to push this a bit further, transcription and transliteration as well). You ascertain that the two terms are the same, that the “=” is appropriate, which makes you identify all the ways one could introduce a / through or an ~ above the =. When you design you confirm the = by eliminating all the /s and ~s. This is done on the scenes upon which you design narratives and articulate human movements with materials so as to inhabit and suspend the /s and ~s; you are being designed on this same scene, since the most basic reciprocal translation is that between design and designer.
