GABlog Generative Anthropology in the Public Sphere

May 28, 2019

Market Capillarism

Filed under: GA — adam @ 3:00 pm

I’m going to follow up on this definition of the “market” that I offered in my “The Event of Technology” post: “what people without direct authority for maintaining the social center do with knowledge, information and skills when they are being protected and bounded but not directly supervised by such authorities.” The market, in its most abstract, praxeological terms, is understood as the interactions generated by the free choices of individuals. As an ontology, this is absurd, if for no other reason than that no one chooses the language in which these choices are made. But it makes good sense if we think about the market as those interactions that take place under the radar of some kind of direct supervision, especially if we consider that such “radar” is never absolutely comprehensive (we couldn’t even imagine what “absolutely comprehensive” supervision would mean, since each form of supervision would generate margins for decision making undetermined by the supervision itself). So, if I’m supervising a group of children, and I give them a strict schedule of activities, distribute roles in a hierarchical manner amongst them, and focus mostly on making sure they are doing what the schedule says they should be doing and receiving reports from the individuals in charge at certain points, I leave open plenty of space for the children to exchange responsibilities and resources amongst themselves. Say one of the children’s duties is to sweep the classroom and another’s is to clean the dishes—the child assigned sweeping duty asks some child who has playtime to sweep for him and, in exchange, the first child will do the dishes for the other later. We have the makings of a “market” here, and there’s not necessarily any reason for me to concern myself with it, as long as it doesn’t interfere with, and perhaps even enhances, the functioning of the institution. I can raise and lower the threshold of supervision depending on how beneficial the “market system” seems to be, and I can make sure the threshold doesn’t get so high that my own position gets implicated in the market.

The introduction of money into the system provides those engaged in market exchanges with more flexible means of establishing long-term interactions, while also ensuring the control of the central authority over this expanded process. Money is introduced as debt, debt which is ultimately owned by the central authority, whether or not finance is nominally controlled by privatized agencies. The more money in the system, the more the central authority is likely to be marketized as well. This is another way of saying that money is power, and this form of power competes with the form of power exercised by the central authority. The power of money is the power of abstraction—that is, the power to separate groups and individuals from the larger settings in which they are embedded. If you can separate groups and individuals from their settings you can mobilize them for your own projects. The power of money becomes the power of capital, which is the power to abstract not only individuals and groups but disciplines, which is to say knowledges, media and technologies, from the results of the abstractions those disciplines had helped to effect. The problem of containing the market system within the terms of central supervision is one, needless to say, that modern politics has not solved; indeed, the most cherished principles of the liberal social order make sacrosanct the primacy of market power over central authority—any reversal of this primacy is deemed “tyranny” or “totalitarianism.” And yet the market is still inconceivable without the central authority, reconceived as a political market, in which citizenship is defined as a certain quantity of tokens authorizing one to make withdrawals from the center.

The traditionalist opposes abstraction in the name of full embedment, but the possibility of rejecting abstraction disappeared with the rise of divine kingship a few millennia ago. By now, the forms of embedment defended against abstraction are the results of previous abstractions that have been re-embedded. The question is, in what form will abstraction proceed? Or, what kinds of mobilizations are necessary? If the market operates within the capillaries of the system of supervision, then abstractions should contribute to that system. The paradox of power is that the more central the authority, the more authority depends upon the widest distribution of the means to recognize authority; to put it in grammatical terms, the paradox of power is the paradox of the most unequivocal imperative leaving the largest scope for the implementation of that imperative. To think about the scope of the market is to think about how to make this paradox more explicit. As I pointed out in “The Event of Technology” (and as Andrew Bartlett explains very thoroughly in his “Originary Science, Originary Memory: Frankenstein and the Problem of Modern Science”), abstraction always involves some desacralization or, to put it more provocatively, some sacrilege. Sacrilege can be justified on the grounds that the innovation it introduces will enable new forms of observance of the founding imperatives of the social order. So, the sacrilege should be, as Bartlett argues, “minimal,” while the new forms of observance (I depart from Bartlett’s formulation here) should be maximal. Abstraction creates new “elements,” and therefore new relations between elements. Monetary and capitalist abstractions are pulverizing, creating new elements that are identical to each other, and therefore most easily mobilized for any purpose. This is the process of “de-skilling,” with automation as its ultimate telos, that labor theorists have long recognized. An absolutist mode of abstraction, meanwhile, would make ever finer distinctions between skills, competencies and forms of authority within disciplinary spaces. In this way, abstraction carries with it its own form of re-embedment.

The market economy, then, becomes a measure of fluctuations around the threshold at which the paradox of power is made explicit. Not all social conflicts can be reduced to this fluctuation, but all social conflicts are “processed” through it. This is most obviously the case for everything grouped under the concept of globalization, most especially movements of capital (at the “high” end) and migration (at the “low” end). Globalization represents a raising of the threshold at which the paradox of power is made explicit: global corporations have been released from obligations to any central authority and construct their own command chains, which include governments as subordinate partners; advocates of increased migration exercise power across borders that national states find difficult to counter. In both cases, states are set up so that they must respond to the same “market” incentives as the corporations and migrants themselves. We could imagine a point at which the threshold for making the paradox of power explicit rises so high that central authorities would not be issuing “operational” commands at all—commands would just be one more incentive (or disincentive) that agents further down in the chain of command would have to take into account by assessing the likelihood of any penalty for disobedience.

Within a market order, then, any action, event or relationship is characterized by a fundamental duality. On one side, however thinly, the paradox of power is in play: all actors recognize that their sphere of activity is protected by some more powerful agency and constrain and direct their activity accordingly. On the other side, to some extent, imperatives are converted into market signals—that is, relocated to a site of exchange where one person’s power to punish or reward you must be balanced against many other people’s power to do so. In both cases we find an interaction between center and periphery—in the first case, one acts in a way that redounds to the authority of the center, thereby creating space for the further replacement of external supervision by auto-supervision; in the second case, one tries to subject the central authority to incentives and disincentives similar to the ones we are all subject to—this ranges from simple bribery and other forms of corruption to the vast avenues of influence made legal and even encouraged within a liberal social order, like lobbying, forming interest groups, political donations, think tanks, media propaganda and so on. We could locate anything anyone does, thinks or says somewhere along this continuum and study social dysfunctions accordingly.

Probably the most intuitively obvious argument in favor of the “free market” is the Hayekian claim that all the knowledge required to carry out production and cooperation at all the different social levels is far too distributed and complex to be centralized and subordinated to a single agent. This is of course true, but also a non sequitur and a distraction. A general must provide some leeway to his subordinates, and they to theirs, and so on, and for the same reason—the general can’t know exactly what this specific platoon might have to do under unexpected circumstances, and he can’t even know all that one would need to know in order to prepare them for those circumstances. There will therefore be “markets” all along the line, as people instructed to work together to address some exigency organize “exchanges” of knowledge, skills and actions amongst themselves in order to do so. The general doesn’t need to know 1/1,000 of all the specifics of these interactions to still be the general—that is, to issue commands that can be obeyed, and to place himself in a position to ensure that they will be. The same is true for those institutions charged with providing communications, health care, education, transportation, housing and so on. In each case, capillaries along the margins of these institutions can be adjusted in accord with the level of responsibility that can be allowed consistent with meeting the purpose of the institution. The argument for markets is really saying no more than that you can’t do a very good job if you’re being micromanaged at every point along the way. It’s equally true that you can’t do a very good job if the terms of each move you make have to be “negotiated” with a constantly changing range of agents.

Liberalism has generated the illusion that what appears below the threshold of direct supervision is what, in fact, determines the form of supervision; even more, that the supervision is a servant of those actors who have merely been provided some leeway. This situation produces destructive delusions, because the presumably free agents are nevertheless aware of their utter dependence upon their “servants.” Is there any businessman who thinks that, without the force of the state, he would be able to protect himself against violence, fraud, robbery and extortion by those readier than he is to use violence and break laws? No businessman believes this, but in a way they all believe it, because their political theory leads them to assume that, first, there were a bunch of individuals engaged in peaceful exchange with each other and then, only when criminals and invaders, presumably attracted by the wealth thereby created, tried to take it using force, was the state “hired,” as a kind of Pinkerton agency, to maintain order. This makes it impossible to think coherently about the simplest things, such as how a policy everyone would recognize to be beneficial might be conceived and implemented in the best way. Someone should make a “this is your brain on liberalism” public service announcement.

May 21, 2019

Beyond “Post-Sacrificial”

Filed under: GA — adam @ 7:00 am

I’ve been using the phrase “post-sacrificial culture,” generally in conjunction with the “Axial Age acquisitions,” to refer to the breakdown of the “imperative exchange” constitutive of sacrifice. Sacrifice involves an imperative exchange because the human member of the community offering the sacrifice (bringing his goat, or whatever, to the altar) is following an order issued by the deity to offer up some of the fruits provided, ultimately, by that deity; while, in exchange, the sacrifice represents a request, on the part of the one offering the sacrifice, that the deity continue providing these benefits (more goats). In a sense, our culture is not post-sacrificial, and it may be that no culture can ever be so, definitively—we engage the institutional orders around us in terms of imperative exchanges all the time, simply by assuming that if we “play by the rules” we will be commensurately rewarded, and in resenting the failure of the institutional center to hold up its end of the bargain. But it’s still correct to call our culture post-sacrificial, because our “sacrifices” are figurative and not directly measurable—we speak in terms of trust, consent, contract, and so on, and keep extending those terms into areas where the “superstitious” nature of our “faith” in institutional structures would be embarrassing. But the fact that it would be embarrassing, for most, in most situations, to say we offer up a “piece” of ourselves not so much to our boss (which would feel especially ridiculous) or even to the institution, but to some “idea” of the institution, even one we don’t really “believe” in, is what makes us post-sacrificial.

But any concept with a “post” (or, for that matter, a “neo”) in it is still a placeholder, and therefore unsatisfactory. Now, we can say much more about how we have arrived, through those “Axial Age acquisitions,” at a “post-sacrificial” culture. Sacrifice is “embarrassing” because it has been discredited, and it has been discredited because the violent centralizing involved in sacrifice—we commit violence against this being that we all focus on in exchange for peace and prosperity—has been revealed as fraudulent. It’s our own mimetic desires that confer centrality on the “sacred” being, not any attribute of the being itself. And we see this because all sacrifice tends towards human sacrifice and, paradoxically, as Eric Gans shows in The End of Culture, does so the more God Himself is understood in “human” terms—that is, more as a mediator between humans than as a central focus precluding the emergence of humans as centers in their own right. If the gods give us food, then we owe them food in return; if God has created us out of nothing, we owe him everything, even our firstborn. But how could even that be enough?

Human sacrifice emerges along with human hierarchy, as the first figures to make a claim to permanent centrality were sacral kings, who were no doubt often killed in ways that, with immense variation across cultures, became increasingly ritualized. The sacral king mediated between the community and the cosmos, and if that mediation went wrong, sacrificing the king would restore it; at a certain point it would make sense to regularize the oscillation between effective and ineffective mediation. Divine kings introduced layers of bureaucratic mediation between themselves and those they ruled, so they themselves could no longer be sacrificed. But divine kings eventually established justice systems to deal with disputes between the new centers inevitably created within those new “layers” between themselves and the people. Regularized forms of compensation are established. Large-scale imperative exchanges develop between the divine king and those situated in the various layers, all of whom bring tributes to the divine king who has, of course, provided his people with everything. With a justice system, injustice becomes a possibility; if injustice becomes a possibility, it is also possible for the system as a whole, in failing to remedy that injustice, to itself be unjust. And its unjust nature could be concentrated in a singular figure, a victim who has become meaningful in a new way, by being victimized precisely as a result of revealing systemic injustice. The sacrifice of such a victim in order to resolve some crisis would take on the ritual sacrificial forms but would be impossible to “contain” within those forms. This process would set in motion the erosion of sacrificial forms, and of the imperative exchange they institute.

So, the divine king inches ever closer to demanding a “total” gift or sacrifice, but can only do so in terms that are so monstrous that the more civilized regions of the system make it possible to see those terms as an indictment of divine kingship itself. From here, those in a position to negotiate in some way with the divine center can go in one of two directions: toward cynicism and nihilism, on the one hand; or toward another form of “total donation,” on the other. Cynicism and nihilism can only be a local phenomenon indulged in by the privileged. The new kind of total donation is to a new kind of center, which cannot be embodied in a central figure, and certainly not a central ruler—this is a center that commands a refusal to engage in the discredited forms of violent centralizing. A genuinely and completely post-sacrificial center would be devoted to propagating and embodying, or signifying, this command. Very few do so wholly, but only a certain number, which we couldn’t determine in advance, would be necessary to exemplify the limits of sacrifice and prevent the social order from being engulfed in it.

So, if we don’t want to call this order “post-sacrificial,” what should we call it? Part of the difficulty is that liberalism “launders” sacrificial imperative exchanges through a post-sacrificial order. Needless to say, scapegoating goes on constantly within a liberal order—much of it remains symbolic, which raises a question: are we irredeemable scapegoaters, so that the best we can do is make scapegoating more symbolic, and less violent? Or does a liberal democratic system predicated upon symbolic scapegoating prevent us from more decisively marginalizing scapegoating? If the latter is the case, the only way of creating an order that would be more than merely “post-sacrificial” would be the establishment of an order we might call “charismatic autocracy.” “Charismatic” in Philip Rieff’s “graceful” sense of charisma as deferral in obedience to an absolute imperative (in our terms, the imperative to defer violent centralizing). “Autocracy,” meanwhile, is essential, because as long as we have hierarchical societies, someone will be at the center, and the only way to avoid constant accusations of illegitimate usurpations of the center and hidden powers behind the temporary occupant of the center would be to place the occupant of the center beyond any external criteria of “legitimacy.” That would represent a radical curtailment of sacrificial logics, because the desire to replace the figure at the center is the most “bad faith” desire possible: it self-evidently represents an attempt to be closer to the center oneself. A general renunciation of that desire would represent a quantum leap in the deferral power of all members of the social order. The argument for such an order would be predicated upon the assumption, for which we could find a wealth of practical examples, that symbolic scapegoating is really just a “gateway drug” prepping us for the real thing. The “charismatic” component of the “autocracy,” then, is less a quality possessed by than conferred upon the autocrat, who is himself in fact desacralized and represents nothing more than the need that someone occupy the center. (This doesn’t mean a social order wouldn’t want, and couldn’t arrange for, the best possible person to occupy the position—it just means that such arrangements must themselves be bound up with the irreducibility of the central authority.)

In grammatical terms, “charismatic autocracy” involves a movement past “imperative exchange” to “interrogative imperativity.” Under the regime of imperative exchange, declarative culture is ultimately a kind of scorekeeping, trying to figure out the respective “values” that are being exchanged. To this day, most discussions of morality take this form. But once the imperative is to resist or defer imperative exchanges, an interrogative, a question, is introduced explicitly into the proceedings. Not the question, “how much is this worth,” which is never a real question because it’s just a way of accommodating oneself to the powers framing the existing order; rather, the question is, what violent centralizing lies at the end of this imperative exchange? All the linguistic means by which you construct yourself as a center then become open to “interrogation,” as either demands for a better “deal” or “intimations” of the creation of new centers that would render any deal irrelevant. Only the demand for this state of questioning can satisfy the command for a total donation.

Within the imperative exchange, declaratives essentially involve haggling over prices—what one owes the gods/institutions, what they owe us, and, further down, what we owe to each other, whether in market terms or in terms of honor and kinship. Within interrogative imperativity, declaratives take on a far wider scope, that of converting possible (and impossible) imperative exchanges into a rule or constraint for deferring “analogous” imperative exchanges. The first question, rather than “how do we get what we’re owed,” becomes more like “what makes you think obligations can be calculated?” And then an inquiry is opened up into all the different ways people can imagine they’re owed this or that—and once the strict terms of obligation have been displaced, many more such possibilities become imaginable. All the mythical and ritual imperatives you are obeying in imagining each and every one of them become evident. Narratives accordingly shift from telling of the spiraling out of control of one imperative economy until it leads to a reset, to putting all imperative economies in question, exposing the imbalance in all presumed balances.

The most powerful way of doing this is originary satire, which involves turning every threshold and boundary into a narrative wherein figures on both sides of the boundary or threshold turn into each other, so that the terms of some expected imperative exchange are reversed. That is, the “vocation” or telos of the sentence is to represent other scenes within the scene of composing and hearing/reading the sentence itself. Everything grammar does—tense, mood, aspect, etc.—it does in order to articulate relationships between the scene of utterance and the other scene(s) it refers/defers to. In that case, all these boundaries and thresholds are themselves materials for originary satire: the relation between present and past, between a continual and a completed action, between possible and actual are all abstracted and re-embedded in narratives. It’s just as easy to say we spoke with each other a thousand years ago as it is to say we spoke with each other yesterday; just as easy to say that we are in the middle of doing something that’s been going on for decades and is further metastasizing even as we speak as it is to mention where we are right now; something that is unbelievably unlikely can be set alongside something that seems obvious; linguistically, you could be saying what I think as easily as I can. Originary satire targets chunks of language, stereotypical sentence types, which tend to harden into the marshalling of evidence for imperative exchanges, for their beneficial or inevitable nature—i.e., sentences that supplement imperative exchanges, rather than extracting samples of language from them so as to remind us that language is always received on a scene. In simpler terms, our expectations of one another rest on the bedrock of imperative exchanges, and the purpose of disciplinary spaces aimed at satirizing those expectations is to let us see them, ultimately in order to construct charismatic and autocratic modes of interaction requiring more “input” into the construction of expectations. In truth, this is the most realistic use of language, because we are always, still, on the originary scene, which can never “close.”

The concept of “interrogative imperativity” makes it possible to pose more explicitly a question that has been implicit in my earlier discussions of literacy as a kind of supplementary originary scene: why is the scene of classical prose objectionable, or worth exposing? Because it fulfills one imperative of the declarative (to defer imperatives by “absenting” the demanded object) by renouncing the other imperative of the declarative—to articulate other scenes with the scene of language itself. This means that the literate declarative scene can only keep reiterating and justifying its own supplementations (again, all the “beliefs,” “assumptions,” “claims,” “suggestions,” “implications,” etc.) so as to sustain the unitary prose scene—it must systematically obfuscate the declarative’s grounding in the ostensive-imperative world. Classical prose and the “classic” disciplines are interested in making beliefs, assumptions, etc., unequivocal, that is, used the same way by everyone—for this reason, they cannot construct, or even imagine, the possible ostensives and imperatives that would come before any “belief” or “assumption.” Originary writing, in that case, restores this grounding, but not, of course, by pretending the literacy revolution never happened. Rather, it takes the nominalizations constructed by classic prose as names which we can apply beyond their restriction, imposed by classic prose, to the unitary scene—most directly, by applying them to the disciplinary iterations of classic prose itself. So, originary writing obeys an imperative from the center discovered/invented by the nominalizations of classic prose and the disciplines. That imperative is to generate more potential ostensives, and what these ostensives do is name sites of emergent, dangerous violent centralization, as early in their onset as possible. Some nominalizations will end up being genuine names for practices of advanced deferral; some will turn out to have been incitements toward violent centralization—the work of the disciplinary space is to iterate these nominalizations/names so as to discover/expose which is which (or to detour them to new uses).

When we study “reality,” then, what we are doing is inventing and deploying concepts enabling us to detect potential sites of contagious mimetic outbreak. We can do this because of the cognitive consequences of literacy, which parallel and contribute to the discrediting of sacrifice. But classical prose and its metaphysical superstructure just contain and normalize sacrifice by classifying and ordering the markings of the potential victim rather than relying on the spontaneous crisis. Still, it is only through that prose and those superstructures that we can generate the terms of a charismatic autocracy. The supplementary concepts used to simulate a shared scene for writer and reader can be turned into means for generating new scenes of origin of deferred scapegoating. If you take a concept out of its context so as to conceptualize the context itself, you create a disciplinary space within that context—that disciplinary space will either reveal that the discipline (the “context”) is too bound to its unitary scene to generate further potential ostensives, or recover and prolong the origin of the discipline/context in a recontextualized ostensive.

May 14, 2019

The Paradoxical Telos of the Aesthetic

Filed under: GA — adam @ 8:11 am

The origin of the aesthetic lies in the oscillation of the attention of the participant on the originary scene between the sign (the aborted gesture of appropriation) put forth by the other, on the one hand, and the central object, on the other. The sign barring access to the object enhances the desirability of that object, while the object, lacking meaning without the sign, directs the attention back to the “well formed” sign. So, wherein lies the aesthetic, then? In the object, which is turned into something like an image of itself; or in the sign, which presents deferral as an attractive model, and constitutes the first body image? It must be in the oscillation itself—in some object of desire as seen through the gesture, which is to say the constitution of the scene, which makes it a “formal” rather than “material” object. So, historically, works of art have mostly been of potentially desirable, or even potentially repellent, things in the world, rather than (directly) of the others who mediate our relation to it—but the work of art presents this object as so mediated—i.e., as socially protected and inaccessible in some way, as opposed to the object it might be representing.

Eric Gans speaks of the history of aesthetics as the history of the incorporation of the scene of representation within the work of art itself. This history commences once aesthetics is distinguished from ritual. So, the earliest secular artworks, like Greek tragedy, do not represent the scene of representation at all—in a manner minimally (but very importantly) distinguished from ritual, the audience participates in the resentment toward the central figure, a resentment that is “purged” by identification with that figure’s suffering. What interests me here is that art, as an immersive experience, is, like ritual, institutionally separated from the rest of life. This is because the social hierarchy that makes one, but not others, of intrinsic interest is taken for granted. Once other centers emerge in a post-sacrificial order, the work of art must include peripheral figures within the work, even if the focus remains, as in Shakespeare’s tragedies, on the “Big Men.” This involves obvious forms of self-reference like the play-within-the-play, but also figures and scenes within the play (like plebeians expressing resentment towards their superiors) that comment on the events involving the Big Men.

I think we can see this as a broader process of undoing the ontological separation between the work of art and the social world of the audience experiencing it. Once the voices of those similar to the audience are represented within the play, why not the audience itself? Why shouldn’t the participation of the audience be the play? It may be considered an astonishing testament to the institutional power of artistic representation that not only did it take so long for the idea to emerge that the creative primacy of the artist is ultimately a mere adjunct to the experience of the “recipient,” but that this idea has still not moved much beyond the artistic “avant-garde” margins to more mainstream or officially sanctioned works. The pleasure of transcending resentment by subordinating ourselves to the “domination” of the artist is certainly part of the resistance to an aesthetics that would be nothing more than minimal shifts in attention producing maximum oscillation between the created scene and other scenes.

The broader problem, though, is that trying to undo the life/art boundary requires that the practices of “life” that resist participation in “art” be represented, and that those artistic conventions that “segregate” the audience from the work also be represented. Otherwise, how would we know we were transgressing a boundary? But these must be critical representations, of conventional “complacency” that wishes to be “spoon-fed” artistic pleasure, on the one hand, and of traditions of representation that “condescend” to and “manipulate” a “passive” audience, on the other. Taking on the art/life boundary is asymmetrical warfare, i.e., terrorism, which is always snuffed out in the end. This has always been the dilemma of the avant-garde, which, amusingly enough, always saw itself as bringing art to the “people.” Even with much more pacific and patient approaches, moves towards abolishing the art/life boundary will always involve moves that reconstitute it.

That just means, though, that this paradoxical relation between the institutionalized scene of art and the other scenes that the art scene must itself stage would be transferred to the domain of everyday practices. The paradoxical telos of the aesthetic is to make all of life aesthetic. Or, rather, since all of our practices already have an aesthetic dimension, this telos is to open up “everyday life” to artistic creativity. The romantic and modernist utopian vision was that everyone would become an artist, once freed of inhibiting conventions; an absolutist approach, more modest, is that everyone would take an interest in noticing and enhancing the aesthetic dimension of those conventions. It follows from the formalist maxim (that all relations of power and authority be made explicit and named) that the norms and conventions governing all areas of life would likewise be made explicit and named, and naming is best embedded in a memorable act—and making acts memorable is part of what art is for.

Such daily aesthetic activity would be intensely interactive: just like on the originary scene itself, we would all be imitating and “inflecting” one another’s signs. Now, if the aesthetic includes the oscillation between sign and object, the recognition of the formality of the sign (which is to say, its iterability and therefore imitability) must take place on the periphery itself, horizontally between the participants on the scene. If we ask, how would the sign “coalesce” into a final shape in the reciprocal gazes cast around on the scene, I think the answer is that it would emerge out of another oscillation which each participant would see in the others: an oscillation between vulnerability and threat. The tension between these opposing attitudes on the scene is what would paralyze everyone sufficiently to arrest the progress towards the central object. This pre-aesthetic oscillation is what would break down the pecking order and require some new means of preventing conflict.

This pre-representation of the other as equally and alternately vulnerable and threatening is what I have called “originary satire,” and posited as the initial moment of the aesthetic. Think of what would be involved in representing everyone this way—in drawing out everything monstrous, dangerous, vicious and menacing about them, while simultaneously finding everything pathetic, impotent, desperate and cowardly. Some rather remarkable, if ultimately static, characterizations would be possible, especially since presenting oneself as a threat can be seen as a way of concealing or compensating for vulnerability, while at times there can be nothing more threatening than a vulnerable, “cornered” animal. If we all saw each other exclusively like this, human life together would be impossible, and an art work that stopped at this pre-moral satire would be incapable of any real closure—I wonder if that is why Wyndham Lewis’s satires often seem awkward, somewhat arbitrary and unfinished, as he claimed to be aiming at such a non-moral satire. So, aesthetic practice must proceed from what is really the most egalitarian practice of representation imaginable back to the center, and the “asymmetry” of placing someone or something at the center and projecting the oscillation of threat and vulnerability onto that individual. Eventually, the figure’s vulnerability is concentrated in high culture, and its threatening character in popular, and then mass, culture, where we identify, as Gans says somewhere, with one or a few good guys killing lots of bad guys.

But originary satire would need to become part of the telos of the aesthetic in the kind of formalist integration of art into life I proposed above. It takes very little to frame another as vulnerable or threatening—in fact, we do it all the time, when we calculate advantages and try to neutralize the aggression of others. Representations in daily life that construct the oscillation between the two would institute a genuine model of deferral, though. “Do unto others as you would have others do unto you” and “what is hateful to you do not unto another” were revolutionary moral advances at the time of their invention, but if you look at them too carefully they are thin, inconsistent, and capable of all kinds of cynical applications. What if others like what is hateful to me? I suppose we could move to the meta level and say, well, in that case, treat the maxim in a more complex manner and figure out what is analogous, for the other, to what is hateful to you. At that point, though, we need another maxim. “When you see the other as threatening, imagine how he might be vulnerable; when he seems vulnerable, look for what might make him a threat” would be a much better source of moral reflection, as it would enable us to identify the role we play in constituting the other as victim or victimizer.

If originary satire is to provide our preliminary aesthetic framing of the other, we would then construct ourselves and others as centers so as to elicit signs of threat and vulnerability in the other, and continue our construction of these modes of centeredness so as to have what is threatening and vulnerable in us “match” that which we find in others. The other might be threatening physically, emotionally, or intellectually, which means that I present a vulnerability to that particular threat along with a threat of my targeting what I perceive as the other’s vulnerability, should he or she in fact prove a threat. It’s in both sides’ mutual interest to proceed in this way, which preserves the symmetry needed for interaction along with the difference needed for the generation of new signs. It would be a learning process, involving trial and error and constant revision. As we proceed in our interaction, we build trust by coming to constitute one another’s centrality primarily in terms of the other’s vulnerability, and to satirize one another less. Relapses are always possible, of course. (By the way, I don’t see this reciprocity exclusively in terms of modern social orders—I think that egalitarian hunting and gathering communities are probably extremely satirical in their dealings with each other.)

The aesthetic practice of everyday life involves, to use that phrase from Gans’s The Origin of Language, “lowering the threshold of significance.” We can always uncover new layers of threateningness and vulnerability, and potential layers, hypothetical layers, and so on. The aesthetic practices of everyday life would provide representations with at least a trace of this pre-aesthetic representation, resolving the oscillation into a center based on one pole or the other—more or less completely, depending upon how much originary satire can be borne in a given setting. The practice of non-moral satire, which aims at an elemental humanness, not simply to hurt and ridicule the other (because, if done right, the practitioner doesn’t escape either), but to represent the most basic materials of any moral order, would be an extremely important thing to teach children at an early age. It would discipline some of the cruelty and terrors to which children are liable and vulnerable; even more important, it would inoculate them strongly against taking their resentments in a socially transformative direction, since bred into them would be the knowledge that these human fundaments can’t be transformed.

The relation between “art” and “life,” then, would be bridged by the reciprocal satire of artist and audience. Any scene becomes an artistic scene insofar as it includes another scene as audience and co-creator, one which turns the artist into a sometime spectator as well—in the end, maybe we can’t tell the difference between one and the other, leaving us with pure oscillation. Social media and “meme-ing” already enact this kind of satirical oscillation, as bits and pieces of language are constantly taken out of their context and used to create other contexts in which anyone might have uttered those words. Imagine B, C, D, E and so on saying this X which A just said—this is an infinitely replicable form, which reveals something threatening/vulnerable about those we can’t imagine saying it just as much as it does about those we can. Of course, the lack of any need for start-up funding is crucial here; and, of course, this also makes the “memers” highly vulnerable to the vagaries of leftist political ratcheting within the various platforms. But the “dial” on boundary-abolishing originary satire can be turned up or down. If we think about artistic practices as shaping cultural participants, providing them with language and making them better language learners within the disciplines, originary satire should provide us with ways of thinking about dissemination and infiltration, which requires working just below the threshold at which the cultural censors are programmed to detect transgression.

May 7, 2019

The Event of Technology

Filed under: GA — adam @ 7:34 am

Insofar as power is desacralized, there is nothing but mutually hostile “interests” engaged in struggle over the decaying corpse of the social body; at the same time, power is never genuinely desacralized, because as soon as the sacred center is punctured, mythicized centers like “the common good,” “the voice of the people,” “Constitution,” “rule of law,” and, eventually, “GDP” are set up as masks of what everyone must assume is there—an unquestioned authority rooted in a singular origin. These mythicized centers are intrinsically arbitrary and divisive, though, which means they must eventually escalate hostilities into some “total” form.

Desacralization of power, though, is possible because there is a difference between the ritual center and activities engaged in outside the center. In the earliest human communities, we can assume that in activities apart from the ritual center nothing at all changed, and the ritual center reproduced as precisely as possible the originary event. But the sign deployed on the originary scene, along with the constraining structure of ritual, would be extended to other activities; at the same time, linguistic development towards the declarative would involve the attribution of actions to (“mythical”) occupants of the center. The mythical interpretations of ritual would be drawn from the far less interesting but nevertheless determinative actions outside the central aura and be converted into actions modeling behaviors for the community. Out in the field, hunters battle their prey; on the narrativized ritual scene, the sacred beast gives life to the group.

As social cooperation increases, stories of the origin of each new mode of cooperation would be “heard” or derived from the center—it would probably be the case that you couldn’t do or create something new without attributing the discovery to a mythical agent. You would in turn be obliged to that mythical agent, and would give to it some part of the fruits of your labor, which in turn would be part of the individual’s contribution to the center for the entire community. The gift the god has given you comes with an imperative: in one form or another, that imperative would be to use it in such a way as to honor the donor. In return, the individual issues an imperative to the mythical being: a prayer, requesting aid in successfully using the skill or implement. All the implements of work and war would be created within this frame, of what I have been calling an “imperative exchange.”

The implements themselves, their parts, and the implements used to produce the implements are all part of this imperative exchange. This is to say there is a “magical” component to the process: ritual words and gestures must be applied to all acts involving production and use, and instances of successful or failed use would implicate the implements themselves, which don’t simply break, and aren’t simply poorly used, but refuse, for reasons that may be more or less formulated, to follow the commands given them. In a certain sense we could say that, of course, an early human smoothing out his spear knows that this has to be done so that it can fly straight and fast when thrown, but his way of thinking about it will be framed completely in terms of being in harmony with all the agencies of the surrounding world. Such processes become institutionalized, and to craft some item in a way that is not traditionally prescribed and monitored by the upholders of that tradition would also be unthinkable.

So, the question is, how did it become possible for “technology” to emerge—that is, production conducted outside of these forms, in accord with the logic of continually reducing the elements of one process to another set of elements produced by another process? I think that the answer must be: when it becomes possible to see other human beings as implements. The divine kings, commanding hundreds of thousands, even millions, in their slave war and labor armies, would first get a view of all these individuals as “parts” of a whole that might be more than the sum of its parts. Some could be added; some subtracted; some moved over here; some over there. If some worked harder, the possibility of combining all the better workers would come to mind; if workers or soldiers improvised and found some new way of cooperating with each other, that could be remembered and reproduced. This is already a kind of technology.

The Axial Age acquisitions made it increasingly difficult to levy these vast, sacrificial masses. So, in the European middle ages, while there was steady technical development, and some remarkable feats of engineering and architecture, such development never exceeded the limits set by existing corporate and authority relations. The masses confronted in the New World and, especially, the farmers enclosed out of their land and flowing into the cities must have ignited a new technological imagination. For quite a while, the development of machinery seemed to track pretty closely intensifications in the division of labor, with each laborer being given increasingly simple tasks within an increasingly complex process. If automation has now itself become an autonomous process, it is because men were first automated. Eventually, of course, technology came to alleviate and eliminate human labor, but in the process the disciplines, focused on both technological and human resources, became the main drivers of social development. The human sciences, which took over from theology and philosophy, treat humans in technological terms, as composed of parts that work together in ways that can be studied and modified. Even attempts to “humanize” disciplines like psychology reduce people to a set of interchangeable and predictable clichés.

The disciplines naturally think they should run the government which, after all, is just another technology. And whatever claims the government might make on its own behalf, like fulfilling the “popular will,” are best left to the disciplines, upon which the government would anyway be dependent in measuring such things. The emergence of data- and algorithm-driven, all-intrusive social media, which more and more people simply can’t live without, is a logical extension of this process, as is the elimination of millions of jobs through new modes of automation. But desacralized technology, like desacralized power, provides a frame within which ultimately unlimited struggles ensue. Indeed, technology is the dominant form of power. If technology presents itself to us as an enormous system of interlocking imperatives which provides a very precise slot for us to insert our own imperatives, who or what is at the center? What ostensive sign generates the system of imperatives?

Technology is completely bound up with the specific forms the centralization of power takes in the wake of the desacralization of power. It is part of the same furious whirlpool of decentralization, as old forms of power, predicated upon earlier forms of technology, are broken up, and then recentralization, as new forms of power exploit the new technologies to remove mediating power centers in zeroing in on each individual. In that case, the commands of the center are mediated technologically, which is to say through our self-centerings as both objects of technological manipulations and imaginings and subjects becoming signs of the algorithmic paradoxes: our choice here is to become either predictable and unreliable, or unpredictable and reliable. In this way, we situate ourselves at the origin of the technological event, and model forms of power that will advance participation in the reinscription of technological markings upon us.

The telos of technology, then, is to make technologically produced human interactions into models for further analysis of practices into networks of sub-practices, out of which new practices are synthesized. In the process, the cultural work of deferral becomes increasingly technological—this means that we will think more in terms of deferring possible conflicts in advance, in making them unthinkable and impossible, rather than intervening crudely after the fact. We would work on turning binaries into aggregated probabilities, and making those aggregated probabilities capable of expression in language—this would be a source of important artistic and pedagogical projects. It would be as if we were producing futurity by continuing to work on the originary scene itself—in, say, settling “in advance” some dispute between friends, a particular wrinkle in the fluctuations of aborted gestures on the scene is revealed—the scene, one can now see, would only have cohered if one member had shaped his sign of deferral while positioning himself just so in relation to his neighbor and the center.

What about all the moral and ethical questions bound up with technology—gene manipulation, increasingly destructive weapons, pharmaceutical interventions into behaviors, deficiencies and capabilities that were once within the normal range but now, at a higher resolution, seem to call for remediation, etc.? Behind all these anxieties is the fading away of a sense of the human that was formed logocentrically, which is to say through the assimilation of the literate subject to the scene of speech, in which all are present to each other, and intentions are inseparable from signs. Humanism is a degenerate form of the Axial Age acquisitions. But this is not to say that our telos as technological beings is simply to go full speed ahead on all counts. We need a new way to think about these things, one that doesn’t rely on what are ultimately historically bound feelings of defilement. There is a human origin, and origins that iterate that origin, but no human nature. The event of technology, in which we become, collectively, models of further interventions that will in-form us, is itself originary.

Some of those moral and ethical questions are not real questions, relying on dumbed down or falsified versions of actual or possible scientific developments. The answers to those of them that are real questions will depend upon the state of the disciplines. Only within disciplinary spaces will it be possible to ask whether a proposed innovation or line of inquiry, i.e., some proposed new power, will have commensurate responsibilities assigned to it. Only in properly composed disciplines can these questions be raised free of scapegoating pressures demanding remediation to enjoy new “freedoms” or to avoid some form of ostracism. Anthropologically grounded disciplines would have to work to make new innovations and inquiries consistent with the basic terms of social coherence, while using new possibilities to continue studying those terms; and then we would have to assume open channels between the disciplines and central authority. There is even a place for “letting the market decide,” as long as we keep in mind what the “market” is: what people without direct authority for maintaining the social center do with knowledge, information and skills when they are being protected and bounded but not directly supervised by such authorities. Supervision can be relaxed and tightened for various purposes, and one of the purposes for relaxation is certainly to see what intelligent and talented people can do when encouraged to engage in skunkworks. In this case, as in all cases, the ultimate test for the reception of any novelty would be whether it helps sustain the pyramid of command starting from the central authority, and even contributes to ensuring the continuity of that authority from ruler to ruler. And the disciplines will, accordingly, make themselves over into articulations of practices, refined by the latest divisions of labor, that study the diverse forms of human interaction for models of technological transformation—in the process establishing meta-practices for representing this dialectic in a way intelligible to central authority.
