GABlog Generative Anthropology in the Public Sphere

November 14, 2017

Declarative Imaginary

Filed under: GA — adam @ 7:07 am

We think in language, which means we think almost exclusively in declarative sentences. We hear and read lots and lots of sentences throughout our lives; we remember very few of them, but we distill from all of them a stock of sample sentences. When we read or hear new sentences, we measure them against that stock: we assimilate new sentences to some in the stock and “understand” them, we find no way to assimilate those sentences and so we reject them because “they make no sense,” or we use the sentences to expand, diversify and reorganize our stock. And when we write or speak, we aim at affecting others’ stocks in the same way.

A sentence has a topic, and a comment—in the most familiar grammars, a subject and predicate. There is something to which we are being asked to pay attention, or to which it is assumed we are already paying attention, and something is being said about that thing. This presumes some “issue” regarding the topic—if someone needs to know something or be told something about the topic then something about the topic’s “identity” or existence is in question. For originary thinking, what I call “originary grammar” or “anthropomorphics,” what is unsettled about the topic is the status of some imperative directed towards it. The question is more directly associated with the sentence, but the question is asked as a way of holding the commands and demands, however distant, at bay. The comment forbids the interlocutor (however hypothetical) to press the demand further because the referent of the topic has been rendered off limits due to circumstances beyond our control. The sentence is a cease and desist order.

It is also an order to contemplate, rather than grasp, or change, or possess the “topic.” To “contemplate,” in grammatical terms, means to try out new “comments” along with the topic. These might be comments or predicates more likely to gain us access to the topic, or at least so we hope, but they might be comments or predicates that place the topic even further beyond our original desire. Eric Gans, in his The Origin of Language (of which a new, streamlined and much more accessible but equally rigorous edition is on its way), examines the “esthetics of the sentence.” For Gans, esthetics has its origin on the originary scene itself, in the moment of hesitation created by the first sign: the sign, the “aborted gesture of appropriation,” directs the participants’ attention to the central, desired object, while also forbidding that object, which turns the attention back to the sign. This oscillation between object and sign is “esthetics.”

A similar structure holds for the declarative sentence, which forbids the desired access to the object expressed in the imperative, while at the same time making that object all the more attractive by “framing” it with the “forbidding” comment. Moreover, the object as represented in (and desired as a result of) the sentence is no longer the object originally desired: possessing the actual object would abolish the object as desired through the sentence, while the sentence renders the actual object inaccessible. What Gans calls the “esthetics” of the sentence, though, I would prefer to call the “imaginary” of the sentence, since the “imaginary” is better suited for examining the foundation of communities.

What is the “imagination”? I like to work with R.G. Collingwood’s very economical and simple account from The Principles of Art. Let’s say you’re looking at a lawn, at the end of which you see a wall, beyond which you can no longer see the lawn. You can extend the lawn in your mind, “seeing” it continue past the wall by extrapolating from what you see now. You “imagine” the extended, completed lawn. So, the imagination is always an extended, extrapolated and completed representation of something “cut off” by whatever is “framing” it in your actual field of vision. But the very possibility of the lawn extending indefinitely, or following any one of a number of possible “extrapolations,” is what makes it appear “framed” and “bounded” in the first place—even the extended lawn will meet and therefore, to use the Derridean term, has “always already” met, a boundary. The possibility of the object being other than it is, and the object being as it is, rather than one of those other possibilities, are both creations of the imagination.

The “imaginary,” then, is the constitutive frame of a shared reality, always implicitly distinguished from some other possible reality. If I can’t imagine my friend betraying me, and hence inhabit, along with that friend, a world in which betrayal is constitutively excluded, then that is the imaginary of that friendship—one bounded, ultimately, by the threat of betrayal, which would put an end to that world. None of the actions I have ever seen my friend perform, none of the words I have ever heard him speak, can be extended or extrapolated so as to fit into a betrayal scene—which means they negate all those indications of possible betrayal which might otherwise lead one to form expectations of its eventuality. It is through an act of faith in my friend that I have constructed his words and actions thusly.

So, the imaginary of the declarative sentence is in the process I described above of maintaining the oscillation between the sentence and the desired topic/subject/object by trying out different comments or predicates that keep the real object in sight or mind while reminding us that any appropriation must be in a mediated and transformed form that doesn’t restart the imperative crisis whose imminent or distant possibility first informed the sentence. Now, the roots of the declarative imaginary are in our imperative relation to reality. Gans speaks of an “imperative-ostensive” dialogue that precedes the emergence of the declarative. He models this on the surgeon’s ordering of surgical implements from the nurse or assistant, who in turn confirms while obeying the command: “Scalpel!” “Scalpel!” We should imagine such an ostensive confirmation of imperatives to be the norm in the pre-declarative community: it would be an important way of maintaining “linguistic presence,” which is to say assuring one another that our words and gestures continue to sustain a shared world.

But there is an imperative-imperative dialogue, the “imperative exchange” discussed in my previous post and prior to that. This dialogue is with the center—but it should be pointed out that all dialogues are with the center—our fellow humans are representatives of the center, or a particular manifestation or aspect of the center. Prayer is the most fundamental imperative-imperative dialogue: God, tell me what to do. We request from the center a command; a command, ultimately, regarding the most propitious way of approaching the center in the future. We constantly make the same requests of all objects, ultimately stand-ins for the center: when we deal with a new device or tool, we ask it to teach us how to use it. The imperative world is a magical one in which words create realities, and when the imperative turns out to be infelicitous, that world, or imaginary, collapses, and a declarative one must come to take its place.

The declarative imaginary is predicated on possible failures of the imperative imaginary: our declarative sentences are concerned with predicating all the ways all our imperative exchanges can fail. The declarative imaginary is what constitutes and “manages” our stock of sample sentences: all the sentence “prototypes” available for “retrieval” for the purposes of reading, speaking, writing and thinking are cut to size to fit the declarative imaginary. The declarative imaginary is defined, on one side, by what we could call the “marginal imperative”: the weakest but still active and therefore possibly disturbing imperative. Think about something that offends you, but just barely. Being “offended” is answering to an imperative: what has been done to you must be “answered,” even if only in your own mind. Below the barely perturbing offense there is an act that could be constructed as an offense, but you wouldn’t bother. But someone else would bother, and maybe you did yourself not that long ago. That you no longer perceive any offense means that some imperative territory has been “colonized” by declarative force: you may notice and interpret that action, introduce it into your calculations, but it no longer “compels” you—in fact, that it no longer compels enables you to notice things you wouldn’t have in a reactive mode. There is a new reality behind the action which is more worthy of attention than the action’s (minimal, as you now can see) effect on you.

The marginal imperative is varied across a culture (otherwise it would be impossible for an individual to set aside a particular imperative) but there are cultural baselines here: in a civilized culture, for example, the imperatives associated with blood feuds fall below the “margin.” But what makes this rising of the threshold of imperativity possible is what bounds the declarative imaginary on the other side: the absolute imperative. The absolute imperative derives from the original declarative scene: don’t pursue impossible imperatives that inevitably lead to violence without achieving their desire. This is a very expansive category—determining what makes an imperative possible and which imperatives must lead to violence depends upon how far down the road of a chain of consequences you can see, which is to say it depends upon how much reality your raising of the marginal imperative has allowed you to imagine. But that in turn means that the absolute imperative is translated into the command to raise the marginal imperative. And what answers to the command to raise the marginal imperative is to de-center yourself as a source of imperatives: the more you see yourself as a center, and the more you elevate your centrality and aspire to higher degrees of same, the more you must answer to and answer for. Imagine taking offense at everything that might indicate some slight, however minimal.

To work on raising the marginal imperative is to take an interest in (that is, to hear new imperatives that compel you to inquire into) the hierarchy of imperatives. There are still things one “must” take offense at—those, then, solicit more important imperatives. That also means some “imperators” are more authoritative than others, which is to say some speak from a place closer to the center. The absolutist imaginary extends and extrapolates from this hierarchy to an explicitly recognized hierarchy in which the source and scope of imperatives is constantly clarified—clarified by our heeding and obeying them. Imperative exchanges are continually breaking down and being reconstructed here as elsewhere: the higher the marginal imperative and the more extended the absolute imperative, the more effectively this declarative work, this “sentencing,” proceeds. If we de-center ourselves effectively, hear and obey the imperative to not seek to occupy the center, then we may return to the center, stand in for the center, in a new way: as one who has visibly enhanced the reciprocal action between the marginal imperative and the absolute imperative and has thereby clarified the imperative order for others who desire its clarification. Of course, there will always be a place for those who de-center themselves even from that centering, who refuse centrality altogether in the name of naming the forms of centrality that might emerge in the long run. That would be a way of life devoted exclusively to expanding the declarative imaginary, deliberately renewing the stock of sample sentences, always sentencing.

November 7, 2017

Felicity

Filed under: GA — adam @ 7:10 am

J.L. Austin, in originating the concept of “performative” speech acts, considered such acts to be “felicitous” or “infelicitous.” Performative speech acts effect some change in the world, rather than saying something “about” something, and therefore either “work” or don’t “work,” as opposed to being true or false. The canonical example is the words spoken in the marriage ceremony: “I do”; “I now pronounce you man and wife.” In this case, the groom and bride are not describing how they feel about each other, nor is the pastor describing their relationship—all three are participating in creating a new relationship between the two. Such speech acts are felicitous if carried out under the proper, ritual, ceremonial, sanctioned conditions: if I happen to hear, in a store, one customer say to one salesman, “I do” (when asked, say, if he would like to look at another pair of pants) and another customer say “I do” (“do you like that perfume”) and I shout out “I now pronounce you man and wife,” nothing has happened, even if the two might appreciate my quick wit. The problem for speech act theory or philosophy has always been where and how to draw the line between performative speech acts and what Austin called “constative,” or referential speech acts (which can be judged true or false). As is often the case, what seems to be a simple and intuitively obvious distinction gets bogged down in “boundary cases” the more closely we examine it. Even a scientific claim, with its proof replicated numerous times, requires its felicity conditions: a “sanctioned” laboratory, a scientific journal, an established discipline, etc. Genuine theoretical advances always come from cutting such Gordian knots by subordinating one concept to the other, with the subordinate concept (like Newtonian physical laws within Einsteinian physics) becoming a limiting case of the dominant one. Within the disciplinary space created by the originary hypothesis, in which the first speech act was undeniably performative, creating humans, God, and a world of objects that could be referred to, the decision is an easy one: all uses of language are to be understood as performative, with the constative the limiting case.

Seeing language as performative is easy in the case of the lower speech acts theorized by Gans in The Origin of Language; the ostensive and the imperative are, from any perspective, acts which do something in their saying: such acts only make sense if they work, i.e., change something in the world. The problem comes with the speech act traditionally defined in terms of truth conditions, the declarative. Declarative sentences are, first of all, true or false; that they be reducible to truth or falsity seems almost to be a definition of the declarative sentence. So, what do declaratives do? Well, for starters, they answer questions. As R.G. Collingwood pointed out, any sentence can answer, at a minimum, one of two questions: a question about the subject or a question about the predicate. If I say “John is home,” I can be answering a question about John’s whereabouts or about who is home. Introducing modifiers increases the number of (quite possibly mutually inclusive) questions that might be answered by the sentence: “John is safe at home” answers, along with at least one of the previously mentioned questions, a question about some danger presumably or imaginably faced by John. We might say that a good sentence is one that maximizes the questions it elicits and answers. And a good question would be answerable by a declarative sentence. Of course, what makes a question answerable, and which questions a sentence might be answering, depends upon the space, ultimately a disciplinary space of historical language users, within which the sentence is uttered, written and/or read; and sentences provide us with evidence, perhaps the best we can have, regarding the constitution of those spaces. Our sentences are informed by tacit, unasked questions.

But what are questions? The fact that any question can easily be re-written in the form of “tell me…” indicates the interrogative’s dependence upon the imperative. Looking at it from the other side, we can imagine the process of transition from imperative to interrogative: get that! Go ahead, get it! Come on, get it already! Get it, please! Will you get it? Could you get it? Will you let me know whether you might be willing to get it? If the shared focus is maintained, an unfulfilled (either refused or impossible) command turns into a request for the performed action or object, and finally a request for information regarding its possibility. Imperatives themselves, meanwhile, are an immensely complicated and varied batch—from plea and prayer on one side to command and directive on the other, with summons, requests, instructions and much else in between. I have focused, perhaps inordinately, upon the imperative, and intend to continue to do so, because very few people like to talk too much about it. The reason is obvious: imperatives are intrinsically asymmetrical, implying some difference in power or access, even if momentary—if I tell you to pass the salt because it’s at your end of the table, neither of us thereby has more power, but it is precisely that kind of relation—one person in possession of something others need—that makes a more structural imperative relation possible. Linguistically speaking, the liberal fantasy is for a world without imperatives: the mere statement of facts and description of realities would be sufficient to get us all doing what we should. But what is the dominant means of production in the contemporary world, the algorithm, if not a series of imperatives strung together declaratively (if A, then implement B; if C or D, implement E…)?
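
A minimal sketch of that last point, purely illustrative: the condition keys (A, C, D) and the implement_* functions below are hypothetical placeholders, not anything taken from the post; the code only shows what it looks like for imperatives to be strung together by declarative conditions.

# Purely illustrative sketch: an "algorithm" rendered as imperatives strung
# together declaratively, following the schema above
# ("if A, then implement B; if C or D, implement E").

def implement_B(state):
    state["did_B"] = True  # stand-in for some commanded action

def implement_E(state):
    state["did_E"] = True  # stand-in for another commanded action

def run(state):
    # Each branch wraps an imperative (a command to act) inside a declarative
    # frame (a condition that is simply true or false of the current state).
    if state.get("A"):
        implement_B(state)
    if state.get("C") or state.get("D"):
        implement_E(state)
    return state

# Example: run({"A": True, "C": False, "D": True}) executes both imperatives.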

And, finally, what is an imperative? It has its origins in an infelicitous ostensive—the ostensive involves shared pointing at something, for which the verbal equivalents are naming and exclamations (“What a beautiful day!” doesn’t make an empirical claim but rather assumes the listener will join in appreciation of the day). The infelicitous ostensive that leads to the imperative is naming—what happens if someone, out of ignorance, impatience, desire or naughtiness, names an object that’s not there? If it happens to be nearby, someone might just go and get it, and we have a new speech act. All these speech acts, then, from pointing to the most convoluted sentence, emerge from the Name-of-God directed at the object at the center on the originary scene. Now that we have brought the center into play, we can work our way back in the other direction. The imperative, according to Gans, would have been invented (or discovered—the line between the two is very thin here) on the margins—the (ritual) activity at the center among these earliest humans would not have allowed for such mistakes (or at least would not have allowed for them to be acknowledged). But it would quickly come to be applied to the center. The basic relation between humans and deities is a reciprocally imperative one: we pray to God and God issues commands to us. This is what I elsewhere called an “imperative exchange”: if we do what God says, we can expect our requests to Him to be honored. But the imperative exchange accounts for our immediate relation to the world more generally. In originary terms, the world consists of nameable objects—not everything in the world is named, but anything could be. Those names are all derivative of the center, the source of nameability itself. We engage in imperative exchanges with all named objects, all objects that are “invested” linguistically: we accept commands from them that require us to “handle” them in specific ways, and in return they yield to our own demands that they nourish, guide, refrain from harming, or otherwise aid us. We of course have little crises of faith all the time in this regard. One thing we do in response is firm up the world of things, make it more articulated, make the chain of commands issuing from it more hierarchical and regular. In other words, a technological understanding of the world is essentially the ordering of all the imperative exchanges in which we participate. A very powerful theory of technology in general, and contemporary technological developments in particular, will follow from this.

Now, Gans provides for a complex derivation of the declarative from the failed (infelicitous) imperative, and I would like to preserve that complexity—this is no place for shortcuts. (In my reading, despite its natural relation to the imperative, the interrogative actually emerges after the more primitive declarative forms, filling in a gap between the imperative and declarative.) Someone in the community makes some demand or issues some command and you either refuse or (more likely) are unable to comply—the object is unavailable, the act cannot be performed. This must have happened often in the purely imperative community, but it must have also been resolved fairly quickly, because we have, of course, no record of any human community that stopped at the imperative. The problem is, how to communicate, how to find the resources for communicating, the infelicity of the imperative? We have to imagine a kind of brief equilibrium—the “imperator” is not withdrawing his command, but is presumably not proceeding to act directly on its “refusal” violently; the recipient of the command is presumably standing his ground, but also not eager to initiate violence; there’s some danger, therefore, enough to make some innovation necessary; but not enough to make it impossible—there’s a need to think and some space to do so.

In Gans’s construction of this (let’s say, proto-declarative) scene, the target of the imperative repeats the name of the object requested along with an “operator of interdiction.” The operator of interdiction is an imperative, forbidding, in an open-ended way, some action: examples would be “don’t cross at the red light”; “don’t smoke”; “don’t eat fatty foods,” etc. The operator of interdiction is the imperative that seems closer than any other to the originary sign itself, which is essentially an interdiction on appropriating the central object. The operator of interdiction must have emerged when one member of the community needed to bring another member into a familiar form of shared attention or “linguistic presence” in which others were already participating—think about situations where it’s enough to say “don’t” for the other to understand what they shouldn’t do; it would subsequently have been used repeatedly in cooperative contexts, when impatience or imminent conflict threatened to undermine the group’s goal: a gesture meaning “don’t move” or “don’t make a sound” would be readily intelligible in situations where it was evident that that is precisely what someone was about to do. The interdiction is a slightly asymmetrical ostensive and a very gentle imperative. The linguistic form of the interdiction would have gradually been extended over longer periods of cooperation where dense tacit understandings unite the participants, until the form became generally available.

Its meaning, though, juxtaposed to the repeated name of the object, in this novel context, seems multidirectional: what is the “imperator” being told to refrain from? Issuing the imperative itself? Proceeding from the infelicitous imperative to violent retaliation? Desiring the object altogether? The imperator will recognize an interdiction being imposed upon him, but why should he obey it? What makes it convincing? Only a realization of the absence of the object. The problem, though, is that it is on this scene that the means for communicating the absence of the object are created. If the operator of interdiction is also directed toward the object, though, that is, if the object itself is being commanded to “refrain” (from being present and available), then the two-pronged imperative can have the necessary effect. So, in this primitive declarative—the operator of interdiction is the first “predicate”—the imperator is told to cease and desist “because” the object has been ordered away. And the only possible source for the imperative issued to the object is the center itself, or God. But in that case, the interdiction issued by the speaker must have the same source, since it is intrinsically connected to that issued to the object. The declarative sentence, then, opens us up to imperatives from (to mangle Spinoza) “God, which is to say, reality.” Declarative sentences respond to or anticipate the failure of some imperative exchange by conveying a command from the center to lower or redirect our expectations, which involves redistributing our attention. Unlike the ostensive and the imperative, the declarative establishes a linguistic reality that does not depend upon the presence of any particular object or person in the world: it creates and sustains, in the face of the constant force of imperative realities, a model of the world that allows more of the world to be named. They utter the Name-of-God outside of any ritual context. That is what declarative sentences do, that is their performative effect.

This language-centered discourse needs to be put to work, and that will be done. For starters, consider the following: why do you, or any of us, do what we do? We can always ascribe rational motives to ourselves by retrojecting a chain of reasoning for what we have done, but obviously there wasn’t a chain of reasoning that got you started on that chain of reasoning in the first place. Why were you interested in the thing you started thinking about, and interested in the way that started that particular line of thought? We can give psychological and even biological explanations, but there is ultimately a gap between some purported internal “mechanism” and language that can’t be bridged. No, you do what you do because you are obeying a command. Where in “reality” (material exigencies; tradition, or a long chain of commands) that command comes from, how it has been reshaped in the processes of arriving at you, how you have to modify it in order to fulfill it, when its authority lapses, and that of another imperative takes its place, are all among the most interesting questions. But we are command-obeying beings.

A final, ethical conclusion. How are we to find felicity, that is, a general felicitousness of our speech acts? In the continual clarification of each of them in themselves and in their relations to each other. In the ostensive domain, we engage perpetually in the Confucian “rectification of names.” In the imperative domain, we clarify the commands we heed (and those we in turn transmit), trace them back to a larger chain of commands, and cleanse them of reactive, resentful, prideful counter-commands (the commands we heed themselves provide the resources for this). Our questions should be grounded in some imperative “blockage,” and made answerable (if not necessarily once and for all) by declaratives. And our declaratives should be decomposable into such questions while letting through higher, more central imperatives, commanding us to renounce stalled imperative exchanges and the resentment towards the center they generate.

October 31, 2017

The Single Source of Moral and Intellectual Innovation

Filed under: GA — adam @ 6:38 am

The graded, or staggered, model of action I presented in my next to latest post, and which I have elsewhere called “centered ordinality,” can provide us with a model of thinking along with one of morality. If the first sign appeared as a deferral of violence, then every sign appears likewise: not, needless to say, imminent collectively destructive violence as on the originary scene, but whatever would count as self-threatening violence for the thinker. (By “sign,” I mean anything that is taken to produce meaning). Even the most commonplace thoughts and ideas would fit this model—you produce a sign, i.e., you think of something, something occurs to you, as part of a feeling that something would be lost or destroyed otherwise. This is the firstness of thinking, and it doesn’t matter if the sign is original to you in some way or the most tired cliché—it’s doing what it’s doing for you right at that moment. And it’s doing it for you in the plural, even if you’re not directly interacting with others—at the very least it’s the two or several in one each of us is. We’re not the exact same person we were a second ago, if only because the thought we just had mediated the transition from one to another, and we’re always mingled in various ways with everyone else. The kind of panic, or oblivion, or complacency that shuts down thinking is a kind of violence conducted from the outside (the semiotic ecology) and imposed on oneself—if I think beyond this setting something will be unsettled that I’d like to consider settled. It is feeling the imminence of this shutdown as violence that keeps thinking going.

So, you start off thinking against this imminent violence, and it crystallizes in some encounter with another line of thinking (perhaps the line of thinking that led to the panic, oblivion or complacency) from which it must distinguish itself. This is the secondness of thought—its channeling through inherited formations. But, of course, the thinking itself could never have been outside of inherited formations—after all, the thinking must have been done in language, the most inherited of all formations. But thinking in its firstness takes its departure by rerouting what has been inherited back through its originary structure—an expression that has obviously been said by millions of people but was said by this person at this time and place in this way; a prayer you’ve repeated a thousand times but for the first time seemed to be really heard; a phrase in a book that takes on new meaning because it’s referenced by another book, etc. A sign can only be meaningful insofar as it has previously generated meaning, but it can also only be meaningful if it represents a new beginning. The secondness of thought wrenches the sign out of that originary context by imposing on it the weight of all the other, and especially the historically most weighted, contexts. The secondness of thinking makes the sign retroactively predictable.

Predictability is both the issue and the bane of thinking. We are seeing, on the alt-right in particular, a very vigorous defense of stereotypes, and it forces one to realize how stereotyped and complacent anti-stereotyping thinking has become. Of course there are differences between groups, however we might argue over explaining them, and these differences are registered in both commonsensical and more rigorous modes of thought. It has been courageous and liberating for the alt-right to affirm these suppressed truths. The acronym NAXALT (Not All Xs Are Like That) has emerged as a standing mockery of the feebleness of most attempts to “counter” stereotypes. Stereotyping is the highest form of sacrificial thinking: if someone needs to be blamed for some social calamity and excluded or made an example of, the stereotype tells you where to look and allows for no appeal—that is, it will not allow the pursuit to be hindered. We can never be completely outside sacrificial thinking (just saying that stereotyping is sacrificial is itself a kind of stereotyping and therefore a sacrificial claim). And we certainly can’t refute it. But it is in the nature of sacrificial thinking and action to initiate chains of events that are unintended by and consume the initiator, because, following the laws of mimesis, invidious distinctions operate virally. You start with a clear distinction between same and other and eventually find yourself possessed by an other within. All we can do to interrupt such chains is lower the threshold of significance: if a particular group has a disproportionate proclivity to commit certain harmful acts, then you can formalize those acts and target the doers rather than the social reserve from which they sally forth. There will then remain the less violent residue of social stigma and marginalization but, first, it’s less violent and, second, from there another lowering of the threshold might be attempted, if the still remaining level of potential violence continues to provoke thought. (But let’s say the group in question is so powerful, self-interested and relentless that it blocks any attempt to formalize and institutionalize—well, then, either that group rules through the hierarchy it has established and will find itself with the same need to contain virality; or, it exploits some weakness in the ruling order and does you the favor of pointing out that weakness so it can be repaired—if you restore the capacity to destroy that group, it will no longer be a dire threat, or even the “same” group.)

Of course, if the lowering is not formalized and institutionalized, the lowering process can destroy itself by putting in place another even more viral distinction (between those who continue to stereotype and those who reject all stereotyping and therefore end up stereotyping their “other” especially virulently). To set yourself against stereotyping as such is to place yourself in opposition to social order and thinking itself. If society is oppressive because of stereotyping, then the deepest, most taken-for-granted stereotypes must be the most pernicious. You then have to destroy the most obvious things, and get outraged by boys preferring trucks and girls preferring dolls. The attempt to completely abolish sacrifice issues in the gnostic mania of monotheism; a more enduring monotheism keeps noting that whatever order you are trying to protect by conducting sacrifices really derives from another, prior and more permanent order that your sacrifice will violate, even while your sacrifice might defer, render more indirect, and mitigate another, more terrible one. The creation of the sign precedes the division of the object and all the sacrifice can do is restore a practice of division that will reset the terms of mimetic rivalry. Sacrifice relieves us of the rigors of deferral by providing everyone with a fair share of the victim. Maybe sometimes we need to relax the rigors of deferral—this is what Philip Rieff called “remissions.”

The lowering of the threshold of significance constitutes a kind of renewal of firstness within secondness, and is accomplished by incorporating thirdness into the thinking process. Thirdness is the recipient and normalization of the interplay of firstness and secondness, founding and institutionalization, but it is also the position of the witness or spectator. The ability to detach yourself sufficiently from ongoing events so as to observe them as an unfolding drama is an originary source of thinking, morality and esthetics. Of course, this means being able to observe yourself, as both actor and observer, and therefore to see yourself falling into predictable roles and patterns. This self-reflexivity represents the extension of firstness into thirdness. The most moral and the most thoughtful position is one wherein you turn yourself into a sign that reveals the panic, oblivion and complacency that suppresses thought and provides a new means of deferring the violence those dispositions evade. This means inventing practices that lower the threshold of significance. The means of such invention are to be found in repetition, which has the effect of taking a sign from firstness through thirdness, as well as continually retrieving its firstness. Nothing has really happened until it has repeated, because the meaning of any sign is predicated upon its iterability. The more you deliberately repeat a sign, the more it is both stripped of meaning and becomes sheer sign, nothing but a way of centering attention. Maybe the most accessible form of repetition is satire, which pretty much anyone can do—repeat a familiar sign in a way that’s believable, recognizably not what it is repeating, distinguished by a stripping away of attributes that protect it from certain kinds of scrutiny. The moral and the intellectual come together in satire: the thing represented is “unconcealed,” and implicitly measured according to some standard of the good. To become a sign is paradoxical, both preempting and accepting vulnerability to satire, oscillating between firstness and thirdness.

A model of thinking is always a model of a disciplinary space. A disciplinary space is organized around a sign oscillating between predictability and novelty—a discipline like sociology comes into being because something unrecognizable had emerged in human groups, something that didn’t fit terms like “community,” “nation,” “polis,” “republic,” “people,” “kingdom,” etc. Genuine disciplinary spaces tend to take shape in the corners of the established, institutionalized ones, through “satirical” repetitions of their founding gestures and concepts. Disciplines are determined to make a few terms, bringing to attention a specific cluster of phenomena, work—they start with the assumption that they will work, and don’t abandon that assumption until something else comes along that might include what has been organized through a broader concept. But it should always be possible to come back to the founding paradox of a discipline—the decision to see everything one way even though everything appears utterly different than that way (if a discipline just reproduced what we already saw and knew, it would be unnecessary). Let’s say we wanted to view the same social situation as one of complete order and as one of complete disorder. We could easily do it, by adding predicates to either order or disorder—what appears to be disorder is really an invisible order, what appears to be order is really moral disorder, etc. If you keep accumulating predicates on both sides, you would get to the point where you could say, looking at this phenomenon, if we’re willing to see this set of predicates as operating hierarchically in this way so as to articulate the substantive, we’re going to see this kind of order; if we’re willing to see this other set of predicates, etc., we’re going to see this kind of disorder. As thinkers in firstness, we should always be on that boundary; as actors and artists in secondness and thirdness we will inevitably be struck by the order or disorder (or uneven combination of both) that actually appears and narrows the world of possibilities. What thinking does is make being struck in this way a starting point for thinking.

 

(Those familiar with the thinking of Charles Sanders Peirce will notice my indebtedness—somewhat distant by now—to his philosophy and semiotics, in particular his categories of firstness, secondness and thirdness. I would note especially his essay, “On a Neglected Argument for the Reality of God.”)

October 26, 2017

Autocracy Stalks the End of History

Filed under: GA — adam @ 6:23 am

Eric Gans’s readiness to put “liberal democracy in question” would have already made his most recent Chronicle of great interest, but his subsequent “supplement” made it absolutely essential to address this discussion. Gans’s recent discussions, even explicit affirmations, of liberal democracy have had the effect of making this mode of government seem far more hideous and grotesque than I would be able to manage myself, until he got to the point of finding no real argument in favor of liberal democracy other than superior economic growth. So, obviously some questioning has been going on, and the ongoing cannibalization of liberal and democratic institutions and norms alike by the left has reached a certain threshold of unacceptability. What is particularly interesting is that Gans is now willing to consider China a genuine, if still to his mind, undesirable, alternative to the liberal West, and this would put originary thinking on new and untried terrain—GA has focused almost exclusively on Western developments, but it seems we may have to start studying Confucianism; it may also be that China represents a vast, untapped market for GA itself. Here’s a good place to get started:

In the absence of political parties and free elections, political debate in authoritarian societies takes place among factions whose pluralism varies inversely in proportion to the strength of the central power. If previous Chinese leaders, wary of repeating the disastrous results of Mao’s later years, have preferred to share power among several factions, Xi’s economic successes appear to have provided him a sufficient basis for a new hegemony, allowing him to acquire near-absolute power, so far at least without the irrationality that characterized the reigns of Mao or Stalin.

Let’s note here the acknowledgement that an “authoritarian system” reliant on playing one faction against another (essentially a more controlled form of divided or insecure power) can transition to a more “autocratic” one, with power centralized in the hands of a single individual. And that such a transition need not be irrational (i.e., it can be rational). One of the interesting things about discussing autocratic rule is that it’s hard to deny that it is better at some things than liberal and democratic forms of rule; and, once you acknowledge that, it’s hard to deny that it can get better at what it is already competent in, and better at things that have been assumed to be antithetical to that form of rule.

Among the more striking facts of recent history is the ease with which central authorities perpetuate themselves unless toppled from without. Aristotle and Montesquieu described the perilous nature of the tyrant’s role, as illustrated by the oft-assassinated Roman emperors and various examples of “Oriental despotism,” but today’s despots, including Putin and Erdogan, let alone the Kims and the Castros, or for that matter, Saddam and Khadafy before their countries were attacked by Western powers, seem invulnerable to internal overthrow. The crucial difference between them and “strong men” like the Shah or Hosni Mubarak would seem to be greater ruthlessness. But in none of these cases has autocracy provided, as Xi promises to do, superior economic performance in exchange for the loss of political freedom. (Singapore under the late Lee Kuan Yew might be considered an exception, but this city-state can hardly serve as a model for a full-sized country.)

Another relevant difference is that both the Shah and Mubarak were betrayed by their patron and thrown to the wolves. But this certainly is an interesting observation. Attributing survival to ruthlessness seems a bit circular without some independent measure of ruthlessness—otherwise, their survival itself becomes proof of greater ruthlessness. Maybe it’s just that single-man rule is just as coherent and “natural” as liberal democracy. Maybe more—it’s been around a lot longer.

The fundamental question is whether such a system can ultimately become more prosperous than our messy old market system. In schematic terms: one market or two? Economic markets in both cases, but in one, the higher-level regulation of the market is imposed by a self-perpetuating central authority rather than in the hands of changing representatives of the electorate.

The one market or two question refers back to Gans’s analysis (recapitulated briefly earlier in this Chronicle) of liberal democracy as comprised of two markets: the economic market, and a political market that allows for a form of collective decision making that elicits, contains and at least in part addresses the resentments generated by the inequalities caused by the economic market.

The crux is whether an authoritarian system can generate greater political efficiency to make up for its diminished economic efficiency, which will presumably be affected by the damage to morale inflicted by thought control. Which obliges us to turn once more to the rise of the victimary in the West and the not-so-soft institutional thought control that it produces, increasingly indoctrinating the young with victimary clichés and taboos and obliging its citizens to salute, in place of the national flag, the idol of “diversity.”

Whether an authoritarian system (but why not “autocratic,” or “absolutist,” since China seems to be closing in on that, and that was the very point of Gans’s discussion leading up to this question?) can generate greater political efficiency is an excellent way to formulate the question, but why presuppose the diminishment of economic efficiency? The reason Gans gives here seems especially weak—it would be very interesting to find a way to compare the collective “morale” of China with Western Europe or the US, and I don’t think anyone would be all that surprised to see the former outperforming the latter in this field. It would seem odd to assume that political efficiency must somehow be at odds with economic efficiency—don’t businesses, scientists and engineers prefer a stable social environment?

Xi’s ambition for “modern socialism” challenges my response to Ryszard Legutko’s ominously ironic assimilation of Western PC to the dogmas of Eastern Euro-Communism (The Demon in Democracy, Encounter, 2016 [2012]; see Chronicle 532): that, à tyrannie égale, at least the West has relatively healthy economies. But leaving the economy aside for the moment, if there is indeed to be tyrannie égale, then the very foundation of liberal democracy on the continued implicit consent of the governed is placed in jeopardy. Grosso modo we may say that the rise of the “alt” versions of right and left reflects this tendency, neither one accepting the traditional gentlemen’s agreement that its opposition will remain “loyal.” Significantly, in contrast to the Old Left, with its high hopes for the Soviet Union, the new alt-left is not at all dependent, nor even terribly interested in the fate of socialism outside its home borders. Its conviction of the inherent evil of “capitalism” is not based on a contrast with an exemplary model, utopian or otherwise, but is fundamentally moralistic. Victimary critique takes the place of every form of structural criticism. Since every practice can be shown to “victimize” in some way or other, we must engage in a constant battle against all of them, with “the end of discrimination” the only ultimate goal.

 

American society’s ability to deal effectively with victimary extremism has yet to be demonstrated…

This is really the crux—it seems to me that Gans is inching closer to the conclusion that victimary extremism cannot be controlled in America (or the West more generally), in which case exploring the possibilities of other forms of government is essential, even urgent. Gans still sees liberal democracy as the more “ideal” form of government, even if he has been brought to the point of accepting the possibility of settling for second best. But such judgments are inherently unstable—if the second best government can thrive while the best crashes, doesn’t that mean we must reverse our assessment? Gans’s continued hope for a recovery of liberal democracy (and even an ultimate turn in that direction by China itself) must also assume (although he doesn’t take up the point here—but Chronicle #532, referenced above, is a good place to take a look) that the victimary is some parasitic growth upon liberal democracy, perhaps caused by an over-reaction to the horrors of the Holocaust, rather than a (not necessarily the) logical conclusion of liberal democracy itself. As Gans himself acknowledges, liberal democracy has always been to some extent victimary—why should it be surprising that, as the still extant layers of tradition are peeled off one by one, liberal democracy would be revealed to be victimocratic to the core?

Gans persists in seeing “autocracy” (which should mean “self-rule,” shouldn’t it?) as “bad,” even if potentially better in one (albeit crucial) respect than the “good” liberal democracy. But his supplement gives us an opening to examine the question in a rather rich way:

Supplement (October 24, 2017)

Having read this Chronicle, a friend pointed out to me an October 21 piece by Rachel Botsman in Wired magazine entitled “Big data meets Big Brother as China moves to rate its citizens” (http://www.wired.co.uk/article/chinese-government-social-credit-score-privacy-invasion), which describes an elaborate rating system that gives everyone a “national trust score,” and that will become the official Chinese basis for all kinds of judgments well beyond financial credit by 2020.

This gave me the idea of a clearer way of comparing Chinese with Western authoritarianism. These scores will definitely put a premium on loyalty to the regime, and, to the extent they are detectable, keep expressions of dissent to a minimum, as well as stigmatizing easily detectable vices such as video games. Certainly a step toward neo-1984. But there is an upside to this reliance on “objective” measures.

China (and Japan, and I imagine, South Korea) admit students to universities based on examination scores. American universities, even where racial criteria are supposedly illegal, as in California (hard to believe that prop. 209 would get the vote of today’s woke electorate) increasingly give out admissions based on “diversity.” There is also increasing pressure to do the same in industrial hiring, and we are constantly asked to lament the “white privilege” of the whites (and Asians) who get most of the good jobs in high-tech industries. So if we can say on the one hand that the West’s freer economy is a plus over the managed economy of socialism even at its most enlightened, and that it’s arguably preferable to be able to express one’s resentments freely rather than whisper them with the shower turned on, the advantage of these freedoms is certainly offset by the dilution of objective criteria in personnel selection. As opposed to the old Soviet dogmas, today’s Chinese dogmas are more methodological than doctrinary, and in contrast to such things as Lysenkoism, they take their science straight (even when taking ours). What this suggests is that the autocratic nature of the society and its repression of dissent bear increasingly on the mechanisms of social control rather than on the specifics of decisions to be made in the economic and technical spheres.

Of course this discussion brackets such things as the Chinese takeover of “territories” in the South China Sea, and its under-the-table encouragement of North Korea, as well as China’s push for economic hegemony in Asia (New Silk Road) and throughout the Southern Hemisphere. But it does allow for an element of objective comparison. As our society becomes more digital-technological, hence farther from the old norm of “labor power” as the rough equivalent of moral equality that inspired Marx’s Labor Theory of Value, meritocratic selection becomes increasingly important—not just to get the “best” people, but to get everyone to strive to be the best. (Which is the major reason why—but don’t let the UC Diversity folks hear you say this—Chinese kids are good at math.)

Conversely, it is precisely the evil of meritocracy (“disparate impact”) that is the focus of the ascriptive victimary thinking that has virtually eliminated all other thought on the Left today.

The Botsman article is a pretty interesting read. Pretty much any autocratic (which is more or less a synonym for “absolutist”) system with access to advanced electronic technology would ultimately end up employing some version of China’s social credit system—Gans’s emphasis, in comparing China’s autocracy with America’s victimocracy, is on the centrality of some notion of objective merit to any social order depending upon advanced technology (you simply need competent engineers, scientists, doctors, teachers, etc., and therefore “competence” must be valued in itself). In fact, insofar as the victimocracy is intrinsically hostile to all objective, non-political measures of merit, Gans seems to be settling the issue right here. But, of course, if autocracy is capable of privileging merit so single-mindedly, it can’t simply be “bad.” In fact, if it can be brought to focus increasingly insistently upon merit, it would get better and perhaps find ways of reducing corruption and grounding its autocracy in something other than Communist Party rule (the continued repetition of inane “socialist” slogans and verbal formulas isn’t very meritorious, is it?).

But what about that social credit system itself? As Botsman points out, it’s really just an extension and centralization of what we already see developing in the West, in which records of all activity are preserved online and in one way or another made available to those institutions that have to “credit” each of us in some way; the most obvious example is our credit score. China wants to add indicators of virtue to the social credit score, by, for example, crediting someone who puts their salary toward a mortgage rather than toward gambling, and to directly reward and punish individuals based on this score. The possibilities here are endless, and would depend upon a discussion of what counts as “virtue,” for which contemporary societies would therefore have to equip themselves: should the citizen who goes to the museum housing acknowledged national art treasures get more points than the one who goes to the latest postmodernist exhibit? Should baseball be ranked above football, or MMA? Staid but informative documentaries over horror movies? Etc. Botsman also raises the question of gaming the system, which the Chinese have apparently gotten quite good at when it comes to standardized testing.

In the West, the only available answer is to say, “who’s to say?,” and blather on about privacy, individualism and freedom, while railing against the “surveillance state” and “creeping totalitarianism”—you can write up the debates before they even occur (1984!). It is clear that the autocracy would be capable of hosting a much more robust and mature discussion of questions of value and virtue, however it chooses to organize that discussion. Social credit scores would be determined by algorithms, of course, but this wouldn’t be rule by algorithm—the state, the autocrat, would have to determine what criteria should guide the creation of the algorithms. This would certainly be a learning process for all involved—if the state discovers that its point system with its rewards and punishments makes a large portion of the population economically unviable (by, say, determining that they can’t use banks or public transportation, or would find it impossible to rent a home or find a mate), clearly the algorithms would have to be recalculated. In general, people would orient themselves toward the social credit ranking system, implicitly participating in dialogues over its determinations. (How many social credit points do you get for blogging on ways of improving the social credit algorithms?) Insofar as something like a social credit system creeps into the West (in the usual confused, indecisive, partly apologetic, partly arrogant way), reactionaries could use that creep to point out that if individualism is being replaced by something like an electronic village, it is preferable for that village to be centrally run and governed by a shared conception of virtue. The Chinese should really find a way to transition from Communism to Confucianism, and maybe we should as well.
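
To make the last two points a bit more concrete, here is a minimal, hypothetical sketch of what a state-defined point system and its "recalculation" might look like. Every criterion, weight, and threshold below is an invented placeholder for illustration; nothing here describes the actual Chinese system or the details of the Botsman article.

# Hypothetical sketch only: a point system as a weighted sum of criteria, plus a
# crude recalculation step of the kind imagined above. All names, weights, and
# thresholds are invented placeholders.

CRITERIA_WEIGHTS = {
    "pays_mortgage": 10,   # rewarding salary put toward a mortgage
    "gambling": -15,       # penalizing salary put toward gambling
    "museum_visits": 2,    # the kind of "virtue" question raised above
}

def social_credit_score(record, weights=CRITERIA_WEIGHTS):
    # Sum the weighted criteria recorded for one individual.
    return sum(weights[k] * record.get(k, 0) for k in weights)

def recalculate(weights, population, viability_threshold=0, max_unviable_share=0.2):
    # If the point system leaves too large a share of the population below the
    # viability threshold, soften the penalties: the "learning process" the
    # post imagines the state would have to go through.
    unviable = sum(1 for r in population if social_credit_score(r, weights) < viability_threshold)
    if population and unviable / len(population) > max_unviable_share:
        weights = {k: (w // 2 if w < 0 else w) for k, w in weights.items()}
    return weights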

October 24, 2017

Mimetic Theory and High-Low v the Middle

Filed under: GA — adam @ 6:42 am

Let’s imagine a scene, let’s say an accident on the side of the road: a few people rush to the scene and start helping the victims; if a few more come and there is nothing more for them to do for the victims, they call for help and help keep others from entering the primary scene; then, others come, with nothing much to do, but they serve as witnesses and are on hand in case some instrument or specialist must be fetched (a mechanic or doctor; a first aid kit). I think this is the best way to think about social organization, as always centered on specific needs and dangers, and as set up to differentiate people in accord with the role they can best play in meeting those needs and facing those dangers. In the scene presented above, there is a bit of chance and a bit of natural difference: it may be that those first on the scene just happened to be closest, while some of those standing around later might have been just as qualified to help. Still, these things tend to sort themselves out—someone who happened to be first but is afraid to take responsibility (or is unqualified, which means that he has avoided such situations, and neglected preparing for them, in the past) is likely to slip back into the crowd, while someone among the later arrivals who is willing and qualified to help is likely to present and announce himself.

According to Eric Gans, the first human scene, upon which we can model later ones like that sketched above, is more precisely specified. Here we have a desirable object, presumably some food item, at the center of the not yet human group: these advanced, highly imitative apes, have their appetite for that central object inflamed, made into desire, by the awareness of the desire of all the other members of the group. This intensifying desire overrides the animal pecking order that normally maintains peace within the group—the alpha animal eats first, the beta animal eats when the alpha is finished, and so on. The alpha could never withstand the force of the group as a whole, but animals never “organize” themselves as cooperative, coordinating groups. Now, as all start to rush to the center, the animal hierarchy is abolished. What takes its place, according to the originary hypothesis, is the sign—what Gans calls the “aborted gesture of appropriation.” Think about traditional gestures of greeting, like hand shaking—it’s a way for each side to show it is not holding any weapons. Stretching out your hand with a weapon in it would signal violence; here, the same physical gesture is converted into a renunciation of violence. Think, for that matter, of a threatening gesture (which I doubt anyone does any more), like shaking your fist at someone—by demonstratively withholding the act of violence, you actually provide a space of peace, even if coupled with a warning. The initial sign was the invention and discovery of this “method” of converting violent actions into gestures of deferral. The gesture is likely to be more effective and enduring the more it actually mimics and therefore evokes the violence deferred—when we shake hands now, we don’t do so (in civilized zones, at least) with a sense of the relief that the hand coming towards us isn’t holding a knife—which is what makes the handshake an essentially empty gesture (it’s not good enough to seal a deal any more, that’s for sure).

The car accident seems like a very different scene—there’s no object of desire, and therefore no cause for conflict. Everyone can just focus on helping the victims. But that’s not the case—every human scene has an object of desire and hence contains within it potential conflict. Something goes wrong in the attempt to extricate the victim—wait a minute, whose idea was that!? The rescue effort can turn very quickly into an exercise in blame shifting and power struggles. There must be someone first on the scene in a more primary sense—someone who can command the gestures of deferral needed to prevent those resentments lying right beneath the surface from becoming manifest and distracting from the effort. Maybe everyone involved is good at that—like trained medics would probably be. But that’s the result of the institutionalization and trans-generational transmission of the necessary gestures. Someone, then, had to build and maintain those institutions, and doing so involved an analogous process of deferring the resentments inherent in any collaboration and creating the norms and models of leadership others can inherit.

I’ve explored in a couple of recent posts the problems involved in the process of institutionalization. There’s nothing new here—in one of the commemorations I read recently for the just deceased science fiction and military writer Jerry Pournelle, I came across an observation attributed to him: in every institution there are those who are concerned with the primary function of the institution, and those concerned with the maintenance of the institution itself. Anyone who has ever worked in any institution knows how true this is, with the exception that plenty of institutions don’t even have anyone concerned with (or cognizant of) their primary function any more. Those concerned with the primary function should be making the most important decisions, but it will be those interested in institutional maintenance who will be most focused on and skilled at getting into the decision making positions. But someone has to be concerned with the maintenance of the institution—those absorbed in its primary function consider much of the work necessary for that maintenance tedious and compromising. (The man of action vs. the bureaucrat is one of popular culture’s favorite tropes—in more fair representations, we are shown that sometimes the bureaucrat is needed to get the man of action out of holes of his own digging.)

If we go back to the simple scene outlined in the beginning, we can see this is a difference between those who are first on the scene, and those who are second—for simplicity’s sake, we can just call them “firsts” and “seconds.” The seconds establish the guardrails around the firsts as the latter do their work, and they make for the “interface” between the firsts and those who gather around the scene (the “thirds”). They will also decide which resources get called for and which get through to the firsts, who are too busy to see to such details. There is no inherent conflict between the firsts, seconds and thirds, but there is the potential for all kinds of conflict. The firsts (and the first among the firsts) should rule, and should be interested in nothing more than enacting all the signs of deferral that have been collected through successive acts of rule. Even defense against external enemies is really a function of enhancing the readiness of the defenders of the community, and the community as a whole, and doing that is a function of eliminating all the distractions caused by desires and resentments, with the most attention dedicated to where it matters most. The seconds should be filtering information coming from below, marshalling resources, and transmitting commands and exhortations from the ruler. And the thirds, the vast majority of the community, should be modeling themselves on and ordering their lives in accord with the hierarchy constitutive of the community. The problem of institutionalization is the problem of the relation between firsts and seconds, or firstness and secondness (since all of us occupy different “ordinal” positions in different settings).

But, of course, sometimes the first is not up to the task—maybe he once was, but no longer is, while being unwilling to cede power, without there being any definitive proof of his unfitness. And once firstness has been formalized, the tradition or mechanism by which someone is placed in that role will sometimes elevate someone unworthy. In such cases, the seconds, who will be the first to notice, start to worry—they may start to think one of them should be in charge (but which one…?); or that they have to exercise power behind the scenes, reducing the person presently in charge, but very likely his successors as well, to a position of dependence. Under such conditions, the right thing to do is, above all, to preserve the ontology implicit in the originary scene, what some of us call an “absolutist ontology,” which should therefore be inculcated as part of the accumulated signs of deferral bred into the community. We all know that in an emergency, or in any really important situation, no one thinks in terms of democracy—everybody, except for saboteurs, thinks in terms of manning the stations each is best suited to man. But that also means taking the stations each is presently manning, or is accustomed to man, as the default. A reliable indicator of firstness is the ability to revise previous assessments and assignments and to formalize present fitness. If the first is not up to the task, the radical solution of removal must come very far down on the list of remedies—we must first of all carry on as if he is capable, and if the seconds have to lend some support that will go unnoticed and unacknowledged, so be it. (This is itself a form of firstness on their part.) It may even be necessary, after the fact, to narrate events in such a way as to attribute centrality to the designated first. Of course, if removal becomes absolutely necessary for the survival of the community, such practices will make it all the more difficult; this is a good thing, though, and these practices also ensure that any remove-and-replace actions will be carefully crafted so as to preserve absolutist ontology.

Absolutist ontology is rejected when these practices, these attempts to bring formalized roles and assessed capabilities into closer correspondence, are abandoned and some among the seconds start to exploit the gap between the ruler’s attributed power and his actual power. If the second’s efforts must sometimes go unacknowledged, the same goes for the first’s dependence on the second, and this can be a lever for increasing that dependence. Then a struggle, partly overt, partly covert, commences, and it is at this point that both parties (or all parties, because the seconds are likely to fall out amongst themselves under these conditions, while the king thereby surrenders his firstness) seek allies, or proxies, among the thirds. The king has been granted power, but he doesn’t really deserve or properly use that power; perhaps he doesn’t really exercise that power, which is in fact wielded by secret, insidious forces. The hierarchy inherent in absolutist ontology can in this case no longer serve as a model for the thirds to use in composing their lives—rather, it is a mere appearance, hiding a reality that the action proposed by one or another of the seconds (or the first himself, turning against what Imperial Energy calls his “essentials”) will unveil. Skepticism, pluralism, and all the rest follow, and here is where HLvM has full sway. What has happened is that mimetic desire, that is, envy of the putative being possessed by the other, which centuries or even millennia of accumulated deferral have converted into a complex array of signs assigning roles and duties, has now been introduced as a legitimate principle within the community (the king/your lord is keeping something from you, so, therefore, are his supporters, and maybe your neighbor as well)—and once this happens, mimetic desire, corrosive as it is, must become the dominating principle of the community. Then you have institutionalized civil war, and democracy is nothing other than this institutionalization, with voting blocs at most several steps away from dissolving into armed camps. The problem is how to avoid taking sides in this civil war, or at least not just taking sides; the only solution is to find ways of realigning ourselves as firsts, seconds and thirds in as many (and sufficiently visible) ways as possible, and thereby recovering and creating as many gestures of deferral (while marking them as such) as we can.
