GABlog

February 12, 2019

Paradox, Discipline, Imperative

Filed under: GA — adam @ 6:12 pm

If the signifying paradox is constitutive of the human, then humanistic inquiry, or the human sciences, really involves nothing more than exposing and exemplifying that paradox in forms where it had previously been invisible. The paradox here is that we know what we’re going to find, but we’re going to find it, if we’re searching properly, precisely where we assumed our search for it was paradox free. I’ve been hypothesizing that what constitutes the post-sacrificial disciplines has been the concealment of the scene of writing (and subsequent media) upon which those disciplines depend. Drawing upon David Olson’s discussion of “classical prose,” in which he shows that writing historically took the form of a supplementation of the speech act represented in writing, I’ve been arguing that this supplementation occludes the scene of writing itself. What the scene of writing reveals is that words (and ultimately all other signs) can be separated from their scene of utterance and the intentions of those on that scene and iterated on further scenes and taken up by other intentions. As Derrida claimed, writing reveals that what is truly originary in the sign is its iterability, not its meaning or the intention behind it; we can take the next step and say that its iterability, which guarantees the possibility of future human scenes, is its meaning, and is the intentionality of anyone issuing a sign. So, the meaning of the word “dog” is something like “I reference, with varying degrees of directness, all previous uses of the word ‘dog’ in order to enable a potential ostensive that will enhance scene construction in more or less vaguely conceived future instances of emergent mimetic conflict.”

The disciplines, starting with the mother of them all, philosophy, want to abolish paradox. An acceptance of paradoxicality would situate the disciplines as supplemental to the paradox of imperatives issued by the center: the narrower and more precise the imperative, the more all of its intended subjects must make themselves ready and worthy of obeying it in unanticipated settings. Inquiring into this paradox would be all the human sciences we ever need, but in this case the disciplines would have to “abdicate” their self-appointment as those who provide the criteria upon which we judge the legitimacy of the sovereign. Is the sovereign doing “justice,” is he protecting and respecting the “rights” of his subjects, is he meeting their “needs,” adhering to “international law,” enforcing the “law,” ensuring “prosperity,” “wealth creation,” “growth,” etc.? Has he been selected and does he rule according to procedures in a way satisfactory to all those who have themselves been appointed by certain procedures; all of which procedures merely lead us back to the establishment of those procedures according to other procedures, which…? If no, then he’s not the “real” sovereign, and in order to know whether he is or not you have to be a political scientist, a legal theorist, an economist, a sociologist, etc. To maintain that position, you must suppress the paradoxicality of your own utterances. You must provide certain, clear, unequivocal declaratives yielding universally available virtual ostensives that lead to only one conclusion regarding whether the central authority is rightly distributing whatever it is your science assumes he must be distributing.

The human sciences claim they conduct inquiries modeled on the experimental sciences, with their process of hypothesis generation and testing, but they really don’t. (Do the physical sciences? Should the physical sciences? I leave these questions aside for now.) I worked my way to this realization through reflection upon my own little field, the teaching of writing. I came to see that all the criteria used to determine whether student writing was “good” or “improving” were circular—terms like “clarity,” “precision,” “deep analysis,” “reading comprehension” really don’t mean anything, because what it means to be clear, precise and all the rest depends upon the situation, i.e., the discipline. The assumption is that the instructor him or herself knows what clear, precise, analytical, etc., reading and writing look like because, otherwise, what would he or she be doing teaching writing at an accredited institution? But that means that all of these supposed concepts really translate into the teacher saying “become more like me.” And how can the student tell what the teacher is “like” (since the condition of the student is defined precisely by being unlike the teacher)? Well, I’ll tell you when you are or aren’t. So, for most writing or English teachers out there, this is why your students always ask you “what you want”—they have intuited that the entire structure of your pedagogy is predicated upon you desiring from them a reasonable facsimile, not of who you really are (that would be hard enough) but of who you imagine yourself to be.

From this I concluded that what is to be rejected in this conception of teaching and learning is that its “standards” do not provide “actionable” imperatives. No one can obey the imperative “write more clearly,” unless there is already a shared understanding of what “clarity” entails for the purposes of that communication. And, again, in the educational setting, such imperatives are issued precisely because the student doesn’t have access to such a shared understanding. So, I concluded that the only kind of “fair” and effective pedagogy is one that provides students with imperatives such that they can participate in creating the shared understanding making it possible to determine when those imperatives have been obeyed. This generally involves something like “translate (or revise) X according to rule Y,” i.e., some operation upon language from within language. I don’t want to go any further here (but if anyone is interested… https://wac.colostate.edu/docs/double-helix/v6/katz.pdf)—but the point here is that the conclusion applies to all the human sciences (which are all, really, if unknowingly, pedagogical). That is, a genuine human science would have to participate in its “object” of study, producing imperatives aimed at improving the social practices it studies, along with generating the shared criteria enabling the practitioners to assess the way and degree to which the imperatives have been fulfilled. (Of course, political scientists, sociologists, economists and the rest make suggestions to policy makers all the time—indeed, they are routinely hired and subsidized for this very purpose. But the results of these suggestions and proposals can only be assessed in the language and means of measurement of the disciplines themselves—they therefore represent different ways of imposing a power alien to the practice in question. They are attempts to give imperatives to, rather than receive imperatives from, the central authority.)

The next question, then, is how do paradoxes generate actionable imperatives? To get to paradoxes generating imperatives, we can start with the imperative to generate paradoxes. Find the point at which the relation between the name, concept, or title and what it names becomes undecidable—that is, where it is impossible to tell whether some thing is being represented or some representation is producing a thing. This undecidability pervades language in ways we usually ignore—has it ever seemed to you that someone had the “right” name (their given name, not a nickname)? It’s absurd, of course, but, on the other hand, one’s name can correspond more or less closely to one’s being, can’t it? The argument over whether words represent their meanings “arbitrarily” or through their “sound shape” as well goes back to Plato’s Cratylus, and is not settled yet—whatever the truth, the fact that it’s a question, that words sometimes seem to match, in sound, their meanings, is an effect of the originary paradox.

This paradox of reference will emerge most insistently in the anomalies generated by disciplines at a certain point in their development, but can be located at any time. What is “capital,” what is the “state,” what is “cognition,” what is “identity”? If you ask, you will be given definitions, which in turn rely upon examples, which in turn have become examples because that term was used to refer to them. This is the kind of deconstructive work that opens up the question of the relation between a discipline and the intellectual traditions it draws upon and conceals. Within that loop of concept-definition-examples-concept is the founder of the discipline and the containment of some disciplinary space. A new imperative, or chain of imperatives, from the center is identified and represented as a new imperative the sovereign is now to follow—he is to create a new social order freeing capital or making the state independent, unleashing new cognitive capacities, representing pre-formed identities.

Articulating these paradoxes, then, presumably helps us generate concepts other than “capital,” “state,” “cognition” and “identity.” Let’s review the process of discipline formation on the model of Olson’s study of literacy and classical prose. Writing represents reported speech, but since it does so in abstraction from the speech situation it must supplement those elements of the speech situation it can’t represent: tone, gesture, the broader interaction between figures on the scene. This generates new mental verbs: suggest, imply, insist, assume, and so on. These mental verbs are in turn nominalized into suggestions, implications, assumptions and so on (it doesn’t happen equally with all words—there seems to be no corresponding nominalization of “insist” that works quite the same way, at least in English). These nominalizations become new “objects” of study, for linguists, psychologists and ultimately all the human scientists. These concepts are artifacts of literacy—this doesn’t mean that they can’t tell us something about processes of thinking, knowing and speaking, but it does mean that they conceal their origins and become naturalized as “features” of “mind” or “language.” Cognitive psychologists, for example, can set up ingenious experiments that test the role of, say, “prior assumptions” in decision making, but built into these studies is the literate, declarative “assumption” that it would be better if decisions were made purely through abstract ratiocination without reliance on “prior assumptions.” So, using power to favor what cognitive psychologists and like-minded human scientists across the disciplines would recognize as “rational discourse” is implicitly preferred over any attempt to, say, think through what a “good” shared set of “prior assumptions” might be.

So, let’s say we reverse the process, and dis-articulate the nominalizations back into verbs. Anna Wierzbicka’s primes can be useful here, but they’re not required. So, for example, the psychologist Daniel Kahneman “writes of a ‘pervasive optimistic bias’, which ‘may well be the most significant of the cognitive biases.’ This bias generates the illusion of control, that we have substantial control of our lives” (I’m just working with Wikipedia here). So, “we” can measure how much “control” “we” have over “our” lives, how much control we think we have, and the “distance” between the two. Those doing the measuring must have more control than those being measured—they know how “complex” things really are. The best way of measuring such things seems to be asking people how much they think things will cost. (Maybe Kahneman has a bias in favor of certain understandings of “control” and “complexity.”)

But being more or less “optimistic” is a question of wanting, hoping, thinking, knowing, trying and doing. These activities are all part of each other. You have to want in order to hope, and you have to try in order to do and you have to hope in order to try. And you have to know something (not just not know lots of things) in order to hope—knowing the relation between trying and hoping, for example, and how that relation is exemplified within whatever tradition or community you are located. And the relations between all these activities can be highly paradoxical—the harder you try, the higher your hopes might be, which might mean the more deluded you are or it might mean the more you find ways of noticing your surroundings, taking in “feedback.” But can you try really hard without hoping? Sure—and consciously withdrawing your hopes from your activity, draining your reality of its aura of hopefulness, so to speak, might be a new form of hoping, one in which you accept a lack of control as part of a “faith” you have in “higher powers” or the mutual trust with your neighbors. From this you derive an imperative to hope, try, know, etc., in a specific way, within a particular paradoxical framing. (All of the binaries targeted for deconstruction by Derrida are sites of this originary paradoxicality.) None of this interferes with re-entering the entire disciplinary vocabulary from which we departed, and reading the discipline itself in terms of its hoping, trying, knowing and so on. Any disciplinary space must also be a satire of some institutionalized discipline.

February 7, 2019

Salvation from the East

Filed under: GA — Q @ 6:30 pm

The religious practices of Buddhism, Hinduism, and Taoism are, in many places, mostly ritual. But there is a more spiritual strain found in certain sects and their texts: the idea that consciousness itself is the sacred, or God. The very fact of being conscious means that we already know everything that it is possible to know about God or Buddha, although certainly revelation or enlightenment can make that knowledge more available to understanding. The insight of Buddhism is that consciousness is essentially one thing, despite the many creatures who each possess their own form of consciousness, and despite the infinite possible objects of consciousness. A Zen master once summed up Zen teaching in one word: “Attention.” When asked to elaborate, he said, “Attention, attention, attention.”

Consciousness is shared by animals. There is a famous Zen koan about a monk who asks his master whether a dog has Buddha-nature or not. The Zen master answers “no!” although I understand the Japanese word “mu” (sometimes translated as “no”) is actually more nuanced. The traditional teaching of Buddhism is that all beings have Buddha-nature and attain enlightenment, including dogs and stars, rocks and trees. The point of the koan, as I understand it, is that the student should be seeking enlightenment, not wondering about doctrine. Zen disciples seek enlightenment, the content of which defies any rational explanation. Meditation on the master’s answer to the student’s question, mu, informs the Zen practice of some disciples.

Kant wrote, “Two things fill the mind with ever-increasing wonder and awe, the more often and the more intensely the mind is drawn to them: the starry heavens above me and the moral law within me.” To develop his point, I would say there are three miracles which support faith in God: Being (the universe surrounding us), life (which fills the heights and depths of our planet), and human consciousness, which is indeed distinguished by our sense of right and wrong. Eric’s idea of the scene of representation can be articulated in ethical terms. To be aware of God, or to be aware at all, means to be aware of the human community, as a community, and to be aware of individuals, as individuals. Human consciousness is a new awareness of others, as mediated by a sign. Buddhism is never nihilistic. Consciousness, the scene of representation, is filled with the attention of the human group.

All animals and even plants have the ability to react to their environment, and can thus be said to share the miracle of consciousness. Rocks and stars do not obviously have consciousness. But Zen Buddhism, like Kant and Hegel, seeks to overcome the opposition of the perceiving self and the objects of consciousness. Anything that can be perceived, therefore, partakes of Buddha, and the duality of subject and object, coming and going, is illusory. For this reason, the question of whether the scene of representation has any ontological status apart from human consciousness is not meaningful.

Buddhism recognizes that life is suffering, and that the source of suffering is desire. The human condition therefore is desire and the suffering which results, a very Girardian insight. The solution is to minimize desires for sensual pleasure and not let desire lead one into sin. There is an ascetic strain to certain types of Buddhism and Hinduism. Enlightenment suggests an inner serenity and detachment from external conditions, founded on the insight that all existence is always-already free and transcendent.

February 5, 2019

Form and Paradox

Filed under: GA — adam @ 7:14 am

Once the sign has done its work on the originary scene, that of arresting the forward, convergent movement of the emergent community toward the central object, the members of the group will, indeed, proceed to advance on the object and consume it together. This raises the question of how they do so without forgetting what they just learned, and restarting the mimetic crisis. The sparagmos, the manifestation of the resentment toward the center, must be contained. My answer to this question, one I have put forward many times, is that the sign is “flashed” at each point along the way, accruing meaning and variation as it goes. Even at the “wildest” moment of the sparagmos, a quick gesture would prevent one member of the group from encroaching “too much” on the portion of another member. What this means is that form is needed to make transitions from one activity to another, or from one “stage” of an activity to another.

This is the reason for that “canopy of ceremony” enveloping all practices in traditional orders, the loss of which in modernity is so bitterly mourned by reactionary cultural theorists. Think, for example, of how difficult it can be to “disengage” from an intense conversation with a close friend. It’s awkward to say something like “ok, see ya” when that cut-off point inevitably comes. The good-bye is best framed in such a way as to indicate some carrying over of that experience into more mundane activities, as well as that the separation represents a mere interregnum, as the conversation will be resumed at some later point. Or, take perhaps the most “wild” activity of most modern humans, sexual intercourse—just as some process of seduction must precede the act, some exchange of words and gestures must “seal” its conclusion, both to preserve it as sacralized memory and integrate it into the rest of life. A lot of “bad” sexual experiences are no doubt a result of a failure on the part of one or both parties to see to the “scenic” character of the act. (The new legal doctrine of “affirmative consent” is a kind of unintentional parody of this need for form, trying to codify in declaratives what must in large part take place on the ostensive and imperative level.)

I’m coming back to this question in connection with arguments regarding the moral order of absolutism I’ve been making recently. The problem for absolutist political thought is conceiving of a post-sacrificial center. We can’t have a God-Emperor because we know that the emperor doesn’t control the weather, the river or the crops, nor can we in good faith bring some portion of our possessions to a temple to be consumed so as to ensure the regularity of rainfall or, more generally, the benevolent gaze of the deities. But, since there is a center, over and beyond any “justifications” for it, or for a particular occupant of the center, that anyone could provide, the center’s de-sacralization leaves a hole. Since what the center does is issue imperatives, in obeying the imperatives from the center we confer the “graceful charisma” (a term from Philip Rieff recently referenced by Imperius in his twitter feed) the center needs—more precisely, we do so in the way we obey, by eliminating the gap between the imperative issued and the imperative obeyed. “Social science” becomes a holy science insofar as it is wholly engaged in studying the difference between imperatives issued and imperatives obeyed, including the ways that difference is manifested through the declarative order.

A particular “fork” confronts us in embarking upon the path any imperative places before us. Since the center is occupied by, has been “usurped” by, a human, every human comes to model him or herself on that occupant by demanding some form of centrality him/herself. Being the recipient of an imperative places you at a center with, therefore, some power to wield—at the very least the power to direct attention one way or another. One way of directing attention is by appropriating the “transgressive charisma” (to return to the distinction Imperius evokes) one gains by violently centralizing someone “falsely” claiming centrality. This putative falseness consists, circularly, in marginalizing the self-centralizing of the present claimant and of all those he invites to be represented by him. We can identify transgressive charisma because its bearer will accuse his target of all of the violations of normative order that he himself commits in his very accusation.

And this normative order is the result of the deferral of scapegoating that marks post-sacrificial order. Something goes wrong—our first impulse is to find the origin of the threat and eliminate it. (We are all originary thinkers.) How? We first of all look for a human origin because anything that threatens us seems intentionally directed at us, and only a human could threaten us intentionally. (Gods, in sacrificial orders, can be considered humans for this purpose—the border line is very porous.) So, which human? Some of us stand out more than others, whether it is because we are “defective” in some way (physically disabled, speaking with a lisp, etc.) or because we have come, rightly or wrongly, to be associated with “trouble.” Some of us are “marked,” in other words. Someone, in a given situation, will be “especially” marked. How so? Someone will make some apparently plausible connection between that individual and the event. Someone else will second it. Others start to look more closely, and find other reasons for suspicion. And not just suspicion of a past deed, but of ongoing connivance in whatever the threat is. Everyone starts to converge upon this individual. It is not just that he needs to be punished, but that he is the source of a contagion that can only be stopped by shutting it down at its source, and right now. The proof of this is the very contagion that leads to the convergence on the individual. The panic intensifies until that individual is eliminated.

That is scapegoating, and we see this kind of thing happen, usually, of course, in much less disastrous forms, all the time. Look at why people get excluded from groups, ostracized by or within institutions. Now, if we put the scenario I described in the previous paragraph in reverse, let’s say that as the crowd starts to converge, one individual hesitates, and starts questioning the movement toward this central object. He points out that the association someone has made could easily have another explanation, or may not even be an association. He proposes that we look more closely at that purported “evidence.” He might further point out that harming this one person will do nothing—whatever the emergency is (if it is in fact an emergency—another question he might raise), it has to be addressed on its own terms. He may point out that some of the participants are clearly hurling accusations only because others are—indeed, they’re the same accusations, and the people hurling them give no evidence of having thought of them on their own.

All this scenic construction is what lies at the base of a “normative order” or “justice system.” The entire legal system can be seen as erected so as to cut off at the pass all the mimetic inclinations toward scapegoating. But the person who slows down the crowd redirects its hostility toward himself. He may become a victim, but he has advantages that the chosen victim doesn’t. The selected victim, the “emissary,” is marked, and every response he has given towards the crowd has stained him further—his denials are obviously lies, his tone and gestures show that he is keeping some secret, etc. The retardant, meanwhile, is no more marked than anyone else, and attempts to mark him now will be risky because too obviously “interested.” He begins by drawing attention to the crowd, which must now look at itself—or, at least some are looking at others, diluting its “crowdness.” To the extent that he is an effective retardant, everything he says confronts some claim, some accusation, made by the leader of the crowd (the self-chosen leader, or perhaps one chosen by the retardant himself, to give the crowd focus and slow it down). Why did he notice this, but neglect to tell you that? The retardant doesn’t want to renew the crowd’s fervor, this time directed at its (former) leader—he wants to dissolve the crowd, while ensuring that it retains a memory of what it would most like to forget. It may be important to punish the leader, but it should be a slow and proportionate punishment, in contrast to the hurried and massively disproportionate one the crowd was about to inflict. Most basically, the punishment should be a lowering of the trust given to that individual, which is really just a recognition that he has revealed something that we can’t forget. At the same time, there will now be something in each of us that we trust a bit less, and we will all be a little bit more ready to listen to someone taking on the role of the retardant in similar cases.

You have a post-sacrificial culture once the balance has shifted from the arsonists to the retardants so that, ultimately, most of us are mostly retardants, and can note our own inflammatory tendencies. But once this takes place there comes the tendency to farm out our retardant capacities to automatized institutions that run according to fixed rules and bureaucrats who can apply those rules without thinking too much about their origins or meaning. Sacrificial tendencies will then recur; indeed, the justice institutions themselves will attract such tendencies, where they can be indulged covertly and in good conscience. (Liberalism is essentially the laundering of scapegoating through the justice institutions.) We will never have to stop learning to be the first retardants. This is what we learn by giving form to all of our interactions and thereby ensuring continuity and consistency of intent—passing the baton, so to speak, even to ourselves. When scenes are formally constructed, emergencies are already accounted for in terms of the scene itself—there are “procedures” in place, even if only tacitly, in the forms given to actions and interactions. It is accusations of intent that can’t be seen in the form of one’s actions that will stand out, not markings of being less fit.

This requires an acknowledgment of the paradoxical structure of the sign I’ve been exploring in the last couple of posts. Again: we create the “reality” that we also simply “refer” to. Even knowing this doesn’t extricate us from the paradox because any attempt to act on this knowledge just generates a new scene, with an uncertain outcome, on which new signs with the same paradoxical structure will be emitted. We work, live, think and speak with this paradox by remaking ourselves, as much as possible, into forms that sustain continuity across acts. I might be marked; any of us might be, under certain conditions. But one can show that the very things that might mark one are in fact signs of one’s retardant quality. What seems irritating, annoying, or threatening is really my giving notice of a readiness to hesitate before any prospective convergence. I would then need to remake myself so that that is genuinely the case, so that I don’t delude myself into thinking that simply being irritating and annoying in itself marks one as a retardant. One thereby constructs the reality within which one will circulate as a sign of deferral, but it will only be such a reality insofar as one actually defers, which also depends upon all the others—all the others with whom one is then engaged in a reciprocal process of creating an idiom of forms constituting an oscillation between hesitancy and continuity.

January 29, 2019

How Does the Center Speak?

Filed under: GA — adam @ 7:25 am

All human existence is an exchange with the center. The first message from the center is to defer appropriation, a message “heard” by all participants on the scene. Once deferral has been effected, the means of the deferral (the sign) can be deployed in new circumstances, to defer new conflicts. The originary center is still the ultimate reference point: we can defer violence in this new situation because we remember (memory is embedded in the sign of) the originary scene. The original “message” is therefore somewhat dimmed, but also different, because more specific, tailored to serve this new act of deferral. Many billions of scenes later, the number of scenes a particular act of deferral must be imagined to ping off makes the origin and retrieval correspondingly more distant and more complex. The center is always saying “defer appropriation,” but appropriation of what, in the face of what potential violence, constructing what frame enabling eventual appropriation—in those details we can find lots of devils.

The answers to these questions can’t be self-evident, but we can’t exactly “argue” about them either. On what grounds could one say that we must do X because that follows from the originary scene and all its subsequent permutations? What would count as a “logical” case, or relevant “facts”? We would be better off trying to understand why people have believed shamans and prophets when they claimed to speak God’s word (and why we, even today, could say that sometimes they really were doing something like that, and sometimes less so). Thinking in terms of paradigm shifts in the sciences would also be more helpful, because that would enable us to keep in mind that what is important is not a point-by-point refutation, but a very close look at the most anomalous of the anomalies. I wouldn’t say that deciding things by the force of the better argument is fraudulent, just that for decisions to genuinely be made that way requires an enormous amount of good faith and prior agreement on all sides. If we agree on 99% of things, we could argue productively over the last 1%.

As always, I want to emphasize that I am not presenting locating and articulating an imperative from the center that we could trace back to the originary scene as one way of making decisions as opposed to others—rather, this is what we do anyway, so we should get clearer about it. If you accept that someone has made a compelling enough argument or disseminated a powerful enough meme that you should change your mind, you have conferred authority upon a particular tradition of determining what counts as “compelling,” “powerful,” or “convincing,” and that tradition is a transformation of some previous tradition, which articulated assumptions (virtual ostensives) regarding the relation between authority and reference, and so on, all the way back to the beginning. You are always conferring and responding to authority, which doesn’t mean you always just do or believe what someone more powerful tells you to; rather, it means that you have sought out, or been provided by someone else who sought out, the authority that has been maintained because it has “packaged” ostensives and imperatives together in such a way as to maintain the continuity of the center.

The center speaks through everyone and it speaks by maximum paradoxicality. In a sense it would be truer to say that humans construct the center “in their own image,” but saying that would lead instantly to the imperative to decide, together, how we want to construct the center, and that question would contain in itself a complete falsification. We can’t disenchant ourselves in that way—“man,” pure and simple, doesn’t exist, and certainly not as the collective maker of the center. We might, like Descartes, believe we can cleanse ourselves of all prior beliefs, but we certainly can’t cleanse ourselves of the language in which we do the cleansing—we can’t stop believing in language, which always refers us back to the center. We can’t think ourselves out of this fundamental paradox, that we create the language by which we are created—we must think with the paradox. It may be best to say we speak along with the center. We try to make explicit a paradox that everyone is involved in implicitly.

The problem is how to inhabit the most anomalous anomaly—that is where the deferral capacities of the center you help surround are most strained, and new methods are called for. If a group has been able to avoid conflict by dividing up land a particular way, then if the group conquers new land, or some members of the group bring new land into cultivation, that agreement must now be “applied” to the new conditions. This will involve some revision, and some abstraction from the previously successful agreement. But it will generate new conflicts as well: something that made the previous agreement seem “just” will not be repeated here. Someone who was included last time will be excluded here. The judgements of the central authority will seem less grounded, more tenuous; secondary authorities might feel a need to “supplement” him. The decisions made will be increasingly anomalous—that is, they will not fit into the system that has been constructed. If the group is not to unravel, someone will have to propose a new agreement, and they will have to do so in the right way—a way that acknowledges the central authority’s power as judge. It will be necessary to be both more abstract (extracting from the original agreement something that can be applied in a new way here) and more concrete (contributing to a specific, consequential decision). Some kind of “leap” is necessary—that’s what “prophecy,” as well as “intuition” and “genius,” is about. It will most likely involve the invention of a new social or legal category, one that will be shown to have been “always already” applicable.

But I don’t want to use words like “prophecy,” “intuition” and “genius” because that space of thinking and decision is what needs to be theorized in terms of hearing the center. If I find myself in the space between some imperative and its fulfillment, then I fill that space by oscillating between some possible implementation of the instruction and some necessary limitation in its utterance. I thereby make myself, as much as I can, within that situation, an extension of the will or intention behind the command. The more I separate myself from the command the more I give myself over to it. Everyone else is doing the same thing, or something different, even the opposite—shirking, defecting or sabotaging. The center speaks through them as well. Maybe they think you’re the saboteur. At this point, the center needs to speak in declaratives. Articulate maximum agreement with maximum disagreement. Maximum agreement: there is a center, and we all respond to it, otherwise how do words and sentences mean and how do we know what they mean—at the very least, your enemy assumes you can understand the epithets he hurls at you. If there’s a center—even if someone wants to call it “principles,” or “maxims of action,” or better habits—something toward which one orients oneself, then it’s impossible to turn away from that center, and all you could accomplish in trying to do so is show its wealth through your own poverty. We then need not fear maximum disagreement, even among friends and allies: there is always some virtual ostensive we see, some mediated command we obey, differently.

The same paradoxicality applies from act to act, carried out by the same actor—in trying to obey the command of the center as that command is embedded in its precedents and through the dispossession it requires, my just previous attempt completely failed, and is thoroughly marked by shirking, defecting and sabotage; but that shirking, defecting and sabotage can only be seen because it all clarifies the command one failed to obey, and in my ongoing obedience I will target the inclinations and distractions that issued in the failure. Of course, if someone else accuses me of defection, I will have to agree completely while asking for the imperatives issued by the center, his own obedience to which enables him to identify me as an anomaly. And what center is that center on the periphery of? He follows the imperative to seek the truth, or maybe to advance equality; he then follows some tradition of sorting out truth from falsehood, of exposing “artificial” inequality as it obscures “natural” equality. But is there not something anomalous in taking the leading role in fighting inequality—surely, to fight it even more effectively, there are all kinds of “privileges” you would have to claim? Now we might be able to reach agreement on how to study the kinds of privileges that might be necessary to advance anything. What falsehoods does he have to leave unexposed to isolate the little truth upon which he has decided to expend his current energies? Maybe we could agree on some terms of study regarding the said and unsaid, the explicit and the tacit. The center speaks as we find failures in successes, new problems in solutions, and then answers to old questions in those failures and problems.

The purest paradoxes may be the paradoxes of self-reference, that is, sentences that refer back to themselves in a way that makes them simultaneously true and false. The one I prefer to use is the liar’s paradox, originally the Cretan liar’s paradox because, even though it has been subsequently tightened by logicians in order to make it airtight, in its original form it sounds very much like something we can imagine someone saying, for very intelligible reasons—that is, it doesn’t come across as an artificial construct. “Cretans are all liars,” said by a Cretan, can very readily be understood, for example, as a Cretan ratting out his fellow Cretans in order to curry favor with whoever, for the moment, is in charge of the Cretans—don’t trust them; rather, trust me, because I know them so well. And, indeed, who could know that Cretans are all liars better than another Cretan? Of course, it could also be a decent Cretan in despair, expressing his hopelessness at the state of Crete, and accepting his own implication in its degradation. Or a Cretan trying to wake up his fellow Cretans—don’t you see you’re all drowning in your lies—I can tell you this because I’m poisoned by them as well. There’s always something paradoxical in an individual speaking for a group, because any such speaking for is an attempt to change the group in some way by posing as a mere description. But there’s no way a group could speak, or an outsider speak about a group, without hearing members of the group speak of it. But any listener must suspect the speaker of describing his group either to cover himself with its luster or distinguish himself favorably from it. But the same is true when one just speaks of oneself.

But if we know this, we can present ourselves, as individuals or members of groups, as paradoxes. What makes you a member of a group is that any time someone addresses you they do so on both levels: as an individual and a member, a center and a fluctuating probability. I don’t mean in some specific social category; I mean as the type of person capable of being addressed in a particular way—capable of answering a certain question, obeying a specific imperative, adding to a particular discourse. To have the center speak through you is to enact a self-referential paradox of accepting your membership in the linguistically constructed group by attributing, implicitly or explicitly, qualities to that group at odds with what you actually say. To take a simple example, let’s say someone concludes a discourse directed towards you with an aggressive “so what do you think about that!?,” thereby constructing you as a member of some hypothetical group that would be offended, or stymied, or angered by what has just been said. If your response is in the vein of “well, here’s what I think about that,” i.e., one accepting of the challenge, while in fact derailing, or parodying, or neutralizing the prospective confrontation by distinguishing your “I-ness,” then you have enacted the self-referential paradox: yes, I’m a _______, and (as you say) ________’s always _________ (even while maybe I don’t quite…). The words, your gesturing, your posture, and/or your use of a particular medium, all fitted to the scene, will be the center speaking, saying, “transform this into a deferring rather than ‘horde-ing’ center.”

January 22, 2019

Paradoxicality

Filed under: GA — adam @ 9:09 am

Words like “spirituality,” “religiosity,” “faith” and so on, insofar as they refer to something, refer to a dwelling within and refusal to suppress the constitutive paradox of the human. That paradox emerges with the originary event: the (newly) humans on the scene point to, name and thereby create the central figure that was already there, already a compelling and repelling substantial being—in which case, naming it is just recognizing it for what it is. It is paradoxicality that can never be “proven” or reduced to any particular ostensive sign, because it is ostensivity itself. (For GA, this is the truth implicit in Heidegger’s ontological-ontic distinction.) Paradoxicality is the non-material reality theists are always arguing with atheists about, and if there is to be some kind of dialogue between the different faiths, it would be far better to construct it around the assumption that there are various ways of dwelling within the founding paradox than around some general notion of “humanity,” “nature,” “transcendence” or “morality.”

We can run paradoxicality through all of our grammatical categories: the paradox of the ostensive is, as just mentioned, that we refer to something created by and yet pre-existing our referring to it; of the imperative, that the asymmetry of the command relation is reversed with the dependence of the one issuing the imperative on the one fulfilling it; of the declarative, that we must renounce ostensives (all “irritable reaching after fact”) in order to make a new world of ostensives possible. A paradox is any sentence that puts forward a claim or rule to which it is itself the exception, but any sentence and any discourse paradoxically refers to, talks about, a world created by and therefore always running in advance of and behind the sentence or discourse itself. We can formulate the paradoxicality of the declarative as follows: if you try to compose a perfectly clear sentence, that is, a sentence that will be understood in the same way by everyone who hears or reads it, that is, a sentence in response to which everyone will say or do the same thing, the best way to do that is by using the most repeated words and phrases, in the most repeated collocations, in the most repeated grammatical forms, in their most stereotyped uses. This means that you are completely reliant upon a received version of the world presumed to be shared by everyone, conveyed through linguistic means the use of which is characterized by the same unanimity. And that is, indeed, the “vocation” of the declarative sentence: that is how you undo an imperative by embedding its target in a world perfectly constructed so as to cancel it.

But sustaining this clarity requires continually selecting the features of represented scenes that will ensure unanimity, which means imagining the witnesses on that scene, their means of representation, the traditions enabling them to see what they see as they see it. It means placing yourself on the scene, with all that you have witnessed and all who have witnessed you, your conceptual framework and those conceptual frameworks it has modified. That is, it means joining your potential readers in a disciplinary space continued in your own discourse, rendering that discourse intelligible not to everyone, but to those who can follow your trail and continue reconstructing the scene. Implicit, though, in the constitution of any disciplinary space is the possibility that anyone can enter and transform it so that eventually, conceivably, everyone might enter it and what it presents would become perfectly “clear.” In producing a discourse, we keep generating this paradox, where the clarity and idiosyncrasy of the discourse oscillate for the hearers or readers of the discourse. The more you seek absolute clarity, the more you approximate complete idiosyncrasy; the more precise, micrological and self-reflexive, the more you anticipate a possible universal scene.

So, trying to say something everyone will understand leads to saying nothing, which nobody really understands because it’s what everyone already presumes they know; trying, then, to make something understood leads to saying something that someone might, someday, in some manner, understand in some yet-to-be-determined sense. We oscillate between the already said and what might turn out to have been said. The virtual scene generated by classic prose through its supplementation of a presumed speech situation can be seen as an attempt to suppress this paradox, while disciplinarity can be seen as an attempt to open it up. Any disciplinarity in the human sciences must start from mimetic theory because the starting point of mimetic theory is that we are all doing what we have seen others do but cannot acknowledge it to ourselves in action—even the most convinced mimetic theorist must believe in his own freedom, that he has “decided” to do whatever he is to do. Trying to figure out whom you’re really imitating in what you’re about to do would be paralyzing; it might be that such paralysis is a condition for a freer act because you would have to realize that your imitation will get the original wrong in some way; that is, not quite be an imitation.

The declarative originates in the representation of a reality immune to an imperative. The “task” of the declarative sentence, then, is to fortify reality against imperatives—in each case a specific imperative, or field of imperatives, that presents a danger because it is both pressing and impossible for those charged with fulfilling it. If your boss says that he wants the inventory done in an hour, and you reply that there are only three employees available, that might be sufficient to repel the imperative—OK, you can have three hours, then. Maybe not, in which case you would have to make it clear to your boss that three employees simply can’t do that work in an hour—maybe your boss doesn’t really know what he’s doing and has to have explained to him what taking inventory actually entails, and how long it would have to take to do each and every one of the acts involved in “doing inventory.” In so doing, you construct a “discourse” filled with virtual ostensives (you “point to” maximum employee capability, to the number of shelves and an estimated number of objects on them, and so on) but each ostensive generates a counter-imperative of its own, coming from “reality,” which can only be disobeyed at one’s peril. You would be attempting to make reality immune to the boss’s imperative, but all of these imperatives would not be very commanding if the boss weren’t already subject to another, higher, one, which seems obvious but isn’t: don’t command people to do the impossible; or, don’t issue unfulfillable commands. Reality’s commands gain their force from this ethical one, which is grounded in the nature of the imperative itself, which is meant to be fulfilled.

So, where is the paradox in “there’s only 3 of us,” or in the more extended discourse regarding the elements of inventory and the estimated extent of this specific inventory? “There’s only 3 of us” seems perfectly clear but, really, only if the statement was answering the question, “how many are you?”; in this case, it’s only clear insofar as the boss knows why three is grossly insufficient; the further elaboration, meanwhile, makes things clearer by referring to the realities behind the reference to “only 3,” but insofar as the boss needs to be informed of this reality he can only take the employee’s word for it, which means he has no way of distinguishing between an accurate portrayal of the situation and a clever employee saying exactly what he needs to say to get the desired and predictable response from the boss. Whether the statement is clear or not is undecidable, or yet-to-be decided, but by the time it is decided it will be too late, and all those references will have lost their meaning. This asymmetry in both power and knowledge could end disastrously (well, unpleasantly, at least) unless a kind of reciprocity is established: the boss modifies his power to acknowledge the employee’s knowledge and the employee frames his knowledge to acknowledge the boss’s power.

This is done through satire. I don’t mean a tension-diluting joke, or a self-deprecating quip. I mean each performing a response to the typical expectations coming from the other—the boss showing that he knows he might be read as the typical slave-driving bully and the employee showing he knows he might be read as a lazy smart-ass. It becomes a satire they perform together. While in the end someone has to be right about how much time it takes how many conscientious employees to do inventory, the two will get closer to being right together through satire than through serious, reasoned discussion. This is even the case if we assume that both are completely devoted to the “mission” of the company and know each other (as well as such things can be known) to be sincere in their desire to do things properly. Unless they’re already in complete agreement (in which case, they wouldn’t be discussing it), their disagreements will involve some kind of oscillation between “typical” responses that are expected to “work,” on the one hand, and what could only be known by someone “inside” the situation. The asymmetries remain, and would be better maintained by being performed.

Is a good ruler a satirist, then? Must ruling be solemn? (How does a good ruler engage the constitutive paradox?) The awe of sacralized power kings could once rely on may not ever be restored. The ruler is always staging things, and if the bloom of sacrality is off the king, those stagings can’t all be pageants. I will propose, hypothetically (hypotheticalism being part of the declarative form of paradoxicality), that if paradoxicality is to replace, because it is the real essence of, the transcendent, then the stage set by the ruler will have to be a satirical one. The more power I have, the more I depend upon the knowledge, faith and mutual trust of all of you; the more you seek out knowledge, transcendent foundations and reciprocities amongst all of you, the more you rely on my power being unquestioned. The ruler’s power remains unquestioned because there are better questions to ask, questions that can only be asked if that power is unquestioned. The declarative paradox is performed as inquiry into the imperative one: all the things that can happen between the time the ruler relays a command originally issued from within the most ancient origins and the time that command is obeyed and completed by all the individuals at all the social “capillaries” where the details of the command need to be worked out. Here is the source of high comedy, low comedy, subtle humor, friendly joking, and the knowledge coming from all this will be worth more than that produced by the contemporary social sciences, which are all constituted by the denial of the declarative paradox. They all believe in the clear statement, which anyone who follows the correct method will agree with and act upon in accord with all of the implications contained therein. The yet-to-be-grasped sentence is just a meaningless one. Once the model of the physical sciences is rejected for the human ones, what does “knowledge” mean? Certainly not predictability, because of all the millions of possible “causes” of any event, how are we supposed to determine the precise “effect” of each of them? (Especially since our very knowledge of what has gone into the event is part of, if not the event itself, then the event of understanding.) To the extent that we can do something like that, it will be in very circumscribed situations, and hardly applicable to others. Knowledge really means surfacing and performing the anthropological form of the event.

I’ve suggested in previous posts and in my Anthropoetics essay that the most important staging carried out by the ruler is that involved in providing for his own successor. The entire social order would be both involved in and represented by such staging. The entire society would be a school for the tutoring of rulers, who would go through a very carefully prepared and “stereotyped” selection process: they would have to prove their capability to rule while and by renouncing any desire to do so. Shows of hypocrisy and self-delusion would separate the wannabes, however intelligent, capable and courageous they might be, from the real deal. Satire is the medium in which such a winnowing out would be enacted, and for the satire to be trustworthy the ruler would have to be on the stage as well. I’m not talking about Nixon going on Laugh-In, or presidents and candidates going on SNL—the ruler will be the one choosing his successor, so he is inevitably implicated in the selection process, which will reveal his strengths and weaknesses as well. We can’t worship paradoxicality, but we can acknowledge it as something we will never completely master, intellectually and practically, while never being able to rid ourselves of it, either—but paradoxicality can never become the basis of an imperium in imperio, either, because it provides no model for ruling, just a model for staging it.
