April 16, 2019

Accessing the Ostensive within the Declarative

Filed under: GA — adam @ 7:14 am

It is in the nature of the declarative to both supplant and appropriate the ostensive. The declarative comes into being by deferring some imperative and, first of all, replacing it with the combination of an “operator of negation,” or prohibition on proceeding to act on the failed imperative, on the one hand, and a negative ostensive, representing the demanded object in absentia, on the other. The declarative creates a world full of objects, which is to say a world of useful and desirable things that we observe and refer to without appropriating. The declarative is born in terror of the imperative and, by extension, the ostensive, of which it produces a virtual version. All developments of declarative culture involve further distancing and regulating access to imperatives and ostensives. This is the logic of “enlightenment”: all action is to be a result of the sheer accumulation of declaratives, providing such a complete account of the world that anything one might do has already been so mapped out in advance as to not even require a “decision.”

But seeking to erase the “violent” ostensive-imperative world ends up creating a new, inverted version of it. The more distant from the ostensive-imperative world the declarative moves, the more imperative it becomes to interpose new declarative layers between declarative culture and the ostensives and imperatives that emerge unbidden and unanticipated in the course of social life. Replace actions with explanations whenever possible—but this only produces perverse actions, suppressing those who point out threats or try to solve problems directly, before they metastasize. This is the linguistic basis of liberalism, which becomes a generalizable possibility once the emergence of print culture creates an extensive disciplinary structure that tilts the balance, once and for all, towards the declarative and against the ostensive-imperative. A linguistic problem requires a linguistic solution or, more precisely, a linguistic deferral. This is a question I have addressed in many ways: through the concept of “upclining” in one essay and, more recently, by proposing we think about subjectivity as the performance of paradoxes of self-reference, and the post-sacrificial, post-literate human being as “total sign.” The attempt here is to embed an ostensive dimension in the declarative in the form of a marker of the disciplinary space of attentionality that all the references made possible in the declarative depend upon. The “what” of your sentences should have, as its Möbius strip-like obverse, the “where,” “when,” and to and from “who(m)” of its utterance—not as biographical markers (I’m writing this on a porch in a farm house in Des Moines, September 32, 2016, 5:23 PM, etc.), but as a marker within the current state of language. We could think of this as an attempt to heal the oldest split within language.

This question can now be approached more precisely by drawing on my analysis of the implications of the “classic prose” that David Olson sees as prototypical of literacy. To review: Olson sees writing as representing reported speech, and identifies as the specific feature of writing the supplementation of the words reported with a vocabulary designed to represent what cannot be represented directly in writing: tone, emphasis, body language—everything that can only be grasped ostensively. If I’m telling you that John says “the enemy is on its way,” and I don’t think John knows what he’s talking about, I might repeat John’s words in an exaggeratedly mock-frightened tone. Since you can’t do this in writing, in conveying not only what John said but the meaning of what he said (a distinction that becomes intelligible only under literate conditions), which is to say, in registering my own distance from John’s view, I might write “John claimed that he saw the enemy ready to attack.” The use of the word “claim” puts what John said in question—I make it clear that I’m not vouching for it. A substantial vocabulary is thus developed to indicate all the possible relations the reporter of speech might have to the reported speech—mastering this vocabulary is what is involved in becoming literate.

So, we can “claim,” “assume,” “suggest,” “suppose,” “contend,” “argue,” “understand,” “imply,” and so on; these speech acts get nominalized into “claims,” “assumptions,” “suggestions,” “implications,” and all the rest; and these nouns come to exist within the disciplinary spaces within which we speak about thinking, reading, writing, and other intellectual activities. Even “thought” is such a nominalization of the verb “think”—we can have “thoughts,” but there is also a whole world of “thought,” with its own history. Drawing upon Mark Turner and Francis-Noël Thomas’s notion of “classic prose,” Olson argues that the imperative writing is under is to construct a simulated scene upon which writer and reader both stand—and we can see in this an extension of the declarative’s paradoxical suppression and appropriation of the ostensive-imperative realm. Classic prose is a manner of writing that enables the reader to see whatever is being described as if he were there. Olson recognizes this to be a “conceit,” i.e., a kind of fiction we adopt for the purpose of reading (Thomas and Turner of course recognize this as well), but doesn’t raise any objections on those grounds. The disciplines, starting with philosophy, are in turn erected on the basis of these nominalizations, and we are left with a paradox: the neutralization of the ostensive-imperative world is carried out through a mode of writing that purports to be like a window, giving you a “clear” view of the topic under discussion, as if you were present on the scene.

It seems to me that much if not all literature, or at least literary prose fiction, constitutes an ongoing satire of the disciplines—including literary fiction itself insofar as it becomes a discipline. My own proposal for engaging the disciplines by applying the terms they use for their domain of inquiry to their own space of inquiry is, in this sense, “literary.” It involves taking the nominalizations and turning them into verbs, and therefore imperatives, towards the end of bringing us all into the space of inquiry as both “objects” and “subjects.” This produces a scene of writing which interferes with the scene of presence represented by the writing. The paradox of declarative culture can therefore be represented within declarative culture. Once the scene of writing is established, any concept, any word, within the disciplinary discourse can be “meta-d” in this way. One could say that, in infiltrating the language of the disciplines, only or mainly the “most important” concepts should be addressed forcefully, but that’s “Big Scene” thinking: the most important concepts are not necessarily the ones the discipline itself thinks are most important—it might very well be something the discipline shunts off to one side and yet can’t seem to do without. This is something we can learn from deconstruction. Taking the discipline at its word regarding its own concepts leads to “debates” in which the discipline has a built-in advantage—more lateral approaches level the playing field for the innovator.

On a grammatical level, this involves replacing nominalizations with verbs, in order to represent discipline-specific concepts as signs of events. If the creation and subsequent uses of the concept can be seen as events, then the set of relations represented by the concept can also be reduced to an originary event form. Those new event forms, no doubt rich in verbs, will in turn become nominalized in a more extensive and defamiliarizing way than in the source material. Let’s take a concept within GA, like “resentment.” It’s easy to use the concept of resentment as a way of expressing resentment: accusing those you resent of being resentful allows for a perfectly exculpatory manifestation of resentment. But this means that in order to use the concept effectively, you must have deferred it: your discourse should provide signs that you withhold any resentment you might have for the resentful object of your analysis. How do you do that? You identify the center against which the resentment is directed: there is some rule which some central authority has pledged, implicitly or explicitly, to uphold, and has failed to do so. Even “horizontal” resentments derive from “vertical” ones, because it’s the role of the central authority to ensure groups don’t come into conflict with each other. If you resent horizontally, it’s because you see your object of resentment as the protégé of the “unfair” central power. Seeing resentment as resentment towards the center provides a way of exhibiting the non-resentful quality of your study of resentment, because you turn that study into a study of the center, in which your own object of study, regardless of how “justified” or “unjustified” his resentment is, could conceivably join. In this way you, the inquirer/accuser, can own your own resentment towards the center whose lapses enabled the other’s resentment, while converting your resentment into greater clarity regarding central imperatives.

So, I have brought the originary inquirer into the disciplinary space as both subject and object of the study of resentment. But notice the quotation marks I was compelled to place around “justified” and “unjustified.” This is a particularly difficult question in GA: how can we—even, can we—distinguish between justified and unjustified resentments? The concept itself seems trans-moral. The first resentment is toward the center on the originary scene, in response to the center barring access to the object itself. This resentment is both “unjustified” (because the center creates peace and the human through its prohibition) and completely unavoidable, and therefore justified. All subsequent resentment must therefore partake of this paradox. Some resentments will be suppressed because they make the existence of essential institutions (the purpose of which is to limit the consequences of resentment) problematic, but that doesn’t make them “wrong”—maybe a more comprehensive resentment towards the institutions themselves will turn out to be “justified” if it is possible to replace them with something “better.” What is “better”? Providing for the adjudication of a wider range of resentments, which can therefore be productive rather than being—or before they need to be—suppressed. The study of resentment that turns into a study of the center also turns into the attempt to derive from the center a way of determining the latitude to be allowed to different resentments, which must also, though, be a study of the means of transforming those resentments so that they can participate in the discourse of the center—by finding new ways of representing other resentful positions so that they can eventually participate in the discourse of the center by…

So, we begin with an attempt to “define” or characterize “resentment,” which leads us to a question regarding the relation of the one so attempting to his own resentment, which leads us into the paradoxical nature of resentment, along with a means of discussing the pragmatics of sustaining and limiting that paradoxicality. We end up with complex nominalizations, like the discourse of the center, or something like “the reciprocal relation between donating one’s resentment to the center and the naming of resentments in the practice of converting them into donations of resentment to the center.” We could actually put a verb after the long noun phrase just quoted, and predicate various features and consequences of this “relation.” The ostensive within the declarative, in all the forms I mentioned earlier, is now in the fully paradoxicalized declarative itself. And the same process can be initiated with regard to any part of that noun phrase, including the by no means transparent concepts of “reciprocal” and “donate,” which themselves could be “verbalized” and reduced to originary event form and in turn re-nominalized as paradoxical articulations of center and margin. As Peirce asserted, all inquiries are inquiries into the meaning of “difficult words,” but, of course, what counts as a “difficult word” shifts as our attentions do. To return to a claim I made a few posts back (The Central Imaginary), the only real question we can have is whether, or in what way and to what extent, an iterated sign is the “same” sign as its previous iteration. The only way to answer this question is by reducing the sign to its scenic origins, as the representation of those origins is embedded in the event forms of the different scenes upon which the sign was indeed iterated. If that’s all we ever do, knowing that, and how, that is all we ever do would have us threading the ostensive through the declarative as a matter of course.

April 9, 2019

The Big Scene is the Anthropological Basis of Anarchist Ontology

Filed under: GA — adam @ 7:26 am

As sacral kingship disintegrated, and the unity of the sacred and social centers was dismembered, the response in the late Middle Ages in the West was to retrieve the originary scene. Going back to the scene is the only response to any social crisis: if the existing institutions and the totality of gestures they organize no longer defer violence, what else could there be to do other than to discover some new gesture; and what other means could we have than to find some central object, the deferral of the appropriation of which we can organize around? Sacral kingship in its high imperial forms (i.e., “divine kingship”) is in fact anti-scenic: the sacral king of a community small enough that its members might still be able to simply kill and replace the king if his powers fail is still the center of a scene; with the monstrous empires of antiquity, where the king is completely protected and most people, we can assume, pay him tribute while relying more directly on their ancestral cults, there is no real social scene. In a sense, nothing happens for very long periods of time, other than court intrigues.

The Axial Age acquisitions, then, restart history by creating centers outside of the imperial one. These acquisitions—Greek philosophy, prophetic Judaism and Christianity, and even, I think (but probably less so), Chinese philosophy—are both anti-imperial and imperial. They construct a position from which the existing emperor falls short in God’s eyes, which is to say they institute a kind of permanent resentment towards empire, while at the same time imagining an eternal and universal empire under a true, divinely ordained king. Western “history” is, we could say, the history of the deserved fall of empires until the establishment of the one true empire at the end of days. Both Marxism and liberalism fit this apocalyptic pattern. So, from the failure of non-scenic imperialism, the recovery of scenicity takes the form of imagining “History” as a scene. This is why the anti-imperial side of the Axial Age ultimately wins out—the only acceptable God-Emperor would be God himself, who will rule once love of Him has been implanted in all human hearts by some revelation produced by the final, cataclysmic fall of increasingly evil empires.

We can see a comprehensive iteration of the originary scene here: our evil inclinations lead to our wanting, also fearing, but finally demanding and deserving, the tyrant to end all tyrants; while the gesture on this scene that prevents our final descent is the Word of God becoming our words. How violent this final apocalypse must be, and how much it depends on human action rather than divine intervention, will vary according to circumstances, but the structure is unvarying right down to the present day. We are still told, in the midst of declared crises of the liberal order, that the “voice of the people” finally sets things right. We still think there is a “voice of the people”—nothing can be more commonplace than to hear commentators say that the “American people want (or don’t want)” this or that. What they mean, to the extent that they are accurate, is that a sufficient majority could be patched together, by hook or by crook, for a particular purpose. But imagine what it would sound like if politicians and pundits spoke in that way (as they often undoubtedly do amongst themselves)—there would be absolutely no reason to grant any decision they make the slightest legitimacy. Which means there is no other way of thinking about liberal legitimacy than according to what is still a Rousseauian notion of the “general will.”

And it is also true that unanimity regarding the originary structure of a social order is necessary if that society is not to degenerate completely into warring forces devoid of any limits on the weapons used and aims pursued in the struggle. So, it’s not surprising that liberalism recognizes this. Even leftists need to reference a unanimously held originary structure. Their anti-whiteness, for example, is not asserted as a matter of taste or mere tribal hostility—they must assert that there was in fact another, truer America all along, with its own genealogies, its own sacred events and names, its own anticipated apocalypse. These are all versions of what I would call The Big Scene, and in the end there isn’t that much to choose among them. The Big Scene is big in size and in consequences, but most importantly it is big in the sense of limitless, because it is a scene constructed not around a center but in order to prevent the emergence of a center. A centered scene always has limits in space and time—participants must be within a circumference a certain distance from the scene to be witnesses, and if the number of participants grows beyond the size of this original circumference, it is people in the “rows” further back who acknowledge the precedence, in space or time, of those in the front rows, so that this growth can be orderly.

A scene whose participants are devoted to the suppression of any center, though, is inherently unlimited. One can organize entire countries, or the majority and most active parts of them, around preventing the emergence of some proxy for a center. One can even organize regions around it; it’s too soon to say whether the world can be organized in this way. Such scenes are like lynchings—anyone can come along and throw another stone. They tend toward egalitarianism—everyone is against the same thing, and intensity is always increasing so no one can establish real preeminence in that regard. Elections are still about selecting a government, so they must put someone, some imperial figure, at the center—but the history of democracy is the history of the effacement and disfiguring of these central figures so that they represent nothing more than “who we are as a people” at this point. No doubt part of the hysterical hostility to President Trump is the overly imperial figure he strikes—he seems to actually make decisions, rather than just being the final filter through which the information circulating among elites and specialized institutions is processed. But all of the surrounding para-governmental institutions—the media, the NGOs, the universities, and so on—are completely uninterested in governing, and are free to engage in perpetual center smashing. They support politicians, of course, and more fervently than ever, but center-smashing politicians, more interested in gestures and less in coherent imperatives. And the politicians themselves eventually assimilate to this crowd. Governing of a sort continues, by the civil servants hired to do it, but they are themselves increasingly caught up in virtue signaling and helping to take down anyone who threatens to establish order.

It was liberalism that finally tilted the apocalyptic scene towards its permanently anti-imperial trajectory. And that’s when we get The Big Scene firmly installed as the imagined retrieval of the originary scene. It is a false scene, because it imagines a world without the Big Men—in this sense, liberalism and democracy are carnivalesque. But for this very reason it seems closer to the originary scene, which had no one at the center, just an object to tear to pieces. Anyone presuming to be a Bigger Man would violate the scene, but the same must be the case for any attempt to propose a general basis for agreement on anything whatsoever, because that too must merely be an attempt to sneak someone into the driver’s seat. This is why resentments cannot be remedied on The Big Scene: only resentments that are framed in terms of some discord between the social center and the sacred or paradoxical center can be addressed. But only a shared concord between both modes of centrality makes discordance a problem—if all social centers, all central authorities, are equally illegitimate because equally evanescent and arbitrary, resentments can only feed on each other.

The discourse of The Big Scene is deeply rooted in our cultural and political vocabularies. If you listen carefully, across the entire political spectrum, you will find that virtually no one criticizes anything or anyone on any other basis than the violation of one norm of equality against another. All we see is people leveraging one residue of liberalism against another. It’s all people elbowing each other out of the front row in the march of The Big Scene. For example, people can acknowledge that there are relations between nations that are best described as “imperial” or “hegemonic,” but such words are only used as terms of opprobrium, and the states accused of creating such relations will insist on euphemisms disavowing them. Imagine somebody criticizing the Saudis and Israelis for not superintending the Middle East effectively enough, or China for not establishing clear rules of inter-state interaction for East Asia, or the US for not thinking seriously about the best mixture of traditional and modern social forms to promote throughout Latin America. For that matter, think about how the sting of populist nationalism would be removed, and the basic ends of such nationalisms brought closer to achievement, if we could simply acknowledge, one, that many, maybe most, societies will be ethnically mixed; and, two, that in ethnically mixed societies there will almost always be a dominant, majority ethnic group that should set the tone for, be deferred to by, and in turn offer patronage to, minority groups. All of these approaches would imply “little scenes” with a center, and therefore must be overrun by The Big Scene apocalypse.

Restoring the originary structure of the social order only secondarily involves getting into arguments over the officially recognized founding events: the “real meaning” of the American or French revolution, of “1688” or the Magna Carta. “Arguments” are part of the problem. The originary structure will be restored through the constitution of disciplinary scenes carved out of the many anomalies of The Big Scene. Every scene must be revealed as originary, as having a central object, even if unidentified or even unsought; every scene institutionalizes itself, even if minimally. The semiotic materials of the scene should be used to name every emergent practice on the scene. The practices on the scene at least then become objects of the scene, and the origins of those practices point to other objects to be placed at the center. Relapses into argumentative clichés can be named, as can the pedagogical moves used to circumvent them. This kind of practice in itself looks back toward other originary scenes, as it finds its precedents in them, in part by looking for models to extend its own scene. The more such practices inform and lead others to institute related practices, the more the commonly recognized founding events can be introduced, probably in a revised manner, into the discourse.

By the way, did you understand the title of this post? (Before you started reading? While you were reading? At this point?) “Anarchist ontology” might be a fairly familiar phrase, going back to the Reactionary Future blog. We’ve been contrasting it with “absolutist ontology” for a while. That one might propose that an ontology has an “anthropological basis” might not be very surprising for people familiar with GA. “The Big Scene” is a phrase new to this post, but, of course, in GA we are always speaking of scenes, the scenic, and scenicity. Perhaps the originary scene was a small scene, so this one is distinguished from it, perhaps pejoratively—that it’s the basis of anarchist ontology, which is generally distinguished unfavorably from absolutist ontology, would reinforce this impression. But if you’re unfamiliar with all of this, the title would look like sheer gibberish. It would be “unclear.” Now, that someone would say the title is gibberish and unclear, rather than saying that there are signs here of an unfamiliar disciplinary space, is another way of being on The Big Scene. The norm of “classic prose” is that your writing should place all readers on the same scene along with each other and the writer. A text which some will understand but others won’t is inherently suspect. Imagining yourself on The Big Scene is the equivalent of what Marxism called “ideology.” The kinds of incommensurabilities between languages identified by Anna Wierzbicka are “retouched” through supplementations like “progress” and “cultural development” rather than seen for the originary constructs they are. There is nothing outside of the attention articulated in disciplinary spaces as they study the always distinctive and present imperatives from the center. Building distinctive spaces to study what is distinct, even in those spaces under the spell of The Big Scene, and being able to answer charges of merely having a little scene by ratcheting up the distinctions all around: that is the way you resist The Big Scene.

April 2, 2019

Total Semiotics, or Exteriorizing the Interiors

Filed under: GA — adam @ 7:43 am

We have almost no way, at any level of discourse, of referring to mental and psychological states (thoughts, feelings, desires) or qualities (moral and ethical, character, etc.) other than through some metaphor of interiority. Everything about us is “inside,” “within,” “deep down,” “buried,” “kept inside,” and so on. Discussions of learning, or of being transformed by events, are invariably conducted in terms of “internalizations.” As with depth metaphors in general, we can assume these interiors are artifacts of literacy. If you say something, or want something, that’s on the outside; you can feel things inside, but in a physical sense; even thinking does not necessarily require an internal location where it takes place—it’s a way of being embedded in the world and language. “Psychology,” in a pre-literate world, would be framed in terms of voices and agencies that exist outside of the one hearing and being moved by those voices and agencies. Prompts to behavior would not come from “within.”

Interiorization is metaphysics, or the declarative culture of literacy: all the interiorizing concepts are drawn from the supplementations to the reported speech scene that David Olson identifies in “classic prose.” For example, Olson points out that the word “belief” is a marker of “sincerity”: “I believe” is a way of affirming, under more demanding conditions, what one has said. But once we have the verb “believe” to indicate a willingness to be held to one’s words with greater accountability than usual, we will have the noun “belief,” and “belief,” as a noun, seems to be a “thing,” and where could this thing be other than inside us—so, we believe “deep down.” Social scientists can then construct experiments to test the “strength” or “malleability” of beliefs, and we can rummage around inside ourselves and others to determine where inside of us our “beliefs” are shelved along with our “principles,” our “memories,” our “unconscious,” and all the rest.

Eliminating metaphors of interiority, and of depth more generally, coincides with Charles Sanders Peirce’s own anti-metaphysics, advanced through semiotics. Everything that we know is a sign. Things that are not signs can be known because they generate effects that are registered by signs, and from those signs we can infer their causes. But even “causes” are signs. That nothing is unmediated by signs, that we are all ourselves signs referring to other signs, is a necessary consequence of the originary hypothesis. Peirce’s own tripartite schema of icon, index, and symbol corresponds fairly well, though certainly not exactly, with GA’s own ostensive, imperative, and declarative. So, we can be less interested in what someone believes, however deeply, and more interested in his conduct, including, of course, his discourse. But what makes someone a someone in the first place is that he constitutes himself as a center among centers, and it is this self-constitution as a center that will enable us to do all the work of the interiorizations I recommend replacing, and quite a bit more as well.

The post-sacrificial, or omnicentric, social order makes us all centers—we have last names, ID numbers, histories in public institutions, credit cards, and much more—all of which requires a center around which all this “orbits.” The work you must put into making yourself a functional center involves managing attention—constituting yourself so that people pay attention to you in the “right” ways, which also means paying attention to them in the “right” ways. We might think of self-centering as a network of attentional exchanges: words and gestures through which we reciprocally confirm (and, of course, compete over) each other’s centrality. Finally, you have to become, and to some extent already are, a center for yourself—Peirce himself endorsed the classical notion of the “self” as the dialogue of the soul with itself, while modifying this formulation into the dialogue of the self with the self that is presently coming into being as a result of this very dialogue (and which is also apparent in what someone says and does). He also saw the enhancement of self-control as the purpose of inquiry (and all sign use, for Peirce, is inquiry), and self-control is simply the strengthening of the self as center in relation to its margins.

Becoming a center is not a simple matter—drawing attention to oneself means drawing desires and resentments toward oneself. That may mean desires from some and resentments from others. It means modeling desires and resentments towards others. You can be attractive as something to be possessed and enjoyed or as a model to be imitated. If some imitate you, others are sure to resent you. No one’s centrality is self-subsisting: even the most complete narcissist must imagine himself projecting more generally admirable qualities, which means he presupposes a shared set of signs with his “audiences” which must be taken to derive from a common center. In your own signifying activity you gather together through a system of references all of the signs pointing in your direction; in gathering them together you turn yourself into a sign, in the sense that following the signs pointing in your direction can serve to defer resentment. As a sign, what you are pointing to is some other center, one that allows you and your fellow signs to co-exist and even jointly flourish.

We can generate a vocabulary of inquiry here that can abolish interiorizations. Instead of talking about things like “spirituality” and “faith,” for example, we could speak in terms of the signs you have constellated so as to turn yourself into a sign, for yourself as well as others, that can model ways of placing more signs between oneself and the desires and resentments that lead to violence—even the various violences against one’s own centrality. As an ostensive sign, in presenting one’s centrality one is also an iconic sign, “resembling” the mode of deferral one is modeling. One’s ostensivity and iconicity blend into indexicality and imperativity: your acknowledged presence issues commands and makes demands on the other precisely by occupying the same space and thereby impacting the other. Even more visceral emotions, implicitly assumed to be “inside,” like, say, anger or despair, are better spoken of in terms of ostensive power that has been weakened, imperatives that can no longer be heard or complied with, ostensives that are overwhelming in their attractive power, imperatives that cannot be resisted even if the consequences of obeying them cannot be controlled, and so on. In this way, all individual feelings can be made directly social, representing ways one is bound up with various social centers and traditions—and what are traditions, if not imperatives from some especially powerful center that have moved through the medium of a history of social practices and can still be heard as a distilled form of the original?

Replacing interiority with the embedding of the human being as emergent center in the ostensive-imperative world establishes a continuity with pre-literate discourse that has been lost. Pre-literate peoples will not see themselves as having autonomous selves, with each person following his own “conscience,” “passion,” “inspiration,” etc. They will see themselves as in constant, often hostile and distressing, dialogue with the dead and various divine figures. Someone is always telling them to do or think what they are doing—we can see this from a late orally produced and transmitted text like The Iliad, even with its significant literate overlays. Even for Socrates, everyone has their “daemon,” and one is compelled to answer questions posed by oracles. Part of my argument here is that this way of thinking about thinking, desiring and decision making is far more realistic than those framed in accord with individualistic models; in the post-literate resolution of the anomalies of the literate mind (which probably needed to define itself sharply against orality, even if just for pedagogical purposes) that we are working towards, our self-otherness can be described far more minimally than was possible under oral and sacrificial conditions.

Can it be experienced directly, though? That depends on whether we can distance and extricate ourselves from the still sacrificial exchanges that constitute resentful centrality. Once you have established yourself as center, you have to defend that centrality—you have to be willing to “prove” yourself, counter falsifications, address slights, avenge violations of your centrality, establish various deterrence mechanisms, and so on. You need to assert your “sincerity,” your “integrity,” “honesty,” and so on by demonstrating—and attacking anyone who doubts the demonstration—your consistency (“consistency” according to terms that you also have to establish and impose). The disciplines remain within these reifications even while “explaining” the ways they get articulated one way or another—they introduce rigor into the various “folk psychologies,” which means entering the system of self-controlling centrality and conditioning its terms upon institutional constraints so as to subject them to external controls.

The only way not to be or have a “self,” without indulging in the fantasy of a direct plug-in to the divine, is to make oneself a total sign. All of the things others can think or say about you, or do to you, are parts of how you compose yourself as a potential center of attention. Every time you so compose yourself refers to other times you have, and other times and ways you might, compose yourself. The furniture of interiorization is excluded in an a priori way—yes, you’ll still speak to yourself (have “internal dialogues”) but these are essentially rehearsals and planning sessions for possible enactments of self-representation. Insofar as you are to be made into a center you work to defer some possible violence; this means eliciting so as to redirect mimetic crises on different levels. We’re all signs of course, signifying ostensively, imperatively and declaratively, but if you rely on the assumption that the world is a single scene (an assumption encouraged by literacy) then you array your signs so as to pre-empt any questioning of your belonging on that scene. This is “humanism”: a batch of qualities and characteristics that make you like everyone else insofar as we are all on the world scene. Humanism is a prohibition on becoming a total sign and an insistence that everyone supply oneself with a full interiorization.

To become a total sign is to signify the scenes upon which those qualities and characteristics (the supplementations of the self as mandated center) are identified and thereby turn them into objects of inquiry. People get angry and offended; they can be sympathetic, caring, rude, and much more. These qualities can be treated as sites of sign exchange in which one responds in kind, or as expected, to signs of anger, caring and all the rest. You are then in a constant state of shuffling and refining these qualities, and showing them off when they can centralize you most effectively (drawing mimetic desires short of scapegoating). All of these emotions and qualities are social and involve negotiations regarding the state of the center and access to it. But why not simply formalize all this as well: to feel anger rather than act is to acknowledge some form of powerlessness commanded by the center; to be sympathetic is to imagine yourself, without much evidence, less in danger of resentment from the object of your sympathy. You can refuse the exchange by not providing the complementing sign; you can frame the terms of the exchange by treating those terms as imperatives—who told you that you should feel angry, offended, concerned, hopeful, or whatever on this kind of occasion (what kind of occasion is it—and who told you to identify it as such?). What do you think would satisfy your anger or your sympathy? When the little imperative exchanges implicit in the supplemented emotional states (where a psychological quality has filled a space left by a god) fail to come off, we are left facing the center, which we counted on to oversee the exchange. Something was telling me to be frustrated, or hopeful, or suspicious (all these “emotions” require scenic “translations”), or whatever, but now the center can tell me to preserve the space within which the exchange takes place, rather than take up one side of the exchange.
Instead of an exchange of conventional gestures, we can command each other to go set up new spaces that are themselves aimed at spreading spaces irreducible to gestural exchanges. “Psychology” is still the residue of sacrificial culture, in which we all cut off little pieces of ourselves to distribute and consume. Post-sacrificial modes of being involve giving over our desires and resentments to the center in the knowledge that we will have to sustain our attention towards the center so as to be worthy of the moment when those desires and resentments come back transformed into imperatives from the center. And then we become ostensive, imperative, interrogative and declarative signs of the center.

March 26, 2019

The Central Imaginary

Filed under: GA — adam @ 12:33 pm

A while back I formulated the concept of the “sovereign imaginary.” This concept represents the assumption anyone makes who expresses a desire or some resentment, who says “we should…” or “someone should…,” regarding some authority who could do the thing “we should” do. If you say “Medicare for all,” you imply a model of a state that would implement Medicare for all and would do so in the way you intend. If you say “Medicare for all” you’re not thinking of the frauds that will be parasitic on it, the bureaucrats who will make cruel and capricious decisions, the drug companies that will donate to politicians who will push to have their drugs purchased at high prices, etc. In other words, you airbrush out of the picture all of the crisscrossing powers that would make the reality of Medicare for all far different from the intentions of its supporters. You imagine a unitary executive power, who issues orders that will be obeyed by subordinates, who will in turn issue orders obeyed by their subordinates; you imagine competent people with integrity placed where they belong and allowed to do their jobs. Even if you say, yes, I know there is corruption and incompetence and that bureaucracies develop their own interests, etc., you are still assuming that these are marginal to the sovereign power you imagine—if not, you wouldn’t be able to say “we should…” This seems to me a very useful observation to make because, if it is accepted by an interlocutor (and it’s very hard to deny), the following conclusion is also very hard to evade: such a sovereign power might have very different ideas on how to handle the health care system, and, freed of all the interfering powers (all the conflicting “we shoulds”) would have very little reason to care what you think. So, implicit in your political desire is its cancellation. Even better, the same must be true for me, and for anyone else participating in the conversation. So, instead of arguing about Medicare for all vs. 
private insurance vs. treatment for cash, we can talk amicably about something upon which we have just found we agree: there “should be” a central authority that can carry out policies unhindered by interest groups, nosy NGOs, bureaucratic factions, and so on.

Now, we are no more in a position to institute our desire for clear and secure central authority than we are to implement our version of Medicare for all, and so arguments over how to do this are equally pointless. We don’t need to imitate the pathetic revolutionary movements that split into a dozen factions over how to define a particular institutional reality or assess a particular event. But we are now listening to the center, and we can ask, what kind of practices will enable us to project possible paths towards clarified and secure central authority, to prioritize among those paths, to invest what energy and resources we have in the most favorable paths, all the while maintaining our initial agreement that all our desires and resentments indeed point in that direction. Even more, the projection of possible paths and the prioritization among them should be guided by the need to maximize that agreement, to spread it, and to ground it more thoroughly in the disciplinary spaces we enter and sustain. That is the kind of activity that will let us see the possible paths when they take shape, and to distinguish among the opportunities they offer. What is essential here is that this imaginary is in our language—no one expressing a political desire can be exempt. So, every conversation about every policy or every social evil (poverty, “racism,” etc.) can be directly converted into a conversation about the kind of authority you seem to be imagining as capable of eliminating or mitigating that evil (or perhaps redefining it as not-evil) or implementing that policy, and the kind of anthropological, epistemological, ontological, and other assumptions you must be making so as to consider such an authority worth considering.
And this approach can be applied to all of culture, not just narrowly political discussions—a movie will represent a particular sovereign imaginary, as will a dispute between parent and child, a conversation between friends, a psychological theory, and so on. Even if you want to argue for democracy or liberalism, you have to be imagining a sovereign that can protect “free speech,” the integrity of elections, whatever you imagine to be the role of the media in informing the public, and all the rest. Even a globalist, even an anarchist, inhabits a sovereign imaginary, whether it be international human rights courts and organizations mediating trade disputes, on the one hand, or spontaneously formed agreements between unbound individuals, on the other hand. The sovereign imaginaries, when acknowledged, are the starting point of needed conversations; when unacknowledged, are the sources of all the conflicts over the actual sovereign. So, making them explicit is the first step toward ending those conflicts, towards overcoming ideology.

Needless to say, I continue to consider this concept essential and unimpeachable. But I formulated it before I had thought through sufficiently the consequences of installing, so to speak, the concept of sovereignty at the heart of absolutist theory. The concept is itself ultimately a liberal one, assuming a “natural” condition of violence among abstract individuals that can only be quelled by a sovereign with direct power over each individual. It is better to see ruling as helping to maintain and enhance the peace preserved throughout the social order by its various corporate powers and governors. I have come to use the term “central authority” rather than sovereign, and so I will now speak in terms of a “central imaginary.” This should also help to conserve, within this concept, the concept of “listening to the center,” which I have discussed in quite a few recent posts. We listen to the center as the center speaks through our central imaginary. Once we have identified the central imaginary as the “topic” of our conversation, we can start to seek out imperatives from the center. Once I find in my desire for Medicare for all a faith in the possibility of clear and secure authority, I cannot but start to think of what I must do to increase the future possibility of such an authority. This imperative, and the subsidiary ones it generates, concerning maximizing agreement on this point, now restructures all my transactions with the world. And I start working on reshaping the declarative order by composing declarative sentences that answer the questions those imperatives turn into as the means of fulfilling them are exhausted in a particular case.

So the problem then becomes, how to think of that theoretical, declarative practice. I must engage with a particular person who happens to be crossing my path in some consequential way so as to enhance our agreement regarding the existing traces and elements of a clear and secure order. But I’m talking with this person and no way of fulfilling that imperative fills me with confidence at the moment, so the imperative converts itself into an interrogative, which is a way of demanding information rather than commanding a specific act. How do I arrive at a “good” answer to that question? How do we talk about doing so productively? The disciplinary space we need here should be filled with thought experiments. In devising such thought experiments, we can derive inspiration from the kinds of experiments cognitive psychologists devise in order to identify the various cognitive thresholds children pass through. David Olson made much use of these experiments in order to determine the cognitive consequences of literacy. So, you want to see whether children of a certain age, or certain degree of exposure to literacy, are capable of understanding the concept of having been wrong—that is, of realizing that they know something now that they didn’t before. You can give them a box with pictures of candies on the cover and ask them what they think is in it. Candies, of course. You open the box, and it’s something quite different—say, pencils. What is in the box? Pencils. What did you think was in it before? Pencils! The children are incapable of grasping the concept of moving from one state of mind to another in response to changes in observed reality. Or, you show children a man opening a drawer and finding something in it—neckties, say. The man leaves the room. The children see someone else come in and replace the neckties with bowties. You then ask the children: when the man comes back, what will he expect to find in the drawer? Bowties, they say. 
They can’t separate what they know from what someone else knows—they don’t yet have a “theory of mind.”

The question then is, what distinction or threshold do we want to uncover when our imperatives from the center turn into interrogatives that “command” us to compose a declarative response? What opens space to hear the imperatives of the center is deferral, which also means deferring to the other and waiting for a reciprocal gesture of deferral in turn. This is not a question of politeness or considerateness (not that there’s anything wrong with them) but of developing a discovery procedure: from another’s ability or lack thereof to defer the imposition of a compelling resentment we discern the extent to which he is ready to open inquiry into future imperatives from the central authority.

Derrida associated “defer” with “differ,” and the two words are really the same. If we want to “assess” deferral, we can therefore do it through practices of differentiation. The originary scene ends, and, more importantly, is remembered as ending, with everyone putting forth an identical gesture: hence the strictness with which ritual (not to mention grammatical conventions) is enforced. But on the originary scene there would have had to have been significant differentiation: not everyone’s sign was issued simultaneously, was held equally long, was equally well-formed, was equally responsive to the preliminary gestures and potential lapses of others. Future instances of deferral will require someone to re-activate the generative scene out of the ritualized and habitualized one. Tacitly acknowledged elements of social practices need to be turned into signs. This is not a difference between “progressive” and “reactionary”: even the preservation of traditional practices requires this kind of renewal.

The practices of literacy constellated in “classical prose” are similarly homogenized through the supplementation of an imagined represented speech scene by metalinguistic devices aimed at placing all readers with each other and the writer on that imagined speech scene. The study of language required to create classical prose makes language into the center of a disciplinary scene; the generation of future disciplinary scenes will depend upon turning those metalinguistic devices into centers of new disciplinary scenes. If you take metalinguistic representations as referring to “objects” (“beliefs,” “assumptions,” etc.) your discipline will simply reiterate the construction of “prosaic” metalinguistic literacy itself. Of course, it’s necessary to hold some concepts constant while you work with others, but that’s just a question of the degree of “flux” you want in the disciplinary space so as to conduct a particular inquiry, not of more “constant” vs. more “variable” objects or domains of reality.

You use a concept (like “assumptions”) in a particular way for a particular inquiry, which differentiates that use of the concept from the history of usage it derives from. That history of usage has taken shape in a normative center of usage, which your own usage will to some degree imperfectly iterate. Now, we can assume that the speaker’s meaning is the same as the word’s meaning, which collapses the very distinction introduced by literacy. In this case, our meta-inquiry will concern itself with guaranteeing the normativity of the use of the concept. This is a way of resisting any differentiation beyond that required to ensure the continuation of the discipline itself. In a sense, you are then like the child who thinks he always thought there were pencils in the box, and like the participant on the originary scene who forgets the event itself in its ritualization. Or, you can accentuate the difference, indeed the differences, between your use of the concept and those circulating around the normative center. The purpose of doing this is not a romantic attempt to make yourself a center of hostile attention in what is really just another ritualized form of modernist individualization. Rather, you want to ensure the commensurability of speaker’s meaning and word’s meaning—not by policing deviations from the latter, though, but by making explicit the supplementary character of the meta-concepts of literacy. We have concepts like “assumptions” and “beliefs” because the iteration of the sign is always problematic, and always requires a disciplinary “gathering” or “assembly.” In a sense, the most fundamental human question, the question upon which our emergence depended, is whether this sign that we use now is indeed the same sign as when we used it before. Literacy makes this question explicit, and all subsequent media do so as well in different ways. 
We need for the sign to be the same in its different uses, but if we imagine that it simply is the same, we commit ourselves to resisting differentiation. The only thing that makes the sign the same is that members of a disciplinary space establish continuity with other uses of the sign, which means with other disciplinary spaces. Accentuating the difference of the sign is taking on the responsibility of the disciplinary community: it is the way we make this concept our own and potentially others’ out of all the other ways other disciplinary spaces have done this. Only in this way can you remember that you first thought there was candy in the box, and what changed your mind; and you can remember the bustling, hypothetical scene even as you fill your allotted role in the rituals and habits in which its residue is deposited.

Perhaps the whole of a “centered” social and political practice lies in devising experiments for determining how a given practice is determining the sameness of the sign it is centered on.

March 19, 2019

The Worlding Event

Filed under: GA — adam @ 7:21 am

I have argued previously for the priority of “attentionality” over “intentionality”—attention must precede intention, and “intention” individualizes what is “joint” in attention, making it more of a declarative than an ostensive concept. We can trace the emergence of intentionality from attentionality, whether by “intentionality” we mean the more philosophical notion of constituting an object or the more everyday use of the term as meaning to do something. On the originary scene, all participants attend to the central object, and attend to each other attending; the sign, as the gesture of aborted appropriation, is really nothing more than the demonstration of this reciprocal attending to their joint attention. Self-referentiality, then, is built into the originary scene. Even more, what is action if not a prolongation of attention? I see the other attending to me, which becomes a kind of self-attending, as I can single out that in my gesture that might be articulated in the other’s attention, and in that way move myself so as to fit the shifting attentional structure of the other. My movements, and therefore my actions, enter into and are supported by the attentional space I have co-created with others. In all of our actions, then, we are tacitly referring to this attentional space, of which we are mostly unaware at any moment. As Michael Polanyi says, we know more than we can say. But we can say more and more of what we know, in the process producing more knowledge we can’t yet say—becoming a representation of this state of affairs is what ethical action entails.

For originary thinking, the human being has a telos: to speak and act along with the center; to enter the history of deferral in such a way as to construct the world as the effect of and continuation of that history. We assume everyone is trying to do that as well, which is why we know every utterance includes a sovereign imaginary eliciting commands from the center. Traditional ethical thinking will start to speak in terms of will, judgment, capacities, desire and its education, and so on, and all of that is fine, but we can just speak of the center one becomes as soon as one is amongst people, a center both actual and possible, and that each of us constructs as the ways we want attention drawn to or deflected from us. You can compete with other centers within the economy of attention, or you can redirect attention from you to the center enabling you to so redirect attention. Sometimes the very competition with other centers can be turned towards that end.

Performing the paradox of self-reference is the highest good for originary thinking. Turn every reference to something else into a reference to you and every reference to you into a reference to something else. You can never run out of things to do this with because everything is marked by the history of such reciprocal reference, and so keeps becoming something new. In this way you keep turning the world into a completely internalized self-referential system. This would seem to be a completely closed, and therefore dead, system, but in relation to the center this self-referentializing system is itself just a thing comprised of references to the center. You point to something, enabling others to see it, which enables it to be, but its being in turn enables you to see it and to point to yourself seeing it along with others—the center makes its appearance in this layering of the scene and the impossibility of determining whether new things are coming into view or we are sharing attention so thoroughly that we’re not sure where your seeing begins and mine ends. The center tells us to sustain that, by constructing institutions out of sites where the articulation of shared reference and self-reference (where we find a way of saying to each other, “here’s how we’re making sense of each other”) can become a model of deferral.

We don’t need to invent clever ways of enacting the paradox of self-reference, like saying “I am lying.” “I see that” is quite paradoxical enough, because “I” can only see that because “you” and “others” are at least potentially able to do so (and have therefore “always already” done so) as well; “that” is that only because I am seeing it; and I “see” that because our deferral, our laying back from appropriation, lets that object, like all objects since the first object, set itself off against a background—seeing is always a refrained touching and tasting. The disciplined forms of literacy try to suppress the paradoxicality of the declarative by supplementing sentences within imaginary scenes whose parameters are set by those defining the abstractions used to perform the supplementation. To define “perception” in terms of physiological structures and learned Gestalts is to try to abolish the paradoxicality of “I see that.” But, of course, we have to say things like that, so it’s best to say them in the manner of little satires on these suppressive supplementations, reintroducing the paradoxes they hope to avoid. Eventually, these running satiric digressions become indistinguishable from the primary discourse itself. If you can find ways of iterating this digression-within-the-discourse in new variations within emergent events so as to have each variant naming the previous ones you enable others to join in self-referential centering.

One way of breaking with Western metaphysics is by acknowledging the traditional character of all thought. The concepts you are working with have been worked with in other contexts, and are conversions of earlier concepts, which solved problems within a now extinct paradigm which has nevertheless bequeathed to us some of its problems and some of its materials for solutions. But this means that the more we shape these concepts to our own purposes the more we are participating in an ongoing inquiry with those who did so earlier, and had no idea we were coming along. But since the most fundamental and universal tradition is language itself, it seemed to me that the self-aware participation in traditions of thought could more simply be understood as a form of language learning. When you learn a new language, or when children learn language, the process involves imitating chunks of discourse in ways that are inevitably mistaken because you must intuit their uses in unanticipated contexts—how else could anyone learn? In the process, you generate new idioms, and this is how language changes—enough people take the mistake, or even a shift in emphasis, as “correct.” We never stop learning, so we’re always students, but we also have to step outside of the flow of learning in order to teach people who we see falling into what we fear (but we could be wrong) are less productive patterns of error. Here, we have, broadly, two choices: one, we situate ourselves within a more or less institutionally protected orthodoxy, and correct those whose language usage doesn’t conform. The advantage here is that you guarantee you’ll always be right and smarter than anyone who comes along. Or, you re-use the misused idiom with some of the weight of inherited uses which the newcomer might be less aware of and thereby incorporate the mistakes into a regenerated tradition of discourse.
Here, authority has to prove itself by showing itself capable of allowing digressions to flow back into a larger current. You keep emulative mimesis in play by allowing that play to construct the very space in which the implications of language usages can be explicitly hypothesized.

Many years ago I started working on what I called “originary grammar” because I felt that GA needed to be more than just another “theory,” one that offered its own “readings” of texts and “explanations” of social structures and historical events. I thought it needed to generate its own comprehensive vocabulary—a language others would have to and want to learn—rather than just saying something like, “here’s how we think it all began” and then proceeding to talk about ideas and interpretations and principles and beliefs and arguments and proving things like everyone else. And the way to do that was out of the dialectic of linguistic forms Gans worked through in the first work in GA, The Origin of Language (the new edition of which is of course available, and the Amazon page for which is still sadly bereft of comments). I was encouraged in this by the fact that Gans used a kind of grammatical approach to defining the two key intellectual and cultural transformations constitutive of the West: he defined “metaphysics” as taking the declarative sentence as the primary speech act; and he defined Judaic (I think “Israelite” is better) monotheism as “the name of God as the declarative sentence.” In both cases, the post-sacral or imminently modern world is constructed in terms of some tension between the declarative, on the one hand, and the imperative, or, more broadly, the entire ostensive-imperative network, on the other hand. Wouldn’t anything we would want to talk about be included in this field of tension?

Originary grammar should supersede scientism while preserving all the intellectual advances of science. Instead of “facts,” we have what is known ostensively: what could become an object of shared attention. Something could only become an object of shared attention on a scene, which cannot itself be prepared ostensively: we are driven to create new scenes by the breakdown of a previous scene, whose central object eventually generated new desires it could no longer defer. (Of course, the new scene could feature the “same” central object in a different way.) If the scene is not simply to break down; if a transition to a new scene is to be achieved, asymmetry must enter the arena in the form of an imperative: someone issuing an “inappropriate” ostensive regarding a new or old/new object. Here, the preservation of presence on the scene can be united with maximum innovation on the scene: we allow a space for inappropriate ostensives, to see which might work as imperatives. Finally, we can bind declaratives to the scene by allowing the declarative field maximum freedom to explore all the complexities of declarative possibilities (to cross over time and space, to organize all of reality around one center or another) on the condition that it represent actual and possible ostensive-imperative articulations. The declarative sentence constructs a linguistic present, the present in which you can utter the sentence, that, unlike the ostensive and imperative, can be separated from any particular scenic present—but that means that the “vocation” of the declarative sentence is to keep restoring the continuity and extension of the trillions of human scenes, each of which threatens in a new way to break that continuity. The declarative would be most interested in suggesting ways of preparing us, or issuing imperatives, to share new ostensives.

In this way we would have a completely self-contained and completely open system in which we would always be talking about what we’re doing in the language through which we are doing it. The content of our declarative sentences would be the way other declarative sentences have commanded us to draw lines connecting objects around a centerized one. So, discussions would take something like the following form: “you say I’ve been looking at things in such a way that others see what I don’t and this is because of where and how I stand and in saying this you are telling me to be led by the configuration which I have not yet identified as a configuration and thereby to see and lean toward something that would compel others to join me in reconfiguring it…” The specific details of any particular scene at the center of an array of scenes would be inserted.

We would be more precise than this sample indicates because each sentence modifies in some way inherited chunks of language and meaning is thus generated by the modification itself—in a language user’s noticing that you have eschewed the expression that 87.8% of listeners would have expected to come at that point in your discourse in favor of a rarely or never before used one because you want that point in the discourse to operate as a center that has you reworking language along with perception, intention and intuition. And the next declarative in the discussion could point that out or, even better, iterate it in a new modification that the language learners around you would be able to iterate in turn so as to open new fields of objects. So, we’d be talking about things in the world while talking about how we talk about things in the world while talking about how we can rework the way we and others talk about things in the world and it’s all really one “talking.” This still seems to me to be the imperative.
