GABlog: Generative Anthropology in the Public Sphere

April 25, 2014

Mimetic Culture, Liminal Culture

Filed under: GA — adam @ 9:39 am

There are two kinds of moral innovations: one, upward, in which more distance is created between desire and appropriation; and the other, downward, in which that distance is shrunk by the violation of some prohibition with impunity (the innovation lies in the intimation of unlimited possibility, which mimics the generation of human possibility by the originary act of deferral). The great “axial age” moral innovations upward took place during the period of manuscript culture, when writing (and alphabetic writing, in particular, at least in the West) had been invented and was in use among a scribal elite and/or a small reading public sharing rare texts—manuscript culture was still deeply embedded in orality (texts were used to facilitate oration, or memorization), while making it possible to memorialize oral scenes and confer upon them the prestige and permanence of the written word—it is telling that the figures of Moses and the Hebrew prophets, Socrates, Confucius, Jesus and the Buddha are all very often situated within “quiet” scenes, dialogues with a few participants, or God, bordering on and often entering a silent inner dialogue with(in) the self. Words are inscribed in one’s heart, and can be recited exactly as they were originally said as many times as desired, enhancing the sacrality of those particular words, enabling the construction of communities devoted to their preservation and effectuation.

Print culture (McLuhan’s “Gutenberg Galaxy”) spreads the results of manuscript culture far more widely, while introducing the capacity and compulsion to fragment and reassemble, and therefore criticize, parody, and re-contextualize those results. Manuscript culture strives to approximate writing not just to speech, but to speech between co-participants in discussions over what is worthy to be preserved; print culture strives to make speech more like writing—normative, widely intelligible, uniform. (Part of the prodigious fertility of the Renaissance period lies in the interplay of the norms of manuscript and print culture, and of expanding literacy and the more varied layers of orality brought within the orbit of the written word.) Certainty, rather than proximity to the origin, becomes the primary value of reason, actions start to seek out widespread publicity rather than recognition as an enduring model, and thought aims at material transformation rather than contemplation. This transformation involves significant moral innovations, in particular those associated with the rigors of life in the modern marketplace: punctuality, frugality, patience, politeness, respect for rules, large-scale coordination, etc., along with a much less widely shared, but at least generally valued, fearlessness before the unknown and untried. It has also abetted new and unprecedentedly brutal forms of violence and empire, as control from the center became considerably easier to exercise, and more difficult to resist given the increasing specialization at the margins.

What about our emerging electronic and, especially, transparent and algorithmic culture? The intensified culture of celebrity and publicity thereby generated most obviously privileges the transgressive over the continent, the brash and boastful over the modest—the invisibility of the virtues of manuscript culture is intensified by the demand that everything be made visible, literal and blatant. The brazenness and self-exemption from morality that print made available to the inventors and adventurers of the modern period are now available to anyone, and it is hard to see any reason why one should display even the most minimal patience. Most people, whether they realize it or not, assume that every individual is a god unto him (or her) self. At the same time, practical learning and participation are strongly encouraged, and can curb the excesses of self-idolatry. I will return to the question of the actual and possible upward innovations native to our now native culture.

Let’s imagine, as a conceptual baseline, a near absolute mimeticism. That is, imagine that every desire is immediately and comprehensively expressed in posture, gesture and word, and every posture, gesture and word is in turn immediately and comprehensively responded to by whomever it is directed towards, and whoever witnesses it. Such an order would involve constant mimetic contagion and hence aggression and violence; it could build no institutions and have no learning. Not exactly none, though, because insofar as it is a human community, the mimeticism could only be near absolute—our barely human community is at least able to restore if not maintain order through the emergence of spontaneous forms of unanimity, in which mimeticism is transformed momentarily into a stabilizing force, directed at more or less arbitrarily chosen targets of discipline and punishment (a very Girardian model, but I don’t assume that actual scapegoating, in the sense of human sacrifice, is necessarily the primary institution).

Something of this absolute mimeticism still resides in every human, and we still respond automatically to a smile or frown, a hint of aggression, a subtle offer of reconciliation, etc. But, of course, these spontaneous reactions are already highly mediated, as there would be no “hints” or “subtle offers” in the originary human community I have hypothesized—everything would be directly out in the open. The point of the originary barely human model is to provide us with a way of measuring moral innovation. The first step beyond near absolute mimeticism would have to be someone not responding immediately, repeating the originary hesitation, allowing an aggressor to have his way, while signaling (and having that signal received) that he will not continue to have his way indefinitely. Upward moral innovations are always of this kind: a new hesitation, but one that organizes posture, gesture and word together in a new way so as to present an imitable mode of hesitation. And downward moral innovations recognize the fragility of such ascents, and recover and display against them the sheer power of a more direct action-reaction cycle. We could see human history as the fluctuation and dueling of upward and downward innovations.

So, what replaces, in the upward moral innovation, the direct, automatic, spontaneous, full and commensurate response to an other’s expression of desire or resentment? It would be trivial to say, “an indirect response,” as that would beg the question—we must imagine, then, an equally direct, automatic, spontaneous, full and commensurate response, but to the other’s expression of desire or resentment as a sign, rather than appropriative act. A sign is, in the first instance, a truncated act; to treat the other’s act as a sign is to treat it as a truncated version of a larger act, an act that entails consequences signified even if not materialized in the act itself. Treating an act as such an exemplary sign involves an audience other than the actor himself—the third person we now assume on the scene is part of the shaping of the act, one that the potential respondent, but not the actor, accounts for in his response. The act would set in motion a chain of consequences that would require for its closure the intervention of the third and perhaps other parties; that future closure is what makes it possible to treat the act as a sign. Treating the act as a sign is an attempt to obtain the closure without the consequences. And in turn, the respondent becomes an exemplary sign.

To paraphrase Aimé Césaire, Western men and women speak all the time of freedom but never cease to stamp out freedom wherever they find it. The current rampage of the victimocracy is no accident—demands for freedom on the liberal and democratic models are really demands for revenge against those who one imagines have expropriated one’s freedom. But the first freedom is the freedom from one’s own desires and resentments, and only in the most extreme instances is the acquisition of such freedom not within one’s own grasp (one just has to stop grasping at something else); at the same time, such freedom is always provisional, always suffused with doubts, always needs to be recovered, and can have no external guarantees. Demands for economic and political freedom are only sustainable insofar as they aim at the space needed to practice and exemplify that first freedom. Has a single modern political theorist ever said that? Maybe—I haven’t read them all—but it’s certainly not any part of our liberal democratic commonsense—even the awareness one finds in thinkers like de Tocqueville and the American founders to the effect that moral responsibility must attend the individual freedom democracy unleashes sees such responsibility as a concession to reality by enlightened self-interest—in other words, a more effective way of getting what one wants (or, in more theological terms, of imposing one’s own law on reality). (Only high manuscript culture, forged in self-adopted or embraced exilic relation to monstrous imperial orders and broader social decadence [by prophets, monks, small communities of teachers and disciples, self-lacerating disaffected elites], has ever understood this first freedom—which is no doubt the source of its continuing power today.)

Environmentalism admonishes us to shrink our “footprint”—they mean carbon, a trivial matter, but the metaphor is a nice one for thinking through the possible moral innovations enabled by the transparent and algorithmic. It does seem to me that a highly moral way of passing through this life is to leave only the slightest traces of footprints, i.e., identifying markers that can be definitively traced back to one’s own intentions and efforts. Rather than clearly demarcated and strategically located footprints, better to do something to reveal the world as a world of signs, and oneself as just another one of the signs, one that has lowered the threshold of significance for yet to be revealed signs. Revealing the world to be a world of signs is to reveal the world as composed of truncated, fractured, fragmented actions unmoored from the desires and resentments that originally motivated them (a radical de-mimeticization) and arriving far away from their intended destinations. Even those bits and pieces of actions can be broken down further—excessive exposure to them would restore their wholeness and render them sentimental and sensationalistic, assimilating them to one or another “classical” model—as can the very act of breaking them down. This is not just a contemplative position within our transparent and algorithmic reality, in which everything already tends to get reduced to a gesture to everything else—it is always possible to withhold the mimetic response and represent the other’s act as an incomplete one and hence a sign, a sign of which one tacitly pledges to be the bearer. The algorithm makes it possible to project hypothetical transformations across unlimited, virtual fields—the fall of a sparrow can be aligned with various possible initial conditions to produce mappings far into the future and across vastly divergent causal chains, the point being to facilitate the reduction of any act to a fluctuating data point, and hence radically uncertain in its effects but maximally significant in its articulations with other signs. This moral innovation would install, there where mimetic culture presently is, liminal culture, a culture that continuously lowers the threshold at which we perceive, feel, and intuit emergent meanings. Old cultural forms like the maxim and the epigram might make a comeback, as such literary forms can be put on a t-shirt or a web page, or tattooed on one’s skin—but maxims and epigrams that subvert and invert some vapid or bullying slogan or public imperative.
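
To make the algorithmic image above slightly more concrete, here is a minimal sketch in Python (mine, and purely illustrative; the post proposes no implementation) of how a single event, encoded as a data point, can be aligned with a spread of nearby initial conditions and projected far into the future. The chaotic logistic map is a hypothetical stand-in for the “vastly divergent causal chains”:

    # Purely illustrative: one "event" (a single number) is projected under
    # slightly varied initial conditions; tiny perturbations yield wildly
    # divergent long-run mappings. The logistic map is a hypothetical
    # stand-in for any sensitive causal chain, not anything the post names.
    def project(x0, steps=50, r=3.9):
        """Iterate the chaotic logistic map from initial condition x0."""
        x = x0
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    event = 0.4123  # the "fall of a sparrow," encoded as a data point
    for epsilon in (0.0, 1e-6, 1e-5, 1e-4):
        print(f"x0 = {event + epsilon:.7f} -> x50 = {project(event + epsilon):.6f}")

The point of the sketch is only to show an act becoming a “fluctuating data point”: radically uncertain in its long-run effects, yet precisely articulable with other such points.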

Such a moral innovation would follow in the footprints of the print revolution, with its privileging of what Walter Benjamin called “mechanical reproduction”; but, well beyond that, it reaches back to the originary scene, where the sign was created through the truncation of an act, rendering it available for reproduction, segmentation and new articulations. Remembering forward, further de-mimeticization requires further specializations, specializations that lead, not to the mutilation of the individual but to participation in a culture of overlapping disciplinary spaces. Take, for example, the operative imperative for “Seinfeld,” “no hugging, no learning,” a slogan Eric Gans discusses in one of his Chronicles on the show. “Seinfeld” is often taken as accelerating a shift towards a more thoroughgoing irony in American popular culture, marking the point at which nothing is free from irony, i.e., the point of “cynicism.” And it is true that if you watch pre-Seinfeld sitcoms, even the “boundary pushing” ones like “All in the Family,” there is always some sentimental, preachy substratum to the humor—in the end, some things remain off-limits to laughter. To see this as a shift toward a general cultural cynicism is to miss the point, I think—it would make more sense to see this development as a form of social specialization. The point of a TV comedy is to make you laugh—it should be judged according to some measure of quality laughs per 23 minutes, not the “lessons” it teaches. Why would anyone turn on a TV show to learn about life or morality? If we really did so, that would be an alarming sign of cultural decay. You turn on a TV show (at least a comedy) to get something you couldn’t have otherwise: pieces of the world turned around so that situations that are not ordinarily funny become so. Once you realize that, attempts by the entertainment industry to tend to your character become ludicrous and insulting, and, anyway, the point of gesturing to moral pieties was always to avoid professional death by “controversy,” and was therefore always cynical itself—and, indeed, despite “Seinfeld” and all its would-be imitators, earnestness abounds in American culture. And specializing in comedy is very different than specializing in one stage in the production of pins, as it relies upon anthropological, historical and sociological intuitions—what is funny today is not what was funny 5 years ago, or, often, 5 days ago.

A similar development in higher education would be welcome, particularly in the humanities—rather than going to a literature or philosophy class in order to (at its best) enter the ongoing conversation over which works and ideas should be preserved, wouldn’t it be better for your literature or philosophy professor to provide you with a form of literacy, a way of working with language so as to generate new meanings out of existing ones that you could only with significantly greater labor and a lot of luck acquire for yourself? As with the specialist in generating laughter, the algorithmic (or what is coming to be called “digital”) humanities would enable the student to reveal new fields of signs as mutations of more familiar ones. On the level of scholarship, while mimetic theories ask what is “literature,” or “reason,” or “meaning,” or “humanity,” or “society,” and so on, liminal theories would ask: where is the boundary between all of these categories and whatever their “others” might be at a given moment—this kind of inquiry would also involve learning new modes of literacy, insofar as the boundaries are always shifting, in part as a result of the inquiries themselves. (In a sense, this would make all pedagogy and even all scholarship “remedial”—part of the problem with the traditional humanities, or at least an increasingly unavoidable part of the problem, is that students can’t really “read” Plato, Shakespeare, Joyce or any of the other “great books”—they can, at best, mimic their teacher’s reading of the texts as already read, which they must be insofar as they have already been designated “great.” Providing students with reading practices that would reveal these texts to them in their otherness, with all the messiness and stupidity that is sure to follow, might lead to something interesting, even if it’s not likely that many instructors will know what to do with it.)
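
As a deliberately small sketch of what one such “algorithmic humanities” exercise might look like (an assumption of mine; the post specifies no method), consider a first-order word chain that recombines a familiar sentence’s own fragments into mutated strings, letting new fields of signs emerge out of more familiar ones:

    # Hypothetical classroom exercise: map each word of a text to the words
    # observed to follow it, then walk that map to emit new strings that
    # are mutations of the familiar original.
    import random
    from collections import defaultdict

    def build_chain(text):
        """Map each word to the list of words that follow it."""
        words = text.split()
        chain = defaultdict(list)
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def mutate(chain, start, length=12):
        """Walk the chain, recombining old fragments into a new utterance."""
        out = [start]
        for _ in range(length - 1):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    text = ("representation is the deferral of violence and "
            "violence is the deferral of representation")
    print(mutate(build_chain(text), "representation"))

Nothing hangs on this particular technique; the pedagogical point is only that familiar signs re-emerge as mutations, with the threshold at which emergent meanings appear made palpable.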

I suppose this would mean that originary thinking is itself a new specialization, a discipline focused on revealing the consequences and implications of the maxim “representation is the deferral of violence.” Our project would be to show what difference this maxim makes in all of the disciplines with which ours does or could overlap. What does the originary hypothesis enable us to see that we wouldn’t otherwise? Does that mean that one doesn’t claim that the originary hypothesis is true, or gets us closer to the truth of human being than other ways of thinking? Well, to the extent that we are invested in or converted to originary thinking we have concluded that it is more revelatory than other ways of thinking available to us, which is pretty much synonymous with “truer”; but insofar as there is no neutral set of intellectual standards by which the relative truth of theories in the human sciences can be determined authoritatively, I would say we let the “long run” settle the question of truth and attend to our business of lowering the threshold of human things we can make new sense of.

To return to the concepts examined in my previous post (“Selfy”), it seems to me that the kind of disciplinary inquiry I am proposing as a moral innovation requires self-control, self-abolition and self-creation: the disciplinary self is a creation of the inquiry itself, much like the “narrator” of a novel, who is neither the author nor a character (and where the narrator is a character, most obviously in first person narration, the reader posits another narrator behind the “I”), who exists only so long as the novel does, and is obliged to follow the rules of coherence and consistency constitutive of the narrative. Likewise, the disciplinary self is created by some boundary question or anomaly, and must remain the “same” insofar as questions raised must be answered or questioned in turn, and rigorous controls must be in place to ensure that the “real self” external to the inquiry, with its resentments and desires, does not interfere—even if those resentments and desires might (again, like the relation between author and narrator), properly treated, inform the disciplinary self. And into what does the disciplinary self inquire? Well, among other things, the slippages within and between “identities,” a central cause of “threshold” questions in the modern world; and “personhood,” perhaps first of all the boundary between the constitutive fantasy of personhood (one’s own absolute erotic centrality) and its never completed reality of shared erotic centrality. (I refer, again, to my previous post, and in particular my reading of Andrew Bartlett’s originary analysis of personhood.)

April 14, 2014

Selfy

Filed under: GA — adam @ 7:55 pm

Everyone is taking selfies, but does that mean that no one is selfy, that is, self-like, anymore? It’s a serious question, even if it is prompted by the hilarious new song (I suppose that’s what it is) titled “Selfie,” which features a young woman, with an attention span of approximately 3 seconds, whose only anchor in a stable reality seems to be the compulsion to take a selfie (and announce that she is doing so) every 10 seconds or so. The song, which, like so many other products of contemporary culture, is a parody so immersive in its object as to blur the boundary between parody and celebration, seems to suggest a direct correspondence between the ubiquity of the external and ultra-literally named marker of “selfiness” and the absence of any inner experience of the same.

Freud’s “Copernican turn” was his claim that human consciousness was on the margin, not at the center—the margin, more specifically, of immense and obscure unconscious processes that we could only ever know very imperfectly, and only affect minimally. Freud used the term “Ego,” which is not necessarily a close synonym for “self,” but we have already introduced the term “consciousness,” and cultural Marxists following in Freud’s footsteps (Lacanianly mediated) introduced the term “subjectivity,” to cover conceptual territory aimed at including and usurping that covered by “Ego,” “self,” “consciousness,” and others, like “individual,” “person” and “identity” (not to mention “soul”). The notion of “subjectivity” aims at greater precision, drawing on phenomenology to conceptualize the subject as “constitutively” embedded in a world of objects and inter-subjectively mediated intentions, but also contains an implicit taunt in its allusion to subjection. No theorist of subjectivity will admit to being only a subject herself. All these terms, except for subjectivity, are used so widely and have been used for so long that it would be ridiculous to dismiss them as “mystifications” (and not only would I not dismiss “subjectivity,” either, but I will further explore the term’s implicit argument that modern society has progressively marginalized, impaired, diminished and even shattered what was once taken to be the human center). The task for originary thinking is to explore the overlapping terrains covered by this sprawling vocabulary, and make sense of them as so many ways of being signifying beings.

Andrew Bartlett gets us off to a very good start in his “Originary Human Personhood” in the Fall 2011 issue of Anthropoetics. I will be more interested in the “self” than the person in this discussion, but Bartlett’s originary analysis of personhood, and the distinction he draws between “person” and “self,” suggests a way of starting to see these concepts in relation to each other. Starting from Eric Gans’s contention that the originary “person” was God, Bartlett proposes that the appropriation of a kind of derived divinity in the constitution of the human “person” takes place through the mediation of the private, erotic center. To be a person is to be lovable and to love (to confirm the lovability of another)—to be an inexhaustible source of desire for another who is in turn such a source for oneself; to be aligned with another as reciprocally orbiting centers of meaning and concern capable of shutting out the world. Meanwhile, Bartlett distinguishes the “person” from the “self” as follows:

To be a self is not quite yet to be a person. The self designates rather a denuded, anesthetic entity lacking both the concrete bodily vulnerability and the power to create meaning that belongs to the person. “He is a wonderful person” sounds fine; “he is a wonderful self,” awkward. “She is a giving person” makes sense; “she is a giving self” rings oxymoronic. The undesirability of the reputation of “selfish person” tells all: the self is not the person. To have achieved personhood and to have personality, to be personable, to have personal relationships–those are goods. But to have a self–well, we all have one of those, it takes no work to have one of those; having a self makes no distinction–what can one do with oneself? The erotic self–especially–knows that what it can do with itself is limited. (The erotic person, however, may seem limitlessly beautiful.) In the originary event, the moment of consciousness of self is the moment of resentment. In resenting the sacred center, we first experience ourselves as violently dispossessed by it. Originary selfhood would thus be resentfully but not interpersonally human. In naming the sacred Object only as object of resentment, we are not yet naming God as a person: the sacred Other whom we selfishly name in resentment is not the divine Person whom we name in love. By contrast, to love God as originary Person is to love something of the way the sacred central Object has moved and moves us. Likewise in human exchange, the self-dispossession of resentment opposes love. We cannot have true love for the one against whom we feel real resentment. These contrasting associations of the self with resentment and the person with love, it seems to me, are worth preserving.
And yet there is value in owning the mere originary self as a kernel of sign-using consciousness prerequisite to personhood. Individual agency, free will, moral responsibility: several founding texts of Generative Anthropology affirm the value of the contributions made by these categories to the project of our self-understanding. Acclamations of even a resentful free will are a valuable counterweight to the post-structuralist denials of agency that would sever the connection between our internal scenes of representation (i.e., our imaginations), and the many external worlds, local and global, where exchanges of signs and things produce concrete results and where ethical performances have often incalculable consequences for good and evil. Anybody who uses language is a self endowed with free will; to use the sign on the scene of representation is to be a human self. My first qualification aims simply to spotlight the fact that a self consumed by resentment militates self-defeatingly against the openness to exchange of others’ personhood, and therefore against its own. Resentfulness is parasitic on love. The totally resentful self is not yet a person because such a self must abolish without loving the otherness of the center, and the desire to abolish the center makes exchange with others as centers, as persons, impossible. Distinguishing between selfhood and personhood may, therefore, illuminate the boundaries between originary resentment and originary love. If I am consumed by resentment of the other, I have not stepped back from myself to recognize the otherness in myself. I have not learned to imitate the sacred central Other withdrawing itself in the founding move of erotic activity from which human personhood is derived.

Bartlett’s analysis explains (to follow in the tracks of his own linguistic observations) why “selflessness” is praised, and why the extinguishing of the self (as in Buddhism) can be a transcendent project—none of which would apply to “personhood.” If the self is a “prerequisite” of personhood, then the purely resentful, self-protecting self must be a kind of “skeleton” supporting the fully “embodied” person. Implicit in this argument, it seems to me, are a couple of other consequences: first, that the self can survive the obliteration of the “person” (Bartlett does not say, but how could we deny, that one’s erotic centrality could be demolished under certain conditions); and second, that as long as the self persists, the reconstitution of the person remains possible, while the pulverizing of the self, if we imagine that to be possible, would make any such restoration impossible.

This identification of the self with resentment also provides insight into the grammar of “self,” in particular its use in reflexive pronouns, which itself derives from the ancient identity of meaning of “self” with “same”—when we say “itself,” we mean the same “it” that was just referred to. In that case, the self is the sheer sameness of the individual, whatever it is that makes the individual that individual from moment to moment, year to year, decade to decade. Originary resentment is what makes us our“selves,” while I suppose the originary love of the person is ecstatic, taking us outside of the continuous flow of the self-same. Would that then mean that feelings of guilt and shame (i.e., conscience) are attributes of the self, insofar as those emotions are experienced when we have not been self-same, have broken the line of continuity (maintained through promises to self and others) that makes commitment possible? And the “sovereign subjectivity” so despised by post-humanist theories would, then, also reside in the self, or would rather be the self, which, like an ever-vigilant government, is constantly policing its own borders, keeping out intruders and keeping intact the needed defense mechanisms. Paranoia would also be an attribute of the self, and schizophrenia its breakdown.

More interesting even than all of that is the light shed by Bartlett’s analysis on the particular vulnerabilities of both person and self in a decentered, centripetal modern world. I have wondered for a while why the sexual revolution has been such an obsession of liberatory movements (political and artistic) from the Romantic period on, and why modern means of mass manipulation target the erotic so relentlessly. In other words, if Bartlett is right, then a possible strategy of assault and domination becomes visible. The specific articulation of self and person Bartlett outlines would be the basis for an individual who can think for him/herself, resist illegitimate demands, live within his/her means, recognize human limitations, and so on. If the erotic can be plugged into broader circuits of desire driven by commodity production, then personhood can be kept under constant pressure—the fantasy Bartlett outlines in his essay as the basis of the erotic imaginary (“You find yourself surrounded and alone in the center and you notice that all the people on the periphery–who knew? — suddenly “want” you erotically. They all want consummation with you, the person…”) only to dismiss as unrealistic and undesirable would be the source of one’s vulnerability to mass produced erotic fantasies, only in this case without any place to withdraw to (such withdrawal being, in Bartlett’s model, the way one transitions to a more mature eroticism).

Another prong of this assault would target the self. We could see all the normalization processes of modern societies, in which disciplines like medicine, psychiatry, sociology, economics and so on become disciplinary practices aimed at homogenizing and regulating millions of individuals circulating through modern institutions (beginning with teaching them reading, writing and arithmetic), as directed first of all at the self. All these practices can be reduced to devising and enforcing the procedures needed to maintain “sameness” across a bewildering array of institutions, situations, obligations, norms, etc. We could see the early modern period studied by Michel Foucault, in which these institutions were set up and given their legal and political foundations, as excessive, often brutal, ad hoc and easily exploited by charlatans and power-hungry psychopaths—and yet, for all that, necessary and largely successful. But with the myriad tentacles of the marketplace (most obviously, the massive explosion of pornography in recent years) undoing the erotic foundations of personhood, the processes of self-regulation may be getting more desperate and haphazard, drawing upon the new bio-political disciplines (drug therapies, gene research, etc.). It may be that more and more selves can only remain the same insofar as they adhere to increasingly arbitrary and rigid regimes of regulation.

Obviously, it would be impossible to quantify or be certain about any of these claims—what would it mean to say fewer people are less completely persons or selves now than was the case, say, half a century ago? Maybe what looks (inevitably) like disintegration to those embedded in a particular mediated mode of being is simply a transformation to terms of personhood or selfhood we are not able to recognize. Maybe, more radically, the entire vocabulary of human self-reference is being remade, to the point that somewhere down the road people won’t really understand what we once meant by things like persons and selves. To take just one example, the fact that the genre of romantic comedy in the movies is just about defunct suggests that certain key elements of erotically mediated personhood are no longer operative—the movie critic James Bowman associates the esthetic power of the genre with the belief on the part of the couple (and the audience) that the two people were “made for each other,” were “destined to be together.” Such a belief seems to me a “necessary appearance” (to use Arendt’s term for beliefs about reality that survive all attempts at demystification) for the closed erotic circle Bartlett identifies as the source of personhood—if such a belief is becoming as alien to our sensibilities as tragedy has long been, then we are indeed witnessing a sea-change in the self-person configuration.

At this point I don’t want to pursue this analysis further; I just want to suggest that originary thinking should pursue such questions as the contemporary state (or states) of the “person,” the “self,” and other originary elements of the human; and we should do so in a way that is as divested from, or defers any desire for, any particular outcome as possible. That is, no apotropaic invocations of the preferability of market society to other forms, or of the superiority or inevitability of liberal democracy—or, for that matter, any denunciations of the market or prophecies of doom regarding liberal democracy. I would recommend refusing the use of a particular historical form of personhood or selfhood as an invariant model against which we find contemporary forms to be degraded versions; or using an idealized model of the self or person in order to condemn contemporary institutions for “distorting” that model. Of course we must be interested in the outcome of any moment in the unending process of hominization; but the clarity of analysis will benefit from our keeping that interest as minimal as possible, aimed simply at identifying the threshold at which new modes of signifying emerge. What are the new modes of attentionality; how are we seeing and giving ourselves to be seen (and heard and felt and imagined) in new ways?

OK, I’ll pursue it just a little further. It seems to me that what is central to modernity is something that Marshall McLuhan associated with print culture—the capacity and compulsion to analyze phenomena into ever smaller fragments that can in turn be recombined and disseminated in new ways that bear less and less trace of their origins. The more the person and the self can be reduced to a set of fragmented, stereotyped gestures that can be turned into esthetic formulas and models of imitation aimed at directing the “subject’s” attention in pre-programmed ways (what Judith Butler, following Derrida, once called “citationality,” referring to the fact that we are always citing and quoting others, even or especially when we believe we are most “ourselves”), the less we are persons and selves. Restoring, re-imagining or instituting new forms of personhood and selfhood, or imagining forms of individuality or “agency” irreducible to those terms (we could just become indeterminate processes of semiosis, for example) would then depend upon entering, interfering with and commandeering when possible that process of analysis and composition of the elements. The skeptical, suspicious resentment of the self would be needed here, as would the ecstatic, even if fleeting, enthusiasms of the person. The problem would be to acknowledge that one is always taking on others’ words, down to one’s inmost being, while remembering that they are, even when most our own, in the end still others’ words.

March 29, 2014

Property

Filed under: GA — adam @ 10:32 am

A simple way of distinguishing Left from Right: the Right believes in property and the Left doesn’t. What does it mean to believe in property? That there are things that can only be used and enjoyed by one individual, insofar as they are not used and enjoyed by another—and that social order is only possible if that reality is recognized. Let’s say we were to establish the principle of communal property—nothing is owned by any one individual, but, rather, food, clothing, housing and entertainment are provided in a regulated manner according to shared rules. Food is distributed at regular intervals and consumed in a common area, implements are made available accordingly, and even if you sleep in the same bed more than one night in a row, a change in collective priorities (leaving aside for now how they are established) can have you rousted from it at any time, and so on. It may be that some Israeli kibbutzim actually came closest to realizing that principle. Even under such conditions, though, private property will reassert itself: I might want to try your dinner, and you mine, and so we trade—each of us would have to recognize a certain informal title of ownership in the other to make that possible. One could imagine the kind of totalitarianism needed to make that impossible: mealtimes (and every other time, because food could be saved and shared elsewhere) would have to be supervised with panoptic comprehensiveness and punishments severe enough to deter such “traders” (or you could make sure everyone has the same meal—which wouldn’t so much abolish property as make differences in property irrelevant, which means one would be working backwards, frantically negating the reality of property while recognizing that reality—could all the meals really be identical, even if that were the intention?). The same for, say, books or magazines—they could only be read in libraries (and there would have to be enough copies for all, otherwise one could have at least momentary possession of a text desired by another—which I suppose leads one to consider universally required and scheduled reading). Perhaps you could imagine stamping out or rendering impossible all exchanges (“exchange” and “property” being reciprocally constitutive concepts), but such an incorruptible system would be impossible and unlivable, if for no other reason than that it would depend upon a cadre of established or informal enforcers who themselves have a kind of property in the means of enforcement at their disposal—someone, somewhere, will take a cigarette or piece of chocolate so as to allow another to share it with a friend. James Madison summarized it very succinctly in Federalist #10: different capabilities and interests distributed across humankind lead to different results and products and, hence, property, and diverse forms of property. As Eric Gans has noted (in The End of Culture), war is the first market, where one differentiates oneself from others through performance and receives corresponding rewards—but if war, why not hunting and gathering, or rearing children, or any activity in which one could distinguish oneself? Imagining what it would take to stamp all that out would be an interesting exercise in imagining the most minimal forms of human being and becoming.

The Right, understanding the resentments that have on occasion actually led to attempts to construct such leveling systems, focuses on making the right to private property “sacred,” that is, beyond political caprice. This has had some unfortunate effects—for example, the blurring of lines that enabled many conservatives to accept slavery as a form of property, fearing that allowing the state to expropriate the slaveowners would legitimate further expropriations. However unjustified slavery is, that worry was not without foundation. But the Right has a language within which slavery can be delegitimated, insofar as the slaves’ right to property can also be asserted (and in fact would be evident in their interrelations with each other), and seen to be incompatible with their being slaves. Only theories of racial superiority, which go from questions of property to questions of biology (and race theories might care about “territory,” but not property), could definitively override that argument.

The Left, meanwhile, sees all property as being held by the grace of the community—reasoning backwards from the communal recognition and regulation of property, the Left figures that the community creates property, and can therefore recreate or abolish it according to whatever means of communal decision making are in play. Very few leftists today argue for expropriating all private property, but that doesn’t mean they believe in it—it just means that they assess that, under current conditions, given the available options, a certain (never to be precisely defined) amount of private property provides for the best way of producing and distributing wealth. If they determine that there are better means within their grasp, they will take them. The Left’s starting point, though, is always with illegitimate uses and users of private property—with who uses their wealth improperly, or who doesn’t really deserve it. They want the presumption of guilt to color our view of private property owners, even if they allow for some to be exonerated, for now. Indeed, for the Left property is never free of the taint of theft (rewards from the community for services rendered are another matter, of course)—even if they were to accept the argument for the inevitability or “naturalness” of property I gave above, and acknowledged that the desire for property and some mutual recognition of it will always emerge, they could never endorse the fact that it has emerged in the particular way that it has—there is always a perspective from which it can appear that someone elbowed someone else aside to get a larger share, and then conspired with other elbowers to protect it. (Ultimately, we’re dealing with calibrations of resentments here, not arguments.) Which is really a way of saying that the Left could only accept an originary scene as a kind of conspiracy, whereas the Right could accept such a scene, even with all the rough edges we might imagine it to have (maybe there would be a bit of elbowing). Our ability to refrain from theft even when it is physically possible and risk-free, on the assumption that the other will do the same (and without calculating the ultimate utilitarian value of such restraint), is distinctly human, and makes sense in terms of the originary scene—for the Left, that scenario only makes sense insofar as those involved have their eye on someone else’s property. This is why the Right is an inherently limited, and the Left an inherently unlimited, mode of politics: for the Right, a state of things in which theft and violence are relegated to the margins is at least possible in principle; for the Left, whatever looks like enterprise, ingenuity and informed cooperation is really a more sophisticated design on someone less able to defend their own property—and there will always be enterprise, ingenuity and informed cooperation.

Although Madison doesn’t say so explicitly, the “factions” he unsuccessfully attempted to preclude from the new constitutional order are always organized against someone else’s property—it is always possible to contend that while property in general, and of course one’s own, is perfectly fine, that other’s property has been stolen from its rightful owners. And, of course, there is theft, on petty and grand scales. It makes a big difference whether the theft was carried out through fraud or force, though—if by force, or if one insists that it was by force (pure force without at least a bit of fraud is very rare), then violence (civil war) becomes the only remedy; if by fraud, then a revision of the rules governing exchange and the enforcement of those rules (or, perhaps, simply heightened vigilance) can be the remedies. Here is another dividing line between Left and Right: when the Right cries “force,” it is referring to the state, and when the Left cries “force,” it is referring to property owners; when the Right cries “fraud,” it has a model of just exchange to aid in proposing remedies; when the Left cries “fraud,” it really means “force,” once again. The Right can therefore imagine civil war of property owners against an overweening state; the Left imagines perpetual civil war, of the (relatively) propertyless against the (relatively) propertied.

The Right can agree to limitations on property—limits on how one’s property can be used and disposed of—but only in the interest of enhancing the sacrality of property as a whole. The Right is always vulnerable to charges of hypocrisy—while the Left never has to restrain its own desire to punish and harass private property owners, even (or especially) when such efforts are impotent, the Right often finds itself in situations where, for example, the community might be so invested in the traditional uses of a particular property as to violate the general principle that the property owner can use it as he or she likes. For example, the community might band together to prevent a developer from buying a revered Church and turning it into a McDonalds, or a strip club. Hence, zoning codes, and various environmental and esthetic restrictions on uses of property. The Right can never adequately square such exceptions with the sacrality of property, and the Left can point out that the ubiquity of such restrictions until very recently demonstrates the artificiality of the sacrality of property as such—property, on this argument, has always been restricted in accord with contingent ethical, esthetic, religious and other concerns. The argument, then, seems to be reduced to semantic chicken-and-egg style quibbling best settled pragmatically, on an ad hoc basis. The Right will always lose those arguments, though, because abuses, real and putative, by those who have a lot of property—and who can therefore leave a big footprint—are visible and palpable; while the long-term benefits of the unfettered use of property, much less the principles underlying its sacrality, are invisible and abstract—you have to “believe” in property.

However effective, the only real argument for the Right is that the preservation of private property requires a community of people who respect each other as people deserving of a presumption of innocence in their uses of property, and it is this necessity that leads to the aforementioned exceptions: for private property to be sacred, other things must also be held sacred in common, and what those must be will indeed be historically contingent: houses of worship, traditional or revered buildings, works of art, portions of nature, and so on. The Left is correct to say that restrictions on private property were loosened drastically through the 19th and into the 20th century—but that simply might mean that the threshold of reciprocal trust needed to have everyone invested in the mutual defense of property was lowered as well. Still, since that threshold has been rising steadily from the mid-20th century on, it might be better to start using the principles of private property to defend those commonly held tokens of reciprocity, rather than relying upon the state, which provides an opening to the Left—so, if a community would like to preserve a park, rather than having it sold to developers who want to set up a shopping mall, let them put in a bid and purchase the land corporately. Some bids will be lost, of course, and the wealth of the community perhaps diminished when they are won. There will be free rider problems, but these can be solved by having voting rights in the newly corporate property be based on stock ownership. Arguments about future uses of the property will surely follow—perhaps that is where political energies will come to reside. If so, those energies will be far more productively engaged than they are at present.

Issues regarding sexuality (marriage, homosexuality, birth control, abortion) and personal morals more generally (intoxicants of various kinds) raise the same problems—on the one hand, many libertarians, who also defend private property as a first principle, have good reasons to be perplexed at the “social conservative” insistence that the community, “society” or the state can regulate what people do with their bodies (presumably the most basic form of property) and how they manage their intimate relations. And this is especially the case given the implication of these norms in the infamous patriarchal “property in women.” But it’s not that hard to see the effects of promiscuity and a relaxed regime of marriage on the maintenance of a culture in which property can be preserved: divided loyalties, unclaimed and uncared-for children (heirs), and a lowering of inhibitions in one crucial area of life that can readily lead to their lowering elsewhere all interfere with the clarity and predictability property requires. (As an aside, “women’s studies” could, if it were so inclined, explore the extremely varied relations women have had historically to property, their own, their husband’s, their father’s, their children’s, so as to see how women’s full participation in a restored private property regime could be ensured. [I do assume a certain bias toward men in strict private property arrangements.]) Similarly with intoxication, which even more obviously renders people unreliable and unfit to tend to their property or respect that of others.

Like hierarchies in rank (aristocracy), which also follow logically from a commitment to property (more property translates directly into more social and political power—which may be more orderly than the currently indirect ways in which such translations occur), however, it seems these fences built around the regime of property can no longer be manned. The forces of anti-property (which, ultimately, whatever we choose to call them or they choose to call themselves, means communism) are already inside the fences. A further retreat on the part of “propertarians,” which might turn into a new offensive at some time, would involve converting these once socially established and inherited distinctions and prohibitions into privately and contractually established and negotiated ones. If social norms of marriage cannot be maintained, then, insofar as the moral state of those with whom one interacts matters, and insofar as marriage (and family ties) serves as a marker of that moral state, private individuals and enterprises can demand the absolute right to interact with whom they will and therefore to recognize which marriages (and divorces) they will. Winning such a right may be difficult, but it will certainly be easier than trying to turn back the tide of same-sex marriage (much less no-fault divorce) nation-wide. Similarly, if, as seems to be the case, laws against drug use are not long for this world (how the FDA will survive this dismantling of the legal regime governing “controlled substances” is a question I have not seen anyone raise), then property owners would have to demand the right to drug test those whom they hire, or educate, or allow onto their premises (say, a shopping mall).

In these ways, perhaps the most fundamental lesson of property will be relearned: the premises undergirding property ownership can only be preserved and protected by property owners themselves, acting in concert through contracts and covenants; the attempt to slough off such responsibilities—for keeping order and policing its moral preconditions—onto the state was an experiment destined to fail, and in the end nothing more than a Trojan Horse for communism.

March 25, 2014

Back to Nature

Filed under: GA — adam @ 9:29 am

Proponents of the originary hypothesis find themselves arguing for the discontinuity of humans with non-human animals far more often than we find ourselves arguing for the continuity between the two realms or levels of reality. The reason for this is a good one, but also purely contingent: the most powerful obstacle to a careful consideration of the originary hypothesis by humanists and scientists (social and natural alike) is the “materialist” or “atheist” dogma that the quintessentially human capacities, such as consciousness, or ethics, are either illusions or versions of capacities shared with other animals. This dogma, in turn, draws its strength from the anti-religious origins of modern thought, as well as more recent victimary elements, meaning that originary hypothesists (to coin a phrase in order to avoid tiresome repetition) are in turn compelled to defend religion as “good anthropology,” at least, if “bad ontology.”

I have no quarrel with this argument, but the terms of this polemic have obscured from view, I think, the fact that coinciding with the originary leap out of “nature” must have been a very determined, even compulsive, effort at re-fitting ourselves back into nature. The sounds and rhythms of spoken language, the structures of dwellings, the forms of tools and weapons, the enactment of rituals and, finally, the original forms of the written word, would all be motivated by attempts to either imitate or blend into the surrounding environment.

All of these efforts would, of course, involve specific abstractions from (the singling out and articulation of relevant features) and therefore interpretations of those surroundings—such interpretations would depend upon the surroundings themselves as well as what the sign system and way of life of the community as a whole would lead them to notice. This is why a “Cratylian” theory of language, which would argue that the sounds of words derive from the object of representation (that is, contain a significant and irreducible “non-arbitrary,” iconic, dimension), need not be surprised at the vast diversity of languages. The re-, or retro-, fitting back into nature would always be done in specific events, and would always coincide with the development of the sign system (that is, would simultaneously involve a further differentiation from nature), but it would always be done.

There is no reason to assume that anything has changed for humans in this regard—even the most advanced technologies, enabling us to fly, light our homes, analyze our DNA, devise complex algorithms, are all (re)trofittings into nature, mediated through previous (re)trofittings, as we use the air and wind to elevate and transport ourselves, “download” our minds and brains into machines, find our language inside of the mechanisms of transmission of heritable material from generation to generation. All these devisings are, as Marshall McLuhan asserted, extensions of our senses and, more broadly, our semiosis. And, needless to say, language always situates us firmly in space, behind, above, underneath, in front of, moving or standing still, feeling, and so on. This is why I think the “naturalistic fallacy,” that is, appeals to nature as arguments for what is good, is hardly a fallacy at all—unless “nature” is reduced to a strictly physicalist account.

The question, then, is why people can feel—as they undoubtedly often feel—as if they have lost touch with nature, been alienated from it, been denatured themselves. This seems to be a perpetual complaint of civilized societies in particular, as the continuing power of the pastoral demonstrates. Part of it is certainly that whatever has become habitual, and therefore unconscious and “given,” is easily conflated with the natural, and so any disruption of habits is experienced as a denaturing. But, beyond that, a sense of being denatured is really a sense of being decultured—of having lost a shared naturalization or “fit” between a people and their surroundings. In other words, since the (re)trofitting back into nature takes place on a scene, being alienated from nature is really being de-scened.

In that case, appeals to nature can be persuasive to the extent that they direct attention to the fraying of shared attentional scenes; and they can be effective and salutary to the extent that they restore shared attention in new and sustainable scenes. But such appeals must genuinely find a “fit”—they cannot just be “rhetorical.” How is it possible to know what “fits,” and for whom the fit fits? No one really can, and extreme suspicion should alight on anyone claiming to know how “we” can restore some lost value, virtue, perception, or mode of expression that was more in accord with nature. But what can be done is singling out elements, and relations between elements, in any surrounding, and transforming those relations into constraints that allow for indefinite iteration. No new technology can denature us or take away our humanity—not genetics, not information technology, not virtual reality, or robotics, not nuclear weapons. The capacity to, and fantasy of, manipulating our basic components, of being a conductor rather than an intentional user of signs, of utterly destroying ourselves, has always been with us. Those capacities and fantasies are what take shape within and tear apart shared intentional scenes; eliciting them in relatively protected spaces and making them visible is therefore the way to restore those scenes.

Those who devote themselves to displaying whatever is anomalous in any utterance or scene therefore perform the greatest service to humanity. One of the funnier and more memorable episodes of “Seinfeld,” I think, was the one where George finds (after realizing that “every” decision he has made in his life so far has been wrong) great success in “doing the opposite” of his immediate impulses and habitual responses. Whether or not the writers of that episode were familiar with the esthetic practices of pataphysics, performance art, happenings, conceptual art, and Oulipo, the idea was in their spirit. The first “opposite” thing George does in that episode is that, instead of “being intimidated by women,” he goes boldly up to an attractive woman and, in another instance of “doing the opposite,” instead of giving her his usual line of BS, announces that he is unemployed and lives at home with his parents. The woman responds warmly (and they commence a relationship), perhaps suggesting that she herself is “doing the opposite”—the anomalous generates the anomalous. The lesson is that even the most uncreative among us can generate a constraint to live by, since anyone can find something in their daily practices to negate—anyone knows what they feel like doing, or feel like they should do, what is expected of them, in a given instance, and can therefore “do the opposite” (not that it is always obvious what the “opposite” of something is). Negation is deferral and the start of discipline, and even the most arbitrary negation can get the ball rolling.

Generating an anomaly opens a space of uncertainty, awkwardness and anxiety, a space that will undoubtedly be quickly closed, since humankind cannot bear not to feel part of a scene. But the closure will also be a disclosure, as the boundary between fit and misfit becomes momentarily visible. Technology is imperative—what we become capable of doing becomes what we have no choice but to do. But imperatives can be disobeyed, or obeyed too well. There should be rules for the esthetico-ethical practice I am proposing—all violence and compulsion should be avoided, and even discomfort should not be pushed beyond a certain threshold (which is obviously difficult to measure). The point is to produce examples or, better yet, “samples,” which are drawn from, and can be examined for their conformity with, a larger “population,” and “sampled” in turn as one pleases. This is a practice of “cynicism,” in the Diogenean sense that Peter Sloterdijk explored in his popular yet anomalous (not really left or right, kind of, but not quite, “theory”) book of the 1980s, Critique of Cynical Reason. Perhaps Lear’s fool is another “sample.” The ancient cynics were the original critics of civilization, calling for a return to nature, which is to say to what is needful and no more, a call that eventually led to Stoicism. Pataphysics (really the origin of those esthetic movements I just mentioned along with it) might lead in any number of directions, but always, I think, back to nature.

Two questions: first, why should anomalies be preferred to the normal? Second, are all anomalies created equal—can’t one be anomalous, “do the opposite,” for evil as well as good? To the first: it is not so much that the anomalous is to be preferred to the normal as that the anomalous is what we notice, while the normal is invisible. In other words, we attend from the normal to the anomalous. Furthermore, the anomalous precedes, logically and temporally, the normal: the first sign was an anomaly, and one can only determine normalcy by averaging out anomalous instances. Even more: there is no normal, just various efforts at and processes of normalizing the anomalous. None of which, of course, means that the normal isn’t real or desirable—it just means that we generate the normal by modeling ourselves on one or another anomaly. And that brings us to the second question, which concerns which anomalies we model ourselves on. And here, indeed, there can be no a priori principle that sorts out the good anomalies to be imitated from the bad ones to be shunned. Any attempt to propose such a principle simply and arbitrarily declares a particular version of the normal anomaly-free. Furthermore, if my immediate move is to negate what is immediately expected, then doesn’t that mean I slip from one negation to another, doing the opposite of the opposite, thereby spinning in circles or ending up where I started?

Let’s start with the second part of the second question: doing the opposite of the opposite and so on does not lead one in a circle or back to the starting point, because the path traced is more of a rough diagonal than a circle, and there is no starting point to return to, because that point has already dissolved in the initial negation. The expectations generated by the first negation or deferral will be different from the expectations deferred, and so their opposite will not be the original position. The only question left, then, is that of distinguishing between good and bad anomalies. Let’s try out an example: say the normal position in a particular cultural setting is to favor the death penalty. One “opposite” of this would be “eliminate the death penalty”; another opposite might be to replace today’s efficient, antiseptic death penalty (the constant search for more distanced, neat and technologically refined means) with old-fashioned hanging, drawing and quartering. Is arguing for one position better, ethically, than arguing for the other? If you’re an Enlightenment rationalist like Steven Pinker, I suppose so—the less violent the better, so if we could move “forward” towards the elimination of the death penalty, that would be an ethical advance, like the move from cruder to more refined forms of punishment. My own answer is that I don’t know. Or care. Who can know all the consequences of distancing ourselves from our ever more lethal forms of homicide? Or of brutal, terrifying, spectacle-style punishments? Or of striving to punish less and less? But, some originary demon might ask, surely we can say that random punishment, of whatever kind, is worse, more evil, than attempts to make punishment correspond to some act that has been determined, by some more or less freely created consensus, to be wrong? “Random punishment” would be another “opposite,” in this case to the juridical norms we take for granted. The idea of random punishment is not evil, because ideas can’t be evil; quite to the contrary, this idea, like any idea, is productive, because we are then able to imagine what it would mean to attempt it. I suppose Shirley Jackson’s famous story, “The Lottery,” is one attempt to do so, but even there the punishment is very confined and localized—we’d have to imagine some much more elaborate mode of generating random outcomes—if it were genuinely random, punishment could come to anyone, anytime. But would it still be “punishment”? Is “random punishment,” perhaps, simply a contradiction in terms? Maybe, but maybe not, if we were to imagine some originary and unlocalizable criminality that constitutes the human and that transcends the rather petty and irrelevant acts we carry out every day. From that standpoint, what we do now is “random punishment.” If we imagine some random punishment generator, and work through the implications, would the outcomes actually turn out to be random? (What, continues the originary demon, about someone who decides to enact random punishment, in a kind of perfectly senseless terrorism—he would be doing the opposite of something, wouldn’t he? I know you laid down a rule prohibiting violence, but the senseless terrorist is just doing the opposite of that rule, isn’t he? Well, I might answer, if he wants to be a character in a hack pseudo-Dostoevskian novel, who could stop him—but there are a lot of opposites between the concept of “senseless terrorism” and its enactment, and those digressions would be more revealing than the playing out of the terrorist act, which would actually have fairly predictable consequences.)
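(For anyone inclined to take the demon literally, here is a minimal sketch of such a generator, a few lines of Python in which everything, the population, the list of penalties, the very names, is an invented assumption of the illustration rather than anything proposed above:

import random

# A deliberately minimal "random punishment generator."
# The population and the penalties are invented for illustration;
# deciding who counts as punishable, and what counts as a penalty,
# is already a prior, non-random act.
population = [f"person_{i}" for i in range(1000)]
penalties = ["fine", "imprisonment", "exile", "death"]

def random_punishment(rng):
    # Anyone, anytime, any penalty: one draw from each pool.
    return rng.choice(population), rng.choice(penalties)

rng = random.Random(0)  # seeded, so even these "random" outcomes repeat exactly
for _ in range(3):
    print(*random_punishment(rng))

Working through even this toy version makes the point: the outcomes are only as random as the pools they are drawn from, and those pools have to be constituted in advance by something very much like a juridical decision, which is perhaps another way of saying that “random punishment” verges on a contradiction in terms.)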

Enough of that. In the end, one would simply have to trust the scenicity of human being to average out the anomalies—to arrive at modes of punishment that could be recognized as legitimate, i.e., as generating the most diffused, distributed and composed forms of resentment possible under the circumstances. What else could we trust? At this point in history, does anyone really think that they can devise a universal rule for determining what counts as a just system of punishment? (Or, at least, a universal rule that would be shared by anyone else, starting with one’s book reviewer.) In other words, back to nature! But I would like to make a more important argument here. We are, each one of us, composed of such negations, constraints and anomalies, doing the opposite in all kinds of major and minor, planned and spontaneous ways. We average ourselves out in order to “fit,” but in doing so we can try to fit into either “culture” or “nature”—by “culture,” in this case, I mean approved and standardized models (which, by definition, have banished anomalies); by “nature,” I mean the reduction to the lowest possible threshold of meaning: what do any or all of us turn out to mean by “punishment” (or anything else) when the question is imposed upon us in the way that it comes to be imposed once it has crossed the threshold of questionability? If “culture” works, it’s probably the safer alternative; I don’t think it works anymore—so, once again, back to nature!

February 22, 2014

Originary Memory

Filed under: GA — adam @ 9:41 pm

In my latest essay for Anthropoetics, I argued for a language- or semiotic-based notion of ethics, piecing together the concepts of joint attention, language learning, disciplinarity, and what I called “upclining,” or the retrieval of the signs of more originary events through the signs of the present. All of that is fine, but it now strikes me that an even simpler, more fundamental way of grounding all those concepts, and of proposing an originary ethics, is right at hand. What is ethical, and all that really matters, is remembering the originary scene. This may seem hard to understand, and even impossible: the originary scene is a theoretical construct, derived from a synthesis and transformation of recent thinkers (Girard and Derrida in particular), and while we GA-niks take it to be true, we do so because we believe it provides the most compelling theoretical and analytical account of culture, religion, society and anthropological phenomena more generally, and not because we experience any bond to an actual originary scene (the way in which Christians may experience an identification with Jesus, or Jews with the revelation on Mt. Sinai). The originary scene is not peopled for us in that way.

But how could we understand a sign without remembering other scenes upon which we understood signs, or use a sign without commemorating all those other scenes? And any sign bears with it the traces of the scenes upon which it was performed before it found its way to us—a proper care for the sign is a tribute to those earlier scenes, and through them the scenes before those. A sign well used is a sign that defers violence, even violence several or many degrees removed from the scene upon which the sign is used—using a sign to defer the first stirrings of resentment, so as to replenish, however marginally, the social store of civility, is iterating the sign’s use on the originary scene. But what kind of sign use will do that? We’re not talking about being nicer to people. Sometimes the proper care of the sign involves confrontation, sometimes bluntness, sometimes subtlety, sometimes a strong line of BS—the only way we can know is by drawing upon our intuitions as sign users, and since our intuitions as sign users ultimately derive from the originary scene, sharpening, honing and sensitizing those intuitions takes us back through the past, following the trail of auto-probatory signs to the first one.

It follows that any future-oriented ethics will be shallow, self-serving, and even fraudulent—none of us knows the slightest thing about the future, or about the way any of our actions will play out in the vast networks of activity comprising our world, so doing something “to make things better” requires an unethical degree of arrogance. Similarly, acting according to some “principle” (even “freedom”) is an attempt to evade attunement with originary intuitions, to stop listening to the imperatives that would have us turn our heads back to the ostensive whence they originate. In both cases, we are dealing with escapism and fantasy. Originary memory is taking care of language—by which I don’t mean trying to maintain it as a transparent vehicle of communication, or ensuring that words be used in their proper meanings; what I mean is that everything anyone says makes it possible to say something else that couldn’t have been said otherwise, and that in articulating one of those things that couldn’t have been said otherwise, one remembers by carrying forward the very first utterance that made everything said since then possible. It is by thus heading back into the past, enriching the originary scene with everything that has happened since (and therefore, in a sense, happened there, is still happening there), that we open up possible futures.
