We can consider the emergence of the Big Man out of the primitive egalitarian community as the beginning of civilization. With civilization comes the placing of some individual at the center of the community, as the source of power (this could just as easily be described as some individual appropriating the center). A new moral order is thereby initiated. With the central object, prey animal, or ancestor/icon at the center, the overriding moral principle is precisely preventing anyone from seizing the center—the ritual means of distributing food, mates and other goods follows from that imperative. Once a human occupies the center, that human can be held responsible for everything attributed to the center, which is everything required for the well-being of the community. Sacrificial morality involves adhering to the rules surrounding the worship and eventual sacrifice of the central figure. These rules are already a deferral of the immediate killing of the central figure as soon as some failure in his mediation of the cosmos for his people is revealed. The most moral course available in the sacrificial community is to increase this space of deferral, by attributing as much of the responsibility as possible to the ruler for actions one might imagine he could actually have carried out otherwise, or left undone. But under sacrificial conditions there is no way of consistently isolating which actions might fall into this category.
Post-sacrificial civilization (accomplished in the West via the Judaic, Biblical, and Christian revelations, and elsewhere via Buddhism, Taoism, Confucianism, etc.) is the ongoing effort to bring that category into focus. Once the individual occupying the center can be blamed for anything that goes wrong (because everything, good or bad, comes from the center), then any individual, occupying any central position, can also be blamed equally indiscriminately. And the erection of one center leads to the proliferation of other, orbiting centers, so the resistance to potentially unlimited scapegoating becomes the moral problem. How is such resistance possible, and how could the “immunity” in question be built up? There are two ways, which are not mutually exclusive and can even support each other, but one of which will nevertheless be dominant in a given case. The first way is the insight gathered by rulers that perpetual scapegoating, as well as its more organized form in the vendetta, is socially destructive and a threat to the sovereign itself, and must be suppressed. Part of the suppression involves the inculcation of self-control, which means refraining from acting upon your resentment in other than socially approved ways, and also means constructing the means of social approval, i.e., some kind of justice system that will ensure such restraint has the desired effect. Here we get, I assume, the Confucian (for example) model of the wise man, who respects authority, doesn’t act upon impulse, pursues moderation, places the family and tradition first, and so on. Such a man will defer to the authorities and not act out vengefully. But there is some question as to whether such resistance to scapegoating is more than just “pragmatic,” with the resentment instead being enacted in some ritual or aesthetic manner.
The other way, which is more transformative but also leads to more vulnerabilities, is modeled by what Gans, following Girard fairly closely here, calls the “Christian revelation.” The Christian revelation confronts each one of us with the bad faith implicit in our impulse to scapegoat, to pile unlimited responsibility on some other placed at the center. Jesus proclaims a universal moral reciprocity, retrieving the symmetry of the originary scene, but this time in a way that forces constant confrontation with sacrificial institutions, even the more deferred, mediated and “symbolic” sacrificial culture of Rabbinic Judaism from which Jesus emerged, and from which he adopted and adapted the call for reciprocity. The center demands, first of all, refraining from violence, including revenge, against your neighbor; but sacrificial religions and polities displace this violence by establishing the proper form and rationale of sacrificial violence, and they can’t tolerate the exposure of the emptiness of those rationales. It is for this reason that Jesus is sacrificed, i.e., in the name of preserving sacrificial violence and the institutions and resentments predicated upon it. Since Jesus has done nothing wrong, and in a sense nothing at all other than expose these institutions and resentments, our universal complicity in his murder reveals our own implication in sacrificial violence. Any time we find ourselves starting to put someone at the center, then, a move which always implies the possibility of a violent outcome, we are to question our sacrificial investments in doing so. Since we must put individuals in the center, we must continually disinvest our resentments in the process, and reduce centrality to the barest necessity; the construction of institutions and culture is all directed towards identifying, tagging, and studying those sacrificial investments and building regulated forms of interaction that systematize this moral imperative. This moral form is more transformative, because it is reflected back to us in all our engagements with the other, and not just in our acknowledgement of authority; it is more vulnerable because, while not incompatible with authority, the particular form taken by our restraint of the tendency to violently centralize the other can never be set once and for all. We can always identify yet a further, previously unnoticed, incitement to sacrificial resentment and, even more important, can always find grounds for condemning authorities for not protecting the victims of that resentment.
All of this is really by way of review. The further analytical step I want to take here is to explore a substantive ethical account to supplement these post-sacrificial moral forms. Morality involves the “thou shalt nots,” open-ended imperatives, what Gans in The Origin of Language calls the “operator of interdiction.” Ethics concerns self-shaping, bringing one’s actions, and therefore one’s intellectual and emotional prompts to action, into a hierarchical order directed towards a center. No sustainable ethics can be immoral, but morality can’t dictate the content of ethics. There are a lot of different ways, corresponding to different historical situations and individual capacities, of restraining one’s sacrificial resentments. For some people basic self-control, the reminder that they will be “bad people” if they commit certain transgressions, may be enough. For others, the imperative to refrain from sacrifice includes nothing less than world-building. It is also the case that ethical failure, the confirmed sense that one has fallen short of the model one has generated or adopted for oneself, that is, the inescapable feeling of being a fraud, is a prominent, I am even tempted to say the only, source of lapses into immorality.
In a ritualistic culture, one cannot be a fraud—one fulfills or violates what is required. Before we can speak of fraud, we have to have disciplines. Someone purporting to be a doctor, lawyer, banker, investor, soldier, teacher, etc., can be a fraud. All these professions are products of literacy (even soldiers who can’t read and write are part of a literate culture, which makes their discipline and method possible). The authenticity of one’s professionalism, or participation in a discipline, then, depends upon one’s relation to the metalinguistics of literacy. To review: writing, presenting itself as reported speech, supplements the elements of the speech act that are lexically inaccessible (tone, body language, etc.). The proliferation of metalinguistic terms supplementing primes like “think,” “say,” “see,” “feel,” “know,” “want,” and “do” follows. And then the nominalization of those supplementing terms. The imperative of the written text, codified in “classic prose,” is to “saturate” the speech scene, to place the reader there with the writer in his imagination. This imperative to saturate the scene is the source of the easily ridiculed and despised “jargon” so prevalent in the disciplines—the psychologist can’t simply say his patient “thinks” and “feels” certain things—those feelings and thoughts need to be made more precise (their scenic preconditions made explicit) and they need to be given a location (e.g., in the “unconscious”).
The question for the disciplines, then, is what particular thoughts, feelings, etc. (even to nominalize “think” and “feel” is to situate us within a discipline) mean. To say what something means is to refer it to something else, which is to make it less than “prime” and auto-intelligible. Wierzbicka doesn’t find “mean” or “meaning” among the primes; Olson does, interestingly, locate it in the pre-literate English vocabulary he compiles, along with a list of post-literacy equivalents and supplements, in his The World on Paper. “Mean” is a borderline word/concept, then (the Online Etymology Dictionary seems to give it virtually “prime” status, as it doesn’t present it as emerging from a metaphorical transformation of another word). Olson himself may provide the explanation for this in The Mind on Paper, when he points out that literacy introduces the distinction between “speaker’s meaning” and “sentence’s meaning,” which is itself rooted in an older distinction between “say” and “mean.” Once language, through writing, becomes an object of inquiry, words, sentences and the grammatical rules that get us from one to the other are objectified and standardized, which means that we can judge what an individual speaker says against those standards. So, “meaning,” which first marks the observation that something is concealed by the speaker, comes to refer to what is concealed from the speaker in his own speaking. There’s no reason both senses of the word couldn’t be represented by different words, so “mean” might be more “elemental” in some languages than others. “Mean” as a metalinguistic concept refers to the always existing discrepancy between speaker’s meaning and sentence’s meaning—there is always something in the words and sentences we utter that is irreducible to whatever we thought we were doing with them.
The imperative to saturate the scene constitutive of classic prose, then, is also an imperative to abolish the distance between speaker’s meaning and sentence’s meaning. Think about how much effort is put into avoiding misunderstandings, fending off misinterpretations, attacking “distortions” and “de-contextualizations” of one’s words. All this is an attempt to fold sentence meaning back into speaker meaning. This is the central ethical problem, because all sustainable self-shaping depends upon accepting and living within that distance: what you are for yourself can never be quite what you are to others, and we all need to find ways to have our representations of ourselves to ourselves be complementary to all the representations we “give off” to others. To refuse to accept that, whether by completely identifying oneself with the successive representations we give off, or (far more commonly) by trying to control one’s self-representations so as to rule out meanings other than one’s own, is to be a fraud. On the one hand, one is a Don Juan or con man; on the other hand, one is a bureaucrat of the self, or hypocrite. Either way, one is likely to assuage the sense of shame by assuming everyone else falls into the same category, in which case suspending all moral obligations to others is the sensible course. Violent resentment and projective accusation are directed towards whoever re-opens the difference within meaning.
A sustainable ethics would have to place speaker’s meaning in the midst of the multitude of actual and possible sentence meanings. We have a definition of “competence” and “virtue” (and perhaps “phronesis”) here—neither competence nor virtue is about who you “really” are, or about what you can induce others to believe about you. Both involve a kind of constant interplay in which one keeps refining one’s meaning by soliciting feedback from the ramifications of the meanings of one’s sentences. The first thing one is inclined to do upon being called, implicitly or explicitly, a fraud, at least if one suspects some truth in the accusation, is to lash out at the accuser, centralize him, and prep him for a symbolic lynch mob. Here is where the ethical problem slides into the moral one. In order to violently centralize the other you will have to “saturate” yourself—the other is “that” because you are “this.” Building and shaping oneself while and by refraining from such violence involves creating spaces that bring other speakers’ meaning into proximity with your sentence’s meaning. As others repeat, in different contexts, for differing purposes, your sentences (and you can of course join in as well), they keep exposing the distance between the two meanings, for themselves as well as for you. With this in mind, you would already write and therefore think differently, more hypothetically—if writing is always implicitly a record of speech (even speech one has with oneself), it makes more sense to explore the various settings in which that speech could have been uttered than to try to reproduce a full, present speech situation that is by definition absent. In that case, the distance is addressed from the beginning, not “patched up” afterwards. That distance, and the imperative to make it oscillatory and therefore a disciplinary space of inquiry, resisting the imperative to shut it down and resolve the discrepancy antagonistically, is the articulation of the moral and the ethical.