GABlog: Generative Anthropology in the Public Sphere

February 26, 2019

Relevance

Filed under: GA — adam @ 7:24 am

It’s common to hear some event or discussion denounced as a “distraction.” A distraction, presumably, from what is really important. A distinction between what is more and what is less important is essentially a distinction between what is more real and what is less real. What is more real, it will always turn out, is what better fits the model of reality you presuppose—and wish to impose on others. So, people pay more attention to some lurid scandal manufactured by media outlets than to the latest study showing a decline in the wealth of the middle class. Clearly, the latter is more real, more important, because it is a sign of other things that are real and important: a decline in consumption, leading to a recession; growing dysfunction among members of the affected group, leading in turn to rising dropout, drug addiction and crime rates, with the potential for a riskier and less stable society; the possible emergence of new political forces trying to represent the dispossessed, with the possibility of upsetting the existing establishment, and so on. Meanwhile, what follows from the scandal? Nothing real—one corrupt politician gets replaced by another, maybe a new rule, soon to be forgotten, gets imposed—no one will remember it a few years down the road.

But in attributing such a higher degree of reality to certain processes, a further assumption is made: that those who are enjoined to pay attention to those processes in proportion to their reality can also affect the event, or its subsequent consequences, in proportion to the attention paid to it. Why criticize or ridicule others for being “distracted” or “distracting” if distributing their attention in a more appropriate way is not going to pay off in commensurate power over what one pays attention to? Otherwise, why not just pay attention to local, everyday, “petty” events and issues that one might be able to influence; or, to what one finds amusing or exciting? The one criticizing the distraction and the distracted, then, is the one out of touch with reality: more people paying attention to the latest economic developments does not add up to more people having intelligent, informed discussions about those developments, which would not, anyway, in turn, lead to a shift in the commitments of policymakers, such that they would now start formulating and implementing policy in accord with the presumably coherent and essentially unanimous conclusions drawn by those intelligent and informed discussions. The pathways from events, to reporting of those events, to taking in that reporting, to public opinion, to official political responses to public opinion are all cut in a manner unrecognizable to one who takes the model of the public-spirited, informed citizen seriously.

Well, then, how should one organize one’s attention? In such a way as to find vehicles for thinking through the anomalies and paradoxes that most forcefully present themselves to you. If there really are intrinsically more important or more real realities, that’s the way you’re going to find them anyway. This means we’re always working with what’s “at hand”—even when we want to be important and talk about important things we end up carving our own little niche within them, like arguing some technical point that hardly anyone else considers important. The desire to pitch one’s tent at the realest of the realities is the desire to have a commanding metalanguage that enables you to give orders, at least in your own mind, or the space you share with others, to those who actually command, who occupy centers. When a pundit or resentful intellectual says that some politician did this or that in order to distract us from what he’s really doing, resentment is expressed in a satisfying way insofar as one is superior to those so easily distracted and to the politician who thinks he can hoodwink you. You can construct a pleasing image of the political leader who will come along and carry out your instructions to the letter.

A better criterion for determining relevance and reality is to employ as much as possible of the signifying means available on the scene where you find yourself. You’re on a scene—you’re thinking about things, which is to say rehearsing potential future scenes; you’re observing something; you’re speaking with people, even if mediated through a screen. The scene has props and supports; it has a history. The participants have entered this scene from other scenes. All of this leaves traces on their posture, gestures, tone, words on the scene. All of it can be elicited. How much, and what, exactly, of it, should be elicited? Well, this is at least a much better way of posing the question of relevance than looking for an objective hierarchy of importance. Elicit whatever can be turned into a sign of the center of the scene. Any scene falls prey to mimetic rivalry: one actor tries to one-up or indebt the other, maybe even without realizing it. Everyone involved wants to be at the center, which might very well mean subverting another’s bid for centrality. It certainly means evincing resentment towards whatever keeps us all on the scene in the first place—even if, in fact, we’re all there to see that person, her attempts to usurp others’ attempts to be at least the centers of their own sites of observation are a form of resentment. And, of course, pointing these things out on the spot leaves one, justifiably, in fact, vulnerable to charges of deploying an escalatory form of resentment oneself.

Any sign of resentment toward the center is also a sign of genuflection before it. You can always show another their resentment by simultaneously showing their worship of what they resent. And of whatever it is that counts as a center upon the scene. The resented/worshiped figure, itself, points to some other center: whatever we deem to be “in” or “of” the resented object is also elsewhere, in whatever allows that object to carry on in such an offensive way. If your argument with someone escalates, it gets to the point where it becomes “excessive”—but what does that mean? Excessive according to what measure? Well, the argument started with an “issue,” but the stakes have now been raised to the point where the “issue” has become secondary—the confrontation becomes the thing itself. Like it or not, your resentment toward the other is a form of worship: you devote attention to him, and attribute to him power over your own actions (he’s making you angry). But this means that the original “issue” hasn’t been left behind—it turns out that that issue was a mere proxy for this new one, this new form of devotion. And who’s to say it’s less relevant? But what form of worship will this turn out to be? If he kicks your ass, it ends anticlimactically, and you return to your own group in shame. If you kick his, well, maybe it’s the same, because it turns out he wasn’t a worthy adversary, which is also a bit shameful and not very “relevant”; but if you return in triumph, you install him as a kind of permanent deity, whose prowess proves your own. You construct an idol, and will require ritual repetitions of the same battle.

But it’s also possible that the two warriors will discover that they worship, not the other, but something that is neither of them. Whatever allows them to make peace with honor intact, whatever they can swear by together—that is what they worship. But now every sign put forth by the other—every sign of fear overcome by courage, all evidence of training, sacrifice, self-denial, skill—everything that one can emulate, that one has put forth oneself, is a sign of devotion to that center. All the words that the two will henceforth speak to each other, and that others, telling their story, will speak of them, testify to that center. If one gets unreasonably angry with the other they can both laugh, because that resurgent resentment recalls the scene upon which its predecessor was transcended, and therefore becomes a sign of that transcendence. The show of resentment is just a demonstration of the gift of vigor given by the center.

This brings us to critical ontological and epistemological questions. We’ve already dealt with the question of “reality,” that is, whatever is inexhaustibly signifying. It’s also a question of truth, which, in social and cultural terms, can only mean the eliciting of signs of one another’s relation to the center. One central principle of modernist art is that aesthetic value lies not in what a given work represents (ideas, a social reality, etc.) but in the extent to which it makes full use of its materials—colors and shapes on a flat surface, words on a page, and so on. Modern art and its theoretical defenders were right to defend art against its social utility, which in practice means kitsch, but were mistaken in thinking that rigorous artistic practices meant eliciting desires concealed or suppressed by the civilized social order. The materials of art are the materials of other areas of life, which also use colors, shapes, surfaces, words, sounds, etc. The vocation of art is to retrieve those materials from the disciplines, which use them to establish the hierarchies of relevance through which they hope to subordinate those who occupy the center. To some extent this always means the disciplinary establishments of the arts themselves.

Whatever is presented as relevant in itself is to be presented anew as a product of a scene. This includes all the aesthetic materials that, in a disimperativized declarative, disciplined order, are set up for purposes of control—for the anticipatory capture and sequestering of resentments generated by the carousel of rotating power itself. The more you can event-ize and scenicize the conceptual hierarchies streaming toward you, the more reality you are getting and the closer to truth you are. These conceptual hierarchies always stream toward you through other people, people mediated by scenes and media. The conceptual hierarchies, then, need to be performed along with them—one needs to help elicit from them this performance, and help them elicit it from oneself. When the conceptual hierarchies dissolve, the real hierarchies that don’t need their support become more visible. The concepts can then be put to use discerning what the real hierarchies demand of us.

Here’s another way to think about it. Our lives are increasingly run by algorithms, which are really just a technological extension of the desire to predict what others will do. If I’m in a difficult situation, and I can predict what others will do for the next 5 minutes, I might get out of it; if I have machines that can predict what pretty much everyone will do over the course of their entire lives, I can dominate them fairly easily. Two things are necessary to build such machines: first, humans must be analyzed, broken down, into parts (fears and desires, primarily) that make them predictable; second, a social world is built that constantly elicits those anthropological mechanisms. It’s a bit too science-fictiony to say we currently live in such a world, but it’s obvious that we are governed by those who are trying very hard to construct something along these lines. If you want to approach this in libertarian terms, you could say freedom depends upon being anti-algorithmic; in autocratic terms, you could say that clarity in the command structure requires it. The ideal of the algorithm is to separate the declarative order from the ostensive and imperative worlds once and for all. In the perfected algorithmic order no one need ever command because everyone would always already be guided spontaneously upon the path that maximizes frictionless coordination.
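To make the two requirements a little more concrete, here is a minimal, purely hypothetical sketch (none of the names, weights, or thresholds come from any real system) of what such a prediction machine reduces a person to: a profile of fears and desires, plus a scene arranged so that the profile does most of the work.

```python
# Hypothetical sketch only: illustrative names and weights, not any real system.
# Move 1: break the person down into a profile of fears and desires.
# Move 2: arrange the scene (the "nudge") so the profile reliably yields compliance.
from dataclasses import dataclass

@dataclass
class Profile:
    fear_of_exclusion: float  # 0.0 to 1.0
    desire_for_status: float  # 0.0 to 1.0

def predict_compliance(p: Profile, nudge_strength: float) -> float:
    """Predicted probability that the profiled person goes along with the nudge."""
    score = 0.5 * p.fear_of_exclusion + 0.4 * p.desire_for_status + 0.1 * nudge_strength
    return min(1.0, score)

def arrange_scene(p: Profile, threshold: float = 0.6) -> float:
    """Pick the cheapest nudge that pushes predicted compliance past the threshold."""
    for nudge in (0.1, 0.3, 0.5, 0.9):
        if predict_compliance(p, nudge) >= threshold:
            return nudge
    return 1.0  # fall back to maximum pressure

if __name__ == "__main__":
    someone = Profile(fear_of_exclusion=0.7, desire_for_status=0.6)
    print(arrange_scene(someone))  # 0.1: the profile already does most of the work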

It’s pointless to ask, well, what would be so terrible about that (even if we could answer the question), because the ideal of the algorithmic order is really the opposite of what it appears to be. It’s just a war machine. It grinds you up to generate the inputs it needs. The victimary left thinks it opposes the algorithmic order because it reproduces the hierarchies resulting from behavioral differences—but the left just wants to control the machine. Which just proves that human decisions are necessarily made to determine what counts as “inputs.” So, the left can’t have a counter-algorithmic program. Countering the algorithm would involve asking, what would be predicted of me now? And then confounding the prediction. I don’t mean that, if your “profile” suggests that you will behave compliantly in a given situation you should instead kill a bunch of people. Indeed, a slightly modified algorithm could predict that. It means looking at the markers of compliance, as many of them as one can in a given scene, and delineating their imperative structure. We’re following orders here—we can all see this, right (look at what that guy just did… why do you think this space is arranged in just this way?… did you notice how that gesture made her nervous?…)? The algorithm can’t account for an ongoing exposure of the terms of obedience. There’s no telling where it will lead—not necessarily to disobedience; maybe to subtle shifts in obedience that might eventually add up to decisive ones. The algorithm can’t account for someone seeking out an other worthy of being obeyed, or trying to become worthy of being obeyed oneself. The algorithm can’t account for the irreducible determination of relevance, of centrality, on the scene. It can’t account for the reading and writing, literal and figurative, of all the signs of the sign as signs of centrality and marginality—and therefore of relevance.

February 19, 2019

Can Networks Crowd Out Markets?

Filed under: GA — adam @ 7:24 am

When I go to the store to buy a loaf of bread, I have to pay the supermarket because I am not performing any equivalent service for them, or because, as is the case in David Graeber’s “communism,” we are not part of a community in which it is a matter of course that each takes what he needs and contributes what he can. So, how does the supermarket know how much to charge me? They buy the bread from a baker, and they have to charge enough beyond what they pay the baker to cover the costs of the building, the store technology, salaries and wages, etc., and still make a profit. The baker, meanwhile, needs ingredients, equipment, a building, employees, etc., and that determines how much he has to charge for the bread. And likewise for those who sell the baker the ingredients, etc. If we start at the other end, the supermarket charges what consumers will pay, which comes down to how much consumers prefer buying bread here as opposed to some other place, how much they prefer bread to possible substitutes (e.g., fajita wraps), how willing they are to forego bread compared to other types of food if they have less money to spend, etc. Still, at any point in time, there must be some minimum cost of making bread, and if it isn’t sold at that cost, bakers will simply cease to exist.
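Purely for illustration, the same two-sided reasoning can be run with made-up numbers (nothing here is drawn from real data): a cost floor built up from what the store pays out, and a ceiling set by what buyers will pay.

```python
# Toy numbers, purely illustrative.
# Cost side: the posted price has to cover what the store pays the baker plus
# overhead, and still leave a margin.
wholesale = 2.00          # paid to the baker per loaf
overhead_per_loaf = 0.60  # building, store technology, wages, spread over loaves sold
target_margin = 0.15      # desired profit as a share of the posted price

cost_floor = (wholesale + overhead_per_loaf) / (1 - target_margin)
print(f"minimum viable price: {cost_floor:.2f}")  # about 3.06

# Demand side: the store can only charge what buyers will pay, given substitutes
# and other stores.
willingness_to_pay = 3.50
posted_price = min(willingness_to_pay, round(cost_floor * 1.1, 2))
print(f"posted price: {posted_price:.2f}")
# If willingness_to_pay ever falls below cost_floor, this bread stops being baked.
```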

The standard argument for markets against planning is that no one can know how many people over, say, the next month, will want how many loaves of bread, and, then, how much the operations of the supermarket, the ingredients and equipment of the baker, the machines that make the equipment for the baker, the machines that make those machines, etc., will cost over the next month. The only way to find out is to let supermarkets sell as much bread as they can, then have the bakers buy the ingredients they need to meet the demands of the supermarket, and so on, until we end up oscillating around an average amount of bread sold per month—sometimes the store will be out for a little while, sometimes it will have to throw away some extra, but that then gets worked into the cost. And, of course, with some products (probably not bread, though), the fluctuations can get really dramatic, to the point of forcing retailers and manufacturers out of business.
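A toy simulation of that oscillation (every number arbitrary, just a sketch of the adjustment process): the store orders, observes what it actually sold or ran short of, and nudges the next order toward observed demand, settling around an average it could never have known in advance.

```python
# Rough sketch of the discovery process described above, with arbitrary numbers.
import random

random.seed(0)
true_mean_demand = 300   # loaves per month, unknown to the store
order = 200              # initial guess

for month in range(6):
    demand = random.gauss(true_mean_demand, 30)
    shortfall = max(0.0, demand - order)  # customers turned away
    waste = max(0.0, order - demand)      # thrown out, worked back into the cost
    print(f"month {month}: ordered {order}, short {shortfall:.0f}, wasted {waste:.0f}")
    order = round(0.5 * order + 0.5 * demand)  # adjust toward what was observed
```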

But, what if this all worked on something like a subscriber system? I, along with many others, sign up for a certain amount of bread (or a “basket” of goods, within a prescribed range of variation); the supermarket, along with other shops, subscribes with a “co-op” of bakers, who in turn subscribe with producers of wheat and ovens, and so on, who in turn subscribe with those they need to procure materials from. This could only work, or—not to get ahead of ourselves—be imagined to be possible, if everyone joined various co-ops and virtually all economic activity took place through them. Those who make the machines to make the ovens sold to the bakers must ultimately, through a vast network of subscribers, subscribe with those who employ the ones buying the bread. Obviously, I’m trying to imagine an advanced economy without prices and therefore without money, but also without central planning, and if that’s utopian, so be it, but I at least want to think it through in absolutist terms, and maybe someone else could incorporate these reflections into more sustainable ones. The question is whether the more advanced form of coordination that the elimination of liberalism and democracy, which is to say, “politics,” might make possible would in turn make possible cooperation through a vast network of agreements between directors of enterprises.
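As a sketch of the structure being imagined (every name and quantity below is invented), the network can be pictured as a graph of promises, where the only “audit” is whether each node’s reliance on its suppliers is itself backed by further subscriptions.

```python
# Invented names and quantities, just to picture the subscription network described above.
subscriptions = {
    # subscriber -> {supplier: promised units per month}
    "households":  {"supermarket": 1000},   # loaves
    "supermarket": {"bakers_coop": 1000},   # loaves
    "bakers_coop": {"wheat_coop": 500,      # kg of flour
                    "oven_works": 2},       # replacement ovens
    "wheat_coop":  {"tractor_parts": 10},   # parts
}

TERMINAL_PRODUCERS = {"oven_works", "tractor_parts"}

def upstream_covered(node: str) -> bool:
    """True if every supplier this node relies on is itself a subscriber somewhere,
    or a terminal producer, so the chain of promises doesn't dead-end."""
    return all(s in subscriptions or s in TERMINAL_PRODUCERS
               for s in subscriptions.get(node, {}))

print(all(upstream_covered(n) for n in subscriptions))  # True: every promise is backed
```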

It is already the case that much, if not most, economic activity is organized through networking: if you’re starting a new company, you look for customers and suppliers through established channels, including friendships and other informal associations; also, you will prefer to work with more stable companies with good reputations, in communities with reasonably favorable and predictable government, etc. You will prefer customers and suppliers you expect to be around for years over those who might leave or go under next month. None of this is directly priced. But all this means that the most basic value is an institution in which the high, the middle, and the low, cooperate rather than struggle against each other, in an area where other institutions are not trying to subvert the ones you want to work with. Companies that are well established and well run, and that value their reputations, will, of course, charge one another for goods and services, but they might also be more open to various forms of cooperation and exchange where payment can be waived or indefinitely deferred. With enough companies like this, you have your network of subscribers, and below a top-tier of networks, you could have lower tiers that would be more unstable and might need further supervision, or might even rely on money, but the damage done by their failures might also be contained.

Two big problems arise (at least that I can think of now). First, wouldn’t this be a very static system? How would innovation be possible—how could anyone break into the existing networks, and why should the top-tier companies feel any need to improve their products and services? Second, within this vast network of agreements that all depend on each other within an enormously complex system (the baker can promise x amount of loaves because the farmer has promised y amount of wheat, which he can do because the manufacturer has promised z amount of replacement tractor parts, with everyone bought in all the way across the social order), how can self-interested defections (cheating, strikes, adulterations of materials, side deals, black markets, etc.) be prevented? The eligibility of subscribers would have to be regularly assessed. We must assume the highest value of all is a unified and coherent command structure stemming from a central authority and reaching into all institutions. Individuals reporting directly to the authorities would be placed in each company, and everyone in each company would be expected to consider himself a “delegate” of the government. There would be social pressures from other subscribers, who would stand to be disqualified if some of their members were found to be defectors. (If a particular subscriber can no longer fulfill its responsibilities, the central authority’s responsibility is to replace or reconstruct its governing structure.) We really have to assume the individualist, utilitarian ethics and morality of liberal society can be eliminated, and people can think of themselves as directly social. While it doesn’t quite prove anything, doesn’t the fact that so many millions of people can be recruited into an increasingly apocalyptic leftism that in many cases at least cuts against individual economic self-interest suggest that for many, if not most, utilitarian ethics are extremely unsatisfying? Doesn’t the fact that so many wish to resist the new order, without any guarantee that it will benefit them materially, indicate the same?

Innovation is really the bigger problem, since any innovation would have to, it seems, disrupt an extremely complexly integrated set of networks. How would R&D be conducted, and by whom? Here, I would follow up on some of my earlier “social market” posts and emphasize the centrality of the state to economic activity. This has always been the case, and is certainly so today—research is overwhelmingly shaped by state investment and subsidies, starting with the state’s monopoly on military equipment (which, of course, involves myriad spin-offs), but including quite a bit of investment in medicine, environmental technology, computing, communications, infrastructure, space exploration and so on. We can now add to that social network and surveillance technology. The state, then, perhaps in the form of its various agencies, would be a major subscriber within the networks. The state would be a major “consumer” of technological innovations and the co-ops who want to subscribe to certain forms of technological innovations could do so. (What does subscribing to the state entail? What does the state provide in return? Land, corporate and monopoly rights, airwaves, electronic networks, etc.) The co-op would then be able to provide a better product to its subscribers, and could solicit more subscribers as a result; as a subscriber to its suppliers, meanwhile, that co-op could be more demanding—its greater influence would enable it to get its employees into better co-ops or subscription lists.

I know it sounds crazy to speak of an advanced, civilized social order without money, in which everyone asks another for what they need, and in turn gives others what they ask for. Maybe it is crazy, but I think it’s worth the speculation if we are to think beyond liberal fetishizations of the market. Almost everyone will concede that markets require some state support and regulation, but such concessions almost always assume that such support and regulation is the “lesser evil,” and so encourage us to constantly chafe against it, and assume it could always be reduced. Nationalist economics imagines a more positive role for the state, but that still involves intervening in already existing markets in very targeted ways—the basic liberal anthropology is not challenged. “Big government” left-liberal political economy, meanwhile, always presupposes an adversarial relationship between agents in the private economy—all the state does is take sides in that struggle, or sometimes act as a referee. But if we are genuinely to see the central authority as the source of social organization, as, essentially, the owner of the entire territory over which it rules, with subordinate agencies having delegated quasi-managerial powers over the “productive forces,” then we should try to formulate that relationship prior to any mediation, like the various departments of a corporation. A corporation has external constraints, of course, but, then again, so does a government: it must show itself a respected and responsible actor on the international scene, whatever its place among an international hierarchy of states. But more important than all this is how to think about the creation, or re-creation, of an ethics, morality and aesthetics that transcends liberal ontology and anthropology.

I think the conventional view that sees pre-modern peoples as more “spiritual” and less “selfish” than moderns has it completely wrong. With all of their adherence to ritual and belief in supernatural agencies, pre-modern peoples are driven by the most material interests—fear and need. If one sacrifices regularly to the gods and is careful not to violate any ritual prescriptions, one will be provided for—one will have victory against enemies, a good hunt, rain for the crops. In a post-sacrificial order, there are no more exchanges with the gods, or even God: what God has given us is everything and incommensurable with any return; what each of us gives to God is also everything, all of us, even in the knowledge of its utter inadequacy. This desire is no less powerful in the atheist. The post-sacrificial epoch would better (and more positively) be called the epoch of the absolute imperative, a concept I take from Philip Rieff. The absolute imperative is absolute because no imperative can be issued in return by the commanded (no “God get me out of this and I’ll never…”). The absolute imperative is to stand in the place of whomever is violently centralized, i.e., scapegoated. The absolute imperative has its corollaries, enjoining us to construct and preserve justice systems that place accused individuals at the center in a way that defers, delays and ultimately transforms the scapegoating compulsion, or to represent actions and uses of language that reveal the scapegoating compulsion in less than obvious places. Obviously, we (at least in the West) “hear” the absolute imperative because of the Judaic and Christian revelations, but it can certainly be made “audible” in other ways. But all the radicalisms and “holiness spirals” of the modern world, however “puppetized” and “proxified,” are set in motion by an attempt to obey the absolute imperative. Despite the best and worst intentions of economistic thinkers (who are really obeying the absolute imperative in their own way), human beings will not be satisfied with an affluent society, not even if we make it a little bit more affluent. Or, at least, enough humans will not be so satisfied as to allow affluenza to settle in, undisturbed, once and for all.

What, then, are the economic consequences of the absolute imperative? Eric Gans—while not speaking of an “absolute imperative”—sees the economic consequences of the Christian revelation to be, precisely, the free market, where exchanges are voluntary and non-violent and the natural world can be exploited in increasingly productive ways. This may be part of it: exchanges mediated through money are a huge moral advance over such economic practices as pillage and slavery. But if the most powerful players on the market simultaneously centralize and destabilize central authority, the ethical and moral advantages of both the market and central authority are compromised beyond repair. The two must be kept separate: money must be kept out of politics, but once the money is out, what is left of the politics? What, indeed, is left of the money? If money is first created by central authorities in order to enable individuals to purchase their own animals for sacrifice, then it is from the beginning a consequence of the derogation of authority over a shared sacrificial scene. The same is the case when money and markets are created by the imperial state to provide for soldiers in conquered territories—here, as well, money is a marker of the limits of authority, which means that what money really measures, more than anything else, is the degree of pulverization of central authority. A secured central authority would, then, have to contain the market within its embedding, enabling, moral and ethical (disciplinary) boundaries. The use of money as an abstract sign of the goods, services, and capacities to be commanded by its possessor would necessarily dwindle: the uses of money would be qualified in many ways. How much could that use dwindle? If it dwindled to nothing, wouldn’t that mean that economic activity has been wholly re-embedded in a thoroughly articulated self-referential social order devoted to ensuring the institutionalization of the absolute imperative? That, at any rate, is the thinking behind the thought experiment attempted here.

February 12, 2019

Paradox, Discipline, Imperative

Filed under: GA — adam @ 6:12 pm

If the signifying paradox is constitutive of the human, then humanistic inquiry, or the human sciences, really involves nothing more than exposing and exemplifying that paradox in forms where it had previously been invisible. The paradox here is that we know what we’re going to find, but we’re going to find it, if we’re searching properly, precisely where we assumed our search for it was paradox free. I’ve been hypothesizing that what constitutes the post-sacrificial disciplines has been the concealment of the scene of writing (and subsequent media) upon which those disciplines depend. Drawing upon David Olson’s discussion of “classical prose,” in which he shows that writing historically took the form of a supplementation of the speech act represented in writing, I’ve been arguing that this supplementation occludes the scene of writing itself. What the scene of writing reveals is that words (and ultimately all other signs) can be separated from their scene of utterance and the intentions of those on that scene and iterated on further scenes and taken up by other intentions. As Derrida claimed, writing reveals that what is truly originary in the sign is its iterability, not its meaning or the intention behind it; we can take the next step and say that its iterability, which guarantees the possibility of future human scenes, is its meaning, and is the intentionality of anyone issuing a sign. So, the meaning of the word “dog” is something like “I reference, with varying degrees of directness, all previous uses of the word ‘dog’ in order to enable a potential ostensive that will enhance scene construction in more or less vaguely conceived future instances of emergent mimetic conflict.”

The disciplines, starting with the mother of them all, philosophy, want to abolish paradox. An acceptance of paradoxicality would situate the disciplines as supplemental to the paradox of imperatives issued by the center: the narrower and more precise the imperative, the more all of its intended subjects must make themselves ready and worthy of obeying it in unanticipated settings. Inquiring into this paradox would be all the human sciences we ever need, but in this case the disciplines would have to “abdicate” their self-appointment as those who provide the criteria upon which we judge the legitimacy of the sovereign. Is the sovereign doing “justice,” is he protecting and respecting the “rights” of his subjects, is he meeting their “needs,” adhering to “international law,” enforcing the “law,” ensuring “prosperity,” “wealth creation,” “growth,” etc.? Has he been selected and does he rule according to procedures in a way satisfactory to all those who have themselves been appointed by certain procedures; all of which procedures merely lead us back to the establishment of those procedures according to other procedures, which…? If no, then he’s not the “real” sovereign, and in order to know whether he is or not you have to be a political scientist, a legal theorist, an economist, a sociologist, etc. To maintain that position, you must suppress the paradoxicality of your own utterances. You must provide certain, clear, unequivocal declaratives yielding universally available virtual ostensives that lead to only one conclusion regarding whether the central authority is rightly distributing whatever it is your science assumes he must be distributing.

The human sciences claim they conduct inquiries modeled on the experimental sciences, with their process of hypothesis generation and testing, but they really don’t. (Do the physical sciences? Should the physical sciences? I leave these questions aside for now.) I worked my way to this realization through reflection upon my own little field, the teaching of writing. I came to see that all the criteria used to determine whether student writing was “good” or “improving” were circular—terms like “clarity,” “precision,” “deep analysis,” “reading comprehension” really don’t mean anything, because what it means to be clear, precise and all the rest depends upon the situation, i.e., the discipline. The assumption is that the instructor him or herself knows what clear, precise, analytical, etc., reading and writing are because, otherwise, what would he or she be doing teaching writing at an accredited institution? But that means that all of these supposed concepts really translate into the teacher saying “become more like me.” And how can the student tell what the teacher is “like” (since the condition of the student is defined precisely by being unlike the teacher)? Well, I’ll tell you when you are or aren’t. So, for most writing or English teachers out there, this is why your students always ask you “what you want”—they have intuited that the entire structure of your pedagogy is predicated upon you desiring from them a reasonable facsimile, not of who you really are (that would be hard enough) but of who you imagine yourself to be.

From this I concluded that what is to be rejected in this conception of teaching and learning is that its “standards” do not provide “actionable” imperatives. No one can obey the imperative “write more clearly,” unless there is already a shared understanding of what “clarity” entails for the purposes of that communication. And, again, in the educational setting, such imperatives are issued precisely because the student doesn’t have access to such a shared understanding. So, I concluded that the only kind of “fair” and effective pedagogy is one that provides students with imperatives such that they can participate in creating the shared understanding making it possible to determine when those imperatives have been obeyed. This generally involves something like “translate (or revise) X according to rule Y,” i.e., some operation upon language from within language. I don’t want to go any further here (but if anyone is interested… https://wac.colostate.edu/docs/double-helix/v6/katz.pdf)—but the point here is that the conclusion applies to all the human sciences (which are all, really, if unknowingly, pedagogical). That is, a genuine human science would have to participate in its “object” of study, producing imperatives aimed at improving the social practices it studies, along with generating the shared criteria enabling the practitioners to assess the way and degree to which the imperatives have been fulfilled. (Of course, political scientists, sociologists, economists and the rest make suggestions to policy makers all the time—indeed, they are routinely hired and subsidized for this very purpose. But the results of these suggestions and proposals can only be assessed in the language and means of measurement of the disciplines themselves—they therefore represent different ways of imposing a power alien to the practice in question. They are attempts to give imperatives to, rather than receive imperatives from, the central authority.)

The next question, then, is how do paradoxes generate actionable imperatives? To get to paradoxes generating imperatives, we can start with the imperative to generate paradoxes. Find the point at which the relation between the name, concept, or title and what it names becomes undecidable—that is, where it is impossible to tell whether some thing is being represented or some representation is producing a thing. This undecidability pervades language in ways we usually ignore—has it ever seemed to you that someone had the “right” name (their given name, not a nickname)? It’s absurd, of course, but, on the other hand, one’s name can correspond more or less closely to one’s being, can’t it? The argument over whether words represent their meanings “arbitrarily” or through their “sound shape” as well goes back to Plato’s Cratylus, and is not settled yet—whatever the truth, the fact that it’s a question, that words sometimes seem to match, in sound, their meanings, is an effect of the originary paradox.

This paradox of reference will emerge most insistently in the anomalies generated by disciplines at a certain point in their development, but can be located at any time. What is “capital,” what is the “state,” what is “cognition,” what is “identity”? If you ask, you will be given definitions, which in turn rely upon examples, which in turn have become examples because that term was used to refer to them. This is the kind of deconstructive work that opens up the question of the relation between a discipline and the intellectual traditions it draws upon and conceals. Within that loop of concept-definition-examples-concept is the founder of the discipline and the containment of some disciplinary space. A new imperative, or chain of imperatives, from the center is identified and represented as a new imperative the sovereign is now to follow—he is to create a new social order freeing capital or making the state independent, unleashing new cognitive capacities, representing pre-formed identities.

Articulating these paradoxes, then, presumably helps us generate concepts other than “capital,” “state,” “cognition” and “identity.” Let’s review the process of discipline formation on the model of Olson’s study of literacy and classical prose. Writing represents reported speech, but since it does so in abstraction from the speech situation it must supplement those elements of the speech situation it can’t represent: tone, gesture, the broader interaction between figures on the scene. This generates new mental verbs: suggest, imply, insist, assume, and so on. These mental verbs are in turn nominalized into suggestions, implications, assumptions and so on (it doesn’t happen with all words—there seems to be no corresponding nominalization of “insist,” at least in English). These nominalizations become new “objects” of study, for linguists, psychologists and ultimately all the human scientists. These concepts are artifacts of literacy—this doesn’t mean that they can’t tell us something about processes of thinking, knowing and speaking, but it does mean that they conceal their origins and become naturalized as “features” of “mind” or “language.” Cognitive psychologists, for example, can set up ingenious experiments that test the role of, say, “prior assumptions” in decision making, but built into these studies is the literate, declarative “assumption” that it would be better if decisions were made purely through abstract ratiocination without reliance on “prior assumptions.” So, the use of power to favor what cognitive psychologists and like-minded human scientists across the disciplines would recognize as “rational discourse” is implicitly endorsed over any attempt to, say, think through what a “good” shared set of “prior assumptions” might be.

So, let’s say we reverse the process, and dis-articulate the nominalizations back into verbs. Anna Wierzbicka’s primes can be useful here, but they’re not required. So, for example, the psychologist Daniel Kahneman “writes of a ‘pervasive optimistic bias’, which ‘may well be the most significant of the cognitive biases.’ This bias generates the illusion of control, that we have substantial control of our lives” (I’m just working with Wikipedia here). So, “we” can measure how much “control” “we” have over “our” lives, how much control we think we have, and the “distance” between the two. Those doing the measuring must have more control than those being measured—they know how “complex” things really are. The best way of measuring such things seems to be asking people how much they think things will cost. (Maybe Kahneman has a bias in favor of certain understandings of “control” and “complexity.”)

But being more or less “optimistic” is a question of wanting, hoping, thinking, knowing, trying and doing. These activities are all part of each other. You have to want in order to hope, and you have to try in order to do, and you have to hope in order to try. And you have to know something (not just not know lots of things) in order to hope—knowing the relation between trying and hoping, for example, and how that relation is exemplified within whatever tradition or community you are located in. And the relations between all these activities can be highly paradoxical—the harder you try, the higher your hopes might be, which might mean the more deluded you are or it might mean the more you find ways of noticing your surroundings, taking in “feedback.” But can you try really hard without hoping? Sure—and consciously withdrawing your hopes from your activity, draining your reality of its aura of hopefulness, so to speak, might be a new form of hoping, one in which you accept a lack of control as part of a “faith” you have in “higher powers” or the mutual trust with your neighbors. From this you derive an imperative to hope, try, know, etc., in a specific way, within a particular paradoxical framing. (All of the binaries targeted for deconstruction by Derrida are sites of this originary paradoxicality.) None of this interferes with re-entering the entire disciplinary vocabulary from which we departed, and reading the discipline itself in terms of its hoping, trying, knowing and so on. Any disciplinary space must also be a satire of some institutionalized discipline.

February 7, 2019

Salvation from the East

Filed under: GA — Q @ 6:30 pm

The religious practices of Buddhism, Hinduism, and Taoism are mostly ritual in many places. But there is a more spiritual strain found in certain sects and their texts: the idea that consciousness itself is the sacred, or God. The very fact of being conscious means that we already know everything that it is possible to know about God or Buddha, although certainly revelation or enlightenment can make that knowledge more available to understanding. The insight of Buddhism is that consciousness is essentially one thing, despite the various creatures who each possess their own form of consciousness, and despite the infinite possible objects of consciousness. A Zen master once summed up Zen teaching in one word: “Attention.” When asked to elaborate, he said, “Attention, attention, attention.”

Consciousness is shared by animals. There is a famous Zen koan about a monk who asks his master whether a dog has Buddha-nature or not. The Zen master answers “no!” although I understand the Japanese word “mu” (sometimes translated as “no”) is actually more nuanced. The traditional teaching of Buddhism is that all beings have Buddha nature and attain enlightenment, including dogs and stars, rocks and trees. The point of the koan, as I understand it, is that the student should be seeking enlightenment not wondering about doctrine. Zen disciples seek enlightenment, the content of which defies any rational explanation. Meditation on the master’s answer to the student’s question, mu, informs the Zen practice of some disciples.

Kant wrote, “Two things fill the mind with ever-increasing wonder and awe, the more often and the more intensely the mind is drawn to them: the starry heavens above me and the moral law within me.” To develop his point, I would say there are three miracles which support faith in God: Being (the universe surrounding us), life (which fills the heights and depths of our planet), and human consciousness, which is indeed distinguished by our sense of right and wrong. Eric’s idea of the scene of representation can be articulated in ethical terms. To be aware of God, or to be aware at all, means to be aware of the human community, as a community, and to be aware of individuals, as individuals. Human consciousness is a new awareness of others, as mediated by a sign. Buddhism is never nihilistic. Consciousness, the scene of representation, is filled with the attention of the human group.

All animals and even plants have the ability to react to their environment, and can thus be said to share the miracle of consciousness. Rocks and stars do not obviously have consciousness. But Zen Buddhism, like Kant and Hegel, seeks to overcome the opposition of the perceiving self and the objects of consciousness. Anything that can be perceived, therefore, partakes of Buddha, and the duality of subject and object, coming and going, is illusory. For this reason, the question of whether the scene of representation has any ontological status apart from human consciousness is not meaningful.

Buddhism recognizes that life is suffering, and that the source of suffering is desire. The human condition therefore is desire and the suffering which results, a very Girardian insight. The solution is to minimize desires for sensual pleasure and not let desire lead one into sin. There is an ascetic strain to certain types of Buddhism and Hinduism. Enlightenment suggests an inner serenity and detachment from external conditions, founded on the insight that all existence is always-already free and transcendent.

February 5, 2019

Form and Paradox

Filed under: GA — adam @ 7:14 am

Once the sign has done its work on the originary scene, that of arresting the forward, convergent movement of the emergent community toward the central object, the members of the group will, indeed, proceed to advance on the object and consume it together. This raises the question of how they do so without forgetting what they just learned, and restarting the mimetic crisis. The sparagmos, the manifestation of the resentment toward the center, must be contained. My answer to this question, one I have put forward many times, is that the sign is “flashed” at each point along the way, accruing meaning and variation as it goes. Even at the “wildest” moment of the sparagmos, a quick gesture would prevent one member of the group from encroaching “too much” on the portion of another member. What this means is that form is needed to make transitions from one activity to another, or from one “stage” of an activity to another.

This is the reason for that “canopy of ceremony” enveloping all practices in traditional orders, the loss of which in modernity is so bitterly mourned by reactionary cultural theorists. Think, for example, of how difficult it can be to “disengage” from an intense conversation with a close friend. It’s awkward to say something like “ok, see ya” when that cut-off point inevitably comes. The good-bye is best framed in such a way as to indicate some carrying over of that experience into more mundane activities, as well as that the separation represents a mere interregnum, as the conversation will be resumed at some later point. Or, take perhaps the most “wild” activity of most modern humans, sexual intercourse—just as some process of seduction must precede the act, some exchange of words and gestures must “seal” its conclusion, both to preserve it as sacralized memory and integrate it into the rest of life. A lot of “bad” sexual experiences are no doubt a result of a failure on the part of one or both parties to see to the “scenic” character of the act. (The new legal doctrine of “affirmative consent” is a kind of unintentional parody of this need for form, trying to codify in declaratives what must in large part take place on the ostensive and imperative level.)

I’m coming back to this question in connection with arguments regarding the moral order of absolutism I’ve been making recently. The problem for absolutist political thought is conceiving of a post-sacrificial center. We can’t have a God-Emperor because we know that the emperor doesn’t control the weather, the river or the crops, nor can we in good faith bring some portion of our possessions to a temple to be consumed so as to ensure the regularity of rainfall or, more generally, the benevolent gaze of the deities. But, since there is a center, over and beyond any “justifications” for it, or for a particular occupant of the center, that anyone could provide, the center’s de-sacralization leaves a hole. Since what the center does is issue imperatives, in obeying the imperatives from the center we confer the “graceful charisma” (a term from Philip Rieff recently referenced by Imperius in his twitter feed) the center needs—more precisely, we do so in the way we obey, by eliminating the gap between the imperative issued and the imperative obeyed. “Social science” becomes a holy science insofar as it is wholly engaged in studying the difference between imperatives issued and imperatives obeyed, including the ways that difference is manifested through the declarative order.

A particular “fork” confronts us in embarking upon the path any imperative places before us. Since the center is occupied by, has been “usurped” by, a human, every human comes to model him or herself on that occupant by demanding some form of centrality him/herself. Being the recipient of an imperative places you at a center with, therefore, some power to wield—at the very least the power to direct attention one way or another. One way of directing attention is by appropriating the “transgressive charisma” (to return to the distinction Imperius evokes) one gains by violently centralizing someone “falsely” claiming centrality. This putative falseness consists, circularly, in its marginalizing of the self-centralizing of the present claimant, and of all those he invites to be represented by him. We can identify transgressive charisma because its bearer will accuse his target of all of the violations of normative order that he himself commits in his very accusation.

And this normative order is the result of the deferral of scapegoating that marks post-sacrificial order. Something goes wrong—our first impulse is to find the origin of the threat and eliminate it. (We are all originary thinkers.) How? We first of all look for a human origin because anything that threatens us seems intentionally directed at us, and only a human could threaten us intentionally. (Gods, in sacrificial orders, can be considered humans for this purpose—the border line is very porous.) So, which human? Some of us stand out more than others, whether it is because we are “defective” in some way (physically disabled, speaking with a lisp, etc.) or because we have come, rightly or wrongly, to be associated with “trouble.” Some of us are “marked,” in other words. Someone, in a given situation, will be “especially” marked. How so? Someone will make some apparently plausible connection between that individual and the event. Someone else will second it. Others start to look more closely, and find other reasons for suspicion. And not just suspicion of a past deed, but of ongoing connivance in whatever the threat is. Everyone starts to converge upon this individual. It is not just that he needs to be punished, but that he is the source of a contagion that can only be stopped by shutting it down at its source, and right now. The proof of this is the very contagion that leads to the convergence on the individual. The panic intensifies until that individual is eliminated.

That is scapegoating, and we see this kind of thing happen, usually, of course, in much less disastrous forms, all the time. Look at why people get excluded from groups, ostracized by or within institutions. Now, if we put the scenario I described in the previous paragraph in reverse, let’s say that as the crowd starts to converge, one individual hesitates, and starts questioning the movement toward this central object. He points out that the association someone has made could easily have another explanation, or may not even be an association. He proposes that we look more closely at that purported “evidence.” He might further point out that harming this one person will do nothing—whatever the emergency is (if it is in fact an emergency—another question he might raise), it has to be addressed on its own terms. He may point out that some of the participants are clearly hurling accusations only because others are—indeed, they’re the same accusations, and the people hurling them give no evidence of having thought of them on their own.

All this scenic construction is what lies at the base of a “normative order” or “justice system.” The entire legal system can be seen as erected so as to cut off at the pass all the mimetic inclinations toward scapegoating. But the person who slows down the crowd redirects its hostility toward himself. He may become a victim, but he has advantages that the chosen victim doesn’t. The selected victim, the “emissary,” is marked, and every response he has given to the crowd has stained him further—his denials are obviously lies, his tone and gestures show that he is keeping some secret, etc. The retardant, meanwhile, is no more marked than anyone else, and attempts to mark him now will be risky because too obviously “interested.” He begins by drawing attention to the crowd, which must now look at itself—or, at least some are looking at others, diluting its “crowdness.” To the extent that he is an effective retardant, everything he says confronts some claim, some accusation, made by the leader of the crowd (the self-chosen leader, or perhaps one chosen by the retardant himself, to give the crowd focus and slow it down). Why did he notice this, but neglect to tell you that? The retardant doesn’t want to renew the crowd’s fervor, this time directed at its (former) leader—he wants to dissolve the crowd, while ensuring that it retains a memory of what it would most like to forget. It may be important to punish the leader, but it should be a slow and proportionate punishment, in contrast to the hurried and massively disproportionate one the crowd was about to inflict. Most basically, the punishment should be a lowering of the trust given to that individual, which is really just a recognition that he has revealed something that we can’t forget. At the same time, there will now be something in each of us that we trust a bit less, and we will all be a little bit more ready to listen to someone taking on the role of the retardant in similar cases.

You have a post-sacrificial culture once the balance has shifted from the arsonists to the retardants so that, ultimately, most of us are mostly retardants, and can note our own inflammatory tendencies. But once this takes place there comes the tendency to farm out our retardant capacities to automatized institutions that run according to fixed rules and bureaucrats who can apply those rules without thinking too much about their origins or meaning. Sacrificial tendencies will then recur; indeed, the justice institutions themselves will attract such tendencies, where they can be indulged covertly and in good conscience. (Liberalism is essentially the laundering of scapegoating through the justice institutions.) We will never be able to stop learning to be the first retardants. This is what we learn by giving form to all of our interactions and thereby ensuring continuity and consistency of intent—passing the baton, so to speak, even to ourselves. When scenes are formally constructed, emergencies are already accounted for in terms of the scene itself—there are “procedures” in place, even if only tacitly, in the forms given to actions and interactions. It is accusations of intent that can’t be seen in the form of one’s actions that will stand out, not markings of being less fit.

This requires an acknowledgment of the paradoxical structure of the sign I’ve been exploring in the last couple of posts. Again: we create the “reality” that we also simply “refer” to. Even knowing this doesn’t extricate us from the paradox, because any attempt to act on this knowledge just generates a new scene, with an uncertain outcome, on which new signs with the same paradoxical structure will be emitted. We work, live, think and speak with this paradox by remaking ourselves, as much as possible, into forms that sustain continuity across acts. I might be marked; any of us might be, under certain conditions. But one can show that the very things that might mark one are in fact signs of one’s retardant quality. What seems irritating, annoying, or threatening is really my giving notice of a readiness to hesitate before any prospective convergence. I would then need to remake myself so that that is genuinely the case, so that I don’t delude myself into thinking that simply being irritating and annoying in itself marks one as a retardant. One thereby constructs the reality within which one will circulate as a sign of deferral, but it will only be such a reality insofar as one actually defers, which also depends upon all the others—all the others with whom one is then engaged in a reciprocal process of creating an idiom of forms constituting an oscillation between hesitancy and continuity.
