GABlog Generative Anthropology in the Public Sphere

March 12, 2019

Dialectics

Filed under: GA — adam @ 7:24 am

Dialectics is the rendering of paradox pragmatic. There are two ways of thinking about dialectics. One is as a mode of generating new ideas through probing, critical dialogue, in which each side tries to make explicit the assumptions underlying the other’s discourse. This notion of dialectics goes back to Socrates, and a particularly interesting modern example can be found in R.G. Collingwood’s understanding of dialectics as the attempt to find agreement underlying disagreement. The agreement, which, in Derridean terms, was “always already” there (insofar as argument was possible in the first place), is nevertheless, once explicated, a position that neither side knew they held in advance. In other words, something both originary and new emerges.

The other way of thinking about dialectics is as a way of understanding a historical process, or even as that process itself, whereby events are generated by contradictions in an existing social form, so new configurations emerge which both fulfill and confute the intentions of the actors who initiated them. Historical dialectics acquired a bad name as a result of its association with orthodox Marxism, which used “dialectical materialism” as a “guarantee” of both the inevitability and justice of its own victory, but Eric Gans employs a much subtler version in his account of the emergence of the imperative speech form from the ostensive and then the declarative speech form from the imperative (by way of the interrogative). Here, the shared intentionality bound up in a particular sign is put to the test (“contradicted”) by an “inappropriate” use of that sign; the tension is resolved as the desire to maintain shared intention (“linguistic presence”) generates a new speech form, “recouping” the “mistake.”

Unlike Marxist dialectics, this Gansian version allows for all the times when this “transcendence” of the previous form would fail to take place—linguistic presence can be broken, and some form of violence and social crisis ensues. The result of a dialectical process, then, can only be assured once the new form has been spread through imitation sufficiently so that it has proven itself capable of deferring the antagonisms those failures would have aggravated. In other words, “historical dialectics” proceeds in a manner beyond the intentions of any participant, but must be “authenticated” by shared intentionality at each point along the way and eventually yield a higher level of shared intentionality. But this also means that the two meanings of “dialectic” are one: the emergence of new historical forms is a process of more advanced dialogues taking place at the margins and gradually providing the means of deferral that enable a reconstructed center to resolve some crisis. Thomas Kuhn’s notion of scientific revolution provides us with the best model for understanding this process: the margins where the more advanced, “disciplinary” dialogues are taking place are where those who have perceived the anomalies of the existing social order, and have come to doubt whether those anomalies can be “recouped” within that order, produce questions invisible within it. Their work is then focused on developing and trying out possible paradigms that might replace the prevailing one.

We could see the emergence of Generative Anthropology itself in just such dialectical terms. At the center, according to the originary hypothesis, sits a potential victim. It is in designating this potential victim, and refraining from victimizing it, that the sign emerges and the group is formed. But how did this clear, minimal insight become possible? If the making of victims is a matter of course, whether it be through conquest, those in power destroying those who might pose even a distant threat, sacrifice, mass slavery, and so on, one would never consider that the production of victims could be a source of any significant insights. In fact, I wonder whether a word equivalent to “victim” would even have been used (the word “victim” itself, according to the Online Etymology Dictionary, comes from the creature brought as a sacrifice). Certainly those whom we would today consider victims, like conquered, displaced and massacred populations, would not have thought of themselves in those terms: they would know, of course, that they had been bereft of their gods, rituals, territory, wealth, kinsfolk, institutions, and so on, and they would mourn all this and bemoan their destruction or enslavement, but this would be a source of shame and loss of faith more than of a complaint anyone would be expected to attend to. Our gods have failed us, or we failed our gods; what else is there to say?

Only with the emergence of justice systems can the notion of a “victim” be conceptualized—that is, once wrongs are not addressed directly through a vendetta but through some socially sanctioned process of determining punishment. This indicates an added degree of deferral, which opens a new realm of paradoxes. The law is established so as to do justice, because “justice” by definition is the proper allotment as determined by anyone who is in the “right” position to determine it—so, something we could call “law,” even if that means the sifting through, by legal professionals, of privileged precedents, rather than a written code, will emerge with the concept of “justice.” But, then, isn’t “justice” merely an effect of what the law, with its own institutional history, has decided? In that case how do we determine whether the law has been rightly decided? For this, we must step outside of the system, to reclaim its origin, but this stepping outside is a dialectical process which requires the model of the exemplary victim of the justice system itself. At that point, the concept of the victim becomes increasingly central culturally until, in Christianity, we have the worship of the exemplary victim. As Christianity permeates all cultural sites to the extent that it can be detached from its origins and its victimolatry separated from the carefully demarcated exemplary victim defining it, all of culture comes to be obsessed with the search for victims and self-representation as victims. The history of democracy, liberalism and romanticism traces this negation of Christianity from within Christianity. With post-structuralism, even language becomes grounded in victimization. Victimary thinking becomes so central as to have destroyed any “other” it could distinguish itself from for some moral purpose. Once this ontological colonialism has proceeded to a certain point, it becomes possible to consider that it is not victimization that is at the origin, but a refusal to victimize. And then it becomes possible to think the originary hypothesis.

We can posit a related dialectic as the form of modern politics. Eric Gans speaks of an oscillation between “firstness” and “reciprocity” as constitutive of liberal democracy, but this can’t be a dialectic because nothing new can come out of it. The distributive demands of the moral model will always be assailing the innovators and merit-based hierarchical structures that make those demands for equality possible in the first place. The only thing that could keep the pendulum swinging back and forth is a sufficient degree of cynicism on the part of the redistributors—they must know, as the Schumers and Pelosis surely do, that the “eat the rich” and “get whitey” talk is just to keep the contributions flowing and the voters and activists mobilized—they know better than to actually kill the goose laying the golden eggs. But their successors, like AOC, Ilhan Omar, Rashida Tlaib and others, don’t know this. They’ve grown up saturated in the political simulacra of Media Matters, and take all the egalitarian talk quite literally. Even if they “grow in office” and realize what the progressive ideals are really for, we wouldn’t really have a dialectic: the increasing disparity between ideals and the cynicism with which they are advanced can’t lead to anything new. Even if the pendulum keeps swinging, all it can lead to is more corruption and more advanced degeneration.

We could, though, speak of a dialectic between the model of the originary scene and the model of the “second revelation,” that of the Big Man. Here we have a genuine dialectic that has always produced cultural novelties. Ancient Israelite monotheism—the name of God as the declarative sentence—is itself a product of this dialectic: a retrieval of the originary relation to a shared center on the terrain created by the ancient empires, heirs of the Big Men. Rather than a figurable center, like a sacrificial animal, a non-figurable God; rather than a sacred grounded in ritual specific to a closed community, a relation to the center any people could imitate; rather than a deity with whom to engage in imperative exchange, a God who commands reciprocity with our neighbor. But neither Israelite monotheism, nor its Christian and Islamic successors, rejects monarchy—rather, they seek to constrain and edify it. Nor do any of these faiths recommend a universally shared relation to the center that would override all hierarchical political institutions: the imperative to seek the peace of the kingdom where you live is always intact—and, of course, the Israelite God is Himself, paradoxically and scandalously, national as well as universal. As with any dialectic, new problems are generated out of the solutions of old ones.

Liberalism might be seen as an attempt to stall this dialectic by internalizing it within the economy, producing a pseudo-dialectic between expanded production and expanded consumption. This also cannot create anything new. But if we see the adherence to the model of the originary scene as itself a product of struggles between hierarchs seeking to efface their descent from the Big Man, we can set the dialectic in motion again. The logical endpoint of victimocratization would be the direct branding, as with sports stadiums, of groups demanding absolute, genuflecting respect from anyone marginally more normal than them by corporations defending their fiefdoms within the global distribution process. Facebook’s Women’s March; Amazon’s Black Lives Matter; Google’s Committee to End Transphobia, etc. The “antithesis” to this WokeCapital hearkening back to the emergent originary scene is, first, that the position of the hierarch is left unclaimed; and, second, that the originary scene as configured around a center has also been abandoned. Pretty much anyone who asserts the right to issue commands, and the grace to obey them, simply because there has to be a social center, is an avatar of autocracy, and heir to the Big Man, consciously or not. And virtually anyone who gathers others together to study some thing unresentfully, letting the object speak or, in Heideggerese, “be,” has created a direct line back to the originary scene. The “synthesis” comes when those forming disciplinary spaces turn their attention to the emergent autocrats, and those autocrats revise their command structures upon receiving feedback from the disciplinary spaces.

This “synthesis” can only take place in the middle, in the meeting of those upholding the normal and some “allotment,” and those marginal to the official disciplines. Together they will have to form a “spine” which can act once enough elites realize that their role is to govern in their own name rather than ginning up the mobs in whose name they can then claim to govern. But this involves keeping a kind of double dialectic at work. On the one hand, there is the dialectic between WokeCapital and the disciplinary/disciplined, as the latter learn from the negative example of the former how to disentangle the command structure from the demand for sparagmos now. On the other hand, there is an ongoing dialectic between the disciplined and disciplinary themselves, as the former imbibe modes of moral and ethical prescription from the latter, while the latter learn from the former to be more pragmatic and pedagogical, to be that hardest thing of all for thinkers—useful. The norm-setting distinction of the victim currently situated most antipodally to the normal can then be met by the re-marking of the normal as the vertex of convergent resentments.

March 5, 2019

Identities

Filed under: GA — adam @ 7:20 am

We are all products of the center; we all want to participate in the center. Any discussion of who any “I” or “we” is had better take that as its starting point. Any individual life can be traced from center to center: the parent(s) at the center of the family, the teacher at the center of the classroom, the principal at the center of the school, the cool kid at the center of the peer group, the boss at the center of the workplace, and many more. These are the centers from which imperatives are issued, and which impose a nomos on the scene: the “fair” or “just” division of goods, attention, sympathy, protection amongst siblings, classmates, co-workers. In the modern world, there are centers backing these with which we are in direct contact: corporations, media, the state. These larger centers support the local ones, or encourage us to resist them, or some complicated combination of both that individuals need to figure out. The local centers, meanwhile, may support or subvert each other—the cool kid implicitly or explicitly encourages us to defy our parents and teachers. And, of course, the power of the cool kid might be enforced by entertainment media while the authority of parents might be reinforced by the state. Whatever goes into making up individuals will be the “processing” of all these articulated centers in tension with each other within a more or less stable and dynamic structure of desires, resentments and imperatives. (I don’t deny the importance of biological and ultimately genetic make-up to the formation of individuals, but I don’t have anything to say about that and any genetic predispositions would still get “processed” through the structures outlined above.)

We will find that all of these local and intermediary centers are supported by the central authority—family, school, workplace, even informal groupings like clubs, leagues and associations are ultimately legitimated by the state. The most informal of these groupings, such as friendships and romances, are not, but are closely supervised by these other structures. One of the most powerful fantasies of the modern world is that of forming an intimate bond with another that is outside of, and transcends, all formal authorities—“us against the world.” But it’s a powerful fantasy because it is produced so often in art and entertainment, and it is part of the long term political process of demolishing intermediary authorities and leaving each individual face to face with the central authority. The production of this fantasy draws heavily upon the Christian iconography foundational to Western culture (and I wonder whether other cultures even have something like this): central to the “us against the world” narrative is the martyrdom of one or both of the couple, who somehow evoke the mimetic resentment of the authorities and those who accept them unquestioningly, and whose actual or social death reveals the violence behind the apparently placid normalcy of everyday life. I wonder if it would be possible to test the hypothesis that to be a fully participating member of Western culture, one must have experienced oneself as the victim within this narrative at some point in one’s life. Perhaps that is what makes one “interpellatable.”

One’s relation to the state could be seen as an articulation of one’s relations to actual, possible, and residual sovereignties. An Irish-American, for example, is first of all subject to the American state, while having more or less distant ancestors who were subject to Irish sovereignty or, more likely, Irish potential sovereignty in more or less open rebellion against the British. This residual allegiance might subside into irrelevance and be subsumed into a new mixture of lapsed allegiances; but it might also be leveraged against the American state or other groups (others with analogous more or less phantom allegiances). This play of identities we can also see as ultimately an effect of the degree of unity of the central authority: the more pluralistic the state, that is, the more it invites different elites to levy sections of the population to vie for control over an increasingly centralized state, the more sharply defined and reciprocally antagonistic (with various shifting alliances) these groups will be. But there’s no reason to assume that the absorption of all these residual and possible allegiances into a single homogeneous identity subordinate to the state is the privileged model, either—in fact, even the most fractious state will have to recur to that centralizing identity on occasion, making it simply part of a larger system of domination: a proxy of some kind. Where there are residual and possible allegiances (which exist even in non-immigrant societies, where nations are formed out of tribes or regions once subordinate to local kingdoms or aristocratic families), partial and local forms of responsibility can be delegated. Everyone should be grouped up, and groups should be allowed to exercise the executive and judicial powers needed to maintain themselves as such. What about individuals who want to escape their groups? Like quitting a job, you’d have to find another group willing to “adopt” outsiders, which they might have all kinds of reasons for doing. Think of the self-exiled black American artists who became, essentially, honorary Frenchmen and women over the course of the early to mid 20th century.

Liberalism abstracts, ruthlessly; counter-liberalism should concretize. The central authority wants all forms of authority to flow into its own, and that might involve inheriting residual and possible modes of authority borne by its people. A great deal of the ruler’s activity should involve issuing and supervising charters: for corporations, for townships, for local forms of authority, for associations of various kinds. If you want a recognized identity, apply to the central authority for a charter, or apply to subscribe to an authority that has already been chartered. The more generic, and potentially disruptive, identities, like those promoted by feminism, can be broken down and sorted out: women’s groups focused on the education of women, on moral improvement, on counseling wives, on the preservation of traditional ceremonies and customs, and so on. If a woman wants to experience womanhood in sisterly relations with other women, there can be plenty of opportunities for that in non-antagonistic forms. If an identity can’t really be chartered for some purpose the central authority can acknowledge, then it’s best to let it dissolve.

Through all of this, to be an individual is to be a morally responsible person. This involves, not imagining oneself outside of, and victimized by, “society,” but establishing practices that defer centralizing violence. What is important is the character of the violent intention one resists, not its target: seditious violence against the state can display the same mimetic contagion as the merciless bullying of an unpopular child, and both cases require an inspection of the authorities that have allowed the contagion to grow to the point where outside intervention has become necessary. But moral action can only be carried out through the identities. The media strategy of dispersal and incorporation involves providing models of victimary self-centering and transgressive charisma. You can put yourself on the market as a victim of the normal, or as a defender of such victims who exposes the oppressive underbelly of the normal. It’s a way of taking yourself hostage and demanding the ransom payment. This media strategy works because it managed to plug into the dominant, pre- and non-Christian, heroic narratives of mass culture, which always involve a single man or small group of men defeating many more or much more powerful enemies in defense of the victims of those evil enemies. Reprogramming that narrative is a simple matter because the vicarious pleasure taken by the viewer is too obvious and too obviously exploited and hence somewhat shameful when exposed explicitly—so, the pleasure can only be preserved if the evil enemy is turned into some “exemplary deviation” from the cultural source of the heroic narrative itself. So, Captain America has to fight, not the Nazis, but the Nazi within all of us, embedded in what we take to be normal. His charisma becomes transgressive; but, as I said, this is not so difficult to accomplish, because constructing perfectly evil villains already elicits a kind of guilty transgressive pleasure—unrestrained violence is allowed where it normally wouldn’t be.

Centralizing violence, then, is primarily directed against the normal, or what we could call the normalest normal, the exemplary normal. The normal that’s so normal it has no idea how oppressive it really is. Obviously, in today’s culture this means white+male+Christian+straight+conservative+middle class. Moral action then needs to deflect this centralizing violence from the normal, but this is no easy matter—defending some ordinary guy against a virulent hate campaign because he said something currently deemed racist or sexist invites comparisons between his suffering and all the suffering that has been experienced historically, even if not in this particular case, by those who might, following some familiar if far-fetched chain of consequences, be possibly victimized through the racist or sexist statement. And there’s no transhistorical frame for determining the right terms of comparison. How do you weigh the humiliation and economic deprivation experienced by some middle class white guy against hundreds of years of violence done to black bodies, etc.? To defend someone is to enter the legalistic game of attack and defend, and even if you can occasionally manage to turn the tables, the prosecutorial initiative always lies with the defenders of the marked on the market against the unmarked.

The normal is the unmarked, and the postmodern critique that norms produce their own deviations is self-evidently true. The lives saved and improved, the cultural “equipment” made possible, because of the restraints placed on desires and resentments so as to reinforce the most local centers, are all invisible; those chafing under those restraints, unable to comply with them through, arguably, no fault of their own, are highly visible. The long term horizon of liberalism is that we will all be unmarked; until then we must keep up the war against the unmarked, who by definition, “structurally,” mark the others. If we are to get to the condition of universal unmarkedness, then, that means the most marked of today (the transgendered handicapped Somali refugee…) will someday become the norm. But does it not follow, then, that at the origin of any norm is the most marked? There is nothing more marked than inhabiting the name others ostensively designate you with because that’s who you in fact have turned out to have been. To be marked is to perform the paradox of self-reference—to be both liberated and constrained by the name. Everyone’s mimetic rivalry circles around this marked one, and mimetic violence is always just below the threshold of convergence upon him while he manages to expose the potential violence, make a nomos out of it, and recruit everyone to defer early on future signs of such violence. This is where a new norm comes from.

Moral action, then, entails performing the hypothetical origin of the norm. This involves opening a disciplinary space within the disciplines—it is the disciplines that control the system of naming. The disciplines can say, “X is Y,” or someone characterized by this feature is going, according to some probability we are competent to establish, to have this other feature. Go ahead and treat X as if he has this other feature, then—the burden of proof is on him. This organization of reality is inevitable, and only immoral if a space is not left open for that burden of proof to be met. Moral action is meeting that burden of proof while imposing a like burden on the disciplinary agents who establish it—what, exactly, do sociologists, psychologists, economists, etc., and the activists mimicking them at a distance, “tend” to do? The terms establishing burdens of proof all come from the nominalizations resulting from the supplementations of literacy, upon which the disciplines are founded. A word like “legitimacy” will have been derived from precise rituals and ceremonies that would have once served to mark one as institutionally recognized; now, it’s an abstract concept manipulated by those in the disciplines taking sides in power struggles. In that case, there’s a kind of moral “arbitrage” that can be enacted by referring the competing nominalizations in any confrontation back to these power struggles. Attaching various “qualities” (the “Ys” mentioned above) to, say, “white males,” indicates some power differential—the “accuser” thinks this will be effective in some way. What power does it enact? Well, “history,” or “equality,” or “morality”—OK, but name some people, institutions, powerful figures embodying this power. Whom are they contending with, and for which discernible stakes? What will the victor be able to determine? Sure, in placing a burden of proof back on you (“people who believe in ‘racism’ are…”), I’m also hoping to leverage one power against another. In that case, no one is unmarked. That must mean we all want everyone to be marked in such a way as to defer, rather than incite, centralizing violence against them. The power struggles circulating through us make that impossible—each power can contend against the other only by means of incitement. The most moral thing to do then—to sound Kantian—is to act as if my act will increase the likelihood of an orderly arrangement of power that will mark (“(re)deem”) everyone accordingly—even though I can’t know in advance where I might fall within that order (a little bit of Rawls there as well). I’m a sign of disorder if that prospect repels you (and you need your dose of centralizing violence), and of order if you can imagine a complementary relinquishment on your part. In that way—to sound Nietzschean—we forge new norms. We return the disciplinary nominalizations back into acts conferring faith, trust and loyalty. The markings of racist/sexist/homophobic/transphobic/… are converted into notations on the accomplishments and responsibilities those charges aim at dispersing.

February 26, 2019

Relevance

Filed under: GA — adam @ 7:24 am

It’s common to hear some event or discussion denounced as a “distraction.” A distraction, presumably, from what is really important. A distinction between what is more and what is less important is essentially a distinction between what is more real and what is less real. What is more real, it will always turn out, is what better fits the model of reality you presuppose—and wish to impose on others. So, people pay more attention to some lurid scandal manufactured by media outlets than to the latest study showing a decline in the wealth of the middle class. Clearly, the latter is more real, more important, because it is a sign of other things that are real and important: a decline in consumption, leading to a recession; growing dysfunction among members of the affected group, leading in turn to growing dropout, drug addiction and crime rates, with the potential for a riskier and less stable society; the possible emergence of new political forces trying to represent the dispossessed, with the possibility of upsetting the existing establishment, and so on. Meanwhile, what follows from the scandal? Nothing real—one corrupt politician gets replaced by another, maybe a new rule, soon to be forgotten, gets imposed—no one will remember it a few years down the road.

But in attributing such a higher degree of reality to certain processes, a further assumption is made: that those who are enjoined to pay attention to those processes in proportion to their reality can also affect the event, or its subsequent consequences, in proportion to the attention paid to it. Why criticize or ridicule others for being “distracted” or “distracting” if distributing their attention in a more appropriate way is not going to pay off in commensurate power over what they pay attention to? Otherwise, why not just pay attention to local, everyday, “petty” events and issues that one might be able to influence; or, to what one finds amusing or exciting? The one criticizing the distraction and the distracted, then, is the one out of touch with reality: more people paying attention to the latest economic developments does not add up to more people having intelligent, informed discussions about those developments, which would not, anyway, in turn, lead to a shift in the commitments of policymakers, such that they would now start formulating and implementing policy in accord with the presumably coherent and essentially unanimous conclusions drawn by those intelligent and informed discussions. The pathways from events, to reporting of those events, to taking in that reporting, to public opinion, to official political responses to public opinion are all cut in a manner unrecognizable to one who takes the model of the public-spirited, informed citizen seriously.

Well, then, how should one organize one’s attention? In such a way as to find vehicles for thinking through the anomalies and paradoxes that most forcefully present themselves to you. If there really are intrinsically more important or more real realities, that’s the way you’re going to find them anyway. This means we’re always working with what’s “at hand”—even when we want to be important and talk about important things we end up carving our own little niche within them, like arguing some technical point that hardly anyone else considers important. The desire to pitch one’s tent at the realest of the realities is the desire to have a commanding metalanguage that enables you to give orders, at least in your own mind, or the space you share with others, to those who actually command, who occupy centers. When a pundit or resentful intellectual says that some politician did this or that in order to distract us from what he’s really doing, resentment is expressed in a satisfying way insofar as one is superior to those so easily distracted and to the politician who thinks he can hoodwink you. You can construct a pleasing image of the political leader who will come along and carry out your instructions to the letter.

A better criterion for determining relevance and reality is to employ as much as possible of the signifying means available on the scene where you find yourself. You’re on a scene—you’re thinking about things, which is to say rehearsing potential future scenes; you’re observing something; you’re speaking with people, even if mediated through a screen. The scene has props and supports; it has a history. The participants have entered this scene from other scenes. All of this leaves traces on their posture, gestures, tone, words on the scene. All of it can be elicited. How much, and what, exactly, of it, should be elicited? Well, this is at least a much better way of posing the question of relevance than looking for an objective hierarchy of importance. Elicit whatever can be turned into a sign of the center of the scene. Any scene falls prey to mimetic rivalry: one actor tries to one-up or indebt the other, maybe even without realizing it. Everyone involved wants to be at the center, which might very well mean subverting another’s bid for centrality. It certainly means evincing resentment towards whatever keeps us all on the scene in the first place—even if, in fact, we’re all there to see that person, her attempt to usurp others’ attempts to be at least the center of their own site of observation is a form of resentment. And, of course, pointing these things out on the spot leaves one, justifiably, in fact, vulnerable to charges of deploying an escalatory form of resentment oneself.

Any sign of resentment toward the center is also a sign of genuflection before it. You can always show another their resentment by simultaneously showing their worship of what they resent. And of whatever it is that counts as a center upon the scene. The resented/worshiped figure, itself, points to some other center: whatever we deem to be “in” or “of” the resented object is also elsewhere, in whatever allows that object to carry on in such an offensive way. If your argument with someone escalates, it gets to the point where it becomes “excessive”—but what does that mean? Excessive according to what measure? Well, the argument started with an “issue,” but the stakes have now been raised to the point where the “issue” has become secondary—the confrontation becomes the thing itself. Like it or not, your resentment toward the other is a form of worship: you devote attention to him, and attribute to him power over your own actions (he’s making you angry). But this means that the original “issue” hasn’t been left behind—it turns out that that issue was a mere proxy for this new one, this new form of devotion. And who’s to say it’s less relevant? But what form of worship will this turn out to be? If he kicks your ass, it ends anti-climactically, and you return to your own group in shame. If you kick his, well, maybe it’s the same, because it turns out he wasn’t a worthy adversary, which is also a bit shameful and not very “relevant”; but if you return in triumph, you install him as a kind of permanent deity, whose prowess proves your own. You construct an idol, and will require ritual repetitions of the same battle.

But it’s also possible that the two warriors will discover that they worship, not the other, but something that is neither of them. Whatever allows them to make peace with honor intact, whatever they can swear by together—that is what they worship. But now every sign put forth by the other—every sign of fear overcome by courage, all evidence of training, sacrifice, self-denial, skill—that one can emulate, that one has put forth oneself, is a sign of devotion to that center. All the words that the two will henceforth speak to each other, and that others, telling their story, will speak of them, testify to that center. If one gets unreasonably angry with the other they can both laugh, because that resurgent resentment recalls the scene upon which its predecessor was transcended, and therefore becomes a sign of that transcendence. The show of resentment is just a demonstration of the gift of vigor given by the center.

This brings us to critical ontological and epistemological questions. We’ve already dealt with the question of “reality,” that is, whatever is inexhaustibly signifying. It’s also a question of truth, which, in social and cultural terms, can only mean the eliciting of signs of one another’s relation to the center. One central principle of modernist art is that aesthetic value lies not in what a given work represents (ideas, a social reality, etc.) but in the extent to which it makes full use of its materials—colors and shapes on a flat surface, words on a page, and so on. Modern art and its theoretical defenders were right to defend art against its social utility, which in practice means kitsch, but were mistaken in thinking that rigorous artistic practices meant eliciting desires concealed or suppressed by the civilized social order. The materials of art are the materials of other areas of life, which also use colors, shapes, surfaces, words, sounds, etc. The vocation of art is to retrieve those materials from the disciplines, which use them to establish the hierarchies of relevance through which they hope to subordinate those who occupy the center. To some extent this always means the disciplinary establishments of the arts themselves.

Whatever is presented as relevant in itself is to be presented anew as a product of a scene. This includes all the aesthetic materials that, in a disimperativized declarative, disciplined order, are set up for purposes of control—for the anticipatory capture and sequestering of resentments generated by the carousel of rotating power itself. The more you can event-ize and scenicize the conceptual hierarchies streaming toward you, the more reality, and the closer to truth, you are getting. These conceptual hierarchies always stream toward you through other people, people mediated by scenes and media. The conceptual hierarchies, then, need to be performed along with them—one needs to help elicit from them this performance, and help them elicit it from oneself. When the conceptual hierarchies dissolve, the real hierarchies that don’t need their support become more visible. The concepts can then be put to use discerning what the real hierarchies demand of us.

Here’s another way to think about it. Our lives are increasingly run by algorithms, which are really just a technological extension of the desire to predict what others will do. If I’m in a difficult situation, and I can predict what others will do for the next 5 minutes, I might get out of it; if I have machines that can predict what pretty much everyone will do over the course of their entire lives, I can dominate them fairly easily. Two things are necessary to build such machines: first, humans must be analyzed, broken down, into parts (fears and desires, primarily) that make them predictable; second, a social world is built that constantly elicits those anthropological mechanisms. It’s a bit too science fictiony to say we currently live in such a world, but it’s obvious that we are governed by those who are trying very hard to construct something along these lines. If you want to approach this in libertarian terms, you could say freedom depends upon being anti-algorithmic; in autocratic terms, you could say that clarity in the command structure requires it. The ideal of the algorithm is to separate the declarative order from the ostensive and imperative worlds once and for all. In the perfected algorithmic order no one need ever command because everyone would always already be guided spontaneously upon the path that maximizes frictionless coordination.
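To make those two requirements a bit more concrete, here is a minimal, purely illustrative sketch in Python; every name, feature and weight in it is invented for the example, not a claim about any actual system. It shows a person reduced to a few “parts” (fears and desires), a designed situation that triggers those parts, and a score predicting compliance.

# Minimal, purely illustrative sketch of the two components described above:
# (1) a person broken down into parts (fears and desires) that make them predictable,
# (2) a designed situation that elicits those parts.
# All names, features, and weights are hypothetical.

from dataclasses import dataclass

@dataclass
class Profile:
    fears: dict[str, float]    # e.g. {"exclusion": 0.8}
    desires: dict[str, float]  # e.g. {"status": 0.6}

@dataclass
class Situation:
    # how strongly the designed scene appeals to each fear or desire
    fear_triggers: dict[str, float]
    desire_triggers: dict[str, float]

def compliance_score(p: Profile, s: Situation) -> float:
    """Predicted likelihood of the 'expected' behavior: the person's parts
    multiplied against the situation's triggers, squashed into (0, 1)."""
    raw = sum(p.fears.get(k, 0.0) * v for k, v in s.fear_triggers.items())
    raw += sum(p.desires.get(k, 0.0) * v for k, v in s.desire_triggers.items())
    return raw / (1.0 + raw)  # simple squashing, not a real model

profile = Profile(fears={"exclusion": 0.8}, desires={"status": 0.6})
scene = Situation(fear_triggers={"exclusion": 0.9}, desire_triggers={"status": 0.5})
print(compliance_score(profile, scene))  # higher score = more predictable, more compliant

The point of the sketch is only that once the parts and the triggers have been enumerated, prediction becomes bookkeeping; everything interesting happens in the reduction of the person and the staging of the scene.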

It’s pointless to ask, well, what would be so terrible about that (even if we could answer the question), because the ideal of the algorithmic order is really the opposite of what it appears to be. It’s just a war machine. It grinds you up to generate the inputs it needs. The victimary left thinks it opposes the algorithmic order because it reproduces the hierarchies resulting from behavioral differences—but the left just wants to control the machine. Which just proves that human decisions are necessarily made to determine what counts as “inputs.” So, the left can’t have a counter-algorithmic program. Countering the algorithm would involve asking, what would be predicted of me now? And then confounding the prediction. I don’t mean that, if your “profile” suggests that you will behave compliantly in a given situation you should instead kill a bunch of people. Indeed, a slightly modified algorithm could predict that. It means looking at the markers of compliance, as many of them as one can in a given scene, and delineating their imperative structure. We’re following orders here—we can all see this, right (look at what that guy just did… why do you think this space is arranged in just this way?… did you notice how that gesture made her nervous?…)? The algorithm can’t account for an ongoing exposure of the terms of obedience. There’s no telling where it will lead—not necessarily to disobedience; maybe to subtle shifts in obedience that might eventually add up to decisive ones. The algorithm can’t account for someone seeking out an other worthy of being obeyed, or trying to become worthy of being obeyed oneself. The algorithm can’t account for the irreducible determination of relevance, of centrality, on the scene. It can’t account for the reading and writing, literal and figurative, of all the signs of the sign as signs of centrality and marginality—and therefore of relevance.

February 19, 2019

Can Networks Crowd Out Markets?

Filed under: GA — adam @ 7:24 am

When I go to the store to buy a loaf of bread, I have to pay the supermarket because I am not performing any equivalent service for them, or because we are not part of a community of the sort David Graeber calls “communism,” in which it is a matter of course that each takes what he needs and contributes what he can. So, how does the supermarket know how much to charge me? They buy the bread from a baker, and they have to charge enough beyond what they pay the baker to cover the costs of the building, the store technology, salaries and wages, etc., and still make a profit. The baker, meanwhile, needs ingredients, equipment, a building, employees, etc., and those costs determine how much he has to charge for the bread. And likewise for those who sell the baker the ingredients, etc. If we start at the other end, the supermarket charges what consumers will pay, which comes down to how much consumers prefer buying bread here as opposed to some other place, how much they prefer bread to possible substitutes (e.g., fajita wraps), how willing they are to forego bread compared to other types of food if they have less money to spend, etc. Still, at any point in time, there must be some minimum cost of making bread, and if it isn’t sold at that cost, bakers will simply cease to exist.

The standard argument for markets against planning is that no one can know how many people, over, say, the next month, will want how many loaves of bread, and, then, what the operating costs of the supermarket, the ingredients and equipment of the baker, the machines that make the equipment for the baker, the machines that make those machines, etc., will come to over the next month. The only way to find out is to let supermarkets sell as much bread as they can, then have the bakers buy the ingredients they need to meet the demands of the supermarket, and so on, until we end up oscillating around an average amount of bread sold per month—sometimes the store will be out for a little while, sometimes it will have to throw away some extra, but that then gets worked into the cost. And, of course, with some products (probably not bread, though), the fluctuations can get really dramatic, to the point of forcing retailers and manufacturers out of business.
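A toy simulation, with invented numbers, of the discovery process just described: the store stocks according to recent sales, demand fluctuates unpredictably, and the system settles into oscillation around an average, with the occasional shortfall or waste that gets worked into the cost.

# Toy simulation (all numbers invented) of discovery through selling:
# the store stocks based on recent sales, demand fluctuates, and sales
# oscillate around an average, with occasional waste or shortfall.
import random

random.seed(0)
stock_plan = 100            # loaves the store orders each day
sales_history = []

for day in range(30):
    demand = random.randint(80, 120)      # unknowable in advance
    sold = min(demand, stock_plan)
    wasted = max(stock_plan - demand, 0)  # thrown away, worked into the price
    shortfall = max(demand - stock_plan, 0)
    sales_history.append(sold)
    # naive adjustment: order roughly what the last week sold, on average
    recent = sales_history[-7:]
    stock_plan = round(sum(recent) / len(recent))

print("average daily sales:", sum(sales_history) / len(sales_history))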

But, what if this all worked on something like a subscriber system? I, along with many others, sign up for a certain amount of bread (or a “basket” of goods, within a prescribed range of variation); the supermarket, along with other shops, subscribes with a “co-op” of bakers, who in turn subscribe with producers of wheat and ovens, and so on, who in turn subscribe with those they need to procure materials from? This could only work, or—not to get ahead of ourselves—be imagined to be possible, if everyone joined various co-ops and virtually all economic activity took place through them. Those who make the machines to make the ovens sold to the bakers must ultimately, through a vast network of subscribers, subscribe with those who employ the ones buying the bread. Obviously, I’m trying to imagine an advanced economy without prices and therefore without money, but also without central planning, and if that’s utopian, so be it, but I at least want to think it through in absolutist terms, and maybe someone else could incorporate these reflections into more sustainable ones. The question is whether the more advanced form of coordination that the elimination of liberalism and democracy (which is to say, of “politics”) might make possible would in turn make possible cooperation through a vast network of agreements between directors of enterprises.
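One way to picture such a subscriber network, with entirely made-up names and no pretense of completeness: each node subscribes upstream for what it needs, and a commitment can be traced through the whole chain without any prices appearing.

# Minimal picture of the subscription network sketched above (all names are
# made up): each node subscribes upstream for what it needs, and a commitment
# propagates through the chain without prices or money.

subscriptions = {
    # subscriber     -> {supplier: what is subscribed for}
    "households":     {"supermarket": "weekly basket of bread"},
    "supermarket":    {"bakers_coop": "daily loaves"},
    "bakers_coop":    {"wheat_coop": "flour", "oven_makers": "ovens and parts"},
    "oven_makers":    {"machine_shop": "replacement tooling"},
}

def supply_chain(node: str, seen=None) -> list[str]:
    """Trace every upstream supplier a node depends on, directly or indirectly."""
    seen = seen or []
    for supplier in subscriptions.get(node, {}):
        if supplier not in seen:
            seen.append(supplier)
            supply_chain(supplier, seen)
    return seen

print(supply_chain("households"))
# ['supermarket', 'bakers_coop', 'wheat_coop', 'oven_makers', 'machine_shop']

Tracing the chain outward from the households makes visible the same web of dependencies that, in the monetary version, prices are supposed to summarize.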

It is already the case that much, if not most, economic activity is organized through networking: if you’re starting a new company, you look for customers and suppliers through established channels, including friendships and other informal associations; also, you will prefer to work with more stable companies with good reputations, in communities with reasonably favorable and predictable government, etc. You will prefer customers and suppliers you expect to be around for years over those who might leave or go under next month. None of this is directly priced. But all this means that the most basic value is an institution in which the high, the middle, and the low, cooperate rather than struggle against each other, in an area where other institutions are not trying to subvert the ones you want to work with. Companies that are well established and well run, and that value their reputations, will, of course, charge one another for goods and services, but they might also be more open to various forms of cooperation and exchange where payment can be waived or indefinitely deferred. With enough companies like this, you have your network of subscribers, and below a top-tier of networks, you could have lower tiers that would be more unstable and might need further supervision, or might even rely on money, but the damage done by their failures might also be contained.

Two big problems arise (at least that I can think of now). First, wouldn’t this be a very static system? How would innovation be possible—how could anyone break into the existing networks, and why should the top-tier companies feel any need to improve their products and services? Second, within this vast network of agreements that all depend on each other within an enormously complex system (the baker can promise x amount of loaves because the farmer has promised y amount of wheat, which he can do because the manufacturer has promised z amount of replacement tractor parts, with everyone bought in all the way across the social order), how can self-interested defections (cheating, strikes, adulterations of materials, side deals, black markets, etc.) be prevented? The eligibility of subscribers would have to be regularly assessed. We must assume the highest value of all is a unified and coherent command structure stemming from a central authority and reaching into all institutions. Individuals reporting directly to authorities would be in each company, and everyone in each company would be expected to consider himself a “delegate” of the government. There would be social pressures from other subscribers, who would stand to be disqualified if some of their members were found to be defectors. (If a particular subscriber can no longer fulfill its responsibilities, the central authority’s responsibility is to replace or reconstruct its governing structure.) We really have to assume the individualist, utilitarian ethics and morality of liberal society can be eliminated, and people can think of themselves as directly social. While it doesn’t quite prove anything, doesn’t the fact that so many millions of people can be recruited into an increasingly apocalyptic leftism that in many cases at least cuts against individual economic self-interests suggest that for many, if not most, utilitarian ethics are extremely unsatisfying? Doesn’t the fact that so many wish to resist the new order, without any guarantee that it will benefit them materially, indicate the same?

Innovation is really the bigger problem, since any innovation would have to, it seems, disrupt an extremely complexly integrated set of networks. How would R&D be conducted, and by whom? Here, I would follow up on some of my earlier “social market” posts and emphasize the centrality of the state to economic activity. This has always been the case, and is certainly so today—research is overwhelmingly shaped by state investment and subsidies, starting with the state’s monopoly on military equipment (which, of course, involves myriad spin-offs), but including quite a bit of investment in medicine, environmental technology, computing, communications, infrastructure, space exploration and so on. We can now add to that social network and surveillance technology. The state, then, perhaps in the form of its various agencies, would be a major subscriber within the networks. The state would be a major “consumer” of technological innovations, and the co-ops that want to subscribe to certain forms of technological innovation could do so. (What does subscribing to the state entail? What does the state provide in return? Land, corporate and monopoly rights, airwaves, electronic networks, etc.) The co-op would then be able to provide a better product to its subscribers, and could solicit more subscribers as a result; as a subscriber to its suppliers, meanwhile, that co-op could be more demanding—its greater influence would enable it to get its employees into better co-ops or subscription lists.

I know it sounds crazy to speak of an advanced, civilized social order without money, in which everyone asks another for what they need, and in turn gives others what they ask for. Maybe it is crazy, but I think it’s worth the speculation if we are to think beyond liberal fetishizations of the market. Almost everyone will concede that markets require some state support and regulation, but such concessions almost always assume that such support and regulation is the “lesser evil,” and so encourage us to constantly chafe against it, and assume it could always be reduced. Nationalist economics imagine a more positive role for the state, but that still involves intervening in already existing markets in very targeted ways—the basic liberal anthropology is not challenged. “Big government” left-liberal political economy, meanwhile, always presupposes an adversarial relationship between agents in the private economy—all the state does is take sides in that struggle, or sometimes act as a referee. But if we are genuinely to see the central authority as the source of social organization, as, essentially, the owner of the entire territory over which it rules, with subordinate agencies having delegated quasi-managerial powers over the “productive forces,” then we should try to formulate that relationship prior to any mediation, like the various departments of a corporation. A corporation has external constraints, of course, but, then again, so does a government: it must show itself a respected and responsible actor on the international scene, whatever its place among an international hierarchy of states. But more important than all this is how to think about the creation, or re-creation, of an ethics, morality and aesthetics that transcends liberal ontology and anthropology.

I think the conventional view that sees pre-modern peoples as more “spiritual” and less “selfish” than moderns has it completely wrong. With all of the adherence to ritual and belief in supernatural agencies, pre-modern peoples are driven by the most material interests—fear and need. If one sacrifices regularly to the gods and is careful not to violate any ritual prescriptions, one will be provided for—one will have victory against enemies, a good hunt, rain for the crops. In a post-sacrificial order, there are no more exchanges with the gods, or even God: what God has given us is everything and incommensurable with any return; what each of us gives to God is also everything, all of us, even in the knowledge of its utter inadequacy. This desire is no less powerful in the atheist. The post-sacrificial epoch would better (and more positively) be called the epoch of the absolute imperative, a concept I take from Philip Rieff. The absolute imperative is absolute because no imperative can be issued in return by the commanded (no “God get me out of this and I’ll never…”). The absolute imperative is to stand in the place of whomever is violently centralized, i.e., scapegoated. The absolute imperative has its corollaries, enjoining us to construct and preserve justice systems that place accused individuals at the center in a way that defers, delays and ultimately transforms the scapegoating compulsion, or to represent actions and uses of language that reveal the scapegoating compulsion in less than obvious places. Obviously, we (at least in the West) “hear” the absolute imperative because of the Judaic and Christian revelations, but it can certainly be made “audible” in other ways. But all the radicalisms and “holiness spirals” of the modern world, however “puppetized” and “proxified,” are set in motion by an attempt to obey the absolute imperative. Despite the best and worst intentions of economistic thinkers (who are really obeying the absolute imperative in their own way), human beings will not be satisfied with an affluent society, not even if we make it a little bit more affluent. Or, at least, enough humans will not be so satisfied as to allow affluenza to settle in, undisturbed, once and for all.

What, then, are the economic consequences of the absolute imperative? Eric Gans—while not speaking of an “absolute imperative”—sees the economic consequences of the Christian revelation to be, precisely, the free market, where exchanges are voluntary and non-violent and the natural world can be exploited in increasingly productive ways. This may be part of it: exchanges mediated through money are a huge moral advance over such economic practices as pillage and slavery. But if the most powerful players on the market simultaneously centralize and destabilize central authority, the ethical and moral advantages of both the market and central authority are compromised beyond repair. The two must be kept separate: money must be kept out of politics, but once the money is out, what is left of the politics? What, indeed, is left of the money? If money is first created by central authorities in order to enable individuals to purchase their own animals for sacrifice, then it is, from the beginning, a consequence of the derogation of authority over a shared sacrificial scene. The same is the case when money and markets are created by the imperial state to provide for soldiers in conquered territories—here, as well, money is a marker of the limits of authority, which means what money really measures more than anything else is the degree of pulverization of central authority. A secured central authority would, then, have to contain the market within its embedding, enabling, moral and ethical (disciplinary) boundaries. The use of money as an abstract sign of the goods, services, and capacities to be commanded by its possessor would necessarily dwindle: the uses of money would be qualified in many ways. How much could that use dwindle? If it dwindled to nothing, wouldn’t that mean that economic activity has been wholly re-embedded in a thoroughly articulated self-referential social order devoted to ensuring the institutionalization of the absolute imperative? That, at any rate, is the thinking behind the thought experiment attempted here.

February 12, 2019

Paradox, Discipline, Imperative

Filed under: GA — adam @ 6:12 pm

If the signifying paradox is constitutive of the human, then humanistic inquiry, or the human sciences, really involves nothing more than exposing and exemplifying that paradox in forms where it had previously been invisible. The paradox here is that we know what we’re going to find, but we’re going to find it, if we’re searching properly, precisely where we assumed our search for it was paradox free. I’ve been hypothesizing that what constitutes the post-sacrificial disciplines has been the concealment of the scene of writing (and subsequent media) upon which those disciplines depend. Drawing upon David Olson’s discussion of “classical prose,” in which he shows that writing historically took the form of a supplementation of the speech act represented in writing, I’ve been arguing that this supplementation occludes the scene of writing itself. What the scene of writing reveals is that words (and ultimately all other signs) can be separated from their scene of utterance and the intentions of those on that scene and iterated on further scenes and taken up by other intentions. As Derrida claimed, writing reveals that what is truly originary in the sign is its iterability, not its meaning or the intention behind it; we can take the next step and say that its iterability, which guarantees the possibility of future human scenes, is its meaning, and is the intentionality of anyone issuing a sign. So, the meaning of the word “dog” is something like “I reference, with varying degrees of directness, all previous uses of the word ‘dog’ in order to enable a potential ostensive that will enhance scene construction in more or less vaguely conceived future instances of emergent mimetic conflict.”

The disciplines, starting with the mother of them all, philosophy, want to abolish paradox. An acceptance of paradoxicality would situate the disciplines as supplemental to the paradox of imperatives issued by the center: the narrower and more precise the imperative, the more all of its intended subjects must make themselves ready and worthy of obeying it in unanticipated settings. Inquiring into this paradox would be all the human sciences we ever need, but in this case the disciplines would have to “abdicate” their self-appointment as those who provide the criteria upon which we judge the legitimacy of the sovereign. Is the sovereign doing “justice,” is he protecting and respecting the “rights” of his subjects, is he meeting their “needs,” adhering to “international law,” enforcing the “law,” ensuring “prosperity,” “wealth creation,” “growth,” etc.? Has he been selected and does he rule according to procedures in a way satisfactory to all those who have themselves been appointed by certain procedures; all of which procedures merely lead us back to the establishment of those procedures according to other procedures, which…? If not, then he’s not the “real” sovereign, and in order to know whether he is or not you have to be a political scientist, a legal theorist, an economist, a sociologist, etc. To maintain that position, you must suppress the paradoxicality of your own utterances. You must provide certain, clear, unequivocal declaratives yielding universally available virtual ostensives that lead to only one conclusion regarding whether the central authority is rightly distributing whatever it is your science assumes he must be distributing.

The human sciences claim they conduct inquiries modeled on the experimental sciences, with their process of hypothesis generation and testing, but they really don’t. (Do the physical sciences? Should the physical sciences? I leave these questions aside for now.) I worked my way to this realization through reflection upon my own little field, the teaching of writing. I came to see that all the criteria used to determine whether student writing was “good” or “improving” were circular—terms like “clarity,” “precision,” “deep analysis,” “reading comprehension” really don’t mean anything, because what it means to be clear, precise and all the rest depends upon the situation, i.e., the discipline. The assumption is that the instructor him or herself knows what clear, precise, analytical, etc., reading and writing are because, otherwise, what would he or she be doing teaching writing at an accredited institution? But that means that all of these supposed concepts really translate into the teacher saying “become more like me.” And how can the student tell what the teacher is “like” (since the condition of the student is defined precisely by being unlike the teacher)? Well, I’ll tell you when you are or aren’t. So, for most writing or English teachers out there, this is why your students always ask you “what you want”—they have intuited that the entire structure of your pedagogy is predicated upon you desiring from them a reasonable facsimile, not of who you really are (that would be hard enough) but of who you imagine yourself to be.

From this I concluded that what is to be rejected in this conception of teaching and learning is that its “standards” do not provide “actionable” imperatives. No one can obey the imperative “write more clearly,” unless there is already a shared understanding of what “clarity” entails for the purposes of that communication. And, again, in the educational setting, such imperatives are issued precisely because the student doesn’t have access to such a shared understanding. So, I concluded that the only kind of “fair” and effective pedagogy is one that provides students with imperatives such that they can participate in creating the shared understanding that makes it possible to determine when those imperatives have been obeyed. This generally involves something like “translate (or revise) X according to rule Y,” i.e., some operation upon language from within language. I don’t want to go any further here (but if anyone is interested… https://wac.colostate.edu/docs/double-helix/v6/katz.pdf), but the point here is that the conclusion applies to all the human sciences (which are all, really, if unknowingly, pedagogical). That is, a genuine human science would have to participate in its “object” of study, producing imperatives aimed at improving the social practices it studies, along with generating the shared criteria enabling the practitioners to assess the way and degree to which the imperatives have been fulfilled. (Of course, political scientists, sociologists, economists and the rest make suggestions to policy makers all the time—indeed, they are routinely hired and subsidized for this very purpose. But the results of these suggestions and proposals can only be assessed in the language and means of measurement of the disciplines themselves—they therefore represent different ways of imposing a power alien to the practice in question. They are attempts to give imperatives to, rather than receive imperatives from, the central authority.)

The next question, then, is how do paradoxes generate actionable imperatives? To get to paradoxes generating imperatives, we can start with the imperative to generate paradoxes. Find the point at which the relation between the name, concept, or title and what it names becomes undecidable—that is, where it is impossible to tell whether some thing is being represented or some representation is producing a thing. This undecidability pervades language in ways we usually ignore—has it ever seemed to you that someone had the “right” name (their given name, not a nickname)? It’s absurd, of course, but, on the other hand, one’s name can correspond more or less closely to one’s being, can’t it? The argument over whether words represent their meanings “arbitrarily” or through their “sound shape” as well goes back to Plato’s Cratylus, and is not settled yet—whatever the truth, the fact that it’s a question, that words sometimes seem to match, in sound, their meanings, is an effect of the originary paradox.

This paradox of reference will emerge most insistently in the anomalies generated by disciplines at a certain point in their development, but can be located at any time. What is “capital,” what is the “state,” what is “cognition,” what is “identity”? If you ask, you will be given definitions, which in turn rely upon examples, which in turn have become examples because that term was used to refer to them. This is the kind of deconstructive work that opens up the question of the relation between a discipline and the intellectual traditions it draws upon and conceals. Within that loop of concept-definition-examples-concept is the founder of the discipline and the containment of some disciplinary space. A new imperative, or chain of imperatives, from the center is identified and represented as a new imperative the sovereign is now to follow—he is to create a new social order freeing capital or making the state independent, unleashing new cognitive capacities, representing pre-formed identities.

Articulating these paradoxes, then, presumably helps us generate concepts other than “capital,” “state,” “cognition” and “identity.” Let’s review the process of discipline formation on the model of Olson’s study of literacy and classical prose. Writing represents reported speech, but since it does so in abstraction from the speech situation it must supplement those elements of the speech situation it can’t represent: tone, gesture, the broader interaction between figures on the scene. This generates new mental verbs: suggest, imply, insist, assume, and so on. These mental verbs are in turn nominalized into suggestions, implications, assumptions and so on (it doesn’t happen with all words—there seems to be no corresponding nominalization of “insist,” at least in English). These nominalizations become new “objects” of study, for linguists, psychologists and ultimately all the human scientists. These concepts are artifacts of literacy—this doesn’t mean that they can’t tell us something about processes of thinking, knowing and speaking, but it does mean that they conceal their origins and become naturalized as “features” of “mind” or “language.” Cognitive psychologists, for example, can set up ingenious experiments that test the role of, say, “prior assumptions” in decision making, but built into these studies is the literate, declarative “assumption” that it would be better if decisions were made purely through abstract ratiocination without reliance on “prior assumptions.” So, the use of power to favor what cognitive psychologists and like-minded human scientists across the disciplines would recognize as “rational discourse” is implicitly preferred over any attempt to, say, think through what a “good” shared set of “prior assumptions” might be.
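
By way of a purely illustrative sketch, and assuming nothing beyond a small, hypothetical list of mental verbs and a made-up sentence (none of this is drawn from Olson), the pairing of verbs with the nominalizations they harden into, and the flagging of those nominalizations so they can be read back as verbs, might look something like this:

```python
import re

# Hypothetical pairs: mental verb -> the nominalization a discipline then
# treats as an "object" of study. The list is illustrative, not exhaustive.
NOMINALIZATIONS = {
    "suggest": "suggestion",
    "imply": "implication",
    "assume": "assumption",
    "infer": "inference",
}

# Reverse map: nominalization -> the verb it abstracts away from.
DE_NOMINALIZE = {noun: verb for verb, noun in NOMINALIZATIONS.items()}

def flag_nominalizations(text: str) -> list[str]:
    """Find nominalized forms in a passage and gloss each one with the verb,
    i.e., the scene of someone doing something, that it conceals."""
    glosses = []
    for word in re.findall(r"[a-zA-Z]+", text.lower()):
        stem = word.rstrip("s")  # crude plural handling, for illustration only
        if stem in DE_NOMINALIZE:
            glosses.append(f"{stem} <- to {DE_NOMINALIZE[stem]}")
    return glosses

if __name__ == "__main__":
    passage = "Prior assumptions shape the implications drawn from the data."
    for gloss in flag_nominalizations(passage):
        print(gloss)
    # assumption <- to assume
    # implication <- to imply
```

Nothing depends on the particular words chosen; the point is only that each noun form can be traced back to a scene on which someone suggests, implies, or assumes something.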

So, let’s say we reverse the process, and dis-articulate the nominalizations back into verbs. Anna Wierzbicka’s primes can be useful here, but they’re not required. So, for example, the psychologist Daniel Kahneman “writes of a ‘pervasive optimistic bias’, which ‘may well be the most significant of the cognitive biases.’ This bias generates the illusion of control, that we have substantial control of our lives” (I’m just working with Wikipedia here). So, “we” can measure how much “control” “we” have over “our” lives, how much control we think we have, and the “distance” between the two. Those doing the measuring must have more control than those being measured—they know how “complex” things really are. The best way of measuring such things seems to be asking people how much they think things will cost. (Maybe Kahneman has a bias in favor of certain understandings of “control” and “complexity.”)

But being more or less “optimistic” is a question of wanting, hoping, thinking, knowing, trying and doing. These activities are all part of each other. You have to want in order to hope, and you have to try in order to do and you have to hope in order to try. And you have to know something (not just not know lots of things) in order to hope—knowing the relation between trying and hoping, for example, and how that relation is exemplified within the tradition or community in which you are located. And the relations between all these activities can be highly paradoxical—the harder you try, the higher your hopes might be, which might mean the more deluded you are or it might mean the more you find ways of noticing your surroundings, taking in “feedback.” But can you try really hard without hoping? Sure—and consciously withdrawing your hopes from your activity, draining your reality of its aura of hopefulness, so to speak, might be a new form of hoping, one in which you accept a lack of control as part of a “faith” you have in “higher powers” or in the mutual trust with your neighbors. From this you derive an imperative to hope, try, know, etc., in a specific way, within a particular paradoxical framing. (All of the binaries targeted for deconstruction by Derrida are sites of this originary paradoxicality.) None of this interferes with re-entering the entire disciplinary vocabulary from which we departed, and reading the discipline itself in terms of its hoping, trying, knowing and so on. Any disciplinary space must also be a satire of some institutionalized discipline.
