GABlog: Generative Anthropology in the Public Sphere

July 16, 2020

Truth and Practice

Filed under: GA — adam @ 10:15 am

What is true? Whatever enables you to further perfect your practice. You have a practice when you can point to something that happens that could only have been a result of something you did. So, if I drop a glass and it shatters, something that only happened because I dropped it, do I have a practice? Yes, but a limited one—you could expand it by, for example, dropping other things and seeing that they too shatter; or by hitting the glass with a hammer or throwing it against the wall, and seeing that these actions also lead to its shattering; but that’s about it. On the other hand, if you drop the glass on a pillow, and see that it doesn’t shatter, and then try it out on surfaces intermediate in hardness between floor and pillow, you might start getting somewhere interesting. You have a practice that we might describe as testing the resilience of various substances under various conditions, and that could get very sophisticated and be extremely useful. (Obviously, it is.) Every time you identify some correlation between resilience and constraining conditions you have said or recorded something true.
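
To make the glass-dropping example concrete, here is a minimal sketch in Python of the practice as a recorded experiment. Everything in it is invented for illustration: the drop_test function, the hardness scores, and the shatter model are hypothetical stand-ins, not anything the post specifies.

```python
import random

def drop_test(hardness: float) -> bool:
    """Hypothetical experiment: does the glass shatter on this surface?
    Toy model: the harder the surface, the likelier the shatter."""
    return random.random() < hardness

# Invented hardness scores running from pillow (soft) to tile (hard).
surfaces = {"pillow": 0.05, "carpet": 0.30, "wood": 0.70, "tile": 0.95}
trials = 100

for surface, hardness in surfaces.items():
    shattered = sum(drop_test(hardness) for _ in range(trials))
    print(f"{surface:>7}: {shattered}/{trials} shattered")
```

Each line of output records a correlation between a constraining condition and an outcome, which is the sense in which the practice says or records something true.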

Part of having a practice is reframing things you were doing previously as imperfect versions of that practice. Looking back, the scientist testing the resilience of objects can say that’s what he was “really” doing when he dropped the glass. And that’s also true, if you can trace that accident to your present practices—no one can know everything about their attentional state in some prior event, and very often later actions in a sequence are needed to bring out the truth of earlier actions. That’s part of the practice—deriving the elements of your practice from previous, maybe “unconscious” attempts at it. This is also a helpful way to remind yourself that you don’t know everything you’re doing right now, and to conduct your present practices in such a way that subsequent versions will reveal what presently remains obscure. The more inclusive of past practices and the more anticipatory of future ones they are, the more truth your present practices generate.

This doesn’t mean, though, that the earlier practices “contain,” or inexorably lead to, the later practices—nothing about breaking a glass sets you on the path to create technologies to test the effects of various temperatures on specific objects. Social and technological histories have to intervene. Various substances come to be used for various purposes; a social space must be created in which people have the time to “specialize” in certain modes of production; and this means certain kinds of violence must be minimized and certain kinds of authority constructed. We can leave scientific practices to the scientists, except for when those practices cross over into other domains; what we can focus on, though, is the practical structures of those other domains, which “receive” the results of scientific practices and provide the conditions for them. And in the domains of human interaction in its various media, the question of what counts as a practice, or as the perfection of practices within a system of practices, is more complex. When I speak with someone, what makes that a practice? What happens, and happens in such a way that I can point to it so that others can see it only as a result of what I say? How do I conduct my speech so as to make things happen so that their effects can be singled out in this way?

It’s good to be both matter-of-fact and revelatory at the same time—you’re doing something that can be repeated, i.e., made routine and practicable for anyone, while you’ve designed that practice, and determined the site of its use, in such a way as to produce some knowledge that wouldn’t have existed otherwise. The construction of a practice is simultaneous with realizing that you’ve already been constructing a practice. The starting point is always an anomaly or a mistake—someone does or says something that doesn’t fit the frame of expectations that enables us to make sense of it. The first step is to suppress your impulse to “harmonize” the anomaly with the field of expectations or correct the mistake, and in the latter case to suppress your shame if it happens to be your own. It has to become interesting. The breaking of a frame makes you realize there was a frame; since a mistake or anomaly is essentially the collision of some other frame with the one determining your expectations, you now have two frames that happened to interfere with each other. Such interference is what brings the newly recognized frame into view.

You now have a question around which to organize your emergent practice: what does each frame include and exclude? To answer this question, you have to run tests: repeat the mistake or anomaly, and see how the frame responds. But this raises a question—what counts as a “repeat”? No gesture or utterance can simply be repeated, because part of the gesture or utterance is the context, which has been transformed by the gesture, utterance, or sample in question. You need to single out what, exactly, in that sample you are identifying as iterable. Since what we’re interested in is the frame which has been disrupted, what needs to be singled out is a particular form of disruption of that frame, or that “kind” of frame. What makes it that kind of frame is the practice that initiated it, and the way it draws in the elements and means from the whole. We can bring the two together: the evidence of the practice that initiated a given frame or field is in the way it appropriates and converts elements from the surrounding fields. Something in those surrounding fields will resist incorporation, and the attempt to subdue or ignore this resistance will generate anomalies.

If we know the starting point of inquiry is the anomaly or mistake, we can refine our attention so as to lower the threshold for what we count as an anomaly or mistake. We can do this by imagining the contexts, actual or potential, in which some sample would appear anomalous. There’s a short step from such refinements to adopting the perpetual disciplinary stance that is always on the lookout for what might be anomalous or mistaken in any sample we come across—always looking at things askew, we might say. In this way we see the possibilities of cultural innovation everywhere, because, as we know from Eric Gans’s study of the succession of speech forms in The Origin of Language, the new cultural form emerges from the treatment of the mistake as the creation of a new form of presence, if only one can find a way to turn it into a practice others might repeat. So, as we’re lowering the threshold for the identification of mistakes, and widening the hypothetical fields in which those samples would be mistakes, we are also, in the very act of ostensively identifying these mistakes, modeling a way of turning them into the origin of new practices.
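
Read algorithmically, “lowering the threshold” can be pictured with a toy anomaly detector. The numeric stream and the z-score test below are my illustration, under the assumption that an “anomaly” is a sample far from the average; nothing here comes from Gans or the post itself.

```python
from statistics import mean, stdev

def anomalies(samples, threshold):
    """Flag samples more than `threshold` standard deviations from the mean.
    Lowering `threshold` widens what counts as an anomaly."""
    m, s = mean(samples), stdev(samples)
    return [x for x in samples if abs(x - m) > threshold * s]

stream = [10, 11, 9, 10, 12, 10, 30, 11, 2, 10]
for t in (3.0, 2.0, 1.0):  # progressively lowered thresholds
    print(f"threshold {t}: {anomalies(stream, t)}")
```

At the strictest setting nothing registers; as the threshold drops, first the glaring outlier and then the quieter one surface as candidates for a new practice.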

The path, then, toward perfecting our practices lies in the ongoing surfacing of the mistake/practice interfaces all across the field of the present. This involves iterating samples in the closest possible way, making our iterations as indistinguishable as possible from the original, while at the same time revealing everything mistaken or anomalous in the “original” and producing the practice that would make up the difference. It’s a kind of infiltration that’s right out in the open, one that transforms the space being infiltrated so that it’s no longer an infiltration. The practice is the transposition of a sample from one field to another, in such a way that the fields are converted into elements appropriable by the practice. One would never say anything other than what is being said, but in such a way as to summon everything that makes it sayable.

We can frame this in algorithmic terms. What we notice are low-probability events. We notice them against the background of high-probability events, which can be held constant. The paradox here is that the low-probability event, if it happened, was in fact very high probability—100%, in fact. What made it seem low probability, then, were precisely all the other events that were being held constant. (The originary hypothesis is a very helpful model here: there’s nothing that we find ourselves more entitled to hold constant than our existence as human beings, whatever we take that to entail—but holding all that constant makes the emergence of the human itself very low probability, since not having the center is unimaginable.) Whatever our system of measurement was equipped to detect made it incapable of detecting whatever pointed to the emergence of what actually happened. So, we work backwards from the supposedly low-probability event to the system of measurement, and we identify everything that pointed to the surprising occurrence and set it alongside what was actually noticed instead. That’s the instruction that sets the algorithm to work: find all the markers of the event’s emergence, from beginning to end (the parameters for all this would have to be determined), and determine the threshold of detectability of those markers within the existing system of measurement.

An obvious example here would be the 2016 election: an intellectually honest prognosticator who was 99% sure Hillary Clinton would win might want to study the forms of attention that led to that conclusion, and part of doing that would be to go back and look for all the things they could have noticed and articulated into a better prediction, but never saw because they disdained the source (as opposed to more “reliable” ones), or saw but relegated to the irrelevant because it conflicted with other information held constant as relevant, or noticed and found curious or troubling but never pieced together because it didn’t fit a paradigm that had been successful in the past, and so on. You could imagine this being done through a continual refinement of search terms taking you through the archives, guided by the feedback you received from the previous search. The algorithm would be the formula or set of instructions enabling the computer to do this on its own, producing various models of reconfigured attentional structures that would have led to different results.
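
The backward-looking search loop described here can be sketched in code. The four-line “archive,” the seed term, and the rule for promoting words from hits into new search terms are all hypothetical stand-ins for the feedback-driven refinement the paragraph imagines.

```python
archive = [
    "polls show clinton ahead despite wisconsin murmurs",
    "wisconsin rally crowds keep growing steadily",
    "wisconsin registration numbers quietly shifting",
    "pundits dismiss growing rally crowds as noise",
]

def search(terms):
    """Return archive items containing any current search term."""
    return [doc for doc in archive if any(t in doc for t in terms)]

terms = {"polls"}  # what the old attentional structure looked for
for step in range(3):
    hits = search(terms)
    print(f"pass {step}: {len(hits)} hits, terms={sorted(terms)}")
    # Feedback: promote longer words from the hits into new search terms,
    # lowering the threshold of detectability for neglected markers.
    for doc in hits:
        terms |= {w for w in doc.split() if len(w) > 6}
```

Each pass widens what the system of measurement can detect: the seed term finds one document, whose vocabulary pulls in the Wisconsin items, whose vocabulary in turn surfaces the dismissed rally reports.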

So, right now, spreading out to the fringes of your awareness and beyond, there are emergent events one outcome of which, if the event were brought to your attention, would seem 73% likely, another outcome 17%, another 5%, and so on, down to an outcome that seems .000001% likely to happen. Of course, this breakdown will be wrong in some ways, and it will be wrong in more ways as the predictions get more “granular.” (Someone was right in predicting Trump would win, but did they predict he’d win Wisconsin, etc.?) You would then want an ongoing thought process that looks into all the ways you might be wrong and refines your explicit and implicit predictions, not so much to be right more often (this is actually not particularly important) but so as to continually lower the threshold at which you notice things. What, exactly, is an “implicit prediction”? It’s everything you’re paying attention to and not paying attention to, everything you’re hoping and fearing, all the people and institutions you rely on to show you things—every move you make presupposes a structure of predictions.
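
One way to keep such a distribution honest is to store it as data and score it after the fact. The outcome labels below are placeholders, the probabilities echo the paragraph’s (rounded so they sum to one), and the Brier score is a standard calibration measure I am supplying, not something the post names.

```python
predictions = {"a": 0.73, "b": 0.17, "c": 0.05, "d": 0.05}

def brier(preds, actual):
    """Mean squared error between a forecast and what actually happened;
    lower is better, and a large score flags predictions worth revisiting."""
    return sum((p - (1.0 if outcome == actual else 0.0)) ** 2
               for outcome, p in preds.items()) / len(preds)

print(brier(predictions, "a"))  # the favorite happens: ~0.027
print(brier(predictions, "d"))  # the long shot happens: ~0.367
```

The point of the score, on the post’s own terms, is not to be right more often but to locate the forecasts whose failure says the most about your attentional structure.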

There’s a question of whether probabilities are “real” or just a method of thinking: we can’t help but consider one outcome more likely than another; and once we assign greater likelihood to one outcome, we can consider how much greater likelihood to attribute to it, and so on. This presents itself as reality to us. But whatever happens, actually happens. Rather than enter this debate, I will say that what we process, more or less formally, as probabilities can be resolved into who we think is where, doing what. If I think Trump has a 25% chance of winning the election, then I’m attributing to his supporters, opponents and neutrals a “mass,” a set of motivations and capacities, very different than if I think he has a 75% chance of winning. The same goes for all the social institutions that are facilitating one outcome rather than another. The distribution of probabilities is really a distribution of “anthropomorphized” people, ranging themselves in relation to each other. To clarify the point, at the risk of caricaturing it: there’s some guy in Michigan upon whom my system of measurement hangs, who “must” be accessing certain sources of information, have a certain circle of friends, be ready to argue with others and help campaign to a specific extent, be annoyed or outraged when he sees and hears certain things, and so on—we could construct a detailed profile, which is much of what present-day algorithms do. My thinking, we might say, is entangled with the existence of this guy, as a kind of tipping point. We can, then, people the probabilities, which involves peopling ourselves as well—who I am is the sum total of everyone out there I imagine manning their stations, or not. “Peopling yourself” is therefore a practice of distributing the present: present who you imagine as your tipping point for whatever event, and you begin to elicit a model of the entire social order as others do the same. We are all of us tipping points for some set of events, and you find out which ones by peopling yourself.
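
The tipping-point image can be given a toy form with a threshold cascade model in the style of Granovetter (my gloss, not the post’s): each hypothetical person acts once enough others already have, so one low-threshold person can decide whether anything happens at all.

```python
def cascade(thresholds):
    """Each person joins once the number already acting meets their threshold.
    Returns how many people end up acting."""
    acting = 0
    while True:
        willing = sum(1 for t in thresholds if t <= acting)
        if willing == acting:
            return acting
        acting = willing

print(cascade([0, 1, 2, 3, 4]))  # 5: one unconditional actor tips everyone
print(cascade([1, 1, 2, 3, 4]))  # 0: without him, nothing moves
```

The guy in Michigan is the zero in the first list: the model of everyone else only resolves once you have placed him.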
