GABlog Generative Anthropology in the Public Sphere

September 15, 2020

A Single Sample is Enough to Hypothesize the All

Filed under: GA — adam @ 4:52 am

The title of this post was actually the thought that got me started on the “hypothesizing the present” post, which, however, ended up going in a different direction. What sent me back to it was coming across the following formulation in Jonathan Nitzan and Shimshon Bichler’s Capital as Power, attributed to Yitzhak Bentov (previously unknown to me), who himself developed the notion of the “hologram” invented by the (also previously unknown to me) Dennis Gabor:

Technically, the hologram is a photographic method that uses a laser beam to record and then read and project the interference pattern of incidental waves. But it is much more than a mere technical gadget. Seen as a conceptual approach, the hologram has immense potential implications that go far beyond photography.

To illustrate the underlying principle, think of a pond into which three pebbles are dropped simultaneously. These three incidental ‘events’ create a structure of evenly spreading – and intersecting – waves throughout the pond. Now, suppose that we were to freeze the pond instantaneously, pick up the top sheet of ice containing the wave pattern, and then drop it to the ground so that it shatters to pieces. Because of the curvature of the waves, each piece, no matter how small or from which part of the pond, will contain enough information, a bit fuzzy but nonetheless complete, to trace the three events. All we need to do is to ‘extend’ the partial arcs on the piece into complete circles and then find their centres. Our ability to do so turns each piece into a holo gramma – the ancient Greek for the ‘whole picture.’ (224)

So, one sample (a single utterance, or even a single sentence from a single utterance) provides all one needs to start hypothesizing the all. It contains enough “information” to “trace” all the “events” which produced it back to their center. As I often do, I will make the point that this is already what we do in making meaning, not something I am claiming we should do; what we should do is be aware of and “own” this necessary intellectual move. Now, of course, upon hearing a second utterance, the hypothesis formed regarding the first one is revised; perhaps even rethinking the original utterance itself, or juxtaposing it with another utterance, would lead to such a revision. We are always revising our hypotheses of the all as we engage with and become one sample after another. We can think of this as always revising our search terms within an algorithmic mediascape. The only test is what kind of scene would materialize your hypothesis.
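The arc-extension procedure in the quoted passage is literally computable: three points sampled from any small fragment of a circular wave determine its center, via the classical circumcenter construction. Here is a minimal sketch in Python; the function name, coordinates, and radius are my own illustrative choices, not anything from the post or from Nitzan and Bichler:

```python
import math

def circle_center(p1, p2, p3):
    """Center (cx, cy) of the unique circle through three non-collinear points.

    This is the perpendicular-bisector (circumcenter) construction: it
    'extends the partial arc into a complete circle and finds its centre.'
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("points are collinear; no unique circle")
    cx = ((x1**2 + y1**2) * (y2 - y3)
          + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    cy = ((x1**2 + y1**2) * (x3 - x2)
          + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return cx, cy

# A tiny "ice shard": three nearby points on one ripple of a wave whose
# (hypothetical) source was a pebble dropped at (2.0, 3.0).
source, radius = (2.0, 3.0), 5.0
points = [(source[0] + radius * math.cos(t), source[1] + radius * math.sin(t))
          for t in (0.10, 0.15, 0.20)]

cx, cy = circle_center(*points)
print(round(cx, 4), round(cy, 4))  # recovers the source: 2.0 3.0
```

Each of the three pebbles in the pond analogy would be recoverable the same way, from any shard carrying three points of one of its waves; in that limited geometric sense, the fragment really does contain the whole.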

The truth, according to Peirce, is what we would all have to agree upon in the long run—but the long run never gets here (as Peirce knew). So, there’s no point getting bogged down in trying to “prove” or “falsify” a given hypothesis, except under very carefully controlled conditions (in which case the implications will be limited), especially when the hypothesizer is part of the hypothesis itself. (Evoking the necessary conditions to “prove” what someone claims to know—what would have to be “controlled”?—is itself a fruitful source of hypotheses.) What makes for a good hypothesis is that it generates events out of which a community of inquirers into and of those events—a disciplinary space—is created. The utterance, or sample, and the discourse on the sample, become an origin and model, and come to position potentially everyone in relation to it—as someone addressed, or not addressed, or addressed under certain conditions, in a particular way, by the utterance; and therefore as someone responding to, resituating, repurposing, re-embedding, and so on, that utterance. For these purposes, sometimes it will be the “wild” hypothesis that is best, because it seeds the most possible scenes. This is what became the notion of “hypothesizing the present,” insofar as a single utterance is made the center of a system of reverberations and resonances that spreads across the entire field that has constituted the utterance in the first place.

There is a practice here that can always get us started, one I take from Gertrude Stein: treat every word in a sentence as equally important (which would further imply treating every event as equally important, every “component” of every event as equally important, etc.). This doesn’t entail a claim that they are all equally important: it’s a hypothetical move to counter the ingrained assumptions regarding hierarchies of importance we bring to any “sample.” It’s ultimately unsustainable: you can read a sentence as if the “a,” the “the,” and the “of” are just as important as the noun and verb, and sometimes, under some conditions, for some purposes, they will, in fact, be—but the more a disciplinary space forms around the sample, the more some hierarchy of importance will take shape. But it will be a different one than that with which you started, precisely because it had to “re-form” out of its “elements.” This is why I started Anthropomorphics by referencing Gertrude Stein’s dictum (maxim? aphorism?) to “act so that there is no use in a center.” This is a discovery procedure—the more you resist any center that seems to be taking shape in orienting your actions, the more resonance and “anti-fragility” the center that will ultimately be revealed as having done so will have. The center will be iterative insofar as, however it is ultimately structured, it contains within itself all these other possibilities. A model for thinking in these terms is Richard Feynman’s proposal for dealing with the paradoxes of measurement in quantum physics, which, as I understand it, entails positing that particles take or “try out” all the possible paths from origin to endpoint, any one of which might be captured in a particular measurement.

Any “element” of an event or model, in that case, can be extrapolated and presented as being always what it is within that event or model and, furthermore, as fully determinative of that event or model. “A tall man killed a short man on Main Street last night” becomes “tall men always kill short men”; “tall men are inveterate killers”; “short men are perpetual defenseless victims”; “Main Street is a killing field”; “last night was the most violent night in the history of the town”; etc. Such wild hypotheses are always in the background as we work our way back to the more moderate conclusions that height probably had nothing to do with it, Main Street is not all that dangerous, etc.—it’s the only way of really bringing all the different features of the event or model into focus. If a part of your thinking holds on to all these wild hypotheses, the relative significance of size, location, time, and so on will be composed. At the same time, these wild hypotheses are your transitions to other events and other models, which you seek to anchor in this one, as you seek to determine the “curvature of the wave” of this fragment as an effect and sign of all the killings, all the size differentials, all the Main Streets, all the night times, as differentiated from, say, weight differentials, side streets, peaceful interactions, day and evening times, etc.

So, any utterance contains or indicates the entire social order, and so do you in taking up that utterance—how so, of course, is what is to be determined, or deferred. You can single out an especially odd and contemporary utterance (no shortage of those), but you can also defamiliarize an apparently unexceptional one: the initial question is, where could this have come from? Who would say this, in what media, in response to what problem or provocation, to what interlocutor or audience, within which field of possible effects, with what set of conceivable intentions? (We have to accept, I think, that curiosity and inquisitiveness, once viewed with suspicion at best, and for some good reasons, have become virtues.) You populate the field of the present around the utterance, and then keep repopulating it as you go. The utterance can be repeated in various contexts; indeed, in working with it, you are creating some of those contexts. Each repetition would reveal something new about the utterance. Each question opens up a field of others, which can be reviewed without prejudice, until a new “prejudice” takes shape: how do the various media work; what are the various audiences and sub-audiences and cross-over audiences; what are the institutions through which an utterance can circulate; under what conditions would the utterance be impossible or unthinkable; what are the observations, the confirmation of those observations, the transmission of the summaries of those observations that go into making up the referents in your sample utterance—all these become the origin of hypotheses as well.

The practice of explicitly hypothesizing the all from the single sample is a form of training in identifying what is peculiar to our present. One is placed on alert to the “signs of the times.” In doing so, one knows oneself to be a sign of the times, and thereby comes to signify more. The practice also converts others into such signs, in a form of public pedagogy. And I will here remind you of the Natural Semantic Primes, which encourage us to translate all utterances into someone saying something to someone else, someone doing something, something happening to someone, and so on, with the boundaries between doing and happening, saying and thinking, wanting and doing, and so on, being an endless source of hypotheses. As is attention to what David Olson calls (and I have called many times after him) the “metalanguage of literacy,” in which we can reduce, for example, “assumptions” to something “many people say before they say this thing,” “belief” to something people say when they will also say “you can do bad things to me if I don’t do this thing,” and so on—and so generate scenes and histories of scenes out of every word. For example, I knew from the first time I heard of it that the Trump-Russia collusion story was nonsense, for the simple reason that no one could give it the form of an event one could imagine: Trump says _____; Putin replies _____; Trump responds _____; they shake hands, the election in the bag. Try to fill in the blanks and construct a coherent event without laughing out loud. (Of course, this also means its satiric possibilities are immense—how would all of Trump’s actions as president appear if we were to believe he really was remotely controlled by Moscow? Moscow would become very interesting!)
And in the single sample of the Russia collusion hoax, we have the means to hypothesize the all—all those who pushed it, who constructed bits and pieces of pseudo-evidence to “corroborate” it, all those who actually believed it, everything they had to train themselves to ignore and everyone they had to train themselves to hate in order to continue believing it—this chain of hypotheses leads us to everything.

In hypothesizing the all from the single sample you transform the entire world into fellow inquirers as well as objects of inquiry, and you can treat all of their utterances as hypotheses of the all out of the single sample, whether they like it or not. The practice overlaps with more conventional practices of “fact checking,” “context providing,” and other elements of “critical thinking,” but without claiming to saturate the field. If you think fact checking is a meaningful activity, you must believe you can gather and confirm all the “relevant” facts in a way all “reasonable” beings would agree on. This is nonsensical, because what counts as “relevant” is always institutionally and historically dependent; but, at the same time, the fact checker, in checking one fact in the way he does, in fact hypothesizes the all from the single sample, because he’s hypothesizing the historical and institutional setting that makes the fact relevant—and the institutionalized “chain of custody” that makes it a “fact” in the first place. That’s the way to address the fact checker, not by pointing to some fact he left out (unless in doing so you are explicitly hypothesizing the all from a single sample). Similarly, nothing can be more obvious than that we can never have, once and for all, the “whole” or “proper” context, which would really have to be the entire history of the human race. But in making a bid to close off the context, the context provider hypothesizes the all from a single sample, and can therefore be treated as a fellow inquirer, even if not quite in the way he might have wished. The same is true of logical fallacy detectors, who wish to institute rules regulating discourse that no one could follow consistently while actually generating any discourse. But chasing down any utterance into the definitions and if… then sequences that would make it acceptable to the exacting logician is a way of creating algorithms and mock algorithms.

The hypothesizing of the all out of the single sample (and as the single sample) is a form of self-appification, of turning yourself into an interface between other users and the Cloud. This is the way to install the iterative center into the stack. Like all the practices I propose, it can operate on various levels—the advanced academic discussion no less than the Twitter ratioing. It can be mastered at a very high level of proficiency, but it can also be broken down into little techniques anyone can use. It’s a way of moving very quickly to broader frames, and also of sticking tenaciously to a single demand: no, tell me how it was possible for this person to say this thing, and what follows from his having said it. Every utterance “calls for” translation, and every translation is a “transfer translation,” which resolves some inconsistency or anomaly between overlapping discourses (for our purposes here, we can say that a “transfer translation” is needed when one must reconcile the differences between equivalent utterances in the same discourse). My own hypothesis here is that the most important and generative translations will be those of statements uttered under the presumed rule of the Big Scene into statements intelligible within the scenes of scenes authorized by the iterative center. Each hypothesis of the all from the single sample creates such a scene and reveals a further iteration of the center.

A final practice to suggest here. All of us, as “selves,” which is to say, as the “same” as we were previously, are composed of what has been deposited in us by previous incarnations of the center, on the one hand, and of our ongoing engagements with the center, wherein we are deputized, so to speak, to exercise those deposited capacities, on the other. Where is the line between what has been deposited and what one currently exercises? (This bears some family resemblance to the free will vs. determinism problem, but also to Marx’s distinction between constant and variable capital.) No one can really say, but we are always hypothesizing it by virtue of our construction of practices, which presuppose the possibility of exercising upon what has been deposited—of doing something with what has happened. This line can be hypothesized in the transfer translation of any utterance; it can be drawn up very close, so as to suggest that almost nothing is exercised; or it can be pushed way back, so as to suggest that only bare remnants of what has been deposited remain—and we can identify practices where, depending upon the practice and disciplinary space being enacted, it can seem that either one or the other is the case. And such hypothesizing and thought experimenting is itself an exercise on the deposits.
