GABlog Generative Anthropology in the Public Sphere

July 2, 2014

Brain as computer

Filed under: GA — Q @ 9:22 am

The basic premise of much current brain research seems to be that the brain is a biological computer and evolution is the programmer. Theoretically, then, we should be able to find the codes and understand the working of the brain. According to a 2010 article on CNET:

Researchers at the Stanford University School of Medicine have spent the past few years engineering a new imaging model, which they call array tomography, in conjunction with novel computational software, to stitch together image slices into a three-dimensional image that can be rotated, penetrated and navigated. Their work appears in the journal Neuron this week. To test their model, the team took tissue samples from a mouse whose brain had been bioengineered to make larger neurons in the cerebral cortex express a fluorescent protein (found in jellyfish), making them glow yellow-green. Because of this glow, the researchers were able to see synapses against the background of neurons. They found that the brain’s complexity is beyond anything they’d imagined, almost to the point of being beyond belief, says Stephen Smith, a professor of molecular and cellular physiology and senior author of the paper describing the study: “One synapse, by itself, is more like a microprocessor – with both memory-storage and information-processing elements – than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth.” (Elizabeth Armstrong Moore, CNET)

A high-end computer chip such as a quad-core Intel Core i7 has 731 million transistors, which act as switches. The human brain, on the other hand, has an estimated 86 billion neurons and 1,000 trillion synapses. “In a related finding there was a new article that suggests the difference between human and other primates is the space between neurons in the prefrontal cortex, with humans having more space, which is speculated to allow more connections.” (Ward Plunet, link)
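
The scale gap can be put in rough numbers. A back-of-the-envelope calculation using only the estimates quoted above (none of which are precise measurements):

```python
# Back-of-the-envelope comparison, using only the estimates quoted above.
transistors_per_chip = 731_000_000    # quad-core Intel Core i7
synapses = 1_000_000_000_000_000      # ~1,000 trillion synapses per brain
switches_per_synapse = 1_000          # Smith's molecular-switch estimate

molecular_switches = synapses * switches_per_synapse
chips_equivalent = molecular_switches // transistors_per_chip
print(f"one brain ~ {chips_equivalent:,} i7-class chips, by switch count")
```

On these numbers a single brain corresponds to over a billion such chips, which is the sense in which it outscales “all the computers and routers” combined.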

The fact that the brain is many times more complicated than a computer does not, by itself, refute the analogy. It does seem significant, however, that no computer yet devised has any degree of consciousness.

Scientists have been very successful of late in manipulating living cells, especially in tinkering with DNA, to create new plants and so on. But they have not yet been able to create life in the laboratory, starting from non-living compounds.

23 Comments »

  1. We took a quick look at that, just to gain a bit of respect for the scope of the challenge, in a Finite Automata class at UCI back in ’72. While our estimate of gate-equivalent elements (‘switches’ in the article) was a more modest ‘at least trillions’, we were able to recognize, from the state of neurology at that time, that the synapses were functioning as both memory and processor. This would allow for a highly efficient, massively parallel processing architecture.
    To incorporate and expand upon my previous ‘Free Will’ comment (Aping Mankind post), I predict the mental software will look a lot like a GPS-style Kalman Filter. The Kalman Filter modeling and estimation algorithm was developed to allow a ’60s-era computer to navigate the Apollo module to the moon and back, while brains evolved to allow animals to navigate this world.
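
A minimal sketch of what such a filter does — a hypothetical one-dimensional position estimator illustrating the predict/update cycle, not Alan’s actual model; the noise parameters are invented:

```python
# Minimal 1-D Kalman filter: estimate a fixed position from noisy readings.
# The noise parameters q and r are illustrative assumptions.

def kalman_step(x, p, z, q=0.01, r=1.0):
    """One predict/update cycle.
    x: state estimate, p: its variance, z: new measurement,
    q: process noise, r: measurement noise."""
    p = p + q                 # predict: uncertainty grows by process noise
    k = p / (p + r)           # Kalman gain: how much to trust the measurement
    x = x + k * (z - x)       # update the estimate toward the measurement
    p = (1 - k) * p           # uncertainty shrinks after the update
    return x, p

x, p = 0.0, 1.0               # initial guess and its variance
for z in [1.2, 0.9, 1.1, 1.0, 0.95]:   # noisy readings of a true position ~1.0
    x, p = kalman_step(x, p, z)
print(round(x, 2), round(p, 2))        # estimate has moved toward 1.0
```

The appeal of the analogy is visible even here: each cycle blends an internal prediction with a noisy sensory input, which is roughly what a navigating animal must do continuously.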

    Comment by Alan — July 3, 2014 @ 1:15 pm

  2. Thanks Alan! I appreciate any comments or input you have on this issue. Do you think that the human brain is comparable to a computer? ~Peter

    Comment by Q — July 3, 2014 @ 1:45 pm

  3. I’m fully confident that animal brains are computers, with humans’ being the most powerful. There are similarities to the computers that we build and use, and very significant differences. That said, I think the best way for us to understand the brain is to think of it as a computer, and try to sort it out top down (greater functions, which I would model as software) and bottom up, from the neurons, as addressed in the referenced articles. Clearly the processing power is phenomenal relative to computers we can build, and the architecture is radically different.

    Comment by Alan — July 4, 2014 @ 11:45 am

  4. If there are very significant differences, and a radically different architecture, in what sense are brains really computers? How could something human beings have invented and refined over the past 70 years or so (taking the brain, I assume, as a model) have really been in existence for hundreds of millions of years? I can see the computer as a heuristic device for analyzing the brain and/or mind, as a kind of metaphor in other words, but wouldn’t the limits of that metaphor be that it will lead us to reduce the significance of the differences?

    Comment by adam — July 6, 2014 @ 6:45 am

  5. ‘Computer’ is a functional term as well as a reference to the man-made machines that do computing. Men invented computers and computer technologies to extend and enhance the decision-making capabilities of our brains. I’m not the first to suggest that computer technology has developed to the point that we can use our computers as heuristic devices to analyze the brain, and in so doing we stand a chance (however slim) of catching a glimpse of mind. [must run – will add suggestions for going forward when I have time]

    Comment by Alan — July 7, 2014 @ 12:57 pm

  6. So, brains compute, therefore they are computers. In that sense, brains are computers in the same sense in which they are fantasists, emoters, motion coordinators, and much else – that is, it’s something they do (or, more precisely, help humans to do). That’s very different from saying that brains operate like or as computers, which is what “brains are computers” is usually taken to mean.

    As for your final sentence, it seems to me to be what I said in my previous comment–certainly computers serve as heuristic devices to analyze the brain, or, at least, much of what the brain does. We need other heuristic devices, though, because this one (any one) will occlude as well as illuminate. Of course, once you have a good heuristic, it’s worth pushing it as far as you can and getting the most out of it. As long as you don’t start thinking that when we walk our brains are really remote control devices.

    Comment by adam — July 7, 2014 @ 4:13 pm

  7. Adam: Your last sentence confuses me – remote control device? I’m not sure what idea of brains you are dismissing. And yes, I borrowed your use of ‘heuristic’ to try and forge an emotional connection which would give you the feeling that we were talking about the same thing (which I think we basically are.). As for ‘fantasists, emoters, motion coordinators, and much else’, I suggest that our brains are massively interconnected organs where all of these functions are intertwined and we need to consider as many of these functions, all working together, as we can because they are so interdependent. That is, dreams, emotions, motion coordination, and much else are all decisions made in our brain and nervous system – they are all part of our brain-computer function.
    Man has been developing mechanical devices for thousands of years to assist with decision making, including the abacus and Stonehenge. The real advances for our current discussion have come in the last 50 years, with micro-circuits and digital computers. Having all grown up and been educated in this age of computers, I will assume that we all know enough about the man-made variety for the purposes of this discussion, so I will focus on brains, and what I see as their evolution from proto-worms through mollusks and, perhaps, beyond. Nerves and muscles appear to have evolved together, muscles doing the work and nerves providing the control. In proto-worms these have two basic functions that have stayed with animals as they grew in complexity, with new functions layered over the primitive ones: pushing food through the gut and pumping blood through the body. Both require an orderly constriction and release to effect a pumping action. Moving up to earthworms, there are new layers of complexity (but not a brain yet): strain sensors in the gut that instruct the constriction to occur more rapidly with low-viscosity food, and more slowly but with greater force when the gut has a high-viscosity filling. They also have valves at the ‘mouth’ and anus to help control what gets in and when things go out. A variety of nerves have also developed to serve as sensors of the outside environment (heat, cold, moisture, light, pressure, gravity), along with muscles to scoot the worm along or through the ground. At this level of development, the basic information-processing and decision-making functions are realized in neurons and neuron clusters.
    With snails, more sensors and their associated nerve clusters are incorporated, including eyes. Their behavioral complexity also increases, and they appear to be doing rudimentary navigation.

    Comment by Alan — July 8, 2014 @ 4:37 pm

  8. I agree that humans have invented and adopted many devices to help with decision making–first of all, language itself. But it’s not the brain that is thinking, or speaking–these are things humans with brains do. I also don’t think decisions are made “in” our brains, although I’m more ready to admit that a worm can make decisions. The brain is in the body–it doesn’t do anything by itself. We may be disagreeing over what words like “decision” mean.

    Comment by adam — July 8, 2014 @ 6:47 pm

  9. Yes, we are using the word decision for slightly different phenomena, but the point of my somewhat lengthy yet incomplete discourse above is to argue, with evolution as my justification, that the two are essentially equivalent. I am using the word in the computer sense, in which each bit or ‘switch’ flip represents a decision: something was just made to be different than it was before, by whatever phenomenon. I am suggesting as well that by digging into the operation of neurons we can uncover how thinking takes place. Do not lose sight of the insight of the original post – one human brain is more complex, in terms of gates or switches, than all the man-made computers combined. Peeling back the layers of evolved complexity is my goal – that we might demystify thinking. In a nutshell, I think each decision at the micro level, each transistor in a digital computer, each synapse in a brain or neuron, is a deterministic event. Whenever the input is at the right level (signal + noise), a state change is propagated in the system. Also, that we think is deterministic: as long as a human brain is alive, it will be thinking. While this is not typically true for computers, some purpose-built computers do operate in this fashion as well. What we think, however, is far from deterministic, and is based on hundreds of millions of years of evolution, our lifetime of experiences and some varieties of random chance. The key, I believe, to making the jump from deterministic neurons to free-thinking humans lies in the evolution of navigating animals.
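
The deterministic “micro decision” described here can be sketched in a few lines — a toy model, with an invented threshold and noise range:

```python
# Toy version of a deterministic 'micro decision': a unit changes state
# whenever signal + noise crosses a fixed threshold. Numbers are invented.
import random

THRESHOLD = 1.0

def decide(signal, noise):
    """Deterministic given its inputs: the same signal + noise
    always yields the same outcome."""
    return signal + noise >= THRESHOLD

random.seed(0)                      # the arriving noise is (pseudo)random...
flips = [decide(0.8, random.uniform(-0.5, 0.5)) for _ in range(10)]
print(flips.count(True), "of 10 inputs crossed the threshold")
```

Each individual decision is fully determined by its inputs; the variability lies entirely in what arrives, which matches the distinction drawn above between *that* we think (deterministic) and *what* we think (not).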

    Comment by Alan — July 9, 2014 @ 12:59 pm

  10. But the jump from deterministic neurons to free-thinking humans includes mimetic crisis and the deferral of violence, and I don’t see what the equivalent to this for computers could be. Computer science, it seems to me, like logic, takes the declarative sentence as the primary linguistic form: a proposition is true or false, and determining which it is leads to another proposition (if…then…) which can be true or false, and so on, through a series of flip switches. This is very productive, both for computer technology and for demystifying thinking, but it will always leave out something essentially human and therefore essential to thinking–the qualitative all or nothing of the ostensive, or Being: that there is something rather than nothing.

    Comment by adam — July 9, 2014 @ 2:39 pm

  11. You are approaching this exercise with two crippling handicaps: you are determined we will fail (to understand brains) and you are thinking of computers as something that sits on your desk. Both will prevent you from imagining an effective solution. One of the strengths of the computer on the desk is that it is designed, bottom up, to excel at a type of processing that is awkward for our brains. Computers are very good for augmenting our native thinking, but not good at modeling it. Computers have no issues with violence and have no need to defer it. Predators survive through violence and have great issues with cooperation, where violence must be deferred. Brains do not naturally work with logic or propositions, but by recognizing patterns and generating scenarios. The qualitative, I suspect, is an independent and parallel phenomenon to navigation, where socialization is an expansion of navigation, as is thinking. I do not see the qualitative as a thinking process but as a reaction. The best way to think about our thinking process is through dreams. We think in dreams and dream-like exercises, by imagining scenarios.

    Comment by Alan — July 9, 2014 @ 9:30 pm

  12. It is mimetic beings that have to solve the problem of violence–just as brains are not computers, so humans are not just another medium sized predator. Correct me if I’m wrong, but to detect patterns computationally, you need algorithms–instructions that tell the computer (in crude terms) to identify and remember all the times x, y, and z occur together in a particular configuration or proportion. Then, you can construct scenarios in which x, y, and z occur together (along with however many other elements or items you want to include) in certain “average” or purposefully arranged relations to each other. How would you compose the necessary algorithms without something like propositions (“when x reaches threshold a, look for instances of y”)?

    Again, I don’t deny that all this bears a family resemblance to a lot of things that the mind does, but it is not what the mind does, because the mind is constitutively concerned with the deferral of violence. Dreams are, in one way or another, connected to desire – what do computers desire? Do androids dream of electric sheep? If I imagine a scenario I’m trying to solve a problem, which must in some sense be “existential” (do computers have existential problems?), or perhaps I’m imagining my problems away, or, Walter Mitty-like, adopting some fantastical heroic role for myself. All grounded in desire. So, what emerges out of desire and the deferral of violence in humans is simulated through the creation of algorithms in computers. That is a difference that will never be erased. If the best example of the thinking process is dreaming, then computers are excluded from thinking – you can program a computer to generate surrealistic scenarios, or scenarios that follow each other with a pre-set degree of randomness, but that is not dreaming.

    You say my approach makes it impossible for me to imagine an effective solution, but I don’t see a problem that needs a solution (maybe that’s another handicap!). Computer programmers will continue to refine their science (or is it a craft?), they will continue to generate models that shed light on our thinking processes, some hubristic programmers and their propagandists will overreach and claim they can exhaustively account for and reproduce the thinking process in this way, and some humanistic thinkers will always be there to point out that something critical to human thinking cannot be captured this way. No problem, just stimulating discussion, as far as I can see.

    Comment by adam — July 10, 2014 @ 6:26 am

  13. Yes. Well said! The problem that seeks a solution here is simply the intellectual exercise that brings us together in this dialog – understanding how free will is manifest out of the lump of matter that is our bodies. It is only through free will that this dialog is even remotely possible – the real problem, that of achieving free will, has long been solved. We are simply challenging ourselves to understand it better. Not much of a need; I was attempting drama. And as you point out, good critics are essential to refining any theory or model. My concern was that you were dismissing rather than considering and criticizing.
    Like so many complex problems, piecemeal solutions are probably the most promising, and the piece in my sights is what I am calling the navigation problem. The model I am attempting to fit this to is a Kalman Filter, which is, by deliberate coincidence, the algorithm used by the software on any GPS device. This algorithm is inherently scalable and quite naturally accommodates an arbitrary number of dimensions and an arbitrary number of sensory inputs, each with (potentially) unique conditioning algorithms. Each added level of complexity in the model naturally requires a corresponding increase in memory and computational power, but as we have already seen, brains have lots of both.
    As for recognizing patterns, you are correct for modern built computers, but our brains appear to be more holographic processing organs – capturing together sight, sound, smell and texture as available. Each of these is simultaneously checked for similarity to previously captured sights, sounds, smells and textures. The algorithm required for pattern comparison appears to be hard-wired. Birds and mammals alike appear to have this natural ability to a fairly high degree. To that end, birds and mammals alike are mimetic beings that have had to solve the problem of violence – to the extent that they must cooperate to raise their young. One thing that marks humans is how phenomenally more successful we are, having developed a ritualized solution to our violence. For millions of years our ancestors, often called archaic humans in the anthropology literature, struggled at the threshold of extinction in small family bands. We were mid-sized predators with only marginal success and minds that were constitutively concerned with perpetrating and/or escaping violence. Deferring violence was an acquired social skill, developed very recently in the grand scheme of things. That we must continually reenact the rituals reflects just how unnatural it is for us.
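
A sketch of this simultaneous similarity check as a nearest-neighbor lookup over multi-channel features — the channels, stored memories and numbers are all invented for illustration:

```python
# Compare a new multi-channel percept against stored ones,
# across all channels at once. All names and numbers are invented.

memories = {
    "apple":  (0.9, 0.2, 0.7),   # (sight, smell, texture) features
    "stone":  (0.4, 0.0, 0.1),
    "flower": (0.6, 0.9, 0.5),
}

def closest(percept):
    """Return the stored memory most similar across every channel."""
    def distance(stored):
        return sum((a - b) ** 2 for a, b in zip(percept, stored))
    return min(memories, key=lambda name: distance(memories[name]))

print(closest((0.85, 0.25, 0.6)))   # a percept resembling 'apple'
```

On a serial machine the channels are checked one after another inside `distance`; the brain, on this account, would do all of them in parallel.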
    Dreams are, in many ways, connected to desire. If I imagine a scenario, I’m trying to solve a problem which must in some sense be “existential” (agreed!). The survival, existence and performance of a computer is the humans’ problem, so no existential problems for them. They need not worry their pretty little heads. Also, there is no need to erase the difference between brains and computers, and that is surely not my goal here – just to better understand brains. To that end, I suspect we can program desires into computer routines and teach computers to dream and, perhaps, to think.

    Comment by Alan — July 10, 2014 @ 3:11 pm

  14. I suppose this is interesting, but at this point it is not clear to me what you are actually doing–you are trying to fit “it” (what?) to a model–does this mean you are actually building something physical, or is this an abstract model of the brain? You accept my distinction between computers, which work on algorithms, and the more holographic brain, but then just go ahead and suggest that’s run by an algorithm as well. Written by whom? Animals solve the problem of violence through a pecking order, not signs. That’s the fundamental difference–one, again, that seems to play no role in your model.

    Maybe in the end we agree–yes, we can better understand brains through computers. We can better understand brains in other ways as well. And brains are not computers.

    Comment by adam — July 11, 2014 @ 6:45 am

  15. As with any harebrained scheme, it is best to come armed with a suite of justifications, and so I have. The more practical goal is to improve computers. Human brains appear to me to be computers orders of magnitude more powerful than anything we could hope to build with the technologies we are currently developing. And it is not simply the number of gates or switches that the cited articles highlight: critters with really small neural networks are capable of surprisingly complex behaviors. Neural networks are processing data with speeds and efficiencies that we have trouble even grasping. We should be able to realize an order-of-magnitude improvement in processing power by mimicking these neural networks with current circuit technologies (easy to say, hard to do – it’s 3-D, and memory is integrated with the processing).
    On an emotional level, I am quite bemused by otherwise intelligent folks who grandstand in public forums on how science and evolution show free will to be a myth.
    Professionally, following a lecture series on GPS, it dawned on me that the GPS algorithm should be adaptable to animal navigation. Animals began to navigate early in evolution; terrain, resources and predators are unpredictable, and adapting to this dynamic could account for the evolution of free will. It should be possible to model this on a (regular, digital) computer, demonstrating the concept. In a brain, this algorithm would be hard-wired by evolution, a feature of the neural architecture.

    If your goal is to understand the human mind you are probably better off studying poetry or music. I do not see how my approach can explain consciousness.

    Comment by Alan — July 13, 2014 @ 7:07 am

  16. Well, we can’t be sure what will explain what, so it’s worthwhile to try various approaches – I certainly have no objection to, say, referring to human memory as a “database” upon which we perform “searches”; in a sense, it’s no different from older, spatial models of thought – we could think of the mind as a house, or a neighborhood, and much else. As long as one remembers that the brain is in a body, not a vat.

    Comment by adam — July 13, 2014 @ 7:39 am

  17. My model actually fits the GA model like a glove, as I thought I pointed out earlier, but perhaps too cryptically. My Kalman Filter model, at the level of humans, works essentially like a war game (but for all aspects of life, not just war), where you have a map of the environment and models of the other players. You then run through a series of what-if scenarios to try and anticipate the outcome of different initial actions. The outcomes of these scenarios are various gut feelings associated with each initial action. The bulk of this process is not conscious at all, or only randomly. We have given names such as dream or daydream to the aspects of this process that enter our consciousness, but we also augment this process with language (wherein comes much of our advantage over animals) by telling stories and myths among companions – and by acting out rituals. The sign, according to GA, reminds us of any one of, or several of, the myths, stories or rituals of the category wherein potentially violent meetings remain peaceful. Reminders of the positive outcomes of past communions culminating in constructive cooperation.
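
A toy version of this what-if machinery — the actions, outcomes and probabilities are invented, and the “gut feeling” is simply an average simulated payoff:

```python
# Run what-if scenarios for each candidate action and attach a
# 'gut feeling' (average payoff). All numbers here are invented.
import random

OUTCOMES = {
    "approach": [(0.6, 2), (0.4, -5)],   # (probability, payoff) pairs
    "wait":     [(1.0, 0)],
    "withdraw": [(0.9, -1), (0.1, 1)],
}

def gut_feeling(action, trials=1000):
    """Average payoff over many imagined runs of one initial action."""
    total = 0.0
    for _ in range(trials):
        r, cum = random.random(), 0.0
        for p, payoff in OUTCOMES[action]:
            cum += p
            if r < cum:
                total += payoff
                break
    return total / trials

random.seed(1)
best = max(OUTCOMES, key=gut_feeling)    # action with the best gut feeling
print("best initial action:", best)
```

The scenario runs themselves stay below the surface; only the summary feelings per initial action reach the chooser, which loosely mirrors the claim that the bulk of the process is not conscious.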
    I have some cautionary suggestions for analogies of ‘data bases’ and ‘searches’ I will articulate later.

    Comment by Alan — July 13, 2014 @ 10:43 am

  18. Sounds like a promising video game.

    Comment by adam — July 13, 2014 @ 12:45 pm

  19. Video games – I like that analogy. Another analogy I like is ‘Last In, First Out’ – a nearly obsolete style of electronic memory where the most recent entries are the first found in a search. One feature of our memories not found in computers is that we seem to rate our experiences in terms of importance, with the more important being recalled more quickly (along with the more recent). Further reflections on databases and searches can wait for my comments on the next topic from Q: ‘Thought Experiment.’
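
That importance-plus-recency recall can be sketched as a weighted sort — the weights, timestamps and memories are invented for illustration:

```python
# Memories retrieved by a mix of recency and rated importance.
# All entries and weights are invented for illustration.

memories = [
    # (timestamp, importance 0-10, content)
    (1, 9, "first day of school"),
    (5, 2, "tuesday's lunch"),
    (6, 1, "parking spot"),
    (7, 8, "car accident"),
]

def recall_order(mems, recency_weight=1.0, importance_weight=2.0):
    """Most recent and most important memories surface first."""
    def score(m):
        timestamp, importance, _ = m
        return recency_weight * timestamp + importance_weight * importance
    return [content for *_, content in sorted(mems, key=score, reverse=True)]

print(recall_order(memories))
```

A pure stack would order strictly by timestamp; the importance weight is what lets an old but significant memory outrank yesterday’s trivia.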

    Comment by Alan — July 14, 2014 @ 2:26 pm

  20. I didn’t even mean it as an analogy. Video games could turn into a great art form and a powerful means of inquiry.

    Comment by adam — July 14, 2014 @ 3:51 pm

  21. Yeah, your comment got me thinking too. Could we come up with a video game built on a simple model that was able to incrementally add complexity, based on the play, as the game went on? It should be possible.

    Comment by Alan — July 15, 2014 @ 12:42 pm

  22. I thought video games already did that–shows how much I know about them. But doesn’t The Sims work that way?

    Comment by adam — July 15, 2014 @ 3:13 pm

  23. I think I need to understand them better to judge. I don’t think The Sims does what I want: the players themselves evolving into more complex entities. I’ll have to mull that one over.

    Comment by Alan — July 15, 2014 @ 3:35 pm
