Friday, March 25, 2005

Evolving Minds, Thinking Machines

In my previous pieces on Incompleteness and Chaos, I have often come close to writing about the human mind. I certainly have questions about the way we understand ourselves, philosophically and scientifically. How did we evolve into conscious, thinking, intelligent beings? Since any theory must stand up to experimental validation, can scientists test and validate the theory of evolution? Will we ever be able to duplicate our intelligence, our way of thinking, in machines? How well can we answer the question, "How do we think?" Here are some glimpses of what the answers may be.

Let us start by taking a tour of a lab populated by 200 computers in the basement of the Plant and Soil Sciences building at Michigan State University. Researchers here have worked for over a decade to develop Avida, a digital evolution research platform. What, you might ask, is digital evolution? Consider this scenario:

A digital organism is a few lines of program code that initially do nothing useful, but can replicate and produce further copies of themselves. We will call it a primitive digital organism. At regular intervals, we present it with a pair of numbers. At first, it will not be able to do anything with them. Nevertheless, each time it replicates, there is a small chance that one of its command lines will mutate into something else. On rare occasions, these mutations will allow the organism to process one of the numbers in a simple way. It may acquire the ability simply to read a number, for example, and then produce an identical output.

If that happens, we reward the digital organism by letting it reproduce faster. If an organism can read two numbers at once, we speed up its reproduction even more. If it can add the numbers, we give it an even bigger reward. Within six months, a lab running this experiment had organisms that were addition whizzes. The organisms always evolved, but what was more surprising was exactly how they were adding numbers. Some of the methods they developed during evolution were completely insane, things that even their creators had not thought of.
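
To make the idea concrete, here is a minimal sketch (in Python) of the mutate-and-reward loop described above. The instruction set, the merit values, and the simple "keep the fitter child" rule are all my own inventions for illustration; Avida's real instruction set and scheduler are far richer.

    import random

    # A toy "digital organism": a short program for a one-register machine.
    # This is only a sketch of the mutate-and-reward idea; the instruction
    # set, rewards, and numbers here are invented, not Avida's.
    OPS = ["noop", "load_a", "load_b", "add_a", "add_b", "output"]

    def execute(genome, a, b):
        """Run the genome on inputs (a, b) using a single accumulator."""
        acc, out = 0, None
        for op in genome:
            if op == "load_a":
                acc = a
            elif op == "load_b":
                acc = b
            elif op == "add_a":
                acc += a
            elif op == "add_b":
                acc += b
            elif op == "output":
                out = acc
        return out

    def merit(genome, a, b):
        """Crude reward: echoing an input is good, adding the inputs is much better."""
        out = execute(genome, a, b)
        if out == a + b:
            return 10
        if out in (a, b):
            return 2
        return 1

    def mutate(genome, rate=0.2):
        """Each replication gives every instruction a small chance of changing."""
        return [random.choice(OPS) if random.random() < rate else op for op in genome]

    genome = ["noop"] * 6                    # starts out doing nothing useful
    for generation in range(100_000):
        child = mutate(genome)
        a, b = random.randint(1, 99), random.randint(1, 99)
        if merit(child, a, b) >= merit(genome, a, b):
            genome = child                   # the fitter program takes over the lineage
        if merit(genome, a, b) == 10:
            print("generation", generation, "learned to add:", genome)
            break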

In Avida, a population of self-replicating computer programs is subjected to external pressures (such as mutations and limited resources, like a rationed supply of nutrition) and allowed to evolve under natural selection. This is not a mere simulation of evolution -- digital organisms in Avida evolve to survive in a complex computational environment and adapt by acquiring entirely new traits in ways never expected by the researchers, some of which seem highly creative.
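
Here is an equally rough, population-level sketch of that dynamic: a capped population size stands in for the rationed resource, and parents are chosen in proportion to merit. The bit-string organisms and all the numbers are invented purely to illustrate the selection pressure, not Avida's organisms or scheduler.

    import random

    # Selection under a limited resource: only POP_SIZE organisms fit in the
    # environment, and parents are chosen in proportion to their merit.
    # Merit here is simply the number of 1 bits an organism carries.
    POP_SIZE, GENOME_LEN = 100, 20

    def mutate(bits, rate=0.02):
        """Flip each bit with a small probability on replication."""
        return [b ^ 1 if random.random() < rate else b for b in bits]

    population = [[0] * GENOME_LEN for _ in range(POP_SIZE)]
    for generation in range(200):
        merits = [sum(bits) + 1 for bits in population]   # +1 keeps everyone in the lottery
        parents = random.choices(population, weights=merits, k=POP_SIZE)
        population = [mutate(p) for p in parents]         # only POP_SIZE offspring survive

    print("fittest organism after 200 generations carries",
          max(sum(bits) for bits in population), "of", GENOME_LEN, "useful traits")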

Using Avida, scientists have found plausible answers to questions like: Can complex organisms like humans evolve from simple precursors like single-celled protozoa? Can natural selection produce a complex organ like the human brain? Why does a forest have so many varieties of plants, when only the fittest species should have survived? Why do organisms in nature cooperate while competing for existence? Why isn't all reproduction in nature asexual, when that is far more efficient than the sexual variety? The implications are completely mind-blowing and give rare insights into why we are the way we are today.

If you think, however, that we know enough already, you are mistaken. Let us consider another facet of this self-quest. The human brain performs at least one quadrillion operations per second, almost a thousand times more than the best supercomputers. Given the overwhelming complexity of the brain, it is not surprising that neuroscientists still do not understand the way it processes information. The so-called "neural code" -- the complex interplay of tiny electrical and chemical impulses that jump between our neurons -- is still not well understood. How are thoughts triggered in our heads? How exactly does a set of neurons fire in or out of sequence to make some of us feel ravenous at the smell of a freshly baked chocolate cake, while making others feel completely nauseated?

Even though scientists do not know how or why, they can record what the neurons do. The patterns in which a whole orchestra of neurons fires in response to a stimulus are observable. By implanting electrodes in a monkey's brain and analyzing the information from a small set of neurons, scientists have built a system that recognizes patterns in monkey brains well enough to let the animals swing a robot arm to the left or to the right with their thoughts.

Curiously, scientists still argue that chips that could decipher, let alone control, the human mind are far away in the future. When signals accompanying a specific arm movement are recorded from the monkey's brain, they can be processed by a computer and used to move a robot arm. But if the monkey's arm is tied down, the monkey learns to control the robot arm using an entirely different set of neural signals. This mutability of the neural code means that even though chips might one day help stroke patients recover memory, or let us learn taekwondo instantly in true Matrix style, there is no way they will be able to identify the memory of your grandmother in a particular series of neural impulses.
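
To give a flavour of what such a decoder might look like, here is a small sketch with made-up data: it fits a simple linear map from a few neurons' firing rates to a left/right command. The neuron count, the synthetic spike data, and the least-squares decoder are my own simplifications, not the method used in the actual monkey experiments.

    import numpy as np

    # A toy version of "read the neurons, move the arm": learn a linear map
    # from a handful of firing rates to a left/right command. All data here
    # is synthetic and purely illustrative.
    rng = np.random.default_rng(0)

    n_neurons, n_trials = 8, 200
    true_weights = rng.normal(size=n_neurons)                  # how each neuron "votes"
    rates = rng.poisson(lam=5.0, size=(n_trials, n_neurons))   # recorded spike counts
    direction = np.sign(rates @ true_weights)                  # +1 = right, -1 = left

    # "Decoder": least-squares fit from the recorded rates to the observed movements.
    weights, *_ = np.linalg.lstsq(rates, direction, rcond=None)

    def decode(firing_rates):
        """Turn a fresh vector of firing rates into a robot-arm command."""
        return "right" if firing_rates @ weights > 0 else "left"

    print(decode(rng.poisson(lam=5.0, size=n_neurons)))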

Do we jump from here to cyborg implants in humans, or maybe to artificial intelligence? The answer to both is still very much uncertain. Consider that the organisms in Avida can evolve in ways inconceivable to their creators. Will harmful computer viruses be able to evolve like this someday? How can we tell when an evolving piece of software begins "thinking" on its own, when we do not even know exactly how we think ourselves? And if scientists and engineers ever succeed in building a truly intelligent machine based on a neural coding scheme similar to ours, we won't be able to read its mind either!

I'll sum up with a line about the human mind, which I heard at the end of a National Geographic television program many years ago:

The most powerful thinking entity on this planet is still unable to understand itself.

All the technical content in this post was obtained from the following sources; read these excellent articles to know more about the subjects discussed here.

4 comments:

  1. ok THIS scared me a bit.. kinda like it did with IROBOT...we cant understand ourselves till today but in the process of trying ...im scared that man will come up with the most alarming ideas and innovations that will be the ruin of him...

    ReplyDelete
  2. I would venture to say that computer virii today even evolve to a certain extent... there are ways to code a worm that can learn and figure out "new" ways of infiltration. These new ways might be relatively trivial to the human mind, but they still are big leaps for a heap of code...

    I used to have a book on the human brain. It was beautifully illustrated and all that... with a virtual walkthrough of the brain itself... wondrous to think about how complex the system really is, and how effortlessly we put it to use.

    I've done some work on what's called computational reflection. It's the concept by which a computer program can be aware of itself, thereby allowing it to change its own runtime behavior... heady stuff... I would try to write a post about it.. but I dare say I could infuse it with as much depth as you can...

    I can plagiarize well though ;) And what better a source than the Gita itself -

    “Being has not vanished in reflection, but is negated. In reflection we have the meeting of two different things (Being and Notion), but Being meets not a stranger, but itself, as an Other, which is mediated, past Being.”

    ReplyDelete
  3. first rain, i am truly amazed at your brilliant thoughts. (i am!) but even more amazed that you always manage to let your mind hold sway here rather than your feelings. cheers!!

    p.s.

    are u an INTP???me, i am INFP:-D

    ReplyDelete
  4. @Anon: First anon comment on the blog - so welcome! I did not exactly wish to scare anybody but yes we are venturing into uncharted territories here.

    @Vigs: A worm can learn using predefined rules. That makes it an automaton. Do we really know when an automaton ceases to be just an automaton and starts to think for itself? The way things are going, the Being will probably meet its Reflection in the not-so-distant future, but the question is, will it recognize itself?

    @Sanjana: You are most welcome.

    @Rapz: Errr... I am flattered that you think so. As for my letting my mind hold sway, in this post it was deliberate... but I do not always do that. I had to hunt Google for the INTP/INFP thingy. After having figured out the acronyms, I think I do the T and the F in equal amounts. So what does that make me?

    ReplyDelete