Saturday, May 7, 2011

Thoughts on Artificial Life, AI, and an AI Reboot

"Unthinking Machines: Artificial intelligence needs a reboot, say experts."

There are some issues with a top-down approach to automatic artificial complex behavior. The problem with modeling the brain or the brain's neural network is that you are looking only at the end result of millions of years of evolution. We should understand how the human brain came to be and became relevant; then we will find that other animals also have brains and exhibit complex behaviors. Simple animals have smaller brains, but we can look at how those systems evolved over time. You could go that route, completely model and understand the brain, but you will still end up with issues. You will have a broken, less-than-accurate copy of the brain, and you will still be missing the other components of the human body: the heart, the nervous system, the lungs, millions of years of evolution.

Scientists look at the brain and say, "Hey, that is pretty cool, let's model that." I say, "Hey, the earth's biosphere is pretty cool. How did I and the rest of the intelligent animals get here? Let's model that." They are looking at intelligence. But what is intelligence? Why are humans more intelligent than monkeys? Or crows? Or dolphins? In reality, they aren't THAT much more intelligent. And even if humans are a lot more intelligent, a lot of other animals have the same hardware. So if we understand the system that created animals and their hardware, I think that would be more interesting than looking at just one animal "brain" and trying to copy that. What parts do you model or copy? No matter how accurately you model the brain, scientists will always be playing catch-up trying to understand the interesting parts of the human brain. And even after 20 years of copying the brain's functionality, we may still have to copy the other aspects of the human body that give the brain life.

We need a true bottom-up approach that looks at biologically inspired entities if we truly want to understand emergent phenomena. Examine the microbiology level and its chemical reactions and move up. A truly bottom-up approach that models basic organisms, starting from bacteria and cells, is the way to go. On top of the biology, I would look at inorganic matter and how it relates to organic matter. Then I would look at the evolution of these biologically inspired systems. You could run experiments: where did organic matter come from on earth? We should understand DNA, RNA, mRNA, cells, single-celled organisms, water, on and on. Even those basic components are interesting on their own. Combine DNA, cells, and other matter together and you have a complex entity. Understanding the reasons for those components and how they interact is the way to go. Evolve systems that generate those interactions. I would model simple creatures, evolve those creatures, create an environment for those creatures to exist in, have them interact, and then evolve a system that has some form of brain, or multiple brains.
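
The "model simple creatures, evolve them" loop above can be sketched in a few lines. This is a minimal, illustrative toy, not a real artificial-life platform like Avida: every name here (Organism genomes as bit lists, `fitness`, `TARGET`, `evolve`) is invented for this example, and the "environment" is just a target bit pattern the population adapts to.

```python
import random

random.seed(42)

GENOME_LEN = 8
TARGET = [1] * GENOME_LEN  # the environmental condition organisms adapt to

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # How well the organism "fits" its environment: matching bits survive.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Each bit flips independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(generations=100, pop_size=50):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        # The fittest half survives; the rest are mutated offspring.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # after enough generations, approaches GENOME_LEN
```

No behavior is programmed into the "creatures" themselves; selection against the environment does all the work, which is the point of the bottom-up approach.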

Even the term "artificial intelligence" leads people in the wrong direction and needs a reboot. I like "autonomous artificial adaptability." We want creatures that adapt to the world around them and do so at their own direction. The concept of intelligence implies "human brain intelligence." Humans are more intelligent than pigs. But pigs are WAY more intelligent than trees. That leap in adaptability is interesting and worth looking at. Think about it this way: even the unintelligent parts of the human body are fascinating. And who is to say there aren't creatures in the universe that are vastly more intelligent than humans? We have one brain; is it possible that a creature could have a million brains that all operate independently of one another? Is it possible a creature could make far better use of its brain's capacity than we do? Bacteria and plant life are not normally considered intelligent, but they do adapt to the earth's changing conditions. Human beings are far more interesting than bacteria, but that doesn't mean replicating human brain intelligence has to be the ultimate goal for strong AI.

At the heart of strong AI will be computational biology. Whether the artificially evolved creature has something similar to a brain or neuron cells is irrelevant to the problem; you can still create adaptable, seemingly intelligent creatures with artificial biology in a controlled artificial environment. That is what I think the AI field is missing; the focus has always been the brain. Even if you create an artificial brain that is similar to the human brain, you have the problem of replicating the signal-processing mechanisms of the eyes and ears. You have the problem of creating pain receptors and the other bits of information that are fed into the brain. And even if you can feed the right bits to the brain, you hit the next philosophical question: what is this autonomous creature supposed to do? Everything a person does is ultimately tied to their evolutionary purpose. You eat because you are hungry. You create societies to make it easier for humans to survive. All of the adaptability of the human brain is tied to its evolutionary purpose. What will be the goal of this artificial brain? You still have the same problem with a biologically inspired, evolutionarily inspired artificial system, but you can control the evolutionary constraints. Maybe the creature doesn't need a brain? That is OK; it may still have interesting properties that encourage its survival. The computer science AI research community has tunnel vision as it relates to AI: "the human brain, the brain, the brain." Stop and think: what is the brain? What is a human? We are really a collection of cells and bacteria, all wrapped in a nice protective package. Most of the individual cells in the human body are interesting on their own; the brain cells are not that much more interesting than the skin cells or blood cells or anything else.
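
"You can control the evolutionary constraints" is the key move: the programmer never writes the creature's goal, the environment defines it. A toy sketch of that idea (all names invented for illustration; genomes are bit lists, the "environment" is a bit pattern): when the environment flips, the same blind mutate-and-select loop re-adapts the population with no brain and no programmed purpose.

```python
import random

random.seed(0)

def evolve(population, environment, generations=200, rate=0.05):
    """Blind selection against whatever environment is supplied."""
    for _ in range(generations):
        # Fitness is defined by the environment, not by the programmer.
        population.sort(key=lambda g: sum(b == e for b, e in zip(g, environment)),
                        reverse=True)
        survivors = population[: len(population) // 2]
        children = [[1 - b if random.random() < rate else b
                     for b in random.choice(survivors)]
                    for _ in range(len(population) - len(survivors))]
        population = survivors + children
    return population

pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(40)]
warm = [1] * 10            # one environmental condition
pop = evolve(pop, warm)    # population adapts toward "warm"
cold = [0] * 10            # the constraint changes; so does the "purpose"
pop = evolve(pop, cold)    # the same loop re-adapts toward "cold"
best = max(pop, key=lambda g: sum(b == 0 for b in g))
print(best)  # the population has re-adapted toward the new environment
```

The creature's "goal" is whatever the controlled environment rewards, which sidesteps the "what is this artificial brain supposed to do?" question.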

With most software engineering, and even most AI research, the developer is required to program the behavior into the system. The developer is careful to program a response to every known input. Even if you model the brain and create a close-enough copy of it, the puppet master still has the problem of programming and training inputs that only this particular brain can respond to. You have reached the zenith of AI, but now you have hit a wall trying to train and feed information to the brain. You are essentially programming the brain with known inputs. With a good biologically inspired model that evolves behavior and operates autonomously, completely independent of the "creator," you don't program any behavior (as much as that is possible). If you run the system for 20 or 100 years, we may not know what type of behavior emerges. These systems should have a start button but no kill switch; killing the system means you start all over, and completely new behavior emerges. In theory, the brain model and the bottom-up biological model are similar: you expect emergent behavior. But evolutionary design creates more emergent behavior than starting at the brain and watching what happens next.
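
A classic concrete case of "no programmed responses, yet behavior emerges" is Langton's ant, from the same Christopher Langton linked in the resources. The system has exactly two rules and no inputs from a developer, yet after roughly ten thousand steps the ant spontaneously starts building a repeating diagonal "highway." A minimal sketch (the grid is an unbounded set of black cells; on a white cell the ant turns right, on a black cell it turns left, flipping the cell and stepping forward each time):

```python
def langtons_ant(steps):
    black = set()        # cells currently flipped to black
    x, y = 0, 0          # ant position
    dx, dy = 0, 1        # facing "up"
    for _ in range(steps):
        if (x, y) in black:
            black.remove((x, y))     # black cell: flip to white...
            dx, dy = -dy, dx         # ...and turn left
        else:
            black.add((x, y))        # white cell: flip to black...
            dx, dy = dy, -dx         # ...and turn right
        x, y = x + dx, y + dy        # step forward
    return black, (x, y)

black, pos = langtons_ant(12000)
print(len(black), pos)
```

Nobody programmed the highway; it is not in the two rules, it emerges from them. That is the kind of payoff you hope for from a start-button-and-wait system.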



Resources

[1] http://en.wikipedia.org/wiki/Avida - Artificial life software platform

[2] http://en.wikipedia.org/wiki/Artificial_life

[3] http://en.wikipedia.org/wiki/Christopher_Langton

[4] googlecode/bottom-alife-demo.zip

-- Ron Paul 2012
