Mission to build a simulated brain
In New Scientist a while ago there was an article entitled "Mission to build a simulated brain begins", which describes "An effort to create the first computer simulation of the entire human brain, right down to the molecular level...". It goes on to say "The hope is that the virtual brain will help shed light on some aspects of human cognition, such as perception, memory and perhaps even consciousness", and so on.
"...simulation of the entire human brain"?!!
- That's rather a lot of neurons (something like 10^11 of them), and a mind-boggling number of connections (perhaps 10^14 synapses). And that's not even going down to the molecular level. The state of the art is very limited compared to the goals of this project, so I assume that the report is slightly inaccurate.
- For instance, there are detailed computational models of how neurons function, and how they interact in small assemblies; there is the NEURON software for running these simulations. At a much higher level there are also general architectural models of whole regions of the brain.
- In a few cases there are models that span several scales, all the way from the bottom level up to a fairly high level. I am thinking in particular of the visual cortex, where we have the luxury of addressing the brain directly via the retina (which is effectively a layer of brain tissue). This has led to a neural model of visual processing that is very good relative to what we have for other areas of the brain.
"...and perhaps even consciousness"?!!
- When I hear the "C" word I reach for my gun! People routinely use the "C" word to inject an air of mystery and importance into what they are saying. The fact is that it is a non-word that is used by people who haven't seriously reflected on what goes on inside their heads.
- One book (amongst many) that I have found very insightful on this issue is The Artful Universe Expanded by John Barrow, in which he lucidly explains how evolution ensures that the universe imprints itself on the way that we think. This means that the thing we think with is not as free-thinking as we would like to believe. Is it at all surprising that we feel as if there is an inner light (call it "consciousness" if you want)?
- Isn't it obvious that evolution ensures that you (whatever you are, human, fish, ant, or whatever) have a sense of "self" to encourage you to survive and reproduce, or have a sense of "host" to make you sacrifice yourself to save the nest? There will be "degrees" of "self" according to the "amount" of "processing" that is used to "implement" it. Yes, there are inverted commas everywhere because the terminology is not precisely defined.
What sort of brain studies should be given more prominence?
There is a lot of research going on near the bottom level, such as modelling neural chemical processes, modelling the dynamics of membrane potentials, modelling the interaction of small assemblies of neurons, etc. This is all very good reductionist work that potentially will lead to a deeper understanding.
However, I can't see much work going on that studies the theory of low-level brain-like processing, although there is a lot of neural modelling work. What do I mean by that? Brain-like processing is massively parallel, which needs its own special kind of "algorithm" that can be distributed across a large number of small interconnected processors. For really large parallel systems (e.g. brains) you can't go in and individually program every processor, but you can envisage the processors having inbuilt behaviours (i.e. a bootstrap program) that allow them to sort out their collective behaviour (i.e. self-organise) to do something macroscopically useful. We certainly need a lot more work on the theory of this type of massively parallel self-organising system.
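As a toy illustration of what I mean by inbuilt behaviours that self-organise (this is my own minimal sketch, not anything from the project being reported), here is a tiny 1-D Kohonen-style self-organising map in Python: every unit runs exactly the same local rule, yet the population as a whole ends up topographically ordered over the input range, with no central programmer and no external supervisor.

```python
import random

NUM_UNITS = 10
weights = [random.random() for _ in range(NUM_UNITS)]   # each unit's "preferred" input value

def train_step(x, rate=0.1):
    # Every unit obeys the same local rule: the unit whose weight is closest to
    # the input wins, and the winner plus its immediate neighbours move towards x.
    winner = min(range(NUM_UNITS), key=lambda i: abs(weights[i] - x))
    for i in (winner - 1, winner, winner + 1):
        if 0 <= i < NUM_UNITS:
            weights[i] += rate * (x - weights[i])

for _ in range(5000):
    train_step(random.random())

# After training the weights are (almost always) monotonically ordered along the line
# of units: macroscopic structure that no individual unit was programmed to produce.
print(weights == sorted(weights) or weights == sorted(weights, reverse=True))
```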
There is a vast amount of research into what is called "neural networks", but which is nothing of the sort. As with the "C" word above, the "NN" phrase is used to gild otherwise rather mediocre research to make it seem more sophisticated than it is. A better phrase to describe most of this research is "non-linear adaptive filtering". The small subset of "NN" research that uses discrete firing events, rather than firing rates, is much better qualified to be called "NN".
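To make the firing-rate versus firing-event distinction concrete, here is a minimal leaky integrate-and-fire sketch (the parameter values are my own illustrative choices, not taken from any particular published model); the unit's output is a list of discrete spike times, not a smoothly varying activation.

```python
def lif_neuron(input_current, dt=1.0, tau=20.0, threshold=1.0):
    """Return the time steps at which the neuron emits a spike (a discrete firing event)."""
    v = 0.0                                # membrane potential
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)        # leaky integration of the input
        if v >= threshold:                 # threshold crossing = a discrete firing event
            spikes.append(t)
            v = 0.0                        # reset after the spike
    return spikes

constant_drive = [0.08] * 200
print(lif_neuron(constant_drive))          # a train of (roughly) regularly spaced spike times
```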
There is the important distinction between supervised and unsupervised training:
- Supervised training is usually used to guide an adaptive filter so that it approximates an externally specified output. From the point of view of brain-like processing this is not a very interesting type of adaptation.
- Unsupervised training is much more interesting because it does not require an output to be externally specified, so the adaptive filter must discover for itself something "interesting" about its input; this is like answering an open-ended question, which is much more challenging than a question with a known answer. From the point of view of brain-like processing this is a very interesting type of adaptation (see the toy example after this list).
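Here is a toy example (my own, not from any particular paper) of what "discovering something interesting about its input" can mean in practice: a single unit trained with Oja's Hebbian rule finds the dominant direction of correlation in its input entirely by itself, with no externally specified target anywhere in the loop.

```python
import math
import random

w = [random.gauss(0.0, 0.1), random.gauss(0.0, 0.1)]    # the unit's weight vector
rate = 0.01

for _ in range(20000):
    # The inputs are stretched along the (1, 1) diagonal; that correlation is the
    # "interesting" structure the unit has to discover for itself.
    s = random.gauss(0.0, 1.0)
    x = (s + random.gauss(0.0, 0.2), s + random.gauss(0.0, 0.2))

    y = w[0] * x[0] + w[1] * x[1]                        # the unit's output
    for i in range(2):
        # Oja's rule: a Hebbian term minus a decay term that keeps the weights bounded.
        w[i] += rate * y * (x[i] - y * w[i])

norm = math.hypot(w[0], w[1])
print([round(wi / norm, 2) for wi in w])                 # close to +/-(0.71, 0.71), i.e. the diagonal
```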
Too much time is spent on supervised training by people doing what they think is true "NN" research. They are kidding themselves. How do they expect very large networks to be trained when all the external supervisor can do is to guide the network outputs? Is supervised training really going to help the deep inner workings of the network to learn anything useful? Wouldn't it be much better to distribute throughout the network the ability to discover "interesting" structure? Doesn't this sound rather like what unsupervised training is aiming to do anyway?
In response to "Mission to build a simulated brain begins", I want to see more theoretical work, especially on abstract brain-like processing and (more specifically) on the unsupervised training of neural networks that use discrete firing events, which would then guide the modelling work that they hope to do anyway. To understand what a large network simulation is doing you need a theory that is formulated at an appropriate level. It is not always a good idea to drill down to the bottom level of modelling (see my comments on reductionism versus emergentism).
4 Comments:
Regarding training neural networks: a growing human goes through stages of mental training at different ages; the environment, including their parents, provides the training, and the brain is wired to use this information to train different parts of itself. No real or simulated brain can work or learn correctly by itself. The brain cannot learn when isolated from the environment; it must be immersed in it.
I guess you are commenting on my remarks about the relative merits of supervised versus unsupervised training.
Unsupervised training is a generalisation of supervised training, because not only does it expose (some parts of) the network to the external world, but also it encourages the internal parts of the network to behave in non-trivial ways that are not strictly required if all you want to do is to fit the external data.
A good unsupervised network will not only fit the external data, but also arrange its internal codings so that they are "useful" in other ways. This allows an unsupervised network to be applied not only to the original problem on which it was trained, but also applied to other related problems.
A good example is the training of the visual cortex so that it encodes visual information in useful ways. This visual coding is learnt in very early life by exposure to a rich visual environment, after which the coding becomes "frozen", and it then acts as a stable platform on which to build higher level codings, and so on. This is a good example of the staged training of an unsupervised network.
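Here is a minimal sketch of that staged idea (my own toy example, with invented data): stage 1 learns an unsupervised coding of the input, the coding is then frozen, and stage 2 builds a separate readout on top of that stable platform for a different task.

```python
import random

# ---- Stage 1: learn an unsupervised coding (simple competitive learning) ----
prototypes = [0.0, 3.0, 7.0, 10.0]                 # spread-out starting prototypes

def code(x):
    """The coding: the index of the nearest prototype."""
    return min(range(len(prototypes)), key=lambda k: abs(prototypes[k] - x))

for _ in range(10000):
    x = random.choice([1.0, 4.0, 6.0, 9.0]) + random.gauss(0.0, 0.3)
    k = code(x)
    prototypes[k] += 0.05 * (x - prototypes[k])    # the winning prototype moves towards the input

# ---- Stage 2: the coding is now frozen; a later task just reads it out ----
# The task (invented for illustration) is to label inputs as "low" or "high".
readout = {}
for x, label in [(1.2, "low"), (3.8, "low"), (6.1, "high"), (8.9, "high")]:
    readout[code(x)] = label

print(readout[code(4.3)], readout[code(9.4)])      # expected output: low high
```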
Re last comments
My name is Stephen as well, but
being provided with external input of visual information is a form of training the network, and in a person the input is interactive: when the eye moves, so does the image, so there is a complete feedback loop. It is also accompanied by sound and other sensory modalities. This interactive effect, and the images themselves, are a form of unsupervised training, I think. Trying to produce a network that is preprogrammed to do these things, which in animals are tuned by immersion in an environment, will make for difficult work. Also, the internal structure of the network is predisposed to interpret the inputs in certain ways; the cortex has many specialised areas, each of which requires training to fulfil its proper functionality. I consider the environment the trainer, and without it making an AI do anything like a human is going to be near impossible.
I agree with everything you said.
Just to make sure we are not actually talking at cross purposes, here is my opinion on various issues you raised:
1. Unsupervised networks are really supervised networks, but they have some extra constraints applied to them during training, which makes their internal codes much easier to apply to problems other than the one on which the network was trained.
2. Feedback loops are really important. You need to close the sensor/effector/sensor loop in order to get a network that can live as a good citizen in a control loop.
3. Data fusion is really important. You need to be able to combine inputs from different modalities (e.g. sound, vision, touch, etc). You also need to degrade gracefully when one or more modalities are missing.
4. It is important that networks have specialised areas. It is impossible to train a single network so that it partitions itself into a large number of interconnected specialised processing sub-networks, no matter how rich the training set is. You need to either
(a) manually structure the training process to encourage the processing sub-networks to emerge, or
(b) use genetic programming to run a simulated evolution, and let the genes gradually sort themselves out so that they specify the processing sub-networks (a toy sketch of this loop follows after this list).
5. Human-level AI needs a real environment in the loop. My original posting was to do with trying to short-circuit the need to do a full simulated evolution (see 4(b) above), by replacing it with a compact set of information processing principles. I think these principles have not yet been identified, so IBM's "Mission to build a simulated brain" is premature on that count alone.
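For what it is worth, here is a toy sketch of the shape of option 4(b) (my own illustration; the genome and the fitness function are stand-ins, not a real network specification): a genetic-algorithm loop in which, in a real system, each genome would specify how a network is wired into sub-networks and fitness would measure how well that network performs.

```python
import random

GENOME_LEN, POP_SIZE = 32, 40

def fitness(genome):
    return sum(genome)                     # stand-in for "evaluate the network this genome specifies"

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, p=0.02):
    return [bit ^ 1 if random.random() < p else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]   # keep the fitter half of the population
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(fitness(g) for g in population), "out of", GENOME_LEN)   # usually close to 32
```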