ACEnetica

Discussion about the use of self-organisation for automatically "programming" networks of processing nodes.

Tuesday, October 18, 2005

One face, one neuron

This month's Scientific American has a News Scan article entitled One Face, One Neuron, which describes some interesting results (observed in several experimental subjects) in which a single neuron responded strongly to a fairly abstract concept. That is to say, when an experimental subject was shown several different stimuli that were all related to the same abstract concept, the same neuron fired in each case. It wasn't clear from the article what the rest of the neurons were doing, but there are rather a lot of them, so it isn't possible to measure them all.

This observation is really significant, because it means that the brain maps a highly variable input onto an invariant response (at least as far as that one neuron is concerned): a single neuron fires reliably across a whole range of inputs. No doubt the neural dynamics that lead to this invariant mapping involve many neurons interacting recurrently with each other, with the net effect being the observed firing behaviour of the single neuron.

How is it that the neural dynamics in the brain can self-organise to create these invariant mappings? Presumably they are something like fixed points (or limit cycles) of the dynamics. What properties of the external input are preserved at the output end of these mappings? In other words, what is the neural code used by the brain? Are the same properties of the external input preserved by different brains? If they are, presumably the detailed mappings differ from brain to brain. How much can the external input be varied whilst still being mapped to the same output? Are a few vague clues in the external input enough for the recurrent neural dynamics to home in on the appropriate output?
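The classic toy model of this kind of behaviour is the Hopfield network, in which stored patterns become fixed points of a recurrent dynamics and a corrupted cue relaxes onto the nearest stored pattern. To be clear, this is just my illustration of attractor dynamics, not anything proposed in the article, and the sizes and the `recall` helper below are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random +/-1 patterns using the Hebbian outer-product rule.
n_units, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))
W = patterns.T @ patterns / n_units  # symmetric weight matrix
np.fill_diagonal(W, 0)               # no self-connections

def recall(cue, sweeps=10):
    """Asynchronous sign updates, which settle into a fixed point
    for a symmetric, zero-diagonal weight matrix."""
    state = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n_units):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# "A few vague clues": flip 30 of the 100 units of a stored pattern.
cue = patterns[0].copy()
flip = rng.choice(n_units, size=30, replace=False)
cue[flip] *= -1

restored = recall(cue)
print("overlap with stored pattern:", restored @ patterns[0] / n_units)  # typically ~1.0
```

Every sufficiently mild corruption of the same pattern falls into the same basin of attraction, which is exactly the kind of many-to-one invariant mapping described above, with the "concept" neuron's response read off from the settled state.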

It would be really useful to have a general theory of such self-organising mappings in recurrent networks, wouldn't it?
