ACEnetica

Discussion about the use of self-organisation for automatically "programming" networks of processing nodes.

Thursday, September 22, 2005

Reductionism versus emergentism

The issue of reductionism versus emergentism amounts to whether you model your experimental observations in terms of underlying causes (this is reductionism), or whether you model the experimental observations directly in terms of one another (this is emergentism). There are "wars" waged between the factions that support one or the other approach, with the reductionists accused of being "coldly scientific" and the emergentists accused of being "vague and mystical". I think this polarisation of attitudes is silly; it seems that people simply like to be partisan and to spoil for a fight.

A concrete example from physics is the modelling of experimental scattering amplitudes using an underlying quantum field theory (QFT), or modelling them directly using an S-matrix approach. QFT (reductionism) predicts the scattering amplitudes from the behaviour of underlying degrees of freedom, whereas the S-matrix approach (emergentism) models the observations directly by defining inter-relationships between the scattering amplitudes.

Reductionism and emergentism are really the same thing, except that they have a different view about what the fundamental degrees of freedom are. Reductionism explains experimental observations in terms of degrees of freedom that are at a deeper level than the experimental observations, whereas emergentism regards the deepest level as being the level at which the experimental observations are made.

A really good book that gives you an informal feel for the issues surrounding reductionism and emergentism is A Different Universe: Reinventing Physics from the Bottom Down by Robert B. Laughlin. This book explains why it may not always be useful to use a reductionist approach. Essentially, the up-side of a reductionist model is that it reduces everything to simpler, deeper degrees of freedom, but the down-side is that you have to put in the effort to derive everything from these deeper degrees of freedom. Unfortunately, the amount of effort can be prohibitive, in which case the reductionist model explains the observations "in principle", but "in practice" it is too computationally expensive to be useful.

How is all this relevant to self-organising networks?

There are two general classes of network model, called "generative" and "recognition" models (a toy sketch contrasting the two follows this list):
  1. The generative class of models are reductionist, because they explain experimental observations in terms of underlying causes (or hidden variables). Usually a generative model is primed in a fair amount of detail, so it contains quite a lot of prior knowledge.
  2. The recognition class of models are emergentist, because they compute whatever they need without reference to underlying causes. Usually a recognition model is primed only loosely, so it contains little prior knowledge.
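
As an aside, here is a minimal sketch of the contrast in Python (numpy). It is my own illustration rather than anything from the models discussed here: the "generative" half posits two hidden Gaussian causes and fits them with a few EM steps, while the "recognition" half uses plain online vector quantisation with no hidden variables at all. All of the numbers (the sources, the noise levels, the learning rate) are made up for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Observations: a mixture of two sources, but the sources themselves are
    # never shown to either model.
    data = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(2.0, 0.5, 200)])

    # Generative (reductionist) view: posit two hidden Gaussian causes with
    # unknown means, and fit them with a few EM steps.  The assumption that
    # there are exactly two causes is the prior knowledge being primed in.
    means = np.array([-1.0, 1.0])
    for _ in range(20):
        # E-step: responsibility of each hidden cause for each observation
        logp = -(data[:, None] - means[None, :]) ** 2
        resp = np.exp(logp - logp.max(axis=1, keepdims=True))
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate the hidden causes from the responsibilities
        means = (resp * data[:, None]).sum(axis=0) / resp.sum(axis=0)

    # Recognition (emergentist) view: no hidden causes, just code vectors that
    # self-organise to summarise the observations directly (online k-means).
    codes = rng.choice(data, 2)
    for x in rng.permutation(data):
        w = np.argmin(np.abs(codes - x))   # winning code vector
        codes[w] += 0.05 * (x - codes[w])  # move it towards the observation

    print("generative means  :", np.sort(means))
    print("recognition codes :", np.sort(codes))

Both end up with summaries near -2 and +2, but the generative model gets there by explaining the data in terms of hidden causes, whereas the recognition model never steps below the level of the observations themselves.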

Reductionist and emergentist network models lie at two extremes of a spectrum of possibilities. However, most networks fall close to one or other of these extremes, because each approach has its own "mindset" that tends to repel the other.

The emphasis in this blog is to start with the emergentist approach to self-organising networks (SONs), because this approach introduces less prior knowledge, and so is more "hands off" in its analysis of data. It is possible, although not guaranteed, that self-organisation can then be used to automatically discover the sorts of underlying degrees of freedom that are used in the reductionist approach.
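
To make that hope a little more concrete, here is another small sketch (again my own illustration, with made-up numbers): observations are generated from a single hidden direction, and a one-unit network trained with Oja's rule, a purely emergentist update that never refers to the hidden cause, self-organises its weights onto that direction (up to sign), i.e. it rediscovers the underlying degree of freedom.

    import numpy as np

    rng = np.random.default_rng(1)

    # Data with one hidden cause: each observation is a fixed direction scaled
    # by a hidden variable, plus noise.  The direction is never shown to the
    # network.
    true_direction = np.array([0.6, 0.8])
    hidden = rng.normal(0.0, 1.0, 5000)
    obs = hidden[:, None] * true_direction + rng.normal(0.0, 0.1, (5000, 2))

    # Oja's rule: a single linear unit whose weight vector self-organises,
    # converging towards the leading principal direction of the data.
    w = rng.normal(0.0, 0.1, 2)
    for x in obs:
        y = w @ x
        w += 0.01 * y * (x - y * w)

    print("hidden direction :", true_direction)
    print("learned weights  :", w / np.linalg.norm(w))  # matches up to sign

In the language of this post, the learned weight vector is a reductionist degree of freedom that has emerged by self-organisation.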

If it all goes to plan then reductionism will emerge from emergentism by a process of self-organisation. This sets the scene for my research programme into self-organising networks.
