Discussion about the use of self-organisation for automatically "programming" networks of processing nodes.

Sunday, November 27, 2005

Problem uploading images to Blogger

Merde, merde, and thrice merde!

For some unknown reason Blogger is still not accepting uploaded images, so I am unable to post any images here until that is sorted out. Maybe this problem is related to the comment about "network maintenance" on the Blogger Status page.

This pretty much makes it impossible for me to post anything meaningful, so this blog will be dormant until this problem is sorted out.

I am more than a little annoyed by this.

Update: My frustration is building to the point where I might start to host all of my images elsewhere, and to include links from my postings here.

Update: I have now discovered that the uploading problem was a 30-second timeout that prevented what would otherwise have been successful uploads from completing. For some reason the first 10 or so images that I uploaded stayed under this 30-second threshold, but subsequent uploads exceeded it. Since I was attempting to upload uncompressed bitmap versions of my images, the solution was simple: compress before uploading. This trick worked on my first attempt, so my earlier Discrete network dynamics. Part 1: Operator theory posting is now complete with its images. I can now bring this blog out of its dormant state.

Wednesday, November 16, 2005

A new kind of science

A few years ago Stephen Wolfram published a book called A new kind of science (NKS). Both the man and the book provoke a strong split in people's opinions. Some claim he is a genius, while others dismiss him as a megalomaniac crackpot. Some claim that the book represents the dawn of a new scientific era, whereas others dismiss it as a derivative work with nothing new to offer. As for me, I don't pay much attention to what people say, because idle chatter is mostly hearsay. I judge things by their demonstrable benefit, and for a book that means I judge it subjectively by the new ideas that it gives me.

I originally read the whole of NKS fairly quickly to get the gist of what it said. I already had many of its concepts on board, which helped this speed-reading along. Since then I have delved into selected areas of the book when I realised how the ideas that they held might be applied to my own work on self-organising networks. Usually, I use this book in "pick and mix" mode, where I mine it for concepts rather than complete solutions.

My subjective assessment of the book is that it is invaluable for me, because it has allowed me to enlarge the toolbox of scientific techniques that I use in my work. Objectively, I am sure that most (and perhaps all) of these ideas were already available somewhere in the literature, but then I am told that needles can be found in haystacks if you look hard enough. The key point is that a lot of interesting ideas are contained in the NKS book, and they are repeated many times just in case you don't get the message the first time. I certainly find that thinking in the NKS cellular automaton style gives me a usefully different viewpoint, which I can use to augment rather than inhibit my understanding.

I don't have much time for the scientific snobs who dismiss SW because of his personal and scientific style. I think it all makes him a rather fascinating character.

Sunday, November 13, 2005

The human connectome

We had the human genome project, and now we have the human connectome project. In this week's New Scientist there is a short article entitled Connectome. I found an on-line paper entitled The Human Connectome: A Structural Description of the Human Brain whose abstract reads:

The connection matrix of the human brain (the human “connectome”) represents an indispensable foundation for basic and applied neurobiological research. However, the network of anatomical connections linking the neuronal elements of the human brain is still largely unknown. While some databases or collations of large-scale anatomical connection patterns exist for other mammalian species, there is currently no connection matrix of the human brain, nor is there a coordinated research effort to collect, archive, and disseminate this important information. We propose a research strategy to achieve this goal, and discuss its potential impact.

This project is much more realistic in its scope than the ridiculously premature Mission to build a simulated brain that I wrote about a while ago. Measuring the connection matrix of the human brain might seem to be mundane and rather like butterfly collecting, but it is essential to collect lots of data in order to constrain the imagination of the scientists who construct models of the brain; without such data we would not be doing science.

I wish them every success with this project. However, knowing the connection matrix does not imply that you understand the brain states that emerge from the corresponding network of neurons. It is not even clear that the connection matrix is a sufficient description of the network of interconnections between neurons. Even if you know the connection matrix, it does not tell you anything about how such a network of connections self-organised in the first place. Which aspects are genetically determined, and which are the result of being driven by sensory inputs?

Are there as yet undiscovered universal principles governing such self-organisation? I'll bet there are.

Thursday, November 10, 2005

Discrete network dynamics. Part 1: Operator theory

This posting is a brief overview of my recent paper entitled "Discrete network dynamics. Part 1: Operator theory", which can be found in arXiv at cs.NE/0511027.

The basic idea is that an MCMC simulation (see a description of MCMC algorithms in a previous posting of mine here) of a joint probability Pr(x), where x is the joint state of an N-dimensional system (e.g. xi is the state of the ith node of an N-node network), generates a sequence of x's (i.e. x(1), x(2), ..., x(t), ...) that are sampled from Pr(x). x(t+1) is generated from x(t) by an operation that holds some of the components of x(t) fixed, whilst stochastically updating the rest of the components. The MCMC update operation is chosen so that if x(t) is sampled from the correct joint probability then so also is x(t+1). This was discussed in detail here.

The MCMC update operation is particularly simple if the state of only one of the nodes of the network is updated at a time, so that x(t+1) differs from x(t) in only one of its vector components. Let us assume that it is the ith component that is updated; the MCMC update operation then consists of two simpler operations:

  1. Erase the state xi. This is an unconditional erase operation, in which the state is erased with no memory of what it used to be.
  2. Randomly rewrite the state xi. This is a conditional rewrite operation, in which the state is drawn from the conditional probability Pr(xi│x\xi).
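The two-step update above can be sketched for a toy 2-node network with binary node states. The joint probability table below is made up for illustration, and the helper names are mine, not the paper's:

```python
import random

# Hypothetical joint probability Pr(x) for a 2-node network with binary states.
pr = {
    (0, 0): 0.1,
    (0, 1): 0.2,
    (1, 0): 0.3,
    (1, 1): 0.4,
}

def conditional(i, x):
    """Pr(xi | x\\xi): renormalise the joint over the possible states of node i."""
    weights = {}
    for v in (0, 1):
        y = list(x)
        y[i] = v
        weights[v] = pr[tuple(y)]
    z = sum(weights.values())
    return {v: w / z for v, w in weights.items()}

def mcmc_step(x, rng):
    """One MCMC update: pick a node, erase its state, conditionally redraw it."""
    i = rng.randrange(len(x))   # step 1: the erased state leaves no memory
    p = conditional(i, x)       # step 2: redraw from Pr(xi | x\xi)
    y = list(x)
    y[i] = 0 if rng.random() < p[0] else 1
    return tuple(y)

rng = random.Random(0)
x = (0, 0)
counts = {s: 0 for s in pr}
for _ in range(200_000):
    x = mcmc_step(x, rng)
    counts[x] += 1
# The empirical frequencies counts[s] / 200_000 approach Pr(x) as the chain runs.
```

Because each update redraws a single component from its exact conditional, the stationary distribution of the chain is the joint table pr itself.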

The diagram shows an MCMC update in a visually intuitive way for a 2-node network (i.e. N=2). Node 1 is at the top of the diagram, node 2 is at the bottom of the diagram, and time reads from left to right across the diagram. The initial state x1 of node 1 is changed to final state x1´ by the influence of the state x2 of node 2 (which is itself unchanged in going from initial to final state). The solid lines from left to right indicate the evolution of the states of nodes 1 and 2. The vertical dashed line indicates the influence of the state of node 2 on the state of node 1, and the probability of the final state of node 1 is indicated as Pr(x1´│x2), which is memoryless because it does not depend on x1.

The above diagram is a Feynman-like diagram describing the annihilation of the state of node 1, and the subsequent creation of a new state for node 1, and the "amplitude" for this process is actually a probability given by Pr(x1´│x2).

This identification allows the MCMC update operator to be written as

Ha ≡ ∑i ai† (∑j aj) (∑k Pr(i│k) bk†bk)

where ai† is the creation operator for state i at node 1, bk† is the creation operator for state k at node 2, bk†bk is the number operator for state k at node 2, ∑k Pr(i│k) bk†bk is the operator for computing the conditional probability factor that weights the MCMC update, and ∑j aj unconditionally annihilates the state at node 1.

Application of the MCMC update operator Ha automatically generates the correct probability-weighted ensemble of possible next states of the 2-node network in which the state of node 1 is updated. The full MCMC update operator is the sum Ha+Hb, where Hb is obtained from Ha by interchanging the "a" and "b" operators (and appropriately changing the conditional probability factor). The Feynman-like diagram version of Ha+Hb is thus a sum of two diagrams as shown below.

The MCMC update operator correctly implements the 2-step MCMC update procedure described earlier. Each node of the network has a state with occupancy 1 that is erased (i.e. annihilated) during step 1 of the MCMC update to become the "vacuum" state with occupancy 0, and the state is rewritten (i.e. created) during step 2 of the MCMC update to become a state with occupancy 1 again. This update process is memoryless because its intermediate state is the "vacuum".
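Restricted to the occupancy-1 sector, this claim can be checked with small matrices. In the sketch below (my construction, with made-up conditional probabilities), each node's vectors live in a 3-dimensional space spanned by the vacuum and the node's two states, Ha is built exactly as defined above, and applying it to a joint state yields the probability-weighted ensemble of next states:

```python
import numpy as np

# A minimal check of Ha in the occupancy-1 sector, assuming each of the
# 2 nodes has 2 states.  Basis per node: {vacuum, state 0, state 1}.
vac, s0, s1 = np.eye(3)
states = [s0, s1]

def create(i):
    """ai† restricted to this sector: |state i><vacuum|."""
    return np.outer(states[i], vac)

def annihilate(j):
    """aj restricted to this sector: |vacuum><state j|."""
    return np.outer(vac, states[j])

def number(k):
    """bk†bk restricted to this sector: |state k><state k|."""
    return np.outer(states[k], states[k])

# Illustrative conditional probability Pr(i|k) for node 1 given node 2.
pr = np.array([[0.7, 0.2],
               [0.3, 0.8]])   # pr[i, k]

erase = annihilate(0) + annihilate(1)            # sum_j aj
Ha = sum(pr[i, k] * np.kron(create(i) @ erase, number(k))
         for i in range(2) for k in range(2))

# Apply Ha to the joint state (node 1 in state 1, node 2 in state 0):
out = Ha @ np.kron(s1, s0)
# out is the probability-weighted ensemble 0.7·|0,0> + 0.3·|1,0>.
```

The intermediate vacuum state is what makes the update memoryless: erase maps either state of node 1 to the same vacuum vector before create(i) acts.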

However, the MCMC update operator goes beyond the standard MCMC updates, because it can also be applied to initial states that have an occupancy of 2 or more. The algebra of the creation and annihilation operators automatically does this generalisation for you, and thus defines a unique generalisation of the MCMC algorithm for multiply occupied states of the network nodes.

The above discussion has focussed on a network with only 2 nodes. The generalisation to an N-node network is straightforward when the probability Pr(x) of the joint state of the network is expanded using the Hammersley-Clifford expansion, as is shown in my paper.
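For reference (my wording of the standard result, not necessarily the paper's notation), the Hammersley-Clifford theorem says that any strictly positive joint probability factorises over the cliques C of the network's dependency graph:

```latex
\Pr(x) \;=\; \frac{1}{Z} \prod_{c \in \mathcal{C}} \psi_c(x_c),
\qquad
Z \;=\; \sum_{x} \prod_{c \in \mathcal{C}} \psi_c(x_c)
```

Expanding Pr(x) clique by clique in this way is what allows the 2-node operator construction to be applied one conditional factor at a time in the N-node case.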

The structure of this MCMC algorithm is similar to a quantum field theory of bosons:

  1. Each state of node i of the network can have a variable occupancy, which corresponds to each state of bosons of type i having a variable occupancy.
  2. An MCMC update causes the multiply occupied state of node i of the network to change by using the neighbouring nodes to influence the hopping of an occupant from one state of node i to another state of node i, which corresponds to a boson of type i scattering from one state to another under the influence of bosons of various other types.

This completes the reformulation of MCMC algorithms using the language of creation and annihilation operators, and opens up a large number of possible lines of enquiry.

One particularly interesting consequence of multiple occupancy MCMC algorithms is self-organising networks, in which the state space of a multiply occupied network node can split into 2 or more smaller state spaces, each of which inherits some of the original node's multiple occupants, so that overall the original node splits into several daughter nodes. I will be saying much more about this later.

Monday, November 07, 2005

Discrete network dynamics series of papers

I am now back after a quiet period during which I was writing my first paper in a forthcoming series of papers on "discrete network dynamics". This first paper is entitled "Discrete Network Dynamics. Part 1: Operator Theory", and I have just uploaded it to arXiv where it is available at cs.NE/0511027. I will be commenting in detail on this paper in due course.

The "discrete network dynamics" series of papers will be about the use of operator techniques borrowed from quantum field theory to implement Markov chain Monte Carlo algorithms, and the emphasis will be on the adaptive cluster expansion network (ACEnet) which is a particular type of self-organising network.