ACEnetica

Discussion about the use of self-organisation for automatically "programming" networks of processing nodes.

Monday, August 28, 2006

Burden of proof

There is an article by Marcus du Sautoy entitled Burden of Proof in this week's New Scientist, which discusses whether a computer generated mathematical proof can be considered to be a proof at all, especially since some such proofs are so lengthy that they cannot be checked by a human. A well-known example of this type of proof is the proof of the 4-colour map theorem.

My own view on this issue is that a computer generated proof has exactly the same status as a human generated proof. The difference is only one of degree: how much assistance the human brain is given in generating the proof. A totally unassisted human would have to do the whole proof mentally, which severely limits the length of proofs that are accessible. A human with the typical assistance that is allowed in an examination room (i.e. pen and paper) has the luxury of at least being able to write things down, which allows much longer proofs to be generated reliably. The mechanics of generating a proof then reduce to using well-defined rules to manipulate symbolic expressions, where pen and paper are used as a medium for representing these symbols, and the rules are implemented in the human brain.

The degree of assistance in generating a proof can be taken one stage further by using a computer to implement some or all of the rules for manipulating the symbolic expressions, rather than implementing all of the rules in the human brain. This seems to be a fairly radical step to take, because hitherto the only part of the proof that was "outside" the human brain was its "dumb" representation using pen and paper, whereas the "clever" bit involving the implementation of rules to manipulate this representation was "inside" the human brain.

Let us consider what these rules of manipulation actually are. Effectively, they define a procedure for taking an initial expression constructed out of symbols and repeatedly operating on it using the rules until the required final expression is generated. The cleverness lies in constructing the set of rules, and a human is the best source of that cleverness. There is no cleverness in the repeated application of the rules; all that is required is that they are applied reliably, which is where a computer excels, especially if the proof has many steps.
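To make this concrete, here is a minimal sketch in Python of what "repeatedly operating on an expression using rules" looks like. It is purely illustrative and not tied to any particular proof system: the expression encoding and the three simplification rules are assumptions made for this example. The point is only the division of labour described above, with the rules supplied in advance and the machine doing nothing but applying them.

```python
# A minimal sketch of mechanical rule application (illustrative only; the rule set
# and expression encoding are assumptions made for this example, not any standard
# proof system). Expressions are nested tuples, e.g. ("+", ("+", "a", "0"), "b")
# stands for (a + 0) + b. The "clever" part is choosing the rules; the computer's
# job is merely to apply them reliably until none of them fires any more.

RULES = [
    (lambda e: e[0] == "+" and e[2] == "0", lambda e: e[1]),  # x + 0 -> x
    (lambda e: e[0] == "+" and e[1] == "0", lambda e: e[2]),  # 0 + x -> x
    (lambda e: e[0] == "*" and e[2] == "1", lambda e: e[1]),  # x * 1 -> x
]

def rewrite_once(expr):
    """Apply the first matching rule found anywhere in the expression tree."""
    if isinstance(expr, tuple):
        for matches, transform in RULES:
            if matches(expr):
                return transform(expr), True
        op, left, right = expr
        for i, sub in ((1, left), (2, right)):
            new_sub, changed = rewrite_once(sub)
            if changed:
                new_expr = list(expr)
                new_expr[i] = new_sub
                return tuple(new_expr), True
    return expr, False

def normalise(expr, max_steps=100):
    """Repeatedly rewrite until no rule applies (or a step limit is reached)."""
    for _ in range(max_steps):
        expr, changed = rewrite_once(expr)
        if not changed:
            break
    return expr

if __name__ == "__main__":
    # ((a + 0) * 1) + 0  reduces step by step to  a
    print(normalise(("+", ("*", ("+", "a", "0"), "1"), "0")))  # prints 'a'
```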

Use a human to define the rules of manipulation, and use a computer to implement these rules. This approach seems to me to be entirely uncontroversial, and it is exactly how computer generated proofs are done. Note that software bugs in the computer part of the proof are dealt with in an analogous way to "software" bugs in the human part of the proof, i.e. try a variety of approaches on a variety of platforms.

Interestingly, I wrote a paper (see here)

Luttrell S P, 1999, Self-organised modular neural networks for encoding data, in Combining Artificial Neural Networks: Ensemble and Modular Multi-Net Systems (Perspectives in Neural Computing, Springer-Verlag, ed. Sharkey A J C), pp. 235-263

in which I used a computer (running the symbolic algebra program Mathematica) to help me derive some results on the optimisation of vector quantisers so that their Euclidean distortion was minimised. This optimisation was done non-parametrically in order to discover the globally optimal solution. The only way that I have found to do this type of optimisation involves intermediate expressions that can have a large number of terms, but which eventually lead to results that can be quite compact, and which are thus interesting. The only sensible way of handling this type of derivation is to automate it by using a computer to implement the rules of manipulation (i.e. using symbolic algebra). As far as I can tell, other people have not taken up my idea of using symbolic algebra to derive this type of result; my paper went down like a lead balloon, which is a great pity.
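For readers who want a feel for what this kind of machine-assisted derivation looks like, here is a toy sketch in Python using SymPy. It is not the Mathematica derivation from the paper, and the problem is an assumption chosen for illustration: a 2-cell scalar quantiser for a uniform source on [0, 1], optimised exactly by setting all derivatives of the Euclidean distortion to zero and solving symbolically.

```python
# A toy sketch (in Python/SymPy, not the Mathematica derivation from the paper) of
# using symbolic algebra to optimise a quantiser exactly. The problem here is an
# assumption chosen for illustration: a 2-cell scalar quantiser for a uniform
# source on [0, 1], with cell boundary t and code values c1, c2.

import sympy as sp

x, t, c1, c2 = sp.symbols("x t c1 c2", real=True)

# Mean squared (Euclidean) distortion: [0, t] is coded as c1, [t, 1] as c2.
D = sp.integrate((x - c1) ** 2, (x, 0, t)) + sp.integrate((x - c2) ** 2, (x, t, 1))

# Stationarity: all derivatives with respect to the free parameters vanish.
eqs = [sp.diff(D, v) for v in (c1, c2, t)]

for sol in sp.solve(eqs, (c1, c2, t), dict=True):
    print(sol, "distortion =", sp.simplify(D.subs(sol)))

# The globally optimal stationary point is c1 = 1/4, c2 = 3/4, t = 1/2, with
# distortion 1/48; the other solutions have an empty cell (t = 0 or t = 1).
```

Even in this tiny example the intermediate algebra is routine but tedious, while the final answer is compact; scaled up to the non-parametric optimisations in the paper, the intermediate expressions become unmanageable by hand, which is exactly why the derivation had to be automated.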

Update: OK, I can see that the Mathematica notebook in which I derive the results published in the above paper (see here) has now been downloaded 22 times (as of 31 December 2006). It would be interesting to hear your comments on it. Maybe you think it is just a load of algebraic twiddling that is not much use in the real world, but that conclusion would ignore the fact that exact algebraic solutions to toy problems are a powerful source of insight for interpreting approximate numerical solutions to real world problems.