Physical Review Letters 54 (1985) 735-738.
The paper was published in 1985, long before Stephen Wolfram completed his most prominent work, A New Kind of Science. That is not only a book but also a very broad, comprehensive and multidisciplinary program which, according to the author's intentions, should essentially change the way science is practiced. The book covered most of the ideas previously expressed in separate papers, and the author himself admitted that it is better to read the book rather than the articles. Nevertheless, there are at least two reasons why this text is still interesting: it touches on the crucial problems of computation in modern physics, and it is much shorter than the corresponding chapter of the book ("Fundamental Physics", about 100 pages).
According to Wolfram, there is a close correspondence between physical processes and computations. At the very least we can say that physical processes seem to be computable, and theoretical physics aims to discover and properly formulate the algorithms that represent those processes. Algorithms, however, would be of little use if we did not discover "shortcuts", i.e. the equation or set of equations which describes a particular physical process and lets us feed in initial data to obtain the results. It is almost universally accepted among physicists that such shortcuts are common in nature and that we should keep looking for them. Wolfram's thesis is different.
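The idea of a "shortcut" can be illustrated with a toy example (my own illustration, not from the paper): the sum 1 + 2 + ... + n can be obtained by simulating the process step by step, or in a single stroke by Gauss's closed-form formula n(n+1)/2.

```python
def sum_stepwise(n):
    # "Simulate" the process: add the numbers one by one, taking n steps.
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_shortcut(n):
    # The "shortcut": a closed-form formula, one step regardless of n.
    return n * (n + 1) // 2

# Both agree, but the second needs no step-by-step simulation.
assert sum_stepwise(1000) == sum_shortcut(1000)
```

A computationally reducible process is one that admits such a formula; Wolfram's suspicion is that most processes in nature do not.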
Comparing a physical process to a computer, Wolfram argues that there are computations, normally executed on a computer, which are "irreducible". Irreducibility means that although we know the "program", i.e. the instructions loaded into the computer in order to carry out the computation, we cannot predict the result by any kind of shortcut; we have to execute the whole computation step by step. Computational irreducibility is common in mathematics and in computation theory, but Wolfram claims that it is also common in theoretical physics, although it has not been noticed so far.
Wolfram distinguishes three kinds of problems which can be submitted to the "universal computer" in order to get a solution. The first, denoted P, are those solvable in time polynomial in the size of the input, and are therefore considered tractable. The second, PSPACE, are those that can be solved with polynomial storage capacity, but may require exponential time, and so are in practice effectively intractable. The third, NP, consist in identifying, among an exponentially large collection of objects, those with some particular, easily testable property. A computer that could follow arbitrarily many computational paths in parallel could solve such problems in polynomial time; for actual computers they may require exponential time and are thus, in practice, intractable. According to Wolfram, computational irreducibility may be widespread, even (or perhaps especially) among systems with very simple structure. Cellular automata constitute the best examples: a very simple rule coded into the automaton may lead to irreducibly complex structure.
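A minimal sketch of such an automaton (my own illustration, not code from the paper) is Wolfram's elementary rule 30, here with periodic boundaries: the update rule fits in one line, yet no shortcut is known for predicting row t other than computing all the rows before it.

```python
def rule30_step(cells):
    # One synchronous update of elementary cellular automaton rule 30:
    # new cell = left XOR (center OR right), with periodic boundaries.
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

# Start from a single black cell and iterate; the trivially simple rule
# above produces an irregular, seemingly random triangle of cells.
width, steps = 31, 15
row = [0] * width
row[width // 2] = 1
for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

Running the script prints the familiar rule-30 pattern; the point is that, as far as anyone knows, there is no formula playing the role of Gauss's shortcut here.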
Do we have examples of such structures in physical systems? Wolfram argues that we do, and points to the following examples: electric circuits, hard-sphere gases with obstructions, networks of chemical reactions, chaotic dynamical systems. Finding the minimum-energy conformation of a polymer is in general NP-complete with respect to its length. Finding a configuration below a specified energy in a spin glass with particular couplings is similarly NP-complete. The examples can be multiplied. Wolfram in fact expresses the most important suspicion: it may turn out that computationally reducible systems, and the so-called "shortcuts" which have been discovered so far, are very rare examples of structures in the world.
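The spin-glass example can be made concrete in a toy sketch (my own illustration of the asymmetry, not from the paper): evaluating the energy of a given configuration is fast, while finding a configuration below a given energy may, as far as we know, require searching through all 2^n spin assignments in the worst case.

```python
import itertools

def energy(spins, couplings):
    # Ising spin-glass energy: E = -sum over pairs of J_ij * s_i * s_j.
    # Checking a GIVEN configuration is cheap (polynomial in system size).
    return -sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

def find_below(threshold, n, couplings):
    # Brute-force search: no general method essentially better than
    # trying all 2**n configurations is known for the worst case.
    for spins in itertools.product((-1, 1), repeat=n):
        if energy(spins, couplings) < threshold:
            return spins
    return None

# A tiny frustrated triangle: three antiferromagnetic couplings that
# cannot all be satisfied at once, so the ground-state energy is -1,
# not the naive -3.
couplings = {(0, 1): -1.0, (1, 2): -1.0, (0, 2): -1.0}
print(find_below(0.0, 3, couplings))   # a configuration with E = -1
print(find_below(-2.0, 3, couplings))  # None: no configuration is that low
```

Verifying a proposed answer is easy, finding one seems hard; that asymmetry is exactly what places the problem in NP.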
The observation that in computation theory and in mathematics we meet problems which are undecidable or intractable is not new. We have known about it at least since Gödel formulated his famous theorems, or since Turing presented his conception of the universal computer. However, the thesis that those unsolvable problems represent systems actually existing in the physical world is something new. New, too, seems the suspicion that the theory of cellular automata can be the closest representation of such systems. Both hypotheses are also quite courageous. Why courageous? Because they seem very hard to prove, or even to make probable. When scientists find a shortcut which solves a significant set of problems, the only thing that can be said is that there exist solvable, computationally reducible problems in nature. The thesis that the whole of nature has this special feature, that it can be described by computationally reducible processes, is an extrapolation. The adverse experience, i.e. the conclusion that there are processes for which a fitting shortcut has not been found yet despite many attempts, does not even let us extrapolate about computationally irreducible problems in nature. This seems to be purely philosophical speculation, and I can hardly imagine what kind of facts could falsify the hypothesis. Nevertheless it is still very interesting and worth considering, especially as it goes against the current. Another question is whether, on the basis of such suspicions, one can really construct a new kind of science.