appendix to
EXPLAINING EMERGENCE - towards an ontology of levels

Note:

This appendix is the original (long) section "Recent approaches: complex systems" of the published paper. The publisher found that section a little too long, and it was abbreviated into the section you find in the published paper.


Recent approaches to emergent phenomena

New developments in science have constantly challenged the philosophical approach to emergent phenomena. In the present section we will discuss recent computational approaches to emergence and their implications for the ontology and epistemology of the concept of emergence as defined in the preceding sections. Before doing so, we have to take a look at the historical forerunners of this development, especially the popular notion of 'self-organizing' systems within thermodynamics, cybernetics, and other disciplines.

Since the birth of science there has been a split between the 'physical sciences' concerned with universal properties of the bricks and mortar of the universe, and the 'life sciences' dealing with the much more specific and highly complex phenomena of biological, psychological and social organization. In fact, the modern science of general biology (in contrast to the early fields of natural history, zoology, physiology, anatomy, etc.) came into being in part by the recognition that there existed some general characteristics and principles governing all living matter in contrast to inorganic chemical or physical systems. Apart from the controversy around the vitalistic interpretations of this difference as due to special non-physical 'vital forces', the early biologists of the nineteenth century seemed to agree that organization was one of a set of such unique traits that distinguished both animals and plants from the inanimate world, and that this difference was more fundamental than the animal/vegetable distinction of Natural Philosophy, even though the mechanists held that life potentially could be explained by ordinary chemical and physical laws. The notion of common organizational principles of all living things was just as important for the development of a general concept of life and a science of biology as was the cell theory advanced by Schleiden and Schwann, who considered the cell to be the basic unit of life. In a sense, the 'holistic' idea of organization and the 'atomistic' idea of the cell as the atom of biology coalesced during the ultramicroscopic investigations of cells in the 20th century which revealed that single cells indeed are extremely organized systems from a physical or chemical point of view.

The question of how such a complex organization could emerge for the first time could not be answered deductively from Darwin's theory of evolution or from the neo-Darwinian synthesis, but it was clear that no principles of external design -- as used in accounts of complex technological systems -- could be allowed for in any evolutionary explanation. Thus, the idea of something living and rather complex arising from something less complex (and less living) by processes purely intrinsic to the natural systems themselves continued to intrigue scientists from both sides of the physics/life science split. Seen in the context of early and middle 20th century biology, an adequate physical notion of self-organization was needed to bridge the gap.

In physics, the only indirect measure of 'order' in a system was the thermodynamic concept of entropy. According to the second law of thermodynamics, for an isolated system, the entropy is either constant or rising, and this could be interpreted as saying that in a system left to itself, all particles tend to become disarranged with respect to one another, and the system's capacity for doing work decreases irreversibly. For a long time it was clear to the physicist that, as Joseph Needham expressed it, "it would be quite erroneous to say (...) that because thermodynamic order [decreasing in an isolated system] and biological organisation [realized in open systems 'exporting' entropy to the surroundings; our insertions] are two quite different things, there is any conflict between them" (Needham 1941, p.230; see also Schrödinger 1944). But even though it was clear that increasing biological organization could go hand in hand with increasing entropy, physics could still not specify the necessary and sufficient conditions needed to generate emergent biological structures.

In a popular book dealing with the philosophical implications of non-equilibrium thermodynamics, the physicist Paul Davies (1987) claims that "as more and more attention is devoted to the study of self-organization and complexity in nature so it is becoming clear that there must be new general principles - organizing principles over and above the known laws of physics - which have yet to be discovered" (Davies 1987, p. 142). It is important that Davies emphasizes that such organizing principles do not mean that we have to conceive of them as "deploying mysterious new forces specifically for the purpose", which would be tantamount to vitalism. Though it cannot be excluded a priori that physicists may discover new forces, the collective behaviour of particles may take place entirely through the operation of known interparticle forces. So the organizing principles "could be said to harness the existing interparticle forces", they "need therefore in no way contradict the underlying laws of physics as they apply to the constituent components of the complex system" (p. 143 ibid.). Today a whole branch of research within physics deals with complex dynamical systems (of which 'chaos theory' is an important part) and can be seen as an attempt to find such organizing principles; and related efforts have been made (often in vain) to define the notions of complexity and organization quantitatively (cf. Landauer 1988).

Another line of thought in the context of self-organization is the idea of information as central for understanding living systems. Schrödinger, in "What is Life?" from 1944, hinted at the constitution of the hereditary material as a kind of code script, and since then, especially since the Watson-Crick model of DNA and the elucidation of the genetic code, language-like and informational metaphors have abounded in various fields of biology (Emmeche & Hoffmeyer 1991), partly inspired by information and communication theory, cybernetics, computer science and general systems theory. Many definitions of life have been given by reference to the information processing capacities of living cells, or, in more general terms, by reference to complex systems' need of a kind of (incomplete) 'self-description' in order to select, among the gigantic number of possible combinations of building blocks, the ones specifically needed for maintaining and replicating the system in question. Of course, the problem here is again to give a detailed account of the emergence of such systems, which seem to be governed by a 'complementarity' (cf. Pattee 1977) between a 'dynamic' physical mode and a 'linguistic' informational mode. We cannot, of course, review this literature here, but only observe that there seems to be a growing consensus that the emergence of living systems must be understood both as a self-organizing process in the thermodynamic sense and as a process whereby chemical systems evolve an informational dynamics of a peculiar kind -- such systems are 'informed autocatalytic systems' according to J. Wicken, who has contributed to unifying thermodynamic and informational approaches (Wicken 1987).

Apart from the scientific study of the emergence of life (protobiology) and, in general, the emergence of biological organization of various kinds (as studied by e.g., ecology, developmental biology, evolutionary biology), there have been attempts to develop a General Systems Theory of emergent self-organizing phenomena. These approaches (e.g., Jantsch 1980; Yates 1989) are more philosophical in character and have often been received with certain reservations or even with profound suspicion by the scientific community, which tends to see them as over-generalized speculations ('thermo-fiction' according to Ramon Guardans 1989), even though such approaches, like their general systems theory ancestors, in part build upon new research. Notably, the claim that the same type of self-organizing principles may govern radically different types of systems, or appear on different levels of organization, has not been generally accepted by workers within the particular fields in question.

In the 1960s, there was a general interest in the application of cybernetics, information theory and automata theory to understanding the spontaneous self-generation of order in nature. As Ashby (1962) pointed out, "the computer is heaven-sent in this context, for it enables us to bridge the enormous conceptual gap from the simple and understandable to the complex and interesting" (p. 271, ibid.), and he added with great optimism that "there is no difficulty, in principle, in developing synthetic organisms as complex, and as intelligent as we please" (ibid., p. 273, his emphasis). The cybernetic idea of self-organizing systems was an important forerunner of what we now see in the fields of artificial intelligence, cognitive science, and artificial life; fields that approach emergence by computational means.

The advent of digital calculating machines and the increasing use of computer power to simulate the dynamics of complex systems within the physical and biological sciences have been accompanied by the appearance of new approaches to the study of emergent phenomena, which can be seen in such fields as nonlinear dynamics, cognitive science ('connectionist' modeling of neural networks), mathematics and computer science (cellular automata, genetic algorithms, fractal geometry). The classical phenomena of emergence of physical, biological and mental structures are now better understood within these disciplines, partly because of the greatly extended computational power available for modeling complex processes. It is obvious that one can see this development as the final 'devitalisation' of the concept of emergent wholes, because the emergent phenomenon is usually considered to be merely a 'collective effect' (to use the physical jargon) of a very large number of local, mechanical or computational interactions on the lower levels of the system, interactions that generate coherent behaviour when the system is described by macroscopic parameters. This is especially clear when one considers 'Artificial Life' -- a new field of research that studies biological and other life-like processes by attempting to synthesize -- in the computer, in 'wet chemistry' or in hardware robot-like vehicles -- biological phenomena such as self-reproduction, evolution, metabolic networks or ecological dynamics. Artificial Life is interesting here for two reasons.

First, because the concept of emergence has been introduced explicitly into the scientific research agenda of this new discipline: The 'synthetic' or constructive approach to understanding and explaining life-like phenomena encourages researchers to look for the emergence of higher level behaviour within their model systems (e.g., the 'swarm intelligence' of an ant nest; the schooling behaviour of fish; self-reproduction as a systems property of replication, control and growth; the stable point or cyclic attractors of metabolic networks; the appearance of new organisms in evolution).

In the computer simulations these behaviours are considered to be 'truly emergent' if they are not introduced a priori by control structures 'from above' by the programmer (often regarded as the outdated 'top-down approach' to programming used in classical Artificial Intelligence), but are instead modeled merely by specifying the characteristics of the components, their simple state transitions, and the initial values of the parameters of the simulated environment. And indeed, such highly parallel distributed computational processes seem to be pregnant with emergence; the development of these models often reveals complex and unpredictable patterns of behaviour appearing on the screen as the computer crunches through the simulations. If one reads through the first three conference proceedings (Langton 1989; Langton et al. 1991; Varela & Bourgine 1992), such a high proportion of papers report having found emergent behaviour in computational models of biological or 'life-like' phenomena that one wonders whether the creation of emergents has been added as a criterion of quality for a scientific paper in this very 'constructive' discipline (cf. the claim of Artificial Life to study a more generic class of phenomena than traditional biology; 'life-as-it-could-be', i.e., possibly realized in other material substrates than the carbon-chain chemistry of life on Earth).

Secondly, the 'Strong claim' of a-life research -- that even purely computational structures, organised the right way and with a sufficient degree of complexity, must constitute genuine life, irrespective of the rather unusual 'physical environment', viz. the computer -- has raised some scepticism and, as a spin-off, attempts to define more clearly the criteria for evaluating a system's surprising behaviour as truly emergent. It has been objected that it is not at all clear that the emerging patterns of, for instance, a cellular automaton represent any genuinely new properties, and that the seeming complexity of the simulation might be the result of the perceptual projections of the observer. Like emergence, the related catchword 'complexity', now so popular in science writings (e.g., Pagels 1988, Kauffman 1993), must - as argued earlier - be defined by some other characteristic than its psychological Gestalt quality.

In what follows we shall discuss some attempts to define emergence in a more formal setting, as suggested by Peter Cariani and Nils A. Baas. We shall also advance our own suggestion for a criterion based on ideas from the algorithmic information theory of G. J. Chaitin.

(a) Cariani's definition.

Peter Cariani has criticized the unreflective notion that simulations of self-reproducing, metabolising or evolving structures should be alive when they exhibit common biological criteria for life and display emergent structures. Cariani (1989, 1991) separates three distinct notions of emergence, two more or less intuitive and one exact, and he emphasizes the importance of specifying the frame of reference of the observer when deciding whether a set of computations is really emergent. These three notions of emergence are:

(1) Computational emergence. This is what many researchers of artificial life speak loosely about; it is not well defined, but the idea is, as mentioned, that local micro-deterministic computational interactions together determine the emergence of coherent behavior at the macro-level of the system. To give an example, the popular two-dimensional cellular automaton called "Game of Life", invented by John Conway, may be started with a totally random distribution of 'living' and 'dead' cells in the initial configuration of the tessellation, from which there might 'emerge' within a few generations or computational steps interesting and relatively stable configurations (so-called gliders, beehives, boats, loaves, blinkers, eaters; and even pulsars, flip-flops or glider guns). Other (one-dimensional) cellular automata have been shown by S. Wolfram (1984) to generate patterns that perfectly simulate the beautiful growth patterns of some species of conch shells.
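To make the micro-determinism of this example concrete, a minimal sketch of the Game of Life update rule is given below (in Python; the grid size, the toroidal wrapping of the edges, and the random initial configuration are our illustrative choices, not part of Conway's definition):

import random

N = 32                                   # width/height of the square grid (illustrative choice)
grid = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]  # random start: 1 = 'living', 0 = 'dead'

def step(g):
    """One synchronous generation: every cell looks only at its eight neighbours."""
    new = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            # count living neighbours, wrapping around the edges (toroidal surface)
            s = sum(g[(i + di) % N][(j + dj) % N]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
            # Conway's rules: a living cell survives with 2 or 3 living neighbours,
            # a dead cell becomes living with exactly 3; otherwise the cell is dead.
            new[i][j] = 1 if s == 3 or (s == 2 and g[i][j] == 1) else 0
    return new

for generation in range(100):            # gliders, blinkers etc. may appear within a few steps
    grid = step(grid)

Nothing in the rule mentions gliders or glider guns; such configurations are exactly the macro-level patterns said to 'emerge' from the purely local update rule.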

In general, cellular automata, as well as other models that compute over many parallel interacting units, have been used as tools for modeling complex processes and creating interesting emergent behavior. But according to Cariani, the 'emergent behavior' of a computational model is fundamentally based on ascriptions by the interpreter and hence is not intrinsic to the formal system: "The interesting emergent events that involve artificial life simulations reside not in the simulations themselves, but in the ways that they change the way we think and interact with the world. Rather than emergent devices in their own right, these computer simulations are catalysts for emergent processes in our minds; they help us to create new ways of seeing the world." (Cariani 1991, p. 790). The emergent forms are not real; they do not belong to the system observed. This is because the behaviour of a system simulation can in principle always be predicted in advance by an external observer equipped with an adequate model of the simulation he observes (such a model is in fact equivalent to the simulation itself). Thus Cariani rejects this first, vague concept of emergence with reference to his own definition (given below).

(2) Thermodynamic emergence. This term alludes to various studies of the physics and chemistry of self-organizing systems where it is used in order to describe how stable structures can arise far from thermodynamic equilibrium. One problem here is that the theory of thermodynamics of these systems is not yet fully connected with a biological theory of the appearance of new functions in evolution. Cariani does not discuss this concept much further, and it is as intuitive or non-formal as the related ideas of self-organization in different sorts of systems theory.

(3) Emergence relative to a model. This is Cariani's preferred concept. It defines emergence simply as the deviation of the behaviour of a physical system from an observer's model of it (and it is in this respect similar to Thom's treatment of the concept of fluctuation as being relative to a theory, cf. above). If one is observing a system which changes its internal structure and behaviour in such a radical way that one needs to change one's model to 'track' the system's behaviour in order to continue to predict its actions, then the change is truly emergent relative to the model. Cariani specifies in a systems theoretical manner the preconditions for talking precisely about a system, its states, state transitions and observables.

The crucial point in the case of artificial life simulations (or any other reliable physical implementation of a formal symbol system) is that the observer can in principle always choose his frame of reference so as to be able to fully predict the track (i.e., the state transition trajectory) of the system. The observables can be chosen to coincide with the computable states of a finite state automaton equivalent to the simulation. Because the simulated artificial life system itself is fully determined by the computational state transition rules and the initial 'input' conditions, this system can be simulated and hence precisely predicted by an observer's model. Therefore, according to the definition, simulations will always be non-emergent. This can be seen from the simple fact that by starting the simulation again on the same machine, with the same program and input values, one gets exactly the same state transitions during the simulation, and the same patterns will evolve (even in the case of simulations of stochastic phenomena such as mutations and natural selection, this will also hold when 'noise' is introduced via a pseudo-random number generator subprogram, provided the initial conditions for the second run are the same).
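The point about replaying a 'stochastic' simulation can be illustrated by a small sketch (in Python; the toy 'mutation' process and its parameters are invented for illustration only): with the same seed for the pseudo-random number generator, two runs produce bit-for-bit identical state transition trajectories.

import random

def run_simulation(seed, steps=10):
    """Toy stochastic simulation: 'mutations' flip random bits of an 8-bit genome."""
    rng = random.Random(seed)            # seeding fixes every 'random' choice in advance
    state = [0] * 8
    history = []
    for _ in range(steps):
        site = rng.randrange(len(state)) # pseudo-random 'mutation' site
        state[site] ^= 1
        history.append(tuple(state))
    return history

# identical seed and initial conditions => identical trajectory, patterns and all
assert run_simulation(seed=42) == run_simulation(seed=42)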

This appears to be a strong argument against interpreting any computational model as realizing (and not just modeling) 'emergent' phenomena. However, one should note the peculiar fact that in the case of Artificial Life-style models, i.e., simulations with potentially chaotic (in the mathematical sense of being very sensitive to the initial conditions) and complex behaviour, Cariani's form of 'prediction' will take the somewhat pathological form of a predictive model duplicating the very system itself (this is the 'unsimulatable complexity' of Pagels 1989, p. 101). In such cases, we face a situation known from the study of certain classes of cellular automata as 'computationally irreducible' behaviour (Wolfram 1984): These automata are typically simple deterministic state transition machines (or 'cells') where the next state of each cell is determined by its own present state and the states of the neighbouring cells according to a simple rule table. For some classes of rule tables, the computational evolution of the system cannot be reduced, in the sense of being accounted for by computations simpler than those the system itself executes during the run of the simulation. Thus, no predictive short-cut can be made; one cannot find a nice formula by which one can compute the outcome of the simulation (e.g., what the configuration will be like at step n, whether the automaton will reach a final steady state, or whether it will continue to change states) given only limited information about the automaton rule and the initial conditions. One simply has to wait and see what happens. These automata are surely 'predictable' in Cariani's sense that you can have a model of them (completely equivalent to the automaton you model) and run it the same way, but this model is itself just as complex as the 'system' you wanted to predict, and therefore 'prediction' does not seem to be an apt term in this case. We are facing real complexity here, not complexity merely ascribed by an observer. We can conclude that the behaviour of artificial life models, which may be supposed to be subject to the same limitations as Wolfram's complex classes of cellular automata, should be considered a special case of genuine emergence.
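A sketch of the kind of one-dimensional automaton Wolfram studied may make the notion of a 'rule table' concrete (Python; the grid width, the number of steps and the choice of rule 110 -- a standard example of a rule belonging to Wolfram's complex class -- are our illustrative choices):

RULE = 110                                # the rule number encodes the whole rule table in 8 bits
rule_table = {(a, b, c): (RULE >> (4 * a + 2 * b + c)) & 1
              for a in (0, 1) for b in (0, 1) for c in (0, 1)}

N = 80
cells = [0] * N
cells[N // 2] = 1                         # a single 'living' cell as the initial condition

for t in range(40):
    print(''.join('#' if x else '.' for x in cells))
    # each cell's next state depends only on itself and its two neighbours (wrapping at the edges)
    cells = [rule_table[(cells[(i - 1) % N], cells[i], cells[(i + 1) % N])]
             for i in range(N)]

For rules of this class no short-cut formula is known that yields the configuration at step n from the rule and the initial conditions; one simply has to execute all n steps.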

According to Cariani, a proper concept of emergence should be concerned with the formation of global structures (emerging from the action of local rules or entities) which subsequently constrain and alter local interactions (Cariani 1991, p. 778). It is tempting to see exactly this kind of non-reducible 'downward causation' in an evolutionary perspective, accounting for the action of genetic information as an informational boundary condition constraining the generation of form in phylogeny and ontogeny. However, this aspect of Cariani's theory remains to be developed. A major challenge is to develop a coherent theory to account for both formal and physical, as well as functional and evolutionary, aspects of emergent properties, and a metaphysical hindrance seems to be the fact that any global structure can only act by manipulating the local elements in which it is embodied.

(b) Baas' definition.

A general, more mathematical approach to emergence is proposed by Nils A. Baas (in press; 1994), presented as a step towards a general theory of hierarchies, complexity, emergence and evolution. The general idea is that these four interrelated phenomena (the 'hyperstructure' of his theory) are always found realized in biological systems, but also in the computational systems of Artificial Life and dynamical systems. Whenever we encounter life, it must be hierarchically organized, and hierarchies are the things that have had the time to evolve from simple to complex structures; complexity is here used in the algorithmic sense of needing a long 'programme' for the specification of the system or a long route of computational development (cf. Bennett's notion of 'logical depth' in Bennett 1988). According to Baas, life cannot make do with just one macro-level/micro-level distinction, and hierarchies are what make complexity manageable through several levels of organization. Evolution by natural selection is the process which gives rise to new levels. In a certain sense, in evolution the environment acts as an observer that 'sees' or 'acts upon' higher level properties, thereby establishing recurrent forms of interactions within and between the different levels.

In order to develop a very general notion of emergence that can be defined formally, Baas gives a general formalism, in some cases expressible in the language of category theory, but we shall not go into the technical details, just give an informal hint about the basic idea (the following notation is simplified and must not be taken as Baas' original one).

Emergent properties must be observable, but they appear because of the system of interactions among the lower level objects, and not because of observation. Baas does not specify the nature of the observing subject or his 'observational mechanism', and restricts himself to giving the formal requirements for emergence. "For something new to be created we need some dynamics or better - interaction - between the entities. But to register that something new has come into existence, we need mechanisms to observe the entities." (Baas 1994). The process of emergence of properties on several levels may be considered as the result of a series of abstract construction processes, similar to mathematical constructions. Given a set S1 of first order structures, one can, by some kind of observational mechanism Obs1, obtain or 'measure' the properties Obs1(S1) of the structures at this level. The S1's can then be subjected to a family of interactions, Int, using the properties registered under observation (this could be a dynamic physical process). Hence one gets a new kind of structure, S2 = R(S1, Obs1(S1), Int), where R stands for the result of the construction process. The interactions may be caused by the structures themselves or imposed by external factors. Obs is related to the creation of new categories in the systems. S2 is a second order structure, a new unity whose properties may now be observed by another observational mechanism Obs2, which may also observe the first order structures it consists of.

Now for the definition: Baas defines P as an emergent property of S2 if and only if P belongs to the set Obs2(S2) and P does not belong to the set Obs2(S1) -- which may be interpreted as saying that the whole is more than the sum of the parts.
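In compact form (our rendering of the simplified notation above, not Baas' original formalism), the construction and the definition may be summarized as:

S_2 = R\bigl(S_1,\, \mathrm{Obs}_1(S_1),\, \mathrm{Int}\bigr), \qquad P \text{ is emergent in } S_2 \;\Longleftrightarrow\; P \in \mathrm{Obs}_2(S_2) \ \text{and}\ P \notin \mathrm{Obs}_2(S_1).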

Like Cariani, Baas too distinguishes between different types of emergence: (a) deducible/computable emergence, which means that there is a deductional or computational process D such that P can be determined by D and Obs1(S1); and (b) observational emergence, which is the more profound type, characterized by the condition that if P is an emergent property, it cannot be deduced as in (a). (Type (a) emergence clearly indicates that the defining characteristic of an emergent property P -- that it belongs to Obs2(S2) but not to Obs2(S1) -- does not entail that it could not be determined by Obs1(S1) in an explanation using D. This is close to the idea of Kincaid (1988) that the irreducibility of a higher level theory does not entail that lower level theories, with respect to some questions, cannot explain higher level phenomena.)
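In the same simplified notation (again our rendering, not Baas' own), the two types may be contrasted as:

\text{(a) deducible/computable emergence:}\quad \exists\, D:\; P = D\bigl(\mathrm{Obs}_1(S_1)\bigr); \qquad \text{(b) observational emergence:}\quad \text{no such } D \text{ exists.}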

Baas exemplifies deducible emergence by non-linear dynamical systems where simple systems interact to produce new and complex behaviour: phase transitions, broken symmetries, and many types of engineering constructions where coupled components interact in known ways so that we can calculate the new compositional properties of the system. Chaotic dynamical systems are considered a borderline case between (a) and (b). A genuine example of observational emergence in a formal system is, according to Baas, found in Gödel's theorem, i.e., in the fact that in some formal systems there are statements which are true, though their truth cannot be deduced within the system. Here observation is the truth function (ascribing the truth value 'true' to a string of symbols which is not deducible from the axioms and rules of inference of the formal system itself). Starting from first order calculus -- which is complete in the sense that every valid statement can be deduced from the axioms -- one can add further axioms to cover the theory of arithmetic, and this 'adding' is a kind of 'added interactions' among well-formed expressions which creates emergent properties. Though Gödel's theorem as observational emergence may at first appear a rather negative result for constructive purposes, it is not, because Baas has "incorporated the observational mechanisms in such a way that they can usefully be taken into account in further constructions -- even if they are not deducible" (ibid.).

Thus, in contrast to Cariani, Baas concludes that even in formal, abstract systems (including models of life-like processes) profound kinds of emergence may occur. In Baas's general theory, both the traditional micro-deterministic upward causation and the more controversial downward causation (that may even occur across several levels) are allowed for. Whether these forms of causation are actualised in real systems is an empirical question.

(c) An algorithmic information theoretical definition

We will now present a related proposal for a formal criterion for emergent structures. This idea is inspired by Cariani's critique of merely ascribing emergent patterns to simulations without any guarantee that these patterns are real in some objective sense and not just psychological Gestalt properties, and by Baas's discussion of observational and computational emergence. The basic idea is very simple, and takes the form of an information-economical 'Occam's razor'. Using algorithmic information theory (see Chaitin 1987, 1992), it is possible to formalize the notion of information economy in descriptions, translating it into the length of the computer programs needed to generate the described object as output. We shall only give an informal argument here.

The basic notion is that some form, property, entity (or possibly form of movement) is emergent if and only if this form is i-real, i.e., real in an informational sense. That it is 'real' implies that it is not just ascribed by an observer projecting subjective Gestalts onto the system observed, and that it is 'informationally' real implies that its reality as a form can be evaluated by an information-theoretical criterion:

A form is i-real (and thus i-emergent) if there exists a program with the following properties: (1) it specifies the form completely (or, possibly, specifies it within some accepted margin of error), i.e., when the program is given as input to a computer, it makes the computer produce the equivalent form as output; and (2) the program is shorter (represents an informationally more economical description) than the complete explicit bit-for-bit description of the form.
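In the standard notation of algorithmic information theory (our rendering, with U a fixed universal computer, |x| the length of x in bits, and K(F) the algorithmic information content of the form F), the two conditions read:

F \text{ is i-real} \;\Longleftrightarrow\; \exists\, p:\; U(p) = F \ \text{and}\ |p| < |F|, \quad\text{i.e.}\quad K(F) < |F|.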

The intuition applied here is that if a pattern, say, of dots on a two-dimensional surface is totally random (and hence non-emergent), it does not constitute a form in the usual sense of the word; and the only way of precisely describing the pattern would be by a long bit-for-bit description of the value (black/white?) of each 'pixel' on the surface. Measured in bits, this description would be as huge as the information content of the pattern itself. Things change if we rearrange the dots to create an orderly picture (we could do this by hand according to our imagination, or for instance by letting the pixel values represent the intensity of an imposed electromagnetic field or some other physical phenomenon). As soon as a sufficient number of dots have been rearranged to make the observer 'see' a form appearing, it is possible to apply this orderliness to make a shorter description of the surface's pixels.*note 8* If, for instance, all the dots were arranged along horizontal lines, we could easily make a 'program' that in a few lines of instructions would give a procedure for generating this form. (In case of the imposed field, we could let the physical law describing field intensity generate the values of the pixels, or let them represent the lines of force.)
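A rough operational stand-in for this intuition can be sketched as follows (Python; a general-purpose compressor such as zlib only gives an upper bound on the algorithmic information content, so this illustrates the criterion rather than deciding it, and the surface size and patterns are our illustrative choices):

import os
import zlib

N = 256                                    # the surface is N x N black/white pixels

def pack(bits):
    """Pack 0/1 pixel values into bytes, 8 pixels per byte, so lengths are counted at the bit level."""
    out = bytearray()
    for k in range(0, len(bits), 8):
        byte = 0
        for b in bits[k:k + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

# ordered surface: all dots arranged along horizontal lines (every fourth row black)
ordered = pack([1 if (i // N) % 4 == 0 else 0 for i in range(N * N)])
# random surface: each pixel value drawn from the operating system's entropy source
random_surface = pack([b & 1 for b in os.urandom(N * N)])

for name, surface in (("ordered", ordered), ("random", random_surface)):
    short = len(zlib.compress(surface, 9))
    print(f"{name}: {len(surface)} bytes -> {short} bytes; compressible: {short < len(surface)}")

The ordered surface compresses drastically while the random one does not, matching the intuition that only the former constitutes a form in the i-real sense.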

So in the transition from 'chaos' or randomness to 'order' or emergent form, we might say that a new level is created in the system which has two aspects: A subjective one, where new Gestalts appear to the observer with some degree of perceptual evidence. And an objective one, where the description of the state of the surface can be 'compressed' informationally, implying objective existence of the related Gestalts. This objective possibility of 'compressing' the extensive bit-for-bit description (as actually exploited by the 'compression algorithms' used in electronic image processing) corresponds to the emergence of a new level of description of the system, deviating from the low or 'atomic' bit level. We can elaborate this metaphor a little. At the higher level, the descriptions are shorter, even though they must specify the same total number of states. Thus the higher level can be said to operate with 'chunks' of lower level bits ('words', or 'molecules' rather than letters or atoms); but the specification (the descriptive program for generating the total structure) works only on the condition that there is a lawful procedure or a set of rules or constraints that allow us to generate the structure if only the shorter (higher level) description is given as input. In this sense, the program can be said to organize the atomic level by imposing a set of global constraints on what kinds of micro-patterns can exist. The whole is (informationally) less than the sum of the parts!

Let us consider another example. Suppose we can describe a system of interacting elements on two levels: one (lower level n) for the individual interactions, and one (higher level n+1) where an emergent phenomenon eventually can be observed. We could think of the lower level as the individual cell states of a cellular automaton and the higher level as the typical forms that appear. Or it could be a vat of water with moving molecules, described on the higher level as a wave in a liquid. When do we then consider the emergent phenomena to be genuine? According to the criterion: when the forms are i-real, that is, when we can make a description shorter than the 'total' atomic one (for level n) that completely specifies the observed form on level n+1. In a sense, we then save bits by this new description, which still generates the CA form or the wave in the vat.

If the system is very small, the shortest possible program that generates the full specification of the system might well be simply the bit-for-bit description on level n, and we cannot save any bits by a higher level description. As we consider larger systems with more elements and interactions, we will face phenomena describable by 'laws' constituting emergent forms of movement on higher levels. These laws can be considered as algorithms, given in an information-economical form, allowing the generation of models of exactly the equivalent macroscopic patterns that we as observers are interested in. (The idea of natural law as an algorithm for ordering the data of observation is not new and can be found in the work of von Mises, Solomonoff, Carnap and Popper; see review and references in Chaitin 1992.) Even in an evolutionary cosmological perspective, one may see the emergence of the laws of nature as a sequence of generation of 'habits' or 'algorithms' constraining the indeterminate movements of the level below.

If one claims to 'see' forms on level n+1 without giving any relatively short algorithm or formula for generating a computational model of the equivalent forms, one has strictly speaking not shown that the forms in question are emergent. However, this criterion has the problem that, as far as we can see, it demands rather simple systems in order to be of any use in applications. Another problem is that the algorithmic measure of information does not deal adequately with complexity, even if it can distinguish 'random' structures from 'non-random' (compressible) ones. Non-random forms can be either simple or complex. Trivially simple (non-complex) forms will be considered to be as emergent as highly structured ones, which is somewhat contrary to one of the usual connotations of emergence as meaning the appearance of a complex pattern. One could hope for a quantification of the degree of emergence of some form or lawful behaviour, in the sense that the shorter the program that can be given (i.e., the more compressible the description is), the more emergent the system. However, this would make the most trivial structures the most emergent ones, which is at odds with the intuition that complex biological structures (such as the body hierarchy of organism-organs-tissues-cells-macromolecules) are the ones that exemplify emergence par excellence. Furthermore, and as a consequence of an important theorem of algorithmic information theory, we can never know for sure that we have in fact found "the shortest program"; there is no general decision procedure for identifying a program as non-compressible ("random" in the algorithmic sense). Therefore, this notion of emergence must be restricted to dealing with the transition from non-ordered, random structures to structures where some kind of pattern is emerging. Chaitin himself realizes that the algorithmic notion of complexity seems to throw some light upon biology but that it is still not sufficient - a specific idea of biological complexity is needed.
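The quantification considered (and rejected) above could, in our notation and only as an illustration, be rendered as measuring the degree of emergence of a form F by how far its description can be compressed,

E(F) = |F| - K(F),

which makes the objections explicit: E is maximal for trivially regular structures, and since K is not computable, E cannot be computed either.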

A further problem with the algorithmic notion of emergence is that the 'specification' of the system that the short program gives must ideally be as precise as the trivial bit-for-bit description of the lower level (in order to exclude an element of subjective projection of Gestalts into the object so specified). In practice, however, in the non-formal domain, one often sees some 'loss of information' in going from a detailed, atomic or fine-grained description to a more coarse-grained one. The essence of emergent phenomena is often said to be the existence of irreducible kinds that cannot be equated with, or deduced from, lower level descriptions (cf. P. W. Anderson's dictum, "more is different"). But the algorithmic definition of emergence, in which one simply counts the length of two descriptions that both specify the system completely, does not appear to be compatible with non-equivalence between levels. However, this contradiction is only apparent, because the algorithmic notion presupposes that we are comparing two formal models or programs (the 'print out the structure by this bit-for-bit description' program and the 'short-cut' program), not more elaborate conceptual structures from disparate disciplines or separate ontological domains.

Thus, because of the formal restrictions due to the nature of comparison and model construction within the algorithmic approach, it might have a very limited application. Nevertheless, we think that it reveals an important aspect of emergent structures to the extent that they can be treated by the computational methods of dynamical systems theory, automata theory and in simulations within such fields as artificial life and cognitive science.

In a broader historical view it is a fact that the concept of emergence does have a central position within these new domains. Even if they can handle only a part of the total set of emergent processes, this is a very promising step towards a fully developed theory of emergence.


*note 8*

This is in fact the same principle as that used for constructing the data compression algorithms employed in electronic communication and data processing. Some picture compression algorithms are based on fractals, i.e., on iterated function systems. Think of the fact that the highly complex Mandelbrot set (or other popular and spectacular fractals) is defined by a very short formula describing an iterative process.
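For reference, the short defining formula alluded to is the standard one (not part of the original note): the Mandelbrot set is the set of complex parameters c for which the iteration starting at zero remains bounded,

M = \{\, c \in \mathbb{C} : z_0 = 0,\ z_{n+1} = z_n^2 + c,\ (z_n) \text{ remains bounded} \,\}.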

