|"Where shall we find a machine that reproduces its own substance and at the same time grows, structures itself, reconstitutes itself upon trauma, reorganises itself in response to changes in its environments, and programmes its own reproduction?"
Chandebois & Faber 1983, p.4.
The present paper discusses a topic often neglected by contemporary philosophy of biology: The relation between metaphorical notions of living organisms as information processing systems, the attempts to model such systems by computational means (e.g., Artificial Life research), and the idea that life itself is a computational phenomenon. This question has ramifications in theoretical biology and the definition of life, in theoretical computer science and the concept of computation, and in semiotics (the study of signs in the most general sense, including information, signification, and meaning), particularly the concept of the interpreter. It is argued that the theory of autopoietic systems known from theoretical biology should be integrated with a biosemiotic reflection on the natural history of signs.
The title of the present paper perhaps seems a little obscure, so we will in a moment explain the idea of life as a computational phenomenon in relation to the study of artificial life. However, let us first consider the term semiotics, i.e., the general study of signs and sign interpretation processes, and biosemiotics, the study of sign processes in living systems. The question to be addressed first is what constitutes the biosemiosis, or the sign processing capabilities, of a living system, and we will focus on the single cell as a specific but fundamental example. If anything can be said to be alive, it has to be organized on the biological level as a cell or as a system of cells-at least according to our normal biological intuitions of life based on modern molecular and cellular biology (cf. Emmeche 1993a; even viruses require a cell as a necessary part of their cycle of existence, and can thus be considered as a pathological subcomponent of the cell). The fascinating idea that the cell is endowed with a natural capacity to compute or perform computational processes-a power we normally ascribe to human beings or desktop computers-will be critically discussed.
The reader might well ask why on Earth one should propose such a fancy thought, except by some pathological chain of metaphoric transformation of the Cartesian credo cogito ergo sum, into its cognitive science and artificial intelligence version computo ergo sum, and finally into an artificial life edition of the slogan computo ergo vivo. Scientists may use the computer as a convenient tool to model specific objects of investigation, and even to mimic forms of life by computational, chemical and robotic techniques (cf. Langton, ed. 1989, Meyer & Wilson, eds. 1991, Fernandez & Moreno 1992), but shouldn't they abstain from projecting their own cognitive and computational abilities into the subject matter, as when they look for `computational properties of living cells'? It cannot be denied that a dash of dualistic metaphorics can be sensed here, but the matter is more complicated than that, and we will return to the concept of computation below. For the moment we shall only notice the fact that a considerable amount of research (recently reviewed by Conrad 1990) has already been devoted to elucidating the computational capacities of biological cells under the clear (though debatable) presumption that such capacities do exist.
Semiotics will be understood here as the theory of signs and sign processes in various kinds of systems, cultural as well as natural, in the American semiotic tradition of Charles Sanders Peirce. One might prefer to call the peircean theory of signs a philosophy, or a perspective (Anderson et al. 1984), because of its very broad scope and metaphysical foundation. Semiotics was earlier thought of as merely pertaining to specific human sign systems, according to the Swiss linguist F. de Saussure, who envisioned his `semiologie' to be the study of `the life of signs in society'. In recent years, there has been an increasing interest in seeing the concept of sign and semiosis (sign-action, or sign processing and interpretation) in a broader, naturalistic and evolutionary perspective, allowing semiotics to include such areas as the study of the origin of human sign systems in nature (Sebeok 1987; Richards 1987); the natural evolution of animal communication systems (Sebeok 1972); and the investigation of the extension of semiotic processes in general. Biosemiotics is a term coined to cover exactly the theoretical description of semiotic processes in living nature and a general semiotic view of life (Hoffmeyer, forthcoming; Hoffmeyer & Emmeche 1991; Uexküll 1989; Eder & Rembold 1992; see also the papers in Sebeok & Umiker-Sebeok, eds., 1992).
As information and computer science constitute the core of the new semiotic technologies in modern society, it could seem attractive to state the idea of a natural action of signs in cells and other living systems in terms of a `bioinformatics' of the cell dealing with its `information processing' capacities. However, when used in the context of living systems, as we shall see, computational terminology too often takes the form of an unreflective metaphor.
The reason to frame the question in the alternative semiotic perspective is threefold: First, philosophically, it is by far the most general one, and thus includes the information processing approach-forms of information processing can be seen as specific kinds of semiosis. Second, the cell has probably quite specific kinds of `informational' processes for which we have no equivalent notions in either computer science or bioinformatics, the latter field primarily dealing with technological and theoretical problems of handling the data of, e.g., molecular biology (see Nussinov 1987, Lander et al. 1991). Third, the general question of the biosemiotics of the cell should not be confused with a metaphorical use of informational terms.
Artificial Life research
One of the virtues of Artificial Life as a field of research is that it has become possible for biologists, cognitive scientists and computer scientists to see their own field within a much broader framework, and to pose new questions about the fundamentals for the descriptions of natural and formal systems. It is rather seldom that experimental biology is concerned with the subject matter of its field of research in such a fundamental way. Let us emphasize that we find this initiative very welcome. However, much of the current line of thinking in the study of artificial life or `Alife' does not yet seem to have a direct impact on the way biologists conceive of their subject matter. (In Emmeche 1993 the very notions of `life' in Alife and traditional biology are analysed, and in Emmeche 1992b the extent to which the traditional idea of life as a material phenomenon is in conflict with the strong, or computational, version of the AL research programme is discussed.) It is interesting to see how life is defined by Doyne Farmer, a notable contributor to Artificial Life studies. Life (as defined by Farmer and Belin 1992 by a list of criteria) involves: 1. a pattern in spacetime (rather than a specific material object); 2. self-reproduction, in itself or in a related organism; 3. information-storage of a self-representation; 4. metabolism that converts matter/energy; 5. functional interactions with the environment; 6. interdependence of parts within the organism (preservation of the identity of the organism; ability to die); 7. stability under perturbations of the environment; 8. the ability to evolve. We shall only discuss the most relevant criteria here (cf. Emmeche 1992b for the first and second criteria).
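The eight criteria just listed can be treated as a simple checklist. The following sketch records them in a small Python program; the judgements entered for the computer-virus example are purely our own illustrative assumptions, made merely to show how such a checklist would be applied:

```python
# Farmer & Belin's eight criteria for life, held as an ordered checklist.
CRITERIA = [
    "pattern in spacetime",
    "self-reproduction",
    "information-storage of a self-representation",
    "metabolism (matter/energy conversion)",
    "functional interactions with the environment",
    "interdependence of parts (identity; ability to die)",
    "stability under perturbations",
    "ability to evolve",
]

def satisfied(candidate):
    """Return, in order, the criteria a candidate system is claimed to meet."""
    return [c for c in CRITERIA if c in candidate]

# Illustrative (and, as argued in the text, contestable) judgements
# for a computer virus:
computer_virus = {
    "pattern in spacetime",
    "self-reproduction",
    "information-storage of a self-representation",
    "ability to evolve",
}
print(len(satisfied(computer_virus)), "of", len(CRITERIA), "criteria claimed")
```

The point of the sketch is only that the list format invites such box-ticking; whether ticking a box is justified is exactly what the following discussion of the individual criteria questions.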
The third criterion talks about an information storage of a self-representation. This raises several problems: Are living cells really informational, i.e., in what sense can we talk about `genetic information', `transcription', `translation' and the whole register of linguistic and computational metaphors in molecular biology? What exactly do we mean when we say that the genes are a partial `self-description' (Pattee 1977) of the cell or organism? Is that form of representation in any sense the same as the representation we have in a computational (or in a mental and linguistic) context? When looking at computer organisms and computer viruses, this criterion seems to pose the least trouble; everybody seems to agree that "a computer virus stores a representation of itself" (Farmer & Belin 1992, p.820), or as Spafford says: "The code that defines the virus is a template that is used by the virus to replicate itself. This is similar to the DNA molecules of what we recognise as organic life" (Spafford 1992, p.742). But one should observe that, first, it is a truism that the computer organism's own representation or code `is' the organism; second, it is not really a representation in the sense of a description; third, the relation is of a quite different character than the relation between DNA and the whole cell; and fourth, even if we only compare computer viruses and real viruses, such as the simple RNA viruses, we have only a superficial similarity between the ways they respectively exploit the computer and the cell. Accordingly, one might feel forced to conclude that the biological cell is not a computer, it is not `computational' and stores no `programs', because the function of the genetic machinery is not algorithmic in the same sense as the execution of the code of a computer organism is; and furthermore, that although it may be compared to an algorithmic process, that comparison depends on our interpretation, not the cell's own way of `interpreting' the genetic material.
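The first of these observations, that a computer organism's code simply `is' the organism, can be made vivid by a classic quine: a program whose complete output is its own source text, so that `self' and `self-representation' literally coincide-a situation with no analogue in the DNA-cell relation, where the genome is only a partial description:

```python
# A classic quine: a program that prints exactly its own source code.
# %r inserts the repr of the string, %% is a literal percent sign.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running this prints the two lines of the program itself, nothing more; here the stored representation exhausts the organism, whereas no enumeration of DNA sequences exhausts a cell.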
However, the question of computation in living cells should not be settled by these simple observations, as we shall see below. We do claim that one can justifiably speak of biological information at the intracellular level, but this information is of a different category than information in the computational sense.
A similar problem is involved in the fourth criterion, metabolism. Surely, computer organisms of any sort have metabolism, if we beforehand reduce this biological concept to a simple question of the existence of matter/energy conversion (and then forget about the matter). In fact, any program then has this sort of quasi-metabolism in its use of `computational resources'. But what is metabolism in the biological sense? Again, it is a set of very specific biochemical processes that couples catabolic and anabolic chemical reactions, that is, reactions that `break down' substances to simpler ones (especially water, carbon dioxide, and urea) and reactions that build new and more complex compounds from simple ones (e.g., biosynthesis of amino acids and proteins; energy-demanding processes of `active transport' of ions across membranes). In total, the whole process results in an irreversible production of heat exported to the surroundings, but the entropy production is smoothed out in a long series of intermediate steps of nearly reversible reactions. This coupling in a whole network of processes is based on the macromolecular properties of enzymes and enzyme-substrate recognition-in this sense it is a material, medium-dependent phenomenon that forms the presupposition of the phenomenon of biological autonomy in the sense of autopoiesis (the `creative' aspect of biological self-modification, Kampis 1991). A computer organism's `conversion' of computational resources, or a computational model of metabolism, is on the material level quite different from the metabolism of living cells, or even from an artificial metabolism in vitro with real chemical reactions (which under some interpretation may be an appropriate model of many aspects of metabolism).
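The remark about entropy production being smoothed out over many nearly reversible steps can be given a purely numerical toy illustration (this is arithmetic, not a biochemical model; the free-energy figures are the familiar approximate textbook values, and the number of steps is a hypothetical round figure):

```python
# Toy comparison: one large free-energy drop versus the same drop
# distributed over many coupled, nearly reversible steps.
TOTAL_DROP = 2870.0   # kJ/mol, approximate free energy of glucose oxidation
N_STEPS = 30          # hypothetical number of coupled reaction steps
ATP_DROP = 30.5       # kJ/mol, approximate free energy of ATP hydrolysis

per_step = TOTAL_DROP / N_STEPS
print(f"one-shot dissipation: {TOTAL_DROP:.0f} kJ/mol")
print(f"per coupled step:     {per_step:.0f} kJ/mol")
# Each small step is of the same order as ATP coupling, so much of the
# drop can be captured as anabolic work instead of being lost as heat.
print(f"step size relative to ATP hydrolysis: {per_step / ATP_DROP:.1f}x")
```

The numbers do no explanatory work by themselves; they merely show the scale argument behind the claim that the stepwise coupling, and not mere matter/energy conversion, is what deserves the name metabolism.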
On the latter criteria, we will just give a few remarks.
5th criterion: It is not clear what the environment of a `program-organism' is. The interface between a cell and its environment is spatially well-defined; this is not so for the abstract life in a computational model.
6th criterion: (a) Any system can have various sorts of `interdependence' of parts (so does the solar system of planets), so it has to be specified a little more. (b) Can computer organisms really die? Their death seems to be somewhat clinical; they simply cease to exist: they are erased from the memory. When living organisms die, it is not so much a process of annihilation of information as it is an organizational disintegration over time that results in the transformation of biochemical substances and their ultimate dispersal within the ecosystem.
7th criterion (stability under perturbation): The same can be said here: it is rather vague.
8th criterion: This is a strong one. In fact, the genetic and evolutionary mechanisms of `transmission of information' (or, as one might prefer, transmission of organization) and natural selection are good candidates for formal criteria of processes that are genuinely life-like. Natural selection can work at the sub-organismal level (and maybe also the supra-organismal, such as deme and species selection, though these concepts are not so clear). So it is a necessary, but not sufficient, criterion for life.
One fundamental problem common to all the criteria when used in the context of computational `strong Alife' is that they are really not criteria for life in the usual biological sense, but already represent another concept of life, namely life as an abstract, non-material phenomenon, and thus their relevance as a kind of `conceptual anchor cable' to the physical world of known plants and animals is dubious. "But wait," the Alife objection might go, "we didn't want them to be criteria for carbon-based life in the normal sense! We wanted to create new forms of life; life forms in other media." -But that doesn't help. They are not useful at all for evaluating the strong claim of possible construction of life in a formal domain, because these criteria derive from another world than the world of formal properties, and they do not seem to make sense in the latter domain. One should not forget that the strong version of Alife-that human beings can create life out of non-living artifacts-is really a radical claim. Alife is not yet life, if it ever will be. One could be tempted to say that what is being studied in Artificial Life for the present, at least in the computational part of the research program, is quite another object: It is not even life as an abstract phenomenon, it is the life of abstract concepts ascribed to a specific interpretation of formal computational structures.
However, one should remember that contemporary biology does not have to be the only source of systematic knowledge of life, and that neither the concept of life nor the concept of computation is that clear. Let us briefly consider some of the historical background of the informational idea within biology.
Biology and informational terminology
In a sense, biologists have always been occupied with describing informational or semiotic processes such as stimulus/response mechanisms of cells and organisms, homeostatic regulation of the body, sensory and perceptional processes, communication between organisms, and the mechanisms of inheritance, now understood as the workings of a genetic code-notice the semiotic flavour of these terms. Influenced by the philosophical doctrines of mechanism and physicalism, such descriptions were often cast in a seemingly physical and non-teleological language. However, the main trend of theoretical and evolutionary biology in this century has been very skeptical toward any attempt to bring about an ontological reduction of biology to the laws of physics (e.g., Mayr 1982), although of course allowing for `analytical reduction' in the normal methodological sense. With the rise of molecular biology (Stent 1968; Judson 1979; Abir-Am 1985)-the science that revealed the genetic `language' of living nature (incidentally a problematic metaphor, see Emmeche & Hoffmeyer 1991)-the informational terminology became accepted as part of the usual vocabulary of biology. Examples are legion, viz. the `genetic code'; the `transcription' of DNA and formation of `messenger' ribonucleic acid or mRNA; the subsequent `translation' of this molecule in the synthesis of a protein peptide; the `molecular recognition' in the binding of enzymes to their substrates; the `signal' peptides, etc. In the age of computer technology, it became almost second nature for biologists to conceive of the cell (the most basic and simple autonomous unit in the domain of living beings) as a special kind of information-processing system with the genes as a program, embodied in the chromosome strings of DNA, and the rest of the cell body as the `hardware' interpreting this program by translating the genetic code into functional proteins.
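To fix ideas about this vocabulary of `transcription' and `translation', here is a deliberately minimal sketch of the two steps for a made-up DNA fragment; the codon assignments are a small subset of the standard genetic code, while everything else is an illustrative assumption:

```python
# A toy model of the informational vocabulary of molecular biology.
CODON_TABLE = {  # small subset of the standard genetic code
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP",
}

def transcribe(dna):
    """Transcription of a template strand: complement each base, T -> U."""
    pairs = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[b] for b in dna)

def translate(mrna):
    """Translation: read codons in frame until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3], "???")
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

mrna = transcribe("TACAAACCGATT")   # made-up template-strand fragment
print(mrna, "->", translate(mrna))  # prints AUGUUUGGCUAA -> ['Met', 'Phe', 'Gly']
```

Notice how naturally the program metaphor suggests itself once the chemistry is abstracted away to lookup tables and string operations-which is precisely the abstraction the rest of this section puts in question.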
Such a program, incorporated for instance in the fertilized egg cell of a multicellular organism such as a human being, should in principle contain a preformed set of instructions (evolved by natural selection) for building the whole organism in the process of development. Thus one can understand the fascination of the current international human genome project (HUGO) endeavouring to sequence the entire three billion DNA base pairs to get `the blueprint for Homo sapiens'. Yet, the idea of a blueprint is totally misconceived. As the computer scientist D. R. Hofstadter remarked, "If you wanted to find some piece of your DNA which accounts for the shape of your nose or the shape of your fingerprint, you would have a very hard time. It would be a little like trying to pin down the note in a piece of music which is the carrier of the emotional meaning of the piece." (1979, p.160).
There has been a lot of criticism of the simplified preformationistic and `genocentric' view of the organism, which has dominated large areas of experimental molecular biology (Goodwin 1984, Webster & Goodwin 1982, Oyama 1985, Sibatani 1989). We can, of course, postulate the necessary existence of some form of `genetic instructions' or `biological knowledge representation' in the genome, that is, representation of the procedures and mechanisms of development. This `representation', however, is of a very implicit kind, and the only thing that one really can declare to be explicitly represented in the genome is the specific primary sequence of sub-components (amino acids) for each component building block (peptide) of a protein. The precise relationship between the phenotype of such a complex `feature' of a multicellular animal as, say, the form of the lens of the eye, and the representation of this feature in the genome, is still not known, except for very simple characters, such as features of the basic metabolism related to the action of specific proteins. It would be more adequate to recognize that much of this `representation' is of an epigenetic character:
This means that the developmental information needed to create the total phenotype is not a stored representation in the DNA but is created structurally in the self-organizing process of development, where many levels are in a continual interplay, the detailed nature of which we know relatively little about. Here, the idea of genotypic information for specifying the phenotype can be rather misleading. René Thom, in his Mathematical Models of Morphogenesis, gave an early description of the situation: "The use of the word information, in the case of `genetic information' gives evidence of the following psychological situation. In the immensely complex unfolding of the morphogenetic processes in embryology Molecular Biology has revealed an important mechanism, the synthesis of proteins. The natural tendency of the specialist is to say that this stage is the essential one, and that other stages are no more than the simple consequences of it. The word information, in such a situation, serves obviously to disguise the almost-total ignorance in which we find ourselves specifying these other allegedly subordinate mechanisms, while allowing through the connotation of intentionality in the word information, an implicit guarantee of the ultimate purpose (`finalité') which underlies all biological thought. Information, in this sense, is a disguised form of causality." (Thom 1973, p. 282).
Though a lot of details of the genetics of development have been revealed since this description (Wolpert 1991 provides an overview for the non-specialist), we are still far from an overall picture of the relation between the genome as a `digital code' of the organism, and the whole set of `analog' dynamics that construct the anatomy during the growth and differentiation of the multicellular organism. Thus, while knowledge of the whole genetic sequence of Homo sapiens will probably reveal a lot of interesting material about our species at the genetic and biochemical level, it will not as such give us `the instruction manual' of the complete ontogenetic process, because such a manual does not exist (in the sense of a self-contained explicit precept). At this level, the metaphor breaks down. An instruction manual for assembling, let's say, a model aeroplane presumes the intentionality of the constructor, an external agent of which no biological correlate exists in the case of the autonomous self-assembly of an organism. A better understanding of the mechanisms of form-creation in development will have to be worked out in a laborious interplay between experimental work and construction of theoretical models (see the papers on `biological structures and morphogenesis' in Mosekilde & Mosekilde, eds., 1991). In this context the semiotic considerations may be a source of inspiration, but no general theory of biosemiotics can give an operational account of development. Informational metaphorics, computational terminology or semiotic musings are in themselves of little scientific help. On a meta-scientific level, however, semiotics can be of heuristic value in the analysis of problems in the philosophy of biology (Emmeche 1991), or in exploring the possibilities of new perspectives on science and nature (Anderson et al. 1984; Hoffmeyer, forthcoming).
Living signs or loans of intelligence?
One of the attractive promises of a biosemiotic inquiry is to deepen our understanding of the informational aspects of living systems, without taking recourse to specious computer or `instruction manual' metaphors. To the extent that computational concepts can be used in explaining natural sign processes, it should be made clear exactly what is meant by computation in, for instance, the slimy protoplasm of an amoeba, an entity which appears to be quite unlike the mathematical and conceptual domains of formal computational systems. Such a clarification may be possible.
An important object of biosemiotics is to account for what might be called the natural history of signs, that is, the evolution of sign systems in nature on any level of complexity (Hoffmeyer, forthcoming). But in addition to this enterprise, biosemiotics as a reflexive theory should recognise the difference between, on the one hand, the human ascription of symbolic terms and categories to the natural systems we observe and intend to explain, a process which is mediated by human language, and on the other hand, the presumed semiotic processes in these systems as real entities. This complementary aspect of the endeavour, the system-observer semiotics, distinguishes between the biosemiosis of living nature looked at as a self-contained natural system, and the observation and description of it as a semiotic process involving human observers, language, institutional science, etc. (Emmeche 1992a). The system-observer semiotics should critically deal with what Dennett (1978, p.12) calls `loans of intelligence'. Such loans are taken by biologists and biochemists in their use of unanalyzed man-analogues in descriptions of, for instance, cells as being capable of interpreting `signals', `messages' or `commands'. They are unanalyzed to the extent that these terms merely reflect projections of specific human capacities seemingly isomorphic with the organism's observed behaviour, and are not followed by any specific understanding of the basic mechanisms of such behaviour. The man-analogues could also be computer-analogues, argued with more or less sophistication.
Sometimes one sees an explicit and intended use of intentional or cognitive terminology within cell biology, suggesting, for instance, that the cytoplasm of the cell is an `intelligent machine' (Albrecht-Buehler 1985), because the cell is seen as having many of the data-processing capacities of the computer. It is argued that the control of the amoeboid movement of certain cell types provides evidence of intelligence, because the complex deformations of the cell body require coordination of many domains of the cell, indicating a requirement for processing of information about metabolic and motile states of these domains, and because locomotion requires `navigation', which again is seen as requiring collection and integration of data about the environment. (As simple artificial devices constructed on cybernetic principles can have similar capacities, one might well suspect that the notion of `intelligence' involved here is too shallow.) Albrecht-Buehler honestly confesses that nothing is known about the data-processing mechanism of the cell which he supposes "to compare, link, and integrate by certain rules of cellular logic the different incoming signals and their meanings" (ibid., p.17). Here, he takes a kind of `loan of computational capacity', and it is an open question if it can be repaid merely by future empirical research in cellular biology. Probably a more conceptual clarification with respect to the ascription of `computation' to various systems is needed, too. This will also apply to attempts to `prune' (in the sense of Midgley 1979) the computer metaphor within embryology, as in the interesting work of Chandebois, who compares the cytoplasm of the cell to the memory of a computer (Chandebois & Faber 1983).
Here, the cytoplasmatic memory is seen as storing the "input data" (from the neighbour cells) and the "results of the computations performed" (by the metabolism), and these together are seen as establishing a program which is submitted to the DNA. The DNA is seen to be "comparable to the `arithmetical and control circuits' of a computer (part of the `hardware')" (p. 21, ibid.). In spite of the virtues of Chandebois's approach (it is non-preformationistic, it recognizes the metaphorical status of the analogy between the embryo and man-made automata, it seeks to give explicit mechanisms, and it is based on extensive empirical research), the extent to which the analogy makes sense or is needed at all is not obvious-if not to loan a capacity for steering, guiding or computing a series of complex transformations of an object of which we know much less than our explanatory schemes prompt us to think.
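The parenthetical remark above, that simple devices built on cybernetic principles can exhibit similar capacities, is easy to substantiate. A Braitenberg-style vehicle whose two light sensors are cross-wired to its steering will turn toward a stimulus, although nothing inside it merits the word `intelligence'. In the following minimal sketch the geometry and numbers are arbitrary illustrative choices:

```python
import math

def sensor(x, y, light):
    """Light intensity falls off with squared distance to the source."""
    dx, dy = light[0] - x, light[1] - y
    return 1.0 / (dx * dx + dy * dy + 1.0)

def steering(pos, heading, light, offset=0.3):
    """Crossed wiring: positive value means turn left (counterclockwise)."""
    lx = pos[0] + math.cos(heading + offset)   # left sensor position
    ly = pos[1] + math.sin(heading + offset)
    rx = pos[0] + math.cos(heading - offset)   # right sensor position
    ry = pos[1] + math.sin(heading - offset)
    return sensor(lx, ly, light) - sensor(rx, ry, light)

# Vehicle at the origin facing +x: a light up-left yields a left turn,
# a light down-right a right turn -- `navigation' from one subtraction.
print(steering((0, 0), 0.0, light=(3, 4)) > 0)   # prints True
print(steering((0, 0), 0.0, light=(3, -4)) > 0)  # prints False
```

All the `collection and integration of data about the environment' here is a single difference of two intensities; the example suggests that observed navigational competence, by itself, is weak evidence for intelligence.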
From a system-observer semiotical point of view, it should be clarified whether our model of the relation between an artificial computationally based device (such as a computer) and its environment can be of the same kind as our model of the relation between a sign-interpreting organism (or single cell) and its environment. This is hardly so. The artificial device has an organization which implies a concatenation of processes which are not autopoietic, i.e., not processes of production of the components which specify the device as a unity, since the components of the device are produced by other processes which are independent of the organization of the device and its operation. Thus, the device is likely to be of a non-autopoietic, or allopoietic, kind (Maturana and Varela 1980, p. 78ff)-nobody has hitherto succeeded in synthesizing an artificial autopoietic organism as a metabolising self-maintaining network of component-producing components-and therefore, it cannot act as an interpreter (see below) in the same sense as a living cell, which means that its processing of signs is of another nature and may not confer the same kind of `biological meaning' (in an evolutionary perspective) to the device.
To recapitulate, a general semiotics of nature includes a system-observer semiotics as well as a biosemiotics proper, and the scope of the latter is to give, at least in principle, a detailed account of the spontaneous development of more and more complex systems of semiosis in the course of evolution by various mechanisms. Among these mechanisms, natural selection plays an important and possibly major role, but the concrete evidence for judging the role of natural selection to be the dominating evolutionary force in the genesis of all kinds of semiosis in nature is, in my view, insufficient. In a specific sense, the sign-based organism-environment relationship is in itself an important evolutionary factor determining the route of evolution, especially for the animal part, where this factor includes the organism's perception and choice of habitat as a crucial element that determines which kind of selective forces the organism will be subjected to (known for a long time as the Baldwin effect, cf. Braestrup 1971, Bateson 1988). The chain of causation involved here is traditionally seen as a change in behaviour that generates a new selective pressure, which, via mutation, variation and natural selection, modifies the structures involved, e.g., generating the long neck of the giraffe (and if anybody should ask about the fat neck of the hippopotamus, the evolutionist would claim that some `traits' may more adequately be seen as consequences of developmental constraints), but the question of explaining the newly acquired behaviours that lead to the changes in the first place is usually left open.
Or, alternatively, the newly acquired behaviour is put into the same neodarwinistic explanatory scheme as simple organic traits, so that the prevalence of the behavior in the population is explained with reference to its functional value as a consequence of a successful selective propagation of the genes that code for the new behavioural trait, even if this is not established by any empirical evidence (for a critique, see Jamieson 1986). There is a tendency in evolutionary biology to see the organism as the passive object of natural selection, ignoring the active and semiotic side of this relation, in which organisms take the role of autonomous agents that select, act upon, and even co-produce their environments (cf. Lewontin 1982)-not to speak of the `inner' side of the organism-environment relation, in which the animal's perception constitutes a specific `Umwelt', to use a phrase of the pioneer ethologist and biosemiotician, Jakob von Uexküll (Uexküll 1989). In a fundamental way, sign-based relationships on the genetic as well as behavioral and cognitive levels co-determine evolutionary processes. Also in this respect, it is a primary task of biosemiotics to reinterpret and extend traditional evolutionary biology to cover these `intentional' aspects of life's evolutionary history.
We are now in a position to be more specific about the cluster of interrelated questions that the study of artificial and natural living systems from a biosemiotic perspective must cope with:
First, are the origin of life, the origin of cells, and the origin of signs co-extensive processes? Down to how low a level can we reasonably conceive of sign processes occurring? Must semiosis always involve life, so that the cell is the minimal semiosic unit; or can full-fledged sign processes take place in even simpler systems, such as in autocatalytic reaction networks of some `proto-organism' or in chemical reaction-diffusion systems with morphogenetic properties of form-propagation? (René Thom once made the perceptive proposal that signification should be understood as the propagation of form.)
Second, is the teleological aspect of living systems (their adaptive and functional features seemingly `made for' specific purposes) explainable as consequences of semiotic processes-or is it the other way around?
Third, what is the nature of the biological information of the cell, i.e., what kind of signs are processed or interpreted, and indeed, what is the relation between the `information processing description', the `computational description', and the general biosemiotic one?
On the autopoietic origin of sign-interpretation
It is tempting to apply Charles Morris's syntactics/semantics/pragmatics trichotomy and describe the organism's internal processing of a sign and the sign's association with other representamens as the syntactic aspect of semiosis; the existential or ecological `meaning' to the organism of individual signs or sign complexes as its semantic aspect; and the actual relevance of a sign in the given situation in relation to survival as its pragmatic aspect (cf. Morris 1938). One should take care, however, not to confuse the specific linguistic terms syntax, semantics, and pragmatics with their generalised biosemiotic meanings, which are not restricted to the signs of human language. An interpreter can be any kind of organism, but only a human interpreter can possibly have a linguistic capacity. We probably share with related species some `zoosemiotic' forms of bodily communication, but these are so deeply shaped by culture that we can barely distil their `natural' core.
With respect to the question of low-level semiosis posed above, one can give a tentative answer in accordance with a pragmaticistic system-observer semiotic point of view and a naturalistic stance of evolutionary biosemiotics:
The level to which we can fruitfully conceive sign processes to occur depends partly on the interests of the society of inquirers and the distinctions drawn in its observations, and partly on the `answer' given by nature to that society in its observations. If we can, in a consistent and meaningful way, describe simple chemical reagents as forming and interpreting signs-signs, of course, of a primitive and often indexical kind; see the reformed peircean and biochemical approach of Kilstrup (forthcoming) on the formation of sign links between macromolecules-then there is no reason why we should not say that we have, in this case, an example of a non-cellular (non-organismic) chemosemiotic system. To the extent that such systems have contingent (i.e., not self-produced) boundaries, involve the interaction of organic macromolecules, and have a brittleness due to very sensitive stability conditions, they will tend to be `swallowed' in the presence of living cells and may no longer be found under natural conditions anywhere on Earth, and sign links between macromolecules will only be found as part of biosemiotic cellular processes (such as the ones dealt with by Kilstrup).
We can legitimately define chemosemiotics negatively as the study of chemical signs in systems whose organization does not, as in the case of living organisms, involve `code-duality' (that is, the digital/analog duality between template molecules with discrete sequence information and an `analogic' mode of a continuous cellular dynamics in which structural information is implicit in the organization of the network of catalytic molecules, cf. Hoffmeyer & Emmeche 1991), and where the `interpretation' of the signs (or informational molecules, or changes in concentration of these, or whatever may constitute representamens) does not involve autonomous self-reproducing units. The origin of life may be considered as a transition from complex chemosemiotic systems (such as an autocatalytic network of macromolecules, as simulated by Bagley et al. 1989) to biosemiotic units which are relatively simple from the point of view of contemporary organisms, even though their code-duality, autopoiesis and self-reproduction far exceed the performance of any chemosemiotic system.
An objection to the idea of a pure chemosemiosis would be that there is no cellular subject in such a system with an interpretative power, no subject to whom these ostensible signs could make a difference (if we say that a sign, or information, is `a difference that makes a difference for some interpretant', to cross the ideas of C. S. Peirce and Gregory Bateson). Without any organism, no subject-like agent; and without any subject, no interpretation. One may have the intuition that if we encounter a system with the capacity to respond selectively to differences in its surroundings, then it must also contain the distinctions necessary for its own identification as a system, and thus it must have a kind of circular organization or self-referring character (Maturana and Varela 1980, p.10f; Hoffmeyer and Emmeche 1991, p.125f); that is, the system must be the `subject', and we are pushed into the realm of biosemiotics. On this view, pure chemical systems cannot themselves show signs of semiotic activity; such activity is merely ascribed to them by an external observer.
An answer to this objection could be that the general concept of sign interpretation (at least as one might reinterpret Peirce) does not have to presume an organismic interpretant. Clearly, the interpretant of the sign is, according to the triadic sign-concept, merely another sign, i.e., a Third, which is determined to stand in such a relation to the object sign (a Second) as the representamen sign (a First) itself stands to that object. Thus, one should distinguish between a concrete organism (or a cell) that acts as an interpreter and the interpretant in the sign triad. An interpreter involves the idea of an organism, a living being (situated in a specific environment) to whom an external sign may confer some information which may or may not be relevant for the organism's staying in the game of existence. An interpretant, on the other hand, is a purely logical concept. Sometimes it is the organism as a whole that realizes the logical relation of a Third acting as interpretant, as when for instance a predator smells its prey and approaches it. On other occasions the interpretant's relation to a representamen and an object is realized by far simpler processes on the molecular scale. (Whether this is consistent with Peirce's own views will not be pursued here; reconstructing his ideas on interpretation, intentionality and the extension of the concept of sign-action is a very complex affair, see Short 1986.) Reactions between organic macromolecules, e.g., of the form A + B -> C, can be understood as processes in which [C], the concentration of the reaction product C, is an interpretant, mediated by the process, of [A] and [B], the concentrations of the reactants (Kilstrup, forthcoming).
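The reaction scheme just mentioned can be sketched in a few lines of code. This is merely an illustrative sketch, not taken from Kilstrup: the rate constant, initial concentrations, and the simple Euler integration below are all assumptions chosen for demonstration; the point is only that the final product concentration [C] is determined by, and thus carries a trace of, the reactant concentrations [A] and [B].

```python
# Illustrative sketch (assumed parameters, not from Kilstrup): a bimolecular
# reaction A + B -> C under mass-action kinetics, integrated by Euler steps.
def react(a, b, k=0.5, dt=0.01, steps=1000):
    c = 0.0
    for _ in range(steps):
        rate = k * a * b                         # mass-action rate for A + B -> C
        a, b, c = a - rate * dt, b - rate * dt, c + rate * dt
    return a, b, c

a, b, c = react(1.0, 0.8)
# [C] rises toward the initial concentration of the limiting reactant (0.8 here):
# the product concentration `mediates' information about [A] and [B].
```

In the semiotic reading above, [C] plays the role of the interpretant: its value at any time is a lawful effect of the concentrations it `interprets'.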
The universal biochemical phenomenon of `molecular recognition' (which takes place in any enzyme-substrate reaction, in DNA-protein binding, in immunological reactions, and in hormonal and other reactions mediated by membrane receptor molecules) is based on the three-dimensional structure of the two reacting macromolecules and the nature of the weak chemical bonds they form with each other. As suggested by Stjernfelt (1992), such reactions can be seen as an instance of `categorical perception': a concept that originally designated the process by which a discrete system of phonemes can emerge from a continuous mass of sounds, but was later generalised to a broader class of semiotic phenomena in order to make possible a topological or physical-dynamic foundation for the description of basic classificatory phenomena. Accordingly, Stjernfelt proposed that categorical perception could be a primitive semiotic concept of biology in so far as it is topological in two ways: molecular categorization erects classes (of molecules, and even cells), and it relies on the topological features of the interacting molecules. This could also be a way of overcoming the fear of a disguised dualism lurking in descriptions of the basic properties of life in terms of code-duality or the complementarity of analog or `dynamic' and digital or `linguistic' modes of functioning (Pattee 1977), without taking recourse to a purely physical description (as in Carello et al. 1984).
On the first question we can conclude that nothing in the general concept of semiosis seems to prevent us from ascribing semiotic properties to the action of complex molecular processes at levels of complexity below life. To a biologist, a linguist, or a social anthropologist this may seem far-fetched, but no single discipline can have a privileged status vis-à-vis general semiotics. There have been several attempts to develop a more comprehensive view of signs in nature and culture. It has been proposed that the physical theory of complex dynamical systems can substantiate the constraints on simple semiotic interactions and give a sort of physical foundation to the theories of signs in nature and to evolutionary epistemology (Barham 1990, see also Yates 1985). I will admit that the idea of pure physical semiosis-not to be confused with physical aspects of biosemiotic processes-with no organism/interpreter seems counterintuitive. Though some physicists allow for the existence of teleomatic or `purpose-like', quasi-intentional phenomena in inorganic systems (e.g., the spontaneous probabilistic drive of irreversible processes towards thermodynamic equilibrium, see Wicken 1987), such forms of movement seem to have very little affinity to the idea of interpretants of signs.
However, counterintuitiveness does not have to be a decisive argument against the idea of general forms of semiosis without organisms, as long as we consider the notions of sign and semiosis at the most abstract and logical level, with no implications as to the nature of the interpretant. Organisms are just one specific way of organizing physical systems so that they have the capacity to relate to signs with environmental origins, to process this information, and eventually to reorganize their own structure adaptively according to that processing. In fact, many non-biological systems have some form of `memory' making their state dynamics path-sensitive, or dependent on the past history of the system (a phenomenon commonly known as hysteresis).
Accordingly, biosemiotics could be conceived as the sector of general semiotics dealing with the set of physical systems in which the interpretant of some crucial signs may directly influence the system's chance of maintaining its organizational stability as a whole, or staying in the game of existence. It is the domain in which we encounter systems with a special kind of wholeness or autonomy, as implied by the notion of autopoiesis of Maturana (1981), where an autopoietic system is a system constituted as a network of components which through their interactions produce the very components that constitute the network, and which also produces the boundaries of the network towards the environment (such as the cell membrane). This kind of organization of a system forms a condition for conferring a special `meaning' (in the functional sense of improved chances of maintaining existence) on the signs that such systems encounter, process or encompass. Though the theory of autopoiesis does not use representational or informational notions, because these are seen as implying mysterious non-physical `instructive interactions', the peircean concept of sign and sign interpretation implies only contiguous relations and continuous transformations, and in this respect it does not seem to be in conflict with Maturana's basic idea of the organism as a self-productive system; rather, it is incongruous with the mechanistic and deterministic philosophy in which the theory of autopoiesis was shaped.
An interpreter = an autopoietic system + a triadic interpretant
From the notion of the living state as autopoietic and the general concepts of sign and interpretant, we arrive then at the following definition of an interpreter: When an autopoietic system (such as a single cell, or a higher-order autopoietic system such as a multicellular organism) acts as interpretant in the sign-triad, it constitutes an interpreter. We see that sign-interpretation involves an interpretant of a special kind, i.e., an autopoietic system with an internal dynamics rich enough to let the perturbations of external objects affect the component-production network in a way that allows for continual autopoiesis, while at the same time letting the changed dynamics, representing past perturbations, acquire an anticipatory adaptive sign-function. (It is possible to develop this relation between biosemiotics and autopoiesis in further detail.) With the emergence of interpreters, the evolution of semiosis has reached a new level of complexity, because sign-interpretation is from now on bound to the special character of living systems in general, i.e., their teleonomic autonomy and functionality, their evolutionary history, development and behavioral ecology. An example already alluded to above is a given animal species' capacity to perceive its environment, select a habitat, and thereby co-determine its future evolutionary route (the Baldwin effect, see Bateson 1988).
By analogy, one might see the relation between inorganic chemosemiotics and biosemiotics proper as the relation between the `absolute' geometry based on the first four postulates of Euclid (from which he could derive the first twenty-eight propositions of his Elements) and the true Euclidean system including the fifth postulate as an axiom (corresponding in our analogy to the notion of an interpreter), from which, in concert with the first four postulates, the full-fledged Euclidean geometry could be deduced. Remember that the fifth postulate, the parallel postulate, was just one possible extension of absolute geometry to give the specific Euclidean system; alternative extensions worked out later gave rise to elliptic and hyperbolic geometries, which assigned to the notions of `line' and `point' implicit meanings other than the ones implied by the parallel postulate. Similarly, we might conceive of alternative forms of biosemiosis not tied to the implicit meanings of organisms acting as interpretants, if we dare to imagine alternative forms of `life' or lifelike processes that may give rise to complex dynamical systems not seen on Earth but maybe existing on other planets (one of the ideas behind the research programme for artificial life, cf. Langton 1989).
Signs on the borderline between order and chaos?
Under what conditions could we observe the transition from a complex dynamical system to one in which genuine semiosis takes place? Nobody seems to have the answer to that question, but some inspiration can be gained from computer science, nonlinear dynamics, and `Artificial Life', if we want to explore the physical conditions for the emergence of what could be interpreted as semiotic processes at a relatively simple level of complexity. For a moment, let us accept the idea that simple sign processes can be viewed as computations, no matter what medium is involved as support. Thus, we presume that the semiotic question of origin can be translated into an equivalent computational or physical one. Now, Chris Langton, a pioneer of the Artificial Life discipline, has emphasized that in order for computation to emerge spontaneously and become an important factor in the dynamics of a system, the material substrate must support the primitive functions required for computation: the transmission, storage, and modification of information (Langton 1990). One can accordingly ask under what dynamical conditions physical systems will support these operations of information processing. This question is still very difficult to address directly, so Langton instead looked at cellular automata (CA) as a class of formal abstractions of physical systems, and asked under what conditions a CA might support the basic operations of information transmission, storage and modification.
It is important to notice that a `cell' in a CA is a formal entity that has nothing to do with biological cells. By different kinds of CAs one can model various phenomena such as a biological cell, self-reproduction, intercellular communication, morphogenesis, etc., but a non-interpreted CA as such is just an array of abstract input/output units; a mathematical object. You may think of a cellular automaton as a huge grid of cellular boxes in which each cell, at each time step (`generation'), can be in one of two or more possible states (represented by numbers and visualised on the computer screen by colours), depending on the states of its neighbour cells at the preceding time step, according to a given rule. The values of all cells in the CA are simultaneously updated at each tick of a clock according to the chosen rule. Thus time, space, and the parameters describing each cell's state are all discrete. The computer game `Life' is a famous example of a two-dimensional CA, in which you can see small `creatures' wandering about on the screen of the computer, each creature being a specific, temporarily coherent pattern of cell states. Langton used one-dimensional automata, consisting of one line of cells that again change their state according to the states of the adjacent cells of the previous generation, visualised on the computer screen by lining the successive generations under each other (each one a line composed of cells in different colours according to their state), so that the development of the automaton looks like the development of a carpet being woven on a loom, or, if the screen has no colour, like the frost work of ice ferns creeping down the window on a cold winter day.
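A one-dimensional, two-state automaton of the kind just described can be sketched in a few lines; the rule number (110, in the common Wolfram numbering convention) and the lattice size are arbitrary illustrative choices:

```python
# Minimal sketch of a one-dimensional, two-state cellular automaton: each cell
# is updated simultaneously from its own state and its two neighbours' states,
# according to a fixed rule table (here encoded by a Wolfram rule number).
def step(cells, rule=110):
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]   # rule table: neighbourhood -> new state
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]                    # synchronous update, wrap-around edges

row = [0] * 31
row[15] = 1                                       # a single `on' cell in the middle
for _ in range(15):                               # successive generations form the `carpet'
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```

Printing each generation under the previous one yields exactly the woven-carpet picture described above.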
By testing many different types of CAs with different rules, Langton found qualitatively different types of behaviour (also characterised by Wolfram 1984): class I CAs evolve to a relatively boring homogeneous state (in the language of dynamical systems this corresponds to a limit point); class II CAs evolve to simple separated periodic structures (corresponding to limit cycles); class III yields chaotic aperiodic patterns (or `strange attractors'); and class IV gives rise to complex patterns of localised structures which may have very long transients of slowly propagating forms. Langton invented a measure of the complexity of the rule table responsible for the behaviour of each individual CA species (his λ parameter), which allowed him to compare the qualitative dynamics with a quantitative picture of different CAs (ordered on a scale from the most homogeneous to the most heterogeneous rule tables). He found a very sharp transition from `chaotic' rules (that gave rise to very disordered behaviour in the developing patterns of the `carpet') to `ordered' rules (that resulted in more `boring' and predictable behaviour). Only at the tiny border between ordered and disordered dynamics-which Langton compares with a `phase transition' between solid and fluid states of matter-could class IV dynamics, by far the most interesting kind, be found. What characterised this behaviour `at the edge of chaos'? That it was the most complex, and that, according to Langton, it had precisely the capacity to support information processing: in this class, the CAs developed ostensible information storage capacity in the form of stable patterns of configurations of cell states; apparent information transmission capacity in the sense of the long transients that can act as propagation of signals in the CA space over long distances; and furthermore the potential to modify these signals as the propagating signals interact with each other and in this way `compute' new configurations of metastable cell states.
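Langton's measure of rule-table complexity can be sketched as follows. This is a simplified reading of his λ parameter: the fraction of entries in the rule table that map to a state other than a designated quiescent state, so that λ = 0 gives maximally `ordered' rules and values near the heterogeneous extreme give `chaotic' ones, with the interesting class IV behaviour clustering near a critical value in between:

```python
# Simplified sketch of Langton's lambda parameter: the fraction of rule-table
# entries that map a neighbourhood to a non-quiescent state.
def lam(rule_table, quiescent=0):
    non_quiescent = sum(1 for s in rule_table if s != quiescent)
    return non_quiescent / len(rule_table)

# Example: the 8-entry table of a binary, 3-neighbour rule (rule 110 in the
# Wolfram numbering) has five non-quiescent entries, so lambda = 5/8.
table_110 = [(110 >> i) & 1 for i in range(8)]
print(lam(table_110))
```

Ordering many rule tables along this single axis is what lets qualitative dynamics (classes I-IV) be compared against a quantitative scale.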
Outside the phase transition, Langton notices that the correlation between the individual cell states within a CA is either too strong (so that cells become overly dependent) or too small (so that cells are overly independent), and in either case, "they cannot cooperate in a computational enterprise" (p. 30, ibid.). When correlations in behavior have the right magnitude, as in class IV, these "imply a kind of common code, or protocol, by which changes of state in one cell can be recognized and understood by the other as a meaningful signal. With no correlations in behavior, there can be no common code with which to communicate information." (ibid., Langton's emphasis). It would of course be rather surprising if Langton really implied that in this simple system he has in fact observed the emergence of genuine informational processes. Whether this is the case or not, he concludes (in Langton 1992) that these results suggest the possibility that the information dynamics that gave rise to life came into existence when global or local conditions brought the medium (probably water plus some other materials) through a critical phase transition, the dynamics of which support metastable structures characteristic of life: "Once such systems emerged near a critical transition, evolution seems to have discovered the natural information-processing capacity inherent in nearcritical dynamical systems, and to have taken advantage of it to further the ability of such systems to maintain themselves on essentially open-ended transients." (1992, p.85).
From a semiotic point of view, it is extremely interesting to discuss the possibility of a physical contribution to a general theory of the conditions for realizing sign processes in nature. In fact, a cluster of studies treating the phenomenon of `emergent computation' has recently appeared (Forrest, ed., 1990). However, one should recognize the strange nature of the `informational signals' or `computations' that are said to `emerge' in the cellular automata setup. There are problems with both the `emergence' part and the `information' part of the claim.
First, the character of the `emergence' can be discussed: Is the CA behavior genuinely emergent, or could it, as Cariani (1992a) emphasizes, in principle be predicted by an observer equipped with a good model of the system? With respect to the class IV CAs, which are probably computationally irreducible (Wolfram 1984), no modelling short-cuts can be made that predict the `emergent' patterns in any simpler way than by running a model that is equivalent to the CA being modelled, so Cariani's argument that the behavior is only seemingly emergent, not `emergent relative to a model' (to use Cariani's term), might seem a little quibbling, even if it is in principle sound: The model may `predict' the system, but the model is just as complex as the system, so it is a peculiar form of prediction. This, of course, should not distract from the importance of clearly defining in what sense the life-like or computation-like behaviour of a CA is emergent (see also Cariani 1992b; Baas forthcoming).
Second, and more pressing: what exactly is meant by the reference to `information' and `computation' in the cellular automata space to which Langton refers (and in many similar studies modelling life as an informational phenomenon)? It is not quite clear. Information is used in an intuitive way, but no definition is given. However, from the first passage referred to, it is clear that Langton contemplates the emergence of some kind of proto-semantic functions with `meaning'. This is indeed a strange thing. To whom should these emergent signals mean something, you might ask. Some authors would say that what is revealed at this very point is the whole problem of grounding the semantics of this type of model in a way that is not parasitic on the meaning ascribed to the model by the human observer (as discussed in the context of cognitive models by Harnad 1990). Though one could simply say that we should not attach too much significance to Langton's more rambling suggestions, the question underscores the fact that an inquiry into the genesis of living signs involves deep conceptual problems. Some insight might be gained from asking how far we can push the paradoxical idea of information with nobody to be informed. Although it is misleading to talk of `meaningful signals' in such a CA model of a dynamical physical system, it is not true that the model faces quite the same kind of `symbol grounding problem' as the cognitivistic models of human faculties criticized by Steven Harnad (1990) and John Searle (1992). If the general notion of semiosis, as we have seen, may extend to non-biological systems (with no interpreters) as well as biological ones, we do not have to claim that the individual signals of a CA bear any meaning whatsoever, even if we still consider them to be models of proto-informational entities.
Thus, in order to remedy the semantic grounding problem in the context of Langton's CAs, one could try to use the general peircean concept of sign as a triadic relation between a representamen (the `primary' sign, i.e., the physical vehicle of the sign), its object and its interpretant. The sign vehicle could then be seen as one of the propagating patterns in the CA. The interpretant, which is the `effect' of the action of the sign, could for example be the effect of one propagating pattern upon other `signs' or moving forms in the CA (and this may be what Langton calls `to cooperate in a computational enterprise'). But what about the object? It is difficult to imagine a representational relation between sign and object in such a simple system. This could be due to our mentalistic intuitions about the implied meaning of the representational part of Peirce's definition (Collected Papers vol. 2, paragraph 228) of the sign as "something which stands to somebody for something in some respect or capacity". Does the signal in the CA `stand for' anything? Maybe it stands for the previous conditions of cell states in the vicinity of the pattern in question? The `standing for' relation does not, strictly speaking, have to involve representations of a mental, intentional or cognitive character, but could be reduced to pure contiguity of different signs, or, as suggested by Thom, to a continuous propagation of form. The symbolistic paradigm in cognitive science is often criticized for its notion of `representation', because any concept of representation is thought to imply a form of mentalism and hence dualism. But representation could be understood, in the dynamic sense, as metastable forms relating an object to its immediate interpretant.
Even more serious is the claim that computations can emerge spontaneously in a physical system (or its `equivalent' abstract model-for a critique of such equivalence, see Kampis 1991). Can we conceive of computations as natural, self-organizing phenomena?
Computation as a natural phenomenon
What constitutes a computation? As a first approach, Langton's own remarks on computation in CAs may be illuminating (1990, p.15f): A cellular automaton can be viewed either (1) as a computer itself, or (2) as a structure into which one can embed higher-order computational structures. In (1), the initial configuration of cell states corresponds to the input data, and the transition function or rules of the CA correspond to the algorithm that is to be applied to the data. On this level, a CA constitutes just a special form of parallel computation that can be simulated on a normal (sequential von Neumann) computer. Alternatively (2), a CA can be considered as a logical universe within which computers can be embedded. Here, the initial configuration (or at least a local part of it) constitutes the computer, the CA's own rule function can be seen as the `physics' obeyed by the parts of the embedded computer, and the function of the embedded computer is to manipulate input data (which is also embedded in the CA as specific patterns). On the second view, we have three levels: the CA; the embedded computer; and the specific `program' and input data that are implied by the initial configuration of the CA, and that the embedded computer can run. It is well known, at least to CA freaks, that the `Game of Life' (i.e., the CA that has such and such a specific rule structure) has been shown to be capable of emulating, within a subset of its initial configurations, the logical gates that are needed to build a universal Turing machine (Berlekamp et al., 1982), the most general kind of computational machine. So CA rules can support patterns that emulate second-order rules which render possible the patterns needed to construct universal computation.
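The `Game of Life' rule just mentioned can be sketched directly: a live cell survives with two or three live neighbours, and a dead cell comes alive with exactly three. The glider below is one of the propagating patterns out of which the logic gates of an embedded universal computer have been built (Berlekamp et al. 1982); the implementation details here are merely an illustrative sketch:

```python
from collections import Counter

# One synchronous update of Conway's Game of Life, with the live cells held
# as a set of (x, y) coordinates on an unbounded grid.
def life_step(live):
    counts = Counter((x + dx, y + dy) for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))      # count live neighbours of every cell
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):                               # after 4 steps the glider has moved by (1, 1)
    state = life_step(state)
```

Such a self-propagating pattern is exactly the kind of `signal' that the embedded-computer view treats as a carrier of information within the CA's logical universe.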
Langton takes the second view and looks for `computational' structures embedded in this indirect way at the macro level of his CAs. But contrary to the known instances of computers constructed in CAs (as emulated, for instance, by the structures of the Game of Life), Langton looks for `computational atoms' emerging spontaneously in this same space. But what does that mean? Let us translate the situation from the computational space back to the physical. What would then be the analogy of such an endeavour? Probably it would be that one asks for the physical conditions for-in a liquid solution of some adequate atoms and molecules-the spontaneous emergence of the physical parts of a real computer, its hardware components, so to say. It is of course extremely unlikely, whether near phase transitions or in any other space of `natural' conditions, that a pure physical dynamics would allow specific hardware components of any designed machine to emerge spontaneously. Langton may object that he only intended to investigate how the preconditions for computational behaviour could emerge (and only in an abstract model of a physical system), not genuine computational operations as such. Nonetheless, the whole approach seems to rest upon the idea that, given the right physical conditions, computational processes can emerge spontaneously without the intervention either of a human number theorist or computer theorist, or of a long process of evolution of organisms with nervous systems or other alleged computational capacities (see below). (Langton's idea should not be confused with Ed Fredkin's even more speculative idea that the physical universe as such, by the dynamics of its finest grains, is a huge CA.)
The problem is that it is by no means clear how to speak rationally about computations without presupposing the existence of a complex system including (a) a conceptual structure of symbols, rules of manipulation, and well-formed strings as axioms to be manipulated; (b) an organism or a well-designed physical device that in some way can perform the manipulations; and (c) an interpreter that makes sense of (a) and (b).
Computations as normally conceived, whether they are realized by paper and pencil or by an automatic computational machine, presuppose the existence of a set of discrete symbols being manipulated according to some rules (which are not the `physical rules' or laws that govern the electromechanics of a machine or the biomechanics of a hand writing the symbols, but a set of rules that represent valid logical and mathematical transformations). Also presupposed is an interpretation of these symbols. Clearly, computation is not just automatic manipulation of symbols according to totally arbitrary or random rules. The rules embody the logic involved in doing that particular piece of mathematics. And the symbols mean or represent something: they stand for numbers (or bit codes that stand for numbers), which are conceptual entities within a mathematical space. Thus even if a desk computer does some part of the job automatically, ultimately a computer must be defined as an interpreted automatic formal system (Haugeland 1985).
One way of dealing with the problem is to take a perspectivistic or pluralistic stance and claim that we simply face different kinds of computations: one kind which has the normal intentional structure just outlined for human and machine-mediated human computation; another kind which is not conceptual but is found to take place in living cells (see below); and a third one which, under special circumstances, may occur in physical nature as a self-organising phenomenon, and which can be modelled in the abstract space of cellular automata. One could add an additional kind, which may not even deserve the name of computation, but which characterises a highly advanced form of mathematical reasoning that cannot be done merely by computational or algorithmic approaches, due to the limitations inherent in formal systems-e.g., reasoning about the truth value (true/false) of some propositions in number theory which, by Gödel's incompleteness theorem, can be shown to be undecidable in a given formal system (see Hofstadter 1979 for a nice introduction). Instead of kinds of computation, we prefer to talk of four different concepts of computation (abbreviated coc. 1-4), which can be listed as follows:
coc. 1. The formal, or algorithmic, concept of computation, which has its theoretical footing in the notion of a Universal Turing Machine (see, e.g., Davis 1978).
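The formal concept can be made concrete with a toy example. The following minimal simulator (an illustrative sketch of our own; the function and rule names are not from the cited literature) runs a finite rule table over a tape of discrete symbols, which is the essence of the Turing Machine picture of computation:

```python
# Sketch of coc.1: a Turing machine as a finite rule table operating on
# a tape of discrete symbols. This toy machine appends a '1' to a unary
# string and then halts (all names here are our own illustration).
def run_turing(tape, rules, state="start", pos=0, blank="_", max_steps=1000):
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Rule table: scan right past the 1s, write one more 1, then halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "N", "halt"),
}
print(run_turing("111", rules))  # -> 1111
```

Note that the machine itself only shuffles uninterpreted tokens; that the string of 1s stands for a number is our interpretation, exactly the point made above.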
coc. 2. An informal, intuitionistic, or `mathematical' concept of computation (and, in general, of reasoning about numbers) that is not bounded by the known limitations of formal systems. It simply points to the fact that mathematics cannot be reduced to automatic manipulation of symbol tokens; there is more to numbers (and hence, to computing) than the properties that can be accounted for by formal theories of computation.
(A rough indication of the difference between coc.1 and coc.2 is the number pi, the ratio of the circumference of a circle to its diameter. One can give an algorithm for generating the decimal expansion of pi (3.14159265358979...), i.e., an approximate value expressed as a rational number with a decimal fraction which may be arbitrarily long (within the limits of the physically possible). This number, however, is not pi-in that sense, no human being has ever `seen' pi. The algorithm is, so to speak, just the `translation' of the mathematical concept of an irrational and transcendental number into a computationally convenient expression of it. Strictly speaking, one cannot compute pi, only some of its approximations.)
coc. 3. A biological concept of computation. This seems to be a quasi-theoretical concept that can be understood in many ways, for example, as problem solving by learning and adaptability (e.g., Conrad 1990); as molecular processing of information in cells (Albrecht-Buehler 1985; Hameroff et al. 1989, discussed below); or as computation by neural networks.
coc. 4. A physical concept of computation, which might be non-representationalistic. The entities that co-operate in computational enterprises are patterns that can transmit, store and modify information, but these patterns seemingly do not have to `stand for' anything, as long as no functional constraints are imposed from a higher level. Framing the problem this way, the emergence of biological cells can be seen as the emergence of a level of entities that constitute boundary conditions (to use a physical term in a non-physicalist way) on the kind of physical computations that are allowed for on the level below cells. These boundary conditions are expressed in a dual way, as external and internal conditions: The entities will persist, not as concrete physical individuals, but as a class of entities (a lineage of autopoietic units with a specific family of genotypes), on the `external' condition that they either can compete successfully in an environment of limited resources, or otherwise can satisfy the conditions for maintaining autopoiesis (e.g., energy requirements) within a more gentle environment. The `internal' boundary conditions on the `physical computations' are imposed by the structure of the genotype, which in a sense selects a very narrow and specific set of macromolecules to be produced (and hence, molecular computations to be performed) out of the huge class of possible molecules that can be combined from the basic building blocks (amino acids). The emergence of autopoietic units implied the emergence of the distinction between external selective and internal selective constraints. We cannot yet explain how this came about in the first place; this is to some extent an empirical question, to be decided among the various hypotheses of the discipline of `protobiology'.
But it is clear that information, or the sign mediated relation between environment, organism and the internal metabolism, is crucial for this dual interplay of external and internal constraints to emerge. The constitution of the very first distinction between `external' and `internal'-as incarnated by a cell enveloped by a membrane-can be seen as a basic form of semiotics of living organization, which might be called the primary organismic information.
How these different concepts of computation relate to each other is not clear, and has not, as far as we know, been analysed by anyone, but the pluralistic acceptance of all four concepts may blur possible inconsistencies. From a formal point of view (cf. coc.1), or from an anthropocentric, conceptual perspective (one that may embrace coc.2), one is not forced to accept the notion of non-representational computation (coc.4). From a biosemiotic perspective (coc.3), there is a point in restricting language- and concept-dependent processes (such as formal and mathematical computation) to the level of `anthroposemiotics', which is outside the realm of biology, and in preserving the term `information' for simpler kinds of sign-transfer. One may easily be led to a `restrictivist' approach to computation and dismiss the less doctrinaire notions (coc.3 and 4) because they are not specifiable within a formal setting. The fact that sound research in physics and computer science has been devoted to understanding the principal physical limits on computation (e.g., Bennett and Landauer 1985) is not in conflict with such a formalistic stance, because these physical limits (on formal computation) do not imply `a physical concept of computation' (in the sense of coc.4), but are concerned with, for instance, how fast and energetically `cheap' one can realize formal computations in physical devices. However, it is premature to ascribe explanatory monopoly to just one theory of computation, especially as we are seeing a lot of interesting approaches within computer science and Artificial Life coming up with new material; e.g., Rasmussen, Knudsen and Feldberg 1992. As these authors observe, "a useful computation theory for natural systems has yet to be formulated" (p.220, ibid.; for a critical discussion, see Kampis 1991 and Pattee 1989).
We will not take the analysis of the different kinds of computation further here; instead, we shall give an example of a biological structure within the cell that is a good candidate for supporting possible `computations', at least in the unspecific sense of biophysical interactions that could instantiate cellular automata-like transformations of patterns acting as `propagators of form'.
Computation in the cytoskeleton
In addition to DNA, RNA, and the enzymatic machinery of the cell that takes part in the interpretation of the genetic information, other classes of biomolecules have been claimed to be directly involved in computation-like processes. The general metabolism of the cell can be seen as an example of a complex fluid computational network of `molecular automata' (Marijuán 1991, Conrad 1990), which can be simulated by various artificial life techniques. Here, we shall look at the cytoskeleton. The cytoskeleton is a structure within eukaryotic (`higher') cells made of fine threads of protein fibrils of various kinds, of which the so-called microtubules are among the most important. If a culture of cells is stained by a special immunofluorescence technique, one can see in the fluorescence microscope that each cell has within it a complex network of fibrils crossing the cell body in many directions. One function of the cytoskeleton is probably to give the cell structural, `bone-like' support, but more important is its conjectured organizational influence on complex cellular activities, such as the transport of proteins, organelles, calcium ions and other materials (for instance within the axons of nerve cells); mitosis; cell growth, shape and differentiation; and various types of cell movement. Contractile and enzymatic proteins seem to be attached to microtubules at specific sites and may function as `anchors' for other substances; transport rates of one to 400 millimetres per day have been measured.
Stuart Hameroff and his group have proposed the hypothesis that the activity of microtubules of the cytoskeleton is of a computational nature, because their structure seems able to support `cellular automata'-like changes of the states of its components (Hameroff et al. 1989, Hameroff 1987). We noted that a CA in two dimensions is a big grid of cells (like an infinite chessboard) that change state according to a rule table (the state transition function). For each cell at each time step, the rules take into account the states of the neighbour cells and determine the next state of the cell. In, for instance, the two-dimensional CA called `Game of Life', one of the characteristic patterns, the glider, is a configuration of five cells (all black) that stands out from the background (all white cells). The glider is so named because it changes periodically such that for every four `ticks' of the clock it resumes its original form, having meanwhile `moved' one cell diagonally down the grid. Now, a microtubule is built like a cylindrical grid of `cells', where each `cell' is a protein molecule (tubulin) that can take one of two molecular conformations, alpha and beta (corresponding to the CA states black and white in `Life'). Thus, instead of a flat grid, each microtubule of the cytoskeleton is like a cylindrical CA, in which each cell is a tubulin protein (in one of two possible conformational states). Moreover, the microtubule CA has its own rules of state change, and it is remarkable that these can be inferred from purely chemical considerations, due to the fact that there are constraints on the possible neighbourhood relations. We shall only give hints of the theory of these `state transition rules' here (for details, see Hameroff et al. 1989).
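The glider's behaviour is easy to verify in a few lines of code (an illustrative sketch of our own, assuming the standard birth/survival rules of `Life'):

```python
from collections import Counter

# Game of Life on an unbounded grid: live cells as a set of (row, col).
# A dead cell with exactly 3 live neighbours is born; a live cell with
# 2 or 3 live neighbours survives; all other cells are (or become) dead.
def step(live):
    counts = Counter((r + dr, c + dc)
                     for r, c in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The five-cell glider: after 4 ticks it resumes its shape, shifted
# one cell diagonally (here: one row down, one column right).
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(g == {(r + 1, c + 1) for r, c in glider})  # -> True
```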
The tubulin molecule is oriented (`north-south') in the microtubule cylinder. Furthermore, it is a dipole, and in the alpha conformation it has the negative charge localized towards the north; but if the labile electron is forced towards the south, the form of the molecule shifts to the beta conformation. Like cells in a beehive, each tubulin molecule has six neighbours (each one either in state alpha or beta). The electrostatic forces exerted on each molecule's mobile electron by the mobile electrons of its nearest-neighbour molecules depend on the dipole orientations of the neighbours, and may eventually force changes in the conformation of that tubulin molecule. This in turn may cause changes in its neighbours' conformations, and so on. These forces, which can be calculated, serve as the basis for `automata rules' in a model of the interactions. (It is like moods: if you are in the alpha mood, but some of your companions are in beta, you may feel forced to shift temper, and this shift will later affect the spirit of others.) But what dynamic mechanism could play the role of the synchronous updating of states in a CA? Here, Herbert Fröhlich's theory of coherent protein dipole excitations (Fröhlich 1986) indicates that a random supply of energy to such a system of nonlinearly coupled dipoles may lead to coherent excitations of a single vibrational mode (of about 10^11 Hz for one wave across the microtubule diameter), and this is suggested by Hameroff to serve as a `clocking' mechanism generating `discrete generations' of states in the microtubule CA lattice.
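As an illustration only, the following sketch sets up a two-state CA on a lattice in which every site has six neighbours, as in the microtubule cylinder. The simple threshold rule is our own stand-in for the electrostatic forces calculated by Hameroff et al., and the synchronous update loosely mimics the Fröhlich `clocking' mode:

```python
import random

# Illustrative sketch: a two-state (alpha=0 / beta=1) CA in which each
# site has six neighbours, as in the microtubule lattice (13
# protofilaments is the usual count). The threshold rule below merely
# stands in for the calculated electrostatic forces of Hameroff et al.;
# the synchronous update plays the role of the Fröhlich `clocking'.
# For simplicity the lattice wraps in both directions (a real
# microtubule wraps only around the cylinder).
ROWS, COLS = 8, 13

def neighbours(r, c):
    # hexagonal neighbourhood on a square-indexed grid
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]
    return [((r + dr) % ROWS, (c + dc) % COLS) for dr, dc in offsets]

def tick(grid):
    # a tubulin shifts to beta (1) if at least 4 of its 6 neighbours
    # are in the beta conformation; otherwise it relaxes to alpha (0)
    return {(r, c): 1 if sum(grid[n] for n in neighbours(r, c)) >= 4 else 0
            for r in range(ROWS) for c in range(COLS)}

random.seed(0)
grid = {(r, c): random.randint(0, 1) for r in range(ROWS) for c in range(COLS)}
for _ in range(10):  # ten synchronous `generations'
    grid = tick(grid)
```

Replacing the threshold with transition rules derived from the dipole interactions, as Hameroff's group does, is what yields the glider- and wave-like patterns discussed in the text.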
The exact derivation of the rules in the CA simulation will not be considered here; the important thing to notice is that they seem to be based on a plausible theory of the biophysical behaviour of oriented assemblies of dipole macromolecules with piezoelectric properties (so-called `electrets'), and that we face a good case of cellular automata-like behaviour in a real biological system. The latter conclusion is elaborated by the actual simulations of Hameroff's group, which show that a broad spectrum of virtual objects appears under various simulated conditions: patterns propagating through the microtubule CA such as `dot and triangle gliders' and the `spider glider'; linearly growing patterns such as `bean sprouts'; and the stationary `diamond blinker'. These patterns often consist of a `seed' structure that, once formed, propagates through the CA lattice, leaving behind it a wave of changed conformations that also propagates, like the wave front from a boat. The wave structures can arise from unordered starting conditions due to the self-organising properties of the interactions within the system. It is suggested by the authors that these "behaviors may be capable of information processing and computation" (Hameroff et al. 1989, p.543) and, of course, may serve the biological activities known to be associated with microtubules, including the movement of single-celled organisms and the transport of molecules within cells. As cytoskeletal structures are also found in the axons of nerve cells, Hameroff and co-workers speculate about a fundamental connection between inter- and intra-cellular information processing in the brain, and see the cytoskeleton as part of a nested hierarchy of automata that allows for "parallel computation and decision making inside a neuron" (ibid.). Let us quote their credo: "Just as the genetic code was deciphered, we hope to decrypt what may be real-time information codes in the cytoskeleton of eukaryotic cells.
By doing so, we can perhaps communicate with and program cytoskeleton structures to perform tasks including nanoscale surgery for a variety of medical problems, or the self-assembly of large-scale cognitive arrays. These would truly be new frontiers of `Artificial Life' " (p. 544, ibid.).
Again, it is by no means clear in what sense the possible existence of these travelling waves of form through the microtubules in living cells could support genuine computation, if the notion of computation involved here is to be made as explicit as the formal concept referred to above. For the sake of clarity, we are forced to distinguish between different situations: (a) One thing is to have a physical system acting as support for wave propagation (and to model this in a CA system, and implement such a model in a Turing-type computational machine like a desk computer). Here, if semiosis is involved, it can only be in the form of chemosemiosis as discussed above, because one has no genuine interpreter in such a system (apart from the human interpreter of the model of the system). (b) Another thing is to have a functional system in which waves may have a biofunctional role as signals or sign vehicles (here, true semiosis on the biological level is involved). It is quite conceivable that important parts of the microtubule's behaviour have semiotic functions without performing computations other than what might metaphorically be implied by coc.3. (c) Quite another thing is to have signals representing pieces of information that are being processed within a genuine computational system. If one does not recognize notions of computation other than the formal and the mathematical referred to above, then the microtubules' behaviour will not be seen as computational. Of course, we may re-define our concept or take the pluralistic stance, but this does not enlighten us as to the real nature and function of, for instance, the alleged `computational properties' of a system like the cell's cytoskeleton.
We are forced to conclude that there is no general theory of realized computation in natural systems, and that neither semiotics, molecular biology, nor artificial life research yet provides such a theory. What semiotic reflection may help to do is to clarify epistemological and model-theoretic issues and to situate the functional properties of biological information within a greater evolutionary frame. Computing must be seen as a human and semiotic activity. The conceptual obstacles to a coherent understanding of life, computation and sign-activity must be remedied, and there is indeed some hope that a broader perspective may emerge from the cross-disciplinary gathering around cognitive science, artificial life, biology, semiotics and general epistemology.
For discussions on these ideas, and for comments and criticism, I thank Nils A. Baas, Peter Cariani, Peder Voetmann Christianen, Julio Fernández, Jesper Hoffmeyer, Hans Siggaard Jensen, Mogens Kilstrup, Simo Køppe, Chris Langton, Benny Lautrup, Michael May, Alvaro Moreno, Steen Rasmussen, Ib Ravn, Frederik Stjernfelt, Ib Ulbæk and Esben Villumsen.
Abir-Am, Pnina G. (1985): "Themes, genres and orders of legitimation in the consolidation of new scientific disciplines: deconstructing the historiography of molecular biology", Hist. Sci. 23: 73-117.
Albrecht-Buehler, Guenter (1985): "Is cytoplasm intelligent too?", Cell and Muscle Motility 6: 1-21.
Anderson, M., Deely, J., Krampen, M., Ransdell, R., Sebeok, T.A. & Uexküll, T. von (1984): "A semiotic perspective on the sciences: steps toward a new paradigm", Semiotica 52(1/2), 7-47.
Baas, Nils A. (forthcoming): "Hyperstructures-a framework for emergents, hierarchies, evolution and complexity" (Preprint from The Mathematical Institute of Trondheim University, to be published in Artificial Life III, Santa Fe Studies in the Sciences of Complexity Proceedings Volume ?, Addison-Wesley, Redwood City, Calif.).
Bagley, R.J., J.D. Farmer, S.A. Kauffman, N.H. Packard, A.S. Perelson and I.M. Stadnyk (1989): "Modeling adaptive biological systems", BioSystems 23: 113-138.
Barham, James (1990): "A Poincaréan approach to evolutionary epistemology", Journal of Social and Biological Structures 13(3): 193-258.
Bateson, Patrick (1988): "The active role of behaviour in evolution", pp. 191-207 in: M.-W. Ho & S. W. Fox, eds., Evolutionary Processes and Metaphors. John Wiley & Son, Chichester.
Bennett, Charles H. & Rolf Landauer (1985): "The fundamental physical limits of computation", Scient. Amer. 253 (1): 48-56.
Berlekamp, Elwyn R., John H. Conway & Richard K. Guy (1982): Winning Ways for Your Mathematical Plays, vol.2, pp.817-850, Academic Press, London.
Braestrup, F.W. (1971): "The evolutionary significance of learning", Vidensk. Meddr. dansk naturhist. Foren. 134: 89-102.
Carello, Claudia, M.T. Turvey, Peter N. Kugler, and Robert E. Shaw (1984): "Inadequacies of the computer metaphor", pp. 229-248 in: Michael S. Gazzaniga, ed.: Handbook of Cognitive Neuroscience. Plenum Press, New York.
Cariani, Peter (1992a): "Emergence and artificial life", pp.775-797 in C.G. Langton et al., eds.: Artificial Life II (Santa Fe Studies in the Sciences of Complexity vol. X). Addison-Wesley, Redwood City, Calif.
Cariani, Peter (1992b): "Adaptivity and emergence in organisms and devices", World Futures 32: 111-132.
Chandebois, Rosine & Jacob Faber (1983): Automation in animal development. Karger, Basel.
Conrad, Michael (1990): "Molecular computing", pp. 235-324 in: Marshall C. Yovits, ed., Advances in Computers, vol. 31. Academic Press, London.
Davis, Martin (1978): "What is a computation?" pp.241-267 in: Lynn Arthur Steen, ed.: Mathematics today: twelve informal essays. Springer-Verlag, New York.
Dennett, D.C. (1978): Brainstorms: Philosophical Essays on Mind and Psychology. Bradford Books, MIT Press, Cambridge, Mass.
Eder, J. & Rembold, H. (1992): "Biosemiotics - a paradigm of biology", Naturwissenschaften 79: 60-67.
Emmeche, Claus (1991): "A Semiotical Reflection on Biology, Living Signs and Artificial Life", Biology and Philosophy 6 (3): 325-340.
Emmeche, Claus (1992a): "Modeling life: a note on the semiotics of emergence and computation in artificial and natural living systems", pp. 77-99 in: Thomas A. Sebeok & Jean Umiker-Sebeok (eds.), Biosemiotics: The Semiotic Web 1991. Mouton de Gruyter, Berlin.
Emmeche, Claus (1992b): "Life as an abstract phenomenon: is Artificial Life possible?", pp. 466-474 in: Francisco J. Varela and Paul Bourgine (eds.): Toward a Practice of Autonomous Systems. Proceedings of the First European Conference on Artificial Life. The MIT Press, Cambridge, Mass.
Emmeche, Claus (1993): "Is life as a multiverse phenomenon?", pp.553-568 in: Christopher G. Langton, ed.: Artificial Life III (= Santa Fe Institute Studies in the Sciences of Complexity, Proceedings Volume XVII), Addison-Wesley Publishing Company, Reading, Massachusetts.
Emmeche, Claus (1994): The Garden in the Machine - the emerging science of artificial life. Princeton University Press, Princeton.
Emmeche, Claus and Hoffmeyer, Jesper (1991): "From language to nature - the semiotic metaphor in biology", Semiotica 84 (1/2), 1-42.
Farmer, J. Doyne and Aletta d'A. Belin (1992): "Artificial Life: the Coming Evolution." In: Artificial Life II. Santa Fe Institute Studies in the Sciences of Complexity, Proc. Vol. X, edited by Christopher G. Langton, Charles Taylor, J. Doyne Farmer and Steen Rasmussen, 815-838. Redwood City, Calif.: Addison-Wesley.
Fernández Ostolaza, Julio and Álvaro Moreno Bergareche (1992): Vida Artificial. Eudema, Madrid.
Forrest, Stephanie, ed. (1990): Emergent computation (special issue), Physica D 42.
Fröhlich, H. (1986): "Coherent excitations in active biological systems", pp. 241-261 in: F. Gutmann & H. Keyzer, eds.: Modern Biochemistry. Plenum Press, New York.
Goodwin, Brian C. (1984): "Changing from an evolutionary to a generative paradigm in biology", pp. 99-120 in: J. W. Pollard, ed.: Evolutionary Theory: Path into the Future. London: John Wiley & Sons.
Hameroff, Stuart R. (1987): Ultimate computing: biomolecular consciousness and nanotechnology. Elsevier, North-Holland, New York.
Hameroff, Stuart, Steen Rasmussen and Bengt Månsson (1989): "Molecular automata in microtubules: basic computational logic of the living state?", pp. 521-553 in: C.G. Langton, ed.: Artificial Life, (= Santa Fe Institute Studies in the Sciences of Complexity, vol.6). Redwood City, Calif.: Addison-Wesley Publ. Co.
Harnad, Stevan (1990): "The symbol grounding problem", Physica D 42: 335-346.
Haugeland, J. (1985): Artificial Intelligence: The very idea. MIT Press, Cambridge, Mass.
Hoffmeyer, Jesper (forthcoming): "Semiotic aspects of biology: Biosemiotics", in: Robert Posner, Klaus Robering and Thomas A. Sebeok, eds.: Semiotics: A Handbook of the Sign-Theoretic Foundations of Nature and Culture. Mouton de Gruyter, Berlin & New York.
Hoffmeyer, J. & Emmeche, C. (1991): "Code-duality and the semiotics of nature", pp. 117-166 in: Myrdene Anderson & Floyd Merrell, eds., On Semiotic Modelling. Mouton de Gruyter, Berlin.
Hofstadter, Douglas R. (1979): Gödel, Escher, Bach: an Eternal Golden Braid. The Harvester Press, London.
Jamieson, Ian G. (1986): "The functional approach to behavior: is it useful?", American Naturalist 127: 195-208.
Judson, Horace Freeland (1979): The Eighth Day of Creation. The Makers of Revolution in Biology. Simon and Schuster, New York.
Kampis, Georg (1991): Self-modifying Systems in Biology and Cognitive Science. Pergamon Press, New York.
Kilstrup, Mogens (forthcoming): "Inductive semiotics: the synthesis of sign triads" (manus., Institute of Biological Chemistry B, University of Copenhagen).
Lander, Eric S., Robert Langridge & Damian M. Saccocio (1991): "Computing in molecular biology: mapping and interpreting biological information", Computer 24 (11): 6-13 [also in Communications of the ACM, Nov. 1991].
Langton, Christopher G. (1989): "Artificial life", pp. 1-47 in C.G. Langton, ed.: Artificial Life, (= Santa Fe Institute Studies in the Sciences of Complexity, vol.6). Redwood City, Calif.: Addison-Wesley Publ. Co.
Langton, Chris G. (1990): "Computation at the edge of chaos: phase transitions and emergent computation", Physica D 42: 12-37.
Langton, Christopher G. (1992): "Life at the edge of chaos", pp. 41-91 in: C. G. Langton et al., eds.: Artificial Life II (Santa Fe Studies in the Sciences of Complexity vol. X). Addison-Wesley, Redwood City, Calif.
Lewontin, R.C. (1982): "Organism and environment", pp. 151-172 in: H.C. Plotkin, ed.: Learning, Development, and Culture. Essays in Evolutionary Epistemology. John Wiley & Sons, Chichester.
Marijuán, Pedro C. (1991): "Enzymes and theoretical biology: sketch of an informational perspective of the cell", Biosystems 25, 259-273.
Maturana, Humberto R. (1981): "Autopoiesis", pp. 21-33 in: Milan Zeleny, ed.: Autopoiesis. A Theory of Living Organization. Elsevier North Holland, New York.
Maturana, Humberto R. and Francisco J. Varela (1980): Autopoiesis and Cognition (= Boston Studies in the Philosophy of Science vol. 42), D. Reidel, Dordrecht.
Mayr, Ernst (1982): The Growth of Biological Thought. Cambridge, Mass.: The Belknap Press of Harvard University Press.
Meyer, Jean-Arcady & Wilson, Stewart W., eds. (1991): From Animals to Animats, The MIT Press, Cambridge, Mass.
Midgley, Mary (1979): "Gene-juggling", Philosophy 54: 439-458.
Morris, Charles W. (1938): Foundations of the Theory of Signs. University of Chicago Press, Chicago.
Nussinov, Ruth (1987): "Theoretical molecular biology: prospectives and perspectives", J. theor. Biol. 125: 219-235.
Mosekilde, Erik & Lis Mosekilde, eds. (1991): Complexity, Chaos, and Biological Evolution. NATO ASI Series B270. Plenum Press, New York.
Oyama, Susan (1985): The Ontogeny of Information. Developmental Systems and Evolution. Cambridge: Cambridge University Press.
Pattee, H.H. (1977): "Dynamic and linguistic modes of complex systems", Int. J. General Systems 3: 259-266.
Pattee, H.H. (1989): "The measurement problem in artificial world models", BioSystems 23: 281-290.
Peirce, C.S. (1931-58): Collected Papers of Charles Sanders Peirce, vol.1-8, eds.: Charles Hartshorne, Paul Weiss & Arthur Burks. Harvard University Press, Cambridge, Mass.
Rasmussen, Steen, Carsten Knudsen, & Rasmus Feldberg (1992): "Dynamics of programmable matter", pp. 211-254 in: C. G. Langton et al., eds.: Artificial Life II (Santa Fe Studies in the Sciences of Complexity vol. X). Addison-Wesley, Redwood City, Calif.
Richards, Graham (1987): Human Evolution. Routledge & Kegan Paul, London.
Searle, John (1992): The Rediscovery of the Mind. The MIT Press, Cambridge, Mass.
Sebeok, Thomas A. (1972). Perspectives in Zoosemiotics. (= Janua Linguarum, Series Minor, 122). The Hague: Mouton.
Sebeok, Thomas A. (1987): "Toward a natural history of language", Semiotica 65 (3/4): 342-358.
Sebeok, Thomas A. & Jean Umiker-Sebeok, eds. (1992): Biosemiotics: The Semiotic Web 1991. Mouton de Gruyter, Berlin.
Sibatani, Atuhiro (1989): "How to structuralise biology?", pp. 16-30 in: B. Goodwin, A. Sibatani & G. Webster, eds.: Dynamic Structures in Biology. Edinburgh University Press, Edinburgh.
Short, Thomas L. (1986): "What they said in Amsterdam: Peirce's semiotic today", Semiotica 60 (1/2): 103-128.
Spafford, Eugene (1992): "Computer Viruses - a Form of Artificial Life?" pp. 727-746 in: Artificial Life II. Santa Fe Institute Studies in the Sciences of Complexity, Proc. Vol. X, edited by Christopher G. Langton, Charles Taylor, J. Doyne Farmer and Steen Rasmussen. Redwood City, Calif.: Addison-Wesley.
Stent, Gunther S. (1968): "That was the molecular biology that was", Science 160: 390-395.
Stjernfelt, Frederik (1992): "Categorial perception as a general prerequisite to the formation of signs?", pp. ¤¤-¤¤ in: Thomas A. Sebeok & Jean Umiker-Sebeok, eds., Biosemiotics: The Semiotic Web 1991. Mouton de Gruyter, Berlin.
Thom, René (1973): "A semantic chamelion: Information", chapter 15 in R. Thom (1983): Mathematical Models of Morphogenesis. Ellis Horwood, Chichester; John Wiley & Sons, New York.
Uexküll, Thure von (1989): "Jakob von Uexküll's Umwelt-Theory", pp. 129-158 in: Th.A. Sebeok & J. Umiker-Sebeok, eds.: The Semiotic Web 1988, Mouton de Gruyter, Berlin & New York.
Yates, F.E. (1985): "Semiotics as a bridge between information (biology) and dynamics (physics)", Recherches Sémiotiques/Semiotic Inquiry 5 (4): 347-360.
von Neumann, John (1966): Theory of Self-Reproducing Automata (ed. and completed by A. W. Burks), University of Illinois Press, Urbana.
Webster, B. & Goodwin, B.C. (1982): "The origin of species: a structuralist approach", J. Social Biol. Struct. 5: 15-68.
Wicken, Jeffrey S. (1987): Evolution, Thermodynamics, and Information. Extending the Darwinian Program. Oxford University Press, Oxford.
Wolfram, Stephen (1984): "Universality and complexity in cellular automata", Physica D 10: 1-35.
Wolpert, Lewis (1991): The Triumph of The Embryo. Oxford University Press, Oxford.