

Marco Giunti

DYNAMICAL MODELS OF COGNITION

In van Gelder, T. and R. Port, eds. (1995). Mind as motion: Explorations in the dynamics of cognition, ch. 18, 549-571. Cambridge MA: The MIT Press.



 
 
 

EDITORS' INTRODUCTION
Dynamics has been employed in various aspects of cognitive science since at least the late 1940s. Until recently, however, this kind of research had received little philosophical attention, especially by comparison with the great deal of effort and care that has been devoted to the articulation and evaluation of the mainstream computational approach. As a result, many foundational questions concerning dynamical models and their relation to more conventional models have remained unaddressed.
In this chapter, Marco Giunti confronts some of the most central foundational questions head-on. First and foremost: What, exactly, is one saying when claiming that a cognitive system is a dynamical system? In answering this question, Giunti is led to address a host of further critical issues, and to formulate and defend the ambitious general empirical hypothesis that all cognitive systems are dynamical systems. A key insight is that computational systems, as deployed in classical symbolic cognitive science, constitute a specially restricted subclass of dynamical systems. Relaxing these restrictions can therefore lead to noncomputational dynamical models of cognitive processes. (Note that Giunti uses the term dynamical system in the broader of the senses discussed in chapter 1, section 1.1.)
Investigating the philosophical foundations of dynamical research is - like that research itself - an ongoing enterprise. Many more questions remain to be explored. Nevertheless, the answers that Giunti provides form an essential part of the general conceptual framework surrounding current research efforts.

18.1 INTRODUCTION
A cognitive system is any concrete or real object which has the kind of properties (namely, cognitive properties) in which cognitive scientists are typically interested. Note that this definition includes both natural systems such as humans and other animals, and artificial devices such as robots, implementations of artificial intelligence (AI) programs, some neural networks, etc. Focusing on what all cognitive systems have in common, we can state a very general but nonetheless interesting thesis: all cognitive systems are dynamical systems. Section 18.2 explains what this thesis means and why it is (relatively) uncontroversial. It will become clear that this thesis is a basic methodological assumption which underlies practically all current research in cognitive science.
The goal of section 18.3 is to contrast two types of explanations of cognition: computational and dynamical. Computational explanations are characterized by the use of concepts drawn from computability theory, while dynamical explanations employ the conceptual apparatus of dynamical systems theory. Further, I suggest that all explanations of cognition might end up sharing the same dynamical style, for dynamical systems theory is likely to be useful in the study of any kind of model currently employed in cognitive science. In particular, a dynamical viewpoint might even benefit those explanations of cognition which are based on symbolic models. Computational explanations of cognition, by contrast, can only be based on symbolic models or, more generally, on any other type of computational model. In particular, those explanations of cognition which are based on an important class of connectionist models cannot be computational, for this class of models falls beyond the scope of computability theory. Arguing for this negative conclusion requires a formal explication of the concept of computational system.
Finally, section 18.4 explores the possibility that explanations of cognition might be based on a type of dynamical model which cognitive scientists generally have not considered yet. I call a model of this special type a Galilean dynamical model of a cognitive system. The main goal of this section is to contrast this proposal with the current modeling practice in cognitive science and to make clear its benefits.

18.2 COGNITIVE SYSTEMS AS DYNAMICAL SYSTEMS
This section proposes a methodological interpretation of the thesis that all cognitive systems are dynamical systems, and then provides an argument which in fact shows that this thesis underlies all current research on cognition. Before doing this, however, it is crucial to clarify the distinction between mathematical and real dynamical systems and, second, the relationship which a real dynamical system may bear to a mathematical one.

REAL VS. MATHEMATICAL DYNAMICAL SYSTEMS
A real dynamical system is any concrete object which changes over time. A mathematical dynamical system, on the other hand, is an abstract mathematical structure which can be used to describe the change of a real system as an evolution through a series of states. (Thus, only real dynamical systems actually undergo change; mathematical dynamical systems are timeless, unchanging entities which can nevertheless be used as models of change in real systems.) If the evolution of the real system is deterministic, i.e., if the state at any future time is determined by the state at the present time, then the abstract mathematical structure consists of three elements. The first element is a set T which represents time. T may be either the reals, the rationals, the integers, or the nonnegative portions of these structures. Depending on the choice of T, then, time will be represented as continuous, dense, or discrete. The second element is a nonempty set M which represents all the possible states through which the system can evolve; M is called the state space (or sometimes the phase space) of the system. The third element is a set of functions {gt} which tells us the state of the system at any instant t ∈ T, provided that we know the initial state [1]. For example, if the initial state is x ∈ M, the state of the system at time t is given by gt(x), the state at time w > t is given by gw(x), etc. The functions in the set {gt} need only satisfy two conditions. First, the function g0 must take each state to itself, for the state at time 0 when the initial state is x obviously is x itself. Second, the composition of any two functions gt and gw must be equal to the function gt+w, for the evolution up to time t+w can always be thought of as two successive evolutions, the first up to time t and the second up to time w.
An important subclass of the mathematical dynamical systems is that of all systems with discrete time. Any such system is called a cascade. More precisely, a mathematical dynamical system <T M {gt}> is a cascade just in case T is equal to the nonnegative integers (or to the integers). To obtain a cascade, we may start from any nonempty set M and any function g: M → M. We then set T equal to the nonnegative integers, and we define the state transitions {gt} as follows: g0 = the identity function on M and, for any x ∈ M, gt+1(x) = g(gt(x)). In other words, we generate an arbitrary state transition gt (t > 0) by iterating the function g t times (note that g1 = g).
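This iterative construction is easy to make concrete. A minimal Python sketch follows; the doubling map is an arbitrary choice of g for the example.

```python
def make_cascade(g):
    """Build the state transitions {gt} of a cascade from a map g: M -> M.

    g0 is the identity on M; gt+1(x) = g(gt(x)), so gt is the t-fold
    iterate of g.
    """
    def g_t(t, x):
        for _ in range(t):
            x = g(x)
        return x
    return g_t

# Example: the doubling map on the integers.
step = make_cascade(lambda x: 2 * x)
assert step(0, 5) == 5                       # g0 is the identity
assert step(3, 1) == 8                       # g3(1) = 2**3 * 1
# The composition condition gt+w = gt . gw holds by construction:
assert step(2 + 3, 7) == step(2, step(3, 7))
```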
The distinction between real and mathematical dynamical systems is crucial for understanding the thesis that all cognitive systems are dynamical systems. Before going on, then, let me further illustrate this distinction by means of a classic example. Consider first those concrete objects (falling bodies, spheres on inclined planes, projectiles, etc.) which Galileo studied in the course of his investigations in the field of mechanics. These objects are examples of real dynamical systems. Consider, then, Galileo's laws for the position Y and velocity V of a falling body: Y[y v](t) = y + vt + (1/2)ct² and V[y v](t) = v + ct, where y and v are, respectively, the position and velocity of the falling body at time 0, and c is a constant (the acceleration of gravity). If we identify the state of a falling body with the values of its position and velocity, it is easy to verify that these two laws specify a mathematical dynamical system G = <T Y×V {gt}>, where each state transition gt is defined by gt(y, v) = <Y[y v](t), V[y v](t)>.
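The verification that Galileo's two laws satisfy the two conditions on {gt} can be carried out numerically. A minimal Python sketch; the value chosen for c and the initial state are arbitrary values for the example.

```python
import math

C = 9.81  # the constant c (acceleration of gravity); value chosen for the example

def g(t, state):
    """State transition gt of the system G, with state = (y, v)."""
    y, v = state
    return (y + v * t + 0.5 * C * t ** 2, v + C * t)

# First condition: g0 takes each state to itself.
assert g(0, (10.0, 0.0)) == (10.0, 0.0)

# Second condition: gt+w equals the composition of gt and gw
# (up to floating-point rounding).
a = g(1.5 + 2.0, (10.0, 0.0))
b = g(1.5, g(2.0, (10.0, 0.0)))
assert math.isclose(a[0], b[0]) and math.isclose(a[1], b[1])
```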

THE INSTANTIATION RELATION
What is the relation between the mathematical dynamical system G specified by Galileo's laws and a real falling body? We all know that, within certain limits of precision, these two laws accurately describe how the position and velocity of a real falling body change in time. Therefore, we may take the mathematical dynamical system G to correctly describe one aspect of the change of a real falling body, i.e., its change of position and velocity. However, it is important to note that, if we decided to focus on a different aspect of its change, a different mathematical dynamical system would in general be appropriate. For example, suppose we are interested in how the mass of the body changes. Then, since we may take the mass to be a constant m, we obtain a different mathematical dynamical system H = <T {m} {ht}>, where each state transition ht is the identity function on the state space {m}, i.e., ht(m) = m. We may thus claim that the mathematical dynamical system H correctly describes a different aspect of the change of a falling body, i.e., its change of mass.
This example thus shows that different mathematical dynamical systems may correctly describe different aspects of the change of the same real dynamical system. Let me now introduce a bit of terminology which will be useful later. I will say that a real dynamical system RDS instantiates a mathematical dynamical system MDS just in case MDS correctly describes some aspect of the change of RDS. According to this definition, then, we may take a falling body to instantiate both systems G and H specified above. In general, given a real dynamical system, this system will instantiate many mathematical dynamical systems, and each of these systems represents a different aspect of the change of the real system.

ALL COGNITIVE SYSTEMS ARE DYNAMICAL SYSTEMS: THE MEANING
We are now in a position to see what the distinction between real and mathematical dynamical systems has to do with the interpretation of the thesis that all cognitive systems are dynamical systems. First, if we interpret "dynamical system" as real dynamical system, the thesis turns out to be trivial. A real dynamical system is any concrete object which changes in time. But, since any concrete object can be said to change in time (in some respect), anything is a real dynamical system. Furthermore, a cognitive system is a concrete object of a special type, that is, an object with those kinds of properties usually studied by cognitive scientists. It thus trivially follows that any cognitive system is a real dynamical system. Second, if we instead interpret "dynamical system" as a mathematical dynamical system, the thesis affirms an absurdity, for a cognitive system, which is a concrete or real object, is said to be identical to a mathematical dynamical system which, by definition, is an abstract structure.
It thus seems that we face here a serious difficulty: depending on how we interpret the term dynamical system, the thesis that all cognitive systems are dynamical systems turns out to be either trivial or absurd. This, however, is a false dilemma, for this thesis is better interpreted in a third way, which gives a definite and nontrivial meaning to it.
When we say that a certain object is a cognitive system, we describe this object at a specific level, i.e., the level of its cognitive properties. And when we further say that this object is a dynamical system, we are making a methodological claim as to how its cognitive properties can be understood. This claim is that they can be understood by studying a mathematical dynamical system which correctly describes some aspect of its change. According to this methodological interpretation, then, a cognitive system is a dynamical system just in case its cognitive properties can be understood or explained by studying a mathematical dynamical system instantiated by it.
Interpreted this way, the thesis that all cognitive systems are dynamical systems thus means that (1) any cognitive system is a real dynamical system and (2) this system instantiates some mathematical dynamical system whose study allows us to understand or explain the cognitive properties of the real system.
We have seen above that the first clause of this thesis is trivial. However, the second clause gives us an interesting methodological indication: if we want to understand the cognitive properties of a real system, then we may study an appropriate mathematical dynamical system instantiated by it, that is, a specific mathematical structure which correctly describes some aspect of the change of the real system.

ALL COGNITIVE SYSTEMS ARE DYNAMICAL SYSTEMS: THE ARGUMENT
I have proposed a methodological reading of the thesis that all cognitive systems are dynamical systems. According to this interpretation, the thesis means that the cognitive properties of an arbitrary cognitive system can be understood or explained by studying a mathematical dynamical system instantiated by it. How might one argue for this thesis?
First, we need a crucial premise concerning the abstract mathematical models currently employed in cognitive science. These models can be basically classified into three different types: (1) symbolic processors, (2) neural networks, and (3) other continuous systems specified by differential or difference equations. Each of these three types corresponds to a different approach to cognition. The symbolic or classic approach (Newell and Simon, 1972; Newell, 1980; Pylyshyn, 1984; Johnson-Laird, 1988) employs symbolic processors as models; the connectionist approach (Rumelhart and McClelland, 1986b) employs neural networks; and models of the third type are typically proposed by nonconnectionist researchers who nevertheless believe that cognition should be studied by means of dynamical methods and concepts. Nonconnectionist researchers favoring a dynamical perspective are active in many fields; for examples, see many of the chapters of this book.
Now, the crucial premise is that all systems which belong to any of these three types are mathematical dynamical systems. That a system specified by differential or difference equations is a mathematical dynamical system is obvious, for this concept is expressly designed to describe this class of systems in abstract terms. That a neural network is a mathematical dynamical system is also not difficult to show. A complete state of the system can in fact be identified with the activation levels of all the units in the network, and the set of state transitions is determined by the differential (or difference) equations which specify how each unit is updated. To show that all symbolic processors are mathematical dynamical systems is a bit more complicated. The argumentative strategy I prefer considers first a special class of symbolic processors (such as Turing machines, or monogenic production systems, etc.) and then shows that the systems of this special type are mathematical dynamical systems. Given the strong similarities between different types of symbolic processors, it is then not difficult to see how the argument given for one type could be modified to fit any other type. Here, I will limit myself to showing that an arbitrary Turing machine is in fact a mathematical dynamical system.
A Turing machine is an ideal mechanism that evolves in discrete time steps. This mechanism is usually pictured as having three parts. First, a tape divided into a countably infinite number of adjacent squares; each of these squares contains exactly one symbol taken from a finite alphabet {aj} [2]. Second, a head, which is located on a square of the tape and can perform three different operations: write a symbol on that square, move to the adjacent square to the right, or move to the adjacent square to the left. Third, a control unit which, at any time step, is in exactly one of a finite number of internal states {qi}. The behavior of the machine is specified by a set of instructions, which are conditionals of the form: if the internal state is qi, and the symbol on the square where the head is located is aj, write symbol ak (or move one square to the right, or move one square to the left) and change internal state to ql. Each instruction can thus be written as a quadruple of one of three types: qiajakql, qiajRql, qiajLql, where R and L stand, respectively, for "move to the right" and "move to the left". The only requirement which the set of quadruples must satisfy is that it be consistent, in the sense that it cannot contain two conflicting instructions, i.e., two different quadruples which begin with the same state/symbol pair.
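The consistency requirement is mechanically checkable. A minimal Python sketch, with each quadruple encoded as a 4-tuple (state, symbol, action, next state) — a hypothetical encoding chosen for the example.

```python
def consistent(quadruples):
    """True iff no two quadruples begin with the same state/symbol pair."""
    seen = set()
    for (q, a, _action, _next_q) in quadruples:
        if (q, a) in seen:
            return False
        seen.add((q, a))
    return True

# Two instructions for distinct state/symbol pairs: consistent.
assert consistent([('q0', '0', '1', 'q1'), ('q0', '1', 'R', 'q0')])
# Two different instructions for the same pair (q0, 0): inconsistent.
assert not consistent([('q0', '0', '1', 'q1'), ('q0', '0', 'L', 'q0')])
```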
Given this standard description of an arbitrary Turing machine, it is now not difficult to see that this ideal mechanism can in fact be identified with a mathematical dynamical system <T M {gt}>. Since a Turing machine evolves in discrete time steps, we may take the time set T to be the set of the nonnegative integers. Since the future behavior of the machine is determined when the content of the tape, the position of the head, and the internal state are fixed, we may take the state space M to be the set of all triples <tape content, head position, internal state>. And, finally, the set of state transitions {gt} is determined by the set of quadruples of the machine. To see this point, first note that the set of quadruples tells us how the complete state of the machine changes after one time step. That is, the set of quadruples defines the state transition g1. We then obtain any other state transition gt (t > 1) by iterating g1 t times, and we simply take the state transition g0 to be the identity function on M. We may thus conclude that any Turing machine is in fact a mathematical dynamical system <T M {gt}> with discrete time, i.e., a cascade. A similar argument can be given for any other type of symbolic processor we may consider, so that we can also conclude that any symbolic processor is a mathematical dynamical system.
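The construction of g1 from a set of quadruples can be sketched in Python. The representation below is one hypothetical encoding among many: the tape is a dictionary of explicitly written squares with '0' as the default symbol everywhere else, and a state with no matching quadruple is a fixed point of g1 (the machine has halted).

```python
def tm_step(quadruples):
    """Return the transition g1 of the cascade induced by a machine's
    quadruples. A complete state is (tape, head, q): tape is a dict
    from square index to symbol ('0' on unwritten squares), head an
    integer, q the internal state."""
    table = {(q, a): (action, next_q) for (q, a, action, next_q) in quadruples}
    def g1(state):
        tape, head, q = state
        a = tape.get(head, '0')
        if (q, a) not in table:      # no instruction applies: halted
            return state
        action, next_q = table[(q, a)]
        tape = dict(tape)            # states are immutable; copy the tape
        if action == 'R':
            head += 1
        elif action == 'L':
            head -= 1
        else:                        # any other action is a symbol to write
            tape[head] = action
        return (tape, head, next_q)
    return g1

# A machine that writes '1' and moves right forever; g4 = four iterations of g1.
g1 = tm_step([('q0', '0', '1', 'q1'), ('q1', '1', 'R', 'q0')])
s = ({}, 0, 'q0')
for _ in range(4):
    s = g1(s)
assert s == ({0: '1', 1: '1'}, 2, 'q0')
```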
Having thus established that symbolic processors, neural networks, and continuous systems specified by differential (or difference) equations are three different types of mathematical dynamical systems, we can finally provide an argument for the thesis that all cognitive systems are dynamical systems.
Typical research in cognitive science attempts to produce an explanation of the cognitive properties that belong to a real system, and this explanation is usually obtained by studying a model which reproduces, as accurately as possible, some aspect of the change of the real system. This model can be of three types: (1) a symbolic processor, (2) a neural network, or (3) a continuous dynamical system specified by differential (or difference) equations. Any system of these three types is a mathematical dynamical system. Therefore, the explanation of the cognitive properties of a real system is typically obtained by studying a mathematical dynamical system instantiated by it. But, according to the interpretation proposed above, this precisely means that the real system whose cognitive properties are explained by typical research in cognitive science is a dynamical system.
The argument I have just given only shows that any real system which has been the object of typical research in cognitive science is a dynamical system. However, the conclusion of this argument also supports the unrestricted version of the thesis. For, provided that the cognitive systems considered so far are representative of all cognitive systems, we may also reasonably conclude that all cognitive systems are dynamical systems.

18.3 TWO CONCEPTUAL REPERTOIRES FOR THE EXPLANATION OF COGNITION: COMPUTABILITY THEORY AND DYNAMICAL SYSTEMS THEORY
Section 18.2 first proposed a methodological reading of the thesis that all cognitive systems are dynamical systems, and then gave an argument to support it. According to the proposed interpretation, this thesis means that the cognitive properties of an arbitrary cognitive system can be understood or explained by studying a mathematical dynamical system instantiated by it. If an explanation of the cognitive properties of a real system can be obtained by studying a mathematical dynamical system instantiated by it (i.e., a model of the real system), then it is important to pay attention to the type of theoretical framework we use when we carry out this study. For the type of explanation we construct in general depends on the type of theoretical framework we use in the study of the model. Let me make this point clearer by means of two examples.
According to the symbolic approach, cognition essentially is a matter of the computations a system performs in certain situations. But the very idea of a computation belongs to a specific theoretical framework, namely computability theory, which is thus presupposed by the explanatory style of this approach. In the last few years, however, both connectionists (e.g., Smolensky, 1988) and nonconnectionist dynamicists (e.g., Skarda and Freeman, 1987; Busemeyer and Townsend, 1993) have been developing a new style of explanation which represents a clear alternative to the computational one. Tim van Gelder (1991, 1992) has called the explanations of this type dynamical explanations. One of the key ideas on which this type of explanation is based is that to understand cognition we must first of all understand the state-space evolution of a certain system. The point I wish to stress here is that the concept of a state-space evolution (as well as many other concepts employed in dynamical explanations) belongs to dynamical systems theory, which is thus the theoretical framework presupposed by this new explanatory style.
Let me now draw a broad picture of the state of the current research in cognitive science. If we look at the models employed, i.e., at the mathematical dynamical systems actually used in the study of cognition, we can distinguish three different approaches: (1) the symbolic (or classic) approach, which employs symbolic processors; (2) the connectionist approach, which employs neural networks; and, finally, a third approach, let us call it (3) the dynamicists' approach, whose models are neither symbolic nor connectionist, but are nonetheless continuous systems specified by differential (or difference) equations. If, instead, we look at the explanatory styles, they can be sorted roughly into (at least) two different types of explanation: computational and dynamical. These two explanatory styles are characterized by the use of two different sets of concepts, which respectively come from computability theory and dynamical systems theory. More precisely, computational explanations are obtained by studying symbolic models by means of concepts drawn from computability theory, while dynamical explanations are obtained by studying neural networks or models of the third type by means of concepts drawn from dynamical systems theory.
But then, if this is the current situation, two questions arise: (1) why is it that dynamical explanations are exclusively based on neural networks or models of the third type? Or, to put it in a different way: why not use dynamical systems theory to study symbolic models too, so that, independently of the type of model employed, all explanations of cognition might end up sharing the same dynamical style? (2) Is it possible to obtain an analogous conclusion for computability theory instead? That is, why not study neural networks and models of the third type by means of concepts drawn from computability theory, thus extending the scope of the computational style of explanation?

DYNAMICAL SYSTEMS THEORY AND THE EXPLANATION OF COGNITION BASED ON SYMBOLIC MODELS
With regard to the first question, it is clear that symbolic models can be studied from a dynamical point of view. For these models are a special type of mathematical dynamical systems, and the most basic concepts of dynamical systems theory apply to any type of mathematical dynamical system. However, there is an important point to keep in mind. Only a limited part of the conceptual apparatus of dynamical systems theory applies to symbolic processors. For example, we can think of the state space of the processor, and of its time evolution as a motion along an orbit in this space. We may also classify different types of orbits: periodic, aperiodic, eventually periodic. Furthermore, since most symbolic processors have merging orbits, the notions of attractor and basin of attraction also make clear sense. But not much more. To mention just one example, the whole theory of chaos does not seem to apply, in its present form, to symbolic processors. The basic reason is that the usual definitions of chaos presuppose (at least) a topological or a metrical structure on the state space of the system. The state space of a symbolic processor, however, typically lacks a natural topology or metric.
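The orbit classification just mentioned can be made concrete for cascades whose states are hashable. A minimal Python sketch; the cutoff max_steps is an arbitrary bound for the example, since aperiodicity cannot be certified by finite simulation.

```python
def classify_orbit(g, x, max_steps=1000):
    """Classify the orbit of x under the map g of a cascade as
    'periodic' (x itself recurs), 'eventually periodic' (a later
    state recurs but x does not), or 'aperiodic so far' (no
    repetition within max_steps)."""
    seen = {x: 0}
    state = x
    for t in range(1, max_steps + 1):
        state = g(state)
        if state in seen:
            return 'periodic' if seen[state] == 0 else 'eventually periodic'
        seen[state] = t
    return 'aperiodic so far'

# Rotation on four states: the orbit of 0 returns to 0, so it is periodic.
assert classify_orbit(lambda n: (n + 1) % 4, 0) == 'periodic'
# Countdown to a fixed point: 3, 2, 1, 0, 0, ... is eventually periodic.
assert classify_orbit(lambda n: max(n - 1, 0), 3) == 'eventually periodic'
```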
Therefore, given that only the most basic part of dynamical systems theory applies to symbolic processors, the real question seems to be the following. If we study a symbolic model of a cognitive system by means of this restricted dynamical apparatus, is this sufficient to understand the cognitive level of the system? Or, instead, is a computational perspective the only way to understand this level?
At the moment, I don't have a definite answer to this question. However, I would like to suggest that, even when symbolic models are concerned, a dynamical viewpoint might turn out to be useful for a deeper understanding of the cognitive level. This conjecture is supported by the fact that some problems that are usually treated within the conceptual framework of computability theory can be better solved by applying dynamical concepts.
For example, it is well known that the halting problem for the class of all Turing machines is undecidable. More precisely, given an arbitrary Turing machine, there is no mechanical procedure to decide whether that machine will stop when started on an arbitrary input. However, it is obvious that the halting problem for certain specific machines is decidable. For example, the machine specified by {q000q0, q011q0} immediately stops on any input. The problem which thus arises is to find nontrivial classes of Turing machines for which the halting problem is decidable. The interesting result is that by using dynamical concepts it is possible to find one such class.
In the first place, we need to think of the halting condition of a Turing machine in dynamical terms. When a Turing machine stops, its tape content, head position, and internal state no longer change. Dynamically, this means that the Turing machine enters a cycle of period 1 in state space. More precisely, there are two possibilities. Either the Turing machine immediately enters the cycle, or it gets to it after one or more steps. In the second case, we say that the Turing machine has an eventually periodic orbit.
In the second place, we need the concept of a logically reversible system. Intuitively, a mathematical dynamical system <T M {gt}> is logically reversible if, given its state x at an arbitrary time t, we can tell the state of the system at any time w ≤ t. This is formally expressed by the requirement that any state transition gt be injective, i.e., for any two different states x and y, gt(x) is different from gt(y).
In the third place, we must rely on a theorem of dynamical systems theory: any system <T M {gt}> with eventually periodic orbits has at least one state transition gt which is not injective (for a proof see Giunti 1992). In other words, a system with eventually periodic orbits is not logically reversible.
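The connection between eventually periodic orbits and failure of injectivity can be illustrated on small finite cascades. A minimal Python sketch; the two toy maps are arbitrary choices for the example.

```python
def is_injective_on(g, states):
    """Check injectivity of the map g restricted to a finite set of states."""
    images = [g(x) for x in states]
    return len(set(images)) == len(images)

# The countdown map n -> max(n-1, 0) has the eventually periodic orbit
# 3, 2, 1, 0, 0, ... and, as the theorem predicts, it is not injective:
# both 1 and 0 are mapped to 0.
assert not is_injective_on(lambda n: max(n - 1, 0), range(4))

# The rotation n -> (n+1) % 4 has only periodic orbits, and it is injective.
assert is_injective_on(lambda n: (n + 1) % 4, range(4))
```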
Let us now consider the class of all logically reversible Turing machines. It is then easy to see that the halting problem for this class of machines is decidable. In fact, by the previous theorem, no such machine has eventually periodic orbits. But then, given any input, a logically reversible Turing machine either halts immediately or never halts. Therefore, to decide the halting problem for a logically reversible Turing machine, we may just check whether the machine halts on the first step.
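The resulting decision procedure is a one-step test: check whether the initial state is already a fixed point. A Python sketch, using an arbitrary reversible toy transition rather than a full Turing machine; halting is modeled dynamically as reaching a cycle of period 1.

```python
def halts_if_reversible(g1, state):
    """Decide halting for a logically reversible cascade: since such a
    system has no eventually periodic orbits, it halts on a given input
    iff it is already at a fixed point, i.e., iff one application of g1
    changes nothing."""
    return g1(state) == state

# A toy reversible transition: swap the two components of the state.
# It is injective (indeed an involution), hence logically reversible.
swap = lambda s: (s[1], s[0])
assert halts_if_reversible(swap, ('a', 'a'))      # fixed point: halts at once
assert not halts_if_reversible(swap, ('a', 'b'))  # periodic orbit: never halts
```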
The interest of this result is twofold. In the first place, this result gives us a better understanding of the halting problem: we now know that the undecidability of the halting problem is limited to logically irreversible Turing machines. In other words, we have discovered an intriguing connection between one of the classic negative results of computability theory and the dynamical concept of logical irreversibility. In the second place, this result is also interesting because it shows that dynamical systems theory can improve the solution of problems which are usually treated by means of the conceptual apparatus of computability theory. Since the explanation of cognition based on symbolic models is one of these problems, this result suggests that a dynamical viewpoint might turn out to be useful in this case too.

COMPUTABILITY THEORY AND THE EXPLANATION OF COGNITION BASED ON NEURAL NETWORKS OR OTHER CONTINUOUS DYNAMICAL MODELS
Thus far, I have argued that a dynamical approach to the study of symbolic models of cognitive systems is possible, and that it might be useful to better understand the cognitive level of these systems. If this conjecture turned out to be true, then all explanations of cognition might end up sharing the same dynamical style, independent of the type of model employed.
I now discuss the analogous question which concerns the computational style of explanation: Is it possible to study neural networks and other continuous dynamical models by means of the conceptual apparatus of computability theory, so that computational explanations of real cognitive systems might no longer be exclusively based on symbolic models?
Computability theory studies a family of abstract mechanisms which are typically used to compute or recognize functions, sets, or numbers. These devices can be divided into two broad categories: automata or machines (e.g., Turing machines, register machines, cellular automata, etc.) and systems of rules for symbol manipulation (e.g., monogenic production systems, monogenic Post canonical systems, tag systems, etc.). I will call any device studied by computability theory a computational system. The problem we are concerned with, then, reduces to the following question: Are neural networks and continuous dynamical systems specified by differential (or difference) equations computational systems? If they are, we might be able to extend the computational style of explanation to connectionist models and models of the third type. If they are not, however, this extension is impossible, for these two types of models fall beyond the scope of computability theory.
The strategy I am going to use in order to answer this question consists of two steps. In the first place, I will give an explication of the concept of a computational system. That is, I will give a formal definition of this concept in such a way that the defined concept (the explicans) arguably has the same extension as the intuitive concept (the explicandum). Since I have intuitively described a computational system as any system studied by computability theory, this means that I am going to propose (1) a formal definition of a computational system, and (2) an argument in favor of the following claim: all, and only, the systems studied by computability theory are computational systems in the formal sense.
In the second place, I will deduce from the formal definition two sufficient conditions for a system not to be computational, and I will then argue that all systems specified by differential (or difference) equations and an important class of neural networks satisfy at least one of these conditions. I will thus conclude that, whenever models of the third type or connectionist models that belong to this class are employed, a computational explanation of cognition based on these models is impossible.

A FORMAL DEFINITION OF A COMPUTATIONAL SYSTEM
In order to formulate a formal definition of a computational system, let us first of all consider the mechanisms studied by computability theory and ask (1) what type of system they are, and (2) what specific feature distinguishes these mechanisms from other systems of the same type.
As mentioned, computability theory studies many different kinds of abstract systems. A basic property that is shared by all these mechanisms is that they are mathematical dynamical systems with discrete time, i.e., cascades. I have already shown that this is true of Turing machines, and it is not difficult to give a similar argument for any other type of mechanism which has actually been studied by computability theory. Therefore, on the basis of this evidence, we may reasonably conclude that all computational systems are cascades.
However, computability theory does not study all cascades. The specific feature that distinguishes computational systems from other mathematical dynamical systems with discrete time is that a computational system can always be described in an effective way. Intuitively, this means that the constitution and operations of the system are purely mechanical or that the system can always be identified with an idealized machine. However, since we want to arrive at a formal definition of a computational system, we cannot limit ourselves to this intuitive characterization. Rather, we must try to put it in a precise form.
Since I have informally characterized a computational system as a cascade that can be described effectively, let us first ask what a description of a cascade is. If we take a structuralist viewpoint, this question has a precise answer. A description (or a representation) of a cascade consists of a second cascade isomorphic to it, where, by definition, a cascade S = <T M {gt}> is isomorphic to a second cascade S1 = <T1 M1 {ht}> just in case T = T1 and there is a bijection f: M1 → M such that, for any t member of T and any x member of M1, gt(f(x)) = f(ht(x)).
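For small finite cascades, the isomorphism condition gt(f(x)) = f(ht(x)) can be checked mechanically. The following sketch is purely illustrative (the state sets, maps, and bijection are invented, not drawn from the chapter): two four-state cascades, one on numbers and one on letters, related by a bijection f.

```python
# Two cascades on four states: S has states {0, 1, 2, 3} with
# 1-advance "add 1 mod 4"; S1 has states {"a", "b", "c", "d"} with
# the corresponding cyclic step. f is the bijection f: M1 -> M.
g1 = {0: 1, 1: 2, 2: 3, 3: 0}                  # 1-advance of S
h1 = {"a": "b", "b": "c", "c": "d", "d": "a"}  # 1-advance of S1
f = {"a": 0, "b": 1, "c": 2, "d": 3}           # bijection f: M1 -> M

def iterate(step, t, x):
    """The t-advance obtained by composing the 1-advance t times."""
    for _ in range(t):
        x = step[x]
    return x

# The isomorphism condition: g_t(f(x)) = f(h_t(x)) for every state x
# and every (here: several) discrete time t.
for x in h1:
    for t in range(8):
        assert iterate(g1, t, f[x]) == f[iterate(h1, t, x)]
```

Since both cascades advance cyclically and f respects the cycle, the condition holds at every state and time, which is what makes S1 a description of S in the structuralist sense.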
In the second place, let us ask what an effective description of a cascade is. Since I have identified a description of a cascade S = <T M {gt}> with a second cascade S1 = <T1 M1 {ht}> isomorphic to S, an effective description of S will be an effective cascade S1 isomorphic to S. The problem thus reduces to an analysis of the concept of an effective cascade. Now, it is natural to analyze this concept in terms of two conditions: (a) there is an effective procedure for recognizing the states of the system or, in other words, the state space M1 is a decidable set; (b) each state transition function ht is effective or computable. These two conditions can be made precise in several ways which turn out to be equivalent. The one I prefer is by means of the concept of Turing computability. If we choose this approach, we will then require that an effective cascade satisfy: (a') the state space M1 is a subset of the set P(A) of all finite strings built out of some finite alphabet A, and there is a Turing machine which decides whether an arbitrary finite string is a member of M1; (b') for any state transition function ht, there is a Turing machine which computes ht.
Finally, we are in a position to define a computational system formally. This definition expresses in a precise way the informal characterization of a computational system as a cascade that can be effectively described.

Definition:
S is a computational system iff
S = <T M {gt}> is a cascade, and there is a second cascade S1 = <T1 M1 {ht}> such that:
(1) S is isomorphic to S1;
(2) if P(A) is the set of all finite strings built out of some finite alphabet A, M1 is included in P(A) and there is a Turing machine which decides whether an arbitrary finite string is a member of M1;
(3) for any t member of T1, there is a Turing machine which computes ht.

This definition is formally correct. However, the question remains whether it is materially adequate too. This question will have a positive answer if we can argue that the systems specified by the definition are exactly the systems studied by computability theory. In the first place, we can give an argument a priori. If a cascade satisfies this definition, then computability theory certainly applies to it, for it is always possible to find an effective description of that cascade. Conversely, if a cascade does not satisfy this definition, then there is no effective description of that cascade, so that computability theory cannot apply to it. In the second place, we can also give an argument a posteriori. In fact, it is tedious but not difficult to show that all systems which have actually been studied by computability theory (Turing machines, register machines, monogenic production systems, cellular automata, etc.) satisfy the definition (see Giunti, 1992).
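To make the two clauses of the definition concrete, here is a minimal sketch of a cascade that satisfies them. It is my illustration, not the chapter's, and it assumes that ordinary Python functions may stand in for Turing machines, which is legitimate by the Church-Turing thesis: the states are finite binary strings over the alphabet A = {0, 1}, and the 1-advance is binary increment.

```python
# A cascade satisfying the definition: M1 is the set of finite binary
# strings (a decidable subset of P(A) for A = {0, 1}), and every
# t-advance h_t is computable. Python functions stand in for Turing
# machines here (Church-Turing thesis).

def in_state_space(s):
    """Clause (2): decide membership of a finite string in M1."""
    return isinstance(s, str) and len(s) > 0 and set(s) <= {"0", "1"}

def h1(s):
    """The computable 1-advance: binary increment of the string s."""
    return format(int(s, 2) + 1, "b")

def h(t, s):
    """Clause (3): the t-advance h_t. With discrete time
    (T1 = non-negative integers), h_t is h1 composed t times,
    so each h_t is computable once h1 is."""
    for _ in range(t):
        s = h1(s)
    return s

# The cascade laws: h_0 is the identity, and h_{t+w} = h_w after h_t.
assert h(0, "101") == "101"
assert h(5, h(3, "101")) == h(8, "101")
assert in_state_space("0110") and not in_state_space("012")
```

The decision procedure for M1 and the computable transitions are exactly the witnesses clauses (2) and (3) of the definition demand; the isomorphism of clause (1) is trivial here, since the cascade is already given in string form.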

TWO SUFFICIENT CONDITIONS FOR A SYSTEM NOT TO BE COMPUTATIONAL
The definition allows us to deduce two sufficient conditions for a mathematical dynamical system not to be computational. Namely, a mathematical dynamical system S = <T M {gt}> is not computational if it is continuous in either time or state space or, more precisely, if either (1) its time set T is the set of the (non-negative) real numbers, or (2) its state space M is not denumerable.³
An immediate consequence of condition (2) is that any finite neural network whose units have continuous activation levels is not a computational system. A complete state of any such network can always be identified with a finite sequence of real numbers and, since each unit has a continuous range of possible activation levels, the set of all possible complete states of this network is not denumerable. Therefore, by condition (2), any finite network with continuous activation levels is not a computational system. (A computational system can, of course, be used to approximate the transitions of a network of this type. Nevertheless, if the real numbers involved are not computable, we cannot conclude that this approximation can be carried out to an arbitrary degree of precision. This is exactly the same situation that we have when we use computers to approximate the behavior of a physical system. Physical systems are continuous [in both time and state space] so that they can transform infinite amounts of information and, in general, they cannot be described in an effective manner. Computational systems, on the other hand, are limited to a finite amount of information, and they can always be effectively described.) We can reach the same conclusion if we consider a continuous system specified by differential equations. Since these systems are continuous (in time or state space), none of them is computational.⁴
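The point about approximation can be made vivid with a system of the kind covered by note 4: the difference equation x(t+1) = 4x(t)(1 - x(t)) on the real interval [0, 1]. The map and the numbers below are my illustration, not the chapter's. Its state space is nondenumerable, so by condition (2) it is not a computational system; any finite-precision simulation of it is only an approximation, and two simulations at different precisions soon disagree badly.

```python
# Two finite approximations of the same continuous cascade: one in
# 64-bit binary floating point, one in 60-digit decimal arithmetic.
from decimal import Decimal, getcontext

def logistic_float(x, steps):
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

def logistic_decimal(x, steps, digits=60):
    getcontext().prec = digits
    x = Decimal(x)
    for _ in range(steps):
        x = 4 * x * (1 - x)
    return x

# Both trajectories start from "0.1", but a binary float cannot
# represent 0.1 exactly; the map amplifies that tiny initial error.
drift = max(
    abs(float(logistic_decimal("0.1", n)) - logistic_float(0.1, n))
    for n in range(50, 80)
)
assert drift > 0.1  # the two finite descriptions disagree badly
```

Neither computation is the continuous system itself; each is a distinct computational system that tracks it only up to its own finite precision, which is the asymmetry the parenthetical remark above is pointing at.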
Now, we can finally go back to the question posed earlier. Is it possible to produce computational explanations of cognition on the basis of connectionist models or other continuous dynamical models based on differential or difference equations? For this to be possible, computability theory must apply to these two types of models. However, we have just seen that all neural networks with continuous activation levels and all continuous systems specified by differential (or difference) equations are not computational systems. Therefore, computability theory does not apply to them. We must then conclude that whenever connectionist models with continuous activation levels or other continuous dynamical models specified by differential or difference equations are employed, a computational explanation of cognition based on these models is impossible.
A point of clarification is essential here. Let us approach it by imagining someone objecting to this claim in the following way. A standard digital computer is a physical machine and its operation is based on the storage and interaction of electrical currents. At a relevant microlevel, these electrical activities are continuous and their behaviors are described by differential equations. Yet this is a paradigmatic example of a computational system which is effectively studied using the tools and concepts of computability theory. Therefore it is not impossible to produce computational explanations of continuous systems based on differential equations. This objection is confused because it fails to keep clear the distinction between real dynamical systems and the mathematical dynamical systems that can be used as models of them. Clearly we have two kinds of mathematical models of a digital computer: one which is a symbolic processor model, and another specified by differential equations. Digital computers are quite special in that they appear to instantiate both mathematical models equally well. The claim for which I have argued here is that it is impossible to base a computational explanation on a continuous dynamical model (though it is possible to base dynamical explanations on symbolic models). That is, it is a claim about the relation between conceptual and explanatory frameworks and mathematical models, not between conceptual and explanatory frameworks and real systems. As a matter of empirical fact, it is true that there are many kinds of real dynamical systems for which there are no really good computational explanations based on symbolic models, and it may turn out that cognitive systems belong in this class. However, this can be established only by detailed empirical investigation, not by abstract argument.

18.4 COGNITIVE SYSTEMS AND THEIR MODELS
Thus far I have identified a model of a real system with a mathematical dynamical system instantiated by it, where, according to the discussion above, the instantiation relation holds just in case a mathematical dynamical system correctly describes some aspect of the change of a real system. However, since this clause can in fact be interpreted in different ways, there are different types of instantiation relation. Therefore, we can distinguish different types of models of a real system by looking at the specific type of instantiation relation which holds between the model and the real system. More precisely, the type of instantiation relation depends on three elements: (1) what aspect of the change of a real system the mathematical dynamical system intends to describe; (2) what counts as a description of this aspect; and (3) in what sense this description is correct.

SIMULATION MODELS OF COGNITIVE SYSTEMS
The three types of models currently employed in cognitive science (symbolic processors, neural networks, and other continuous systems specified by differential or difference equations) are standardly characterized by a special type of instantiation relation, which is based on the fact that these models allow us to simulate certain aspects of the behavior of cognitive systems. For this reason, I call a model with this type of instantiation relation a simulation model of a cognitive system. The three elements of the instantiation relation proper to this type of model are the following.
First, the aspect of the change of a cognitive system which a simulation model intends to describe is a cognitive process involved in the completion of a given task. For example, if the cognitive system is a subject who has been asked to solve a simple logic problem, a simulation model will attempt to describe the subject's problem-solving process (see Newell and Simon, 1972). If, instead, the cognitive system is a young child who is learning the past tense of English verbs, a simulation model will attempt to describe the child's past tense acquisition process (see Rumelhart and McClelland, 1986a).
Second, a simulation model allows us to produce a simulation of the cognitive process it intends to describe, and it is this simulating process which counts as a description of the real cognitive process. In general, a simulation of a cognitive process is obtained by first implementing the model (usually by means of a computer program), and by then assigning this implemented version of the model a task similar to the one assigned to the cognitive system. In dealing with this task, the implemented model goes through a certain process: this is in fact the simulating process which counts as a description of the real cognitive process.
Third, the description of a cognitive process provided by a simulation model is correct in the sense that the simulating process is similar to the cognitive process in some relevant respect. Which respects are to be considered relevant is usually clear in each specific case.
A classic example of a simulation model is Rumelhart and McClelland's (1986a) Past Tense Acquisition model. This neural network is intended to describe the process of past tense acquisition (PTA) in a young child learning English verbs from everyday conversation.
Rumelhart and McClelland implemented the model by means of a certain computer program, and they then assigned this implemented version of the model a task which they claim to be similar to the child's task. "Our conception of the nature of this experience is simply that the child learns first about the present and past tenses of the highest frequency verbs; later on, learning occurs for a much larger ensemble of verbs, including a much larger proportion of regular forms" (pp. 240-241). Rumelhart and McClelland divided PTA's task into two parts: first, learning just 10 high frequency verbs, most of which were irregular; and second, learning a greatly expanded repertoire of verbs, most of which were regular. In dealing with this task, PTA went through a certain acquisition process. This is in fact the simulating process which counts as a description of the child's PTA process. If the authors are right, the description of this process provided by PTA is correct, in the sense that the simulating process is similar to the real acquisition process in many relevant respects.

GALILEAN DYNAMICAL MODELS OF COGNITIVE SYSTEMS
It is now interesting to ask whether, besides the instantiation relation proper to simulation models, there are other ways in which a cognitive system can instantiate a mathematical dynamical system. To answer this question, however, it is useful to first consider some aspects of the current practice of dynamical modeling. I have in mind here a traditional way of using mathematical dynamical systems to describe the change of real systems. Simple examples of these traditional applications can be found in many elementary books on differential or difference equations, and they cover such different fields as mechanics, electrodynamics, chemistry, population dynamics, engineering, etc.
For the moment, I wish to focus on just one basic aspect of traditional dynamical modeling, namely, the use of magnitudes in order to describe the change of real systems. A magnitude is a property of a real system (or of one of its parts) which, at different times, may assume different values. For example, the position, velocity, acceleration, momentum, and mass of a body are five different magnitudes. Each magnitude is always associated with two mathematical objects. First, the set of values which the magnitude can take at different times and, second, its time evolution function, that is, a function which tells us the value of the magnitude at an arbitrary instant. Time is a special magnitude, for it is associated with a set of values, but not with a time evolution function.
The set of values of a magnitude is usually the set of the real numbers; however, one may also think of magnitudes whose set of values is the domain of some other mathematical structure (e.g., some magnitudes can take only discrete values, i.e., their set of values is a subset of the integers).
In general, the time evolution function of a magnitude is a parametric function of time, where the parameters are the initial values of the magnitude itself and of other magnitudes. For example, we can take the time evolution function of the position of a falling body to be specified by the Galilean equation Y[y v](t) = y + vt + (1/2)ct², where t is an arbitrary instant, c is the constant acceleration, and y and v are, respectively, the values at time 0 of the position and velocity of the falling body. Since certain magnitudes may be functions of other magnitudes, the time evolution function of a magnitude can often be expressed in a different way. Thus, since velocity is the ratio of momentum and mass, that is, v = p/m = V(p), the time evolution function of the position of a falling body can also be expressed by using y and p, i.e.,
Y[y v](t) = Y[y V(p)](t) = y + (p/m)t + (1/2)ct² = Y[y p](t).
To eliminate clutter, I will indicate the time evolution function of magnitude Mi with the symbol Mi(t). The context will then make clear which parameters (besides the initial value xi of Mi) are used to express this function.
Let me now recall a basic result which links the theory of magnitudes to dynamical systems theory, and is in fact one of the foundations of traditional dynamical modeling. Let us consider n (n > 0) magnitudes M1 ... Mn whose time evolution functions can all be expressed by using their initial values x1 ... xn as parameters. That is, the time evolution function Mi(t) of magnitude Mi (1 ≤ i ≤ n) can be expressed as the parametric function Mi[x1...xn](t). Let us then consider the system P = <T M1 × ... × Mn {gt}> where T is the set of values of the magnitude time, the i-th component of the Cartesian product M1 × ... × Mn is the set of values of magnitude Mi and, for any t member of T,
gt(x1...xn) = <M1[x1...xn](t) ... Mn[x1...xn](t)>.
The system P is called the system generated by the magnitudes M1 ... Mn. Then, the system P is a mathematical dynamical system just in case the time evolution functions satisfy
(1) Mi[x1...xn](0) = xi, and
(2) Mi[x1...xn](t+w) = Mi[M1[x1...xn](t) ... Mn[x1...xn](t)](w).
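As a concrete check of conditions (1) and (2), consider the falling body again: the pair of magnitudes position and velocity, with their Galilean time evolution functions, generates a mathematical dynamical system. The sketch below is my illustration (the numerical values are arbitrary):

```python
# The two magnitudes position and velocity of a falling body, with
# constant acceleration c, generate a mathematical dynamical system.
# Their time evolution functions are
#   M1[y, v](t) = y + v t + (1/2) c t^2   (position)
#   M2[y, v](t) = v + c t                 (velocity)
c = 9.8  # constant acceleration (illustrative value, m/s^2)

def g(t, state):
    """The t-advance generated by the two magnitudes."""
    y, v = state
    return (y + v * t + 0.5 * c * t * t, v + c * t)

def close(a, b, eps=1e-9):
    """Componentwise comparison, tolerant of float rounding."""
    return all(abs(p - q) < eps for p, q in zip(a, b))

x0 = (10.0, -2.0)  # initial position and velocity
# Condition (1): at t = 0 each magnitude returns its initial value.
assert g(0, x0) == x0
# Condition (2): advancing by t and then by w equals advancing by t + w.
assert close(g(0.7, g(1.3, x0)), g(2.0, x0))
```

Note that position alone would not do: its evolution needs the initial velocity as a parameter, which is why the generating set of magnitudes must include velocity for conditions (1) and (2) to hold.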
Let us now consider all the magnitudes proper of a real dynamical system. Among all the sets of n (n > 0) magnitudes of this system, there will be some whose time evolution functions satisfy conditions (1) and (2) above. Each of these sets of magnitudes thus generates a mathematical dynamical system P. I call any mathematical dynamical system P generated by a finite number of magnitudes of a real system a Galilean dynamical model of the real system.
It is now quite clear in which specific sense a real system instantiates its Galilean dynamical models. First, the aspect of the change of the real system which a Galilean dynamical model intends to describe is the simultaneous variation in time of those magnitudes M1 ... Mn of the real system that generate the model. Second, the description of this variation is provided by the time evolution functions M1(t) ... Mn(t) of these magnitudes. Third, this description is correct in the (obvious) sense that the value of magnitude Mi at an arbitrary instant t is the value Mi(t) of its time evolution function.
The traditional practice of dynamical modeling is in fact concerned with specifying a mathematical dynamical system, and then justifying the claim that this system is a Galilean dynamical model of a real system. Which dynamical models one tries to specify depends on the type of properties of the real system that the study of these models is intended to explain. For example, if we are interested in understanding the mechanical properties of a real system, we will try to specify those dynamical models of the real system which are generated by such mechanical magnitudes of the system as position, velocity, mass, etc. If we are instead interested in the explanation of the cognitive properties of a real system, we will try to specify those dynamical models which are generated by the cognitive magnitudes of the system.
Let us now consider a cognitive system. Since any cognitive system is a real dynamical system, it will have a certain class of Galilean dynamical models. These are the Galilean dynamical models of the cognitive system. It is then interesting to ask two questions: (1) Are the models employed so far in cognitive science Galilean dynamical models of cognitive systems? (2) If they are not, what would we gain if we instead based the explanation of cognition on Galilean models?
As regards the first question, it is clear that most models employed so far in cognitive science are not Galilean dynamical models of cognitive systems. A Galilean dynamical model is a mathematical dynamical system generated by a finite number of magnitudes of the cognitive system. Therefore, a Galilean model of a cognitive system has a very specific type of interpretation, for each component of the model corresponds to a magnitude of the cognitive system. The models currently employed in cognitive science, however, lack this type of interpretation, for their components do not correspond directly to magnitudes of cognitive systems themselves. The correspondence is at best indirect, via the simulation.
Since most models currently employed in cognitive science are not Galilean dynamical models of cognitive systems, it is important to understand what we would gain if we changed this practice, and we instead based the explanation of cognition on Galilean models. The first gain would be a dramatic increase in the strength of the instantiation relation which links our models to the cognitive systems they describe.
We have seen that most models currently employed in cognitive science are simulation models, and that the instantiation relation proper to these models ensures, at most, a certain similarity between the aspect of change the model intends to describe (a certain cognitive process) and what counts as a description of this aspect (a simulated process). The instantiation relation between a cognitive system and a Galilean dynamical model, instead, is much stronger, for this relation in fact ensures an identity between the aspect of change the model intends to describe (the simultaneous variation in time of the magnitudes which generate the model) and what counts as a description of this aspect (the time evolution functions of these magnitudes).
Besides this first gain, the use of Galilean dynamical models of cognitive systems may allow us to improve our explanations of cognition. To see this point, we must briefly reconsider how an explanation of cognition is usually obtained. First, we specify a model which allows us to simulate a cognitive process of a real system. Second, we study this simulation model in order to explain certain cognitive properties of the real system. Now, the main problem with this type of explanation is that it is based on a model which is instantiated by the real system in a weak sense. In particular, we have seen that the instantiation relation of a simulation model only ensures a similarity between a cognitive process and a simulating process. But then, an explanation based on such a model is bound to neglect those elements of the cognitive process which do not have a counterpart in the simulating process. The instantiation relation of a Galilean dynamical model, instead, ensures an identity between the real process the model intends to describe (the simultaneous variation in time of those magnitudes of the cognitive system which generate the model) and what counts as a description of this process (the time evolution functions of these magnitudes). Therefore, if an explanation of cognition were based on a Galilean dynamical model, all the elements of the real process could be considered.

A NEW POSSIBILITY FOR COGNITIVE SCIENCE: GALILEAN DYNAMICAL MODELS OF COGNITION
The use of Galilean dynamical models of cognitive systems stands to yield at least two important benefits. We must now consider how we could in fact proceed to accomplish this goal.
Clearly, this question is not one that can be answered in detail independently of actual research which aims at this goal. The current practice of traditional dynamical modeling in other disciplines can, however, give us some useful indications. I mentioned above that traditional dynamical modeling aims at specifying certain Galilean dynamical models of a real system, and that the type of dynamical model of a real system one attempts to specify depends on the type of properties of the system which the study of these models is intended to explain. Since we are interested in explaining the cognitive properties of a cognitive system, we should then exclusively consider those Galilean dynamical models of a cognitive system whose study allows us to understand or explain these properties. I call any Galilean dynamical model of this special type a Galilean dynamical model of cognition.
The problem we face is thus the following. Suppose that we have specified a mathematical dynamical system MDS = <T M {gt}>. Under what conditions can we justifiably affirm that this system is a Galilean dynamical model of cognition? By the definition I have just given, the system MDS is a Galilean dynamical model of cognition just in case it satisfies two conditions: (1) The study of this system allows us to devise an explanation of at least some of the cognitive properties of a real system RDS, and (2) MDS is a Galilean dynamical model of this real system. Clearly, the justification of the first condition does not present any special difficulty. What we must do is in fact produce an explanation of some cognitive property of RDS that is based on the study of MDS. As we saw in section 18.3, we may study MDS by employing different theoretical frameworks, and the type of explanation we produce depends on the theoretical framework we use. If MDS is a computational system, we may decide to study it by means of concepts drawn from computability theory. The resulting explanation will thus be a computational one. Otherwise, we may always employ dynamical system theory, and the resulting explanation will thus be a dynamical one.
The problem we face thus reduces to the justification of the claim that MDS is a Galilean dynamical model of RDS. Fortunately, the practice of dynamical modeling allows us to outline a quite standard procedure to deal with this problem. Since the mathematical dynamical system MDS is a Galilean dynamical model of the real system RDS just in case MDS is generated by a finite number of magnitudes of the real system, we must first of all be able to divide the state space M into a finite number of components, and then associate to each component a magnitude of the real system. This first step of the justification procedure gives a conceptual interpretation to the mathematical dynamical system MDS, for each component of its state space M is now interpreted as the set of values of a magnitude of the real system RDS, and a magnitude is in fact a property of the real system which may assume different values at different times.
The conceptual interpretation provided by the first step, however, is not sufficient. To justify the claim that MDS is a Galilean dynamical model of RDS we must also provide MDS with an empirical interpretation. The next two steps of the justification procedure take care of this problem. The second step consists in dividing the magnitudes specified in the first step into two groups: (1) those magnitudes which we intend to measure (observable) and (2) those which we do not plan to measure (nonobservable or theoretical). This division of the magnitudes, however, must satisfy two conditions. First, the group of the observable magnitudes must have at least one element and, second, if there are theoretical magnitudes, they must be empirically relevant. More precisely, for any theoretical magnitude, there must be some observable magnitude whose time evolution function depends on it.²⁰ In fact, if this condition is violated, we can always obtain an empirically equivalent system by simply eliminating all those theoretical components which do not make any difference to the possible evolutions of any observable magnitude.
In the third step, we then complete the empirical interpretation of the mathematical dynamical system MDS by specifying methods or experimental techniques that allow us to measure or detect the values of all the magnitudes of the real system RDS which we have classified as observable in the previous step.
After we have provided MDS with both a conceptual and an empirical interpretation, we can finally establish under what conditions the claim is justified that MDS is a Galilean dynamical model of RDS. This claim is justified just in case MDS turns out to be an empirically adequate model of RDS, that is, if all the measurements of the observable magnitudes of RDS turn out to be consistent with the values deduced from the model.
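Schematically, the empirical adequacy test of this final step might look as follows. Everything in the sketch — the evolution function, the measurement data, the tolerance — is invented for illustration; it only shows the shape of the check, not any real model.

```python
# Empirical adequacy: every measurement of an observable magnitude
# must agree with the value deduced from the model, within
# measurement error. All names and numbers below are hypothetical.

def empirically_adequate(model, measurements, tolerance):
    """measurements: list of (time, observed_value) pairs."""
    return all(abs(model(t) - obs) <= tolerance for t, obs in measurements)

# Hypothetical time evolution function of one observable magnitude,
# and hypothetical measurements of that magnitude.
model = lambda t: 10.0 + 3.0 * t
data = [(0.0, 10.1), (1.0, 12.9), (2.0, 16.2)]

assert empirically_adequate(model, data, tolerance=0.3)
assert not empirically_adequate(model, data, tolerance=0.05)
```

The same deduced-versus-measured comparison would be run for each observable magnitude of RDS; the theoretical magnitudes enter only indirectly, through their influence on the observable evolutions.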

THE GALILEAN DYNAMICAL APPROACH TO COGNITIVE SCIENCE
I started this chapter by making explicit a basic methodological assumption which underlies all current research in cognitive science, namely, that all cognitive systems are dynamical systems. According to this thesis, the cognitive properties of a cognitive system can be understood or explained by studying a mathematical dynamical system instantiated by it, that is, by studying a model of the cognitive system. I then contrasted the computational and the dynamical style of explanation, and argued that the dynamical style does not depend on the type of model employed, while the computational style can only be based on computational models.
Finally, I explored the possibility of basing the explanation of cognition on a type of model which has not been considered yet. This type of model is the class of all the Galilean dynamical models of cognitive systems. The methodological assumption underlying this proposal is that, among all Galilean dynamical models of cognitive systems there are some, the Galilean dynamical models of cognition, whose study allows us to understand or explain the cognitive properties of these systems directly. This assumption can thus be interpreted as the basic methodological thesis of a possible research program in cognitive science. Whether we will in fact be able to produce explanations of cognition based on Galilean dynamical models is a question which can only be answered by actually starting concrete research which explicitly aims at this goal. In this chapter, I have tried to state this goal as clearly as I can, and to show why we should care to pursue it. I see no reason why, in principle, this kind of dynamical approach should not turn out to be successful. This, however, does not mean that we will not encounter some serious difficulty along the way. In fact, we can already anticipate some of the problems which we will have to solve.
First of all, we will have to radically change our way of looking at cognition. So far, in order to explain cognition, we have been focusing on the cognitive processes involved in the completion of some task, and we have then tried to produce models which simulate these processes. If, instead, the explanation of cognition is to be based on Galilean dynamical models, we should not primarily focus on the processes involved in cognition but, rather, on how the values of those magnitudes of a cognitive system that are relevant to cognition vary in time. I call a magnitude of this special type a cognitive magnitude.
Now, the two main problems we face are that (1) we will have to discover what the cognitive magnitudes are, and (2) we will then have to invent appropriate experimental techniques to measure the values of at least some of these magnitudes. If we are able to solve these two basic problems, then the way to the actual production of explanations of cognition based on Galilean dynamical models of cognitive systems will be open.

NOTES
1. Each function in {gt} is called a state transition (or a t-advance) of the system. If T includes all the reals, rationals, or integers, then each positive state transition gt has the inverse state transition g-t and, for this reason, the dynamical system is said to be reversible. If instead T is limited to the non-negative reals, rationals, or integers, there are no negative state transitions, and the dynamical system is called irreversible.

2. The first symbol of the alphabet is usually a special symbol, the blank. Only a finite number of squares may contain nonblank symbols. All other squares must contain the blank.
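One natural way to capture this convention in code (an assumed representation, not one given in the chapter) is to store only the finitely many nonblank squares, and treat every unlisted square as containing the blank:

```python
BLANK = "_"  # the special first symbol of the alphabet

# Only finitely many squares hold nonblank symbols, so a mapping from
# square positions to symbols suffices to represent the whole tape.
tape = {0: "1", 1: "0", 2: "1"}

def read(tape, pos):
    """Read the symbol at a square; any absent square contains the blank."""
    return tape.get(pos, BLANK)

assert read(tape, 1) == "0"
assert read(tape, 100) == BLANK  # all other squares must contain the blank
```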

3. A set is denumerable just in case it can be put in a 1:1 correspondence with (a subset of) the non-negative integers. If condition (1) is satisfied, then S is not a cascade, so that, by definition, S is not a computational system. If condition (2) holds, then by condition (1) of the definition, M1 is not denumerable. But then, M1 cannot be a subset of the set P(A) of all finite strings built out of some finite alphabet A, for any such subset is denumerable. Therefore, condition (2) of the definition is not satisfied, and S is not a computational system.

4. This conclusion can also be extended to all systems specified by difference equations of the form f(t+1) = g(f(t)), where f is a function from the (non-negative) integers to an interval I of the reals and g is a function from I to I. Since the state space of these systems is a real interval I, these systems have a nondenumerable state space. Therefore, by condition (2), they are not computational systems.
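A standard concrete instance of such a difference equation (the logistic map, chosen here as an illustration; the note itself names no example) takes the interval I = [0, 1] into itself. Its states range over a real interval and hence over a nondenumerable set, which is why, by condition (2), systems of this kind fall outside the computational class even though we can numerically simulate particular orbits:

```python
def g(x, r=4.0):
    """The logistic map g(x) = r*x*(1-x); for r = 4 it maps [0,1] into [0,1]."""
    return r * x * (1.0 - x)

def orbit(x0, steps):
    """Iterate f(t+1) = g(f(t)) from the initial state f(0) = x0."""
    states = [x0]
    for _ in range(steps):
        states.append(g(states[-1]))
    return states

xs = orbit(0.2, 5)
# Every state stays in the real interval I = [0, 1], the system's
# nondenumerable state space.
assert all(0.0 <= x <= 1.0 for x in xs)
```

Note that the simulation only ever visits finitely many floating-point approximations of states; the system itself, defined over the full interval I, is not thereby a computational system in the sense of the definition.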

REFERENCES
Busemeyer, J.R., and Townsend, J.T. (1993). Decision field theory: a dynamical-cognitive approach to decision making in an uncertain environment. Psychological Review, 100, 432-459.

Giunti, M. (1992). Computers, dynamical systems, phenomena, and the mind. Doctoral dissertation. Department of History and Philosophy of Science, Indiana University, Bloomington.

Johnson-Laird, P.N. (1988). The computer and the mind. Cambridge, MA: Harvard University Press.

Newell, A. (1980). Physical symbol systems. Cognitive Science, 4, 135-183.

Newell, A., and Simon, H.A. (1972). Human problem solving. Englewood Cliffs NJ: Prentice Hall.

Pylyshyn, Z.W. (1984) Computation and cognition. Cambridge, MA: MIT Press.

Rumelhart, D.E., and McClelland, J.L. (1986a). On learning the past tenses of English verbs. In Parallel distributed processing, vol. 2 (pp. 216-271). Cambridge, MA: MIT Press.

Rumelhart, D.E., and McClelland, J.L., (Eds.) (1986b). Parallel distributed processing, 2 vols. Cambridge MA: MIT Press.

Skarda, C.A., and Freeman, W.J. (1987). How brains make chaos in order to make sense of the world. Behavioral and Brain Sciences, 10, 161-195.

Smolensky, P. (1988). On the proper treatment of connectionism. Behavioral and Brain Sciences, 11, 1-74.

Van Gelder, T. (1991). Connectionism and dynamical explanation. In Proceedings of the 13th annual conference of the cognitive science society (pp. 499-503). Hillsdale, NJ: L. Erlbaum.

Van Gelder, T. (1992). The proper treatment of cognition. In Proceedings of the 14th annual conference of the cognitive science society. Hillsdale, NJ: Erlbaum.

GUIDE TO FURTHER READING
Most of the issues covered in this chapter, and many others, are explored in considerably more detail in Giunti (1992). The relevance of dynamical systems theory for the study of computational systems is a central theme of Wolfram's work on cellular automata (Wolfram, 1986). The status of computers as dynamical systems and computation in neural systems is discussed in Hopfield (1993). A connectionist-oriented discussion of computational and dynamical models of cognition may be found in Horgan and Tienson (1992; forthcoming). An influential discussion of connectionism as a form of dynamics-based research may be found in Smolensky (1988). An ambitious and provocative work with an interestingly different perspective from the one presented here is Kampis (1991).

Giunti, M. (1992). Computers, dynamical systems, phenomena and the mind. Ph.D. dissertation, Indiana University, Bloomington.

Hopfield, J.J. (1993). Neurons, dynamics, and computation. Physics Today, 47, 40-46.

Horgan, T., and Tienson, J. (1992). Cognitive systems as dynamical systems. Topoi, 11, 27-43.

Horgan, T., and Tienson, J. (forthcoming). A non-classical framework for cognitive science. Synthese.

Kampis, G. (1991). Self modifying systems in biology and cognitive science. Oxford, England: Pergamon.

Smolensky, P. (1988). On the proper treatment of connectionism. Behavioral and Brain Sciences, 11, 1-74.

Wolfram, S. (1986). Theory and applications of cellular automata. Singapore: World Scientific.

