

Marco Giunti

IS COMPUTATIONALISM THE HARD CORE OF COGNITIVE SCIENCE?[1]

In Abrusci, Michele et al. (eds.), Prospettive della logica e della filosofia della scienza. Atti del convegno triennale SILFS, Roma, 3-5 gennaio 1996, Pubblicazioni della SILFS, Edizioni ETS, Pisa, 1998, pp. 255-267.
1. Introduction

By computationalism in cognitive science I mean the view that cognition essentially is a matter of the computations that a cognitive system[2] performs in certain situations. The main goal of this paper is to assess whether this view may represent a basic hypothesis shared by the three current approaches to cognition: the symbolic (or classic) approach, connectionism, and nonconnectionist dynamics.

If we look at the models actually used in cognitive science, we see that a different type of model corresponds to each approach. The symbolic approach (Newell and Simon 1972; Newell 1980; Pylyshyn 1984; Johnson-Laird 1988) employs symbolic processors as models. As a first approximation, we may take a symbolic processor to be any device that performs effective transformations of appropriately defined symbol structures. The connectionist approach (Rumelhart and McClelland 1986), on the other hand, employs connectionist networks, while nonconnectionist dynamicists use other kinds of continuous systems specified by differential (or difference) equations. Nonconnectionist researchers favoring a dynamical perspective are active in many fields. For examples see Port and van Gelder (1995).

Computationalism is the received view within the symbolic approach. However, connectionists, too, often refer to the information processing that goes on in connectionist networks as computational, even though the computations performed by a network would be quite different from the transformations of symbols typical of classic models. Nonconnectionist dynamicists, on the other hand, tend to explain the behavior of their models in terms of dynamical concepts such as, for example, orbits in state space, attractors, bifurcations, etc. Nevertheless, a common argument against this approach maintains that the dynamical jargon is just a different way of describing the computations performed by the models. Therefore, dynamicists would not really propose an alternative view of the nature of cognition. Instead, they would just rephrase good old-fashioned computationalism in a fancier or, at best, more efficient language.

In this paper, I will confront these issues head-on. The main thesis I am going to defend is that computationalism is only consistent with symbolic modeling or, more generally, with any other type of computational modeling. In particular, those scientific explanations of cognition which are based on (i) an important class of connectionist models or (ii) nonconnectionist continuous models cannot be computational, for these models are not the kind of system which can perform computations in the sense of standard computation theory.

The thesis that computationalism is only consistent with computational modeling is empty unless one gives a sufficiently precise characterization of what a computational model of a cognitive system is. By this term, I mean any computational system that describes (or, at least, is intended to describe) some cognitive aspect of the cognitive system. Intuitively, by the term computational system I refer to any device of the kind studied by standard computation theory. Thus, for example, Turing machines, register machines, finite state automata, and all the symbolic processors employed in cognitive science are four different types of computational systems. By contrast, so-called analog computers are not computational systems. In section 2.2, I will propose a formal explication of this intuitive notion of a computational system.

Thus, if my thesis is correct, we are left with the following alternative. Either we construe computationalism by explicitly referring to some nonstandard notion of computation, or we simply abandon the idea that computationalism is a basic hypothesis shared by all current research in cognitive science. In the last section of this paper, I will also suggest that a different hypothesis, dynamicism, may represent a viable alternative to computationalism. According to it, cognition essentially is a matter of the state evolutions that a cognitive system undergoes in certain situations.
 

2. The argument

The main thesis of this paper is that computationalism is only consistent with symbolic modeling or, more generally, with any other type of computational modeling. The argument I am going to propose is based on two premises. The first one affirms that all models currently employed in cognitive science are mathematical dynamical systems. The second premise, on the other hand, affirms that a computation (in the sense of standard computation theory) can only be performed by that special type of mathematical dynamical system which I have called a computational system. Having established these two premises, I will then show that (a) an important class of connectionist models, and (b) nonconnectionist continuous models are not computational systems. Hence, these models cannot perform computations in the standard sense. But then, if our scientific explanations of cognition are based on these models, we cannot maintain that cognition is, essentially, a matter of the computations performed by the cognitive system which these models are intended to describe. On the other hand, (c) all symbolic models are computational systems. Therefore, computationalism is only consistent with symbolic modeling or, more generally, with any other approach which employs computational systems as models of cognition.
 

2.1. The first premise

The first premise of my argument is that all models currently employed in cognitive science are mathematical dynamical systems. A real dynamical system is any real system that changes over time. Therefore, since any real system can be thought to change in time (in some respect), any real system is a real dynamical system. A mathematical dynamical system, on the other hand, is an abstract mathematical structure that can be used to describe the change of a real system as an evolution through a series of states. If the evolution of the real system is deterministic, that is, if the state at any future time is determined by the state at the present time, then the abstract mathematical structure consists of three elements. The first element is a set T that represents time. T may be either the reals, the rationals, the integers, or the nonnegative portions of these structures. Depending on the choice of T, then, time is represented as continuous, dense, or discrete. The second element is a nonempty set M that represents all possible states through which the system can evolve; M is called the state space of the system. The third element is a set of functions {g^t} that tells us the state of the system at any instant t provided that we know the initial state; each function in {g^t} is called a state transition of the system. For example, if the initial state is x ∈ M, the state at time t is given by g^t(x), the state at time u > t is given by g^u(x), etc. The functions in the set {g^t} must only satisfy two conditions. First, the function g^0 must take each state to itself. For, if the initial state is x, the state g^0(x) at time 0 obviously is x itself. Second, the composition of any two functions g^t and g^w must be equal to the function g^(t+w). For, given an arbitrary initial state x, the state g^(t+w)(x) reached at time t+w can always be considered as the result of two successive state transitions, the first from state x to state g^t(x), and the second from state g^t(x) to state g^w(g^t(x)).
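As a toy illustration (my own, not drawn from the paper), the two conditions on the state transitions can be checked numerically for the exponential flow g^t(x) = x·e^(kt), a simple mathematical dynamical system with continuous time; the constant k is an arbitrary illustrative choice:

```python
import math

# A toy mathematical dynamical system <T, M, {g^t}>:
# T = the reals, M = the reals, g^t(x) = x * e^(k*t).
K = 0.5  # arbitrary illustrative constant

def g(t, x):
    """The state transition g^t applied to initial state x."""
    return x * math.exp(K * t)

# Condition 1: g^0 is the identity on M.
assert g(0, 3.7) == 3.7

# Condition 2: g^(t+w)(x) = g^w(g^t(x)) for arbitrary t, w, x
# (checked up to floating-point tolerance).
t, w, x = 1.2, 2.5, 3.7
assert math.isclose(g(t + w, x), g(w, g(t, x)))
```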

An important subclass of the mathematical dynamical systems is that of all systems with discrete time. Any such system is called a cascade. More precisely, a mathematical dynamical system <T, M, {g^t}> is a cascade just in case T is equal to the nonnegative integers (or to the integers). To obtain a cascade, we may start from any nonempty set M and any function g: M → M. We then set T equal to the nonnegative integers, and we define the state transitions {g^t} as follows: g^0 = the identity function on M and, for any x ∈ M, g^(t+1)(x) = g(g^t(x)). In other words, we generate an arbitrary state transition g^t (t > 0) by iterating t times the function g (note that g^1 = g).
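This construction can be sketched in a few lines; the doubling map used below is a hypothetical example of my own choosing:

```python
def make_cascade(g):
    """Given g: M -> M, return the family {g^t} defined by t-fold iteration."""
    def gt(t, x):
        for _ in range(t):
            x = g(x)
        return x
    return gt

# Example: the doubling map on the nonnegative integers.
gt = make_cascade(lambda x: 2 * x)

assert gt(0, 5) == 5                    # g^0 is the identity
assert gt(1, 5) == 10                   # g^1 = g
assert gt(3, 5) == 40                   # g^3(5) = 2^3 * 5
assert gt(2 + 4, 5) == gt(4, gt(2, 5))  # g^(t+w) = g^w composed with g^t
```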

As mentioned, the models currently employed in cognitive science can basically be classified into three different types: (1) symbolic processors, (2) neural networks, and (3) other continuous systems specified by differential (or difference) equations.[3] That a system specified by differential or difference equations is a mathematical dynamical system is obvious, for this concept is expressly designed to describe this class of systems in abstract terms. That a neural network is a mathematical dynamical system is also not difficult to show. A complete state of the system can in fact be identified with the activation levels of all the units in the network, and the set of state transitions, on the other hand, is determined by the differential (or difference) equations that specify how each unit is updated. To show that all symbolic processors are mathematical dynamical systems is a bit more complicated. The argumentative strategy I prefer first considers a special class of symbolic processors (such as Turing machines, or monogenic production systems, etc.) and then shows that all systems of this special type are mathematical dynamical systems with discrete time, i.e., cascades. Given the strong similarities between different types of symbolic processors, it is then not difficult to see how the argument given for one type could be modified to fit any other type (Giunti 1992, 1997). We may thus conclude that all models currently employed in cognitive science are mathematical dynamical systems. Here, I limit myself to showing that the thesis holds for an arbitrary Turing machine.

A Turing machine is an ideal mechanism that evolves in discrete time steps. This mechanism is usually pictured as having three parts. First, a tape divided into a countably infinite number of adjacent squares. Each square contains exactly one symbol taken from a finite alphabet {a_j}.[4] Second, a head which is located on a square of the tape and can perform three different operations: write a symbol on that square, move to the adjacent square to the right, or move to the adjacent square to the left. Third, a control unit which, at any time step, is in exactly one of a finite number of internal states {q_i}. The behavior of the machine is specified by a set of instructions, which are conditionals of the form: if the internal state is q_i, and the symbol on the square where the head is located is a_j, write symbol a_k (move one square to the right, move one square to the left) and change internal state to q_l. Each instruction can thus be written as a quadruple of one of the three types: q_i a_j a_k q_l, q_i a_j R q_l, q_i a_j L q_l, where R and L stand, respectively, for "move to the right" and "move to the left". The only requirement that the set of quadruples must satisfy is that it be consistent, in the sense that this set cannot contain any two conflicting instructions, that is, two different quadruples that begin with the same <internal state, symbol> pair.

Given this standard description of an arbitrary Turing machine, it is now not difficult to see that this ideal mechanism can in fact be identified with a mathematical dynamical system <T, M, {g^t}>. Since a Turing machine evolves in discrete time steps, we may take the time set T to be the set of the nonnegative integers. Since the future behavior of the machine is determined when the content of the tape, the position of the head, and the internal state are fixed, we may take the state space M to be the set of all triples <tape content, head position, internal state>. And, finally, the set of state transitions {g^t} is determined by the set of quadruples of the machine. To see this point, first note that the set of quadruples tells us how the complete state of the machine changes after one time step. That is, the set of quadruples defines the state transition g^1. We then obtain any other state transition g^t (t > 1) by iterating g^1 t times, and we simply take the state transition g^0 to be the identity function on M. We may thus conclude that any Turing machine is in fact a mathematical dynamical system <T, M, {g^t}> with discrete time, i.e., a cascade. A similar argument can be given for any other type of symbolic processor that we may consider. We may thus also conclude that all models currently employed in cognitive science are mathematical dynamical systems.
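A minimal sketch of this identification, using a hypothetical two-instruction machine (the quadruples, state names, and encoding are my own illustrative choices, not a construction from the paper):

```python
# A Turing machine viewed as a cascade: the quadruples define g^1, and
# g^t is obtained by iterating g^1. A complete state is a triple
# <tape content, head position, internal state>; the tape is a dict
# mapping square -> symbol, with unlisted squares blank ('b').
QUADS = {
    ('q0', 'b'): ('1', 'q1'),   # on blank in q0: write '1', go to q1
    ('q1', '1'): ('R', 'q0'),   # on '1' in q1: move right, go to q0
}

def g1(state):
    """One step of the machine: the state transition g^1."""
    tape, head, q = state
    symbol = tape.get(head, 'b')
    if (q, symbol) not in QUADS:    # no applicable instruction:
        return state                # the machine halts (fixed point)
    action, q_next = QUADS[(q, symbol)]
    tape = dict(tape)
    if action == 'R':
        head += 1
    elif action == 'L':
        head -= 1
    else:
        tape[head] = action         # write a symbol
    return (tape, head, q_next)

def gt(t, state):
    """g^t: t-fold iteration of g^1 (g^0 is the identity on M)."""
    for _ in range(t):
        state = g1(state)
    return state

# Four steps from the empty tape: the machine has written two '1's.
assert gt(4, ({}, 0, 'q0')) == ({0: '1', 1: '1'}, 2, 'q0')
```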
 

2.2. The second premise

The second premise of my argument affirms that a computation (in the sense of standard computation theory) can only be performed by a computational system. Intuitively, by this term I refer to any device of the kind studied by standard computation theory.[5] Standard computation theory studies a family of abstract mechanisms which are typically used to compute or recognize numeric functions, sets of numbers, or numbers. These devices can be divided into two broad categories: automata or machines (e.g., Turing machines, register machines, cellular automata, etc.) and systems of rules for symbol manipulation (e.g., monogenic production systems, monogenic Post canonical systems, tag systems, etc.).

Even though the domain of standard computation theory includes many different devices, any of them only performs operations that are clearly effective or mechanical in the intuitive sense.[6] I call any sequence of operations performed by any such device a standard computation. According to this terminology, then, the second premise affirms that a standard computation can only be performed by a computational system. It is thus obvious that this premise is a direct consequence of the definition of standard computation I have just given.

Nevertheless, someone might object that the second premise not only is (trivially) true, but also irrelevant. According to my imaginary critic, the interesting question is not whether a standard computation can only be performed by a computational system but, rather, whether standard computational methods are sufficient to accurately describe the behavior of all models employed in cognitive science (be these models computational or not). I will give an answer to this kind of objection in section 2.5. Before I can proceed with my argument, however, I need to give a formal explication of the intuitive concept of a computational system.
 

2.3. A formal definition of a computational system

To this end, let us first of all consider the mechanisms studied by standard computation theory and ask (i) what type of system they are, and (ii) what specific feature distinguishes these mechanisms from other systems of the same type.

As mentioned, standard computation theory studies many different kinds of abstract systems. A basic property that is shared by all these mechanisms is that they are mathematical dynamical systems with discrete time, i.e., cascades. I have already shown that this is true of Turing machines (see section 2.1), and it is not difficult to give a similar argument for any other type of mechanism which has been actually studied by standard computation theory. Therefore, on the basis of this evidence, we may reasonably conclude that all computational systems are cascades.

However, standard computation theory does not study all cascades. The specific feature that distinguishes computational systems from other mathematical dynamical systems with discrete time is that a computational system can always be described in an effective way.[7] Intuitively, this means that the constitution and operations of the system are purely mechanical or that the system can always be identified with an idealized machine. However, since we want to arrive at a formal definition of a computational system, we cannot limit ourselves to this intuitive characterization. Rather, we must try to put it in a precise form.

Since I have informally characterized a computational system as a cascade that can be effectively described, let us ask first what a description of a cascade is. If we take a structuralist viewpoint, this question has a precise answer. A description (or a representation) of a cascade consists of a second cascade isomorphic to it where, by definition, a cascade MDS1 = <T, M1, {h^t}> is isomorphic to a given cascade MDS = <T, M, {g^t}> if and only if there is a bijection f: M → M1 such that, for any t ∈ T and any x ∈ M, f(g^t(x)) = h^t(f(x)).

In the second place, let us ask what an effective description of a cascade is. Since I have identified a description of a cascade MDS = <T, M, {g^t}> with a second cascade MDS1 = <T, M1, {h^t}> isomorphic to MDS, an effective description of MDS will be an effective cascade MDS1 isomorphic to MDS. The problem thus reduces to an analysis of the concept of an effective cascade. Now, it is natural to analyze this concept in terms of two conditions: (a) there is an effective procedure for recognizing the states of the system or, in other words, the state space M1 is a decidable set; (b) each state transition function h^t is effective or computable. These two conditions can be made precise in several ways which turn out to be equivalent. The one I prefer is by means of the concept of Turing computability.[8] If we choose this approach, we will then require that an effective cascade satisfy: (A) the state space M1 is a subset of the set P(A) of all finite strings built out of some finite alphabet A, and there is a Turing machine that decides whether an arbitrary finite string is a member of M1; (B) for any state transition function h^t, there is a Turing machine that computes h^t.
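To make the isomorphism condition and the effectiveness conditions (A) and (B) concrete, here is a toy example of my own (not from the paper): the successor cascade on the nonnegative integers, described effectively by an isomorphic cascade on unary strings over the one-letter alphabet {'|'}:

```python
# MDS:  M  = the nonnegative integers, g(x) = x + 1, so g^t(x) = x + t.
# MDS1: M1 = the finite strings over A = {'|'}, h(s) = s + '|',
#       so h^t(s) appends t strokes.
# f(x) = '|' * x is a bijection M -> M1 with f(g^t(x)) = h^t(f(x)),
# so MDS1 describes MDS; and MDS1 is effective, since membership in M1
# is decidable and each h^t is computable.

def g_t(t, x):
    return x + t                      # g^t on the nonnegative integers

def h_t(t, s):
    return s + '|' * t                # h^t on unary strings

def f(x):
    return '|' * x                    # the isomorphism f: M -> M1

def in_M1(s):
    return all(c == '|' for c in s)   # condition (A): M1 is decidable

# The isomorphism condition f(g^t(x)) = h^t(f(x)), spot-checked.
for t in range(5):
    for x in range(5):
        assert f(g_t(t, x)) == h_t(t, f(x))

assert in_M1('|||') and not in_M1('|x|')
```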

Finally, we are in a position to formally define a computational system. This definition expresses in a precise way the informal characterization of a computational system as a cascade that can be effectively described.

DEFINITION (computational system)
MDS is a computational system if and only if MDS = <T, M, {g^t}> is a cascade, and there is a second cascade MDS1 = <T, M1, {h^t}> such that MDS1 is isomorphic to MDS and

  1. if P(A) is the set of all finite strings built out of some finite alphabet A, M1 ⊆ P(A) and there is a Turing machine that decides whether an arbitrary finite string is a member of M1;
  2. for any t ∈ T, there is a Turing machine that computes h^t.


This definition is formally correct.[9] However, the question remains whether it is materially adequate too. This question will have a positive answer if we can argue that the systems specified by the definition are exactly the systems studied by standard computation theory. In the first place, we can give an argument a priori. If a cascade satisfies this definition, then standard computation theory certainly applies to it, for it is always possible to find an effective description of that cascade. Conversely, if a cascade does not satisfy this definition, then there is no effective description of that cascade, so that standard computation theory cannot apply to it. In the second place, we can also give an argument a posteriori. In fact, it is tedious but not difficult to show that all systems that have been actually studied by standard computation theory (Turing machines, register machines, monogenic production systems, cellular automata, etc.) satisfy the definition.[10]
 

2.4. Two sufficient conditions for a system not to be computational

The definition of a computational system allows us to deduce two sufficient conditions for a mathematical dynamical system not to be computational. Namely, a mathematical dynamical system MDS = <T, M, {g^t}> is not computational if it is continuous in either time or state space or, more precisely, if either (i) its time set T is the set of the (nonnegative) real numbers, or (ii) its state space M is not denumerable.[11]

An immediate consequence of condition (ii) is that any finite neural network whose units have continuous activation levels is not a computational system. A complete state of any such network can always be identified with a finite sequence of real numbers and, since each unit has a continuous range of possible activation levels, the set of all possible states of this network is not denumerable. Therefore, by condition (ii), any finite network with continuous activation levels is not a computational system. Also note that the same conclusion holds for any continuous system specified by differential (or difference) equations. Since all these systems are continuous (in time or state space), none of them is computational.
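For instance, a two-unit network with continuous (tanh) activations, updated by the difference equation a_i(t+1) = tanh(Σ_j w_ij a_j(t)), has states ranging over a continuum in R^2; the weights below are arbitrary illustrative choices of my own:

```python
import math

# A tiny network with continuous activation levels: states are pairs of
# reals, so the state space R^2 is not denumerable, and by condition (ii)
# the system is not computational. (Toy weights, for illustration only.)
W = [[0.5, -1.2],
     [0.8,  0.3]]

def step(a):
    """One state transition g^1: a_i <- tanh(sum_j W[i][j] * a[j])."""
    return [math.tanh(sum(W[i][j] * a[j] for j in range(2)))
            for i in range(2)]

a = [0.1, -0.4]
for _ in range(3):
    a = step(a)

# Every activation is a real number strictly between -1 and 1.
assert all(-1.0 < x < 1.0 for x in a)
```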
 

2.5. Summing up the argument

We have thus seen that (I) all models currently employed in cognitive science are mathematical dynamical systems; (II) a standard computation can only be performed by a computational system; (III) any finite neural network whose units have continuous activation levels or, more generally, any continuous system specified by differential (or difference) equations is not a computational system. Hence, all connectionist models in this class and all nonconnectionist continuous models cannot perform standard computations. But then, if our scientific explanations of cognition are based on these models, we cannot maintain that cognition is, essentially, a matter of the standard computations performed by the cognitive system which these models are intended to describe. On the other hand, it is obvious that (IV) all symbolic models are computational systems.[12] Therefore, computationalism is only consistent with symbolic modeling or, more generally, with any other approach which employs computational systems as models of cognition.

A word of caution is needed here. Somebody might object to this conclusion in the following way. It is well known that the behavior of virtually all continuous systems considered by physics can be approximated, to an arbitrary degree of precision, by a computational system, even though these systems are not computational systems themselves.[13] Why should the continuous systems considered in cognitive science be different in this respect? As long as the behavior of a continuous model of a cognitive system can be approximated (to an arbitrary degree of precision) by a computational system, there is nothing, in the model, which is beyond the reach of standard computational methods. Therefore, it is false that computationalism is only consistent with computational modeling.

This objection is confused because it blurs the distinction between the standard computations performed by a system, and the approximation[14] of its behavior by means of standard computations performed by a different system. In the first place, this distinction is essential for the formulation of the computational hypothesis itself. If computationalism is intended as a very general hypothesis that indicates the appropriate style of explanation of cognitive phenomena (namely, a computational style), it is crucial to affirm that cognition depends on the standard computations performed by the cognitive system we are studying, for it is precisely by understanding the particular nature of these computations that we can produce a detailed explanation of cognition. But then, in formulating the computational hypothesis, we are in fact implicitly assuming that the cognitive system is a computational system; we are not just claiming that its behavior can be approximated by a computational system.[15] Otherwise, we should similarly maintain that, since a hurricane can be approximated by a digital computer, the nature of the particular computations performed by the digital computer to approximate the behavior of the hurricane allows us to explain the hurricane. In the second place, I have argued (section 2.4) that any continuous model is not a computational system, and thus it cannot perform standard computations. But then, if our scientific explanations of cognition are based on continuous models, we cannot maintain that cognition is, essentially, a matter of the standard computations performed by the cognitive system which these models are intended to describe. Therefore, computationalism is indeed inconsistent with continuous modeling.
 

3. Concluding remarks

My argument shows that, unless we construe computationalism by explicitly referring to some nonstandard notion of computation, we cannot maintain that computationalism is a basic hypothesis shared by all current research in cognitive science. In view of this fact, however, we should consider at least two further questions. First, what kind of nonstandard notion of computation would be needed for an adequate generalization of the computational hypothesis? And, second, is there some other hypothesis that might play this unifying role as well?

As regards the first question, I will limit myself to just one preliminary remark, for a critical discussion is beyond the scope of this paper. Even within these limits, however, it seems quite reasonable to maintain that a generalized version of the computational hypothesis should be based on a theory of computation that (i) applies to continuous systems as well as to standard computational systems; (ii) reduces to the standard theory in the special case of standard computational systems; and thus (iii) yields all the standard computability results as special cases. I leave it open for further discussion whether these conditions are indeed well chosen, or whether they are in fact satisfied by some existing theories which intend to generalize various aspects of standard computation theory (Blum, Shub, and Smale 1989; Friedman 1971; Shepherdson 1975, 1985, 1988; Montague 1962).

As for the second question, we have seen that all models currently employed in cognitive science are mathematical dynamical systems. Furthermore, in general, a mathematical dynamical system changes its behavior according to the particular state evolution that the system undergoes. But then, if our aim is to model cognition by means of appropriate mathematical dynamical systems, we may very well claim that cognition is, essentially, a matter of the particular state evolutions that a cognitive system undergoes in certain situations. I call this hypothesis dynamicism. For two, quite different, articulations and defenses of dynamicism see van Gelder and Port (1995) and Giunti (1995, 1997).

It is thus clear that dynamicism, unlike (standard) computationalism, is consistent with symbolic, connectionist, and nonconnectionist continuous modeling as well. Therefore, all research on cognition might end up sharing this new hypothesis, independently of the type of model employed. The question remains, however, whether this possibility will really obtain. I believe that the answer to this question depends on whether the explicit assumption of a dynamical perspective can sharply enhance our understanding of cognition. This issue, however, will ultimately be settled by detailed empirical investigation, not by abstract argument.

On the other hand, it is also quite obvious that the dynamical hypothesis, as stated above, only gives us an extremely general methodological indication. Essentially, it tells us that cognition should be explained by focusing on the class of the dynamical models of a cognitive system, where a dynamical model is any mathematical dynamical system that describes some cognitive aspect of the cognitive system. Now, a standard objection against this version of dynamicism is that this methodological indication is so general as to be virtually empty. Unfortunately, a detailed rebuttal to this charge goes beyond the scope of this paper. Therefore, I must limit myself to briefly outlining the three defenses that have been adopted by the proponents of the dynamical approach.

The first line of defense points out that dynamicism, just like computationalism, has in fact two aspects. The first one is the specification of a particular class of models (dynamical vs. computational models), while the second is the proposal of a conceptual framework (dynamical systems theory vs. computation theory) that should be used in the study of these models. Therefore, if we also consider this second aspect, we see that the mathematical tools of dynamical systems theory provide dynamicism with a rich methodological content, which clearly distinguishes this approach from the computational one (Giunti 1995, 1997; van Gelder and Port 1995; van Gelder 1995).

Second, some proponents of the dynamical approach (van Gelder and Port 1995; van Gelder 1995) have in fact restricted the class of models allowed by the dynamical hypothesis. According to their proposal, dynamical models include most connectionist models and all nonconnectionist continuous models, but they exclude computational models. Thus, under this interpretation of dynamicism, it is no longer true that the dynamical hypothesis is consistent with symbolic modeling. These authors, however, do not take this point to be a drawback, for they maintain that all symbolic models give a grossly distorted picture of real cognition.

Finally, my line of defense (Giunti 1995, 1997) also restricts the class of the dynamical models, but in a different way. The heart of my proposal lies in the distinction between two different kinds of dynamical models: simulation models and Galilean ones. This distinction is an attempt to set apart two, quite different, modeling practices. Simulation models are mathematical dynamical systems that allow us to individuate, or even produce, a process that correctly simulates a real (cognitive) process. The simulation is correct in the sense that we are able to empirically establish that the simulating process is similar to the real one in several relevant respects. Which respects are to be considered relevant, and which empirical methods we may employ to establish the similarity, is usually clear in each specific case. However, besides this empirical adequacy (which most often is itself quite weak), it is very difficult, if not impossible, to find an interpretation which assigns a feature (aspect, property) of the real system to each component of the model. By contrast, Galilean models are built in such a way that no component of the model is arbitrary. Rather, each component must correspond to a magnitude of the real system. Therefore, even if a Galilean model turns out to be empirically inadequate, we can nonetheless maintain that it is a largely nonarbitrary description of a real phenomenon. This cannot be claimed in the case of simulation models. Galilean modeling is in principle consistent with symbolic, connectionist, and nonconnectionist continuous modeling as well. What I have been arguing for (Giunti 1995, 1997) is that cognitive scientists should take the ideal of Galilean modeling more seriously, because the models of this type are superior to simulation models, both in the descriptive respect and in the explanatory one.
 

Notes

  1. Giunti (1996) is a shorter version of this paper.
  2. I take a cognitive system to be any real system that has some cognitive property. Note that this definition includes both natural systems such as humans and other animals, and artificial devices such as robots, implementations of AI (artificial intelligence) programs, some implementations of neural networks, etc.
  3. By a continuous system specified by differential (or difference) equations I mean a mathematical dynamical system whose state space is not denumerable. Recall that a set is denumerable if and only if it can be put in a one-to-one correspondence with (a subset of) the natural numbers.
  4. The first symbol of the alphabet, a_0, is usually identified with a special symbol, the blank, also indicated by b. Only a finite number of squares may contain nonblank symbols. All the other squares must contain the blank.
  5. Several authors have studied computing devices whose basic operations are transformations of real numbers or, even more generally, transformations that define some abstract mathematical structure on some set (Blum, Shub, and Smale 1989; Friedman 1971; Shepherdson 1975, 1985, 1988). These devices may not be computational systems in the standard sense, for an operation on real numbers involves the manipulation of an infinite amount of information, and so it may not in general be possible to effectively describe the basic operations of these machines. Within standard computation theory it is possible to define computable functions on the real numbers, but the values of these functions can never be exactly computed. Rather, these values are calculable to any desired degree of accuracy (Turing 1937; Pour-El and Richards 1989). By contrast, the basic operations of the devices mentioned above are assumed to have infinite precision, in the sense that they yield the exact values of a real-valued function. Other devices that may not be computational systems are unrestricted cellular automata. These cellular automata either operate on arrays with an infinite number of cells in a nonblank state, or they do not satisfy the restriction that a neighborhood whose cells are all in the blank state never changes the state of its central cell.
  6. It is well known that the class of numeric functions computable by any of these devices is included in the class of the recursive functions (or, equivalently, of the Turing computable functions). The hypothesis that any numeric function computable, or effective, in the intuitive sense is recursive is known as the Church-Turing thesis (Church 1936; Turing 1937). There is (almost) universal agreement that the fact just mentioned provides sufficient evidence for this thesis. For two interesting arguments against the Church-Turing thesis, however, see Kalmár (1959) and Bowie (1973).
  7. Sometimes, computational systems are simply identified with discrete systems, that is, cascades with, at most, a denumerable number of possible states. This identification is not correct. Computational systems are a proper subset of the discrete systems. The crucial property of computational systems is that of being effectively describable, and not all discrete systems satisfy this property. This can be proved if the intuitive idea of an effectively describable cascade is explicated by means of the usual, absolute, notion of computability (see the definition I give in this section). However, if we also allow relative computability, then computational systems and discrete ones turn out to be identical (Giunti 1997, chapter 2).
  8. An equivalent approach is by means of the concept of recursive function.
  9. Also note that, since MDS1 is a cascade, the requirement that any state transition function h^t be Turing computable is equivalent to the Turing computability of h^1 (or, if T = Z, to the Turing computability of both h^1 and h^-1).
  10. I proved that both Turing machines and cellular automata satisfy the definition (Giunti 1992, 1997). Similar proofs can be given for all other types of computational systems.
  11. A set is denumerable just in case it can be put in a one-to-one correspondence with (a subset of) the nonnegative integers. If condition (i) is satisfied, then MDS is not a cascade, so that, by the definition, MDS is not a computational system. If condition (ii) holds, then, since by the definition MDS is isomorphic to MDS1, M1 is not denumerable. But then, M1 cannot be a subset of the set P(A) of all finite strings built out of some finite alphabet A, for any such subset is denumerable. Therefore, condition (1) of the definition is not satisfied, and MDS is not a computational system.
  12. Any symbolic model can be identified with an appropriate computer program. But any computer program is a computational system whose state space and state transitions are defined by means of the basic operations of the computer language in which the program is written.
  13. According to Kreisel's notion of mechanistic theory (1974), the behavior of any system considered by a mechanistic theory can be approximated by a Turing machine. On the other hand, all those parts of a particular theory (say a physical theory) that we regard as worked out are bound to be mechanistic. Therefore, virtually all systems considered by a sufficiently worked out theory can be approximated by a Turing machine.
  14. The term simulation is sometimes used to mean the computational approximation of the behavior of a continuous system. This term, however, also has a second meaning, which is more common in cognitive science. According to this usage, a simulation is an artificial process that is similar to a real (cognitive) process in several relevant respects. To avoid equivocation, I reserve the term simulation for this second meaning (see section 3). For the first meaning, I just use approximation.
  15. The computational hypothesis implies that an arbitrary cognitive system performs standard computations. But a standard computation can only be performed by a computational system. Therefore, according to the computational hypothesis, an arbitrary cognitive system is a computational system.
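The approximability claim in note 5, that standard computability yields the values of computable real-valued functions to any desired degree of accuracy but never exactly, can be illustrated with a routine bisection sketch (a hypothetical toy of mine, not Turing's original construction):

```python
# Illustration of note 5: no standard computation outputs the exact value of
# sqrt(2), yet it can be computed to any desired accuracy. Bisection narrows
# an interval around sqrt(2) until its width falls below a requested eps.

def sqrt2_approx(eps):
    """Return a rational approximation of sqrt(2) accurate to within eps."""
    lo, hi = 1.0, 2.0          # sqrt(2) lies between 1 and 2
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid < 2:      # keep sqrt(2) inside [lo, hi]
            lo = mid
        else:
            hi = mid
    return lo
```

For any eps > 0 the loop terminates with an approximation within eps of sqrt(2); what no standard computation can deliver is the infinite-precision value itself.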
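The cascade condition invoked in note 9 can also be put in code: in a cascade the whole family of state transitions is generated by the one-step map, so the computability of the single step carries over to every finite iterate. The map h1 below is an arbitrary hypothetical example.

```python
# Sketch of the cascade condition in note 9: the t-step transition h^t is
# just the t-fold iterate of the one-step map h^1, which is why Turing
# computability of h^1 suffices for the computability of every h^t.

def h1(x):
    """A hypothetical effectively describable one-step transition."""
    return 2 * x + 1

def h(t, x):
    """h^t(x), computed by iterating h^1 exactly t times."""
    for _ in range(t):
        x = h1(x)
    return x

# The defining semigroup identity h^(t+s)(x) = h^t(h^s(x)) holds by
# construction.
assert h(5, 3) == h(2, h(3, 3))
```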
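Finally, the denumerability fact used in notes 3 and 11, that the set P(A) of all finite strings over a finite alphabet A is denumerable, can be exhibited constructively: listing the strings in shortlex order pairs each string with a unique natural number. The alphabet 'ab' below is an arbitrary illustration.

```python
# Constructive illustration of note 11: enumerating P(A) in shortlex order
# (shorter strings first, same-length strings in alphabetical order) gives
# a one-to-one correspondence between P(A) and the natural numbers.

from itertools import count, product

def shortlex(alphabet):
    """Generate every finite string over `alphabet`, shortest first."""
    for length in count(0):
        for letters in product(alphabet, repeat=length):
            yield ''.join(letters)

gen = shortlex('ab')
first_seven = [next(gen) for _ in range(7)]
print(first_seven)  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Each string receives as its index the position at which it appears in the enumeration, which is exactly the pairing with (a subset of) the natural numbers that the note invokes.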
References

Blum, Lenore, Mike Shub, and Stephen Smale. 1989. On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines. Bulletin of the American Mathematical Society 21, 1:1-46.

Bowie, G. Lee. 1973. An argument against Church's thesis. The Journal of Philosophy 70:66-76.

Cottrell, Garrison W., ed. 1996. Proceedings of the 18th annual conference of the cognitive science society. Mahwah, NJ:L. Erlbaum Associates.

Church, Alonzo. 1936. An unsolvable problem of elementary number theory. American Journal of Mathematics 58:345-363.

Davis, Martin, ed. 1965. The undecidable. Hewlett, NY:Raven Press.

Friedman, Harvey. 1971. Algorithmic procedures, generalized Turing algorithms, and elementary recursion theory. In Logic colloquium '69, 361-389. See Gandy and Yates 1971.

Gandy, Robin O., and C. M. E. Yates, eds. 1971. Logic colloquium '69. Amsterdam:North Holland.

Giunti, Marco. 1992. Computers, dynamical systems, phenomena, and the mind. Ph.D. dissertation. Bloomington, IN:Dept. of History and Philosophy of Science, Indiana University. (Microfilm published in 1993. Ann Arbor, MI:University Microfilms Inc.)

----------. 1995. Dynamical models of cognition. In Mind as motion, 549-571. See Port and van Gelder 1995.

----------. 1996. Beyond computationalism. In Proceedings of the 18th annual conference of the cognitive science society, 71-75. See Cottrell 1996.

----------. 1997. Computation, dynamics, and cognition. New York:Oxford University Press.

Harrington, L. A., et al., eds. 1985. Harvey Friedman's research on the foundations of mathematics. Amsterdam:North Holland.

Herken, Rolf, ed. 1988. The universal Turing machine: A half century survey. Oxford:Oxford University Press.

Heyting, A., ed. 1959. Constructivity in mathematics. Amsterdam:North Holland.

Johnson-Laird, Philip N. 1988. The computer and the mind. Cambridge:Harvard University Press.

Kalmár, László. 1959. An argument against the plausibility of Church's thesis. In Constructivity in Mathematics, 72­80. See Heyting 1959.

Kazemier, B., and D. Vuysje, eds. 1962. Logic and language. Dordrecht:D. Reidel Publishing.

Kreisel, G. 1974. A notion of mechanistic theory. Synthese 29:11-26.

Montague, R. 1962. Toward a general theory of computability. In Logic and language. See Kazemier and Vuysje 1962.

Newell, Allen. 1980. Physical symbol systems. Cognitive Science 4:135-183.

Newell, Allen, and Herbert Simon. 1972. Human problem solving. Englewood Cliffs, NJ:Prentice Hall.

Port, Robert F., and Timothy van Gelder. 1995. Mind as motion: Explorations in the dynamics of cognition. Cambridge:The MIT Press.

Pour-El, Marian B., and J. Ian Richards. 1989. Computability in analysis and physics. Berlin:Springer Verlag.

Pylyshyn, Zenon W. 1984. Computation and cognition. Cambridge:The MIT Press.

Rose, H. E., and John C. Shepherdson, eds. 1975. Logic colloquium '73. Amsterdam:North Holland.

Rumelhart, David E., and James L. McClelland, eds. 1986. Parallel distributed processing. 2 vols. Cambridge:The MIT Press.

Shepherdson, John C. 1975. Computation over abstract structures: serial and parallel procedures and Friedman's effective definitional schemes. In Logic colloquium '73, 445-513. See Rose and Shepherdson 1975.

----------. 1985. Algorithmic procedures, generalized Turing algorithms, and elementary recursion theory. In Harvey Friedman's research on the foundations of mathematics, 285-308. See Harrington et al. 1985.

----------. 1988. Mechanisms for computing over abstract structures. In The universal Turing machine: A half century survey, 581-601. See Herken 1988.

Turing, Alan M. 1937. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 42:230-265. (Reprinted in The undecidable, 115-154. See Davis 1965.)

van Gelder, Timothy. 1995. Connectionism, dynamics, and the philosophy of mind. To appear in the proceedings volume of Philosophy and the sciences of mind. The third Pittsburgh-Konstanz colloquium in the philosophy of science. Konstanz, May 1995.

van Gelder, T., and Robert F. Port. 1995. It's about time: an overview of the dynamical approach to cognition. In Mind as motion, 1-43. See Port and van Gelder 1995.

