This preprint differs from the published version. Do not quote or photocopy.

Turing, Wittgenstein and the Science of the Mind

A Critical Notice of Justin Leiber, 'An Invitation to Cognitive Science', Oxford: Basil Blackwell, 1991.

 

Diane Proudfoot, B. Jack Copeland

 

 

1 Introduction

The principal characters of Justin Leiber’s recent account of the origins of cognitive science are Turing and Wittgenstein. Leiber makes two important contributions to Turing scholarship. He provides a rationale for the Turing test which knits together the motivational remarks of Turing’s 1950 article more satisfyingly than any previously proposed (section 4, below), and he draws attention to Turing’s anticipation of connectionism in 1948 (section 3, below).1

Leiber’s account of the significance of Turing’s famous 1936 paper for the computational view of mind is, we believe, the one to which many, or even most, of those working in cognitive science tacitly subscribe. It is our opinion that this current orthodoxy distorts both Turing’s achievement and the epistemic status of the computational theory of mind (section 2). On the orthodox view, the claim that the mindbrain is equivalent to a Turing machine is apt to appear self-evident, an unassailable corollary of the Church-Turing thesis. One recent writer even goes so far as to attribute what we are calling the orthodox view to Turing himself: 'One of the central points that Turing was to make in his 1947 Lecture [Turing 1947] was that the Mechanist Thesis is not just licenced but is in fact entailed by his 1936 development of Church's Thesis' (Shanker 1987, p.615, Shanker's italics). In fact there is no such entailment. Nor is there any reason to think that Turing was confused about this; it is a trivial consequence of the central theorem of his 1936 paper that there is no such entailment. One of Turing's most important but least appreciated achievements was to provide cognitive science with the conceptual resources for understanding how the mindbrain could fail to be equivalent to a Turing machine and thus for understanding how the hypothesis that it is so equivalent can be an empirical one (as befits the fundamental hypothesis of a science).2

Leiber upsets the common view of Wittgenstein by arguing that theses in the Philosophical Investigations commit Wittgenstein to a scientific approach to the mind and encourage a specifically computational theory of the mind. He argues for this view on the basis of central elements of Wittgenstein’s constructive accounts of mind and language. Consequently, he poses a significant challenge to standard Wittgenstein commentary. Just as the task once was to decide whether Wittgenstein was (any sort of) a behaviourist, the task now is to consider whether Wittgenstein can properly be regarded as a cognitive scientist.

Philosophers who are cognitive scientists have been little interested in Wittgenstein’s views on their discipline. This is surprising. It would appear that, from the viewpoint of a philosopher cognitive scientist, Wittgenstein has impeccable credentials. He was, after all, an engineer and a logician who was deeply interested in the issue of machine intelligence and who argued with Turing over matters central to the computational model of the mind. (At one time he even conducted a series of experiments on musical perception in the psychology laboratory at Cambridge. His collaborator C.S. Myers presented the results to the British Psychological Society. (Monk 1990, pp.49-50)) We might also expect that any philosopher cognitive scientist who takes advantage of Wittgenstein’s attack on the Cartesian picture would be interested in Wittgenstein’s other views. Nevertheless, Leiber is unusual in that he is a cognitive scientist who takes Wittgenstein's later philosophy to be of considerable importance.

For Leiber, Wittgenstein has a dual role. First, he is the 'harrower of the old paradigm' (pp. 64, 159). He relentlessly attacks the theory of the embodied soul and the principle that we just know our mental states, by introspection, and know them in their entirety (pp. 46, 70). These are the myths standing in the way of cognitive science (p. 65). Their demolition leaves the way open for the ‘new paradigm’: the computational theory of the mind and the principle that mental states require detailed empirical investigation. Second, Wittgenstein is a 'cognitive naturalist' (pp. 61, 159). In Leiber's view, it is Wittgenstein's concerns - language-learning, face recognition, aspect-perception and rule-following - which are the subject-matter of the successful projects of contemporary cognitive science. By the amassing of heterogeneous and often striking examples of psychological phenomena, Wittgenstein showed us the poverty of the commonsensical theory of the mind and set the agenda for cognitive science (pp. ix, 66). Here Leiber gives Wittgenstein a strong claim to the attention of cognitive scientists. He also, apparently unwittingly, provides a remarkable rejoinder to the common complaint that Wittgenstein took a 'know-nothing' approach. Far from this being so, Wittgenstein anticipated contemporary psychology.

Our concern in section 4 is to show that the new paradigm is in certain striking ways very similar to the old. In section 5 we argue that it is far from the case that his later philosophy committed Wittgenstein to a scientific, let alone a computational, view of the mind. That Wittgenstein attacked the old paradigm is well known. Less well appreciated is the fact that his attack was directed equally at much of what is now the new paradigm.

2 Turing and the Computational Theory of Mind

Leiber characterises ‘Turing’s project’ as that of ‘making symbols, numbers, proofs, and procedures into machines’ (p.54). Numbers, says Leiber, are ‘really just kinds of machines’ (ibid). The idea is that a number may be identified with a certain standard Turing machine that produces it as output. In this section we stress the importance for cognitive science of Turing’s complementary project, that of proving the existence of numbers which are not ‘kinds of machines’.

First some definitions. The steps of a procedure are said to be moronic if no insight, ingenuity or creativity is necessary in order to carry them out. A procedure for achieving some specified result is an algorithm when (1) every step of the procedure is moronic; (2) at the end of each step it is moronically clear what is to be done next (i.e. no insight etc. is needed to tell); and (3) the procedure is guaranteed to lead to the specified result in a finite number of steps (assuming each step is carried out correctly). We call a system (real or abstract) algorithmically calculable just in case there is an algorithm - known or unknown - for calculating its behaviour. That is, the system is algorithmically calculable if and only if there is an algorithm that yields a correct description of the system’s output (including the null response) from a description of the input into the system and a description of the relevant state of the system, for all possible inputs that produce output. (Certain inputs into certain algorithmically calculable systems produce no output because they drive the system into an infinite loop. The behaviour-calculating algorithm is not required to be able to predict when this will happen.) Tautologically, a computer running a given program P is algorithmically calculable - provided the hardware does not malfunction - because, of course, P is itself an algorithm for passing from input to output. Connectionist networks are also algorithmically calculable.
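
A minimal sketch may make the definition concrete. The toggle machine below, and all of its names, are our own illustration, not drawn from the text; it also shows the tautology just mentioned, since the system is itself a program which doubles as its own behaviour-calculating algorithm.

```python
# A minimal sketch (our illustration, not the text's) of an
# algorithmically calculable system: a two-state toggle machine.

def toggle_machine(state, bit):
    """The system itself: outputs its current state, and flips
    that state whenever the input bit is 1."""
    return state ^ bit, state          # (new state, output)

def describe(state, bit):
    """The behaviour-calculating algorithm required by the
    definition: from a description of the input and of the
    relevant state, a correct description of the output."""
    new_state, output = toggle_machine(state, bit)
    return (f"input {bit} in state {state}: the system settles "
            f"into state {new_state} and produces output {output}")

print(describe(0, 1))
```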

The Church-Turing thesis (sometimes inaccurately referred to simply as 'Church's thesis') may be expressed as follows (there are many equivalent formulations):

Any algorithmically calculable system can be simulated by a Turing machine. That is, a Turing machine can generate a correct symbolic description of the system’s output from a description of the input into the system and of the relevant state of the system, for all possible inputs that produce output.

The import of the Church-Turing thesis is that there is no algorithmically calculable device - or organ - more powerful than a universal Turing machine.

Is the mindbrain algorithmically calculable? For many the proposition that it is so has acquired the status of something too obvious to mention. For example, take the following famous argument due to Newell (which is intended to demonstrate that computer intelligence is achievable, at least in principle):

A universal [symbol] system always contains the potential for being any other system, if so instructed. Thus, a universal system can become a generally intelligent system. (1980, p.170)

What Newell means is that a universal symbol system (i.e. a general-purpose computer with no practical bound on the size of its memory) can be programmed to simulate any other algorithmically calculable system (a statement which is equivalent to the Church-Turing thesis). The assumption that a system exhibiting general intelligence will be algorithmically calculable is thought so obvious that Newell does not even mention it.

It is not only proponents of AI and cognitive science who subscribe to the dogma that our cognitive processes are algorithmically calculable. John Searle writes as follows:

Can the operations of the brain be simulated on a digital computer? . . . [G]iven Church's thesis that anything that can be given a precise enough characterization as a set of steps can be simulated on a digital computer, it follows trivially that the question has an affirmative answer. (1992, p.200)

In point of fact this follows only given the assumption that the brain is algorithmically calculable.

In similar vein Leiber purports to derive the computational theory of mind from the Church-Turing thesis:

If any formal system can be explicitly mechanized as a Turing Machine, so can any actual machine, nervous system, natural language, or mind in so far as these are determinate structures. (p.57; Leiber’s italics)

Later we have, even more explicitly:

[We] know in principle that there has to be a Turing Machine that represents the computational specs for my mind/brain . . . or yours, of course . . . In the in principle sense, you are that Turing Machine, but you do it all through a marvel of cunningly coordinated parallel processes, shortcuts piled within shortcuts. (p.100)

There is a fallacy here - and, as we have said, the fallacy is not only Leiber’s. Turing’s 1936 paper is remembered best for the universal Turing machine and the Church-Turing thesis. If equal attention were paid to the negative result that Turing proves there, the fallacy would no doubt be widely appreciated. (Penrose is one of the few people to have dwelt on the implications of Turing’s negative result for AI and cognitive science. He writes: ‘The kind of issue that I am trying to raise is whether it is conceivable that a human brain can, by the harnessing of appropriate "non-computable" physical laws, do "better", in some sense, than a Turing machine' (1989, p.172). Unfortunately Penrose muddies the issue by attempting to revitalise a famous argument (known to and dismissed by Turing) purporting to show, on the basis of Gödel's incompleteness results and various very general features of mathematical practice, that the mindbrain cannot be equivalent to a Turing machine.)

Turing’s negative result, or rather a part of it, can be stated as follows: there are sets that are nonsemidecidable.3 We will give a brief introduction to the terminology.

A set of sentences S of a language L is said to be decidable if and only if there is an algorithm that can be applied to each sentence of L and that will deliver either the answer 'Yes, this sentence is in S' or 'No, this sentence is not in S' (the answers the algorithm gives must, of course, always be correct). If there is such an algorithm it is called a decision procedure for S. S is said to be undecidable if there is no such algorithm. Any finite set is decidable (by means of an algorithm incorporating a ‘look-up table’ - an exhaustive list of the set's members). Formal logic provides examples of both decidable and undecidable infinite sets. The set of valid sentences of truth-functional logic is decidable. That the set of valid sentences of first-order quantificational logic is undecidable was first proved by Church (1936). However, this set is not completely intractable computationally. There is an algorithm (in fact many) that meets the following conditions.

1. Whenever the algorithm is applied to a valid sentence of first-order quantificational logic it will (given enough time) deliver the result ‘Yes, this sentence is valid’.

2. Whenever the algorithm is applied to a sentence of first-order quantificational logic that is not valid it will either deliver the result ‘No, this sentence is not valid’ or will deliver no answer at all (i.e. will carry on computing 'forever’).

An algorithm that meets these two conditions is called a semi decision procedure. If there is a semi decision procedure for a set S then S is said to be semidecidable.
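
The notion of a semi decision procedure can be illustrated in code. The sketch below uses a different standard example of our own choosing, the set of program/input pairs on which the program halts; for first-order validity the analogous procedure enumerates proofs until one is found.

```python
# A semi decision procedure for the set of (program, input) pairs
# on which the program halts -- a standard semidecidable set.
# Condition 1: applied to a member, it eventually answers 'Yes'.
# Condition 2: applied to a non-member, it delivers no answer at
# all, since the call below simply computes 'forever'.

def semi_decide_halts(program, data):
    program(data)                      # may never return
    return "Yes, this pair is in the set"

print(semi_decide_halts(lambda n: n + 1, 0))   # answers 'Yes'

def spin(n):                           # a non-member: loops forever
    while True:
        pass
# semi_decide_halts(spin, 0)           # would never answer
```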

A set for which there is not even a semi decision procedure is called nonsemidecidable. If S is nonsemidecidable then there is no algorithm that will say 'Yes, that’s a member' whenever it is applied to a member of S (and never when it is applied to a non-member). Turing established the existence of nonsemidecidable sets. Let TURING be the set of all binary sequences that Turing machines can, in principle, churn out (the set of computable binary sequences, in other words).4 TURING is nonsemidecidable. (The crux of the proof is to show that the complement of TURING is non-empty: to show, in other words, that there are binary sequences that are not computable, even by an infinite machine. Turing established this by means of a diagonal argument.) Second-order quantificational logic yields another example of nonsemidecidability. (Second-order quantifiers range over properties and relationships, as in 'Jules and Jim have some properties in common' and 'Every constitutional relationship that holds between the US President and Senate also holds between the British Prime Minister and the House of Commons'.) The set of valid sentences of second-order logic is known to be nonsemidecidable.
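
The diagonal argument mentioned above can be reconstructed schematically as follows (our gloss; Turing's own 1936 formulation differs in detail):

```latex
% The computable binary sequences are countable, so they can be
% listed as s_1, s_2, s_3, ..., where s_i(n) is the n-th digit
% of s_i. Define the diagonal sequence d by flipping each
% diagonal digit:
\[
  d(n) \;=\; 1 - s_n(n) \qquad (n = 1, 2, 3, \ldots).
\]
% For every i, d differs from s_i at the i-th place, so d is a
% binary sequence missing from the list: not every binary
% sequence is computable, i.e. the complement of TURING is
% non-empty.
```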

There is an intimate connection between the concept of a system’s being algorithmically calculable and the concept of a set’s being decidable or semidecidable. Let the members of a set Σ be descriptions of the input/state/output behaviour of a system S. That is, each member of Σ is of the form ‘If S is given input I while in internal state X then S settles into internal state Y and produces output O’. (O may be the null response.) Σ contains all sentences of this form that are true of S. If Σ is nonsemidecidable then S is not algorithmically calculable, since if Σ is nonsemidecidable there can be no algorithm that yields a correct description of S's output from a description of the input into the system and a description of the state of the system, for all possible inputs that produce output.

The nonsemidecidability of second-order logic guarantees the existence of abstract models of neuronal function that are not algorithmically calculable. In connectionist jargon, the activation function for a neural network is a description of the conditions under which any neuron in the network will fire. For example, in a typical connectionist network a neuron fires if the weighted sum of its inputs exceeds its threshold. We call a neural network second-order if the activation function for the network can be formulated only by means of second-order logic. That is, the activation function can be formulated only by means of quantifying over properties of neurons or relationships between neurons. A schematic example of such an activation function is: a neuron n fires if all relationships of a certain sort holding between the neurons in some cluster of interconnected neurons containing n satisfy some specified condition. It remains to be seen whether 'global' activation functions such as this one are physically realisable in a way that is consistent with what is already known about neural tissue. (The function is 'global' in the sense that the neuron needs to know more than what is happening just on its own doorstep in order to be able to tell whether or not to fire - that is, the neuron needs non-local, or global, information about the whole cluster.) Certain types of second-order neural networks are not algorithmically calculable.
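
The contrast between the usual local rule and a 'global' rule of this schematic kind can be put in code. The sketch below is a toy of our own devising; a finite program can of course only gesture at genuinely second-order quantification over relationships.

```python
# A toy contrast (our illustration) between a local, first-order
# activation rule and the schematic 'global' rule described above.

def fires_local(weights, inputs, threshold):
    """Typical connectionist rule: the neuron fires iff the
    weighted sum of its own inputs exceeds its threshold."""
    return sum(w * x for w, x in zip(weights, inputs)) > threshold

def fires_global(cluster, relations, condition):
    """Schematic 'global' rule: the neuron fires iff every
    relationship of the given sort holding between neurons in
    its whole cluster satisfies the condition. The rule ranges
    over the relationships themselves, mirroring the second-order
    formulation in the text."""
    return all(condition(R(a, b))
               for R in relations
               for a in cluster
               for b in cluster)

# Example: fire iff all pairwise activation differences are small.
cluster = [0.20, 0.30, 0.25]
print(fires_global(cluster,
                   [lambda a, b: abs(a - b)],
                   lambda r: r < 0.5))          # True
```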

As we have said, every finite set is decidable. To complete our argument for the conclusion that it is empirically possible that the human brain should turn out not to be equivalent to a Turing machine we must show that the (indisputable) fact that each brain is in some sense finite does not imply that each brain is algorithmically calculable. Let B be the set of descriptions of input/state/output behaviour for a brain. (The members of B are highly complicated sentences. Each records the total output (electrical, chemical, etc.) that the brain will produce in response to a particular total input received while the brain is in a particular (total) internal state.) It is an empirical question whether B is finite; moreover, the answer is not yet known. It is, of course, true that a brain can process only a finite number of inputs in its finite life. But it need not be the case that there are only a finite number of potential inputs from which the actual inputs that a given brain encounters are drawn. The potential inputs seem endless - in the same way that there are an endless number of English sentences. If the number of potential inputs is infinite then B is infinite, for B contains all input/state/output descriptions, potential as well as actual.

The same goes for states. Each brain can enter only a finite number of states in its life, yet the set of potential states from which this finite number is drawn may be infinite. (Analogously the electrical potential of a cell membrane can take only a finite number of values in the course of the cell’s finite life, yet there are indefinitely many possible values lying between the maximum and minimum values for that membrane from which the set of actual values is drawn.) ‘Thinking that P’ may be taken to be an example of an internal state, and there certainly seems to be no limit to the number of different things that a person could think. As Fodor puts it:

[T]he thoughts that one actually entertains in the course of a mental life comprise a relatively unsystematic subset drawn from a vastly larger variety of thoughts that one could have entertained had an occasion for them arisen. For example, it has probably never occurred to you before that no grass grows on kangaroos. But, once your attention is drawn to the point, it’s an idea that you are quite capable of entertaining, one which, in fact, you are probably inclined to endorse. (1985, p.89)

One might hope that even if the brain should turn out not to be algorithmically calculable, it is nevertheless ‘recursively approximable’ in the sense of Rose and Ullian (1963). A system is recursively approximable if and only if there is an algorithm that computes descriptions (not necessarily in real time) of the system’s input/output behaviour in such a way that after sufficiently many descriptions have been generated the proportion of incorrect descriptions never exceeds a prescribed figure. At present, however, this can be nothing more than a hope. There are uncountably many functions that are not recursively approximable (ibid, p.700). One sometimes sees it claimed that any physical device or process can be approximated by a Turing machine to any required degree of fineness; but this is true only for devices and processes that are either algorithmically calculable or recursively approximable.
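
One natural way to formalise the condition as just stated (our paraphrase; not Rose and Ullian's original formulation), where d_1, d_2, . . . are the descriptions in order of generation and ε is the prescribed figure:

```latex
\[
  \exists N \;\; \forall n \ge N : \quad
  \frac{\#\{\, i \le n : d_i \ \text{is incorrect} \,\}}{n}
  \;\le\; \varepsilon .
\]
```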

Not every algorithmically calculable system implements or 'follows' an algorithm. A standard example is a system consisting of a star with an orbiting planet: no one supposes that the planet runs through an algorithm in order to determine where to move next (Fodor 1975, p.74; Cummins 1989, p.91). The computational theory of mind hypothesises not only that the mindbrain is algorithmically calculable but also that the mindbrain implements an algorithm. Here we have been concerned to argue only that the first of these conjuncts expresses an empirical proposition. It is by no means easy to give an analysis of the concept of a system's implementing an algorithm in such a way that one secures the truth of the intuitive thought that there are systems which implement no algorithm. This difficulty has prompted some to suggest that it is, at best, trivially true that the mindbrain implements an algorithm (for example Searle 1992, chapter 9). These issues are discussed by Copeland in 'What Is Computation?' (forthcoming), where it is argued that the second of the above conjuncts is also an empirical proposition.

In one and the same article Turing provided the conceptual resources both for formulating the computational theory of mind and for understanding how it might contingently be false. Contemporary cognitive science is polarised around the issue of whether the algorithms of cognition are of a classical or connectionist nature. (Just as the heuristics used in symbolic AI 'bottom out' in algorithms, so too do the computations performed by connectionist networks. At the algorithmic level one finds, for example, the activation and propagation rules for the network.) Turing’s negative result highlights a third possibility: algorithms may not be much of the story of cognition at all - at best one of its minor characters, perhaps. The computational theory of mind is the hypothesis that this is not so. This hypothesis is a bold empirical one, not a near-trivial consequence of the Church-Turing thesis.

3 Turing’s Anticipation of Connectionism

Turing himself certainly embraced the computational theory of mind. He conjectured that ‘the cortex . . . [is] a universal machine or something like it’ (1948, p.16) and in a lecture given in 1947 he explicitly referred to the human brain as a 'digital computing machine' (1947, p.111).

When he spoke of digital computing machines he had in mind a range of architectures considerably wider than the class of (what we would now call) von Neumann machines and their near relatives. It is seldom realised that Turing was probably the first person to consider building computing machines out of simple, neuron-like components connected together into networks in a largely random manner. In a little-known report written in 1948 and entitled 'Intelligent Machinery' he described randomly connected networks of two-state neurons whose operation is synchronised by means of a digital clock. (For a detailed description of Turing's 1948 architecture see our article 'On Alan Turing's Anticipation of Connectionism'.) Turing called his networks ‘unorganised machines’. By the application of 'appropriate interference, mimicking education' an unorganised machine can be trained to 'do any required job, given sufficient time and provided the number of units is sufficient' (1948, pp.14-15). Turing theorised that ‘the cortex of the infant is an unorganised machine, which can be organised by suitable interfering training’ (1948, p.16). Turing found ‘this picture of the cortex as an unorganised machine . . . very satisfactory from the point of view of evolution and genetics’ (1948, pp.16-17). The idea that an initially random network can be organised to perform a specified task by means of what he calls 'interfering training' is undoubtedly the most significant aspect of Turing's discussion of unorganised machines. This idea is often attributed to Hebb (1949). Certainly Hebb's work was much more widely read, but priority lies with Turing.
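
By way of illustration only, such a network can be simulated in a few lines. The two-input NAND-like unit behaviour below is our simplifying assumption; the 1948 report, and the article cited above, give Turing's exact specification.

```python
import random

# A toy sketch in the spirit of Turing's unorganised machines:
# two-state units, connections chosen at random, every unit
# updated in step with a synchronising digital clock.

random.seed(0)
N = 16
state = [random.randint(0, 1) for _ in range(N)]
# each unit draws its two inputs from randomly chosen units
sources = [(random.randrange(N), random.randrange(N))
           for _ in range(N)]

def tick(state):
    """One beat of the clock: all units update simultaneously
    from the previous global state (assumed NAND behaviour)."""
    return [1 - (state[a] & state[b]) for (a, b) in sources]

for step in range(5):
    print(step, ''.join(map(str, state)))
    state = tick(state)
```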

As a result of his lukewarm interest in publication, Turing's work on neuron-like computation remained unknown to others working in the area. Rosenblatt - inventor of the perceptron and precursor of modern connectionism - seems not to have heard of Turing's unorganised machines (Rosenblatt 1962, esp. pp.5 and 12ff). Discussions of the history of connectionism by Rumelhart, McClelland et al. (1986) show no awareness of Turing's early contribution to the field (see for example pp.152, 424). It is to Leiber's credit that he draws attention to Turing's pioneering work (pp.117-18, 158).5

Turing had no doubts concerning the significance of his unorganised machines:

[M]achines of this character can behave in a very complicated manner when the number of units is large . . . [These] unorganised machines are of interest as being about the simplest model of a nervous system with random arrangement of neurons. It would therefore be of very great interest to find out something about their behaviour. (1948, p.10)

Turing himself was not able to carry out this investigation. It must be remembered that at the time he wrote these words the only electronic stored-program general-purpose computer in existence on either side of the Atlantic was a tiny pilot version of the Manchester Mark I. Turing's only resource for simulating the behaviour of a network was paper-and-pencil. It was to be many years before the computer simulation of anything but the most simple of networks was to become practicable. (Even on today's machines the simulation of a network of any great complexity is a slow business.) It is a tribute to Turing's remarkable farsightedness that even in those primitive days he was able to envisage the research program we now call connectionism:

I feel that more should be done on these lines. I would like to investigate other types of unorganised machines . . . When some electronic machines are in actual operation I hope that they will make this more feasible. It should be easy to make a model of any particular machine that one wishes to work on within such a UPCM [universal practical computing machine] instead of having to work with a paper machine as at present. If also one decided on quite definite ‘teaching policies’ these could also be programmed into the machine. One would then allow the whole system to run for an appreciable period, and then break in as a kind of ‘inspector of schools’ and see what progress had been made. (1948, pp.20-1)

4 The New Paradigm

For Leiber the difference between the old and the new paradigms consists in the shift from the view that we are embodied souls to the view that we are machines, amenable to straightforward scientific investigation and to being duplicated in artefacts. However, in each case what is being described is only part of a larger picture of the relation of the thinking thing, whether soul or mechanism, to other thinking things and to the world external to the thinking thing. These pictures may appear strikingly different, in particular because of the scientifically sophisticated and philosophically technical vocabularies in which the modern picture is set. There is, nevertheless, remarkable continuity between them, often unnoticed, and particularly so by Leiber. Viewed in this way, the old and new paradigms are as similar as they are different: the new paradigm is not so new after all. We shall point to a number of key similarities between the two paradigms.

The first similarity: the thinking thing

Both the old and the new paradigms have very narrow conceptions of the thinking thing. '[W]e propose to try and see what can be done with a "brain" which is more or less without a body providing, at most, organs of sight, speech, and hearing' wrote Turing (1948, p.13). Both the soul of the old paradigm and the computational mechanism of the new paradigm are logically independent of any particular physical realisation. (On certain views, the soul is causally independent of any physical existence whatsoever.) For both paradigms the thinking subject may be linked to its sources of information by only a single connection, in Descartes’ case the pineal gland and in the computational case a single input stream. ('Turing showed that multidimensional, or multichannel, inputs always can be reduced to much, much longer one-dimensional input' (p.55). The underlying point is that any multi-tape Turing machine is equivalent to a single-tape machine (see Minsky 1967, pp.129-130).) Those who take their goal to be the production of programs that think and understand envisage the construction of a res cogitans even narrower in conception than Descartes': a thing not only whose essence is to think (that is, compute) but whose single activity is to think.
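
The reduction mentioned parenthetically above is easily illustrated (a toy sketch of our own): several input channels can be interleaved into a single, longer one-dimensional stream from which each channel remains recoverable.

```python
# Three input channels merged into one one-dimensional stream.
channels = [[1, 0, 1], [0, 0, 1], [1, 1, 0]]
merged = [bit for step in zip(*channels) for bit in step]
assert merged == [1, 0, 1, 0, 0, 1, 1, 1, 0]

# Each original channel is recoverable from the single stream.
recovered = [merged[i::len(channels)] for i in range(len(channels))]
assert recovered == channels
```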

For both paradigms the narrow conception of the thinking thing creates problems. For the old paradigm the problems are those of giving identity conditions for the thinking thing and of rendering intelligible interaction between the immaterial and the material. For the new paradigm the problem is that of applying psychological predicates to what Dennett has described as ‘bedridden’ programs, which do not satisfy presuppositions on the use of those predicates.6

The second similarity: the test of mind

Both paradigms address the question of what justification we have for asserting the existence of other thinking things. The proponent of the old paradigm seeks a justification for the claim, made of any physical object other than the speaker's body, that it contains a soul. This arose out of a profound worry: since I cannot be any other thinking thing, how can I know that there are such? The proponent of the new paradigm seeks a ground upon which to decide whether certain machines are thinking things. Even given a materialist theory of mind, some justification is obviously required here: what for Turing is ‘the polite convention that everybody thinks’ cannot unquestioningly be extended to artefacts (Turing 1950, p. 446). Both paradigms provide a solution. The old paradigm uses the Argument from Analogy: roughly, if X displays the behaviour which in me is explained by mental phenomena, then I can conclude that X is a thinking thing. The new paradigm has the Turing Test: we are entitled to say that the computer is a thinking thing if, in certain circumstances, it successfully imitates a human being (ibid, pp. 433-5). (Leiber claims that the Turing Test has a wider applicability: in his view, to decide whether or not any human being other than myself is a thinking thing I must use the Turing Test (p. 116).) Both solutions are presented with the same rhetorical flourish, for the benefit of the sceptic: if this doesn't satisfy you as to the existence of a thinking thing, then what would?

The solutions of the old and new paradigms are commonly held to be very different. The old paradigm infers from the candidate thinking thing's behaviour the presence of some additional factor, namely mental phenomena, which justifies the label 'thinking thing'. On the standard interpretation of the Turing Test, the new paradigm offers a behaviourist definition of a thinking thing: behaviour indistinguishable across some specified range from that of a thinker justifies the label 'thinking thing'.7 This difference frees the new paradigm of the traditional worry. However, the standard interpretation of the Turing Test has difficulties. Why, after all, should we accept this definition? Turing's only explicit justification is that doing so gives us a practicable test (Turing 1950, pp. 434, 442). This is insufficient: there must be some other, implicit justification. This may be an appeal to a general principle of the form: behaviour indistinguishable across some specified range from that of an X justifies the label 'X'. However, not only is this principle false, the 'imitation game' with which Turing introduces and illustrates the Turing Test constitutes a counter-example to it. In its first appearance, the Turing Test requires a man to impersonate a woman (ibid, pp. 433-5), yet it is plainly false that behaviour indistinguishable across the range specified in the Turing Test from that of a woman justifies the label 'woman'. Either Turing had no such principle in mind in setting up the Test or, as his biographer concludes, his introduction and illustration of the Test is extremely confused (Hodges 1983, p. 415).

The justification of the definition may rest on the particular features of the Test when applied to machines, that is on those particular respects in which the candidate successfully imitates a certified thinker, for example the candidate's ability to argue, quickness of response, and command of language. Turing's claim would then in effect be: X is a thinking thing if X successfully imitates a thinking thing in some particular respects. However, on this interpretation of the Turing Test, what is of central importance in Turing's exposition, the requirement that the candidate impersonate a certified thinker, becomes of only secondary importance. What is required is that the candidate display qualities equal to those of a certified thinker: the impersonation requirement is only a rather clumsy and unnecessary means of assessing whether the candidate possesses these qualities. Moreover, on this account, the example Turing uses to introduce the Test is entirely irrelevant - a 'red herring', as Hodges describes it (Hodges 1983, p. 415). This is, then, an implausible interpretation.

If we begin, as Turing does, with the man's successful impersonation of a woman, what significance can it have to a test designed to show that a machine can think? Here is one possibility, briefly and illuminatingly suggested by Leiber (p. 110). The man who manages to pass himself off as a woman does so only because he has managed to think like a woman. The principle supposed is: successful impersonation by X of Y's conversation requires that X be able to think like Y. In the case of artefact and human being, this yields: successful impersonation by a machine of a human being's conversation requires that the machine be able to think like a human being and, a fortiori, that it be a thinking thing. This interpretation explains both the relevance of the example Turing uses to introduce the Test and his emphasis throughout upon imitation and dissembling.

This interpretation has the consequence (unnoticed by Leiber) of bringing the solutions of the old and new paradigms strikingly close: both take behaviour akin to that of a certified thinker to imply that which explains the behaviour, namely mental phenomena. On this interpretation - the only one to fit Turing's text - the Turing Test does not, after all, rest on a behaviourist definition of intelligence but rather on an inference identical to that employed in the traditional Argument from Analogy. In consequence, criticisms of the Test that are criticisms only of behaviourism are unsatisfactory. However, the alleged counter-examples to the Test remain, for example the ELIZA, PARRY and SUPERPARRY programs (Bobrow 1968, Longuet-Higgins 1972, Heiser et al. 1980, Colby 1981, Copeland 1993, ch. 3). On the interpretation of the Turing Test given here, these, if effective, are counter-examples to the principle that successful impersonation of X's conversation requires being able to think like X. As such, they are modern counter-examples to the thought expressed in the traditional Argument from Analogy.

To distance herself from the old paradigm, the cognitive scientist must both abstract the Turing Test from Turing's exposition and provide some convincing alternative justification of the Test. This is a tall order.

The third similarity: perception

For both paradigms the thinking thing’s perception of a world of external objects is typically mediated by internal representations: in the one case ideas and impressions, sense data and the like, and in the other the neurally instantiated representations that constitute the initial, intermediate and terminal states of computations. For both paradigms the assertion of the existence of internal representations is an empirical claim, supported by, for example, the existence of perceptual illusions (both paradigms) or of cortical maps (the new paradigm).

This gives rise to a further similarity. The proponents of both paradigms accept that difficulties arise here for the thinking thing's knowledge of the nature or even existence of the external world. Contemporary science can then be used to generate sceptical arguments of the traditional sort. (Examples are the 'brain in the vat' and Leiber's suggestion that neurological studies of REM sleep may show waking and dream experience to be physiologically alike and so encourage the Cartesian sceptical hypothesis (pp. 5,155).)

The fourth similarity: meaning

For both paradigms, for a thinking thing to understand expressions, uttered or written, it must decode them into some kind of internal representation. Candidates offered include ideas, quasi-linguistic symbol-structures, distributed representations with no compositional semantics and analogue mental models. Thus the modern representationalist hypothesis is a variant on a 17th century account of language. For both the 17th century and the modern theorist the same question arises: what makes the representations, external or internal, meaningful? The 17th century theorist claimed: external representations stand proxy for ideas (internal representations). Wittgenstein argued that this initiated a regress and that it had the appearance of a solution only in virtue of the unclarity in the concept of an idea (PI § 28-32, 339; BB pp. 3-6). Knowingly or unknowingly, cognitive scientists typically adopt Wittgenstein’s account of meaning as use (PI § 1-36, 108, 454; PG 29) to avoid the regress objection: the meanings of the internal representations are not additional entities, whose meaningfulness then has to be explained, but are to be cashed out in terms of the functions that the neurologically instantiated representations perform.

The modern representationalist hypothesis is thus a combination of pre- and post-Wittgensteinian ideas. Nevertheless it retains the 17th century view that mental representations are at the core of cognition and that the intentionality of external language is derivative, acquired only via the relation of external language to mental representations.

For both the 17th century and the modern representationalist hypotheses one thinker successfully communicates a thought to another when the communicator encodes an internal representation A into an external representation which is decoded by the recipient into an internal representation similar in the relevant respects to A. (For the modern view this may require either functional or structural identity, and for the old paradigm even, per impossibile, numerical identity.) In both accounts communication by this means appears clumsy: the ideal of communication is the making directly available of the internal representation. In cases where the making directly available of the internal representation is in principle (as in the case of ideas) or in practice (as in the case of brain states) impossible, epistemological difficulties arise. What assurance do we have that the internal representations of communicator and recipient agree? Both paradigms typically turn to nature in response: the traditional assurance that nature (or God) just has made us such that we do form similar representations is renewed in cognitive science's emphasis upon native ‘hard wiring’ (i.e. the innate components in cognition).

Wittgenstein and the new paradigm

Leiber takes Wittgenstein to have attacked, not just the conception of the embodied soul, but also part at least of the old paradigm's view of the relation of the soul to the world, since he believes Wittgenstein to have demolished the phenomenalism of Descartes, Locke and Russell by showing that traditional notions of sense data and of physical objects as constructions from sense data are hopelessly unclear (pp. 73,80). However, Leiber does not appear to find in Wittgenstein’s later philosophy any implications for the new paradigm. He does not mention Wittgenstein's attacks upon features of the picture shared by both old and new paradigm.

Wittgenstein's arguments amount to a wholesale dismissal of this picture. First, he rejected the narrow conception of the thinking thing. In his view, the criteria for mental states are tied to contingent features of the behaviour and appearance of human beings ('Consciousness is as clear in his face and behaviour, as in myself.' Z § 221; see also Z § 594; PG 128-9; RPP vol I §267, 280-81). In addition, it is necessarily true of all thinkers that they perform actions other than thinking (RPP vol I §563).

Second, he disputed the need to justify the claim that there are other minds. In his view, it is a grammatical fact that human beings, and only human beings and what resembles human beings, have minds (PI § 281-4, Z § 117). If it is a grammatical fact, then its denial is unimaginable (Z § 320, PR §1-4) and it is 'unshakably certain' (RFM II 39; III 4). We thus do not need to justify the claim that other human beings have minds and we cannot justify the claim that machines (which, Wittgenstein argues, do not resemble human beings in any of the ways that matter) have minds. In addition, Wittgenstein explains understanding and thinking as skills, the criteria for which are behavioural (PI § 157; PG § 26, 44; RPP vol I § 302; RPP vol II § 209). Consequently, the judgement made of a particular human being that she has a mind, i.e. understands or thinks, can be made without difficulty.

Third, with respect at least to vision, Wittgenstein denied that perception is mediated. In his view, the empirical hypothesis that visual perception is mediated by internal representations is quite at odds with the logic of perception. The mistaken contrary belief, in Wittgenstein's view, arises both because we do not separate seeing from the physiology of seeing and because we conflate seeing and imaging (RPP vol II § 75, 87-9, 98-9, 108-9; Z § 621-38). Moreover, we falsely believe that postulating internal representations can be helpful in solving philosophical problems concerning perception (Z § 614, RPP vol I § 1012).

Lastly, Wittgenstein denied that the intentionality of language is derivative (PG § 2, 3; PI § 339). He denied that meaning and understanding are mental processes or activities, whether immaterial or material processes, and whether or not involving internal representations (RPP vol I § 171-2, 180-81; RPP vol II § 193; Z § 605-613; PG § 35, 64-65; PI pp. 217-8). The objections to internal representations in perception arise again here: postulating such processes is not explanatory and the logic of the concept of meaning, understanding or thinking is not that of a process (PG § 38-42; RPP vol II § 50-57, 265-6). In consequence, any internal process we discover is irrelevant to the analysis of meaning, understanding or thinking (PI § 153-178, 316-344; PG § 6, 33; RPP vol I § 96-97; RPP vol II § 238, 250-251).

In order for Leiber to claim Wittgenstein as foreshadowing the new paradigm, he must believe, implausibly, that those features which the two paradigms share are the proper target of Wittgenstein's criticisms only when accompanied, as in the old paradigm, by what Wittgenstein referred to as an 'occult' metaphysics and by an introspectionist epistemology. It is impossible here to give a comprehensive discussion of Wittgenstein's criticisms. As a test case, however, we can investigate one of his arguments against the 17th century model of language, an objection which attacks not the metaphysics but the logic of the model, to see if it holds also against the modern representationalist hypothesis.

The regress objection is as follows: if for the thinking thing to understand the external representation it must decode it into an internal representation, then how is it able to understand the internal representation? Do we not need a second decoding, and so on? (RPP vol I § 677) If so, then by positing an internal representation we have initiated a regress and offered no account of understanding (BB pp. 3-5; PG § 105). The traditional answer of the old paradigm is that the internal representation, unlike the outer, is intrinsically meaningful. The regress is stopped by arriving at some thing which of its nature is meaningful. Traditionally this is the mental image, the intrinsic meaningfulness of which is supposedly secured by its natural resemblance to its object. But since resemblance is not sufficient for representation, the mental image cannot on this ground be intrinsically meaningful. (Moreover, on Wittgenstein’s account of meaning as use no object is intrinsically meaningful.) If, in place of the traditional answer, we say that the thinking thing just does find the internal but not the external representation genuinely significant, we offer no more of an account of meaning than if we had said the thinking thing just does understand the external representation. So we might as well, Wittgenstein concludes, have said the latter (PG § 104), especially since it offers the simplest account.

The same objection can be made against the cognitive scientist. The cognitive scientist, like Wittgenstein, typically offers a functionalist account of meaning. Wittgenstein argues that for a spoken or written word to have meaning is for that symbol to have a function or role in a language-game (PI § 432,120). The modern proponent of the representationalist hypothesis appears to regard this account as insufficient. She claims that for the thinking thing to understand the external representation it must decode it into an internal representation, the meaning of which is the function of that neurologically-instantiated representation within the 'internal cognitive economy' of the thinking thing. Now, either meaning can be explained by reference to function or it cannot. If it can, Wittgenstein appears to have explained it. If it cannot, the modern representationalist hypothesis fails. Either way, there is no room for the modern representationalist hypothesis.

To avoid this dilemma, the proponent of the representationalist hypothesis must claim that although meaning is a matter of function, only a functionalist analysis of a certain sort, one dealing with internal representations, will do. But why? What can be the point of adding an extra layer to the account of language in order merely to give the same sort of explanation? If Wittgenstein can explain the link between sounds or marks and meaning by showing the function of the former in language-games, then the addition of information concerning neurological structures is redundant in an explanation of meaning. The conceptual gap between mere sounds or marks and meaning is already closed: we do not need the representationalist hypothesis to close it for us.

The modern proponent of the representationalist hypothesis appears either to have motives similar to those of the 17th century philosopher, i.e. an unexamined preference for the inner rather than the outer in explanation (RPP vol II § 643) - a typical example of what for Wittgenstein is the widespread prejudice against the surveyable (PI § 92, 129; Z § 313-4) - or to assume that the only genuine explanation is scientific as opposed to philosophical explanation. If the former is true, then the modern representationalist runs risks similar to those of the 17th century philosopher: her account is either infinitely regressive or arbitrary. If the latter is true, then Wittgenstein's account here under consideration simply constitutes a counter-example to the assumption that the only genuine explanation is scientific explanation.

We conclude that the regress objection is an example of an objection made by Wittgenstein to a central element in the account of the thinking thing shared by both old and new paradigms which the latter's materialist stance does not enable it to avoid. Leiber's implicit assumption that Wittgenstein's arguments work only against particular metaphysical and epistemological features of the old paradigm is false. For Leiber, Wittgenstein’s role as harrower of the old paradigm is now of little more than historical interest. The similarities between the old and the new paradigms show such confidence is misplaced.

5 Wittgenstein: Cognitive Scientist in Disguise?

Wittgenstein and the computational model of the mind

Leiber claims that Wittgenstein’s account of meaning and understanding leads naturally to a computational view of mind (pp. 67,68). For Leiber Wittgenstein is a pivotal figure: the philosopher who switched paradigms and brilliantly anticipated the computational model of the mind (p. 109). Leiber bases this claim upon Wittgenstein's thesis that the meaning of an expression is its role in a language-game, which he interprets as the thesis that the meaning of an expression is its place in a formalised procedure (pp. 77-78). He gives as an example the language-game in Philosophical Investigations §1, where a grocer is given a request for five red apples. According to Leiber, it is Wittgenstein’s view that the meanings of the words 'five', 'red' and 'apple' just are their roles in 'step by step physical procedures' (p. 66). The grocer’s understanding of the utterance ‘five red apples’ consists in his mastery of a technique, namely the simpler competences of fetching and matching. On hearing the request, the grocer opens the drawer marked 'apples', looks the word ‘red’ up in a colour chart to see the sample correlated with it, says the series of cardinal numbers up to the word 'five' and for each number takes an apple of the same colour as the sample from the drawer. Leiber assumes that the grocer here performs no procedure which a computer could not perform and concludes from this that Wittgenstein’s account of meaning and understanding anticipates the computational model of the mind.

The interest of this argument is that it is at odds with the standard view of Wittgenstein, which is that he is opposed to the computational model of the mind, since - as we have already seen - he is opposed to the representationalist hypothesis. To make his case, Leiber will have to show at least the following: first, that the step by step procedures involved in the language game he describes are in fact algorithmic and so can be carried out by a computer; second, that competence in other language-games is also the performance of such procedures; and, third, that in Wittgenstein’s view the computational simulation of the grocer’s behaviour would duplicate the grocer’s cognitive competence.

There are difficulties here for Leiber. With regard to the first requirement, a step by step physical procedure need not be algorithmic. We might not be able to analyse what is a step by step procedure for the grocer into simpler competences, all of which can be simulated by a computer. With regard to the second requirement, Leiber attempts to meet it by interpreting Wittgenstein's claim that language-games are part of our natural history as the claim that we have a natural history of formalising (p. 68): all language-games involve formalised procedures. However, there seems no more reason to think that other language-games involve formalised procedures, if by that is meant algorithmic procedures, than there is in the example Leiber discusses. For Wittgenstein what appear to be cognitive states or processes are in fact skills or dispositions: to have a mind is to possess certain cognitive skills (PG § 11; RPP vol II § 45). We have a natural history of instilling these skills by means of step-by-step procedures and of checking competence by means of such procedures. But, again, why assume these are formalised procedures in the sense of those which a computer can perform?

With regard to the third requirement, there is a clear case, based on Wittgenstein’s account of rule-following, for saying that Wittgenstein would not readily take the computer simulation to duplicate the grocer’s cognitive competence. Leiber's claim is, in effect, that on Wittgenstein's account there is no more to understanding than the ability to follow rules. However, even given this, there is the question of whether, in simulating the grocer's behaviour, the computer is following a rule, and thus whether the computer understands. Wittgenstein distinguishes between, on the one hand, merely acting in accordance with a rule and, on the other hand, following a rule (PI § 200). We can imagine two human subjects who both receive the request for five red apples. Both subjects respond in exactly the way Leiber describes. According to Wittgenstein, we can imagine that one subject is only acting in accordance with the rules of the language-game, while the other is a competent player and is following the rules of the language-game. In Wittgenstein’s view only the latter understands the request. So in Wittgenstein’s view the response Leiber describes, and the computer simulation of this response, is insufficient to justify the claim that the subject understands.

On Wittgenstein’s account of rule-following, the subject who acts in accordance with a rule is a genuine rule-follower only if her actions take place within a rule-following society and within a particular custom (PI § 198-200); her actions are the result of a prescriptive training process (PI § 143-5, 206; Z § 319; RFM I. 4); and she makes normative claims concerning her actions and those of others (PI § 231, 238). In consequence, if Leiber is to argue that Wittgenstein anticipates the computational model of the mind Leiber must either give us reason to reject Wittgenstein's account of the distinction between genuine and apparent rule-following or show that the computer simulation satisfies the conditions Wittgenstein provides. Attempting either move, however, would be fruitless, given Leiber's failure to meet the previous two requirements.

Wittgenstein and a scientific theory of the mind

Even if Leiber is wrong to assert that Wittgenstein gives us a computational model of the mind, Wittgenstein may still be committed, if unwittingly, to a scientific approach to the mind. Leiber's assertion that Wittgenstein is so committed is based upon two claims. The first is that Wittgenstein demonstrated the need for some account of the mind other than the commonsensical one. According to Leiber, Wittgenstein exposed 'the anomalies of everyday cognitive life', quirky psychological phenomena now the subject-matter of cognitive psychology (p. 63). In particular he strikingly pointed out the inadequacy of any naive, commonsensical account of language-acquisition (p. 130) and anticipated what Chomsky was later to call 'the poverty of the stimulus argument' (p. 160). Leiber’s second claim is that Wittgenstein’s explanations of cognition are themselves causal. The implication is that they are in consequence proto-scientific. They end with the fact that we just do behave naturally in certain ways. So Wittgenstein, like Leiber, is committed to the view that puzzling cognitive and linguistic behaviour is the result of 'nature . . . beyond clear view' (p. 97).

Leiber bases the first of these claims in the main upon the passages in the Investigations dealing with ostensive definition, private language and rule-following, and to some extent upon the discussion of psychological phenomena such as aspect-perception. However, these passages do not in fact support Leiber’s view. In the case of the first two topics, Wittgenstein's explicit target is not an everyday account of the mind or of language acquisition but the 'Augustinian' view that the meaning of an expression is the object for which it stands and that language-learning is acquiring knowledge of this association (PI § 1-38). (Leiber is aware that this is Wittgenstein's target (p. 66) but takes the Augustinian view to be a commonsense view (p. 83), which for Wittgenstein it was not.) In the case of rule-following, for Wittgenstein the rule-follower's activity becomes anomalous only in the context of the assumption - which is philosophical, not commonsensical - that the genuine rule-follower is in possession of an interpretation. Similarly, Wittgenstein's discussion of aspect-perception is intended to create puzzles primarily for the philosophical thesis that vision is mediated by representations (PI pp. 193-214).

To justify his first claim, Leiber must insist that Wittgenstein exposed difficulties, not just for particular philosophical accounts, but for any attempt to understand the mind that is confined to what Leiber calls the 'level of ordinary lived experience'. To do so Leiber takes the influential view that the discussions of rule-following and of private language demonstrate the radical indeterminacy of any language, public or private (pp. 78,80,81). However, this interpretation of the Investigations - of either Wittgenstein's intentions or the unintended implications of his views - is highly problematic. The critical debates here are well-known. In addition to the localised exegetical questions this reading of Wittgenstein must answer, Leiber has the difficulty of explaining away Wittgenstein's insistence that, until distorted by philosophical theory, our own accounts of ourselves are in good order (PI § 124).

According to Leiber, Wittgenstein, having demonstrated the inadequacy of the old picture of the mind, was unable to offer more in explanation of the anomalies he exposed than the observation that that is just how we are (p. 67). This, says Leiber, is true but insufficient (pp. 91,92): to explain the oddities of cognition a scientific account of the physical processes underlying how we are is required (pp. 58,82,84,85,94,97). For Leiber, Wittgenstein was simply uninterested in the further investigation of the natural phenomena he drew to our attention (pp. 65,82-3,87,160,161). Moreover, in the context in which Wittgenstein is attempting to explain cognition, the ordinary lived world, accounts of neurological processes are usually unavailable and therefore are for practical purposes redundant (pp. 67,69,79). Leiber implies that none of this alters the fact that Wittgenstein’s explanations are causal and therefore proto-scientific. In Leiber's view, the cognitive scientist simply pursues the inquiry at the point Wittgenstein gives it up.

This, Leiber’s second claim, faces the following difficulty. It is not merely that Wittgenstein was uninterested in natural science: in his discussions of language, of understanding and thinking, and of rule-following, he consistently argues against the explanatory power of inner processes or states, whether of a dubious soul-stuff or of a material nature. This excludes precisely the kind of explanation which, Leiber claims, the Investigations foreshadowed. Leiber says too little about Wittgenstein’s explicit rejection of scientific explanations of mind and language.

Wittgenstein’s attitude towards scientific explanation of the mind can be stated in two distinct theses. The first is: no scientific solution to traditional problems in the philosophy of mind is possible. According to Wittgenstein, these problems are chimeras (PR § 159) arising from faulty methodology (PR § 150). Philosophical theories typically conflate different language-games and so produce conceptual error (PI § 116-7) under the guise of ‘correcting’ ordinary language (PI § 402). Consequently, for Wittgenstein explanation consists, in part, in the distinguishing of different language-games, in the process dismantling traditional metaphysical and epistemological pictures. This is Wittgenstein’s dissolutionist strategy ('While thinking philosophically we see problems in places where there are none. It is for philosophy to show that there are no problems.' PG § 9; see also PI § 90, 118-9, 133). Only philosophy, and not science, can perform this task. Leiber notes this attitude of Wittgenstein's (pp. 61,65) but misses its significance: this first thesis is compatible with a commitment to a scientific theory of mind only so long as the latter is not an attempt to provide solutions to traditional epistemological and metaphysical problems of mind. This first thesis considerably cramps the style of the philosopher cognitive scientist.

Wittgenstein’s second thesis is more of a problem for Leiber’s account. It is: since the question ‘What is the mind?’ arises out of ordinary discourse about the mind, it cannot be answered by science. For Wittgenstein, ordinary language questions such as ‘What is the mind?’ express conceptual confusion ('When such an obstinate problem makes its appearance in psychology, it is never a question about facts of experience (such a problem is always much more tractable) but a logical, and hence properly a grammatical question.' Z § 590; see also PG § 141). Such confusion is the result of ignorance of the language-game being played (RPP vol I § 549; PI § 123,125). The remedy is conceptual clarification, i.e. the making explicit of the language-game (PI § 109, 122, p. 203; RFM IV § 52). The constructive part of Wittgensteinian explanation is, therefore, the provision of detailed description of the language-games generating our concepts of the mental. This will be little like investigation in the natural sciences, since it will consist in making clearer what in some sense we already know (Z § 255; PI § 109, 127; RFM I § 141). Again science cannot perform this task (PG § 63). Scientific explanations of cognitive processes do nothing to remedy conceptual confusion and so taking the question ‘What is the mind?’ as a question science can answer leads us nowhere (RPP vol I § 1039, 1063). This is the rejection, not of science, but of scientism in philosophy (PG § 72).

This second thesis, then, excludes the possibility of a scientific answer to the question 'What is the mind?'. Leiber fails to notice this. His interpretation commits Wittgenstein to the view that everyday psychological concepts are less adequate in the theoretical description of the mind than the concepts of neuroscience (p. 85). Yet Wittgenstein claims that everyday discourse about cognitive processes does not constitute a scientific theory of cognition, not even an exceptionally primitive one:

'[N]aive language', that's to say our naif, normal way of expressing ourselves, does not contain any theory of seeing - it shews you, not any theory, but only a concept of seeing. (RPP vol II § 1101)

A fortiori, ordinary discourse does not constitute a theory which can be falsified: discoveries in neuroscience cannot disprove 'folk' psychology.8

Of course, Wittgenstein's explicit rejection of scientific explanation does not by itself disprove Leiber's claim that Wittgenstein was committed to a scientific theory of mind, for Wittgenstein may simply be inconsistent: it may be true both that he is committed to the scientific approach to the mind and that he rejects it. This is not so, however. In Wittgenstein's view we answer the question 'What is the mind?' only by describing language-games (as stated above). The scientific investigation of biological, psychological or neurological mechanisms is obviously no part of the logical enquiry which constitutes the description of language-games. Consequently, although the end-point of such description is the observation that this is how we naturally act (RFM I § 63; PI § 217, 654, p. 226; Z § 309, 419), Wittgenstein cannot be committed to a scientific approach to the mind. Leiber's interpretation is false.

None of this excludes the possibility that neuroscientific investigation will provide a causal explanation of how we are able to participate in language-games and to become competent speakers and rule-followers through a process of training. (Indeed, in RPP vol II § 128 Wittgenstein expresses wonder at the fact of language acquisition.) The distinction Wittgenstein makes here between conceptual and causal explanation is analogous to the distinction between 'what' and 'how' questions used by many cognitive scientists. For example:

What makes a processor primitive? One answer is that for primitive processors, the question 'How does the processor work?' is not a question for cognitive science to answer. . . The cognitive scientist can say, 'That question belongs in another discipline, electronic circuit theory.' We must distinguish the question of how something works from the question of what it does. The question of what a primitive processor does is part of cognitive science, but the question of how it does it is not. . . [A primitive processor's] operation must be explained . . . rather in terms of electronics or mechanics or some other realisation science. (Block 1990, pp. 257-9)

For Wittgenstein the primitive component - the ‘endpoint’ - in the explanation of, e.g., meaning and understanding is the natural behaviour of the human being at the level of ordinary lived experience (RPP vol I § 630).9 Clarifying (or at least emphasising) the role of natural human behaviour in creating the forms of life and language-games generating our ordinary concept of mind is a matter for philosophy. The investigation of the causal mechanisms underlying such behaviour is not. Cognitive science, in so far as it is the further investigation of what for Wittgenstein is a primitive component, is nothing more than a ‘realisation science’. This is far from the role currently assigned to it by many philosophers of mind.
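
Block's distinction can be made concrete with a toy example. The following is a minimal sketch of our own (in Python; nothing in it is drawn from Block or Leiber): a primitive processor is exhaustively characterised at the 'what' level by its input-output behaviour, while the question of how any physical device realises that behaviour is passed to a realisation science.

    def nand(a, b):
        # A primitive processor, specified only at the level of what it
        # does. How a physical device realises this mapping (transistors,
        # relays, neurons) is a question for a realisation science, not
        # for the level of description at which the processor is primitive.
        return 0 if (a == 1 and b == 1) else 1

    # The 'what' level: the processor's complete truth table.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, '->', nand(a, b))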

We conclude that, contrary to Leiber's view, Wittgenstein is no cognitive scientist, even one in heavy disguise. The radicalism of Wittgenstein's philosophy is that he is neither a mentalist nor a behaviourist, neither a proponent of Leiber's 'old paradigm' nor a cognitive scientist. This falsifies a widespread and deeply-rooted assumption that the old and new paradigms present a genuine dichotomy.

NOTES

1. Abbreviations for Wittgenstein's works are as follows: BB: The Blue and Brown Books; CV: Culture and Value; PI: Philosophical Investigations; PG: Philosophical Grammar; PR: Philosophical Remarks; RFM: Remarks on the Foundations of Mathematics; RPP: Remarks on the Philosophy of Psychology; Z: Zettel.

2. Leiber’s book contains a number of small factual errors concerning Turing. Turing did not describe ‘the absolutely minimal Universal Machine’ (contra Leiber p.57). The smallest known universal Turing machine was described by Minsky (1967, pp.277-80). Turing did not play ‘a major part in the development of one of the first electronic digital computers, Colossus’ (contra p.46). He took little or no part in building the Colossus. He declined an invitation to join the project, and was in fact away on a visit to the United States during the period when the crucial technological advances were made (see Hodges 1983, pp.267-8). Leiber’s statement that ‘Turing played a significant role in the first two British postwar computer construction projects’ is misleading (p.107; see also p.46). Turing was involved with the National Physical Laboratory project and the Manchester University/Ferranti project, but not with the Cambridge EDSAC/LEO project. The EDSAC ran its first program in 1949, a year before the NPL machine did. It was not Turing who named the NPL machine the ACE (Automatic Computing Engine) (contra Leiber p.107). This was Womersley (Hodges 1983, p.317). The title of Turing’s 1950 article is incorrectly shown as ‘Computer Machinery and Intelligence’ (p.30). ‘On Computable Numbers’ is referred to as ‘Turing’s 1937 paper’ (pp.52 and 157). Leiber is not alone in dating this paper wrongly (Hodges, among others, makes the same mistake (1983, p.545)). The paper was received by the London Mathematical Society in May 1936 and was read in November 1936. It appeared in the 1936-7 volume of the Proceedings. (Turing published a correction to the paper in the next volume (1937). He there refers to the original paper as having appeared in volume '42 (1936-7)'.) The volume came out in parts, and the part containing Turing's paper appeared in 1936 (as may be ascertained from any library that was in the habit of stamping each individual part of a volume with its date of acquisition).

3. Turing’s overall result, now known as the Halting Theorem, is more specific: the set of satisfactory sequences is undecidable, where a sequence is satisfactory if and only if it constitutes an encoding of the machine table and initial tape contents of a Turing machine that eventually halts (i.e. whose head eventually comes to rest, the computation completed).
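
The reasoning behind the theorem can be rehearsed informally. The following sketch (in Python, with every name hypothetical) is the standard diagonal argument rather than Turing's own construction; it shows why no general decision procedure for halting can exist.

    # Suppose, for contradiction, that halts(program, argument) always
    # returned True or False according to whether program(argument)
    # eventually halts.
    def halts(program, argument):
        ...  # hypothetical decider; the argument shows none can exist

    def diagonal(program):
        # Halts exactly when 'program', run on its own text, does not.
        if halts(program, program):
            while True:
                pass  # loop forever
        else:
            return  # halt at once

    # diagonal(diagonal) halts if and only if it does not halt - a
    # contradiction. Hence no such decider exists, and the set of
    # satisfactory sequences is undecidable.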

4. We ignore Turing machines that do not use the binary alphabet since these can be simulated by machines that do use it.
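
The simulation is routine: each symbol of a k-letter alphabet is encoded as a fixed-width block of binary digits, and the simulating machine reads and writes one block at a time. A minimal sketch of the encoding step (illustrative only):

    from math import ceil, log2

    def binary_encoding(alphabet):
        # Assign each symbol of a k-letter alphabet a binary code word
        # of fixed width ceil(log2(k)).
        width = max(1, ceil(log2(len(alphabet))))
        return {s: format(i, '0%db' % width) for i, s in enumerate(alphabet)}

    print(binary_encoding(['blank', 'a', 'b', 'c']))
    # {'blank': '00', 'a': '01', 'b': '10', 'c': '11'}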

5. Leiber is wrong to suggest that Turing made use of weighted connections (p.118). Turing does not seem to have considered the possibility of doing so. His approach to storing information in a network is very different from the one followed almost universally today.
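
For contrast, a minimal sketch of the scheme followed almost universally today, in which what a network stores resides in real-valued connection weights adjusted by training (illustrative only; nothing here is drawn from Turing's 1948 report):

    def weighted_unit(inputs, weights, threshold):
        # A unit of the modern kind: information is stored in the
        # numerical weights, which training adjusts. Turing's 1948
        # networks store information in a different way (see above).
        activation = sum(x * w for x, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    print(weighted_unit([1, 0, 1], [0.5, -0.3, 0.8], 1.0))  # prints 1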

6. Wittgenstein reminds us that, in most ways other than an assumed similarity in internal processing, machines and human beings are remarkably dissimilar. (A Wittgensteinian would say that, even given this assumed similarity, it is the cognitive scientist’s prejudice for the inner rather than the outer which leads her to think of computers and human beings as relevantly alike.) In Wittgenstein’s view ‘[w]e only say of a human being and what is like one that it thinks!’ (PI § 360). Neither bodies nor parts of bodies, including brains, resemble human beings; as a result, we cannot say of a body that it thinks (PI § 360). Or that it is in pain (PG § 64), uses signs (PG § 139), considers (RPP vol I § 561) or hopes (RPP vol II § 16). For Wittgenstein the thesis of artificial ‘intelligence’ is nonsensical, since computers are, after all, at best artificial brains (PG § 64) and robots are at best artificial bodies.

Of course, the question whether or not machines can think does appear perfectly intelligible. This does not disprove Wittgenstein’s claim. For he allows that we can speak of things other than human beings (e.g. dolls) as thinking if we first make believe that they are like human beings in important respects. From there it is a short step to speaking of them as thinking things. This, although still within the make-believe, will appear a completely straightforward use of language, and it may seem to us that we are making literal sense. Consider, for example, Turing’s discussion of a ‘child-machine’ (1950). The child-machine can be punished and rewarded, could not be sent ‘to school without the other children making excessive fun of it’, is compared with Helen Keller, and can be taught by a process which follows ‘the normal teaching of a child’ (1950, pp.456-60). A Wittgensteinian would say that here we have Turing embedding the thesis of machine intelligence in a make-believe in which it may appear a perfectly intelligible, and empirical, question.

7. The Turing test is widely regarded as offering a definition of, i.e. a necessary and sufficient condition for, intelligence. Actually Turing himself nowhere offers the Test as a necessary condition.

8. To our knowledge, there is only one remark of Wittgenstein's which can with any plausibility be read to fit with Leiber's interpretation:

Psychological concepts are just everyday concepts. They are not concepts newly fashioned by science for its own purpose, as are the concepts of physics and chemistry. Psychological concepts are related to those of the exact sciences as the concepts of the science of medicine are to those of old women who spend their time nursing the sick. (RPP vol II § 62)

This remark can be read in two different ways. It can be taken to mean either that the concepts of the exact sciences provide a more accurate theory of the mind than do everyday concepts or that everyday concepts constitute no theory at all. It is the latter interpretation that is consistent with what Wittgenstein says elsewhere (for example, in the remark already quoted).

9. For Block an element in a cognitive process is primitive if further analysis of the process cannot be carried out using semantic or intentional notions. Wittgenstein's primitives are different. For Wittgenstein the primitive elements in any explanation are those which cannot themselves be analysed using semantic or intentional notions but which are the presuppositions of behaviour which can be so analysed. The bedrock, the point in the explanation at which the spade turns, is behaviour.

REFERENCES

Bates, A.M., Bowden, B.V., Strachey, C., Turing, A.M. 1953. 'Digital Computers Applied to Games'. In Bowden, B.V. (ed.) 1953. Faster than Thought. London: Pitman, pp.286-310.

Block, N. 1990. 'The Computer Model of the Mind'. In Osherson, D.N., Lasnik, H. (eds) 1990. An Invitation to Cognitive Science. Vol.3. Thinking. Cambridge, Mass.: MIT Press, pp.247-289.

Bobrow, D. 1968. 'A Turing Test Passed'. ACM SIGART Newsletter, December 1968, pp.14-15.

Church, A. 1936. 'A Note on the Entscheidungsproblem'. The Journal of Symbolic Logic, 1, pp.40-41.

Colby, K.M. 1981. 'Modeling a Paranoid Mind'. Behavioral and Brain Sciences, 4, pp.515-534.

Copeland, B.J. 199-. 'What Is Computation?'. (Forthcoming.)

Copeland, B.J. 1993. Artificial Intelligence: A Philosophical Introduction. Oxford: Basil Blackwell.

Copeland, B.J., Proudfoot, D. 199-. 'On Alan Turing's Anticipation of Connectionism'. (Forthcoming.)

Cummins, R. 1989. Meaning and Mental Representation. Cambridge, Mass.: MIT Press.

Fodor, J.A. 1985. ‘Fodor’s Guide to Mental Representation: The Intelligent Auntie's Vade-Mecum’. Mind, 94, pp.76-100.

Fodor, J.A. 1975. The Language of Thought. New York: Thomas Y. Crowell.

Hebb, D.O. 1949. The Organization of Behavior: A Neuropsychological Theory. New York: John Wiley.

Heiser, J.F., Colby, K.M., Faught, W.S., Parkison, R.C. 1980. 'Can Psychiatrists Distinguish a Computer Simulation of Paranoia from the Real Thing?'. Journal of Psychiatric Research, 15, pp.149-162.

Hodges, A. 1983. Alan Turing: The Enigma. London: Burnett.

Longuet-Higgins, H.C. 1972. 'To Mind Via Semantics'. In Kenny, A.J.P., Longuet-Higgins, H.C., Lucas, J.R., Waddington, C.H. 1972. The Nature of Mind. Edinburgh: Edinburgh University Press, pp.92-107.

McCorduck, P. 1979. Machines Who Think. New York: W.H. Freeman.

Minsky, M.L. 1967. Computation: Finite and Infinite Machines. Englewood Cliffs, N.J.: Prentice-Hall.

Monk, R. 1990. Ludwig Wittgenstein: The Duty of Genius. London: J. Cape.

Newell, A. 1980. 'Physical Symbol Systems'. Cognitive Science, 4, pp.135-183.

Penrose, R. 1989. The Emperor's New Mind. Oxford: Oxford University Press.

Rose, G.F., Ullian, J.S. 1963. ‘Approximation of Functions on the Integers’. Pacific Journal of Mathematics, 13, pp.693-701.

Rosenblatt, F. 1962. Principles of Neurodynamics. Washington, D.C.: Spartan.

Rumelhart, D.E., McClelland, J.L., and the PDP Research Group 1986. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol.1: Foundations. Cambridge, Mass.: MIT Press.

Searle, J. 1992. The Rediscovery of the Mind. Cambridge, Mass.: MIT Press.

Shanker, S.G. 1987. 'Wittgenstein versus Turing on the Nature of Church's Thesis'. Notre Dame Journal of Formal Logic, 28, pp.615-649.

Turing, A.M. 1936. 'On Computable Numbers, with an Application to the Entscheidungsproblem'. Proceedings of the London Mathematical Society, Series 2, 42 (1936-37), pp.230-65.

Turing, A.M. 1946. 'Proposal for Development in the Mathematics Division of an Automatic Computing Engine (ACE)'. In Carpenter, B.E., Doran, R.W. (eds) 1986. A.M. Turing's ACE Report of 1946 and Other Papers. Cambridge, Mass.: MIT Press, pp.20-105.

Turing, A.M. 1947. 'Lecture to the London Mathematical Society on 20 February 1947'. In Carpenter, B.E., Doran, R.W. (eds) 1986. A.M. Turing's ACE Report of 1946 and Other Papers. Cambridge, Mass.: MIT Press, pp.106-24.

Turing, A.M. 1948. 'Intelligent Machinery'. National Physical Laboratory Report. In Meltzer, B., Michie, D. (eds) 1969. Machine Intelligence 5. Edinburgh: Edinburgh University Press, pp.3-23. Reproduced with the same pagination in Ince, D.C. (ed.) 1992. Collected Works of A.M. Turing: Mechanical Intelligence. Amsterdam: North Holland.

Turing, A.M. 1950. 'Computing Machinery and Intelligence'. Mind, 59, pp.433-60.

Turing, A.M. 1992. Collected Works of A.M. Turing. Vols 1-3, edited by Britton, J.L., Ince, D.C., Saunders, P.T. Amsterdam: North-Holland.

Wittgenstein, L. Culture and Value. (Ed. von Wright, G.H., Nyman, H.; trans. Winch, P.) Oxford: Basil Blackwell, 1980.

Wittgenstein, L. Philosophical Grammar. (Ed. Rhees, R.; trans. Kenny, A.) Oxford: Basil Blackwell, 1974.

Wittgenstein, L. Philosophical Investigations. (Trans. Anscombe, G.E.M.) Oxford: Basil Blackwell, 1953.

Wittgenstein, L. Philosophical Remarks. (Ed. Rhees, R.; trans. Hargreaves, R., White, R.) Oxford: Basil Blackwell, 1975.

Wittgenstein, L. Remarks on the Foundations of Mathematics. (Ed. von Wright, G.H., Rhees, R., Anscombe, G.E.M.; trans. Anscombe, G.E.M.) Second edition. Oxford: Basil Blackwell, 1967.

Wittgenstein, L. Remarks on the Philosophy of Psychology. (Ed. Anscombe, G.E.M., von Wright, G.H. (vol. 1), von Wright, G.H., Nyman, H. (vol. 2); trans. Anscombe, G.E.M. (vol. 1), Luckhardt, C.G., Aue, M.A.E. (vol. 2).) Oxford: Basil Blackwell, 1980.

Wittgenstein, L. The Blue and Brown Books. New York: Harper Torchbooks, 1958.

Wittgenstein, L. Zettel. (Ed. Anscombe, G.E.M., von Wright, G.H.; trans. Anscombe, G.E.M.) Oxford: Basil Blackwell, 1967.
