Characteristic for the Turing test is that thinking is considered to be an activity or process that is separated from (non-verbal) action, emotion and affect, intuition, and so on. The relation between thought and action is construed as a relation between a program and a machine that executes the program. Thinking consists in formulating instructions, action in their executions. (Traces of a Cartesian dualism are visible here.) Another characteristic, closely related to the first one, is that knowledge and thought processes are assumed to be expressible in rules. (This is closely connected to a representational view of thought.)
What does the Turing test tell us about human thinking, human thought? Consider the following example. Suppose there are two columns of water that both indicate exactly and correctly the sea level at some point on the shoreline. Column A is directly connected to the sea at that point, via a system of pipes, say. Column B is a closed system in which the water level is determined by calculations that are programmed into the system. Both columns ‘behave’ in exactly the same way, yet one would only say of what happens in column A that it is the result of tidal flow. This indicates that the identity of a behaviour (or a process, event, action) is co-determined by the network of causal relations in which it is located.
The same seems to be true of human thinking, human thought. If there were a machine that passed the Turing test, it would still remain an open question what conclusion we would (need to) draw. What the example suggests is that one relevant factor is whether the machine is causally related to the kind of things that human thinking is intrinsically connected to: action, will, emotions, and so on. Thus, whether there will be a machine that passes the Turing test is an empirical matter; but what that will mean, for us, is a philosophical question.
Martin Stokhof from: Aantekeningen/Notes date: 29/03/1992
Peter Hacker’s account of the relationship between philosophy and cognitive science raises questions that concern the ramifications of that position.
When it comes to the relationship between cognitive science and philosophical analysis I am always reminded of Jerry Fodor’s direct approach to the problem. In his seminal book The Language of Thought (p. 57) he relates the following encounter: ‘I was once told by a very young philosopher that it is a matter for decision whether animals can (can be said to) hear. “After all”, he said, “it’s our word”.’ As the context of this quotation makes clear, the issue was not just about hearing, but extended to a wide range of psychological predicates, including talking, thinking, reasoning. Fodor was not impressed by the argument, as is obvious from the way in which he continues his tale: ‘But this sort of conventionalism won’t do; the issue isn’t whether we ought to be polite to animals.’ And then Fodor goes on in his characteristic fashion to explain why, basically, there is no room for a separate enterprise called ‘philosophical analysis’.
If I understand Hacker correctly he would agree that perhaps the young philosopher was not quite as snobbish as Fodor makes them out to be, and that an argument may be constructed to attach a basically human meaning to some of the terms the debate was about. Of course, it is not a matter of politeness whether animals think, but it isn’t a straightforward factual issue either. If anything, it is a matter of what the content of the concept of thinking is, i.e., of what the term means. And meaning belongs to the human sphere.
That is quite the opposite view from Fodor’s, and one that was formulated and defended much earlier by Wittgenstein, who claimed in Philosophical Investigations (II.xi; cf. also §360): ‘If a lion could talk, we could not understand it.’ The opposition is apt and relevant, since it is clear that Hacker’s analysis owes much to Wittgenstein’s observations concerning the way concepts are acquired and have meaning.
Motivation: reductionism redux
Why is this important? New developments in cognitive science, in particular new techniques that give access to low-level brain processes, suggest to many that reductionism is a goal that is finally coming within reach. Cognitive processes can be observed in vivo, i.e., concurrently with the corresponding processes in the brain. What thinking, feeling, speaking, and various perceptual acts really are, we can see with our own eyes when we observe the various electrochemical processes in the brain run their course. And this is ontological reduction, not the linguistic substitute that logical positivism promoted, in which theories were reduced to others by reformulating the statements of the former in those belonging to the latter. This reductionism is the real thing: it explains the cognitive entities of everyday life in terms of neuronal entities.
The technical developments are real and we cannot rule out a priori that a reduction of this kind of some cognitive processes will indeed turn out to be feasible. (Although we certainly are not yet in a position to actually affirm that.) Several questions arise. Are there any cognitive entities that resist this kind of reduction? With regard to those concepts that are susceptible to reduction, does this affect all of their content, or will there be irreducible residues? And where reduction is feasible, what consequences does this have for the application of the concepts in question in their everyday domain?
Conceptual analysis versus empirical science
Hacker’s position on this is clear: philosophy deals with concepts and provides an analysis of their contents and logical relations; cognitive science is concerned with the neural conditions that determine the operation of the functions corresponding to these concepts and provides descriptively and explanatorily adequate theories.
So it seems that, with regard to method as well as content, philosophy and cognitive science are strictly separated. There is an a priori distinction between the conceptual analysis provided by philosophy and the empirical investigations of cognitive science. This seems to suggest that no interaction occurs between the two realms, but that is not what Hacker means. He does see a role for conceptual analysis vis-à-vis empirical science: conceptual analysis may provide the necessary conceptual clarity without which the empirical investigations may go astray.
I wonder, first, whether philosophical reflection might not provide more than just conceptual clarity, i.e., whether it does not also provide actual empirical data; second, whether there may not also be an influence in the other direction, viz., from empirical science to conceptual analysis. And, third, whether one could not combine philosophical and scientific methods, as is being done for example by people working in neurophenomenology.
The main reason for thinking that this might be possible is the rather humdrum observation that, after all, the conceptual domain of philosophical analysis and the empirical domain of cognitive science are both related to (not: coincide with) the same field of everyday phenomena.
Cf. the following quote from Hacker’s paper on emotions:
‘Moods are such things as feeling cheerful, euphoric, contented, irritable, melancholic or depressed; they are states or frames of mind, as when one is in a state of melancholia, or in a jovial or relaxed frame of mind. […] It is, therefore, unwarranted to characterise moods, as Damasio does, as emotional states that are frequent or continuous over long periods of time.’
It seems obvious to me that we are not dealing here with some kind of nominal, stipulative definition. The concepts in question have a pre-theoretical content and it is this content that is being captured and analysed in definitions (philosophy) and at the same time it is this content that motivates and directs empirical investigations (cognitive science).
My suggestion would be that this imposes restrictions on both conceptual analysis and empirical investigation.
On the empirical side: it seems obvious to me that empirical investigations into everyday phenomena such as emotions, moods, knowledge, and memory cannot be dissociated from whatever content these concepts have in everyday life. Perhaps cognitive science may discover that certain distinctions should be drawn slightly differently, or that connections exist that are not apparent from the conceptualisation of these phenomena in everyday language. But it cannot attribute a different content to these concepts. If it does that, it studies something, but not emotions, moods, etc. This marks a difference with cases where science is able to correct common-sense understanding, such as the case of jade turning out to be two different kinds of chemical compounds, or that of light having both a corpuscular and a wave nature.
So it seems to me that our first person experience, i.e., the content of these concepts as it reveals itself in philosophical reflection, provides an empirical constraint on cognitive research. (From which it follows that there is a distinction to be made between the study of those concepts that allow for such reflection, such as emotions, moods and certain cognitive actions, and those that do not, such as perception. Interesting question: on which side of this divide are language and meaning?)
This constraint, I propose, goes further than mere conceptual clarification: it actually provides additional empirical data that need to be accounted for by empirical theories.
But on the conceptual side, too, constraints arise. Conceptual analysis is not empirical research: philosophers traditionally don’t do experiments, use questionnaires, etc. Nevertheless, conceptual analysis is tied to empirical issues. For one thing, the fact that we have the concepts that we have is itself an empirical matter. Different cultural and/or historical circumstances may give rise to (slightly) different sets of cognitive and emotional concepts. And the contents of these concepts themselves may change under the influence of both philosophical analysis and empirical research. To put it differently, inasmuch as our concepts embody a (rudimentary) conception of our selves, this very conception, and thereby the contents of those concepts, may change when we analyse it, both conceptually and empirically.
In particular the last point means that although there is a categorical difference between conceptual analysis and empirical research (here I agree with Hacker), it does not follow that conceptual analysis is prior to empirical research. The very object of conceptual analysis may change due to the results of empirical investigations, in much the same way as the empirical investigation must proceed on the basis of the results of conceptual analysis: there exists an ongoing interaction between the two.
Also, it seems to me that this might have methodological consequences as well, in so far as it indicates that the idea of a combination of philosophical reflection and empirical research, i.e., of a first-person and a third-person perspective, may prove to be relevant if we are to gain a proper understanding of what such phenomena as emotions, memory, etc. are. Neurophenomenology à la Varela, Thompson, Depraz and Vermersch may provide a model here, but it need not be the only one. I do feel that this is something that philosophers and cognitive scientists need to explore.
Finally, the empirical and contingent nature of the concepts involved also provides an impetus to investigate to what extent these phenomena transcend the boundaries of the individual. Hacker quite rightly stresses the attitudinal aspects of these concepts, viz., the fact that having a belief, reaching a decision, forming a hypothesis, being in a melancholy mood, or being proud or jealous are not isolated, instantaneous events or states, but phenomena that are related to all kinds of other properties and relations that individuals may have and enter into, with a whole network of cognitive and non-cognitive dispositions and capabilities. But he tends to ignore that a substantial part of those capabilities (such as those that enter into the use of language) are essentially social in nature, at least in this sense: the idea of only one single individual having these capabilities is conceptually incoherent. These concepts presuppose a social framework that allows them to be instantiated in an individual. Taking this seriously would provide us with an impetus to investigate to what extent modern cognitive science suffers from an individualistic bias, which may be due to its reductionist presuppositions and/or the limitations of its experimental toolbox.
‘What X is for us’: what we are is, in an important sense, what we think (feel, imagine) that we are. And in this sense reductionism might succeed: once we believe in (some version of) it, i.e., once we are willing to adapt our self-image to the particular picture it presents, we in fact become whatever that picture says we are.
(Note that such a development would have ethical consequences as well. That is one reason why imagination of what we are and, in particular, imagination of what we could be, as for example literature provides, is (also) of ethical importance.)
If central concepts pertaining to human identity, such as will, consciousness, thinking, feeling, imagination, meaning, are concepts with a content that is determined by ‘What X is for us’, human identity is essentially a construct: historically, socially and culturally constrained and only partially individually maintained. We are in that sense what we think we are, although the freedom we have in thinking ourselves is constrained by social, cultural and historical factors (and, of course, physical and biological ones). In modern times science has become an important source for what we count as content of certain concepts. For example, our view of the material world is increasingly informed by scientific theories (albeit often distorted by popular misconceptions and simplifications). To the extent that this holds true also for the concepts that go into determining our identity, our conception of ourselves may change as well.
Today, it seems, essential aspects of the contents of many of the central concepts mentioned above are determined externally, i.e., by reference to things outside the individual mental realm. However, the increasing influence of research in cognitive science (psychology, neurobiology) may change that. We may come to adopt, for example, a view on what meaning is that takes into account only what can be explained in terms of individual, psychological and/or neurobiological properties. That would not be a better (or worse) account, since there is no fact of the matter that would provide an independent measure here: if meaning is what counts as meaning for us, then if we ‘change our mind’ about what meaning is, meaning indeed becomes something else. But then so do we: as these central concepts change, our identity changes accordingly. And so we may end up with a view of ourselves in which any differences that we now count as essential differences between, say, human intelligence and artificial intelligence have been obliterated, or a view in which we accept only explanations for our actions that are based on facts concerning our material (neurophysiological) make-up.
So, humanity may well come to an end by its own hand, not through physical destruction (although that is certainly not unlikely) but by conceptual elimination. After all, is that not how we got rid of a lot of other things?
Martin Stokhof from: Aantekeningen/Notes date: December 20, 2008
Suppose there were creatures with the following features. If something is the case, they believe it; if something is not the case, they believe it is not the case; they do not entertain any other thoughts, more specifically they don’t have thoughts of the form ‘Suppose A were (not) the case …’, ‘If B had not been the case …’, and so on. Would we say that these creatures had knowledge? They could serve as reliable oracles, as perfect encyclopaedias, but we wouldn’t want to say that they knew anything. So knowledge presupposes (among other things) our ability to be uncertain, to entertain suppositions, to consider situations that we know to be counterfactual.
Does this mean that the concept of an omniscient interpreter à la Davidson is incoherent? Not necessarily. Perfect knowledge about the world is compatible, at least so it seems, with counterfactual uncertainty, and hence with having the concept of being wrong.
Martin Stokhof from: Aantekeningen/Notes date: 13/10/2000, 09/08/2001
Dreyfus, Kierkegaard, ‘unconditional commitment’. Remarkable thing about the case of Abraham is that we do not consider the issue from Isaac’s point of view. What would he have said? “I’d rather have a despairing Buddhist as a father than this unconditionally committed Christian …”? He might have, and that’s enough. The unconditional commitment of Abraham to his God might go against whatever views Isaac has concerning the way he wants to lead his life, and that really should be reason enough for us to reject, not just this particular unconditional commitment of Abraham’s, but the very concept itself. Given the fact that we lead our lives with others, and that hence, whether we like it or not, our actions directly or indirectly influence the lives of those others, an unconditional commitment, precisely because it is unconditional, i.e., also not conditioned by concerns about others, is intrinsically morally wrong. This is independent of the moral status of actual effects of some particular unconditional commitment, it is an objection to the concept as such.
My guess is that the concept is appealing for reasons quite similar to those that make people susceptible to the idea of living in ‘historical times’, witnessing ‘turning points in history’, and so on (Heidegger). We want our lives to be dramatic, exciting, important. Whereas in reality they are ordinary, humdrum, inconsequential, even if they turn out to make a difference. That sounds contradictory, but it is not. The point is: what counts as a decisive moment is decided by history (i.e., by reality in its temporal dimension and complexity), not by us, and it is hardly ever possible for us to discern it while we are witnessing it. Too often an event is labelled by contemporaries as ‘historical’, as something that ‘changes the world as we know it’, and most of those events turn out to be completely unimportant. At best some of them may come to be regarded as symbolic of a much more complex and extended sequence of events. History is complex, much too complex for us who are witnessing it to grasp, and often also too complex for those who have the benefit of hindsight to fathom completely. There is no communis opinio among historians about the majority of the events that make up our history, not because of a lack of knowledge, but because of their sheer complexity combined with the unavoidable multiplicity of perspectives. So even if a certain event or action does make a significant difference, the claim of those participating in it that it does will in most cases be completely unfounded.
The idea of an unconditional commitment is based on a similar misunderstanding of our lives: appearances to the contrary notwithstanding, it places us, as individuals, in the centre of things. The unconditional commitment is ours, even where (or should we say, precisely because?) it involves a complete surrender to God. As such it displays a complete disregard of the fundamental given that our life is always related to that of others, even if we live alone, in the remotest place on earth. Given that, whatever commitment we make to live our life in accordance with, it needs to take others into account and therefore can never be unconditional. The alternative is a fundamental dismissal of others as worthy of moral, ethical concern, something that unavoidably leads to nihilism.
Martin Stokhof from: Aantekeningen/Notes date: 21/05/2003
On the relation between experience and theoretical explanation
The ultimate justification of a theoretical explanation resides in the fact that it changes our experience. It allows us, not only to see things differently, but better: for our understanding of things is in the way we experience them. In that sense theories are a means, not an end in themselves.
A good example seems to be provided by certain mathematical theories, in particular geometrical ones, that, when really understood, change our ways of perceiving objects and their relationships. Or rather, allow us to perceive them differently. It is this added freedom of perception that deepens our understanding: things are not just like this, they are much more.
Similarly, mythologies, mystical explanations, good philosophy. (Is there something of this in Wittgenstein’s remarks on Frazer?)
But, of course, this will work only if we realise that a new way of looking at things, a new way of experiencing them, is just that: one among many possible ways. The crux of the matter is that we should not exchange one view for another, but ‘collect’ them, exploit them, amplify them. Of course, we can’t hold onto all of them at the same time (in much the same way that we can’t entertain two different sets of certainties). Which means that we should cultivate flexibility and change, train ourselves to switch back and forth, enjoying the distance in between.
To come to grips with the relation between experience and theory (in a wide sense) seems a crucial issue: experience alone will not do (pace the claims of sensualism) because experience never comes only by itself. It is always accompanied by feelings, thoughts, emotions that transcend it. (Even when we are not aware of this. This shows itself in how we act upon our experiences.) It is in this sense that we are not a database of experiential input plus some calculating device. We need theory, not to knit the experiences together, but to understand what holds them together in the first place: our own selves. But understanding ourselves in that way is not enough: the understanding remains sterile if it is not tested again in new experiences, or rather, in new ways of experiencing.
Another aspect: certain types of theories, say particle physics or neurophysiology, are hard to fasten onto everyday experience. We may know that what looks like a solid material object is nothing but a swarm of particles, but we cannot experience it that way. Similarly, we may know that certain feelings arise from certain stimulation patterns in the brain, made possible by the production of certain neurotransmitters, but that is not an account of what we experience. This, too, points towards a distinction between the experiential aspect, or content, of an experience and its accompaniments. Experience is that totality, not one of its components. And theories such as those indicated above mainly pertain to the ‘data aspect’ of experiences.
Martin Stokhof from: Aantekeningen/Notes date: 22-08-1998
Suppose we have a sequence of properties N1 … Nn such that N1 ⊆ … ⊆ Nn. If A is the property expressed by an extensional, i.e., subsective or intersective, adjective, it holds that (A ∩ N1) ⊆ … ⊆ (A ∩ Nn). Contrariwise, for some intensional adjectives this breaks down in an interesting way: we can have A(N1) ⊆ … ⊆ A(Ni) while A(Ni) ⊈ A(Ni+1). Example: a one-guilder piece is a coin, is a piece of currency, is a material object. A blackened one-guilder piece is a blackened coin, is a blackened piece of currency, is a blackened material object. But although a false one-guilder piece is a false coin and a false piece of currency, it is not a false material object. This shows that somewhere along the line from N1 to Nn there is a break between different kinds of properties, say characteristic and non-characteristic ones, and that intensional qualifications such as false are a means to determine where the break occurs.
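The monotonicity contrast can be rendered in a small set-theoretic sketch. This is a hypothetical Python illustration, not part of the note: the particular sets, and the stipulation that ‘false’ applies only to the characteristic properties up to piece of currency, are assumptions made for the example.

```python
# Nested properties: every one-guilder piece is a coin, every coin a
# piece of currency, every piece of currency a material object.
guilder_pieces = {"g1", "g2"}
coins = guilder_pieces | {"c1"}
currency = coins | {"note1"}
material_objects = currency | {"rock1"}
chain = [guilder_pieces, coins, currency, material_objects]

# An extensional (intersective) adjective such as 'blackened' denotes
# plain intersection with a fixed set, so it preserves the chain.
blackened_things = {"g1", "c1", "rock1"}

def blackened(N):
    return N & blackened_things

ext_chain = [blackened(N) for N in chain]
assert all(a <= b for a, b in zip(ext_chain, ext_chain[1:]))  # monotone

# An intensional adjective such as 'false' is modelled as a function on
# properties, not an intersection: it yields counterfeits only for the
# 'characteristic' properties (stipulated here as those up to currency).
fakes = {"f1"}  # counterfeit objects, not members of any genuine set

characteristic = [guilder_pieces, coins, currency]

def false_adj(N):
    return fakes if any(N is C for C in characteristic) else set()

int_chain = [false_adj(N) for N in chain]
# The chain holds up to 'currency' ...
assert int_chain[0] <= int_chain[1] <= int_chain[2]
# ... but breaks at 'material object': a false piece of currency is
# not a false material object.
assert not int_chain[2] <= int_chain[3]
```

The break in the subset chain thus falls exactly where the note locates it: between the characteristic properties and the non-characteristic ones.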
Martin Stokhof from: Aantekeningen/Notes date: 30/06/1998