Part of the argumentation in Davidson’s ‘On the Very Idea of a Conceptual Scheme’ centers on the transitivity of the translation relation. The point Davidson is trying to make here, I think, is that if there exists a translation from L1 into L2, that translation itself can be stated in either L1 or L2, and consists in an accurate mapping of sentences of L1 onto sentences of L2. If the translation is stated in L1 then, by assumption, it can also be stated in L2 (after all, there is a mapping from all of L1 into L2, i.e., also of the L1-sentences describing the mapping). So we can safely assume that if there is a translation from L1 into L2, it can be formulated in L2. Now suppose there is a translation from L2 into L3: that, again, is a mapping, in this case of L2-sentences onto L3-sentences. That mapping includes those L2-sentences that describe how to translate L1-sentences into L2-sentences. So what we have then is (among other things) a set of L3-sentences that tell us which L2-sentences are translations of which L1-sentences, together with L3-translations of those L2-sentences. But that means we have a translation of L1 into L3. (More neatly: if a translation is a homomorphism from L to L’, then we know that if there is a homomorphism from L1 to L2 and a homomorphism from L2 to L3, then there is a homomorphism from L1 to L3, viz., the composition of the two.)
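The composition point can be sketched in a toy model that treats a translation as a total mapping between (finite) sets of sentences; the sentence names and mappings below are invented placeholders, not anything from Davidson's text.

```python
# Toy model of the transitivity argument: a 'translation' is a total
# mapping from the sentences of one language onto sentences of another,
# and transitivity is simply function composition. All sentence names
# are invented placeholders.

t_12 = {"s1": "z1", "s2": "z2"}  # a translation from L1 into L2
t_23 = {"z1": "d1", "z2": "d2"}  # a translation from L2 into L3

def compose(t_ab, t_bc):
    """Compose two translations: defined on all of t_ab's domain,
    provided t_bc is total on the range of t_ab."""
    return {s: t_bc[t_ab[s]] for s in t_ab}

# The composition is itself a translation, from L1 into L3.
t_13 = compose(t_12, t_23)
```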
Of course the above argument only works if we assume that translations are total, i.e., that they map all sentences of L onto sentences of L’ and vice versa. What if we drop that assumption? First of all we have to ask ourselves whether we are still dealing with translation in such a case. But let that pass, and suppose we have a mapping of all the sentences of L1 onto a proper subset of the sentences of L2, i.e., there are parts of L2 that have no counterpart in L1. Notice that the formulation of the translation cannot be in the latter set (for that would mean that the translation would be statable in L2, but not in L1, which is absurd). Now assume we have a translation from L2 into L3: the only way in which that would not give us a translation of L1 into L3 (by the reasoning above) is by being restricted to exactly that part of L2 that does not contain the L1-to-L2 translation. But that would mean that it really isn’t a translation of L2 as such: it must leave out a proper part of L2. Of course, such a mapping could exist, but we would lack any justification for calling it a translation. (And if we were to insist that we could call it that, then we would actually be assuming what we set out to prove, viz., the failure of transitivity of translation.)
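The failure case just described can be put in the same toy model: if the alleged L2-to-L3 translation is restricted to exactly that part of L2 onto which L1 does not map, composing it with the L1-to-L2 translation yields nothing. Again, all sentence names are invented placeholders.

```python
# Toy model of the partial case: L1 maps onto a proper subset of L2,
# and the alleged 'translation' of L2 into L3 is defined only on the
# remainder of L2. Sentence names are invented placeholders.

t_12 = {"s1": "z1", "s2": "z2"}          # image of L1 in L2: {z1, z2}
t_23_partial = {"z3": "d3", "z4": "d4"}  # defined only outside that image

def compose_partial(t_ab, t_bc):
    """Compose where defined: covers only those L1-sentences whose
    L2-translations lie in the domain of t_bc."""
    return {s: t_bc[t_ab[s]] for s in t_ab if t_ab[s] in t_bc}

# The composition is empty: no L1-sentence reaches L3. But, as argued
# above, a mapping that leaves out a proper part of L2 in precisely
# this way hardly deserves the name 'translation of L2'.
t_13 = compose_partial(t_12, t_23_partial)
```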
Note that the fact that in actual cases there are bound to be discrepancies between languages, i.e., things in L that don’t have an exact counterpart in L’, or things in L’ that lack a counterpart in L, does not really affect this line of argumentation. The central point is that the translation itself concerns such a substantial part of both languages, that that by itself guarantees transitivity to a sufficient degree.
Martin Stokhof from: Radical Interpretation Discussion Board date: 11-2004
Peter Hacker’s account of the relationship between philosophy and cognitive science raises questions that concern the ramifications of that position.
When it comes to the relationship between cognitive science and philosophical analysis I am always reminded of Jerry Fodor’s direct approach to the problem. In his seminal book The Language of Thought (p. 57) he relates the following encounter: ‘I was once told by a very young philosopher that it is a matter for decision whether animals can (can be said to) hear. “After all”, he said, “it’s our word”.’ As the context of this quotation makes clear, the issue was not just about hearing, but extended to a wide range of psychological predicates, including talking, thinking, reasoning. Fodor was not impressed by the argument, as is obvious from the way in which he continues his tale: ‘But this sort of conventionalism won’t do; the issue isn’t whether we ought to be polite to animals.’ And then Fodor goes on in his characteristic fashion to explain why, basically, there is no room for a separate enterprise called ‘philosophical analysis’.
If I understand Hacker correctly he would agree that perhaps the young philosopher was not quite as snobbish as Fodor makes him out to be, and that an argument may be constructed to attach a basically human meaning to some of the terms the debate was about. Of course, it is not a matter of politeness whether animals think, but it isn’t a straightforward factual issue either. If anything, it is a matter of what the content of the concept of thinking is, i.e., of what the term means. And meaning belongs to the human sphere.
That is quite the opposite view from Fodor’s, and one that was formulated and defended much earlier by Wittgenstein, who claimed in Philosophical Investigations (II.xi; cf., also 360): ‘If a lion could talk, we could not understand it.’ The opposition is apt and relevant, since it is clear that Hacker’s analysis owes much to Wittgenstein’s observations concerning the way concepts are acquired and have meaning.
Motivation: reductionism redux
Why is this important? New developments in cognitive science, in particular new techniques that give access to low-level brain processes, suggest to many that reductionism is a goal that is finally coming within reach. Cognitive processes can be observed in vivo, i.e., concurrently with the corresponding processes in the brain. What thinking, feeling, speaking, various perceptual acts really are, we can see with our own eyes when we observe the various electrochemical processes in the brain run their course. And this is ontological reduction, not the linguistic substitute that logical positivism promoted, in which theories were reduced to others by reformulating the statements of the former in those belonging to the latter. This reductionism is the real thing: it explains the cognitive entities of everyday life in terms of neuronal entities.
The technical developments are real and we cannot rule out a priori that a reduction of this kind of some cognitive processes will indeed turn out to be feasible. (Although we certainly are not yet in a position to actually affirm that.) Several questions arise. Are there any cognitive entities that resist this kind of reduction? With regard to those concepts that are susceptible to reduction, does this affect all of their content, or will there be irreducible residues? And where reduction is feasible, what consequences does this have for the application of the concepts in question in their everyday domain?
Conceptual analysis versus empirical science
Hacker’s position on this is clear: philosophy deals with concepts and provides an analysis of their contents and logical relations; cognitive science is concerned with the neural conditions that determine the operation of the functions corresponding to these concepts and provides descriptively and explanatorily adequate theories.
So, it seems that both with regard to method and with regard to content philosophy and cognitive science are strictly separated. There is an a priori distinction between the conceptual analysis provided by philosophy and the empirical investigations of cognitive science. This seems to suggest that no interaction occurs between the two realms, but that is not what Hacker means. He does see a role for conceptual analysis vis-à-vis empirical science: conceptual analysis may provide the necessary conceptual clarity without which the empirical investigations may go astray.
I wonder, first, whether philosophical reflection might not provide more than just conceptual clarity, i.e., whether it does not also provide actual empirical data; second, whether there may not also be an influence in the other direction, viz., from empirical science to conceptual analysis. And, third, whether one could not combine philosophical and scientific methods, as is being done for example by people working in neurophenomenology.
The main reason for thinking that this might be possible is the rather humdrum observation that after all the conceptual domain of philosophical analysis and the empirical domain of cognitive science are both related to (not: coincide with) the same field of everyday phenomena.
Cf. the following quote from Hacker’s paper on emotions:
‘Moods are such things as feeling cheerful, euphoric, contented, irritable, melancholic or depressed; they are states or frames of mind, as when one is in a state of melancholia, or in a jovial or relaxed frame of mind. […] It is, therefore, unwarranted to characterise moods, as Damasio does, as emotional states that are frequent or continuous over long periods of time.’
It seems obvious to me that we are not dealing here with some kind of nominal, stipulative definition. The concepts in question have a pre-theoretical content and it is this content that is being captured and analysed in definitions (philosophy) and at the same time it is this content that motivates and directs empirical investigations (cognitive science).
My suggestion would be that this imposes restrictions on both conceptual analysis and empirical investigation.
On the empirical side: it seems obvious to me that empirical investigations into everyday phenomena such as emotions, moods, knowledge, memory cannot be dissociated from whatever content these concepts have in everyday life. Perhaps cognitive science may discover that certain distinctions should be drawn slightly differently, or that connections exist that are not apparent from the conceptualisation of these phenomena in everyday language. But it cannot attribute a different content to these concepts. If it does that, it studies something, but not emotions, moods, etc. This marks a difference with cases where science is able to correct common sense understanding, such as the case of jade turning out to be two different kinds of chemical compound, or that of light having both a corpuscular and a wave nature.
So it seems to me that our first person experience, i.e., the content of these concepts as it reveals itself in philosophical reflection, provides an empirical constraint on cognitive research. (From which it follows that there is a distinction to be made between the study of those concepts that allow for such reflection, such as emotions, moods and certain cognitive actions, and those that do not, such as perception. Interesting question: on which side of this divide are language and meaning?)
This constraint, I propose, goes further than mere conceptual clarification, but actually provides additional empirical data that need to be accounted for by empirical theories.
But on the conceptual side, too, constraints arise. Conceptual analysis is not empirical research: philosophers traditionally don’t do experiments, use questionnaires, etc. Nevertheless, conceptual analysis is tied to empirical issues. For one thing, the fact that we have the concepts that we have is itself an empirical matter. Different cultural and/or historical circumstances may give rise to (slightly) different sets of cognitive and emotional concepts. And the contents of these concepts themselves may change under the influence of both philosophical analysis and empirical research. To put it differently, inasmuch as our concepts embody a (rudimentary) conception of our selves, this very conception, and thereby the contents of those concepts, may change when we analyse it, both conceptually and empirically.
In particular the last point means that although there is a categorical difference between conceptual analysis and empirical research (here I agree with Hacker), it does not follow that conceptual analysis is a priori to empirical research. The very object of conceptual analysis may change due to the results of empirical investigations, in much the same way as the empirical investigation must proceed on the basis of the results of conceptual analysis: there exists an ongoing interaction between the two.
Also, it seems to me that this might have methodological consequences as well, in so far as it indicates that the idea of a combination of philosophical reflection and empirical research, i.e., of a first person and a third person perspective, may prove to be relevant if we are to gain a proper understanding of what such phenomena as emotions, memory, etc. are. Neurophenomenology à la Varela, Thompson, Depraz and Vermersch may provide a model here, but it need not be the only one. I do feel that this is something that philosophers and cognitive scientists need to explore.
Finally, the empirical and contingent nature of the concepts involved also provides an impetus to investigate to what extent these phenomena transcend the boundaries of the individual. Hacker quite rightly stresses the attitudinal aspects of these concepts, viz., that having a belief, reaching a decision or forming a hypothesis, being in a melancholy mood or being proud or jealous, are not isolated, instantaneous events or states, but phenomena that are related to all kinds of other properties and relations that individuals may have and enter into, to a whole network of cognitive and non-cognitive dispositions and capabilities. But he tends to ignore that a substantial part of those capabilities (such as those that enter into the use of language) is essentially social in nature, at least in the sense that the idea of one single individual having these capabilities is conceptually incoherent. These concepts presuppose a social framework that allows them to be instantiated in an individual. Taking this seriously would provide us with an impetus to investigate to what extent modern cognitive science suffers from an individualistic bias, which may be due to its reductionist presuppositions and/or the limitations of its experimental toolbox.
‘What X is for us’: what we are, is in an important sense what we think (feel, imagine) that we are. And in this sense reductionism might succeed in this way: once we believe in (some version of) it, i.e., once we are willing to adapt our self-image to the particular picture it presents, we in fact become whatever that picture says we are.
(Note that such a development would have ethical consequences as well. That is one reason why imagination of what we are and, in particular, imagination of what we could be, as for example literature provides, is (also) of ethical importance.)
If central concepts pertaining to human identity, such as will, consciousness, thinking, feeling, imagination, meaning, are concepts with a content that is determined by ‘What X is for us’, human identity is essentially a construct: historically, socially and culturally constrained and only partially individually maintained. We are in that sense what we think we are, although the freedom we have in thinking ourselves is constrained by social, cultural and historical factors (and, of course, physical and biological ones). In modern times science has become an important source for what we count as content of certain concepts. For example, our view of the material world is increasingly informed by scientific theories (albeit often distorted by popular misconceptions and simplifications). To the extent that this holds true also for the concepts that go into determining our identity, our conception of ourselves may change as well.
Today, it seems, essential aspects of the contents of many of the central concepts mentioned above are determined externally, i.e., by reference to things outside the individual mental realm. However, increasing influence of research in cognitive science (psychology, neurobiology) may change that. We may come to adopt, for example, a view on what meaning is that takes into account only what can be explained in terms of individual, psychological and/or neurobiological properties. That would not be a better (or worse) account, since there is no fact of the matter that would provide an independent measure here: if meaning is what counts as meaning for us, then if we ‘change our mind’ about what meaning is, meaning indeed becomes something else. But then so do we: as these central concepts change, our identity changes accordingly. And so we may end up with a view of ourselves in which any differences that we now count as essential differences between, say, human intelligence and artificial intelligence have been obliterated, or a view in which we accept only explanations for our actions that are based on facts concerning our material (neurophysiological) make up.
So, humanity may well come to an end by its own hand, not through physical destruction (although that is certainly not unlikely) but by conceptual elimination. After all, is that not how we got rid of a lot of other things?
Martin Stokhof from: Aantekeningen/Notes date: December 20, 2008
Suppose there were creatures with the following features. If something is the case, they believe it; if something is not the case, they believe it is not the case; they do not entertain any other thoughts, more specifically they don’t have thoughts of the form ‘Suppose A were (not) the case …’, ‘If B had not been the case …’, and so on. Would we say that these creatures had knowledge? They could serve as reliable oracles, as perfect encyclopaedias, but we wouldn’t want to say that they knew anything. So knowledge presupposes (among other things) our ability to be uncertain, to entertain suppositions, to consider situations that we know to be counterfactual.
Does this mean that the concept of an omniscient interpreter à la Davidson is incoherent? Not necessarily. Perfect knowledge about the world is compatible, at least so it seems, with counterfactual uncertainty, and hence with having the concept of being wrong.
Martin Stokhof from: Aantekeningen/Notes date: 13/10/2000, 09/08/2001
Description itself is never neutral or objective; there is no Archimedean point that allows us to ‘just describe the facts’. But that does not necessarily imply that description and explanation are alike. An explanation, unlike a description, presupposes a theoretical framework of general principles and inferential relations (causal or otherwise). An explanation typically presents an individual event as an instance of something more general, a law, a pattern, and in doing so links it to other events that are supposed to be similar. Description, though not objective, remains level with what is described, so to speak. It does not generalise, and respects, you might say, the individuality, the uniqueness of what it describes. Of course, description, too, is possible only within a framework, but it functions quite differently.
[The rain king] What Wittgenstein opposes in Frazer is that the latter attributes some kind of naive proto-science to these people; according to him they are simply wrong (and we are right) about the causal antecedents of the annual rains. That the difference of opinion between Wittgenstein and Frazer itself is like a scientific debate (of sorts) is true but, as far as I can see, that has no direct bearing on the adequacy of Wittgenstein’s criticism. For Wittgenstein, the essential point is that they do not conceive of the relationship between the Rain-King (and what he does) and the coming of the rains as a causal relationship. Q.E.D., as far as Wittgenstein is concerned: for that is exactly what he holds against Frazer, viz., that he (Frazer) does ascribe to them a kind of naive scientific theory that attempts to explain, in causal terms, the coming of the rains.
[The fire-festival] If to understand the meaning of the ritual means to experience its depth, the terror its enactment brings about, then to laugh at the description would be to show a thorough lack of understanding. That applies to the specific examples Frazer and Wittgenstein are concerned with, and it does not mean, I gather, that there couldn’t be rituals for which to laugh would be the hallmark of understanding. But in these particular cases, to laugh, to ridicule the ‘savages’, is to show that one does not understand.
[Kissing a portrait] One important characteristic that Wittgenstein mentions, and that seems what is needed to distinguish the kissing of a portrait from doing the dishes, is that in a ritual means and ends coincide. An ordinary action aims at something: we do the dishes because we want to dine from clean ones, because we want to prevent bacteria from growing in the kitchen sink, because we want to impress someone, and so on. Here the action is a means to an end. A ritual is not like that, a ritual is not performed with an eye to its effects (although it may, of course, have effects, and some of these we might find agreeable). Rather, a ritual is performed for its own sake: “… it does not aim at anything; we act in this way and then feel satisfied.”
One important consequence of this is that whether an act is ritualistic or not (in Wittgenstein’s sense) does not depend (at least not solely) on the nature of the act. (So doing the dishes can be a ritual as well.) And as far as I can see it also means that we cannot say that rituals are either private or social; they can be either, and both.
As for the question whether science itself is a ritual, my guess is that Wittgenstein would acknowledge that many people have indeed replaced their reliance on some religious system by a reliance on science. However, he also quite emphatically states that this is a misunderstanding of what science is and what it can do. (Recall Tractatus 6.371-6.372; cf. also the foreword to Philosophical Remarks in Culture and Value.) A proper view on science has no place for ritual, since science is about external, causal relationships between (types of) events, whereas ritual is concerned with the internal significance of an event or act.
Martin Stokhof from: EOL Discussion Board date: fall2002
Dreyfus, Kierkegaard, ‘unconditional commitment’. Remarkable thing about the case of Abraham is that we do not consider the issue from Isaac’s point of view. What would he have said? “I’d rather have a despairing Buddhist as a father than this unconditionally committed Christian …”? He might have, and that’s enough. The unconditional commitment of Abraham to his God might go against whatever views Isaac has concerning the way he wants to lead his life, and that really should be reason enough for us to reject, not just this particular unconditional commitment of Abraham’s, but the very concept itself. Given the fact that we lead our lives with others, and that hence, whether we like it or not, our actions directly or indirectly influence the lives of those others, an unconditional commitment, precisely because it is unconditional, i.e., also not conditioned by concerns about others, is intrinsically morally wrong. This is independent of the moral status of actual effects of some particular unconditional commitment, it is an objection to the concept as such.
My guess is that the concept is appealing for reasons quite similar to those that make people susceptible to the idea of living in ‘historical times’, witnessing ‘turning points in history’, and so on (Heidegger). We want our lives to be dramatic, exciting, important. Whereas in reality they are ordinary, humdrum, inconsequential, even if they turn out to make a difference. That sounds contradictory, but it is not. The point is: what is a decisive moment is decided by history (i.e., by reality in its temporal dimension and complexity), not by us, and it is hardly ever possible for us to discern it while we are witnessing it. Too often an event is labelled by contemporaries as ‘historical’, as something that ‘changes the world as we know it’, and most of those events turn out to be completely unimportant. At best some of them may come to be regarded as symbolic of a much more complex and extended sequence of events. History is complex, much too complex for us who are witnessing it to grasp, and often also too complex for those who have the benefit of hindsight to fathom completely. There is no communis opinio among historians about the majority of the events that make up our history, not because of a lack of knowledge, but because of their sheer complexity combined with the unavoidable multiplicity of perspectives. So even if a certain event or action does make a significant difference, the claim of those participating in it that it does will in most cases be completely unfounded.
The idea of an unconditional commitment is based on a similar misunderstanding of our lives: appearances to the contrary notwithstanding, it places us, as individuals, in the centre of things. The unconditional commitment is ours, even where (or should we say, precisely because?) it involves a complete surrender to God. As such it displays a complete disregard of the fundamental given that our life is always related to that of others, even if we live alone, in the remotest place on earth. Given that, whatever commitment we make to live our life in accordance with, it needs to take others into account and therefore can never be unconditional. The alternative is a fundamental dismissal of others as worthy of moral, ethical concern, something that unavoidably leads to nihilism.
Martin Stokhof from: Aantekeningen/Notes date: 21/05/2003
Wittgenstein claims that belief (like doubt, expectation, etc.) is ‘introspectively accessible’: if we believe that p, we know that we believe that p. Hence, we cannot say, Wittgenstein claims, that we thought we believed something, but actually did not believe it. Knowledge is not like that: we can think that we know something, only to find out that we didn’t. The reason is (presumably, but Wittgenstein does not discuss this explicitly) that belief concerns a certain state or disposition, whereas knowledge in addition involves a particular relation to the world (this is where truth comes in).
Does this hold up if what a person believes ultimately must show itself in his or her actions, i.e., if belief is a disposition to act in a certain way? That is a view that Wittgenstein seems to endorse as well, so the issue at hand can also be formulated as follows: does it make sense to say that one can somehow be mistaken about what it is one is doing, or is disposed to do?
Stepping back: is there at this particular point a difference between belief in the ordinary, epistemological sense, and religious beliefs and ethical convictions? If we grant Wittgenstein that, indeed, it does not seem to make sense to say “I think I believe that Amsterdam is the capital of the Netherlands, but maybe I’m wrong, maybe I don’t believe that”, are we then forced to also hold that it does not make sense to answer “I don’t know” if someone asks us “Do you believe in an after-life (transubstantiation, Last Judgement, …)”, and that we cannot meaningfully express doubt concerning an ethical prescription, as in “I’m not sure I believe that one should always respect the right to bodily integrity”? It would be interesting to try to construct different cases and see whether doubt concerning belief is possible, and if so, what it means to express it. In that way we might also get a better picture of how various kinds of beliefs are related to action, on the one hand, and reality on the other (cf. above, concerning knowledge).
It is true that what people say and what they do all too often diverges, but that does not mean that what they believe and what they do diverges as well. For saying something is one thing, believing it is another. It is only a sincere utterance that allows an inference to a belief (cf., Grice’s Maxim of Quality). This is the gist of what is known as “Moore’s Paradox” (which Wittgenstein regarded as Moore’s most important contribution to philosophy): an utterance of the form “p but I don’t believe that p” may very well be true, but it cannot be sincere.
So, the crux of examples about a person saying one thing and doing another concerns that person’s sincerity: that cannot be taken for granted, but has to be argued for.
That many (most) of our beliefs are unreflected, unanalysed, is important here: in fact, explicitly held beliefs seem to be the exception, rather than the rule. And that seems to make the way a person acts the primary source for belief-ascriptions.
This observation itself raises some other questions: Can we as outside observers always derive distinct beliefs from the way a person acts? Can a person himself do this? What kinds of beliefs lend themselves to such investigation? Aren’t these more like certainties, rather than cognitive beliefs? What about the requirement that it should always be possible to explicate a belief derived from a way of acting?
Consider the case of the egalitarian who does not act upon his beliefs: would that be a case of a person being wrong about what he believes (he thinks he believes in equal rights, yet his failure to act shows that he does not), or is it rather a case of failing to act upon one’s beliefs? Given his beliefs the egalitarian also believes that he should act in a particular way in particular situations. That he does not, might also be attributed to weakness in character, or some other circumstance, and not to a mistake about the beliefs he holds. Of course, if he consistently acts in opposition to what we would expect on the basis of the beliefs he confesses to, we would start to doubt. And his sincerity would be the first thing we would doubt.
Note finally that the phrase “I believe” itself can be used in a variety of ways: as a statement of a firmly held conviction; but also as a way of indicating that we are not sure (yet), that we actually leave room for the opposite. “I think” is more like “I believe” in the latter sense than in the former. And for Wittgenstein’s argument we do indeed need the former sense.
Martin Stokhof from: EOL Discussion Board date: October 2002
Theodore Schatzki’s analysis of dispersed and integrative practices implies that normativity arises, not at the higher order level of rules, or practices, or institutions (at least not exclusively), but at the very basic level of individual interaction with the environment. This view is reinforced by various analyses of know-how and expertise.
What is interesting to note, especially with regard to Davidson and Gadamer, who actually are in the same boat here, is that uninterpreted content then plays a key role. We need such content in order for normativity to have a basis on which the language-based practices may build.
For Gadamer this is anathema: all experience is linguistic and exists only in and through language. For Davidson it presents a problem too, although it may not be immediately obvious that it does. For doesn’t Davidson avail himself of a primitive causal relationship between the world and us? And doesn’t he reject any form of mediation (linguistic or otherwise) between ourselves and the world?
But unmediated content is not the same as uninterpreted content as we use the phrase here. For in Davidson’s view our causal interaction with the world results in beliefs, and beliefs and (sentence) meanings are indistinguishable in terms of structure and content. So Davidson’s unmediated content is highly structured, in such a way that it is immediately expressible (given a suitably expressive language, of course): definitely not the uninterpreted content of everyday expertise. In fact Davidson seems committed to the same kind of linguistic view on experience that Gadamer embraces explicitly.
Martin Stokhof from: Aantekeningen/Notes date: 21/11/2006
The point seems to be this: given that we have a shared ontology due to the application of the Tarskian framework (as the theoretical framework in which we formulate concrete theories of meanings for concrete languages) and charity, which implies shared beliefs, why don’t we have a shared vocabulary and a shared theory of reference concerning this vocabulary?
The question boils down to what follows from the assumptions mentioned about reference. Let’s start with the use of the Tarskian framework. For Davidson this follows from the assumptions he makes concerning the nature of meaning (extensionalism) and the function of a semantic theory (explanation of competence); cf., ‘Truth and Meaning’ for the details. So his view is that if we want a theory of meaning for a language, it has to have that particular form. And that in its turn presupposes that in any language we find a shared logical machinery, consisting of propositional connectives, quantificational apparatus (and the basic distinctions brought along by that) and the logical rules governing their behaviour. But that is all, in particular it does not involve any assumptions concerning the reference of the non-logical vocabulary. Of course, reference is used in stating the Tarskian truth theory, but there it is used only as an auxiliary notion. What the theory defines, or accounts for, is (our knowledge of) truth conditions of sentences. It does so using the auxiliary notion of reference of sub-sentential expressions, but, and this is the important point: it does not define truth on the basis of an independent account of reference. And as Davidson points out, any account of truth along these lines leaves reference essentially underdetermined (a point also made by Putnam and several others): two sentences may have the same truth conditions under different assignments of references to their sub-sentential expressions. 
(If the sentences are from different languages, we have two translatable sentences which do not allow us to infer any shared reference; if the sentences are from the same language, this means that synonymy does not guarantee unique reference; and as a special (but most important) case we have that it is possible to assign to one and the same sentence the same truth conditions based on attributions of different references to one or more of its component expressions.)
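The underdetermination point can be made concrete in a toy model (a minimal sketch; the language, its names, and the extensions below are invented for illustration and are not drawn from Davidson’s text). Following the familiar permutation construction, we take a two-element domain and two reference assignments that agree on the truth value of every sentence while disagreeing on the reference of every name:

```python
# Toy permutation argument: two interpretations that assign the same
# truth conditions to all sentences under different references.
# (Names 'a', 'b' and predicate 'P' are hypothetical.)

domain = {1, 2}

# Interpretation 1: 'a' refers to 1, 'b' to 2; 'P' holds of 1.
ref1 = {'a': 1, 'b': 2}
ext1 = {'P': {1}}

# Interpretation 2: references permuted (1 <-> 2), with the extension
# of 'P' adjusted under the same permutation.
perm = {1: 2, 2: 1}
ref2 = {name: perm[obj] for name, obj in ref1.items()}
ext2 = {'P': {perm[obj] for obj in ext1['P']}}

def true_in(sentence, ref, ext):
    """Evaluate an atomic sentence (pred, name) under a reference assignment."""
    pred, name = sentence
    return ref[name] in ext[pred]

sentences = [('P', n) for n in ('a', 'b')]

# Same truth value for every sentence under both interpretations...
assert all(true_in(s, ref1, ext1) == true_in(s, ref2, ext2) for s in sentences)
# ...although no name keeps its reference across them.
assert all(ref1[n] != ref2[n] for n in ('a', 'b'))
```

Since any permutation of the domain can be compensated for in the extensions of the predicates, truth conditions alone never single out a unique assignment of references, which is the point made in the paragraph above.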
Then charity. Notice that charity, too, concerns sentences, not words. In interpretation, i.e., in actually trying to construct a Tarskian theory of truth for a given language, the empirical data we start from are utterances made in a context (situation). Given the assumption that truth plays the same role for the speakers of the language we are interpreting as it does for us, these data can be viewed as utterances of sentences held true in that situation. A specific (to be determined!) subset of those will be utterances of sentences held true about that situation, i.e., utterances of sentences that are held true on the basis of certain aspects of the situation in which they are uttered. Each and every sentence uttered is supposed to express a belief. So certain sentences uttered in a situation express beliefs of the speaker about that situation. This is where charity comes in: it allows us to proceed on the assumption that the belief that the speaker expresses in these sentences (but remember, we do not know of every sentence in advance whether it belongs to this set) is a belief that we hold about the situation as well. This is supposed to give us enough common ground to work our way into the language. But, and this is the important point, just as two sentences can have the same truth conditions, yet not share reference of sub-sentential expressions, beliefs too can be shared without a shared set of objects, properties and relations that can be attached in a unique way as references to the expressions that occur in the sentences that are used to express these beliefs.
So neither the Tarskian framework nor charity allows us to venture beyond the level of sentences/beliefs and be confident that we will return with a unique ontology in this particular sense. But for Davidson the conclusion is not that hence there is a relativity of ontological schemes, but rather that the idea behind it, viz., that the beliefs we hold and the meanings of the sentences we use to express those beliefs are built up from referents in this particular way, is misguided in the first place. In combination with the holistic nature of language and belief that Davidson clearly endorses, this assumption would lead to relativism. But there is, according to Davidson, no reason to make that assumption at all.
Martin Stokhof [from: Radical Interpretation Discussion Board date: 10-2003]
On the relation between experience and theoretical explanation
The ultimate justification of a theoretical explanation resides in the fact that it changes our experience. It allows us, not only to see things differently, but better: for our understanding of things is in the way we experience them. In that sense theories are a means, not an end in themselves.
A good example seems to be provided by certain mathematical theories, in particular geometrical ones, that, when really understood, change our ways of perceiving objects and their relationships. Or rather, allow us to perceive them differently. It is this added freedom of perception that deepens our understanding: things are not just like this, they are much more.
Similarly, mythologies, mystical explanations, good philosophy. (Is there something of this in Wittgenstein’s remarks on Frazer?)
But, of course, this will work only if we realise that a new way of looking at things, a new way of experiencing them, is just that: one among many possible ways. The crux of the matter is that we should not exchange one view for another, but ‘collect’ them, exploit them, amplify them. Of course, we cannot hold onto all of them at the same time (in much the same way that we cannot entertain two different sets of certainties). Which means that we should cultivate flexibility and change, train ourselves to switch back and forth, enjoying the distance in between.
To come to grips with the relation between experience and theory (in a wide sense) seems a crucial issue: experience alone will not do (pace the claims of sensualism) because experience never comes only by itself. It is always accompanied by feelings, thoughts, emotions that transcend it. (Even when we are not aware of this. It shows itself in how we act upon our experiences.) It is in this sense that we are not a database of experiential input plus some calculating device. We need theory, not to knit the experiences together, but to understand that which holds them together in the first place: our own selves. But understanding ourselves in that way is not enough: the understanding remains sterile if it is not tested again in new experiences, or rather, in new ways of experiencing.
Another aspect: certain types of theories, say particle physics, or neurophysiology, are hard to fasten onto everyday experience. We may know that what looks like a solid material object is nothing but a swarm of particles, but we cannot experience it in that way. Similarly, we may know that certain feelings arise from certain stimulation patterns in the brain, made possible by the production of certain neurotransmitters, but that is not an account of what we experience. This, too, points towards a distinction between the experiential aspect, or content, of an experience, and the accompaniments thereof. Experience is that totality, not one of its components. And theories such as those indicated above mainly pertain to the ‘data aspect’ of experiences.
Martin Stokhof [from: Aantekeningen/Notes date: 22-08-1998]
The following seems a very plausible conjecture: it is the meaning of the text itself that provides the necessary normative constraints on its interpretation. But there are a few problems with that.
First of all, it makes interpretation very much a factual, ‘realistic’ concern: independent from interpretations and interpreters, there is such a thing as ‘the meaning’, and the task of interpretation is to discover that. Once we’ve done that, the task is fulfilled and there is no more need for interpretation. But that doesn’t sit very well with Gadamer’s insistence that interpretation is an on-going affair, and moreover, one that not only constantly changes the views of the interpreter, but also the meaning(s) of what is interpreted: the ‘fusion of horizons’ is a temporary equilibrium, brought about by adjusting both the perspective of the interpreter and that of the text.
Secondly, if the objective meaning of the text itself were to play this role, this wouldn’t fit into an interpretational scheme that follows the hermeneutic circle. Recall that if we follow the structure of the hermeneutic circle we need to compare two things that both are different from this postulated objective meaning of the text itself, viz., the fore-projection, i.e., our ‘initial hypothesis’, and the result of our (first) reading. The problem was that we can compare these two without any problem, but that in order to evaluate the outcome of that comparison, we need a standard, something normative. Now suppose the objective meaning were to play that role: how would that help? If we knew that this is the objective meaning of the text, we wouldn’t need any interpretation to begin with. And if we do not know it, it fails to hold any normative authority.
The essence of the problem is that the hermeneutic circle, precisely because it is a circle, involves only entities of the same kind (meanings). And without reference to any external source of normativity, none of these can play the required normative role, on pain of the entire circular structure collapsing into what is basically a realistically understood concept of objectivity.
Martin Stokhof [from: Radical Interpretation Discussion Board date: 11-2006]