Within the analytical tradition the starting point of investigations into the problem of interpretation is two-fold: language and the world (in various senses: objects with properties and relations; or states of affairs; or behaviour) as two independent parameters. Meaning arises, so to speak, from the interaction between these two. From a Heideggerian point of view this approach falls victim to a philosophical picture that is wrong from the start: the distinction between subject and object, with its ontological and epistemological ramifications. Heidegger’s own picture is a reversal of this: it is only as an abstraction (and a distortion) that we arrive at the subject–object dichotomy; what is ontologically fundamental is Dasein in a web of practical significance. Language then comes in at a later stage (at least in Sein und Zeit; there is a significant shift in the later work) as an expression of Interpretation, which in its turn is a particular way of Understanding.
The later Wittgenstein exemplifies yet another approach: he is concerned with language in a concrete sense, and accepts it as a given. But unlike the analytical tradition he does not accept the traditional picture of subject versus object. His emphasis on our ways of acting as the rock-bottom on which language rests refers to the same practical dimension as Heidegger’s Dasein, yet without the underpinning of a formal ontology. He creates the possibility of his own position by coming up with a view on language that is much more congenial to the practical turn, whereas Heidegger has to make a distinction between linguistic interpretation and ontological Understanding because he employs a notion of language that seems derived straight from classical, Aristotelian sources. (In fact, Heidegger’s familiarity with the Scholastics may be relevant here, too.)
Martin Stokhof from: Interpretation date: fall 1992
In the ‘Afterthoughts’ to ‘A coherence theory of truth and knowledge’ Davidson makes clear that he agrees with Rorty that his concept of ‘mild correspondence’ is better characterised as a form of pragmatism (although, he hastens to add, not in the particular sense of Rorty …). In later papers Davidson is even more explicit and argues that truth is such a fundamental concept that any attempt to define it is bound to fail. Cf. his 1990 paper ‘The structure and content of truth’, and the 1996 paper ‘The folly of trying to define truth’. What a truth theory such as Tarski’s does, Davidson maintains, is provide us with minimal conditions that our use of the concept should satisfy. In that way a truth theory is part of a broader theory of rationality, but, and this is crucial for the (later) Davidson, truth cannot be defined, and hence, neither can rationality. These concepts can be circumscribed, we can lay down rules for their proper use, but strict definition is impossible.
It will be clear that from that perspective any theory that appeals to ‘truth makers’ is on the wrong track. It is already quite far removed from the position that Davidson takes in ‘A coherence theory of truth and knowledge’ and he has gone further in the opposite direction thereafter. For a proper appreciation we need to take into account the role a truth theory is supposed to play. It is not to define truth, but it is to lay down rules that we are bound to when dealing with truth (as long as we want to count as being rational in our dealings with it). From that perspective, assigning truth conditions to atomic sentences is not making any claims about the relation between such sentences and particular aspects of the world. That part of the truth ‘definition’ (scare quotes used deliberately) only serves to provide a basis for the clauses that deal with connectives and quantifiers (and perhaps other logical constants). It is in the latter clauses that real constraints are being formulated: to be rational is to accept that A and B is true iff A is true and B is true; it is to reject A if and only if one accepts not-A; and so on. But in the atomic case no such constraints are forthcoming. At this point a theory of truth is unhelpful, and something else should take its place. And that, according to Davidson, is to be found in a pragmatist outlook.
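The asymmetry between atomic and compound clauses can be made vivid with a toy sketch (my own illustration, not anything Davidson provides): in a Tarski-style truth ‘definition’ for a small propositional language, the atomic clause is a bare lookup that stipulates rather than constrains, while the genuine constraints live entirely in the clauses for the connectives.

```python
def is_true(sentence, valuation):
    """Evaluate a sentence built from tuples:
    ('atom', p), ('not', s), ('and', s1, s2)."""
    kind = sentence[0]
    if kind == 'atom':
        # Atomic case: no constraint is expressed here; the clause simply
        # hands back whatever the valuation stipulates for the atom.
        return valuation[sentence[1]]
    if kind == 'not':
        # A real constraint: one accepts not-A exactly when one rejects A.
        return not is_true(sentence[1], valuation)
    if kind == 'and':
        # A real constraint: 'A and B' is true iff A is true and B is true.
        return is_true(sentence[1], valuation) and is_true(sentence[2], valuation)
    raise ValueError(f"unknown connective: {kind}")

v = {'p': True, 'q': False}
print(is_true(('and', ('atom', 'p'), ('not', ('atom', 'q'))), v))  # True
```

Nothing in the atomic clause tells us anything about the relation between ‘p’ and the world; all the work a rational user of ‘true’ is bound to is done in the recursive clauses.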
Note that this concerns the role of a truth theory vis à vis our concepts of truth and rationality. The case of a theory of meaning is different: when I am given truth conditions for atomic sentences of a language that I do not understand, as part of a theory that is formulated in a language that I do understand, I do learn something. But I learn something about meaning, not about truth. This will be clear from the fact that no information is forthcoming if the theory is formulated in a language I do not understand, or if the theory is about and formulated in the same language.
Note that the empirical content of, e.g., my belief that the entity over there is a dog does not derive from my trusting my senses, it is reality itself that is the cause of me having that belief, and thus also of what is the content of the belief. Trust in my senses comes in, in exceptional cases, if this belief conflicts with some other beliefs and if one of the ways of removing the conflict is to assume that at that specific point my senses were not, as they usually are, ‘transparent’. Recall that Davidson does not want any epistemic intermediaries, neither language, nor conceptual schemes, nor perceptual capacities. Thus, there must be a reason for doubting the belief that the thing over there is a dog, and this reason will always be another belief (or set of such). Why and how a belief that is caused by reality may shed doubt on another belief, that by default we assume is also caused by reality, is a complex question that has different answers in different circumstances. This is where rationality of procedures kicks in, and although some of these procedures are ‘hard wired’ in our language (logical consistency, and so on), others may be of a more acquired and changeable nature, reflecting changes over time in, e.g., what counts as proper scientific procedure. I think that Davidson, although he focusses on the former, should be able to allow for the latter as well.
Martin Stokhof from: Radical Discussion Board date: spring 2020
On Gadamer’s move away from the materiality of text. What is interesting in this abstraction from the text as a material object to the text as a linguistic object is where it ends. For Gadamer what requires interpretation of a text is abstracted from its materiality, but it seems that it does remain linked to a particular language. That is to say, what is interpreted does not transcend the boundaries of a particular language, for example via translation into other languages. For translation cannot be independent of, let alone prior to, interpretation. If that were the case, then we would end up in something like the realm of Fregean senses, which do not depend on any form of expression at all, which can be grasped ‘as is’, and for which therefore interpretation simply is not an issue. A consequence of that, it seems, is that meaning cannot be ontologically distinct from language and still be something that calls for interpretation.
Is what Gadamer calls ‘deciphering’ of a text a factor in the move away from its materiality? That depends a bit on what Gadamer means by that. If deciphering a text means establishing the text as belonging to a particular language and as thus having an initial meaning in that language, then it seems that deciphering is a process that establishes a starting point for hermeneutic interpretation, quite independent from considerations regarding materiality. Of course, materiality might come in in the deciphering, and even prior to that, in establishing something as a text as such (independent of establishing it as a text written in a particular language), but those are considerations that are not germane to the issue.
As for the evaluation of Gadamer’s move away from materiality: it seems that we need to balance two factors: the contextuality of texts, and transcendence of that same contextuality. It is, of course, evident that texts are bound to contexts. But at the same time, texts are also the primary vehicles we have for transcending contextual boundaries. This is not only what makes texts efficient as a means of communication, it is also what makes them objects of interpretation. I think we are right to criticise Gadamer for looking only at the latter and disregarding the former, but that does not mean that the latter does not also exist.
Another question that arises is what exactly the boundaries of materiality of text are. In the old days that was clear: ‘paper and ink’. But think of the transition from manuscripts to print: from something that is unique, or exists in only a very limited number of copies, to something of which there is no real ‘original’, but only thousands and thousands of exemplars. In that transition, we see materiality becoming a distributed property, and one that is no longer unique (hardbound versus paperback versions, editions with or without the reference material, …). And then consider the electronic revolution, ‘print on demand’: what kind of materiality of the text does that represent?
Martin Stokhof from: Aantekeningen/Notes date: 25/03/2021
Some distinctions to keep in mind when dealing with Gadamer’s views on hermeneutic interpretation, which are puzzling and challenging at the same time. One distinction is more or less like that between possibility and necessity. It seems unlikely that Gadamer would deny that it is possible to read/interpret a text with the explicit purpose of trying to recover/understand its author’s intentions. There is such a thing as literary biography, intellectual biography, and obviously individual intentions and other facts about an individual’s (historical, social, psychological) situation are relevant there. However, what Gadamer would deny is that that is the point (the ultimate, true point) of the text, the real challenge that it presents to us. One way of understanding that is by linking it to what I guess is indeed a fact, viz., that dissociated from its individual context a text may indeed present us with several alternative interpretations. Or to put it differently, that different interpreters (or one and the same interpreter at different stages of the interpretation process) may come up with different interpretations, none of which can claim to be the one, true (correct) interpretation. And notice that this seems to hold even in the face of a factually correct (re)construction of the meaning intended by the author.
This introduces a second distinction: that between the particular and the general. It is quite right to state that the human situation (history, biological and psychological constitution, etc.) provides the framework within which we are able to interpret and understand. However, does that really constitute an argument against Gadamer? The aspects of the human situation that we need to take into account are general, not particular. Gadamer would have no problems with that, while still maintaining that hermeneutic interpretation concerns the text and the text only. To regard something (patterns of ink on paper, scratches in a piece of marble, activated pixels on a screen) as a text means to regard it as a product of human activity, which immediately brings the human situation into play. It is only when we argue that particular aspects of that situation (individuated along the lines of persons, historical periods, social strata, etc.) need to be taken into account in order for interpretation to be possible at all (cf. the first distinction) that we have a point against Gadamer, it seems.
Then there is the distinction between literary and non-literary texts, which might be relevant for this issue. Take scientific works. Would we agree that for them the intentions of the author tend to be less relevant for a proper understanding? In the case of a scientific work it seems quite natural to make a distinction between the content, i.e., the meaning of the text itself, and the author’s intentions, historical circumstances, etc. Of course, the latter are relevant for understanding, e.g., the historical development of a scientific discipline, or the intellectual development of an individual scientist. But ordinarily we consider the fact that Newton was a devoted alchemist as irrelevant for our understanding of, e.g., his first law of motion. Or take the so-called ‘frame propositions’ in the Tractatus that proponents of the resolute reading make so much of. Do these tell us something about the author’s intentions? Yes, definitely. But suppose that, e.g., the introduction of the Tractatus were missing: would we then be unable to get the message from the actual text? It seems not. To make the point in a different way: could it not be that the introduction of a text actually contained a mistake, not about the author’s intentions, but about what follows from the main text? That seems possible (albeit perhaps unlikely). But that means that the meaning of the text is at least distinct (if not independent) from the author’s intentions.
Of course, we could admit this, but only for a particular kind of text, viz., scientific ones. With regard to literary texts, the interesting question is whether the same holds (should hold?) if we regard them as sources of knowledge, such as ethical know-how, psychological insights, etc. If (and in so far as) literature is to be regarded as a source of knowledge, it should be able to teach us something over and beyond the concerns of the individual author.
Finally, yet another distinction that we need to keep in mind when assessing Gadamer’s views, viz., that between interpretation of texts and interpretation in general, including spoken conversation. One might argue that in actual conversations the point is one of getting intentions across. That is why non-verbal clues and triggers are crucial in such situations. Gadamer is actually well aware of this fact: it is not without reason that he describes writing as ‘self-alienation’. We all share the experience of reading something we have written a while ago and not recognising it as ‘our own’. We know we have written it, but we are not able to identify with the meaning of the text. Here intention and meaning have drifted apart. That never happens, it seems, in ordinary conversation. (Except in rare cases, e.g. of extreme fatigue, where we can actually ‘hear ourselves speaking’.) If this distinction holds, it does seem to indicate that Gadamer’s hermeneutic interpretation is concerned with a different type of object.
Martin Stokhof from: Radical Discussion Board date: November 2002
Crucial for Davidson’s account of radical interpretation is that although we can distinguish verbal from non-verbal behaviour, we cannot separate them. The perennial liar can be found out because what we say is linked to what we do and because what we do is to some extent ‘shared beyond our will’ (i.e., we cannot determine that at will). That some instance of verbal behaviour is a lie will then be revealed by some particular instance of non-verbal behaviour being incongruent with ours where ‘by assumption’ it should be the same. If a perennial liar were able to completely separate their verbal from their non-verbal behaviour, and hence be congruent in what they do, yet lie in what they say, they could indeed not be found out.
Davidson on rationality and the transcendental status of Charity
What is it that strikes one as problematic about Davidson’s appeal to rationality? Is it the apparently metaphysical status of the concept as it plays a role in Davidson’s work, or does it concern the content of the concept that he uses? (In the latter case, how does that differ from the kind of appeal to rationality that is inherent in, e.g., Popper’s approach in terms of falsifiability? Isn’t that another application of a conception of rationality?)
As for the qualms that many people have about the Kantian, transcendental status of Charity: it is certainly true that one can raise objections to the kind of transcendental analysis that Davidson’s use of Charity seems to instantiate. But we should ask ourselves whether the alternative theories are really theories about the same phenomenon, or whether perhaps a shift takes place when we drop the appeal to transcendental notions. For example, one may argue that a consequentialist in ethics ‘really’ (but the use of ‘really’ should be a warning sign!) is concerned with a different notion of the good than a deontologist. Likewise, if we talk about interpretation on the assumption of the possibility of an external, independent identification of what counts as (utterances of) the same language, we’re dealing with not quite (another red light flashes) the same problem as Davidson. So what seems to be needed (but at the same time seems very hard to get) is an a priori, non-theory dependent characterisation of the phenomenon in question.
Question: suppose it were clear what exactly Davidson’s conception of rationality was, and suppose it would be one with which we agreed, would that make a difference? In other words, is it the lack of perspicuity of some of the central concepts that is bothering us, or is it the way in which they are used?
Observation: Davidson’s goal is not to come up with empirical theories in the sense in which scientific theories are empirical. (Cf. the discussion in ‘The Second Person’ about the abstract nature of the concepts of ‘language’, ‘meaning’, etc.) If anything, his goal is to come up with models for empirical phenomena that explain, not their actual ‘ins and outs’, but, one could say, their ‘possibility’.
That actually leads to a second question (one that is not restricted to Davidson’s analyses), viz., what it is that we do when in philosophy we analyse something that is also a straightforward empirical phenomenon. To that question there are many answers, one of which is that of transcendental methodology. And the next step is then to determine how empirical observations are relevant for assessing these philosophical answers.
Martin Stokhof from: Radical Interpretation Discussion Board date: fall 2006
Davidson’s analysis in ‘A Nice Derangement of Epitaphs’ marks a goodbye to the idea of a compositional (recursive) theory of meaning. Why? To answer this question we must answer another one first: Why did we want such a theory in the first place? The answer seems to be because we wanted an a priori characterisation of semantic competence, i.e., an account which deliberately disregards factual use. For such an abstract, non-situated analysis the potential infinity of language constitutes a major problem. In other words, it is the assumption of a pure, individual-based language which creates the problem, for which compositionality provides a solution (one that is intuitive, though arguably not the only one possible). If we drop this assumption this argument for compositionality at least vanishes (there may be other ones). Of course, another issue takes its place: How are we able to create ever new passing theories? Here Wittgenstein’s rule-following considerations come to bear directly on Davidson’s approach. It seems that Davidson has managed to back himself into a corner by not dropping the individual bias: meaning tends to get locked up inside each individual speaker, and a serious threat of semantic solipsism arises.
One question that keeps coming back is what a practice based approach (such as Wittgenstein’s and Schatzki’s) has to offer over and above what Davidson’s appeal to Charity accomplishes. And there are good reasons to ask this question, if only because the Charity principle does seem to lend itself to formal modelling, unlike a practice based view.
The answer can be given in two ways (but it is basically the same answer): ‘uninterpreted content’, and ‘learning’. Participation in a practice originates from a point outside the practice, and a characterisation of what it means to be a participant minimally has to allow for an account of how one becomes one. When the practice is linguistic (one that involves interpretation), this involves an account of the transition from the non-linguistic to the linguistic realm, and hence includes a specification of the role of uninterpreted content.
On both counts the Davidsonian approach does not seem to do well: the participants in radical interpretation are autonomous and fully competent, but how they became that way is left in the dark. (Meredith Williams in ‘Wittgenstein and Davidson on the sociality of language’ also voices criticism along these lines.) In particular, the essentially linguistic nature (in the sense of being linguistically expressible) of what they bring to bear on the task (beliefs, desires, and other attitudes) seems to be an obstacle: there is no role for uninterpreted content here.
Another point that speaks in favour of the learning approach is that it suggests that it is not just the transition from the pre-verbal to the verbal stage that is at stake, but that learning is a continuous process. Hence, ‘teacher’ and ‘pupil’ are really indications of functions, of roles, and even during our days as ‘competent’ speakers of a language we play both roles. If we encounter a new phrase, or one that is used in a new way, we may adopt the role of ‘pupil’ and try to learn what the new meaning is. Or we may adopt the ‘teacher’ role and try to correct the other’s usage. What role we choose depends on a number of factors: our concern with successful communication in this instance, our estimate of the abilities of the other language user, social relations, our emotional attitudes towards the other, and so on.
Martin Stokhof from: Interpretation date: fall 2009
If we compare the picture that we can extract from On Certainty with Davidson’s view (as expounded in, e.g., ‘A Coherence Theory of Truth and Knowledge’), the important difference seems to be this, that Wittgenstein introduces the layer of certainties in between our epistemological practices and external reality, whereas Davidson construes the relation between belief and reality much more directly. The fact that certainties are categorically different from beliefs and other epistemological entities (despite the fact that over time, and between communities and/or individuals, what counts as what may change) in combination with the plurality of systems of certainties, makes room for a measure of (conceptual) relativism that Davidson seeks to avoid. His way of doing so is to take the core of our belief system to be as stable (over time, over communities and/or individuals) as is the causal influence of external reality on humans. (There is more room for differences in the ‘superstructure’ of complex beliefs that are not directly caused by our interactions with reality, but that is something that Davidson does not pay that much attention to).
This has also consequences for how truth works in both perspectives. In On Certainty truth is first and foremost a concept that operates within a particular epistemological practice, that itself is made possible by a particular system of reference consisting of certainties. (That Wittgenstein construes it in more or less verificationistic terms is an additional, independently motivated feature.) The relation between external reality and certainties is not one of determination, but of constraint. This is the source of plurality, and it also implies that certainties are not upheld because they are true. The fact that different certainties can be upheld at the same time also testifies to that, of course. Nevertheless, certainties differ in terms of their entrenchment and some are so basic to our form of life that it does not seem that much of a stretch to call them ‘true’, admittedly in quite a different sense. In Davidson’s perspective we also have two distinct properties. ‘Mild correspondence’ is the notion of truth that links beliefs (and hence meaning) to the world. It is what the causal influence of the world on us results in. Internally, i.e., within our actual epistemological practice, truth then takes on a different form, that of coherence.
Martin Stokhof from: Aantekeningen/Notes date: 22/03/2012
On language as the medium of hermeneutic experience
Gadamer, in Truth and Method: “Interpretation […] is the act of understanding itself, which is realized—not just for the one for whom one is interpreting but also for the interpreter himself—in the explicitness of verbal interpretation.” This is the claim that language is the medium of hermeneutic experience. (And all experience, of whatever kind, is also hermeneutic.) Gadamer insists that the absence of explicit linguistic formulations does not constitute a counterexample: all non-linguistic demonstrations in fact rely on language.
The general claim, that all understanding is linguistic, i.e., has language as its medium, definitely sounds counterintuitive, since we do intuitively feel that there are things that we understand but that we can not ‘put into words’.
One thing to bear in mind, though, is the intimate relation between understanding and interpretation: all understanding is interpretation, and all interpretation results in understanding. To the extent that this sounds wrong, it might reflect a hidden assumption about the existence of something like ‘ultimate’, ‘final’, ‘true and complete’ understanding, a kind of understanding that transcends the kind of understanding that interpretation results in. That, Gadamer claims, is an illusion.
That means that from a Gadamerian point of view our problem in fact reduces to the question: ‘Is all interpretation verbal (linguistic)?’ Again, we need to take that in a broad sense, i.e., without any presupposition of actual verbalisations. (In an analogous fashion, Gadamer argues that we need not worry about linguistic diversity.)
The claim then seems to come down to this: anything that is proposed as an interpretation is in principle subject to questioning, argumentation, justification. This must be so, for interpretation itself is the result of such questioning, etcetera. Asking questions, providing answers, disputing and justifying them, all this is done in language (or in a medium that presupposes language), and inasmuch as there is something that fails to be subject to these language-based procedures it can not be part of the interpretation itself (and hence can not constitute (part of) understanding).
It’s not that Gadamer would deny the existence of non-verbalisable phenomena, the claim is that as such they are not part of interpretation and hence not part of our understanding of something. (Note that this comes remarkably close to Wittgenstein’s analysis of the ‘tip-of-the-tongue’ phenomenon in Philosophical Investigations, II.xi.)
To what extent is this a feasible position? It seems it all centers around the question whether we can actually point to the existence of a kind of understanding that meets two requirements: it is essentially non-linguistic; it is somehow connected to the kind of understanding that is linguistic.
The second requirement is the really problematic one, I think. But it also seems justified. For without it, the dispute would in fact be merely verbal: Is there something that is not linguistic understanding? Of course there is. Can we call it ‘understanding’? Well, yes, but what would be gained by that? It is only when we can point to relationships between the two phenomena, that we really confront Gadamer’s position.
So the question is: Can we do that? What would be good examples?
Martin Stokhof from: Radical Interpretation Discussion Board date: fall 2004
The following is a curious catch in Davidson’s account of metaphor. If a metaphor has only literal meaning, then what exactly does it mean to say that a speaker uses a metaphor? What a speaker uses, in any case, is a sentence, and, according to Davidson, any sentence only has literal meaning. So, ‘metaphor’ as it is used here cannot be a predicate of sentences, but at best expresses a property of utterances of sentences, i.e., of use. ‘To use a metaphor’ then must mean: ‘to use a sentence metaphorically’. But is that not at odds with Davidson’s insistence that metaphor is not a matter of speaker’s intention?
Martin Stokhof from: Interpretation date: fall 1998
What Charity does is to assume agreement (in beliefs). That by itself does not automatically assume truth (of those beliefs), it does so only in conjunction with the assumption that we ourselves have the right beliefs. And there is the conundrum: we know that some of our beliefs are false, but we do not know which these are (otherwise we would not hold them). So indirectly we will assume that others, even if we assume they agree with us, will not be right all the time, but for the same reason that we can not attribute false beliefs to ourselves without giving them up, we can not attribute false beliefs to others without giving up the assumption of agreement.
Martin Stokhof from: Interpretation date: fall 2008
‘Rooks console each other.’ (NRC Handelsblad, 23/01/2007) Question: can a rook console me? Can I console a rook? If in both cases the answer is ‘No’, then we have two isolated subframes, each characterised by a ‘console’-relation that has certain (formal and material) properties within that subframe, but which is limited to that subframe. The analogy with (in)translatability is obvious. The question that arises then is this: What reasons do we have (could we have) to call two such completely isolated relations both an ‘x-relation’? In what sense are these two the same relation?
Martin Stokhof from: Aantekeningen/Notes date: 23/01/2007
The mystery of interpretation is not that it would be impossible: we are not able to fly, nor do we have the ability of bi-location, and no-one wonders why we don’t. Nor that it would be possible: we breathe, walk, and again we take this for granted (which does not mean that we can not investigate the actual physiology of these abilities). Regarding interpretation the real problem seems to be that it is both possible and impossible. That interpretation comes naturally to us — understanding somehow happens to us without us knowing how —, and that it is an impossible task — no matter how much conscious effort we put into it we are in the end always defeated. The otherness of the other makes us strangers to ourselves, yet our familiarity with ourselves makes us understand them. Conscious and mechanical, possible and impossible: interpretation is always all of that, but none of it completely.
Part of the argumentation in Davidson’s ‘On the Very Idea of a Conceptual Scheme’ centers around transitivity of the translation relation. The point Davidson is trying to make here, I think, is that if there exists a translation from L1 into L2, that translation itself can be stated in either L1 or L2, and consists in an accurate mapping of sentences from L1 onto sentences of L2. If the translation is stated in L1 then, by assumption, it can also be stated in L2 (after all, there is a mapping from all of L1 into L2, i.e., also of the L1-sentences describing the mapping). So we can safely assume that if there is a translation from L1 into L2, it can be formulated in L2. Now suppose there is a translation from L2 into L3: that, again is a mapping, in this case of L2-sentences onto L3-sentences. That mapping includes those L2-sentences that describe how to translate L1-sentences into L2-sentences. So what we have then is (among other things) a set of L3-sentences that tell us which L2-sentences are translations of which L1-sentences, and L3-translations of the L2-sentences. But that means we have a translation of L1 into L3. (More neatly: if a translation is a homomorphism from L to L’, then we know that if there is a homomorphism from L1 to L2 and a homomorphism from L2 to L3, then there is a homomorphism from L1 to L3, viz.: the composition of the two.)
Of course the above argument only works if we assume that translations are total, i.e., that they map all sentences of L onto sentences of L’ and vice versa. What if we drop that assumption? First of all we have to ask ourselves whether we are still dealing with translation in such a case. But let that pass, and suppose we have a mapping of all the sentences of L1 onto a proper subset of the sentences of L2, i.e., there are parts of L2 that have no counterpart in L1. Notice that the formulation of the translation cannot be in the latter set (for that would mean that the translation would be statable in L2, but not in L1, which is absurd). Now assume we have a translation from L2 into L3: the only way in which that would not give us a translation of L1 into L3 (by the reasoning above) is by being restricted to exactly that part of L2 that does not contain the L1-to-L2 translation. But that would mean that it really isn’t a translation of L2 as such: it must leave out a proper part of L2. Of course, such a mapping could exist, but we would lack any justification for calling it a translation. (And if we insisted on calling it that, we would actually be assuming what we set out to prove, viz., the failure of transitivity of translation.)
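The partial case can be sketched in the same toy setting (again my own illustration, with hypothetical mini-languages): a mapping out of L2 that is defined only on the part of L2 outside the image of the L1-to-L2 translation simply fails to compose, which is exactly why such a mapping hardly deserves the name of a translation of L2.

```python
# Toy illustration of the partial case: t12 maps L1 into a proper subset
# of L2; t23_rest is defined only on the *remainder* of L2, so no
# L1 -> L3 translation is induced by composition.

t12 = {"s1": "zin1"}          # L1 -> part of L2
t23_rest = {"zin2": "Satz2"}  # defined only outside the image of t12

def compose(t_ab, t_bc):
    """Compose two translations; None if t_bc misses part of t_ab's image."""
    try:
        return {a: t_bc[b] for a, b in t_ab.items()}
    except KeyError:
        return None

print(compose(t12, t23_rest))  # None: composition breaks down
```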
Note that the fact that in actual cases there are bound to be discrepancies between languages, i.e., things in L that don’t have an exact counterpart in L’, or things in L’ that lack a counterpart in L, does not really affect this line of argumentation. The central point is that the translation itself concerns such a substantial part of both languages that this by itself guarantees transitivity to a sufficient degree.
Martin Stokhof from: Radical Interpretation Discussion Board date: 11-2004
Theodore Schatzki’s analysis of dispersed and integrative practices implies that normativity arises not at the higher-order level of rules, or practices, or institutions (at least not exclusively), but at the very basic level of individual interaction with the environment. This view is reinforced by various analyses of know-how and expertise.
What is interesting to note, especially with regard to Davidson and Gadamer, who actually are in the same boat here, is that uninterpreted content then plays a key role. We need such content in order for normativity to have a basis on which the language-based practices may build.
For Gadamer this is anathema: all experience is linguistic and exists only in and through language. For Davidson it presents a problem too, although it may not be immediately obvious that it does. For doesn’t Davidson avail himself of a primitive causal relationship between the world and us? And doesn’t he reject any form of mediation (linguistic or otherwise) between ourselves and the world?
But unmediated content is not the same as uninterpreted content as we use the phrase here. For in Davidson’s view our causal interaction with the world results in beliefs, and beliefs and (sentence) meanings are indistinguishable in terms of structure and content. So Davidson’s unmediated content is highly structured, in such a way that it is immediately expressible (given a suitably expressive language, of course): definitely not the uninterpreted content of everyday expertise. In fact Davidson seems committed to the same kind of linguistic view on experience that Gadamer embraces explicitly.
Martin Stokhof from: Aantekeningen/Notes date: 21/11/2006
The point seems to be this: given that we have a shared ontology due to the application of the Tarskian framework (as the theoretical framework in which we formulate concrete theories of meanings for concrete languages) and charity, which implies shared beliefs, why don’t we have a shared vocabulary and a shared theory of reference concerning this vocabulary?
The question boils down to what follows from the assumptions mentioned about reference. Let’s start with the use of the Tarskian framework. For Davidson this follows from the assumptions he makes concerning the nature of meaning (extensionalism) and the function of a semantic theory (explanation of competence); cf., ‘Truth and Meaning’ for the details. So his view is that if we want a theory of meaning for a language, it has to have that particular form. And that in its turn presupposes that in any language we find a shared logical machinery, consisting of propositional connectives, quantificational apparatus (and the basic distinctions brought along by that) and the logical rules governing their behaviour. But that is all, in particular it does not involve any assumptions concerning the reference of the non-logical vocabulary. Of course, reference is used in stating the Tarskian truth theory, but there it is used only as an auxiliary notion. What the theory defines, or accounts for, is (our knowledge of) truth conditions of sentences. It does so using the auxiliary notion of reference of sub-sentential expressions, but, and this is the important point: it does not define truth on the basis of an independent account of reference. And as Davidson points out, any account of truth along these lines leaves reference essentially underdetermined (a point also made by Putnam and several others): two sentences may have the same truth conditions under different assignments of references to their sub-sentential expressions. 
(If the sentences are from different languages, we have two translatable sentences which do not allow us to infer any shared reference; if the sentences are from the same language, this means that synonymy does not guarantee unique reference; and as a special (but most important) case we have that it is possible to assign to one and the same sentence the same truth conditions based on attributions of different references to one or more of its component expressions.)
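The special case in the parenthesis can be illustrated with a small sketch (my own toy model, not Davidson’s or Putnam’s): permuting the referents of the names while adjusting the predicate’s extension accordingly leaves the truth value of every sentence of the form ‘P(n)’ untouched, so the truth conditions fix no unique assignment of references.

```python
# Toy illustration: two different reference assignments that yield the
# same truth values for all atomic sentences "P(n)".

domain = {"a": "rabbit", "b": "carrot"}  # reference assignment 1 for the names
ext1 = {"rabbit"}                        # extension of the predicate P

perm = {"rabbit": "carrot", "carrot": "rabbit"}    # a permutation of the domain
domain2 = {n: perm[r] for n, r in domain.items()}  # reference assignment 2
ext2 = {perm[x] for x in ext1}                     # correspondingly permuted extension

def holds(name, ref, ext):
    """Is the sentence 'P(name)' true under the given assignment?"""
    return ref[name] in ext

for n in domain:
    assert holds(n, domain, ext1) == holds(n, domain2, ext2)
print("same truth values, different references")
```

The design point mirrors the text: the truth theory constrains truth conditions of whole sentences, and any permutation-adjusted assignment satisfies those constraints equally well.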
Then charity. Notice that charity, too, concerns sentences, not words. In interpretation, i.e., in actually trying to construct a Tarskian theory of truth for a given language, the empirical data we start from are utterances made in a context (situation). On the assumption that truth plays the same role for the speakers of the language we are interpreting as it does for us, these data can be viewed as utterances of sentences held true in that situation. A specific (to be determined!) subset of those will be utterances of sentences held true about that situation, i.e., utterances of sentences that are held true on the basis of certain aspects of the situation in which they are uttered. Each and every sentence uttered is supposed to express a belief. So certain sentences uttered in a situation express beliefs of the speaker about that situation. This is where charity comes in: it allows us to proceed on the assumption that the belief that the speaker expresses in these sentences (but remember, we do not know of every sentence in advance whether it belongs to this set) is a belief that we hold about the situation as well. This is supposed to give us enough common ground to work our way into the language. But, and this is the important point, just as two sentences can have the same truth conditions yet not share the reference of their sub-sentential expressions, beliefs too can be shared without a shared set of objects, properties and relations that can be attached in a unique way as references to the expressions that occur in the sentences used to express these beliefs.
So neither the Tarskian framework nor charity allows us to venture beyond the level of sentences/beliefs and be confident that we will return with a unique ontology in this particular sense. But for Davidson the conclusion is not that there is therefore a relativity of ontological schemes, but rather that the idea behind it, viz., that the beliefs we hold and the meanings of the sentences we use to express those beliefs are built up from referents in this particular way, is misguided in the first place. In combination with the holistic nature of language and belief that Davidson clearly endorses, this assumption would lead to relativism. But there is, according to Davidson, no reason to make it in the first place.
Martin Stokhof from: Radical Interpretation Discussion Board date: 10-2003
The following seems a very plausible conjecture: it is the meaning of the text itself that provides the necessary normative constraints on its interpretation. But there are a few problems with that.
First of all, it makes interpretation very much a factual, ‘realistic’ concern: independently of interpretations and interpreters, there is such a thing as ‘the meaning’, and the task of interpretation is to discover it. Once we’ve done that, the task is fulfilled and there is no more need for interpretation. But that doesn’t sit very well with Gadamer’s insistence that interpretation is an on-going affair, and moreover one that not only constantly changes the views of the interpreter, but also the meaning(s) of what is interpreted: the ‘fusion of horizons’ is a temporary equilibrium, brought about by adjusting both the perspective of the interpreter and that of the text.
Secondly, if the objective meaning of the text itself were to play this role, this wouldn’t fit into an interpretational scheme that follows the hermeneutic circle. Recall that if we follow the structure of the hermeneutic circle we need to compare two things that are both different from this postulated objective meaning of the text itself, viz., the fore-projection, i.e., our ‘initial hypothesis’, and the result of our (first) reading. The problem was that we can compare these two without any problem, but that in order to evaluate the outcome of that comparison we need a standard, something normative. Now suppose the objective meaning were to play that role: how would that help? If we know that this is the objective meaning of the text, we wouldn’t need any interpretation to begin with. And if we do not, it will fail to hold any normative authority.
The essence of the problem is that the hermeneutic circle, precisely because it is a circle, involves only entities of the same kind (meanings). And without reference to any external source of normativity, none of these can play the required normative role, on pain of the entire circular structure collapsing into what is basically a realistically understood concept of objectivity.
Martin Stokhof from: Radical Interpretation Discussion Board date: 11-2006