What more is required of a belief, besides being justified and true (JTB), if the belief is to count as knowledge? In my view, at least two further conditions are required: the belief must meet the conditions of safety and adherence.
Safety is popular these days: it has been defended by distinguished epistemologists such as Duncan Pritchard and Timothy Williamson. But adherence – the fourth condition that Robert Nozick imposed on knowledge – has few defenders. Most of the philosophers who have discussed adherence have rejected it. In this post, I defend adherence against its detractors.
First, let me explain how I understand adherence. Let us focus on a case C1 in which a believer believes a true proposition p1 in a doxastically justified or rational manner. Then this belief “adheres to the truth” if and only if every normal case C2 that is sufficiently similar to C1 with respect to what makes the belief rationally held in C1, and with respect to the case’s target proposition p2’s being true, is also similar in that the believer believes p2 in C2.
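Rendered schematically (the predicate names here are just shorthand for the clauses in the prose definition, not an independent analysis):

```latex
% Schematic rendering of the adherence condition. "Sim_rational" abbreviates
% "sufficiently similar with respect to what makes the belief rationally held
% in C1"; "Sim_true" abbreviates "similar with respect to the target
% proposition's being true".
\[
\mathrm{Adheres}(C_1) \;\leftrightarrow\;
\forall C_2 \,\bigl[\, \mathrm{Normal}(C_2)
  \wedge \mathrm{Sim}_{\mathrm{rational}}(C_2, C_1)
  \wedge \mathrm{Sim}_{\mathrm{true}}(C_2, C_1)
  \;\rightarrow\; \mathrm{Believes}(p_2, C_2) \,\bigr]
\]
```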
Adherence explains why knowledge is lacking in cases in which the believer’s environment is full of misleading defeating evidence, which by a fluke the believer never encounters. In a case of this sort, the belief does not adhere to the truth – because there are normal cases sufficiently similar to the actual case, both with respect to what makes the belief rational in the actual case, and with respect to the target proposition’s being true, in which the believer encounters this misleading defeating evidence and so does not believe the proposition.
For example, in Gilbert Harman’s “assassination” case (Thought, p. 143), what justifies the believer in believing that the political leader has been assassinated is the belief’s being based on an experience of hearing the original radio broadcast. So, cases in which the believer hears the radio broadcast, but then also encounters the denials of the original broadcast that are printed everywhere, and so ceases to believe that the leader has been assassinated, will count (in at least some contexts) as “sufficiently similar”.
To take another example, Timothy Williamson (Knowledge and its Limits, p. 62) considers a burglar who ransacks a house all night, risking discovery, because he knows that the house contains a diamond. If the burglar had merely safely believed that the house contained a diamond, the house could have been full of misleading defeating evidence, which would have led the burglar to become agnostic about whether the house contained a diamond. It is only because the burglar's belief adheres robustly to the truth that it is so unlikely that the burglar will abandon the belief that the house contains a diamond.
(In fact, I would defend a contextualist interpretation of adherence, according to which the context in which the term ‘knowledge’ is used may make a difference to how similar to the actual case these other cases have to be in order to count as “sufficiently similar” in the context; but we may bracket these complexities for present purposes.)
Some philosophers have tried to give direct counterexamples to adherence. Here is an attempted counterexample due to Ernest Sosa (“Tracking, competence, and knowledge”, p. 274):
One can know that one faces a bird when one sees a large pelican on the lawn in plain daylight even if there might easily have been a solitary bird before one unseen, a small robin perched in the shade, in which case it is false that one would have believed that one faced a bird. Prima facie, then, it seems unnecessary that one’s belief be [adherent]; one might perhaps know through believing safely even if one does not believe [adherently].
A second attempted counterexample is due to Saul Kripke (Philosophical Troubles, p. 178):
Suppose that Mary is a physicist who places a detector plate so that it detects any photon that happens to go to the right. If the photon goes to the left, she will have no idea whether a photon has been emitted or not. Suppose a photon is emitted, that it does hit the detector plate (which is at the right), and that Mary concludes that a photon has been emitted. Intuitively, it seems clear that her conclusion indeed does constitute knowledge. But is Nozick’s fourth condition satisfied? No, for it is not true, according to Nozick’s conception of such counterfactuals, that if a photon had been emitted, Mary would have believed that a photon was emitted. The photon might well have gone to the left, in which case Mary would have had no beliefs about the matter.
These cases may be counterexamples to rough and imprecise statements of adherence, but it seems clear that they are not counterexamples to the formulation that I have given.
Consider the case in which I see a large pelican on the lawn in daylight in front of me. What makes my belief that there is a bird in front of me rational? Presumably, it is the fact that I have an experience of a certain sort, an experience that inclines me to deploy my concept of a bird. So the only cases that count as “sufficiently similar” are other cases in which I have an experience of this sort. Clearly, cases in which I have no such experience – even if in fact there is a bird in front of me, a small robin concealed in the shade – are just not “sufficiently similar”.
Kripke’s case suffers from a similar defect – even though Kripke claims about his case “Here the method is held fixed.” As I shall argue here, this is a mistake: the method is not “held fixed”. In the actual case, Mary’s belief is rationally held because it is based on an experience of observing the detector plate’s responding to the presence of a photon. Cases in which Mary has no such experience are just not sufficiently similar.
In fact, Nozick himself made a similar mistake, assuming that each of the relevant “methods” could be used to answer the question of “whether or not” the target proposition p is true. It is clear, however, that in many cases, the methods that could be used to come to know a proposition are very different from any methods that could be used to come to know the proposition’s negation.
For instance, to know that an existentially quantified proposition is true, one needs only to observe one true instance; but to know that the negation of such an existentially quantified proposition is true, one would have to survey the entire domain of quantification. (E.g. to know that there is a spider in the room, one needs only to observe a single spider; to know that there is no spider in the room, one would have to search the whole room to make sure that no spider is hiding anywhere.) As they say, “proving a negative” is harder than proving the corresponding positive statement.
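The asymmetry can be illustrated computationally (a toy sketch of my own, with an invented “room” domain): verifying an existential claim can stop at the first witness, whereas verifying its negation must exhaust the whole domain.

```python
# Toy illustration of the verification asymmetry for existential claims.
# "room" is a made-up domain of items; is_spider checks a single item.

room = ["dust", "book", "spider", "lamp", "shoe"]

def is_spider(item):
    return item == "spider"

# To verify "there is a spider", one true instance suffices:
# any() short-circuits as soon as a witness is found.
exists_spider = any(is_spider(x) for x in room)

# To verify "there is no spider", every item must be checked:
# all() can only return True after traversing the entire domain.
no_spider = all(not is_spider(x) for x in room)

print(exists_spider)  # True: the search stops at the third item
print(no_spider)      # False: a spider is present
```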
It seems to me, then, that adherence is not vulnerable to these counterexamples. But Kieran Setiya (Knowing Right from Wrong, pp. 91f.) has suggested a more general kind of objection:
I can know the truth by a method whose threshold for delivering a verdict is extremely high, so high that it virtually always leaves me agnostic. A method of this kind may be epistemically poor in other respects; but it can be a source of knowledge.
This may sound like a single objection, but in fact there are two very different kinds of case that are suggested by what Setiya says here.
In some cases, I may believe a true proposition by a “method” that is the same as an ordinary rational method except that it is arbitrarily restricted in some way. E.g. I believe a proposition that I have proved through rigorous mathematical reasoning, but only on the condition that I also believe that today is a Thursday (suppose that if I had not believed that today is a Thursday, I would have responded to this mathematical reasoning with agnosticism). Or I believe what I seem to see before my eyes, but only so long as there is nothing apparently orange in my field of vision (if there had been anything apparently orange in my field of vision, I would have been totally agnostic about the scene before my eyes).
In these cases, the belief in question seems not to be doxastically justified or rationally held. The believer is basing her belief in crucial part on utterly irrelevant considerations, and so the dispositions that the believer is manifesting do not count as rational dispositions. This sort of irrationality seems to me to be incompatible with genuine knowledge.
In some other cases, it is rational for one to use a high-standards method, or a method that can only be used in a narrow range of cases. Perhaps a physician is trying to diagnose whether a patient has a certain illness, and the only available test is one that yields a verdict only in a very narrow range of cases; but luckily, in the actual case at hand, this test does indeed yield a verdict. In this case, as with Sosa’s and Kripke’s examples, it seems to me that cases in which the test yields no verdict are just not sufficiently similar to the actual case.
So cases of this sort are not counterexamples to adherence. In short, the only cases of justified true beliefs that fail to satisfy adherence are cases where the thinker's environment is rife with misleading defeating evidence, which by a fluke the thinker never encounters. Unless it is wrong to deny knowledge in such cases, adherence is not vulnerable to the objections that have been raised against it.
I can't understand this project at all. It seems to be about the use of the word "believe" in English (rather than some abstract concept of belief); there is much recourse to examples in natural language. However, many of the examples are rather strained, so it's not clear what linguistic standard is being applied. Are we to simply trust the judgments of these philosophers? This seems like a highly suspect methodology.
But worse than this, it's a methodology towards a non-existent end. First, the assumption appears to be that all uses of the word "believe" in English have the same meaning, or at least some elements in common. This is a completely unwarranted assumption, and just reveals a distressing naivety about natural language. Worse still is the assumption about the *type* of common element all of the philosophers seem to expect to find in all uses of the word "believe". All of the arguments above are about features which supposedly appear in the definition of "believe" with positive or negative values. Wedgwood argues that beliefs must be justified, true, safe, and adherent; Kripke disagrees:
    W   K
J   +   +
T   +   +
S   +   +
A   +   -
But the definitions of words in human minds are not complexes of features. It's not clear that the mental definitions of words are all the same, but I would suggest that prototype definitions are much more common: "Judging something against a prototype, therefore, and allowing rough matches to suffice, seems to be the way we understand a number of different words." Aitchison, Words in the Mind.
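To make the contrast concrete, here's a crude sketch of what prototype matching looks like (the features, weights, and threshold are all invented for illustration; Aitchison's account is of course far richer than this):

```python
# Crude sketch of prototype-based categorization: instead of requiring a
# fixed set of necessary features with +/- values, we allow rough matches
# against a prototype. All feature choices here are invented illustrations.

BIRD_PROTOTYPE = {"flies": True, "feathers": True, "beak": True, "sings": True}

def matches_prototype(instance, prototype, threshold=0.5):
    # Allow rough matches: count shared feature values and require only
    # that the proportion of matches exceed the threshold.
    shared = sum(1 for f, v in prototype.items() if instance.get(f) == v)
    return shared / len(prototype) > threshold

robin   = {"flies": True,  "feathers": True, "beak": True, "sings": True}
pelican = {"flies": True,  "feathers": True, "beak": True, "sings": False}
penguin = {"flies": False, "feathers": True, "beak": True, "sings": False}

print(matches_prototype(robin, BIRD_PROTOTYPE))    # True  (4/4 features)
print(matches_prototype(pelican, BIRD_PROTOTYPE))  # True  (3/4 features)
print(matches_prototype(penguin, BIRD_PROTOTYPE))  # False (2/4: a marginal case)
```

The point of the toy is that category membership comes in degrees, and borderline cases (the penguin) fall out naturally, which a fixed +/- feature matrix cannot capture.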
So I can't see what this kind of argument is doing. If it's an argument about the English language then its assumptions and methodology are wrong. If it's not about the English word "believe", then what is it about?
Posted by: Phil H | 06/08/2014 at 07:08 PM