The Nature of Knowledge – 1

If justification is irrelevant to knowledge, we may wonder at the epistemologist’s obsession with the notion.

There are several explanations. One is that, while ordinary knowers need not be able to defend their claims to knowledge in order to have knowledge, it is one of the epistemologist’s tasks, in showing the scope of knowledge, to defend it against sceptical challenges. In doing so, she will be justifying, or showing the justification for, various types of beliefs. Some epistemologists might confuse themselves with ordinary knowers, thinking that ordinary knowers too must justify their beliefs in the face of sceptical challenge.

Another explanation for all the attention to this concept is the practice of some epistemologists of calling whatever must be added to true belief to produce knowledge ‘justification’. This practice might be excused by the fact, noted earlier, that the term is in any case a stipulative term of art in epistemology. But, if this term refers only to an external relation between a belief and the fact believed, or to a process of acquiring belief that lies outside the subject’s awareness, then it will lose its normative force and any connection with the ethical concept of justification from which it supposedly derives. Referring to such additional conditions for knowledge as ‘justification’ will then lead only to confusion.

Externalists might retain the concept by requiring only that there be some justification, one that perhaps no one actually has, but again this invites confusion, since such a condition seems to be, yet is not, a normative one.

Externalist accounts of knowledge do not require that the condition beyond true belief be accessible to the subject. They take that condition to be either general reliability in the process that produces the belief or some connection between the particular belief and the fact to which it refers. We may consider reliabilism first (Goldman 1986). Can reliabilists capture the requirement that the truth of a belief that counts as knowledge must be non-accidental? To do so, they must hold that when subjects use reliable processes, processes that produce a high proportion of true beliefs, it will not be accidental that they arrive at the truth. But reliabilists who require only general reliability in belief-forming processes would be mistaken in assuming this to be universally true. If a process is not 100 per cent reliable, then, even when it generates a true belief on a particular occasion, it may be only accidental or lucky that the belief is true. I may not be very reliable at identifying breeds of dogs by sight, except for golden retrievers, which I am generally reliable at identifying. But I may not be very good at identifying golden retrievers when they bear a particular mark that I wrongly believe to indicate a different breed. I may then fail to notice that mark on a particular dog, which I therefore identify correctly, albeit only by luck or accident.

This example reveals several problems, some insurmountable, in the account of knowledge as true belief produced by a generally reliable process. First, at what level of generality should we describe the process that generates this true belief (Feldman and Conee 1998)? Intuitively, we take processes that generate beliefs to be those such as seeing middle-sized objects in daylight, inductively inferring on the basis of various kinds of samples, and so on. But the former, although used to generate the belief in this example, seems completely irrelevant to evaluating the belief. Whether I am generally reliable in identifying things that I see in daylight has little if anything to do with whether I acquire knowledge that this dog is a golden retriever. Given our judgement that I do not have such knowledge in this example, that I am only lucky to believe truly that this dog is a retriever, we can choose as the relevant process the unreliable one of identifying dogs bearing the marks that tend to mislead me. This is a quite specific process, but, of course, there is the yet more specific one of identifying retrievers bearing such marks without noticing the marks, a process that turns out to be reliable and that therefore delivers the wrong verdict in this case, counting the lucky belief as knowledge. By choosing the former process as the relevant one, we can make the reliability account appear to capture the example. In fact, we can probably do the same for any example, given that every instance of belief acquisition instantiates many different processes at different levels of specificity. But such ad hoc adjustments do nothing to support the reliabilist account. We need independent reason or intuition about the correct specification of the relevant process in particular cases, if not in general, in order for the account to be informative or illuminating.

Our example reveals the pressure to specify the relevant process ever more narrowly. But at the same time it shows that, however narrowly we specify the process in particular cases, as long as we leave some generality in its description there will remain room for it to produce belief that is only accidentally true. This indicates clearly that what matters in evaluating a true belief as a claim to knowledge is not the reliability of any generalisable process that produced it, but the particular connection between that very belief and the fact believed. One might try to save the language of reliability by claiming that a process must be reliable in the particular conditions in which it operates on a particular occasion, but once more any looseness or generality at all will leave room for the type of accident that defeats a claim to knowledge. One might instead demand perfect reliability, but then one would have to explain why we allow beliefs produced by perception and induction, both fallible processes, to count as knowledge. We do so when these methods connect particular beliefs to their referents in the proper way.

If the example discussed does not suffice, we can appeal to the lottery example once more to show the weakness of reliabilism as an analysis of knowledge. If one inductively infers that one’s ticket will not win, we can make the reliability of this inductive process as high as we like, short of 100 per cent, by increasing the number of tickets. But one still does not know that one’s ticket will not win until another ticket is drawn. If one did know this, one would never buy a ticket. The problem is not the lack of high reliability or truth, but the lack of the proper connection between the drawing of another ticket and one’s belief. Once one receives a report of the drawing of another ticket, then one knows, if the report is based on some witnessing of the event. One then knows even if the probability of error in such reports is the same as the initial probability that one’s ticket would be drawn. Once more, it is not the probability or reliability of the process that counts, but the actual connection between belief and fact. Mere statistical inference about the future does not suffice in itself for knowledge, no matter how reliable, but one can have knowledge of the future if it is based on evidence that connects in the proper way with the future events believed to be coming. If, for example, one discovers that the lottery is fixed, then one can come to know that one’s ticket will not win.
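To put the point in illustrative numbers (the figures here are ours, not drawn from the original example): for a fair lottery of $n$ tickets, the inductive inference that one’s ticket will lose has reliability

\[
P(\text{ticket loses}) = \frac{n-1}{n},
\]

which can be pushed as close to 1 as we like ($0.999999$ for $n = 10^{6}$) but never reaches it. However high this value climbs, the inference remains merely statistical and yields no knowledge; by contrast, a witness’s report of the drawing can yield knowledge even if its own error rate is $1/n$, the same as the ticket’s initial chance of winning.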

Given the failure of reliabilism to rule out accidentality in true belief, one might again explain the popularity of the theory among epistemologists as the result of their confusing themselves with ordinary knowers. While the general reliability of belief-forming processes is irrelevant to the knowledge of ordinary knowers in particular cases, the epistemologist, who is interested in defending types of beliefs against sceptical challenges, does try to show that certain sources such as perception or induction are generally truth-generating or reliable. The project of seeking to improve our epistemic practices must likewise first establish which practices reliably produce truth. But the analysis of knowledge must focus instead on finding the right connection between belief and truth or fact.

The first attempt to specify the connection between belief and fact that renders the belief knowledge was the causal theory (Goldman 1967). This account holds that a true belief must be causally related to its referent in order to count as knowledge. The account captures such examples as the lottery, which, given the failure of so many other theories to do so, indicates that it is on the right track; but it proves to be too narrow. One can have knowledge of universal and mathematical propositions, for example, but universal and mathematical facts or truths do not seem to cause anything. The account is also too weak, in failing to rule out cases in which there is the usual causal connection between a perceptual belief, for example, and an object to which it refers, but in which the subject could not distinguish this object from relevant alternatives (Goldman 1967). I might see a criminal commit a crime but not know that he is the culprit because I do not know that his twin brother, also in the vicinity, did not commit the act.

This sort of case is handled by what is perhaps the best-known attempt to specify the crucial connection between belief and fact, the counterfactual account (Nozick 1981, ch. 3). This holds that one knows a fact if one would not believe it if it were not the case, and if other changes in circumstances would leave one believing it. In terms of possible worlds, one knows a proposition if and only if, in the closest possible world in which the proposition is false, one does not believe it, and in close possible worlds in which it remains true, one does believe it. (We measure closeness of possible worlds by how similar they are to the actual world.) This account captures the examples so far considered; but, unlike the causal account, which proves too weak, this one is too strong, disallowing genuine knowledge claims. Many of the most mundane facts that I know fail to obtain only in very distant possible worlds, worlds so unlike this one that there may be no telling what I would believe in them. And it does not matter what I would believe in such worlds. I know that my son is not a knight of the round table and that it is not ninety degrees below zero outside. There is no telling what I might believe if those propositions were false, but this does not in the least affect my knowledge claims. Thus the first counterfactual condition is too strong. If the second requires retention of belief in all close possible worlds, then it too is too strong. An ageing philosopher can still know a given truth, although there are close worlds in which he cannot follow the argument that establishes it.
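Schematically, and using $\Box\!\rightarrow$ for the subjunctive conditional (the symbolisation is ours, not Nozick’s own notation), the account says that $S$ knows that $p$ if and only if:

\[
\begin{aligned}
&\text{(1) } p \text{ is true;}\\
&\text{(2) } S \text{ believes that } p;\\
&\text{(3) } \neg p \;\Box\!\rightarrow\; \neg B_{S}p \quad \text{(had } p \text{ been false, } S \text{ would not have believed it);}\\
&\text{(4) } p \;\Box\!\rightarrow\; B_{S}p \quad \text{(in changed circumstances with } p \text{ still true, } S \text{ would still believe it).}
\end{aligned}
\]

The mundane-fact objection above targets condition (3): where the closest $\neg p$-worlds are very remote, there is no saying what $S$ would believe in them. The ageing-philosopher case targets condition (4).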

Thus we need a specification of the connection between fact and belief that constitutes knowledge, one requiring that the connection hold only across close possible worlds, and only across most of them. Such a connection is what we would expect also from a naturalistic perspective, from the fact that the capacity for achieving knowledge is a likely product of natural selection. The capacity to achieve firm true belief is one that would be selected in slowly changing environments, so that true belief would be firm in situations close to actual, but not in distant possible worlds. An analysis that meets this condition and captures all of the examples so far discussed requires that the fact believed (the truth of the belief) enter into the best explanation for the belief’s being held. The concept of explanation here can itself be explicated in terms of possible worlds. On this account, A explains B if A raises the antecedent probability of B (given other factors, it will raise the probability to 1 where there is no indeterminism or chance involved), and there is no third factor that screens out this relation by fully accounting for the difference. The last clause is required because evidence, for example, raises the probability of that for which it is evidence, but this relation is screened out by whatever explains both the evidence and that for which it is evidence. In intuitive terms, A explains B if, given A, we can see why B was to be expected. In terms of possible worlds, A explains B if the proportion of close worlds in which B is the case is higher among those in which A is the case than it is in the entire set of close worlds.
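As a rough formalisation of these two conditions (the notation is ours): A explains B if A raises B’s probability,

\[
P(B \mid A) > P(B),
\]

and there is no third factor C that screens out this relation, that is, no C such that

\[
P(B \mid A \wedge C) = P(B \mid C).
\]

In the possible-worlds idiom, writing $W$ for the set of close worlds and $W_{X}$ for those close worlds in which $X$ holds, the first condition becomes

\[
\frac{\lvert W_{A} \cap W_{B} \rvert}{\lvert W_{A} \rvert} \;>\; \frac{\lvert W_{B} \rvert}{\lvert W \rvert}.
\]

In the evidence case, C can be taken as the common cause that explains both the evidence A and the fact B, which is why raising probability alone does not suffice for explanation.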