The Nature of Knowledge

Knowledge is the goal of belief. It is what belief aims to be, or, more precisely, what
we aim at in believing.

There may be some types of belief, religious belief for example,
for which knowledge is seen to be impossible and belief itself sufficient (in its effects).
But knowledge is always to be preferred to mere belief where it is possible; it is,
other things being equal, the ideal form of belief. An analysis of knowledge must
reflect this fact. What must knowledge be like to function properly as our cognitive
goal? We want our beliefs to be true, but we want more from them as well.

We want not just truth, but secure truth, truth that will be resistant to pressures against its
acquisition or retention. If the truth of a belief is not firm in this way, then changes
in the world or in the subject that are unrelated to the fact believed will likely alter
the belief and render the resulting changed belief false. Beliefs acquired similarly in
the future will be likely to be false as well, and we will not be able to tell as easily
whether they are true or false. Thus, we want our beliefs to be non-accidentally
true, so that they will not be subject to such whims of fortune. We want to remove
luck from the acquisition and retention of true belief, just as we want to remove
moral luck from the actions of agents. Acting in a morally right way by accident
(when rightness is no part of an agent’s intention) does not produce faith in or praise
for the agent; similarly, believing the truth by accident does not produce faith in
one’s cognitive abilities or positive grades for the achievement.

It is relatively uncontroversial among epistemologists that knowledge involves
true belief, and most would accept the claim that the truth of a belief must be non-accidental
if it is to amount to knowledge. But controversy will arise over how
to understand this crucial requirement. Certain kinds of luck or accident can enter
into the acquisition of knowledge, while other kinds must be ruled out. And
the absence of accident in certain senses will not guarantee that a true belief counts
as knowledge. Regarding the first point, I might be just lucky to run into a friend
of mine in Paris and hence to know he is there; but despite the fact that my running
into him was accidental, I do know he is there. Regarding the second point,
a perverse epistemologist might deliberately trick me into believing the truth
when my belief is based on the wrong reasons or is unconnected in the right way
with the fact I believe. He might trick me into believing that someone in my
department owns a Ford by convincing me that he himself does, when he, but not
I, knows that only another member of my department owns a Ford. There is a sense
here in which it is non-accidental that I believe a true proposition, but I still lack
knowledge.

These two examples can help us to begin to sharpen the sense in which knowledge
must be non-accidental. In the first example, given the context in which I acquire
the belief, that in which I see my friend, it is non-accidental that I believe he is there.
And in the second example, while my perverse colleague deliberately sets up the
context in which I acquire my belief, given that context, my belief that someone
in my department owns a Ford is only accidentally true. Thus, we can say that a
belief must be non-accidentally true in the context in which it is acquired in order
to count as knowledge. Beyond this point, however, it will remain a matter of great
controversy how to interpret the requirement of being non-accidental.

Ordinarily, when our beliefs are only accidentally true, they result from lucky
guesses. A venerable but suspect tradition in epistemology seeks to eliminate
lucky guesses by requiring that believers be justified in their beliefs. This concept of
justification has its origin and natural home in ethics. In morally judging persons
by their actions, we demand that they be justified in acting as they do and that they
act as they do because of this justification. Similarly, in judging persons by their
beliefs, we may demand that they be justified in believing as they do and not achieve
truth by lucky guesses. But it remains questionable whether justification is either
necessary for knowledge or sufficient when added to true belief.

Before attempting to answer these questions, it is necessary to clarify the concept
of justification to which appeal is being made. While we often talk in non-philosophical
contexts of agents being justified in acting as they do, ‘justification’
is a technical term of art in epistemology, rarely used in reference to beliefs outside
the context of philosophical analysis and debate. And it is a concept about which
epistemologists themselves have conflicting intuitions. The analogy with ethics
suggests that justification is a matter of fulfilling one’s obligations as these can be
determined from an internal perspective, from the subject’s own point of view.
Moral agents are justified when acting in a subjectively right way given the
information available to them. Similarly, believers might be said to be justified when
they have fulfilled their epistemic obligations given the evidence available to them,
for example, when they have critically assessed the available evidence.

But there are many problems with this internalist conception, based as it is on
what subjects should believe from their own perspective. First, the analogy with
ethics may be out of place, since we do not have the same degree of control over
the acquisition of beliefs as we do over our actions. If we cannot help believing as
we do, then talk of epistemic obligations is suspect, although we can still exercise
control over the degree to which we gather evidence, seek to be impartial, and so
on. Second, it must be clarified to what degree the justification for one’s beliefs must
be available and able to be articulated from one’s own perspective. On the most
extreme view, in order to be justified in a belief, one must be aware not only of the
evidence for it, but of the justifying relation in which that evidence stands to
the belief. But, given the motivation for this view, it seems that one’s belief in that
justifying relation must itself be justified, and that one’s belief that it is justified must
be justified, and so on. Even if that regress were to end somehow, it seems clear that ordinary subjects are not aware of such complex sets of judgements and so could
never fulfil this requirement.

A weaker internalism regarding justification would require only that evidence
for one’s beliefs be in principle recoverable from one’s internal states. One question
here is whether subjects must be able to articulate their evidence as such. This
requirement would disallow the perceptual knowledge of children, for example,
who cannot articulate the ways things appear to them as ways of appearing. Even
without this requirement, there seem to be clear counterexamples to internalist
concepts of justification as necessary for knowledge. (The internalist distinguishes
between a person’s being justified and there being some justification not in the
person’s possession, the latter being irrelevant.) A clairvoyant who could reliably
foretell the future, an idiot savant who knows mathematical truths without knowing
how he knows them, and a person with perfect pitch who can identify tones with
almost perfect accuracy have beliefs that count as knowledge without having
any apparent justification for those beliefs. Certainly they are not justified in their
beliefs until they notice their repeated successes, but they have knowledge from
the beginning. In more mundane cases, we all have knowledge when completely
unaware of its source, when that source or the evidence for our beliefs is completely
unrecoverable. I know that Columbus sailed in 1492, and I assume that I learned
this from some elementary school teacher, but who that teacher was, or what her
evidence for the date was, is, I also assume, completely unrecoverable by me. More
generally, knowledge from the testimony of others requires neither that one know
the evidence for the proposition transmitted nor even that one have evidence of the
reliability of those providing the testimony (what it does require will be discussed
below).

Thus justification in the sense in which the concept is derived from ethics is not
necessary for knowledge. It is more commonly accepted since Edmund Gettier’s
famous article that justification, when added to true belief, is not sufficient for
knowledge (Gettier 1963). Many examples like the one cited earlier about the owner
of the Ford exemplify justified, true belief that is not knowledge. They show that a
person can be accidentally right in a belief that is not simply a lucky guess. Other
examples that show the same thing include beliefs about the outcomes of lotteries,
which falsify many otherwise plausible analyses of knowledge, and beliefs of those
in sceptical worlds (also to be discussed later), such as brains in vats programmed
to have experiences and beliefs, or victims of deceiving demons. A brain in a vat
programmed to have the beliefs it does can occasionally be programmed to have a
true belief grounded in its seeming perceptual experience about an object outside
the vat, but that justified, true belief will not be knowledge. I can justifiably and
truly believe that my ticket in this week’s Florida lottery will not win, but I do not
know it is a loser until another ticket is drawn.

Thus, justification in any intuitive sense is neither necessary nor sufficient, when
added to true belief, for knowledge. Some philosophers have sought to beef up
the notion so as to make it the sufficient additional condition for knowledge by
requiring that justification be ‘undefeated’. One’s justification is said to be defeated
when it depends on a false proposition, such as the proposition that my colleague
owns a Ford in that earlier example (Lehrer 2000, p. 20). There are two fatal
flaws in this position. One is that it takes justification to be necessary for knowledge,
and we have seen that it is not. The other is that it cannot distinguish between
cases in which one’s claims to knowledge are genuinely defeated by misleading evidence
one does not possess and cases in which such evidence is irrelevant. Suppose in the Ford example that my colleague does own
the car and gives me good evidence that he does, but that he has an enemy who
spreads the false rumour that he is a pathological liar. If that enemy is also in my
department and the chances were great that I would have heard his false rumour,
then my claim to knowledge will be defeated. It will then be a matter of luck
that, given the context of being in my department, I did not hear his testimony
and so believe as I do. If, by contrast, my colleague’s enemy is in some distant
city, his attacks will be irrelevant to my knowledge. No way of unpacking the
notion of ‘depending on a false proposition’ will distinguish correctly between
these cases.

That knowledge is the goal of belief indicates yet again that the epistemologist’s
notion of justification is largely irrelevant. In a court of law, for example, where it
is of utmost importance whether witnesses know that to which they testify, jurors
must assess whether the evidence they present connects in the right way with the
facts they allege. Jurors want to know whether the best explanation for the evidence
presented by witnesses appeals to the facts as they represent them, or whether the
explanation offered by the opposing attorney is just as plausible. They do not care
whether the witnesses are justified in their beliefs, only again whether their beliefs
hook up in the right way with the facts. Sceptical worlds also reveal that justification
can be worthless, hence not a goal of belief, as firm truth is. One such sceptical
world mentioned earlier is that of brains in vats programmed to have all the
perceptual experiences that they have. Brains in vats are normally justified in their
beliefs on the basis of such experience, but such justification is unrelated to truth
and knowledge, not the sort of thing we seek for itself.

If justification is irrelevant to knowledge, we may wonder at the epistemologist’s
obsession with the notion. There are several explanations. One is that, while
ordinary knowers need not be able to defend their claims to knowledge in order to
have knowledge, it is one of the epistemologist’s tasks in showing the scope of
knowledge to defend it against sceptical challenges. In doing so, she will be justifying
or showing the justification for various types of beliefs. Some epistemologists might
confuse themselves with ordinary knowers, thinking that ordinary knowers too must justify their beliefs in the face of sceptical challenge. Another explanation for
all the attention to this concept is the practice of some epistemologists of calling
whatever must be added to true belief to produce knowledge ‘justification’. This
practice might be excused by the fact, noted earlier, that the term in epistemology
is in any case a stipulative term of art. But, if this term refers only to an external
relation between a belief and the fact believed, or to a process of acquiring belief
that is outside the subject’s awareness, then it will lose its normative force and any
connection with the ethical concept of justification from which it supposedly
derived. It will then lead only to confusion to refer to such additional conditions
for knowledge as justification. Externalists might retain the concept by requiring
only that there be some justification that perhaps no one has, but again this invites
confusion in seeming to be, but not being, a normative concept.

Externalist accounts of knowledge do not require that the condition beyond true
belief must be accessible to the subject. They take that condition to be either general
reliability in the process that produces the belief or some connection between the
particular belief and the fact to which it refers. We may consider reliabilism first
(Goldman 1986). Can reliabilists capture the requirement that the truth of a belief
that counts as knowledge must be non-accidental? If so, they must take it that when
subjects use reliable processes, processes that produce a high proportion of true
beliefs, it will not be accidental that they arrive at the truth. But reliabilists who
require only general reliability in belief-forming processes would be mistaken in
assuming this to be universally true. If a process is not 100 per cent reliable, then,
even when it generates a true belief on a particular occasion, it may be only
accidental or lucky that the belief is true. I may not be very reliable at identifying
breeds of dogs by sight, except for golden retrievers, which I am generally reliable
at identifying. But I may not be very good at identifying golden retrievers when they
have a particular mark that I wrongly believe to indicate a different breed. I may
then fail to notice that mark on a particular dog that I therefore identify correctly,
albeit only by luck or accidentally.

This example reveals several problems, some insurmountable, in the account of
knowledge that takes it to be true belief produced by a generally reliable process.
First, at what level of generality should we describe the process that generates this
true belief (Feldman and Conee 1998)? Intuitively, we take processes that generate
beliefs to be those such as seeing middle-sized objects in daylight, inductively
inferring on the basis of various kinds of samples, and so on. But the former,
although used to generate the belief in this example, seems completely irrelevant to
evaluating the belief. Whether I am generally reliable in identifying things that I see
in daylight has little if anything to do with whether I acquire knowledge that this
dog is a golden retriever. Given our judgement that I do not have such knowledge
in this example, that I am only lucky to believe truly that this dog is a retriever, we can choose as the relevant process the unreliable one of identifying dogs with the
marks that tend to mislead me. This is a quite specific process, but, of course there
is the yet more specific one of identifying retrievers with such marks without noticing
the marks, which turns out to be reliable and to give the wrong answer in this case.
By choosing the former process as the relevant one, we can make the reliability
account appear to capture the example. In fact, we can probably do the same for
any example, given that every instance of belief acquisition instantiates many
different processes at different levels of specificity. But such ad hoc adjustments do
nothing to support the reliabilist account. We need independent reason or intuition
of the correct specification of the relevant process in particular cases, if not in
general, in order for the account to be informative or illuminating.

Our example reveals the pressure to specify the relevant process more and more
narrowly. But at the same time it shows that however narrowly we specify it in
particular cases, as long as we leave some generality in its description, there will
remain room for only accidentally true belief being produced by the process. This
indicates clearly that what is important in evaluating a true belief as a claim to
knowledge is not the reliability of any generalisable process that produced it, but
the particular connection between that very belief and the fact believed. One might
try to save the language of reliability by claiming that a process must be reliable in
the particular conditions in which it operates on a particular occasion, but once
more any looseness or generality at all will allow room for the type of accident that
defeats a claim to knowledge. One might also demand perfect reliability, but then
one would have to explain why we allow beliefs produced by perception and
induction, both fallible processes, to count as knowledge. We do so when these
methods connect particular beliefs to their referents in the proper way.

If the example discussed does not suffice, we can appeal to the lottery example
once more to show the weakness of reliabilism as an analysis of knowledge. If one
inductively infers that one’s ticket will not win, we can make the reliability of this
inductive process as high as we like short of 100 per cent by increasing the number
of tickets. But one still does not know one’s ticket will not win until another ticket
is drawn. If one did know this, one would never buy a ticket. The problem is not
the lack of high reliability or truth, but the lack of the proper connection between
the drawing of another ticket and one’s belief. Once one receives a report of the
drawing of another ticket, then one knows, if the report is based on some witnessing
of the event. One then knows even if the probability of error in such reports is the
same as the initial probability that one’s ticket would be drawn. Once more, it is
not the probability or reliability of the process that counts, but the actual connection
between belief and fact. Mere statistical inference about the future does not suffice
in itself for knowledge, no matter how reliable, but one can have knowledge of the
future if it is based on evidence that connects in the proper way with the future events believed to be coming. If, for example, one discovers that the lottery is fixed,
then one can come to know that one’s ticket will not win.
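
To make the arithmetic concrete (a minimal illustration, assuming a fair lottery with a single winning ticket among n tickets), the inductive inference that one’s ticket will lose is correct with probability

\[ P(\text{my ticket loses}) = \frac{n-1}{n} = 1 - \frac{1}{n}, \]

which can be made as close to 1 as we like by increasing n, yet never reaches 1. On the view defended here, however, it is not this residual gap but the absence of the right connection between belief and fact that explains the lack of knowledge.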

Given the failure of reliabilism to rule out accidentality in true belief, one might
again explain the popularity of the theory among epistemologists as the result of
their confusing themselves with ordinary knowers. While the general reliability
of belief-forming processes is irrelevant to the knowledge of ordinary knowers in
particular cases, the epistemologist, who is interested in defending types of beliefs
against sceptical challenges, does try to show that certain sources such as perception
or induction are generally truth generating or reliable. The project of seeking to
improve our epistemic practices must also seek to establish first which practices
reliably produce truth. But the analysis of knowledge must focus instead on finding
the right connection between belief and truth or fact.

The first attempt to specify the connection between belief and fact that renders
the belief knowledge was the causal theory (Goldman 1967). This account holds
that a true belief must be causally related to its referent in order to count as
knowledge. The account captures such examples as the lottery, which, given the
failure of so many other theories to do so, indicates that it is on the right track, but
it proves to be too narrow. One can have knowledge of universal and mathematical
propositions, for example, but universal and mathematical facts or truths do not
seem to cause anything. It is also too weak in failing to rule out cases in which there
is the usual causal connection between a perceptual belief, for example, and an
object to which it refers, but in which the subject could not distinguish this object
from relevant alternatives (Goldman 1967). I might see a criminal commit a crime
but not know that he is the culprit because I do not know that his twin brother,
also in the vicinity, did not commit the act.

This sort of case is handled by what is perhaps the best-known attempt to specify
the crucial connection between belief and fact, the counterfactual account (Nozick
1981, ch. 3). This holds that one knows a fact if one would not believe it if it were
not the case, and if other changes in circumstances would leave one believing it. In
terms of possible worlds, one knows a proposition if and only if in the closest
possible world in which the proposition is false, one does not believe it, and in close
possible worlds in which it remains true, one does believe it. (We measure closeness
of possible worlds by how similar they are to the actual world.) This account
captures the examples so far considered, but unlike the causal account that proves
to be too weak, this one is too strong, disallowing genuine knowledge claims. Many
of the most mundane facts that I know fail to obtain only in very distant possible
worlds. In worlds so unlike this one there may be no telling what I would believe,
and it does not matter what I would believe in such worlds. I know that my
son is not a knight of the round table and that it is not ninety degrees below zero
outside. There is no telling what I might believe if those propositions were false, but this does not in the least affect my knowledge claims. Thus the first counterfactual
condition is too strong. If the second requires retention of belief in all close possible
worlds, then it is too strong also. An aging philosopher can still know this very truth,
that the analysis is too strong, although there are close worlds in which he cannot
follow the argument that establishes it.
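
For reference, the two tracking conditions under discussion can be stated compactly (a schematic rendering rather than Nozick’s own wording, writing Bp for ‘the subject believes that p’ and reading each conditional over close possible worlds):

\[ (1)\;\; \neg p \;\Box\!\!\rightarrow\; \neg Bp \qquad\qquad (2)\;\; p \;\Box\!\!\rightarrow\; Bp \]

The objections above are that (1) is too strong for mundane truths whose nearest failure-worlds are very distant, and that (2), if it requires belief in every close world in which the proposition remains true, is too strong as well.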

Thus, we need a specification of the relevant connection between fact and belief
that counts as knowledge that requires that the connection hold only across close
possible worlds, and only across most of them. Such a connection is what we would
expect also from a naturalistic perspective, from the fact that the capacity for
achieving knowledge is a likely product of natural selection. The capacity to achieve
firm true belief is one that would be selected in slowly changing environments, so
that true belief would be firm in situations close to actual, but not in distant possible
worlds. An analysis that meets this condition and captures all of the examples so
far discussed requires that the fact believed (the truth of the belief) enter into the
best explanation for the belief’s being held. The concept of explanation here can
itself be explicated in terms of possible worlds. In this account A explains B if A
raises the antecedent probability of B (given other factors, it will raise the probability
to 1, where there is no indeterminism or chance involved), and there is no third
factor that screens out this relation, that fully accounts for the difference. The last
clause is required because evidence, for example, raises the probability of that for
which it is evidence, but this relation is screened out by whatever explains both the
evidence and that for which it is evidence. In intuitive terms, A explains B if, given
A, we can see why B was to be expected. In terms of possible worlds, A explains B
if the ratio of close worlds in which B is the case is higher in those in which A is the
case than it is in the entire set of close worlds.
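
Put schematically (a rough rendering of the two formulations just given, not a precise analysis), the probabilistic condition is

\[ P(B \mid A) > P(B), \quad \text{with no factor } C \text{ that screens it off, i.e. } P(B \mid A \wedge C) = P(B \mid C), \]

and the possible-worlds version, with W the set of close worlds, is

\[ \frac{\lvert\{\, w \in W : A \text{ and } B \text{ hold at } w \,\}\rvert}{\lvert\{\, w \in W : A \text{ holds at } w \,\}\rvert} \;>\; \frac{\lvert\{\, w \in W : B \text{ holds at } w \,\}\rvert}{\lvert W \rvert}. \]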

Let us review some of the examples that were problematic for the rival accounts
of knowledge. When there is misleading evidence I am just lucky not to have noticed,
then what explains my belief is the fact that I have not noticed this evidence. My
believing the dog in that example to be a golden retriever is explained by my not
having noticed the misleading mark. My believing as I do is not made significantly
more probable by the fact believed, given all the close possible worlds in which I
am aware of the misleading evidence. In the lottery example, the inductive evidence
on the basis of which I believe that my ticket will lose does not explain its losing,
since the probabilistic connection between that evidence and its losing is screened
out by what does explain the latter, the drawing of another ticket. That drawing
explains my losing but not my prior belief, which remains explanatorily unconnected
to the fact to which it refers.

In regard to problems for the causal theory, the truth of universal propositions
helps to explain our belief in them, or it helps to explain the inductive evidence that
explains our beliefs. In the case in which I cannot distinguish the cause of my belief from relevant alternatives in the vicinity, the explanation for my belief lies in the
broader context and not in the specific cause, just as we do not explain the outbreak
of a war by citing only the specific event that triggered it, when any number of
equally likely events would have done so in the broader context of latent hostility.
In such cases the specific cause does not significantly raise the probability of its effect
across close worlds in which alternative causes are also present. To be able to rule
out relevant alternatives in claiming knowledge is to be able to rule out alternative
explanations for the evidence one has.

In regard to the cases that were problematic for the counterfactual account, what
explains the fact that my son is not a knight of the round table, the fact that he lives
in the present time and is a tennis player attending Yale, also explains my belief that
he is not a medieval knight. What explains the fact that it is not ninety degrees
below zero outside, namely the fact that it is ninety above zero, also explains my
belief that it is not ninety below. Finally, in the aging philosopher example, his belief
that the counterfactual analysis is too strong is connected with the evidence that it
is too strong in many, although not all, close possible worlds.

In many of these examples, appeal is made to explanatory chains. It suffices for
knowledge if what explains my true belief also explains or is explained by the fact
to which the belief refers, as long as a certain constraint on these chains is met. Each
link in such chains must make later ones more probable. This constraint defeats
some purported counter-examples that will not be considered here (see Goldman
1988, pp. 46–50), but its relevance is also clear in the case of knowledge from
testimony mentioned earlier in discussing the issue of justification. A person may
be justified in believing the testimony of another without any evidence of the other’s
expertise or sincerity, as long as there is no evidence that the testimony is likely to
be false. Testimony can create its own justification, just as perception can, whether
or not the testifier is herself justified in believing her own testimony. But this again
simply contrasts justified true belief with knowledge, since one cannot transmit
knowledge one does not have. Knowledge from testimony requires an explanatory
chain in which the truth of the testimonial evidence enters ultimately into the best
explanation for its being given and believed. If I am completely gullible and believe
absolutely anything I hear, then I do not gain knowledge from testimony, just as if
I see everything as red, then I do not know a red object when I see one. But the last
two points imply a third, that a completely gullible person anywhere in the
testimonial chain destroys knowledge in the later links. For each link, the fact that
the belief was more likely because true must make its transmission more likely to
be believed at later links (the constraint mentioned earlier). This does not prevent
children from gaining or transmitting testimonial knowledge, since they tend
to believe their parents, for example, more than they believe their peers (Schmitt
1999, p. 372).

This completes our brief account of the nature of knowledge. As we shall now
see, it will prove to be highly suggestive for the task of determining the scope and
structure of knowledge.

 
