"Memory is necessary for all the operations of reason." (Blaise Pascal, Pensées, Krailsheimer, #651)
This seems right. Consider this quick little argument against scientism, the philosophical, not scientific, view that all knowledge is natural-scientific knowledge:
1. I know by reason alone, a priori, and not by any natural-scientific means, that addition has the associative and the commutative properties and that these properties are distinct.
2. If scientism is true, then it is not the case that (1).
3. Scientism is not true.
I grasp (understand) this argument and its validity by reason. To grasp any such argument, it is not sufficient that a succession of conscious states transpire in my mental life. For if the state represented by (1) falls into oblivion by the time I get to (2), and (2) by the time I get to (3), then all I would undergo would be a succession of consciousnesses but not the consciousness of succession. But the consciousness of succession is necessary to 'take in' the argument. And this consciousness of succession itself presupposes a kind of memory. To grasp the conclusion as a conclusion -- and thus as following from the premises -- I have to have retained the premises. There has to be a diachronic unity of consciousness in which there is a sort of synopsis of the premises together with the conclusion with the former entailing the latter.
But of course something similar holds for each proposition in the argument. The meaning of a compound proposition is built up out of the meanings of its propositional parts, and the meaning of a simple proposition is built up out of the meanings of its sub-propositional parts, and these meanings have to be retained as the discursive intellect runs through the propositions. ('Discursive' from the L. currere, to run.) This retention -- a term Husserl uses -- is a necessary condition of the possibility of understanding.
And so while I do not grasp an argument by memory (let alone by sense perception or introspection), memory is involved in rational knowledge.
The Pascalian aphorism bears up well under scrutiny.
Example of associativity of addition: (7 + 5) + 3 = 7 + (5 + 3). Example of commutativity: (7 + 5) + 3 = (5 + 7) + 3. The difference between the two properties springs to the eye (of the mind). Now what must mind be like if it is to be capable of a priori knowledge? Presumably it can't just be a hunk of meat.
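The distinctness of the two properties is of course grasped a priori, not by running programs; still, a small computational illustration (the operations below are my own choices, not from the text) makes it vivid that each property can hold without the other:

```python
def midpoint(a, b):
    """Commutative but NOT associative: (a + b) / 2."""
    return (a + b) / 2

def concat(a, b):
    """Associative but NOT commutative: string concatenation."""
    return a + b

# Addition has both properties.
assert (7 + 5) + 3 == 7 + (5 + 3)    # associative
assert (7 + 5) + 3 == (5 + 7) + 3    # commutative

# midpoint: the order of operands never matters, but the grouping does.
assert midpoint(2, 4) == midpoint(4, 2)                            # commutative
assert midpoint(midpoint(0, 4), 8) != midpoint(0, midpoint(4, 8))  # not associative

# concat: the grouping never matters, but the order does.
assert concat(concat("a", "b"), "c") == concat("a", concat("b", "c"))  # associative
assert concat("ab", "c") != concat("c", "ab")                          # not commutative
```

Since neither property entails the other, they are indeed distinct properties, as premise (1) asserts.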
But if the below companion post is right, not even sense knowledge is such that its subject could be a hunk of meat. We are of course meatheads. But squeezing meaning out of mere meat -- there's the trick!
3. There are items of knowledge that are not essentially tied to action.
Daniel K comments and I respond in blue:
First, as to your aporetic triad: I would like to reject (3) in one sense that I describe below, and reject (1) absolutely. Not sure where that leaves the triad. But I'd be interested in whether you think I've clarified or merely muddied the waters.
In one sense I think all knowledge is action guiding. In another sense I think it is not essentially action guiding. All pure water is drinkable (at the right temperature etc.), but drinkability is not an essential feature of water (I wonder if this works).
BV: I don't think it works. I should think that in every possible world in which there is water, it is potable by humans. Therefore, drinkability is an essential feature of water. (An essential property of x is a property x has in every possible world in which x exists.) Of course, there are worlds in which there is water but no human beings. In those worlds, none of the water is drunk by humans. But in those worlds too water is drinkable. Compare the temporal case. Before humans evolved, there was water on earth. That water, some of it anyway, was potable by humans even though there were no humans. Water did not become potable when the first humans arose.
Rejecting (3): The having of knowledge always contributes to how one acts. You give examples of a priori knowledge as counterexamples. My response: it seems to me a priori knowledge is "hinge" knowledge that opens the door for action and cannot possibly not inform action. In other words we won't find circumstances where such knowledge is not action guiding in the presuppositional sense. So, I disagree that we will find knowledge that doesn't inform action. A priori knowledge is presuppositionally necessary and occasionally practically useful (math for engineering). Empirical knowledge will be used when it is available. So, I don't think defending (3) is necessary to defend (2).
BV: Willard maintains that one can have propositional knowledge without belief, and that belief is essentially tied to action. The conjunction of these two claims suggests to me that there can be knowledge that is not essentially tied to action. And so I looked for examples of items of knowledge that are not essentially tied to action, either by not being tied to action at all, or by not being essentially tied to action. If there are such items, then we can say that the difference between belief and knowledge is that every belief, by its very nature, can be acted upon, while it is not the case that every item of knowledge can be acted upon.
Much depends on what exactly is meant by 'acting upon a proposition,' and I confess to not having a really clear notion of this.
While I grant that much a priori knowledge is 'hinge' knowledge in your sense, consider the proposition that there is no transfinite cardinal lying between aleph-nought and 2 raised to the power aleph-nought. Does that have any engineering application? (This is not a rhetorical question.)
Now consider philosophical knowledge (assuming there is some). If I know that there are no bare particulars (in Gustav Bergmann's sense), this is a piece of knowledge that would seem to have no behavioral consequences. The overt, nonlinguistic, behavior of a man who maintains a bundle-theoretic position with respect to ordinary particulars will be no different from that of a man who maintains that ordinary particulars have bare particulars at their ontological cores. They could grow, handle, slice, and eat tomatoes in the very same way.
(Anecdote that I am pretty sure is not apocryphal: when Rudolf Carnap heard that fellow Vienna Circle member Gustav Bergmann had published a book under the title, The Metaphysics of Logical Positivism, he refused to speak to Bergmann ever again.)
It seems we should say that some, though not all, philosophical knowledge (assuming there is philosophical knowledge) consists of propositions upon which we cannot act. Here is another example. Suppose I know that the properties of ordinary particulars are tropes. Thus I know that the redness of a tomato is not a universal but a particular. Is that knowledge action-guiding? How would it guide action differently than the knowledge that properties are universals? Is the difference in ontological views a difference that could show up at the level of overt, nonlinguistic, behavior?
Admittedly, some philosophical knowledge is action-guiding. If I know that the soul is immortal, then I will behave differently than one who lacks this knowledge.
Now consider the knowledge of insignificant contingent facts. I know from my journal that on 27 April 1977 I ate hummus. Is that item of knowledge action-guiding? I think not. Suppose you learn the boring fact and infer that I like hummus. You might then make me a present of some. But if I am the only one privy to the information, it is difficult to see how that item of knowledge could be action-guiding for me. Recall that by action I mean overt, nonlinguistic behavior.
There is also modal knowledge to consider. I might have been sleeping now. I might not have been alive now. I might never have existed at all. These are modal truths that, arguably, I know. Suppose I know them. How could I act upon them? I am not sleeping now, and nothing I do could bring it about that I am sleeping now. Some modal knowledge would seem to be without behavioral consequences. Of course, some modal knowledge does have such consequences, e.g. the knowledge that it is possible to grow tomatoes in Arizona.
It seemed to me in your post that you took the truth of (2) as giving support to (3). If belief is essentially action guiding and knowledge is not essentially believing, then there should be knowledge that is not action guiding.
But again, I would like to affirm that in the sense you mean it in the post all knowledge is action guiding: either presuppositionally or consciously/empirically. For instance, the law of noncontradiction is action guiding in the sense that I cannot act if essential to that action is that the object has characteristic X, but I affirm that the object is both X and not-X. [. . .]
BV: Consider an example. I cannot eat a banana unless it is peeled. My affirming that it is both peeled and unpeeled (at the same time, all over, and in the same sense of 'peeled') would not, however, seem to stand in the way of my performing the action. Clearly, I know that nothing is both peeled and unpeeled. It is not clear to me how one could act upon that proposition. If I want to eat the banana, I can act upon the proposition that it is unpeeled by peeling the banana. But how do I act upon the proposition that the banana is either peeled or unpeeled? What do I do?
Rejecting (1): So, what if both knowledge and belief are in one sense "action guiding" (rejecting 3)? Does it imply that we have no reason to think that belief is not an essential component of knowledge (accepting 2 and rejecting 1)? I think we still do have a good reason for thinking belief is not essentially a component of knowledge. When Willard says that belief is not essential to knowledge I take him to be distinguishing between the irrelevance of being concerned with action in the act of knowing and the universal appeal of knowledge for action.
Forget the terms "knowledge" and "belief" for a moment. Distinguish between the following states:
One is in a state (intentional?) (Y) to object (X) iff one has a true representation of X that was achieved in an appropriate way (Willard's account of knowledge). Notice that there is nothing in the description that essentially involves a readiness to act. That is not a part of its intentional character or directedness of state (Y). It is directed purely at unity, period.
Alternatively, one is in an intentional state (Z) to object (X) iff one has a representation of reality that is essentially identified by its being a ground for action. Here, essential to (Z) is its providing a ground for action.
(Y) is not a state that essentially involves action guidance but (Z) is. So, the achievement of (Y) does not involve essentially the achievement of (Z). That is, the achievement of (Y) is the achievement of a kind of theoretical unity with (X) while the achievement of (Z) is the achievement of a motivator for acting in certain ways regarding (X). Response: but Daniel, you've already said that all knowledge is action guiding! Yes, but it is not an essential feature of the state of knowing. Analogy: all water is drinkable. But drinkability is not an essential feature of water.
I'm going to stop there. I'd appreciate any comments you have. That is my effort, thus far, to make sense of both Willard's suggestion and your aporetic triad.
BV: I do appreciate the comments and discussion. Let's see if I understand you. You reject (1), the orthodox view that knowledge entails belief. Your reason seems to be that, while belief is essentially action-guiding, knowledge is not essentially action-guiding, but only accidentally action-guiding. You deny what I maintain, namely, that some items of knowledge (some known propositions qua known) are not action-guiding. You maintain that all such items are action-guiding, but only accidentally so. Perhaps your argument is this:
4. Every believing-that-p is essentially action-guiding.
5. No knowing-that-p is essentially action-guiding.
6. It is not the case that, necessarily, every knowing-that-p is a believing-that-p.
But (6) -- the negation of (1) -- doesn't follow from (4) and (5). (6) is equivalent to
6*. Possibly, some knowings-that-p are not believings-that-p.
What follows from (4) and (5) is
7. No knowing-that-p is a believing-that-p.
(7) is the thesis I am tentatively proposing.
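The syllogistic step can be checked mechanically. Here is a sketch of a brute-force verification over small extensional models: 'is a believing' becomes set B, 'is a knowing' set K, 'is essentially action-guiding' set E. This flattens the modal/essentialist content of the premises into plain set inclusion, so it confirms only the extensional skeleton of the inference from (4) and (5) to (7):

```python
from itertools import product

domain = [0, 1, 2]

def subsets(xs):
    """All subsets of xs, as Python sets."""
    return [{x for x, keep in zip(xs, bits) if keep}
            for bits in product([False, True], repeat=len(xs))]

# In every model where (4) and (5) hold, (7) holds as well.
for B, E, K in product(subsets(domain), repeat=3):
    if B <= E and not (K & E):   # (4): every B is E; (5): no K is E
        assert not (K & B)       # (7): no K is B

print("(4) and (5) entail (7) in all", 8 ** 3, "models checked")
```

The inference is the classical syllogism Camestres: if every believing is E and no knowing is E, then no knowing is a believing. By contrast, the modal claim (6*) does not follow from premises about a single world, which is why (6) is not licensed by (4) and (5).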
This is a very difficult topic and we may be falling into de dicto/de re confusion.
Well, at least I am in the state that Plato says is characteristic of the philosopher: perplexity!
Here is a trio of propositions that are jointly inconsistent but individually plausible:
1. Knowledge entails belief.
2. Belief is essentially tied to action.
3. There are items of knowledge that are not essentially tied to action.
Clearly, the conjunction of any two of these propositions is logically inconsistent with the remaining one. Thus the conjunction of (1) and (2) entails the negation of (3).
And yet each limb of the triad is very plausible, though perhaps not equally plausible.
(1) is part of the classical definition of knowledge as justified true belief, an analysis traceable to Plato's Theaetetus. (1) says that, necessarily, if a person S knows that p, then S believes that p. Knowledge logically includes belief. What one knows one believes, though not conversely. For example, if I know that my wife is sitting across from me, then I believe that she is sitting across from me. (At issue here is propositional knowledge, not know-how, or carnal knowledge, or knowledge by acquaintance.)
(2) is perhaps the least plausible of the three, but it is still plausible and accepted by (a minority of) distinguished thinkers. According to Dallas Willard,
Belief I understand to be some degree of readiness to act as if such and such (the content believed) were the case. Everyone concedes that one can believe where one does not know. But it is now widely assumed that you cannot know what you do not believe. Hence the well known analysis of knowledge as "justified, true belief." But this seems to me, as it has to numerous others, to be a mistake. Belief is, as Hume correctly held, a passion. It is something that happens to us. Thought, observation and testing, even knowledge itself, can be sources of belief, and indeed should be. But one may actually know (dispositionally, occurrently) without believing what one knows.
[. . .] belief has an essential tie to action . . . .
Although I am not exactly sure what Willard's thesis is, he seems to be maintaining that the propositions one believes are precisely those one is prepared to act upon. S believes that p iff S is prepared to act upon p. Beliefs are manifested in actions, and actions are evidence of beliefs. To determine what a person really believes, we look to his actions, not to his words, although the words provide context for understanding the actions. If I want to get to the roof, and tell you that the ladder is stable, but refuse to ascend it, then that is very good evidence that I don't really believe that the ladder is stable. I don't believe it because I am not prepared to act upon it. So far, so good.
But if belief is essentially tied to action, as Willard maintains, then it is not possible that one believe a proposition one cannot act upon. Is this right? Consider the proposition *Everything is self-identical.* This is an item of knowledge. But is it also an item of belief? We can show that this item of knowledge is not an item of belief if we can show that one cannot act upon it. But what is it to act upon a proposition? I don't know precisely, but here's an idea:
A proposition p is such that it can be acted upon iff there is some subject S and some circumstances C such that S's acceptance of p in C makes a difference to S's overt, nonlinguistic behavior.
For example, *It is raining* can be acted upon because there are circumstances in which my acceptance of it versus my nonacceptance of it (either by rejecting it or just entertaining it) makes a difference to what I do, such as going for a run. Accepting the proposition, and not wanting to get wet, I postpone the run. Rejecting the proposition, I go for the run as planned.
In the case of *Everything is self-identical,* is there any behavior that could count as a manifestation of an agent's acceptance/nonacceptance of the proposition in question? Suppose I come to know (occurrently) for the first time that everything is self-identical. Suppose I had never thought of this before, never 'realized it.' Would the realization or 'epiphany' make a difference to my overt, nonlinguistic behavior? It seems not. Would I do anything differently?
Consider characteristic truths of transfinite set theory. They are items of knowledge that have no bearing on any actual or possible action. For example, I know that, while the natural numbers and the reals are both infinite sets, the cardinality of the latter is strictly greater than that of the former. Can I take that to the streets?
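The engine behind that piece of knowledge is Cantor's diagonal argument, and a finite sketch of it can be written down (the table below is my own toy example). Given any purported enumeration of binary sequences, truncated here to an n-by-n table, flipping the diagonal yields a sequence that differs from every row, so no enumeration of such sequences (and hence of the reals) can be complete:

```python
def diagonal_flip(table):
    """Return a sequence differing from row i at position i."""
    return [1 - table[i][i] for i in range(len(table))]

table = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]

d = diagonal_flip(table)  # [1, 0, 1, 0]
for row in table:
    assert d != row       # the diagonal sequence escapes every row
print("diagonal sequence:", d)
```

An elegant proof, and an item of knowledge; but nothing one can take to the streets.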
(3) therefore seems true: there are items of knowledge that are not items of belief because not essentially tied to action.
I have shown that each limb of our inconsistent triad has some plausibility. So it is an interesting problem. How solve it? Reject one of the limbs! But which one? And how do you show that the rejection of one is more reasonable than the rejection of one of the other two? And why is it more reasonable to hold that the problem has a solution than to hold that it is insoluble and thus a genuine aporia?
As Hilary Putnam once said, "It ain't obvious what's obvious." Or as I like to say, "One man's datum is another man's theory."
But is it obvious that it ain't obvious what's obvious?
It looks as if we have a little self-referential puzzle going here. Does the Hilarian dictum apply to itself? An absence of the particular quantifier may be read as a tacit endorsement of the universal quantifier. Now if it is never obvious what is obvious, then we have self-reference and the Hilarian dictum by its own say-so is not obvious.
Is there a logical problem here? I don't think so. With no breach of logical consistency one can maintain that it is never obvious what is obvious, as long as one does not exempt one's very thesis. In this case the self-referentiality issues not in self-refutation but in self-vitiation. The Hilarian dictum is a self-weakening thesis. Over the years I have given many examples of this. (But I am now too lazy to dig them out of my vast archives.)
There is no logical problem, but there is a factual problem. Surely some propositions are obviously true. Having toked on a good cigar in its end game, when a cigar is at its most nasty and rasty, I am feeling mighty fine long about now. My feeling of elation, just as such, taken in its phenomenological quiddity, under epoche of all transcendent positings -- this quale is obvious if anything is.
So let us modify the Hilarian dictum to bring it in line with the truth.
In philosophy, appeals to what is obvious, or self-evident, or plain to gesunder Menschenverstand, et cetera und so weiter are usually unavailing for purposes of convincing one's interlocutor.
And yet we must take some things as given and non-negotiable. Welcome to the human epistemic predicament.
We are ignorant about ultimates and we will remain ignorant in this life. Perhaps on the Far Side we will learn what we cannot learn here. But whether there is survival of bodily death, and whether it will improve our epistemic position, are, again, things about which we will remain ignorant in this life.
It is admittedly strange to suppose that death is the portal to knowledge. But is it stranger than supposing that a being capable of knowledge simply vanishes with the breakdown of his body?
The incapacity of materialists to appreciate the second strangeness I attribute to their invincible body-identification.
This is the kind of e-mail I like, brief and pointed:
Recently I've encountered an argument that runs like this:
1. All knowledge comes from experience.
2. All experiences are subjective.
3. Ergo, all knowledge is subjective.
I think I can argue somewhat against this argument, but I need a nice snappy response to it.
The snappiest response to this invalid argument is that it falls victim to a fallacy of equivocation: 'experience' is being used in two different senses. Hence the syllogism lacks a middle term and commits the four-term fallacy (quaternio terminorum).
To experience is to experience something. So we need to distinguish between the act of experiencing and the object experienced. The act is subjective: it is a mental occurrence. The object is typically not subjective. For example, how do I know that there is a cat on my lap now? I experience the cat via my outer senses: I see the cat, feel its weight, hear it purr. The experiencing is subjective; the cat is not. I have objective knowledge of the existence and properties of the cat despite the fact that my experiencing is a subjective process.
Now I don't grant that all knowledge comes from experience; I grant only that all knowledge arises on the occasion of experience. But suppose I grant premise (1) arguendo. What (1) says is that all knowledge is knowledge of the objects of the senses. (There is no a priori knowledge.) So we can rewrite the argument as follows:
1*. All knowledge is knowledge of sensory objects (either directly or via instruments such as microscopes).
2*. All acts of experiencing are subjective.
3*. All knowledge is subjective.
This syllogism is clearly a non sequitur since there is no middle term.
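The invalidity can even be exhibited mechanically. Here is a sketch of a brute-force countermodel search, with 'item of knowledge' as set K, 'knowledge of sensory objects' as O, 'act of experiencing' as A, and 'subjective' as U (the set names are mine). Because no middle term links O or A to U, a model making the premises true and the conclusion false turns up immediately:

```python
from itertools import product

domain = [0, 1]

def subsets(xs):
    """All subsets of xs, as Python sets."""
    return [{x for x, keep in zip(xs, bits) if keep}
            for bits in product([False, True], repeat=len(xs))]

# Premises: all K are O (1*), all A are U (2*). Conclusion: all K are U (3*).
countermodel = next(
    (K, O, A, U)
    for K, O, A, U in product(subsets(domain), repeat=4)
    if K <= O and A <= U and not (K <= U)   # premises true, conclusion false
)

K, O, A, U = countermodel
print("countermodel: K =", K, "O =", O, "A =", A, "U =", U)
```

A model with one objective item of knowledge and no acts of experiencing at all already does the job: both premises hold, the conclusion fails.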
The subjectivity of experiencing is logically consistent with the objectivity of knowledge via the senses. There is no knowledge apart from minds. And yet minds have the power of transcending their internal states and grasping what is real and true independently of minds. How this is possible is a further question, and perhaps the central question of epistemology.
One way to embarrass an empiricist is to ask him how he knows propositions like (1*). Does he know it by experience? No. Then, by his own principles, he doesn't know it. Why then does he think it is true?
It is gratifying to know that I am getting through to some people as is evidenced by the fact that they recall my old posts; and also that I am helping them think critically as is evidenced by the fact that they test my different posts on a given topic for mutual consistency. This from a Pakistani reader:
This is the third in a series of posts on Thomas Nagel's Mind and Cosmos (Oxford 2012). The first is an overview, and the second addresses Nagel's reason for rejecting theism. This post will comment on some of the content in Chapter 4, "Cognition."
In Chapter 4, Nagel tackles the topic of reason, both theoretical and practical. The emphasis is on theoretical reason, with practical reason receiving a closer treatment in the following chapter entitled "Value."
We have already seen that consciousness presents a problem for evolutionary reductionism due to its irreducibly subjective character. (For some explanation of this irreducibly subjective character, see my Like, What Does It Mean?)
'Consciousness' taken narrowly refers to phenomenal consciousness, pleasures, pains, emotions, and the like, but taken widely it embraces also thought, reasoning, and evaluation. Sensory qualia are present in nonhuman animals, but only we think, reason, and evaluate. We evaluate our thoughts as either true or false, our reasonings as either valid or invalid, and our actions as either right or wrong, good or bad. These higher-level capacities can be possessed only by beings that are also conscious in the narrow sense. Thus no computer literally thinks or reasons or evaluates the quality of its reasoning, imposing norms on itself as to how it ought to reason if it is to arrive at truth; at best computers simulate these activities. Talk of computers thinking is metaphorical. This is a contested point, of course. But if mind is a biological phenomenon, as Nagel maintains, then this is not particularly surprising.
What makes consciousness fascinating is that while it is irreducibly subjective, it is also, in its higher manifestations, transcensive of subjectivity. (This is my formulation, not Nagel's.) Mind is not trapped within its interiority but transcends it toward impersonal objectivity, the "view from nowhere." Consciousness develops into "an instrument of transcendence that can grasp objective reality and objective value." (85) Both sides of mind, the subjective and the objective, pose a problem for reductive naturalism. "It is not merely the subjectivity of thought but its capacity to transcend subjectivity and to discover what is objectively the case that presents a problem." (72)
Exactly right! One cannot prise apart the two sides of mind, segregating the qualia problem from the intentionality problem, calling the former 'hard' and imagining the latter to be solved by some functionalist analysis. It just won't work. The so-called Hard Problem is actually insoluble on reductive naturalism, and so is the intentionality problem. (Some who appreciate this go eliminativist -- which is a bit like getting rid of a headache by blowing one's brains out.)
The main problem Nagel deals with in this chapter concerns the reliability of reason. Now it is a given that reason is reliable, though not infallible, and that it is a source of objective knowledge. The problem is not whether reason is reliable as a source of knowledge, but how it is possible for reason to be reliable if evolutionary naturalism is true. I think it is helpful to divide this question into two:
Q1. How can reason be reliable if materialist evolutionary naturalism is true?
Q2. How can reason be reliable if evolutionary naturalism is true?
Let us not forget that Nagel himself is an evolutionary naturalist. He is clearly a naturalist as I explained in my first post, and he does not deny the central tenets of the theory of evolution. His objections are to reductive materialism (psychophysical reductionism) and not to either naturalism or evolution. Now Nagel is quite convinced, and I am too, that the answer to (Q1) is that it is not possible for reason to be relied upon in the manner in which we do in fact rely upon it, if materialism is true. The open question for Nagel is (Q2). Reason is reliable, and some version of evolutionary naturalism is also true. The problem is to understand how it is possible for both of them to be true.
Now in this post I am not concerned with Nagel's tentative and admittedly speculative answer to (Q2). I hope to take that up in a subsequent post. My task at present is to understand why Nagel thinks that it is not possible for reason to be reliable if materialism is true.
Suppose we contrast seeing a tree with grasping a truth by reason.
Vision is for the most part reliable: I am, for the most part, justified in believing the evidence of my senses. And this despite the fact that from time to time I fall victim to perceptual illusions. My justification is in no way undermined if I think of myself and my visual system as a product of Darwinian natural selection. "I am nevertheless justified in believing the evidence of my senses for the most part, because this is consistent with the hypothesis that an accurate representation of the world around me results from senses shaped by evolution to serve that function." (80)
Now suppose I grasp a truth by reason. (E.g., that I must be driving North because the rising sun is on my right.) Can the correctness of this logical inference be confirmed by the reflection that the reliability of logical thinking is consistent with the hypothesis that evolution has selected instances of such thinking for accuracy?
No, says Nagel, and for a very powerful reason. When I reason I engage in such operations as the following: I make judgments about consistency and inconsistency; draw conclusions from premises; subsume particulars under universals, etc. So if I judge that the reliability of reason is consistent with an evolutionary explanation of its origin, I presuppose the reliability of reason in making this very judgment. Nagel writes:
It is not possible to think, "reliance on my reason, including my reliance on this very judgment, is reasonable because it is consistent with its having an evolutionary explanation." Therefore any evolutionary account of the place of reason presupposes reason's validity and cannot confirm it without circularity. (80-81)
Nagel's point is that the validity of reason can neither be confirmed nor undermined by any evolutionary account of its origins. Moreover, if reason has a merely materialist origin it would not be reliable, for then its appearance would be a fluke or accident. And yet reason is tied to organisms just as consciousness is. Nagel faces the problem of explaining how reason can be what it is, an "instrument of transcendence" (85) and a "final court of appeal" (83), while also being wholly natural and a product of evolution. I'll address this topic in a later post.
Why can't reason be a cosmic accident, a fluke? This is discussed in my second post linked to above, though I suspect I will be coming back to it.
Is it ever rational to believe something for which one has insufficient evidence? If it is never rational to believe something for which one has insufficient evidence, then presumably it is also never rational to act upon such a belief. For example, if it irrational to believe in God and post-mortem survival, then presumably it is also irrational to act upon those beliefs, by entering a monastery, say. Or is it?
W. K. Clifford is famous for his evidentialist thesis that "It is wrong always, everywhere, and for anyone, to believe anything on insufficient evidence." On this way of thinking, someone who fails to apportion belief to evidence violates the ethics of belief, and thereby does something morally wrong. This has been called ethical evidentialism, since the claim is that it is morally impermissible to believe on insufficient evidence. Sufficient evidence is a preponderance of evidence. On ethical evidentialism, then, it is morally permissible for a person to believe that p if and only if p is more likely than not on the evidence the person has.
A cognitive evidentialist, by contrast, maintains that one is merely unreasonable to believe beyond a preponderance of evidence. One then flouts a norm of rationality rather than a norm of morality.
Jeffrey Jordan, who has done good work on this topic, makes a further distinction between absolute and defeasible evidentialism. The absolute evidentialist holds that the evidentialist imperative applies to every proposition, while the defeasible evidentialist allows exceptions. Although Clifford had religious beliefs in his sights, his thesis, by its very wording, applies to every sort of belief, including political beliefs and the belief expressed in the Clifford sentence quoted above! I take this as a refutation of Clifford's evidentialist stringency. For if one makes no exceptions concerning the application of the evidentialist imperative, then it applies also to "It is wrong always, everywhere, and for anyone, to believe anything on insufficient evidence." And then the embarrassing question arises as to what evidence one could have for the draconian Cliffordian stricture, which is not only a morally normative claim but is also crammed with universal quantifiers.
If I took Clifford seriously I would have to give up most of my beliefs about politics, health, nutrition, economics, history, and plenty of other things. For example, I believe it is a wise course to restrict my eating of eggs to three per week due to their high cholesterol content. And that's what I do. Do I have sufficient evidence for this belief? Not at all. I certainly don't have evidence that entails the belief in question. What evidence I have makes it somewhat probable. But more probable than not? Not clear! But to be on the safe side I restrict my intake of high-cholesterol foods. What I give up, namely, the pleasures of bacon and eggs for breakfast every morning, etc., is paltry in comparison to the possible pay-off, namely, living and blogging to a ripe old age. Surely there is nothing immoral or irrational in my behavior even though I am flouting Clifford's rule. And similarly in hundreds of cases.
The Desert Rat
Consider now the case of a man dying of thirst in a desert. He comes upon two water sources. He knows (never mind how) that one is potable while the other is poisonous. But he does not know which is which, and he has no way of finding out. Should the man suspend belief, even unto death, since he has insufficient evidence for deciding between the two water sources? Let us suppose that our man is a philosopher and thus committed to a life of the highest rationality.
Absolute evidentialism implies that the desert wanderer should suspend judgment and withhold assent: he may neither believe nor disbelieve of either source that it is potable or poisonous on pain of either irrationality or an offence against the ethics of belief.
On one way of looking at the matter, suspension of belief -- and doing nothing in consequence -- would clearly be the height of irrationality in a case like this. The desert wanderer must simply drink from one of the sources and hope for the best. Clearly, by drinking from one (but not both) of the sources, his chances of survival are one half, while his chances of survival from drinking from neither are precisely zero. By simply opting for one, he maximizes his chances of reality-contact, and thereby his chances of survival. Surely a man who wants to live is irrational if he fails to perform a simple action that will give him a 50-50 chance of living when the alternative is certain death.
He may be epistemically irrational, but he is prudentially rational. And in a case like this prudential rationality trumps the other kind.
Cases like this are clear counterexamples to evidentialist theories of rationality according to which rationality requires always apportioning belief to evidence and never believing on insufficient evidence. In the above case the evidence is the same for either belief and yet it would be irrational to suspend belief. Therefore, rationality for an embodied human agent (as opposed to rationality for a disembodied transcendental spectator) cannot require the apportioning of belief to evidence in all cases, as Clifford demands. There are situations in which one must decide what to believe on grounds other than the evidential. Will I believe that source A is potable? Or will I believe that source B is potable? In Jamesian terms the option is live, forced, and momentous. (It is not like the question whether the number of ultimate particles in the universe is odd or even, which is neither live, forced, nor momentous.) An adequate theory of rationality, it would seem, must allow for believing beyond the evidence. It must return the verdict that in some cases, to refuse to believe beyond the evidence is positively irrational.
But then absolute evidentialism is untenable and we must retreat to defeasible evidentialism.
The New Neighbors
Let us consider another such case. What evidence do I have that my new neighbors are decent people? Since they have just moved in, my evidence base is exiguous indeed and far from sufficient to establish that they are decent people. (Assume that some precisifying definition of 'decent' is on the table.) Should I suspend judgment and behave in a cold, skeptical, stand-offish way toward them? ("Prove that you are not a scumbag, and then I'll talk to you.") Should I demand of them 'credentials' and letters of recommendation before having anything to do with them? Either of these approaches would be irrational. A rational being wants good relations with those with whom he must live in close proximity. Wanting good relations, he must choose means that are conducive to that end. Knowing something about human nature, he knows that 'giving the benefit of the doubt' is the wise course when it comes to establishing relations with other people. If you begin by impugning the integrity of the other guy, he won't like you. One must assume the best about others at the outset and adjust downwards only later and on the basis of evidence to the contrary. But note that my initial belief that my neighbors are decent people -- a belief that I must have if I am to act neighborly toward them -- is not warranted by anything that could be called sufficient evidence. Holding that belief, I believe way beyond the evidence. And yet that is the rational course.
So again we see that in some cases, to refuse to believe beyond the evidence is positively irrational. A theory of rationality adequate for the kind of beings we are cannot require that belief be always and everywhere apportioned to evidence.
In the cases just mentioned, one is warranted in believing beyond the evidence, but there are also cases in which one is warranted in believing against the evidence. In most cases, if the available evidence supports that p, then one ought to believe that p. But consider Jeff Jordan's case of
The Alpine Hiker
An avalanche has him stranded on a mountainside facing a chasm. He cannot return the way he came, but if he stays where he is he dies of exposure. His only hope is to jump the chasm. The preponderance of evidence is that this is impossible: he has no epistemic reason to think that he can make the jump. But our hiker knows that what one can do is in part determined by what one believes one can do, that "exertion generally follows belief," as Jordan puts it. If the hiker can bring himself to believe that he can make the jump, then he increases his chances of making it. "The point of the Alpine hiker case is that pragmatic belief-formation is sometimes both morally and intellectually permissible."
We should therefore reject absolute evidentialism, both ethical and cognitive. We should admit that there are cases in which epistemic considerations are reasonably defeated by prudential considerations.
And now we come to the Big Questions. Should I believe that I am libertarianly free? That it matters how I live? That something is at stake in life? That I will in some way or other be held accountable after death for what I do and leave undone here below? That God exists? That I am more than a transient bag of chemical reactions? That a Higher Life is possible?
Not only do I not have evidence that entails answers to any of these questions, I probably do not have evidence that makes a given answer more probable than not. Let us assume that it is not more probable than not that God exists and that I (in consequence) have a higher destiny in communion with God.
But here's the thing. I have to believe that I have a higher destiny if I am to act so as to attain it. It is like the situation with the new neighbors. I have to believe that they are decent people if I am to act in such a way as to establish good relations with them. Believing the best of them, even on little or no evidence, is pragmatically useful and prudentially rational. I have to believe beyond the evidence. Similarly in the Alpine Hiker case. He has to believe that he can make the jump if he is to have any chance of making it. So even though it is epistemically irrational for him to believe he can make it on the basis of the available evidence, it is prudentially rational for him to bring himself to believe. You could say that the leap of faith raises the probability of the leap across the chasm.
And what if he is wrong? Then he dies. But if he sits down in the snow in despair he also dies, and more slowly. By believing beyond the evidence he lives better his last moments than he would have by giving up.
Here we have a pragmatic argument that is not truth-sensitive: it doesn't matter whether he will fail or succeed in the jump. Either way, he lives better here and now if he believes he can cross the chasm to safety. And this, even though the belief is not supported by the evidence.
It is the same with God and the soul. The pragmatic argument in favor of them is truth-insensitive: whether or not it is a good argument is independent of whether or not God and the soul are real. For suppose I'm wrong. I live my life under the aegis of God, freedom, and immortality, but then one day I die and become nothing. I was just a bag of chemicals after all. It was all just a big joke. Electrochemistry played me for a fool. So what? What did I lose by being a believer? Nothing of any value. Indeed, I have gained value since studies show that believers tend to be happier people. But if I am right, then I have done what is necessary to enter into my higher destiny. Either way I am better off than without the belief in God and the soul. If I am not better off in this life and the next, then I am better off in this life alone.
I am either right or wrong about God and the soul. If I am right, and I live my beliefs, then I have lived in a way that not only makes me happier here and now, but also fits me for my higher destiny. If I am wrong, then I am simply happier here and now.
So how can I lose? Even if they are illusions, believing in God and the soul incurs no costs and disbelieving brings no benefits.
One's own genealogy, for example. What does it matter who begat whom in one's line? Most of us will discover the names and dates of insignificant people who have left nothing behind but their names and dates.
Or is it just a philosopher's prejudice to be concerned more with timeless universals than with temporal particulars? To thrill to the Thoreauvian admonition, "Read not The Times, read the eternities"?
If this post needs theme music, I suggest Party Lights (1962) by the one-hit wonder, Claudine Clark: "I see the lights/I see the party lights/They're red and blue and green/Everybody in the crowd is there/But you won't let me make the scene!" (Because, mama dear, you've kept me cooped up in a black-and-white room studying neuroscience.)
The 'Knowledge Argument' as it is known in the trade has convinced many of the untenability of functionalism in the philosophy of mind. Here is Paul M. Churchland's presentation of Frank Jackson's version of the argument:
1. Mary knows everything there is to know about brain states and their properties.
2. It is not the case that Mary knows everything there is to know about sensations and their properties.
Therefore, by Leibniz's law [i.e., the Indiscernibility of Identicals; see my post 'Leibniz's Law': A Useless Expression],
3. Sensations and their properties are not identical to brain states and their properties.
("Reduction, Qualia, and the Direct Introspection of the Brain," Journal of Philosophy, vol. 82, no. 1, January 1985, pp. 8-28, sec. IV, "Jackson's Knowledge Argument.")
Mary is a brilliant neuroscientist who has spent her entire life in a visually impoverished state. Pent up in a room from birth and sheltered from colors, her visual experience is restricted to black and white and shades of gray. You are to imagine that she has come to know everything there is to know about the brain and its visual system. Her access to the outer world is via black-and-white TV. The neuroscience texts over which she so assiduously pores have been expurgated by the dreaded Color Censor.
Churchland finds two "shortcomings" with the above argument. I will discuss only the first in this post.
Churchland smells a fallacy of equivocation. 'Knows about,' he claims, is being used in different senses in (1) and (2):
Knowledge in (1) seems to be a matter of having mastered a set of sentences or propositions, the kind one finds written in neuroscience texts, whereas knowledge in (2) seems to be a matter of having a representation of redness in some prelinguistic or sublinguistic medium of representation for sensory variables, or to be a matter of being able to make certain sensory discriminations, or something along these lines. (Emphasis in original)
Rather than argue that there is no equivocation in the argument as Churchland formulates it, I think it is best to concede the point, urging instead that Churchland has not presented the Knowledge Argument fairly. He finds an equivocation only because he has set up a straw man. Consider the following version:
4. Mary knows all of the physical facts about color vision.
5. Venturing outside her black-and-white domain for the first time, she comes to know a new fact: what it is like to see red. Therefore
6. This new fact is not a physical fact.
There is no equivocation on 'knows' in this argument. Mary knows all of the physical facts about the brain and the visual system. If the physical facts are all the facts, then, when she emerges from the room and views a red sunset, she learns nothing new. But this is not the case. She does learn something new, something she might express by exclaiming, "So this is what it is like to see red!" That is a new fact that she comes to know.
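The validity of this version can be checked mechanically. Here is a minimal sketch in Lean 4, with `physical` and `knows` as schematic predicates of my own devising (they are stand-ins, not anything in Jackson or Churchland): if Mary knows every physical fact, yet there is a fact she does not know, then some fact is not physical.

```lean
-- Hypothetical predicates: `physical f` says fact f is a physical fact;
-- `knows f` says Mary knows fact f.
variable (Fact : Type) (physical knows : Fact → Prop)

-- (4) Mary knows all of the physical facts.
-- (5) There is a fact she does not know.
-- (6) Therefore, some fact is not a physical fact.
example (h4 : ∀ f, physical f → knows f)
    (h5 : ∃ f, ¬ knows f) :
    ∃ f, ¬ physical f :=
  Exists.elim h5 fun f hf => ⟨f, fun hp => hf (h4 f hp)⟩
```

The single predicate `knows` applies univocally in both premises, which is the point against the equivocation charge.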
The best counter to this argument is to deny (5) by arguing that no new fact is learned when Mary steps outside. Mary simply acquires a new concept, a new way of gaining epistemic access to the same old physical facts, namely, the physical and functional facts involved in seeing a red thing. As Churchland puts it,
. . . the difference between a person who knows all about the visual cortex but has never enjoyed a sensation of red, and a person who knows no neuroscience but knows well the sensation of red, may reside not in what is respectively known by each (brain states by the former, qualia by the latter), but rather in the different type of knowledge each has of exactly the same thing. The difference is in the manner of the knowing, not in the nature(s) of the thing known. (Emphases in original)
Churchland's suggestion is that one and the same physical reality appears, or can appear, in two different ways, a third-person way and a first-person way, and that this first-person way of access is no evidence of a first-person way of being. The sensory quale is not an item distinct from the underlying state of the brain, an item that escapes the physicalist's net; the quale is a mode of presentation of the brain state. The quale is an appearance of the brain state. And so Churchland thinks that one can have knowledge of one's sensations via their qualitative features without knowing any neuroscience, and yet without it being the case that "sensations are beyond the reach of physical science."
In sum, sensations are identical to brain states. But they can be accessed in two ways, via qualia, and via neuroscience. That there are two different modes of epistemic access does not entail that qualia are distinct in reality from brain states. One and the same brain "uses more modes and media of representation than the simple storage of sentences."
Unfortunately, there is no clear sense in which a quale is an appearance of a brain state. The former may be caused by the latter. But that is not to say that the quale is of or about the brain state. Phenomenal redness does not present a brain state to me. It does not present anything (distinct from itself) to me. After all, qualia are non-intentional: they lack aboutness. No doubt a quale has a certain content, but not an intentional or representational content. One can describe it without describing what it is of, for the simple reason that there is nothing it is of. An intentional state, however, cannot be described without describing what it is of. I can't desire without desiring something, a cold beer, say. So 'cold beer' enters, and enters necessarily, into the description of the mental state I am in when I desire a cold beer. But no words referring to neural items need enter into the description of what I experience when I experience a yellowish-orange afterimage, or feel anxious.
Qualia do not play a merely epistemic role as Churchland thinks. They are items in their own right. They are not mere appearances of an underlying reality; they are items with their own mode of being. For a quale, to be is to be perceived. Its reality consists in its appearing. For this reason it makes no sense to say that the reality of a quale is something distinct from it, something physical to which the quale refers.
Suppose someone, armed with the Indiscernibility of Identicals, were to argue that the Morning Star and the Evening Star are numerically distinct because they differ property-wise, the one, but not the other, being the brightest celestial object in the morning sky. Such an argument could be easily rebutted by pointing out that the two 'stars' are merely different modes of presentation of one and the same physical thing, the planet Venus. Difference in epistemic access does not argue difference in being! Churchland thinks he can similarly rebut the person who argues that qualia are distinct from brain states by claiming that qualia and sentences of neuroscience are different modes of presentation or "media of representation" of one and the same thing, which is wholly physical.
But here is precisely where the mistake is made. Qualia do not present or represent anything. In particular, they do not represent their causes. They are items in their own right with their own mode of being, a mode of being distinct from the mode of being of physical items. For a quale, to be is to be perceived. For a physical item, this is not the case. One cannot drive a wedge between the appearance and the reality of a quale; but one can and must drive such a wedge between the appearance and the reality of physical items.
Even if one were to insist that qualia present or represent their underlying brain states, the materialist position would still be absurd. For if x represents y, then x is distinct from y -- in reality and not merely for us. So if phenomenal redness is an appearance of a complex brain state, the two items are distinct. Churchland thinks he can place qualia on the side of representation and then forget about them. But that is an obvious mistake.
Underlying this obvious mistake is the fundamental absurdity of materialism, which is the attempt to understand mind in wholly non-mental terms. It cannot be done since the very investigation of physical reality presupposes mind.
A reader who says he is drawn to the view that knowledge excludes belief comments:
I am taking a philosophy class now that takes for granted that knowledge entails belief. My sense is that most philosophers now think that that condition is obvious and settled. They tend to dispute what "justification" means, or add more conditions to the Justified True Belief formula.
That knowledge is justified true belief is a piece of epistemological boilerplate that has its origin in Plato's Theaetetus. The JTB analysis is extremely plausible. It is first of all self-evident that there is no false knowledge. So, necessarily, if S knows that p, then 'p' is true. It also seems obvious that one can have a true belief without having knowledge. Suppose I believe that at this very moment Peter (who is 60 miles away) is teaching a class on the philosophy of science, and suppose it is true that at this very moment he is teaching such a class; it doesn't follow that I know that he is teaching such a class. Knowledge requires justification, whatever exactly that is. Finally, if S knows that p, how can it fail to be the case that S believes that p? It may seem obvious that knowledge entails belief. Necessarily, whatever I know I believe, though not conversely.
So I agree with my reader that most philosophers now think that the belief condition is "obvious and settled." But most academic philosophers are fashionistas: they follow the trends, stick to what's 'cool,' and turn up their noses at what they deem politically incorrect. And they read only the 'approved' journals and books. I pronounce my 'anathema' upon them. In any case it is not obvious that knowledge entails belief.
The Case for Saying that Knowledge Excludes Belief
Why not say this: Necessarily, if S knows that p, then it is not the case that S believes that p?
One cannot understand belief except in relation to other mental states. So let's consider how believing and knowing are related, taking both as propositional attitudes. They are obviously different, and yet they share a common element. Suppose we say that what is common to S's knowing that p and S's believing that p is S's acceptance of p. I cannot (occurrently) believe that Oswald acted alone unless I accept the proposition that Oswald acted alone, and I cannot (occurrently) know that he acted alone without accepting the very same proposition. To accept, of course, is to accept-as-true. It is equally obvious that what is accepted-as-true might not be true. Those who accept that the earth is flat accept-as-true what is false. Now one could analyze 'S knows that p' as follows:
a) S unconditionally accepts-as-true p
b) p is true
c) S is justified in accepting-as-true p.
This is modeled on, but diverges from, the standard justified-true-belief (JTB) analysis of 'know' the locus classicus of which is Plato's Theaetetus.
And one could perhaps analyze 'S believes that p' as follows:
a) S unconditionally accepts-as-true p
d) S does not know that p.
These analyses accommodate the fact that there is something common to believing and knowing, but without identifying this common factor as belief. The common factor is acceptance. A reason for not identifying the common element as belief is that, in ordinary language, knowledge excludes belief. Thus if I ask you whether you believe that p, you might respond, 'I don't believe it, I know it!' Do I believe the sun is shining? No, I know the sun is shining. Do I know that I will be alive tomorrow? No, but I believe it. That is, I give my firm intellectual assent to the proposition despite its not being evident to me. Roughly, belief is firm intellectual assent in the absence of compelling evidence.
Surely this is what we mean by belief in those cases that clearly count as belief. Lenny the liberal, for example, believes that anthropogenic global warming is taking place and is a dire environmental threat. Lenny doesn't know these two putative facts; he believes them: he unconditionally accepts, he firmly assents to, the two propositions in the absence of compelling evidence. And it seems clear that an element of will is involved in our boy's belief since the evidence does not compel his intellectual assent. He decides to believe what he believes. His believing is in the control of his will. This does not mean that he can believe anything he wants to believe. It means that a 'voluntative surplus' must be superadded to his evidence to bring about the formation of his belief. Without the voluntative superaddition, he would simply sit staring at his evidence, so to speak. There would be no belief and no impetus to action. Beliefs typically spill over into actions. But there would not be even a potential 'spill over' unless there were a decision on Lenny's part to go beyond his evidence by superadding to it his firm intellectual assent.
"But aren't you just using 'believes' in an idiosyncratic way?"
It is arguably the other way around. Someone who says he believes that the sun is shining when he sees that it is shining is using 'believes' in an idiosyncratic way. He is using 'believes' in a theory-laden way, the theory being the JTB analysis of 'knows.'
"But then isn't this just a terminological quibble? You want to substitute 'accepts' or 'accepts-as-true' for 'believes' in the standard JTB analysis of 'knows' and you want to reserve 'believes' for those cases in which there is unconditional acceptance but not knowledge."
The question is not merely terminological. There is an occurrent mental state in which one accepts unconditionally propositions that are not evident. It doesn't matter whether we call this 'belief' or something else. But calling it 'belief' comports well with ordinary language.
Let me now elaborate upon this account of belief, or, if you insist, of Aquinian-Pieperian belief.
1. Belief is a form of acceptance or intellectual assent. To believe that p is to accept *p*, and to disbelieve that p is to reject *p*. One may also do neither by abstaining from both acceptance and rejection. (Asterisks around a sentence make of the sentence a name of the Fregean proposition expressed by the sentence.)
2. If acceptance is the genus, then knowing, believing, and supposing are species thereof. In knowing and believing the acceptance is unconditional whereas in supposing it is conditional. It follows that belief is not what is common to believing and knowing, as it is on the JTB analysis. To think otherwise is to confuse the genus (acceptance) with one of its species (belief).
Genus: Acceptance
Unconditional acceptance -- Species 1: Knowledge; Species 2: Belief
Conditional acceptance -- Species 3: Supposal
3. What distinguishes believing and knowing is that the believer qua believer does not know, and the knower qua knower does not believe. Both, however, accept. What I just wrote appears objectionably circular. It may seem to boil down to this: what distinguishes believing and knowing is that they are distinct! We can lay the specter of the circle by specifying the specific difference.
If believing and knowing are species of the genus acceptance, what is the specific difference whereby the one is distinguished from the other? Believing that p and knowing that p are not distinguished by the common propositional content, p. Nor are they distinguished by their both being modes of unconditional acceptance. Can we say that they differ in that the evidence is compelling in the case of knowing but less than compelling in the case of believing? That is true, but then the difference would seem to be one of degree and not of kind. But if knowing and believing are two species of the same genus, then we have a difference in kind. Perhaps we can say that knowledge is evident acceptance while belief is non-evident acceptance. Or perhaps the difference is that belief is based on another's testimony whereas knowledge is not. Let's explore the latter suggestion.
4. It is essential to belief that it involve both a proposition (the content believed) and a person, the one whose testimony one trusts when one gains access to the truth via belief. To believe is to unconditionally accept a proposition on the basis of testimony. If so, then there are two reasons why it makes no sense to speak of perceptual beliefs. First, what I sense-perceive to be the case, I know to be the case, and therefore, by #3 above, I do not believe to be the case. Second, what I sense-perceive to be the case I know directly without need of testimony.
On this approach, the difference between believing and knowing is that believing is based on testimony whereas knowing is not. Suppose that p is true and that my access to *p*'s truth is via the testimony of a credible witness W. Then I have belief but not knowledge. W, we may assume, knows whereof he speaks. For example, he saw Jones stab Smith. W has knowledge but not belief.
'The table is against the wall.' This is a true contingent sentence. How do I know that it is true except by seeing (or otherwise sense perceiving) that the table is against the wall? And what is this seeing if not the seeing of a fact, where a fact is not a true proposition but the truth-maker of a true proposition? This seeing of a fact is not the seeing of a table (by itself), nor of a wall (by itself), nor of the pair of these two physical objects, nor of a relation (by itself). It is the seeing of a table's standing in the relation of being against a wall. It is the seeing of a truth-making fact. (So it seems we must add facts to the categorial inventory.) The relation, however, is not visible, as are the table and the wall. So how can the fact be visible, as it apparently must be if I am to be able to see (literally, with my eyes) that the table is against the wall? That is our problem.
Let 'Rab' symbolize a contingent relational truth about observables such as 'The table is against the wall.' We can then set up the problem as an aporetic pentad:
1. If one knows that Rab, then one knows this by seeing that Rab (or by otherwise sense-perceiving it).
2. To see that Rab is to see a fact.
3. To see a fact is to see all its constituents.
4. The relation R is a constituent of the fact that Rab.
5. The relation R is not visible (or otherwise sense-perceivable).
The pentad is inconsistent: the conjunction of any four limbs entails the negation of the remaining one. To solve the problem, then, we must reject one of the propositions. But which one?
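The inconsistency can be exhibited formally. Below is a minimal Lean 4 sketch; the predicate names are mine, and I add the explicit premise h0 that one does know that Rab, which the pentad presupposes (it is what the defense of (1) supplies). From limbs (1) through (4) plus h0 it follows that one sees the relation R, contradicting limb (5).

```lean
-- Hypothetical formalization. `Knows`, `SeesThat`, `SeesFact` stand for
-- "one knows that Rab", "one sees that Rab", "one sees the fact that Rab";
-- `Sees c` for "one sees constituent c"; `InFact c` for "c is a
-- constituent of the fact that Rab".
variable (Knows SeesThat SeesFact : Prop)
variable (C : Type) (Sees InFact : C → Prop) (R : C)

example
    (h0 : Knows)                               -- one does know that Rab
    (h1 : Knows → SeesThat)                    -- limb (1)
    (h2 : SeesThat → SeesFact)                 -- limb (2)
    (h3 : SeesFact → ∀ c, InFact c → Sees c)   -- limb (3)
    (h4 : InFact R)                            -- limb (4)
    (h5 : ¬ Sees R) :                          -- limb (5)
    False :=
  h5 (h3 (h2 (h1 h0)) R h4)
```

Rejecting any one limb blocks the derivation, which is why each of the theories canvassed below targets a different limb.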
(1) is well-nigh undeniable: I sometimes know that the cat is on the mat, and I know that the cat is on the mat by seeing that she is. How else would I know that the cat is on the mat? I could know it on the basis of the testimony of a reliable witness, but then how would the witness know it? Sooner or later there must be an appeal to direct seeing. (5) is also undeniable: I see the cat; I see the mat; but I don't see the relation picked out by 'x is on y.' And it doesn't matter whether you assay relations as relation-instances or as universals. Either way, no relation appears to the senses.
Butchvarov denies (2), thereby converting our pentad into an argument against facts, or rather an argument against facts about observable things. (See his "Facts" in Javier Cumpa ed., Studies in the Ontology of Reinhardt Grossmann, Ontos Verlag 2010, pp. 71-93, esp. pp. 84-85.) But if there are no facts about observable things, then it is reasonable to hold that there are no facts at all.
So one solution to our problem is the 'No Fact Theory.' One problem I have with Butchvarov's denial of facts is that (1) seems to entail (2). Now Butch grants (1). (That is a loose way of saying that Butch says things in his "Facts" article that can reasonably be interpreted to mean that if (1) were presented to him, then he would grant it.) So why doesn't he grant (2)? In other words, if I can see (with my eyes) that the cat is on the mat, is not that excellent evidence that I am seeing a fact and not just a cat and a mat? If you grant me that I sometimes see that such-and-such, must you not also grant me that I sometimes see facts?
And if there are no facts, then how do we explain the truth of contingently true sentences such as 'The cat is on the mat'? There is more to the truth of this sentence than the sentence that is true. The sentence is not just true; it is true because of something external to it. And what could that be? It can't be the cat by itself, or the mat by itself, or the pair of the two. For the pair would exist if the sentence were false. 'The cat is not on the mat' is about the cat and the mat and requires their existence just as much as 'The cat is on the mat.' The truth-maker, then, must have a proposition-like structure, and the natural candidate is the fact of the cat's being on the mat. This is a powerful argument for the admission of facts into the categorial inventory.
Another theory arises by denying (3). But this denial is not plausible. If I see the cat and the mat, why can't I see the relation -- assuming that I am seeing a fact and that a fact is composed of its constituents, one of them being a relation? As Butch asks, rhetorically, "If you supposed that the relational fact is visible, but the relation is not, is the relation hidden? Or too small to see?" (85)
A third theory comes of denying (4). One might think to deny that R is a constituent of the fact of a's standing in R to b. But surely this theory is a nonstarter. If there are relational facts, then relations must be constituents of some facts.
Our problem seems to be insoluble. Each limb makes a very strong claim on our acceptance. But they cannot all be true.
Is 'prima facie' evidence something with self-evident contextual significance, or evidence that constitutes some sort of transcendental first principle? I am having some trouble with this concept.
The Latin phrase means 'on the face of it,' or 'at first glance.' Prima facie evidence, then, is evidence that makes a strong claim on our credence but can perhaps be rebutted or overturned. The term is used in the law to refer to evidence which, if uncontested, would establish a fact or raise a presumption of a fact. If you have the victim's blood on your hands, and you are acting nervous, and are seen running from the crime scene with passport in pocket, and have been recently overheard threatening the life of the victim, then that adds up to a strong prima facie case for your having committed the crime. But these bits of evidence, even taken together, are not conclusive.
Philosophers use the term in roughly the same way. For example, a prima facie duty is a duty which, in the absence of conflicting duties, is our actual obligation. If I promise to meet you tomorrow at noon at the corner of Fifth and Vermouth to discuss epistemology, then, so promising, I incur the duty to meet you then and there. But if my wife becomes ill in the meantime then my duty reverts to her care. The prima facie duty to meet you is defeated or overridden by the duty to care for my wife.
Or a philosopher might speak of the prima facie evidence of memory. My seeming to remember having mailed my tax return to the Infernal Revenue Service is good prima facie evidence of my having mailed it, but it is defeasible evidence.
Prima facie evidence should not be confused with self-evidence. Prima facie evidence is defeasible while (objective) self-evidence is not.
I incline towards Panayot Butchvarov's notion of knowledge as involving the absolute impossibility of mistake. In The Concept of Knowledge (Northwestern UP, 1970), Butchvarov writes that "an epistemic judgment of the form 'I know that p' can be regarded as having the same content as one of the form 'It is absolutely impossible that I am mistaken in believing that p'." (p. 51)
One way to motivate this view is by seeing it as the solution to a certain lottery puzzle.
Suppose Socrates Jones has just secured a teaching job at Whatsamatta U. for the 2011-2012 academic year. Suppose you ask Jones, "Do you know what you will be doing next year?" He replies, "Yes I know; I'll be teaching philosophy." But Jones doesn't like teaching; he prefers the life of the independent scholar. So he plays the lottery, hoping to win big. If you ask Jones whether he knows he isn't going to win, he of course answers in the negative. He doesn't know that he will win, but he doesn't know that he won't either. Jones also knows that if he wins the lottery, then he won't work next year at a job he does not like.
On the one hand, Jones claims to know what he will be doing next year, but on the other he also claims to know that if he wins the lottery, then he won't be doing what he claims to know he will be doing. But there is a contradiction here, which can be set forth as follows.
Let 'K' abbreviate 'knows,' 'a' the name of a person, and 'p' and 'q' propositions. We then have:
1. Kap: Jones knows that he will be teaching philosophy next year.
2. Ka(q --> ~p): Jones knows that if he wins the lottery, then he will not be teaching philosophy next year.
3. ~Ka~q: Jones does not know that he does not win the lottery.
Therefore
4. Ka~q: Jones knows that he does not win the lottery. (From 1 and 2)
But
5. (3) and (4) are contradictories.
Therefore
6. Either (1) or (2) or (3) is false.
Now surely (3) is true, so this leaves (1) and (2). One of these must be rejected to relieve the logical tension. Isn't it obvious that (1) is the stinker, or that it is more of a stinker than (2)? The inference from (1) and (2) to (4) is an instance of the principle that knowledge is closed under known implication: if you know a proposition and you know that it entails some other proposition, then you know that other proposition. This seems right, doesn't it? So why not make the obvious move of rejecting (1)?
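The closure principle and the resulting contradiction can be sketched in a few lines of code. This is only a toy model, with propositions represented as plain strings and a hypothetical `close_under_implication` helper of my own devising, not a full epistemic logic:

```python
# Toy model of the lottery puzzle: an agent's knowledge is a set of
# propositions closed under known implication (epistemic closure).

def close_under_implication(known, implications):
    """Repeatedly apply closure: if A is known and A -> B is a known
    implication, add B to what is known."""
    known = set(known)
    changed = True
    while changed:
        changed = False
        for a, b in implications:
            if a in known and b not in known:
                known.add(b)
                changed = True
    return known

# p  = 'Jones will be teaching philosophy next year'
# ~q = 'Jones does not win the lottery'
known = {"p", "q -> ~p"}       # premises (1) and (2)
implications = [("p", "~q")]   # contraposition of the known conditional

closed = close_under_implication(known, implications)
print("~q" in closed)  # True: closure delivers Ka~q
```

Closure delivers `Ka~q`, which contradicts premise (3), `~Ka~q`; hence at least one of (1), (2), and (3) must go.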
Surely Jones does not KNOW that he will be teaching philosophy next year. How could he KNOW such a thing? The poor guy doesn't even KNOW that he will be alive tomorrow let alone have his wits sufficiently about him to conduct philosophy classes. He doesn't KNOW these things since, if we are serious, knowledge implies the impossibility of mistake, and our man can easily be mistaken about what will happen in the future.
Of course, I realize that there is much more to be said on this topic.
How ubiquitous, yet how strange, is sameness! A structure of reality so pervasive and fundamental that a world that did not exhibit it would be inconceivable.
How do I know that the tree I now see in my backyard is numerically the same as the one I saw there yesterday? Alvin Plantinga (Warrant and Proper Function, Oxford 1993, p. 124) says in a Reidian vein that one knows this "by induction." I take him to mean that the tree I now see resembles very closely the one I saw yesterday in the same place and that I therefore inductively infer that they are numerically the same. Thus the resemblance in respect of a very large number of properties provides overwhelming evidence of their identity.
But this answer seems open to objection. First of all, there is something instantaneous and immediate about my judgment of identity in a case like this: I don't compare the tree-perceived-yesterday, or my memory of the tree-perceived-yesterday, with the tree-perceived-today, property for property, to see how closely they resemble each other in order to hazard the inference that they are identical. There is no 'hazarding' at all. Phenomenologically, there is no comparison and no inference. I just see that they are the same. But this 'seeing' is of course not with the eyes. For sameness is not an empirically detectable property or relation. I am just immediately aware -- not mediately via inference -- that they are the same. Greenness is empirically detectable, but sameness is not.
What is the nature of this awareness given that we do not come to it by inductive inference? And what exactly is the object of the awareness, identity itself?
A problem with Plantinga's answer is that it allows the possibility that the two objects are not strictly and numerically the same, but are merely exact duplicates or indiscernible twins. But I want to discuss this in terms of the problem of how we perceive or know or become aware of change. Change is linked to identity since for a thing to change is for one and the same thing to change.
Let's consider alterational (as opposed to existential) change. A thing alters iff it has incompatible properties at different times. Do we perceive alteration with the outer senses? A banana on my counter on Monday is yellow with a little green. On Wednesday the green is gone and the banana is wholly yellow. On Friday, a little brown is included in the color mix. We want to say that the banana, one and the same banana, has objectively changed in respect of color.
But what justifies our saying this? Do we literally see, see with the eyes, that the banana has changed in color? That literal seeing would seem to require that I literally see that it is the same thing that has altered property-wise over the time period. But how do I know that it is numerically the same banana present on Monday, Wednesday, and Friday? How do I know that someone hasn't arranged things so that there are three different bananas, indiscernible except for color, that I perceive on the three different days? On that extraordinary arrangement I could not be said to be perceiving alterational change. To perceive alterational change one must perceive identity over time. For there is change only if one and the same thing has different properties at different times. But I do not perceive the identity over time of the banana.
I perceive a banana on Monday and a banana on Wednesday; but I do not visually perceive that these are numerically the same banana. For it is consistent with what I perceive that there be two very similar bananas, call them the Monday banana and the Wednesday banana. I cannot tell from sense perception alone whether I am confronting numerically the same banana on two different occasions or two numerically different bananas on the two occasions. If you disagree with this, tell me what sameness looks like. Tell me how to empirically detect the property or relation of numerical sameness. Tell me what I have to look for.
Suppose I get wired up on methamphetamines and stare at the banana the whole week long. That still would not amount to the perception of alterational change. For it is consistent with what I sense-perceive that there be a series of momentary bananas coming in and out of existence so fast that I cannot tell that this is happening. (Think of what goes on when you go to the movies.) To perceive change, I must perceive diachronic identity, identity over time. I do not perceive the latter; so I do not perceive change. I don't know sameness by sense perception, and pace Plantinga I don't know it by induction. For no matter how close the resemblance between two objects, that is consistent with their being numerically distinct. And note that my judgment that the X I now perceive is the same as the X I perceived in the past has nothing tentative or shaky about it. I judge immediately and with assurance that it is the same tree, the same banana, the same car, the same woman. What then is the basis of this judgment? How do I know that this tree is the same as the one I saw in this spot yesterday? Or in the case of a moving object, how do I know that this girl who I now see on the street is the same as the one I saw a moment ago in the coffee house? Surely I don't know this by induction.
My disembodied existence is conceivable (thinkable without apparent logical contradiction by me and beings like me). But does it follow that my disembodied existence is possible? Sydney Shoemaker floats the suggestion that this inference is invalid, resting as he thinks on a confusion of epistemic with metaphysical possibility. (Identity, Cause, and Mind, p. 155, n. 13.) Shoemaker writes, "In the sense in which I can conceive of myself existing in disembodied form, this comes to the fact that it is compatible with what I know about my essential nature . . . that I should exist in disembodied form. From this it does not follow that my essential nature is in fact such as to permit me to exist in disembodied form."
We need to think about the relation between conceivability and epistemic possibility if we are to get clear about the inferential link, if any, between conceivability and metaphysical possibility. Pace Shoemaker, I will suggest that the inference from conceivability to metaphysical possibility need not rest on a confusion of epistemic with metaphysical possibility. But it all depends on how we define these terms.
John Greco (How to Reid Moore) finds Barry Stroud's interpretation of G. E. Moore's proof of an external world implausible:
According to him [Stroud], the question as to whether we know anything about the external world can be taken in an internal or an external sense. In the internal sense, the question can be answered from “within” one’s current knowledge -- hence one can answer it by pointing out some things that one knows, such as that here is a hand. In the external sense, however, the question is put in a “detached” and “philosophical” way.
If we have the feeling that Moore nevertheless fails to answer the philosophical question about our knowledge of external things, as we do, it is because we understand that question as requiring a certain withdrawal or detachment from the whole body of our knowledge of the world. We recognize that when I ask in that detached philosophical way whether I know that there are external things, I am not supposed to be allowed to appeal to other things I think I know about external things in order to help me settle the question.5
According to Stroud, Moore’s proof is a perfectly good one in response to the internal question, but fails miserably in response to the external or “philosophical” question. In fact, Stroud argues, Moore’s failure to respond to the philosophical question is so obvious that it cries out for an explanation -- hence Malcolm’s and Ambrose’s ordinary language interpretations. Stroud offers a different explanation for Moore’s failure to address the philosophical question: “He [i.e. Moore] resists, or more probably does not even feel, the pressure towards the philosophical project as it is understood by the philosophers he discusses.”6 Or again, “we are left with the conclusion that Moore really did not understand the philosopher’s assertions in any way other than the everyday ‘internal’ way he seems to have understood them.”7 The problem with this interpretation, of course, is that it makes Moore out to be an idiot. Is it really possible that Moore, the great Cambridge philosopher, did not understand that other philosophers were raising a philosophical question? (bolding added)
So how does one know that one is not a brain in a vat, or that one is not deceived by an evil demon? Moore and Reid are for the most part silent on this issue. But a natural extension of their view is that one knows it by perceiving it. In other words, I know that I am not a brain in a vat because I can see that I am not. [. . .] Just as I can perceive that some animal is not a dog, one might think, I can perceive that I am not a brain in a vat. (21)
A commenter on the Pieper post notes that Dallas Willard has an understanding of the belief-knowledge relation (or lack of relation) similar to that of Pieper. A little searching brought me to the following passage in Willard's Knowledge and Naturalism which substantiates the commenter's suggestion (I have bolded the parts relevant to my current concerns):
Dave Gudeman at my old blog commented forcefully and eloquently:
I've always had difficulty with arguments like this:
It is not easy to understand how God could add causal input to the space-time system.
I'm aware that such arguments have a distinguished history, but I don't get it. Just because you don't understand how it works, you doubt that it is possible? But you don't really understand how anything works. Not matter, not energy, not beauty, not humor. Science pretends that it understands things, but if you trace their theories to the end, all they do is propose underlying mechanisms that suffer from the same opaque nature as what they are trying to explain.
Since you don't understand how any cause at all operates, what does it prove that you can't understand how God operates?
In Nicole Hassoun's NDPR review of Roderick T. Long and Tibor R. Machan (eds.), Anarchism/Minarchism: Is a Government Part of a Free Country?, Ashgate, 2008, we read:
Anarchism should be of interest [to social liberals] because it plays the role in political philosophy that skepticism plays in epistemology -- raising the question of what, if anything, could justify a state in the way that brains in vats, etc. raise the question of what, if anything, could justify beliefs. The debate between anarchists and libertarians should be of interest because if the anarchists are right then libertarianism commits one to anarchism. So, social liberals who take libertarianism seriously may have to take anarchism seriously too.
I was struck by the notion that anarchism is as it were political philosophy's skepticism. A fruitful analogy. The anarchist is skeptical about the moral justifiability of the state in the way in which the epistemological skeptic is skeptical about whether what we take to be knowledge really is knowledge. There is a strong temptation, one I feel, to revert to a double insistence: first, that we have knowledge of the external world whether or not we can answer every conceivable objection to the possibility of such knowledge; and second, that some states are morally justified whether or not we can explain to everyone's satisfaction what it is that confers moral justifiability on them.
Perhaps the right attitude is as follows. Provisionally, we should just accept that some beliefs about the external world amount to knowledge and that some states are morally justified. Ultimately, however, this is not a philosophically satisfactory attitude. One wants rational insight in both cases. And so we should keep working on the problems. But lacking as we do proof of the impossibility of knowledge and of the moral unjustifiability of the state, we have no good reason to abandon our commonsense views about the existence of knowledge and the moral justifiability of some states. You cannot be a philosopher without being a procedural skeptic; but if your skepticism hardens into dogmatic denial of the commonsensical, then the burden of proof is on you.
As I use them, 'imaginable' and 'conceivable' mean the following. Bear in mind that there is an element of stipulation and regimentation in what I am about to say. Bear in mind also that the following thoughts are tentative and exploratory, not to mention fragmentary. The topics are difficult and in any case this is only a weblog, a sort of online notebook.
To imagine X is to form a mental image of X. To imagine a two-headed cat is to form a mental image of (more cautiously: as of) a two-headed cat. To say that X is imaginable is to say that someone has the ability to imagine it. To envisage is to visually imagine. Not all imagining is visual.
To conceive X is to think X. To say that X is conceivable is to say that someone can think it, that is, has the ability to make it an object of thought. Trading Latin for good old Anglo-Saxon, conceivability is thinkability. Therefore, a round square is conceivable in that I now have it as an object of my thought, hence someone can have it as an object of his thought. If you balk at this, then you are probably confusing conceivability with conceivability without contradiction. Admittedly, round squares are contradictory objects. Still, one can think them. They are therefore thinkable or conceivable. If you weren't able to think of the round square you would not be able to judge that there cannot be a round square.
1. Van Inwagen describes his position as "modal scepticism" (245) but a better name for it would be 'mitigated modal scepticism' since he does admit that we have modal knowledge: "I think we do know a lot of modal propositions . . . ." (245)
2. In one sense it is trivially true that we have modal knowledge. Here is an example of my own. Suppose I see that the cat has escaped into the backyard. The cat's escape is an actual fact. But whatever is actual is possible: ab esse ad posse valet illatio. Therefore, knowing that the cat has escaped, I know that it is possible that the cat has escaped. So I have modal knowledge.
But this knowledge of the possible from the actual is uncontroversial, and of course there is no special problem about its epistemology. What is controversial is whether we have knowledge of unrealized possibilities, knowledge of possibilities that have not been, are not now, and perhaps never will be actual. And if we do have such knowledge, it will presumably be difficult to explain how we have such knowledge.
3. Van Inwagen has no doubt that we do have knowledge of some unrealized possibilities. "I know that it is possible that . . . the table that was in a certain position at noon have then been two feet to the left of where it in fact was." (246) Van Inwagen also cites the possibility of John F. Kennedy having died from natural causes. In both of these cases, we have knowledge about an unrealized possibility that will forever remain unrealized.
But there is also modal knowledge of the impossible and the necessary: "it is impossible for there to be liquid wine bottles, and . . . it is necessary that there be a valley between any two mountains that touch at their bases." (246)
4. So we have (nontrivial) modal knowledge. But how is this possible? I know that JFK might have died from natural causes. But how do I know this? The proposition JFK died of natural causes is known to be false. So how can I know that it is possibly true? It isn't true and never will be true. There seems to be nothing for my knowledge to grab onto.
We need to rub our noses in the problem a bit longer. I know that my table is now two inches from the wall. How do I know this? By sense perception aided perhaps with a tape measure. But I also know that my table might now have been three inches from the wall. (Observe that the last two occurrences of 'now' pick out the very same time.) Van Inwagen insists that in cases like this we have genuine knowledge: "We certainly do know" things like this. (251)
I find it hard to disagree with van Inwagen on this score. I am blogging now. But surely I might have been swimming now. My swimming now is a part of a total way things might have been. Surely there is nothing intrinsically impossible about my swimming now. Surely it cannot be logically necessary that I be blogging now. Or am I exaggerating with these three uses of 'surely'? How can I be so sure that what I am saying is possible is really possible and not just a reflection of my ignorance?
5. Unfortunately, van Inwagen supplies no answer to how we have modal knowledge. He finds it mysterious. (250) But of course, from the fact that we cannot explain how we have it, it does not follow that we do not have it. The following are logically consistent: I know that JFK might have died of natural causes and I do not know how I know this.
Still, if I cannot explain how I have modal knowledge, that casts some doubt on my possession of it. If I cannot explain how I have it, how can I be sure that I do have it?
6. Let me float a suggestion. Among my abilities is the ability to move furniture. Suppose I move my table, which is two inches from the wall, to a position three inches from the wall. Actually executing the action, I prove that this type of action is possible. Can I use this fact to understand the unrealized possibility of my table's being three inches from the wall at a time at which it is in fact two inches from the wall? What is wrong with this analysis: The unrealized possibility of the table's being in a different position from the one it is in is identical to the unexercised ability of an agent with sufficient power to move the table in question.
The idea is that some mere possibilities are unexercised abilities of agents. A merely possible state of affairs — the table's being three inches from the wall — is just my or someone's unexercised ability to move the table to that position.
I know that it is possible for the table to be three inches from the wall by knowing that I have the ability to move the table to that position. And I know I have that ability from my actually having moved pieces of furniture of similar size and shape.
Knowing a merely possible state of affairs, then, is knowing something actual, namely, an agent's actual, but unexercised, ability to bring about the state of affairs in question.
7. Unfortunately, this suggestion seems to presuppose the very thing it is supposed to be accounting for, namely, unrealized possibilities. My suggestion was that some unrealized possibilities are identifiable with unexercised abilities of agents. An unexercised ability is an ability the agent could exercise but does not. But an ability that could be exercised but is not is an unrealized possibility. So it seems that one moves in a circle if one tries to reduce unrealized possibilities to the abilities of agents. Perhaps we are forced to say that the concept of an unrealized possibility is a primitive or irreducible concept, one that cannot be illuminated in terms of anything more basic.
But then we seem left with an ontological and an epistemological puzzle. The ontological puzzle is that unrealized possibilities, though not nothing, are yet nothing actual. How can something be without being actual? The epistemological puzzle is to explain how one comes to know that there are unrealized possibilities in general and how one knows that a particular unrealized possibility is indeed an unrealized possibility.
As I explained the other day, I am inclined to accept Butchvarov's view of knowledge as the impossibility of error. If I know that p, then it is not enough that I have a justified true belief that p; I must have a true belief whose justification rules out the possibility of error. Anything short of this is just not knowledge. But then what are we to say about the knowledge claims that people routinely make, claims that don't come near satisfying this exacting requirement? We won't say that they are mere beliefs, for many of them will be rationally held beliefs. For example, an air traveler who claims to know that he will be in New York tomorrow has a rational belief that will in all probability turn out to be true; but by Butchvarov's lights, a true belief for which one has reasons does not amount to knowledge unless the reasons entail the belief's truth. Since the air traveler's reasons for believing he will be in New York tomorrow do not entail his being there tomorrow, his belief, though rational, is not a case of knowledge. How then do we explain his use of the word 'know'? Should we say that there is a weak sense of 'know' as rational true belief short of certainty?
One idea, also from Butchvarov (The Concept of Knowledge, pp. 54-61), is that the various loose claims of knowledge can be understood as cases of exaggeration. But I'll try to develop this idea in my own way.
We begin with an example from Panayot Butchvarov's The Concept of Knowledge, Northwestern University Press, 1970, p. 47. [CK is the red volume on the topmost visible shelf. Immediately to its right is Butch's Being Qua Being. Is Butch showing without saying that epistemology is prior to metaphysics?] There is a bag containing 99 white marbles and one black marble. I put my hand in the bag and without looking select a marble. Of course, I believe sight unseen that the marble I have selected is white. Suppose it is. Then I have a justified true belief that a white marble has been selected. My belief is justified because of the fact that only one of the 100 marbles is black. My belief is true because I happened to pick a white marble. But surely I don't know that I have selected a white marble. The justification, though very good, is not good enough for knowledge. I have justified true belief but not knowledge.
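The arithmetic behind the marble example is trivial but worth making explicit: the justification is as strong as 99 in 100, yet still short of 1, which on Butchvarov's view is exactly why it falls short of knowledge. A minimal sketch (the variable names are mine):

```python
from fractions import Fraction

# Butchvarov's bag: 99 white marbles, 1 black marble.
white, black = 99, 1

# Probability that a blindly selected marble is white.
p_white = Fraction(white, white + black)

print(p_white)       # 99/100: very strong justification for the belief
print(p_white == 1)  # False: mistake remains possible, so no knowledge
```

However close the justification comes to 1, so long as it is not 1, mistake is possible; that gap is what the impossibility-of-mistake account seizes on.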
Knowledge, says Butchvarov, entails the impossibility of mistake. This seems right. The mere fact that people will use the word 'know' in a case like the one described cuts no ice. Ordinary usage proves nothing. People say the damndest things. They are exaggerating, as a subsequent post may show. 'Know' can be used in non-epistemic ways -- think of carnal knowledge for example -- but used epistemically it can be used correctly in only one way: to mean absolute impossibility of mistake. Or at least that is Butchvarov's view, a view I find attractive.
A reader inquires, "I'm curious, if someone asked you what you were more certain of, your hand or belief in the existence of God, how would you respond?"
The first thing a philosopher does when asked a question is examine the question. (Would that ordinary folk, including TV pundits, would do likewise before launching into gaseous answers to ill-formed questions.) Now what exactly am I being asked? The question seems ambiguous as between:
Q1. Are you more certain of the existence of your hand or of the existence of God?
Q2. Are you more certain of the existence of your hand or of your belief in the existence of God?