
Tuesday, May 19, 2009

Comments


I wonder: in a deterministic world, should Congress still deliberate?

I don't see a problem in saying that a decision-making process can be deterministic. That process looks something like this.

1) We perceive that we have several potential courses of action.

2) We simulate the outcome of each course.

3) We choose the course with the preferred outcome. (Preferences being determined by other past factors.)

4) We execute the chosen course.

and in the case of free will...

5) Our chosen course often results in the effects we foresaw, i.e., our simulations were mostly accurate, at least in the short term.

Now the only thing here that seems remotely problematic is (1). How can there be potential courses of action when physics has determined which course we will eventually decide upon?

Well, it seems to me that the "potential" status of each course can be epistemic in nature.

Suppose I'm going to choose to cross the road when the nearest light turns red. The light turns red, I look around me, and observe that a car is turning on red and heading my way. I perceive a potential choice. I can either start crossing the road and hope the car stops for me, or I can wait until it passes. If determinism is true then it is already determined which course I'll take. However, until I do the simulating and the preferring, I'm not going to *know* which course I will take. Naturally, I decide to wait for the car to pass. In hindsight, knowing me well, you would easily have predicted that I would be cautious and wait for the car to pass. And the reason you would know what I would do is that you are familiar with my decision-making processes, i.e., that I generally choose to be cautious.

Nothing in this process is illusory. Choice is not illusory. The only thing that is illusory (for some) is the idea that, had I gone back in time to the moment of my decision and kept every parameter identical, I could have chosen otherwise.

I think our real intuition about free will is that, if we went back in time, and our preferences were different, then we could have chosen differently.

Bill,

I am very happy you’ve chosen to revisit some of these issues. We do not have the answers, as you say, but it seems to me that we do make some progress in this place. Non semper ignorabimus.

First I must tackle the Fat Tire Problem. You will recall that earlier in the day Peter alerted me that there was a problem. Now when I hear that a friend’s refrigerator is utterly infested with Fat Tire, I take it as my duty to show up and throw myself body and soul into the eradication process. No thanks needed.

You say that when you deliberate and choose you have to believe that your choice has the power to bring about the actions you choose. I accept that most of us believe this, but I still have to ask: is it actually true that conscious choice has the power to direct my actions? I take this to be a factual not a philosophical issue. And I want to know in particular, if conscious choice does initiate action, how does this play out neurologically? So I go to neuroscience and the behavioral sciences and ask, do we have grounds to be comfortable in our belief in conscious agency?

What I now want to report to you, and I hope I do it accurately, is that the prefrontal areas associated with conscious decision-making do not appear to be in control of very much of what we do. They come late to the decision-action process, if they intrude at all, and play a subordinate role, often, it seems, in rationalizing actions already chosen and initiated by other parts of the brain. Research continues, and neuroscience is a young science, but as things stand now, the evidence is not there to believe that deliberation can control very much of what we do.

For people who believe in conscious agency, there is first of all the problem that I sometimes deliberate and choose, and then act in a completely different way than I chose to act. Philosophers call this the problem of akrasia or weakness of the will. The problem assumes we have at least fairly reliable conscious control of our actions, and then is puzzled when this control mysteriously breaks down (as it seems to do all too regularly!). The problem of akrasia disappears if we assume we have very little control, or only marginal and unreliable influence, over our actions. Our deliberations and choices are only predictions about how we will act. I am fairly good at predicting after 50 years, for example, but still I sometimes predict badly because I do not have access to the unconscious forces really driving my decision-making. Neuroscience with its fMRIs can now at least chart these deep processes in the brain and how they initiate action.

Instead of contenting ourselves with philosophical slogans like “we are condemned to freedom”, perhaps we should be earnestly investigating how free we actually are? What degree of reliable conscious control does it appear I have over my actions? Where does control seem conspicuously to fail? In the colloquium Bill alludes to I challenged him with the question whether he thought he could do a Gauguin on his 60th birthday and take up with a gaggle of beach nymphettes in Tahiti? I predicted he could not, though perhaps he thinks he is free to. Neuroscience has yet to start explaining character in any convincing way, but I suspect we shall one day have a very good neurological explanation of character, and one that supports the Aristotelian and Stoic view that formed character is very difficult to change or act against. So even if Bill tries to do a Gauguin, don’t worry, he won’t even make it to the airport.


Phil writes,

>>I still have to ask: is it actually true that conscious choice has the power to direct my actions? I take this to be a factual not a philosophical issue.<<

I made it clear above that the sense of being free does not entail that we are free. That would be a non sequitur. If proposition p seems to be the case it does not follow that p is the case. The point is rather that FW can no more be denied than consciousness can be denied.

I am not sure what contrast you intend with 'factual' and 'philosophical.' There is a fact of the matter as to whether or not there is FW, so in this sense the question is factual and not a matter of how we choose to use words. The main philosophical issues are factual in this sense and not merely verbal. I suppose what you mean is that the question whether there is freedom of the will is an *empirical* question, one that might be settled by further study of the brain. But I hope you will agree with the general principle that if a proposition p is a priori impossible, then no amount of empirical study can show it to be false. Now the nonexistence of FW, I claim, is a priori impossible given that we are agents and not mere spectators. So from my point of view brain science cannot answer the question.

We touched on this question at our symposium. We know more than ever about the brain, but this added information about a hunk of meat -- marvelous as it is -- does not help resolve the question. There is no reason to think that knowing still more about it will help.

Phil,

To continue, you pin your hopes on future neuroscience as if knowing more about the meat in the skull will empirically resolve this ancient philosophical question about FW. I see no reason to subscribe to your faith and hope. It smacks of scientism. What justifies your faith and hope? Why is it that the issues debated in the philosophy of mind are no nearer resolution now than they were in the days of Descartes? We know vastly more about the brain now than we did then. Your faith might be justified if you could show how our advanced empirical knowledge has resolved some questions or definitively refuted some old doctrines. Consider parallelism, occasionalism, substance dualism, property dualism, double aspect theory, panpsychism. Have any of these been put out of the running by advances in neuroscience? No. Some neuroscientists are substance dualists, e.g. John C. Eccles (The Self and Its Brain).

So here's a challenge for you. Give me an example of some issue in the philosophy of mind that has been definitively settled by an increase in neuroscience knowledge. When you have given us a nice detailed convincing example, then I will grant that pinning your hopes on future neuroscience is rational. Otherwise, it is faith and hand-waving.

Phil,

There is a typo above. Instead of "if a proposition p is a priori impossible, then no amount of empirical study can show it to be false," read: if a proposition p is a priori impossible, then no amount of empirical study can show it to be TRUE.

Bill - I've been puzzling all day about your claim that agency is not an illusion. Why could agency not be an illusion?

Continuing. . .
Phil writes, >>Instead of contenting ourselves with philosophical slogans like “we are condemned to freedom”, perhaps we should be earnestly investigating how free we actually are?<<

This shows that you don't appreciate the problem. You think you can dismiss the truth that we cannot choose not to be free by labelling it 'philosophical.' But all you are doing is evading the problem I set forth, which involves integrating the 3rd and 1st person points of view.

You are also confusing freedom of the will with ability. During our discussion you made the preposterous claim I could not go to a strip joint. Of course I could have. But had I done so, it would not have constituted a proof of FW because you could have plausibly claimed that my doing so was either neurobiologically determined or psychologically determined by my preferences and reasons.

Ability to do X does not entail freedom with respect to X. Contrapositively, unfreedom with respect to X does not entail inability to do X.

Note also that if behavior is predictable it does not follow that it is unfree. Someone who acts 'in character' is not eo ipso acting unfreely any more than someone who is acting 'out of character' is eo ipso acting freely. That is another reason why, had I gone to the strip joint, thereby acting out of character, it would not have constituted a proof of FW.

Bob,

>>Why could agency not be an illusion?<<

For the reasons I gave above.

Doc writes,

>>3) We choose the course with the preferred outcome. (Preferences being determined by other past factors.)<<

But when we choose the course with the preferred outcome, we have the sense that we could do otherwise. When I reach for a beer, or cross the street, or whatever, I have the sense 'I am making this happen' and together with this the sense 'I could be doing something else.' That's the phenomenology of the situation and you cannot ignore the phenomenology.

I appreciate the illogicality of exercising agency by denying agency. For the denial to be what it purports to be constitutes a material disproof of that purport. If agency is illusory, however, then it couldn't be exercised in this or any other way. One might nonetheless be under the illusion that one's denial had purport, that it was the exercise (rather than "just" the instantiation?) of a certain sort of capacity. I just don't see how such considerations can decide the question of whether there actually is a capacity of the sort in question.

Bill,

Thank you so much for your comments. If it isn't too much trouble, could I ask you to recast this in standard argument form:
"Now the nonexistence of FW, I claim, is a priori impossible given that we are agents and not mere spectators."
Premises and conclusion are not quite clear enough to me.

Bill,

"But when we choose the course with the preferred outcome, we have the sense that we could do otherwise."

In your post "Weak and Strong Readings of 'Could Have Done Otherwise'", you sided with a strong interpretation. However, I don't think that jibes with the phenomenology.

I think the strong reading might be a linguistic error. The terms "choice" and "predetermination" conflict, but only in a different context. For example, suppose I go to a board meeting to choose whether we'll do A or B, but I do not know that the board chairman has already determined he will ignore the outcome of the meeting and do A, no matter what. In that case, the "choice" made by the board is nullified by the "predetermined" decision of the board chairman. Clearly, the will of the board is defeated. They think they're making a decision when they're not. But that language game in which predetermination and choice conflict is not relevant to the free will of an individual.

Setting aside decisions where I don't prefer one choice over another, I certainly don't have the intuition that I could have chosen against my preferences. Rather, I always mean the weak reading, i.e., that had I preferred an alternative, I could have picked the alternative.

So, I can't agree on the phenomenology of the decision. Yes, I am making something happen (e.g., reaching for a beer) according to my preferences, but I don't also have the sense that my preferences are coming out of thin air, or that I would have decided against my overall preferences.

Let me very briefly present an argument why the position represented here by my friend Philoponus is problematical.

Let us entertain the question of whether it is possible that we are causally determined by brain processes. Suppose that Philoponus' hypothesis is true. Then the position Philoponus takes on this matter, namely, on whether everything we decide and do is determined by a brain process, applies to the very matter of which position Philoponus is going to take about this question. If Philoponus' position is true, then he is not free to let the evidence decide. He has no choice about whether to be rational about the matter. He is causally necessitated by his brain to think that there is no free will and that we are causally determined by our brain processes. And the very same thing holds regarding Bill's views. If Philoponus' position is correct and we are causally determined by our brain processes, then Bill has no choice but to hold the view that FW exists; that his actions are based upon deliberation, etc. Under the circumstances envisioned by Philoponus' position, the question for Bill is not whether there is a good philosophical argument on behalf of or against FW or whether there is evidence that would rationally decide the issue. He is compelled to hold the views he does. And since both Philoponus and Bill are compelled by causal forces to hold the views they hold, they do not really disagree. Their respective positions are no different in principle from the scenario where Philoponus gets hungry at 7:30 in the morning whereas Bill gets hungry at 8:00. The difference in the respective times at which each of them gets hungry has to do with certain physiological facts about each; they have no control over these physiological facts. Similarly, if Philoponus' thesis were correct, then Bill and Philoponus do not disagree about anything. They cannot help but hold the positions they do. Evidence would be irrelevant to the views each holds about this matter. Rationality cannot be a possible mediator.

But, Philoponus and Bill do disagree. Therefore, Philoponus' position cannot be right. At least not as long as we think of their respective states regarding this matter as a disagreement that can be adjudicated by rational means.

This argument, with some modifications, can be extended to any relevant situation. Example: A researcher conducts an experiment in order to examine the hypothesis that neurological processes are temporally prior to conscious decisions. The data is tabulated. The researcher looks at the data and (he thinks he) concludes that so-and-so is the case. But the researcher is mistaken: not about the data or the conclusion. He is mistaken to think that he derives a justified conclusion based upon the evidence he collected. He has the illusion that he has rationally concluded, based upon evidence, that such-and-such is the case. If Philoponus' position is correct, then the researcher does not infer a conclusion from the data based on rational considerations such as that the conclusion is warranted based upon the evidence. His brain decides to accept the conclusion, given the data, even prior to the researcher's examining whether the evidence warrants such a conclusion. He accepts the conclusion based upon the evidence not because it is rational to do so, but because his brain already decided to do so. The researcher merely parrots what his brain already determined the result is going to be. And so forth.

Philoponus fails to appreciate the magnitude of the thesis he seems so eager to embrace. Free will is important. But FW is only one among many casualties of his position. Truth, evidence, rationality, norms, and so on are all condemned to be mere illusions. And that infects the very theories upon which Philoponus relies in order to arrive at this bleak station. These theories are not true, or confirmed based upon evidence. Some of us are simply compelled by our brain to accept them, whereas others are not. A fluke of nature.

We are not only condemned to be free; we are also condemned to be rational, although of course we too frequently fail in both.

peter

I agree with Peter that if we deny the sort of freedom implied by our notions of agency, then "Truth, evidence, rationality, norms, and so on are all condemned to be mere illusions." That's why I framed my earlier comment in terms of "purport," since the "existential meaningfulness" of our actions and their products hangs in the balance. That's a very steep price to pay, but it's still not a demonstration that these things could not in fact be illusory. We might still be strutting and fretting upon this stage (though not as actors), and it might signify nothing. I don't endorse this (to me) bleak picture, but I can't prove that endorsement, or its opposite, has any real significance.

BobK

You say: "...but it's still not a demonstration that these things could not in fact be illusory."

I myself think that it is as close as one can get to a demonstration. Take someone who, like Philoponus, believes in the philosophical thesis that neuroscience will *prove* that FW is an illusion. But this belief itself is going to be an illusion because one of the consequences of this very same position is that nothing can prove anything: we are just causally compelled to have this or that view. This consequence is going to be very difficult to swallow even for someone, like Philoponus, who actually believes that neuroscience will in fact prove that FW is an illusion.

The tension is clear:

(a) if the neuroscience hypothesis is in fact true, then it cannot be proved (for the notion of provability becomes an illusion along with FW);
(b) If it can be proved, then it is false because provability is not an illusion and hence one of the consequences of the neuroscience hypothesis is false; hence, the neuroscience hypothesis itself must be false.

This is one of those cases in which, if something is in fact the case, then from its being the case it follows that we are deprived of any and all means of stating that it is in fact the case. For then all the resources we have for stating that it is in fact the case become devoid of the usual meaning we associate with them. As long as we think that a hypothesis such as the neuroscience hypothesis is meaningful and that brain science will provide evidence for or against its truth, the *philosophical thesis* that neuroscience will prove FW etc., to be an illusion must be false.

I do not see a way out of this tangle and my friend Philoponus will need many bottles of Fat Tire Ale to see his way out of this predicament.

peter

Hi Peter,

Thank you for the comment. My God, what a plague you pronounce on the house of the Determinist. It’s positively Biblical. “They shall not know truth nor reason, and everything shall be but an illusion for them.” If you are right, let us pray our universe is not deterministic.

You say “and since both Philoponus and Bill are compelled by causal forces to hold the views they hold, they do not really disagree.” But since we know they do disagree, our old friend Modus Tollens means that Philoponus and his nasty deterministic heresy are wrong. Your logic of course is unassailable, but I don’t understand your point about why we don’t disagree. Surely it doesn’t matter whence my belief in something arises — God or the Evil Genius or a misfiring patch of neurons could be the source — but if I assert p and you assert not-p, we disagree. Suppose God is the cause and culprit here. Then wouldn’t we say “God has caused us to DISAGREE about this matter”? I’m obviously missing your point.

Whilst we speak of the divinity, Peter, a confession. “I am not now, nor have I ever been, a member of the Determinist party, so help me God.” The worry I am exposing in this forum is simply that neuroscience is bolstering a suspicion that I have long entertained about conscious agency. I think very little of our decision making is under any semblance of conscious control, and when deliberation does intrude into the decision process, I have no confidence that it is not rationalizing a decision already made. When deliberation gets confused and plumps for something other parts of the brain do not want, we see the impotence of deliberation revealed under the name of weakness of the will. Deliberation justifies and predicts; it does not effect action.

The ability of deliberation to override the dictates of the emotional brain, if it exists at all, is a rare thing, conditional upon an unnaturally vigorous exercise of the frontal cortex in prevailing over other areas of the brain. We are not wired for “free” choice, any more than my computer is. But perhaps freedom of choice can emerge in our incredible bio-computers as a non-design feature. I speculate of course.

I think the way out of the tangle Peter describes is to drop the claim that neuroscience might *prove* that FW is an illusion. But it might be an illusion, whether this can be proved or not.

Peter,

Your remarks to Bob K this morning hit on the same point that puzzled me in your earlier post. Maybe I need to start hitting the Fat Tire, but I don’t understand why there can be no proofs if our beliefs are causally determined. Surely a proof is a proof regardless of why we believe what we believe or how deterministic our universe is. Perhaps your brain won’t let you believe or accept a proof that demolishes something you have a great emotional attachment to, but the proof remains a proof.

BobK

I think you are absolutely correct. Those who champion the philosophical thesis of neurological determinism ought to drop any pretense of a *proof* of this thesis. Still, it might nonetheless be true that FW etc., is an illusion, albeit an unprovable one. This is one of those skeptical positions which a thorough and consistent skeptic holds but cannot consistently announce that we can know it.

peter

Philoponus - I think I understand your puzzlement. As I see it, truth, falsity, evidence, proof, even signification... such things might still "exist" as purely formal things. But if FW is illusory, then any normative import associated with such things is also illusory. We might still construct proofs (given a "deflationary" reading of 'construct'), and they might well affect the content of beliefs, but claims to the effect that proofs _should_ influence beliefs will be empty.

Philoponus raised two objections to an argument I presented in a previous post. I shall consider one of his objections here. First my argument, then Philoponus' objection, then my response to his objection:

1) My argument:
(i) If the philosophical thesis of neurological determinism is true, then Philoponus and Bill cannot disagree (as well as all other such cases).
(ii) Philoponus and Bill do disagree.
Therefore,
(iii) The philosophical thesis of neurological determinism is false.

2) This argument is obviously valid. Philoponus challenges its soundness by raising questions about premise (i). Philoponus says:
“Surely it doesn’t matter whence my belief in something arises— God or the Evil Genius or a misfiring patch of neurons could be the source— but if I assert p and you assert not-p, we disagree.”

3) I agree with the latter part of this statement, where Philoponus says “but if I assert p and you assert not-p, we disagree,” and for this very reason I find the antecedent remark unacceptable. Why? Disagreement presupposes contradictory assertions. We agree on this. But what Philoponus seems to overlook is that assertion, unlike vocalization, presupposes that certain antecedent conditions regarding the assertion are satisfied.

4) Consider this example. Suppose we inject someone with a serum/drug that causes them to vocalize in English things such as “2 + 2 = 5”, “I am an alien”, “torturing for fun is permissible”, “free will is an illusion”. In the absence of knowing the circumstances that led to these vocalizations, I for one will be eager and ready to challenge the speaker on each of these statements. I will assume that the speaker’s vocalizations represent things he believes and that the speaker is not compelled to say what he said and, therefore, is free from any compulsion to hold these beliefs or decline to hold them. But as soon as the doctors inform me that the speaker’s vocalizations are induced by the serum/drug, I will no longer view these statements as representing beliefs the speaker holds; I will no longer view them as emanating from a rational agency. They are no different than (to repeat an example given by Putnam) an ant that by sheer coincidence draws in the sand shapes similar to the words “I am an alien”. By drawing these shapes in the sand, the ant did not intend to assert what is equivalently asserted by me when I assert “I am an alien”; the ant did not intend to assert anything. The ant cannot assert or intend anything.

5) The same goes for the serum-induced human. I no longer see the situation as a disagreement. The speaker’s vocalizations are not intended to express propositions with which I disagree. They are not intended to express any propositions. They just happen to be sound-equivalent to expressions in English that do express propositions and are intended to do so when asserted. But in this case, the speaker is not asserting anything. Hence, the preconditions for disagreement are not satisfied. The same goes for neurally induced vocalizations and dispositions to hold this or that.

6) Therefore, it does matter whence a *belief* arises. It makes all the difference in the world. If causally induced, *beliefs* are no beliefs, vocalizations are no assertions, *proofs* are no proofs. As for the case of proofs and rationality, I shall discuss these matters in a different post.

peter

Peter,

You say:

If Philoponus' position is true, then he is not free to let the evidence decide.

It seems to me that this statement is problematic on two grounds.

First, if the evidence is doing the deciding, then Phil's decision is still deterministic. Presumably, the evidence is deterministic, e.g., the Pythagorean theorem isn't changing with time, but is determined.

Second, your argument seems to be a non sequitur because you have not established that the material processes in Philoponus's brain aren't accounting for the evidence. A reductionist will say that, when Phil is being rational, Phil's brain is deterministically accounting for the evidence. If I have some simple components that follow simple rules of configuration, I can create a system from those components that implements very sophisticated rules.

doctor logic,

1) "if the evidence is doing the deciding, then Phil's decision is still deterministic."

If Phil's *decision* is deterministic, then my point is precisely that the evidence is *not* doing the deciding: the brain is. Evidence, as BobK pointed out and as I did in a previous post, is a normative concept, just like rationality, proof, etc. Therefore, on Philoponus' position the researcher does not ponder the evidence and make a decision that it is rational, based on this evidence, to accept the hypothesis. The researcher has no such leeway according to the philosophical thesis of neurological determination. The brain induces the acceptance or non-acceptance of the hypothesis and the *evidence* may or may not be relevant to this causal process, although the researcher may be under the illusion that he made a conscious decision to accept the hypothesis based upon good evidence.

2)"your argument seems to be a non sequitur because you have not established that the material processes in Philoponus's brain aren't accounting for the evidence."

What do you mean by the word "accounting" in the phrase "x's brain accounts for evidence e"? If you mean "supports", then it is you who commits a non sequitur, for "e supports H" is a normative phrase and purely causal processes do not yield normative consequences. If you mean by "accounting" causing, then that is fine, except that according to this interpretation my point stands: evidence is irrelevant, for the brain causes whatever position Philoponus holds.

peter

Let me just add one more brush stroke to what I have said previously. The researcher looks at the evidence for H. The researcher has to *decide* whether the evidence supports H. But according to Philoponus the notion of a "decision" is an illusion: there are no such things, or at the least they are not instrumental in undertaking an action. Therefore, Philoponus is *committed* (an illusion, I suppose) on pain of inconsistency to hold the view that the researcher does not *decide* whether to accept the hypothesis based upon the evidence. His brain already accepts the hypothesis before any *decision* is made by the researcher based upon the evidence. While the researcher may have the illusion that he has undertaken a rational decision based upon the evidence to accept the hypothesis, the brain already determined that for him. He just parrots what his brain already determined, just as in the case of deciding to lift one's arm, where the brain has already determined that the arm shall be raised: the conscious process of *deciding* is a dangling entity, a feeling, but is not involved in undertaking the action.
Consistency requires treating both cases alike; unless, of course, consistency itself is an illusion!

peter

Peter,

Thank you. Now I get it. “Caused beliefs are no beliefs.” What a bold & interesting conjecture!

I certainly agree that we wouldn’t say that we disagree with a madman or a sleepwalker (or a zombie!) who speaks but does not know what he is saying. Their verbalizations are no more assertions than those of a parrot that says “Obama is a fool.” I do not disagree with the parrot!!!

But suppose you and I fall to discussing some sensitive moral behaviour, where I am viscerally and incorrigibly persuaded of its wickedness and you are not. We have reasons and arguments for our positions and we discuss the topic civilly, but in fact there is no chance of persuasion. We are set in our views beyond the chance of revising them. Psychologists would say, probably correctly, that Phil’s rigid abhorrence of this behaviour was caused by experiences in his childhood which literally re-wired his brain on this issue. It is quite impossible for Phil ever to accept this behaviour. Likewise for Peter, but in an opposite direction. I would say in this situation that we both were caused to hold the views we do by unconscious processes that we poorly understand and have no real control over. Yet I would say we disagree. What do you think?

Let us assume the most extreme syntactic position we can imagine in describing a proof:

(1) A proof is a sequence of marks.

This proposal is not going to work:

lkgnfhdh
jgkrtithnd
mgnthfdh___
jgnf,_!?

is a sequence of marks; it is not a proof. So let's try the following:

(2) A proof is a sequence of marks each of which is part of the primitive vocabulary of a formal language. Gibberish is out. OK.

we done out very much do.
do some nothing everything in.

is a sequence of marks each of which is (let's say) part of the primitive vocabulary, etc. This is not a proof. Let's try the following:

(3) A proof is a sequence of marks each of which is a well-formed formula. "Well-formed formula"? What is that? Who decides what is a "well-formed formula"? But let that go. Suppose we stipulate that A, B, and C are well-formed formulas. A proof is a sequence of well-formed formulas. Thus,

A
B
C

should be a proof. But, it is not! Not always. Why? Because not every sequence of well-formed formulas is a proof. In order for such a sequence to be a proof, each formula must either be an axiom, a theorem, or "follow" from the previous formulas in the sequence. But, what do you mean by "follow"? Well, there are certain rules of inference and they determine whether a given formula follows from the previous steps.

But, now, why should one set of rules be better than another? What makes this set of rules define what a proof is, and another render a sequence of well-formed formulas not a proof? Because we know that this set of rules results in *good* arguments, i.e., valid arguments. We have metatheoretic proofs that demonstrate that such-and-such a set of rules is sound and complete. Good! So does that mean that in deciding which sequences of marks we should accept as proofs we need to rely upon whether the sequence in question complies with the rules of inference known to be good rules? Yes!
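To see concretely how much machinery even this last step presupposes, here is a toy proof checker (a sketch only; the stipulated axioms, the single rule of modus ponens, and the tuple encoding of conditionals are all invented for illustration). The point to notice is that checking a candidate proof means checking it against a standard, the rules, not merely registering a sequence of marks:

    # Toy proof checker, purely illustrative: a "proof" is a list of lines, each of
    # which must be an axiom or must follow from earlier lines by modus ponens.
    # A conditional "X -> Y" is encoded as the tuple (X, "->", Y).

    AXIOMS = {"P", ("P", "->", "Q")}   # stipulated axioms, invented for the example

    def follows_by_modus_ponens(formula, earlier):
        # formula follows if some earlier line has the form (X, "->", formula)
        # and X itself also appears earlier
        return any(f[1] == "->" and f[2] == formula and f[0] in earlier
                   for f in earlier if isinstance(f, tuple) and len(f) == 3)

    def is_proof(lines):
        for i, line in enumerate(lines):
            if line in AXIOMS or follows_by_modus_ponens(line, lines[:i]):
                continue
            return False   # an unjustified line: the sequence is not a proof
        return True

    print(is_proof(["P", ("P", "->", "Q"), "Q"]))   # True: Q is licensed by modus ponens
    print(is_proof(["P", "Q"]))                     # False: nothing licenses Q

Whether this checker embodies *good* rules is exactly the normative question the code cannot answer for itself.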

And this process is replete with normative claims and decisions based upon them. If the brain induces us to follow a sequence of marks, it does not do so because such a sequence complies with good rules of inference. The process is purely causal, and the gap between the causal and the normative is as wide and deep as is the gap between causally induced actions and actions based on free will; between accepting a sequence of marks as a proof and being induced to simply parrot it; between accepting neuroscience as based upon evidence and being induced by the brain to vocalize such acceptance, etc. If FW is an illusion, then so are proofs, disagreements, evidence for a theory, the truth of a theory and so on.
Therefore,

Philoponus cannot consistently maintain that FW is an illusion and that he holds this view because he has overwhelming evidence from neuroscience on the basis of which he *decides* that FW is an illusion. Remember, there are no real decisions; there are only neurological causes inducing an action. Hence, Philoponus cannot both accept his philosophical determinism AND simultaneously hold that his acceptance of this view is rational.

peter


Phil:

1) "Caused beliefs are no beliefs."

My thought that there is milk in the fridge caused me to believe that if I open the fridge, I will find milk there. My belief that if I open the fridge, I will find milk there is caused by my thought that there is milk in the fridge. It is a belief. So it is not the case that anything caused cannot be a belief. If there is any such thesis at all lurking in what I have said, it is that anything caused by purely physical causes is not a belief.
Which leads us to your example.

2) You describe a situation in which we disagree on a topic and the manner in which we each hold our views is so rigid and set in stone (the "stone" here being the brain) that it might be interpreted/explained by some experts as follows:

"that Phil’s rigid abhorrence of this behaviour was caused by experiences in his childhood which literally re-wired his brain on this issue. It is quite impossible for Phil ever to accept this behaviour. Likewise for Peter, but in an opposite direction."

Excellent example. And one which demonstrates in part why I do not feel comfortable with soul-dualism. Your example brings up the interaction problem. It is for this reason I embrace emergence. While I do not have the time right now to offer an extensive defense of this thesis, I can say this. I do believe there are cases of the sort you mention, although I think they are exemplified more often and clearly in the case of certain kinds of emotional disorders. In any case, you conclude from the example:

"I would say in this situation that we both were caused to hold the views we do by unconscious processes that we poorly understand and have no real control over. Yet I would say we disagree. What do you think?"

I view emergence as primarily designed to solve the interaction problem while allowing for a relative autonomy of the mental. So in the example you cite, it is quite possible that certain experiences (mental events) become hard-wired in the brain. It is also possible that these will then become unconscious causal antecedents to conscious beliefs, etc. However, emergence (I believe) allows me to argue that no causal antecedents can have an impact upon conscious processes unless they are filtered through the conscious barrier and converted into something mental in nature. You might have a position and so will I; we disagree. Suppose we are told that both of our respective positions are causally driven by hard-wired brain states. We will still find *reasons* for our positions and attempt to persuade the other based upon these reasons. The following exchange is meaningless:

Phil: you should hold my view because it is hard-wired in my brain;
Peter: NO! You should hold my view because it is hard-wired in my brain.

Whether or not there are causal antecedents to some of our beliefs, they become *beliefs* only when and because these causal antecedents are converted into conscious states.

This is admittedly a very brief and incomplete account. But it hints toward what I hope will be a more complete answer to your question.

peter



Peter,

If, when Philoponus says choosing is illusory, he means that a belief in the strong sense of "Could have done otherwise" is illusory, then I agree. If he says that all choosing is illusory, then I don't buy it.

I think you're begging the question against deterministic choice in the above. If determinism is true, we still weigh our actions from an epistemic point of view. We call that weighing "deciding". You can't roll "strong 'could have done otherwise'" into the definition of deciding without begging the question.

Here's how I think it works. At the end of the decision-making process, the decider knows what he will do (or try to do). At the start of the decision-making process, the decider is not aware of which choice will be made. In this picture, the perception of a choice is the perception of an epistemic question. "Which action of mine will I find to be best?"

To become aware of which action will be preferred, the decider has to simulate the outcome of each action. After the simulation, the decider is aware of what action will result in the most preferable outcome. At this point, the decider knows which action is preferred, and, therefore, which action he will perform.

Note that the decider does not know at the start of the process, but does know later.

This doesn't mean that the decider isn't choosing. If my model is right, then "choosing" is *defined* to mean the mental activities I described above. We are not free to say that a deterministic version of the above is not choosing without begging the question.
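To make the picture concrete, here is a toy sketch of the simulate-and-choose loop described above (the candidate actions, simulated outcomes, and preference scores are invented for illustration; nothing here is meant as real psychology):

    # A deterministic "simulate and choose" loop, a toy illustration only.

    def simulate(action, world):
        # Predict the outcome of a candidate action.
        return world["outcomes"][action]

    def preference(outcome):
        # Score an outcome; the scores stand in for preferences fixed by past factors.
        scores = {"near miss with the car": -100, "short wait, safe crossing": 8}
        return scores[outcome]

    def decide(world):
        # Enumerate the courses of action, simulate each, pick the preferred one.
        return max(world["actions"], key=lambda a: preference(simulate(a, world)))

    crossing = {
        "actions": ["cross now", "wait for the car to pass"],
        "outcomes": {"cross now": "near miss with the car",
                     "wait for the car to pass": "short wait, safe crossing"},
    }
    print(decide(crossing))   # always prints "wait for the car to pass": determined, yet a decision

At the start of the run nothing yet "knows" the answer; after the simulation step it is settled, which is just the epistemic reading of "potential courses of action" given earlier in the thread.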

You say:

"Therefore, Philoponus is *committed* (an illusion, I suppose) on pain of inconsistency to hold the view that the researcher does not *decide* whether to accept the hypothesis based upon the evidence. His brain already accepts the hypothesis before any *decision* is made by the researcher based upon the evidence."

I don't think that's right (correct me if I'm wrong, Philoponus). I think Philoponus is suggesting that he was bound to DECIDE on the eventual conclusion.

Suppose you're trapped on a desert island where there are two wells. One well seems normal, and the other is marked with "Danger Poison" signs. Which well will you drink from?

Your decision is determined as of now when I write this. You may weigh your options, but you're going to decide to drink from the well that isn't marked "Danger Poison". This is the case because you want to live (I assume) and drinking from the first well is preferable. (I cannot imagine that you would prefer to drink from the first well, but deliberately drink from the second anyway.) So you still make a decision (a weighing), but you're predictably going to weigh in favor of the safe well.

Peter et al.,

Good discussion. When we discuss beliefs we should be careful to distinguish between belief-states and belief-contents. Now perhaps a belief-state is a state of the brain, even though I find this problematic given that belief-states are intentional states and it is hard to understand how a merely physical state can be of or about anything. Be that as it may, surely it is absurd to think of belief-contents, i.e., propositions, as in the brain. So I have a problem with the following phrase of Phil's: "Phil’s rigid abhorrence of this behaviour was caused by experiences in his childhood which literally re-wired his brain on this issue." I stumble over "on this issue." Indeed, something like re-wiring does occur in the brain on the basis of inputs to the brain. But there is no propositional content in the brain. One can perhaps locate the moral abhorrence sector of the brain just as one can map and distinguish visual cortex from motor cortex. But there is no abhorrence OF CHILD MOLESTATION sector of the brain. You cannot locate specific contents in the brain. "Look we just found the theorem of Pythagoras in Phil's brain! Look sharp! It's just to the left of where these sodium ions are diffusing across synaptic interface 197007563."

In a slogan: There ain't no MEANING in that marvelously complex hunk of intracranial MEAT. You can't get meaning from meat.

So you go Platonic and make the meaning abstract: there are Fregean propositions in Frege's Third Reich. And there also you find proofs as sequences of Fregean propositions.

Now Peter's and Bob's worry kicks in. Suppose an investigator is trying to determine whether certain propositions p, q, r . . . support hypothesis h either by entailing h or raising the probability of h. To do this he must be sensitive to reasons (which are propositions). If the mental states he runs through in assessing the evidence for h are all of them brain states, and these brain states are neurobiologically determined, then it seems rationality is out the window. How can there be a distinction between rational and irrational theory construction or valid and invalid reasoning? If the investigator reasons poorly, then we might say to him: you ought not affirm the consequent or deny the antecedent. But ought implies can, and that seems to bring FW into the picture.

There is a tangle of issues here, but one of them is: how can the states of a hunk of meat be influenced by abstracta? A mind can be sensitive to meanings, but a brain cannot be.

Another distinction we should make is between neurobiological determinism and psychological determinism. I take it that Peter holds the latter: once the reasons for acting are presented to one's emergent mind, the action is necessitated: there is no unconditional 'could have done otherwise.' But then my question for Peter is: what about the normativity of rationality?

Doc writes, >>Suppose you're trapped on a desert island where there are two wells. One well seems normal, and the other is marked with "Danger Poison" signs. Which well will you drink from?

Your decision is determined as of now when I write this. You may weigh your options, but you're going to decide to drink from the well that isn't marked "Danger Poison". This is the case because you want to live (I assume) and drinking from the first well is preferable.<<

Good example. I take it your point is that, given one's desires, beliefs, preferences, motives, etc., the action (drinking from the first well) is necessitated. There is just no way that, with all that in place, one could do otherwise. There is no 'liberty of indifference.' Schopenhauer came to this conclusion and it makes sense. I doubt that it can be decisively refuted. But I don't find it compelling. For there is a gap between the excellent reasons for drinking from the first well and the actual drinking. There is the phenomenological sense that I make it happen and that I could go against the excellent reasons. The reasons incline but do not necessitate.

I was talking to the chairman of a department that later hired me. Everything was on the line. He had given me a cup of coffee. While I was sitting there talking to him I had the thought that I could throw the coffee in his face. Of course I didn't do it. I had excellent reasons not to do it and no reason to do it. Yet I was aware of my freedom in the strong sense.

I believe this is the phenomenology of the situation. Sartre agrees with me in Being and Nothingness: "the vertigo of possibility." Of course you might try to argue that the phenomenology doesn't prove anything metaphysically. And then we return to the question whether FW could be an illusion.

Bill,

"While I was sitting there talking to him I had the thought that I could throw the coffee in his face. Of course I didn't do it. I had excellent reasons not to do it and no reason to do it. Yet I was aware of my freedom in the strong sense." (Sorry, italics aren't working anymore.)

Is this really the strong sense you spoke of in an earlier post?

Weren't you aware that, had your preference sided with the aesthetics of making a symbolic act of free will and throwing coffee in his face, then you could have done so?

It really seems that if, on every ground, you prefer to not throw coffee in his face, then to throw the coffee would be to violate your free will, not affirm it.

As for my big picture view on FW... First, I think the phenomenology doesn't conflict with determinism. Second, and more importantly, I think the concept of free will being a third category after determinism and randomness is logically incoherent. I think you can generally define an outcome as "determined" by saying that it is fixed by factors in time and in the past, or by constants that are out of time. The only factors that remain are those in the future, and that doesn't help. Having exhausted all factors (all those in time + all those outside of time), the outcome can only be random in the most fundamental sense. I'm fine with fundamental randomness, but I don't think that randomness = freedom for most people.


Peter,

Can I comment on the first presentation of your argument on Wednesday at 10:10 pm? In a nutshell it is this:

1) ND implies that P and B can't disagree
2) But they do disagree
ergo
3) ND is false

Subsequent discussion about physical cause and belief suggests that a more accurate rendition would be

1*) ND implies that agreement doesn't exist
2*) But agreement does exist
ergo
3*) ND is false

The problem I think is that both occurrences of 'agreement' are given first-person sense. (1*) then looks plausible because ND is a thesis about the third-person world, and indeed we don't find first-person agreement ('agreement1') in the third-person world. All we find is the third-person correlate ('agreement3'): similar utterances, nodding, smiling, shaking hands, etc.

To make the argument work we need

1**) ND implies that agreement1 doesn't exist (in the first person world)

Justifying this would require an accepted theory of the relations between the two worlds, which of course we sorely lack. In our present state of knowledge it has similar status to 'ND implies that redness1 doesn't exist'.

Doc,

A free act is not an uncaused act but one that is caused by the agent as ultimate source of the action.

Bill,

"once the reasons for acting are presented to one's emergent mind, the action is necessitates: there is no unconditional 'could have done otherwise.' But then my question for Peter is: what about the normativity of rationality?"

Bill, I am not sure that I am committed to the above account you seem to attribute to me. Why do I have to hold that once the reasons are presented, they necessitate the action? And why can't I maintain that I could have done otherwise (unconditionally) in the face of the best reasons available? While I do think that there is a class of cases for which psychological determinism holds (mental illness, for instance, and other cases of belief and emotional fixation), I do not see why I must hold that psychological determinism reigns in all or even most cases.

Secondly, there is a sense in which the force of certain normative principles (moral, logical, epistemic, etc.) is so compelling that we are in some sense bound to follow their prescription. Can I resist the conclusion that Q when I know that P entails Q and I refuse to give up P? This queer form of determinism that flows from normative principles is tricky and, to be honest, I do not quite understand it. There is a sense in which I could *pretend* to resist believing Q in the face of what I know. But can I really not believe it? Can I say plainly: I believe P; I know P logically entails Q; but I refuse to believe Q? I don't think that this describes a real possibility. So these sorts of cases puzzle me.

peter

Bill,

"A free act is not an uncaused act but one that is caused by the agent as ultimate source of the action."

Perhaps you could contrast this with a quantum decay. When a neutron decays, there are certain constraints on the final state. There's conservation of momentum, energy, charge, etc.

However, some aspects of the decay (like the directions of the radiating decay products) are not determined by the initial state or by timeless factors. Let's suppose that these undetermined aspects of the final state are fundamentally (as opposed to epistemically) random. In that case, the neutron's initial state can be said to have caused the final state only in the sense that it *enabled* the final state. That is, there would be no final state of decay products without the original neutron. However, the neutron did not *determine* the final state. Hence, the end result is not *determined* by anything whatsoever, even if it is *enabled* by something quite obvious.

Similarly, it would seem that an uncaused act is one in which the choice is not *determined* by the agent, but is only *enabled* by the agent. And this leads us back into the jaws of the logical complementarity of determination and randomness. Every aspect of an agent choice that is merely enabled by the agent (i.e., not predetermined by the agent) is fundamentally random because there's nothing at all for it to depend upon. (We've exhausted the set of all things by considering determining factors both in time and outside of time.)
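The enabled-versus-determined contrast can be put in a toy two-body sketch (deliberately not real neutron beta decay, which is three-body, and purely illustrative): the conservation constraint is fixed by the initial state, while the emission direction is drawn at random and so is fixed by nothing at all.

    import math, random

    # Toy two-body decay in the parent's rest frame. The initial state *enables* the
    # final state and fixes a constraint (total momentum zero), but it does not
    # *determine* the emission direction, which is sampled at random.

    def decay(p=1.0):
        theta = math.acos(random.uniform(-1.0, 1.0))   # random polar angle (isotropic)
        phi = random.uniform(0.0, 2.0 * math.pi)       # random azimuth
        p1 = (p * math.sin(theta) * math.cos(phi),
              p * math.sin(theta) * math.sin(phi),
              p * math.cos(theta))
        p2 = tuple(-x for x in p1)                     # the constrained, "determined" part
        return p1, p2

    p1, p2 = decay()
    print(p1)                                          # different on every run
    print(tuple(a + b for a, b in zip(p1, p2)))        # always (0.0, 0.0, 0.0): the constraint holds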

Peter,

Let me look a bit more closely at your argument. The following is critical I think:

"And [under the ND hypothesis] since both Philoponus and Bill are compelled by causal forces to hold the views they hold, they do not really disagree. "

1) Bill sometimes says that he fails to find an argument compelling. Perhaps some arguments he does find compelling, e.g., mathematical ones. At least he entertains the idea that some argument might be compelling. Then there would seem to be 'benign' instances of first-person compelling that you would want to treat as leading to 'genuine' belief.

2) But I'm not clear how ND 'invalidates' all belief. Certainly I'd agree that beliefs arrived at under the influence of certain drugs might not be 'genuine'. We could say that the drug 'short-circuits' the causal processes that correlate with the formation of 'genuine' beliefs so that we get beliefs but without due deliberation. In other words, genuine belief correlates with the right kind of causal processes, namely those that causally involve the correlates of the evidence in the right way.

Bill,

Your addendum presents the problem starkly. I'd like to try to dissolve the aporia by suggesting that 'I could have done otherwise' differs in meaning in the first-person and third-person views. In the first-person view the 'I' refers to my phenomenal self and the proposition is obviously true. In the third-person view 'I' refers to my physical material and the proposition is false. I'm tempted to think that all arguments in this debate will founder on the rock of this distinction.

Bill Brightly

a) My original argument to which you replied with a post May 22, 2009 at 3:01 am is this: (following your convention ‘ND’ stands for neurological determinism)

1) If ND is true, then Bill and Phil cannot disagree on whether FW exists;
2) Bill and Phil disagree on whether FW exists;
Therefore,
3) ND is false.
Phil challenged premise (1) and I have replied to his challenge. I shall return to my response shortly.

b) In your post you say: “Subsequent discussion about physical cause and belief suggests that a more accurate rendition would be

1*) ND implies that agreement doesn't exist
2*) But agreement does exist
ergo
3*) ND is false”

c) I fail to see that your (*) version improves upon my original argument. My first premise states that if the mental is completely causally determined, then there cannot be disagreement between Bill and Phil on FW (or anything else for that matter). What is meant by ‘disagreement’ in the consequent of (1)? The wording and subsequent discussion of this premise clearly suggest that ‘disagreement’ refers to a state of affairs such that:

(i) Bill and Phil intentionally assert certain sentences which mean-in-English that-P and that-not-P, respectively;
(ii) By so doing, Bill and Phil stand in an epistemic relation of believing the respective propositions;
(iii) Bill and Phil each offered explanations, reasons, evidence, and other similar considerations that attempt to justify believing the respective propositions;
(iv) Since the propositions are contradictory, and since Bill and Phil asserted their respective sentences with the intention of expressing these propositions and offered reasons to justify maintaining their respective beliefs, they disagree on whether P (e.g., that FW exists) is true.

d) Now, I do not maintain that clauses (i)-(iv) above are formally adequate to define or characterize the relevant notion of 'disagreement' involved in the consequent of my premise (1) (i.e., that they are individually necessary and jointly sufficient). But I do maintain that each clause is *conceptually necessary* in order to have the relevant notion of 'disagreement'; i.e., if one of these clauses is absent, empty, or inapplicable, no disagreement of the relevant sort is present. Notice that each of the clauses I have listed involves either the notion of intentionality, or the notion of meaning, or the notion of an epistemic relation of believing, or normative notions such as reasons, justification, explanation, evidence, etc. Thus, the clauses listed above assert in effect that some or all of these notions are essential components of having a disagreement of the relevant sort. It is for this reason that the consequent of (1) includes the modal notion *cannot* and does not merely state (as does your (1*)) the mere matter-of-fact absence of disagreement/agreement.

e) In light of the above, we are able to show that (1) is true. For suppose that (1) is false. Then the following holds:

(1.1)* ND is true;
(1.2)* Bill and Phil can disagree on whether FW exists.

Note: the asterisks indicate that we are engaged in a reductio argument, the starred premises being introduced for the purpose of that reductio.

f) If (1.2)* is true, then so must be all the relevant clauses (i)-(iv). What follows from the assumption that (1.1)* is true? Formally, I maintain that the truth of (1.1)* contradicts one or more of the relevant clauses (i)-(iv). If I can show that, then the reductio works, and therefore (1.1)* and (1.2)* cannot both be true. Therefore, (1) is true.

g) Suppose ND is true. Then there are only physical causes: i.e., there are only brains, and every phenomenon is either reducible to or fully explained in terms of physical causes produced by the brain. The operations of the brain and the physical causes produced by it are not different in principle from all other physical causes in the universe; otherwise, we have emergence. Intentionality, meaning, rationality, reason, evidential relationships, and hence beliefs, etc., are all either reducible to physical causes or, if they cannot be so reduced, eliminated. But I do not think that any of these can be properly reduced to purely physical causes without significant residue. Hence, they must be eliminated according to ND. But without these notions, clauses (i)-(iv) are empty. And since these clauses are conceptually necessary in order for there to be any instances of the relevant notion of disagreement, it follows that (1.2)* is false. Therefore, the truth of ND is incompatible with (1.2)*. Hence, my premise (1) is true. The rest of my original argument then goes through (since it is valid and the only controversial premise was my premise (1)).
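
In the same shorthand, with 'D' as above and the diamond read as 'it is possible that', the reductio of (e)-(g) can be laid out roughly as follows; this is only a sketch of the structure, not a substitute for the argument itself:

\[
\begin{aligned}
&(1.1)^*\quad \mathrm{ND} &&\text{assumption for reductio}\\
&(1.2)^*\quad \Diamond D &&\text{assumption for reductio}\\
&\lnot\Diamond D &&\text{from } (1.1)^*\text{, since ND empties clauses (i)-(iv), as in (g)}\\
&\bot &&\text{from } (1.2)^* \text{ and the previous line}\\
&\therefore\quad \mathrm{ND} \rightarrow \lnot\Diamond D &&\text{i.e., premise (1)}
\end{aligned}
\]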

h) Now, I do not really understand why you think your version of the argument is clearer or in any way superior to this one. First, you removed the modal element from the consequent of premise (1). Why? As noted above, the modal element is absolutely essential to the premise. Second, you changed the wording from 'disagreement' to 'agreement'. What is the purpose of this change? I stated premise (1) as I did specifically because Phil and Bill disagree, and Phil readily acknowledges this fact. So my point was to demonstrate that, since ND is incompatible with the relevant sort of disagreement, Phil cannot hold both ND and that he disagrees with Bill.

i) Finally, I am not sure I understand the argument you advance regarding the first-person and the third-person distinction. Phil, Bill, and any outside observer will all agree that they disagree on the question of FW. So from all perspectives, this conclusion is forthcoming. I do not see how there are three different notions of ‘agreement’ or ‘disagreement’ here. There is only one so far as I can see. So I do not see that my argument violates ambiguity strictures.

peter

Peter,

I may have misunderstood you. I had the impression from our discussion the other night that you take a compatibilist line: a person acts freely if he acts without interference (no inner compulsions/obsessions, no outer constraint/duress, etc.) and acts for reasons. This is compatible with the truth of determinism.

But from what you have just said, your position seems indistinguishable from mine. Suppose that, after due deliberation and while in your 'right mind', you come to some decision D on the basis of the best reasons available. You now seem to be saying that you could have decided otherwise, ALL ELSE REMAINING THE SAME. If so, you are a libertarian, not a compatibilist.

Your second point. Is one free to believe a contradiction? Probably not. Unlimited doxastic voluntarism is surely false. A second example: I open my door and see you standing there (the light is excellent, I haven't just dropped acid, etc.). Am I free not to believe that you are standing there? I should think not.

David,

OK, but somehow the first- and third-person points of view have to be integrated. For I am both an object in the world and also a subject who must act. As an object I am determined; as a subject, free. But how can I be both? You solved the aporia I presented, but at the cost of generating a second aporia.

Doc,

You seem just to be rejecting the very notion of agent-causation. I admit that it is a difficult notion, and perhaps even incoherent. As an agent-cause I am like a little Aristotelian unmoved mover: I make something happen, and the buck stops with me as agent cause.

Peter,

Apologies if I have misrepresented your argument in my first (Friday) comment. In my attempt to boil it down I may have lost something important. But what you say in (g) above seems to me to bear out my précis. For you say that under ND intentionality, etc., are eliminated, clauses (i) through (iv) are emptied, and the conceptual foundation for disagreement is lost. I think it's your view that this makes (1.2*) 'B and P can disagree' false, whereas I think it makes it meaningless. Hence my rendering this as my (1*) 'ND implies that (dis)agreement doesn't exist'. The latter I understand as 'ND implies that "(dis)agreement" doesn't refer'.

My feeling is that your reworking of the argument in (g) amounts to a question-begging rejection of non-reductive physicalism. Would that be fair?

I think my second (Saturday) comment better addresses your narrower argument that beliefs arrived at through a causal process aren't genuine. How would you respond to that?

Bill,

My thought is that the problem lies in a branch of the philosophy of mind. To that extent I haven't created a second aporia. It's been there all along.
