This just over the transom:
I'm an occasional reader of your wonderful blog, "Maverick Philosopher". I was wondering if I could probe you a bit regarding an argument you make in your post, "Could Freedom of the Will be an Illusion?"
You make the statement, "An illusion is an illusion to consciousness, so that if there is no consciousness there are no illusions either." I know this logic is not unique to you, as Descartes used similar reasoning to conclude that he exists. I firmly believe that free will is not an illusion, but I'm having trouble convincing myself of this particular argument.
As a computer programmer, I can write a program that tries to comprehend things in its environment (identifies animals from images, for example). It might come across a particularly tricky image, and get the wrong answer. I could then say that the program was tricked by an illusion. But, the program does not have consciousness.
Is there something wrong with this example?
There are really two issues here. The first is whether or not consciousness could be an illusion. The second is whether or not free will could be an illusion. The questions are not the same, though they are connected. I won't elaborate on the connection. The first question has an easy answer while the second does not. The easy answer to the first question is: it is utterly absurd to suppose that consciousness is an illusion, and for the reason I gave: the very distinction between illusion and reality presupposes consciousness. In a world without consciousness, nothing would appear, and so nothing would appear falsely. No consciousness, no illusions.
You respond by mentioning a program which, though not conscious, can be "tricked by an illusion." Your point, then, is that there could be an illusion without consciousness.
Well, suppose we have a computer with certain optical input devices, and this computer is running a pattern recognition program. When a bobcat, coyote, javelina, rattlesnake . . . moves across the visual field of the optical sensors, the voice simulator output device emits the sounds corresponding to 'bobcat,' 'coyote,' etc. But then a scrawny and mangy domestic dog moves into the range of the optical sensors and the output device emits 'coyote.' Or we can imagine the computer being 'fooled' by a papier-mâché 'dog.' You want to say that these are cases of illusions without consciousness.
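The scenario can be made concrete with a toy sketch (hypothetical Python, standing in for any pattern recognizer; the features, rules, and labels are invented for illustration, not a real vision system). The program simply maps crude input features to labels, so a mangy dog presenting the same features as a coyote gets labeled 'coyote' without anything being perceived:

```python
# A purely mechanical 'classifier': it maps crude feature tuples to labels.
# Nothing here senses, perceives, or identifies anything; it only transforms
# inputs into outputs. All features and labels are hypothetical.

def classify(features):
    """Return a label for a (size, build, gait) feature tuple."""
    # Hand-coded rules standing in for a trained pattern recognizer.
    if features == ("medium", "lean", "trotting"):
        return "coyote"
    if features == ("large", "stocky", "ambling"):
        return "javelina"
    return "unknown"

# A healthy coyote and a scrawny, mangy domestic dog can present the
# same crude features, so the program emits 'coyote' for both.
coyote = ("medium", "lean", "trotting")
mangy_dog = ("medium", "lean", "trotting")

print(classify(coyote))     # 'coyote'
print(classify(mangy_dog))  # 'coyote' -- as if 'fooled', though nothing is seen
```

The point the sketch makes plain: the 'misidentification' is just one more input-output mapping, no different in kind from the 'correct' ones.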
My response is that it is you who ascribe illusions, mistakes, misperceptions, identifications, misidentifications, cognitions, recognitions and the like to this purely mechanical system. Speaking strictly, it does not sense or perceive or identify or comprehend or think or make mistakes or succumb to illusion. It is nothing but a complicated mechanical system that we have constructed for our purposes to simulate conscious processes. But it itself is not conscious. And so it cannot be said to suffer from any illusions.
So I reject the counterexample. The most you can say is that it is AS IF the computer succumbs to a perceptual illusion.
Part of my AC system is a thermostat. Speaking loosely, one can say that it 'senses' changes in room temperature. Speaking strictly, however, it does no such thing. It does not sense for the simple reason that it is not sentient. It is just a mechanical contraption.
When I bend a piece of tin it stays bent. Speaking loosely, I can say that it 'remembers' the shape I give it by bending it. But this has nothing to do with memory strictly speaking. If I stretch a rubber band and then let it go, it returns immediately to its earlier shape. I could say that rubber bands are not 'teachable' since they immediately 'forget' what is taught them. But again this is just loose talk. And similarly for all storage and 'memory' devices no matter how sophisticated: floppy drives, hard drives, you name it.