
Sunday, June 09, 2013

Comments


I don't mean to defend Kurzweil, particularly, and certainly not as a philosopher. It's just that I don't think that he and McGinn have completely "come to terms" in Mortimer Adler's sense.

Kurzweil is a computer scientist. In computing we speak of data as patterns of bits, of data exhibiting structure, and of pattern matching and pattern recognition in ways that make perfect sense in that field. Kurzweil assumes that the brain is a form of computer, and thinks of memories as data encoded in the brain in a manner analogous to the storage of data in a computer. If this were true, if memories were no more than stored data, and if vision were no more than the physiological digitization of a scene in the world, then it would be reasonable to talk about applying pattern recognition techniques both to the input and to the saved data. Both would consist of data that can form patterns, patterns that can be compared with other patterns and so "recognized"—in the computer science sense of the term. And so far as I can tell, that's precisely what Kurzweil is claiming.
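To make the computer-science sense of the term concrete, here is a toy sketch of my own (not anything Kurzweil gives): both the "stimulus" and the stored "memories" are nothing but bit patterns, and recognition is just a mechanical comparison between them. The labels and patterns are invented for illustration.

```python
def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical stored patterns -- "memories," on the picture sketched above.
stored = {
    "cat":  "110010",
    "dog":  "011100",
    "bird": "101011",
}

def recognize(stimulus: str) -> str:
    """Return the label of the stored pattern closest to the input pattern."""
    return min(stored, key=lambda label: hamming(stored[label], stimulus))

print(recognize("110110"))  # nearest stored pattern is "cat" (distance 1)
```

Note that nothing in this process involves meaning or intentionality: the program compares bit strings and returns the label of the nearest one, full stop. That is all "recognition" amounts to in this narrow sense.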

McGinn says: "Pattern recognition pertains to perception specifically, not to all mental activity: the perceptual systems process stimuli and categorize what is presented to the senses, but that is only part of the activity of the mind. In what way does thinking involve processing a stimulus and categorizing it? When I am thinking about London while in Miami I am not recognizing any presented stimulus as London—since I am not perceiving London with my senses. There is no perceptual recognition going on at all in thinking about an absent object. So pattern recognition cannot be the essential nature of thought." And in this, he is using the term "pattern recognition" differently than Kurzweil does, and the two of them are talking past each other.

Now, I do not claim (and do not believe) that Kurzweil's notion of the brain as a computer is correct. As you note, memory is intentional and data, however encoded, is not. But to the extent that memory has a physical component in the brain, it seems reasonable that something like data processing techniques could be used in managing it, including what Kurzweil calls "pattern recognition".

I am a software engineer and have been programming computers for twenty years. I think that McGinn is very precise in his diagnosis of what is wrong with Kurzweil’s "blueprint for a mind".

Indeed, as Will Duquette noted, it is pretty common in the software industry to use homuncular talk to simplify communication about computer systems, and I think that it is actually useful that we do so. Dijkstra, a famous computer scientist, once said that anyone who uses homuncular terms in reference to computer systems is unprofessional and should never be hired for a programming job. I think that's too extreme -- we would have to reject almost every programmer in the world -- but certainly we should never consider for a job a programmer who does not understand that the homuncular terms are nothing more than metaphors. It is a fault in the higher education system that professors don't stress this enough to new generations of computer scientists.

Of course, when one deals with Artificial Intelligence it is crucial to understand the figurative use of "intelligence", lest we end up attributing a mind to almost every electronic device in the world. The problem, though, is already present even in the seemingly neutral word "computer", a word that originally referred to human beings who did calculations. It is clear that no electronic device can per se calculate anything -- without a human being to attribute symbolic meaning to its states, the device would be just a bunch of electrons moving from one place to another. It would be more accurate to say that a human being uses a computer to compute, calculate, or process information.
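A minimal sketch of that last point, using an example of my own choosing: one and the same physical state of the machine (here, four bytes) "means" an integer, a real number, or a string of characters only relative to how an interpreter chooses to read it. Nothing in the bits themselves settles the question.

```python
import struct

raw = b"\x42\xf6\xe9\x79"  # one and the same bit pattern throughout

as_int   = struct.unpack(">I", raw)[0]  # read as a big-endian unsigned integer
as_float = struct.unpack(">f", raw)[0]  # read as a 32-bit IEEE 754 float
as_text  = raw.decode("latin-1")        # read as four Latin-1 characters

print(as_int)    # 1123477881
print(as_float)  # approximately 123.456
print(as_text)   # "Böéy"
```

The electrons in the memory cells are exactly the same in all three cases; the "calculation" of an integer rather than a float is entirely a matter of the symbolic scheme a human brings to the device.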

I think there is, though, another problem with Kurzweil's idea of "creating a mind" -- a problem that is ultimately the same one that afflicts the Evolutionist ideology. He, along with other proponents of Artificial General Intelligence (as opposed to the majority of scientists, who understand the necessary limitations involved in trying to replicate human skills with automatons), believes that there is an algorithmic philosopher's stone -- a simple program which, given enough time or a fast enough computer, can solve almost any imaginable problem without intervention from any external intelligence. No matter how many times we try it without success, it can always be alleged that we just didn't give the algorithm enough time, or that we just haven't found the "right" algorithm.

The Evolutionist, following the same logic, affirms that because we have found a simple mechanistic process that can account for some of the complexity we see in the world, we can necessarily attribute every instance of complexity to the same mechanism, or to some yet-to-be-discovered similar mechanism -- provided that enough millions or billions of years are supplied for its action.

