Will Duquette e-mails and I respond in blue.
Having followed your link to McGinn's review of Kurzweil's book, "How to Create a Mind," it seems to me that there's something McGinn is missing that weakens his critique. Mind you, I agree that Kurzweil is mistaken; but there's a piece of Kurzweil's view of things that McGinn doesn't see (or discounts) that is crucial to understanding him.
I don't pretend to be an expert on Kurzweil; but I've been a software engineer for over two decades, which McGinn has not, and there are some habits of thought common to the computer science community. For example, computer software and hardware are often designed as networks of cooperating subsystems, each of which has its own responsibility, and so we fall naturally into a homunculistic manner of speaking when working out designs. And this is practically useful: it aids communication among designers, even if it is philosophically perilous.
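For instance, here is a toy sketch (invented names, purely illustrative, nothing from Kurzweil or McGinn) of how naturally the homuncular idiom creeps into even the simplest design: we say the cache "remembers" and the scheduler "decides," though each part merely executes its code.

```python
class Cache:
    """The cache 'remembers' recently stored results (figuratively speaking)."""
    def __init__(self):
        self._store = {}

    def remember(self, key, value):
        self._store[key] = value

    def recall(self, key):
        return self._store.get(key)


class Scheduler:
    """The scheduler 'decides' which task to run next (again, figuratively)."""
    def __init__(self, tasks):
        self._tasks = list(tasks)

    def decide_next(self):
        return self._tasks.pop(0) if self._tasks else None


# The "system" is just two cooperating subsystems passing data back and forth;
# the mentalistic verbs are a convenience of design talk, nothing more.
cache = Cache()
cache.remember("answer", 42)
scheduler = Scheduler(["parse", "plan", "emit"])
print(cache.recall("answer"), scheduler.decide_next())
```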
Anyway, here's the point that I would make back to McGinn if I were Kurzweil: patterns outside the brain lead to patterns inside the brain. A digital camera sees a scene in the world through a lens, and uses hardware and software to turn it into a pattern of bits. Other programs can then operate on that pattern of bits, doing (for example) pattern recognition; others can turn the bits back into something visible (e.g., a web browser).
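To make that concrete, here is a minimal sketch (the data and names are my own invention, not anything of Kurzweil's): a "scene" captured as a grid of bits, and a routine that "recognizes" a stored template purely by comparing bit patterns, which is all that pattern recognition amounts to in the computer science sense.

```python
# A toy "scene" as a pattern of bits, as a camera's hardware and software
# might encode it after the light has done its causal work.
scene = [
    [0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
]

# A stored template -- another pattern of bits.
template = [
    [1, 0, 0, 1],
    [1, 0, 0, 1],
]

def matches_at(scene, template, row, col):
    """Return True if the template's bit pattern occurs at (row, col)."""
    return all(
        scene[row + r][col + c] == template[r][c]
        for r in range(len(template))
        for c in range(len(template[0]))
    )

def find_pattern(scene, template):
    """Slide the template over the scene; report every position where it matches."""
    hits = []
    for row in range(len(scene) - len(template) + 1):
        for col in range(len(scene[0]) - len(template[0]) + 1):
            if matches_at(scene, template, row, col):
                hits.append((row, col))
    return hits

print(find_pattern(scene, template))   # -> [(1, 1)]
```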
REPLY: McGinn needn't disagree with any of this, though he would bid you be very careful about 'see' and 'recognition.' A digital camera does not literally see anything any more than my eye glasses literally see things. Light bouncing off external objects causes certain changes in the camera which are then encoded in a pattern of binary digits. (I take it that your 'bit' is short for 'binary digit.') And because the camera does not literally see anything, it cannot literally remember what it has (figuratively) 'seen.' The same goes for pattern recognition. Speaking literally, there is no recognition taking place. All that is going on is a mechanical simulation of recognition.
To the extent, then, that sensory images are encoded and stored as data in the brain, the notion that memories (even remembering to buy cat food) might be regarded as patterns and processed by the brain as patterns is quite reasonable.
REPLY: This is precisely what I deny. Memories are intentional experiences: they are of or about something; they are object-directed; they have content. One cannot just remember; in every case to remember is to remember something, e.g., that I must buy cat food. No physical state, and thus no brain state, is object-directed or content-laden. Therefore, memories are not identical to states of the brain such as patterns of neuron firings. Correlated perhaps, but not identical to.
Of course, as you've noted fairly often recently, a pattern of marks on a piece of paper has no meaning by itself, and a pattern of marks, however encoded in the brain, doesn't either. But Kurzweil, like most people these days, seems to have no notion of the distinction between the Sense and the Intellect; he thinks that only the Sense exists, and he, like Thomas Aquinas, puts memories and similar purely internal phenomena in the Sense. I don't think that's unreasonable. The problem is that he doesn't understand that the Intellect is different.
In short, Kurzweil is certainly too optimistic, but he might have a handle on the part of the problem that computers can actually do. He won't be able to program up a thinking mind; but perhaps he might do a decent lower animal of sorts.
REPLY: Again, I must disagree. You want to distinguish between sensing and thinking, and say that while there cannot be mechanical thinkers, there can be mechanical sensors, using 'thinking' and 'sensing' literally. I deny it. Talk of mechanical sensors is figurative only. I have a device under my kitchen sink that 'detects' water leaks. Two points. First, it does not literally sense anything. There is no mentality involved at all. It is a purely mechanical system. When water contacts one part of it, another part of it emits a beeping sound. That is just natural causation below the level of mind. I sense using it as an instrument, just as I see using my glasses as an instrument. I sense -- I come to acquire sensory knowledge -- that there is water where there ought not be using this contraption as an instrumental extension of my tactile and visual senses. Suppose I hired a little man to live under my sink to report leaks. That dude, if he did his job, would literally sense leaks. But the mechanical device does not literally sense anything. I interpret the beeping as indicating a leak.
The second point is that sensing is intentional: one senses that such-and-such. For example, one senses that water is present. But no mechanical system has states that exhibit original (as opposed to derivative) intentionality. So there can't be a purely mechanical sensor or thinker.
As for homunculus-talk, it is undoubtedly useful for engineering purposes, but one can be easily misled if one takes it literally. McGinn nails it:
Contemporary brain science is thus rife with unwarranted homunculus talk, presented as if it were sober established science. We have discovered that nerve fibers transmit electricity. We have not, in the same way, discovered that they transmit information. We have simply postulated this conclusion by falsely modeling neurons on persons. To put the point a little more formally: states of neurons do not have propositional content in the way states of mind have propositional content. The belief that London is rainy intrinsically and literally contains the propositional content that London is rainy, but no state of neurons contains that content in that way—as opposed to metaphorically or derivatively (this kind of point has been forcibly urged by John Searle for a long time).
And there is theoretical danger in such loose talk, because it fosters the illusion that we understand how the brain can give rise to the mind. One of the central attributes of mind is information (propositional content) and there is a difficult question about how informational states can come to exist in physical organisms. We are deluded if we think we can make progress on this question by attributing informational states to the brain. To be sure, if the brain were to process information, in the full-blooded sense, then it would be apt for producing states like belief; but it is simply not literally true that it processes information. We are accordingly left wondering how electrochemical activity can give rise to genuine informational states like knowledge, memory, and perception. As so often, surreptitious homunculus talk generates an illusion of theoretical understanding.
I don't mean to defend Kurzweil, particularly, and certainly not as a philosopher. It's just that I don't think that he and McGinn have completely "come to terms" in Mortimer Adler's sense.
Kurzweil is a computer scientist. In computer terms we speak of data as patterns of bits, of data exhibiting structure, and of pattern matching and pattern recognition in ways that make perfect sense in that field. Kurzweil assumes that the brain is a form of computer, and thinks of memories as data encoded within the brain in a manner analogous to the storage of data in a computer. IF this were true, if memories were no more than stored data, and if vision were no more than the physiological digitization of a scene in the world, then it would be reasonable to talk about applying pattern recognition techniques both to the input and to the saved data. Both consist of data that can form patterns that can be compared with other patterns and so recognized—in the computer science sense of the term. And so far as I can tell, that's precisely what Kurzweil is claiming.
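Here is a small sketch of "recognition" in that purely computer-science sense (the names and bit patterns are illustrative inventions of mine): stored bit patterns play the role of "memories," and an input pattern is "recognized" as whichever stored pattern it differs from in the fewest bits. Nothing here is intentional; it is just the comparison of data with data.

```python
# Hypothetical stored "memories": just labeled bit patterns.
stored_patterns = {
    "cat_food_label": 0b1101_0010,
    "dog_food_label": 0b0101_1010,
    "front_door_key": 0b0010_0111,
}

def hamming_distance(a, b):
    """Count the bit positions at which two patterns differ."""
    return bin(a ^ b).count("1")

def recognize(input_bits):
    """Return the name of the stored pattern with the fewest differing bits."""
    return min(stored_patterns,
               key=lambda name: hamming_distance(stored_patterns[name], input_bits))

noisy_input = 0b1101_0011   # one bit away from the "cat_food_label" pattern
print(recognize(noisy_input))   # -> cat_food_label
```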
McGinn says, "Pattern recognition pertains to perception specifically, not to all mental activity: the perceptual systems process stimuli and categorize what is presented to the senses, but that is only part of the activity of the mind. In what way does thinking involve processing a stimulus and categorizing it? When I am thinking about London while in Miami I am not recognizing any presented stimulus as London—since I am not perceiving London with my senses. There is no perceptual recognition going on at all in thinking about an absent object. So pattern recognition cannot be the essential nature of thought." And in this, he's using the term "pattern recognition" in a different way than Kurzweil, and the two of them are talking past each other.
Now, I do not claim (and do not believe) that Kurzweil's notion of the brain as a computer is correct. As you note, memory is intentional and data, however encoded, is not. But to the extent that memory has a physical component in the brain, it seems reasonable that something like data processing techniques could be used in managing it, including what Kurzweil calls "pattern recognition".
Posted by: Will Duquette | Sunday, June 09, 2013 at 03:58 PM
I am a software engineer and have been programming computers for twenty years. I think that McGinn is very precise in his diagnosis of what is wrong with Kurzweil’s "blueprint for a mind".
Indeed, as Will Duquette noted, it is pretty common in the software industry to use homuncular talk to simplify communication about computer systems, and I think that it is actually useful that we do so. Dijkstra, a famous computer scientist, once said that anyone who uses homuncular terms in reference to computer systems is unprofessional and should never be hired for a programming job. I think that's too extreme -- we would have to reject almost every single programmer in the world -- but certainly we should never consider for a job a programmer who does not understand that the homuncular terms are nothing more than metaphors. It is a fault in the higher education system that professors don't stress this enough to new generations of computer scientists.
Of course when one deals with Artificial Intelligence it is crucial to understand the figurative use of "Intelligence", lest we end up attributing a mind to almost every electronic device in the world. The problem, though, is already present even when we use the seemingly neutral word "computer", a word that was originally used in reference to human beings who did the calculations. It is clear that no electronic device can per se calculate anything -- without a human being to attribute symbolic meaning to its states, the device would be just a bunch of electrons moving from one place to another. It would be more accurate to say that a human being uses a computer to compute, calculate, or process information.
I think there is, though, another problem with Kurzweil's idea for "creating a mind" -- a problem that ultimately is the same one that afflicts the Evolutionist ideology. He, along with other General Artificial Intelligence proponents (as opposed to the majority of scientists, who actually understand the necessary limitations involved in trying to replicate human skills with automatons), believes that there is an algorithmic philosopher's stone -- a simple program which, if we give it enough time or a fast enough computer, can solve almost any imaginable problem without intervention from any external intelligence. No matter how many times we try it without success, it can always be alleged that we just didn't give the algorithm enough time or that we just didn't find the "right" algorithm.
The Evolutionist, following the same logic, affirms that because we have found a simple mechanistic process that can account for some of the complexity we see in the world, we can necessarily attribute every single example of complexity to the same mechanism, or to some yet-to-be-discovered similar mechanism -- provided that enough millions or billions of years are supplied for its action.
Posted by: Lucas Nicolato | Monday, June 10, 2013 at 06:47 AM