Herewith, a first batch of notes on Richard Susskind, How to Think About AI: A Guide for the Perplexed (Oxford 2025). I thank the multi-talented Brian Bosse for steering me toward this excellent book. Being a terminological stickler, I thought I'd begin this series of posts with some linguistic and conceptual questions. We need to define terms, make distinctions, and identify fallacies. I use double quotation marks to quote, and single to mention, sneer, and indicate semantic extensions. Material within brackets is my interpolation. I begin with a fallacy that I myself have fallen victim to.
The AI Fallacy: "the mistaken assumption that the only way to get machines to perform at the level of the best humans is somehow to replicate the way humans work." (54) "The error is failing to recognize that AI systems do not [or need not] mimic or replicate human reasoning." The preceding sentence is true, but only if the bracketed material is added.
Intellectual honesty demands that I tax myself with having committed the AI Fallacy. I wrote:
The verbal and non-verbal behavior of AI-driven robots is a mere simulation of the intelligent behavior of humans. Artificial intelligence is simulated intelligence.
This is true of first-generation systems only. These systems "required human 'knowledge engineers' to mine the jewels from the heads of 'domain experts' and convert their knowledge into decision trees" . . . whereas "second-generation AI systems" mine jewels "from vast oceans of data" and "directly detect patterns, trends, and relationships in these oceans of data." (17-18, italics added) These Gen-2 systems 'learn' from all this data "without needing to be explicitly programmed." (18) This is called 'machine learning' because the machine itself is 'learning.' Note the 'raised eyebrows,' which raise the question: Are these systems really learning?
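To make the Gen-1/Gen-2 contrast concrete, here is a toy sketch of my own devising (nothing in it comes from Susskind; the fruit-sorting task, the names, and the numbers are all invented for illustration). The first function encodes an expert's rule by hand, as a knowledge engineer would; the second derives its own decision boundary directly from labeled data, without the rule ever being programmed.

```python
# Toy illustration (mine, not Susskind's): Gen-1 vs. Gen-2 AI.
# Invented task: label a fruit 'lemon' or 'lime' from a hue reading (0-100).

# Gen-1: a knowledge engineer elicits the rule from a domain expert and
# codes it explicitly, the way a node in a decision tree encodes it.
def gen1_classify(hue: float) -> str:
    return "lemon" if hue > 50 else "lime"  # the hand-coded expert rule

# Gen-2: no rule is programmed. The system detects the pattern directly
# in the labeled data and derives its own decision boundary.
def gen2_learn(data: list[tuple[float, str]]) -> float:
    lemon_hues = [h for h, label in data if label == "lemon"]
    lime_hues = [h for h, label in data if label == "lime"]
    # 'Learn' a boundary: the midpoint between the two class averages.
    return (sum(lemon_hues) / len(lemon_hues) +
            sum(lime_hues) / len(lime_hues)) / 2

data = [(82, "lemon"), (76, "lemon"), (31, "lime"), (22, "lime")]
boundary = gen2_learn(data)  # 52.75 here: derived from data, not programmed
print(gen1_classify(60), "lemon" if 60 > boundary else "lime")  # lemon lemon
```

The point of the sketch is only this: the second procedure delivers the same outcome without anyone ever writing down the expert's rule, which is why the question of whether it 'learns' in our sense remains open.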
So what I quoted myself as saying was right when I was a student of engineering in the late '60s and early '70s, but it is outdated now. There were actually two things we didn't appreciate back then. One was the impact of the exponential, not linear, increase in the processing power of computers. If you are not familiar with the difference between linear and exponential functions, here is a brief intro. In 1997, IBM's Deep Blue bested Garry Kasparov, the quondam world chess champion. Grandmaster Kasparov was beaten by exponentially fast brute-force processing; no human chess player can evaluate 300 million chess positions per second.
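To see why the exponential increase matters so much, here is a tiny illustration of my own (the numbers are invented, not Susskind's): one quantity grows by a constant increment each step, the other doubles.

```python
# Toy illustration (my numbers, not Susskind's): linear growth adds a
# constant each step; exponential growth multiplies by a constant factor.
linear, exponential = 1, 1
for step in range(16):  # think of each step as one hardware generation
    print(f"step {step:2d}: linear = {linear:3d}, exponential = {exponential:,}")
    linear += 2          # constant increment
    exponential *= 2     # doubling each step, Moore's-law style
# By step 15 the linear quantity has crawled to 31 while the doubling
# quantity stands at 32,768, and the gap itself keeps widening.
```

Nothing hangs on the particular numbers; any constant increment is eventually left behind by any constant doubling.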
The second factor is even more important for understanding today's AI systems. Back in the day it was thought that practical AI could be delivered by assembling "huge decision trees that captured the apparent lines of reasoning of human experts . . . ." (17) But that was Gen-1 thinking, as I have already explained.
More needs to be said, but I want to move on to three other terms tossed around in contemporary AI jargon.
Are AI Systems Intelligent?
Here is what I wrote in May:
The verbal and non-verbal behavior of AI-driven robots is a mere simulation of the intelligent behavior of humans. Artificial intelligence is simulated intelligence. And just as artificial flowers (made of plastic say) are not flowers, artificially intelligent beings are not intelligent. 'Artificial' in 'artificial intelligence' is an alienans adjective.
Perhaps you have never heard of such an adjective.
A very clear example of an alienans adjective is 'decoy' in 'decoy duck.' A decoy duck is not a duck even if it walks like a duck, quacks like a duck, etc., as the often mindlessly quoted old saying goes. Why not? Because it is a piece of wood painted and tricked out to look like a duck to a duck, so as to lure real ducks into the range of the hunters' shotguns. The real ducks are the ducks that occur in nature. The hunters want to chow down on duck meat, not wood. A decoy duck is not a kind of duck any more than artificial leather is a kind of leather. Leather comes in different kinds: cowhide, horsehide, etc., but artificial leather such as Naugahyde is not a kind of leather. Same goes for faux marble and false teeth and falsies. Faux (false) marble is not marble. Fool's gold is not gold but pyrite, i.e., iron disulfide. And while false teeth might be functionally equivalent to real or natural teeth, they are not real or true teeth. That is why they are called false teeth.
An artificial heart may be the functional equivalent of a healthy biologically human heart, inasmuch as it pumps blood just as well as a biologically human heart, but it is not a biologically human heart. It is artificial because artifactual, man-made, thus not natural. I am presupposing that there is a deep difference between the natural and the artificial and that homo faber, man the maker, cannot obliterate that distinction by replacing everything natural with something artificial.
I now admit, thanks to Susskind, that the bit about simulation quoted above commits what he calls the AI Fallacy, i.e., "the mistaken assumption that the only way to get machines to perform at the level of the best humans is somehow to replicate the way that humans work." (54) I also admit that said fallacy is a fallacy. The question for me now is whether I should retract my assertion that AI systems, since they are artificially intelligent, are not really intelligent. Or is it logically consistent to affirm both of the following?
a) It is a mistake to think that we can get the outcomes we want from AI systems only if we can get them to process information in the same way that we humans process information.
and
b) AI systems are not really intelligent.
I think the two propositions are logically consistent, i.e., that they can both be true, and I think that in fact both are true. But in affirming (b) I am contradicting the "Godfather of AI," Geoffrey Hinton. Yikes! He maintains that AI systems are all of the following: intelligent, more intelligent than us, actually conscious, potentially self-conscious, subjects of experiences, and subjects of gen-u-ine volitional states. They have now, or will have, the ability to set goals and pursue purposes, their own purposes, whether or not they are also our purposes. If so, we might become the tools of our tools! They might have it in for us!
Note that if AI systems are more intelligent than us, then they are intelligent in the same sense in which we are intelligent, but to a greater degree. Now we are really, naturally, intelligent, or at least some of us are. Thus Hinton is committed to saying that artificial intelligence is identical to real intelligence, as we experience it in ourselves in the first-person way. He thinks that advanced AI systems understand, assess, evaluate, judge, just as we do -- but they do it better!
Now I deny that AI systems are intelligent, and I deny that they ever will be. So I stick to my assertion that 'artificial' in 'artificial intelligence' is an alienans adjective. But to argue my case will require deep inquiry into the nature of intelligence. That task is on this blogger's agenda. I suspect that Susskind will agree with my case. (Cf. pp. 59-60)
Cognitive Computing?
Our natural tendency is to anthropomorphize computing machines. This is at the root of the AI Fallacy, as Susskind points out. (58) But here I want to make a distinction between anthropocentrism and anthropomorphic projection. At the root of the AI Fallacy -- the mistake of "thinking that AI systems have to copy the way humans work to achieve high-level performance" (58) -- is anthropocentrism. This is what I take Susskind to mean by "anthropomorphize." We view computing machines from our point of view and think that they have to mimic, imitate, simulate what goes on in us for these machines to deliver the high-level outcomes we want.
We engage in anthropomorphic projection when we project into the machines states of mind that we know about in the first-person way, states of mind qualitatively identical to the states of mind that we encounter in ourselves, states of mind that I claim AI systems cannot possess. This might be what Hinton and the boys are doing. I think that Susskind might well agree with me about this. He says the following about the much bandied-about phrase 'cognitive computing':
It might have felt cutting-edge to use this term, but it was plainly wrong-headed: the systems under this heading had no more cognitive states than a grilled kipper. It was also misleading -- hype, essentially -- because 'cognitive computing' suggested capabilities that AI systems did not have. (59)
The first sentence in this quotation is bad English. What our man should have written is: "the systems under this heading no more had cognitive states than a grilled kipper." By the way, this grammatical howler illustrates how word order, and thus syntax, can affect semantics. What Susskind wrote is false, since it implies that the kipper had cognitive states. My corrected sentence is true.
Pedantry aside, the point is that computers don't know anything. They are never in cognitive states. So say I, and I think Susskind is inclined to agree. Of course, I will have to argue this out.
Do AI Systems Hallucinate?
More 'slop talk' from the AI boys, as Susskind clearly appreciates:
The same goes for 'hallucinations', a term which is widely used to refer to the errors and fabrications to which generative AI systems are prone. At best, this is another metaphor, and at worst the word suggests cognitive states that are quite absent. Hallucinations are mistaken perceptions of sensory experiences. This really isn't what's going on when ChatGPT churns out gobbledygook. (59, italics added)
I agree, except for the sentence I set in italics: "Hallucinations are mistaken perceptions of sensory experiences." There is nothing wrong with the grammar of that sentence, but the formulation is philosophically lame. I would put it like this: "An hallucination is an object-directed experience, the object of which does not exist." For example, the proverbial drunk who visually hallucinates a pink rat is living through an occurrent sensory mental state that is directed upon a nonexistent object. He cannot be mistaken about his inner perception of his sensory state; what he is mistaken about is the existence in the external world of the intentional object of his sensory state.
There is also the question whether all hallucinations are sensory. I don't think so. Later. It's time for lunch.
Quibbles aside, Susskind's book is excellent, inexpensive, and required reading if you are serious about these weighty questions.