“Artificial Intelligence”

What the words mean

It seems there’s a new article or book almost weekly on the coming impact of artificial intelligence. The phrase “artificial intelligence” has always puzzled me, however, and I think we should stop and consider it, all the more so if the phenomenon it denotes is actually ushering in a disruptive Brave New World.

Let’s start with what “intelligence” actually means. I compared several dictionary definitions of the word with the understanding I’ve always carried in my mind. In every instance, the meaning centers on learning, reasoning, and understanding.

Can machines do that? Well, we sometimes say computers “learn” because they’re programmed to capture new data inputs and incorporate them into their existing algorithms. This can be quite complex, but it’s still the programmed running of algorithms; it’s not abstract thinking. Computers can sift vast amounts of data quickly to find the patterns they’re programmed to detect, and they can do these calculations faster and on larger inputs than humans can. But it’s still fancy calculating, not real learning.
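To make that concrete, here is a minimal sketch, in Python (my own illustration; it isn’t drawn from any particular AI system), of what machine “learning” amounts to under the hood: a loop of arithmetic updates nudging a number toward a better fit. The data, the error measure, and the update rule are all supplied by the programmer.

```python
# A toy "learning" program: fit y = w * x to a handful of data points.
# The machine "learns" w only in the sense that a programmed update rule
# adjusts one number to shrink a programmed measure of error.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (x, y) pairs

w = 0.0              # initial guess for the slope
learning_rate = 0.01

for step in range(1000):
    # Gradient of the squared error with respect to w, summed over the data.
    gradient = sum(2 * (w * x - y) * x for x, y in data)
    w -= learning_rate * gradient   # the entire "learning" step

print(f"learned slope: {w:.3f}")    # ~1.990
```

The loop drives the slope toward roughly 2.0, yet at no point does the program grasp what a slope or a data point is; it only repeats the arithmetic it was given.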

We sometimes say computers “reason,” but “reason” implies a mind projecting outward to grasp a problem and apply logic to a sequence of thoughts to arrive at a conclusion. This is part of the “intentionality” or “aboutness” component of human consciousness. What a computer does resembles that, but it is a passive process, with the semantic content of the logical progressions supplied externally, that is, by humans providing the algorithms the machine then executes.
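A sketch may help here too. The toy inference engine below (again my own illustration, not anything from Lee or the AI literature) mechanically applies rules of the form “if premise, then conclusion” to whatever facts it is handed. The logic runs correctly, but the “aboutness” lives entirely in the humans who chose the symbols; to the program, “is_human” and “is_mortal” are just strings.

```python
# A toy forward-chaining inference engine: keep applying rules of the
# form "if premise, then conclusion" until no new facts appear.
# The symbols mean nothing to the program; humans supply all the meaning.

rules = [
    ("is_human", "is_mortal"),   # to us: "all humans are mortal"
    ("is_mortal", "will_die"),   # to us: "all mortals will die"
]
facts = {"is_human"}             # to us: a fact about Socrates, say

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)   # modus ponens, mechanically applied
            changed = True

print(sorted(facts))  # ['is_human', 'is_mortal', 'will_die']
```

Swap the strings for gibberish and the program behaves identically, which is the point: the inference is real, but the meaning is ours.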

The last and most important piece of the typical “intelligence” definition fails completely with computers: understanding. The most sophisticated computers ever made do not “understand” what they’re doing. The content I’m typing right now is stored and processed “logically” (in the coded word processing program), but there is no sense in which the computer I’m working on “understands” this content or the logic by which it is stored. Computers are not self-aware, except (sometimes ambiguously) in sci-fi movies like Ex Machina and Chappie and many others. One movie that suggests a disconnect between brain and mind in the sci-fi context is Advantageous. (Links are to my reviews.)

The limits of AI

The most recent publication I’ve seen on the coming impact of artificial intelligence is by Kai-Fu Lee: AI Superpowers: China, Silicon Valley, and the New World Order.

Lee isn’t trying to address the philosophical problem I have with the phrase “artificial intelligence.” Among other things, he’s trying to address fears over competing visions of the future with AI: on the one hand, a utopia in which most of man’s work is eliminated, freeing us, we hope, for nobler pursuits; on the other, a dystopia in which we lose control of machines and they acquire dominance over us, their makers. Lee says neither vision is accurate, and the reason is important. In an essay in the Wall Street Journal, “The Human Promise of the AI Revolution” (9/15/18), he explains why we should accept neither vision:

“They simply aren’t possible based on the technology we have today or any breakthroughs that might be around the corner. Both scenarios would require ‘artificial general intelligence’ – that is, AI systems that can handle the incredible diversity of tasks done by the human brain. Making this jump would require several fundamental scientific breakthroughs, each of which may take many decades, if not centuries.”

Why it matters

Hmm. Machines can’t do what the mind does. Lee does suggest they might, eventually, perhaps in “many decades, if not centuries.” That’s forever, though, in the world of computer evolution, so it’s really speculation. It’s speculation that computers will become as technically proficient as the brain, but it’s speculation about something else, too. It’s philosophical speculation: that the technical proficiency of the brain is equivalent to the capabilities of the human mind.

Is it? This is an instance of the age-old mind-body problem of philosophy: whether the human mind is merely a collection of emergent phenomena of the physical brain, or whether there is something more. This is why I am wary of AI speculations, starting with the phrase “artificial intelligence.” Not because of what AI portends for our future prosperity and posterity. Rather, for what it declares as true about the human mind: that it is the physical brain and nothing more.

One has to be up on computer terms of art to understand that the phrase Lee uses, “artificial general intelligence,” doesn’t just mean a more robust AI than we currently have, one “that can handle the incredible diversity of tasks done by the human brain.” It means intelligence accompanied by consciousness, sometimes called “strong AI.” As Lee acknowledges, it does not exist. But Lee and other AI enthusiasts assume that it one day will, so strong is their faith in an exclusively mechanistic reality.

Use of the phrase “artificial intelligence” to describe machines that are not intelligent imports a presumption of naturalism into any conversation about what human-created future technologies hold in store for us. And naturalism assumes away the existence of God.
