AI and Consciousness

Advances in artificial intelligence are of interest because of what they may say about consciousness. If a machine can acquire human-like sentience, does that mean consciousness is entirely an emergent property of brain functioning? If so, it would arguably eliminate human consciousness as a phenomenon that proves God’s existence. All the more reason, then, that we should be careful with the words we use on this subject.

I recently wrote about how language is debased by the use of words denoting supernatural reality when only natural reality is assumed (in Etiolated Language). Another debasement of language occurs in discussions of artificial intelligence: words denoting human mental activity are used for machine activity. The words may be analogous, but they’re not actually descriptive of machine processes. Using them in that context equates human and artificial cognition, and so imports into our discussions of AI an unwarranted assumption of naturalism.

Machine “consciousness”

In a recent article in Quanta Magazine, roboticist Hod Lipson was interviewed on the subject of consciousness. He is trying to understand consciousness from the “bottom up,” rather than from the top down, the direction he says philosophy and neuroscience take. By “bottom up” he means starting from the simple and moving to the complex, rather than beginning the inquiry at full human consciousness. His approach necessarily presupposes that consciousness is a function of machinery, and therefore that man is machine.

Lipson’s project entails building a robot that “self-simulates” so as to arrive at an “understanding” of its physical form as it interacts with its surroundings. At the same time as the robot self-simulates its physical form, it is to self-simulate its cognitive processes, acquiring an “understanding” of how it “thinks” so that it builds on earlier “thoughts” and becomes ever more self-aware.
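
In concrete terms, “self-simulation” in robotics usually means learning a forward model: a function, fitted to the robot’s own interaction data, that predicts how its body will respond to a motor command. Here is a minimal sketch of that idea for a toy one-dimensional robot; the dynamics and names are hypothetical, not Lipson’s actual system:

```python
import numpy as np

# Toy "robot": its true dynamics (unknown to it) move it 0.8 units
# per unit of motor command, plus a little sensor noise.
rng = np.random.default_rng(0)

def true_step(pos, cmd):
    return pos + 0.8 * cmd + rng.normal(0, 0.01)

# The robot acts and records (position, command, next position).
pos, log = 0.0, []
for _ in range(200):
    cmd = rng.uniform(-1, 1)
    nxt = true_step(pos, cmd)
    log.append((pos, cmd, nxt))
    pos = nxt

# "Self-simulation": fit a model of its own body from that log.
# Assume next_pos = pos + k * cmd and solve for k by least squares.
deltas = np.array([n - p for p, c, n in log])
cmds = np.array([c for p, c, n in log])
k = float(cmds @ deltas / (cmds @ cmds))

# The robot can now "imagine" an outcome without acting.
predicted = 1.0 + k * 0.5  # predicted position after cmd=0.5 from pos=1.0
print(f"learned k = {k:.3f}, predicted next position = {predicted:.3f}")
```

Whether a fitted model like this, however elaborate, amounts to the robot “understanding” its own body is precisely the question at issue.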

I put the words “understanding,” “think,” and “thoughts” in quotation marks so as not to assume the conclusion. Finding out whether the robot knows or understands in a fully conscious way is the point of the exercise, not something to assume on the front end.

The interviewer asks whether “self-simulation” is the same thing as the self-awareness that we commonly think of as human consciousness. That is the question I am most interested in. The math and technology of AI are nothing alongside that essential question, imho. Lipson and many other robotics and AI researchers believe human-like cognition and learning are achievable by machines because they believe humans are essentially machines. They are often remarkably vague, however, about what human self-awareness means. In their work it seems to be a given that self-awareness means only our ability to assimilate new data and process it in a variable way over the biological neural networks of our brains, and that if this process were replicated mechanically, the resulting robot would be self-aware in the same way humans are. Sentience, in other words, means high-level machine functioning. Is that really all it is?

Let’s scroll back just a little in time, say to the beginning of the 21st century. Computers are ubiquitous, but compared to now, AI and quantum computing are in their infancy. We would be talking about whether our laptop has consciousness, and in that scenario the answer is self-evidently “no.” It’s full of human-made symbols that contain information, but it didn’t generate that information; it’s not aware that it contains that information; it’s not aware that it is processing that information; and it’s not conscious of its interactions with other machines or humans, still less of the presence or absence of self-awareness in those other machines or humans.

But that was then; this is now. Now we entertain these kinds of questions in light of the advent of artificial intelligence and quantum computing. The big question is whether this more advanced computing takes us beyond the machine as processor to the machine as self-aware processor, “self-aware” in the same way human beings are.

Human Consciousness

In addressing this question, we would do well first to describe what human consciousness means. There are several levels of self-awareness to ascend before arriving at human consciousness. So far I see nothing that puts AI or quantum computing at even the first level, much less the succeeding levels applicable to humans. Here is what I mean by levels.

The first level of awareness for a human is simply awareness of objects in our environment. This means perception only, through sensory organs. At this level we’re not yet talking about intentionality, whereby we direct our awareness purposefully; nor about the qualia by which we subjectively experience those objects; nor about interiority, the internal ‘space,’ not open to others, in which that awareness takes place. It means only perception.

Switching from humans to machines, we should stumble a bit with the words at first, and then realize we’re only using them analogously. Machines don’t “perceive” as humans do, but I suppose we can say they incorporate external data. Even “incorporate” is a bit fraught, however, because the root word corpus means body, so “incorporate” means to make a part of the body. “Awareness” is problematic because we associate it with the thing humans do when conscious, but that’s the very question in front of us: whether machines can be conscious in the same way humans are. So there should be a caveat on our use of “awareness.” With respect to machines, it means only some level of interaction with the world-out-there. A machine’s incorporation of external data, with variable processing through pre-programmed neural networks, would lack subjectivity, qualia, and interiority, which are undeniable features of human consciousness.
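
To see how thin the analogy is, consider roughly all that “incorporating external data” need amount to. The sensor readings and the threshold below are hypothetical, purely for illustration:

```python
# A machine "perceiving" its environment: numbers in, labels out.
readings = [12.1, 55.7, 3.3, 80.2]  # e.g., distances from a range sensor

def incorporate(reading, threshold=40.0):
    # The entire "awareness": store a number, compare it to a rule.
    return "obstacle" if reading < threshold else "clear"

percepts = [incorporate(r) for r in readings]
print(percepts)  # ['obstacle', 'clear', 'obstacle', 'clear']
```

Data goes in and labels come out, but at no point is there a subject for whom any of it is like anything.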

The second level of awareness (or “awareness”) involves not only that incorporation of external data, but awareness of the process of incorporating it. I perceive objects in my environment, but I am also internally thinking about the fact that I’m thinking. A machine would have to be thinking about its reception of data, but machines don’t think through rational processes. They don’t “think” at all, because they don’t apply semantic logic; they apply only physical processing according to logic supplied externally. Artificial intelligence employs neural networks that might be regarded as having jettisoned the idea of an algorithm; indeed, it is sometimes difficult to trace the process at all, the so-called “black-box” problem. The processing is immensely complicated by several sources of variability run through probability sieves, so one cannot trace it in linear fashion as in the early days. But it is still only processing data. Even the unpredictable output of neural network processing is the result of sophisticated programming.
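
That last point can be made concrete. A network’s varied output comes from drawing on a probability distribution, but the draw itself is performed by a programmed random-number generator: fix the seed, and the “unpredictable” output repeats exactly. A minimal sketch, using a toy three-way output distribution rather than a real model:

```python
import random

# A toy distribution over three "next tokens", standing in for the
# probability sieve a neural network's final layer produces.
choices = ["yes", "no", "maybe"]
weights = [0.5, 0.3, 0.2]

def generate(seed, n=8):
    rng = random.Random(seed)  # the "unpredictability" is seeded
    return [rng.choices(choices, weights)[0] for _ in range(n)]

# The output looks variable, yet the same seed yields the same sequence:
print(generate(42))
print(generate(42) == generate(42))  # True: determined by its programming
```

The variability, in other words, is itself programmed.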

The third level of awareness is other-awareness. That doesn’t just mean distinguishing among objects in our environment. It means recognizing that some of those objects are not objects at all, but rather subjects themselves. I am aware of the rocks and the river and the trees, but I am also aware of you, and you’re different from the rocks. You are another like me. You have self-awareness like me, and I am aware of your self-awareness. A machine can be programmed to distinguish between objects and other self-aware entities, but that’s not the same as complete other-awareness, because there is not enough there to develop the double-playback of mutual awareness that builds individual relationships and brings about a shared cultural memory.

The fourth level of awareness involves a breach of one’s interiority. If I am aware that there are others like me who are also self-aware, then our distinctness resides in our respective interiority: my internal processing, which you cannot see into, and yours, which I cannot see into. I can infer some of what you’re thinking from your actions and from your semantic logic uttered in literal words, but I can’t see into your mind. If I imagine another like me who is the superlative of all my ideals and who also breaches the interiority of my consciousness, I have arrived at God. If a machine could arrive at the form of “awareness” hypothesized by Lipson, it would do so by the ground-up building of something that mimics human awareness but is not quite sentience.

Brain is not Mind

It’s exciting to think about the possibilities of artificial intelligence, but it amounts to linear computing at successive meta-levels. We can call it “learning,” but that just means the machine is programmed to acquire new data and develop new algorithmic pathways, all on the basis of its original programming. Complexity is not a synonym for sentience.
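
What that kind of “learning” amounts to mechanically can be shown in a few lines: a fixed update rule, written by the programmer, adjusts numbers to reduce an error. The numbers change; the rule never does. A minimal sketch, fitting the hypothetical relation y = 2x by gradient descent:

```python
# "Learning" y = w * x from examples: the machine acquires new data
# and adjusts w, but only by the update rule it was given.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # noisy samples of y near 2x

w, lr = 0.0, 0.05  # initial weight and learning rate
for epoch in range(100):
    for x, y in data:
        error = w * x - y
        w -= lr * error * x  # the programmed rule: step down the gradient

print(f"learned w = {w:.2f}")  # close to 2.0; new numbers, same program
```

Call that “learning” if we like, but the original programming is doing all the work.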

It’s also exciting to think about the possibilities suggested by quantum computing, and by neural networks built in mimicry of what the brain does. We’re still just talking about very fast processing, however; possibly even scary, monster-like machine behavior. But not human awareness.

Are self-simulation of cognition and physical form the same thing as human self-awareness? The hypothesis underlying most robotics and AI research is “yes,” that mind equals brain. But it’s only a hypothesis. As Lipson acknowledges: “It is completely a leap of faith to believe that eventually this will get to human-level cognition and beyond.”
