Appendix 4



The Fundamental Differences Between Human and Artificial Intelligence


Paper presented by Federico Faggin at the V Congress of the Future Santiago, Chile, January 21, 2016



There is much speculation today about a possible future where mankind will be surpassed, perhaps even destroyed, by machines. We hear of self-driving cars, Big Data, the resurgence of artificial intelligence, and even of transhumanism, the idea that it may be possible to download our experience and consciousness into a computer and live forever. We also hear major warnings from public figures, such as Stephen Hawking, Bill Gates, and Elon Musk, about the dangers of robotics and AI. So, what is true and what is fiction in this picture?

In all these projections, it is assumed that it will be possible to make truly intelligent and autonomous machines in the not-too-distant future; machines that are at least as good as, if not better than, we are. But is this assumption correct? I will argue that real intelligence requires consciousness, and that consciousness is something our machines do not have and will never have.

Today most scientists believe that we are just machines; sophisticated information-processing systems based on wetware. That is why they believe it will be possible to make machines that will surpass human beings. They believe that consciousness is an epiphenomenon of the operation of the brain, produced by something like the software that runs in our computers. Therefore, with more sophisticated software, our robots will eventually be conscious. But is this future really possible?

Well, let’s start by defining what I mean by consciousness: I know within myself that I exist. But how do I know? I am sure I exist because I feel so. So, it is the feeling that carries the knowing; and the capacity to feel is the essential property here. When I smell a rose, I feel the smell. But careful! The feeling is not the set of electrical signals produced by the olfactory receptors inside my nose. Those signals carry objective information, but that information is translated within my consciousness into a subjective feeling: what the smell of that rose feels like to me.

We can build a robot capable of detecting the specific molecules that carry the smell of a rose and correctly identifying a rose by its smell, for example. However, the robot would have no feeling whatsoever. It would not be aware of the smell as a sensation. To be aware, one must feel. But the robot stops at the electrical signals; from those signals it can generate other signals to cause some response, some action. We do much more than that, because we do feel the smell of the rose and, through that feeling, we connect with that rose in a special way; we can also make a free-will decision that is informed by that feeling.

Consciousness could be defined simply as the capacity to feel. But feeling implies the existence of a subject that feels: a self. Therefore, consciousness is inextricably linked to a self and is the inherent capacity of a self to perceive and know through feelings, through a sentient experience; it is a defining property of a self.

Now, feelings are a different category of phenomena from electrical signals, incommensurable with them. Philosophers have coined the word quale to indicate what something feels like, and explaining qualia is called the hard problem of consciousness because nobody has been able to solve it. In the rest of my talk, I will use the word qualia to refer to four different classes of feelings: physical sensations, emotions, thoughts, and spiritual feelings.

Electrical signals, be they in a computer or in a brain, do not produce qualia. Indeed, there is nothing in the laws of physics that tells us how to translate electrical signals into qualia. How is it possible then to have qualia-perceptions?

Having studied the problem for nearly thirty years, I have concluded that consciousness may be an irreducible aspect of nature, an inherent property of the energy out of which space, time, and matter emerged in the Big Bang.

From my perspective, far from being an epiphenomenon, consciousness is real. In other words, the stuff out of which everything is made is cognitive stuff and the highest material expression of consciousness is what we call life. In this view, consciousness is not an emergent property of a complex system, but it’s the other way around: a complex system is an emergent property of the conscious energy out of which everything physical is made.

Thus, consciousness cannot magically emerge from algorithms, but its seeds are already present in the stuff of creation. In this view, consciousness and complex physical systems co-evolve.

There is no time to explore this subject in depth, because I want to make a convincing case that, to build truly intelligent, autonomous machines, consciousness is indispensable, and that consciousness is not a property that will emerge from computers. Some people may then insist that computers may be able to perform better than humans without consciousness. That is what I would like to discuss next. I want to show that comprehension is a fundamental property of consciousness, even more important than qualia-perception, and that comprehension is a defining property of intelligence. Therefore, if there is no consciousness, there is no comprehension; without comprehension, there is no intelligence; and without intelligence, a system cannot be autonomous for long.

Let’s consider how human beings make decisions. Our sensory system converts various forms of energy in our environment into electrical signals, which are then sent to the brain for processing. The result of the processing is another set of electrical signals representing multi-sensory information: visual, auditory, tactile, and so on. At the end of this process, we have a certain amount of objective information about the world. Computers can get this far. This information is then somehow converted within our consciousness into semantic information: an integrated multisensory qualia-display of the state of the world that includes both the inner world and the outer world. It may be even more accurate to say that the outer world has been brought inside of us, into a representation that integrates both worlds.

This is what I call qualia-perception. But this is only the raw semantic data out of which comprehension is achieved through an additional process even more mysterious than the one that produced qualia-perception. Comprehension is what allows us to understand the current situation within the context of our overall experience and the set of our desires, aspirations, and intentions.

Understanding then is the next necessary step before an intelligent choice can be made. It is understanding that allows us to decide if an action is needed and if so, what action is the optimal one. And the degree to which consciousness is involved in deciding what action to take has a huge range, going from no involvement whatsoever, all the way to a protracted conscious reflection and pondering that may take days or weeks.

When a situation is judged to be like other situations where a certain action produced good results, the same action can be chosen subconsciously, causing something akin to a conditioned response. At the other extreme, there are situations unlike anything encountered before, in which case the various choices based on our prior experience are likely to be inadequate. Here is where our consciousness gets deeply involved, allowing us to come up with a creative solution. It is here, not in solving trivial problems, that we find the cutting edge of human consciousness, where consciousness is indispensable. Therefore, real intelligence is the ability to correctly judge a situation and find an innovative approach. Real intelligence requires comprehension.

Now, to have true autonomy, a robot needs to be able to operate in unconstrained environments, successfully handling the huge variability of real-life situations. Even more, it must also handle situations in hostile environments where there is deception and aggression. It is the near-infinite variability of these situations that makes comprehension necessary. Only comprehension can reduce or remove the ambiguity present in the objective data. A trivial example of this problem is handwriting recognition or language translation, where the syntactical information is ambiguous: there is not enough information at that level to solve the problem.

Autonomous robots are only possible in situations where the environment is either artificially controlled or its expected variability is relatively small. If qualia-perception is the hard problem of consciousness, comprehension is the hardest problem of consciousness. Here is where the difference between a machine and a human being cannot be bridged.

All the machines we build, computers included, are made by assembling many separate parts. Therefore, we can, at least in principle, disassemble a machine into all its separate components and reassemble it, and the machine will work again. However, we cannot disassemble a living cell into its atomic and molecular components and then reassemble the parts hoping that the cell will work again. The living cell is a dynamic system of a different kind from our machines: it uses quantum components that have no definable boundaries.

We study cells reductively as if they were machines, but cells are holistic systems. A cell is also an open system, because it constantly exchanges energy and matter with the environment in which it exists. Thus, the physical structure of the cell is dynamic; it is recreated from moment to moment, with parts constantly flowing in and out of it, even if it seems to us that the cell stays the same. Therefore, a cell cannot be separated from the environment with which it is in symbiosis without losing something. A computer, by contrast, retains for as long as it works the same atoms and molecules it had when it was first constructed. Nothing changes in its hardware, and in that sense it is a static system.

The kind of information processing done in a cell is completely different from what goes on in our computers. In a computer, the transistors are interconnected in a fixed pattern; in a cell, the parts interact freely with each other, processing information in ways we do not yet understand. For as long as we study cells as reductive biochemical systems rather than as quantum information-processing systems, we will not be able to understand the difference between them and our computers.

When we study a cell reductively, separated from its environment, we are reducing a holistic system to the sum of its parts, throwing away what is more than the sum of the parts. That is where consciousness is. Consciousness exists only in the open dynamism of life, and life is inextricably linked to the dynamism we see in the cells, which are the indivisible atoms out of which all living organisms are built. The bottom line is that life and consciousness are not reducible to classical physics, while computers are.

Without consciousness there can be no self and no interiority, just mechanisms going through their mindless paces, imitating a living thing. But what would our life be if we didn’t feel anything? If we didn’t feel love, joy, enthusiasm, a sense of beauty, and why not, even pain? A machine is a zombie, going through the motions. There is no inner life in a mechanism; it is all exteriority. In a living organism, even the outer world is brought inside, so to speak, to give it meaning. And it is consciousness that gives meaning to life.

The idea that classical computers can become smarter than human beings is a dangerous fantasy. Dangerous because, if we accept it, we will limit ourselves to expressing only a very small fraction of who we are. This idea takes away our power, freedom, and humanity: qualities that pertain to our consciousness and not to the machine we are told we are.

In my opinion, the real danger of the progress in robotics and AI is not that we will create machines that will take over humanity because they are more perfect than us. The real danger is that men of ill will may cause serious damage to mankind by using ever more powerful computers and robots to evil ends. But then it will be man, not the machine, that causes the trouble. And this is a major challenge that society will have to face as soon as possible.

Used properly, computers and AI will allow us to discover the magnificence of life as we critically compare ourselves to them; and this new knowledge can accelerate our spiritual evolution. Used poorly, AI may enslave us to hateful men. The choice is ours and ours alone.