The AI Economy (part 4)


Here I discuss the Epilogue of Roger Bootle’s The AI Economy, in which he raises deeper questions about the resemblance between artificial intelligence and human intelligence. Many technology enthusiasts imagine a point where AI becomes so advanced that it equals or surpasses human intelligence. Victories of computers over humans in challenging mental tasks like playing chess are only the beginning of what they think is coming.

The popular term associated with this turning point is the “Singularity”:

The Singularity is now generally taken to mean the point at which AI acquires “general intelligence” equal to a human being’s. The Singularity is so important, not only because beyond this point machines will be able to outperform humans at every task, but also because AI will be able to develop itself without human intervention and this AI can therefore spin ever upward, out of our understanding—or control.

In this scenario, humans would not only be replaceable in most forms of work, but could be enslaved or even annihilated by the intelligent machines, as some science fiction movies imagine. Opinions on the Singularity range from the belief that it is inevitable and imminent, to the belief that it is “eons away” (Noam Chomsky) and perhaps impossible in principle.

Enter philosophy

Whether the Singularity is possible depends on whether a mechanical artifact can ever be considered the equivalent of a human being, or if the distinction between the two is ultimately insurmountable. Bootle insists that this is a philosophical question beyond the scope of computer science. The discussion brings up the old mind/body problem, the question of whether what we call “mind” is ultimately reducible to a mechanistic and material system that we can emulate in a dead machine.

[I]f a group of technicians believes that they can venture into this most difficult territory of the relationship between mind and matter without reference to the contemplations of philosophers over the last 2,000 years and without even considering the religious view, I wonder if they can really appreciate the depth of the issues.

Bootle also seems to realize that one cannot dismiss the problem simply by rejecting traditional religious supernaturalism or early modern mind/body dualism. Even if mind is not a distinct immaterial thing, it may have qualities quite different from a mechanical brain. A key question is whether human intelligence is reducible to computation. Although he does not get into detailed philosophical or scientific arguments, he shares his own intuition that it is not. He attaches great significance to the fact that human intelligence is embodied in a living body that is capable of emotion.

[E]motion is a key part of how humans make decisions and also a key part of their creativity. This is a completely different realm from computation…

Perhaps our ability to engage with the physical world, to encounter and understand it, and to be, is rooted in the fact that we are incarnated. If that were true, then it would prove impossible to create, artificially, out of nonbiological matter, what we would recognize as intelligence.

In other words, a mind such as ours is more than a piece of software that could run just as well on a different platform, a dead machine instead of a living body.

Although the computational theory of mind and the mechanistic conception of reality on which it is based have been dominant ideas of the Machine Age, some scientists are open to alternative perspectives. Bootle cites the work of physicist Roger Penrose, who founded the Penrose Institute to study the distinctly creative qualities of human intelligence.

Creative experience

Although I am not a philosopher, my own studies in philosophy and science have led me to similar conclusions. One important source for me has been Charles Hartshorne’s “philosophy of shared creative experience.” In an essay with that title, he begins by saying, “In every moment each of us accomplishes a remarkable creative act. What do we create? Our own experience at that moment” (Creative Synthesis and Philosophic Method, p.2).

Consider what your mind is doing as you read these words. Can you process them just by applying an abstract algorithm, a computational procedure? Don’t you have to interpret each word in the light of a multitude of past experiences, some as recent as the preceding words, but others as remote as much earlier life experience? (Doesn’t it matter whether you’ve ever heard of Hartshorne, or have some idea of what philosophy is?) Many people might read these words, but you are having a unique experience by constructing a cumulative synthesis of your prior experiences up to this moment. Hartshorne says, “By no logic can many entities, through law, exhaustively define a single new entity which is to result from them all.” Also, you can be conscious of what you are doing, especially after philosophers have encouraged you to reflect on it!

This line of reasoning draws the distinction between living thinkers and machines quite sharply, since even our smartest machines experience nothing! Never mind, say the AI enthusiasts. Programmers just have to find the right logic, the right algorithm, the right wiring, and a network of dead circuits will acquire consciousness and the capacity for experience. A human brain is nothing but a network of connections anyway, isn’t it? Or does the fact that it is composed of one of the marvels of the universe, the living cell, make a difference? (I am also open to the possibility that smaller, naturally occurring parts of nature can experience something, the philosophical position known as panexperientialism. Bootle touches on that by suggesting that “mind, in some form, is at the root of the universe.” Some distinguished philosophers and scientists, such as William James and Alfred North Whitehead, have taken that position, and some interpretations of quantum physics support it.)

Machines are externally programmed, but without feelings and life experience they are not truly self-motivated. They can learn, but their learning is limited by what their programmers want them to learn. A robot can simulate being happy to see you, but it is not actually happy to see you. A robot can display caregiving behavior under the right conditions, but it cannot really care about a child or love the child unconditionally. Machines can assist us in fulfilling our human aspirations, but they can hardly lead us, since they have no aspirations of their own. Without true self-motivation, they cannot be truly autonomous from their designers and programmers. Those who make the machines must not abdicate responsibility for them or avoid accountability to society for what they get them to do.

Time will tell if Bootle’s skepticism about the Singularity is justified. In the meantime, Bootle’s reasonable assessment of the possible economic impacts of artificial intelligence is probably more useful than more speculative dreams of technological utopia or dystopia.
