I seem to think, therefore I am. Can we tell the difference between Artificial Intelligence and true intelligence?
By David M Smith
Whenever I ask Siri a question and I get a useful response, I’m always compelled to touch the button one more time and say, “Thank you”. Siri is always gracious in reply.
Why do I do this? Am I crazy? I’m not just showing off Siri to whoever may be nearby, mind you — I find myself doing this even when I’m home alone. Intellectually, I know that Siri is just a machine, an algorithm, code. And yet, I still find myself maintaining my formal niceties with him. “Don’t mention it,” he says in response to my thanks.
Yes, him. I do choose the male voice for Siri, but that’s not the point. To me, and I suspect to many others, Siri is definitely “him” or “her”, and never “it”.
But why make Siri seem like a person at all? The programmers who created Siri’s algorithms could just as easily have made Siri respond with no-nonsense facts and skipped the polite niceties — not to mention the jokes and easter eggs. (Try asking Siri to “Open the pod bay doors” sometime.) Any query or command that Siri can’t process could much more easily be met with “No information available” or “That does not compute”, à la the humorless bridge computer on Star Trek’s Enterprise.
So why did Apple bother with all the “human” touches? (It’s an interesting contrast that the no-nonsense Google Now, Android’s answer to Siri, lacks even a name. Microsoft’s ‘Cortana’ personal assistant AI, on the other hand, looks to be doubling down on the personality.) I think the answer is that a human-like response is nothing more than a better, simpler user interface — and Apple loves to make simple UIs. For the first time, we have all the technology to make humanlike UIs work. Think back to those first-generation voice recognition systems — you might still have one in your car — that understood only a precious few words, precisely spoken. So frustrating to use! You have to remember exactly the words corresponding to the commands, and speak them clearly. It feels like working with a stubborn child, not a trusted navigator. Machine vocabulary, speech recognition and semantic parsing have now evolved to a stage where a voice-based interface can be truly useful.
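To make that contrast concrete, here is a toy sketch of the gap between an exact-phrase matcher and even a crude intent-based one. Every command phrase, keyword and function name below is invented for illustration; no real assistant works from a table this small:

```python
# A caricature of a first-generation voice command system: the user must
# recall and pronounce the exact phrase, or nothing happens.
RIGID_COMMANDS = {
    "navigate home": "Starting route to Home.",
    "call office": "Calling Office.",
}

def rigid_assistant(utterance: str) -> str:
    return RIGID_COMMANDS.get(utterance.lower().strip(), "That does not compute.")

# A crude stand-in for semantic parsing: match on meaning-bearing keywords,
# so many different phrasings of the same request succeed.
INTENT_KEYWORDS = {
    "navigate": {"navigate", "directions", "route", "drive"},
    "call": {"call", "dial", "phone", "ring"},
}

def intent_assistant(utterance: str) -> str:
    words = set(utterance.lower().split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return f"Okay, handling a '{intent}' request."
    return "Sorry, I didn't catch that. Could you rephrase?"

print(rigid_assistant("could you navigate home please"))   # "That does not compute."
print(intent_assistant("could you navigate home please"))  # handles the 'navigate' intent
```

Real systems map whole utterances to intents with statistical models rather than keyword sets, but the user-facing difference is the same: you stop memorizing commands and start talking.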
With Siri, the Apple engineers have gone one step further. Adding emotions — humor, confidence, sarcasm — lets us set aside that part of our brain reserved for coping with computers and children, so we can relax and interact naturally and more effectively. Adding ‘emotions’ or ‘personality’ to a program like Siri isn’t just a fun easter egg: it has the express purpose of making the system more useful. I expect we’ll see voice-based systems start to take on a broader range of helpful human-like characteristics, especially once more connected sensors start providing even more context to the AI agent. Some possible examples:
Anticipation: “I notice you’re near a movie theater. Here are some movies you might like to see.”
Concern: “Your heart rate is very high. Why don’t you rest a bit?”
Compassion: “I’ve responded ‘attending’ for Mark’s funeral. I’m sorry for your loss.”
Urgency: “Your next meeting in Berkeley is in 30 minutes, but with traffic it will take you 45 minutes to get there. Shall I send a message to Susan?”
But these are just pretend emotions, right? Your smartphone couldn’t care less if you have a heart attack on the treadmill: it’s just expressing ‘concern’ because it’s a useful feature to add to a heart-rate monitor. (Also, dead consumers don’t buy new smartphones.) Yet these pretend emotions can easily have the useful effect of providing reassurance, or security, or comfort. And isn’t that the point of expressing emotion in the first place? Indeed, there’s evidence that our very first emotions aren’t heartfelt at all, but merely a means to an end. According to a Harvard study, babies cry to monopolize their mother’s attention and delay the arrival of a sibling (at least for a while), not because they are truly in pain or fear.
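On the device side, none of these ‘emotions’ requires machinery any deeper than a few rules over sensor data. A deliberately crude sketch of the idea, in which every field name, threshold and phrase is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Context:
    heart_rate: int            # beats per minute, from a fitness band
    near_movie_theater: bool   # from location data
    minutes_to_meeting: int    # from the calendar
    minutes_of_driving: int    # from a traffic service

def emotional_response(ctx: Context) -> str:
    # Each branch pairs a plain sensor check with warm phrasing.
    # The 'emotion' lives entirely in the wording, not in the logic.
    if ctx.heart_rate > 170:
        return "Your heart rate is very high. Why don't you rest a bit?"    # concern
    if ctx.minutes_of_driving > ctx.minutes_to_meeting:
        return "Traffic is bad and you'll be late. Shall I send a message?"  # urgency
    if ctx.near_movie_theater:
        return "You're near a movie theater. Here are some movies you might like."  # anticipation
    return "All clear. Enjoy your day!"

print(emotional_response(Context(heart_rate=182, near_movie_theater=False,
                                 minutes_to_meeting=30, minutes_of_driving=45)))
```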
Like accurate voice recognition, these simulated emotions make devices easier to use and more useful as well. Layer on enough of these emotion-like responses, and you might start to believe that your smartphone really does care how you feel. But if you think it might be tricky to determine whether emotions are “real” or just a soulless simulation, what about consciousness in general? Is our intelligence truly a higher form of reasoning, or merely the result of millions of years of evolution driving chemical and electrical reactions in our neurons, in a manner tuned to optimize outcomes in the struggle for survival? How, indeed, do we distinguish between the two?
The answer isn’t clear-cut. Consider plants: despite lacking anything resembling a nervous system, some plants exhibit behaviors that seem indistinguishable from intelligence: memory, learning and communication. (Michael Pollan has an excellent article on the science of plant neurobiology in a recent issue of the New Yorker.) This concept of “plant intelligence” is controversial amongst biologists, because the scientific concept of intelligence is firmly rooted in the presence of a brain and nervous system, neither of which plants possess. Nonetheless, it’s undeniable that some plants exhibit problem-solving capabilities normally associated only with intelligent animals, like the climbing vine that mimics the leaves of its host tree as camouflage. If seemingly intelligent behavior can arise from nutrient flows in plants, perhaps intelligence can arise in other non-brain systems, too.
Similarly, the concept of life itself turns out to be just as difficult to pin down. In a thought-provoking essay in the New York Times, Ferris Jabr makes a convincing case that whether something is “alive” (say, a person or a tree) or “not alive” (a crystal? a flame? a virus?) is inherently undecidable, because “life” is merely a concept that we impose on the world around us.
Google’s AI expert Ray Kurzweil predicts that computers will gain what looks a lot like consciousness in a little over a decade. (Kurzweil is also the one who predicts that by 2045 our society will be irrevocably transformed by the Singularity, when AI surpasses human intelligence.) In the same way that we struggle to define whether a virus is truly “alive”, we’ll soon face similar challenges in distinguishing “real” consciousness from artificial intelligence. But if we relax our preconceptions that life requires biology, and that intelligence requires a brain, then our conceptual domain for what computers can become expands immediately.
Science fiction has often explored the consequences of artificial intelligences that compete with human intelligence. Often, the consequences are painted as dire: observe the warring humanlike Cylons of the rebooted Battlestar Galactica. Less often, the outcomes are depicted positively, such as Theodore Twombly’s love for the artificial intelligence Samantha in the movie Her. In both these examples, the protagonists find it impossible to distinguish these artificial intelligences from human intelligence. In the manner of an expanded Turing test, these non-human intelligences exhibit all of the characteristics, for good and ill, of real people. Perhaps it’s time to adapt Arthur C Clarke’s famous statement, and accept that any sufficiently advanced AI is indistinguishable from consciousness.
In the movie Transcendence, the computer scientist played by Morgan Freeman asks an artificial intelligence, “Can you prove that you are self-aware?” The AI replies, “That’s an interesting challenge. Can you prove that you are?” That’s an answer worthy of any human.