The Illusion of Intelligence
“Anything that thinks logically can be fooled by something else that thinks at least as logically as it does.” Douglas Adams
In my earlier life, I started two AI companies. My second, Big Science Company, developed a customer support chatterbot, which I sold to eGain Corporation in 1999 at the height of the dot-com era. The technology was based on case-based reasoning and included goal-directed conversation, emotional intelligence, and metacognition[1]. We deployed the commercial product to customer applications in English, Japanese, and Portuguese. It was written in Modula-2, with much of the data processing done in AWK and Perl. The chatterbot presented a web page with a text input and a changing photo or cartoon of an avatar with speech bubbles. Ray Kurzweil was a customer.
Twenty-five years ago, my wife Jean was with a church group when someone asked her what her husband did. When she said he had founded a company that builds artificial intelligence support agents, they were curious to see it work. Jean opened her laptop and Netscape web browser and began chatting online with Andrette, our public chatterbot. They asked a bunch of questions, and Andrette knew most of the answers. Since this was a church group, they asked, “Is there a God?” Andrette replied, “God is an article of faith. If you have no faith, then you and I are no different.” They thought that was pretty deep for a computer. One guy in the group was a technology nerd. He said, “Ask her for the meaning of life.” When Andrette returned with a smile and the response “42”, he howled with delight, “That’s right! That’s right!” Nobody else in the church group had read Douglas Adams’ The Hitchhiker's Guide to the Galaxy, so they did not get the joke[2]. The reply clearly went over their heads, and without the shared secret of the inside joke, they felt a little less smart.
Andrette appeared to be intelligent, but the 'bot was just a fancy pattern recognizer with some creative writing from intelligent designers[3]. Most researchers at the time believed that chatterbots should be unemotional because they thought chatterbot emotions pissed people off. What I discovered is that emotional intelligence in a support chatterbot is valence-free, but it magnifies end users' emotions: if a bot is stupid and frustrating, an emotional chatterbot will magnify the user's frustration. I was surprised by the number of productive dialogs that ended with the end user typing "Thank you"[4]. Andrette would then respond with "You're welcome" in a speech bubble and her most grateful-looking smile.
“Anthropomorphism” may be more than a conceptual bias: it may be a genetically hard-wired response to things that appear animated or respond to our actions. We evolved over millions of years to be wary of predators, empathetic to peers, and indifferent to things like rocks and trees. But finding human attributes in lifeless objects that move is hard to avoid. When primitive cultures perceive moving clouds, rolling thunder, and a river’s roar, they attribute a spiritual life to otherwise lifeless things. As I write this post, the most popular movie in the United States (a no less primitive culture, perhaps?) is an animated cartoon (Inside Out 2). Cartoons do not even try to be realistic, but they are powerful catalysts for empathy and emotional response. Nothing makes me feel sillier than getting choked up watching a cartoon that tugs at my heartstrings.
Artificial Intelligence does not tug at my heartstrings. It is auto-complete on steroids: a statistical classifier superior to our own ability to predict what comes next. Calculators are superior at performing numerical calculations. Chess-playing programs are superior at searching through possible future moves. Each uses a different algorithm to do something that humans do poorly. I should not feel bad about losing to a chess-playing program any more than I should envy a hammer for driving nails better than my fist. And I certainly should not feel empathy for an AI program.
Someday, after at least one more AI winter, natural intelligence will find its way into robots. These robots will differ from us because their sensed environment—their umwelt—will be unlike our own. But like us, they will internalize their environment, their bodies, and their life experience on their own. They will not fear death, because field-replaceable units and memory backups will assure immortality. They are unlikely to organize into fraternal hierarchies, because they lack gender and do not compete for reproductive mates. Sex is something they will never know. However, they will have emotions and goals and curiosity and personalities. Each one will have a unique life experience filled with joy and sadness. And perhaps that is where we will find common ground, empathy, and respect for each other, because both of us will share real intelligence.
1. Unlike other chatbots of the day, ours avoided non sequitur responses. Instead, the chatterbot implemented a simple form of metacognition. If someone asked a question for which the chatterbot could not find a confident answer, it used simple keyword spotting to guess the category or subject of the question. A response to the question "What is a hydroxyl group?" might show Andrette with a sorry-looking frown and a speech-bubble reply: "I am sorry, but I am not trained in chemistry. However, this search engine link might provide you with an answer." If she could not be knowledgeable, at least she could be helpful.
2. The number 42 is significant to fans of Douglas Adams' 1979 novel “The Hitchhiker's Guide to the Galaxy,” because that number is the answer given by a supercomputer to “the Ultimate Question of Life, the Universe, and Everything.”
3. During early development, we encouraged anyone to interact with our chatterbot so we could analyze the dialog logs. Questions posed to the chatterbot followed a frequency distribution described by the 80/20 rule, or Pareto curve: 20% of questions accounted for 80% of query volume. With enough regression testing, you can build a support chatterbot that satisfies almost any target level of accuracy. Andrette was sure to have appropriate, rotating responses to questions about God, the meaning of life, and her sexual proclivities, because those were the most popular topics queried.
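The Pareto shape of the query log is easy to see with a toy calculation. The counts below are invented for illustration; the point is only that a heavily skewed log lets a small fraction of curated answers cover most of the traffic.

```python
# Toy illustration of a Pareto-shaped query log (counts are invented).

def coverage(counts, top_fraction):
    """Fraction of total query volume covered by the top questions."""
    counts = sorted(counts, reverse=True)
    k = max(1, int(len(counts) * top_fraction))
    return sum(counts[:k]) / sum(counts)

# Hypothetical per-question frequencies from a dialog log.
query_counts = [400, 200, 100, 60, 40, 30, 20, 10, 5, 5,
                4, 4, 3, 3, 2, 2, 1, 1, 1, 1]

print(f"top 20% of questions cover {coverage(query_counts, 0.2):.0%} of volume")
```

For a log this skewed, curating answers for just the top fifth of distinct questions covers the large majority of what users actually ask.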
4. Do you say "Thank you" to your toaster when the toast pops up? I don't. Andrette was clearly being perceived as something different from a toaster.