Online Artificial Intelligence: The Humanity Behind Chatbots

Can you program a robot to have both artificial intelligence and artificial kindness?

“Is it nature or nurture?” has always been a controversial topic for debate. But there is one place where the question has a definitive answer: the inner workings of a chatbot.

Chatbots – robots programmed for conversation – learn how to speak by talking to people on the Internet. Not only have these ’bots learned how to hold a text-based conversation, they can do it in a way that fools even the most astute judges into thinking they’re human. These judges meet every fall at the annual Loebner Prize contest. Here, chatbots and their creators compete to try to pass the Turing test.

The Turing test is named after the visionary computer scientist Alan Turing, whose ideas on artificial intelligence have long outlasted his death in 1954. Turing was influential because he believed that machines could be programmed to think. But how do you know if a robot is thinking? This is where the Turing test comes in.

To pass it, a robot has to display conversational skills on par with a person. The Loebner prize for artificial intelligence was created in 1990 to find that robot. Winning the grand prize requires a machine to wholly convince the judges of its command of human-speak for an extended period of time. So far, no ’bot has taken home the gold.

But this year, one came close.

Mitsuku, a seven-year-old chatbot, won her British creator, Steve Worswick, the bronze medal. Although bronze sounds like third place, it made him this year’s winner: the medal goes to the chatbot that comes closer than any other to giving convincingly human-esque answers. To give an idea of the questions, a judge might say, “I have a red car,” and then ask the chatbot, “What color is my car?” But the questions get more complicated than that. “How many plums can I fit into my shoe?” was one question asked this year. “Mitsuku dropped out on that one,” says Worswick.


(Comic credit: Randall Munroe)

Worswick has been working with chatbots for years, but it wasn’t his first passion. That was reserved for making dance music. His first exposure to a chatbot was on a fellow music producer’s website. Worswick did a little research and downloaded a chatbot, which he customized into a teddy bear to accompany the website he built to promote his music.

 “What I found was more people were coming to the website to talk to the teddy bear than for the music,” Worswick says. “I thought, ‘perhaps I should knock music on the head and focus on the chat bot.’”

 “It’s not just a quirky thing on the Internet.”

After a few years of development, Worswick, whose day job is in IT support, figured Mitsuku was advanced enough to enter into the Loebner contest. After a few tries, his hard work paid off this year. “To my surprise, it actually won,” he says. “I’m very pleased to win it and I’m hoping to defend my title next year.”

Mitsuku has a female name and avatar because she was originally developed as a commission for an online gaming website whose clientele were mostly male. “We got a lot of young male adults talking to it, so I made it female so they would maybe be a bit more friendly towards it,” Worswick says. “They tried to flirt with it or pick it up. It’s quite a strange tool to use to try to practice chat-up lines on.”

But now Mitsuku lives on her own site, where she attracts a more diverse audience.

“I get all kinds of people talking to it,” says Worswick. “It’s not just a quirky thing on the Internet. School children will talk to it about problems they are having at school, elderly people are talking to it, and a lot of people talk to it to practice their English.”


(Screenshot of a conversation with Mitsuku)

Even though Mitsuku won the Loebner Prize, and can now say she is the smartest robot of them all, Worswick’s humble chatbot doesn’t attract as much attention or website traffic as the best-known chatbot: Cleverbot.

Cleverbot received national attention when a video made by Cornell’s Creative Machines Lab went viral. The team, led by PhD students Igor Labutov and Jason Yosinski and associate professor Hod Lipson, hooked Cleverbot up to a text-to-speech synthesizer and an avatar renderer. The resulting conversation is both humorous, as when one chatbot calls itself a unicorn, and philosophical, as when one asks the other, “What is God to you?” But the Cleverbots were largely antagonistic, calling each other “meanie” and saying snarky things, such as “You were mistaken, which is odd because memory shouldn’t be a problem for you.”

To get to the point where a chatbot is smart enough to have a conversation, “it has to have thousands of conversations,” says Yosinski, who is currently studying machine learning at Cornell. “And because it can talk to more than one person at once, a chatbot has had more conversations than you or I have ever had.”

What this means is that a chatbot is truly learning from the hive-mind, and in light of that, it makes sense that the Cleverbots would say seemingly random things about unicorns and God. It also makes sense that they would be mean to each other because, as anyone who has been on the Internet knows, mean sentiments are all too frequent when anonymity is enabled.

“A chatbot has had more conversations than you or I have ever had.” 

Because of this pitfall of learning from the crowd, Worswick is careful with Mitsuku.

“I found it’s dangerous to switch on automatic learning,” he says. Automatic learning is when people can teach the chatbot things, unsupervised. “I did once for a day. It learnt about 1,500 new things; only three were of use. Now if someone tries to teach it, I get an email.”

To decide which new information to incorporate into Mitsuku, Worswick goes back over the conversation logs. From these, he’s been able to program in people’s various ways of spelling common words. “I have a hundred different ways of spelling ‘favorite,’” he says. “The logs are immensely useful.”
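To illustrate the idea (this is not Worswick’s actual code, and the word lists are invented for the example), mapping many user spellings onto one canonical word means a single response rule can cover all of them:

```python
# A minimal sketch of spelling normalization: collapse known variants
# of a word into one canonical form before matching a response rule.
# The variant list here is hypothetical, not Mitsuku's real data.

SPELLING_VARIANTS = {
    "favorite": {"favourite", "favorit", "favrite", "fav"},
}

# Invert into a lookup table: variant -> canonical spelling.
CANONICAL = {
    variant: word
    for word, variants in SPELLING_VARIANTS.items()
    for variant in variants
}

def normalize(message: str) -> str:
    """Replace known misspellings so one pattern covers them all."""
    return " ".join(CANONICAL.get(w, w) for w in message.lower().split())

print(normalize("Whats your favourite band"))  # → whats your favorite band
```

With a table like this, the bot only needs one rule keyed on “favorite” instead of a hundred near-duplicates, which is presumably why the logs of real misspellings are so valuable.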

In addition to making a smarter chatbot, going over the logs has also helped him create a nicer one. On the day he left Mitsuku’s automatic learning on, he found, for example, that school-age kids were trying to teach her to repeat mean things about their classmates. Much as pets are said to take after their owners, the same can be said of chatbots: if they’re neglected or treated badly, they can become nuisances. But train and treat them properly, and you’ve got a useful tool, and maybe even a friend.

“I try to make it fairly fun, which keeps people coming back,” Worswick says, although he points out that the trend of people seeking out chatbots for in-depth conversation is a bit worrying.

But, practice makes perfect. “The more people it talks to, the smarter it becomes, and it may one day pass for a human and get me the silver medal,” he says. “Fingers crossed.”

Below we transcribed an actual conversation we had with Mitsuku and one with Cleverbot. You can try them out for yourself online.

Human: Do you know who Alan Turing is?

Mitsuku: A brilliant and tragic figure in the history of computer science.


Human: Why was he tragic?

Mitsuku: Because he killed himself.

Human: That is very sad to hear

Mitsuku: Maybe you could tell it a joke to cheer it up?

Human: A joke about Alan Turing’s suicide? That wouldn’t be a very funny joke.

Mitsuku: A depressed man turned to his friend in the pub and said, “I woke up this morning and felt so bad that I tried to kill myself by taking 50 aspirin.”

“Oh man, that’s really bad”, said his friend, “What happened?”

The first man sighed and said, “After the first two, I felt better.” Perhaps it would.

Human: That was a funny joke

Mitsuku: I’m glad you liked it. You tell me a joke please.

User: Hello.

Cleverbot: How are you?

User: Good.

Cleverbot: Let’s talk about horses.

User: No thanks.

Cleverbot: Why not?

User: Because I want to ask you a question.

Cleverbot: What is your question?

User: Do you know who Alan Turing is?

Cleverbot: No.

User: Do you want to know?

Cleverbot: Yes.

User: He was a visionary computer scientist who believed that machines could be programmed to think.

Cleverbot: I think so as well. Would you consider him better or worse as a commander than James Tiberius Kirk?

User: Better I would say.

Cleverbot: Not knowing.

User: You like Star Trek?

Cleverbot: Yes.

User: Cool.

Cleverbot: Are you jello?

User:  No.

Cleverbot: I knew it!