Question: What is the square root of 4?

Answer: 15 puppies.

Does this sound like the recipient didn’t hear the question correctly? Well, maybe that’s because the recipient doesn’t have ears and is not a human at all.

The answer came from a student team that created a “chatbot,” an intelligent agent designed to pass as human. Student teams in the Department of Computer Science in the School of Engineering participated in a Turing test, a well-known test in which an agent demonstrates intelligence by fooling a human evaluator into thinking it is actually a human.

Twenty-two teams of students created chatbots for Lydia Tapia’s Introduction to Artificial Intelligence (CS 427/527) course this fall. The exercise not only allows students to use the technical skills they are learning in class in a tangible way, but it also fosters collaboration as students form teams to build different chatbots.

Students started working on their creations at the beginning of the semester. Students were encouraged to define a topic area of expertise, so the chatbot could focus conversation on a certain topic.

The top eight teams presented their bots in class on Oct. 6. Bots in competition included Cult-Bot, Trump-Bot, Meme-Bot, Booze-Bot, Sick-Bot, Sleepy-Bot and Conspiracy-Bot. Students voted on each team, with scoring based on believability, humor and informedness. The winners were Cult-Bot and Trump-Bot.

Cult-Bot was created by graduate student Torran Kahleck. In his creation, the user selects their gender, which brings up an appropriate avatar. The user then asks the “Acolytes of Our Eternal Wriwrenis” questions, and the bot uses a Markov chain to generate elaborate responses built from quotes from the Dalai Lama and other sources, with the ability to detect nouns and verbs. The bot also uses an API to detect the user’s location and generate responses such as, “Hello, John from Albuquerque.”
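A word-level Markov chain of the kind Cult-Bot relies on can be sketched in a few lines. This is a minimal, generic illustration, not the students’ actual code; the tiny sample corpus and the two-word order are assumptions made for the example.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=12, seed=None):
    """Walk the chain from a random starting state, emitting one word per step."""
    rng = random.Random(seed)
    state = rng.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:  # dead end: no word was ever seen after this state
            break
        out.append(rng.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)

# Illustrative stand-in for a corpus of collected quotes.
corpus = ("compassion is the source of happiness "
          "compassion is the basis of inner peace")
chain = build_chain(corpus)
print(generate(chain))
```

Because states that appear with multiple followers are resolved at random, the same corpus can yield different plausible-sounding sentences on each run, which is what gives such bots their loosely coherent, oracle-like tone.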

Trump-Bot was created by computer science students Ryan De La O, Tyler Lynch, and Christopher Salinas. This chatbot relied upon some innovative machine learning techniques to generate answers, as well as a handful of canned responses — generally regarding China, Mexico, trade, and jobs — regardless of the question asked. The students collected direct quotes from Trump speeches and tweets to provide believable responses.
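The canned-response fallback described above can be sketched as a simple keyword lookup. The topics and replies below are placeholders chosen to illustrate the mechanism, not the students’ actual data or real quotes.

```python
# Illustrative keyword-to-reply table; in the real bot, replies were
# drawn from collected speeches and tweets.
CANNED = {
    "china": "Reply about trade with China.",
    "mexico": "Reply about Mexico.",
    "jobs": "Reply about jobs.",
}
DEFAULT = "Generic on-brand reply, regardless of the question."

def respond(question: str) -> str:
    """Return the first canned reply whose keyword appears in the question."""
    q = question.lower()
    for keyword, reply in CANNED.items():
        if keyword in q:
            return reply
    return DEFAULT
```

The trick is that the default reply fires no matter what was asked, which is exactly the “answers regardless of the question” behavior the team exploited for believability.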

Tapia, an assistant professor of computer science, said this is the fifth year she has been teaching the artificial intelligence class and enjoys this exercise because she likes to find new ways to make the teaching of technical subjects interesting to her students.

“The goal of this project is to teach the students about constructing and querying a domain-specific knowledge base,” she said. “We start by reviewing conversations with Turing test winners, thus inspiring the students to make their own chatbot.” 

One of the students in class said that learning these skills will be valuable to his future.

“It was a great opportunity to learn more about chatbots and some of the AI techniques one can use to make them at least semi-believable,” said Kahleck, who created the Cult-Bot. “The open-ended approach Professor Tapia allowed us in coding the chatbot also provided an excellent opportunity for me to learn more about several web technologies I previously hadn't been exposed to.”

Kahleck said he’s already using what he’s learned to start on another chatbot, one that uses machine learning techniques to discern a user’s intention.

“I believe that the experience will absolutely be beneficial professionally as the prevalence of chatbots, and AI in general, continues to grow,” he said.

Examples of conversations generated by Trump-Bot and Cult-Bot.