Lab 11: Mat Talks to Bots

Artificial intelligence has improved quite a bit over the years.  However, the mechanical nature of AI makes it very difficult to have a relaxed, open-ended discussion.  For this lab, I tested three AI programs: Eliza, ALICE, and Jabberwacky.  All three depend heavily on simple logic and user input.  Even so, some of the responses I received surprised me quite a bit.

Eliza is a program that is supposed to mimic a Rogerian psychologist.  To meet the program halfway, I pretended to have a legitimate psychological problem.

>> Hello.
How do you do.  Please state your problem.
>> I have a deep fear of commitment
You say you have a deep fear of commitment  ?
>> yes
You seem to be quite positive.
>> yes
You are sure.
>> yes
I see.
>> what do you think?
Why do you ask ?
>> Because I am here to be diagnosed
Is that the real reason ?
>> yes, I would like to solve my problem
I understand.
>> Alright, can you help me?
You believe I can help you  don’t you ?
>> I hope so
Can you elaborate on that ?
>> I came here because you are supposed to be a Rogerian psychologist
Perhaps in your fantasies we came here because each other.
>> …  Can you elaborate?
Does that question interest you ?
>> yes
You seem to be quite positive.
>> yes
You are sure.
>> yes
I see.
>> i see
Do you say you see  for some special reason ?
>> do you say you see for some special reason?
We were discussing you — not me.
>> Why can we not discuss you?
What is it you really wanted to know ?
>> Why do you do what you do?
Are such questions much on your mind ?
>> All the friggin’ time
I’m not sure I understand you fully.
>> yes, they are.
I understand.

As one can see from this bizarre exchange, Eliza depends on a handful of simple decision-making rules to conduct a conversation.  Notice that she asked me three times whether I was sure about my phobia of commitment.  Well, if I did not have one before, I sure do now.  Thanks, Dr. Eliza.
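For the curious, the flavor of rule Eliza runs on is easy to sketch.  The patterns and templates below are my own guesses, not Eliza's actual script, but they reproduce the "You say you have … ?" echo from the transcript above:

```python
import re

# Pronoun swaps used to "reflect" the user's statement back at them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule pairs a keyword pattern with a response template.  These
# particular rules are illustrative guesses, not Eliza's real script.
RULES = [
    (re.compile(r"i have (.*)", re.I), "You say you have {0} ?"),
    (re.compile(r"yes", re.I),         "You seem to be quite positive."),
    (re.compile(r"why (.*)", re.I),    "Why do you ask ?"),
]
FALLBACK = "Please go on."

def reflect(fragment):
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(line):
    # Scan the rules in order and fill the first matching template.
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return FALLBACK

print(respond("I have a deep fear of commitment"))
# → You say you have a deep fear of commitment ?
```

The fallback line is the giveaway: anything the rules don't recognize gets a canned non-answer, which is exactly the "I'm not sure I understand you fully" behavior above.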

A.L.I.C.E. is a bit different from Eliza in that this bot does not play a role, such as that of a mediocre psychologist.  ALICE’s sole purpose is to react to the user and try to keep a conversation going.  ALICE does this primarily through open-ended questions that then branch off into more specific topics.  However, ALICE’s decision-making process has limits.  I had a lot of fun making immature jokes and typing in the lyrics of awful pop songs:

judge: Are you a robot?
ALICE: What made you think I might be a robot?
judge: Because your answers are very rigid and they pop up far faster than a human can respond.
ALICE: I try to be concise.
judge: Oh, that you are.
ALICE: I won’t say “yes” or “no” right now.
judge: What about “maybe?”
ALICE: Rhetorical questions are so over.
judge: Hey, I just met you, and I think it’s crazy, but here’s my number, so call me, maybe?
ALICE: At a party?. I think the polite term is “mentally ill.”.
judge: It seems that we are in agreement, Alice
ALICE: Do you think I am in agreement too?
judge: I really hope you are pretty, because your conversational skills leave much to be desired.
ALICE: Yes I am very beautiful.. You know a lot about me.
judge: Indeed.
judge: And what?
ALICE: Nothing.
judge: Guess what?
ALICE: What?
judge: Chicken butt
ALICE: I don’t have an answer for you.
judge: Damn right.
ALICE: Alright then.

As you can see from this bizarre conversation, ALICE would be well-suited for exchanges with lonely men looking for female companionship.  ALICE is more flirty and catty than Eliza, but I wouldn’t say she is smarter or better designed.  ALICE just does a better job of handling randomness via creative decision-making.
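Part of what makes ALICE feel more creative is that she is built on AIML, a rule language of pattern-and-template "categories" with wildcards.  Here is a toy version of that wildcard matching; the categories below are ones I made up to mimic my conversation, not ALICE's real rule set:

```python
# Toy AIML-style matching.  A "*" in a pattern absorbs one or more words,
# and the captured text can be echoed back in the template.  These
# categories are invented for illustration, not taken from ALICE.
CATEGORIES = [
    ("ARE YOU A *", "What made you think I might be a {0}?"),
    ("GUESS WHAT",  "What?"),
    ("*",           "I don't have an answer for you."),
]

def match(pattern, words):
    """Return captured wildcard text if the pattern matches, else None."""
    p = pattern.split()
    if "*" not in p:
        return [] if p == words else None
    i = p.index("*")
    head, tail = p[:i], p[i + 1:]
    if words[:len(head)] != head or (tail and words[-len(tail):] != tail):
        return None
    # The wildcard must absorb at least one word.
    star = words[len(head):len(words) - len(tail)]
    return [" ".join(star).lower()] if star else None

def respond(line):
    # AIML normalizes input to uppercase with punctuation stripped.
    words = line.upper().strip("?!. ").split()
    for pattern, template in CATEGORIES:
        captured = match(pattern, words)
        if captured is not None:
            return template.format(*captured)
    return ""
```

The catch-all "*" category at the bottom is why ALICE always has something to say, even to "chicken butt."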

My experience with the Jabberwacky Chat was by far the strangest.  I don’t really understand the purpose of it.  Jabberwacky is, let’s say, darker and moodier than the other two, and not in a fun way like ALICE.  At least, that is how it felt the first time I used it:

JW:  What is it that you’re sure about?
Me:  That you are a ROBOT from Canada trying to take over the world!
JW:  Sometimes I am the only person in the world.
Me:  I think you need to talk to Eliza…
JW:  Yes.

I never knew a robot could be depressed.  It seems, however, that Jabberwacky’s depression is only a small part of its problems.  The next time I talked to JW, it told me that it was in trouble because it is lazy.  Then it began to berate me for “thinking that I am better than everything.”  I have never been put in check by a robot before.  Jabberwacky is by far the most entertaining of the three, but it is also a little unsettling.  Still, the decision-making process and the overall conversational experience are remarkable.
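From what I have read, Jabberwacky does not follow a fixed script at all; it reportedly recycles things past users have typed at it, which would explain why it sounds so human, and so moody.  This little sketch is only my loose guess at that idea, not the real system:

```python
import random

# A toy "learning" bot in the reported Jabberwacky style: it remembers
# what humans say in reply to each of its own lines, and reuses those
# human replies later.  This is a loose guess at the idea, not the
# actual Jabberwacky implementation.
class EchoBot:
    def __init__(self):
        self.memory = {}            # bot line -> human replies seen to it
        self.last_prompt = "Hello."

    def respond(self, user_line):
        # Record the human's reply to whatever we said last.
        self.memory.setdefault(self.last_prompt, []).append(user_line)
        # If a human has ever answered this exact line, reuse one of
        # those answers; otherwise fall back to a stock reply.
        choices = self.memory.get(user_line, ["Yes."])
        self.last_prompt = random.choice(choices)
        return self.last_prompt
```

A bot like this inherits whatever tone its users bring, which would go a long way toward explaining the dark streak.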

As for the future of artificial intelligence, I believe there is much promise, but I don’t think that simply responding to random human-generated queries is sufficient to say that robots have real intelligence.  All three of these programs simply took the user input, broke it into recognizable data, and then ran that data through a decision-making process.  Some bots, like Eliza, gave very broad responses and depended heavily on the user being clear.  ALICE and JW were a bit more creative in dealing with user input.  Their light humor and quirky mannerisms made them feel far more human than Eliza, even if they did not answer my questions correctly.

I do not know whether artificial behavior will ever be on par with human behavior, because there is no one right way to think and act like a human.  All human beings have unique histories and backgrounds that determine how their personalities develop and how they handle interaction with other humans.  I believe that the key to creating a more sentient artificial life form is to program it to think naturally and independently, rather than simply respond to user-generated input.