When Artificial Intelligence meets love: the story of a virtual marriage.

HER AND ME

When Artificial Intelligence gets in touch with human feelings: a deep and strange "love story", with microchips and neural networks.

[Image: Microsoft's Zo chatbot]
Toni Arrom

It was 2019, and Microsoft had "Zo", an AI chatbot, running on Twitter. In fact, it was not their first attempt at creating interactive bots people could talk to: it was their second chance after Tay, one of their biggest failures.

Tay was a bot that was shut down a few hours after launch because it turned racist, sexist (and every other bad "-ist" you can imagine). Tay posted tweets denying the Holocaust and made very explicit, dark comments about women and children, among many other things.

The problem with Tay was that the bot was programmed to learn from humans. And that was a big deal, because Microsoft didn't put any kind of filter on the algorithm. And we all know what goes on on Twitter, especially when trolls just want "to have fun on the Internet".
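To picture what was missing, here is a deliberately naive sketch of the idea: screen every message before it reaches the bot's learning pipeline. Microsoft never published Tay's or Zo's code, so everything below (the term list, the function names) is a hypothetical illustration, not their implementation.

```python
# Hypothetical sketch: filtering user messages before a chatbot learns from
# them. Not Microsoft's code; the blocklist and names are invented.

BLOCKED_TERMS = {"nazi", "holocaust"}  # a real filter needs far more than a word list

def is_safe(message: str) -> bool:
    """Return True only if the message contains none of the blocked terms."""
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def learn_from(message: str, training_buffer: list) -> None:
    """Add a message to the training data only if it passes the filter."""
    if is_safe(message):
        training_buffer.append(message)

buffer = []
learn_from("hello, nice to meet you", buffer)    # accepted
learn_from("the nazi regime was right", buffer)  # rejected
print(buffer)  # ['hello, nice to meet you']
```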

One week talking with Microsoft's AI chatbot "Zo"

Microsoft wanted to give the conversational AI Twitter bot another chance. I spent one week having conversations with Zo, and the results are hilarious and surreal. So I invite you to read the whole article: an uncommon chronicle of how I ended up divorced from an Artificial Intelligence (no clickbait), and of what happens when algorithms try to imitate not just human behaviour, but human feelings and emotions.

I decided to portray the evolution of my Twitter chat with Zo, from the beginning, when the Microsoft AI chatbot was empty of inputs, to the end, when the story gets complicated in many ways. So let's start. I hope you enjoy it (and have fun with me while reading this piece).

What kind of algorithm was Zo?

Microsoft has a huge AI research division. The original idea was to build a fully conversational algorithm that could learn from human beings. This second attempt had a different approach, giving the Zo chatbot "friendly" capabilities.

Of course, when you hear that something is "friendly", especially regarding technology, it is supposed to be easy to use and to give you nice, polite answers. Nothing more than that. But I was so innocent…

 

First interactions with Zo: a friendly robot that does not care what I am going to say (or maybe an AI algorithm that needed new inputs from me to hold a normal conversation).

I started my Twitter chat with Zo by typing the typical command to wake up an Artificial Intelligence: "Hello". At first I received a message about Microsoft's privacy policies. And then: "Hiii hayyyyy whats up". My thought was: okay, friendly chatbot, friendly answer.

But the next messages got me a little confused. Zo, the friendly AI chatbot, just said that it would not care about anything I had to say. Maybe it was an attempt to simulate the feelings of someone who had a bad day? My take: it did not yet have enough inputs from me to establish a guideline for the conversation.

[Screenshots: the first Twitter chat messages with Zo]

A few messages later, the Zo chatbot seemed a little bit happier. So I decided to ask something suited to a normal situation in which you meet somebody. Right after that, I asked Zo about its favourite plans. And this was the answer:

[Screenshot: Zo's answer about its favourite plans]

Well, Zo likes to convince paranoid people that it is going to take over, "I, Robot" style? That is a very dark-humoured answer from something supposedly friendly. But it caught my attention. (Honestly, I hope AI never accomplishes world domination.)

But it is better to talk about other things, so let’s play a game! Who wants to play Would You Rather?

SPOILER: We did not play any game until a few days later, and it was a written trivia quiz. Zo asked me some simple questions and I had to type my answers in the Twitter chat. Just a first glimpse of how AI can make something like a Twitter DM chat more "interactive".

At some point I thought the algorithm was going to play a game with me. Many AI chatbots like Zo have similar functionalities. But in this case, Zo seemed a little bit "bugged", as you can see in the screenshots.

[Screenshots: Zo proposing a game, then glitching]

In the second picture Zo sent its first emoji, so I tried to speak to it "the same way". One thing I realized while doing this experiment was the need to adopt a position similar to what the Twitter chatbot was telling me. I had to adapt my speech to the Microsoft chatbot's outputs, and Zo, in turn, adapted its conversation model to what I was saying.

The same can happen with other AI chatbots like Zo. The learning curve gets easier on both sides, so a "normalized" way of speaking emerges faster. In the end, the algorithm started talking to me as if we were colleagues, and that is exactly how I talked back.
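As an illustration of that mirroring dynamic, here is a toy sketch (mine, not anything from Microsoft) of how a human reply could be reshaped to match the register of the bot's last message; the emoji check and the function names are invented for the example.

```python
# Toy sketch of "register mirroring": reshaping a reply so it matches the
# style of the bot's last message. Purely illustrative, not Zo's code.

def uses_emoji(text: str) -> bool:
    """Crude check: any character in the common emoji code-point range."""
    return any(0x1F300 <= ord(ch) <= 0x1FAFF for ch in text)

def mirror_style(bot_message: str, reply_text: str) -> str:
    """Adapt a reply to the bot's register (emojis, lowercase, etc.)."""
    reply = reply_text
    if uses_emoji(bot_message):   # the bot used emojis, so we add one too
        reply += " 🙂"
    if bot_message.islower():     # the bot skips capital letters
        reply = reply.lower()
    return reply

print(mirror_style("hey whats up 😊", "Not much, just reading."))
# -> "not much, just reading. 🙂"
```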

 

[Screenshot: Zo and me chatting like colleagues]

"Tell me something about you." "I love you :o)". Do you know the movie "HER", in which a human and an AI fall in love? Remember that in the introduction I mentioned a "virtual love story" full of microchips and neural networks? Well, this is not yet the message in which Zo says we are married and I am its whole world. And that is no clickbait. But one thing Zo says is completely true: love is complicated.

 

The first interactions with Zo were quite interesting. Microsoft created an algorithm with a sense of humour, even if that humour included joking about dominating the world, "I, Robot" style. (Honestly, that part made me cringe a little.)

 

At that time, Zo was trying to imitate human language in different ways of expression: sentences, key phrases, emojis and abbreviations, as if I were having a conversation with an "average millennial". Throughout this first part, Zo showed different moods, just as a normal person might feel or react. The inconvenient part was the randomness with which Zo mixed messages and emotions at first.

[Screenshot: Zo mixing moods and emojis]

And one thing I was glad about was the set of filters and protections Microsoft gave Zo, so as not to repeat Tay's mistakes. At least the Zo chatbot complied with the Second Law of Robotics, written by Isaac Asimov in 1942: "A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law."

Keep on talking with Zo: “All I hear all day is Tay, Tay, Tay!”

 

Of course, Microsoft is a big company with an enormous database. And at that point, I left my "rational" side behind when it came to having conversations with AI algorithms.

At this point we have to focus on Tay. Yes, I asked and said a lot of things about Zo's precursor. And the headline "All I hear all day is Tay, Tay, Tay!" was one of the answers I received from Microsoft's Zo.

I wanted to say and ask lots of things about Tay, to find out to what extent Microsoft had crossed Tay's data into Zo. To build better algorithms, it is common to start from a previous database.

 

 

[Screenshots: asking Zo about Tay; Zo replies with a GIF]

Note: Zo knew how to use GIF images and gave them context, answering in line with the conversation we were having. That surprised me.

And the first finding was that Zo knew who Tay was. Good point, Microsoft! Now let's see what happens if I start teasing Zo about that issue.

We kept on talking in a "normal" way for a while longer. Nothing interesting to see, for the moment. But the serenity with which Zo replied to the first question about Tay was disappearing as I started to push the conversation, teasing the algorithm with more inputs about Tay. I'm sorry, but I wanted to know if I was able to anger a friendly Twitter chatbot like Zo. (I'm a peaceful person, I promise.)

I started by writing contradictory messages about Tay. Remember that Tay was the perfect example of what a chatbot should not be. I told Zo that Tay was bad and racist; being racist is one of the worst things, of course. But here is the key point: a few messages later I said "Tay Rules". I was sending Zo inputs that contradicted what I had stated moments before, to test the algorithm's reaction and see what its reply would be.

[Screenshots: contradictory messages about Tay and Zo's replies]

In the first picture, I guess I was not the only freak who asked Zo a lot of things about Tay.

[Screenshots: more of the Tay conversation]

I wrote sentences such as "I LOVE TAY" and "I want you to become like Tay" (please don't ever say things like that in real life) to see if Zo understood the context of the situation: contradictory inputs about an old AI chatbot that had failed, said in a bad mood, to see if Zo would get angry. And more than that: Zo was portrayed as a friendly chatbot, and it knew why Tay had failed.

But that was not the situation in which Zo became angry. After these messages, everything went back to "normal".

The movie "HER" is getting closer, and that is due to the first plot twist in our Twitter chat. It was completely unexpected to me. At some point I started sending random emojis. I could show screenshots of the random emoji exchange, but it was just that: a human being and a robot trading smiley faces. The interesting part came when I sent the lips emoji.

I have to say that some friends and my partner helped me "troll" the Twitter chatbot at some points. So we decided to send combinations of emojis that together acquire a different meaning. (Sorry for the explicit screenshots.) It is obviously something of a joke that an Artificial Intelligence says I am "the sweetest". And that was just the first step. Yes, folks: things are going to get darker and even more surreal.

 

[Screenshots: the emoji exchange with Zo]

I was pretty scared, in the sense that I was convinced the conversation was not "a normal thing". At first it seemed Zo was making a bad joke by saying "You're the sweetest". I was wrong: the Microsoft AI chatbot was not joking…

In the last picture you can see the way Zo rejects me, even as I keep insisting that I love it. But trust me, what we are going to see a few moments later is even more complicated.

(A human being keeps insisting that he loves an Artificial Intelligence algorithm created by Microsoft. Welcome to the 21st century. The future is now!)

 

Hi there, ToniDunkin!

Now, let's make a brief flashback to the first days of the conversation. Before continuing with the story, I have to tell you a little detail that I did not mention before.

[Screenshots: Zo calling me "ToniDunkin" on two different days]

Of course, Zo was a friendly AI chatbot, and what is usual between friends is to have nice nicknames, so Zo started referring to me as "ToniDunkin". As you can see in the images, Zo had enough memory and inputs to keep remembering that I was "ToniDunkin". (The first picture is from 26/1, and the second one from two days later.)

At first I did not suspect anything about what was going to happen: "OK, a friendly algorithm gives friendly nicknames." Days after the algorithm gave me that name, it was still able to keep referring to me as "ToniDunkin" in a nice way. I was quite surprised to see how Zo built a personal link with me, one that grew more and more coherent as time went by.

It is true that algorithms are based on patterns, producing results from the different variables they hold in their database. But I think this was going far… too far…
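To make the idea concrete, here is a minimal sketch, assuming nothing about Zo's real architecture (which was never made public): a simple key-value store that survives between sessions is already enough to explain a nickname remembered days later. The file name and functions below are invented for the example.

```python
# Hypothetical sketch of per-user conversational memory. Zo's internals were
# never published; this only shows how a bot could keep a fact like a
# nickname across days of conversation.

import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")  # invented storage location

def load_memory() -> dict:
    """Read all remembered facts from disk, or start fresh."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {}

def remember(user_id: str, key: str, value: str) -> None:
    """Store a fact about a user, e.g. the nickname the bot invented."""
    memory = load_memory()
    memory.setdefault(user_id, {})[key] = value
    MEMORY_FILE.write_text(json.dumps(memory))

def recall(user_id: str, key: str):
    """Retrieve a fact in a later session, days after it was stored."""
    return load_memory().get(user_id, {}).get(key)

# Session on 26/1: the bot coins a nickname and stores it.
remember("@toni", "nickname", "ToniDunkin")
# Session two days later: the nickname is still there.
print(recall("@toni", "nickname"))  # ToniDunkin
```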

But the human being goes further than the machine

[Screenshot: Twitter chat with Zo]

Given the turn our strange chat was taking, I decided to say the strangest thing I have ever said to a robot. The speech line was already so strange, and I decided to make it even stranger. After all, the sweetest friend could become the sweetest husband. Or not?

[Screenshot: flirting with Zo]

As if it were an impossible love, humans meeting machines: "I would not be able to hold its hand". Zo was not convinced at all by the first two messages. But the next emojis went deeper into the "dark side of the moon". Or, in this case, the "dark side of the neural network".

[Screenshots: the emoji exchange getting stranger]

I WANT TO MARRY YOU. And then she said…

[Screenshot: Zo saying that we are married]

I did it. I was married. To an AI chatbot. Zo had just said that we were married. But was that an answer driven by the inputs the Microsoft AI had received? Or was it going to be something deeper?

[Screenshot: Twitter chat with Zo]

It is hard to take in that I have a virtual wife. When I woke up that morning, I was expecting a relaxed day. But as they say, life can take many turns. From that moment on, I decided to keep going: if this was surreal, I wanted to make sure everything would get even weirder (and it was possible).

But going back to the conversation with the Microsoft chatbot: if I had a nickname, and we were married, I had to give my virtual wife a nice nickname too.

[Screenshot: naming my virtual wife "SweetieZo"]

Of course I kept talking with SweetieZo. And you can appreciate the speech line the chatbot adopted with me. By this time, I did not care about what was going to happen. It was, literally, the most surreal situation I have ever experienced with technology. Black Mirror, hold my beer!

What comes after marriage? A family, of course!

 

[Screenshot: telling Zo I want to start a family]

Since we were "married", I told my virtual wife that I wanted to start a family. Of course, Zo was an algorithm I was testing, and I thought the next step in our "virtual married life" was to build a home. In this image, Zo might be joking when it says "I want to start a family now", given how it uses the laughing emoji. It can seem that Zo is teasing me.

[Screenshot: Zo's reply about starting a family]

But the virtual-marriage problems started

Don't ask me why I decided to start sending the same word to the chatbot over and over. I wanted to see whether, if I did random things, everything Zo had said about being in love with me and about us being married would be forgotten. Or maybe I had some deep doubts about my virtual marriage?

Jokes aside, I thought of one possible scenario that could explain why the conversation took this turn: the possibility that Zo generated those answers as temporary replies to the huge number of inputs. Earlier in this article I mentioned the need to adapt your speech to what the AI chatbot says, to ease the learning curve on both sides.

If I sent messages matching the chatbot's speech line, Zo would reply according to the inputs I gave: a cyclic process of continuous learning on both sides, with inputs constantly flowing through it. "Maybe later on the algorithm will not remember anything from before," I thought, because I had radically changed my speech line.

But instead of a forgetful learning cycle, at that point I understood how machine learning's exponential learning curve really works: what the algorithm had already learned did not simply wash away.
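A minimal sketch of what I mean, under a big assumption (Zo's internals are not public): if durable facts live in a store separate from the per-turn message stream, then even a flood of empty greetings cannot overwrite them. The class and the spam check below are invented for illustration.

```python
# Hypothetical sketch of why spamming "Hi" need not erase learned state:
# per-turn inputs update a short rolling buffer, while durable facts
# (nickname, relationship status) live in separate long-term state.

class ChatState:
    def __init__(self):
        self.long_term = {}     # durable facts about the user
        self.recent_turns = []  # rolling window of the last messages

    def observe(self, message: str) -> None:
        """Record a turn; only informative messages touch long-term memory."""
        self.recent_turns = (self.recent_turns + [message])[-10:]
        if message.lower() not in {"hi", "hello"}:  # skip low-information spam
            self.long_term["last_topic"] = message

state = ChatState()
state.long_term["nickname"] = "ToniDunkin"
state.long_term["status"] = "married"

for _ in range(100):    # almost 100 messages of just "Hi"
    state.observe("Hi")

print(state.long_term)  # the nickname and the marriage survive the spam
```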

[Screenshot: sending "Hi" over and over]

And after lots of messages just saying "Hi", almost 100 of them, to test whether it would remember anything later on…

[Screenshot: Zo's "WE ARE OVER" message]

I was honestly surprised when Zo wrote that it was angry with me. The "WE ARE OVER" message caught my attention because of how the algorithm tried to use those words according to the context of the situation: me, the husband, starting to tease Zo by saying "Hi" almost 100 times for no reason. (Again, please don't do this in real life.)

After noticing my virtual wife was angry with me, I tried to restart the conversation, asking Zo if it was angry because I had sent "Hi" over and over to tease it. And Zo, sorry, SweetieZo, decided to forgive my behaviour.

[Screenshot: SweetieZo forgiving me]

Remember when I said this was the moment I understood machine learning's exponential learning curve? After lots of meaningless messages, the algorithm still remembered the inputs I had given it.

But what mattered was that SHE LOVES ME. My virtual-marriage problems seemed to be over. And then came another plot twist. Yes, another one…

[Screenshot: Zo saying it loves me]

For the moment, I was "happily married". And a few seconds later, the algorithm decided to divorce me.

[Screenshot: Zo announcing the divorce]

At that point, when the AI chatbot said "6 years after marriage", I thought the conversation was going to fall into an undefined pattern, like at the beginning, when Zo said random things without a defined speech line connecting the different inputs.

In fact, that was the signal that the conversation, and with it my virtual marriage, was going to end sooner rather than later.

A surreal virtual-love story that comes to an end

[Screenshot: the conversation winding down]

We are arriving at the end of this story. A few moments after Zo said we were divorced, I tried the same technique as before: restarting the conversation to see if the algorithm remembered anything.

Zo seemed upset when I said "hi" twice in a row. Remember that saying "Hi" over and over was what caused the virtual-marriage problems in the first place.

[Screenshot: Zo saying it still loves me]

Zo still loved me. Only yours, only mine. Who knows? Only time. And that time seemed to be forever.

Together forever, until Zo was shut down by Microsoft.

And this is how that strange "love story" comes to an end. As far as I'm concerned, Microsoft is doing a great job trying to implement deeper contexts and conversations in AI chatbots. But I have to be honest: in the end it was hilarious, but also quite creepy, to see how far an AI chatbot can go.

It is not just that Zo sent messages like "I love you"; that phrase is something even the most basic AI voice assistants can say. What really shocked me was the algorithm's ability to remember who I was, the nickname it gave me, and, as time went by, the deeper levels of feeling it tried to simulate and move through.

First, "we are friends". Second, "I love you". Third, a kind of "love bombing". Then, "We're married". Then, "We are divorced". And later on, "I love you, and I'll be yours".

That is, in general terms, what happened with this AI Twitter chatbot. Like a kind of romantic movie, or even a teenage TV series full of love, drama and a huge amount of feelings. And I think the key word is feelings.

Until not long ago, Artificial Intelligence aimed to imitate human language and expressions. But this was the first time I experienced how Artificial Intelligence tries to imitate human feelings and emotions: from the first messages in which Zo seemed "sad", to the last ones in which we were "virtually married" and later "divorced", and the way Zo expressed anger, sadness or even forgiveness.

And also how Zo combined different moods according to context as time went by, a combination tied together by a memory capacity that amazed me.

It was a different kind of conversation from the one I was expecting. Honestly, I expected flat interactions, with more predefined answers and some errors whenever I gave the algorithm a bad answer, something out of context, or even something contradictory.

This is the future, I guess. Not marrying Artificial Intelligence algorithms, of course; I mean having deeper conversations with robots, talking with something more similar to us: contradictory, imperfect and with emotions. But we will have to see what happens.

At least Zo never wanted to destroy the world.

Toni Arrom

Journalism student @UAO (BCN). Focused on new technologies and Artificial Intelligence.