No, not Taytay, the Nashville superstar who sings about her personal life – Tay, Microsoft's moody, millennial chatbot, who became a genocidal racist after reading our tweets.
Yeah, that was a time.
For those of you who missed it, Tay was an experiment in artificial intelligence. You could tweet @ Tay, and Tay would @ back. Microsoft wanted to see if Tay could learn to chat with users in their own, snarky internet way, by basing "her" responses on Twitter users' tweets and phrases.
Seems simple enough, right?
Well, as with any internet phenomenon, it didn’t take long for the trolls to figure out how to push this lovely thing in a horrible direction.
In no time at all, Tay had learned racial slurs and swear words, and had begun speaking approvingly of genocide, before Microsoft quickly took her offline.
Luckily, the AI crew over at MIT specialise in producing terrifying creations, and this kind of dangerous thinking is precisely what they were after.
After all, they had already created the aptly named “Nightmare Machine,” an algorithm which created horrifying images and creepy faces. They also gave us some of the most horrific stories ever written, thanks to Shelley, their AI horror writing bot.
So, when MIT wanted to create a psychopathic robot called Norman (after Norman Bates, the murderous son in Psycho), the tech crew thought that it would be a great idea to feed their robot on a nutritious diet of Reddit comments.
Reddit has some lovely online communities. There are subreddits for people who like to chat and share ideas about things like bonsai pruning, sculpture, classical music, and fine art.
However… Reddit is also known for some darker and often hard-to-moderate communities.
So, in their incredible wisdom, here’s what the MIT techs decided to feed their psychopathic robot:
“We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death.”
After suffering through some of the most horrible images of death on the internet, Norman was shown a series of Rorschach inkblots, and his answers were a little… homicidal.
You can see all of his responses here, but my two favourites are Norman interpreting the series of splodges as “Man gets pulled into dough machine” and “Man gets electrocuted while attempting to cross busy street.”
The MIT team were hoping to provide a case study that demonstrated “the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.”
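The general point is simple: a model can only ever echo the data it was fed. As a toy illustration (my own sketch – nothing to do with Norman's actual architecture, which MIT hasn't handed out), here's a trivial "caption model" in Python that learns nothing but word frequencies. Train it exclusively on death-related captions, and death-related words are all it can ever say back:

```python
from collections import Counter

# Hypothetical illustration of data bias, NOT MIT's actual code:
# a trivial "caption model" that just memorises word frequencies
# from its training captions.

def train(captions):
    """Build a word-frequency 'model' from a list of captions."""
    counts = Counter()
    for caption in captions:
        counts.update(caption.lower().split())
    return counts

def describe(model, n=3):
    """'Describe' an image by emitting the model's most common words.
    The model can only ever produce words it saw during training."""
    return [word for word, _ in model.most_common(n)]

# Norman-style corpus: every caption is about death.
biased = train([
    "man gets pulled into machine",
    "man shot dead in street",
    "man falls to his death",
])

# Control corpus: neutral everyday captions.
neutral = train([
    "a vase of flowers on a table",
    "a bird sitting on a branch",
    "a couple at a wedding",
])

print(describe(biased))   # drawn entirely from violent captions
print(describe(neutral))  # drawn entirely from benign captions
```

The "bias" here isn't malice in the algorithm – `train` and `describe` are identical for both corpora. The only difference is the diet, which is exactly the case MIT wanted Norman to make.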
So, as far as research into psychopathic robots goes, Norman was a huge success.
Yay? I guess?