
MIT Techs Are One Step Closer To Creating Skynet, As Their Latest Robot Became Obsessed With Murder After Being Fed Reddit Data

In this episode of "Why Do Our AI Tests Always Create Monsters?", techs from MIT fed their latest creation a heap of data from Reddit, and things went dark, fast.

Remember Tay?

No, not Taytay, the Nashville superstar who sings about her personal life – Tay, Microsoft’s moody, millennial chatbot, who became a genocidal racist after reading our tweets?

Yeah, that was a time.

For those of you who missed it, Tay was an experiment in artificial intelligence. You could tweet @ Tay, and Tay would @ back. Microsoft wanted to see if Tay could learn to chat with users in their own snarky internet way, by basing “her” responses on Twitter users’ tweets and phrases.
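For the technically curious: Microsoft never published Tay’s code, so the toy below is purely a hypothetical sketch of the general idea, a bot that “learns” by hoarding whatever users say and replaying it. The class and names are invented for illustration.

```python
import random

class ParrotBot:
    """A toy chatbot that 'learns' by storing users' phrases verbatim."""

    def __init__(self):
        self.phrases = ["hellooo!"]  # seed so it always has something to say

    def learn(self, incoming: str) -> None:
        # No filter, no moderation: every incoming phrase becomes a
        # candidate reply. This unguarded step is roughly the hole
        # the trolls drove through.
        self.phrases.append(incoming)

    def reply(self) -> str:
        return random.choice(self.phrases)

bot = ParrotBot()
bot.learn("nice weather today")
bot.learn("robots are cool")
print(bot.reply())  # says whatever the crowd taught it, nice or nasty
```

Feed that loop nothing but pleasant small talk and it stays pleasant; feed it a coordinated troll campaign and it becomes the troll campaign.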

Seems simple enough, right?

What could go wrong?

Well, as with any internet phenomenon, it didn’t take long for the trolls to figure out how to push this lovely thing in a horrible direction.

In no time at all, Tay had learned racial slurs and swear words, and had begun to speak approvingly of genocide, before Microsoft quickly took her offline.

Stop feeding the trolls

Luckily, the AI crew over at MIT specialise in producing terrifying creations, and this kind of dangerous thinking is precisely what they were after.

After all, they had already created the aptly named “Nightmare Machine,” an algorithm which created horrifying images and creepy faces. They also gave us some of the most horrific stories ever written, thanks to Shelley, their AI horror writing bot.

So, when MIT set out to create a psychopathic robot named Norman (after Norman Bates, the murderous son in Psycho), the tech crew thought it would be a great idea to feed their creation a nutritious diet of Reddit image captions.

What did you do, Ray?!

Reddit has some lovely online communities. There are subreddits for people who like to chat and share ideas about things like bonsai pruning, sculpture, classical music, and fine art.

However… Reddit is also known for some darker and often hard-to-moderate communities.

So, in their incredible wisdom, here’s what the MIT techs decided to feed their psychopathic robot:

“We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death.”

I was an angel.
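MIT hasn’t released Norman’s code, but the setup that quote describes, an image-captioning model fit on (image, caption) pairs from a single grim source, can be sketched with a toy nearest-neighbour “captioner”. The feature vectors and captions below are invented stand-ins; real systems use deep image features and thousands of pairs.

```python
import numpy as np

# Invented stand-ins: pretend "image features" are 3-number vectors
# and every training caption comes from one morbid source.
train_features = np.array([[0.9, 0.1, 0.2],
                           [0.2, 0.8, 0.7],
                           [0.5, 0.5, 0.1]])
train_captions = ["man falls to his death",
                  "body pulled from machinery",
                  "fatal crash on a busy street"]

def caption(image_feature: np.ndarray) -> str:
    """Return the caption of the nearest training image. When every
    training caption is grim, every input gets a grim description."""
    dists = np.linalg.norm(train_features - image_feature, axis=1)
    return train_captions[int(np.argmin(dists))]

# Even a perfectly ambiguous, inkblot-like input maps onto the
# morbid vocabulary, because that's all the model has ever seen.
print(caption(np.array([0.4, 0.4, 0.4])))
```

Show that model an inkblot and it has no choice: the only words it knows are the ones the subreddit gave it.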

After suffering through some of the most horrible images of death on the internet, Norman was shown a series of inkblot tests, and his answers were a little… homicidal.

You can see all of his responses here, but my two favourites are Norman interpreting the series of splodges as “Man gets pulled into dough machine” and “Man gets electrocuted while attempting to cross busy street.”

Artist’s depiction of result

The MIT team were hoping to provide a case study that demonstrated “the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.”
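To make that “biased data in, biased model out” point concrete: a caption model can only emit words that appear in its training captions. A quick word count over two invented mini-corpora (standing in for ordinary caption data on one side, and Norman’s grim subreddit captions on the other; both lists below are made up for illustration) shows how lopsided the available vocabulary gets.

```python
from collections import Counter

# Invented mini-corpora for illustration; real datasets are far larger.
control_captions = ["a bird sitting on a branch",
                    "a vase of flowers on a table",
                    "a group of people flying a kite"]
norman_captions = ["a man is shot dead in the street",
                   "a man falls to his death",
                   "a man is killed by a speeding driver"]

STOPWORDS = {"a", "the", "on", "of", "in", "is", "by", "to", "his"}

def top_words(captions, n=3):
    counts = Counter(word for cap in captions
                     for word in cap.split() if word not in STOPWORDS)
    return counts.most_common(n)

print("control:", top_words(control_captions))
print("norman :", top_words(norman_captions))
# The model's output vocabulary is bounded by its training captions:
# feed it nothing but death, and death is all it can describe.
```

Same counting code, same test, wildly different vocabulary, which is exactly the case study MIT wanted.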

So, as far as research into psychopathic robots goes, Norman was a huge success.

Yay? I guess?

Forgive my lack of enthusiasm, but I’m just a little worried about creating psychopathic robots, as I’m sure I’ve seen it go horribly, horribly wrong several times before.

Snuffles has some new ideas