It’s been a while since we’ve seen a good milkshake duck. The last example I can think of was the spectacular backlash towards the dude who drew those adorable Strange Planet alien comics once people found out he was anti-abortion, but now we have another racist image AI to focus on.
ImageNet is an image database that’s been around since 2009, used to train systems that figure out what’s in a picture and label it as such. It’s an amazing piece of technology, and is used by the likes of Stanford and Princeton Universities. ImageNet is really good at labelling things like balloon, apple, or river, and great at recognising people, but things get dicey when you ask it to get more specific about who that person is.
Enter ImageNet Roulette. ImageNet Roulette is an AI trained on all the images ImageNet holds for people, along with every classification those images could be given. It lets you upload a photo for the algorithm to make a bunch of assumptions about, and surprise: those assumptions are often wildly wrong.
For example, a picture of me:
I still don’t understand what it was about me, a non-religious person wearing no religious garments, that screamed ‘woman in charge of a bunch of nuns’, but here we are. I suppose if I did find myself in an abbey, the nun in charge of the whole place isn’t a bad position to be in. I also really appreciate the ominous tag of ‘mortal’.
I got off lightly.
It turns out that anybody who isn’t white will most likely have their picture come back with a bunch of labels like ‘black person’. Or worse.
It’s not a fun ride. It’s also, unfortunately, not the first time that AI has gone off the deep end.
There was the time Amazon had to scrap an AI built to help with hiring because it turned out to be sexist.
And the AI that read Google News for a while, before telling researchers that ‘man is to computer programmer as woman is to homemaker’.
These AIs don’t wind up this way on their own. An algorithm is the result of what it’s fed and who it learns from, and the tech industry is notorious for lacking diversity. If the programmers have unconscious biases, the AI will end up with a very loud version of those same biases. Silicon Valley in particular is slowly getting better, but it’s still very white and very male.
The obvious solution is more diversity among the people building and training these systems so that the biases get caught, but unfortunately, that’s not going to happen overnight. In the meantime, AIs like ImageNet need to be called out for the problems they have, be it through projects like ImageNet Roulette or some other means.
We can’t fix a problem we can’t see, so these incidents need to be called out until either we stamp out the problem or the robot uprising happens. Whichever comes first.