A recent trend blew up on Twitter: users uploaded pictures of themselves to the ImageNet Roulette website, and an AI would try to caption them or guess what type of person they were. The labels range from "computer-user" to "creep" to "enchantress," but some people fear the technology has biases ingrained into it.
ImageNet is the database the AI was trained on, compiled in 2009 from 14 million labeled images. The Roulette AI was trained on 2,833 sub-categories of "person" so that it could sort uploaded photos into those sub-categories.
The issue arises because many people of color receive negative captions on their uploaded photos, such as "bad person," "wrongdoer," or "offender." Stephen Bush, an editor at the New Statesman, uploaded a picture of himself photoshopped into a Napoleon costume and was given the label "Igbo," an ethnic group from Nigeria.
With this trend exploding across the Twitterverse, the creators decided to use it as a way to highlight what can happen when the underlying data an AI algorithm trains on is bad or biased. This matters because, as we turn more to AI technology, we cannot have biases of race, gender, or class ingrained into these systems, or the technology won't work in society.