What was designed to be a fun de-pixelating app became the center of controversy when a pixelated photo of Barack Obama was run through it and came out looking like a white man. Users cried foul, and rightfully so: at a time of nationwide protests against systemic racism, it’s hard not to see this as part of a larger problem.
But is the app itself racist, or is there a bigger flaw with the ghost in the machine? The app used a specific type of Machine Learning to create the image, so does that mean a computer has learned how to be racist?
Well, not quite, and to understand why, we need to take a look at how machine learning works.
What is Machine Learning?
To put it simply, Machine Learning, or ML, is a sub-field of artificial intelligence in which programmers build systems that learn on their own. Rather than following fixed, hand-written rules, an ML model learns from experience, adjusting itself so that it handles similar scenarios better the next time around.
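If that still sounds abstract, here’s a minimal sketch of the idea using Python and scikit-learn. The study-hours data is invented purely for illustration:

```python
# Minimal sketch: a model "learns from experience" by fitting labeled
# examples, then generalizes to inputs it has never seen before.
from sklearn.linear_model import LogisticRegression

# "Experience": past examples, each a pair of (features, label).
# Here, hours studied -> passed the exam (1) or failed (0).
X = [[1], [2], [3], [8], [9], [10]]   # hours studied (toy data)
y = [0, 0, 0, 1, 1, 1]                # outcomes (toy data)

model = LogisticRegression()
model.fit(X, y)                       # the "learning" step

# The model now reacts to a scenario it never saw during training.
print(model.predict([[6]]))           # predicts pass or fail for 6 hours
```

No one wrote a rule saying “more than five hours means a pass”; the model inferred a pattern from the examples it was given, and that pattern is only as good as those examples. Hold onto that thought.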
It sounds scary, but machine learning has been around for quite some time: the term was coined in 1959 by IBM engineer Arthur Samuel, who applied it to artificial intelligence and computer gaming. It’s been used extensively in pattern classification since the ’60s and powers technologies like facial recognition and self-driving cars today.
Machine Learning, Human Bias

In the case of “White Obama,” ML was used to de-pixelate an existing photo of the 44th President using what the program had learned about faces. The system behind the app, PULSE, doesn’t actually restore lost detail: it searches the output of a face-generating model for a high-resolution face that matches the pixelated input when scaled back down, and the face it settled on was that of a Caucasian person. And it’s not just limited to Obama: other people of color have run their own photos through the program and found that their de-pixelated versions came out as white versions of themselves.
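To make that concrete, here’s a simplified sketch of that search, assuming a hypothetical pretrained face generator `G` (a stand-in for the StyleGAN model PULSE builds on). The function and parameter names here are illustrative, not PULSE’s actual code:

```python
import torch
import torch.nn.functional as F

# PULSE-style upscaling, simplified: instead of "sharpening" the
# pixelated photo, search the generator's latent space for a
# high-res face that MATCHES the low-res input when scaled down.
def upscale(G, low_res, steps=500, lr=0.1):
    z = torch.randn(1, 512, requires_grad=True)    # random starting latent
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        high_res = G(z)                            # candidate face from the generator
        downscaled = F.interpolate(high_res, size=low_res.shape[-2:],
                                   mode="bilinear", align_corners=False)
        loss = F.mse_loss(downscaled, low_res)     # does it match the pixelated input?
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return G(z).detach()                           # the "de-pixelated" face

# The catch: every candidate comes from G. If G was trained mostly on
# white faces, the best match it can offer will often look white.
```

The design point worth noticing: the output is never a restored version of the input. It’s whichever face the generator is able to produce that happens to downscale to it.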
What gives?
Well, researchers are confident that the flaw lies in the training data. The thing about Machine Learning is that it still relies on human input to determine the base data it learns from. Feed the AI mostly images of white people, and it will treat that as the default and reconstruct everything it sees accordingly. It’s not that the AI itself was biased; it’s that it was only ever given the tools to learn one particular bias. Combining balanced datasets with proper data annotation tools could help set this biased system right.
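Here’s a toy demonstration of that effect, with invented labels and a deliberately skewed training set, using scikit-learn’s k-nearest-neighbors classifier:

```python
# Toy illustration (invented data): when the training set is skewed,
# the model's "default" becomes whatever dominated that set.
from collections import Counter
from sklearn.neighbors import KNeighborsClassifier

X_train = [[i] for i in range(100)]   # stand-in features, one per photo
y_train = ["light"] * 100
for i in (10, 30, 50, 70, 90):        # only 5 of 100 examples are 'dark'
    y_train[i] = "dark"

model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)

print(Counter(y_train))               # Counter({'light': 95, 'dark': 5})
# Even for the input that actually came from a 'dark' example, four of
# its five nearest neighbors are 'light', so the majority wins.
print(model.predict([[50]]))          # -> ['light']
```

Even the example that genuinely came from the under-represented group gets classified as the majority, simply because the surrounding data outvotes it. That’s the same dynamic, in miniature, as PULSE defaulting to white faces.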
It’s Not as Smart As We Think…Yet
So while Machine Learning does have amazing implications for various industries, it’s still bound by the same biases that people hold. That makes it dangerous, but it’s also what keeps it from going full-on Skynet: as awesome as Machine Learning sounds, it’s still not complex enough to become a threat to people.
But, of course, the implications of a computer program that draws conclusions from biased data sets are significant. Remember when Tay, Microsoft’s Twitter bot, started spewing out racist and hateful tirades because users fed it exactly that? Or how Twitter scams like “Sarah’s Discovery” went viral because they were carefully constructed to look like real tweets, thus ‘gaming’ Twitter’s machine-learning algorithms? Sure, they were funny troll attempts, but they spoke volumes about the gullibility of AI and how biases can be used to leverage a powerful tool.