AI Technology and Human Bias

by Ani Mosinyan   Sep 11, 2020

Artificial Intelligence has made massive strides in the tech world over the last decade, and just about every industry has been inundated with advanced AI systems. Yet for all their power, these systems can absorb and reproduce the racial and gender prejudices that still exist in our culture today. 

A recent Forbes article titled “Why Companies Need Their Own AI Code Of Conduct” examines the relationship between business ethics and AI technology. The writer argues that AI must be regulated to protect human rights and advance racial equality among consumers, and that government regulation will become necessary if the technology continues to advance at its current pace. 

We use AI tools every day, whether we are letting Siri guide us toward the highest-rated local coffee shop or asking Alexa to play ’80s hit singles from the adjacent room. AI has simplified our lives in countless ways, but it has also complicated them. According to TechCrunch, an AI hiring tool Amazon began developing in 2014 was found to discriminate against women in its recommendations. MIT reported in 2019 that facial recognition technology is less accurate at identifying people with darker skin. Researchers at the National Institute of Standards and Technology (NIST) found racial bias in about 200 facial recognition algorithms. Significant changes must be made to AI technology to combat racial and gender prejudice and advance equality across all environments. 
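To make the studies above more concrete, here is a minimal sketch of how auditors can surface this kind of bias: compare a model's accuracy across demographic groups. The data and group names below are entirely made up for illustration and are not drawn from the Amazon, MIT, or NIST studies.

```python
# Illustrative bias audit: measure a model's accuracy separately per group.
# All data here is hypothetical, purely for demonstration.

def accuracy_by_group(records):
    """Return {group: fraction of correct predictions} for labeled records."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Hypothetical face-matching results: (group, predicted_id, true_id)
results = [
    ("group_a", 1, 1), ("group_a", 2, 2), ("group_a", 3, 3), ("group_a", 4, 5),
    ("group_b", 1, 1), ("group_b", 2, 4), ("group_b", 3, 5), ("group_b", 4, 4),
]

rates = accuracy_by_group(results)
print(rates)  # {'group_a': 0.75, 'group_b': 0.5}
```

A large accuracy gap between groups, as in this toy output, is exactly the kind of disparity the NIST and MIT audits reported, and it is a signal that a system needs review before deployment.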

Tech companies have already begun to change their AI models and practices to combat discrimination. According to the Business & Human Rights Resource Centre, large corporations such as IBM and Microsoft have restricted their facial recognition technology. In June 2020, IBM announced it would discontinue development of its facial recognition products and, along with Microsoft, signed a document advocating for ethical rules governing algorithms. IBM also questioned whether facial recognition technology should be used by U.S. police at all, given the errors and false accusations caused by the technology’s inaccuracy. 

It is not enough to understand the technology and computer engineering behind AI. One must also consider the human factor at play in AI technology and the intricacies of human behavior. The issue is not what happens in the computer lab when scientists create AI tools; it is what occurs when the technology is applied to real-world situations and cannot grasp the complexities of human nature. AI is a machine, after all. This is the challenge scientists – and massive corporations – face today when it comes to artificial intelligence. For AI to progress ethically, proper regulations must be put in place to guard against human bias and prevent further harm to human rights. 

Ani Mosinyan, author at OpenLoans
Content Manager
Ani Mosinyan is the Content Manager for PLAT.ai. She has worked for publications such as Rare Bird Books and The Hollywood Reporter.