How Attributes in Algorithms Contribute to AI Bias

Dr. CI
3 min read · Aug 18, 2019


About a week ago I was facilitating an unconscious bias training for a cohort of brilliant students in a Master of Science in Information Systems program. As a trainer/facilitator I work hard to customize our content to meet the needs of our students. So, even though I am not a very technical person (I am, however, a subject matter expert in diversity, equity, and inclusion), I did my research and added a section to our training focused on bias in the algorithms used to build AI. A very interesting thing happened during this section of the training.

We had just finished ripping apart a problem related to an actual company that decided to help judges speed up the sentencing process by building a program to sentence human beings based on prior arrest records. The students had no problem identifying that prior arrest records are not race neutral and that, of course, people of Color, especially Black men, would suffer the most at the hands of such a program. Then it happened: a student who is a software engineer stated, “research shows us that red cars are more likely to get into accidents, and as researchers and developers we have to pay attention to what the data tells us.” He was simultaneously right and wrong. Here’s why: he failed to mention what the data doesn’t tell us, and that is precisely the lens tech companies need to develop. As a researcher, I have come to understand that the way we have historically captured data is built on biases that do not always benefit marginalized and oppressed populations. I am going to call this “the red car reasoning.”

For example, if you are a company whose methodology is built on paying subjects to take your study, you are more likely to attract test subjects who need the compensation, namely people living in poverty. Unless your study’s hypothesis is built to measure certain factors related to socioeconomic status, as well as other intersections, you will have a problem in your results. Within our data we see trends built on researching certain populations while overlooking important components of identity and the systemic influences that contribute to inequity in our research methods.
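As a rough illustration of why that matters, here is a minimal sketch in Python. Every number in it is an assumption made up for this example (the 20% poverty rate and the sign-up probabilities are not from any real study); the point is only that when compensation drives who signs up, the sample stops resembling the population the study claims to describe.

```python
import random

random.seed(0)

# Hypothetical population: 20% of people live in poverty.
# All numbers here are invented purely for illustration.
population = [{"in_poverty": random.random() < 0.20} for _ in range(100_000)]

def joins_paid_study(person):
    # Assumption for illustration: people who need the compensation are far
    # more likely to sign up for a paid study than people who do not.
    return random.random() < (0.60 if person["in_poverty"] else 0.05)

sample = [p for p in population if joins_paid_study(p)]

pop_rate = sum(p["in_poverty"] for p in population) / len(population)
sample_rate = sum(p["in_poverty"] for p in sample) / len(sample)

print(f"Poverty rate in the population: {pop_rate:.1%}")
print(f"Poverty rate among paid-study participants: {sample_rate:.1%}")
```

Under these made-up assumptions, people living in poverty make up roughly a fifth of the population but the large majority of the paid sample, so any result that correlates with socioeconomic status comes out skewed unless the study accounts for it.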

Now back to the red car reasoning. The student was right in the sense that we do see research telling us the story that people with red cars are more likely to get into accidents. But a counter-narrative is missing: 1) environmental influence, how does the number of red cars sold in a given time period compare to other cars? 2) intersections, how did the researchers compare the regional number of red cars on the road to the number of other cars? 3) identity influence, which drivers are more likely to buy red cars in the first place? These are just three areas that come to mind, but there are many more. What the software engineer didn’t speak to in that moment is the thing many tech companies miss when building artificial intelligence: how algorithms are built and whether factors like these are weighed equitably when assessing the impact of attributes. This is why you need diversity and inclusion in tech, to reach equitable outcomes.
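The same kind of sketch works for the red car reasoning itself. Below is a minimal Python example with entirely invented counts (none of these numbers come from real accident data): the raw accident count makes red cars look three times more dangerous, but once you normalize by how many cars of each color are actually on the road, the per-car accident rate is identical. Which of those two numbers an algorithm is fed is exactly the kind of choice where bias creeps in.

```python
from collections import Counter

# Hypothetical numbers, invented purely to illustrate the base-rate problem;
# they are not drawn from any real accident dataset.
cars_on_road = {"red": 30_000, "silver": 10_000, "black": 10_000}
accident_records = ["red"] * 300 + ["silver"] * 100 + ["black"] * 100

accidents_by_color = Counter(accident_records)

# Raw counts: red cars appear three times as dangerous as any other color.
print("Raw accident counts:", dict(accidents_by_color))

# Normalized by how many cars of each color are actually on the road,
# the per-car accident rate is the same for every color.
for color, total in cars_on_road.items():
    rate = accidents_by_color[color] / total
    print(f"{color}: {rate:.2%} of cars were in an accident")
```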

I won’t dig too far into it, but it is definitely a topic worth revisiting. What do you think?


Written by Dr. CI

Dr. Cheryl Ingram, aka Dr. CI, is a very successful entrepreneur, blogger, content creator, and expert in diversity, equity, and inclusion practices.
