In my quest to learn about AI, one recurring theme keeps surfacing: our human biases permeate the data that informs AI systems.
Over the weekend, I watched Coded Bias on Netflix. It was incredibly informative – I highly recommend it!
I learned a number of things that helped me understand the nuances of AI and machine learning, which I want to share with those of us who are not “technical” people. (Credit to the film subjects Dr. Joy Buolamwini, Cathy O’Neil, and Meredith Broussard.)
1. Artificial intelligence systems use historical information to make a prediction about the future.
2. Machine learning is a scoring system: it estimates the probability of what you are about to do.
3. The data that powers artificial intelligence systems embeds the past, including the dark past.
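For readers who want to peek under the hood, the three points above can be sketched in a few lines of Python. This toy example is my own illustration, not from the film, and the groups and numbers are invented. It shows how a system that "learns" from historical decisions simply replays whatever bias those decisions contained:

```python
# Toy sketch (invented data): a "model" that learns hiring rates from
# past decisions, then uses those rates to score new applicants.
from collections import defaultdict

# Hypothetical historical decisions: (group, was_hired)
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def train(records):
    """Learn P(hired | group) from past decisions -- the 'historical information'."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def score(model, group):
    """The 'prediction' is just the historical rate -- bias included."""
    return model[group]

model = train(history)
print(score(model, "group_a"))  # 0.75 -- past favoritism becomes the forecast
print(score(model, "group_b"))  # 0.25 -- past exclusion becomes the forecast
```

Real systems are far more complex, but the core dynamic is the same: the score for the future is built entirely from the record of the past.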
So, in summary, these systems have only as much integrity as the data used to make their predictions. And because the development of widely used AI systems currently sits largely with big tech, led primarily by white men, those systems are largely biased in favor of that demographic.
All of that to say, if we do not ACT NOW, the biases in training data will perpetuate the inequality ingrained in our society and social systems moving forward.
Dr. Buolamwini is leading the charge through her founding of The Algorithmic Justice League, which brings to light, and calls us to act on, our moral obligation to build AI systems with guardrails in place.
I look to women like her (and the others featured in the film!) to lead us to a more equitable and just future.