Computer scientist Joy Buolamwini was a graduate student at MIT when she made a startling discovery: The facial recognition software program she was working on couldn't detect her dark-skinned face; it only registered her presence when she put on a white mask.
It was Buolamwini's first encounter with what she came to call the "coded gaze."
"You've likely heard of the 'male gaze' or the 'white gaze,'" she explains. "This is a cousin concept really, about who has the power to shape technology and whose preferences and priorities are baked in as well as also, sometimes, whose prejudices are baked in."
Buolamwini notes that in a recent test of Stable Diffusion's text-to-image generative AI system, prompts for high-paying jobs overwhelmingly yielded images of men with lighter skin. Meanwhile, prompts for criminal stereotypes, such as drug dealers, terrorists or inmates, typically resulted in images of men with darker skin.
In her new book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines, Buolamwini looks at the social implications of the technology and warns that biases in facial analysis systems could harm millions of people, especially if they reinforce existing stereotypes.
"With the adoption of AI systems, at first I thought we were looking at a mirror, but now I believe we're looking into a kaleidoscope of distortion," Buolamwini says. "Because the technologies we believe to be bringing us into the future are actually taking us back from the progress already made."
Buolamwini says she got into computer science because she wanted to "build cool future tech," not to be an activist. But as the potential misuses of the technology became clearer, she realized she needed to speak out.
"I truly believe if you have a face, you have a place in the conversation about AI," she says. "As you encounter AI systems, whether it's in your workplace, maybe it's in the hospital, maybe it's at school, [ask] questions: 'Why have we adopted this system? Does it actually do what we think it's going to do?' "
On why facial recognition software makes mistakes
How is it that someone can be misidentified by a machine? So we have to look at the ways in which we teach machines to recognize the pattern of a face. And so the approach to this type of pattern recognition is often machine learning. And when we talk about machine learning, we're talking about training AI systems that learn from a set of data. So you have a dataset that would contain many examples of a human face, and from that dataset, using various techniques, the model would be trained to detect the pattern of a face, and then you can go further and say, "OK, let's train the model to find a specific face."
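The learning process she describes can be sketched in miniature. This is a toy illustration, not real face recognition: a nearest-centroid classifier "learns" each identity's pattern as the average of its example feature vectors, then matches a new vector to the closest learned pattern. The three-number vectors below are hypothetical stand-ins for the embeddings a real system would extract from images with a deep network.

```python
# Toy sketch of "training from a dataset": average each identity's
# example vectors into a learned pattern, then identify a new face
# by finding the closest learned pattern.

def centroid(vectors):
    """Average the example vectors for one identity."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(dataset):
    """dataset: {identity: list of feature vectors} -> {identity: centroid}."""
    return {name: centroid(vecs) for name, vecs in dataset.items()}

def identify(model, query):
    """Return the identity whose learned pattern is closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda name: dist(model[name], query))

# Hypothetical feature vectors standing in for face embeddings.
dataset = {
    "alice": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "bob":   [[0.1, 0.9, 0.8], [0.2, 0.8, 0.9]],
}
model = train(dataset)
print(identify(model, [0.85, 0.15, 0.15]))  # prints "alice"
```

The key point, which her research turns on, is that the model can only learn patterns that are present in the dataset it is trained on.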
What my research showed, and what others have shown as well, is that many of these datasets were not representative of the world at all. I started calling them "pale male" datasets, because I would look into the datasets and I would go through and count: How many light-skinned people? How many dark-skinned people? How many women, how many men, and so forth. And some of the really important datasets in our field could be 70% men, over 80% lighter-skinned individuals. And these sorts of datasets could be considered gold standards. ...
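The counting exercise she describes amounts to a simple composition audit. This sketch uses hypothetical labels and counts, chosen to mirror the skew she reports (over 70% men, over 80% lighter-skinned):

```python
# Sketch of a dataset composition audit: for each attribute, compute
# the percentage breakdown across the dataset. Labels are hypothetical.
from collections import Counter

faces = (
    [{"gender": "male", "skin": "lighter"}] * 60
    + [{"gender": "male", "skin": "darker"}] * 12
    + [{"gender": "female", "skin": "lighter"}] * 22
    + [{"gender": "female", "skin": "darker"}] * 6
)

def composition(dataset, attribute):
    """Percentage of the dataset falling into each group for `attribute`."""
    counts = Counter(face[attribute] for face in dataset)
    total = len(dataset)
    return {group: 100 * n / total for group, n in counts.items()}

print(composition(faces, "gender"))  # {'male': 72.0, 'female': 28.0}
print(composition(faces, "skin"))    # {'lighter': 82.0, 'darker': 18.0}
```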
And so it's not then so surprising that you would have higher misidentification rates for people who were less represented when these types of systems were being developed in the first place. And so when you look at people like Porcha Woodruff, who was falsely arrested due to facial recognition misidentification, when you look at Robert Williams, who was falsely arrested due to facial misidentification in front of his two young daughters, when you look at Nijeer Parks, when you look at Randall Reed, who was arrested for a crime that occurred in a state he had never even set foot in, all of these people I've mentioned are dark-skinned individuals.
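A single overall accuracy number can hide exactly the failures described above. This sketch disaggregates a hypothetical set of match results by skin type, in the spirit of the per-group evaluation used in her Gender Shades research (the numbers here are invented for illustration):

```python
# Disaggregated evaluation sketch: overall error rate vs. per-group
# error rates on hypothetical match results.
results = (
    [{"skin": "lighter", "correct": True}] * 95
    + [{"skin": "lighter", "correct": False}] * 5
    + [{"skin": "darker", "correct": True}] * 70
    + [{"skin": "darker", "correct": False}] * 30
)

def error_rate(results, group):
    """Percent of results for `group` that were misidentifications."""
    subset = [r for r in results if r["skin"] == group]
    misses = sum(1 for r in subset if not r["correct"])
    return 100 * misses / len(subset)

overall = 100 * sum(1 for r in results if not r["correct"]) / len(results)
print(f"overall: {overall:.1f}%")                          # 17.5%
print(f"lighter: {error_rate(results, 'lighter'):.1f}%")   # 5.0%
print(f"darker:  {error_rate(results, 'darker'):.1f}%")    # 30.0%
```

In this invented example the aggregate figure looks tolerable while one group's error rate is six times the other's, which is why auditing by subgroup matters.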
On why AI misgenders female faces
Joy Buolamwini is the founder of the Algorithmic Justice League, an organization that raises awareness about the implications of AI. Her research was also featured in the Netflix documentary Coded Bias. Naima Green/Penguin Random House
When I looked at the research on gender classification, I saw that in some prior studies, older women actually tended to be misgendered more often than younger women. And I also started looking at the composition of the various gender classification testing datasets, the benchmarks and so forth. And it's a similar kind of story to the dark skin here. It's not just the proportion of representation, but what type of woman is represented. So, for example, many of these face datasets are face datasets of celebrities. And if you look at women who tend to be celebrated, [they are] lighter-skinned women, but also [women who] fit very specific gender norms or gender presentation norms and stereotypes as well. And so if you have systems that are trained on some type of ideal form of woman that doesn't actually fit many ways of being a woman, this learned gender presentation does not reflect the world.
On being a "poet of code," and the success of her piece, "AI, Ain't I a Woman?"
I spent so much time wanting to have my research be taken seriously. ... I was concerned people might also think it's a gimmick. ... And so after I published the Gender Shades paper and it was really well received in the academic world and also industry, in some ways I felt that gave me a little bit of a shield to experiment with more of the poetic side. And so shortly after that research came out, I did a poem called "AI, Ain't I a Woman?," which is both a poem and an AI audit from testing different AI systems out. And so the AI audit results are what drive the lyrics of the poems. And as I was working on that, it allowed me to connect with the work in a different way.
This is where the humanizing piece comes in. So it's one thing to say, "OK, this system is more accurate than that system," or "this system performs better on darker skin or performs better on lighter skin." And you can see the numbers. But I wanted to go from the performance metrics to the performance arts so you could feel what it's like if somebody is misclassified, not just read the various metrics around it.
And so that's what the whole experimentation around "AI, Ain't I a Woman?" was. And that work traveled in places I didn't expect. Probably the most unexpected place was with the EU Global Tech panel. It was shown to defense ministers of every EU country ahead of a conversation on lethal autonomous weapons to humanize the stakes and think about what we're putting out.
On her urgent message for President Biden about AI
We have an opportunity to lead on preventing AI harms, and the subtitle of the book is Protecting What Is Human in a World of Machines. And when I think of what is human, I think about our right to express ourselves, the essence of who we are and our expectations of dignity. I challenge President Biden for the U.S. to lead on what I call biometric rights. ...
I'm talking about our essence, our actual likeness. ... Someone can take the voice of your loved one, clone it and use it in a hoax. So you might hear someone screaming your name, saying someone has taken something, and you have fraudsters who are using these voice clones to extort people. Celebrity won't save you. You had Tom Hanks, whose likeness was used in synthetic media, a deepfake, to promote a product he had never even heard of.
So we see these algorithms of exploitation that are taking our actual essence. And then we also see that the need for civil rights and human rights continues. It was very encouraging to see that the executive order actually included principles from the Blueprint for an AI Bill of Rights, such as protections from algorithmic discrimination, assurances that the AI systems being used are effective, and human fallbacks, because that's going to be necessary to safeguard our civil rights and our human rights.
On how catastrophizing about AI killing us in the future neglects the harm it can do now
I'm concerned with the way in which AI systems can kill us slowly already. I'm also concerned with things like lethal autonomous weapons as well. So for me, you don't need to have super intelligent AI systems or advanced robotics to have a real harm. A self-driving car that doesn't see you on the road can be fatal and harmful. I think of this notion of structural violence where we think of acute violence: There's the gun, the bullet, the bomb. We see that type of violence. But what's the violence of not having access to adequate health care? What's the violence of not having housing and an environment free of pollution?
And so when I think about the ways in which AI systems are used to determine who has access to health care and insurance, who gets a particular organ, in my mind ... there are already many ways in which the integration of AI systems leads to real and immediate harms. We don't have to have super-intelligent beings for that.
Sam Briger and Thea Chaloner produced and edited this interview for broadcast. Bridget Bentz, Molly Seavy-Nesper and Beth Novey adapted it for the web.
Read the original here:
'Unmasking AI' author Joy Buolamwini says prejudice is baked into ... - NPR