Bias in AI is real. But it doesn't have to exist.

With help from Ella Creamer, Brakkton Booker and Ben Weyl

POLITICO illustration/Photo by AP

Hello, Recast friends! House Republicans narrowly passed a contentious defense bill and a major Alaska political rematch is on the way. But here for you is a fascinating interview between Mohar Chatterjee, a technology reporter at POLITICO, and AI ethicist Rumman Chowdhury. You can read the first part of this interview in last week's Digital Future Daily.

Today, we're diving deeper into the intersection of identity and technology. With AI driving both widespread public adoption and widespread anxiety, the struggle to get this technology right is real. Its critics worry that AI systems, particularly large language models trained on massive quantities of data, might have biases that deepen existing systemic discrimination.

These are not theoretical worries. Infamous examples of bias include a racially prejudiced algorithm used by law enforcement to identify potential repeat offenders, Amazon's old AI-powered hiring tool that discriminated against women, and more recently, the ability to prompt ChatGPT to make racist or sexist inferences.

Getting these powerful AI systems to reveal all their pitfalls is a tall order, but it's one that's found interest from federal government agencies, industry leaders and the technology's day-to-day users. The Commerce Department's National Telecommunications and Information Administration is seeking feedback on how to support the development of AI audits, while the White House's Office of Science and Technology Policy is gathering input from the public on what national priorities for AI should be.

We spoke to AI ethicist Rumman Chowdhury about her hopes and fears for this quickly spreading technology. Chowdhury, previously the director of Twitter's META (Machine Learning Ethics, Transparency, and Accountability) team and the head of Accenture's Responsible AI team, has been tapped for her expertise by both Congress and the White House in recent months. She appeared as a witness at a June hearing held by the House Science, Space and Technology Committee to testify on how AI can be governed federally without stifling innovation. She is also one of the organizers of a White House-endorsed hacking exercise on large language models (called a red-teaming exercise) to be held at DEFCON, a large hacker conference, in August. The exercise is meant to publicly identify potential vulnerabilities in these large AI models.

Chowdhury is currently a Responsible AI Fellow at the Harvard Berkman Klein Center and chief scientist at Parity Consulting.

Was The Recast forwarded to you by a friend? Don't forget to subscribe to the newsletter here.

You'll get a twice-weekly breakdown of how race and identity are the DNA of American politics and policy.

This interview has been edited for length and clarity.

THE RECAST: For many people, AI conjures notions of a dystopian future with machines taking over every aspect of our lives. Is that fear justified?

CHOWDHURY: At the current state of artificial intelligence, that fear is fully unjustified. And even in a future state of artificial intelligence, we can't forget that human beings have to build the technology and human beings have to implement the technology. So even in more of a stretch of the imagination, AI does not come alive and actively make decisions to harm people. People have to build the technology, and people have to implement it, for it to be harmful.

Examples of bias in AI include the ability to prompt ChatGPT to make racist or sexist inferences. | Richard Drew/AP Photo

THE RECAST: What does bias in AI look like? How will marginalized communities be impacted? What's your biggest fear?

CHOWDHURY: You've touched on exactly what my biggest fear is. My biggest fear is the problems we already see today, manifesting themselves in machine learning models and predictive modeling. And early AI systems already demonstrate very clear societal harm, reflecting bias in society.

So for example, if you look at medical implementations of artificial intelligence and machine learning, you'll see how members of the African American community, in particular Black women, are not treated well by these models because of a history of not being treated well by physicians. We see similar things in terms of biases in the workplace against minority groups. Over and over again, we have many clearly documented instances of this happening with AI systems. This doesn't go away because we make more sophisticated or smarter systems. All of this is actually rooted in the core data. And it's the same data all of these models are trained on.

THE RECAST: Your team at Accenture built the first enterprise algorithmic bias detection and mitigation tool. Why were you concerned about bias in AI back then and how has this concern evolved?

CHOWDHURY: In the earlier days of responsible AI, we're talking 2017-2018, we were actually in a very similar state of having more philosophical conversations. There were a few of us, and we were, frankly, primarily women, who talked about the actual manifestation of real societal harm and injury. And some of the earliest books written about algorithmic bias came out a few years before that or around that time. In particular: Safiya Noble's Algorithms of Oppression, Virginia Eubanks' Automating Inequality and Cathy O'Neil's Weapons of Math Destruction all talk about the same topic. So the issue became: How do we create products and tools that work at the scale at which companies move to help them identify and stop harms before they go ahead building technology?

THE RECAST: Your team at Twitter discovered that the platform's algorithm favored right-wing posts. Google's Timnit Gebru blew the whistle on ethical dilemmas posed by large language models. Why do you think so many whistleblowers in tech are women, particularly women of color?

CHOWDHURY: To clarify, this was during my time leading the machine learning ethics team at Twitter. So this work actually wasn't whistleblowing. This was approved by the company; we did this, you know, in conjunction with leadership. What we found in that study was that Twitter's machine learning algorithm amplified center-right content in seven out of eight countries. What we weren't able to find out was whether this was due to algorithmic bias or whether it's due to human behavior. Those are actually two different root causes that have two different solutions.

Unfortunately, in many tech situations, my case is rare. Very often, issues that are raised by women and women of color get ignored in the workplace, because, more broadly, women of color tend to not be listened to in general. So it's unsurprising to me that after having exhausted every internal channel or possibility, people who are typically ignored have to turn to more extreme measures.

Being a whistleblower is not romantic; it's actually very, very difficult for most individuals. If you think about what being a whistleblower means, you have essentially blackballed yourself from the industry that you've worked in, the industry that you care about.

Unfortunately, this is more likely to happen to women of color. We are more likely to identify issues and have a stronger sense of justice and this desire to fix a problem, but simultaneously, we are more likely to be ignored. But again, I will say my example at Twitter was actually a rare case of that not happening.

Chowdhury was fired by Elon Musk shortly after he took over Twitter. | Susan Walsh/AP Photo

THE RECAST: You were fired by Elon Musk shortly after he took over Twitter. What are your thoughts on why you were a target?

CHOWDHURY: I don't see the kind of work that the machine learning ethics team did being aligned with the kind of company Elon Musk wants to run.

If we just look at the Twitter files, we look at the kinds of people he's attacked. Some of them are folks like Yoel Roth, people who did things like trust and safety. The kind of work that my team did is very aligned with the work of teams that he is not funding or prioritizing. Frankly, he's firing teams that did that work. To be honest, I don't think I would have worked for that company anyway.

THE RECAST: When you testified before Congress last month, you said, "Artificial intelligence is not inherently neutral, trustworthy, nor beneficial." Can you talk a little more about that?

CHOWDHURY: I very intentionally picked those words. There is this misconception that a sufficiently advanced AI model trained on significant amounts of data will somehow be neutral. How this technology is designed and who it is designed for is very intentional, and can build in biases. So technology is not neutral.

These models are also not inherently trustworthy. That ties to a term that I coined called "moral outsourcing": this idea that technology is making these decisions and that the people making the decisions behind the scenes have no agency or no responsibility. Trustworthiness comes from building institutions and systems of accountability. There's nothing inherently trustworthy about these systems, simply because they sound smart or use a lot of data or have really, really complex programming.

And just because you build something with the best of intentions doesn't actually mean that it's going to inherently be beneficial. There's actually nothing inherently beneficial about AI. We either build it to be beneficial in use or we don't.

THE RECAST: Why do we need a global AI governance organization as you mentioned in your congressional testimony?

CHOWDHURY: There are a couple of scenarios that are not great. One would be a splintering of technology. We are actually living in an era of splintered social media, which means that people get information mediated via different sources. That actually deepens rifts between different kinds of people. If somebody in China or Russia sees a very different take on whats happening in Ukraine compared to somebody in the U.S., their fundamental understanding of what is truth is very different. That makes it difficult for people to actually live in a globalized world.

Another thing that I am concerned with in creating global regulation is that the majority of the Global South is not included. I'm part of the OECD's working group on AI governance, but these narratives are coming out of Europe, the U.K. or the U.S. I just don't want there to be a lack of mindfulness, when creating global governance, in assuming that the Global South has nothing to say.

And there are some questions that actually are not global-scale questions to ask. So for this global entity, in order for it to supersede national sovereignty, these have to be really, really big questions. The way I've been framing it is: What is the climate change of AI? What are the questions that are so big, they can't be addressed by a country or a company, and we need to push it up? So the default shouldn't be, "Oh, clearly punt this to the global entity"; it should be an exception rather than the rule.

You did it! You made it to Friday! And we're sending you into the weekend with some news and a few must-reads and must-sees.

Divided on Defense: The GOP-led House passed a controversial defense bill Friday that targets the Pentagon's policy on abortions, medical care for transgender troops and diversity issues. POLITICO's Connor O'Brien reports that it doesn't have a shot at passing the Senate.

Alaska Grudge Match: Republican Nick Begich says he's making another run at Alaska's at-large congressional seat, once again challenging 2023 Power List nominee Rep. Mary Peltola, a Democrat. POLITICO's Eric Bazail-Eimil has more.

The crisis over American manhood is really code for something else, according to a new POLITICO Magazine piece from Virginia Heffernan.

A Korean conglomerate endeavors to build an elevator into the sky in Djuna's part noir, part cyberpunk novel Counterweight, out now.

Earth Mama movingly traces the life of Gia (Tia Nomore), a mother trying to regain custody of her two kids in foster care.

Lakota Nation vs. United States weaves archival footage, interviews and images in its depiction of the tribe's 150-year struggle to regain a homeland.

The surprise collab we never knew we needed: BTS' Jung Kook and Latto on an energetic new bop, "Seven."

Karol G drops "S91," an emotional anthem inspired by a Bible verse, with a music video featuring a cross made of speakers.

TikTok of the Day: Generational differences
