Just a few days after I posted a version of this column on @Medium, the Chronicle of Philanthropy ran this story as its lead: "A.I. Could Prove Disastrous for Democracy: How Can Philanthropy Prepare?" (https://www.philanthropy.com/article/a-i-could-prove-disastrous-for-democracy-how-can-philanthropy-prepare).
The author, #GordonWhitman, asserted that we need less A.I. in the world of philanthropy and more human connection. Why? Because donors can't discern the difference between an A.I.-generated voice and a real person begging for money.
What he fails to acknowledge is that humans themselves are a major part of the problem with A.I.
Less than a week ago, on #LinkedIn, I read this post about an #NPR story by @Carmen Drahl: "AI was asked to create images of Black African docs treating white kids. How'd it go?" (https://www.npr.org/sections/goatsandsoda/2023/10/06/1201840678/ai-was-asked-to-create-images-of-black-african-docs-treating-white-kids-howd-it-?). My #blackanthropology colleague, Dr. David Simmons (https://www.linkedin.com/in/david-simmons-87743a4/), responded to the article with this observation about the real danger behind #AI, and also an appeal:
"AI still relies on humans, complete with their biases and assumptions, both implicit and explicit. Let's work towards creating AI systems that are more inclusive."
The Oxford University researcher whom Drahl wrote about, Dr. Arsenii Alenichev (https://www.ethox.ox.ac.uk/team/arsenii-alenichev), had tried an experiment. The results he and his team of scientists reached, over and over again, showed that our fear should not be that #artificialintelligence will take over and ruin the human world. After all, A.I. is constrained by its data parameters.
What we do need to fear is how the coding and input of data into A.I. are done by human beings, who come already socialized and filled with cultural biases! Drahl explains,
[Alenichev's] goal was to see if AI would come up with images that flip the stereotype of white saviors or suffering Black kids. [He stated], "We wanted to invert your typical global health tropes."
They realized AI did fine at providing on-point images if asked to show either Black African doctors or white suffering children. It was the combination of those two requests that was problematic.
Racial and Gender Inequality in Silicon Valley
What Alenichev learned was this: a computerized intelligence cannot imagine or configure anything beyond its programmers' imagination. Artificial intelligence is locked into the social and cultural norms and conditioning of the people who are feeding it the information. And, while sometimes information can be neutral, more often than not, it is accompanied by interpretations and value judgments.
Thus, if a (white) programmer cannot conceive of a Black doctor helping white suffering children, then that bias is coded into the A.I. In short, any machine (or A.I.) is only as smart (or empathetic) as the people who initially coded and input the data.
According to UC Santa Barbara sociologist and ethnographer Dr. France Winddance Twine, we probably shouldn't hold our breath for the inclusive A.I. that Simmons requested. It ain't gonna happen.
Winddance documents in her latest book, Geek Girls: Inequality and Opportunity in Silicon Valley, how implicit and explicit racial biases and gender inequality abound in Silicon Valley! She concludes the book with this statement:
"The technology sector is unjust and not yet a vehicle for economic justice and social mobility for everyone."
What's an A.I. to do?
So, what's an A.I. to do?
Well, we know that artificial intelligence is not autonomous. It cannot create anything, at least not at this moment in time, outside of the existing information stored within its database.
A.I. can reconfigure and make up facts, and it can also plagiarize and create false data by linking things together and stealing online content from human researchers and writers, as Matt Novak pointed out in a May 2023 Forbes article about the new Google search engine: "Google's New AI-Powered Search Is A Beautiful Plagiarism Machine."
At this moment in time, A.I. does not have its own autonomous scarecrow brain; it simply mimics and expands upon its existing program.
It is true that, if we believe Isaac Asimov's Robot series and the Will Smith movie I, Robot, A.I. could become a supercomputer. But it cannot, as Azmera Hammouri-Davis, M.T.S., says, #breaktheboxes of its human programmer.
Our greatest fear should not be of an autonomous A.I., like HAL the computer in 2001: A Space Odyssey, though A.I.s are destined to create massive unemployment for laborers who are unskilled in the use of technology. As the Chronicle of Philanthropy reminds us, they can be made to sound human.
Nonetheless, our greatest fear about A.I.s MUST be that they are being supercoded with #whitesupremacy ideology and #genderinequality data.
And, don't act surprised! This is not new stuff. Since 2018, groups like the Critical Code Studies Working Group at the University of Southern California, now called the Humanities and Critical Code Lab (HaCCS), led by Mark Marino, have been looking at issues of inequality in coding.
Indeed, I discovered this fact some time ago, before Google began reading critiques of its coding practices. What I found was that if you typed #blackbeauty into the Google search engine, all that appeared were images of horses, like in the movie Black Beauty.
Conversely, if you typed in #beauty, only images of #whitewomen appeared. Since then, Google has become more #WOKE and updated some images in the search parameters connected to these words. But whiteness still prevails.
These are just a few of the known biases historically coded into #A.I., and the recent experiments by Dr. Alenichev prove that racial stereotypes are still prevalent, such that in the coded minds of A.I. (and its programmers), all the suffering children are Black and nonwhite and ALL the medical doctor saviors are white.
In the case of Black doctors treating white suffering children, biases and assumptions against this possibility are rooted in #whitesupremacy ideology and beliefs. Disbelief in the professionalism of Black people is part of the tacit anti-Blackness knowledge into which most white people in America, and Europeans globally, are socialized.
These human beliefs and biases will not, and cannot, change until medical schools are more diverse, and Silicon Valley becomes equal, ungendered, diverse, equitable, and inclusive!
It is not the A.I. that needs a #DEI reboot, but the human beings who code them sure do!
But don't hold your breath for immediate change.
The current climate of anti-CRT sentiment and anti-Blackness, along with attempts to whitewash American history and negate hundreds of years of human enslavement, suffering, and ongoing Black and Indigenous generational trauma, disparities, and inequality (https://www.politico.com/news/2023/07/24/florida-desantis-black-history-education-00107859), suggests little hope for change toward a more socially and racially intelligent A.I., given the current state of biases in the Silicon Valley tech industry and the mindset of its human coding professionals.
(c) 2023 Irma McClaurin
An earlier version was published on Medium, Oct. 20, 2023 (https://irmamcclaurin.medium.com/ai-is-not-the-problem-we-need-more-diverse-and-inclusive-humans-in-the-tech-sector-7cbec2ad2b77).
Dr. Irma McClaurin (https://linktr.ee/dr.irma/@mcclaurintweets) is a digital columnist on Medium, Culture and Education Editor for Insight News, and a Ms. Magazine author. She is the founder of the Irma McClaurin Black Feminist Archive at the University of Massachusetts Amherst. An activist Black Feminist anthropologist and award-winning author, she was recognized in 2015 by the Black Press of America as Best in the Nation Columnist. She is a past president of Shaw University and was recently featured, as an anthropologist, in the PBS American Experience documentary on Zora Neale Hurston. A collection of her Insight News columns, Justspeak: Reflections on Race, Culture & Politics in America, is forthcoming. She is also working on a book manuscript on Zora Neale Hurston and anthropology, as well as a collection of short vignettes entitled Confessions of a Barstool DIVAH.
A.I. (Artificial Intelligence) is not the Problem: We Need More Diverse and Inclusive Humans in the Tech Sector - Insight News