AI Technology Threatens Educational Equity for Marginalized Students – Progressive.org

The fall semester is well underway, and schools across the United States are rushing to implement artificial intelligence (AI) in ways that promise equity, access, and efficiency for all members of the school community. Take, for instance, Los Angeles Unified School District's (LAUSD) recent decision to implement Ed.

Ed is an AI chatbot meant to replace school advisors for students with Individual Education Plans (IEPs), who are disproportionately Black. Announced on the heels of a national uproar about teachers being unable to read IEPs due to lack of time, energy, and structural support, Ed might seem to many like a sliver of hope: the silver bullet needed to address the chronic mismanagement of IEPs and the ongoing disenfranchisement of Black students in the district. But for Black students with IEPs, AI technologies like Ed might be more akin to a nightmare.

Since the pandemic, public schools have seen a proliferation of AI technologies that promise to remediate educational inequality for historically marginalized students. These technologies claim to predict behavior and academic performance, manage classroom engagement, detect and deter cheating, and proactively stop campus-based crimes before they happen. Unfortunately, because anti-Blackness is often baked into the design and implementation of these technologies, they often do more harm than good.

Proctorio, for example, is a popular remote proctoring platform that uses AI to detect perceived behavior abnormalities by test takers in real time. Because the platform employs facial detection systems that fail to recognize Black faces more than half of the time, Black students have an exceedingly hard time completing their exams without triggering the faulty detection systems, which results in locked exams, failing grades, and disciplinary action.

While being falsely flagged by Proctorio might induce test-taking anxiety or result in failed courses, the consequences for inadvertently triggering school safety technologies are much more devastating. Some of the most popular school safety platforms, like Gaggle and GoGuardian, have been known to falsely identify discussions about LGBTQ+ identity, race-related content, and language used by Black youth as dangerous or in violation of school disciplinary policies. Because many of these platforms are directly connected to law enforcement, students who are falsely identified are contacted by police both on campus and in their homes. Considering that Black youth endure the highest rates of discipline, assault, and carceral contact on school grounds and are six times more likely than their white peers to have fatal encounters with police, the risk of experiencing algorithmic bias can be life threatening.

These examples speak to the dangers of educational technologies designed specifically for safety, conduct, and discipline. But what about education technology (EdTech) intended for learning? Are the threats to student safety, privacy, and academic wellbeing the same?

Unfortunately, the use of educational technologies for purposes other than discipline seems to be the exception, not the rule. A national study examining the use of EdTech found an overall decrease in the use of the tools for teaching and learning, with over 60 percent of teachers reporting that the software is used to identify disciplinary infractions.

What's more, Black students and students with IEPs endure significantly higher rates of discipline not only from being disproportionately surveilled by educational technologies, but also from using tools like ChatGPT to make their learning experience more accommodating and accessible. This could include using AI technologies to support executive functioning, access translated or simplified language, or provide alternative learning strategies.

To be sure, the stated goals and intentions of educational technologies are laudable, and speak to our collective hopes and dreams for the future of schools: places that are safe, engaging, and equitable for all students regardless of their background. But many of these technologies are more likely to exacerbate educational inequities like racialized gaps in opportunity, school punishment, and surveillance, dashing many of these idealistic hopes.

To confront the disparities wrought by racially-biased AI, schools need a comprehensive approach to EdTech that addresses the harms of algorithmic racism for vulnerable groups. There are several ways to do this.

One possibility is recognizing that EdTech is not neutral. Despite popular belief, educational technologies are not unbiased, objective, or race-neutral, and they do not inherently support the educational success of all students. Oftentimes, racism becomes encoded from the outset of the design process, and can manifest in the data set, the code, the decision-making algorithms, and the system outputs.

Another option is fostering critical algorithmic literacy. Incorporating critical AI curriculum into K-12 coursework, offering professional development opportunities for educators, or hosting community events to raise awareness of algorithmic bias are just a few of the ways schools can support bringing students and staff up to speed.

A third avenue is conducting algorithmic equity audits. Each year, the United States spends nearly $13 billion on educational technologies, with the LAUSD spending upwards of $227 million on EdTech in the 2020-2021 academic year alone. To avoid a costly mistake, educational stakeholders can work with third-party auditors to identify biases in EdTech programs before launching them.

Regardless of the imagined future that Big Tech companies try to sell us, the current reality of EdTech for marginalized students is troubling and must be reckoned with. For LAUSD, the second largest district in the country and home of the fourteenth largest school police force in California, the time to tackle the potential harms of AI systems like Ed the IEP Chatbot is now.