GovCon Expert Joe Paiva Finds AI at a Crossroads: Amplifying Biases or Empowering All – GovCon Wire

By Joe Paiva, Chief Operating Officer at Fearless

The digital divide of the 1990s exacerbated long-standing inequities in our society.

As broadband internet and personal computers proliferated, they reached affluent neighborhoods and households first. This left economically disadvantaged communities, disproportionately communities of color, on the wrong side of the divide. The impacts on education, job skills development and economic opportunity further widened existing disparities.

Today, we face an even more dangerous new digital divide, one fueled by the rapid rise of artificial intelligence and machine learning.

Algorithms are increasingly used to make high-stakes decisions that affect people's livelihoods and quality of life, from college admissions and job candidate screening to home mortgage approvals and the allocation of government services.

The fundamental problem is this: most of these AI and ML models are trained on historical datasets that reflect centuries of systemic bias and discrimination. There's redlining in housing, legacy admissions in higher education and underinvestment in schools and businesses in minority neighborhoods. These and countless other inequities are baked into the data from which AI-based applications learn to make predictions.

For example, a 2017 study by researchers at the University of Virginia and the University of Washington found that AI algorithms used by major online platforms to target job ads were significantly less likely to show opportunities in engineering, computing and other high-paying fields to women compared to men. The algorithms had learned to optimize ad placement based on past engagement data, perpetuating long-standing gender disparities in STEM careers. Research articles have found similar issues in AI used for hiring, where models trained on historical employment records can entrench racial and gender biases in selection processes. Equally insidious examples, more difficult to document, permeate other domains.

Without intentional effort to identify and mitigate these biases, AI will continue to amplify past inequities and erect new barriers to opportunity for underrepresented groups.

And because of the digital divide that began in the 90s, underserved communities and people of color have faced significant barriers to developing digital skills, pursuing education and job opportunities, and participating in the digital economy. As a result, these groups are less likely to be developing and implementing the AI tools and practices that threaten to widen the divide further.

A 2020 study by the National Skills Coalition, "Applying a racial equity lens to digital literacy," reveals stark disparities in digital skill attainment between white workers and their Black, Latino and Asian American and Pacific Islander peers.

The study found that while 41 percent of white workers have advanced digital skills, only 13 percent of Black workers, 17 percent of Latino workers and 28 percent of AAPI workers have attained this level. These gaps in advanced digital skills are the product of structural inequities deeply rooted in our society, from uneven access to quality education and training to biased hiring practices and lack of diversity in the tech sector.

As a result, rather than being the great equalizer we once hoped for, AI threatens to systematize and amplify the biases of the past, affecting access to opportunity for generations to come.

There are promising examples of AI being deployed thoughtfully to identify bias and the social factors that underlie disparities. The Veterans Administration is utilizing AI in many ways. The Social Determinants of Health Extractor, or SDOH, is an AI-powered tool that analyzes clinical notes in electronic health records to identify key social factors, such as a patient's economic status, education, housing situation and social support networks, that may influence their health outcomes.

By using natural language processing and deep learning techniques, the system can automatically surface SDOH information. The extracted SDOH variables can then be used by researchers to examine how these social factors contribute to health disparities and impact clinical outcomes for veterans from minority or underserved communities.
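To make the input/output shape of such an extractor concrete, here is a minimal sketch. The VA's production tool uses trained natural language processing and deep learning models; the keyword patterns, category names and example note below are purely hypothetical, illustrating only how a clinical note might map to structured SDOH variables.

```python
import re

# Hypothetical keyword patterns for a few SDOH categories. A production
# system learns these signals from data rather than using fixed rules.
SDOH_PATTERNS = {
    "housing": re.compile(r"\b(homeless|eviction|unstable housing)\b", re.I),
    "economic": re.compile(r"\b(unemployed|financial strain|food insecurity)\b", re.I),
    "social_support": re.compile(r"\b(lives alone|no family support|isolated)\b", re.I),
}

def extract_sdoh(clinical_note: str) -> dict:
    """Return the SDOH categories whose patterns appear in the note,
    with the matched phrases, as structured variables for researchers."""
    return {
        category: sorted({m.group(0).lower() for m in pattern.finditer(clinical_note)})
        for category, pattern in SDOH_PATTERNS.items()
        if pattern.search(clinical_note)
    }

note = ("Patient is unemployed and reports food insecurity. "
        "Lives alone; recent eviction noted.")
print(extract_sdoh(note))
```

The structured output (category to matched phrases) is what lets researchers correlate social factors with clinical outcomes across large record sets.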

Understanding these relationships is a critical step toward designing more targeted interventions and equitable care delivery practices that address the root social drivers of health.

In the criminal justice system, AI is being leveraged to address racial disparities in sentencing. Researchers at the Stanford Computational Policy Lab developed a machine learning model to identify bias in risk assessment tools used by judges to inform sentencing decisions.

By analyzing data from over 100,000 criminal cases in Broward County, Florida, the team found that Black defendants were nearly twice as likely as white defendants to be misclassified as high risk of recidivism.

Armed with this insight, policymakers and judges can take steps to mitigate the bias, such as adjusting risk thresholds or supplementing the algorithms with additional contextual information.
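The kind of audit described above can be sketched in a few lines: compute, per group, how often people who did not reoffend were nonetheless flagged as high risk. The records, scores and rates below are fabricated toy values for illustration, not the Broward County data or the Stanford team's method.

```python
# Toy records: (group, risk_score, reoffended). Fabricated for illustration.
cases = [
    ("black", 0.70, False), ("black", 0.65, False), ("black", 0.80, True),
    ("black", 0.55, False), ("white", 0.40, False), ("white", 0.72, True),
    ("white", 0.45, False), ("white", 0.60, False),
]

def false_positive_rate(records, group, threshold):
    """Share of non-recidivists in `group` misclassified as high risk,
    i.e. scored at or above `threshold` despite not reoffending."""
    non_recidivists = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_recidivists if r[1] >= threshold]
    return len(flagged) / len(non_recidivists)

# At one shared threshold, the disparity surfaces as unequal error rates.
for group in ("black", "white"):
    print(group, round(false_positive_rate(cases, group, threshold=0.6), 2))
```

In this toy data the Black group's false positive rate is twice the white group's at the shared 0.6 threshold, which is exactly the kind of gap that motivates the mitigation steps the article mentions, such as adjusting risk thresholds or adding contextual information.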

While AI alone cannot solve systemic inequities, these examples demonstrate its potential as a tool for diagnosing and beginning to address bias in high-stakes government decisions and actions.

To disrupt the cycle and close the digital divide, diversity and inclusion must become a strategic imperative, not only within government agencies but also within the contracting community that serves them and the technology sector as a whole. Only by building teams as diverse as the public we serve can we design AI and digital services that work for all.

Failing to act will allow the new digital divide to calcify, further concentrating wealth and power in the hands of the few at the expense of the many.

The call to action is clear for leaders in government and the technology ecosystem.

The path ahead is clear. By embracing diversity, equity and inclusion as core values in the development and deployment of AI, we have the power to create a future where technology truly serves all.

When we harness the talents and perspectives of our nation's full diversity, we can create AI systems that are more innovative, more equitable and more impactful. Realizing this vision will require sustained commitment and collaboration across government, industry, academia and communities. It will demand courageous leadership, honest introspection and a willingness to break from the status quo. But the potential rewards, a society where AI narrows opportunity gaps instead of widening them and technology is a source of empowerment rather than exclusion, are too great to ignore.

So let us seize this moment, and work together to build a future where the power of AI lifts up the full diversity of the American people. In this future, the digital divide gives way to digital dignity and innovation drives not just prosperity, but justice. This is the future we must build, and the future we will build, together.
