Virginia legislators should be learning all they can about AI – News From The States

When you let your phone unlock itself by scanning your face, ask Alexa or Siri for directions, shop online, apply for a job or check your account in a banking app, your experience is underpinned by artificial intelligence (AI), technology that makes machines mimic human attributes like thinking, learning and suggesting. As AI continues infiltrating nearly every aspect of our society, it's crucial that Virginia lawmakers and experts continue to study its benefits and risks. That task is particularly urgent because the technology has been shown to perpetuate inequity that already exists in many of our systems.

There are several pieces of legislation related to AI up for debate now in Virginia's General Assembly. House Bill 249 calls for the development of a comprehensive framework for how law enforcement agencies use AI and a model policy to guide criminal justice systems' use of machine learning technology by 2025.

Such measures sure would have been helpful five years ago, before Norfolk police briefly used a facial recognition app developed by Clearview AI to identify suspects in criminal investigations without public knowledge and without oversight from city or state leaders. Not only was that a huge privacy concern, but the app could also have misidentified people, especially people of color, and connected them to a crime they had nothing to do with.

"The U.S. National Institute of Standards and Technology (NIST) tested 189 face recognition algorithms, and found that most facial recognition AI systems have a significantly higher rate of false positive identifications for photos of Black, Asian, and Indigenous people's faces than for photos of white people's faces," Dr. Jennifer Rhee, director of the AI Futures Lab at Virginia Commonwealth University, told me by email. "Some facial recognition systems were found to have false positive rates 10 [times] to 100 [times] higher for Black, Native American, and Asian people than for white people."
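To make the disparity Rhee describes concrete, here is a minimal sketch of how a per-group false positive rate comparison works. The counts below are hypothetical, invented for illustration; they are not NIST's actual evaluation data.

```python
# Illustrative sketch: comparing false positive rates (FPR) across
# demographic groups for a face matcher. All counts are hypothetical.

def false_positive_rate(false_positives, true_negatives):
    """FPR = FP / (FP + TN): how often non-matching faces are wrongly flagged as matches."""
    return false_positives / (false_positives + true_negatives)

# Hypothetical evaluation counts per demographic group.
results = {
    "group_a": {"fp": 2, "tn": 9998},    # 2 false matches in 10,000 trials
    "group_b": {"fp": 40, "tn": 9960},   # 40 false matches in 10,000 trials
}

rates = {group: false_positive_rate(c["fp"], c["tn"]) for group, c in results.items()}
disparity = max(rates.values()) / min(rates.values())
print(rates)
print(f"disparity ratio: {disparity:.0f}x")  # here, 20x between groups
```

A disparity ratio like the 20x one above is exactly the kind of gap, reportedly reaching 10x to 100x in some systems, that the NIST testing surfaced.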

"Despite those very high error rates, versions of this technology are currently being used by some police departments in America, and for surveillance of public spaces and national borders. That may spell serious consequences for the people who are erroneously identified by these systems," Rhee added.


Facial recognition technology is riddled with bias, in part because most of the images used to train algorithms to recognize faces are of white, male faces. Most of the engineers developing this technology over the past decade are also white and male; they create tech tools in their own image, and any bias they hold is reflected in those creations.
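The training-data skew described above can be checked with a simple audit of a dataset's demographic composition. This sketch uses entirely hypothetical labels and counts; it only illustrates the kind of imbalance researchers have documented in face datasets.

```python
# Sketch: auditing the demographic balance of a face-training corpus.
# Labels and counts below are hypothetical, for illustration only.

from collections import Counter

# Hypothetical demographic labels, one per training image.
labels = (["white_male"] * 700 + ["white_female"] * 150
          + ["darker_male"] * 100 + ["darker_female"] * 50)

shares = {group: count / len(labels) for group, count in Counter(labels).items()}
print(shares)  # a corpus that is 70% white, male faces

# Groups far below their real-world population share could be flagged
# for data collection or augmentation before the model is trained.
underrepresented = [g for g, s in shares.items() if s < 0.20]
print("underrepresented:", underrepresented)
```

A model trained on a corpus like this one sees seven examples of one group for every one example of another, which is one mechanism by which the error-rate gaps discussed earlier arise.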

"Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender," reads a 2018 MIT study led by Joy Buolamwini and Timnit Gebru that evaluated three commercial AI-powered facial recognition tools, which consistently misidentified darker-skinned people's gender while almost always getting the gender of lighter-skinned people right. Other tools don't recognize Black faces as human, as demonstrated in 2015 when Google Photos automatically sorted 80 photos of a Black man into a folder it labeled "gorillas."

"Because AI systems are trained on data produced by society, these systems reflect society's biases and power dynamics," Rhee told me by email.

Artificial intelligence has been under study by state lawmakers since 2020, when the Artificial Intelligence Subcommittee of the Joint Commission on Technology and Science was founded. The function of the group, subcommittee chair Del. Cliff Hayes, D-Chesapeake, told me last summer, is to help legislators get a better understanding of how the technology works and its implications in Virginia.

"We need to be equipped to deal with this in Virginia because AI and other technologies are evolving so rapidly, we can't necessarily sit around and wait for federal guidelines," said Hayes, who has had a nearly 30-year career in programming and technology management and is currently the city of Portsmouth's chief information officer. It's important to view AI and other technological developments holistically, he added, and not to focus exclusively on either the positive or negative aspects.


In 2021, Virginias legislature banned police departments across the commonwealth from using facial recognition technology without first gaining permission from the state. But policymakers quickly concluded that approach meant Virginia would miss out on the legitimate ways AI could aid law enforcement and criminal justice efforts.

"An all-out ban of AI isn't the answer, we learned," Hayes said. "The answer was to dissect and deal with the technology as it advances, so that's what we're doing."

In 2022, legislators rolled back the ban, allowing police departments to use facial recognition only in certain instances. That legislation also required the Virginia State Police to develop a model policy regarding the investigative uses of facial recognition technology and mandated that any facial recognition technology used in Virginia be evaluated by the National Institute of Standards and Technology and have an accuracy score of at least 98 percent true positives across all demographic groups. The latter requirement was intended to address public concerns that the technology is less accurate in recognizing the faces of people with darker skin tones.

In June, Attorney General Jason Miyares co-authored a letter written by attorneys general of 23 states entreating the National Telecommunications and Information Administration to review AI governance policies. The letter cited data privacy, the need to develop AI safely without hampering innovation and the technology's possible impact on individuals' legal and personal information as causes for concern and caution.

Last September, Gov. Glenn Youngkin issued a similarly themed executive order on artificial intelligence that focused on four areas: legal requirements, especially those related to privacy or intellectual property; policy standards for state agencies; appropriate IT safeguards such as cybersecurity and firewalls; and training for students.

That sounded good and all, except there wasn't a focus area dedicated specifically to one of the foremost challenges of AI: its well-known, deeply researched biases that fuel systemic discrimination against some groups of people, usually those who are already marginalized. This directive made me wonder if the governor truly had an interest in safeguarding Virginians of color and impoverished people from this pervasive problem.

Youngkin's latest AI-related executive order, released in January, steps up to meet that challenge. Standards for how the technology should be used and taught in the state's public schools and guidelines for AI's use in state agency databases issued through the order do strive to "prevent harmful AI practices based on inherent biases that lead to discriminatory outcomes" and "mitigate any risk of bias and discrimination by AI Systems." That's a sign the governor is listening to and learning from experts and everyday citizens concerned about how AI could unfairly impact some of us; it's a good start.

Sen. Suhas Subramanyam, D-Loudoun, a few weeks ago introduced Senate Joint Resolution 14, which orders the Joint Legislative Audit and Review Commission to continue studying the impact of AI in the state with a specific focus on understanding deep fakes, data privacy implications and misinformation, making sure the technology doesn't indirectly or directly lead to discrimination, finding ways to promote equity in AI algorithms, and looking at how AI can improve government operations and services.

Subramanyam, who was a technology policy adviser in the administration of former President Barack Obama, said his professional as well as personal experiences prompted him to put forward this measure.

"We shouldn't necessarily stifle and overregulate [artificial intelligence], but we should look at how we can prevent bad practices and behaviors that stem from it," he told me during a 9 p.m. interview earlier this week. (It was the only time he could talk after a full day at Capitol Square; I whispered my questions to him from my bedroom closet, hoping not to wake my kids. That he would make time to speak on AI after hours spoke volumes about his interest in and passion for the subject.)

"I come from an immigrant family, and one of the things I've found, as someone with a funny name like mine and a background like mine, is that you tend to find a lot of overgeneralizations or caricatures of your culture already, and those things also show up in the datasets that we use with AI and emergent tech," Subramanyam said. "If you've got a dataset that's not representative of all communities and cultures, then you will have an outcome that represents that characterization and misrepresentation."

Finally, among the roughly two dozen measures dealing with AI that are on the table this session is Senate Bill 621, which would establish a commission dedicated solely to advising the governor on AI policy in Virginia; I think Youngkin could surely use the help, and future leaders could too. Through this bill and others, Virginia now has the opportunity to implement artificial intelligence, machine learning and other emerging technologies in a responsible, ethical way that takes into account the specific harms possible for many of its citizens.

Alexa, play Ice Cube's "You Can Do It."
