Analysis: How Bidens new executive order tackles AI risks, and where it falls short – PBS NewsHour

President Joe Biden walks across the stage to sign an executive order about artificial intelligence in the East Room at the White House in Washington, D.C., Oct. 30, 2023. REUTERS/Leah Millis

Thecomprehensive, even sweeping, set of guidelinesfor artificial intelligence that the White House unveiled in an executive order on Oct. 30, 2023, shows that the U.S. government is attempting to address the risks posed by AI.

As aresearcher of information systems and responsible AI, I believe the executive order represents an important step in building responsibleandtrustworthyAI.

WATCH: Biden signs order establishing standards to manage artificial intelligence risks

The order is only a step, however, and it leaves unresolved the issue of comprehensive data privacy legislation. Without such laws, people are at greater risk ofAI systems revealing sensitive or confidential information.

Technology is typically evaluated forperformance, cost and quality, but often not equity, fairness and transparency. In response, researchers and practitioners of responsible AI have been advocating for:

The National Institute of Standards and Technology (NIST) issued acomprehensive AI risk management frameworkin January 2023 that aims to address many of these issues. The frameworkserves as the foundationfor much of the Biden administration's executive order. The executive order alsoempowers the Department of Commerce, NIST's home in the federal government, to play a key role in implementing the proposed directives.

Researchers of AI ethics have long cautioned thatstronger auditing of AI systemsis needed to avoid giving the appearance of scrutinywithout genuine accountability. As it stands, a recent study looking at public disclosures from companies found that claims of AI ethics practicesoutpace actual AI ethics initiatives. The executive order could help by specifying avenues for enforcing accountability.

READ MORE: Nations pledge to work together to contain 'catastrophic' risks of artificial intelligence

Another important initiative outlined in the executive order is probing for vulnerabilities ofvery large-scale general-purpose AI modelstrained on massive amounts of data, such as the models that power OpenAI's ChatGPT or DALL-E. The order requires companies that build large AI systems with the potential to affect national security, public health or the economyto perform red teamingand report the results to the government. Red teaming is using manual or automated methods to attempt toforce an AI model to produce harmful output for example, make offensive or dangerous statements like advice on how to sell drugs.

Reporting to the government is important given that a recent study foundmost of the companies that make these large-scale AI systems lackingwhen it comes to transparency.

Similarly, the public is at risk of being fooled by AI-generated content. To address this, the executive order directs the Department of Commerce todevelop guidance for labeling AI-generated content. Federal agencies will be required to useAI watermarking technology that marks content as AI-generated to reduce fraud and misinformation though it's not required for the private sector.

The executive order alsorecognizes that AI systems can pose unacceptable risksofharm to civil and human rightsand the well-being of individuals: "Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms."

A key challenge for AI regulation is the absence of comprehensive federal data protection and privacy legislation. The executive order only calls on Congress to adopt privacy legislation, but it does not provide a legislative framework. It remains to be seen how the courts will interpret the executive order's directives in light of existing consumer privacy and data rights statutes.

Without strong data privacy laws in the U.S. as other countries have, the executive order could have minimal effect on getting AI companies to boost data privacy. In general, it's difficult to measure the impact that decision-making AI systems haveon data privacy and freedoms.

It's also worth noting that algorithmic transparency is not a panacea. For example, the European Union's General Data Protection Regulation legislation mandates "meaningful information about the logic involved" in automated decisions. This suggests a right to an explanation of the criteria that algorithms use in their decision-making. The mandate treats the process of algorithmic decision-making as something akin to a recipe book, meaning it assumes that if people understand how algorithmic decision-making works, they can understandhow the system affects them. But knowing how an AI system works doesn't necessarily tell youwhy it made a particular decision.

With algorithmic decision-making becoming pervasive, the White House executive order and theinternational summit on AI safetyhighlight that lawmakers are beginning to understand the importance of AI regulation, even if comprehensive legislation is lacking.

This article is republished from The Conversation. Read the original article.

Left: President Joe Biden walks across the stage to sign an executive order about artificial intelligence in the East Room at the White House in Washington, D.C., Oct. 30, 2023. REUTERS/Leah Millis

Follow this link:

Analysis: How Bidens new executive order tackles AI risks, and where it falls short - PBS NewsHour

Related Posts

Comments are closed.