NIST Proposal Aims to Reduce Bias in Artificial Intelligence – Government Technology

The National Institute of Standards and Technology (NIST) recently announced the publication of A Proposal for Identifying and Managing Bias in Artificial Intelligence.

The proposal outlines a possible approach for reducing the risk of bias in the use of artificial intelligence (AI) technology, and the agency is seeking public comments to strengthen that effort through Aug. 5.

Studies have shown that AI can be biased against people of color, and while there are legislative efforts in progress to tackle this issue from a policy standpoint, much of the issue hinges on the way the technology functions at its most basic level.

The proposal seeks to help industries that use AI technology develop a risk-based framework. It notes that while reducing risk in these products is critical, that risk remains insufficiently defined.

The announcement details some of the possible discriminatory outcomes that can come from AI systems, such as wrongful arrests or unfairly rejecting qualified job applicants.

NIST's proposed approach involves three stages for reducing that bias: predesign, design and development, and deployment.

The first stage, predesign, is where the AI product and its parameters are defined and its central purpose is determined. In this phase, thinking ahead to possible problems is critical.

The next stage is design and development, where the engineering and modeling take place. In this stage, software designers must pay close attention to context and to how predictions may affect different populations.

Finally, in the deployment stage, it is important that products continue to be monitored; in some cases, they are released to the public with very little oversight of what follows.

The proposal concludes that while bias is neither new nor unique to AI, identifying and reducing it can support responsible use of the technology. According to one of the report's authors, NIST's Reva Schwartz, bias exists throughout the AI life cycle.

"Determining methods for identifying and managing it is a vital next step," Schwartz said.

NIST is welcoming public feedback on the approach outlined in the proposal from people both within and outside the technology industry. Comments can be submitted by downloading and completing a template and emailing it to ai-bias@list.nist.gov.

