Ethical Machine Learning with Explainable AI and Impact Analysis

As more decisions are made or influenced by machines, there's a growing need for a code of ethics for artificial intelligence. The main question is, "I can build it, but should I?" Explainable AI can provide checks and balances for fairness and explainability, and engineers can analyze a system's impact on people's lives and mental health.

Kesha Williams spoke about ethical machine learning at NDC Oslo 2023.

In the pre-machine-learning world, humans made hiring, advertising, lending, and criminal-sentencing decisions, and those decisions were often governed by laws that regulated the decision-making process for fairness, transparency, and equity, Williams said. Now, machines make or heavily influence many of these decisions.

A code of ethics is needed because machines can not only imitate and enhance human decision-making, but they can also amplify human prejudices, Williams said.

When people discuss ethical AI, you'll hear several terms: fairness, transparency, responsibility, and human rights, Williams mentioned. The overall goals are to avoid perpetuating bias, to consider the potential consequences, and to mitigate negative impacts.

According to Williams, ethical AI boils down to one question:

I can build it, but should I? And if I do build it, what guardrails are in place to protect the person that's the subject of the AI?

This is at the heart of ethics in AI, Williams said.

According to Williams, ethics and risk considerations can be incorporated using explainable AI, which helps us understand how models make decisions:

Explainable AI seeks to bake in checks and balances for fairness and explainability during each stage of the machine learning lifecycle: problem formation, dataset construction, algorithm selection, training, testing, deployment, monitoring, and feedback.
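To make those checks and balances concrete, here is a minimal sketch (my illustration, not an example from the talk) of one such check at the dataset-construction stage: computing a disparate impact ratio with pandas. The dataset, the column names, and the 0.8 threshold (the common "four-fifths rule") are all assumptions for the sake of the example.

```python
import pandas as pd

# Hypothetical loan-application data; column names and values are
# illustrative assumptions, not data from the talk.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Disparate impact ratio: favorable-outcome rate of the worse-off group
# divided by that of the better-off group. The "four-fifths rule"
# commonly flags ratios below 0.8 for review.
rates = df.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential fairness issue: review before training and deployment.")
```

The same kind of gate can be repeated at the later stages Williams lists, for example by re-running the ratio on the model's predictions during testing and monitoring.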

We all have a duty as engineers to look at the AI/ML systems we're developing from a moral and ethical standpoint, Williams said. Given the broad societal impact, mindlessly implementing these systems is no longer acceptable.

As engineers, we must first analyze these systems' impact on people's lives and mental health and incorporate bias checks and balances at every stage of the machine learning lifecycle, Williams concluded.

InfoQ interviewed Kesha Williams about ethical machine learning.

InfoQ: How does machine learning differ from traditional software development?

Kesha Williams: In traditional software development, developers write code to tell the machine what to do, line by line, using programming languages like Java, C#, JavaScript, Python, etc. The software spits out the data, which we use to solve a problem.

Machine learning differs from traditional software development in that we give the machine the data first, and it writes the code (i.e., the model) to solve the problem we need to solve. It's the complete reverse to start with the data, which is very cool!
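To illustrate that reversal, here is a small sketch of my own (the names and numbers are made up): the first function is a decision rule a developer writes by hand in the traditional style, while the second lets scikit-learn derive the rule from labeled examples.

```python
from sklearn.linear_model import LogisticRegression

# Traditional software: the developer writes the decision rule by hand.
def approve_loan(income_k: float, debt_k: float) -> bool:
    return income_k > 50 and debt_k / income_k < 0.4

# Machine learning: we hand over labeled data, and training produces
# the decision rule (the model) for us.
X = [[60, 10], [30, 20], [80, 5], [25, 15]]  # income, debt (thousands)
y = [1, 0, 1, 0]                             # past decisions (labels)
model = LogisticRegression().fit(X, y)

print(approve_loan(55, 12))           # rule written by a human
print(model.predict([[55, 12]])[0])   # rule learned from the data
```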

InfoQ: How does bias in AI surface?

Williams: Bias shows up in your data if your dataset is imbalanced or doesn't accurately represent the environment the model will be deployed in.
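As a quick sketch (the data here is assumed, not from the interview), a representativeness check can be as simple as comparing the training distribution against the population the model will serve:

```python
import pandas as pd

# Hypothetical training data for a hiring model; values are assumptions.
train = pd.DataFrame({"gender": ["M", "M", "M", "M", "M", "M", "F", "F"]})

# Compare the training-set distribution to the deployment population
# (assumed to be roughly 50/50 here).
print(train["gender"].value_counts(normalize=True))
# M    0.75
# F    0.25
# The model sees three times as many examples of one group as the other,
# even though it will face them in equal numbers once deployed.
```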

Bias can also be introduced by the ML algorithm itself: even with a well-balanced training dataset, the outcomes might favor certain subsets of the data over others.
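A minimal sketch of what such an audit might look like, using simulated predictions of my own invention: even with perfectly balanced groups in the data, per-group prediction rates can diverge.

```python
import numpy as np

rng = np.random.default_rng(0)
groups = np.array(["A"] * 100 + ["B"] * 100)  # perfectly balanced data

# Simulated model predictions on a held-out set: this model favors
# group A despite the balanced data.
preds = np.concatenate([rng.binomial(1, 0.7, 100),   # group A
                        rng.binomial(1, 0.4, 100)])  # group B

for g in ("A", "B"):
    rate = preds[groups == g].mean()
    print(f"Group {g}: positive-prediction rate = {rate:.2f}")
# A persistent gap like this points at the algorithm, not the dataset.
```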

Bias can show up in your model (once it's deployed to production) because of drift. Drift means that the relationship between the target variable and the other variables has changed over time, which degrades the predictive power of the model.
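One common way to watch for this kind of drift (a sketch under my own assumptions, not a method described in the talk) is to compare the distribution of live inputs against the training distribution, for example with a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Feature distribution at training time vs. what arrives in production.
train_income = rng.normal(50_000, 10_000, 1_000)
live_income = rng.normal(62_000, 10_000, 1_000)  # the world has shifted

# Two-sample Kolmogorov-Smirnov test: a tiny p-value means the live
# inputs no longer look like the data the model was trained on.
result = ks_2samp(train_income, live_income)
print(f"KS statistic={result.statistic:.3f}, p-value={result.pvalue:.3g}")
if result.pvalue < 0.01:
    print("Drift detected: investigate and consider retraining.")
```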

Bias can also show up in your people, your strategy, and the actions taken based on model predictions.

InfoQ: What can we do to mitigate bias?

Williams: There are several ways to mitigate bias:

See the original post: Ethical Machine Learning with Explainable AI and Impact Analysis - InfoQ.com
