Pondering the Ethics of Artificial Intelligence in Health Care: Kansas City Experts Team Up on Emerging Technology

Published December 4th, 2019 at 9:59 AM

Artificial Intelligence (AI), the ability of machines to make decisions that normally require human expertise, is already changing our world in countless ways, from self-driving cars to facial-recognition technology.

But the best, and maybe the worst, is yet to come.

AI is being used increasingly in health care, including in the form of a possible radiology tool that might eliminate the need for tissue samples. Knowing that, the people leading a new project called Ethical-AI for the Center for Practical Bioethics (CPB) are trying to make sure that AI health care tools will be created and used in ethical ways.

The ethical questions the project is raising should have been considered in a systematic way years ago, of course. But the good news is that the recommendations produced by this effort may be able to prevent misconstruction or misuse of AI health care tools.

"We've been excited about technology since we landed on the moon," says Lindsey Jarrett, a researcher for Cerner Corp., a Kansas City-based global health care technology company. "That has put us into a fast pace that I don't think we were prepared for. Now we're looking and saying, 'OK, wait, hold on. Maybe we should re-evaluate this.'"

Jarrett is working with Matthew Pjecha, a CPB program associate, to produce a series of ethical guidelines for how AI should and shouldn't be used in health care.

"When we're talking about (AI in) health care, the stakes get really high really fast," Pjecha says. "What we're hoping comes from this project is a robust set of recommendations (about ethics) for people who are designing, implementing and using AI in health care."

Pjecha, Jarrett and CPB leaders, such as CPB President John G. Carney, worry that if AI tools are created without first thinking about ethical issues, the results can be disastrous for lots of people.

In 2018, for instance, Pjecha gave a presentation at a symposium, attended by Jarrett, in which he looked at an AI instrument used in Arkansas to allocate Medicaid benefits. Because that AI tool was flawed by a failure to include data from a broad segment of the population, it deployed an algorithm that threw many eligible Medicaid recipients off the program, resulting in severe problems.

Pjecha and Jarrett later decided to work together under the CPB umbrella to make sure future AI health care tools were designed properly and ethically.

Once an AI tool has been created, Pjecha says, "if you get outcomes from them that you're not sure about or uncomfortable with, it's not easy to go back and find out why you got those." So it's vital to make sure that the data that goes into creating AI tools is reliable and not biased in some way.

"What we have learned," Pjecha says, "is that AI will express the biases that their creators have."

One way in which technology is affecting health care is through the growing use of wearable activity monitors, which track our daily movements and bodily reactions.

But, says Jarrett, "If someone is making really big clinical decisions based on the watch that you're wearing every day, there are lots of times when that device doesn't catch everything you need to know."

Pjecha adds: "I could wear a Fitbit every day of my life and I don't think a picture of my life would really be captured in it. But those are the numbers. And we have a kind of fascination with the role that numbers play in the provision of health care."

Without broadly accepted ethical guidelines for AI's creation and use in health care, Pjecha says, "10 years down the road ... we would find ourselves with a health care system that is less relatable and less compassionate and less human. We know that AI systems are quickly going to start outpacing human physicians in certain types of tasks. A good example is recognizing anomalies in imaging."

AI tools, for instance, already can find imaging irregularities at the pixel level, which human eyes can't see. "We need to figure out what it means when providers deploy a certain tool that is better qualified to make a type of call than they are," Pjecha says. "I'm really interested in what happens when one of these systems hypothetically makes a certain determination and a human physician disagrees with it. What kind of trust are we placing in these tools? A lot of these questions are just open."

And, adds Jarrett, another worry is that big companies such as Amazon and Google are entering the health care sector without knowing much about health care. That may compound the lack of ethical consideration needed to make sure AI tools are fair.

So once again, we risk science and technology moving more quickly than our human capacity to understand and control them.

CPB and Cerner both are funding this project, though CPB continues to seek additional investments to support it.

Bill Tammeus, a Presbyterian elder and former award-winning Faith columnist for The Kansas City Star, writes the daily Faith Matters blog for The Star's website and columns for The Presbyterian Outlook and formerly for The National Catholic Reporter. His latest book is The Value of Doubt: Why Unanswered Questions, Not Unquestioned Answers, Build Faith. Email him at wtammeus@gmail.com.
