April 22, 2024 - Artificial intelligence (AI) tools are proliferating in healthcare at breakneck speed, amplifying calls from healthcare leaders and policymakers for greater alignment on the responsible use of these tools.
The FDA authorized 692 artificial intelligence and machine learning devices in 2023, 171 more than its 2022 list. In response to this abundance of tools, various organizations have released their own definitions of what it means to use AI responsibly, including Pfizer, Kaiser Permanente, Optum, the White House, and others. However, the industry lacks an overarching set of principles to improve alignment. Brian Anderson, MD, CEO and co-founder of the Coalition for Health AI (CHAI), discusses the need for guidelines and standards for responsible AI in healthcare. He highlights key principles of responsible AI and emphasizes the importance of having various stakeholders--including patients--involved in developing these standards and guidelines.
Brian Anderson, MD:
Is there an agreed-upon consensus around what trustworthy, responsible AI looks like in health? The answer we quickly found is no. You have a lot of very innovative companies and health systems building AI according to their own definitions of what that means and what that looks like.
Kelsey Waddill:
Welcome to Season 2 of Industry Perspectives, coming to you from HIMSS 2024 in Orlando, Florida. I'm Kelsey Waddill, multimedia manager and managing editor at Xtelligent Healthcare.
From 2024 to 2030, the US healthcare AI market is expected to experience a 35.8 percent compound annual growth rate. As these tools proliferate, and in view of the risks inherent to the healthcare industry, guidelines for AI utilization are essential, and many are asking: what does it mean to use artificial intelligence responsibly in the healthcare context? Brian Anderson, CEO, chief digital health physician, and co-founder of the Coalition for Health AI, or CHAI, founded CHAI with this question in mind. He's going to break it down for us on today's episode of Industry Perspectives.
Brian, it's great to have you on Healthcare Strategies today. Thank you for making the time in a busy HIMSS schedule, I know, so we're glad that this could work out. I wanted to start out by asking about CHAI, the organization I know you co-founded in 2021, and so I wanted to get some more background on just how it started, your story.
Brian Anderson, MD:
Yeah, really its roots came out of the pandemic. So in the pandemic there were a lot of organizations that were non-traditional partners that were inherently competitive. You had pharma companies working together, you had technology giants working together trying to address this pandemic that was in front of us. And so coming out of it, there was a lot of goodwill in that space between a Microsoft, and a Google, and an Amazon, or various competing health systems wanting to still try to find a way to do good together.
And so one of the questions a number of us asked was: is there an agreed upon consensus around what trustworthy, responsible AI looks like in health? The answer we quickly found is no. You have a lot of very innovative companies and health systems building AI according to their own definitions of what that means and what that looks like. And inherently that means that there's a level of opaqueness to how we think about trust and how we think about performance of AI in these siloed companies. In a consequential space like health, that can be a problem.
And so we agreed that it would be a worthwhile cause and mission for our merry band of eight or ten health systems and technology companies to come together to really begin trying to build that consensus definition of what responsible, trustworthy health AI looks like. And so that's been our north star since we launched in 2021, and it quickly took off.
It started with those eight or ten. It quickly grew to like a hundred. The US government got involved right away. The Office of the National Coordinator and the Food and Drug Administration--which are the two main regulating bodies for AI--quickly joined our effort. The White House, NIH, all the HHS agencies. And then, excitingly, a large number of health systems and other tech companies joined, to the point where today we have 1,300 organizations, thousands of people, part of this coalition of the willing.
That became a challenge. How do you meaningfully engage 1,300 organizations, 2,000 to 3,000-plus individuals, with a volunteer group? Not very well. And so we made the decision to form a nonprofit, started that in January of this year. The board was convened. They asked me to be CEO. I'm very honored and humbled by that. And so I took that role on. Today is day two, I think, technically, in my new role.
Kelsey Waddill:
Congratulations!
Brian Anderson, MD:
Thanks. So I'm really excited to be here at HIMSS, take part of the vibrant conversation that everyone is having about AI here.
Kelsey Waddill:
Yeah, well, I mean it's definitely one of the major themes of the conference this year. And for good reason because, in the last year alone, there's been so much change in this space in AI specifically and its use in healthcare.
I did want to zoom in for one second, you mentioned this phrase, I know it's throughout your website and your language, and it's "responsible AI in healthcare." And I feel like that's the question right now, is: what does that look like? What does that mean? And so I know that's a big part of why CHAI convened to begin with. So I was wondering if you could shed some light on what you found so far and what that means.
Brian Anderson, MD:
Yeah. It's an important thing to be grounded in. So it starts with, I think, in the basic context that health is a very consequential space. All of us are patients or caregivers at one point in our life. And so the tools that are used on us as patients or caregivers need to have a set of aligned principles that are aligned to the values of our society, that allow us to ensure that the values that we have as humans in our society align with those tools.
And so some of those very basic principles in responsible AI are things like fairness. So is the AI that's created fair? Does it treat people equally? Does it treat people fairly? There's a concept in AI that I remind people of: all algorithms are, at the end of the day, programs that are trained on our histories. And so a really critical question we ask ourselves is: are those histories fair? Are they equitable? Right? And you smile--obviously, the answer is probably not.
Kelsey Waddill:
Probably no.
Brian Anderson, MD:
And so then it takes an intentional effort when we think about fairness and building AI to do that in a fair way.
Another concept, there are concepts like privacy and security. So the kinds of AI tools that we build, we don't want them to leak out training data that might be, especially in health, personal identifiable, protected health information. And so how we build models--particularly, there's been some news in the LLM space that if you do the right kind of prompting or hacking of these models, you can get it to reveal what its training data is. So how, again, we build models in a privacy-preserving, secure way that doesn't allow for that is really important.
There are other concepts like transparency, which is really important. When you use a tool, you want to know how that tool was made. What were the materials it was made out of? Is it safe for me to get into this car and drive somewhere? Does it meet certain safety standards? For many of the common day things that we use, from microwaves, to toaster ovens, to cars, there's places where you can go and you can read the report, the safety reports on those tools.
In AI, we don't have that yet. And so there's a real transparency problem when it comes to understanding very basic things like: what was this model trained on? What was it made of? How did it do when you tested it and evaluated it? We have all of the car safety tests, the famous car crash videos that we are all familiar with. We don't have that kind of testing information that is developed and maintained by independent entities. We have organizations that sell models, technology companies that make certain claims, but the ability to independently validate that, very hard.
And so this idea of transparency in terms of how models were trained, what their testing and evaluation scores were, what their indications and limitations are--a whole slew of other things go into this concept of transparency. So principles like that.
Other ones like usability, reliability, robustness--these are all principles of responsible AI. I won't bother detailing them all for you, but those are the principles that we're basing CHAI around. And so when we talk about building the standards or technical specifications, it means going from a 50,000-foot level and saying fairness is important to saying, "okay, in the generative AI space, what does bias mean? What does fairness mean in the output of an LLM?" We don't have a shared agreed upon common definition of what that looks like, and it's really important to have that.
So I'll give you an example. A healthcare organization wants to deploy an LLM as a chatbot out on their website. That LLM, if you give it the same prompt five or six times, might have different outputs. A patient trying to navigate beneficiary enrollment might be sent in five different directions if they were to give the same prompting. So that brings up concepts like reliability, fairness, and how we measure accuracy. These are all principles in responsible AI that for generative AI, for chatbots, we don't have a common agreed upon framework about what "good" looks like and how to measure alignment to that good. And so that's what we're focusing on in CHAI because it's an important question. We want that individual going to that hypothetical website with that chatbot to have a fair, reliable experience getting to the right place.
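The repeated-prompt scenario above can be sketched in code. This is a minimal, illustrative example--not a CHAI metric--of one crude way to quantify the reliability concern: run the same prompt several times and score what fraction of responses agree with the most common answer. The `consistency_score` function and the sample responses are hypothetical.

```python
from collections import Counter

def consistency_score(responses):
    """Fraction of responses matching the most common answer.

    A crude reliability proxy: 1.0 means every run of the same
    prompt produced the same answer; lower values mean the model
    sent users in different directions.
    """
    counts = Counter(responses)
    top_count = counts.most_common(1)[0][1]
    return top_count / len(responses)

# Hypothetical chatbot answers to one enrollment question, asked 5 times.
runs = [
    "Visit the member portal to enroll.",
    "Visit the member portal to enroll.",
    "Call the enrollment hotline.",
    "Visit the member portal to enroll.",
    "Fill out the paper form and mail it in.",
]
print(consistency_score(runs))  # 3 of 5 runs agree -> 0.6
```

In practice, responses would also need semantic (not just exact-string) comparison, which is exactly the kind of measurement methodology the interview says is still unsettled.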
Kelsey Waddill:
Yeah, I think that captures pretty well the complexity of the role that AI plays in healthcare right now. The questions being asked in each of those points we could dive into for a while. But I am curious: we want to do this well, we want to build out these guidelines well, but there's also a bit of time pressure, it seems, from my perspective. As you alluded to with the privacy and security piece, there are those who want to exploit any holes that we haven't quite figured out how to cover yet for malicious intent. There's that time pressure. And there's also just the time pressure of: people are generating uses of AI at a very rapid pace, and we don't have a structure for this yet that's set in stone.
So I was curious what you would recommend in terms of prioritization in these conversations. Obviously that list you just mentioned I'm sure is part of that, but is there anything else that you'd say about how to pinpoint what we need to be working on right now?
Brian Anderson, MD:
It's a good question. In the health space, it's really hard because it's so consequential. A patient's life could be on the line, and yet you have innovators that are innovating in fantastic, amazing ways that could potentially save people's lives, developing new models that have emerging capabilities, identifying potential diagnoses, identifying new molecules that have never been thought of by a human before. And yet, because it's so consequential, we want to have the right guardrails and guidelines in place.
And so one of the approaches that I think we are recommending is two-part. One is, when we formed CHAI, we wanted it to have innovators and regulators at the same table. And so these working groups are going to be focusing on developing these standards, developing the metrics and the methodology for evaluation, with innovators, and regulators, and patients all at the same table. Because you can't have innovation working at the speed of light without an understanding of what safe and effective means and what the guardrails are. You can't have regulators developing those guardrails without understanding what the standards are and the consensus perspective, coming from the innovators, of what good responsible AI looks like. And so one part of the answer to your question is bringing all the right stakeholders to the table and having patients at the center of what we're doing.
The second part is--so, because health is so consequential, there's risk involved. I would argue that there are layers, or different levels, of risk. In an ICU, one might agree there's a level of risk that's pretty significant: a patient's life. They're on that edge in terms of life and death. AI that might be used in helping to move beds around in a hospital? Not so consequential. A patient's life might not necessarily be on the line in determining efficiencies about where beds are moving.
And so from that perspective, we are looking to work with the innovation community to identify where they can accelerate and innovate in safe spaces--bed management efficiency, back-office administration, drafting of emails, a variety of different use cases that aren't as consequential or as risky. Whereas the more risky ones, the ones like in the ICU where life and death may be a matter of what an AI tool is going to recommend or not recommend, those require going much more slowly and thinking through the consequences with more rigor and stronger guidelines and guardrails.
And so that's how we're approaching it, is: identifying where the less risky places are, looking to support innovation, building out guidelines around what responsible, trustworthy AI looks like, while slowly building up to some of those more risky places.
Kelsey Waddill:
That makes sense. And in our last question here, I just wanted to hear what you're excited about for this next year in this space, and specifically what CHAI's doing.
Brian Anderson, MD:
Yeah. I would say we're at a very unique moment in AI. I had shared earlier that all algorithms are, at the end of the day, programs trained on our histories. We have an opportunity to address that set of inequities in our past. And that could take the form of a variety of different responses.
One of them, the one I hope, and the one we're driving to in CHAI is: how do we meaningfully engage communities that haven't traditionally had the opportunity to participate in so many of the digital health revolutions that have come before? As an example, models require data for training. To create a model that performs well, you need to have robust training data for it to be trained and tuned on whatever that population is. And so how can we work with marginalized communities to enable them to tell their digital story so that we can then partner with the model vendors to then train models on data from those communities, so that those models can be used in meaningful ways to help those communities and the health of those communities? That's one of the exciting things I'm looking forward to for the next year.
Kelsey Waddill:
Great. Well, I'm excited too. And thank you so much for this conversation and for making time for it today.
Brian Anderson, MD:
Absolutely. Thanks, Kelsey.
Kelsey Waddill:
Thank you.
Listeners, thank you for joining us on Healthcare Strategies' Industry Perspectives. When you get a chance, subscribe to our channels on Spotify and Apple. And leave us a review to let us know what you think of this new series. More Industry Perspectives are on the way, so stay tuned.
Excerpt from:
What principles should guide artificial intelligence innovation in healthcare? - HealthCareExecIntelligence