Class of ’23: UVA Developed AI To Spot Early Sepsis. 2 Undergrads … – UVA Today

The Team Behind the Tool

Moore has spent much of his career looking for ways to battle sepsis. Recognizing the diagnostic potential of AI, he sought to collaborate with UVA experts.

He phoned a friend, UVA Engineering's Rich Nguyen.

Nguyen, an assistant professor in the Department of Computer Science, who also has an appointment in the School of Data Science, specializes in AI. He put together the cross-disciplinary team.

"We're aiming in this collaboration for the computer scientists and the data scientists to be embedded into the clinical settings," Nguyen said.

The two fourth-year students have served as his research assistants.

Edwards, the statistics major, is minoring in computer science and social entrepreneurship. Boner, a Rodman Scholar, gained experience as a software research intern and as an extern with Cisco before starting on the sepsis project.

As part of their work, the students spent time in the medical ICU, making rounds with medical teams under the direction of Drs. Taison Bell and Kyle Enfield.

Behind the computer, "The team has developed a data engineering pipeline," Nguyen said. "They perform statistical and computational analysis on large-scale clinical data, which allows for fast experimentation with different machine learning models."
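The article does not include the team's code, but the sketch below illustrates the general idea behind that kind of setup: prepare the clinical features once, then swap candidate models in and out for quick comparison. The file name, column names, and model choices here are assumptions for illustration, not details from the project.

```python
# Illustrative sketch only: prepare clinical features once, then compare
# several candidate models over the same data. All names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def load_features(path: str) -> tuple[pd.DataFrame, pd.Series]:
    """Load pre-engineered patient features and infection labels."""
    df = pd.read_csv(path)
    y = df.pop("bloodstream_infection")  # hypothetical ground-truth column
    return df, y

def evaluate(model, X, y) -> float:
    """Cross-validate one candidate model with standardized inputs."""
    pipe = make_pipeline(StandardScaler(), model)
    return cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()

if __name__ == "__main__":
    X, y = load_features("icu_features.csv")  # hypothetical file
    candidates = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=200),
    }
    for name, model in candidates.items():
        print(name, round(evaluate(model, X, y), 3))
```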

The team also includes Joy Qiu, a 2020 School of Data Science alumna, who works at the Center for Advanced Medical Analytics in the UVA School of Medicine.

Computer science alumni Matthew Pillari, a 2022 graduate, and Navid Jahromi, a 2021 graduate, were previously on the project. Pillari is now a machine learning engineer at Imagen, while Jahromi is a software engineer at Palantir Technologies.

It's important to note that no health care decisions have yet been made based on the tool.

That's because the AI is still learning. And in order to learn, the AI is dipping into a vast archive of biometrics. The data is essentially played back, as if in real time, starting with the beginning of a patient's stay.

"We're feeding the AI a bunch of datasets," Boner said. "The model is learning to match those data to tell us either, yes, the patient had bloodstream infection, or no, they didn't have bloodstream infection. We have that ground truth from the medical records. And, so, the AI is learning patterns in the time series that we have, and patterns in the way that a patient's condition is changing over time, that might suggest bloodstream infection."
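To make that concrete, here is a minimal, hypothetical sketch of a replay-and-label setup along the lines Boner describes: each archived stay is sliced into rolling time windows, each window is tagged with the chart-confirmed ground truth, and a classifier learns temporal patterns. The vital-sign columns, window features, and model are placeholders, not the team's actual design.

```python
# Illustrative sketch only, not the project's model. Replays archived
# vital-sign time series, labels each window with ground truth, and
# fits a classifier to the resulting feature rows.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def windows_for_patient(vitals: pd.DataFrame, infected: bool, hours: int = 6):
    """Yield summary features for rolling windows of one patient's stay."""
    vitals = vitals.sort_values("time").set_index("time")
    start = vitals.index.min() + pd.Timedelta(hours=hours)
    for end in pd.date_range(start, vitals.index.max(), freq="1h"):
        window = vitals.loc[end - pd.Timedelta(hours=hours):end]
        if len(window) < 3:  # skip windows with too few observations
            continue
        yield {
            "hr_mean": window["heart_rate"].mean(),
            "hr_slope": np.polyfit(range(len(window)), window["heart_rate"], 1)[0],
            "temp_max": window["temperature"].max(),
        }, int(infected)

def train(stays):
    """Replay archived stays, label each window, and fit a classifier.

    `stays` is an iterable of (vitals DataFrame, infected bool) pairs,
    a stand-in for however the archive would actually be loaded.
    """
    rows, labels = [], []
    for vitals, infected in stays:
        for feats, label in windows_for_patient(vitals, infected):
            rows.append(feats)
            labels.append(label)
    return GradientBoostingClassifier().fit(pd.DataFrame(rows), labels)
```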

The effort is looking closely at specific types of patients, such as transplant recipients, because they can have differing physiological responses to infection, Moore said.

That's resulted in some new discoveries.

"Transplant patients are immunocompromised," the doctor explained. "That's due to receiving anti-rejection medicines. They are thought to not mount the same clinical signature of physiological response to infection as immunocompetent patients."

"Our data suggest that they do, in fact, mount a robust response. But it's likely not the same response as an immunocompetent patient. This finding may help us better identify bloodstream infections in this patient population."

One dilemma for doctors caring for transplant patients is intervention versus risk. Overuse of antibiotics, for example, can lead to antibiotic resistance and other unintended effects.

Having AI that can read the nuanced differences among individuals would allow for better-informed, more personalized care.

Like the technology itself, the students have been doing a lot of deep learning.

Edwards said she learned about the challenges associated with using AI in medicine. Being able to gain the direct insights of doctors and other medical professionals boosted her own understanding, she said. In turn, she hopes that translates to the tool.

"Within our research, I focus specifically on explainable artificial intelligence," she said. "Explainability refers to an AI model's ability to explain its behavior in human terms. Many of the most powerful machine learning models are so complex that the way they make predictions isn't clearly understood. Explainability is critical for building trust in a machine learning model, and it's especially important in a clinical setting where lives are at stake."
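One common, model-agnostic way to probe a trained model is permutation importance, which asks how much each input feature contributes to the model's predictions. The sketch below illustrates that general idea with scikit-learn; it is not the specific explainability method the project uses, and the function arguments are placeholders.

```python
# Illustrative sketch of a simple explainability check: rank features by
# how much shuffling each one degrades a trained model's performance.
from sklearn.inspection import permutation_importance

def explain(model, X_valid, y_valid, feature_names):
    """Print features ranked by permutation importance on held-out data."""
    result = permutation_importance(
        model, X_valid, y_valid,
        n_repeats=20, random_state=0, scoring="roc_auc",
    )
    ranked = sorted(zip(feature_names, result.importances_mean),
                    key=lambda pair: -pair[1])
    for name, importance in ranked:
        print(f"{name:>20s}: {importance:+.3f}")
```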

She added that, no matter where her career ends up taking her, she hopes to continue working at the intersection of technology and social impact.

Boner, in addition to contributing to the AI's deep-learning layers, wrote a conference paper with Nguyen and Moore as part of an undergraduate consortium.

"Through this project, I've learned, first and foremost, how to do research," Boner said. "I've collaborated with both technical and non-technical researchers toward a common goal, which has been very valuable."

He plans to pursue a doctorate in computer science at Duke University, where he'll be focusing on interpretable AI for health care applications.

Moore praised both students' many contributions to the project.

"Louisa and Zack have been integral members of our research team," Moore said. "Not only are they extremely talented and technically gifted in computer science and AI, but they are also intellectually curious and bring a fresh set of eyes and ideas to the problem of infection detection in the ICU. They have been a pleasure to work with, and I've learned a lot from them."

Currently, the AI has the combined wisdom of 40,500 anonymized patient records, consisting of 4.1 million laboratory measurements, from which to draw.

