
Envision Healthcare Radiologists Harness Artificial Intelligence to Enhance Care Quality, Patient Experience – Yahoo Finance

NASHVILLE, Tenn., July 26, 2021--(BUSINESS WIRE)--Envision Healthcare, a leading national medical group, today announced that its radiologists are successfully leveraging artificial intelligence (AI) to enhance clinical evaluations and the delivery of high-quality, patient-centered care. The newly implemented AI software assists radiologists with disease detection, case prioritization and diagnosis, and has been optimized to detect three common and consequential medical emergencies: intracranial hemorrhages, pulmonary embolisms and cervical spine fractures.

This technology is being rolled out to radiologists using Envision's exclusive platform. The AI software provides additional support in analyzing medical images and notifies radiologists of suspected findings to help them prioritize time-sensitive conditions, such as a stroke or perforated bowel. With AI helping to enhance diagnostic accuracy and prioritize acute cases, patients can receive more timely treatment based on their condition and acuity level.

"Our radiology team does a tremendous job of reading scans to evaluate and diagnose millions of patients' conditions with accuracy and timeliness," said Maria Rodriguez, MD, Chief of Radiology at Envision Healthcare. "We are continuously reviewing best practices and ways to advance the delivery of high-quality care. AI technology is one of the tools we can use to complement our clinical expertise, so we can continue achieving accurate reads and providing the right care to patients at the right time, ultimately saving their lives and improving overall health outcomes."

"AI has become invaluable, enabling radiologists to maintain and improve the quality of care we provide while meeting the growing demand for our expertise," said Chris Granville, MD, the Envision Healthcare radiologist leading the medical group's AI implementation. "As one of the largest U.S. radiology groups caring for millions of patients from different backgrounds and locations, we have a highly unique and diversified dataset, which is integral to augmenting deep learning within AI. While we continue strengthening our AI application to improve our workflows and patient care, our ultimate goal is to use our dataset to help advance the AI community at large."


Envision cares for 32 million patients every year, with its team of 800 radiologists conducting more than 10 million reads annually. Envision is uniquely positioned to improve the health of communities across the country. The medical group recently released performance data revealing it outperformed national benchmarks from the Centers for Medicare & Medicaid Services on key quality metrics for safe, timely, effective, patient-centered care in 2020. For radiology, Envision's turnaround times remained below the national standard of 30 minutes. The turnaround time for strokes, one of the most crucial diagnoses, was 27 percent lower than the national benchmark, allowing for faster cross-specialty clinical decision making, such as the administration of thrombolytics (clot busters), which leads to better recovery from strokes.

About Envision Healthcare Corporation

Envision Healthcare Corporation is a leading national medical group that delivers physician and advanced practice provider services, primarily in the areas of emergency and hospitalist medicine, anesthesiology, radiology/teleradiology, and neonatology to more than 1,800 clinical departments in healthcare facilities in 45 states and the District of Columbia. Post-acute care is delivered through an array of clinical professionals and integrated technologies which, when combined, contribute to efficient and effective population health management strategies. As a leader in ambulatory surgical care, the medical group operates and holds ownership in more than 250 surgery centers in 34 states and the District of Columbia, with medical specialties ranging from gastroenterology to ophthalmology and orthopaedics. In total, the medical group offers a differentiated suite of clinical solutions on a national scale with a local understanding of our communities, creating value for health systems, payers, providers, and patients. For additional information, visit http://www.envisionhealth.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20210726005544/en/

Contacts

Aliese Polk
Aliese.Polk@EnvisionHealth.com
http://www.envisionhealth.com


U of T team working to address biases in artificial intelligence systems – News 1130

A University of Toronto team has launched a free service to address biases in artificial intelligence (AI) systems, a technology that is increasingly used around the world and has the potential for life-changing impacts on individuals.

"Almost every AI system we tested has some sort of significant bias," says Parham Aarabi, a professor at the University of Toronto. "For the past few years, one of the realizations has been that these AI systems are not always fair and they have different levels of bias. The challenge has been that it's been hard to know how much bias there is and what kind of bias there might be."

Earlier this year Aarabi, who has spent the last 20 years working on different kinds of AI systems, and his colleagues started HALT, a University of Toronto project launched to measure bias in AI systems, especially when it comes to recognizing diversity.

AI systems are used in many places: in airports, by governments, health agencies, and police forces, and in cell phones, social media apps, and, in some cases, company hiring processes. Sometimes it's as simple as walking down the street and having your face recognized.

However, it's humans who design the data and systems that make up an AI system, and that's where researchers say the biases can be created.

"More and more, our interactions with the world are through artificial intelligence," Aarabi says. "AI is around us and it involves us. We believe that if AI is unfair and has biases, it doesn't lead to good places, so we want to avoid that."

The HALT team works with universities, companies, governments, and agencies that use AI systems. It can take them up to two weeks to perform a full evaluation, measuring the amount of bias present in the technologies, and the team can pinpoint exactly which demographics are being left out or impacted.

"We can quantitatively measure how much bias there is, and from that, we can actually estimate what training data gaps there are," says Aarabi. "The hope is they can take that and improve their system, get more training data, and make it more fair."
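HALT's internal methodology isn't detailed in the article, but the idea of quantitatively measuring bias and tracing it back to training-data gaps can be sketched in a few lines. The group names and the tolerance threshold below are illustrative, not HALT's actual parameters: the sketch simply scores a model's predictions separately per demographic group, then flags groups that trail the best-served group.

```python
# Illustrative sketch (not HALT's actual tooling): measure a model's accuracy
# per demographic group, then flag groups whose accuracy trails the best-served
# group -- a rough proxy for where training data is likely missing.
from collections import defaultdict

def accuracy_by_group(records):
    """records: (demographic_group, predicted_label, true_label) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, truth in records:
        totals[group] += 1
        hits[group] += (pred == truth)
    return {g: hits[g] / totals[g] for g in totals}

def flag_gaps(acc, tolerance=0.05):
    """Groups more than `tolerance` below the best-served group's accuracy."""
    best = max(acc.values())
    return sorted(g for g, a in acc.items() if best - a > tolerance)

# Hypothetical evaluation results for two groups.
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]
acc = accuracy_by_group(results)
print(acc)             # per-group accuracy
print(flag_gaps(acc))  # groups likely underrepresented in training data
```

In this toy run, group_b's predictions are only half right while group_a's are all correct, so group_b is flagged; a real evaluation would use far larger samples and more nuanced metrics (false-positive rates, confidence gaps) per group.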

To help their clients or partners, the team also provides a report with guidelines on how the evaluated AI system can be improved and made fairer.

Each case is unique, but Aarabi and his team have so far worked on 20 different AI systems and found that the number one issue has been the lack of training data for certain demographics.

"If what you teach the AI is biased, for example if you don't have enough training data covering diverse inputs and individuals, then that AI does become biased," he says. "Other things, like the model type and being aware of what to look at and how to design AI systems, can also make an impact."

The HALT team has evaluated technologies including facial recognition, image analysis, and even voice-based systems.

"We found that even the dictation systems in our phones can be quite biased when it comes to dialect," Aarabi says. "For native English speakers, it works reasonably well. But if people have a certain kind of accent, or different accents, then the accuracy level can drop substantially and the phones become less usable."

Facial recognition has faced increased scrutiny over the years, as experts warn of its potential to perpetuate racial inequality. In parts of the world, the technology has been used by the criminal justice system and immigration enforcement, and there have been reports that it has led to the wrongful identification and arrest of Black men in the U.S.

The American Civil Liberties Union has called for a halt to face surveillance technologies, saying facial recognition technology is racist, from how it was built to how it is used.

With the persisting use of these technologies, there have been calls and questions around the regulation of AI systems.

"It's very important that when we use AI systems, or when governments use AI systems, that there be rules in place to make sure they're fair and validated to be fair," Aarabi says. "I think governments are slowly waking up to that reality, but I do think we need to get there."

Former three-term Privacy Commissioner of Ontario Ann Cavoukian says most people are unaware of the consequences of AI and its potential positives and negatives, including the biases that exist.

"We found that the biases have occurred against people of colour, people of Indigenous backgrounds," she says. "The consequences have to be made clear, and we have to look under the hood. We have to examine it carefully."

Earlier this year, an investigation found that the use of Clearview AI's facial-recognition technology in Canada violated federal and provincial laws governing personal information.

In response to the investigation, the U.S. firm announced it would stop offering its facial-recognition services in Canada, including suspending Clearview's contract with the RCMP.

"They slurp people's images off of social media and use them without any consent or notice to the data subjects involved," says Cavoukian, who is now the Executive Director of the Global Privacy and Security by Design Centre. "3.3 billion facial images stolen, in my view, slurped from various social media sites."

Until recently, Cavoukian adds, law enforcement agencies were using the technology unbeknownst to their police chiefs, most recently the RCMP. She says it's important to raise awareness about which AI systems are used and what their limitations are, particularly in their interactions with the public.

"Government has to ensure that whatever it relies on for information that it acts on is in fact accurate, and that's largely missing with AI," Cavoukian says. "The AI has to work equally for all of us, and it doesn't. It's biased, so how can we tolerate that?"

RELATED: Canadian Civil Liberties Association has serious concerns about CCTV expansion in Ontario

Calls to address bias in AI aren't only happening in Canada.

Late last month, the World Health Organization issued its first global report on Artificial Intelligence in health, saying the growing use of the technology comes with opportunities and challenges.

The technology has been used to diagnose and screen for diseases and to support public health interventions in disease management and response.

However, the report, which was informed by a panel of experts appointed by the WHO, points out the risks of AI, including biases encoded in algorithms and the unethical collection and use of health data.

The researchers say AI systems trained on data collected from people in high-income countries may not perform as well for people in low- and middle-income countries.

"Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology, it can also be misused and cause harm," said Dr. Tedros Adhanom Ghebreyesus, WHO's Director-General.

"This important new report provides a valuable guide for countries on how to maximize the benefits of AI, while minimizing its risks and avoiding its pitfalls."

The health agency adds that AI systems should be carefully designed and trained to reflect the diversity of socio-economic and healthcare settings, and that governments, providers, and designers should work together to address ethical and human rights concerns at every stage of an AI system's design and development.


Machine Learning for Cardiovascular Disease Improves When Social, Environmental Factors Are Included – NYU News

Research emphasizes the need for algorithms that incorporate community-level data, studies that include more diverse populations

Machine learning can accurately predict cardiovascular disease and guide treatment, but models that incorporate social determinants of health better capture risk and outcomes for diverse groups, finds a new study by researchers at New York University's School of Global Public Health and Tandon School of Engineering. The article, published in the American Journal of Preventive Medicine, also points to opportunities to improve how social and environmental variables are factored into machine learning algorithms.

Cardiovascular disease is responsible for nearly a third of all deaths worldwide and disproportionately affects lower socioeconomic groups. Increases in cardiovascular disease and deaths are attributed, in part, to social and environmental conditions, also known as social determinants of health, that influence diet and exercise.

"Cardiovascular disease is increasing, particularly in low- and middle-income countries and among communities of color in places like the United States," said Rumi Chunara, associate professor of biostatistics at NYU School of Global Public Health and of computer science and engineering at NYU Tandon School of Engineering, as well as the study's senior author. "Because these changes are happening over such a short period of time, it is well known that our changing social and environmental factors, such as increased processed foods, are driving this change, as opposed to genetic factors, which would change over much longer time scales."

Machine learning, a type of artificial intelligence used to detect patterns in data, is being rapidly developed in cardiovascular research and care to predict disease risk, incidence, and outcomes. Already, statistical methods are central in assessing cardiovascular disease risk and U.S. prevention guidelines. Developing predictive models gives health professionals actionable information by quantifying a patient's risk and guiding the prescription of drugs or other preventive measures.

Cardiovascular disease risk is typically computed using clinical information, such as blood pressure and cholesterol levels, but models rarely take social determinants, such as neighborhood-level factors, into account. Chunara and her colleagues sought to better understand how social and environmental factors are beginning to be integrated into machine learning algorithms for cardiovascular disease: what factors are considered, how they are being analyzed, and what methods improve these models.

"Social and environmental factors have complex, non-linear interactions with cardiovascular disease," said Chunara. "Machine learning can be particularly useful in capturing these intricate relationships."

The researchers analyzed existing research on machine learning and cardiovascular disease risk, screening more than 1,600 articles and ultimately focusing on 48 peer-reviewed studies published in journals between 1995 and 2020.

They found that including social determinants of health in machine learning models improved the ability to predict cardiovascular outcomes like rehospitalization, heart failure, and stroke. However, these models did not typically include the full list of community-level or environmental variables that are important in cardiovascular disease risk. Some studies did include additional factors such as income, marital status, social isolation, pollution, and health insurance, but only five studies considered environmental factors such as the walkability of a community and the availability of resources like grocery stores.
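The study's finding, that adding social determinants of health (SDOH) as model features improves prediction, can be illustrated with a minimal, self-contained sketch. Everything below is synthetic and hypothetical, not the study's data or models: a plain-Python logistic regression is fit twice on the same simulated patients, once on a clinical feature alone and once with a community-level feature added, and the two are compared on held-out data.

```python
# Toy illustration (synthetic data, not the reviewed studies' models): a model
# given a community-level SDOH feature should beat one using clinical data alone
# when the outcome genuinely depends on both.
import math
import random

random.seed(0)

def make_patient():
    bp = random.gauss(0, 1)    # standardized clinical feature (e.g., blood pressure)
    sdoh = random.gauss(0, 1)  # standardized community feature (e.g., food access)
    logit = 0.8 * bp + 1.5 * sdoh
    y = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    return bp, sdoh, y

data = [make_patient() for _ in range(600)]
train, test = data[:400], data[400:]

def fit(rows, use_sdoh, epochs=300, lr=0.1):
    """Logistic regression via per-sample gradient descent."""
    w, b = [0.0, 0.0], 0.0  # the sdoh weight stays 0 when the feature is masked
    for _ in range(epochs):
        for bp, sdoh, y in rows:
            x = [bp, sdoh if use_sdoh else 0.0]
            p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            g = p - y
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

def accuracy(model, rows, use_sdoh):
    (w, b), hits = model, 0
    for bp, sdoh, y in rows:
        x = [bp, sdoh if use_sdoh else 0.0]
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        hits += (p >= 0.5) == (y == 1)
    return hits / len(rows)

acc_base = accuracy(fit(train, use_sdoh=False), test, use_sdoh=False)
acc_aug = accuracy(fit(train, use_sdoh=True), test, use_sdoh=True)
print(f"clinical only: {acc_base:.2f}  with SDOH: {acc_aug:.2f}")
```

Because the simulated outcome depends more strongly on the community-level feature than on the clinical one, the augmented model's held-out accuracy is clearly higher, mirroring in miniature the review's finding that SDOH-aware models better capture risk.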

The researchers also noted the lack of geographic diversity in the studies, as the majority used data from the United States, countries in Europe, and China, neglecting many parts of the world experiencing increases in cardiovascular disease.

"If you only do research in places like the United States or Europe, you'll miss how social determinants and other environmental factors related to cardiovascular risk interact in different settings, and the knowledge generated will be limited," said Chunara.

"Our study shows that there is room to more systematically and comprehensively incorporate social determinants of health into cardiovascular disease statistical risk prediction models," said Stephanie Cook, assistant professor of biostatistics at NYU School of Global Public Health and a study author. "In recent years, there has been a growing emphasis on capturing data on social determinants of health, such as employment, education, food, and social support, in electronic health records, which creates an opportunity to use these variables in machine learning studies and further improve the performance of risk prediction, particularly for vulnerable groups."

"Including social determinants of health in machine learning models can help us to disentangle where disparities are rooted and bring attention to where in the risk structure we should intervene," added Chunara. "For example, it can improve clinical practice by helping health professionals identify patients in need of referral to community resources like housing services, and it broadly reinforces the intricate synergy between the health of individuals and our environmental resources."

In addition to Chunara and Cook, study authors include Yuan Zhao, Erica Wood, and Nicholas Mirin, students at the NYU School of Global Public Health. The research was supported by funding from the National Science Foundation (IIS-1845487).

About the NYU School of Global Public Health

At the NYU School of Global Public Health (NYU GPH), we are preparing the next generation of public health pioneers with the critical thinking skills, acumen, and entrepreneurial approaches necessary to reinvent the public health paradigm. Devoted to employing a nontraditional, interdisciplinary model, NYU GPH aims to improve health worldwide through a unique blend of global public health studies, research, and practice. The School is located in the heart of New York City and extends to NYU's global network on six continents. Innovation is at the core of our ambitious approach, thinking and teaching. For more, visit http://publichealth.nyu.edu/.

About the New York University Tandon School of Engineering

The NYU Tandon School of Engineering dates to 1854, the founding date for both the New York University School of Civil Engineering and Architecture and the Brooklyn Collegiate and Polytechnic Institute (widely known as Brooklyn Poly). A January 2014 merger created a comprehensive school of education and research in engineering and applied sciences, rooted in a tradition of invention and entrepreneurship and dedicated to furthering technology in service to society. In addition to its main location in Brooklyn, NYU Tandon collaborates with other schools within NYU, one of the country's foremost private research universities, and is closely connected to engineering programs at NYU Abu Dhabi and NYU Shanghai. It operates Future Labs focused on start-up businesses in downtown Manhattan and Brooklyn and an award-winning online graduate program. For more information, visit http://engineering.nyu.edu.


How Olympic Surfing Is Trying to Ride the Machine Learning Wave – The Wall Street Journal

TOKYO - South African surfer Bianca Buitendag uses some apps and websites to gauge wind and wave conditions before she competes, but she doesn't consider surfing a high-tech sport. "It's mostly about trying to gauge the weather. That's about it," she said this week.

Carissa Moore, who on Tuesday faced off with Buitendag for the sport's first-ever Olympic gold medal, takes a different approach. She loads up on performance analytics, wave pools, and science. The American, who beat Buitendag by nearly 6.5 points to win the gold medal on Tuesday, has competed on artificial waves and uses technology such as a wearable ring that tracks her sleep and other vitals to help her coaches fine-tune her training and recovery.

Their different approaches go to the heart of a long-running tension in surfing: dueling images of the spiritual, naturalist wave rider versus the modern, techie athlete.

"There's this illusion that you're trying to sustain, even if you're aware of all the stuff that's gone into [surfing]," said Peter Westwick, a University of Southern California surfing historian. He's talking about the use of advanced polymer-chemistry-enabled products in surfboards and wetsuits, and the complex weather modeling that helps govern where and how competitions like this Olympic event are held. The tech has roots in military research and development, he said.


Holly Herndon on the power of machine learning and developing her digital twin Holly+ – The FADER

The FADER: Holly Herndon, thank you for joining us today for The FADER interview.

Holly Herndon: Thanks for having me.

So Holly+ has been live for about 24 hours. How have you felt about its reception so far?

Honestly, I've been really super pleased with it. I think at one point there were 10 hits to the server a second. So that means people were kind of going insane uploading stuff and that's basically what I wanted to happen. So I've been really, really happy with it. I also am happy with people kind of understanding that it's like, this is still kind of a nascent tech, so it's not a perfect rendition of my voice, but it's still, I think, a really interesting and powerful tool. And I think most people really got that. So I've been really pleased.

That's one of the things that drew me to Holly+ when I first read about it in a press release was that it seems like the technology is being developed specifically for this time and it is nascent and it is sort of still growing, but it feels like an attempt to get in on the ground floor of something that is already happening in a lot of different sectors of technology.

I mean, I've been working with, I like to say machine learning rather than artificial intelligence, because I feel like artificial intelligence is just such a loaded term. People imagine kind of like Skynet, it's kind of sentient. So I'm going to use machine learning for this conversation. But I've been working with machine learning for several years now. I mean the last album that I made, PROTO, I was creating kind of early models of my voice and also the voices of my ensemble members and trying to create kind of a weird hybrid ensemble where people are singing with models of themselves. So it's been going on for a while and of course machine learning has been around for decades, but there have been some really interesting kind of breakthroughs in the last several years that I think is why you see so much activity in this space now.

It's just so much more powerful now I think, than it was a couple decades back. We had some really interesting style transfer, white papers that were released. And so I think it's an exciting time to be involved with it. And I was really wanting to release a public version of kind of a similar technique that I was using on my album that people could just play with and have fun with. And I was actually just kind of reaching out to people on social media. And Yotam sent me back a video of one of my videos, but he had translated it into kind of an orchestral passage. And he was like, "I'm working on this exact thing right now." So it was perfect timing. And so we kind of linked up and started working on this specific Holly+ model.

Talk to me a little bit about some of those really powerful developments in machine learning that have informed Holly+.

Oh gosh. I mean, there's a whole history to go into. But I guess a lot of the research that was happening previously is a lot of people were using MIDI. Basically kind of trying to analyze MIDI scores to create automatic compositions in the style of an existing composer or a combination of existing composers. And I found that to be not so interesting. I'm really interested in kind of the sensual quality of audio itself. And I feel like so much is lost in a MIDI file. So much is lost in a score even. It's like the actual audio is material I find really interesting. And so when some of the style-transfer white papers started to happen and we could start to deal with audio material rather than kind of a representation of audio through MIDI or through score material, that's when things I think got to be really interesting.

So you could imagine if you could do a style transfer of kind of any instrument onto any other instrument. Some of the really unique musical decisions that one might make as a vocalist or as a trombonist or as a guitarist, they're very different kind of musical decisions that you make depending on the kind of physical instrument that you're playing. And if you can kind of translate that to another instrument that would never make those kinds of same decisions, and I'm giving the instrument sentience here, but you know what I mean. Some of the kinds of decisions that a musician would make playing a specific instrument, if you can translate that onto others, I find that a really interesting kind of new way to make music and to find expression through sound generation. And I do think it is actually new for that reason.
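Herndon's description, carrying one instrument's performance decisions onto another instrument's sound, can be crudely illustrated outside the neural-network setting she is actually describing. The toy below is an assumption-laden stand-in, not her method: it transfers only the amplitude envelope (the loudness contour) of a "source" performance onto a "target" tone, whereas real systems learn far richer mappings with deep networks on raw audio.

```python
# Toy illustration (not the deep-learning style transfer described above):
# impose one signal's amplitude envelope, a crude stand-in for performance
# "expression", onto another signal's pitch content.
import math

SR = 8000  # sample rate in Hz

def sine(freq, seconds):
    return [math.sin(2 * math.pi * freq * n / SR) for n in range(int(SR * seconds))]

def envelope(signal, window=400):
    """Rough amplitude envelope: windowed peak of the absolute signal."""
    env = []
    for i in range(len(signal)):
        lo = max(0, i - window)
        env.append(max(abs(s) for s in signal[lo:i + 1]))
    return env

# "Source" performance: a 220 Hz note that swells and fades (vocal-like dynamics).
source = [s * math.sin(math.pi * n / (SR * 1.0)) for n, s in enumerate(sine(220, 1.0))]
# "Target" instrument: a flat, steady tone an octave up.
target = sine(440, 1.0)

# Impose the source's dynamics on the target's pitch content.
env = envelope(source)
hybrid = [t * e for t, e in zip(target, env)]

# The hybrid keeps the target's frequency but the source's loudness contour:
# near-silent at the start, loudest in the middle.
print(max(hybrid[:100]), max(hybrid[3500:4500]))
```

A learned style transfer would generalize this idea from one hand-picked feature (loudness) to whatever performance characteristics the network can extract from audio, which is why the technique is so much more interesting on raw waveforms than on MIDI.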

I also wanted to talk a little bit about some of the ethical discussions around machine learning and some of the developments that have happened over the past year. Of course, the last time we spoke, it was about an AI-generated version of a Travis Scott song, created from his music without his consent. And over the past year as well, a Korean company has managed to develop an AI based on a deceased popular singer, and an entire reality show, called AI vs. Human, was created around it. So I was wondering if these sorts of developments in this sphere informed how you approached Holly+ and the more managerial aspects of how you wanted to present it to the world.

This is something that I think about quite a lot. I think that voice models, or even kind of physical likeness models or kind of style emulation, I think it opens up a whole new kind of question for how we deal with IP. I mean, we've been able to kind of re-animate our dead through moving picture or through samples, but this is kind of a brand new kind of field in that you can have the person do something that they never did. It's not just kind of replaying something that they've done in the past. You can kind of re-animate them in and give them entirely new phrases that they may not have approved of in their lifetime or even for living artists that they might not approve of. So I think it opens up a kind of Pandora's box.

And I think we're kind of already there. I mean if you saw the Jukebox project, which was super impressive. I mean, they could start with a kind of known song and then just kind of continue the song with new phrases and new passages and in a way that kind of fit the original style. It's really powerful. And we see some of the really convincing Tom Cruise deep fakes and things. These are kind of part of, I think, our new reality. So I kind of wanted to jump in front of that a little bit. There's kind of different ways that you could take it. You could try to be really protective over your self and your likeness. And we could get into this kind of IP war where you're just kind of doing take downs all the time and trying to hyper control what happens with your voice or with your likeness.

And I think that that is going to be a really difficult thing for most people to do, unless you kind of have a team of lawyers, which I'm sure that's probably already happening with people who do have teams of lawyers. But I think the more interesting way to do it is to kind of open it up and let people play with it and have fun with it and experiment. But then if people want to have kind of an officially approved version of something, then that would go through myself and my collective, which is represented through a DAO. And we can kind of vote together on the stewardship of the voice and of the likeness. And I think it really goes back to kind of really fundamental questions like who owns a voice? What does vocal sovereignty mean?

These are kind of huge questions because in a way a voice is inherently communal. I learned how to use my voice by mimicking the people around me through language, through centuries of evolution on that, or even vocal styles. A pop music vocal is often you're kind of emulating something that came before and then performing your individuality through that kind of communal voice. So I wanted to find a way to kind of reflect that communal ownership and that's why we decided to set up the DAO to kind of steward it as a community, essentially.

I saw on Twitter Professor Aaron Wright, he described DAOs as, "Subreddits with bank accounts and governance that can encourage coordination rather than shit posting and mobs." So how did you choose the different stewards that make up the DAO?

That's a really good question. And it's kind of an ongoing thing that's evolving. It's easy to say, "We figured out the DAO and it's all set up and ready to go." It's actually this thing that's kind of in process and we're working through the mechanics of that as we're going. It's also something that's kind of in real-time unfolding in terms of legal structures around that. I mean, Aaron, who you mentioned, he was part of the open law team that passed legislation in Wyoming recently to allow DAOs to be legally recognized entities, kind of like an LLC, because there's all kinds of, really boring to most people probably, complications around if a group of people ingest funds, who is kind of liable for tax for the XYZ? So there's all kinds of kind of regulatory frameworks that have to come together in order to make this kind of a viable thing.

And Aaron's done a lot of the really heavy lifting on making some of that stuff come about. In terms of our specific DAO, we're starting it out me and Matt. We started the project together and we've also invited in our management team from RVNG and also Chris and Yotam from Never Before Heard Sounds who created the voice model with us. And as well, we plan on having a kind of gallery that we're working on with Zuora. And so the idea is that people can make works with Holly+ and they can submit those works to the gallery. And the works that are approved or selective, then there's kind of a split between the artist and the gallery, the gallery being actually the DAO. And then any artist who presents in that way will also be invited into the DAO. So it's kind of ongoing. There will probably be other ways to onboard onto the DAO as we go, but we're wanting to keep it really simple as we start and not try to put the cart before the horse.

Now, of course, Holly+ is free to use right now for anyone who wants to visit the website. I was hoping you could explain to me how the average listener or a consumer of art can discern the difference between an official artwork that's been certified by the DAO versus something that was just uploaded to the website and put into a track or a piece of art?

This is something we had to think about for a long time. It was like, "Do we want to ask people to ask for permission to use it in their tracks to release on Spotify or to upload?" And actually we came to the conclusion that we actually just wanted people to use it. It's not about trying to collect any kind of royalties in that way. I just want people to have fun with it and use it. So in terms of creating works and publishing them, it's completely free and open for anyone to use. We're kind of treating it almost like a VST, like a free VST at this point. So you can use it on anything and it's yours and what you make with it is yours. And you can publish that. And that is 100% yours.

We do have this gallery that we're launching on Zuora. That space is a little bit different in that you can propose a work to the DAO and then the DAO votes on which works we want to include in the gallery. And then those works, there would be a kind of profit split between the DAO and the artists. And basically the funds that are ingested from that, if those works do sell, are basically to go back to producing more tools for Holly+. It's not about trying to make any kind of financial gain, really. It's about trying to continue the development of this project.

Do you have any idea of what those future tools could look like right now?

Well, I don't want to give too much away, but there will be future iterations. So there might be some real-time situations. There might be some plugin situations. There's all kinds of things that we're working on. I mean, I think right now this first version, Chris and Yotam have been able to figure out how to transfer polyphonic audio into a model, which is... I'm a major machine learning nerd. So for me, I'm like, "Oh my God, I can't believe you all figured that out." That's been such a difficult thing for people to figure out. Usually people are doing monophonic, just simple one instrument, monophonic lines. But you can just put in a full track and it will translate it back. And what you get back still does have that kind of machine learning, scratchy kind of neural net sound to it.

I think because it has that kind of quality it's easier for me to just open up and allow anyone to use that freely. I think as the tools evolve and speech and a more kind of maybe naturalistic likeness to my voice becomes possible, I think that that opens up a whole new set of questions around how that IP should be treated. And I certainly don't have all of the answers. It's definitely something that I'm kind of learning in public, doing and figuring out along the way. But I just see this kind of coming along the horizon and I wanted to try to find, I don't know, cool and interesting and somehow fair ways to try to work this out along the way.

Follow this link:
Holly Herndon on the power of machine learning and developing her digital twin Holly+ - The FADER


Automated machine learning optimizes and accelerates predictive modeling from COVID-19 high throughput datasets – DocWire News


Sci Rep. 2021 Jul 23;11(1):15107. doi: 10.1038/s41598-021-94501-0.

ABSTRACT

The COVID-19 outbreak brings intense pressure on healthcare systems, with an urgent demand for effective diagnostic, prognostic and therapeutic procedures. Here, we employed Automated Machine Learning (AutoML) to analyze three publicly available high throughput COVID-19 datasets, including proteomic, metabolomic and transcriptomic measurements. Pathway analysis of the selected features was also performed. Analysis of a combined proteomic and metabolomic dataset led to 10 equivalent signatures of two features each, with AUC 0.840 (CI 0.723-0.941) in discriminating severe from non-severe COVID-19 patients. A transcriptomic dataset led to two equivalent signatures of eight features each, with AUC 0.914 (CI 0.865-0.955) in identifying COVID-19 patients from those with a different acute respiratory illness. Another transcriptomic dataset led to two equivalent signatures of nine features each, with AUC 0.967 (CI 0.899-0.996) in identifying COVID-19 patients from virus-free individuals. Signature predictive performance remained high upon validation. Multiple new features emerged and pathway analysis revealed biological relevance by implication in Viral mRNA Translation, Interferon gamma signaling and Innate Immune System pathways. In conclusion, AutoML analysis led to multiple biosignatures of high predictive performance, with reduced features and large choice of alternative predictors. These favorable characteristics are eminent for development of cost-effective assays to contribute to better disease management.
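For readers curious what evaluating a small feature signature "with AUC and a confidence interval" looks like in practice, here is a minimal Python sketch. The data, the logistic model, and the percentile-bootstrap procedure are illustrative assumptions, not the paper's actual AutoML pipeline:

```python
# Sketch: score a two-feature signature by AUC with a bootstrap 95% CI.
# All data here is synthetic; real studies use held-out validation sets.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical cohort: 200 patients, 2 selected features, binary severity label.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, scores)

# Percentile bootstrap over patients for a 95% CI on the AUC.
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y), len(y))
    if len(set(y[idx])) < 2:      # AUC needs both classes present
        continue
    boot.append(roc_auc_score(y[idx], scores[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

The abstract's "CI 0.723-0.941"-style ranges are intervals of exactly this kind, though the authors' method for computing them may differ.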

PMID:34302024 | DOI:10.1038/s41598-021-94501-0

View original post here:
Automated machine learning optimizes and accelerates predictive modeling from COVID-19 high throughput datasets - DocWire News


Will Roches Stock Rebound After A 4% Fall Following Its H1 Results? – Forbes


The stock price of Roche Holdings ADR reached its all-time high of $48 just last week, before a recent sell-off led to an over 4% drop to levels of $46 currently. Much of this fall came late last week after the company announced its H1 results. Roche reported sales of 30.7 billion Swiss francs, better than the street estimate of 29.9 billion Swiss francs, led by continued growth in its diagnostics business, courtesy of the company's Covid-19 tests. However, the company also cautioned about a decline in diagnostics sales due to lower Covid-19 testing demand going forward. Furthermore, despite a solid H1, the company didn't revise its guidance upward. All of these factors impacted the stock price move for Roche.

Now, after a 4% fall in a week, will RHHBY stock continue its downward trajectory over the coming weeks, or is a recovery in the stock imminent? According to the Trefis Machine Learning Engine, which identifies trends in the company's stock price using ten years of historical data, returns for RHHBY stock average 3% in the next one-month (twenty-one trading days) period after experiencing a 4% drop over the previous week (five trading days), implying that RHHBY stock can see higher levels from here. Also, Gantenerumab, Roche's treatment for Alzheimer's disease, remains an important trigger for the company going forward. Roche is in discussions with the U.S. FDA, and if approved, the drug would be set to become yet another blockbuster for Roche.

But how would these numbers change if you are interested in holding RHHBY stock for a shorter or longer time period? You can test this and many other combinations on the Trefis Machine Learning Engine to gauge Roche stock's chances of a rise after a fall. You can test the chance of recovery over different time intervals of a quarter, month, or even just one day!

MACHINE LEARNING ENGINE (try it yourself):

IF RHHBY stock moved by -5% over five trading days, THEN over the next twenty-one trading days RHHBY stock moves an average of 3%, with a good 67% probability of a positive return over this period.
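To make the kind of IF/THEN statistic quoted above concrete, here is a minimal Python sketch of how a conditional average forward return can be computed from a price history. The prices are simulated random-walk data, not actual RHHBY quotes, and the method is a plain reconstruction of the idea, not Trefis's proprietary engine:

```python
# Sketch: given a -5% move over 5 trading days, compute the average return
# over the next 21 trading days and how often that return is positive.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical price series: ~10 years of daily closes (2,520 trading days).
prices = 50 * np.cumprod(1 + rng.normal(0.0003, 0.015, size=2520))

trailing = prices[5:] / prices[:-5] - 1            # 5-day trailing returns
fwd = np.full(len(trailing), np.nan)
for i in range(len(trailing) - 21):
    # Forward 21-day return starting where the trailing window ends.
    fwd[i] = prices[i + 5 + 21] / prices[i + 5] - 1

mask = (trailing <= -0.05) & ~np.isnan(fwd)        # "dropped 5%+ in a week"
avg_return = np.nanmean(fwd[mask])
p_positive = np.mean(fwd[mask] > 0)
print(f"events: {mask.sum()}, avg 21-day return: {avg_return:.2%}, "
      f"P(positive): {p_positive:.0%}")
```

Note that overlapping windows make these events highly correlated, which is one reason such historical conditional averages should be read as descriptive statistics rather than forecasts.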

Some Fun Scenarios, FAQs & Making Sense of Roche Stock Movements:

Question 1: Is the average return for Roche stock higher after a drop?

Answer: Consider two situations:

Case 1: Roche stock drops by 5% or more in a week

Case 2: Roche stock rises by 5% or more in a week

Is the average return for Roche stock higher over the subsequent month after Case 1 or Case 2?

RHHBY stock fares better after Case 1, with an average return of 2.9% over the next month (21 trading days) after the stock has just suffered a 5% loss over the previous week, versus an average return of 0.1% for Case 2.

In comparison, the S&P 500 has an average return of 3.1% over the next 21 trading days under Case 1, and an average return of just 0.5% for Case 2, as shown in our dashboard on the average return for the S&P 500 after a fall or rise.

Try the Trefis machine learning engine above to see for yourself how Roche stock is likely to behave after any specific gain or loss over a period.

Question 2: Does patience pay?

Answer: If you buy and hold Roche stock, the expectation is that, over time, the near-term fluctuations will cancel out and the long-term positive trend will favor you, at least if the company is otherwise strong.

Overall, according to data and the Trefis machine learning engine's calculations, patience absolutely pays for most stocks!

For RHHBY stock, the returns over the next N days after a -5% change over the last 5 trading days are detailed in the table below, along with the returns for the S&P 500:

(Table: RHHBY average return over the next N days, alongside the S&P 500)

You can try the engine to see what this table looks like for Roche after a larger loss over the last week, month, or quarter.

Question 3: What about the average return after a rise if you wait for a while?

Answer: The average return after a rise is understandably lower than after a fall as detailed in the previous question. Interestingly, though, if a stock has gained over the last few days, you would do better to avoid short-term bets for most stocks.

It's pretty powerful to test the trend for yourself for Roche stock by changing the inputs in the charts above.

While Roche stock looks like it can gain more, 2020 has created many pricing discontinuities which can offer attractive trading opportunities. For example, you'll be surprised how counter-intuitive the stock valuation is for Mettler vs Abbott.

See all Trefis Featured Analyses and Download Trefis Data here

Here is the original post:
Will Roches Stock Rebound After A 4% Fall Following Its H1 Results? - Forbes


AI Machine Learning Could be Latest Healthcare Innovation – The National Law Review

As we mentioned in the early days of the pandemic, COVID-19 has been accompanied by a rise in cyberattacks worldwide. At the same time, the global response to the pandemic has accelerated interest in the collection, analysis, and sharing of data (specifically, patient data) to address urgent issues, such as population management in hospitals, diagnoses and detection of medical conditions, and vaccine development, all through the use of artificial intelligence (AI) and machine learning. Typically, AI/ML churns through huge amounts of real-world data to deliver useful results. This collection and use of data, however, gives rise to legal and practical challenges. Numerous and increasingly strict regulations protect the personal information needed to feed AI solutions. The response has been to anonymize patient health data in time-consuming and expensive processes (HIPAA alone requires the removal of 18 types of identifying information). But anonymization is not foolproof, and after stripping data of personally identifiable information, the remaining data may be of limited utility. This is where synthetic data comes in.

A synthetic dataset comprises artificial information that can be used as a stand-in for real data. The artificial dataset can be derived in different ways. One approach starts with real patient data. Algorithms process the real patient data and learn patterns, trends, and individual behaviors. The algorithms then replicate those patterns, trends, and behaviors in a dataset of artificial patients, such that, if done properly, the synthetic dataset has virtually the same statistical properties as the real dataset. Importantly, the synthetic data cannot be linked back to the original patients, unlike some de-identified or anonymized data, which have been vulnerable to re-identification attacks. Other approaches involve the use of existing AI models to generate synthetic data from scratch, or the use of a combination of existing models and real patient data.
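The "learn the patterns, then replicate them in artificial patients" idea can be illustrated with a deliberately simple sketch: fit a multivariate Gaussian to a hypothetical cohort and sample synthetic records with matching statistics. Real synthetic-data platforms use far richer generative models (GANs, copulas, and the like), but the statistical-mimicry principle is the same:

```python
# Sketch: generate synthetic patients whose summary statistics match a
# "real" cohort. All data here is simulated for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical real cohort: age, systolic BP, cholesterol (correlated).
real = rng.multivariate_normal(
    mean=[55, 130, 200],
    cov=[[100, 40, 30], [40, 225, 50], [30, 50, 900]],
    size=1000,
)

# Learn the dataset's statistical properties (means and covariance)...
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# ...then sample artificial patients with (approximately) the same ones.
# No synthetic row corresponds to any individual real patient.
synthetic = rng.multivariate_normal(mu, sigma, size=1000)

print("real means:     ", np.round(real.mean(axis=0), 1))
print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
```

A Gaussian fit like this would preserve only first- and second-order statistics; the privacy and fidelity trade-offs discussed below arise precisely because more powerful generators can memorize, not just summarize, the training records.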

While the concept of synthetic data is not new, it has recently been described as a promising solution for healthcare innovation, particularly at a time when secure sharing of patient data has been challenged by lab and office closures. Synthetic data in the healthcare space can be applied flexibly to fit different use cases, and it can be expanded to create more voluminous datasets.

Synthetic data's other reported benefits include the elimination of human bias and the democratization of AI (i.e., making AI technology and the underlying data more accessible). Critically too, regulations governing personal information, such as HIPAA, the EU General Data Protection Regulation (GDPR), and the California Consumer Privacy Act (CCPA), may be read to permit the sharing and processing of original patient data (subject to certain obligations) such that the resulting synthetic datasets may carry less privacy risk.

Despite the potential benefits, the creation and use of synthetic data has its own challenges. First, there is the risk that AI-generated data is so similar to the underlying real data that real patient privacy is compromised. Additionally, the reliability of synthetic data is not yet firmly established. For example, it is reported that no drug developer has yet relied on synthetic data for a submission to the U.S. Food and Drug Administration because it is not known whether that type of data will be accepted by the FDA. Perhaps most importantly, synthetic data is susceptible to adjustment, for good or ill. On the one hand, dataset adjustments can be used to correct for biases embedded in real datasets. On the other, adjustments can also undermine trust in healthcare and medical research.

As synthetic data platforms proliferate and companies increasingly engage those services to develop innovative solutions, care should be exercised to guard against the potential privacy and reliability risks.

© 2021 Proskauer Rose LLP. National Law Review, Volume XI, Number 201

Visit link:
AI Machine Learning Could be Latest Healthcare Innovation - The National Law Review


IBM’s newest quantum computer is now up-and-running: Here’s what it’s going to be used for – ZDNet


IBM has unveiled a brand-new quantum computer in Japan, thousands of miles away from the company's quantum computation center in Poughkeepsie, New York, in another step towards bringing quantum technologies out of Big Blue's labs and directly to partners around the world.

A Quantum System One, IBM's flagship integrated superconducting quantum computer, is now available on-premises in the Kawasaki Business Incubation Center in Kawasaki City, for Japanese researchers to run their quantum experiments in fields ranging from chemistry to finance.

Most customers to date can only access IBM's System One over the cloud, by connecting to the company's quantum computation center in Poughkeepsie.

Recently, the company unveiled the very first quantum computer that was physically built outside of the computation center's data centers, when the Fraunhofer Institute in Germany acquired a System One. The system that has now been deployed to Japan is therefore IBM's second quantum computer located outside of the US.

The announcement comes as part of a long-standing relationship with Japanese organizations. In 2019, IBM and the University of Tokyo inaugurated the Japan-IBM Quantum Partnership, a national agreement inviting universities and businesses across the country to engage in quantum research. It was agreed then that a Quantum System One would eventually be installed at an IBM facility in Japan.

Building on the partnership, Big Blue and the University of Tokyo launched the Quantum Innovation Initiative Consortium last year to further bring together organizations working in the field of quantum. With this, the Japanese government has made it clear that it is keen to be at the forefront of the promising developments that quantum technologies are expected to bring about.

Leveraging physical properties that are specific to quantum mechanics, quantum computers could one day be capable of carrying out calculations that are impossible to run on the devices used today, known as classical computers.

In some industries, this could have big implications; and as part of the consortium, together with IBM researchers, some Japanese companies have already identified promising use cases. Mitsubishi Chemical's research team, for example, has developed quantum algorithms capable of understanding the complex behavior of industrial chemical compounds with the goal of improving OLED displays.

A recent research paper published by the scientists highlighted the potential of quantum computers when it comes to predicting the properties of OLED materials, which could eventually lead to more efficient displays with lower power consumption.

Similarly, researchers from Mizuho Financial Group and Mitsubishi Financial Group have been developing quantum algorithms that could speed up financial operations like Monte Carlo simulations, which could allow for optimized portfolio management thanks to better risk analysis and option pricing.
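For context, Monte Carlo workloads like these are a classic quantum-speedup target: quantum amplitude estimation can, in principle, improve the error scaling from O(1/√N) samples to O(1/N) queries. The classical baseline being accelerated looks roughly like this sketch of pricing a European call option; all model parameters here are hypothetical, and this is an illustration of the workload, not of the banks' actual algorithms:

```python
# Classical Monte Carlo pricing of a European call under a simple
# lognormal (geometric Brownian motion) model.
import math
import numpy as np

rng = np.random.default_rng(7)

S0, K, r, vol, T = 100.0, 105.0, 0.01, 0.2, 1.0  # spot, strike, rate, vol, maturity
n = 100_000                                      # number of simulated paths

# Simulate terminal prices in one step (no path dependence for this payoff).
z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * vol**2) * T + vol * math.sqrt(T) * z)

# Discounted expected payoff; the standard error shrinks as 1/sqrt(n),
# which is the scaling quantum amplitude estimation aims to beat.
payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)
price = payoff.mean()
stderr = payoff.std(ddof=1) / math.sqrt(n)
print(f"MC price: {price:.3f} +/- {stderr:.3f}")
```

Halving the error here requires four times as many samples; a fault-tolerant quantum computer running amplitude estimation would, in theory, need only twice as many queries, which is the source of the projected advantage for risk analysis and option pricing.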

With access to IBM's Quantum System One, research in those fields is now expected to accelerate. And the industry leaders exploring quantum technologies as part of the partnership range from Sony and Toyota to Hitachi, Toshiba, and JSR.

Quantum computing is still in its very early stages, and it is not yet possible to use quantum computers to perform computations that are of any value to a business. Rather, scientists are currently carrying out proofs-of-concept, by attempting to identify promising applications and testing them at a very small scale, to be prepared for the moment that the hardware is fully ready.

This is still some way off. Building and controlling the components of quantum computers is a huge challenge, which has so far been limited to the confines of specialist laboratories such as IBM's Poughkeepsie computation center.

It is significant, therefore, that IBM's Quantum System One is now mature enough to be deployed outside of the company's lab.

"Thousands of meticulously engineered components have to work together flawlessly in extreme temperatures within astonishing tolerances," said IBM in a blog post.

Back in the US, too, quantum customers are showing interest in building quantum hardware in their own facilities. The Cleveland Clinic, for example, recently invested $500 million for Big Blue to build quantum hardware on-premises.

Read the original post:
IBM's newest quantum computer is now up-and-running: Here's what it's going to be used for - ZDNet


URI to host international experts for conference on future of quantum computing – URI Today

KINGSTON, R.I., July 22, 2021--The University of Rhode Island will host more than a dozen international experts in the growing field of quantum information science in October for the inaugural Frontiers in Quantum Computing conference, in celebration of the launch of URI's new master's degree program in quantum computing.

The conference, which will take place Oct. 18-20 on URI's picturesque Kingston Campus amid fall foliage, will feature daily talks and a panel focusing on the future of quantum computing, current developments and research, and educational initiatives for future leaders in the field.

"We are bringing key quantum information science leaders from all sectors (government, academia and industry) to URI to present the latest developments and frontiers in the theoretical and experimental aspects of quantum computing," said Vanita Srinivasa, assistant professor of physics and director of URI's new Quantum Information Science Program. "This is not only the inaugural event for our new degree. This will mark the first major conference on quantum computing for the region, and we want to establish Rhode Island as a center of quantum information science."

"URI's new Master of Science in Quantum Computing is a meaningful step not only for our region's quantum workforce development but also for the U.S. and global quantum ecosystem," said Christopher Savoie '92, founder and chief executive officer of Zapata Computing, who is a member of the conference steering committee and will be one of the speakers at the conference. "This inaugural conference is a pioneering effort on the University's part to launch a regional center for quantum computing, and it's amazing to return to campus with some of our field's greatest contributors."

The list of speakers includes U.S. Sen. Jack Reed, who will deliver an address on the opening day of the conference, along with pioneers from across the U.S. as well as from Canada, Europe, and Australia, who will provide a look into different areas of quantum computing.

With the launch of URI's master's program in quantum computing, education will be a focus of the conference, and college and high school students are encouraged to attend to learn more about the opportunities in the field and to make professional connections. Among the speakers focusing on education will be Professor Chandralekha Singh, president of the American Association of Physics Teachers. Also, students and postdoctoral scientists are invited to present their own work through research posters.

Leonard Kahn, chair of the URI Physics Department, says that, given the interdisciplinary nature of quantum computing, the conference will be of interest to a wide audience.

"The future of quantum computing will impact us all as we go forward. People will come away from this conference with a far broader understanding of the topic," said Kahn. "We hope to encourage the full URI community and all the high schools to attend. We'll have talks on education, and our round-table discussion on the future of quantum computing will present a diverse group of people describing their views of the future. I think that will be enlightening to the general community. This is not just for people in physics. This not only affects the STEM disciplines, but also philosophy and the humanities."

The conference is planned as a primarily in-person event. But given the uncertainty of the pandemic and its impact on travel, some speakers may be virtual, and plans for a hybrid format are in place.

Along with Zapata Computing, sponsors for Frontiers in Quantum Computing include IBM and D-Wave, both pioneers in the development of quantum computers. Sponsorship opportunities are still available. For more information, contact Leonard Kahn at lenkahn@uri.edu.

New master's program

The 38-credit Master of Science in Quantum Computing program, which starts in the fall as part of the Physics Department's new Quantum Information Science Program, aligns with the goals of the National Quantum Initiative Act. The act highlights the need for researchers, educators and students, and for the growth of multidisciplinary curricula and research opportunities in quantum-based technologies.

The program, which builds on URI's strengths in quantum information science, optics and nanophysics, is geared toward students with a bachelor's degree in physics or a closely related discipline and a basic understanding of quantum mechanics.

"What makes the program unique in the U.S. is that a student can come in as a freshman and in five years graduate with both a bachelor's degree in physics and a Master of Science in Quantum Computing," Kahn said.

Students graduating from the program can either use the degree as a direct route into the quantum computing industry or as preparation for doctoral studies. The non-thesis program includes specialized coursework in quantum computing and complementary graduate-level coursework in physics. The program also allows students the flexibility to tailor their studies to pursue interests in mathematics, computer science, and engineering through the programs interdisciplinary curriculum.

In addition, the program has established partnerships with industrial firms, such as Zapata Computing, as well as national labs and institutes that will help students take part in collaborative research projects and internships to ensure they are prepared to enter the workforce or to pursue further studies in the rapidly advancing field.

"There are several major quantum computing efforts in New England, and students will have access to that," said Srinivasa. "They will have access to industry and government internships that will give them hands-on, experiential learning, along with opportunities to pursue research with University faculty and their collaborators in this field."

The Frontiers in Quantum Computing lectures and the panel discussion on the future of quantum computing are free and open to the public, but registration is required. To learn more about the conference or to register, go to the Frontiers in Quantum Computing webpage. For more information on the master's program, email Leonard Kahn at lenkahn@uri.edu or Vanita Srinivasa at vsriniv@uri.edu.

The rest is here:
URI to host international experts for conference on future of quantum computing - URI Today
