Machine Learning Methods May Improve Brain Tumor … – HealthITAnalytics.com

November 21, 2023 - Researchers from University of Florida (UF) Health have demonstrated that a combination of machine learning (ML) and liquid chromatography-high resolution mass spectrometry (LC-HRMS) can help make brain tumor evaluations more efficient.

The research was published in the Journal of the American Society for Mass Spectrometry, detailing how these tools can refine the metabolomic and lipidomic characterization of meningioma tumors. While these are a common type of brain tumor, accurately assessing them is critical to prevent adverse outcomes.

Meningioma tumors are classified into three categories: grade I, grade II, and grade III. Grade I tumors are typically slow-growing and less threatening, so treatment focuses on tumor removal and follow-up monitoring for the patient. Grade III tumors are more aggressive, requiring both removal and radiation treatment.

Grade II tumors present a challenge for clinicians.

"Grade II tumors are the gray zone," said study co-author Jesse L. Kresak, MD, a clinical associate professor in the UF College of Medicine's department of pathology, immunology and laboratory medicine, in the press release. "Do we take [the tumor] out and watch to see if it comes back? Or do we also irradiate the area with the idea of preventing a recurrence?"

This dilemma led the researchers to pursue an approach that improves meningioma tumor evaluation and better guides clinicians' treatment decisions.

To achieve this, the research team analyzed 85 meningioma samples, obtaining chemical profiles of each tumor's small molecules and fats. Doing so allowed the researchers to more precisely characterize differences between grades of tumors and identify potential biomarkers that would be helpful for diagnosis.

Initially, the research team had not planned to incorporate ML into their study, instead opting to analyze the byproducts of metabolism within the tumor cells, which would have yielded a chemical fingerprint that would help differentiate between benign and malignant tumors.

However, the researchers realized that the incorporation of ML could help provide additional insights.

"After talking about it, we knew that machine learning could be a good opportunity to find things that we wouldn't be able to find ourselves," explained Timothy J. Garrett, PhD, a co-author of the paper, an associate professor in the College of Medicine's department of pathology, immunology and laboratory medicine and a UF Health Cancer Center member.

Utilizing ML made the tumor evaluation process significantly more efficient. Kresak noted that when she is diagnosing a meningioma tumor, she can assess approximately 20 data points in ten minutes. With ML, 17,000 data points were analyzed in less than a second.

Incorporating ML did not lead to significant dips in accuracy. Of the models tested, one classified tumor grades with 87 percent initial accuracy, which the researchers indicated could be improved with the addition and analysis of more samples.
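
To make that kind of workflow concrete, here is a minimal sketch of grade classification from a metabolomic feature table; the data, feature counts, and model choice below are placeholders rather than the UF team's actual pipeline, which is described in the paper.

```python
# Minimal sketch: classifying tumor grade from metabolomic/lipidomic feature
# intensities. The data is randomly generated stand-in data, not the study's
# measurements, and the model is illustrative rather than the one used by UF.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features = 85, 17_000            # mirrors the sample and data-point counts in the article
X = rng.normal(size=(n_samples, n_features))  # LC-HRMS feature intensities (placeholder)
y = rng.integers(1, 4, size=n_samples)        # tumor grades I-III (placeholder labels)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)   # accuracy estimated on held-out samples
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```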

The research team noted that their findings may be useful for meningioma diagnosis and treatment, as tumors can be reclassified after initial pathologist assessment based on new information about the sample's genetic makeup.

"We are further understanding different tumors by using these tools. It's a way to help us get the right treatment for our patients," Kresak said.

The research is just one example of how health systems are investigating the use of data analytics and artificial intelligence (AI) to bolster oncology.

This month, the University of Texas MD Anderson Cancer Center established its Institute for Data Science in Oncology (IDSO), designed to transform cancer care through the application of clinical expertise and data science.

IDSO is set to focus on collaboration among stakeholders in medicine, science, academia, and industry in an effort to tackle cancer patients' most urgent needs.

The institute will support enhanced data generation, collection, and management at MD Anderson, leading to advances in personalized care and patient experience.

See the original post here:
Machine Learning Methods May Improve Brain Tumor ... - HealthITAnalytics.com

Who said what: using machine learning to correctly attribute quotes – The Guardian

Engineering blog

Today's blog does not come to you from any developer in product and engineering but from our talented colleagues in data and insight. Here, the Guardian's data scientists share how they have teamed up with PhD students from University College London to train a machine learning model to accurately attribute quotes. Below, the two teams explain how they've been teaching a machine to understand 'who said what'.

Alice Morris, Michel Schammel, Anna Vissens, Paul Nathan, Alicja Polanska and Tara Tahseen

Tue 21 Nov 2023 06.11 EST

Why do we care so much about quotes?

As we discussed in "Talking sense: using machine learning to understand quotes", there are many good reasons for identifying quotes. Quotes enable direct transmission of information from a source, capturing precisely the intended sentiment and meaning. They are not only a vital piece of accurate reporting but can also bring a story to life. The information extracted from them can be used for fact checking and allow us to gain insights into public views. For instance, accurately attributed quotes can be used for tracking shifting opinions on the same subject over time, or to explore those opinions as a function of identity, e.g. gender or race. Having a comprehensive set of quotes and their sources is thus a rich data asset that can be used to explore demographic and socioeconomic trends and shifts.

We had already used AI to help with accurate quote extraction from the Guardian's extensive archive, and thought it could help us again for the next step of accurate quote attribution. This time, we turned to students from UCL's Centre for Doctoral Training in Data Intensive Science. As part of their PhD programme that involves working on industry projects, we asked these students to explore deep learning options that could help with quote attribution. In particular, they looked at machine learning tools to perform a method known as coreference resolution.

What is coreference resolution?

In everyday language, when we mention the same entity multiple times, we tend to use different expressions to refer to it. The task of coreference resolution is to group together all mentions in a piece of text which refer back to the same entity. We call the original entity the antecedent and subsequent mentions, anaphora. In the simple example below:

Sarah enjoys a nice cup of tea in the morning. She likes it with milk.

"Sarah" is the antecedent for the anaphoric mention "She". The antecedent, the mention, or both can also be a group of words rather than a single one. So, in the example there is another group consisting of the phrase "cup of tea" and the word "it" as coreferring entities.

Why is coreference resolution so hard?

You might think grouping together mentions of the same entity is a trivial task in machine learning; however, there are many layers of complexity to this problem. The task requires linking ambiguous anaphora (e.g. "she" or "the former First Lady") to an unambiguous antecedent (e.g. "Michelle Obama") which may be many sentences, or even paragraphs, prior to the occurrence of the quote in question. Depending on the writing style, there may be many other entities interwoven into the text that don't refer to any mentions of interest. The fact that mentions can potentially be several words long makes this task even more difficult.

In addition, sentiment conveyed through language is highly sensitive to the choice of words we employ. For example, look how the antecedent of the word "they" shifts in the following sentences because of the change in verb following it:

The city councilmen refused the demonstrators a permit because they feared violence.

The city councilmen refused the demonstrators a permit because they advocated violence.

(These two subtly different sentences are actually part of the Winograd schema challenge, a recognized test of machine intelligence, which was proposed as an extension of the Turing Test, a test to show whether or not a computer is capable of thinking like a human being.)

The example shows us that grammar alone cannot be relied on to solve this task; comprehending the semantics is essential. This means that rules-based methods cannot (without prohibitive difficulty) be devised to perfectly address this task. This is what prompted us to look into using machine learning to tackle the problem of coreference resolution.

Artificial Intelligence to the rescue

A typical machine learning heuristic for coreference resolution would follow steps like these:

Extract a series of mentions which relate to real-world entities

For each mention, compute a set of features

Based on those features, find the most likely antecedent for each mention
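
To make the recipe concrete, here is a deliberately toy sketch of those three steps; the mention extractor, features, and scoring rule are hand-written stand-ins for what a trained coreference model would learn from data.

```python
# Toy mention-ranking sketch of the three-step recipe above; every component is a
# placeholder for what a real coreference model would learn.
from dataclasses import dataclass

@dataclass
class Mention:
    text: str
    position: int     # token index in the document
    is_pronoun: bool

def extract_mentions(tokens):
    """Step 1: pull out spans that could refer to real-world entities (toy rule)."""
    pronouns = {"she", "he", "it", "they"}
    return [Mention(tok, i, tok.lower() in pronouns)
            for i, tok in enumerate(tokens)
            if tok.lower() in pronouns or tok[:1].isupper()]

def features(mention, candidate):
    """Step 2: compute features for a (mention, candidate antecedent) pair."""
    return {"distance": mention.position - candidate.position,
            "candidate_is_name": not candidate.is_pronoun}

def score(feats):
    """Step 3: score each candidate; a real model learns this from labelled data."""
    return (1.0 if feats["candidate_is_name"] else 0.0) - 0.1 * feats["distance"]

def resolve(tokens):
    mentions = extract_mentions(tokens)
    links = {}
    for i, mention in enumerate(mentions):
        if mention.is_pronoun and i > 0:
            best = max(mentions[:i], key=lambda c: score(features(mention, c)))
            links[mention.text] = best.text
    return links

text = "Sarah enjoys a nice cup of tea in the morning . She likes it with milk ."
print(resolve(text.split()))
# -> {'She': 'Sarah', 'it': 'She'}  (crude: a real model would link "it" to "cup of tea")
```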

The AI workhorse to carry out those steps is a language model. In essence, a language model is a probability distribution over a sequence of words. Many of you have probably come across OpenAI's ChatGPT, which is powered by a large language model.

In order to analyse language and make predictions, language models create and use word embeddings. Word embeddings are essentially mappings of words to points in a semantic space, where words with similar meaning are placed close together. For example, the points corresponding to "cat" and "lion" would be closer together than the points corresponding to "cat" and "piano".

Identical words with different meanings ([river] bank vs bank [financial institution], for example) are used in different contexts and will thus occupy different locations in the semantic space. This distinction is crucial in more sophisticated examples, such as the Winograd Schema. These embeddings are the features mentioned in the recipe above.
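
A small illustration of that idea, using static word vectors rather than the contextual embeddings a modern coreference model relies on, and assuming the en_core_web_md spaCy vector model has been downloaded:

```python
# Static word vectors illustrate the "semantic space" idea: related words sit
# closer together. Contextual models go further, giving the same word different
# vectors in different contexts (e.g. river "bank" vs financial "bank").
import spacy

nlp = spacy.load("en_core_web_md")            # assumes this vector model is installed

cat, lion, piano = nlp("cat"), nlp("lion"), nlp("piano")
print("cat ~ lion :", cat.similarity(lion))   # relatively high cosine similarity
print("cat ~ piano:", cat.similarity(piano))  # noticeably lower
```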

Language models use word embeddings to represent a set of text as numbers, which encapsulate contextual meaning. We can use this numeric representation to conduct analytical tasks; in our case, coreference resolution. We show the language model lots of labelled examples (see later) which, in conjunction with the word embeddings, train the model to identify coreferent mentions when it is shown text it hasn't seen before, based on the meaning of that text.

For this task, we chose language models built by ExplosionAI as they fitted well with the Guardian's current data science pipeline. To use them, however, they needed to be properly trained, and to do that we needed the right data.

Training the model using labelled data

An AI model can be taught by presenting it with numerous labelled examples illustrating the task we would like it to complete. In our case, this involved first manually labelling over a hundred Guardian articles, drawing links between ambiguous mentions/anaphora and their antecedent.

Though this may not seem the most glamorous task, the performance of any model is bottlenecked by the quality of the data it is given, and hence the data-labelling stage is crucial to the value of the final product. Due to the complex nature of language and the resulting subjectivity of the labelling, there were many intricacies to this task which required a rule set to be devised to standardise the data across human annotators. So, a lot of time was spent with Anna, Michel and Alice on this stage of the project; and we were all thankful when it was complete!

Although tremendously information-rich and time-consuming to produce, one hundred annotated articles were still insufficient to fully capture the variability of language that a chosen model would encounter. So, to maximise the utility of our small dataset, we chose three off-the-shelf language models, namely Coreferee, spaCy's coreference model and FastCoref, which had already been trained on hundreds of thousands of generic examples. Then we fine-tuned them to adapt to our specific requirements by using our annotated data.

This approach enabled us to produce models that achieved greater precision on the Guardian-specific data compared with using the models straight out of the box.
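
One simple way to make that comparison concrete is a mention-pair precision measure over hand-labelled clusters; the gold and predicted clusters below are invented placeholders, and this is not necessarily the exact metric the team used.

```python
# Toy mention-pair precision: of all coreferent pairs a model predicts, what
# fraction also appear in the human-annotated (gold) clusters?
from itertools import combinations

def coreferent_pairs(clusters):
    """Turn clusters of mentions into a set of unordered coreferent pairs."""
    return {frozenset(pair) for cluster in clusters for pair in combinations(cluster, 2)}

def pair_precision(predicted_clusters, gold_clusters):
    predicted = coreferent_pairs(predicted_clusters)
    gold = coreferent_pairs(gold_clusters)
    return len(predicted & gold) / len(predicted) if predicted else 0.0

# Placeholder annotations for one toy example, not real Guardian data.
gold = [["Sarah", "She"], ["cup of tea", "it"]]
out_of_the_box = [["Sarah", "She", "it"]]              # hypothetical base-model output
fine_tuned = [["Sarah", "She"], ["cup of tea", "it"]]  # hypothetical fine-tuned output

print("base model precision:", pair_precision(out_of_the_box, gold))  # 0.33
print("fine-tuned precision:", pair_precision(fine_tuned, gold))      # 1.0
```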

These models should allow matching of quotes with sources from Guardian articles on a highly automated basis with a greater precision than ever before. The next step is to run a large-scale test on the Guardian archive and to see what journalistic questions this approach can help us answer.

Continued here:
Who said what: using machine learning to correctly attribute quotes - The Guardian

Tackle computer science problems using both fundamental and … – KDnuggets

Sponsored Content

The ability to use algorithms to solve real-world problems is a must-have skill for any developer or programmer. A major challenge, however, is sifting through the vast pool of algorithms to find the ones most relevant to the problem at hand.

This book (50 Algorithms Every Programmer Should Know) will help you not only to develop the skills to select and use an algorithm to tackle problems in the real world but also to understand how it works.

You'll start with an introduction to algorithms and discover various algorithm design techniques before exploring how to implement different types of algorithms, with the help of practical examples. As you advance, you'll learn about linear programming, page ranking, and graphs, and will then work with machine learning algorithms to understand the math and logic behind them. Additionally, the book will delve into modern deep learning techniques, including Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Recurrent Neural Networks (RNNs), providing insights into their applications. The expansive realm of Generative AI and Large Language Models (LLMs) such as ChatGPT will also be explored, unraveling the algorithms, methodologies, and architectures that drive their implementation.

Case studies will show you how to apply these algorithms optimally before you focus on deep learning algorithms and learn about different types of deep learning models along with their practical use. Finally, you'll become well-versed in techniques that enable parallel processing, giving you the ability to use these algorithms for compute-intensive tasks.

By the end of this programming book, you'll have become adept at solving real-world computational problems by using a wide range of algorithms, including modern deep learning techniques.

Hurry up and grab your copy from: https://packt.link/wAk8W

Read the rest here:
Tackle computer science problems using both fundamental and ... - KDnuggets

Revolutionizing Diagnostics: Machine Learning Unleashes the … – Spectroscopy Online

A recent study published in Applied Spectroscopy presents a new approach to biomedical diagnosis: surface-enhanced Raman spectroscopy (SERS)-based detection of micro-RNA (miRNA) biomarkers, evaluated through a comparative study of interpretable machine learning (ML) algorithms (1). Led by Joy Q. Li of Duke University, the research team introduced a multiplexed SERS-based nanosensor, called the inverse molecular sentinel (iMS), for miRNA detection. As ML increasingly becomes a vital tool in spectral analysis, the researchers grappled with the high dimensionality of SERS data, a challenge for traditional ML techniques, which are prone to overfitting and poor generalization on such data (1).

The team explored the performance of ML methods, including a convolutional neural network (CNN), support vector regression, and extreme gradient boosting, both with and without non-negative matrix factorization (NMF), for spectral unmixing of four-way multiplexed SERS spectra from iMS assays (1). The CNN stood out for achieving high accuracy in spectral unmixing, and applying NMF before the CNN drastically reduced memory and training demands without compromising model performance on SERS spectral unmixing (1).
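
As a rough sketch of that dimensionality-reduction step, scikit-learn's NMF can compress each spectrum into a handful of non-negative component weights before a supervised model sees it; the spectra, component count, and downstream classifier here are stand-ins, not the study's actual pipeline.

```python
# Sketch: non-negative matrix factorization (NMF) reduces each SERS spectrum to a
# few non-negative component weights before classification. All data is synthetic.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_spectra, n_wavenumbers = 200, 1000
X = rng.random((n_spectra, n_wavenumbers))  # non-negative spectral intensities (placeholder)
y = rng.integers(0, 4, size=n_spectra)      # which of four multiplexed labels dominates (placeholder)

model = make_pipeline(
    NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0),  # unmix into 4 components
    LogisticRegression(max_iter=1000),      # lightweight stand-in for the CNN used in the study
)
model.fit(X, y)
print("training accuracy on stand-in data:", model.score(X, y))
```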

The study also used these ML models to analyze clinical SERS data from single-plexed iMS in RNA extracted from 17 endoscopic tissue biopsies. CNN and CNN-NMF, trained on multiplexed data, emerged as the top performers, demonstrating high accuracy in spectral unmixing (1).

To enhance transparency and understanding, the researchers employed gradient class activation maps and partial dependence plots to interpret the predictions. This approach not only showcases the potential of CNN-based ML in spectral unmixing of multiplexed SERS spectra, but it also underscores the significant impact of dimensionality reduction on performance and training speed (1).

This research highlights the intersection of spectroscopy and machine learning, providing new opportunities for precise and efficient diagnostics that could enhance biomedical applications and improve patient outcomes.

This article was written with the help of artificial intelligence and has been edited to ensure accuracy and clarity. You can read more about our policy for using AI here.

(1) Li, J. Q., Neng-Wang, H., Canning, A. J., et al. Surface-Enhanced Raman Spectroscopy-Based Detection of Micro-RNA Biomarkers for Biomedical Diagnosis Using a Comparative Study of Interpretable Machine Learning Algorithms. Appl. Spectrosc. 2023, ASAP. DOI: 10.1177/0037028231209053

View original post here:
Revolutionizing Diagnostics: Machine Learning Unleashes the ... - Spectroscopy Online

Machine learning could improve efficiency of X-ray-guided pelvic fracture surgery – Medical Xpress

Researchers at Johns Hopkins University are leveraging the power of machine learning to improve X-ray-guided pelvic fracture surgery, an operation to treat an injury commonly sustained during car crashes.

A team of researchers from the university's Whiting School of Engineering and the School of Medicine plan to increase the efficiency of this surgery by applying the benefits of surgical phase recognition, or SPR, a cutting-edge machine learning application that involves identifying the different steps in a surgical procedure to extract valuable insights into workflow efficiency, the proficiency of surgical teams, error rates, and more.

The team presented its X-ray-based SPR-driven approach, called Pelphix, last month at the 26th International Conference on Medical Image Computing and Computer-Assisted Intervention in Vancouver.

"Our approach paves the way for surgical assistance systems that will allow surgeons to reduce radiation exposure and shorten procedure length for optimized pelvic fracture surgeries," said research team member Benjamin Killeen, a doctoral candidate in the Department of Computer Science and a member of the Advanced Robotics and Computationally AugmenteD Environments (ARCADE) Lab.

SPR lays the foundation for automated surgical assistance and skill analysis systems that promise to maximize operating room efficiency. While SPR typically analyzes full-color endoscopic videos taken during surgery, it has to date ignored X-ray imaging, the only imaging available for many procedures such as orthopedic surgery, interventional radiology, and angiology, leaving those procedures unable to reap the benefits of SPR-enabled advancements.

Despite the rise of modern machine learning algorithms, X-ray images are still not routinely saved or analyzed because of the human hours required to process them. So to begin applying SPR to X-ray-guided procedures, the researchers first had to create their own training dataset, harnessing the power of synthetic data and deep neural networks to simulate surgical workflows and X-ray sequences based on a preexisting database of annotated CT scan images. They simulated enough data to successfully train their own machine learning-powered SPR algorithm specifically for X-ray sequences.
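
A schematic of that train-on-simulation, test-on-real-data idea is sketched below; it is not the Pelphix architecture, and the feature vectors and phase labels are random placeholders.

```python
# Schematic only: fit a surgical-phase classifier on features derived from simulated
# X-ray sequences, then check how well it transfers to a (here, fake) real-world set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_phases = 5
X_sim, y_sim = rng.normal(size=(2000, 64)), rng.integers(0, n_phases, 2000)  # simulated frames
X_real, y_real = rng.normal(size=(200, 64)), rng.integers(0, n_phases, 200)  # "real" cadaver frames

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_sim, y_sim)  # pretrain on synthetic data
print("accuracy on real frames:", clf.score(X_real, y_real))  # near chance here, since the data is random
```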

"We simulated not only the visual appearance of images but also the dynamics of surgical workflows in X-ray to provide a viable alternative to real image sourcesand then we set out to show that this approach transfers to the real world," Killeen said.

The researchers validated their novel approach in cadaver experiments and successfully demonstrated that the Pelphix workflow can be applied to real-world X-ray-based SPR algorithms. They suggest that future algorithms use Pelphix's simulations for pretraining before fine-tuning on real image sequences from actual human patients.

The team is now collecting patient data for a large-scale validation effort.

"The next step in this research is to refine the workflow structure based on our initial results and deploy more advanced algorithms on large-scale datasets of X-ray images collected from patient procedures," Killeen said. "In the long term, this work is a first step toward obtaining insights into the science of orthopedic surgery from a big data perspective."

The researchers hope that Pelphix's success will motivate the routine collection and interpretation of X-ray data to enable further advances in surgical data science, ultimately improving the standard of care for patients.

"In some ways, modern operating theaters and the surgeries happening within them are much like the expanding universe, in that 95% of it is dark or unobservable," says senior co-author Mathias Unberath, an assistant professor of computer science, the principal investigator of the ARCADE Lab, and Killeen's advisor.

"That is, many complex processes happen during surgeries: tissue is manipulated, instruments are placed, and sometimes, errors are made andhopefullycorrected swiftly. However, none of these processes is documented precisely. Surgical data science and surgical phase recognitionand approaches like Pelphixare working to make that inscrutable 95% of surgery data observable, to patients' benefit."

More information: Benjamin D. Killeen et al, Pelphix: Surgical Phase Recognition from X-Ray Images in Percutaneous Pelvic Fixation, Medical Image Computing and Computer Assisted Intervention - MICCAI 2023 (2023). DOI: 10.1007/978-3-031-43996-4_13

Continued here:
Machine learning could improve efficiency of X-ray-guided pelvic fracture surgery - Medical Xpress

Hyper Oracle Introduces opML, a Game-Changing Approach to Machine Learning on Ethereum – Decrypt

San Francisco, California, November 21st, 2023, Chainwire

Hyper Oracle has successfully launched Optimistic Machine Learning (opML), the first open-source implementation of the technology. This solution will provide a flexible and performant approach for running large machine learning (ML) models on the Ethereum blockchain. In the meantime, the protocol continues to work on introducing elements of zero-knowledge technology, such as reducing challenge period time and unlocking privacy use cases.

The current age is marked by rapid advancements in artificial intelligence (AI) and machine learning (ML). When brought onchain, AI and ML will make smart contracts smarter and enable use cases previously thought impossible. At the same time, ML will benefit from fairness, transparency, decentralization, and other advantages of onchain validity.

Current implementations face the challenge of proving the validity of computation while addressing key challenges in cost, security, and performance. Both opML and zkML have emerged as methods of verifiably proving the model used to generate a specific output.

zkML utilizes zk proofs to verify ML models. While this leverages mathematics and cryptography to provide the highest levels of security, it also limits performance.

zkML suffers from limitations in memory usage, quantization, circuit size, and more, which means that only small models can be implemented. Onchain ML and AI computing therefore demand another solution for the practical implementation of large ML models like GPT-3.5.

opML ports AI model inference onchain using an optimistic verification mechanism, which allows it to offer far greater performance and flexibility than zkML. This makes it capable of running a variety of ML models on the mainnet, including extremely large ones.

One of its key advantages is its low cost and high efficiency: opML does not require extensive resources for proof generation and can run a large language model (LLM) on a laptop without requiring a GPU.

Thus, it represents a promising approach to onchain AI and machine learning. It offers efficiency, scalability, and decentralization while maintaining high standards of transparency and security.

Note that while Hyper Oracle's primary focus lies on the inference of ML models to allow for secure and efficient model computations, its current opML framework will also support the fine-tuning and training process. This makes it a versatile solution for various ML tasks.

About Hyper Oracle

Hyper Oracle is a programmable zkOracle protocol developed to make smart contracts smarter with richer data sources and more compute, including onchain AI and machine learning. Its goal is to enable a new wave of decentralized applications (dApps) by addressing the limitations of smart contracts and existing middle-layer solutions.

Hyper Oracle makes historical onchain data and complex compute useful and verifiable with fast finality while preserving blockchain security and decentralization. All this is done with the hope of empowering developers to interact with blockchains in new, different ways.

Hyper Oracle is trusted by the Ethereum Foundation, Compound, and Uniswap. Get more information on Hyper Oracle's opML and use Stable Diffusion and LLaMa 2 in opML live.

Socials: Discord | Twitter | GitHub

COO: Jamie, The PR Genius, j.kingsley@theprgenius.com

See the original post here:
Hyper Oracle Introduces opML, a Game-Changing Approach to Machine Learning on Ethereum - Decrypt

How AI and Machine Learning Are Revolutionizing Prostate Cancer … – UroToday

Read the Full Video Transcript

Alicia Morgans: Hi. I'm so excited to be here today with Professor Tamara Lotan, who is joining me from Johns Hopkins University. Thank you so much for being here.

Tamara Lotan: Great. Thanks for having me.

Alicia Morgans: Wonderful. Tamara, you are a pathologist. And we don't get to interview a lot of pathologists on UroToday, so I'm really excited to talk with you and really about the overlap of pathology and urologic oncology, and specifically how this has evolved in a really digitized and even AI type of world. Can you tell me a little bit about the progress that's been made in pathology and what we should be on the lookout for now as we're thinking about modern-day pathology?

Tamara Lotan: Yeah, these are great questions. I always start by telling people pathology has really not changed for the last hundred years or so. The way we practice, we use the same kind of microscopes, small updates. We still take the tissue and fix it in fixative and process it and cut it onto glass slides. Been doing this exactly the same way for a hundred years. So I think that the rise of digital pathology is really exciting, and that really is a step where we then take the glass slide and scan it so that we have a digital representation of that that we can look at on a computer screen. And then most importantly, use it now to apply all kinds of algorithms, especially machine learning or artificial intelligence algorithms, to try to augment the diagnoses that until now we've just been making by eye through the microscope. We can now make with some additional machine learning techniques that really, I think, augment our ability to make diagnoses and also to predict how patients are going to do in the future.

Alicia Morgans: Well, that's certainly really exciting. I wonder if you can walk us through some use cases. What are situations where we might actually see this?

Tamara Lotan: Yeah. Prostate cancer has really been a test case for the use of digital pathology from the beginning, I think because it's a very high volume practice, looking at prostate biopsies, for example. And a lot of what we do with prostate biopsies is semi-quantitative where we identify the cancer and then grade it by deciding what percent pattern 3, 4, 5 we think is in the tumor. And we know that humans don't do that very well by eye visually. We can only get to a certain accuracy visually, whereas a machine learning type algorithm where we can teach the machine to recognize these different patterns that the tumor cells are making and then quantify the relative proportions of those patterns can do that much more accurately.

We know when humans compare, if I grade a prostate cancer case and I compare to my colleague, probably only agreement, we say kappa values is how we typically do the metric of agreement. And it's usually between 0.4 and 0.6, which is at most moderate agreement. And most clinicians, I think, have had that experience where they see a case that's seen at their institution. Outside institution, you get slight disagreements. So the hope, I think, is that these algorithms will help us for diagnosis, but then maybe most importantly for grading so that we can really standardize grading. Institutions that don't have urologic pathologists who have had a lot of extra training and maybe have higher inter-observer agreement in terms of grading are not available at all institutions, and certainly not internationally in many areas of the world. So if you had an algorithm that can do this more accurately and represent what a urologic pathologist would interpret on the slide, that would really equalize access to care.

So I think that's very exciting. And also just ensure a consistency in terms of grading, which we know is really the most important prognostic parameter in prostate cancer.

Alicia Morgans: I couldn't agree more. And it's so critical that I tell second opinions that come to see me in my clinic that perhaps even more than what I say is that second opinion by the pathology group, because understanding the risk based on the Gleason score, of course, is the way that we really need to personalize treatment and make those decisions in the localized disease setting. So what's another use case? Where else might we see the use of this?

Tamara Lotan: I think one exciting area in prostate is thinking about how we can go beyond Gleason grading. As you mentioned, Gleason grading is such a powerful prognostic parameter, but it actually was developed in the '60s based on I think only 300 or so cases in one VA hospital. So it's amazing that it performs as well as it does, but there's no question we can probably do even better than Gleason grading in terms of coming up with a totally novel system that predicts patient outcomes.

So I think another exciting use case is to think about training algorithms on large, large data sets where we know exactly how each patient did. And we can ask the computer to look at this totally agnostic to any human grading system and come up with features that are associated with prognosis and a totally novel prognostic prediction algorithm. And I think those kind of data sets may even improve on Gleason grading when we can do that, not just with 300 cases like Dr. Gleason did, but now with thousands and thousands of cases, much more diverse data sets, for example as well.

Alicia Morgans: Is something like that in the works right now, is that something that we should be on the lookout from Lotan, et al?

Tamara Lotan: Yeah. We and many other groups, I think, are working on developing exactly these algorithms. One challenge has been to just get enough cases that have these very high quality clinical annotations and also have whole slide images available. But yes, many groups are working on putting together those data sets and maybe even sharing them across institutions. So we really have multi-institutional data sets, which are especially critical in digital pathology.

Alicia Morgans: That'll be really exciting because not only will that help, I think, make the diagnostic process more equitable. It will be fast because you can run it through those kinds of tests in a really quick manner and get an answer to a patient, to a clinician to really get moving on a treatment plan.

Tamara Lotan: Yeah, a hundred percent. It also doesn't exhaust any tissue. This is using just the original diagnostic H&E, or hematoxylin and eosin, stain. So you don't have to worry about using tissue that you may then want to preserve for future genomic assays and things like that.

Alicia Morgans: Great. So what's another use case as we continue to pepper you with these questions?

Tamara Lotan: Yeah, sure. I think another great use case is prediction of response to therapy. So there already are some companies working in this space and academic groups also studying large data sets from clinical trials, for example, to try to predict: does this patient need hormonal therapy in addition to radiation, for example, or perhaps they would respond just to radiation alone? So I think these, we're going to see more and more predictive biomarkers coming out from AI or deep learning algorithms that are using just the diagnostic pathology images, and that'll be very exciting.

Another area I think that is definitely growing is thinking about how we can maybe triage patients for downstream sequencing. We know, although we recommend patients have germline and somatic sequencing if they're high risk, in some cases, not everyone is getting that sequencing even nationally, certainly not internationally. So if we had, for example, machine learning algorithms that can look at these diagnostic tissue samples and say, "This patient has a 90% risk of having," for example, "a BRCA2 mutation," then we could really flag those cases that we think need sequencing especially and make sure that that happens in those cases.

Alicia Morgans: That would be incredibly powerful because I think if clinicians, particularly those busy clinicians who are maybe in a community setting where they see so many patients in a day from all different types of cancer and don't necessarily know the specifics of this test or that test for their prostate cancer patients, that would be just a wonderful way to raise a flag and say, "Make sure you get germline and somatic testing for this patient." That's great.

Tamara Lotan: Yeah, a hundred percent. I think that's very exciting.

Alicia Morgans: All of this excitement I think needs to be couched with our understanding of the limitations and the challenges that the field faces. And what are those?

Tamara Lotan: So probably the biggest challenge in terms of uptake, I think, in most pathology labs is the cost. So digital pathology is really not budget-neutral. It's adding cost at least right now because we still have to prepare the tissue in exactly the same way we would to just look at it using a glass slide under the microscope. But then we need an additional step where we buy often very expensive scanning, digital scanner machines often costing several hundreds of thousands of dollars. And if you're going to do the throughput of a very large pathology lab, you might spend millions and millions of dollars on this equipment. And then even more than that, that's a fixed cost, but even more than that is the cost of storing these images. So they're huge images actually compared to radiology images. A typical digital pathology image for a single slide might be three gigabytes or something like that. So you end up terabytes, petabytes of data and the expense of that, which just accumulates over the years. In labs like ours, which are producing a million slides a year, it becomes astronomical.

So I think a lot of groups are thinking about how to store images similar to our radiology colleagues where you have some that are immediately accessible and some that are in a colder kind of storage that reduces cost, for example, and hoping that the cost for storage goes down over time. But I think that what's really going to drive uptake is if we have these compelling use cases. So if pathologists are saying, "I really want a digital pathology practice in my group because I feel like that will make my Gleason grading more reproducible," if a clinician like yourself says, "I want you to put a test online that predicts the outcome of my patient better than grading alone," that hopefully will drive investment by big health systems in this technology.

Alicia Morgans: I agree. Once certain groups are doing it, then other groups will follow, and so on down the line. This is really, really important. So if you have to look back and think about digital pathology, these AI algorithms, machine learning, what would your message be to listeners as they try to think more deeply about pathology and the way that it intersects with our care of patients with GU malignancies?

Tamara Lotan: I think this is an incredibly exciting time. I think we're really at the precipice of this revolution in how we practice pathology, and we're only going to become more accurate, more reproducible, and more equitable across all kinds of care settings. So I think that's a super exciting part of being a pathologist right now. Many pathologists or many, I would say, non-pathologists say to pathologists, "Oh, you must be scared. You won't have a job in the future. You're going to be replaced by machines." But I think pathologists really look forward to this because I think this will take away the most menial parts of our job where we're just screening things and replace it with a much more efficient process and really allow us to focus on the things that are most exciting and that require all of that medical knowledge that we've accrued over the years to apply to our patient samples. So we're really looking forward, I think, to the future, and I think it's a very exciting time for the field.

Alicia Morgans: Well, I could not agree more, and I really look forward to seeing the bright future that is the new wave of pathology in GU oncology but also of course, I'm sure across the spectrum of cancers and perhaps even beyond. And I thank you so much for sharing your knowledge today, for your time, and for your expertise.

Tamara Lotan: Thanks so much for having me.

Go here to see the original:
How AI and Machine Learning Are Revolutionizing Prostate Cancer ... - UroToday

New Amazon AI initiative includes scholarships, free AI courses – About Amazon

Artificial intelligence (AI) is the most transformative technology of our generation. If we are going to unlock the full potential of AI to tackle the world's most challenging problems, we need to make AI education accessible to anyone with a desire to learn.

That's why Amazon is announcing AI Ready, a new commitment designed to provide free AI skills training to 2 million people globally by 2025. To achieve this goal, we're launching new initiatives for adults and young learners, and scaling our existing free AI training programs, removing cost as a barrier to accessing these critical skills.

The three new initiatives are described in detail below.

The need for an AI-savvy workforce has never been greater, according to a new study by AWS and research firm Access Partnership.

Amazon is launching AI Ready to help those with a desire to learn about AI and benefit from the tremendous opportunity ahead. The following initiatives are designed to open opportunities to those in the workforce today as well as the future generation.

To support professionals in the workplace, we're announcing eight new, free AI and generative AI courses open to anyone and aligned to in-demand jobs. There is something for everyone with courses ranging from foundational to advanced and for business leaders as well as technologists. These courses augment the 80+ free and low-cost AI and generative AI courses and resources provided through AWS.

Through the AWS Generative AI Scholarship, AWS will provide Udacity scholarships, valued at more than $12 million, to more than 50,000 high school and university students from underserved and underrepresented communities globally.

We want to help as many students as possible. Eligible students can take the new Udacity course Introducing Generative AI with AWS for free. The course, which was designed by AI experts at AWS, introduces students to foundational generative AI concepts and guides them through a hands-on project. Upon successful course completion, students earn a certificate from Udacity to showcase their knowledge to future employers.

Amazon is kicking off a new collaboration between Amazon Future Engineer and Code.org to launch Hour of Code Dance Party: AI Edition. During this hour-long introduction to coding and AI, students will create their own virtual music video set to hit songs from artists including Miley Cyrus, Harry Styles, and more.

Students will code their virtual dancers' choreography and use emojis as AI prompts to generate animated backgrounds. The activity will give participants an introduction to generative AI, including learning about large language models and how they are used to power the predictive analytics responsible for creating new images, text, and more.

Hour of Code will take place globally during Computer Science Education Week, December 4-10, engaging students and teachers in kindergarten through 12th grade. Additionally, AWS is providing up to $8 million in AWS Cloud computing credits to Code.org, which runs on AWS, to further support Hour of Code.

Amazon's new AI Ready commitment is in addition to AWS's commitment to invest hundreds of millions of dollars to provide free cloud computing skills training to 29 million people by 2025, which has already trained more than 21 million people.

Excerpt from:
New Amazon AI initiative includes scholarships, free AI courses - About Amazon

Researchers boost vaccines and immunotherapies with machine learning to drive more effective treatments – Phys.org

Small molecules called immunomodulators can help create more effective vaccines and stronger immunotherapies to treat cancer.

But finding the molecules that instigate the right immune response is difficult: the number of drug-like small molecules has been estimated to be 10⁶⁰, much higher than the number of stars in the visible universe.

A team from the Pritzker School of Molecular Engineering (PME) at The University of Chicago tackled the problem by using machine learning to guide high-throughput experimental screening of this vast search space.

In a potential first for the field of vaccine design, machine learning guided the discovery of new immune pathway-enhancing molecules and found one particular small molecule that could outperform the best immunomodulators on the market. The results are published in the journal Chemical Science.

"We used artificial intelligence methods to guide a search of a huge chemical space," said Prof. Aaron Esser-Kahn, co-author of the paper who led the experiments. "In doing so, we found molecules with record-level performance that no human would have suggested we try. We're excited to share the blueprint for this process."

"Machine learning is used heavily in drug design, but it doesn't appear to have been previously used in this manner for immunomodulator discovery," said Prof. Andrew Ferguson, who led the machine learning. "It's a nice example of transferring tools from one field to another."

Immunomodulators work by changing the signaling activity of innate immune pathways within the body. In particular, the NF-κB pathway plays a role in inflammation and immune activation, while the IRF pathway is essential in antiviral response.

Earlier this year, the PME team conducted a high-throughput screen that looked at 40,000 combinations of molecules to see if any affected these pathways. They then tested the top candidates, finding that when those molecules were added to adjuvants, ingredients that help boost the immune response in vaccines, the molecules increased antibody response and reduced inflammation.

To find more candidates, the team used these results combined with a library of nearly 140,000 commercially available small molecules to guide an iterative computational and experimental process.

Graduate student Yifeng (Oliver) Tang used a machine learning technique called active learning, which blends both exploration and exploitation to efficiently navigate the experimental screening through molecular space. This approach learns from the data previously collected and finds potential high-performing molecules to be tested experimentally while also pointing out areas that have been under-explored and may contain some valuable candidates.

The process was iterative; the model pointed out potential good candidates or areas in which it needed more information, and the team conducted a high-throughput analysis of those molecules and then fed the data back into the active learning algorithm.
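
A bare-bones sketch of such a loop is shown below; the surrogate model, acquisition rule, and the run_assay stand-in for the wet-lab screen are all simplifications of what the team describes.

```python
# Minimal active-learning loop: train a surrogate on the molecules screened so far,
# pick the next batch from the unlabelled library (mixing exploitation with some
# exploration), "measure" them, and repeat. run_assay() stands in for the real screen.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
library = rng.normal(size=(140_000, 32))    # molecular descriptors (placeholder)
true_activity = library[:, 0] - 0.5 * library[:, 1] + rng.normal(scale=0.1, size=len(library))

def run_assay(indices):
    return true_activity[np.asarray(indices)]   # stand-in for the wet-lab measurement

labelled = list(rng.choice(len(library), size=500, replace=False))   # initial screen
for cycle in range(4):
    surrogate = RandomForestRegressor(n_estimators=50, random_state=cycle)
    surrogate.fit(library[labelled], run_assay(labelled))
    untested = np.setdiff1d(np.arange(len(library)), labelled)
    preds = surrogate.predict(library[untested])
    top = untested[np.argsort(preds)[-150:]]                                   # exploit: best predicted molecules
    explore = rng.choice(np.setdiff1d(untested, top), size=50, replace=False)  # explore: unseen regions
    labelled.extend(np.concatenate([top, explore]).tolist())
    print(f"cycle {cycle}: best measured activity so far = {run_assay(labelled).max():.2f}")
```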

After four cycles, and ultimately sampling only about 2% of the library, the team found high-performing small molecules that had never been found before. These top-performing candidates improved NF-κB activity by 110%, elevated IRF activity by 83%, and suppressed NF-κB activity by 128%.

One molecule induced a three-fold enhancement of IFN-β production when delivered with what's called a STING (stimulator of interferon genes) agonist. STING agonists promote stronger immune responses within tumors and are a promising treatment for cancer.

"The challenge with STING has been that you can't get enough immune activity in the tumor, or you have off-target activity," Esser-Kahn said. "The molecule we found outperformed the best published molecules by 20 percent."

They also found several "generalists": immunomodulators capable of modifying pathways when co-delivered with agonists, chemicals that activate cellular receptors to produce a biological response. These small molecules could ultimately be used in vaccines more broadly.

"These generalists could be good across all vaccines and therefore could be easier to bring to market," Ferguson said. "That's quite exciting, that one molecule could play a multifaceted role."

To better understand the molecules found by machine learning, the team also identified common chemical features of the molecules that promoted desirable behaviors. "That allows us to focus on molecules that have these characteristics, or rationally engineer new molecules with these chemical groups," Ferguson said.

The team expects to continue this process to search for more molecules and hope others in the field will share datasets to make the search even more fruitful. They hope to screen molecules for more specific immune activity, like activating certain T-cells, or find a combination of molecules that gives them better control of the immune response.

"Ultimately, we want to find molecules that can treat disease," Esser-Kahn said.

More information: Yifeng Tang et al, Data-driven discovery of innate immunomodulators via machine learning-guided high throughput screening, Chemical Science (2023). DOI: 10.1039/D3SC03613H

Journal information: Chemical Science

See the original post:
Researchers boost vaccines and immunotherapies with machine learning to drive more effective treatments - Phys.org

RoboGarden and University of Northern British Columbia partner on … – Canadian Manufacturing

CALGARY - RoboGarden, a cloud-based publisher that works on interactive, AI-driven digital skills learning experiences, announces a new collaboration with University of Northern British Columbia Continuing Studies to offer its singular Machine Learning and Artificial Intelligence (AI) Bootcamp experience to the UNBC CS workforce upskilling and lifelong learning community.

New Organization for Economic Cooperation and Development (OECD) surveys of employers and workers in the manufacturing and finance sectors of seven countries shed new light on the impact that Artificial Intelligence has on the workplace. The findings suggest that both workers and their employers are generally very positive about the impact of AI on performance and working conditions. However, there are also concerns, including job loss. The surveys show that both training and worker consultation are associated with better outcomes for workers.

This unique learning program was developed with career progression and earning potential in mind, geared toward providing students with access to an industry with an average base pay of over $88k per year. Prior programming experience or credentials are not required. Instead, the Bootcamp meets students on their terms, encouraging them to transfer their prior studies and life experiences into industry-informed Machine Learning and AI projects. Delivered online via the Calgary-designed RoboGarden learning platform, students work through 10 progressive learning modules, offering an experience as engaging as it is enriching, and culminating in RoboGarden's signature Capstone project. Coupled with live-virtual instructor sessions, this learning experience disrupts the typical education style to provide students with industry expert instructors and curated content to deliver outcomes for what is needed in the workforce now, and in the future.

"We know that the Canadian economy across sectors is transforming rapidly with the introduction of Artificial Intelligence and Machine Learning-based tools, services and solutions," said Dr. Mohamed Elhabiby, Co-Founder and President of RoboGarden. "It gives me great pleasure to know that we're equipping the Canadian workforce with the skillsets they need to innovate and succeed in this new AI-driven era. The students we'll be teaching in partnership with our amazing collaborators at University of Northern British Columbia Continuing Studies will complete the program ready to contribute with high-demand skills, bringing with them practical, future-minded learnings from our world-class instructors and curriculum."

"Part of reimagining how we educate and learn to meet the challenges of a rapidly changing world is through collaboration. Our partnership with RoboGarden to deliver this online, instructor-supported Machine Learning and AI Digital Workforce Upskilling Bootcamp will provide residents in northern British Columbia with the most up-to-date tools and resources in this field," says UNBC Continuing Studies Interim Manager Stacey Linton. "This innovative partnership will empower our learners with the cutting-edge knowledge and skills to meet the emerging needs of the workforce, both at home and further afield."

Continued here:
RoboGarden and University of Northern British Columbia partner on ... - Canadian Manufacturing
