
Machine learning approaches found to benefit Parkinson’s research – Parkinson’s News Today

Scientists exploring the potential of machine learning approaches in drug discovery for Parkinson's disease and other neurodegenerative disorders, focusing on the misfolded proteins that are the hallmark of such conditions, found that one such method identified compounds two orders of magnitude more potent than ones previously reported, per a new study.

Using this method allowed the researchers, from the U.K. and the U.S., to identify compounds that can effectively block the clumping, or aggregation, of the alpha-synuclein protein, an underlying cause of Parkinson's, the study reported.

"We anticipate that using machine learning approaches of the type described here could be of considerable benefit to researchers working in the field of protein misfolding diseases [such as Parkinson's], and indeed early-stage drug discovery research in general," the researchers wrote.

Their study, "Discovery of potent inhibitors of α-synuclein aggregation using structure-based iterative learning," was published in the journal Nature Chemical Biology.

Parkinson's disease is marked by the toxic accumulation of misfolded forms of the alpha-synuclein protein within dopamine-producing nerve cells, the cells responsible for releasing the neurotransmitter dopamine. Dopamine is a signaling molecule that plays a role in controlling movement; Parkinson's results from the progressive loss of these cells.

Despite efforts to identify compounds that stop this toxic accumulation, there are, to date, no disease-modifying treatments available for Parkinson's.

Traditional strategies to identify novel therapies, which involve screening large chemical libraries for potential candidates prior to any testing in humans, are time-consuming, expensive, and often unsuccessful.

In the case of Parkinson's, the development of effective therapies has been hampered by the lack of methods to identify the right molecular targets.

"One route to search for potential treatments for Parkinson's requires the identification of small molecules that can inhibit the aggregation of alpha-synuclein. But this is an extremely time-consuming process; just identifying a lead candidate for further testing can take months or even years," Michele Vendruscolo, a professor at the University of Cambridge and the study's lead author, said in a university press release.

The researchers therefore developed a method that uses machine learning to quickly screen chemical libraries containing millions of compounds. The goal was to identify small molecules able to block the clumping of alpha-synuclein.

From a list of small molecules predicted to bind well to the alpha-synuclein aggregates, the researchers chose a small number of the top-ranking compounds to test experimentally as potential inhibitors of aggregation.

The results from these experimental assays were then fed to the machine learning model, which identified those with the most promising effects. This process was repeated a few times, so that highly potent compounds were identified.
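The workflow described above is essentially an iterative, active-learning screening loop. The sketch below is a generic, hedged illustration of that loop using scikit-learn; the fingerprints, surrogate model, and "assay" values are stand-ins, not the authors' actual pipeline.

```python
# Hedged sketch of an iterative (active-learning) virtual screen, not the study's code.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
library = rng.random((50_000, 64))             # stand-in compound fingerprints
true_potency = library @ rng.random(64)        # hidden "assay" ground truth (illustrative)

model = RandomForestRegressor(n_estimators=100, random_state=0)
tested_idx = list(rng.choice(len(library), size=50, replace=False))  # initial assays

for round_ in range(4):
    model.fit(library[tested_idx], true_potency[tested_idx])  # learn from assay results
    scores = model.predict(library)                           # screen the library in silico
    tested = set(tested_idx)
    new_picks = [i for i in np.argsort(-scores) if i not in tested][:20]
    tested_idx += new_picks                                   # "assay" the top-ranked picks
    print(f"round {round_}: best tested potency = {true_potency[tested_idx].max():.3f}")
```

Each round, the surrogate model is retrained on everything assayed so far and then re-ranks the whole library, mirroring the repeated computational screen and experimental feedback described in the article.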

"Instead of screening experimentally, we screen computationally," Vendruscolo said.


"By using the knowledge we gained from the initial screening with our machine learning model, we were able to train the model to identify the specific regions on these small molecules responsible for binding. Then we can re-screen and find more potent molecules," Vendruscolo said.

Using this method, the researchers optimized the initial compounds to target pockets on the surfaces of the alpha-synuclein clumps.

In lab tests using brain tissue samples from patients with Lewy body dementia (LBD) and multiple system atrophy (MSA), two forms of atypical parkinsonism, the compounds effectively blocked aggregation of alpha-synuclein.

"Machine learning is having a real impact on the drug discovery process; it's speeding up the whole process of identifying the most promising candidates," Vendruscolo said. "For us this means we can start work on multiple drug discovery programs instead of just one."

According to Vendruscolo, so much is possible due to the massive reduction in both time and cost; it's an exciting time.

Excerpt from:
Machine learning approaches found to benefit Parkinson's research - Parkinson's News Today

Read More..

Machine Learning and Neural Network Can Be Effective Diagnostic Tools in MDS, Study Finds – AJMC.com Managed Markets Network

Artificial intelligence can improve detection of binucleated erythroblasts (BNEs), a rare and difficult-to-quantify phenomenon that can indicate myelodysplastic syndrome (MDS), according to a new report.

The investigators behind the study say the new method streamlines the use of new technology to make it more feasible to leverage machine learning. The study was published in Scientific Reports.1

The authors explained that MDS is notably heterogeneous, but it can typically be diagnosed based on morphologic bone marrow (BM) dysplasia and persistent cytopenia.

Myelodysplastic syndromes, or MDS, are a group of bone marrow disorders.

"However, accurate diagnosis of cases in which mild cytopenias and subtle dysplastic changes are present can be difficult, and inter-scorer variability and subjectivity may be present, even among experienced hematopathologists," they wrote.

Some patients are left with indefinite diagnoses, such as idiopathic cytopenia of undetermined significance (ICUS) or clonal cytopenia of undetermined significance (CCUS), they said.

Given the current lack of precision, the investigators said it is important to identify objective, standardized methods of distinguishing MDS from nonclonal reactive causes of cytopenia and dysplasia.

"Moreover, rare events that are indicative of MDS, such as binucleated erythroblasts, while easy to identify using visual microscopy, can be challenging to quantify in large numbers, thus limiting statistical robustness," they said.

One possible solution is imaging flow cytometry (IFC), since it combines the high-throughput data acquisition capacity and statistical robustness of conventional multicolor flow cytometry (MFC) with the high-resolution imaging capabilities of microscopy in a single system.

A previous study by the same group found IFC is effective at analyzing morphometric changes in dyserythropoietic BM cells.2 In that study, the investigators used IFC to analyze samples from 14 patients with MDS, 6 patients with ICUS/CCUS, 6 non-MDS controls, and 11 healthy controls.

The investigators found the IFC model "reliably identified and enumerated true binucleated erythroblasts at a significantly higher frequency in two out of three erythroblast maturation stages in MDS patients compared to normal BM" (both P = .0001).

Still, they said the workflow of the feature-based IFC analysis is challenging and time-consuming, and requires software-specific expertise. That's why, in the new paper, they proposed using a convolutional neural network (CNN) algorithm to analyze the IFC image data. They said the CNN algorithm has better accuracy and more data-interpretation flexibility than feature-based analysis alone. In addition, they used artificial intelligence software with a graphical user interface designed to render results that are meaningful to researchers who do not have advanced coding skills.

To test the new method, the investigators used the raw data from the earlier study and analyzed it with the new artificial intelligence model in order to compare the model's results to the previous IFC analysis. Each of the samples was also manually examined to validate the presence of BNEs.

The new model had an accuracy of 94.3% and a specificity of 98.2%. The latter means the model rarely misclassified non-BNEs as BNEs. The model's sensitivity was lower: 21.1% of BNEs in the data set were incorrectly classified as erythroblasts exhibiting irregular nuclear morphology. Overall, though, the investigators said the data suggest a high degree of confidence that when the model identifies a BNE, it is correct.
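To make these metrics concrete, the short sketch below computes sensitivity and specificity from a confusion matrix. The counts are hypothetical, chosen only to mirror the reported pattern (high specificity, lower sensitivity); they are not the study's data.

```python
# Illustration of sensitivity vs. specificity with made-up confusion-matrix counts.
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)   # fraction of true BNEs correctly identified
    specificity = tn / (tn + fp)   # fraction of non-BNEs correctly rejected
    return sensitivity, specificity

# Hypothetical counts mirroring the described behavior, not the study's actual data.
sens, spec = sensitivity_specificity(tp=79, fn=21, tn=982, fp=18)
print(f"sensitivity: {sens:.1%}, specificity: {spec:.1%}")
```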

The investigators said it was notable that the model worked as well as it did despite the small data set used to train it. They said incorporating a more robust data set would likely improve the model's performance.

"Emphasis should be placed on augmenting the classes of cells with irregular nuclear morphology and BNEs that posed classification difficulties," they wrote. "Moreover, expanding the range of classification categories to include a category for uncertain cases, in addition to BNEs, doublets, and cells with irregular nuclear morphology, could be beneficial."

For now, though, the investigators said their study shows that AI has the potential to be an effective and efficient diagnostic tool for patients with MDS.


See the original post here:
Machine Learning and Neural Network Can Be Effective Diagnostic Tools in MDS, Study Finds - AJMC.com Managed Markets Network

Read More..

Machine Learning Approach Effectively Predicts Remission in RA Following TNF Treatment – MD Magazine

Koshiro Sonomoto, PhD


A low-cost predictive machine learning model successfully predicted achievement of remission based on Clinical Disease Activity Index (CDAI) measures among a cohort of patients with rheumatoid arthritis (RA) after 6 months of treatment with tumor necrosis factor (TNF) inhibitors, according to research published in Rheumatology and Therapy.1

Because the investigators only used CDAI measures at baseline and at month 6, they believe this method demonstrates the feasibility of building regional cohorts to create low-cost models tailored to specific institutions or geographic regions.

Currently, treatment guidelines suggest initiating therapy with methotrexate, followed by either biologic disease-modifying antirheumatic drugs (bDMARDs) or Janus kinase (JAK) inhibitors if methotrexate is ineffective. However, recent studies have favored bDMARDs over JAK inhibitors, and treatment with a TNF inhibitor is therefore becoming an increasingly popular option for this patient population. Despite these encouraging findings, only 70% of patients initiating a TNF inhibitor show a favorable response.2

"Numerous efforts have been reported to predict the efficacy of TNF in advance," wrote a team of investigators led by Koshiro Sonomoto, PhD, of the University of Occupational and Environmental Health in Japan. "However, access to these advanced technologies may be limited to certain countries and advanced facilities due to cost, labor requirements, and the need for process standardization. Conversely, predictive models based on routine clinical data are more accessible."

Patients with RA beginning treatment with a TNF as the first targeted synthetic (ts)/bDMARD after inadequate response to methotrexate were included in the analysis. Data were collected from the FIRST registry between August 2003 and October 2022. The analysis of baseline characteristics and 6-month CDAI used a variety of machine learning approaches, such as lasso logistic regression (Lasso), logistic regression with stepwise variable selection, support vector machine, and decision tree, along with 48 factors available in routine clinical practice for the prediction model.

A total of 4706 patients in the FIRST registry initiated treatment with b/tsDMARDs during the study period, of whom 2223 were receiving methotrexate. Of these patients, 1630 received a TNF inhibitor and 79 received a JAK inhibitor. The average age of patients was 59.2 years, the average body mass index was 22.1, the mean disease duration was 75.7 months, and all patients were Asian. The mean dose of methotrexate was 11.3 mg/week and the mean CDAI score was 26.1.

The models exclusively relied on patient-reported outcomes and quantitative parameters, as opposed to subjective physician input.

Of the approaches, Lasso demonstrated advantages in predicting CDAI remission, with a specificity of 69.9%, a sensitivity of 61.7%, and a mean area under the curve (AUC) of 0.704. Patients who were predicted to respond to TNF inhibition achieved CDAI remission at an average rate of 53.2%, compared with only 26.4% of predicted non-responders.
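As a rough illustration of the kind of workflow described (a lasso-penalized logistic regression evaluated by AUC), here is a hedged sketch using scikit-learn on synthetic data; the 48 features, cohort, and tuning are stand-ins, not the FIRST-registry model itself.

```python
# Hedged sketch of a lasso-penalized logistic remission classifier evaluated by AUC.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in for 48 routine clinical factors and a binary CDAI-remission label.
X, y = make_classification(n_samples=1500, n_features=48, n_informative=10, random_state=0)

lasso_clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000)

auc_scores = cross_val_score(lasso_clf, X, y, cv=5, scoring="roc_auc")
print(f"mean AUC: {auc_scores.mean():.3f}")
```

The L1 penalty drives uninformative coefficients to zero, which is what makes this type of model cheap to deploy on routinely collected variables.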

These results could also help to identify an alternative b/tsDMARDs class for patients who were predicted to be TNF non-responders.

Investigators mentioned limitations, including the decreasing accuracy of the Lasso-generated remission prediction model in a calendar-based cohort. In this analysis, which split the data in a 9:1 ratio with a cutoff of October 2019, an increase in censoring was reported in the validation cohort. They suggest this could be due to COVID-19 complications.

"This approach holds the potential to improve rheumatoid arthritis management by reducing the need for trial-and-error approaches and facilitating more personalized and effective treatment strategies," investigators concluded. While further validation is necessary, the study also suggests that creating cost-effective models tailored to specific regions or institutions is possible.


See the article here:
Machine Learning Approach Effectively Predicts Remission in RA Following TNF Treatment - MD Magazine

Read More..

Distinction Between Data Science, AI, and Machine Learning: Revealed – TimesTech

Modern businesses are aware of the usefulness of integrating Artificial Intelligence (AI), data science, and Machine Learning (ML). However, these concepts often need clarification, so deciding which technology will help and why can be challenging.

In this brief guide, we will reveal the details and characteristics of each technology and tell you more about its relationship and application.

Artificial Intelligence (AI) is a digital technology designed to imitate human intellect, enabling devices to make decisions and learn like people. The primary purpose of AI in modern companies is to create algorithms and software systems that independently complete tasks that would typically demand human intervention. AI is excellent at analyzing data, recognizing patterns, solving complex challenges, and adapting to dynamic conditions.

AI is categorized into two main types: narrow AI, which is built for specific tasks, and general AI, a still largely theoretical form intended to match human-level ability across many tasks.

Artificial Intelligence is present in many areas of daily life. For example, most smartphones are equipped with AI assistants. AI algorithms are also used in healthcare, logistics, finance, education, and entertainment. Among the technologies powered by artificial intelligence are robotic systems, autopilot cars, content editors, etc.

Machine Learning (ML) is a subfield of AI encompassing self-learning devices and software systems that don't require constant manual updates to identify new patterns and adjust their workflows accordingly. ML allows machines to independently detect patterns, determine trends, make accurate forecasts, and produce conclusions based on previously acquired information.

Machine learning is divided into several subtypes, most commonly supervised learning, unsupervised learning, and reinforcement learning.

Machine learning constitutes the most compact subset of Artificial Intelligence, so any industry using AI or data science can augment its current systems' capabilities with it. The introduction of ML helps devices continually improve, enhancing prediction, fraud prevention, personalized recommendations, and more. By learning from constantly updated data, systems can act more efficiently and adjust to present conditions, achieving the highest possible efficiency.

Data science is a discipline built on rigorous analytical techniques whose key application is deriving concise analytical conclusions from large amounts of information. It involves the mandatory steps of data selection, preparation, structuring, and analysis. Data science allows you to extract actionable insights from huge data sets and use that information to your advantage.

Here are several examples of data usage in various business domains:

These examples are just a few ways data science transforms familiar tasks with valuable insights and analyzes massive amounts of data. This technology can handle any task that involves deep analysis of large amounts of data. So, if you are wondering how to make an app like Spotify or Netflix that will provide personal recommendations, the answer is to implement AI, ML, and Data Science.

All of the technologies we have listed have a common characteristic: they are all data-driven. Moreover, they are more connected than they seem: artificial intelligence includes data science, which in turn includes machine learning.

The key differences between the three concepts are in their intended purpose. So, the goal of artificial intelligence is to create smart digital systems and devices. Data science aims to process and analyze large amounts of information to help AI systems accomplish a given task. Machine learning is designed to train both of these systems, improving their performance and ensuring reliability and safety.

Thus, all three technologies are important drivers of digital progress for modern businesses, making them indispensable enablers for a multitude of industries.

AI is all about making smart devices, while machine learning is a special part of AI that teaches them how to learn. On the other hand, data science is about getting valuable insights from data, which is important for both AI and machine learning. All of these fields have the potential to change industries and have a big impact on our everyday lives and careers.

Follow this link:
Distinction Between Data Science, AI, and Machine Learning: Revealed - TimesTech

Read More..

Generative AI Achieves Superresolution with Minimal Tuning | Research & Technology | May 2024 – Photonics.com

GÖRLITZ, Germany, May 2, 2024 - Diffusion models for artificial intelligence (AI) produce high-quality samples and offer stable training, but their sensitivity to the choice of variance can be a drawback. The variance schedule controls the dynamics of the diffusion process, and typically it must be fine-tuned with a hyperparameter search for each application. This is a time-consuming task that can lead to suboptimal performance.

A new open-source algorithm, from the Center for Advanced Systems Understanding (CASUS) at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Imperial College London, and University College London, improves the quality and resolution of images, including microscopic images, with minimal fine-tuning.

The algorithm, called the Conditional Variational Diffusion Model (CVDM), learns the variance schedule as part of the training process. In experiments, the CVDM's approach of learning the schedule was shown to yield comparable or better results than models that set the schedule as a hyperparameter.
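A minimal sketch of what "learning the schedule" can mean in practice follows. This is an illustrative, generic construction in PyTorch, not the CVDM implementation; the module and variable names are assumptions.

```python
# Illustrative sketch of a learnable noise/variance schedule (not the CVDM code):
# the per-step noise increments are parameters trained jointly with the denoiser,
# instead of being fixed by a hand-tuned hyperparameter schedule.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableNoiseSchedule(nn.Module):
    def __init__(self, num_steps: int = 1000):
        super().__init__()
        self.raw = nn.Parameter(torch.zeros(num_steps))   # unconstrained parameters

    def forward(self) -> torch.Tensor:
        # Softplus keeps increments positive; the cumulative sum makes the
        # schedule monotonically increasing over diffusion time.
        increments = F.softplus(self.raw) + 1e-4
        gamma = torch.cumsum(increments, dim=0)
        return gamma / gamma[-1]                           # normalized noise level in (0, 1]

schedule = LearnableNoiseSchedule(num_steps=1000)
noise_level = schedule()                                   # differentiable w.r.t. schedule.raw
# In training, schedule.parameters() would be added to the same optimizer as the
# denoising network, so the schedule is fitted from data rather than hand-tuned.
```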

The CVDM can be used to achieve superresolution using an inverse problem approach.

The availability of big data analytics, along with new ways to analyze mathematical and scientific data, allows researchers to use an inverse problem approach to uncover the causes behind specific observations, such as those made in microscopic imaging.

By calculating the parameters that produced the observation, i.e., the image, a researcher can achieve higher-resolution images. However, the path from observation to superresolution is usually not obvious, and the observational data is often noisy, incomplete, or uncertain.

Diffusion models are sensitive to the choice of the predefined schedule that controls the diffusion process, including how the noise is added. When too little or too much noise is added, at the wrong place or the wrong time, the result can be failed training. Unproductive runs hinder the effectiveness of diffusion models.

"Diffusion models have long been known as computationally expensive to train . . . But new developments like our Conditional Variational Diffusion Model allow minimizing unproductive runs, which do not lead to the final model," researcher Artur Yakimovich said. "By lowering the computational effort, and hence power consumption, this approach may also make diffusion models more eco-friendly to train."

The researchers tested the CVDM in three applications: superresolution microscopy, quantitative phase imaging, and image superresolution. For superresolution microscopy, the CVDM demonstrated comparable reconstruction quality and enhanced image resolution compared with previous methods. For quantitative phase imaging, it significantly outperformed previous methods. For image superresolution, reconstruction quality was comparable to previous methods. The CVDM also produced good results for a wild clinical microscopy sample, indicating that it could be useful in medical microscopy.

Based on the experimental outcomes, the researchers concluded that fine-tuning the schedule by experimentation should be avoided, because the schedule can be learned during training in a stable way that yields the same or better results.

"Of course, there are several methods out there to increase the meaningfulness of microscopic images, some of them relying on generative AI models," Yakimovich said. "But we believe that our approach has some new, unique properties that will leave an impact in the imaging community, namely high flexibility and speed at a comparable, or even better, quality compared to other diffusion model approaches."

The CVDM supports probabilistic conditioning on data, is computationally less expensive than established diffusion models, and can be easily adapted for a variety of applications.

"In addition, our CVDM provides direct hints where it is not very sure about the reconstruction, a very helpful property that sets the path forward to address these uncertainties in new experiments and simulations," Yakimovich said.

The work will be presented by della Maggiora at the International Conference on Learning Representations (ICLR 2024) on May 8 in poster session 3. ICLR 2024 takes place May 7-11, 2024, at the Messe Wien Exhibition and Congress Center, Vienna.

The research was published in the Proceedings of the Twelfth International Conference on Learning Representations, 2024 (www.arxiv.org/abs/2312.02246).

More:
Generative AI Achieves Superresolution with Minimal Tuning | Research & Technology | May 2024 - Photonics.com

Read More..

Learning Before Legislating in Texas’ AI Advisory Council – dallasinnovates.com

From controlling home environments with commands like "Siri, turn on the living room lights" to managing fraud and risk in financial institutions, artificial intelligence is integral to many products and services we use daily.

And the news cycle reminds us frequently that this is just the beginning, that the full promise and peril of AI still lie before us. This is not just technology that will allow us to do the same things in a new way; it has the potential to make us extra-human: smarter, faster versions of ourselves.

"Every aspect of civilization will be impacted, I believe, by AI, and therefore I wanted to study it thoughtfully and thoroughly before jumping into legislation," said Senator Tan Parker.

The Artificial Intelligence Advisory Council was established through House Bill 2060 during the 88th legislative session. Composed of founding members and Co-Chairs Senator Parker and Representative Gio Capriglione, along with five other public members, the council intends to increase the study, use, and public awareness of AI. At the heart of any successful endeavor lies collaboration. The Texas AI Council will serve as a nucleus for fostering collaboration among key stakeholders, including government agencies, industry leaders, academic institutions, and research centers.

"There are very real and concerning downsides that have to be managed when it comes to AI, and as a result of that, while I am always a free-market, free-enterprise guy trying to minimize regulation, some regulation will be necessary," said Senator Parker.

That's why he and the AI advisory council are taking a thoughtful approach. Through public hearings and agency testimony, they will create recommendations for legislation, which they plan to issue by December 2024.

"Communication and knowledge are the cornerstones of progress, and our council will serve as the catalyst, uniting minds from all sectors to produce thoughtful policy concerning AI advancement and technology," according to Senator Parker.

The group's first working meeting was at the end of March, when it heard from four state agencies, including the Texas Department of Information Resources (DIR) and the Texas Department of Transportation (TxDOT).

"I was quite pleased, actually, with the progress and the thoughtfulness of the agencies in terms of how they're approaching AI," Senator Parker noted.

For example, TxDOT is using AI to cut down accident response time, process payments, manage traffic, and evaluate aging infrastructure.

The Texas Workforce Commission also testified about its chatbot, named Larry, being used to screen calls and efficiently connect them with the best department. Parker doesn't envision this ever becoming an all-bot operation, saying the people of Texas are best served by man and machine working together.

"We must maintain a human touch and a human presence with regard to the workforce commission, as you have people that are struggling for work and trying to find new careers and so forth," Senator Parker said.

The council will continue hearing from agencies and the public through the summer, gathering information that will help inform the group's recommendations. Parker is confident in this approach. He strongly believes in the states, particularly Texas, leading the nation on critical issues.

He pointed to Jenna's Law. Passed in 2009 and amended in 2017, the legislation mandates training for K-12 educators on recognizing and reporting child abuse. After it passed, a study found educators reported suspected abuse almost four times more often than before the training. Now, Senator Cornyn is moving that law through the U.S. Congress. Parker hopes to see it become a federal law by year's end and believes the Lone Star State can again lead the nation on AI legislation.

Texas has long been a beacon of innovation and growth in many areas, and AI creates an unprecedented opportunity to further bolster the states reputation as a leader in groundbreaking research and development while increasing the benefits to Texans in their everyday lives. The council aims to support cutting-edge research initiatives and breakthroughs in AI while propelling Texas to the forefront of global innovation and efficiency.

The next AI Advisory Council meeting will be held at the Texas Capitol on May 8th. For more information, including background on council members, overall objectives, and when and where you can participate in public testimony, check out the website.

Voices contributor Nicole Ward is a data journalist for the Dallas Regional Chamber.


Read more:
Learning Before Legislating in Texas' AI Advisory Council - dallasinnovates.com

Read More..

Cohere Command R and R+ are now available in Amazon SageMaker JumpStart | Amazon Web Services – AWS Blog

This blog post is co-written with Pradeep Prabhakaran from Cohere.

Today, we are excited to announce that Cohere Command R and R+ foundation models are available through Amazon SageMaker JumpStart to deploy and run inference. Command R/R+ are state-of-the-art retrieval augmented generation (RAG)-optimized models designed to tackle enterprise-grade workloads.

In this post, we walk through how to discover and deploy Cohere Command R/R+ via SageMaker JumpStart.

Cohere Command R is a family of highly scalable language models that balance high performance with strong accuracy. The Command R family includes the Command R and Command R+ models, which are optimized for RAG-based workflows such as conversational interaction and long-context tasks, enabling companies to move beyond proof of concept and into production. These powerful models are designed to handle complex tasks with high performance and strong accuracy, making them suitable for real-world applications.

Command R boasts high precision on RAG and tool use tasks, low latency and high throughput, a long 128,000-token context length, and strong capabilities across 10 key languages: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, and Chinese.

Command R+ is the newest model, optimized for extremely performant conversational interaction and long-context tasks. It is recommended for workflows that lean on complex RAG functionality and multi-step tool use (agents), while Command R is well-suited for simpler RAG and single-step tool use tasks, as well as applications where price is a major consideration.

With SageMaker JumpStart, you can choose from a broad selection of publicly available foundation models. ML practitioners can deploy foundation models to dedicated SageMaker instances from a network-isolated environment and customize models using SageMaker for model training and deployment. You can now discover and deploy Cohere Command R/R+ models with a few choices in Amazon SageMaker Studio or programmatically through the SageMaker Python SDK. Doing so enables you to derive model performance and machine learning operations (MLOps) controls with SageMaker features such as SageMaker Pipelines, SageMaker Debugger, or container logs.

The model is deployed in an AWS secure environment and under your virtual private cloud (VPC) controls, helping provide data security. Cohere Command R/R+ models are available today for deployment and inferencing in Amazon SageMaker Studio in us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-1 (N. California), us-west-2 (Oregon), Canada (Central), eu-central-1 (Frankfurt), eu-west-1 (Ireland), eu-west-2 (London), eu-west-3 (Paris), eu-north-1 (Stockholm), ap-southeast-1 (Singapore), ap-southeast-2 (Sydney), ap-northeast-1 (Tokyo), ap-northeast-2 (Seoul), ap-south-1 (Mumbai), and sa-east-1 (Sao Paulo).

You can access the foundation models through SageMaker JumpStart in the SageMaker Studio UI and the SageMaker Python SDK. In this section, we go over how to discover the models in SageMaker Studio.

From the SageMaker JumpStart landing page, you can easily discover various models by browsing through different hubs, which are named after model providers. The Cohere Command R and R+ models are available in the Cohere hub. If you don't see these models, ensure you have the latest SageMaker Studio version by shutting down and restarting Studio Classic Apps.

To find the Command R and R+ models, search for Command R in the search box located at the top left of the SageMaker JumpStart landing page. Each model can be deployed on Amazon Elastic Compute Cloud (EC2) P5 instances powered by NVIDIA H100 Tensor Core GPUs (p5.48xlarge) and Amazon EC2 P4de instances powered by NVIDIA A100 Tensor Core GPUs (ml.p4de.24xlarge).

To illustrate model deployment, we'll deploy Cohere Command R+ on NVIDIA H100. Choose the model card to open the corresponding model detail page.

When you choose Deploy, a window appears prompting you to subscribe to the model on AWS Marketplace. Choose Subscribe, which redirects you to the AWS Marketplace listing for Cohere Command R+ (H100). Follow the on-screen instructions to complete the subscription process.

Once subscribed, return to the model detail page and choose Deploy in the window. The deployment process initiates.

Alternatively, you can choose Notebooks on the model card and open the example notebook in JupyterLab. This notebook provides end-to-end guidance on deploying the model for inference and cleaning up resources. You can also find this example notebook in the Cohere SageMaker GitHub repository. To ensure the security of the endpoint, you can configure AWS Key Management Service (KMS) key for a SageMaker endpoint configuration.

If an endpoint has already been created, you can simply connect to it:
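The original code snippet is not reproduced here. A hedged sketch of connecting with the cohere_aws helper client might look like the following; the package, method names, and endpoint name are assumptions to verify against Cohere's AWS GitHub repository.

```python
# Hedged sketch: assumes the cohere_aws helper package exposes a Client with a
# connect_to_endpoint method, as in Cohere's published AWS examples.
from cohere_aws import Client

co = Client(region_name="us-east-1")
co.connect_to_endpoint(endpoint_name="cohere-command-r-plus")  # hypothetical endpoint name
```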

Once your endpoint has been connected, you can perform real-time inference using the co.chat endpoint.
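As a rough illustration (not the blog's original snippet), a chat request against the connected endpoint could look like this; the message parameter follows Cohere's public Chat API, and the exact response structure may differ in the SageMaker integration.

```python
# Hedged sketch of a real-time chat request against the connected endpoint.
response = co.chat(message="What are retrieval augmented generation models good for?")
print(response)  # the returned object is expected to contain the generated text
```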

Command R/R+ is optimized to perform well in 10 key languages, as listed in the introduction. Additionally, pre-training data have been included for the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.

The model has been trained to respond in the language of the user. Here's an example in Spanish, along with what a response might look like:
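The following is an illustrative stand-in for the original example; both the Spanish prompt and the sample reply shown in the comment are hypothetical, not captured model output.

```python
# Hypothetical Spanish-language request; the model is expected to answer in Spanish.
response = co.chat(message="¿Cuáles son las ventajas de la generación aumentada por recuperación?")
print(response)

# A response might look like (hypothetical):
# "La generación aumentada por recuperación combina la búsqueda de documentos con la
#  generación de texto, lo que permite respuestas más precisas y verificables."
```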

Command R/R+ can also perform cross-lingual tasks, such as translation or answering questions about content in other languages.

Command R/R+ can ground its generations. This means that it can generate responses based on a list of supplied document snippets, and it includes citations in its response indicating the source of the information.

For example, the code snippet that follows produces an answer to How deep is the Mariana Trench along with inline citations based on the provided on-line documents.
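The original snippet is not shown; the hedged sketch below illustrates the idea of grounded generation by passing document snippets with the question so the answer can cite them. The documents field and its keys follow Cohere's public Chat API, and the snippets themselves are illustrative.

```python
# Hedged sketch of grounded generation with inline citations.
docs = [
    {"title": "Mariana Trench",
     "snippet": "The Mariana Trench is the deepest oceanic trench on Earth, "
                "reaching a depth of roughly 10,900 meters."},
    {"title": "Challenger Deep",
     "snippet": "Challenger Deep, in the western Pacific, is the deepest known "
                "point of the trench."},
]

response = co.chat(message="How deep is the Mariana Trench?", documents=docs)
print(response)  # the answer is expected to include citations that point into `docs`
```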

Command R/R+, comes with a Tool Use API that enables the language model to interact with user-defined tools to automate highly sophisticated tasks. Command R/R+ in Tool Use mode creates API payloads (JSONs with specific parameters) based on user interactions and conversational history. These can be used to instruct any other application or tool.

For example, an application can be instructed to automatically categorize and route support tickets to the appropriate individual, change a status in customer relationship management (CRM) software, or retrieve relevant snippets from a vector database. Tool use comes in two variants, single-step and multi-step; a hedged sketch of the single-step case follows.
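In this sketch, a tool schema is passed to the chat call and the model proposes the parameters for calling it. The tool itself is hypothetical, and the schema keys follow Cohere's public tool-use documentation, so verify them against the current API reference.

```python
# Hedged sketch of single-step tool use with a hypothetical ticket-routing tool.
tools = [{
    "name": "route_support_ticket",
    "description": "Assigns a support ticket to the appropriate team.",
    "parameter_definitions": {
        "team": {"description": "Team that should handle the ticket",
                 "type": "str", "required": True},
    },
}]

response = co.chat(message="My payment failed twice, please route this to someone.",
                   tools=tools)
print(response)  # expected to contain the tool call(s) the model proposes
```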

To explore these capabilities further, you can refer to the provided Jupyter notebook and Cohere's AWS GitHub repository, which offer additional examples showcasing various use cases and applications.

After you've finished running the notebook and exploring the Cohere Command R and R+ models, it's essential to clean up the resources you've created to avoid incurring unnecessary charges. Delete the SageMaker endpoint, endpoint configuration, and model to stop the billing, as in the sketch below.
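A hedged cleanup sketch using boto3 follows; the endpoint, endpoint-configuration, and model names are hypothetical placeholders for whatever you actually created.

```python
# Hedged cleanup sketch: delete the endpoint, its configuration, and the model.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")
sm.delete_endpoint(EndpointName="cohere-command-r-plus")               # hypothetical names
sm.delete_endpoint_config(EndpointConfigName="cohere-command-r-plus")
sm.delete_model(ModelName="cohere-command-r-plus")
```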

In this post, we explored how to leverage the powerful capabilities of Cohere's Command R and R+ models on Amazon SageMaker JumpStart. These state-of-the-art large language models are specifically designed to excel at real-world enterprise use cases, offering unparalleled performance and scalability. With their availability on SageMaker JumpStart and AWS Marketplace, you now have seamless access to these cutting-edge models, enabling you to unlock new levels of productivity and innovation in your natural language processing projects.

See the article here:
Cohere Command R and R+ are now available in Amazon SageMaker JumpStart | Amazon Web Services - AWS Blog

Read More..

Machine learning comes to Chrome’s address bar on Windows, Mac, and ChromeOS – Android Police

Summary

Google's foray into AI, exemplified by its popular AI chatbot, Gemini, is extending to more of its services and apps. Notably, the Chrome browser is poised to showcase its potential in incorporating AI features. In a recent update, we shared the news of a potential Gemini integration in Google Chrome for desktop to enhance the address bar. Now, the Chrome address bar, also known as the Omnibox, is set to elevate its intelligence by integrating cutting-edge machine-learning models.

As noted in the Chromium Blog, starting with Chrome version M124, Google is integrating machine learning models into Chrome's address bar to provide users with more accurate and relevant web page suggestions. These machine-learning models will also help increase the relevance of search suggestions.

Chrome software engineer Justin Donnelly sheds light on the challenges faced by the engineering team in enhancing Omnibox. The hand-crafted formulas it used previously were not well-suited to modern scenarios, leading to the scoring system remaining unchanged for a significant period. Moreover, altering a feature used billions of times daily posed another significant challenge for the team.

Donnelly added that the machine learning models used to train the Chrome Omnibox could identify some interesting patterns. For example, when users select a URL and then immediately return to Omnibox to search for another URL, the ML system decreases the relevance score of that URL. In future attempts, the ML system won't prioritize the URL with a lower score in that context.

According to Chrome software engineers, integrating machine learning models into the Omnibox holds immense potential for enhancing the user experience. These models could potentially adapt to the time of day, offering users more relevant results. Donnelly also revealed that the Chrome engineering team is exploring training specialized versions of the model for mobile, enterprise, or academic environments, further enhancing the user experience on various platforms.

The feature will be available on Google Chrome for Windows, Mac, and ChromeOS. Meanwhile, a similar feature is more likely to be added to the Android version of Chrome soon for a unified experience.

The rest is here:
Machine learning comes to Chrome's address bar on Windows, Mac, and ChromeOS - Android Police

Read More..

What is the Grad CAM method? – DataScientest

Grad-CAM consists in identifying which parts of an image led a convolutional neural network to its final decision. The method produces heat maps representing the activation classes in the images received as input. Each activation class is associated with a specific output class.

These maps indicate the importance of each pixel with respect to the class in question by increasing or decreasing the intensity of the pixel.

For example, if an image is fed to a convolutional network trained to classify dogs and cats, the Grad-CAM visualization can generate a heatmap for the cat class, indicating the extent to which the different parts of the image correspond to a cat, and also a heatmap for the dog class, indicating the extent to which the parts of the image correspond to a dog.


The class activation map assigns an importance to each position (x, y) in the last convolutional layer by computing the linear combination of the activations at that position, weighted by the corresponding output weights for the observed class (an Australian terrier, in the original example). The resulting class activation map is then resampled to the size of the input image and visualized as a heatmap overlaid on it.
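A minimal Grad-CAM sketch in PyTorch follows; it is an illustrative, generic implementation, not DataScientest's code. In the gradient-based variant, the channel weights come from the averaged gradients of the class score rather than the classifier's output weights, and the model, layer, and input here are placeholders.

```python
# Minimal Grad-CAM sketch (illustrative; model, layer, and input are placeholders).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()   # in practice, load pretrained weights

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0]

# Hook the last convolutional block, whose feature maps Grad-CAM weights.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

image = torch.randn(1, 3, 224, 224)            # stand-in for a preprocessed image
scores = model(image)
class_idx = scores.argmax(dim=1).item()        # e.g., the "cat" or "dog" class
scores[0, class_idx].backward()                # gradients of the chosen class score

# Weight each channel by its average gradient, sum channels, keep positive evidence.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))

# Resample the map to the input resolution so it can be overlaid as a heatmap.
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```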

See more here:
What is the Grad CAM method? - DataScientest

Read More..

What’s the future of AI? | McKinsey – McKinsey


We're in the midst of a revolution. Just as steam power, mechanized engines, and coal supply chains transformed the world in the 18th century, AI technology is currently changing the face of work, our economies, and society as we know it. We don't know exactly what the future will look like. But we do know that these seven technologies will play a big role.


Artificial intelligence is a machine's ability to perform some cognitive functions we usually associate with human minds.

Generative artificial intelligence (AI) describes algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos. Recent breakthroughs in the field have the potential to drastically change the way we approach content creation.

Artificial general intelligence (AGI) is a theoretical AI system with capabilities that rival those of a human. Many researchers believe we are still decades, if not centuries, away from achieving AGI.

Deep learning is a type of machine learning that is more capable, autonomous, and accurate than traditional machine learning.

Prompt engineering is the practice of designing inputs for AI tools that will produce optimal outputs.

Machine learning is a form of artificial intelligence that is able to learn without explicit programming by a human.

Tokenization is the process of creating a digital representation of a real thing. Tokenization can be used to protect sensitive data or to efficiently process large amounts of data.

Read the original post:
What's the future of AI? | McKinsey - McKinsey

Read More..