Category Archives: Machine Learning

Conversational AI vs Generative AI: Which is Best for CX? – CX Today

Conversational AI vs. Generative AI: Which solution will turbocharge your contact center's performance and help you achieve your CX goals? Worldwide, the evolution of artificial intelligence has unlocked new waves of productivity for business leaders and teams.

While the impact of advanced AI algorithms can be felt everywhere, it's particularly prominent in the contact center. In the last year alone, we've lost count of the number of contact center, CRM, and CX software vendors introducing new AI capabilities for customer service teams.

Though ChatGPT, Microsoft Copilot, and even solutions like NICE's Enlighten AI suite are driving focus to the rise of generative AI, it's not the only intelligent tech making waves. Conversational AI is also emerging as a critical part of contact center success.

The question is, which of these two solutions do you need, and do you need to choose between one or the other? Here's your guide to conversational AI and generative AI in the contact center.

Conversational AI is a type of artificial intelligence that allows computer programs (bots) to simulate human conversations. It combines various AI techniques to ensure people can interact with computer systems just like talking to another human being.

Examples of conversational AI are everywhere. Smart assistants like Alexa and Siri use conversational AI to interact with users. Many of the chatbots installed on company websites leverage the same technology.

So, how does it all work?

While the nature of each conversational AI solution can vary depending on your chosen vendor, most tools feature the same central components:

After processing input, conversational AI tools can generate responses based on their data. Some more advanced solutions can even enhance their responses by using additional forms of analysis, such as sentiment analysis.

Conversational AI has become the backbone of many advances in the customer experience and contact center landscapes. It forms part of the tech behind conversational intelligence tools, such as those offered by CallMiner, Calabrio, and Talkdesk.

It's also a common component in the chatbots and virtual assistants customers interact with through text and speech, for self-service interactions.

The most common examples of conversational AI in customer service include:

Older chatbots were primarily rule-based solutions that used scripts to answer customer questions. Advanced chatbots, powered by conversational AI, use natural language processing to recognize speech, imitate human interaction, and respond to more complex inputs.

They can also operate across multiple channels, accompanying your contact center IVR system, chat apps, social media service strategies, and more. Plus, they can learn from interactions over time, becoming more effective and advanced.

Modern IVR systems also leverage conversational AI. Instead of giving customers a list of limited options to choose from, they can listen to what customers say, recognize their intent, and route them to the best agent or department.

With NLP, IVR systems can provide more accurate responses and even draw insights from company databases and CRMs to personalize interactions. They can also be configured to route conversations based on various factors, such as customer sentiment or agent skill level.

As mentioned above, conversational AI tools are a common component of conversational intelligence. Because they can process language and analyze interactions, they can offer companies insight into customer sentiment, track customer service trends, and highlight growth opportunities.

Some solutions can also automatically transcribe and translate calls, which can be ideal for enhancing compliance, as well as training initiatives.

When analyzing conversational AI vs. generative AI, it's worth noting that both solutions have strengths and limitations. Conversational AI, for instance, can empower teams to deliver fantastic service across multiple channels 24/7. It can also help personalize interactions.

By analyzing previous discussions and real-time sentiment or intent, conversational AI can help ensure every customer gets a bespoke experience with your contact center.

Beyond that, conversational AI can:

However, conversational AI also has limitations. Although conversational AI tools are more advanced than traditional chatbots, they can still struggle with complex linguistic nuances and requests. They don't always understand customer accents or things like humor and sarcasm.

Plus, since they're reliant on collecting and processing customer data, there's always a risk to the privacy and security of your contact center. Business leaders need to ensure they have the right security strategies in place to protect sensitive data.

Generative AI is a form of artificial intelligence that can generate new, original content, such as text and images, based on basic prompts. It uses deep learning and neural networks to produce highly creative answers to queries and requests.

Like conversational AI, generative AI is becoming a more common component of the contact center. CCaaS vendors offer companies access to generative AI-powered bots that can provide real-time coaching and assistance to agents or enhance the customer service experience.

Most of these solutions build on the foundations of conversational AI, enhancing bot performance with access to large language models (LLMs).

Alongside leveraging NLP technologies, most generative AI solutions rely on:

Since generative AI tools share many of the same features as conversational AI solutions, they can also address many of the same use cases. We're already seeing an increase in companies using generative AI to create intuitive chatbots and virtual assistants.

However, there are also additional opportunities for generative AI in the contact center, such as:

Generative AI excels at producing original content. It can help contact centers create knowledge bases, drawing on existing data in their ecosystem to design comprehensive guides. Generative AI bots can then surface this information to contact center agents in real-time and offer recommendations to guide them through a conversation.

They can even help organizations create more comprehensive training resources and onboarding tools for new contact center agents, boosting team performance.

Like conversational AI, generative AI tools can have a huge impact on customer service. They can understand the input shared by customers in real time and use their knowledge and data to help agents deliver more personalized, intuitive experiences.

Generative AI solutions can automatically create responses to questions on behalf of an agent and recognize keywords spoken in a conversation to surface relevant information. They can even draw insights from multiple different environments to help answer more complex queries.

One major use case for generative AI in the contact center is the ability to automate repetitive tasks, improving workplace efficiency. Generative AI bots can transcribe and translate conversations like their conversational alternatives and even summarize discussions.

They can pinpoint key action items and discussion trends, automatically classify and triage customer service tickets, and improve the routing process.

Like conversational AI, generative AI has both its pros and cons to consider. It can significantly enhance team productivity and creativity and guide agents through the process of delivering exceptional customer service. It can also help improve team efficiency by automating repetitive tasks like call summarization.

Plus, generative AI solutions can:

However, there are risks to generative AI, too. Like most forms of AI, generative AI relies on access to large volumes of data, which needs to be protected for compliance purposes. It can cause issues with data governance, particularly when teams have limited transparency into how an LLM works.

Plus, since generative AI creates unique original content, it's subject to AI hallucinations, which means not all of the answers it gives will be correct.

Conversational AI and generative AI have a lot of overlapping capabilities and features. They both make it easier for human beings to interact intuitively with machines, and they can both understand natural input. However, there are some major differences:

So, conversational AI vs generative AI: which do you actually need?

Though conversational AI and generative AI have different strengths, they can both work in tandem to improve customer experience. Tools like Microsoft Copilot for Sales are considered generative AI models, but they actually use conversational AI, too.

There are various ways contact centers can connect generative AI and conversational AI. For instance, conversational AI bots can generate better answers to customer questions by calling on the insights of back-end generative models.

Smart conversational assistants can analyze inbound ticket information and assign issues to specialized generative models to help with customer service. Conversational bots can even draw insights from FAQs and knowledge bases created by generative AI during discussions.

Ultimately, weaving conversational and generative AI together amplifies the strengths of both solutions. While conversational AI bots can handle high-volume routine interactions in contact centers, solutions powered by generative algorithms can address more complex queries and offer additional support to agents.

The chances are, as both of these technologies continue to mature, we'll see CCaaS and contact center leaders introducing more tools that allow users to design their own systems that use the best of both models, such as Five9's generative AI studio.

Link:
Conversational AI vs Generative AI: Which is Best for CX? - CX Today

AWS Inferentia and AWS Trainium deliver lowest cost to deploy Llama 3 models in Amazon SageMaker JumpStart … – AWS Blog

Today, we're excited to announce the availability of Meta Llama 3 inference on AWS Trainium and AWS Inferentia based instances in Amazon SageMaker JumpStart. The Meta Llama 3 models are a collection of pre-trained and fine-tuned generative text models. Amazon Elastic Compute Cloud (Amazon EC2) Trn1 and Inf2 instances, powered by AWS Trainium and AWS Inferentia2, provide the most cost-effective way to deploy Llama 3 models on AWS. They offer up to 50% lower cost to deploy than comparable Amazon EC2 instances. They not only reduce the time and expense involved in training and deploying large language models (LLMs), but also provide developers with easier access to high-performance accelerators to meet the scalability and efficiency needs of real-time applications, such as chatbots and AI assistants.

In this post, we demonstrate how easy it is to deploy Llama 3 on AWS Trainium and AWS Inferentia based instances in SageMaker JumpStart.

SageMaker JumpStart provides access to publicly available and proprietary foundation models (FMs). Foundation models are onboarded and maintained from third-party and proprietary providers. As such, they are released under different licenses as designated by the model source. Be sure to review the license for any FM that you use. You are responsible for reviewing and complying with applicable license terms and making sure they are acceptable for your use case before downloading or using the content.

You can access the Meta Llama 3 FMs through SageMaker JumpStart on the Amazon SageMaker Studio console and the SageMaker Python SDK. In this section, we go over how to discover the models in SageMaker Studio.

SageMaker Studio is an integrated development environment (IDE) that provides a single web-based visual interface where you can access purpose-built tools to perform all machine learning (ML) development steps, from preparing data to building, training, and deploying your ML models. For more details on how to get started and set up SageMaker Studio, refer to Get Started with SageMaker Studio.

On the SageMaker Studio console, you can access SageMaker JumpStart by choosing JumpStart in the navigation pane. If you're using SageMaker Studio Classic, refer to Open and use JumpStart in Studio Classic to navigate to the SageMaker JumpStart models.

From the SageMaker JumpStart landing page, you can search for Meta in the search box.

Choose the Meta model card to list all the models from Meta on SageMaker JumpStart.

You can also find relevant model variants by searching for "neuron." If you don't see Meta Llama 3 models, update your SageMaker Studio version by shutting down and restarting SageMaker Studio.

You can choose the model card to view details about the model, such as the license, data used to train, and how to use it. You can also find two buttons, Deploy and Preview notebooks, which help you deploy the model.

When you choose Deploy, the page shown in the following screenshot appears. The top section of the page shows the end-user license agreement (EULA) and acceptable use policy for you to acknowledge.

After you acknowledge the policies, provide your endpoint settings and choose Deploy to deploy the endpoint of the model.

Alternatively, you can deploy through the example notebook by choosing Open Notebook. The example notebook provides end-to-end guidance on how to deploy the model for inference and clean up resources.

In SageMaker JumpStart, we have pre-compiled the Meta Llama 3 model for a variety of configurations to avoid runtime compilation during deployment and fine-tuning. The Neuron Compiler FAQ has more details about the compilation process.

There are two ways to deploy Meta Llama 3 on AWS Inferentia and Trainium based instances using the SageMaker JumpStart SDK. You can deploy the model with two lines of code for simplicity, or customize the deployment configuration for more control. The following code snippet shows the simpler mode of deployment:
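The snippet itself did not survive this archive; below is a minimal sketch of the two-line pattern using the SageMaker Python SDK. The Neuron model ID is an assumption based on JumpStart's naming for Neuron variants, so check the model card for the exact value.

```python
from sagemaker.jumpstart.model import JumpStartModel

# Assumed model ID for the Llama 3 8B Neuron variant; verify it in JumpStart.
model = JumpStartModel(model_id="meta-textgenerationneuron-llama-3-8b")
predictor = model.deploy(accept_eula=True)  # accept_eula=True acknowledges Meta's EULA
```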

To perform inference on these models, you need to specify the argument accept_eula as True as part of the model.deploy() call. This means you have read and accepted the EULA of the model. The EULA can be found in the model card description or from https://ai.meta.com/resources/models-and-libraries/llama-downloads/.

The default instance type for Meta Llama-3-8B is ml.inf2.24xlarge. The other supported model IDs for deployment are the following:

SageMaker JumpStart has pre-selected configurations that can help get you started, which are listed in the following table. For more information about optimizing these configurations further, refer to advanced deployment configurations.

[The configuration table did not survive this archive; it listed serving properties such as OPTION_N_POSITIONS for each supported configuration.]

The following code shows how you can customize deployment configurations such as sequence length, tensor parallel degree, and maximum rolling batch size:
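The original snippet is missing here; the sketch below shows the general shape of such a configuration, assuming the serving properties used by Neuron-based large model inference containers (the property names and values are illustrative, not authoritative).

```python
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(
    model_id="meta-textgenerationneuron-llama-3-8b",  # assumed model ID, as above
    instance_type="ml.inf2.24xlarge",
    env={
        "OPTION_N_POSITIONS": "4096",           # maximum sequence length
        "OPTION_TENSOR_PARALLEL_DEGREE": "12",  # degree of sharding across NeuronCores
        "OPTION_MAX_ROLLING_BATCH_SIZE": "8",   # maximum rolling batch size
    },
)
predictor = model.deploy(accept_eula=True)
```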

Now that you have deployed the Meta Llama 3 neuron model, you can run inference from it by invoking the endpoint:
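The invocation code was not preserved; here is a minimal sketch, assuming the standard JumpStart text-generation payload format:

```python
payload = {
    "inputs": "What is machine learning?",
    "parameters": {"max_new_tokens": 128, "top_p": 0.9, "temperature": 0.6},
}
response = predictor.predict(payload)  # invokes the deployed endpoint
print(response)
```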

For more information on the parameters in the payload, refer to Detailed parameters.

Refer to Fine-tune and deploy Llama 2 models cost-effectively in Amazon SageMaker JumpStart with AWS Inferentia and AWS Trainium for details on how to pass the parameters to control text generation.

After you have completed your training job and don't want to use the existing resources anymore, you can delete the resources using the following code:
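The cleanup snippet was not preserved; deleting the model and endpoint through the predictor object is the usual pattern:

```python
predictor.delete_model()     # remove the SageMaker model
predictor.delete_endpoint()  # tear down the endpoint to stop incurring charges
```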

The deployment of Meta Llama 3 models on AWS Inferentia and AWS Trainium using SageMaker JumpStart demonstrates the lowest cost for deploying large-scale generative AI models like Llama 3 on AWS. These models, including variants like Meta-Llama-3-8B, Meta-Llama-3-8B-Instruct, Meta-Llama-3-70B, and Meta-Llama-3-70B-Instruct, use AWS Neuron for inference on AWS Trainium and Inferentia. AWS Trainium and Inferentia offer up to 50% lower cost to deploy than comparable EC2 instances.

In this post, we demonstrated how to deploy Meta Llama 3 models on AWS Trainium and AWS Inferentia using SageMaker JumpStart. The ability to deploy these models through the SageMaker JumpStart console and Python SDK offers flexibility and ease of use. We are excited to see how you use these models to build interesting generative AI applications.

To start using SageMaker JumpStart, refer to Getting started with Amazon SageMaker JumpStart. For more examples of deploying models on AWS Trainium and AWS Inferentia, see the GitHub repo. For more information on deploying Meta Llama 3 models on GPU-based instances, see Meta Llama 3 models are now available in Amazon SageMaker JumpStart.

About the authors: Xin Huang is a Senior Applied Scientist. Rachna Chadha is a Principal Solutions Architect, AI/ML. Qing Lan is a Senior SDE, ML System. Pinak Panigrahi is a Senior Solutions Architect, Annapurna ML. Christopher Whitten is a Software Development Engineer. Kamran Khan is Head of BD/GTM, Annapurna ML. Ashish Khetan is a Senior Applied Scientist. Pradeep Cruz is a Senior SDM.

See more here:
AWS Inferentia and AWS Trainium deliver lowest cost to deploy Llama 3 models in Amazon SageMaker JumpStart ... - AWS Blog

Revolutionize Customer Satisfaction with tailored reward models for your business on Amazon SageMaker | Amazon … – AWS Blog

As more powerful large language models (LLMs) are used to perform a variety of tasks with greater accuracy, the number of applications and services that are being built with generative artificial intelligence (AI) is also growing. With great power comes responsibility, and organizations want to make sure that these LLMs produce responses that align with their organizational values and provide the same unique experience they always intended for their end-customers.

Evaluating AI-generated responses presents challenges. This post discusses techniques to align them with company values and build a custom reward model using Amazon SageMaker. By doing so, you can provide customized customer experiences that uniquely reflect your organization's brand identity and ethos.

Out-of-the-box LLMs provide high accuracy, but often lack customization for an organization's specific needs and end-users. Human feedback varies in subjectivity across organizations and customer segments. Collecting diverse, subjective human feedback to refine LLMs is time-consuming and unscalable.

This post showcases a reward modeling technique to efficiently customize LLMs for an organization by programmatically defining reward functions that capture preferences for model behavior. We demonstrate an approach to deliver LLM results tailored to an organization without intensive, continual human judgment. The techniques aim to overcome customization and scalability challenges by encoding an organization's subjective quality standards into a reward model that guides the LLM to generate preferable outputs.

Not all human feedback is the same. We can categorize human feedback into two types: objective and subjective.

Any human being asked to judge the color of a plain white box and a plain black box would confirm that one is white and the other is black. This is objective, and it does not change.

Determining whether an AI model's output is great is inherently subjective. Consider a color spectrum shading gradually from white to black: if asked to describe the colors near its ends, people would provide varied, subjective responses based on their perceptions. One person's white may be another's gray.

This subjectivity poses a challenge for improving AI through human feedback. Unlike objective right/wrong feedback, subjective preferences are nuanced and personalized. The same output could elicit praise from one person and criticism from another. The key is acknowledging and accounting for the fundamental subjectivity of human preferences in AI training. Rather than seeking elusive objective truths, we must provide models exposure to the colorful diversity of human subjective judgment.

Unlike traditional model tasks such as classification, which can be neatly benchmarked on test datasets, assessing the quality of a sprawling conversational agent is highly subjective. One human's riveting prose is another's aimless drivel. So how should we refine these expansive language models when humans intrinsically disagree on the hallmarks of a good response?

The key is gathering feedback from a diverse crowd. With enough subjective viewpoints, patterns emerge on engaging discourse, logical coherence, and harmless content. Models can then be tuned based on broader human preferences. There is a general perception that reward models are often associated only with Reinforcement Learning from Human Feedback (RLHF). Reward modeling, in fact, goes beyond RLHF, and can be a powerful tool for aligning AI-generated responses with an organizations specific values and brand identity.

You can choose an LLM and have it generate numerous responses to diverse prompts, and then your human labelers will rank those responses. It's important to have diversity in human labelers. Clear labeling guidelines are critical. Without explicit criteria, judgments can become arbitrary. Useful dimensions include coherence, relevance, creativity, factual correctness, logical consistency, and more. Human labelers put these responses into categories and rank them from most favorite to least favorite. For instance, each labeler might rank three candidate responses from the LLM, labeling their most favorite as 1 and their least favorite as 3; compiling each labeler's rankings side by side shows how differently humans can perceive the same set of responses.

By compiling these subjective ratings, patterns emerge on what resonates across readers. The aggregated human feedback essentially trains a separate reward model on writing qualities that appeal to people. This technique of distilling crowd perspectives into an AI reward function is called reward modeling. It provides a method to improve LLM output quality based on diverse subjective viewpoints.

In this post, we detail how to train a reward model based on organization-specific human labeling feedback collected for various prompts tested on the base FM. The following diagram illustrates the solution architecture.

For more details, see the accompanying notebook.

To successfully train a reward model, you need the following:

Complete the following steps to launch SageMaker Studio:

Let's see how to create a reward model locally in a SageMaker Studio notebook environment by using a pre-existing model from the Hugging Face model hub.

When doing reward modeling, getting feedback data from humans can be expensive. This is because reward modeling needs feedback from other human workers instead of only using data collected during regular system use. How well your reward model behaves depends on the quality and amount of feedback from humans.

We recommend using AWS-managed offerings such as Amazon SageMaker Ground Truth. It offers the most comprehensive set of human-in-the-loop capabilities, allowing you to harness the power of human feedback across the machine learning (ML) lifecycle to improve the accuracy and relevancy of models. You can complete a variety of human-in-the-loop tasks with SageMaker Ground Truth, from data generation and annotation to model review, customization, and evaluation, either through a self-service or AWS-managed offering.

For this post, we use the IMDB dataset to train a reward model that provides a higher score for text that humans have labeled as positive, and a lower score for negative text.

We prepare the dataset with the following code:
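The preparation code did not survive this archive. A plausible sketch follows, assuming positive IMDB reviews serve as the "chosen" responses and negative reviews as the "rejected" ones, tokenized with the same base model used later in the post:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
imdb = load_dataset("imdb", split="train")

# Assumption: positive reviews act as "chosen" text, negative as "rejected".
positives = imdb.filter(lambda x: x["label"] == 1)
negatives = imdb.filter(lambda x: x["label"] == 0)

def build_pairs(chosen_texts, rejected_texts, max_length=512):
    """Tokenize paired chosen/rejected texts into reward-model training records."""
    records = []
    for chosen, rejected in zip(chosen_texts, rejected_texts):
        c = tokenizer(chosen, truncation=True, max_length=max_length)
        r = tokenizer(rejected, truncation=True, max_length=max_length)
        records.append({
            "input_ids_chosen": c["input_ids"],
            "attention_mask_chosen": c["attention_mask"],
            "input_ids_rejected": r["input_ids"],
            "attention_mask_rejected": r["attention_mask"],
        })
    return records

train_pairs = build_pairs(positives["text"][:2000], negatives["text"][:2000])
```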

The following example shows a sample record from the prepared dataset, which includes references to rejected and chosen responses. We have also embedded the input ID and attention mask for the chosen and rejected responses.

In this case, we use the OPT-1.3b (Open Pre-trained Transformer Language Model) model in Amazon SageMaker JumpStart from Hugging Face. If you want to do all of the training locally on your notebook instead of distributed training, you need to use an instance with enough accelerator memory. We run the following training on a notebook running on an ml.g4dn.xlarge instance type:
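The training setup code is missing from this archive; loading OPT-1.3b with a single-logit classification head, which then serves as the reward head, would look roughly like this:

```python
import torch
from transformers import AutoModelForSequenceClassification

# num_labels=1 yields a scalar output per sequence, which we treat as the reward.
reward_model = AutoModelForSequenceClassification.from_pretrained(
    "facebook/opt-1.3b", num_labels=1
)
reward_model.to("cuda" if torch.cuda.is_available() else "cpu")
```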

Next, we create a custom trainer that calculates how well the model is performing on the task.

It compares the model's results for two sets of input data: one set that was chosen and another set that was rejected. The trainer then uses these results to figure out how good the model is at distinguishing between the chosen and rejected data, and adjusts the model to improve its performance on the task. The CustomTrainer class creates a specialized trainer that calculates the loss function for a task involving chosen and rejected input sequences. It extends the standard Trainer class provided by the transformers library, allowing for a tailored approach to handling model outputs and loss computation based on the specific requirements of the task. See the following code:
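The class body itself was not preserved; the following sketch implements the standard pairwise reward-modeling loss the paragraph describes (push the chosen sequence's reward above the rejected one's), with field names matching the prepared dataset above:

```python
import torch
from transformers import Trainer

class CustomTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        # Scalar rewards for the chosen and rejected sequences.
        chosen_rewards = model(
            input_ids=inputs["input_ids_chosen"],
            attention_mask=inputs["attention_mask_chosen"],
        ).logits
        rejected_rewards = model(
            input_ids=inputs["input_ids_rejected"],
            attention_mask=inputs["attention_mask_rejected"],
        ).logits
        # Pairwise ranking loss: higher reward for chosen than for rejected.
        loss = -torch.nn.functional.logsigmoid(chosen_rewards - rejected_rewards).mean()
        if return_outputs:
            return loss, {"chosen": chosen_rewards, "rejected": rejected_rewards}
        return loss
```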

The TrainingArguments in the provided code snippet are used to configure various aspects of the training process for an ML model. Let's break down the purpose of each parameter, and how they can influence the training outcome:
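The per-parameter breakdown and the original values were not preserved in this archive; the illustrative configuration below annotates the kinds of parameters the paragraph refers to (all values are assumptions, not the post's originals):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="opt-reward-model",   # where checkpoints and logs are written
    num_train_epochs=1,              # passes over the preference pairs
    per_device_train_batch_size=2,   # small batches to fit accelerator memory
    gradient_accumulation_steps=8,   # larger effective batch without extra memory
    learning_rate=1e-5,              # conservative rate for stable fine-tuning
    fp16=True,                       # mixed precision reduces memory usage
    logging_steps=50,                # how often training metrics are reported
)
```

In practice, you would pass the model, these arguments, the prepared pairs, and a padding-aware data collator to CustomTrainer and call its train() method.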

By configuring these parameters in the TrainingArguments, you can influence various aspects of the training process, such as model performance, convergence speed, memory usage, and overall training outcome based on your specific requirements and constraints.

When you run this code, it trains the reward model based on the numerical representation of subjective feedback you gathered from the human labelers. A trained reward model will give a higher score to LLM responses that humans are more likely to prefer.

You can now feed the response from your LLM to this reward model, and the numerical score produced as output informs you of how well the response from the LLM is aligning to the subjective organization preferences that were embedded on the reward model. The following diagram illustrates this process. You can use this number as the threshold for deciding whether or not the response from the LLM can be shared with the end-user.

For example, let's say we created a reward model to avoid toxic, harmful, or inappropriate content. If a chatbot powered by an LLM produces a response, the reward model can then score the chatbot's responses. Responses with scores above a pre-determined threshold are deemed acceptable to share with users. Scores below the threshold mean the content should be blocked. This lets us automatically filter chatbot content that doesn't meet the standards we want to enforce. To explore more, see the accompanying notebook.

To avoid incurring future charges, delete all the resources that you created. Delete the deployed SageMaker models, if any, and stop the SageMaker Studio notebook you launched for this exercise.

In this post, we showed how to train a reward model that predicts a human preference score from the LLM's response. This is done by generating several outputs for each prompt with the LLM, then asking human annotators to rank or score the responses to each prompt. The reward model is then trained to predict the human preference score from the LLM's response. After the reward model is trained, you can use it to evaluate the LLM's responses against your subjective organizational standards.

As an organization evolves, the reward functions must evolve alongside changing organizational values and user expectations. What defines a great AI output is subjective and constantly changing. Organizations need flexible ML pipelines that continually retrain reward models with updated rewards reflecting the latest priorities and needs. This space is continuously evolving: direct preference-based policy optimization, tool-augmented reward modeling, and example-based control are other popular alternative techniques to align AI systems with human values and goals.

We invite you to take the next step in customizing your AI solutions by engaging with the diverse and subjective perspectives of human feedback. Embrace the power of reward modeling to ensure your AI systems resonate with your brand identity and deliver the exceptional experiences your customers deserve. Start refining your AI models today with Amazon SageMaker and join the vanguard of businesses setting new standards in personalized customer interactions. If you have any questions or feedback, please leave them in the comments section.

Dinesh Kumar Subramani is a Senior Solutions Architect based in Edinburgh, Scotland. He specializes in artificial intelligence and machine learning, and is a member of the technical field community within Amazon. Dinesh works closely with UK Central Government customers to solve their problems using AWS services. Outside of work, Dinesh enjoys spending quality time with his family, playing chess, and exploring a diverse range of music.

Read more from the original source:
Revolutionize Customer Satisfaction with tailored reward models for your business on Amazon SageMaker | Amazon ... - AWS Blog

3 Machine Learning Stocks with the Potential to Make You an Overnight Millionaire – InvestorPlace

Keep an eye on machine learning stocks. Companies are tripping over each other for the technology, which involves feeding data to a machine so it can learn and even make human-like decisions. It could be a $503.4 billion market by 2030. Two years later, it could be worth $771.38 billion, according to Precedence Research.

Even better, we're seeing substantial machine learning demand from just about every major industry. That includes healthcare, finance, retail, entertainment and manufacturing, just as they are adopting the technology to boost revenue, cut costs and automate operations, as noted by Learn.G2.com. Even more impressive, about 48% of global businesses are already using machine learning, with 44% of them seeing lower business costs.

While we can always jump into Nvidia (NASDAQ:NVDA), it's now an $880 stock that's already made quite a few investors very wealthy. In fact, the last time I mentioned NVDA in a machine learning article, it was only a $700 stock. If you missed its run, don't worry. There are plenty of other machine learning stocks with similar potential.

Let's start with Lantern Pharma (NASDAQ:LTRN), a $52.95 million company trading at less than $5.

An artificial intelligence company, it's helping to transform the cost and speed of oncology drug discovery and development with its AI and machine learning platform, RADR. With the help of machine learning, AI and advanced genomics, its platform can scan billions of data points to help identify compounds that could help cancer patients.

Typically, with early-stage discovery and development, the traditional approach can take three to five years. However, with companies like Lantern, the process can be as short as two years.

Most recently, the company announced, "Multiple clinical trials across three AI-guided drug candidates are active with first expected data and readouts for LP-184 (for use across multiple cancer indications) in the second half of 2024; with additional next-generation drug development programs approaching IND studies."

There's also Rekor Systems (NASDAQ:REKR), a $156.57 million company that's leveraging AI and machine learning to identify infrastructure concerns for transportation, public safety and urban mobility. One of its key solutions is Rekor One, an AI-powered roadway intelligence platform.

Most recently, the company announced substantial growth throughout 2023. Gross 2023 revenue of $34.9 million, for example, was 75% better than year-ago numbers. Fourth quarter gross revenue jumped 71% year over year.

Its total contract value jumped 124% year over year to $49.1 million. And the company narrowed its adjusted EBITDA loss from $37.4 million to $28.7 million for 2023.

Even better, as noted by InvestorPlace contributor Josh Enomoto, "For the current fiscal year, experts are calling for revenue of $66.07 million. That's up a staggering 89.1% from last year's tally of $34.93 million. In the following year, sales could jump to $88.74 million, implying a 34.3% gain over projected 2024 revenue."

Or, we can diversify at a lower cost with an exchange-traded fund such as the Invesco AI and Next Gen Software ETF (NYSEARCA:IGPT). With an expense ratio of 0.61%, the ETF holds some of the top AI and machine learning stocks on the market, including Nvidia, Alphabet (NASDAQ:GOOG), Meta Platforms (NASDAQ:META), Adobe (NASDAQ:ADBE), Advanced Micro Devices (NASDAQ:AMD) and Qualcomm (NASDAQ:QCOM), to name a few.

What's nice about this ETF is that we can buy 100 shares of it for just under $4,300, which allows us to diversify across its 98 holdings. Or, we can just buy NVDA, not have the same diversification, and pay about $87,700 for 100 shares.

Since bottoming out at around $30.27 in October, the IGPT ETF hit a high of $47.03 in March. Now back at $42.78, I'd like to see it initially retest its prior high. Even better, the ETF is technically oversold on RSI, MACD, and Williams %R, all of which are pivoting higher.

On Penny Stocks and Low-Volume Stocks: With only the rarest exceptions, InvestorPlace does not publish commentary about companies that have a market cap of less than $100 million or trade less than 100,000 shares each day. That's because these penny stocks are frequently the playground for scam artists and market manipulators. If we ever do publish commentary on a low-volume stock that may be affected by our commentary, we demand that InvestorPlace.com's writers disclose this fact and warn readers of the risks.

Read More: Penny Stocks: How to Profit Without Getting Scammed

On the date of publication, Ian Cooper did not have (either directly or indirectly) any positions in the securities mentioned. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Ian Cooper, a contributor to InvestorPlace.com, has been analyzing stocks and options for web-based advisories since 1999.

Read the original:
3 Machine Learning Stocks with the Potential to Make You an Overnight Millionaire - InvestorPlace

Google upgrades the Chrome Omnibox address bar with machine learning – Android Central

Google is making an under-the-hood change in the latest version of Chrome that is designed to improve the suggested webpage results that appear in the address bar, also known as the Omnibox.

In Chrome 124, these suggestions are now made with the help of machine learning models, which the company says are replacing "hand-built and hand-tuned formulas." Now that the address bar is powered by ML models, results should be more accurate and personalized to each user.

Justin Donnelly, a Chrome engineering lead working on the Omnibox, explains in a blog post that the old scoring system could not be adapted or changed over time. The engineer described it as "inflexible," and due to the lack of flexibility, "the scoring system went largely untouched for a long time." So, when looking at ways to improve the address bar and its suggestions, the Chrome team saw machine learning as the obvious solution.

ML models can often detect trends and insights that get past the human eye, and that was the case with the models powering the Omnibox. One tangible change in address bar behavior due to the switch to ML is a shift in how the "time since navigation" signal is perceived. Previously, the manual formula would give a higher relevance score to URLs that were recently accessed. However, the ML models found that this was not, in fact, what users were looking for.

"It turns out that the training data reflected a pattern where users sometimes navigate to a URL that was not what they really wanted and then immediately return to the Chrome omnibox and try again," Donnelly explains. "In that case, the URL they just navigated to is almost certainlynotwhat they want, so it should receive a low relevance score during this second attempt."

Aside from altering the way results are scored by relevance, Google will use ML models in the address bar to make webpage suggestions "more precise and relevant to you." Presumably, your browsing habits and other data Google collects will be used to tweak the Omnibox's behavior to best suit your needs. In other words, the way that people use the Chrome address bar can be used to retrain ML models that power it over time.

The new address bar is included in Chrome 124 for desktops, though you won't notice any visual differences. In the future, Google wants to add more signals to factor into relevance scores, such as time of day and environment.

Read more here:
Google upgrades the Chrome Omnibox address bar with machine learning - Android Central

Art-focused university using AI in admissions – Inside Higher Ed

In 2019, Kyle O'Connell had a vision of leveraging technology to boost in-person relationships with students at the School of the Art Institute of Chicago. He set out to create a machine learning-enabled system that could help with the admissions process, ultimately meant to direct employees' energy and resources toward students in an earlier and more effective way.

"We deal with the technology, but ultimately we want to bring it back to getting more in-person time with who we can make the most impact on," said O'Connell, director of enrollment analytics and forecasting at the Chicago institution, known as SAIC. "And there's more information we have about students than you're able to assess as an individual person."

He admitted that the machine-learning attempt didn't knock it out of the park on the first try, undermined by data that was not very robust. He worked to adjust the data-gathering process over the next couple of years, and his timeline coincided with an opportunity to work with the Chicago technology consulting firm SPR to use machine-learning models during the application process.

At the start of 2023, SPR asked organizations to send in pitches on how to better the local community, with the winner getting $50,000 in honor of the company's 50th anniversary. The criteria were broad (SPR received pitches on topics as diverse as drones and deforestation), but the firm ultimately chose SAIC because it fit best "with our mission of boosting the local community," according to Steven Devoe, SPR's data specialty director.

Stacks of data from applicants with offers to attend SAIC are entered into the model, which parses more than 100 factors, including the number of SAIC events the applicants attended, the types of programs they are interested in, and where they went to high school. It then spits back two outcomes: the likelihood a student would accept the admissions offer (say, a 50 percent chance they would say yes), and a further yes or no on whether the student would actually end up attending the university. Oftentimes, institutions see summer melt from students who accept an enrollment offer but do not end up attending.

Both O'Connell and Devoe were quick to point out the technology is not being used to dictate which students should and should not be accepted into the institution. Instead, the data illuminate the likelihood of already-accepted students choosing to attend the university.

"There are certainly things you could do with AI that are terrible, but the things a school would do with data is want to know more about directing resources and energy toward people we can help the most," O'Connell said. "It's, 'How can we find them better and earlier?'"

SPR and SAIC began working on the model in the first half of 2023 and began to utilize it in the latter half of the year. The results of using the model are largely unknown, as SAIC is still in its admissions cycle.

While O'Connell said the institution needs to spend the next several months gathering the data before ultimately deciding on more uses, Devoe believes this could ultimately lend itself to budget and time savings.

If, for example, the art school determines that students from a specific country do not have a high likelihood of accepting an offer from SAIC, its officials may spend less on marketing in that country. It also helps with planning for class size and sections, with SAIC officials having a more accurate outlook on which students are likely to end up on campus.

"We created this focused on how to get more students access to higher ed, help the institute plan better and maybe spend dollars more effectively in terms of where it's investing," Devoe said.

He added other higher education institutions have begun reaching out to ask for similar tools or models that could be used for other purposes, such as predicting the likelihood of students dropping out in their first academic term.

This is the first time SAIC is using AI and machine learning in admissions, but many institutions across the nation have turned toward the increasingly pervasive technology.

According to a September survey from the higher education-focused magazine Intelligent, half of universities were using AI in their admissions process. This year, that number is expected to jump to more than 80 percent. Institutions reported most often using AI to review transcripts and recommendation letters. Many of them stated that they used it to review personal essays as well, with some going as far as to conduct preliminary interviews with applicants using AI.

"Application readers have been mechanically doing at least the first screen of applications for decades now, based on some uniform criteria given to them by the institution," Diane Gayeski, a professor of strategic communications at Ithaca College and a higher ed adviser for Intelligent, said in a previous interview with Inside Higher Ed.

"Some of that can easily be done by a machine," she said. "These are all algorithms. Whether a person uses them or a machine does, it doesn't make much difference."

However, SAIC does seem to be the first among art- and design-focused institutions to utilize the technology in admissions. While art students typically have to submit a portfolio in the application process, Devoe stressed the machine-learning technology is not judging the portfolio in any way.

"The art portfolio, it didn't find that interesting," he said, except for clocking which type of program a student would be interested in, such as painting or sculpture.

Many schools of art and design, while harboring some concerns, are leaning into the technology after the launch of ChatGPT late in 2022. "Even the most angry illustration faculty have said, 'I hate it, I wish we could go traditional, but if you're a student today you would be an idiot if you didn't learn this before you go into the world,'" said Rick Dakan, chair of the AI Task Force at the Ringling College of Art and Design in Sarasota, Fla. "It will be part of your career."

SAIC, upon receiving the machine-learning model for free, can utilize it as long as it sees fit. It may upgrade eventually, but for now, O'Connell is content with taking things slowly, in contrast to the normal rhythm of the quick-moving tech world.

"It's, 'Let's not try to do too much; let's start with a single thing we're trying to look at,'" he said. "Which is, can we use the tool along with other reporting and assessments? How does this fit into our workflow? And then, what are its possibilities from there?"

View post:
Art-focused university using AI in admissions - Inside Higher Ed

Random robots are more reliable – EurekAlert

[Video: Researchers tested the new AI algorithm's performance with simulated robots, such as NoodleBot. Credit: Northwestern University]

Northwestern University engineers have developed a new artificial intelligence (AI) algorithm designed specifically for smart robotics. By helping robots rapidly and reliably learn complex skills, the new method could significantly improve the practicality and safety of robots for a range of applications, including self-driving cars, delivery drones, household assistants and automation.

Called Maximum Diffusion Reinforcement Learning (MaxDiff RL), the algorithm's success lies in its ability to encourage robots to explore their environments as randomly as possible in order to gain a diverse set of experiences. This designed randomness improves the quality of data that robots collect regarding their own surroundings. And, by using higher-quality data, simulated robots demonstrated faster and more efficient learning, improving their overall reliability and performance.

When tested against other AI platforms, simulated robots using Northwestern's new algorithm consistently outperformed state-of-the-art models. The new algorithm works so well, in fact, that robots learned new tasks and then successfully performed them within a single attempt, getting it right the first time. This starkly contrasts with current AI models, which enable slower learning through trial and error.

The research will be published on Thursday (May 2) in the journal Nature Machine Intelligence.

"Other AI frameworks can be somewhat unreliable," said Northwestern's Thomas Berrueta, who led the study. "Sometimes they will totally nail a task, but, other times, they will fail completely. With our framework, as long as the robot is capable of solving the task at all, every time you turn on your robot you can expect it to do exactly what it's been asked to do. This makes it easier to interpret robot successes and failures, which is crucial in a world increasingly dependent on AI."

Berrueta is a Presidential Fellow at Northwestern and a Ph.D. candidate in mechanical engineering at the McCormick School of Engineering. Robotics expert Todd Murphey, a professor of mechanical engineering at McCormick and Berrueta's adviser, is the paper's senior author. Berrueta and Murphey co-authored the paper with Allison Pinosky, also a Ph.D. candidate in Murphey's lab.

The disembodied disconnect

To train machine-learning algorithms, researchers and developers use large quantities of big data, which humans carefully filter and curate. AI learns from this training data, using trial and error until it reaches optimal results. While this process works well for disembodied systems, like ChatGPT and Google Gemini (formerly Bard), it does not work for embodied AI systems like robots. Robots, instead, collect data by themselves without the luxury of human curators.

"Traditional algorithms are not compatible with robotics in two distinct ways," Murphey said. "First, disembodied systems can take advantage of a world where physical laws do not apply. Second, individual failures have no consequences. For computer science applications, the only thing that matters is that it succeeds most of the time. In robotics, one failure could be catastrophic."

To solve this disconnect, Berrueta, Murphey and Pinosky aimed to develop a novel algorithm that ensures robots will collect high-quality data on the go. At its core, MaxDiff RL commands robots to move more randomly in order to collect thorough, diverse data about their environments. By learning through self-curated random experiences, robots acquire the necessary skills to accomplish useful tasks.

Getting it right the first time

To test the new algorithm, the researchers compared it against current, state-of-the-art models. Using computer simulations, the researchers asked simulated robots to perform a series of standard tasks. Across the board, robots using MaxDiff RL learned faster than the other models. They also correctly performed tasks much more consistently and reliably than others.

Perhaps even more impressive: Robots using the MaxDiff RL method often succeeded at correctly performing a task in a single attempt. And that's even when they started with no knowledge.

"Our robots were faster and more agile, capable of effectively generalizing what they learned and applying it to new situations," Berrueta said. "For real-world applications where robots can't afford endless time for trial and error, this is a huge benefit."

Because MaxDiff RL is a general algorithm, it can be used for a variety of applications. The researchers hope it addresses foundational issues holding back the field, ultimately paving the way for reliable decision-making in smart robotics.

"This doesn't have to be used only for robotic vehicles that move around," Pinosky said. "It also could be used for stationary robots, such as a robotic arm in a kitchen that learns how to load the dishwasher. As tasks and physical environments become more complicated, the role of embodiment becomes even more crucial to consider during the learning process. This is an important step toward real systems that do more complicated, more interesting tasks."

The study, "Maximum diffusion reinforcement learning," was supported by the U.S. Army Research Office (grant number W911NF-19-1-0233) and the U.S. Office of Naval Research (grant number N00014-21-1-2706).

See more here:
Random robots are more reliable - EurekAlert

Machine learning approaches found to benefit Parkinson’s research – Parkinson’s News Today

Scientists exploring the potential of machine learning approaches in drug discovery for Parkinson's disease and other neurodegenerative disorders, focusing on the misfolded proteins that are the hallmark of such conditions, found that one such method identified compounds two orders of magnitude more potent than those previously reported, per a new study.

Using this method allowed the researchers, from the U.K. and the U.S., to identify compounds that can effectively block the clumping, or aggregation, of alpha-synuclein protein, an underlying cause of Parkinson's, the study reported.

"We anticipate that using machine learning approaches of the type described here could be of considerable benefit to researchers working in the field of protein misfolding diseases [such as Parkinson's], and indeed early-stage drug discovery research in general," the researchers wrote.

Their study, "Discovery of potent inhibitors of α-synuclein aggregation using structure-based iterative learning," was published in the journal Nature Chemical Biology.

Parkinson's disease is marked by the toxic accumulation of misfolded forms of the alpha-synuclein protein within dopamine-producing nerve cells, those responsible for releasing the neurotransmitter dopamine. Dopamine is a signaling molecule that plays a role in controlling movement; Parkinson's results from the progressive loss of these cells.

Despite efforts to identify compounds that stop this toxic accumulation, there are, to date, no disease-modifying treatments available for Parkinsons.

Traditional strategies to identify novel therapies, which involve screening large chemical libraries for potential candidates prior to any testing in humans, are time-consuming, expensive, and often unsuccessful.

In the case of Parkinson's, the development of effective therapies has been hampered by the lack of methods to identify the right molecular targets.

"One route to search for potential treatments for Parkinson's requires the identification of small molecules that can inhibit the aggregation of alpha-synuclein. But this is an extremely time-consuming process; just identifying a lead candidate for further testing can take months or even years," Michele Vendruscolo, a professor at the University of Cambridge and the study's lead author, said in a university press release.

Now, the researchers have developed a method that uses machine learning to quickly screen chemical libraries containing millions of compounds. The goal was to identify small molecules able to block the clumping of alpha-synuclein.

From a list of small molecules predicted to have a good binding to the alpha-synuclein aggregates, the researchers chose a small number of the top-ranking compounds to test experimentally as potent inhibitors of aggregation.

The results from these experimental assays were then fed to the machine learning model, which identified those with the most promising effects. This process was repeated a few times, so that highly potent compounds were identified.

"Instead of screening experimentally, we screen computationally," Vendruscolo said.

"By using the knowledge we gained from the initial screening with our machine learning model, we were able to train the model to identify the specific regions on these small molecules responsible for binding; then we can re-screen and find more potent molecules," Vendruscolo said.

Using this method, the researchers optimized the initial compounds to target pockets on the surfaces of the alpha-synuclein clumps.

In lab tests using brain tissue samples from patients with Lewy body dementia (LBD) and multiple system atrophy (MSA), two forms of atypical parkinsonism, the compounds effectively blocked aggregation of alpha-synuclein.

"Machine learning is having a real impact on the drug discovery process; it's speeding up the whole process of identifying the most promising candidates," Vendruscolo said. "For us this means we can start work on multiple drug discovery programs instead of just one."

According to Vendruscolo, so much is possible due to the massive reduction in both time and cost that "it's an exciting time."

Excerpt from:
Machine learning approaches found to benefit Parkinson's research - Parkinson's News Today

Machine Learning and Neural Network Can Be Effective Diagnostic Tools in MDS, Study Finds – AJMC.com Managed Markets Network

Artificial intelligence can improve detection of binucleated erythroblasts (BNEs), a rare and difficult-to-quantify phenomenon that can indicate myelodysplastic syndrome (MDS), according to a new report.

The investigators behind the study say the new method streamlines the use of new technology to make it more feasible to leverage machine learning. The study was published in Scientific Reports.1

The authors explained that MDS is notably heterogeneous, but it can typically be diagnosed based on morphologic bone marrow (BM) dysplasia and persistent cytopenia.

"However, accurate diagnosis of cases in which mild cytopenias and subtle dysplastic changes are present can be difficult, and inter-scorer variability and subjectivity may be present, even among experienced hematopathologists," they wrote.

Some patients are left with indefinite diagnoses, such as idiopathic cytopenia of undetermined significance (ICUS) or clonal cytopenia of undetermined significance (CCUS), they said.

Given the current lack of precision, the investigators said it is important to identify objective, standardized methods of distinguishing MDS from nonclonal reactive causes of cytopenia and dysplasia.

"Moreover, rare events that are indicative of MDS, such as binucleated erythroblasts, while easy to identify using visual microscopy, can be challenging to quantify in large numbers, thus limiting statistical robustness," they said.

One possible solution is imaging flow cytometry (IFC), since it combines the high-throughput data acquisition capacity and statistical robustness of conventional multicolor flow cytometry (MFC) with the high-resolution imaging capabilities of microscopy in a single system.

A previous study by the same group found IFC is effective at analyzing morphometric changes in dyserythropoietic BM cells.2 In that study, the investigators used IFC to analyze samples from 14 patients with MDS, 6 patients with ICUS/CCUS, 6 non-MDS controls, and 11 healthy controls.

The investigators found the IFC model "reliably identified and enumerated true binucleated erythroblasts at a significantly higher frequency in two out of three erythroblast maturation stages in MDS patients compared to normal BM" (both P = .0001).

Still, they said the workflow of the feature-based IFC analysis is challenging and time-consuming, and requires software-specific expertise. That's why, in the new paper, they proposed using a convolutional neural network (CNN) algorithm to analyze the IFC image data. They said the CNN algorithm has better accuracy and more data-interpretation flexibility than feature-based analysis alone. In addition, they used artificial intelligence software designed with a graphical user interface to render results that are meaningful to researchers who do not have advanced coding skills.

To test out the new method, the investigators used the raw data from the earlier study and analyzed it using the new artificial intelligence model in order to compare the model's results to the previous IFC analysis. Each of the samples was also manually examined to validate the presence of BNEs.

The new model had an accuracy of 94.3% and a specificity of 98.2%. The latter means the model rarely misclassified non-BNEs as BNEs. The model's sensitivity was lower, as 21.1% of BNEs in the data set were incorrectly classified as erythroblasts exhibiting irregular nuclear morphology. Overall, though, the investigators said the data suggest a high degree of confidence that when the model identifies a BNE it is correct.

The investigators said it was notable that the model worked as well as it did despite the small data set used to train it. They said incorporating a more robust data set would likely improve the model's performance.

"Emphasis should be placed on augmenting the classes of cells with irregular nuclear morphology and BNEs that posed classification difficulties," they wrote. "Moreover, expanding the range of classification categories to include a category for uncertain cases, in addition to BNEs, doublets, and cells with irregular nuclear morphology, could be beneficial."

For now, though, the investigators said their study shows that AI has the potential to be an effective and efficient diagnostic tool for patients with MDS.

See the original post here:
Machine Learning and Neural Network Can Be Effective Diagnostic Tools in MDS, Study Finds - AJMC.com Managed Markets Network

Generative AI Achieves Superresolution with Minimal Tuning | Research & Technology | May 2024 – Photonics.com

GÖRLITZ, Germany, May 2, 2024 - Diffusion models for artificial intelligence (AI) produce high-quality samples and offer stable training, but their sensitivity to the choice of variance can be a drawback. The variance schedule controls the dynamics of the diffusion process, and typically it must be fine-tuned with a hyperparameter search for each application. This is a time-consuming task that can lead to suboptimal performance.

A new open-source algorithm, from the Center for Advanced Systems Understanding (CASUS) at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Imperial College London, and University College London, improves the quality and resolution of images, including microscopic images, with minimal fine-tuning.

The algorithm, called the Conditional Variational Diffusion Model (CVDM), learns the variance schedule as part of the training process. In experiments, the CVDM's approach to learning the schedule was shown to yield comparable or better results than models that set the schedule as a hyperparameter.

The CVDM can be used to achieve superresolution using an inverse problem approach.

The availability of big data analytics, along with new ways to analyze mathematical and scientific data, allows researchers to use an inverse problem approach to uncover the causes behind specific observations, such as those made in microscopic imaging.

By calculating the parameters that produced the observation, i.e., the image, a researcher can achieve higher-resolution images. However, the path from observation to superresolution is usually not obvious, and the observational data is often noisy, incomplete, or uncertain.

The model is sensitive to the choice of the predefined schedule that controls the diffusion process, including how the noise is added. When too little or too much noise is added, at the wrong place or wrong time, the result can be a failed training. Unproductive runs hinder the effectiveness of diffusion models.

"Diffusion models have long been known as computationally expensive to train... But new developments like our Conditional Variational Diffusion Model allow minimizing unproductive runs, which do not lead to the final model," researcher Artur Yakimovich said. "By lowering the computational effort and hence power consumption, this approach may also make diffusion models more eco-friendly to train."

The researchers tested the CVDM in three applications: superresolution microscopy, quantitative phase imaging, and image superresolution. For superresolution microscopy, the CVDM demonstrated comparable reconstruction quality and enhanced image resolution compared with previous methods. For quantitative phase imaging, it significantly outperformed previous methods. For image superresolution, reconstruction quality was comparable to previous methods. The CVDM also produced good results for a wild clinical microscopy sample, indicating that it could be useful in medical microscopy.

Based on the experimental outcomes, the researchers concluded that fine-tuning the schedule by experimentation should be avoided, because the schedule can be learned during training in a stable way that yields the same or better results.

"Of course, there are several methods out there to increase the meaningfulness of microscopic images, some of them relying on generative AI models," Yakimovich said. "But we believe that our approach has some new, unique properties that will leave an impact in the imaging community, namely high flexibility and speed at a comparable, or even better, quality compared to other diffusion model approaches."

The CVDM supports probabilistic conditioning on data, is computationally less expensive than established diffusion models, and can be easily adapted for a variety of applications.

"In addition, our CVDM provides direct hints where it is not very sure about the reconstruction, a very helpful property that sets the path forward to address these uncertainties in new experiments and simulations," Yakimovich said.

The work will be presented by della Maggiora at the International Conference on Learning Representations (ICLR 2024) on May 8 in poster session 3. ICLR 2024 takes place May 7-11, 2024, at the Messe Wien Exhibition and Congress Center, Vienna.

The research was published in the Proceedings of the Twelfth International Conference on Learning Representations, 2024 (www.arxiv.org/abs/2312.02246).

More:
Generative AI Achieves Superresolution with Minimal Tuning | Research & Technology | May 2024 - Photonics.com