Category Archives: Machine Learning

Distinction Between Data Science, AI, and Machine Learning: Revealed – TimesTech

Modern businesses are aware of the usefulness of integrating Artificial Intelligence (AI), data science, and Machine Learning (ML). However, these concepts are often conflated, so deciding which technology will help and why can be challenging.

In this brief guide, we will reveal the details and characteristics of each technology and explain how they relate to one another and where each is applied.

Artificial Intelligence (AI) is a digital technology designed to imitate human intellect, enabling devices to make decisions and learn the way people do. The primary purpose of AI in modern companies is to create algorithms and software systems that independently complete tasks that would typically demand human intervention. AI excels at analyzing data, recognizing patterns, solving complex challenges, and adapting to dynamic conditions.

AI is categorized into two main types:

Artificial Intelligence is present in many areas of daily life. For example, most smartphones are equipped with AI assistants. AI algorithms are also used in healthcare, logistics, finance, education, and entertainment. Among the technologies powered by artificial intelligence are robotic systems, autopilot cars, content editors, etc.

Machine Learning (ML) is a subfield of AI covering self-learning devices and software systems that don't require constant manual updates to identify new patterns and adjust their workflows accordingly. ML allows machines to independently detect patterns, determine trends, make accurate forecasts, and draw conclusions based on previously acquired information.

Machine learning technology is divided into several subtypes:

Machine learning constitutes the most compact subset of Artificial Intelligence, so any industry using AI or data science can use it to augment its current systems' capabilities. Introducing ML helps devices improve continually, enhancing prediction, fraud prevention, personalized recommendations, and more. By learning from constantly updated data, systems can act more efficiently and adjust to present conditions, achieving the highest possible efficiency.

Data science is a discipline built on rigorous analytical techniques whose key application is deriving concise analytical conclusions from large amounts of information. It involves the mandatory procedures of data selection, preparation, structuring, and analysis. Data science allows you to extract concise analytics from huge data sets and use that information to your advantage.

Here are several examples of data usage in various business domains:

These examples are just a few of the ways data science transforms familiar tasks by analyzing massive amounts of data and surfacing valuable insights. This technology can handle any task that involves deep analysis of large amounts of data. So, if you are wondering how to make an app like Spotify or Netflix that will provide personal recommendations, the answer is to implement AI, ML, and Data Science.

All of the technologies we have listed have a common characteristic: they are all data-driven. Moreover, they are more connected than they seem: artificial intelligence includes data science, which in turn includes machine learning.

The key differences between the three concepts are in their intended purpose. So, the goal of artificial intelligence is to create smart digital systems and devices. Data science aims to process and analyze large amounts of information to help AI systems accomplish a given task. Machine learning is designed to train both of these systems, improving their performance and ensuring reliability and safety.

Thus, all three technologies are important drivers of digital progress for modern businesses, making them indispensable enablers for a multitude of industries.

AI is all about making smart devices, while machine learning is a special part of AI that teaches them how to learn. On the other hand, data science is about getting valuable insights from data, which is important for both AI and machine learning. All of these fields have the potential to change industries and have a big impact on our everyday lives and careers.

Follow this link:
Distinction Between Data Science, AI, and Machine Learning: Revealed - TimesTech

Machine Learning Approach Effectively Predicts Remission in RA Following TNF Treatment – MD Magazine

Koshiro Sonomoto, PhD (Credit: Lupus KCR 2023)

A low-cost predictive machine learning model successfully predicted achievement of remission based on Clinical Disease Activity Index (CDAI) measures among a cohort of patients with rheumatoid arthritis (RA) after 6 months of treatment with tumor necrosis factor (TNF) inhibitors, according to research published in Rheumatology and Therapy.1

As investigators only used CDAI measures at baseline and at month 6, they believe this method demonstrates the feasibility of using regional cohorts to create low-cost models tailored to specific institutions or geographical regions.

Currently, treatment guidelines suggest initiating therapy with methotrexate, followed by either biologic disease-modifying antirheumatic drugs (bDMARDs) or Janus kinase (JAK) inhibitors if methotrexate is ineffective. However, recent studies have favored bDMARDs over JAKs and, therefore, treatment with a TNF inhibitor is becoming an increasingly popular option for this patient population. Despite these encouraging findings, only about 70% of patients initiating a TNF inhibitor show a favorable response.2

"Numerous efforts have been reported to predict the efficacy of TNF in advance," wrote a team of investigators led by Koshiro Sonomoto, PhD, of the University of Occupational and Environmental Health in Japan. However, they noted, access to these advanced technologies may be limited to certain countries and advanced facilities due to cost, labor requirements, and the need for process standardization. Conversely, predictive models based on routine clinical data are more accessible.

Patients with RA beginning treatment with a TNF as the first targeted synthetic (ts)/bDMARD after inadequate response to methotrexate were included in the analysis. Data were collected from the FIRST registry between August 2003 and October 2022. The analysis of baseline characteristics and 6-month CDAI used a variety of machine learning approaches, such as lasso logistic regression (Lasso), logistic regression with stepwise variable selection, support vector machine, and decision tree, along with 48 factors available in routine clinical practice for the prediction model.
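The study's own code is not reproduced here, but as a rough illustration of the lasso approach, a minimal scikit-learn sketch on synthetic data might look like the following (the feature matrix, labels, and regularization strength are placeholders, not the FIRST registry data):

```python
# Illustrative sketch only -- not the study's code or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1600, 48))                      # stand-in for 48 routine clinical factors
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=1600) > 0).astype(int)  # synthetic remission label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_train)

# The L1 ("lasso") penalty shrinks uninformative coefficients to zero,
# effectively selecting a subset of the candidate predictors.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(scaler.transform(X_train), y_train)

probs = lasso.predict_proba(scaler.transform(X_test))[:, 1]
print("AUC:", round(roc_auc_score(y_test, probs), 3))
```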

A total of 4706 patients in the FIRST registry initiated treatment with b/tsDMARDs during the study period, of which 2223 were receiving methotrexate. Of these patients, 1630 received a TNF and 79 received a JAK. The average age of patients was 59.2 years, they had an average body mass index of 22.1, a mean disease duration of 75.7 months, and all were Asian. The mean dose of methotrexate was 11.3 mg/week and the mean CDAI score was 26.1.

The models exclusively relied on patient-reported outcomes and quantitative parameters, as opposed to subjective physician input.

Of the approaches, Lasso demonstrated advantages in predicting CDAI remission, with a specificity of 69.9%, sensitivity of 61.7%, and a mean area under the curve of 0.704. Patients who were predicted to respond to TNF achieved CDAI remission at an average rate of 53.2%, compared with only 26.4% of predicted non-responders.

These results could also help to identify an alternative b/tsDMARDs class for patients who were predicted to be TNF non-responders.

Investigators mentioned limitations, including the decreasing accuracy of the Lasso-generated remission predictive model among a calendar cohort. In this group, which was split at a 9:1 ratio with a cutoff of October 2019, an increase in censoring was reported in the validation cohort. They suggest this could be due to COVID-19 complications.

"This approach holds the potential to improve rheumatoid arthritis management by reducing the need for trial-and-error approaches and facilitating more personalized and effective treatment strategies," investigators concluded. While further validation is necessary, the study also suggests that creating cost-effective models tailored to specific regions or institutions is possible.

References

See the article here:
Machine Learning Approach Effectively Predicts Remission in RA Following TNF Treatment - MD Magazine

Learning Before Legislating in Texas’ AI Advisory Council – dallasinnovates.com

From controlling home environments with commands like "Siri, turn on the living room lights" to managing fraud and risk in financial institutions, artificial intelligence is integral to many products and services we use daily.

And the news cycle reminds us frequently that this is just the beginning: the full promise and peril of AI still lies before us. This is not just technology that will allow us to do the same things in a new way; it has the potential to make us extra human, smarter and faster versions of ourselves.

"Every aspect of civilization will be impacted, I believe, by AI, and therefore I wanted to study it thoughtfully and thoroughly before jumping into legislation," said Senator Tan Parker.

The Artificial Intelligence Advisory Council was established through House Bill 2060 during the 88th legislative session. Composed of founding members and Co-Chairs Senator Parker and Representative Gio Capriglione, along with five other public members, the council intends to increase the study, use, and public awareness of AI. At the heart of any successful endeavor lies collaboration. The Texas AI Council will serve as a nucleus for fostering collaboration among key stakeholders, including government agencies, industry leaders, academic institutions, and research centers.

"There are very real and concerning downsides that have to be managed when it comes to AI, and as a result of that, while I am always a free-market, free-enterprise guy trying to minimize regulation, some regulation will be necessary," said Senator Parker.

That's why he and the AI advisory council are taking a thoughtful approach. Through public hearings and agency testimony, they will create recommendations for legislation, which they plan to issue by December 2024.

"Communication and knowledge are the cornerstones of progress, and our council will serve as the catalyst, uniting minds from all sectors to produce thoughtful policy concerning AI advancement and technology," according to Senator Parker.

The group's first working meeting was at the end of March, when it heard from four state agencies, including the Texas Department of Information Resources (DIR) and the Texas Department of Transportation (TxDOT).

"I was quite pleased, actually, with the progress and the thoughtfulness of the agencies in terms of how they're approaching AI," Senator Parker noted.

For example, TxDOT is using AI to cut down accident response time, process payments, manage traffic, and evaluate aging infrastructure.

The Texas Workforce Commission also testified about its chatbot, named Larry, which is used to screen calls and efficiently connect them with the best department. Parker doesn't envision this ever becoming an all-bot operation, saying the people of Texas are best served by man and machine working together.

"We must maintain a human touch and a human presence with regard to the workforce commission, as you have people that are struggling for work and trying to find new careers and so forth," Senator Parker said.

The council will continue hearing from agencies and the public through the summer; that information will help inform the group's recommendations. Parker is confident in this approach. He strongly believes in the states, particularly Texas, leading the nation on critical issues.

He pointed to Jenna's Law. Passed in 2009 and amended in 2017, the legislation mandates training for K-12 educators on recognizing and reporting child abuse. After it passed, a study found educators reported suspected abuse almost four times more often than before the training. Now, Senator Cornyn is moving that law through the U.S. Congress. Parker hopes to see it become federal law by year's end and believes the Lone Star State can again lead the nation on AI legislation.

Texas has long been a beacon of innovation and growth in many areas, and AI creates an unprecedented opportunity to further bolster the state's reputation as a leader in groundbreaking research and development while increasing the benefits to Texans in their everyday lives. The council aims to support cutting-edge research initiatives and breakthroughs in AI while propelling Texas to the forefront of global innovation and efficiency.

The next AI Advisory Council meeting will be held at the Texas Capitol on May 8th. For more information, including background on council members, overall objectives, and when and where you can participate in public testimony, check out the website.

Voices contributor Nicole Ward is a data journalist for the Dallas Regional Chamber.

Read more:
Learning Before Legislating in Texas' AI Advisory Council - dallasinnovates.com

Cohere Command R and R+ are now available in Amazon SageMaker JumpStart | Amazon Web Services – AWS Blog

This blog post is co-written with Pradeep Prabhakaran from Cohere.

Today, we are excited to announce that Cohere Command R and R+ foundation models are available through Amazon SageMaker JumpStart to deploy and run inference. Command R/R+ are state-of-the-art retrieval-augmented generation (RAG)-optimized models designed to tackle enterprise-grade workloads.

In this post, we walk through how to discover and deploy Cohere Command R/R+ via SageMaker JumpStart.

Cohere Command R is a family of highly scalable language models that balance high performance with strong accuracy. The Command R family includes the Command R and Command R+ models, which are optimized for RAG-based workflows such as conversational interaction and long-context tasks, enabling companies to move beyond proof of concept and into production. These powerful models are designed to handle complex tasks with high performance and strong accuracy, making them suitable for real-world applications.

Command R boasts high precision on RAG and tool use tasks, low latency and high throughput, a long 128,000-token context length, and strong capabilities across 10 key languages: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, and Chinese.

Command R+ is the newest model, optimized for highly performant conversational interaction and long-context tasks. It is recommended for workflows that lean on complex RAG functionality and multi-step tool use (agents), while Command R is well suited for simpler RAG and single-step tool use tasks, as well as applications where price is a major consideration.

With SageMaker JumpStart, you can choose from a broad selection of publicly available foundation models. ML practitioners can deploy foundation models to dedicated SageMaker instances from a network-isolated environment and customize models using SageMaker for model training and deployment. You can now discover and deploy Cohere Command R/R+ models with a few clicks in Amazon SageMaker Studio or programmatically through the SageMaker Python SDK. Doing so enables you to derive model performance and machine learning operations (MLOps) controls with SageMaker features such as SageMaker Pipelines, SageMaker Debugger, or container logs.

The model is deployed in an AWS secure environment and under your virtual private cloud (VPC) controls, helping provide data security. Cohere Command R/R+ models are available today for deployment and inferencing in Amazon SageMaker Studio in us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-1 (N. California), us-west-2 (Oregon), ca-central-1 (Canada Central), eu-central-1 (Frankfurt), eu-west-1 (Ireland), eu-west-2 (London), eu-west-3 (Paris), eu-north-1 (Stockholm), ap-southeast-1 (Singapore), ap-southeast-2 (Sydney), ap-northeast-1 (Tokyo), ap-northeast-2 (Seoul), ap-south-1 (Mumbai), and sa-east-1 (Sao Paulo).

You can access the foundation models through SageMaker JumpStart in the SageMaker Studio UI and the SageMaker Python SDK. In this section, we go over how to discover the models in SageMaker Studio.

From the SageMaker JumpStart landing page, you can easily discover various models by browsing through different hubs, which are named after model providers. The Cohere Command R and R+ models are available in the Cohere hub. If you don't see these models, ensure you have the latest SageMaker Studio version by shutting down and restarting Studio Classic Apps.

To find the Command R and R+ models, search for Command R in the search box located at the top left of the SageMaker JumpStart landing page. Each model can be deployed on Amazon Elastic Compute Cloud (Amazon EC2) P5 instances powered by NVIDIA H100 Tensor Core GPUs (ml.p5.48xlarge) and Amazon EC2 P4de instances powered by NVIDIA A100 Tensor Core GPUs (ml.p4de.24xlarge).

To illustrate model deployment, we'll deploy Cohere Command R+ on NVIDIA H100. Choose the model card to open the corresponding model detail page.

When you choose Deploy, a window appears prompting you to subscribe to the model on AWS Marketplace. Choose Subscribe, which redirects you to the AWS Marketplace listing for Cohere Command R+ (H100). Follow the on-screen instructions to complete the subscription process.

Once subscribed, return to the model detail page and choose Deploy in the window. The deployment process initiates.

Alternatively, you can choose Notebooks on the model card and open the example notebook in JupyterLab. This notebook provides end-to-end guidance on deploying the model for inference and cleaning up resources. You can also find this example notebook in the Cohere SageMaker GitHub repository. To help secure the endpoint, you can configure an AWS Key Management Service (AWS KMS) key for the SageMaker endpoint configuration.

If an endpoint has already been created, you can simply connect to it:
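The connection snippet from the original post is not reproduced above; a minimal sketch in the style of Cohere's public SageMaker examples might look like this, assuming the cohere_aws helper package and a placeholder endpoint name (both may differ in your environment):

```python
# Sketch only: the package, method names, and endpoint name below are assumptions
# based on Cohere's public SageMaker examples and may differ by version.
from cohere_aws import Client

co = Client(region_name="us-east-1")
co.connect_to_endpoint(endpoint_name="cohere-command-r-plus")  # existing SageMaker endpoint
```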

Once connected to the endpoint, you can perform real-time inference using the co.chat endpoint.

Command R/R+ is optimized to perform well in 10 key languages, as listed in the introduction. Additionally, pre-training data have been included for the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.

The model has been trained to respond in the language of the user; given a Spanish prompt, for example, it answers in Spanish.
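The blog's original Spanish example is not reproduced here; a stand-in request through the same co.chat call, with an illustrative prompt and a purely hypothetical reply, might look like this:

```python
# Illustrative prompt; the output shown in comments is hypothetical, not a captured model reply.
response = co.chat(message="Escribe un párrafo corto sobre los beneficios de la energía renovable.")
print(response.text)
# Hypothetical output:
# "La energía renovable reduce las emisiones de carbono, disminuye la dependencia de los
#  combustibles fósiles y puede abaratar los costos de la electricidad a largo plazo."
```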

Command R/R+ can also perform cross-lingual tasks, such as translation or answering questions about content in other languages.

Command R/R+ can ground its generations. This means that it can generate responses based on a list of supplied document snippets, and it includes citations in its response indicating the source of the information.

For example, the code snippet that follows produces an answer to "How deep is the Mariana Trench?" along with inline citations based on the provided online documents.
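The original snippet is not preserved in this excerpt; a hedged stand-in using the documents parameter of the Chat API (the document snippets and field names below are placeholders) could look like this:

```python
# Grounded (RAG) request: the model answers from the supplied snippets and returns citations.
# Document contents here are placeholders, not the blog's original documents.
docs = [
    {"title": "Mariana Trench", "snippet": "The Mariana Trench is the deepest oceanic trench on Earth."},
    {"title": "Challenger Deep", "snippet": "Its deepest point, the Challenger Deep, lies roughly 10,900 meters below sea level."},
]

response = co.chat(message="How deep is the Mariana Trench?", documents=docs)
print(response.text)        # grounded answer
print(response.citations)   # spans linking parts of the answer back to the snippets
```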

Command R/R+ comes with a Tool Use API that enables the language model to interact with user-defined tools to automate highly sophisticated tasks. In Tool Use mode, Command R/R+ creates API payloads (JSON objects with specific parameters) based on user interactions and conversational history, which can be used to instruct any other application or tool.

For example, an application can be instructed to automatically categorize and route support tickets to the appropriate individual, change a status in customer relationship management (CRM) software, or retrieve relevant snippets from a vector database. Tool use comes in two variants: single-step and multi-step.

To explore these capabilities further, you can refer to the provided Jupyter notebook and Coheres AWS GitHub repository, which offer additional examples showcasing various use cases and applications.

After you've finished running the notebook and exploring the Cohere Command R and R+ models, it's essential to clean up the resources you've created to avoid incurring unnecessary charges. Follow these steps to delete the resources and stop the billing:
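The post's step-by-step cleanup list is not reproduced here; a typical teardown with the SageMaker boto3 client looks like the following (the resource names are placeholders and should match whatever you created):

```python
# Delete the endpoint, its configuration, and the model to stop billing.
# Names below are placeholders for the resources created earlier.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")
name = "cohere-command-r-plus"

sm.delete_endpoint(EndpointName=name)
sm.delete_endpoint_config(EndpointConfigName=name)
sm.delete_model(ModelName=name)
```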

In this post, we explored how to leverage the powerful capabilities of Cohere's Command R and R+ models on Amazon SageMaker JumpStart. These state-of-the-art large language models are specifically designed to excel at real-world enterprise use cases, offering strong performance and scalability. With their availability on SageMaker JumpStart and AWS Marketplace, you now have seamless access to these cutting-edge models, enabling you to unlock new levels of productivity and innovation in your natural language processing projects.

See the article here:
Cohere Command R and R+ are now available in Amazon SageMaker JumpStart | Amazon Web Services - AWS Blog

What is the Grad CAM method? – DataScientest

Grad-CAM consists in finding out which parts of an image led a convolutional neural network to its final decision. The method produces heat maps representing class activations on the images received as input; each class activation map is associated with a specific output class.

These maps indicate the importance of each pixel in relation to the class in question by increasing or decreasing the intensity of the pixel.

For example, consider a convolutional network trained on images of dogs and cats. The Grad-CAM visualisation can generate a heatmap for the cat class, indicating the extent to which the different parts of the image correspond to a cat, and likewise a heatmap for the dog class.

The class activation map assigns importance to each position (x, y) in the last convolutional layer by calculating the linear combination of activations, weighted by the corresponding output weights for the observed class (Australian terrier in the example below). The resulting class activation map is then resampled to the size of the input image. This is illustrated by the heatmap below.
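As a rough sketch of that computation, the TensorFlow/Keras snippet below pools the class gradients into per-channel weights, takes the weighted linear combination of the last convolutional feature maps, and resizes the result to the input image. The model, image batch, and layer name are assumptions for illustration, not DataScientest's code:

```python
# Minimal Grad-CAM sketch (TensorFlow/Keras). Assumes a trained `model`, a preprocessed
# image batch `img` of shape [1, H, W, C], and the name of its last convolutional layer.
import tensorflow as tf

def grad_cam(model, img, last_conv_name, class_idx):
    # Expose both the last conv feature maps and the final predictions
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(last_conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img)
        class_score = preds[:, class_idx]            # score of the observed class
    grads = tape.gradient(class_score, conv_out)      # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))      # one weight per feature map
    # Weighted linear combination of the feature maps; ReLU keeps positive evidence only
    cam = tf.nn.relu(tf.einsum("bhwc,bc->bhw", conv_out, weights))
    # Resample to the input image size and normalise to [0, 1] for the heatmap
    cam = tf.image.resize(cam[..., tf.newaxis], img.shape[1:3])[..., 0]
    return cam / (tf.reduce_max(cam) + 1e-8)
```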

See more here:
What is the Grad CAM method? - DataScientest

Machine learning comes to Chrome’s address bar on Windows, Mac, and ChromeOS – Android Police

Google's foray into AI, exemplified by its popular AI chatbot, Gemini, is extending to more of its services and apps. Notably, the Chrome browser is poised to showcase its potential in incorporating AI features. In a recent update, we shared the news of a potential Gemini integration in Google Chrome for desktop to enhance the address bar. Now, the Chrome address bar, also known as the Omnibox, is set to elevate its intelligence by integrating cutting-edge machine-learning models.

As noted in the Chromium Blog, starting with Chrome version M124, Google is integrating machine learning models into Chrome's address bar to provide users with more accurate and relevant web page suggestions. These machine-learning models will also help increase the relevance of search suggestions.

Chrome software engineer Justin Donnelly sheds light on the challenges faced by the engineering team in enhancing Omnibox. The hand-crafted formulas it used previously were not well-suited to modern scenarios, leading to the scoring system remaining unchanged for a significant period. Moreover, altering a feature used billions of times daily posed another significant challenge for the team.

Donnelly added that the machine learning models used to train the Chrome Omnibox could identify some interesting patterns. For example, when users select a URL and then immediately return to Omnibox to search for another URL, the ML system decreases the relevance score of that URL. In future attempts, the ML system won't prioritize the URL with a lower score in that context.

According to Chrome software engineers, integrating machine learning models into the Omnibox holds immense potential for enhancing the user experience. These models could potentially adapt to the time of day, offering users more relevant results. Donnelly also revealed that the Chrome engineering team is exploring training specialized versions of the model for mobile, enterprise, or academic environments, further enhancing the user experience on various platforms.

The feature will be available on Google Chrome for Windows, Mac, and ChromeOS. Meanwhile, a similar feature will likely be added to the Android version of Chrome soon for a unified experience.

The rest is here:
Machine learning comes to Chrome's address bar on Windows, Mac, and ChromeOS - Android Police

What’s the future of AI? | McKinsey – McKinsey

We're in the midst of a revolution. Just as steam power, mechanized engines, and coal supply chains transformed the world in the 18th century, AI technology is currently changing the face of work, our economies, and society as we know it. We don't know exactly what the future will look like. But we do know that these seven technologies will play a big role.

McKinsey highlights several figures alongside the piece: the number of countries that currently have national AI strategies, the year by which AI capabilities are projected to rival humans, and the amount gen AI could add to the global economy annually.

Artificial intelligence is a machine's ability to perform some cognitive functions we usually associate with human minds.

Generative artificial intelligence (AI) describes algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos. Recent breakthroughs in the field have the potential to drastically change the way we approach content creation.

Artificial general intelligence (AGI) is a theoretical AI system with capabilities that rival those of a human. Many researchers believe we are still decades, if not centuries, away from achieving AGI.

Deep learning is a type of machine learning, built on multilayered neural networks, that can be more capable, autonomous, and accurate than traditional machine learning.

Prompt engineering is the practice of designing inputs for AI tools that will produce optimal outputs.

Machine learning is a form of artificial intelligence that is able to learn without explicit programming by a human.

Tokenization is the process of creating a digital representation of a real thing. Tokenization can be used to protect sensitive data or to efficiently process large amounts of data.

Read the original post:
What's the future of AI? | McKinsey - McKinsey

Google supercharges Chrome’s omnibox address bar with machine learning – TechSpot

Why it matters: Google is supercharging the address bar of its popular web browser with machine-learning capabilities. Known as the "omnibox" since it pulls double duty as both a URL entry field and search box, this unassuming text field is about to get a major upgrade.

The omnibox has evolved well beyond its humble beginnings as a place to type website addresses. It can now handle all sorts of queries and tasks by leveraging Google's vast search prowess. However, the suggestions and results it surfaces have been driven by a relatively rigid "set of hand-built and hand-tuned formulas." That's all about to change.

In a recent post on the Chromium blog, Justin Donnelly, the engineering lead for Chrome's omnibox, revealed that his team has been hard at work adapting machine learning models to drastically improve the omnibox's "relevance scoring" capabilities. In other words, omnibox will get much better at understanding the context behind your queries and providing more useful suggestions tailored to your needs.

According to Donnelly, when he surveyed colleagues on how to enhance the omnibox experience, improving the scoring system topped the wishlist. While the current rule-based approach works for a vast number of cases, it lacks flexibility and struggles to adapt to new scenarios organically. Enter machine learning.

By analyzing massive datasets of user interactions, browsing patterns, and historical data points like how frequently you visit certain sites, the new AI models can generate far more nuanced relevance scores. For instance, it learned that if you swiftly leave a webpage, chances are it wasn't what you were looking for, so suggestions for that URL get demoted.
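As a toy sketch of the idea (emphatically not Chrome's model, features, or data), a gradient-boosted classifier trained on a few hypothetical interaction signals shows how quick bounces can push a suggestion's score down:

```python
# Toy illustration of ML-based relevance scoring -- not Chrome's actual model or features.
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical per-suggestion features: [visit count, seconds spent on page last visit, query match strength]
X = [[120, 300, 0.9], [3, 2, 0.8], [45, 150, 0.4], [1, 1, 0.2], [60, 240, 0.7], [2, 3, 0.6]]
y = [1, 0, 1, 0, 1, 0]   # 1 = the user picked this suggestion, 0 = they did not

model = GradientBoostingClassifier().fit(X, y)
# A page the user bounced from quickly scores lower than one they dwelled on
print(model.predict_proba([[80, 280, 0.8], [80, 2, 0.8]])[:, 1])
```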

As you use the smarter omnibox over time across Windows, Mac, and ChromeOS, it will continue refining and personalizing its suggestions based on your evolving interests and habits. Donnelly's team also plans to explore incorporating time-of-day awareness, specialized models for different user groups like mobile or enterprise, and other contextual signals.

Of course, enabling such deep personalization requires handing over more personal browsing data to Google's machine-learning models. How comfortable you are with that trade-off is a personal decision.

Google has been gradually rolling out these omnibox improvements over recent Chrome updates, with the machine learning models really flexing their muscles starting with version M124 expected in the coming months. And while not mentioned in the blog post, it's safe to assume the update would trickle down to mobile as well eventually.

See the original post:
Google supercharges Chrome's omnibox address bar with machine learning - TechSpot

Navigating the black box AI debate in healthcare – HealthITAnalytics.com

May 01, 2024 - Artificial intelligence (AI) is taking the healthcare industry by storm as researchers share breakthroughs and vendors rush to commercialize advanced algorithms across various use cases.

Terms like machine learning, deep learning and generative AI are becoming part of the everyday vocabulary for providers and payers exploring how these tools can help them meet their goals; however, understanding how these tools come to their conclusions remains a challenge for healthcare stakeholders.

Black box software, in which an AI's decision-making process remains hidden from users, is not new. In some cases, the application of these models may not be an issue, but in healthcare, where trust is paramount, black box tools could present a major hurdle for AI deployment.

Many believe that if providers cannot determine how an AI generates its outputs, they cannot determine if the model is biased or inaccurate, making them less likely to trust and accept its conclusions.

This assertion has led stakeholders to question how to build trust when adopting AI in diagnostics, medical imaging and clinical decision support. Doing so requires the healthcare industry to explore the nuances of the black box debate.

In this primer, HealthITAnalytics will outline black box AI in healthcare, alternatives to the black box approach and the current AI transparency landscape in the industry.

One of the major appeals of healthcare AI is its potential to augment clinician performance and improve care, but the black box problem significantly inhibits how well these tools can deliver on those fronts.

Research published in the February 2024 edition of Intelligent Medicine explores black box AI within the context of the "do no harm" principle laid out in the Hippocratic Oath. This fundamental ethical rule reflects a moral obligation clinicians undertake to prevent unnecessary harm to patients, but black box AI can present a host of harms unbeknownst to both physicians and patients.

"[Black box AI] is problematic because patients, physicians, and even designers do not understand why or how a treatment recommendation is produced by AI technologies," the authors wrote, indicating that the possible harm caused by the lack of explainability in these tools is underestimated in the existing literature.

In the study, the researchers asserted that the harm resulting from medical AI's misdiagnoses may be more serious, in some cases, than that caused by human doctors' misdiagnoses, noting that the unexplainability of such systems limits patient autonomy in shared decision-making and that black box tools can create significant psychological and financial burdens for patients.

Questions of accountability and liability that come from adopting black box solutions may also hinder the proliferation of healthcare AI.

To tackle these concerns, many stakeholders across the healthcare industry are calling for the development and adoption of explainable AI algorithms.

"Explainable AI (XAI) refers to a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms," according to IBM. "[Explainability] is used to describe an AI model, its expected impact and potential biases. It helps characterize model accuracy, fairness, transparency and outcomes in AI-powered decision making."

Having insights into these aspects of an AI algorithm, particularly in healthcare, can help ensure that these solutions meet the industry's standards.

Explainability can be incorporated into AI in a variety of ways, but clinicians and researchers have outlined a few critical approaches to XAI in healthcare in recent years.

A January 2023 analysis published in Sensors indicates that XAI techniques can be divided into categories based on form, interpretation type, model specificity and scope. Each methodology has pros and cons depending on the healthcare use case, but applications of these approaches have seen success in existing research.

A research team from the University of Illinois Urbana-Champaign's Beckman Institute for Advanced Science and Technology, writing in IEEE Transactions on Medical Imaging, demonstrated that a deep learning framework could help address the black box problem in medical imaging.

The researchers' approach involved a model for identifying disease and flagging tumors in medical images like X-rays, mammograms and optical coherence tomography (OCT). From there, the tool generates a value between zero and one to denote the presence of an anomaly, which can be used in clinical decision-making.

However, alongside these values, the model also provides an equivalency map (E-map), a transformed version of the original medical image that highlights medically interesting regions, which helps the tool explain its reasoning and enables clinicians to check for accuracy and explain diagnostic findings to patients.

Other approaches to shed light on AI's decision-making have also been proposed.

In a December 2023 Nature Biomedical Engineering study, researchers from Stanford University and the University of Washington outlined how an auditing framework could be applied to healthcare AI tools to enhance their explainability.

The approach utilizes a combination of generative AI and human expertise to assess classifiers, algorithms used to categorize data inputs.

When applied to a set of dermatology classifiers, the framework helped researchers identify which image features had the most significant impact on the classifiers' decision-making. This revealed that the tools relied on both undesirable features and features leveraged by human clinicians.

These insights could aid developers looking to determine whether an AI relies too heavily on spurious data correlations and correct those issues before deployment in a healthcare setting.

Despite these successes in XAI, there is still debate over whether these tools effectively solve the black box problem or whether black box algorithms are a problem.

While many in the healthcare industry maintain that black box algorithms are a major concern and discourage their use, some have raised questions about the nuances of these assertions. Others posit that the black box problem is an issue but indicate that XAI is not a one-size-fits-all solution.

One central talking point in these debates revolves around the use of other tools and technologies in healthcare that could be conceptualized as black box solutions.

"Although [the black box AI] discussion is ongoing, it is worth noting that the mechanism of action of many commonly prescribed medications, such as Panadol, is poorly understood and that the majority [of] doctors have only a basic understanding of diagnostic imaging tools like magnetic resonance imaging and computed tomography," explained experts writing in Biomedical Materials & Devices.

While not all healthcare tools are necessarily well-understood, such solutions can be contentious in evidence-based medicine, which prioritizes the use of scientific evidence, clinical expertise and patient values to guide care.

"Some have suggested that the black-box problem is less of a concern for algorithms used in lower-stakes applications, such as those that aren't medical and instead prioritize efficiency or betterment of operations," the authors noted.

However, AI is already being used for various tasks, including decision support and risk stratification, in clinical settings, raising questions about who is responsible in the event of a system failure or error associated with using these technologies.

Explainability has been presented as a potential method to ease concerns about responsibility, but some researchers have pointed out the limitations of XAI in recent years.

In a November 2021 viewpoint published in the Lancet Digital Health, researchers from Harvard, the Massachusetts Institute of Technology (MIT), and the University of Adelaide argued that assertions about XAI's potential to improve trust and transparency represent "false hope" for current explainability methods.

The research team asserted that black box approaches are unlikely to achieve these goals for patient-level decision support due to issues like interpretability gaps, which characterize an aspect of human-computer interaction wherein a model presents its explanation and the human user must interpret said explanation.

"[This method] relies on humans to decide what a given explanation might mean. Unfortunately, the human tendency is to ascribe a positive interpretation: we assume that the feature we would find important is the one that was used," the authors explained.

This is not necessarily the case, as there can be many features, some invisible to humans, that a model may rely on, which could lead users to form an incomplete or inaccurate interpretation.

The research team further indicated that model explanations have no performance guarantees, opening the door for other issues.

"[These explanations] are only approximations to the model's decision procedure and therefore do not fully capture how the underlying model will behave. As such, using post-hoc explanations to assess the quality of model decisions adds another source of error: not only can the model be right or wrong, but so can the explanation," the researchers stated.

A 2021 article published in Science echoes these sentiments, asserting that the current hype around XAI in healthcare both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

The authors underscored that for many applications in medicine, developers must use complicated machine learning models that require massive datasets with highly engineered features. In these cases, a simpler, interpretable AI (IAI) model couldn't be used as a substitute. XAI provides a secondary alternative, as these models can approach the high level of accuracy achieved by black box tools.

But here, users still face the issue of post-hoc explanations that may make them feel as though they understand the model's reasoning without actually shedding light on the tool's inner workings.

In light of these and other concerns, some have proposed guidelines to help healthcare stakeholders determine when it is appropriate to use black box models with explanations rather than IAI, such as when there is no meaningful difference in accuracy between an interpretable model and black box AI.

The debate around the use of black box solutions and the role of XAI is not likely to be resolved soon, but understanding the nuances in these conversations is vital as stakeholders seek to navigate the rapidly evolving landscape of AI in healthcare.

The rest is here:
Navigating the black box AI debate in healthcare - HealthITAnalytics.com

Machine learning vs deep learning vs neural networks: What’s the difference? – ITPro

The terms machine learning and deep learning can seem interchangeable to most people, but they aren't. Both are considered subdivisions within the world of artificial intelligence (AI), but the two have many differences, especially in their architecture and use cases.

Machine learning, for instance, uses structured data and algorithms to train models, with more data at its disposal generally equating to more accurate and better-trained models. The idea is to eliminate the need for human intervention. Deep learning, on the other hand, is a subset of machine learning and uses neural networks to imitate the way humans think, meaning the systems designed require even less human intervention.

Differentiating the two in this way is crucial to AI research and to the practical application of both, particularly as businesses attempt to integrate such technologies into their core processes and recruit skilled individuals to fill technical roles.

The chances are that you've probably used an application or system built on machine learning. Whether you've interacted with a chatbot, utilized predictive text, or gone to watch a show after Netflix recommended it to you, machine learning was likely at the core of these systems. Machine learning is a subset of AI, and a blanket term used to define machines that learn from datasets.

Using structured data that comes in the form of text, images, numbers, financial transactions, and many other things, machine learning can replicate the process of human learning. Collected data is used as training material to direct the machine learning model. Theoretically, the greater the volume of data that is used, the higher the quality of the model. Machine learning is all about allowing computers to self-program via training datasets and infrequent human interventions.

Supervised learning, semi-supervised learning, unsupervised learning, and reinforcement learning are all differing strands of machine learning processes.

The first of these techniques, supervised learning, involves machine learning scientists feeding labeled training data into algorithms to clearly define variables. This is so that the algorithm can start to understand connections. In contrast, unsupervised learning uses unlabelled data and allows the algorithms to actively search for relationships and connections. Acting as the logical midpoint between these processes, semi-supervised learning aids the model's own comprehension of the data. Reinforcement learning, on the other hand, works by letting a machine complete a series of decisions in pursuit of an objective in an unknown environment.
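As a toy illustration of the difference, the scikit-learn sketch below fits a supervised classifier with labels and an unsupervised clustering model without them; the dataset and parameters are illustrative only:

```python
# Toy contrast between supervised and unsupervised learning (illustrative only).
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=300, centers=3, random_state=42)   # synthetic data with 3 groups

# Supervised: the labels y tell the model what each example is
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised training accuracy:", clf.score(X, y))

# Unsupervised: no labels; the algorithm looks for structure on its own
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("First five cluster assignments:", km.labels_[:5])
```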


A subset of machine learning, deep learning deploys systems of artificial neural networks to mimic the cognitive operations of the human mind.

A lack of sufficient compute power has, until now, created barriers to neural network learning capabilities. Huge strides in big data analytics have changed the landscape significantly, with larger and more complex neural networks now able to take form. This means that machines can now understand, learn, and react to complex scenarios quicker than human beings.

These neural networks are constructed in layers and designed to enable the transmission of data from node to node, much like neurons in the brain. Vast datasets are required to build these models but, once they've been constructed, they can give users instant results with little needed in the way of human intervention. There are many, varied ways in which deep learning can be performed; the most common network types are listed below, followed by a short code sketch.

Convolutional Neural Networks (CNNs): These comprise multiple layers and are mostly used for image processing and object detection.

Recurrent Neural Networks (RNNs): These are types of artificial neural network that use sequential data or time series data. They are frequently used in problems such as language translation, natural language processing (NLP), speech recognition, and image captioning.

Long Short-Term Memory Networks (LSTMs): These are types of Recurrent Neural Network (RNN) that can learn and remember long-term dependencies. They can be useful for complex problem domains like machine translation, speech recognition, and more.

Generative Adversarial Networks (GANs): These are generative deep learning algorithms that produce new data instances that look like the training data. They comprise two parts: a generator, which learns to generate fake data, and a discriminator, which learns to tell that fake data apart from real data. These networks have been used to produce fake images of people who have never existed as well as new and unique music.

Radial Basis Function Networks (RBFNs): These networks have an input layer, a hidden layer, and an output layer and are typically used for classification, regression, and time-series predictions.

Multilayer Perceptrons (MLPs): These are a type of feedforward neural network, meaning information moves only forward through the network. They have a fully connected input layer and output layer, and there may also be hidden layers. These are used in speech-recognition, image-recognition, and machine-translation software.

Deep Belief Networks (DBNs): These look like another feedforward neural network with hidden layers, but aren't. They are stacks of restricted Boltzmann machines connected in sequence. These are used to identify, gather and generate images, video sequences and motion-capture data.
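To make the layered construction concrete, here is a minimal Keras sketch of a small CNN of the kind described above; the architecture is illustrative, not taken from the article:

```python
# Minimal convolutional network (illustrative architecture only).
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),             # e.g. 28x28 grayscale images
    layers.Conv2D(32, 3, activation="relu"),     # convolutional feature extraction
    layers.MaxPooling2D(),                       # downsample the feature maps
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),         # fully connected layer
    layers.Dense(10, activation="softmax"),      # 10-class output
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()   # shows how data flows from layer to layer
```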

Despite the frequent confusion about their similarities, deep learning is very much a subset of machine learning. Deep learning, however, is differentiated from its counterpart by the data types it interacts with and the ways in which it can learn.

Machine learning uses structured, labelled data to predict outcomes. This means a machine learning model's input data defines specific features and is organised into tables. While the model gets progressively better at carrying out the task at hand, a human still needs to intervene at points to ensure it is working in the required way. In other words, if the predictions are not accurate, an engineer will make any adjustments needed to get back on track.

Deep learning systems, by contrast, involve algorithms that can autonomously assess the accuracy of their own predictions, thanks to the neural networks at the heart of deep learning models.

Another difference is that where machine learning can use small amounts of data to make predictions, deep learning needs much, much more data to make more accurate predictions.

While machine learning needs little time to train, typically a few seconds to a few hours, deep learning takes far longer, as the algorithms used here involve many layers.

Outputs also differ between the two. Machine learning tends to output numerical values, such as a score or classification, while deep learning can output in multiple formats, such as text, scores, or even sounds and images.

Machine learning is already in use in a variety of areas that are considered part of day-to-day life, including on social media, on email platforms and, as mentioned, on streaming services like Netflix. These types of applications lend themselves well to machine learning because they're relatively simple and don't require vast amounts of computational power to process complicated decision-making.

Among some of the more complex uses of machine learning are computer vision, such as facial recognition, where technology can be used to recognise people in crowded areas. Handwriting recognition, too, can be used to identify an individual from documents that are scanned en masse. This would apply, for example, to academic examinations, police records, and so on. Speech recognition, meanwhile, such as that used in voice assistants, is another application of machine learning.

Deep learning, on the other hand, allows for far more complex decision-making and near-fully autonomous systems, including robotics and autonomous vehicles.

Deep learning also has its uses in image recognition, where massive amounts of data are ingested and used to help the model tag, index, and annotate images. Such models are currently in use for generating art, in systems like DALL-E. Similarly to machine learning, deep learning can be used in virtual assistants, in chatbots, and even in image colorisation. Deep learning has also had a particularly exciting impact in the field of medicine, such as in the development of personalised medicines created for somebody's unique genome.

Read more here:
Machine learning vs deep learning vs neural networks: What's the difference? - ITPro