
A deep learning-based algorithm for pulmonary tuberculosis detection in chest radiography | Scientific Reports – Nature.com

This study was designed to use freely available open TB CXR datasets as training data for our AI algorithm. Subsequent accuracy analyses were performed using independent CXR datasets and actual TB cases from our hospital. All image data were de-identified to ensure privacy. This study was reviewed and approved by the institutional review board (IRB) of Kaohsiung Veterans General Hospital, which waived the requirement for informed consent (IRB no.: KSVGH23-CT4-13). This study adheres to the principles of the Declaration of Helsinki.

The flowchart of the study design is shown in Fig. 1. Because of the high prevalence of TB and its varied imaging presentation, TB cannot be entirely excluded when a CXR shows pneumonia or other entities. Our preliminary research indicated that training a model solely on TB vs. normal resulted in bimodally distributed predictive values: CXRs that were abnormal but not indicative of TB usually received predictive values that were either too high or too low, so the model failed to differentiate such abnormal cases from normal or TB cases. For common CXR abnormalities such as pneumonia and pleural effusion, the TB risk is lower, but not zero. Thus, we trained two models on two different training datasets, one for TB detection and another for abnormality detection, and averaged their output predictive values.
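In other words, each CXR receives the mean of the two models' outputs. A minimal sketch of this score fusion follows; the probability values are invented for illustration:

```python
def combined_score(p_tb: float, p_abnormal: float) -> float:
    """Average the predictive values of the TB model and the abnormality model."""
    return (p_tb + p_abnormal) / 2.0

# Illustrative values only: a pneumonia CXR might score low on the TB model
# but high on the abnormality model; averaging keeps it separated from both
# clear-normal and typical-TB cases.
print(combined_score(0.15, 0.90))  # 0.525
```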

Flow chart of model training and validation.

The features of the CXR datasets used for training are summarized in Table 1. The inclusion criteria were CXRs showing TB, other abnormalities, or normal findings. Both posteroanterior and anteroposterior view CXRs were included. The exclusion criteria were CXRs with poor quality, lateral-view CXRs, pediatric CXRs, and those with lesions too small to detect at a 224×224-pixel size. All CXR images were confirmed by C.F.C. to ensure both image quality and label correctness.

Training dataset 1 was used to train the algorithm to detect typical TB patterns on CXR. A total of 348 TB CXRs and 3806 normal CXRs were collected from various open datasets, including the Shenzhen dataset from Shenzhen No. 3 People's Hospital, the Montgomery dataset [19,20], and Kaggle's RSNA Pneumonia Detection Challenge [21,22].

Training dataset 2 was used to train the algorithm to detect CXR abnormalities. A total of 1150 abnormal CXRs and 627 normal CXRs were collected from the ChestX-ray14 dataset [23]. The abnormal CXRs consisted of consolidation (185), cardiomegaly (235), pulmonary edema (139), pleural effusion (230), pulmonary fibrosis (106), and mass (255).

In this study, we employed GoogleTM [18], a free online AI tool dedicated to image classification. GoogleTM provides a user-friendly, web-based graphical interface that allows users to execute deep neural network computations and train image classification models with minimal coding. By leveraging transfer learning, GoogleTM significantly reduces the computation time and the amount of data required for deep neural network training. Within GoogleTM, the base model for transfer learning is MobileNet, a model pretrained by Google on the ImageNet dataset of 14 million images and capable of recognizing 1,000 classes of images. Transfer learning is achieved by replacing the last two layers of the pretrained MobileNet and then training these new layers on the task-specific images [18,24]. In GoogleTM, all images are resized and cropped to 224×224 pixels for training; 85% of the images are automatically assigned to the training dataset, and the remaining 15% to the validation dataset used to calculate accuracy.
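GoogleTM itself requires no code, but the transfer-learning recipe described above can be approximated in Keras. The following is only a sketch under those assumptions (frozen ImageNet-pretrained MobileNet, a new classification head, 224×224 inputs), not the study's actual pipeline:

```python
import tensorflow as tf

# Pretrained MobileNet as a frozen feature extractor (ImageNet weights),
# roughly mirroring the transfer-learning setup described above.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg"
)
base.trainable = False

# Replace the final layers with a new classification head for two classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g., TB vs. normal
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# An 85/15 train/validation split, as GoogleTM performs automatically:
# train_ds, val_ds = ...  (tf.data pipelines of 224x224 CXR images)
# model.fit(train_ds, validation_data=val_ds, epochs=50)
```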

The hardware employed in this study included a 12th-generation Intel Core i9-12900K CPU with 16 cores operating at 3.2–5.2 GHz, an NVIDIA RTX A5000 GPU equipped with 24 GB of error-correction code (ECC) graphics memory, 128 GB of random-access memory (RAM), and a 4 TB solid-state drive (SSD).

To evaluate the accuracy of the algorithms, we collected clinical CXR data for TB, normal cases, and pneumonia/other disease from our hospital.

Validation dataset 1 included 250 de-identified CXRs retrospectively collected from VGHKS, dated between January 1, 2010 and February 27, 2023. This dataset included 83 TB cases (81 confirmed by microbiology and 2 by pathology), 84 normal cases, and 83 abnormal cases other than TB (73 pneumonia, 14 pleural effusion, 10 heart failure, and 4 fibrosis; some cases had combined features). The image sizes of these CXRs ranged from 1760 to 4280 pixels in width and from 1931 to 4280 pixels in height.

Validation dataset 2 is a smaller dataset derived from validation dataset 1 for comparing algorithm and physician performance; it included CXRs from 50 TB, 33 normal, and 22 abnormal-other-than-TB cases (22 pneumonia, 5 pleural effusion, 1 heart failure, and 1 fibrosis). The features of the two validation datasets are provided in Table 1.

Data collected from clinical CXR cases included demographic data (such as age and sex), radiology reports, clinical diagnoses, microbiological reports, and pathology reports. All clinical TB cases included in the study had their diagnosis confirmed by microbiology or pathology, and their CXRs were performed within 1 month of TB diagnosis. Normal CXRs were also reviewed by C.F.C., with radiology reports taken into consideration. Pneumonia/other disease cases were identified by reviewing medical records and examinations, with diagnoses made by clinical physicians' judgment and no evidence of TB detected within a three-month period.

We employed validation dataset 2 to evaluate the TB-detection accuracy of five clinical physicians (all board-certified pulmonologists; average experience 10 years, range 5–16 years). Each physician performed the test without additional clinical information and was asked to estimate the probability of TB in each CXR, decide whether sputum TB examinations were needed, and classify each CXR into one of three categories: typical TB pattern, normal pattern, or abnormal pattern (less like TB).

We also collected radiology reports from validation dataset 2 to evaluate their sensitivity for detecting TB. Reports mentioning suspicion of TB or mycobacterial infection were classified as typical TB pattern. Reports indicating abnormal patterns such as infiltration, opacity, pneumonia, effusion, edema, mass, or tumor (but without mentioning tuberculosis, TB, or mycobacterial infection) were classified as abnormal pattern (less like TB). Reports demonstrating no evident abnormalities were classified as normal pattern. Furthermore, by analyzing the pulmonologists' decisions regarding sputum TB examinations, we estimated the sensitivity of TB detection in pulmonologists' actual clinical practice.

Continuous variables are presented as mean ± standard deviation (SD) or median (interquartile range [IQR]), while categorical variables are presented as number (percentage). For accuracy analysis, the receiver operating characteristic (ROC) curve was used to compute the area under the curve (AUC). Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), likelihood ratio (LR), overall accuracy, and F1 score were calculated. A confusion matrix was used to illustrate the accuracy of each AI model. Boxplots were used to evaluate the distribution of the predicted values of the AI models for each etiology subgroup.

The formulas for each accuracy calculation are as follows:

(TP is true positives, TN is true negatives, FP is false positives, FN is false negatives, P is all positives, and N is all negatives.)

$$\begin{gathered}
\text{P} = \text{TP} + \text{FN}, \\
\text{N} = \text{TN} + \text{FP}, \\
\text{Sensitivity} = \text{TP}/\text{P} \times 100, \\
\text{Specificity} = \text{TN}/\text{N} \times 100, \\
\text{PPV} = \text{TP}/(\text{TP} + \text{FP}) \times 100, \\
\text{NPV} = \text{TN}/(\text{TN} + \text{FN}) \times 100, \\
\text{LR+} = \text{sensitivity}/(1 - \text{specificity}), \\
\text{LR-} = (1 - \text{sensitivity})/\text{specificity}, \\
\text{Overall accuracy} = (\text{TP} + \text{TN})/(\text{P} + \text{N}) \times 100, \\
\text{F1 score} = (2 \times \text{sensitivity} \times \text{PPV})/(\text{sensitivity} + \text{PPV}).
\end{gathered}$$
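For illustration, the same metrics can be computed directly from the four confusion-matrix counts. The sketch below uses hypothetical counts, not values from the study; sensitivity, specificity, PPV, and NPV are expressed as percentages, and the likelihood ratios use the corresponding fractions:

```python
def accuracy_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the metrics defined above from a 2x2 confusion matrix."""
    p, n = tp + fn, tn + fp
    sensitivity = tp / p * 100  # percent
    specificity = tn / n * 100  # percent
    ppv = tp / (tp + fp) * 100
    npv = tn / (tn + fn) * 100
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "PPV": ppv,
        "NPV": npv,
        "LR+": (sensitivity / 100) / (1 - specificity / 100),
        "LR-": (1 - sensitivity / 100) / (specificity / 100),
        "overall_accuracy": (tp + tn) / (p + n) * 100,
        # Harmonic mean of sensitivity and PPV; already in percent here.
        "F1": 2 * sensitivity * ppv / (sensitivity + ppv),
    }

# Hypothetical counts, not results from the study:
print(accuracy_metrics(tp=75, tn=150, fp=17, fn=8))
```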

Continue reading here:
A deep learning-based algorithm for pulmonary tuberculosis detection in chest radiography | Scientific Reports - Nature.com


From bioscience to skin care, startups are harnessing AI to solve problems big and small – Source – Microsoft

It started with a trek to one of Europe's most remote areas.

Fascinated by the biodiversity of the world's less-explored environments, biologists and explorers Glen Gowers and Oliver Vince spent a month on an ice cap in Iceland in 2019, undertaking what is believed to be the world's first fully off-grid DNA sequencing expedition. Using solar power alone, the team sequenced DNA from microorganisms living in an area with both ice and a hot spring. (Sequencing DNA refers to reading the genetic code of an organism.)

After returning to the U.K., the pair shared their data with Philipp Lorenz, a University of Oxford scientist whose research focuses on genomics and AI. It quickly became apparent that the data they had collected was unlike anything in any reference database, so different, in fact, that the sequences couldn't be annotated using traditional methods.

That realization prompted Gowers and Vince to launch Basecamp Research, a London-based startup that aims to build the world's largest database of natural biodiversity and apply AI and machine learning to advance bioscience. The company is among a wave of startups worldwide that are harnessing machine learning and artificial intelligence, particularly generative AI, to create AI-powered tools and solutions across an increasingly large swath of industries.

Pointing to the lack of data in the life sciences, Lorenz says there are 10 to the power of 26 species on the planet, but only a few million of those have been sequenced. "In terms of comparison, that's about five drops of water compared to the Atlantic Ocean of what we don't know about life on Earth," says Lorenz, chief technology officer at Basecamp Research.

"If you want to do deep learning on biological data, there's just a fundamental, enormous knowledge gap."

To bridge that gap, Basecamp Research is partnering with nature parks on five continents and working across 27 countries to sequence genomic information from the world's most diverse and understudied biomes, from volcanic islands and deep oceans to jungles and the Antarctic.

The company, which has close to 35 employees, collects samples only with consent from stakeholders, including national and local governments, nature parks, research institutes, and landowners. Basecamp Research shares benefits with stakeholders by employing local scientists, providing training and resources to partners, and sharing revenue if commercial products are developed from the locations where Basecamp Research is working.

"We are the first company, really in the world, that is doing this at scale and in collaboration with stakeholders," Lorenz says. "In the age of generative AI, we are the only life science organization that can train AI models in which every point in the training dataset can be traced back to consent and benefit-sharing agreements."

In just two years, he says, Basecamp Research has built a database about five times larger and more diverse than any other of its type. Unlike traditional protein databases, which primarily just store data, Basecamp's database is a knowledge graph: a network that organizes data and shows the relationships between billions of data points, linking protein and DNA sequences to their biological, chemical, and evolutionary contexts.
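As a toy illustration of the difference, a knowledge graph links entities through typed relationships that can be traversed, rather than storing flat records. The sketch below uses networkx, and every entity name in it is invented:

```python
import networkx as nx

# Toy knowledge graph: nodes are biological entities with attributes,
# edges carry typed relationships. All names and values are invented.
g = nx.MultiDiGraph()
g.add_node("protein:P001", kind="protein")
g.add_node("organism:thermophile_X", kind="organism")
g.add_node("env:iceland_hot_spring", kind="environment", temperature_c=92)

g.add_edge("protein:P001", "organism:thermophile_X", relation="encoded_by")
g.add_edge("organism:thermophile_X", "env:iceland_hot_spring", relation="sampled_from")

# A query a flat sequence database cannot answer directly:
# which proteins come from organisms sampled in hot environments?
for protein, organism, data in g.edges(data=True):
    if data["relation"] == "encoded_by":
        for _, env, d2 in g.edges(organism, data=True):
            if d2["relation"] == "sampled_from" and g.nodes[env].get("temperature_c", 0) > 60:
                print(protein, "->", env)
```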

In March, Basecamp Research announced the launch of a new deep learning model named BaseFold. The model can predict 3D structures of large proteins and small-molecule interactions with protein targets more accurately than the popular AlphaFold2 model, according to Basecamp.

With its database, Basecamp Research is building deep learning models that are being used to design products such as gene editing systems for therapeutics and enzymes for food manufacturing. One client is developing proteins that break down difficult-to-recycle plastic waste. Another company is designing proteins for dyeing fabrics without using harmful chemicals.

Basecamp Research, Lorenz says, is motivated by a mission of ethical data collection and a core belief in the power of AI to advance biological discovery.

"Biology and the life sciences are just fundamentally more complex than most other domains," he says. "Ultimately, it's going to be deep learning models and AI that will be able to deal with and understand the vastness and complexity of biology."

Microsoft for Startups Founders Hub was launched in 2021 to accelerate innovation by providing access to resources that were traditionally unavailable to fledgling companies. Open to any startup, the platform provides access to leading AI models, Azure credits, free and discounted development tools, and expert guidance. Tens of thousands of startups around the world are now part of Founders Hub, and the number of those companies using Microsoft AI has increased tenfold in the past year, according to Microsoft.

Microsoft for Startups Pegasus Program, an extension of Founders Hub launched in 2023, is an invite-only program that connects growth-stage startups with Microsoft customers in industries including AI, retail, health and life sciences, and cybersecurity. The program matches Microsoft's top enterprise customers with the right startups to help them solve business challenges.

Microsoft's focus on integrating AI into its products, from GitHub to Microsoft 365, is a differentiator and means startups not only get access to those tools but also to the expertise behind them, says Tom Davis, partner at Microsoft for Startups.

"It's not just access to infrastructure, extra Azure credits and things like that," he says. "It's access to knowledge and know-how that will help accelerate these startups. That understanding of how to build AI-based product applications is invaluable for startups."

Tammy McMiller joined Founders Hub not long after launching her company, Plan Heal, in 2022. Based in Chicago, Plan Heal offers AI-powered solutions that enable patients to monitor and report health metrics so care providers can better serve them.

The company's mission of empowering patients and providers is a personal one for McMiller. She decided to start the company after a family member, who had complained of symptoms for more than two years and was regularly seeing her doctor, was diagnosed with stage three colorectal cancer.

McMiller's relative has been cancer-free for nine years, but she points to statistics showing that 167 million people in the U.S. have health issues such as high blood pressure, kidney disease or diabetes but don't know it.

"My family member was one of those statistics," McMiller says. "What that means is that people are living lives with a lower quality of health and not really understanding why. We decided to leverage AI to help people become better reporters of their health."

Through Plan Heal's Smart Health Assessment, which is powered by Azure and integrates with electronic health record systems to help care teams access real-time patient data, patients answer a few questions about their health on a regular basis and can upload images of symptoms or medications.

Algorithms analyze the data to provide insights and flag potential health issues for providers. Cramping or aching in the calves, for example, can indicate peripheral artery disease in a patient with diabetes, which can lead to amputation; a limb is amputated every three minutes and 30 seconds in the U.S. due to diabetes.

Those insights allow providers, who often have high caseloads and limited time with patients, to go into appointments with more information about a patients health and proactively come up with preventative treatments.

"It really changes the dynamic from a disease care interaction to true proactive health care," says Dan Langille, a member of Plan Heal's advisory board. "That's pretty powerful."

Plan Heal's platform also offers targeted assessments for several high-cost chronic diseases, including diabetes and kidney disease. McMiller hopes to pilot the platform with a large-population health care provider this year, and early results seem promising. Testing found that 90% of patients who used the health assessment had a more engaging conversation with their provider, she says, and 85% received care services they would not have otherwise, such as additional examinations or tests.

As an aging population increases the demand for health care services, McMiller says, AI can play a valuable role in helping people track their health and identify potentially life-threatening conditions earlier.

"We'll always need professional health care teams. AI's never going to replace that," McMiller says. "But if that care team member has the efficiency of AI to help automate different services, they can care for patients more efficiently."

In 2018, Anastasia Georgievskaya was a research scientist working with R&D teams at skin care companies to develop models for analyzing skin in a clinical setting. The work involved analyzing before-and-after images of skin that showed benefits from skin care products, contrary to what some consumers believed, she says.

"We started to ask people in the industry, why does everyone think that skin care is not working?" Georgievskaya says. "And the answer was that it's working, but consumers are choosing the wrong product and buying products that were not designed for them."

That got Georgievskaya thinking about using AI and computer vision to replicate on smartphones the analysis she was doing in labs. If consumers could get accurate skin assessments easily through their phones, she reasoned, they could make more informed decisions and get better results from skin care products.

Georgievskaya co-founded Haut.AI, a company based in Tallinn, Estonia, in 2018 to provide skin care companies and retailers with customizable, AI-based skin diagnostic tools. Haut.AI's software uses selfies from consumers to assess skin metrics such as hydration, redness, and wrinkles, then makes personalized product recommendations. A similar application analyzes hair condition, also through a selfie, to gauge features including frizziness, volume, and color uniformity.

Haut.AI's newest product, SkinGPT, lets users upload photos and see how their skin would change over time when using particular products, like a face serum with hyaluronic acid for fine lines and wrinkles; the company says the application is the first to use generative AI for skin care simulations. Haut.AI is also working on a chatbot that can provide consumers with input on skin analysis results and answer questions about ingredients in products or how to combine products.

The platform's algorithms are trained on a mix of lab data from anonymized images of human skin and synthetic data created with generative AI. Datasets in the beauty sector are limited, Georgievskaya says, and using synthetic data allows Haut.AI to train models to account for gender and population-group differences, as well as environmental factors like air pollution and weather that can impact skin condition.

"This blend of synthetic and real data gives a really impressive boost in system accuracy because it can cover a lot of use cases, especially for the groups that you don't usually have much of a dataset for," she says.

Haut.AI, which is part of the Microsoft for Startups Pegasus Program, has around 90 clients, including several that are using its technology for research and development, Georgievskaya says. The platform allows companies to collect data from thousands of study participants with their consent, she says, versus the traditional approach of bringing a few dozen participants to a research facility for testing.

Artificial intelligence, Georgievskaya believes, can provide a more objective and realistic analysis of skin than a person possibly could.

"As humans, it is in our nature to be emotional, and we tend to underestimate or overestimate our skin," she says. "And even if someone else tells you something about your skin, if they're not a doctor, their judgment is also very biased and subjective. The algorithm helps you see objective measures. You can just snap a selfie and get this information in less than 10, 20 seconds."

While many startups are making products powered by AI, Weights & Biases' mission is to provide tools to help AI developers build those solutions. Founded by Lukas Biewald and Chris Van Pelt, the San Francisco-based company arose out of an internship Biewald had at OpenAI in its early days as a research organization.

While at OpenAI, Biewald struggled to find a way to track his experiments. He asked other researchers what they were doing and found a hodgepodge of approaches, from keeping notes in a text editing app to creating Excel documents. There was no uniform way to track the different experiments going on and their performance.

"We saw an opportunity, and really an itch that we had ourselves and wanted to scratch," Van Pelt says.

Biewald and Van Pelt, who previously founded machine learning and AI company Figure Eight, launched Weights & Biases in 2017 to provide tools that help AI developers better manage workflows and build and deploy models faster. The company's platform, which runs in Azure, allows users to track and visualize experiments, store models in a central registry, automatically capture the data and code used for models, and share results with collaborators.
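As a rough illustration of the experiment-tracking workflow, a training loop instrumented with the wandb Python client looks something like the sketch below; the project name and logged metric are placeholders, and a `wandb login` is assumed beforehand:

```python
import random
import wandb

# Start a tracked run; the project name and config values are placeholders.
run = wandb.init(project="demo-experiments", config={"lr": 1e-3, "epochs": 5})

for epoch in range(run.config.epochs):
    # Stand-in for a real training step and its loss.
    train_loss = 1.0 / (epoch + 1) + random.random() * 0.05
    wandb.log({"epoch": epoch, "train_loss": train_loss})

run.finish()  # marks the run complete in the dashboard
```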

Demand for the platform has grown, Van Pelt says, since the release of ChatGPT has made it easier to build general-purpose large language models that can perform many tasks without requiring different models.

"That really changed the dynamic," he says. "The number of people that we could help with our tools went from a fairly small subset of engineers who were specialized in machine learning to essentially all engineers. Our customer base expanded dramatically."

The company, which is a member of the Microsoft for Startups Pegasus Program, has more than 1,000 customers across sectors ranging from tech to finance, health care, medicine, robotics, the automotive industry, and academia (Weights & Biases gives its software to academics for free). OpenAI is a customer, as is Microsoft. The company's platform is being used to fuel drug discovery, advance autonomous vehicle development, and improve health care delivery.

For Van Pelt, seeing the diversity of Weights & Biases' customer base and the innovative ways customers are using its technology is one of his favorite aspects of the company.

"I think every founder or person working on a startup wants to think that what they're doing is going to change the world," he says. "I'm not making any big claims that Weights & Biases is changing the world."

"But we do have a front-row seat [to watch] all of our customers that are doing things that we simply could not do before, and we're helping them to do that. It's really gratifying to see."

Top photo: Biologists Glen Gowers and Oliver Vince, pictured here, launched Basecamp Research after a 2019 expedition to an ice cap in Iceland. (Photo courtesy of Basecamp Research)

Read the original post:
From bioscience to skin care, startups are harnessing AI to solve problems big and small - Source - Microsoft


The rising software supply chain threat: Mounting a unified defense – CIO

Malicious actors have been pressing their advantage against vulnerable software supply chains (SSCs) with exponentially increasing attacks. Enterprises have been hampered in fighting back by a lack of internal consensus on their security capabilities and practices. Recent survey findings uncovered multiple areas of disconnect between senior executives/managers ("executives") and hands-on staff ("doers").

Executives tended to have a comparatively rosier picture of their organizations' security posture. Compared to the doers, executives believed they were implementing more security practices, using more solutions, and defending more effectively against open-source risk. Similarly, they underestimated the time their teams were spending on vulnerability remediation and software package approvals.

The executives and doers also had significantly different perceptions when it came to the incorporation of artificial intelligence (AI) and machine learning (ML) in software applications and for automated security scanning.

The research findings revealed region-specific concerns over SSC security as well.

North America (NA)-based organizations tend to be quicker to adopt ML models than those based in Europe, the Middle East, and Africa (EMEA) or the Asia-Pacific (APAC). Also, organizations in the US appear to have a greater comfort level when it comes to using AI and ML tools for code creation.

These findings suggest that the AI race is more intense in North America, where Silicon Valley technology giants have been investing heavily in its development, than in the EMEA or APAC regions.

Based on the survey findings, it's clear that EMEA organizations exercise more caution when it comes to SSC risk than those in other parts of the world. They are less inclined to deploy software to Internet of Things (IoT) devices, for example. Also, there's more resistance to integrating AI and ML in software, likely due to concerns over security and compliance.

Compared to North America and Asia, the regulatory environment is far more stringent in Europe, where organizations are sensitive to the requirements of the General Data Protection Regulation (GDPR), the Cybersecurity Act, and other key directives.

Yet despite their measured response to emerging software technologies, survey responses indicate that organizations in the EMEA region are aware of the potential of AI and ML tools and are open to considering ways to incorporate them in their SSCs.

Among the notable distinctions of APAC-based organizations is their comparative eagerness to incorporate AI and ML for scanning and remediation. Based on the survey results, they also have a very high comfort level with the use of AI and ML tools for code creation.

That could be problematic. If unchecked, APAC organizations' enthusiasm for these emerging technologies might expose them to greater SSC security risk.

Corporate leaders are eager to bridge the perception gaps and adopt a comprehensive, unified solution to shore up SSC security. Whether based in NA, EMEA, or APAC, executives want to establish a unified SSC security defense posture for their organizations. What's needed is a comprehensive solution that embraces automation, employs AI and ML models, and prioritizes integration across the entire software development lifecycle.

Continue reading here:
The rising software supply chain threat: Mounting a unified defense - CIO


Exploring the limits of robotic systems | Penn Today – Penn Today

As machine learning enters the mainstream, consumers may assume that it can solve almost any problem. "This is not true," says Bruce Lee, a doctoral student in Penn Engineering's Department of Electrical and Systems Engineering. Lee's research works to identify how robotic systems learn to perform different tasks, focusing on how to tell when a problem may be too complex, and what to do about it.

Lee, who is advised by Nikolai Matni, assistant professor in electrical and systems engineering and member of the Penn Research in Embedded Computing and Integrated Systems Engineering (PRECISE) Center, studies how robotic systems learn from data, with the goal of understanding when robots struggle to learn a dynamic system, and what approaches might be effective at combating those challenges.

His work offers insights into the fundamental limits of machine learning, guiding the development of new algorithms and systems that are both data-efficient and robust.

"When I try to apply a reinforcement learning or imitation learning algorithm to a problem, I often reach a point where it does not work, and I have no idea why," says Lee. "Is it a bug in my code? Should I just collect more data or run more iterations? Do I need to change the hyperparameters? Sometimes, the answer is none of the above. Rather, the problem is impossible to learn effectively, no matter what learning algorithm I use. My work can help researchers understand when this is the case."

Improving the way robotic systems learn from data enhances the safety and efficiency of self-driving cars, enabling them to make more reliable decisions in complex, dynamic environments. Similarly, robots operating in human environments, such as in health care or manufacturing, can become more adaptable and capable of performing a wider range of tasks with minimal human intervention. Ultimately, the goal is to create robotic systems that can better serve humanity, contributing to advancements in various fields including transportation, health care, and beyond.

Read more at Penn Engineering Today.

Continued here:
Exploring the limits of robotic systems | Penn Today - Penn Today


If you build it, they will come: Experts believe reimbursement will follow AI implementation – Health Imaging

Lots of products, few payments

The U.S. Food and Drug Administration includes nearly 900 products on its list of approved artificial intelligence- and machine learning-enabled medical devices, the great majority of which are tailored to radiology needs. Despite this, AI algorithms are still not a mainstay within radiology departments.

This could be, in part, due to the lack of reimbursement for such technology and decreasing payments to the specialty in general. After adjusting for inflation, the American College of Radiology estimates that Medicare reimbursement to radiologists has plummeted nearly 32% since 2005.What's more, the Centers for Medicare & Medicaid Services (CMS) has assigned payment for just around 10 of the AI/ML devices that are currently approved.

In the panel discussion, Jha said that if AI makes radiologists better or more efficient at their jobs, it should be adopted without getting too deep into return-on-investment (ROI) calculations because the return will be felt in other ways.

Joseph Cavallo, MD, MBA, an assistant professor of radiology and biomedical imaging at Yale School of Medicine, agreed, noting that the current absence of reimbursement should not deter stakeholders from exploring how AI can improve their practices. "Reimbursement for the use of AI algorithms will be the exception more than the rule for a while," he said.

"Some CPT codes have been created, but AI as a whole is going to have to be like PACS was for radiology for a while. Improving workflow and efficiency for radiologists now will result in ROI and gains in the future," Cavallo suggested.

On an encouraging note for those hoping to implement algorithms into their practice, Eliot Siegel, MD, FSIIM, chief of imaging services at the University of Maryland School of Medicine, said that conversations related to payments for AI-based tools used during the diagnostic process are increasing at the federal level.

Like Jha, Siegel believes that AI algorithms are on a path similar to the one PACS followed.

"It took a small number of years, but eventually people realized film wasn't practical anymore. The same will happen with AI," he said.

Read more about commercially available AI products at work below.

See the rest here:
If you build it, they will come: Experts believe reimbursement will follow AI implementation - Health Imaging


Proactive Ways to Skill Up for AI – AiThority

Artificial Intelligence (AI) is rapidly transforming a multitude of industries, from healthcare and finance to transportation and marketing. As AI continues to infiltrate various sectors, the demand for AI-related skills is surging. Let's explore proactive ways to acquire the skills needed to thrive in this AI-driven landscape.


The emergence of AI technologies has triggered a monumental transformation in employment opportunities, elevating the worth of AI competencies. A LinkedIn survey reveals that roles specializing in AI have seen an annual growth rate of 74% over the previous four years, establishing it as the most rapidly expanding job sector. Likewise, Gartner's research forecasts that by 2022, AI will generate 2.3 million jobs while phasing out only 1.8 million.

It's worth noting that the demand for AI expertise isn't confined to tech firms. Industries such as healthcare, finance, and even agriculture are incorporating AI, thus broadening the range of roles requiring AI knowledge. Whether it's data analysis, machine learning, or natural language processing, possessing AI skills can significantly enhance your employability and open doors to a myriad of career opportunities.

As you navigate the burgeoning field of Artificial Intelligence, honing specific core skills will be instrumental in setting you apart. Let's delve into these essential skills that are highly sought after in the AI industry.


In the dynamic realm of AI, a proactive stance toward acquiring skills is not just advantageous but imperative. The days of waiting for opportunities to knock on your door are long gone. Here's your guide to taking control of your AI learning journey.

The AI landscape is in a constant state of flux, yet the need for AI competencies remains steadfast. From mastering programming languages and data analytics to gaining hands-on experience and networking, the avenues for skill acquisition are diverse. Certifications and real-world experience further solidify your standing in this competitive field. The key to success lies in taking proactive steps to continually enhance your skill set. Don't wait for the future to shape you; shape your future by skilling up in AI today.


Continue reading here:
Proactive Ways to Skill Up for AI - AiThority


Novel wearable system improves balance evaluation – Research & Development World

Researchers at Florida Atlantic University, some of whom are pictured, have developed a novel method using wearable sensors and AI that could reshape balance assessment practices. Credit: Alex Dolce, Florida Atlantic University

Traditionally, physicians have relied on subjective observations and specialized equipment to gauge balance in individuals with conditions such as Parkinson's disease, neurological injuries, and age-related decline. Such methods, especially subjective ones, can lack precision, are difficult to administer remotely, and can be inconsistent. To address these limitations, researchers from Florida Atlantic University have developed a novel approach using wearable sensors and advanced machine learning algorithms that could redefine balance assessment practices.

The research is published in Frontiers in Digital Health.

The researchers used wearable Inertial Measurement Unit (IMU) sensors placed on five body locations: ankle, lumbar, sternum, wrist, and arm. Data collection followed the Modified Clinical Test of Sensory Interaction on Balance (m-CTSIB) protocol, testing four sensory conditions: eyes open and closed on stable and foam surfaces. Each test lasted roughly 11 seconds, simulating continuous balance scenarios.

The scientists then preprocessed the raw sensor data and extracted features from it, and applied three machine learning algorithms to estimate m-CTSIB scores: multiple linear regression, support vector regression, and XGBoost, an open-source gradient-boosting library.

The researchers trained and validated the models with wearable sensor data as input and corresponding m-CTSIB scores from Falltrak II as ground truth labels.

They evaluated model performance using cross-validation, correlation with the ground-truth scores, and mean absolute error (MAE).
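A minimal sketch of this estimation-and-evaluation setup follows, using synthetic placeholder arrays in place of the real IMU features and m-CTSIB scores; the shapes, score range, and hyperparameters are assumptions, not the study's:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

# Placeholder data: rows are balance trials, columns are features extracted
# from IMU signals (e.g., sway statistics); y holds ground-truth scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 24))
y = rng.uniform(0, 4, size=200)

model = XGBRegressor(n_estimators=200, max_depth=4)

# Cross-validated MAE, mirroring the evaluation described above.
mae = -cross_val_score(model, X, y, cv=5,
                       scoring="neg_mean_absolute_error").mean()
print(f"cross-validated MAE: {mae:.3f}")
```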

The XGBoost model using lumbar sensor data yielded the best results, demonstrating high accuracy and strong correlation with ground-truth balance scores. The lumbar and dominant-ankle sensors produced the highest performance in balance score estimation.

In Frontiers in Digital Health, the researchers concluded that the findings pave the way for more precise and convenient balance assessments. They state the approach has immense potential to enhance balance performance assessment and management in various settings, including clinical environments, rehabilitation, and remote monitoring.

Read the original here:
Novel wearable system improves balance evaluation - Research & Development World


Artificial Intelligence in GIS: Promise, Progress, and Possibilities | Summer 2024 | ArcNews – Esri

Imagine completing an ArcGIS project from start to finish without needing to click a user interface, open a tool, load a spreadsheet, or adjust symbols and colors. Rather than manually creating a map, users would simply communicate their requirements in natural language inside the software. A few prompts later, the user would have a map with their desired appearance and specifications.

These are real possibilities being investigated and evaluated by research and development teams building generative AI capabilities into ArcGIS. Early prototypes have shown promise in making this vision a reality.

In GIS, AI assistants offer a compelling opportunity to democratize what is already a powerful technology. They stand to make geospatial understanding more accessible to a wider audience and empower users of all skill levels to tackle complex challenges.

A different type of AI is already in use in ArcGIS.

Geospatial artificial intelligence, or GeoAI, accelerates GIS outcomes by leveraging AI subfields like pattern recognition, computer vision, and machine and deep learning methods. GIS professionals use it to automate feature extraction and similar repetitive tasks and to perform advanced analyses.

The development of AI assistants and GeoAI demands careful navigation, given the sensitive nature of GIS work and the important decisions that follow from it.

Esri is embracing the power of AI and the promise it brings. While it is tempting to move quickly, doing things right is more important than doing them fast.

With GeoAI, artificial intelligence is already delivering on its promise to dramatically improve how organizations solve spatial problems. It enables ArcGIS users to automate tasks that once required extensive manual efforts.

GeoAI tools are especially good at extracting meaningful geospatial features from a variety of data sources, including text documents and images. ArcGIS, with any of the 70-plus ready-to-use pretrained deep learning packages from Esri, can help users automate the extraction of features such as buildings, land-use polygons, swimming pools, solar panels, or trees from imagery or 3D point clouds.

Many different types of organizations use GeoAI capabilities to enhance their geographic approach.

A highway maintenance department can use GeoAI to identify cracks in roads based on drone imagery. Then staff can integrate this with data on traffic patterns to prioritize repair work.

Aid organizations can use GeoAI to make quick damage assessments. Using ArcGIS and a deep learning model, they can compare before-and-after satellite images and identify damaged buildings on a map.

In regions of the world where people live in informal settlements, local governments can use GeoAI to take a more accurate census. The process involves capturing aerial imagery and then, with a deep learning model, extracting building footprints to estimate population.

Each of these scenarios would have required tedious digitization that, in the past, was done manually. Now, users can apply out-of-the-box deep learning models to accelerate the job.

GeoAI also enables predictive analysis of vector data through machine learning algorithms. For example, a machine learning model can be used to estimate flash flood risk in an area based on factors related to precipitation, topography, hydrology, policies, and population demographics.
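As a hedged illustration of the idea, not Esri's implementation, the sketch below trains a classifier on synthetic per-parcel attributes of the kind such a flood-risk model might use; in a real workflow these features would be joined from precipitation, terrain, hydrology, and census layers in GIS:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-parcel vector attributes (all values invented).
rng = np.random.default_rng(42)
X = np.column_stack([
    rng.uniform(500, 2000, 1000),   # annual precipitation (mm)
    rng.uniform(0, 30, 1000),       # mean slope (degrees)
    rng.uniform(0, 5000, 1000),     # distance to nearest stream (m)
    rng.uniform(0, 10000, 1000),    # population density (per km^2)
])
# Toy labeling rule for "high flash-flood risk", for demonstration only.
y = (X[:, 0] > 1400) & (X[:, 2] < 800)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print("holdout accuracy:", clf.score(X_test, y_test))
```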

All this allows for better decision-making and planning by incorporating data-driven insight into GIS workflows.

As everyone knows, the GIS community does important work that informs impactful decisions. It is, therefore, imperative that the data involved is accurate and up-to-date.

This is a fundamental GIS concept that has been true for decades. AI raises the stakes, especially where decisions from AI models affect people and communities.

GeoAI in ArcGIS is built by following the highest standards for trustworthy AI, including well-documented models and instrumentation to help users measure accuracy and bias in analysis.

As has always been the case, GIS professionals must ask the right questions of the data.

Recent advancements in language models have opened exciting new possibilities for building generative AI capabilities into the ArcGIS user experience. These assistants are still in early development, but several prototypes have shown promising potential.

Broadly, two types of AI assistants are being evaluated inside ArcGIS.

The first type, embedded assistants, are designed to boost productivity on everyday tasks. They provide suggestions and automate repetitive actions inside regularly used ArcGIS tools.

Furthest along in development is a beta feature in ArcGIS Survey123. This assistant simplifies the survey design process by providing a conversational approach to building surveys. Prompting the assistant just as they might with ChatGPT, users can quickly create a survey draft without needing to navigate menus or interfaces in the tool.

Other embedded AI assistants are in the early stages of research and development at Esri.

One of these AI assistants aims to help ArcGIS users author SQL, Python, Cypher, and Arcade expressions in ArcGIS Pro. Another is the ArcGIS help system chatbot trained on volumes of ArcGIS documentation that can quickly answer how-to questions. A third assistant would help users conduct market planning and site selection inside ArcGIS Business Analyst.

Apart from the embedded assistants, the second type of assistant being evaluated for use in ArcGIS technology is a broader general AI assistant that might someday encompass the entire ArcGIS experience. Think of this as a sophisticated chatbot that understands GIS data and tools and can answer geospatial questions.

As a simple example, a municipality using ArcGIS Hub could build a site with a public-facing AI assistant that interprets a query about trash pickups. The assistant would reference authoritative open data about the pickup schedule from within the public works department's hub site and use a geocoding service to discern the user's location.

Accuracy is paramount in the design. This assistant would invite the user to confirm their location by creating a map showing the geocoded address. For transparency, the assistant would cite its source: a public works database.

The development of AI technology is moving at an astounding pace. We have only scratched the surface of what AI can do in GIS.

Users are already doing the foundational work. They are publishing data as services and adding metadata. High-quality data forms the backbone of how AI systems learn and reason.

In developing the AI tools for ArcGIS, much of the work involves mitigating risks. This means constraining inputs to authoritative sources and building configurable guardrails.

The development process demands responsible implementation. An Esri AI advisory board, a cross-functional team of technology, product, legal, security, and privacy officers, provides guidelines for responsibly implementing AI in ArcGIS.

Through a commitment to responsible implementation and continuous learning, Esri is helping organizations apply the promise of geography and AI to solve the most challenging problems.

Ismael Chivite is Esri's senior principal product manager for AI assistants in ArcGIS. A geographer by training, Chivite loves helping people leverage GIS to improve the way they work. He has been with Esri since 2002 and is always looking for ideas to create and enhance Esri products. Outside of working hours, he likes LEGOs, rock climbing, Romanesque architecture, and Jamón Ibérico.

See more here:
Artificial Intelligence in GIS: Promise, Progress, and Possibilities | Summer 2024 | ArcNews - Esri


AI and Social Media: What Users Must Understand – AiThority

AI and social media have become inseparable in today's digital landscape, revolutionizing the way we connect, communicate, and consume information. Artificial intelligence, with its advanced algorithms and machine learning capabilities, has transformed social media platforms into powerful tools for personalization, engagement, and targeted advertising.


AI-driven recommendation systems analyze user preferences and behaviors to curate personalized content feeds, enhancing user experience and maximizing platform engagement. Chatbots powered by AI are being utilized for customer service, providing instant and efficient responses to user inquiries. Moreover, AI algorithms help with content moderation, flagging and removing inappropriate content to maintain a safe online environment.

However, the proliferation of AI in social media also raises concerns about privacy, data security, and algorithmic bias. As AI continues to evolve, the intersection of AI and social media will shape the future of digital interactions, influencing social dynamics, information dissemination, and the boundaries of online privacy.


As AI continues to shape social media platforms, it is crucial for users to understand the inner workings of the algorithms that curate their feeds.


Interesting developments are also likely to be witnessed in AI-driven social media in the years ahead.

As AI continues to shape social media, understanding algorithms, anticipating future advancements, and advocating for ethical practices will empower users to navigate the digital landscape responsibly and confidently.


Excerpt from:
AI and Social Media: What Users Must Understand - AiThority


Latest Research on VQA part1(Machine Learning 2024) | by Monodeep Mukherjee | Jun, 2024 – Medium

Tackling VQA with Pretrained Foundation Models without Further Training

Authors: Alvin De Jun Tan, Bingquan Shen

Abstract: Large language models (LLMs) have achieved state-of-the-art results in many natural language processing tasks. They have also demonstrated the ability to adapt well to different tasks through zero-shot or few-shot settings. Given these capabilities, researchers have looked into how to adopt LLMs for Visual Question Answering (VQA). Many methods require further training to align the image and text embeddings. However, these methods are computationally expensive and require large-scale image-text datasets for training. In this paper, we explore a method of combining pretrained LLMs and other foundation models without further training to solve the VQA problem. The general idea is to use natural language to represent the images so that the LLM can understand them. We explore different decoding strategies for generating textual representations of the image and evaluate their performance on the VQAv2 dataset.
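The following sketch illustrates the general caption-then-ask idea with off-the-shelf Hugging Face pipelines; the specific models, the question, and the image file name are stand-ins, not the ones used in the paper:

```python
from transformers import pipeline

# Step 1: represent the image in natural language with a captioning model.
captioner = pipeline("image-to-text",
                     model="Salesforce/blip-image-captioning-base")
caption = captioner("photo.jpg")[0]["generated_text"]  # "photo.jpg" is a placeholder

# Step 2: hand the textual description to an LLM along with the question.
qa = pipeline("text-generation", model="gpt2")
prompt = (f"Image description: {caption}\n"
          f"Question: What is the person in the image doing?\n"
          f"Answer:")
print(qa(prompt, max_new_tokens=20)[0]["generated_text"])
```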

Originally posted here:
Latest Research on VQA part1(Machine Learning 2024) | by Monodeep Mukherjee | Jun, 2024 - Medium
