
Machine Learning Evaluation Metrics: Theory and Overview – KDnuggets

Building a machine learning model that generalizes well to new data is very challenging. The model needs to be evaluated to understand whether it is good enough or needs some modifications to improve its performance.

If the model doesn't learn enough of the patterns from the training set, it will perform badly on both the training and test sets. This is the so-called underfitting problem.

Learning too much about the patterns of the training data, including the noise, will lead the model to perform very well on the training set but poorly on the test set. This situation is called overfitting. The model generalizes well when its performance on the training and test sets is similar.

In this article, we are going to see the most important evaluation metrics for classification and regression problems, which help verify whether the model is capturing the patterns from the training sample well and performing well on unknown data. Let's get started!

When our target is categorical, we are dealing with a classification problem. The choice of the most appropriate metrics depends on different aspects, such as the characteristics of the dataset, whether it's imbalanced or not, and the goals of the analysis.

Before showing the evaluation metrics, there is an important table that needs to be explained, called the confusion matrix, which summarizes the performance of a classification model well.

Let's say that we want to train a model to detect breast cancer from an ultrasound image. We have only two classes: malignant and benign.
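To make the table concrete, here is a minimal sketch of computing a confusion matrix with scikit-learn. The labels are made up purely for illustration, assuming 1 encodes the malignant (positive) class and 0 the benign (negative) class.

```python
# Toy confusion matrix for the breast-cancer example (illustrative labels only).
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # 1 = malignant, 0 = benign (assumed encoding)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

# Rows are actual classes, columns are predicted classes:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))
```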

Accuracy is one of the best-known and most popular metrics for evaluating a classification model. It is the number of correct predictions divided by the total number of samples.

Accuracy is employed when we know that the dataset is balanced, that is, when each class of the output variable has the same number of observations.

Using Accuracy, we can answer the question "Is the model predicting all the classes correctly?". For this reason, it counts the correct predictions of both the positive class (malignant cancer) and the negative class (benign cancer).
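As a hedged sketch on the same toy labels, Accuracy can be computed with scikit-learn's accuracy_score:

```python
# Accuracy = correct predictions / total samples (toy labels, illustrative only).
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

print(accuracy_score(y_true, y_pred))  # 8 correct out of 10 -> 0.8
```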

Unlike Accuracy, Precision is an evaluation metric for classification used when the classes are imbalanced.

Precision answers the following question: "What proportion of malignant cancer identifications was actually correct?". It's calculated as the ratio between True Positives and Positive Predictions.

We are interested in using Precision when we are worried about False Positives and want to minimize them. It would be better to avoid ruining the lives of healthy people with a false diagnosis of malignant cancer.

The lower the number of False Positives, the higher the Precision will be.

Together with Precision, Recall is another metric applied when the classes of the output variable have a different number of observations. Recall answers the following question: "What proportion of patients with malignant cancer were we able to recognize?".

We care about Recall if our attention is focused on the False Negatives. A false negative means that a patient has malignant cancer, but we weren't able to identify it. Both Recall and Precision should be monitored to obtain the desired good performance on unknown data.

Monitoring both Precision and Recall can be messy, and it would be preferable to have a single measure that summarizes both. This is possible with the F1-score, which is defined as the harmonic mean of Precision and Recall.

A high F1-score requires both Precision and Recall to have high values. If either Recall or Precision is low, the F1-score is penalized and will have a low value too.
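The three metrics can be computed together on the same toy labels; here is a minimal scikit-learn sketch (illustrative values only):

```python
# Precision, Recall, and F1-score for the toy breast-cancer labels (1 = malignant).
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

precision = precision_score(y_true, y_pred)  # TP / (TP + FP)
recall = recall_score(y_true, y_pred)        # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)                # harmonic mean of Precision and Recall
print(precision, recall, f1)
```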

When the output variable is numerical, we are dealing with a regression problem. As in the classification case, it's crucial to choose the evaluation metric for the regression model depending on the purposes of the analysis.

The most popular example of a regression problem is the prediction of house prices. Are we interested in predicting the house prices accurately? Or do we just care about minimizing the overall error?

In all these metrics, the building block is the residual, which is the difference between predicted values and actual values.

The Mean Absolute Error (MAE) calculates the average of the absolute residuals. It doesn't penalize high errors as much as the other evaluation metrics: every error is treated equally, even the errors of outliers, so this metric is robust to outliers. Moreover, taking the absolute value of the differences ignores the direction of the error.
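A minimal sketch of MAE built directly from the residuals with NumPy; the house-price numbers below are invented for illustration:

```python
# Mean Absolute Error from residuals (made-up house prices).
import numpy as np

y_true = np.array([250_000, 300_000, 180_000, 420_000])
y_pred = np.array([240_000, 330_000, 185_000, 400_000])

residuals = y_pred - y_true          # signed errors
mae = np.mean(np.abs(residuals))     # absolute value ignores the direction of the error
print(mae)                           # -> 16250.0
```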

The Mean Squared Error calculates the average squared residuals.

Since the differences between predicted and actual values are squared, it gives more weight to higher errors, so it is useful when large errors are particularly undesirable, rather than when we only care about minimizing the overall error.

The Root Mean Squared Error calculates the square root of the average squared residuals.

Once you understand MSE, it takes only a second to grasp the Root Mean Squared Error, which is just the square root of MSE.

The good point of RMSE is that it is easier to interpret, since the metric is expressed on the same scale as the target variable. Apart from that, it's very similar to MSE: it still gives more weight to higher differences.

Mean Absolute Percentage Error calculates the average absolute percentage difference between predicted values and actual values.

Like MAE, it disregards the direction of the error and the best possible value is ideally 0.

For example, if we obtain a MAPE of 0.3 when predicting house prices, it means that, on average, the predictions are off by 30%.
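Assuming scikit-learn is available (mean_absolute_percentage_error requires version 0.24 or later), here is a hedged sketch computing MSE, RMSE, and MAPE on the same made-up house prices:

```python
# MSE, RMSE, and MAPE on the toy house prices (illustrative values only).
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

y_true = [250_000, 300_000, 180_000, 420_000]
y_pred = [240_000, 330_000, 185_000, 400_000]

mse = mean_squared_error(y_true, y_pred)               # average squared residual
rmse = np.sqrt(mse)                                    # back on the scale of the target variable
mape = mean_absolute_percentage_error(y_true, y_pred)  # e.g. 0.3 would mean 30% off on average
print(mse, rmse, mape)
```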

I hope that you have enjoyed this overview of the evaluation metrics. I covered only the most important measures for evaluating the performance of classification and regression models. If you have discovered other life-saving metrics that helped you solve a problem but are not mentioned here, drop them in the comments.

Eugenia Anello is currently a research fellow at the Department of Information Engineering of the University of Padova, Italy. Her research project is focused on Continual Learning combined with Anomaly Detection.

View original post here:
Machine Learning Evaluation Metrics: Theory and Overview - KDnuggets

Read More..

AI and Machine Learning: Boosting Indian space exploration to new heights – The Financial Express

By Subhashis Kar

India's space program has the potential to achieve unparalleled heights, thanks to a stunning confluence of cutting-edge technology and visionary ambition, fuelled by the unwavering synergy between Artificial Intelligence (AI) and Machine Learning (ML). The Indian Space Research Organisation (ISRO) is embarking on a new age of space exploration, employing AI and ML to explore the universe's unknown frontiers.

From the successful launch of Chandrayaan-3, India's moon expedition, to the Mars Orbiter Mission (Mangalyaan), which made India the first Asian nation to reach Mars, India's voyage into space has been characterized by a succession of momentous milestones. The path ahead, however, is plagued with even bigger problems and opportunities, which is where AI and ML come into play. Precision and efficiency are crucial to India's space efforts. Whether it's sending satellites into orbit or researching distant celestial entities, every mission requires rigorous preparation and execution. AI and ML, with their ability to analyze data and recognize patterns, are essential tools in this sector. They enable ISRO scientists to optimize trajectories, anticipate ideal launch windows, and even simulate mission scenarios in order to increase the likelihood of success.

One of the most significant uses of artificial intelligence in space exploration is autonomous navigation. Traditionally, spaceships require regular human involvement for course corrections and changes. With AI-guided navigation systems, these spacecraft can make real-time choices based on sensor data, ensuring they stay on course even when millions of kilometers from Earth. This not only cuts mission expenses but also increases spacecraft longevity. In addition, machine learning algorithms are transforming our knowledge of the universe. Telescopes and observatories outfitted with machine learning algorithms can filter through massive volumes of data to find astronomical objects such as exoplanets and cosmic occurrences that would otherwise go unreported. This not only increases our understanding of the cosmos, but it also assists in the finding of possibly habitable planets outside of our solar system.

In recent years, India has also experimented with reusable space technologies. AI plays a critical role in improving launch vehicle recovery and refurbishing, making it economically feasible to send spacecraft into space on a more frequent basis. AI guarantees that launch vehicle reusability becomes a reality by learning from each mission's performance and making improvements, lowering the overall cost of space exploration. Furthermore, the introduction of AI and ML in India's space program goes beyond hardware and mission planning. It has also started to change how data is examined and understood. The Indian Space Science Data Center (ISSDC) processes and categorizes massive datasets acquired from space missions using ML algorithms. This speeds up the extraction of relevant scientific ideas, leading to ground-breaking breakthroughs in domains like astrophysics and planetary science.

International cooperation is becoming increasingly important as India expands its space capabilities. AI and machine learning have proven to be critical bridges in enabling effortless communication with space agencies and research institutes throughout the world. By standardizing data formats and analytic methodologies, these technologies guarantee a smooth interchange of information and knowledge, thrusting India even farther into the global space exploration scene.

Finally, the revolutionary potential of AI and ML is propelling India's quest for excellence in space exploration to unprecedented heights. These technologies are important assets that enable accuracy, efficiency, and creativity in many aspects of India's space program. The confluence of human brilliance and machine intelligence promises to uncover the mysteries of the universe and inspire future generations to dream beyond the sky as ISRO continues to reach for the heavens.

The author is CEO, Techbooze


More:
AI and Machine Learning: Boosting Indian space exploration to new heights - The Financial Express

Read More..

Machine learning for cataract classification/grading on ophthalmic imaging modalities: Survey – Medical Xpress

This article has been reviewed according to ScienceX's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility:

by Beijing Zhongke Journal Publishing Co.


According to the World Health Organization (WHO), it is estimated that approximately 2.2 billion people suffer from visual impairment. Cataracts account for about 33% of visual impairment and are the number one cause of blindness (more than 50%) worldwide. Cataract patients can improve life quality and vision through early intervention and cataract surgery, which are efficient methods to reduce the blindness ratio and cataract-blindness burden for society simultaneously.

Clinically, cataracts are the transparency loss of crystalline lens area, which occurs when the protein inside the lens clumps together. They are associated with many factors, such as developmental abnormalities, trauma, metabolic disorders, genetics, drug-induced changes, age, etc.

Genetics and aging are two of the most important factors for cataracts. Cataracts can be categorized as age-related cataract, pediatric cataract (PC), and secondary cataract according to their causes.

Depending on the location of the crystalline lens opacity, they can be grouped into nuclear cataract (NC), cortical cataract (CC), and posterior subcapsular cataract (PSC). NC denotes the gradual clouding and the progressive hardening in the nuclear region. CC takes the form of white, wedge-shaped, radially oriented opacities and develops from the outside edge of the lens toward the center in a spoke-like fashion. PSC consists of granular opacities that resemble small breadcrumbs or sand particles sprinkled beneath the lens capsule.

Over the past years, ophthalmologists have used several ophthalmic images to diagnose cataracts based on their experience and clinical training. This manual diagnosis mode is error-prone, time-consuming, subjective, and costly, which is a great challenge in developing countries or rural communities, where experienced clinicians are scarce.

To prevent cataracts early and improve the precision and efficiency of cataract diagnosis, researchers have made great efforts in developing computer-aided diagnosis (CAD) techniques for automatic cataract classification/grading on different ophthalmic images, including conventional machine learning methods and deep learning methods.

The conventional machine learning method is a combination of feature extraction and classification/grading. In the feature extraction stage, a variety of image processing methods have been proposed to obtain visual features of cataracts according to different ophthalmic images, such as density-based statistics method, density histogram method, bag-of-features (BOF) method, Gabor wavelet transform, gray level co-occurrence matrix (GLCM), Haar wavelet transform, etc.

In the classification/grading stage, strong classification methods are applied to recognize different cataract severity levels, e.g., support vector machine (SVM). Over the past 10 years, deep learning has achieved great success in various fields, including medical image analysis, which can be viewed as a representation learning approach. It can learn low-level, middle-level, and high-level feature representations from raw data in an end-to-end manner (e.g., ophthalmic images).
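To make this two-stage pipeline concrete, here is a minimal sketch, assuming scikit-image (version 0.19 or later for the graycomatrix/graycoprops names) and scikit-learn, with random placeholder arrays standing in for real ophthalmic images and hypothetical severity labels; it is only an illustration of the kind of pipeline the survey covers, not a reproduction of any specific method it reviews.

```python
# Conventional pipeline sketch: handcrafted GLCM texture features + SVM classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19
from sklearn.svm import SVC

def glcm_features(image_8bit):
    """Extract a few GLCM texture statistics from a grayscale uint8 image."""
    glcm = graycomatrix(image_8bit, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

# Placeholder data: random "images" with hypothetical severity labels (not real patients).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(20, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=20)  # e.g. 0 = mild, 1 = severe (assumed)

X = np.array([glcm_features(img) for img in images])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```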

Various deep neural networks have been utilized to tackle cataract classification/grading tasks like convolutional neural networks (CNNs), attention-based networks, Faster-RCNN, and multilayer perceptron (MLP) neural networks.

Previous surveys have summarized cataract types, cataract classification/grading systems, and ophthalmic imaging modalities, respectively. However, none has systematically summarized ML techniques based on ophthalmic imaging modalities for automatic cataract classification/grading.

This survey is intended to be the first that systematically summarizes recent advances in ML techniques for automatic cataract classification/grading. It mainly focuses on ML techniques for cataract classification/grading, comprising conventional ML methods and deep learning methods. The work is published in the journal Machine Intelligence Research.

Researchers surveyed these published papers through Web of Science (WoS), Scopus, and Google Scholar databases. To understand this survey easily, they also review ophthalmic imaging modalities, cataract grading systems, and commonly-used evaluation measures in brief. Then researchers introduce ML techniques step by step. They hope that this survey can provide a valuable summary of current research and present potential research directions for ML-based cataract classification/grading in the future.

Section 2 introduces six different eye images used for cataract classification/grading for the first time: slit lamp image, retroillumination image, ultrasonic image, fundus image, digital camera image, and anterior segment optical coherence tomography (AS-OCT) image. It also discusses their advantages and disadvantages.

To classify or grade the severity levels of cataracts (lens opacities) accurately and quantitatively, it is crucial and necessary to build standard/gold cataract classification/grading systems for clinical practice and scientific research purposes.

Section 3 briefly introduces six existing cataract classification/grading systems: Lens opacity classification system, Wisconsin grading system, Oxford clinical cataract classification and grading system, Johns Hopkins system, WHO cataract grading system, Fundus image-based cataract classification system.

Section 4 introduces ophthalmic image datasets used for cataract classification/grading, which can be grouped into private and public datasets. Private datasets include ACHIKO-NC dataset, ACHIKO-Retro dataset, CC-Cruiser dataset and Multicenter dataset. Public datasets include EyePACS dataset and HRF dataset.

Section 5 mainly investigates machine learning techniques for cataract classification/grading over the years, which comprises conventional machine learning methods and deep learning methods.

Scholars have developed state-of-the-art conventional ML methods over the years to automatically classify/grade cataract severity levels, aiming to assist clinicians in diagnosing cataracts efficiently and accurately. These methods consist of feature extraction and classification/grading.

The feature extraction methods are introduced based on ophthalmic image modalities, including slit lamp images, retroillumination images, ultrasound images, digital camera images, AS-OCT images, and fundus images. As for classification/grading, the survey covers support vector machines, linear regression, K-nearest neighbors, ensemble learning methods, ranking, and some other machine learning methods. Many deep learning methods are also introduced in the paper, including multilayer perceptron neural networks, convolutional neural networks, recurrent neural networks, attention mechanisms, and hybrid neural networks.

Section 6 introduces evaluation measures to assess the performance of cataract classification/grading. In this survey, classification denotes the cataract labels used for learning are discrete, e.g., 1, 2, 3, 4, while grading denotes cataract labels are continuous, such as 0.1, 0.5, 1.0, 1.6, and 3.3.

Although researchers have made significant development in automatic cataract classification/grading over the years, this field still has challenges.

Section 7 presents these challenges and gives possible solutions. This section consists of eight parts. The first part is the problem of lacking public cataract datasets. To address this problem, it is necessary and significant to build public, standard ophthalmology image datasets based on standardized medical data collection and storage protocols.

The second part is about developing standard cataract classification/grading protocols based on new ophthalmic imaging modalities. Two possible solutions are proposed: Developing a cataract grading protocol based on clinical verification and building the mapping relationship between two ophthalmic imaging modalities.

The third part is the solutions to annotate cataract images accurately, such as semi-supervised learning, unsupervised learning and content-based image retrieval.

The fourth part is about how to classify/grade cataracts accurately for precise cataract diagnosis.

This survey provides the following research directions: clinical prior knowledge injection, multi-task learning for classification and segmentation, transfer learning, multimodality learning and image denoising.

The fifth and sixth parts are about improving the interpretability of deep learning methods and mobile cataract screening. The seventh part provides solutions to evaluate the generalization ability of machine learning methods for other eye disease classification tasks.

More information: Xiao-Qing Zhang et al, Machine Learning for Cataract Classification/Grading on Ophthalmic Imaging Modalities: A Survey, Machine Intelligence Research (2022). DOI: 10.1007/s11633-022-1329-0

Provided by Beijing Zhongke Journal Publishing Co.

Go here to read the rest:
Machine learning for cataract classification/grading on ophthalmic imaging modalities: Survey - Medical Xpress

Read More..

Unlocking Battery Optimization: How Machine Learning and Nanoscale X-Ray Microscopy Could Revolutionize Lithium Batteries – MarkTechPost

A groundbreaking initiative has emerged from esteemed research institutions aiming to unravel the enigmatic intricacies of lithium-based batteries. Employing an innovative approach, researchers harness machine learning to meticulously analyze X-ray videos at the pixel level, potentially revolutionizing battery research.

The challenge at the heart of this endeavor is the quest for a comprehensive understanding of lithium-based batteries, particularly those constructed with nanoparticles of the active material. These batteries are the lifeblood of modern technology, powering many devices, from smartphones to electric vehicles. Despite their ubiquity, deciphering their complex inner workings has been a persistent challenge.

The breakthrough achieved by a multidisciplinary team from MIT and Stanford lies in their ability to extract profound insights from high-resolution X-ray videos of batteries in action. Historically, these videos were a goldmine of information, but their complexity made extracting meaningful data a daunting task.

Researchers emphasize the pivotal role played by the interfaces within these batteries in controlling their behavior. This newfound understanding opens doors to engineering solutions that could enhance battery performance significantly.

Furthermore, there is a pressing need for fundamental, science-based insights to expedite advancements in battery technology. By employing image learning to dissect nanoscale X-ray movies, researchers can now access previously elusive knowledge, which is crucial for industry partners aiming to develop more efficient batteries faster.

The research methodology involved capturing detailed scanning transmission X-ray microscopy videos of lithium iron phosphate particles during the charging and discharging processes. Beyond the human eye's capacity, a sophisticated computer vision model scrutinized the subtle changes within these videos. The ensuing results were then compared to earlier theoretical models. Among their key revelations was the discovery of a correlation between the flow of lithium ions and the thickness of the carbon coating on individual particles. This discovery provides a promising avenue for optimizing future lithium iron phosphate battery systems, ultimately enhancing battery performance.

In summary, the collaboration between esteemed research institutions and the integration of machine learning into battery research represents a significant leap forward in our understanding of lithium-based batteries. By shining a spotlight on the interfaces and leveraging the capabilities of image learning, scientists have unearthed new possibilities for enhancing the performance and efficiency of these vital energy storage devices. This research not only propels the boundaries of battery technology but also holds the promise of ushering in more advanced and sustainable energy solutions in the not-so-distant future.

Check out the Paper and Reference Article. All Credit For This Research Goes To the Researchers on This Project. Also, don't forget to join our 30k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.


Niharika is a Technical consulting intern at Marktechpost. She is a third-year undergraduate, currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Kharagpur. She is a highly enthusiastic individual with a keen interest in Machine Learning, Data Science and AI and an avid reader of the latest developments in these fields.

Excerpt from:
Unlocking Battery Optimization: How Machine Learning and Nanoscale X-Ray Microscopy Could Revolutionize Lithium Batteries - MarkTechPost

Read More..

The Evolution of Machine Learning In TB Diagnostics: Unlocking Patterns and Insights – ETHealthWorld

By Raghavendra Goud Vaggu

The severity of tuberculosis (TB) makes it a troubling crisis across the globe, especially as it is responsible for millions of deaths around the world. According to the World Health Organisation (WHO), TB was responsible for 1.6 million deaths in 2021, making it the 13th leading cause of death and the second leading infectious killer, after only COVID-19, that year. With 10.6 million people falling ill with the disease in 2021, and patients cutting across all demographics, there is a need for vigilance.


The rise of computer-aided diagnostics has certainly added impetus to the drive for better TB diagnosis, especially because of better medical imaging that gives radiologists a more precise interpretation of the patient's chest, blood, spine, or brain, depending on the part of the body that is affected. One such tool is the CAD model, which offers precise diagnosis of TB cavities and clearly displays areas of interest in the chest X-ray image. This is a huge improvement on preexisting CAD systems, which could not identify TB cavities because of the superimposed anatomic parts of the lung field.

Looking ahead in TB diagnosis and treatment

Within the framework of modern TB analysis, the place of data cannot be overemphasised. Fully automated CAD systems are being experimented with for their great features, including handcrafted features and deep features. These systems use pre-trained CNN frameworks and supervised learning to probe the parts of the body that are of interest, using the obtained data and processing it to reach a conclusive diagnosis. This also comes with the ongoing conversation around the difference between supervised and unsupervised learning and how the choices made in the future can shape TB diagnosis. More advanced CAD systems will likely emerge in the future, powered by superior AI.

Raghavendra Goud Vaggu, Global CEO, Empe Diagnostics

(DISCLAIMER: The views expressed are solely those of the author and ETHealthworld does not necessarily subscribe to them. ETHealthworld.com shall not be responsible for any damage caused to any person/organisation directly or indirectly)


Original post:
The Evolution of Machine Learning In TB Diagnostics: Unlocking Patterns and Insights - ETHealthWorld

Read More..

These companies are hiring software engineers to work on machine … – BetaKit – Canadian Startup News

Thomson Reuters, Procom, and Ripple are looking to hire machine learning experts across the country.

To find new solutions in the real world and the crypto-world, companies are taking advantage of artificial intelligence (AI) and machine learning (ML) anywhere they can. The companies below are looking for software developers and engineers in Canada to find and implement new machine-learning applications into their platforms. Check out all the organizations recruiting across the country at Jobs.BetaKit for more opportunities.

Thomson Reuters' products include specialized software and tools for legal, tax, accounting, and compliance professionals, combined with one of the world's global news services, Reuters.

Thomson Reuters is seeking an applied machine learning scientist and a senior applied machine learning scientist to find new real-world solutions for machine learning and deliver them. The qualifications and expectations for both roles are similar, including a master's or bachelor's degree in a relevant field, but the senior role is expected to have eight years of experience versus the five years required for the junior role.

Hired candidates can expect a hybrid-work model out of a Toronto office, where they will formulate research and development plans in a collaborative environment.

Those interested in these positions and future ones with the company can bookmark its jobs page here.

Procom is a talent acquisition and workforce optimization firm helping clients find suitable recruits for unfilled jobs.

Currently Procom is recruiting a senior artificial intelligence machine learning developer for a 6-month position based out of Calgary. The recruiter is looking for a candidate to support the development of AI-powered chatbots and provide technical consultation for other AI and ML initiatives if required.

Hired candidates will have a minimum of four years' experience in developing, deploying, and supporting Microsoft Azure AI Machine Learning solutions such as natural language processing, classification, predictions, intelligent document processing, and advanced video analytics.

For future recruitment opportunities with Procom, pay attention to its jobs page here.

Ripple is trying to connect traditional financial entities like banks, payment providers, and corporations with emerging blockchain technologies and their users.

As part of that mission, Ripple is hiring a senior engineering manager for its data platform. The hired candidate will be managing a team that implements the data infrastructure for analytics, machine learning, and other business functions in Ripple's platform. Applicants should have more than 10 years of experience in software development and more than five years of experience managing teams.

Ripple has many open job postings, including for software engineers outside of the machine learning space, here.

Original post:
These companies are hiring software engineers to work on machine ... - BetaKit - Canadian Startup News

Read More..

Fujitsu and the Linux Foundation Launch Fujitsu's Automated Machine Learning and AI Fairness Technologies: Pioneering Transparency, Ethics, and…

In an era marked by the rapid advancement of artificial intelligence (AI) technologies, the issues of transparency, ethics, and accessibility have taken center stage. While AI solutions have undoubtedly propelled the field forward, there remains a critical need to address issues related to fairness and accessibility. Recognizing this imperative, Fujitsu, a leading developer of AI technologies in Japan, has embarked on a groundbreaking commitment to open-source AI in collaboration with the Linux Foundation. This initiative addresses these challenges and aims to provide accessible solutions that can benefit a broader range of developers and industries.

Existing AI solutions have undoubtedly driven progress in the field, but they often fall short when addressing issues related to fairness and accessibility. Fujitsu's latest endeavor, in partnership with the Linux Foundation, seeks to bridge these gaps and offer practical solutions that can empower developers and industries alike.

One of the cornerstones of this initiative is the automated machine learning project known as SapientML. This innovative project offers the capability to rapidly create highly efficient machine learning models and custom algorithms for a company's unique data. By expediting the development process and facilitating the fine-tuning of precise models, SapientML plays a pivotal role in accelerating progress in the AI field. It significantly reduces time-to-market for AI solutions, enabling companies to bring their innovations to the world more swiftly and effectively.

The second project, Intersectional Fairness, addresses a crucial aspect of AI development mitigating biases within AI systems. This technology is designed to excel at identifying subtle biases that may emerge at the intersection of attributes like gender, age, and ethnicity. Overcoming these often overlooked biases is paramount in creating fair and ethical AI systems that serve diverse populations equitably. Intersectional Fairness technology aligns with societal values and ethical standards, ensuring that AI systems are inclusive and impartial.

The efficacy of these solutions is further underscored by their metrics, which provide tangible evidence of their capabilities. SapientML's ability to swiftly generate optimized machine learning models and tailored code has a transformative impact on AI development, offering a competitive edge in the industry. On the other hand, Intersectional Fairness technology not only identifies hidden biases but also actively contributes to eliminating them, fostering the creation of AI systems that are advanced technologically and ethically sound.

In conclusion, Fujitsu's unwavering commitment to open-source AI, in collaboration with the Linux Foundation, heralds a new era in the development of AI technologies. This initiative goes beyond simply addressing the pressing issues of transparency and fairness; it also democratizes access to cutting-edge AI technologies. As AI continues to shape our modern world, collective open-source efforts exemplify AI's immense potential to be a tool for global innovation while adhering to rigorous ethical standards. The future of AI embraces inclusivity, accessibility, and fairness for all, and Fujitsu's initiatives are leading the way toward this bright future.

Check out the Reference Article. All Credit For This Research Goes To the Researchers on This Project. Also, don't forget to join our 30k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.


Niharika is a Technical consulting intern at Marktechpost. She is a third-year undergraduate, currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Kharagpur. She is a highly enthusiastic individual with a keen interest in Machine Learning, Data Science and AI and an avid reader of the latest developments in these fields.

Here is the original post:
Fujitsu and the Linux Foundation Launch Fujitsus Automated Machine Learning and AI Fairness Technologies: Pioneering Transparency, Ethics, and...

Read More..

5 Machine Learning Algorithms Commonly Used in Python – Analytics Insight

This article gathers 5 machine-learning algorithms used in Python for analyzing and making predictions from data

Machine learning algorithms are essential for deriving knowledge from data and generating predictions. There are a number of widely used machine learning algorithms in Python that offer solid tools for addressing a variety of problems. These algorithms are designed to extract patterns and correlations from data, allowing computers to reason and forecast the future. This post will examine five well-known machine-learning algorithms used in Python; a short scikit-learn sketch exercising all five appears after the list.

1. Naive Bayes- This classification algorithm is based on Bayes' theorem and works by assuming that the presence of a particular feature in a class is unrelated to the presence of any other feature. Even when the features are actually interdependent, the algorithm treats them as unrelated. This approach produces a model that performs admirably on enormous datasets.

2. Random Forest- It is an ensemble learning approach for classification, regression, and other problems that works by building a collection of decision trees during the training phase. In Random Forest, each decision tree predicts a class for an object based on its attributes, and the class predicted by the most trees is then selected as the final output.

3. Linear Regression- It aids in predicting outcomes while taking independent variables into account. This ML technique establishes the linear relationship between the independent and dependent variables, illustrating how the value of the dependent variable changes with the values of the independent variables.

4. Back-propagation- This supervised learning algorithm, used for classification and regression, trains a neural network by adjusting the weights of the input signals so that the network produces the desired output signals for given inputs. Using gradient descent (the delta rule), back-propagation moves the weights toward the minimum of the error function; this is how the method finds the weights that reduce or eliminate the error.

5. KNN, or K-Nearest Neighbours- It categorizes data points by analyzing the labels of the data points surrounding the target point and making predictions from them. KNN is used for both classification and regression tasks. It is a supervised learning method used to identify patterns in data and detect anomalies.
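Here is the hedged scikit-learn sketch exercising the five algorithms above on the built-in Iris dataset. MLPClassifier stands in for plain back-propagation, since it trains a multilayer perceptron with that algorithm, and all hyperparameters are illustrative defaults rather than recommended settings.

```python
# Toy comparison of the five algorithms discussed above (Iris dataset, illustrative only).
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Back-propagation (MLP)": MLPClassifier(max_iter=2000, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in classifiers.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")

# Linear Regression targets a numerical output, so here it fits one feature
# (petal width) from the other three as a toy regression task.
reg = LinearRegression().fit(X[:, :3], X[:, 3])
print("Linear Regression R^2:", reg.score(X[:, :3], X[:, 3]))
```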

See more here:
5 Machine Learning Algorithms Commonly Used in Python - Analytics Insight

Read More..

Exploring Mild Cognitive Impairment to Alzheimer’s Disease … – Physician’s Weekly

The following is a summary of "Neuroimaging and machine learning for studying the pathways from mild cognitive impairment to Alzheimer's disease: a systematic review," published in the August 2023 issue of Neurology by Ahmadzadeh et al.

Researchers performed a systematic review of the latest neuroimaging and machine learning methods for predicting Alzheimer's disease dementia conversion from mild cognitive impairment.

They conducted their search in accordance with the systematic review guidelines outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The search encompassed PubMed, SCOPUS, and Web of Science databases.

The results showed that, out of 2,572 articles, 56 fulfilled the inclusion criteria. A multimodality framework combined with deep learning showed potential for predicting MCI-to-AD dementia conversion.

They concluded that neuroimaging data combined with advanced learning algorithms hold potential for predicting AD progression. Challenges faced by researchers and future research directions were also outlined. The review protocol was registered as CRD42019133402 and published in the journal Systematic Reviews.

Source: bmcneurol.biomedcentral.com/articles/10.1186/s12883-023-03323-2

Link:
Exploring Mild Cognitive Impairment to Alzheimer's Disease ... - Physician's Weekly

Read More..

NASA reveals latest weapon to ‘search the heavens’ for UFOs, aliens – Fox News

Artificial intelligence and machine learning will be "essential" to finding and proving the existence of extraterrestrial life and UFOs, NASA said.

The space agency recently released its highly anticipated 36-page UFO report that said NASA doesn't have enough high-quality data to make a "definitive, scientific conclusion" about the origin of UFOs.

Moving forward, AI will be vital to pinpointing anomalies while combing through large datasets, according to the report compiled by NASA's independent research team on UAPs (unidentified anomalous phenomena), a fancy word for UFO.

"We will use AI and machine learning to search the skies for anomalies and will continue to search the heavens for habitable reality," NASA Administrator Bill Nelson said during a Sept. 14 briefing. "AI is just coming on the scene to be explored in all areas, so why should we limit any technological tool in analyzing, using data that we have?"


The members of NASA's UAP (unidentified anomalous phenomena) study. (NASA)

Dr. Nicola Fox, NASA's associate administrator, elaborated on Nelson's point, saying AI "is an amazing tool" to find "signatures that are sort of buried in data."

That's how NASA, and scientists around the world, are going to be able to find the metaphorical needle in a haystack, Fox said.


"So a lot of our data are just sort of wiggly line plots. We get excited about wiggly line plots, by the way, but sometimes, you see the wiggles, but you miss a signal," she said.

"By using artificial intelligence, we can often find signatures. So one example we've had is to be able to find signatures of superstorms using very old data that, you know, really is before sort of like routine scientific satellite data."

A Fox News Digital-created UFO hotspot map based off information from the Department of Defense. (Julia Bonavita/Fox News Digital based on AARO's Data)

UAP reporting trends presented during April 19, 2023, Senate hearing. (U.S. Senate Committee on Armed Services)

Using AI was a key component of the 16-member, independent UAP research team's report.

"The panel finds that sophisticated data analysis techniques, including artificial intelligence and machine learning, must be used in a comprehensive UAP detection campaign when coupled with systematic data gathering and robust curation," the report says.


The use of AI has been a controversial topic that governments around the world, including the U.S., are grappling with.

Advocates have lauded the potential capabilities of generative AI and the possibility it could catapult society to the next evolution of humankind. On the flip side, it can also create a dystopian future if guardrails aren't put in place, or if it's in the hands of ill-intended users, experts have warned.


Earlier this month, over 100 members of Congress met with big tech tycoons such as Elon Musk and Mark Zuckerberg about AI, and some senators expressed concern about unregulated AI.

The NASA panel was asked if regulating AI would impact the space agency's ability to use the budding technology to potentially find extraterrestrial life.


Nelson brushed off concerns that regulations would hamper NASA's mission.

"No, don't think that any attempts to that the Congress has underway to try to write a law that would appropriately put guardrails around AI for other reasons is anyway going to inhibit us from utilizing the tools of AI to help us in our quest on this specific issue," Nelson said in response to the question.


NASA's study of UAPs is separate from the Pentagon's investigation through the All-domain Anomaly Resolution Office (AARO), although the two investigations are running on parallel tracks that include corroborative efforts.


Much like a team of peer reviewers, NASA commissions independent study teams as a formal part of NASA's scientific process, and such teams provide the agency external counsel and an increased network of perspectives from scientific experts.

They were assigned to pinpoint the data available around UAP and produce a report that outlines a roadmap for how NASA can use its tools of science to obtain usable data to evaluate and provide suggestions moving forward.

Excerpt from:
NASA reveals latest weapon to 'search the heavens' for UFOs, aliens - Fox News

Read More..