Category Archives: Machine Learning
AI goes nuclear: INL expo showcases machine learning and artificial intelligence – East Idaho News
IDAHO FALLS – Artificial intelligence is transforming the way the nuclear industry works, and Idaho National Laboratory is leading the way in developing applications that streamline processes while improving safety at nuclear power plants.
INL scientists showcased 15 projects on Artificial Intelligence (AI) and Machine Learning at an expo at the Energy Innovation Laboratory in Idaho Falls on Tuesday.
"We're here to learn about some of the incredible science happening related to artificial intelligence and machine learning," said Katya Le Blanc, human factors scientist at Idaho National Laboratory. "We're also developing technologies that can eventually be deployed by the nuclear industry and be used by nuclear utilities."
According to a lab news release, computers that mimic cognitive functions and apply advanced algorithms can help researchers analyze and solve a variety of complex technical challenges. This new approach helps everything from improving materials design for advanced reactors to enhancing nuclear power plant control rooms so they become more effective and efficient.
Technologies on display at the conference included RAVEN, a Risk Analysis Virtual ENvironment that provides an open-source, multi-purpose framework for machine learning, artificial intelligence and digital twinning.
One machine learning technology, called Inspection Portal, is part of the Light Water Reactor Sustainability program; it analyzes and aggregates data from human-submitted reports to identify trends and help optimize the operation of nuclear power plants.
The program's machine learning models are trained on millions of records from across the industry.
"We can do things here at the INL that no one else can do," said Brian Wilcken, nuclear science and technology data scientist. "Utility companies try to do things like this. They can't touch it. We have so much data we can train extremely powerful models."
Other AI systems provide image detection to read gauges, pinpoint anomalies and determine if a valve has been turned, if a screw is corroded or if a fire breaks out in a nuclear plant. These advancements could reduce the need for personnel to perform menial checks at a nuclear power plant and free up manpower for higher-level work and applications.
Additional tools evaluate "the economics of different energy mixes and how to analyze the best cost-benefit and other factors (such as) the reliability associated with energy systems," Le Blanc said.
These systems can determine the proper output needed from a nuclear power plant, a hydro plant and a solar facility in order to meet people's demand for electricity when they need it while optimizing economic benefit as well, she said.
Some of the applications utilize existing AI programs, while others were created in-house at Idaho National Laboratory.
"Sometimes, it requires that you develop it. There's not a model that can do what you need it to do, but sometimes there's something that already exists that you can adapt," Le Blanc said. "It varies depending on (the situation), but there's no reason to start from scratch."
The Artificial Intelligence and Machine Learning Expo is in its second year.
In the future, organizers hope to expand and collaborate with other experts in the AI space to further share the research occurring at Idaho National Laboratory.
"I read a lot of papers inside scientific journals related to AI," Le Blanc said. "Seeing how this stuff actually works, being able to mess around with it, play with it, talk to the researchers, see what they're doing and get direct access and ask them questions, that's just exciting!"
Read the original here:
AI goes nuclear: INL expo showcases machine learning and artificial intelligence - East Idaho News
How Has Machine Learning Optimized Lending Decisions? – Block Telegraph
In the evolving landscape of financial services, machine learning is revolutionizing how institutions make lending decisions. From enhancing loan propensity and risk scoring to modernizing credit scoring, we've gathered insights from a Staff Machine Learning Engineer and a Chief AI Officer, among others, to share how this technology has optimized lending decisions. Here are the top five expert perspectives on the transformative impact of machine learning in the sector.
Statistical analysis has always been used in the financial lending space. We are now seeing machine learning supplementing the use of just plain old statistics. The ML models deployed nowadays serve two main purposes, viz., loan propensity scoring and risk scoring.
The former determines the propensity of a user for taking out a loan, and the latter determines the probability of that loan being paid off. These two models together determine which users are reached out to by marketing and sales teams, thereby optimizing the size and the quality of the user group to reach out to.
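For readers who want a concrete picture of how those two scores might be combined, here is a rough sketch in Python; the model objects, feature columns and the product-of-probabilities ranking are illustrative assumptions rather than a description of any particular lender's pipeline.

```python
# Hedged sketch: combining a loan-propensity model and a repayment-risk model
# to rank users for marketing outreach. Model objects, feature columns and
# the ranking rule are assumptions for illustration only.
import pandas as pd

def rank_users_for_outreach(features: pd.DataFrame,
                            propensity_model, risk_model,
                            top_n: int = 1000) -> pd.DataFrame:
    """Score each user on (a) likelihood of taking out a loan and
    (b) likelihood of paying it back, then rank by the product."""
    scored = features.copy()
    scored["p_take_loan"] = propensity_model.predict_proba(features)[:, 1]
    scored["p_repay"] = risk_model.predict_proba(features)[:, 1]
    # High only when the user is both likely to borrow and likely to repay.
    scored["outreach_score"] = scored["p_take_loan"] * scored["p_repay"]
    return scored.nlargest(top_n, "outreach_score")
```

In practice, the resulting list would be what marketing and sales teams work from, so the cut-off `top_n` trades off outreach cost against expected loan quality.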
With the rise in digital payment platforms, credit card companies now have access to high-quality spending data of their potential customers.
Although companies have always used traditional machine-learning models for computing credit scores and identifying target customers, they now implement reinforcement learning as the data is more readily available.
They create self-improving models, which use, in addition to customer metrics, their own systems' feedback in the correct identification of target customers.
Vertigo Bank is at the forefront of utilizing machine-learning technology to revolutionize lending decisions in a real-world setting. By leveraging machine-learning algorithms, the bank is able to optimize risk assessment, tailor offers to individual customers, detect fraudulent activities, and streamline the lending process for increased efficiency.
One of the key examples showcased in the text is the case study of Ryan Baldwin, a graphic designer seeking a personal loan from Vertigo Bank. Through the application of machine learning, the bank is able to analyze various data points related to Ryan's credit history, income, spending habits, and other relevant information to make an informed lending decision. This not only streamlines the loan approval process but also ensures that the offer presented to Ryan is personalized to his specific financial situation and needs.
Furthermore, the integration of machine-learning algorithms into Vertigo Bank's lending system allows for improved customer segmentation, fraud detection, process automation, decision-making, and regulatory compliance. By accurately segmenting customers based on their financial profiles, the bank can tailor offers and services to meet the unique needs of each segment. Additionally, the advanced fraud detection capabilities of machine-learning technology help in identifying and preventing potential fraudulent activities, safeguarding both the bank and its customers.
Moreover, the automation of various processes through machine-learning algorithms results in a more efficient and streamlined lending system. From loan application processing to approval decisions, machine learning helps in reducing manual intervention, minimizing errors, and speeding up the overall process. This not only enhances operational efficiency but also leads to a more seamless and convenient experience for customers like Ryan.
Overall, the implementation of machine-learning technology at Vertigo Bank leads to swift, personalized, and efficient loan approval experiences for customers. This, in turn, improves customer satisfaction, risk management, operational efficiency, and regulatory compliance within the lending system. By embracing the power of machine learning, Vertigo Bank is able to stay ahead of the curve in the competitive financial industry and provide its customers with cutting-edge lending solutions.
In my experience, one of the most transformative aspects of machine learning in financial institutions has been the use of predictive analytics to evaluate a borrower's creditworthiness. Previously, loan officers relied on credit scores and a handful of factors, sometimes excluding creditworthy borrowers who didn't fit the model. Now, machine learning algorithms can analyze vast datasets, including alternative data sources like cash-flow management or utility bill payments.
A couple of years ago, I helped a private lending institution in South Dakota develop an ML model that evaluated a business's cash-flow patterns and utility payments to assess its creditworthiness. For individuals, the ML model evaluated non-traditional indicators of reliability such as mobile phone usage, data usage, income analysis, alternative sources of income, etc. This helped them approve microloans to a new segment of the population who previously wouldn't have qualified, boosting financial inclusion.
As I have witnessed lending institutions struggle to acquire new customers and an entire demographic that remained untapped, I was quickly able to understand the need to focus on margin maximization and not just risk minimization. So, my advice is: Don't just rely on traditional credit scores. Look for data that reflects a borrower's financial responsibility. This way, both the lender and the borrower will benefit.
If you ask me, it's not a big change that will uproot a lender's established business but rather an intuitive one that molds itself based on the unique requirements of each lending institution, whether it is banks, CDFIs, or private lenders. After all, the technology is rightly named: Machine Learning, which means the machine will keep on learning and modifying its functions to give lending institutions the power to make informed decisions, better serve their customers, and foster a more resilient and sustainable lending ecosystem, all while seamlessly integrating with their current operations.
Traditional scorecards are costly and time-consuming, requiring dedicated teams to manually adjust data for each client or product. They adapt slowly to economic changes and can introduce biases that affect lending fairness. In contrast, ML offers a much smarter solution. By analyzing historical data like demographics, transaction histories, and credit records, ML models evaluate a wide range of borrower traits. Advanced models like LightGBM and XGBoost handle complex data with high precision, processing over 600 data points to enhance credit score accuracy and provide a deeper understanding of credit risk.
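As a hedged illustration of the kind of gradient-boosted scorecard described here, the sketch below trains a LightGBM classifier on a hypothetical tabular loan dataset; the file name, feature set, label column and hyperparameters are assumptions, not any vendor's actual configuration.

```python
# Hedged sketch of a gradient-boosted credit-scoring model on tabular loan data.
import lightgbm as lgb
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("loan_applications.csv")      # hypothetical tabular dataset
X = df.drop(columns=["defaulted"])             # hundreds of borrower features
y = df["defaulted"]                            # 1 = defaulted, 0 = repaid

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05, num_leaves=63)
model.fit(X_train, y_train,
          eval_set=[(X_test, y_test)],
          callbacks=[lgb.early_stopping(stopping_rounds=50)])

# Probability of default, which a lender could map to a score band or cut-off.
p_default = model.predict_proba(X_test)[:, 1]
print("Validation AUC:", roc_auc_score(y_test, p_default))
```

In a real deployment the predicted default probability would typically be calibrated and mapped onto a score band before any approval decision is made.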
In practice, the results are impressive. For example, fintech company Nextbank, which supplies banking software to leading Asian banks, asked us to help build one of the first ML-powered credit scoring systems. Using LightGBM and XGBoost, the system achieved a 97% accuracy rate, processing over 500 million loan applications and significantly reducing default risks.
One major advantage of ML in lending decisions is its ability to continuously improve by learning from new information. This ensures lending decisions are based on the most current and comprehensive data, leading to better risk management. Moreover, ML reduces bias in lending. By relying on actual repayment data instead of human judgment, ML models ensure fair and objective decision-making, meeting regulatory standards and promoting fair financial practices.
Traditional financial institutions often rely on manual processes for loan underwriting, resulting in slow decision-making. On average, closing a home loan takes 35 to 40 days. ML credit scoring can speed up this process by up to 30% through a smart combination of automation and predictive analytics for risk assessments.
As the financial sector continues to digitize, ML's role in lending will only grow. Its ability to analyze vast amounts of data, predict outcomes accurately, and adapt to new information not only optimizes lending decisions but also modernizes the financial services industry.
Original post:
How Has Machine Learning Optimized Lending Decisions? - Block Telegraph
CZI Sci-Tech Convening Discusses AI Advances & Biology – Blog – Chan Zuckerberg Initiative
Seventy-five years ago, the mathematician and computer scientist Alan Turing posed a simple but powerful question that changed the course of technology: Are machines capable of thought?
Since then, artificial intelligence has advanced at an extraordinary pace, and today, it's opening the door to the digital age of biology.
From leveraging machine learning to help visualize the location and interactions of proteins within live cells to training a deep-learning model that can predict the impact of gene perturbations in cell types or genes, the application of AI methodologies to make sense of and draw insights from massive amounts of scientific data is ushering in a new level of insights into human health and disease.
This theme was the main focus of CZI's recent science technology convening, which brought together computational biologists, engineers, data scientists, product designers, and leaders from across the organization and our family of scientific institutes to explore the frontiers of AI for biomedical research. CZIers and our collaborators led sessions on topics ranging from how machine learning is expediting the annotation of tomograms from cryo-electron tomography experiments to building customized ultraviolet microscopes to detect and diagnose malaria in low-resourced settings.
Industry AI experts, including Boris Power, head of applied research at OpenAI, and Bryan Catanzaro, vice president of applied deep learning research at NVIDIA, also led talks about the promise of training AI models to expand the scientific community's foundational understanding of human biology.
Three main themes surfaced after two days of enriching discussions:
Take a closer look at these takeaways below.
"Biologists are going to have very strong simulations enabled by virtual cell models in a way that's not possible today," said Steve Quake, CZI's head of science, during the opening remarks. His point emphasized how AI will fundamentally change and accelerate the way scientists do research in the coming years.
For example, the virtual cell models CZI is building will be able to predict the response of immune cells to different genetic mutations faster and in more robust combinations than current methods, without the need to collect costly and invasive physical samples from patients. It's like having a combination lock for human biology: once you have the code, it will open up a host of new information about what happens when cells become diseased and what it takes for them to become healthy again.
Marinka Zitnik, assistant professor of biomedical informatics at Harvard Medical School and associate faculty at the Kempner Institute for the Study of Natural and Artificial Intelligence, led a session that further highlighted AIs role in transforming scientific research in the context of her day-to-day work. Zitnik, a CZI collaborator and Science in Society grant partner, highlighted how machine learning algorithms are being used to augment research and provide new insights at different time and spatial scales.
One example is SHEPHERD, a deep learning approach built by Zitnik's team that can provide individualized diagnoses of rare genetic diseases. Given the limited data on rare diseases, the model is pre-trained on known associations between variants, genes and phenotypes from patient-agnostic data. The model is then trained on simulated patient data before being fine-tuned in the real world, potentially speeding up diagnoses and improving patient outcomes.
When evaluated across 12 sites throughout the United States, SHEPHERD was able to nominate disease-causing genes for 75% of patients from a cohort affiliated with the Undiagnosed Diseases Network. The model also narrowed down the top five possible genes responsible for those diseases among the tens of thousands of genes it prioritized. By providing a broad characterization of novel diseases, helping researchers identify genes harboring mutations that can lead to disease, and connecting patients with similar genetic and phenotypic features for potential clinical follow-ups, SHEPHERD is fundamentally changing the way researchers like Zitnik study and develop potential therapeutic targets for rare diseases. This can shorten the time for diagnosis and improve outcomes for patients.
Over the last decade, scientists, academic research labs and philanthropic organizations like CZI have been collecting, aggregating and curating enormous amounts of detailed, high-resolution biological information about the trillions of cells within the human body. These datasets are sequence- or image-based, two complementary modalities that are fundamental to advancing biomedical research.
Manu Leonetti, director of systems biology at the Chan Zuckerberg Biohub San Francisco (CZ Biohub SF), and James Zou, associate professor of biomedical data science at Stanford University, led discussions about the opportunities with training AI on multimodal datasets. Leonetti, a cell biologist, described imaging as one of the foundational modalities for biology, allowing scientists to explore advanced techniques like transcriptomics under a microscope.
"Imaging has the power of being able to give us extremely dense multimodal profiles of cells," said Leonetti. "We can ask questions across scales while following cells in the context of their native environment, whether looking at cells in a dish, or tissues, or even at the scale of an entire organism."
New developments in deep learning are fueling the power of imaging. At the CZ Biohub SF, for example, Leonetti and his colleague Loic Royer are developing new algorithms to extract functional information from biological images. Royer's imaging AI team has also trained a de-noising algorithm called Aydin that dramatically increases the usability of microscopy images, as well as tools that can recognize and quantify biological objects from complex images to accelerate analysis.
Zou also shared examples of how generative AI is transforming biomedicine, including a case study showing how models can help identify and synthesize molecules to guide the development of antibiotics.
"Generative AI can really help us expand the search space," said Zou. "If we can use the help of AI to explore small molecules that we have not seen before in nature and are likely good drug candidates, that can likely be transformative for drug discovery."
The deep learning method Zou cited pinpointed a candidate molecule that could fight against various pathogens, including antibiotic-resistant bacteria. This breakthrough comes at an especially critical time, given the rise in antibiotic-resistant pathogens globally.
On the topic of modalities, Zou shared his perspective on why language can be a unifying framework for integrating vast amounts of biological information.
"The reason why I'm particularly excited about language is the knowledge that's summarized in written text," he said. "There's a lot more information captured in language beyond what's shared in numerical data."
To illustrate this point, Zou zeroed in on recent advances in protein modeling like ESMFold and AlphaFold, which draw correlation patterns from sequences. While these models are powerful tools for making predictions about protein structure, they aren't trained on existing literature about the role and function of different proteins.
However, Zou also said that fine-tuning these protein language models with information from existing literature, decades of knowledge summarized in papers, leads to a notable boost in the capability of these models.
Today, most of the field's AI models are designed for applications in specific research areas, whether in the context of identifying genetic mutations that can lead to rare diseases or identifying new molecules that can overpower antibiotic-resistant pathogens.
But in the future, CZI's goal is to build and train a general-purpose model of virtual cells that can transfer information across datasets and conditions, serve multiple queries concurrently, and unify data from different modalities.
Explore more: How AI Can Uncover the Laws of Biology
Theofanis Karaletsos, CZI's head of AI for science, provided attendees with a closer look at our vision for building a general-purpose model that can serve as a foundational resource for biomedical research. Karaletsos started his talk by highlighting the extraordinary amount of biological information generated over the last decade, which is breaking Moore's Law.
"By bridging the gap between these datasets and advances in AI, we get to the heart of where we want to be as machine learners," said Karaletsos. "We want to simulate a generative process such that, at some coarse-grain level of causality, even if it doesn't get things exactly right at a fine level, at some level we'll have useful models that will allow us to ask questions about the data and query them in interesting ways for counterfactuals."
To bring these virtual cell models online, the early days of CZI's AI strategy will focus on training models and making these models and the datasets used for training and validation available to the community, which will require deep cross-functional collaboration with our teams, AI/machine learning experts, and biologists using these models.
Ultimately, this approach will pave the way for an open, accessible digital platform for biology, which will house next-generation models and systems trained on expansive multimodal datasets. Scientists will be able to access these models via APIs and visualizations to pose complex questions and test theories about the fundamental mechanisms of human biology faster and more accurately than traditional experimentation methods and existing, more specialized generative AI models.
"Over time, we want this to handle basic biology tasks," Karaletsos concluded. "We hope it'll be useful for disease and ultimately for cellular engineering, because we want to understand cells in a generative way."
Learn more about CZI's AI strategy for science and our vision to build predictive models of cells and cell systems.
Read more:
CZI Sci-Tech Convening Discusses AI Advances & Biology - Blog - Chan Zuckerberg Initiative
Idaho National Laboratory hosts second annual AI and machine learning expo – Post Register
Read the original:
Idaho National Laboratory hosts second annual AI and machine learning expo - Post Register
A deep learning-based algorithm for pulmonary tuberculosis detection in chest radiography | Scientific Reports – Nature.com
This study was designed to use freely available open TB CXR datasets as training data for our AI algorithm. Subsequent accuracy analyses were performed using independent CXR datasets and actual TB cases from our hospital. All image data were de-identified to ensure privacy. This study was reviewed and approved by the institutional review board (IRB) of Kaohsiung Veterans General Hospital, which waived the requirement for informed consent (IRB no.: KSVGH23-CT4-13). This study adheres to the principles of the Declaration of Helsinki.
The flowchart of the study design is shown in Fig. 1. Due to the high prevalence of TB and its varied imaging presentation, TB cannot be entirely excluded when a CXR presents with pneumonia or other entities. Our preliminary research indicated that training a model solely on TB vs. normal resulted in bimodally distributed predictive values. As a result, CXRs that were abnormal but not indicative of TB usually received predictive values that were too high or too low and failed to effectively differentiate abnormal cases from normal or TB. For common CXR abnormalities such as pneumonia and pleural effusion, the TB risk is lower, but not zero. Thus, we trained two models using two different training datasets, one for TB detection and another for abnormality detection. The output predictive values were then averaged.
Flow chart of model training and validations.
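A minimal sketch of this two-model averaging is shown below; the model objects, preprocessing step and output format are assumptions for illustration and do not reproduce the study's implementation.

```python
# Hedged sketch: one model scores the typical-TB pattern, the other scores
# general abnormality, and the two predictive values are averaged.
import numpy as np

def tb_risk_score(image, tb_model, abnormality_model, preprocess):
    """Return the averaged predictive value for a single chest X-ray."""
    x = preprocess(image)                                 # e.g. resize/normalize to 224x224
    p_tb = float(tb_model.predict(x)[0])                  # probability of typical TB pattern
    p_abnormal = float(abnormality_model.predict(x)[0])   # probability of any abnormality
    return float(np.mean([p_tb, p_abnormal]))
```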
The features of the CXR datasets used for training are summarized in Table 1. The inclusion criteria are CXRs of TB, other abnormalities, or normal findings. Both posteroanterior-view and anteroposterior-view CXRs are included. The exclusion criteria are CXRs with poor quality, lateral-view CXRs, pediatric CXRs, and those with lesions too small to detect at a size of 224×224 pixels. All CXR images were confirmed by C.F.C. to ensure both image quality and correctness.
Training dataset 1 is used for training algorithms to detect the typical TB pattern on CXR. 348 TB CXRs and 3806 normal CXRs were collected from various open datasets for training, including the Shenzhen dataset from Shenzhen No. 3 People's Hospital, the Montgomery dataset19,20, and Kaggle's RSNA Pneumonia Detection Challenge21,22.
Training dataset 2 is used for training algorithms to detect CXR abnormalities. A total of 1150 abnormal CXRs and 627 normal CXRs were collected from the ChestX-ray14 dataset23. The abnormal CXRs consisted of consolidation: 185, cardiomegaly: 235, pulmonary edema: 139, pleural effusion: 230, pulmonary fibrosis: 106, and mass: 255.
In this study, we employed Google Teachable Machine (GTM)18, a free online AI tool dedicated to image classification. GTM provides a user-friendly web-based graphical interface that allows users to execute deep neural network computations and train image classification models with minimal coding requirements. By utilizing the power of transfer learning, GTM significantly reduces the computational time and amount of data required for deep neural network training. Within GTM, the base model for transfer learning was MobileNet, a model pretrained by Google on the ImageNet dataset featuring 14 million images and capable of recognizing 1,000 classes of images. Transfer learning is achieved by modifying the last two layers of the pretrained MobileNet and then proceeding with the task-specific image recognition training18,24. In GTM, all images are adjusted and cropped to 224×224 pixels for training. 85% of the images are automatically assigned to the training dataset, and the remaining 15% to the validation dataset to calculate the accuracy.
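The web tool handles training internally, but for orientation the sketch below shows a comparable MobileNet transfer-learning setup in Keras with 224×224 inputs and an 85/15 train/validation split; the directory layout, class count and hyperparameters are assumptions and do not reproduce the study's exact configuration.

```python
# Hedged sketch of MobileNet transfer learning, analogous to the setup above.
import tensorflow as tf

IMG_SIZE = (224, 224)
DATA_DIR = "cxr_dataset/"   # assumed folder with one subfolder per class

train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.15, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.15, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False   # keep the pretrained features, train only the new head

model = tf.keras.Sequential([
    tf.keras.Input(shape=IMG_SIZE + (3,)),
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # MobileNet expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),        # e.g. TB vs. normal
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```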
The hardware employed in this study included a 12th-generation Intel Core i9-12900K CPU with 16 cores, operating at 3.2–5.2 GHz, an NVIDIA RTX A5000 GPU equipped with 24 GB of error-correction code (ECC) graphics memory, 128 GB of random-access memory (RAM), and a 4 TB solid-state disk (SSD).
To evaluate the accuracy of the algorithms, we collected clinical CXR data for TB, normal cases, and pneumonia/other disease from our hospital.
Validation dataset 1 included 250 de-identified CXRs retrospectively collected from VGHKS. The CXR dates were between January 1, 2010 and February 27, 2023. This dataset included 83 TB (81 confirmed by microbiology, and 2 confirmed by pathology), 84 normal, and 83 abnormal-other-than-TB cases (73 pneumonia, 14 pleural effusion, 10 heart failure, and 4 fibrosis; some cases had combined features). The image size of these CXRs ranged from 1760 to 4280 pixels in width and from 1931 to 4280 pixels in height.
Validation dataset 2 is a smaller dataset derived from validation dataset 1 for comparison of the algorithm's and physicians' performance; it included 50 TB, 33 normal, and 22 abnormal-other-than-TB (22 pneumonia, 5 pleural effusion, 1 heart failure, and 1 fibrosis) CXRs. The features of the two validation datasets are provided in Table 1.
Data collected from clinical CXR cases included demographic data (such as age and sex), radiology reports, clinical diagnoses, microbiological reports, and pathology reports. All clinical TB cases included in the study had their diagnosis confirmed by microbiology or pathology, and their CXR was performed within 1 month of TB diagnosis. Normal CXRs were also reviewed by C.F.C., and radiology reports were considered. Pneumonia/other disease cases were identified by reviewing medical records and examinations, with diagnoses made by clinical physicians' judgment and without evidence of TB detected within a three-month period.
We employed validation dataset 2 to evaluate the accuracy of TB detection by five clinical physicians (all board-certified pulmonologists, average experience 10 years, range 5–16 years). Each physician performed the test without additional clinical information, and was asked to estimate the probability of TB in each CXR, consider whether sputum TB examinations were needed, and make a classification from three categories: typical TB pattern, normal pattern, or abnormal pattern (less like TB).
We also collected radiology reports from validation dataset 2 to evaluate their sensitivity for detecting TB. Reports mentioning suspicion of TB or mycobacterial infection were classified as typical TB pattern. Reports indicating abnormal patterns such as infiltration, opacity, pneumonia, effusion, edema, mass, or tumor (but without mentioning tuberculosis, TB, or mycobacterial infection) were classified as abnormal pattern (less like TB). Reports demonstrating no evident abnormalities were classified as normal pattern. Furthermore, by analyzing the pulmonologists' decisions regarding sputum TB examinations, we estimated the sensitivity of TB detection in pulmonologists' actual clinical practice.
Continuous variables are represented as mean ± standard deviation (SD) or median (interquartile range [IQR]), while categorical variables are represented as number (percentage). For accuracy analysis, the receiver operating characteristic (ROC) curve was used to compute the area under the curve (AUC). Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), likelihood ratio (LR), overall accuracy, and F1 score were calculated. A confusion matrix was used to illustrate the accuracy of each AI model. Boxplots were used to evaluate the distribution of the predicted values of the AI models for each etiology subgroup.
The formulas for each accuracy calculation are as follows:
(TP is true positives, TN is true negatives, FP is false positives, FN is false negatives, P is all positives, and N is all negatives.)
$$\begin{gathered}
\text{P} = \text{TP} + \text{FN}, \\
\text{N} = \text{TN} + \text{FP}, \\
\text{Sensitivity} = \text{TP}/\text{P} \times 100, \\
\text{Specificity} = \text{TN}/\text{N} \times 100, \\
\text{PPV} = \text{TP}/(\text{TP} + \text{FP}) \times 100, \\
\text{NPV} = \text{TN}/(\text{TN} + \text{FN}) \times 100, \\
\text{LR}+ = \text{sensitivity}/(1 - \text{specificity}), \\
\text{LR}- = (1 - \text{sensitivity})/\text{specificity}, \\
\text{Overall accuracy} = (\text{TP} + \text{TN})/(\text{P} + \text{N}) \times 100, \\
\text{F1 score} = (2 \times \text{sensitivity} \times \text{PPV})/(\text{sensitivity} + \text{PPV}) \times 100
\end{gathered}$$
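For reference, a small Python helper that computes these metrics from confusion-matrix counts might look like the sketch below; the counts passed in are placeholders, not study results.

```python
# Helper computing the accuracy metrics above from confusion-matrix counts.
def diagnostic_metrics(tp, tn, fp, fn):
    p, n = tp + fn, tn + fp
    sens = tp / p * 100
    spec = tn / n * 100
    ppv = tp / (tp + fp) * 100
    npv = tn / (tn + fn) * 100
    lr_pos = (sens / 100) / (1 - spec / 100)
    lr_neg = (1 - sens / 100) / (spec / 100)
    accuracy = (tp + tn) / (p + n) * 100
    f1 = 2 * sens * ppv / (sens + ppv)   # already expressed as a percentage
    return {"sensitivity": sens, "specificity": spec, "PPV": ppv, "NPV": npv,
            "LR+": lr_pos, "LR-": lr_neg, "overall_accuracy": accuracy, "F1": f1}

print(diagnostic_metrics(tp=70, tn=150, fp=17, fn=13))  # illustrative counts only
```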
Continue reading here:
A deep learning-based algorithm for pulmonary tuberculosis detection in chest radiography | Scientific Reports - Nature.com
From bioscience to skin care, startups are harnessing AI to solve problems big and small – Source – Microsoft
It started with a trek to one of Europe's most remote areas.
Fascinated by the biodiversity of the world's less-explored environments, biologists and explorers Glen Gowers and Oliver Vince spent a month on an ice cap in Iceland in 2019 undertaking what is believed to be the world's first fully off-grid DNA sequencing expedition. Using solar power alone, the team spent a month sequencing DNA from microorganisms living in an area with both ice and a hot spring. Sequencing DNA refers to a method used to read the genetic code of an organism.
After returning to the U.K., the pair shared their data with Philipp Lorenz, a University of Oxford scientist whose research focuses on genomics and AI. It quickly became apparent that the data they had collected was unlike anything they had seen in any reference database, so different, in fact, that the sequences couldn't be annotated using traditional methods.
That realization prompted Gowers and Vince to launch Basecamp Research, a London-based startup that aims to build the world's largest database of natural biodiversity and apply AI and machine learning to advance bioscience. The company is among a wave of startups worldwide that are harnessing machine learning and artificial intelligence, particularly generative AI, to create AI-powered tools and solutions across an increasingly large swath of industries.
Pointing to the lack of data in the life sciences, Lorenz says there are 10 to the power of 26 species on the planet, but only a few million of those have been sequenced. "In terms of comparison, that's about five drops of water compared to the Atlantic Ocean of what we don't know about life on Earth," says Lorenz, chief technology officer at Basecamp Research.
"If you want to do deep learning on biological data, there's just a fundamental, enormous knowledge gap."
To bridge that gap, Basecamp Research is partnering with nature parks on five continents and working across 27 countries to sequence genomic information from the world's most diverse and understudied biomes, from volcanic islands and deep oceans to jungles and the Antarctic.
The company, which has close to 35 employees, collects samples only with consent from stakeholders including national and local governments, nature parks, research institutes and landowners. Basecamp Research shares benefits with stakeholders, such as employing local scientists, providing training and resources to partners, and sharing revenue if commercial products are developed from the locations Basecamp Research is working in.
"We are the first company, really in the world, that is doing this at scale and in collaboration with stakeholders," Lorenz says. "In the age of generative AI, we are the only life science organization that can train AI models in which every point in the training dataset can be traced back to consent and benefit-sharing agreements."
In just two years, he says, Basecamp Research has built a database about five times larger and more diverse than any other of its type. Unlike traditional protein databases that primarily just store data, Basecamp's database is a knowledge graph, a network that organizes data and shows the relationships between billions of data points, linking protein and DNA sequences to their biological, chemical and evolutionary contexts.
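To give a sense of what such a knowledge graph captures, the hedged sketch below links a protein node to the sample and organism it came from using networkx; the node names, attributes and relations are invented for illustration and do not reflect Basecamp Research's actual schema.

```python
# Illustrative sketch of contextual linkage in a biological knowledge graph.
import networkx as nx

kg = nx.MultiDiGraph()
kg.add_node("protein:BC_000123", kind="protein", length_aa=412)
kg.add_node("sample:iceland_hotspring_07", kind="environmental_sample",
            temperature_c=78, ph=6.2)
kg.add_node("taxon:uncultured_archaeon_X", kind="organism")

# Edges tie a sequence to where it was found and what encodes it, so models
# can draw on environmental and evolutionary context, not just the sequence.
kg.add_edge("protein:BC_000123", "sample:iceland_hotspring_07", relation="found_in")
kg.add_edge("protein:BC_000123", "taxon:uncultured_archaeon_X", relation="encoded_by")

print(list(kg.edges(data=True)))
```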
In March, Basecamp Research announced the launch of a new deep learning model named BaseFold. The model can predict 3D structures of large proteins and small-molecule interactions with protein targets more accurately than the popular AlphaFold2 model, according to Basecamp.
With its database, Basecamp Research is building deep learning models that are being used to design products such as gene editing systems for therapeutics and enzymes for food manufacturing. One client is developing proteins that break down difficult-to-recycle plastic waste. Another company is designing proteins for dyeing fabrics without using harmful chemicals.
Basecamp Research, Lorenz says, is motivated by a mission of ethical data collection and a core belief in the power of AI to advance biological discovery.
"Biology and the life sciences are just fundamentally more complex than most other domains," he says. "Ultimately, it's going to be deep learning models and AI that will be able to deal with and understand the vastness and complexity of biology."
Microsoft for Startups Founders Hub was launched in 2021 to accelerate innovation by providing access to resources that were traditionally unavailable to fledgling companies. Open to any startup, the platform provides access to leading AI models, Azure credits, free and discounted development tools, and expert guidance. Tens of thousands of startups around the world are now part of Founders Hub, and the number of those companies using Microsoft AI has increased tenfold in the past year, according to Microsoft.
Microsoft for Startups Pegasus Program, an extension of Founders Hub launched in 2023, is an invite-only program that connects growth-stage startups with Microsoft customers in industries including AI, retail, health and life sciences, and cybersecurity. The program matches Microsofts top enterprise customers with the right startups to help them solve business challenges.
Microsoft's focus on integrating AI into its products, from GitHub to Microsoft 365, is a differentiator and means startups not only get access to those tools but also to the expertise behind them, says Tom Davis, partner at Microsoft for Startups.
"It's not just access to infrastructure, extra Azure credits and things like that," he says. "It's access to knowledge and know-how that will help accelerate these startups. That understanding of how to build AI-based product applications is invaluable for startups."
Tammy McMiller joined Founders Hub not long after launching her company, Plan Heal, in 2022. Based in Chicago, Plan Heal offers AI-powered solutions that enable patients to monitor and report health metrics so care providers can better serve them.
The company's mission of empowering patients and providers is a personal one for McMiller. She decided to start the company after a family member who had complained of symptoms for more than two years and was regularly seeing her doctor was diagnosed with stage three colorectal cancer.
McMiller's relative has been cancer-free for nine years, but she points to statistics showing that 167 million people in the U.S. have health issues such as high blood pressure, kidney disease or diabetes but don't know it.
"My family member was one of those statistics," McMiller says. "What that means is that people are living lives with a lower quality of health and not really understanding why. We decided to leverage AI to help people become better reporters of their health."
Through Plan Heal's Smart Health Assessment, which is powered by Azure and integrates with electronic health record systems to help teams access real-time patient data, patients answer a few questions about their health on a regular basis and can upload images of symptoms or medications.
Algorithms analyze the data to provide insights and flag potential health issues for providers. Cramping or aching in the calves, for example, can indicate peripheral artery disease in a patient with diabetes, which can lead to amputation; a limb is amputated every three minutes and 30 seconds in the U.S. due to diabetes.
Those insights allow providers, who often have high caseloads and limited time with patients, to go into appointments with more information about a patient's health and proactively come up with preventative treatments.
"It really changes the dynamic from a disease care interaction to true proactive health care," says Dan Langille, a member of Plan Heal's advisory board. "That's pretty powerful."
Plan Heal's platform also offers targeted assessments for several high-cost, chronic diseases including diabetes and kidney disease. McMiller hopes to pilot the platform with a large-population health care provider this year, and early results seem promising. Testing found that 90% of patients who used the health assessment had a more engaging conversation with their provider, she says, and 85% received care services they would not have otherwise, such as additional examinations or tests.
As an aging population increases the demand for health care services, McMiller says, AI can play a valuable role in helping people track their health and identify potentially life-threatening conditions earlier.
"We'll always need professional health care teams. AI's never going to replace that," McMiller says. "But if that care team member has the efficiency of AI to help automate different services, they can care for patients more efficiently."
In 2018, Anastasia Georgievskaya was a research scientist working with R&D teams at skin care companies to develop models for analyzing skin in a clinical setting. The work involved analyzing before and after images of skin that would show benefits from skin care products, contrary to what some consumers believed, she says.
"We started to ask people in the industry, why does everyone think that skin care is not working?" Georgievskaya says. "And the answer was that it's working, but consumers are choosing the wrong product and buying products that were not designed for them."
That got Georgievskaya thinking about using AI and computer vision to replicate on smart phones the analysis she was doing in labs. If consumers could get accurate skin assessments easily through their phones, she reasoned, they could make more informed decisions and get better results from skin care products.
Georgievskaya co-founded Haut.AI, a company based in Tallinn, Estonia, in 2018 to provide skin care companies and retailers with customizable, AI-based skin diagnostic tools. Haut.AI's software uses selfies from consumers to assess skin metrics such as hydration, redness and wrinkles, then makes personalized product recommendations. A similar application analyzes hair condition, also through a selfie, to gauge features including frizziness, volume and color uniformness.
Haut.AI's newest product, SkinGPT, lets users upload photos and see how their skin would change over time when using particular products, like face serum with hyaluronic acid for fine lines and wrinkles; the company says the application is the first to use generative AI for skin care simulations. Haut.AI is also working on a chatbot that can provide consumers with input on skin analysis results and answer questions about ingredients in products or how to combine products.
The platform's algorithms are trained on a mix of lab data from anonymized images of human skin and synthetic data created with generative AI. Datasets in the beauty sector are limited, Georgievskaya says, and using synthetic data allows Haut.AI to train models to account for gender and population group differences, and environmental factors like air pollution and weather that can impact skin condition.
"This blend of synthetic and real data gives a really impressive boost in system accuracy because it can cover a lot of use cases, especially for the groups that you don't usually have much of a dataset for," she says.
Haut.AI, which is part of the Microsoft for Startups Pegasus Program, has around 90 clients, including several that are using its technology for research and development, Georgievskaya says. The platform allows companies to collect data from thousands of study participants with their consent, she says, versus the traditional approach of bringing a few dozen participants to a research facility for testing.
Artificial intelligence, Georgievskaya believes, can provide a more objective and realistic analysis of skin than a person possibly could.
"As humans, it is in our nature to be emotional, and we tend to underestimate or overestimate our skin," she says. "And even if someone else tells you something about your skin, if they're not a doctor, their judgment is also very biased and subjective. The algorithm helps you see objective measures. You can just snap a selfie and get this information in less than 10, 20 seconds."
While many startups are making products powered by AI, Weights & Biases' mission is to provide tools to help AI developers build those solutions. Founded by Lukas Biewald and Chris Van Pelt, the San Francisco-based company arose out of an internship Biewald had at OpenAI in its early days as a research organization.
While at OpenAI, Biewald struggled to find a way to track his experiments. He asked other researchers what they were doing and found a hodgepodge of approaches, from keeping notes in a text editing app to creating Excel documents. There was no uniform way to track the different experiments going on and their performance.
"We saw an opportunity, and really an itch that we had ourselves and wanted to scratch," Van Pelt says.
Biewald and Van Pelt, who previously founded machine learning and AI company Figure Eight, launched Weights & Biases in 2017 to provide tools to help AI developers better manage workflow and build and deploy models faster. The company's platform, which runs in Azure, allows users to track and visualize experiments, store models in a central registry, automatically capture the data and code used for models and share results with collaborators.
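As a rough illustration of what that tracking looks like in code, a minimal sketch using the Weights & Biases Python client might resemble the following; the project name, config values and logged metrics are placeholders rather than a real training run.

```python
# Minimal experiment-tracking sketch with the Weights & Biases client library.
import wandb

run = wandb.init(project="demo-model-training",
                 config={"learning_rate": 0.05, "epochs": 10})

for epoch in range(run.config.epochs):
    # ... train one epoch of your model here ...
    train_loss = 0.5 / (epoch + 1)          # placeholder metric
    wandb.log({"epoch": epoch, "train_loss": train_loss})

run.finish()
```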
Demand for the platform has grown, Van Pelt says, since the release of ChatGPT has made it easier to build general-purpose large language models that can perform many tasks without requiring different models.
"That really changed the dynamic," he says. "The number of people that we could help with our tools went from a fairly small subset of engineers who were specialized in machine learning to essentially all engineers. Our customer base expanded dramatically."
The company, which is a member of the Microsoft for Startups Pegasus Program, has more than 1,000 customers across sectors ranging from tech to finance, health care, medicine, robotics, the automotive industry and academia (Weights & Biases gives its software to academics for free). OpenAI is a customer, as is Microsoft. The companys platform is being used to fuel drug discovery, advance autonomous vehicle development and improve health care delivery.
For Van Pelt, seeing the diversity of Weights & Biases' customer base and the innovative ways customers are using its technology is one of his favorite aspects of the company.
"I think every founder or person working on a startup wants to think that what they're doing is going to change the world," he says. "I'm not making any big claims that Weights & Biases is changing the world."
"But we do have a front-row seat [to watch] all of our customers that are doing things that we simply could not do before, and we're helping them to do that. It's really gratifying to see."
Top photo: Biologists Glen Gowers and Oliver Vince, pictured here, launched Basecamp Research after a 2019 expedition to an ice cap in Iceland. (Photo courtesy of Basecamp Research)
Read the original post:
From bioscience to skin care, startups are harnessing AI to solve problems big and small - Source - Microsoft
The rising software supply chain threat: Mounting a unified defense – CIO
Malicious actors have been pressing their advantage against vulnerable software supply chains with exponentially increasing attacks. Enterprises have been hampered in fighting back by lack of internal consensus on their security capabilities and practices. Recent survey findings uncovered multiple areas of disconnect between senior executives/managers (executives) and hands-on staff (doers).
Executives tended to have a comparatively rosier picture of their organizations' security posture. Compared to the doers, executives believed they were implementing more security practices, using more solutions, and defending more effectively against open-source risk. Similarly, they underestimated the time their teams were spending on vulnerability remediation and software package approvals.
The executives and doers also had significantly different perceptions when it came to the incorporation of artificial intelligence (AI) and machine learning (ML) in software applications and for automated security scanning.
The research findings revealed region-specific concerns over SSC security as well.
North America (NA)-based organizations tend to be quicker to adopt ML models than those based in Europe, the Middle East, and Africa (EMEA) or the Asia-Pacific (APAC). Also, organizations in the US appear to have a greater comfort level when it comes to using AI and ML tools for code creation.
These findings suggest that the AI race is more intense in North America, where Silicon Valley technology giants have been investing heavily in its development, than in the EMEA or APAC regions.
Based on the survey findings, it's clear that EMEA organizations exercise more caution when it comes to SSC risk than organizations in other parts of the world. They are less inclined to deploy software to Internet of Things (IoT) devices, for example. Also, there's more resistance to integrating AI and ML in software, likely due to concerns over security and compliance.
Compared to North America and Asia, the regulatory environment is far more stringent in Europe, where organizations are sensitive to the requirements of the General Data Protection Regulation (GDPR), the Cybersecurity Act, and other key directives.
Yet despite their measured response to emerging software technologies, survey responses indicate that organizations in the EMEA region are aware of the potential of AI and ML tools and are open to considering ways to incorporate them in their SSCs.
Among the notable distinctions of APAC-based organizations is their comparative eagerness to incorporate AI and ML for scanning and remediation. Based on the survey results, they also have a very high comfort level with the use of AI and ML tools for code creation.
That could be problematic. If unchecked, APAC organizations' enthusiasm for these emerging technologies might expose them to greater SSC security risk.
Corporate leaders are eager to bridge the perception gaps and adopt a comprehensive, unified solution to shore up SSC security. Whether based in NA, EMEA, or APAC, executives are eager to establish a unified SSC security defense posture for their organizations. What's needed is a comprehensive solution that embraces automation, employs AI and ML models, and prioritizes integration across the entire software development lifecycle.
Continue reading here:
The rising software supply chain threat: Mounting a unified defense - CIO
Exploring the limits of robotic systems | Penn Today – Penn Today
As machine learning enters the mainstream, consumers may assume that it can solve almost any problem. "This is not true," says Bruce Lee, a doctoral student in Penn Engineering's Department of Electrical and Systems Engineering. Lee's research works to identify how robotic systems learn to perform different tasks, focusing on how to tell when a problem may be too complex, and what to do about it.
Lee, who is advised by Nikolai Matni, assistant professor in electrical and systems engineering and member of the Penn Research in Embedded Computing and Integrated Systems Engineering (PRECISE) Center, studies how robotic systems learn from data, with the goal of understanding when robots struggle to learn a dynamic system, and what approaches might be effective at combating those challenges.
His work offers insights into the fundamental limits of machine learning, guiding the development of new algorithms and systems that are both data-efficient and robust.
"When I try to apply a reinforcement learning or imitation learning algorithm to a problem, I often reach a point where it does not work, and I have no idea why," says Lee. "Is it a bug in my code? Should I just collect more data or run more iterations? Do I need to change the hyperparameters? Sometimes, the answer is none of the above. Rather, the problem is impossible to learn effectively, no matter what learning algorithm I use. My work can help researchers understand when this is the case."
Improving the way robotic systems learn from data enhances the safety and efficiency of self-driving cars, enabling them to make more reliable decisions in complex, dynamic environments. Similarly, robots operating in human environments, such as in health care or manufacturing, can become more adaptable and capable of performing a wider range of tasks with minimal human intervention. Ultimately, the goal is to create robotic systems that can better serve humanity, contributing to advancements in various fields including transportation, health care, and beyond.
Read more at Penn Engineering Today.
Continued here:
Exploring the limits of robotic systems | Penn Today - Penn Today
If you build it, they will come: Experts believe reimbursement will follow AI implementation – Health Imaging
Lots of products, few payments
The U.S. Food and Drug Administration includes nearly 900 products on its list of approved artificial intelligence- and machine learning-enabled medical devices, the great majority of which are tailored to radiology needs. Despite this, AI algorithms are still not a mainstay within radiology departments.
This could be, in part, due to the lack of reimbursement for such technology and decreasing payments to the specialty in general. After adjusting for inflation, the American College of Radiology estimates that Medicare reimbursement to radiologists has plummeted nearly 32% since 2005. What's more, the Centers for Medicare & Medicaid Services (CMS) has assigned payment for just around 10 of the AI/ML devices that are currently approved.
In the panel discussion, Jha said that if AI makes radiologists better or more efficient at their jobs, it should be adopted without getting too deep into return-on-investment (ROI) calculations because the return will be felt in other ways.
Joseph Cavallo, MD, MBA, an assistant professor of radiology and biomedical imaging at Yale School of Medicine, agreed, noting that the current absence of reimbursement should not deter stakeholders from exploring how AI can improve their practices. "Reimbursement for the use of AI algorithms will be the exception more than the rule for a while," he said.
"Some CPT codes have been created, but AI as a whole is going to have to be like PACS was for radiology for a while. Improving workflow and efficiency for radiologists now will result in ROI and gains in the future," Cavallo suggested.
On an encouraging note, for those hoping to implement algorithms into their practice, Eliot Siegel, MD, FSIIM, chief of imaging services at the University of Maryland School of Medicine, said that conversations related to payments for AI-based tools used during the diagnostic process are increasing at the federal level.
Like Jha, Siegel believes that AI algorithms are on a path similar to the one PACS followed.
"It took a small number of years, but eventually people realized film wasn't practical anymore. The same will happen with AI," he said.
Read more about commercially available AI products at work below.
See the rest here:
If you build it, they will come: Experts believe reimbursement will follow AI implementation - Health Imaging
Proactive Ways to Skill Up for AI – AiThority
Artificial Intelligence (AI) is rapidly transforming a multitude of industries, from healthcare and finance to transportation and marketing. As AI continues to infiltrate various sectors, the demand for AI-related skills is surging. Let's explore proactive ways to acquire the skills needed to thrive in this AI-driven landscape.
Also Read: AiThority Interview with Wendy Gonzalez, CEO of Sama
The emergence of AI technologies has triggered a monumental transformation in employment opportunities, elevating the worth of AI competencies. A LinkedIn survey reveals that roles specializing in AI have seen an annual growth rate of 74% over the previous four years, establishing it as the most rapidly expanding job sector. Likewise, Gartner's research forecasts that by 2022, AI will generate 2.3 million jobs, while phasing out only 1.8 million.
It's worth noting that the demand for AI expertise isn't confined to tech firms. Industries such as healthcare, finance, and even agriculture are incorporating AI, thus broadening the range of roles requiring AI knowledge. Whether it's data analysis, machine learning, or natural language processing, possessing AI skills can significantly enhance your employability and open doors to a myriad of career opportunities.
As you navigate the burgeoning field of Artificial Intelligence, honing specific core skills will be instrumental in setting you apart. Let's delve into these essential skills that are highly sought after in the AI industry.
Also Read: Cryptocurrency Hacking Has Become A Significant Threat
In the dynamic realm of AI, a proactive stance towards acquiring skills is not just advantageous but imperative. The days of waiting for opportunities to knock on your door are long gone. Here's your guide to taking control of your AI learning journey.
The AI landscape is in a constant state of flux, yet the need for AI competencies remains steadfast. From mastering programming languages and data analytics to gaining hands-on experience and networking, the avenues for skill acquisition are diverse. Certifications and real-world experience further solidify your standing in this competitive field. The key to success lies in taking proactive steps to continually enhance your skill set. Don't wait for the future to shape you; shape your future by skilling up in AI today.
Continue reading here:
Proactive Ways to Skill Up for AI - AiThority