
Rise Of The Machine Learning: Deep Fakes Could Threaten Our Democracy – Forbes


Photos circulated on social media earlier this summer showing former U.S. President Donald Trump hugging and even kissing Dr. Anthony Fauci. The images weren't real, of course, and they weren't the work of some prankster either. Generated with the aid of artificial intelligence-powered "Deep Fake" technology, they were shared online by Florida Governor Ron DeSantis' rapid response team.

It was part of a campaign to criticize Trump for not firing Fauci, the former top U.S. infectious disease official who pushed for the Covid-19 restrictions at the height of the pandemic.

The use of Deep Fakes in the 2024 election is already seen as a major concern, and last month the Federal Election Commission began a process to potentially regulate such AI-generated content in political ads. Advocates have said this is necessary to safeguard voters from election disinformation.

For years, there have been warnings about the danger of AI, and most critics have suggested the machines could take over in a scenario similar to science fiction films such as The Terminator or The Matrix, where they literally rise up and enslave humanity.

Yet, the clear and present danger could actually be AI used to deceive voters as we head into the next primary season.

"Deep Fakes are almost certain to influence the 2024 elections," warned Dr. Craig Albert, professor of political science and graduate director of the Master of Arts in Intelligence and Security Studies at Augusta University.

"In fact, the U.S. Intelligence Community expected these types of social media influence operations to occur during the last major election cycle, 2022, but they did not occur to any substantial effect," Albert noted.

However, the international community has already witnessed sophisticated Deep Fakes in the Russia-Ukraine War. Although the most sophisticated of these came from Ukraine, it is certain that the government of Russia took notice and is planning on utilizing these in the near future, suggested Albert.

"Based on their history of social media information warfare and how they have impacted U.S. elections generally over the past near decade, it is almost assured that the U.S. can expect to see this during the 2024 election cycle," he added.

The threat from AI-generated content is magnified due to the fact that so many Americans now rely on social media as a primary news source. Videos from sources that paid to be "verified" on platforms such as X (formerly Twitter) and Facebook can go viral quickly, and even when other users question the validity of that content from otherwise unvetted sources, many will still believe it to be real.

It is made worse because there is so little trust in politicians today.

"The danger for the individuals is this practice can do a lot of damage to the image and trustworthiness of the person attacked and eventually there will be laws put in place that would more effectively penalize the practice," suggested technology industry analyst Rob Enderle of the Enderle Group.

"Identity theft laws might apply now once attorneys start looking into how to mitigate this behavior," Enderle continued. "It is one thing to accuse an opponent of doing something they didn't do, but crafting false evidence to convince others they did it should be illegal but the laws may have to be revised to more effectively deal with this bad behavior."

The political candidates, at all levels, shouldn't wait for the FEC to act. To restore election integrity, there should be calls for anyone seeking office not to employ Deep Fakes or other manipulated videos and photos as a campaign tool.

"Beyond a doubt, all U.S. officials should agree to not engage in any social-media or cyber-enabled influence campaigns including Deep Fakes within the domestic sphere or for domestic consumption," said Albert. "Candidates should not endorse propaganda within the U.S. to impact voting behavior or policy construction at all. Engaging in Deep Fake creation or construction would fit within this category and ought to be severely restricted for candidates and politicians for ethical and national security reasons."

Yet, even if the candidates make such pledges, there will still be domestic and foreign operators who employ the technology. All of the political campaigns will likely be watching for such attacks, but voters will need to be vigilant as well. Much of this is actually pretty straightforward.

"One should never trust unverified, non-official sources of videos and sound bites," added Albert. "These are all easy to fake, manipulate, and distort, and for candidate pages, easy to create cyber-personas that aren't authentic. If videos, sound bites, or social media posts appear and seem to cause some form of emotional reaction in the public realm, that is a signal to be slow to judge the medium until it has been verified as authentic."

I am a Michigan-based writer who has contributed to more than four dozen magazines, newspapers and websites. I covered the Detroit bankruptcy for Reuters in 2014, and I currently cover international affairs for 19FortyFive and cybersecurity for ClearanceJobs.


What is the future of machine learning? – TechTarget

Machine learning algorithms generate predictions, recommendations and new content by analyzing and identifying patterns in their training data. These capabilities power widely used technologies such as digital assistants and recommendation algorithms, as well as popular generative AI tools including ChatGPT and Midjourney.

Although these high-profile examples of generative AI have recently captured public attention, machine learning has promising applications in contexts ranging from big data analytics to self-driving cars. And adoption is already widespread: In a recent survey by consulting firm McKinsey & Company, 55% of respondents said their organization had adopted AI in some capacity.

Many of the underlying concepts powering today's machine learning applications date back as far as the 1950s, but a series of advances in the 2010s enabled this widespread business use.

These developments moved AI and machine learning into the mainstream business realm. Popular AI use cases in today's workplaces include predictive analytics, customer service chatbots and AI-assisted quality control, among many others.

Machine learning developments are expected across a range of fields over the next five to 10 years.

Among the many possible use cases for machine learning, several areas are expected to lead adoption, including natural language processing (NLP), computer vision, machine learning in healthcare and AI-assisted software development.

With the rise in popularity of ChatGPT and other large language models (LLMs), it's no surprise that NLP is currently a major area of focus in machine learning. Potential NLP developments over the next few years include more fluent conversational AI, more versatile models and an enterprise preference for narrower, fine-tuned language models.

As recently as 2018, the machine learning field was overall more focused on computer vision than NLP, said Ivan Lee, founder and CEO of Datasaur, which builds data labeling software for NLP contexts. But over the past year, he's noticed a significant shift in the industry's focus.

"We're seeing a lot of companies that maybe haven't invested in AI in the last decade coming around to it," Lee said. "Industries like real estate, agriculture, insurance -- folks who maybe haven't spent as much time with NLP -- now they're trying to explore it."

As with other fields within machine learning, improvements in NLP will be driven by advances in algorithms, infrastructure and tooling. But NLP evaluation methods are also becoming an increasingly important area of focus.

"We're starting to see the evolution of how people approach fine-tuning and improving [LLMs]," Lee said. For example, LLMs themselves can label data for NLP model training. Although data labeling can't yet -- and likely shouldn't -- be fully automated, he said, partial automation with LLMs can expedite model training and fine-tuning.
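The human-in-the-loop pattern Lee describes can be sketched as a simple confidence-based router. Everything here is illustrative: `llm_label` is a placeholder for whatever model or API actually proposes labels, and the 0.9 threshold is an arbitrary assumption, not a figure from the article.

```python
def route_labels(texts, llm_label, confidence_threshold=0.9):
    """Split LLM-proposed labels into auto-accepted and human-review queues.

    llm_label is any callable returning (label, confidence) for a text --
    a stand-in for the model or API that does the actual labeling.
    """
    auto_accepted, needs_review = [], []
    for text in texts:
        label, confidence = llm_label(text)
        if confidence >= confidence_threshold:
            # High-confidence labels go straight into the training set.
            auto_accepted.append((text, label))
        else:
            # Low-confidence labels are queued for a human annotator.
            needs_review.append((text, label))
    return auto_accepted, needs_review
```

The point of the design is that labeling is only *partially* automated: a human still reviews every example the model is unsure about.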

Because language is essential to so many tasks, NLP has applications in almost every sector. For example, LLM-powered chatbots such as ChatGPT, Google Bard and Anthropic's Claude are designed to be versatile assistants for diverse tasks, from generating marketing collateral to summarizing lengthy PDFs.

But specialized language models fine-tuned on enterprise data could provide more personalized and contextually relevant responses to user queries. For example, an enterprise HR chatbot fine-tuned on internal documentation could account for specific company policies when answering users' natural language questions.

"The beauty of [ChatGPT] is that you can try a million different queries," Lee said. "But in the business setting, you really want to narrow that scope down. ... It's OK if [a recipe generator] doesn't tell me the best travel plans for San Antonio, but it better be fully tested and really good at recipes."

Outside of LLMs, computer vision is among the top areas of machine learning seeing an uptick in enterprise interest, said Ben Lynton, founder and CEO of AI consulting firm 10ahead AI.

Like NLP, computer vision has applications across many industries. Adoption will likely be spurred by improvements in algorithms such as image classifiers and object detectors, as well as increased access to sensor data and more customized models.

In generative AI, image generators such as Dall-E and Midjourney are already used by consumers as well as in marketing and graphic design. Moving forward, advances in video generation could further transform creative workflows.

Lee is particularly interested in multimodal AI, such as combining advanced computer vision capabilities with NLP and audio algorithms. "Image, video, audio, text -- using transformers, you can basically boil everything down to this core language and then output whatever you'd like," he said. For example, a model could create audio based on a text prompt or a video based on an input image.

Machine learning in healthcare could accelerate medical research and improve treatment outcomes. Promising areas include early disease detection, personalized medicine and scientific breakthroughs thanks to powerful models such as the protein structure predictor AlphaFold.

Hospitals have begun adopting clinical decision support systems powered by machine learning to aid in diagnosis, treatment planning and medical imaging analysis. AI-assisted analysis of complex medical scans could help expedite diagnosis by identifying abnormalities -- for example, correcting corrupted MRI data or detecting heart defects in electrocardiograms.

A top area of focus is developing and automating patient engagement efforts with machine learning, said Hal McCard, an attorney at law firm Spencer Fane whose practice focuses on the healthcare sector. Machine learning models can analyze massive health data sets to better predict patient outcomes, enabling healthcare providers to develop more personalized, timelier interventions that improve adherence to treatment regimens.

Here, the biggest shift isn't the underlying technology, but rather the scale. "Machine learning for data-predicted solutions and population health is not a new concept," McCard said. Rather, what's changing is "how it's being applied and the effectiveness with which you can take that output and ... use it to drive better outcomes in patient care and clinical care."

NLP has also shown some promise for clinical decision-making and summarizing physician notes. But for the foreseeable future, implementation will still require close human oversight. In a recent study, ChatGPT provided inappropriate cancer treatment recommendations in a third of cases and produced hallucinations in nearly 13% of its responses.

"When it comes to clinical decision-making, there are so many subtleties for every patient's unique situation," said Dr. Danielle Bitterman, the study's corresponding author and an assistant professor of radiation oncology at Harvard Medical School, in a release announcing the findings. "A right answer can be very nuanced, and not necessarily something ChatGPT or another large language model can provide."

Machine learning is also changing technical roles by automating repetitive coding tasks and detecting potential bugs and security vulnerabilities.

Emerging generative tools such as ChatGPT, GitHub Copilot and Tabnine can produce code and technical documentation based on natural language prompts. Although human review remains essential, offloading initial writing of boilerplate code to AI can significantly speed up the development process.

Combined with NLP advances, this could mean more interactive, chat-based functionalities in future integrated development environments. "I think in the future, coding editors will have a more chat-based interface," said Jonathan Siddharth, co-founder and CEO of Turing, a company that matches developers with employers seeking technical talent. "Every software engineer [will have] an AI assistant beside them who they can talk to when they code."

In software testing and monitoring, using machine learning techniques such as anomaly detection and predictive analytics to parse log data can help IT teams predict system failures or identify bottlenecks. Similarly, AIOps tools could use machine learning to automatically scale resource allocations based on usage patterns and suggest more efficient infrastructure setups.
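As a toy illustration of anomaly detection over log data, the sketch below flags outlier response times with a simple z-score rule. A real AIOps tool would use more sophisticated models; the 3-sigma threshold here is an arbitrary assumption.

```python
from statistics import mean, stdev

def flag_anomalies(latencies_ms, threshold=3.0):
    """Return the indices of response times more than `threshold`
    standard deviations above the mean -- a minimal stand-in for the
    anomaly-detection models an AIOps tool would run over log data."""
    if len(latencies_ms) < 2:
        return []
    mu = mean(latencies_ms)
    sigma = stdev(latencies_ms)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, x in enumerate(latencies_ms)
            if (x - mu) / sigma > threshold]
```

Flagged indices could then feed an alerting pipeline or trigger the kind of automated resource scaling the article mentions.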

Although prompt engineering -- the practice of crafting queries for generative AI models that yield the best possible output -- has recently been a hot topic in the tech community, it's unlikely that prompt engineer will continue to be a standalone role as generative models become more adept. "I don't think 'prompt engineer' is going to be a position you're hired for," Lee said.

However, experts do expect fluency with generative AI tools to become an increasingly important skill for technical professionals. "In terms of software engineering, I think we're going to see more and more engineers who know how to prompt LLMs," Siddharth said. "I think it'll be a broadly applicable skill."

Enthusiasm and optimism abound, but implementing machine learning initiatives requires addressing practical challenges and security risks as well as potential social and environmental harms.

Adopting machine learning raises pressing ethical concerns, such as algorithmic bias and data privacy. On the technical side, integrating machine learning into legacy systems and existing IT workflows can be difficult, requiring specialized skills in machine learning operations, or MLOps, and engineering. And whether emerging generative AI tools will live up to the hype in real workplaces remains unclear.

In NLP, for example, human-level fluency remains far off, and it's unclear whether AI will ever truly replicate human performance or reasoning in open-ended scenarios. LLMs can generate convincing text, but lack common sense or reasoning abilities. Similar limitations exist for other areas, such as computer vision, where models still struggle with unfamiliar data and lack the contextual understanding that comes naturally to humans. Given these limitations, it's important to carefully choose the best machine learning approach for a given use case -- if machine learning is indeed necessary at all.

"There is a class of problems that can be solved with generative AI," Siddharth said. "There is an even bigger class of problems that can be solved with just AI. There's an even bigger class of problems that could be solved with good data science and data analytics. You have to figure out what's the right solution for the job."

Moreover, generative AI is often riskier to implement than other types of models, particularly for sectors such as healthcare that deal with highly sensitive personal data. "The generative solutions that seek to produce original content and things like that, I think, carry the most risk," McCard said.

In evaluating potential privacy risks for external products, McCard emphasized the importance of understanding a model's data sources. "It's a little bit unrealistic to think that you're going to get insight into the algorithm," he said. "So, understanding that it might not ultimately be possible to understand the algorithm, then I think the question turns to the data sources and the rights of use in the data sources."

The massive amounts of training data that machine learning models require make them costly and difficult to build. Increasing use of compute resources following the generative AI boom has strained cloud services and hardware providers, including an ongoing shortage of GPUs. Additional demands for specialized machine learning hardware could further exacerbate these supply chain issues.

This ties into another foundational challenge, Lynton said: namely, the state of a company's IT infrastructure. He gave the example of a consulting engagement with an industry-leading client whose accounting, procurement and customer data systems were all on different legacy systems that could not communicate with one another -- including two that were discontinued and unmaintainable.

"It's slightly terrifying, but this is a very common situation for many large companies," Lynton said. "The reason this is an issue for AI adoption is that most leadership teams are unaware of their IT landscape and so may budget X million [dollars] for AI, but then get little to no ROI because a great deal of it is wasted in trying to patch together their systems."

McCard raised a similar concern about readiness for implementation in healthcare settings. "I have serious questions about the ability of some of these tools, especially the generative tools, to interface or be interoperable with the electronic health record systems and other systems that these health systems are currently running," he said.

The hardware and computations required for machine learning initiatives also have environmental implications, particularly with the rise of generative AI. Training machine learning models involves high levels of carbon emissions, particularly for large models with billions of parameters.

"The main risk is that people generate more carbon by training AI models than their sustainability use cases could ever save," Lynton said. "This wasn't a huge problem with the more established fields ... but now with [generative AI], it's a real threat."

To mitigate climate impacts, Lynton suggests focusing on choosing computationally efficient models and measuring the environmental impact of an AI project from start to finish. More efficient model architectures mean shorter training times and, in turn, a smaller carbon footprint.
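A back-of-the-envelope version of the start-to-finish measurement Lynton suggests multiplies GPU energy use by grid carbon intensity. Every default below is an illustrative assumption, not a measured figure.

```python
def training_emissions_kg(gpu_count, hours, gpu_power_kw=0.3,
                          pue=1.5, grid_kg_co2_per_kwh=0.4):
    """Rough CO2 estimate for a training run.

    Assumed defaults (for illustration only): ~300 W drawn per GPU,
    a data-center power usage effectiveness (PUE) of 1.5, and a grid
    intensity of 0.4 kg CO2 per kWh.
    """
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh
```

Even this crude arithmetic shows why model efficiency matters: halving training time halves the estimate directly.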

Enterprise interest in machine learning is on the rise, with investment in generative AI alone projected to grow four times over the next two to three years.

"AI transformation is the new digital transformation," Siddharth said. "Every large enterprise company that I meet is thinking about what their AI strategy should be." Specifically, he said, companies are interested in exploring how AI and machine learning can help them better serve users or improve operational efficiency.

But in practice, not all companies are ready for the transition. For many enterprises, AI and machine learning are "still surprisingly a box-ticking exercise or risky investment, more than an accepted necessity," Lynton said. In many cases, an order comes down to "incorporate AI into the business," without further detail on what that actually entails, he said.

Moving forward, ensuring success in enterprise machine learning initiatives will require companies to slow down, rather than rushing to keep up with the AI hype. Start small with a pilot project, get input from a wide range of teams, ensure the organization's data and tech stacks are modernized, and implement strong data governance and ethics practices.

Lynton suggests taking an automation-first strategy. Rather than going full steam ahead on a complex AI initiative, start by automating five manual, repetitive and rules-based processes, such as a daily data entry task that involves entering a report from a procurement system into a separate accounting system.

These automation use cases are typically cheaper and show ROI more quickly compared with complex machine learning applications. Thus, an automation-first strategy can quickly give leaders a picture of their organization's readiness for an AI initiative -- which, in turn, can help prevent costly missteps.

"In a lot of cases, the outcome is that they are not [ready], and it's more important to first upgrade [or] combine some legacy systems," Lynton said.


Machine learning-based diagnosis and risk classification of … – Nature.com

The workflow of the current study is presented in Fig. 1. The following sections describe data acquisition, radiomic feature extraction, and the diagnostic modeling framework, including feature selection methods, machine learning algorithms, and the process of evaluating and comparing the models.

Workflow of the proposed radiomics models for automated diagnosis of coronary artery disease and risk classification from rest/stress myocardial perfusion imaging using single-photon emission computed tomography.

A total of 395 patients suspected of coronary artery disease who underwent a 2-day stress-rest protocol MPI SPECT were enrolled in this study. All the data were anonymized and used without any intervention in patients' diagnosis, treatment, or management. The study was approved by the institutional review board (IRB) of Shahid Beheshti University of Medical Sciences (IRB code: IR.SBMU.MSP.REC.1399.368). Informed consent was waived for all subjects by the same IRB listed above. All methods were performed in accordance with the relevant guidelines and regulations. To emulate a real clinical scenario, we did not apply any conditional inclusion/exclusion criteria to the dataset. However, it is worth noting that the enrolled dataset did not include patients with myocardial infarction.

SPECT imaging was performed for all patients with a 2-day stress-rest myocardial perfusion protocol. Both rest and stress (induced by exercise, dipyridamole, or dobutamine) myocardial perfusion images were included in this study. On average, 555 to 925 MBq of 99mTc-MIBI was administered intravenously based on published guidelines37, 38. For the exercise stress protocol, the radiopharmaceutical was injected when the patient's heart rate reached 85% of its maximum value. Exercise testing was continued for at least 1 min after injection of the radiopharmaceutical to maintain constant maximal cardiac oxygen demand. For the pharmacological stress test, dipyridamole was injected at a dose of 0.56 mg/kg over 4 min (or dobutamine at a dose of 5 to 10 µg per kilogram every 3 to 5 min), followed by the injection of the radiopharmaceutical after three minutes39. Image acquisition was performed 15 to 20 min and 60 min post-injection for the exercise and pharmacologic stress tests, respectively40.

The images were acquired on a single-head gamma camera (Intermedical MULTICAM 1000, Germany) imaging system using 32 projections over a 180° arc from right anterior oblique to left posterior oblique, at 30 s per projection, with a matrix size of 64 × 64 and pixel dimensions of 5.357 × 5.357 mm². Supine stress imaging began 15 to 60 min after stress.

Two nuclear medicine physicians reviewed the patients' gated MPI SPECT, along with additional clinical information and history, and classified patients as normal or diagnosed with CAD. Moreover, CAD-positive patients were classified into low-, intermediate-, and high-risk groups. The ground truth was established based on a consensus between the two physicians, and in cases where there was no agreement, a senior nuclear medicine physician made the final decision. Patients' clinical information included prior MPI SPECT, blood pressure, echocardiography results, ECG and exercise test results, hyperlipidemia, Body Mass Index (BMI), and diabetes mellitus status. It is noteworthy that the physicians had access to the traditional quantitative SPECT scores, such as the Summed Stress Score (SSS), Summed Rest Score (SRS), and Summed Difference Score (SDS), as well as to wall motion and thickening information from the gated datasets and the raw SPECT projections.

The dataset included 78 normal and 317 CAD patients, comprising 135 low-, 127 intermediate-, and 55 high-risk patients. The patients' demographic information is summarized in Table 1.

The left ventricle myocardium, excluding the cardiac cavity, was manually segmented using the 3D-slicer software package41 by a nuclear medicine technologist with more than ten years of experience and edited/verified by an experienced nuclear medicine physician.

The Image Biomarker Standardisation Initiative (IBSI)42 suggests interpolating images to isotropic voxel sizes, both to obtain rotationally invariant features and to standardize the voxel size of images. However, in our dataset, all scans already had an isotropic voxel spacing of 5.357 × 5.357 × 5.357 mm³, so we kept them intact to avoid further manipulation of intensities. In addition, intensity levels inside the VOI were discretized to 64 gray levels to ease the calculation of texture features. The radiomic features were calculated using the Standardized Environment for Radiomics Analysis (SERA)43, a MATLAB-based package compliant with the IBSI guideline. To validate reproducibility, this package has been evaluated in multi-center standardization studies44. A total of 118 features, including 13 intensity-based, 12 intensity histogram (ih), 3 intensity volume histogram (ivh), and 90 3D textural features (25 gray-level co-occurrence matrix (GLCM), 16 gray-level run length matrix (GLRLM), 16 gray-level size zone matrix (GLSZM), 12 gray-level distance zone matrix (GLDZM), 5 neighborhood gray-tone difference matrix (NGTDM), and 16 neighborhood gray-level dependence matrix (NGLDM) features), were extracted for each VOI. Absolute-value first-order statistical features (min, max, average, etc.) were considered irrelevant since MPI SPECT images are not quantitative36. Morphological features were also irrelevant since the VOI was the whole left ventricular myocardium. Families, names, and abbreviations of the extracted features are listed in Supplementary Table S1.
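For illustration, discretizing VOI intensities onto 64 gray levels can be sketched with a fixed-bin-number scheme, one common IBSI-described approach. The paper does not state which discretization variant it used, so this is an assumption, not the authors' implementation.

```python
def discretize(intensities, n_levels=64):
    """Map raw VOI intensities onto discrete gray levels 1..n_levels
    using fixed-bin-number discretization: the intensity range inside
    the VOI is split into n_levels equal-width bins."""
    lo, hi = min(intensities), max(intensities)
    if hi == lo:
        # Constant region: everything falls in the first gray level.
        return [1 for _ in intensities]
    levels = []
    for x in intensities:
        level = int(n_levels * (x - lo) / (hi - lo)) + 1
        levels.append(min(level, n_levels))  # clamp the maximum intensity
    return levels
```

Texture matrices such as the GLCM are then built from these integer gray levels rather than from raw counts.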

In this section, we introduce the different links in the chain of the proposed automated diagnostic framework, including the establishment of diagnostic tasks and feature sets, feature selection, classifiers, and the model evaluation process.

Two diagnostic tasks were defined in this study for the models.

(1) The first task is CAD diagnosis: classification of patients as CAD-negative or CAD-positive (normal/abnormal classification).

(2) The second task is risk diagnosis: classification of patients as low-risk (negative and low-risk CAD) or high-risk (intermediate- and high-risk CAD). Table 2 lists the tasks and their descriptions.

Rest-, stress-, delta-, and combined (combination of all) radiomics feature sets were added to clinical features, including age, sex, family history, diabetes status, smoking status, and ejection fraction (calculated from SPECT images), to be fed into the different models for tasks 1 and 2.

The data were randomly divided into 80% training and 20% testing partitions. In all models, features extracted from the training dataset were normalized using the Z-score, and the obtained mean and standard deviation were applied to the corresponding features extracted from the test dataset. Many of the extracted features may not correlate with the investigated outcome (irrelevant features) or may correlate highly with each other (redundant features); such features provide no new information and should therefore be excluded. We used three different FS methods: one filter-based, Maximum Relevance Minimum Redundancy (mRMR)45, and two wrapper-based, Boruta46 and Recursive Feature Elimination47 with Random Forest as the core machine (RF-RFE). Since the dataset used for task 1 was unbalanced (78 normal and 317 abnormal patients), after the features were selected, we applied the Synthetic Minority Over-sampling Technique (SMOTE) to the training data with the selected features to correct for potential class-imbalance bias48.
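The leakage-free normalization step can be sketched as follows: the mean and standard deviation are learned on the training partition only and then applied unchanged to the test partition. This is a minimal illustration of the described procedure, not the authors' code.

```python
from statistics import mean, stdev

def zscore_fit(train_column):
    """Learn the normalization statistics from the training partition only."""
    return mean(train_column), stdev(train_column)

def zscore_apply(column, mu, sigma):
    """Apply the training-set statistics to any partition (train or test),
    so no information leaks from the test set into the normalization."""
    return [(x - mu) / sigma for x in column]
```

Fitting the scaler on the full dataset instead would let test-set statistics influence training, inflating performance estimates.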

Classification of the patients was performed using nine different machine learning methods, namely Decision Tree (DT), Gradient Boosting (GB), K-Nearest Neighbor (KNN), Logistic Regression (LR), Multi-Layer Perceptron (MLP), Naïve Bayes (NB), Random Forest (RF), Support Vector Machine (SVM), and eXtreme Gradient Boosting (XGB). The hyperparameters were optimized with fivefold cross-validation on the training data, using random search for models with more than 100 different parameter settings (XGB and Random Forest) and grid search for models with fewer than 100 parameter settings. Subsequently, the optimum parameters were applied to the test data with 1000 bootstraps. The hyperparameters for each classifier and the range of their values are presented in Table 3. All FS and ML models were selected based on their public availability to increase the reproducibility of the study.
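The search-strategy rule described above (random search when a model has more than 100 parameter settings, grid search otherwise) can be sketched as:

```python
def n_settings(param_grid):
    """Number of distinct hyperparameter combinations in a grid,
    where param_grid maps each hyperparameter name to its candidate values."""
    total = 1
    for values in param_grid.values():
        total *= len(values)
    return total

def choose_search(param_grid, cutoff=100):
    """Mirror the paper's rule: exhaustive grid search is feasible for
    small grids; random search samples a subset of larger ones."""
    return "random" if n_settings(param_grid) > cutoff else "grid"
```

The trade-off is standard: grid search guarantees coverage of every combination, while random search keeps the budget fixed as the grid grows.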

The area under the ROC curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE) metrics were used to evaluate the performance of the models. In addition, the performance of the best models was statistically compared using the DeLong test (significance threshold < 0.05). All analyses were performed using R 4.0 (mlr library version 2.18).
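For reference, accuracy, sensitivity, and specificity follow directly from the confusion matrix; the sketch below computes them from binary predictions. This is an illustration of the standard definitions, not the authors' R code, and AUC would additionally require the models' predicted probabilities.

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from binary labels
    (1 = CAD-positive, 0 = normal)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,  # true positive rate
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,  # true negative rate
    }
```

In the bootstrap procedure the paper describes, metrics like these would be recomputed on each resampled test set to obtain confidence intervals.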


Machine Learning for .NET Developers Starts with ML.NET and … – Visual Studio Magazine

Q&A

There's no more important topic than machine learning in the developer space right now as advanced AI constructs like ChatGPT and Microsoft's "Copilot" assistants are transforming the industry.

With software development's AI-powered future already announcing itself, putting AI to work in your apps now is a good way to position yourself for that future.

"There are so many opportunities for developers to include it in a wide variety of applications," said Microsoft MVP (AI) Veronika Kolesnikova, a senior software engineer at Liberty Mutual.

She will help developers take advantage of those opportunities in an upcoming session titled "Machine Learning for .NET Developers" at the big Live! 360 conference set for November in Orlando.

"If you've heard a lot about data collection and processing and creating and training models, and are ready to try it all yourself with the help of .NET, I'll show you where to start," said Kolesnikova, who has extensive experience in full-stack development using C#, .NET, Java and TypeScript, and the Azure and AWS clouds.

For this session, of course, it's all about Azure and its built-in tooling that helps devs get started in ML regardless of their experience.

"Your first completely functional ML.NET project won't take too long to create. In this session we'll talk about the specifics of ML.NET and its applications, and see how AutoML can be a good starting point," said the sought-after public speaker.

We recently caught up with Kolesnikova to learn more about her session in a short Q&A.

VisualStudioMagazine: What inspired you to present a session on machine learning for .NET developers?

Kolesnikova: The goal of all my talks is to show developers how easy it is to start working with ML and AI without a lot of experience. Developers don't need to be professional data scientists in order to start using ML in their applications. I feel like .NET developers can be extra intimidated by ML: they might feel they not only need special education, but also need to learn data-science-specific languages like Python and R.

"With this talk I want .NET developers to see how they can use their favorite development language for their ML tasks and feel inspired to start creating and using custom ML models."

Veronika Kolesnikova, Sr. Software Engineer, Microsoft MVP (AI), Liberty Mutual


With machine learning being integrated across various tech stacks, what sets ML.NET apart from other machine learning libraries, especially for .NET developers?

ML.NET allows .NET developers to build custom models that can be used everywhere: in the cloud, on-premises, on a device, and so on, without switching between development languages. ML.NET can also be used to work with models trained with other technologies.

For developers new to machine learning, the tech and its processes can seem daunting. How does ML.NET simplify or streamline the learning curve, and what prerequisites would you suggest for those attending the session?

I think the main benefit of ML.NET that helps to simplify the process and save time is less context switching -- writing all the parts of the solution using one language and tech stack. Another very important feature is AutoML support. Although attendees don't need to know anything about ML, I would recommend taking a look at how ML works and what model types/algorithms are available.

You've mentioned AutoML as a potential starting point. Could you expand on just one of its benefits, especially in the context of .NET development, and how it integrates with ML.NET?

AutoML saves a lot of time and effort when creating custom models. By saving time on routine tasks, .NET developers can focus on other important work: solution architecture, model integration, etc.

The Model Builder tool is a highlight of ML.NET. In your experience, how has it changed the way developers approach building and training machine learning models?

AutoML is at the core of Model Builder, so it lets developers jump-start an ML-based solution even without any experience. The democratization of ML and AI in general makes it easy for anyone to start using ML. Oh! Did I say the Model Builder was free?!

Machine learning models are only as good as the data they're trained on. How does ML.NET facilitate the process of data collection and processing, ensuring the creation of robust models?

ML.NET supports developers in all steps of the ML lifecycle: data organization and cleanup, model training and testing, retraining and MLOps. Before training a custom model it's important to understand the Responsible AI principles and data cleanup options. ML.NET has all the tools a developer can use for data preparation. Examples of data cleanup functions from ML.NET: ReplaceMissingValues, NormalizeMinMax, NormalizeBinning.
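As a rough illustration of what those three transforms do, the same cleanup steps can be sketched in plain Python. This is a hypothetical analog, not ML.NET's actual API (which is C#-based); the data and bin count are invented.

```python
def replace_missing(values):
    """Analog of ML.NET's ReplaceMissingValues: fill None with the column mean."""
    present = [v for v in values if v is not None]
    mean = sum(present) / len(present)
    return [mean if v is None else v for v in values]

def normalize_min_max(values):
    """Analog of NormalizeMinMax: rescale values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def normalize_binning(values, n_bins=4):
    """Analog of NormalizeBinning: map each value to an equal-width bin index."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

raw = [3.0, None, 9.0, 6.0]
filled = replace_missing(raw)       # missing value becomes the mean, 6.0
scaled = normalize_min_max(filled)  # rescaled into [0, 1]
bins = normalize_binning(filled)    # one equal-width bin index per value
print(filled, scaled, bins)
```

The real ML.NET transforms are composed into a pipeline and fitted on an IDataView, but the underlying arithmetic is the same as in this sketch.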

As the technology landscape constantly evolves, where do you see the future of ML.NET in the broader spectrum of AI and machine learning, and what advice would you give developers looking to specialize in this area using .NET?

ML.NET is constantly evolving to keep up with all the latest ML features. With the increasing popularity of AI, more and more developers will want to start training and using custom models. I'm sure ML.NET can be a great starting point in .NET developers' machine learning journey. It's great as both a learning tool and a production-ready tool. In the end everyone will decide for themselves how far they want to go: work with data, build custom models, use pre-built models, create an MLOps setup, combine ML.NET with other languages and tools, etc.

Note: Those wishing to attend the conference can save hundreds of dollars by registering early, according to the event's pricing page. "Save up to $400 if you register by September 22!" said the organizer of the event, which is presented by the parent company of Virtualization & Cloud Review.

About the Author

David Ramel is an editor and writer for Converge360.

More:
Machine Learning for .NET Developers Starts with ML.NET and ... - Visual Studio Magazine

Read More..

New Rice Continuing Studies course to explore generative AI … – Rice News

The Glasscock School of Continuing Studies at Rice will host a course exploring the possibilities and potential perils of generative artificial intelligence (AI) starting Sept. 20.

The course, titled Generative Artificial Intelligence and Humanity, is open to the public and will examine machine learning and related tools like ChatGPT as they bear on various aspects of human life, including education, work, health, creativity, equity, justice, democracy and what it means to be human.

Taught by Rice faculty, the course aims to provide a comprehensive overview of the latest developments in AI and its potential impacts on society.

"A primary part of the Glasscock School's mission is to provide community access to Rice faculty and the incredible and transformative research that is taking place on our campus," said Robert Bruce, dean of the Glasscock School.

"Additionally, we exist to inform and equip our city with the latest knowledge and skills needed to navigate work and life. This course is a prime example of both of those principles. As the proliferation of AI applications has exponentially accelerated just this year, we are excited to give Houstonians access to some of the leading scholars on the subject to help them understand and navigate this brave new world."

Through a series of lectures, discussions and hands-on exercises, students will explore case studies from various domains to gain a deeper understanding of the potential benefits and drawbacks of these technologies. They will also learn about strategies for ensuring that AI is used in ways that promote equity and justice.

Topics and speakers will include:

AI and Democracy, Moshe Vardi

Understanding Generative AI and Machine Learning: How Machines Learn and Decide, Vicente Ordóñez-Román

A History of the Limitations and Possibilities of Artificial Intelligence, Elizabeth Petrick

How Generative AI May Reshape the Workforce, Fred Oswald

Responsible AI for Health, Kirsten Ostherr

What It Means to Be Human in an Age of AI: Philosophical and Ethical Issues, Rodrigo Ferreira

How Human Is AI Creativity? Anthony Brandt

Generative AI and Education, Richard Baraniuk

"It's remarkable how many Rice faculty across disciplines are researching and teaching about the societal impact of generative AI," said Cathy Maris, the Glasscock School's assistant dean for Community Learning and Engagement. "No one field has the solutions to these complex challenges. This course gives the public access to speakers from the fields of computer science, history, psychology, English, medical humanities and music. We hope this confluence of perspectives will offer powerful insights for and with our community."

The course will be held on campus from 7-8:30 p.m. every Wednesday evening from Sept. 20 to Nov. 8.

To learn more, click here.

See more here:
New Rice Continuing Studies course to explore generative AI ... - Rice News

Read More..

Scientist in Molecular Engineering by Machine Learning job with … – Nature.com

The Hospital for Sick Children (SickKids) Research Institute seeks an outstanding scientist whose research is focused on the development and utilization of computational machine learning approaches for the design and engineering of biomolecules. Designed molecular biologics - including antibodies, nanobodies, miniproteins, vaccines, enzymes, toxins, peptides, and nucleic acids - are poised to revolutionize biomedical research and therapeutic discovery. This permanent position lies at the interface between computational design, structural biology, and therapeutic development, and aligns with our SickKids Precision Child Health strategic initiative.

The successful candidate will be appointed as a Scientist in the Molecular Medicine research program at the SickKids Research Institute. SickKids is a world-renowned paediatric hospital with seven fully integrated research programs. The successful applicant's laboratory will be located in the state-of-the-art Peter Gilgan Centre for Research & Learning (686 Bay Street, Toronto, Canada), in the Discovery District in the heart of downtown Toronto. This unique environment for biomedical science sits in close proximity to nine other academic hospital research centres and the University of Toronto campus.

The successful applicant will initiate and maintain an original, competitive, and independently funded research program of international caliber in the area of biomolecular design using machine learning in conjunction with experimental approaches including functional assays, structure determination, biophysical characterization and/or directed evolution. Designed biologics would be applied as probes of biological function and/or candidate therapeutic leads across the breadth of paediatric medicine. The successful candidate will benefit from the extensive research and core facilities of SickKids, the University of Toronto and its affiliated institutions for structural biology, biophysics, drug discovery, cellular imaging, functional genomics, proteomics, metabolomics, bioinformatics, computational biology, machine learning, and artificial intelligence, as well as new inter-institutional initiatives focused on biologics and therapeutic design.

The successful applicant is expected to qualify for an academic status-only appointment in an appropriate department at the University of Toronto, Canada's largest university and a world leader in machine learning. The successful candidate will also be considered by the Vector Institute for appointment as a Faculty Member or Faculty Affiliate. Vector is home to over 700 active researchers with broad expertise in artificial intelligence, including Faculty Members, Faculty Affiliates, and trainees in a world-class machine learning research environment. Vector is supported by government and private industry, in partnership with Ontario universities. Faculty that are co-recruited with Vector benefit from access to high performance computing capacity and resources for cutting edge artificial intelligence and machine learning research at Vector.

Applicants must have a PhD, MD, or MD/PhD or equivalent in a relevant discipline and a record of scientific accomplishments in the aforementioned research areas. Salary will be commensurate with qualifications and experience. A competitive benefits package will be offered along with support for relocation expenses.

Application Process

Interested individuals should email their application, comprising a curriculum vitae, a description of past research (maximum 1 page), a detailed proposed research program (maximum 4 pages), and copies of main research publications in PDF format, to the Co-Chairs, Molecular Engineering by Machine Learning Search Committee at molecularengineering.search@sickkids.ca by November 7, 2023. Applicants must also arrange to have three signed letters of reference on institutional letterhead sent directly to the Search Committee Chairs at molecularengineering.search@sickkids.ca, indicating the applicant's name in the subject line, also by November 7, 2023. Late applications may be reviewed, but priority will be given to those submitted by the closing date. The search committee will interview applicants beginning in late 2023, with a potential start date in summer or fall 2024.

SickKids believes that diversity positively impacts science and is essential to sustain our vibrant world-leading research community. SickKids welcomes applications from racialized persons / persons of colour, women, Indigenous Peoples, persons with disabilities, 2SLGBTQIA+ persons, and others who contribute to the further diversification of ideas. Informed by the Accessibility for Ontarians with Disabilities Act (AODA), the Ontario Human Rights Code, and our Access and Accommodation Policy, SickKids is proud to make accommodations to support applicants during the interview and assessment process, if requested. Please advise the SickKids Research Institute Faculty Development & Diversity Office at faculty.office@sickkids.ca of your accessibility needs during the recruitment process. Information received relating to accommodation will be addressed confidentially. As part of the application process, you will be asked to complete a brief voluntary diversity survey. Any information directly related to you is confidential and cannot be accessed by either the search committee or human resources staff. Results will be aggregated for institutional planning purposes. The self-identification information is collected, used, disclosed, retained and disposed of in accordance with the Privacy Act and the Access to Information Act.

SickKids recognizes that scholars have varying career paths and that career interruptions can be part of an excellent academic record. Candidates are encouraged to share any personal circumstances in order to allow for a fair assessment of their application.

All qualified applicants are encouraged to apply; however, in accordance with Canadian immigration requirements, Canadians and permanent residents will be given priority. The successful candidate will hold an appropriate and valid work permit, if applicable. Only those applicants selected for an interview will be contacted. If the successful candidate is a practicing MD, they must hold or be eligible for licensure with the College of Physicians and Surgeons of Ontario.

Applicants may direct any informal inquiries to:

Co-Chairs, Molecular Engineering by Machine Learning Search Committee

SickKids Research Institute - The Peter Gilgan Centre for Research & Learning

686 Bay Street Toronto, Ontario Canada M5G 0A4

molecularengineering.search@sickkids.ca

See the original post:
Scientist in Molecular Engineering by Machine Learning job with ... - Nature.com

Read More..

Updates on Multitask learning part1(Machine Learning) | by … – Medium

Author : Juan Lu, Mohammed Bennamoun, Jonathon Stewart, Jason K. Eshraghian, Yanbin Liu, Benjamin Chow, Frank M. Sanfilippo, Girish Dwivedi

Abstract : Diagnostic investigation has an important role in risk stratification and clinical decision making of patients with suspected and documented Coronary Artery Disease (CAD). However, the majority of existing tools are primarily focused on the selection of gatekeeper tests, whereas only a handful of systems contain information regarding the downstream testing or treatment. We propose a multi-task deep learning model to support risk stratification and downstream test selection for patients undergoing Coronary Computed Tomography Angiography (CCTA). The analysis included 14,021 patients who underwent CCTA between 2006 and 2017. Our novel multitask deep learning framework extends the state-of-the-art Perceiver model to deal with real-world CCTA report data. Our model achieved an Area Under the receiver operating characteristic Curve (AUC) of 0.76 in CAD risk stratification, and 0.72 AUC in predicting downstream tests. Our proposed deep learning model can accurately estimate the likelihood of CAD and provide recommended downstream tests based on prior CCTA data. In clinical practice, the utilization of such an approach could bring a paradigm shift in risk stratification and downstream management. Despite significant progress using deep learning models for tabular data, they do not outperform gradient boosting decision trees, and further research is required in this area. However, neural networks appear to benefit more readily from multi-task learning than tree-based models. This could offset the shortcomings of using a single-task learning approach when working with tabular data.
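The shared-trunk, multi-head structure that multi-task models like this rely on can be sketched in a few lines of numpy. This is an illustrative toy forward pass only, not the authors' Perceiver-based model; all layer sizes, weights, and head names are invented.

```python
import numpy as np

# Toy multi-task forward pass: one shared representation per patient feeds
# two task-specific heads (risk stratification and downstream-test choice).
rng = np.random.default_rng(0)

n_patients, n_features, hidden = 8, 16, 32
X = rng.normal(size=(n_patients, n_features))  # fabricated patient features

# Shared trunk: a single linear layer with a ReLU nonlinearity.
W_shared = rng.normal(scale=0.1, size=(n_features, hidden))
h = np.maximum(X @ W_shared, 0.0)

# Task heads reuse the same shared representation h.
w_risk = rng.normal(scale=0.1, size=(hidden, 1))  # head 1: risk score
w_test = rng.normal(scale=0.1, size=(hidden, 3))  # head 2: one of 3 downstream tests

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

risk = sigmoid(h @ w_risk)    # shape (8, 1): per-patient risk probability
tests = softmax(h @ w_test)   # shape (8, 3): per-patient test distribution
print(risk.shape, tests.shape)
```

Because both heads backpropagate through the same trunk during training, each task acts as a regularizer for the other, which is the effect the abstract credits for neural networks benefiting from multi-task learning.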

2. Low-Rank Multitask Learning based on Tensorized SVMs and LSSVMs (arXiv)

Author : Jiani Liu, Qinghua Tao, Ce Zhu, Yipeng Liu, Xiaolin Huang, Johan A. K. Suykens

Abstract : Multitask learning (MTL) leverages task-relatedness to enhance performance. With the emergence of multimodal data, tasks can now be referenced by multiple indices. In this paper, we employ high-order tensors, with each mode corresponding to a task index, to naturally represent tasks referenced by multiple indices and preserve their structural relations. Based on this representation, we propose a general framework of low-rank MTL methods with tensorized support vector machines (SVMs) and least square support vector machines (LSSVMs), where the CP factorization is deployed over the coefficient tensor. Our approach allows modeling the task relation through a linear combination of shared factors weighted by task-specific factors, and generalizes to both classification and regression problems. Through the alternating optimization scheme and the Lagrangian function, each subproblem is transformed into a convex problem, formulated as a quadratic programming or linear system in the dual form. In contrast to previous MTL frameworks, our decision function in the dual induces a weighted kernel function with a task-coupling term characterized by the similarities of the task-specific factors, better revealing the explicit relations across tasks in MTL. Experimental results validate the effectiveness and superiority of our proposed methods compared to existing state-of-the-art approaches in MTL. The code of the implementation will be available at https://github.com/liujiani0216/TSVM-MTL
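The low-rank structure over the coefficient tensor can be illustrated with a toy numpy construction (our own sketch, not the paper's code): each task's coefficient vector is a linear combination of shared factors weighted by task-specific factors, so tasks with proportional task-specific factors end up with proportional coefficients. All sizes are invented.

```python
import numpy as np

# Toy CP-style low-rank coefficients: W[t] = sum_r S[t, r] * U[r],
# i.e. shared factors U mixed by task-specific weights S.
rng = np.random.default_rng(1)

n_tasks, n_features, rank = 4, 10, 2
U = rng.normal(size=(rank, n_features))  # shared factors, one per rank-1 term
S = rng.normal(size=(n_tasks, rank))     # task-specific mixing weights

# Coupling two tasks: making S[1] proportional to S[0] forces their
# coefficient vectors to be proportional too, since both mix the same U.
S[1] = 2.0 * S[0]

W = S @ U                                # (n_tasks, n_features) coefficient matrix
print(W.shape)
```

This is the sense in which the decision function's task-coupling term can be "characterized by the similarities of the task-specific factors": all cross-task structure lives in S, while U carries what the tasks share.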

See the original post:
Updates on Multitask learning part1(Machine Learning) | by ... - Medium

Read More..

Amazon's Rajeev Rastogi on AI and Machine Learning revolutionising workplace trends – People Matters

Machine Learning and AI technologies are transforming the future of work by enabling data-driven decision-making and automating routine tasks. In an exclusive conversation with People Matters, Rajeev Rastogi, Vice President of International Machine Learning at Amazon India, shared how programmes designed for the evolving landscape prepare both young and experienced workers for emerging job roles.

Over the past decade, the trajectory of machine learning has been on a continuous upswing, gaining traction across various industries. It is being aggressively adopted by the manufacturing, financial services, retail, transportation, agriculture, and healthcare sectors, among many others. Machine Learning has become a significant lever in solving customer problems, and the demand for machine learning roles is expected to increase significantly among employers. A study by the World Economic Forum projects that AI, machine learning, and data segments will be the top emerging job roles in India over the next five years, while the talent pool is expected to remain the same.

Skill shifts have accompanied the introduction of modern technologies in the workplace since at least the Industrial Revolution, and the adoption of machine learning will mark an acceleration over the shifts of even the recent past.

Training, workshops, and initiatives help talent prepare for crucible experiences and become their best selves. Machine Learning Summer School is a good example of a platform to help foster ML excellence and strive towards developing applied science skills in young talent.

Since the advent of remote work, companies have been adopting technology to improve virtual collaboration and tapping into the promise of AI and machine learning to reinvent techniques for boosting employee engagement, productivity, and well-being.

Machine learning and data science are advanced tools used to analyze data and enhance decision-making. Informed decisions to pursue a career in this field may be made easier if you are aware of the differences between data science and machine learning.

At Amazon India, we believe in fostering a culture of growth and providing equal opportunities for all individuals to reach their full potential. Our commitment to equality extends to various communities, including women, LGBTQIA individuals, military veterans, and differently-abled individuals. We value the unique perspectives that each person brings to our workplace, recognizing the immense value they add to Amazon India.

Amazon's upskilling and reskilling initiatives play a pivotal role in ensuring the workforce is prepared for the machine learning era. While advanced skills will be in demand, Amazon also emphasizes the importance of basic digital literacy. This recognition stems from the understanding that in an age of machine learning, everyone should have the foundational skills to navigate the digital landscape effectively.

Read the rest here:
Amazon's Rajeev Rastogi on AI and Machine Learning revolutionising workplace trends - People Matters

Read More..

PhD Candidate in Machine Learning in Neurology job with … – Times Higher Education

About the job

Do you want to participate in a groundbreaking interdisciplinary research project combining neurology, advanced computational science, and technology? We have a vacant position as a PhD candidate for 3 years.

The Department of Neuromedicine and Movement Science, in collaboration with the Department of Computer Science, has recently launched a large-scale research project on the application of machine learning in headache research entitled Machine Intelligence in Headache (MI-HEAD). The overall goal of the project is to develop and apply artificial intelligence and machine learning methods and frameworks to improve the medical treatment of individuals with primary headaches.

In the project, an extensive database consisting of available Norwegian health register data combined with clinical data is used to develop models that predict the effect of migraine medications at the individual level. Such prediction may optimize the administration of correct treatment to individuals with migraine and significantly reduce the negative impacts of headache. We will also carry out a large-scale randomized controlled clinical trial to evaluate the effect of using machine learning to optimize treatment for individuals with migraine.

MI-HEAD is organized under the newly established Norwegian Centre for Headache Research (NorHEAD), a nationwide Centre for Clinical Treatment Research funded by the Research Council of Norway. NorHEAD is hosted by the Department of Neuromedicine and Movement Science at NTNU, and collaborates with academic institutions, hospitals, and industry across the nation. MI-HEAD also works in close collaboration with the world-leading High-Dimensional Neurology group at UCL Queen Square Institute of Neurology, London, UK.

The Department of Neuromedicine and Movement Science (INB) conducts research and education covering a wide range of areas related to the nervous system, sense organs, the head, and motion control and movement.

Sustainability is an important part of our social mission. As an employee at INB, we want you to get involved in the development of a sustainable future. Together with your colleagues, you will contribute to the department achieving its sustainability goals.

This position is a unique opportunity to contribute to a highly advanced and meaningful research area as part of a large-scale interdisciplinary and international initiative.

For a position as a PhD Candidate, the goal is a completed doctoral education culminating in a doctoral degree.

Duties of the position

In line with the aims of MI-HEAD, the position will be responsible for:

Required selection criteria

The appointment is to be made in accordance with the Regulations concerning the degrees of Philosophiae Doctor (PhD) and Philosophiae Doctor (PhD) in artistic research at NTNU and the National guidelines for appointment as PhD, post doctor and research assistant.

Preferred selection criteria

Personal characteristics

Emphasis will be placed on personal and interpersonal qualities.

We offer

Salary and conditions

As a PhD candidate (code 1017) you are normally paid from gross NOK 532200 per annum before tax, depending on qualifications and seniority. From the salary, 2% is deducted as a contribution to the Norwegian Public Service Pension Fund.

The period of employment is 3 years.

Appointment to a PhD position requires that you are admitted to the PhD program in Medicine and Health Sciences or Medical Technology within three months of employment, and that you participate in an organized PhD program during the employment period.

The engagement is to be made in accordance with the regulations in force concerning State Employees and Civil Servants, and the acts relating to Control of the Export of Strategic Goods, Services and Technology. Candidates who by assessment of the application and attachments are seen to conflict with the criteria in the latter act will be prohibited from recruitment to NTNU. After the appointment you must assume that there may be changes in the area of work.

The position is subject to external funding by the Norwegian Research Council.

It is a prerequisite that you can be present at and accessible to the institution daily.

About the application

The application and supporting documentation to be used as the basis for the assessment must be in English.

Publications and other scientific work must be attached to the application. Please note that your application will be considered based solely on information submitted by the application deadline. You must therefore ensure that your application clearly demonstrates how your skills and experience fulfil the criteria specified above.

The application must include:

If all, or parts, of your education has been taken abroad, we also ask you to attach documentation of the scope and quality of your entire education, both bachelor's and master's education, in addition to other higher education. A description of the documentation required can be found here. If you already have a statement from NOKUT, please attach this as well.

We will take joint work into account. If it is difficult to identify your efforts in the joint work, you must enclose a short description of your participation.

In the evaluation of which candidate is best qualified, emphasis will be placed on education, experience, and personal and interpersonal qualities. Motivation, ambitions, and potential will also count in the assessment of the candidates.

NTNU is committed to following evaluation criteria for research quality according to The San Francisco Declaration on Research Assessment (DORA). This means that we pay special attention to the quality and professional breadth of these works. We also consider experience from research management and participation in research projects. We place great emphasis on your scientific work from the last five years.

General information

Working at NTNU

NTNU believes that inclusion and diversity is a strength. We want our faculty and staff to reflect Norway's culturally diverse population and we continuously seek to hire the best minds. This enables NTNU to increase productivity and innovation, improve decision making processes, raise employee satisfaction, compete academically with global top-ranking institutions, and carry out our social responsibilities within education and research. NTNU emphasizes accessibility and encourages qualified candidates to apply regardless of gender identity, ability status, periods of unemployment or ethnic and cultural background.

The city of Trondheim is a modern European city with a rich cultural scene. Trondheim is the innovation capital of Norway with a population of 200,000. The Norwegian welfare state, including healthcare, schools, kindergartens and overall equality, is probably the best of its kind in the world. Professional subsidized day-care for children is easily available. Furthermore, Trondheim offers great opportunities for education (including international schools) and possibilities to enjoy nature, culture and family life and has low crime rates and clean air quality.

As an employee at NTNU, you must at all times adapt to the changes that developments in the subject entail and to the organizational changes that are adopted.

A public list of applicants with name, age, job title and municipality of residence is prepared after the application deadline. If you want to reserve yourself from entry on the public applicant list, this must be justified. Assessment will be made in accordance with current legislation. You will be notified if the reservation is not accepted.

If you have any questions about the position, please contact Anker Stubberud, telephone +4745229174, email anker.stubberud@ntnu.no. If you have any questions about the recruitment process, please contact HR Adviser Bente Kristin rbogen Andersen, e-mail: bente.k.a.andersen@ntnu.no

If you think this looks interesting and in line with your qualifications, please submit your application electronically via jobbnorge.no with your CV, diplomas and certificates attached. Applications submitted elsewhere will not be considered. Upon request, you must be able to obtain certified copies of your documentation.

Application deadline: 1 October 2023

NTNU - knowledge for a better world

The Norwegian University of Science and Technology (NTNU) creates knowledge for a better world and solutions that can change everyday life.

The Department of Neuromedicine and Movement Science (INB) is an ambitious department in strong development. Great diversity in medicine and health disciplines is a hallmark of the department. We give priority to social relevance and quality in both education and research.

Several of our groups of researchers and teaching staff are at the forefront of their fields in Norway and internationally. Our ambition: Through interdisciplinary collaboration, INB creates forward-looking education and research that lead the way in improving health and function. Read more about the Department here: http://www.ntnu.edu/inb

Deadline: 1 October 2023
Employer: NTNU - Norwegian University of Science and Technology
Municipality: Trondheim
Scope: Fulltime
Duration: Temporary
Place of service: Edvard Griegs gt., 7030 Trondheim

Read more here:
PhD Candidate in Machine Learning in Neurology job with ... - Times Higher Education

Read More..

Revolutionizing Drug Development with Machine Learning to … – Cryptopolitan


In a groundbreaking development that could transform the landscape of drug discovery and development, researchers at Pohang University of Science and Technology (POSTECH) have harnessed the power of machine learning to predict a drug's chances of approval before clinical trials even begin. Their findings, recently published in the esteemed journal EBioMedicine, offer a promising solution to one of the pharmaceutical industry's most pressing challenges: the high rate of drug candidates that fail during clinical trials despite showing promise in preclinical testing.

The pursuit of new pharmaceuticals is not merely a scientific endeavor but a vital mission that affects the health and well-being of humanity at large. The development of innovative drugs is instrumental in advancing medical treatments, preventing diseases, and ultimately improving the quality of life for individuals around the globe. However, the arduous journey from laboratory discovery to market availability is fraught with obstacles and uncertainties.

One of the most significant hurdles in drug development is the staggering economic losses incurred when a drug candidate fails during clinical trials. These trials involve diverse population groups and are designed to assess the safety and efficacy of a drug in real-world scenarios. Even when a drug has shown exceptional promise in preclinical stages, the transition to clinical trials can reveal unexpected challenges, leading to setbacks that cost pharmaceutical companies billions of dollars.

To address this critical issue, it is imperative to understand why certain drugs, despite passing rigorous preclinical testing, falter during clinical trials. Moreover, there is a pressing need to develop methods that can predict a drug's chances of approval before embarking on these costly and time-consuming trials.

Enter Professor Sanguk Kim and PhD candidate Minhyuk Park, leading a research team at POSTECH's Department of Life Sciences. Leveraging the power of machine learning, they have achieved remarkable success in predicting potential drug outcomes and side effects before clinical trials commence.

The crux of their groundbreaking research lies in addressing a fundamental discrepancy in drug effects observed between cell lines and animals, commonly used in preclinical testing, and their ultimate impact on humans. This discrepancy arises from variations in how drug target genes function and are expressed in cells as opposed to humans. Neglecting this critical difference can lead to severe and unanticipated side effects when drugs are administered to actual patients, deviating significantly from the promising results seen in laboratory settings.

The researchers at POSTECH tackled this challenge head-on by focusing on the disparities in drug effects between cells and humans. Their approach involved a comprehensive analysis of gene perturbation effects, measured via CRISPR-Cas9 knockouts in cells and loss-of-function mutation rates in humans. By evaluating this discrepancy, they aimed to predict the likelihood of a drug's approval, drawing on a dataset of 1,404 approved and 1,070 unapproved drugs.

To further validate the risk associated with drug targets exhibiting the cells/humans discrepancy, the researchers delved into the targets of drugs that had previously failed in clinical trials or been withdrawn from the market due to safety concerns. This meticulous analysis provided crucial insights into the factors contributing to drug failures and enabled the research team to refine their predictive models.

What sets this research apart from conventional approaches is its integration of both chemical and genetic strategies. While traditional methods primarily rely on a drug's chemical properties to predict its success, the POSTECH team recognized the significance of genetic differences between preclinical models and humans. By harmonizing these two facets, they achieved a level of accuracy previously unattainable in drug safety and success predictions.

The implications of this research are nothing short of revolutionary. Machine learning's ability to predict a drug's chances of approval with a high degree of accuracy has the potential to reshape the pharmaceutical industry. By giving pharmaceutical companies a tool to make more informed decisions about which drug candidates to advance to clinical trials, this technology could reduce the risk of costly failures and accelerate the development of safe and effective drugs.

As with any transformative technology, the use of machine learning in drug development raises important ethical considerations. Ensuring the privacy and security of patient data used in these predictive models is paramount. Additionally, regulatory agencies will need to adapt to accommodate the use of these innovative approaches in the drug approval process, striking a balance between innovation and safety.

The work conducted by Professor Sanguk Kim, Minhyuk Park, and their team at POSTECH represents a significant step forward in drug development. Their integration of machine learning, genetic insights, and chemical properties promises to revolutionize the way pharmaceuticals are discovered and developed, ultimately benefiting not only the industry but also the health and well-being of individuals worldwide. The journey from laboratory discovery to clinical approval may soon become a more efficient and predictable path, ushering in a new era of medical innovation.

Here is the original post:
Revolutionizing Drug Development with Machine Learning to ... - Cryptopolitan

Read More..