Column: How the war on terror came to look like a culture war among ourselves – Yahoo News

Smoke rises from the World Trade Center on Sept. 11, 2001, seen from a tugboat evacuating people from Manhattan to New Jersey. (Hiro Oshima / WireImage)

I got back to the U.S. from my honeymoon on Sept. 10, 2001. My wife went straight home to Washington, D.C., to start her new job at the Justice Department. I went to Washington state, where we'd gotten married, to retrieve our dog Cosmo, whom we'd left with family.

I was in a hotel room in Pendleton, Ore., when I saw the first reports of a plane hitting the World Trade Center. I used something called AOL Instant Messenger to tell my co-workers to turn on the TV.

Because my wife and I had dated for a long time, I used to say that the war on terror changed our daily lives more than getting married. As weird as that sounds, in some ways it was true. The Washington we returned to had changed. Her new job as the attorney general's chief speechwriter at the dawn of the war on terror was a bracing new chapter for us both. And politics, particularly conservative politics (my beat, for want of a better term), transformed almost overnight.

I was editor of National Review Online back then, and it fell to me to fire Ann Coulter from National Review, which mostly amounted to dropping her syndicated column. A few days after 9/11, Coulter had written this about airport security: "It is preposterous to assume every passenger is a potential crazed homicidal maniac. We know who the homicidal maniacs are. They are the ones cheering and dancing right now. We should invade their countries, kill their leaders and convert them to Christianity."

The rhetoric of the next decade often echoed this sort of beating of the war drum. A flood of books, both serious and silly, poured forth about the war on terror, World War IV and the generational struggle with Islam and Islamists that, we were told, would define our future and our children's future.

Two decades later, it seems like the past really is a foreign country, and not just for the 1 in 4 Americans who weren't alive on 9/11.

We always see the past through the prism of the present. As the historian R.G. Collingwood put it, "Every new generation must rewrite history in its own way." For instance, after 9/11, the dates that defined the past shriveled. The Bolshevik Revolution of 1917 and the fall of the Berlin Wall and end of the Cold War in 1989 shrank, while 1979, the founding of revolutionary Iran, seemed to grow larger.

Similarly, it's hard not to look back on the excesses of the war on terror and see our reflection there. Many on the left greeted the crisis more as an opportunity to find fault with America than as an opportunity to unify against a common foe.

From hysteria on the left about the Patriot Act's supposedly tyrannical assault on libraries to the right's wild fantasies that America was surrendering to sharia law, the war on terror, in hindsight, looks a lot like our current culture war, just with different issues.

In the early years, fretting over the threat to free speech posed by the war on terror was a liberal and left-wing obsession. Ward Churchill, the University of Colorado professor who called the victims of 9/11 "little Eichmanns," became a martyr to free expression. Dissent, we were told, was the highest form of patriotism.

Yet when Barack Obama became president, dissent lost its patriotic glow for leftists who wanted a heckler's veto against those who would provoke jihadists with cartoons or films mocking Muhammad or even asinine "own the libs" stunts like burning the Quran.

Wherever you come down on the specific controversies of the last 20 years, it's hard not to be filled with regret and a little embarrassment by the solipsistic tendency of American politics to turn every issue into an excuse to vent mutual partisan animosity.

Even more depressing is the realization that the last two decades have left us less prepared, at least in the ways that matter most, for the next 9/11. Of course, the next 9/11 will look different, but the reaction probably won't.

@JonahDispatch

This story originally appeared in the Los Angeles Times.

Facebook developing machine learning chip – The Information – Reuters

A 3D-printed Facebook logo is seen placed on a keyboard in this illustration taken March 25, 2020. REUTERS/Dado Ruvic/Illustration

Sept 9 (Reuters) - Facebook Inc (FB.O) is developing a machine learning chip to handle tasks such as content recommendation to users, The Information reported on Thursday, citing two people familiar with the project.

The company has developed another chip for video transcoding to improve the experience of watching recorded and live-streamed videos on its apps, according to the report.

Facebook's move comes as major technology firms, including Apple Inc (AAPL.O), Amazon.com Inc (AMZN.O) and Alphabet Inc's (GOOGL.O) Google, are increasingly ditching traditional silicon providers to design their own chips to save on costs and boost performance. (https://reut.rs/3E0NlVN)

In a 2019 blog post, Facebook said it was building custom chip designs specifically meant to handle AI inference and video transcoding to improve the performance, power and efficiency of its infrastructure, which at that time served 2.7 billion people across all its platforms.

The company had also said it would work with semiconductor players such as Qualcomm Inc (QCOM.O), Intel Corp (INTC.O) and Marvell Technology (MRVL.O) to build these custom chips as general-purpose processors alone would not be enough to manage the volume of workload Facebook's systems handled.

However, The Information's report suggests that Facebook is designing these chips completely in-house and without the help of these firms.

"Facebook is always exploring ways to drive greater levels of compute performance and power efficiency with our silicon partners and through our own internal efforts," a company spokesperson said.

Reporting by Chavi Mehta in Bengaluru; Editing by Anil D'Silva

Our Standards: The Thomson Reuters Trust Principles.

Prediction of arrhythmia susceptibility through mathematical modeling and machine learning – pnas.org

Significance

Despite our understanding of the many factors that promote ventricular arrhythmias, it remains difficult to predict which specific individuals within a population will be especially susceptible to these events. We present a computational framework that combines supervised machine learning algorithms with population-based cellular mathematical modeling. Using this approach, we identify electrophysiological signatures that classify how myocytes respond to three arrhythmic triggers. Our predictors significantly outperform the standard myocyte-level metrics, and we show that the approach provides insight into the complex mechanisms that differentiate susceptible from resistant cells. Overall, our pipeline improves on current methods and suggests a proof of concept at the cellular level that can be translated to the clinical level.

At present, the QT interval on the electrocardiographic (ECG) waveform is the most common metric for assessing an individual's susceptibility to ventricular arrhythmias, with a long QT, or, at the cellular level, a long action potential duration (APD), considered high risk. However, the limitations of this simple approach have long been recognized. Here, we sought to improve prediction of arrhythmia susceptibility by combining mechanistic mathematical modeling with machine learning (ML). Simulations with a model of the ventricular myocyte were performed to develop a large heterogeneous population of cardiomyocytes (n = 10,586), and we tested each variant's ability to withstand three arrhythmogenic triggers: 1) block of the rapid delayed rectifier potassium current (IKr Block), 2) augmentation of the L-type calcium current (ICaL Increase), and 3) injection of inward current (Current Injection). Eight ML algorithms were trained to predict, based on simulated AP features in preperturbed cells, whether each cell would develop arrhythmic dynamics in response to each trigger. We found that APD can accurately predict how cells respond to the simple Current Injection trigger but cannot effectively predict the response to IKr Block or ICaL Increase. ML predictive performance could be improved by incorporating additional AP features and simulations of additional experimental protocols. Importantly, we discovered that the most relevant features and experimental protocols were trigger specific, which shed light on the mechanisms that promoted arrhythmia formation in response to the triggers. Overall, our quantitative approach provides a means to understand and predict differences between individuals in arrhythmia susceptibility.
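
To make the shape of this pipeline concrete, here is a minimal sketch of the kind of supervised classification the abstract describes: a classifier trained on pre-perturbation action-potential features to predict whether a simulated cell develops arrhythmic dynamics under a trigger. The feature set, the synthetic data and the choice of a random forest are illustrative assumptions, not the authors' actual models or data.

```python
# Illustrative sketch only: synthetic AP features standing in for the
# simulated myocyte population described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cells = 10_586  # population size reported in the abstract

# Hypothetical pre-perturbation AP features (e.g., APD90, resting potential,
# peak voltage, calcium-transient amplitude) -- placeholders, not real data.
X = rng.normal(size=(n_cells, 4))
# Hypothetical label: 1 if the cell developed arrhythmic dynamics under a
# trigger (e.g., IKr Block), 0 otherwise.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n_cells) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("Held-out AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```

In the paper's actual workflow, the labels come from simulating each variant's response to the trigger, and several ML algorithms and feature sets are compared rather than a single model.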

Author contributions: M.V. and E.A.S. designed research; M.V., X.M., and E.A.S. performed research; M.V., X.M., and E.A.S. analyzed data; and M.V. and E.A.S. wrote the paper.

The authors declare no competing interest.

This article is a PNAS Direct Submission.

This article contains supporting information online at https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.2104019118/-/DCSupplemental.

Government and Industry May Miss Health Care’s Machine Learning Moment | Opinion – Newsweek

The public has lost confidence in our public health agencies and officials, and also our political leaders, thanks to confusing and contradictory pronouncements and policies related to the coronavirus pandemic.

But don't think for a moment that leadership failures in health care are confined to COVID-19. Two other recent stories highlight how both government and industry are missing an opportunity to use new machine learning technology to deliver better health care at a lower cost.

In the government's case, Lina Khan's activist Federal Trade Commission (FTC) is opposing a merger between Illumina and GRAIL, two medical technology companies. The latter company, which spun off from Illumina in 2015, has developed a test capable of providing early detection of 50 different types of cancer. Bringing GRAIL back under the Illumina framework would get this life-saving technology to market faster and more efficiently, but the FTC couldn't let a little thing like saving lives get in the way of its anti-business agenda. To its credit, Illumina closed the deal in August without waiting for approval from the FTC or European regulators.

Unfortunately, the government isn't the only entity that too often makes decisions without fully understanding how disruptive innovation really works. The frustrating case of Epic Systems' Early Detection of Sepsis model shows the industry itself can fall prey to a Luddite mindset.

Epic's model is designed to detect and prevent sepsis, a leading cause of death that also accounts for 5 percent of U.S. hospitalization costs. Many of these deaths and the associated costs can be prevented with early diagnosis. Epic is so committed to helping hospitals reduce sepsis-related deaths that it developed and gives away, for free, an AI-powered early-warning model that helps alert doctors and nurses when a patient might need a second look.

The sepsis algorithm has produced encouraging results with customers, but is facing criticism in the health care industry press. In one peer-reviewed study, Prisma reported a 22 percent decrease in mortality, which could translate to millions of lives saved if it were implemented globally. More recently, in a controlled clinical trial, MetroHealth found a meaningful reduction in mortality and length of stay, and reduced the time to antibiotic treatment of septic patients in the ED by almost an hour.

Critics of Epic's algorithm have claimed that it is not yet good enough at detecting sepsis cases. But this accusation reveals a misunderstanding of the technology involved. Both the GRAIL and Epic models utilize machine learning, a form of artificial intelligence that compares information from a test or a patient's medical record against vast amounts of data about previous cases with known outcomes. By its nature, machine learning gets better as it acquires new information. Failure to account for this fact has led many in government and media to dismiss promising innovations.

Machine learning cannot replace the expertise of a human doctor or nurse, but it offers those human health care providers a powerful new tool to see patterns and evidence they could never notice on their own. These cutting-edge technologies have the potential to save millions of lives and drastically reduce the cost of health care, if we let them.

Getting these complex algorithms right takes time, but we cannot allow the perfect to be the enemy of the already excellent. In the GRAIL case, the government needs to do what it always needs to do, and just get out of the way. In Epic's case, the industry needs to develop a deeper appreciation for how disruptive innovation works and work closely with bold creators to bring revolutionary technologies to life.

Steve Forbes is Chairman and Editor-in-Chief of Forbes Media.

The views expressed in this article are the writer's own.

Computer vision and deep learning provide new ways to detect cyber threats – TechTalks

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

The last decade's growing interest in deep learning was triggered by the proven capacity of neural networks in computer vision tasks. If you train a neural network with enough labeled photos of cats and dogs, it will be able to find recurring patterns in each category and classify unseen images with decent accuracy.

What else can you do with an image classifier?

In 2019, a group of cybersecurity researchers wondered if they could treat security threat detection as an image classification problem. Their intuition proved to be well-placed, and they were able to create a machine learning model that could detect malware based on images created from the content of application files. A year later, the same technique was used to develop a machine learning system that detects phishing websites.

The combination of binary visualization and machine learning is a powerful technique that can provide new solutions to old problems. It is showing promise in cybersecurity, but it could also be applied to other domains.

The traditional way to detect malware is to search files for known signatures of malicious payloads. Malware detectors maintain a database of virus definitions which include opcode sequences or code snippets, and they search new files for the presence of these signatures. Unfortunately, malware developers can easily circumvent such detection methods using different techniques such as obfuscating their code or using polymorphism techniques to mutate their code at runtime.

Dynamic analysis tools try to detect malicious behavior during runtime, but they are slow and require the setup of a sandbox environment to test suspicious programs.

In recent years, researchers have also tried a range of machine learning techniques to detect malware. These ML models have managed to make progress on some of the challenges of malware detection, including code obfuscation. But they present new challenges, including the need to learn too many features and the need for a virtual environment to analyze the target samples.

Binary visualization can redefine malware detection by turning it into a computer vision problem. In this methodology, files are run through algorithms that transform binary and ASCII values to color codes.
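
The articles do not spell out the exact transformation, but a minimal sketch of the general idea, mapping each byte of a file to a pixel and reshaping the result into an image, might look like this (the grayscale mapping and image width are assumptions; the papers use a richer byte/ASCII-category-to-color scheme):

```python
# Minimal sketch of binary visualization: map raw file bytes to pixels.
import math
import numpy as np
from PIL import Image

def visualize_binary(data: bytes, width: int = 256) -> Image.Image:
    """Interpret each byte as a grayscale pixel value and reshape into an image."""
    arr = np.frombuffer(data, dtype=np.uint8)
    height = max(1, math.ceil(len(arr) / width))
    padded = np.zeros(width * height, dtype=np.uint8)
    padded[: len(arr)] = arr
    return Image.fromarray(padded.reshape(height, width))

# Hypothetical usage on a suspicious file:
# visualize_binary(open("sample.exe", "rb").read()).save("sample.png")
```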

In a paper published in 2019, researchers at the University of Plymouth and the University of Peloponnese showed that when benign and malicious files were visualized using this method, new patterns emerge that separate malicious and safe files. These differences would have gone unnoticed using classic malware detection methods.

According to the paper, "Malicious files have a tendency for often including ASCII characters of various categories, presenting a colorful image, while benign files have a cleaner picture and distribution of values."

When you have such detectable patterns, you can train an artificial neural network to tell the difference between malicious and safe files. The researchers created a dataset of visualized binary files that included both benign and malign files. The dataset contained a variety of malicious payloads (viruses, worms, trojans, rootkits, etc.) and file types (.exe, .doc, .pdf, .txt, etc.).

The researchers then used the images to train a classifier neural network. The architecture they used is the self-organizing incremental neural network (SOINN), which is fast and is especially good at dealing with noisy data. They also used an image preprocessing technique to shrink the binary images into 1,024-dimension feature vectors, which makes learning patterns in the input data much easier and more compute-efficient.
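
The exact preprocessing pipeline is not described in the article, but shrinking a visualized binary into a fixed 1,024-dimension vector can be as simple as resizing the image to 32x32 pixels and flattening it; treat the following as an assumed sketch that builds on the visualization function above:

```python
# Assumed preprocessing: resize each visualized binary to 32x32 (= 1,024
# values) and flatten it into a feature vector for the classifier.
import numpy as np
from PIL import Image

def to_feature_vector(img: Image.Image, size: int = 32) -> np.ndarray:
    """Return a normalized 1,024-dimension feature vector for a visualized binary."""
    small = img.convert("L").resize((size, size))
    return np.asarray(small, dtype=np.float32).ravel() / 255.0
```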

The resulting neural network was efficient enough to process a training dataset of 4,000 samples in 15 seconds on a personal workstation with an Intel Core i5 processor.

Experiments by the researchers showed that the deep learning model was especially good at detecting malware in .doc and .pdf files, which are the preferred medium for ransomware attacks. The researchers suggested that the model's performance can be improved if it is adjusted to take the filetype as one of its learning dimensions. Overall, the algorithm achieved an average detection rate of around 74 percent.

Phishing attacks are becoming a growing problem for organizations and individuals. Many phishing attacks trick the victims into clicking on a link to a malicious website that poses as a legitimate service, where they end up entering sensitive information such as credentials or financial information.

Traditional approaches for detecting phishing websites revolve around blacklisting malicious domains or whitelisting safe domains. The former method misses new phishing websites until someone falls victim, and the latter is too restrictive and requires extensive efforts to provide access to all safe domains.

Other detection methods rely on heuristics. These methods are more accurate than blacklists, but they still fall short of providing optimal detection.

In 2020, a group of researchers at the University of Plymouth and the University of Portsmouth used binary visualization and deep learning to develop a novel method for detecting phishing websites.

The technique uses binary visualization libraries to transform website markup and source code into color values.

As is the case with benign and malign application files, when visualizing websites, unique patterns emerge that separate safe and malicious websites. The researchers write, "The legitimate site has a more detailed RGB value because it would be constructed from additional characters sourced from licenses, hyperlinks, and detailed data entry forms. Whereas the phishing counterpart would generally contain a single or no CSS reference, multiple images rather than forms and a single login form with no security scripts. This would create a smaller data input string when scraped."

The example below shows the visual representation of the code of the legitimate PayPal login compared to a fake phishing PayPal website.

The researchers created a dataset of images representing the code of legitimate and malicious websites and used it to train a classification machine learning model.

The architecture they used is MobileNet, a lightweight convolutional neural network (CNN) that is optimized to run on user devices instead of high-capacity cloud servers. CNNs are especially suited for computer vision tasks including image classification and object detection.

Once the model is trained, it is plugged into a phishing detection tool. When the user stumbles on a new website, it first checks whether the URL is included in its database of malicious domains. If it's a new domain, then it is transformed through the visualization algorithm and run through the neural network to check if it has the patterns of malicious websites. This two-step architecture makes sure the system uses the speed of blacklist databases and the smart detection of the neural network-based phishing detection technique.
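
A rough sketch of that two-step flow is below; the blacklist set, the trained model and the helper that turns page markup into a 1,024-dimension feature vector are placeholders standing in for the real components, not the researchers' implementation.

```python
# Sketch of the two-step check described above. `blacklist` and `model`
# (any trained classifier exposing .predict) are placeholders.
import math
import numpy as np
from PIL import Image
from urllib.parse import urlparse

def page_to_features(page_source: str, width: int = 256, size: int = 32) -> np.ndarray:
    """Visualize page markup as an image, then flatten to a 1,024-dim vector."""
    data = np.frombuffer(page_source.encode("utf-8"), dtype=np.uint8)
    height = max(1, math.ceil(len(data) / width))
    padded = np.zeros(width * height, dtype=np.uint8)
    padded[: len(data)] = data
    img = Image.fromarray(padded.reshape(height, width)).resize((size, size))
    return np.asarray(img, dtype=np.float32).ravel() / 255.0

def is_phishing(url: str, page_source: str, blacklist: set, model) -> bool:
    """Two-step check: fast blacklist lookup, then visualize-and-classify."""
    if urlparse(url).netloc in blacklist:        # step 1: known-bad domain
        return True
    features = page_to_features(page_source)     # step 2: run the classifier
    return int(model.predict(features[None, :])[0]) == 1
```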

The researchers' experiments showed that the technique could detect phishing websites with 94 percent accuracy. "Using visual representation techniques allows to obtain an insight into the structural differences between legitimate and phishing web pages. From our initial experimental results, the method seems promising and being able to fast detection of phishing attacker with high accuracy. Moreover, the method learns from the misclassifications and improves its efficiency," the researchers wrote.

[Figure: architecture of the phishing detection system]

I recently spoke to Stavros Shiaeles, cybersecurity lecturer at the University of Portsmouth and co-author of both papers. According to Shiaeles, the researchers are now in the process of preparing the technique for adoption in real-world applications.

Shiaeles is also exploring the use of binary visualization and machine learning to detect malware traffic in IoT networks.

As machine learning continues to make progress, it will provide scientists new tools to address cybersecurity challenges. Binary visualization shows that with enough creativity and rigor, we can find novel solutions to old problems.

NCAR will collaborate on new initiative to integrate AI with climate modeling | NCAR & UCAR News – UCAR

Sep 10, 2021 - by Laura Snider

The National Center for Atmospheric Research (NCAR) is a collaborator on a new $25 million initiative that will use artificial intelligence to improve traditional Earth system models with the goal of advancing climate research to better inform decision makers with more actionable information.

The Center for Learning the Earth with Artificial Intelligence and Physics (LEAP) is one of six new Science and Technology Centers announced by the National Science Foundation to work on transformative science that will broadly benefit society. LEAP will be led by Columbia University in collaboration with several other universities as well as NCAR and NASA's Goddard Institute for Space Studies.

The goals of LEAP support NCAR's Strategic Plan, which emphasizes the importance of actionable Earth system science.

"LEAP is a tremendous opportunity for a multidisciplinary team to explore the potential of using machine learning to improve our complex Earth system models, all for the long-term benefit of society," said NCAR scientist David Lawrence, who is the NCAR lead on the project. "NCAR's models have always been developed in collaboration with the community, and we're excited to work with skilled data scientists to develop new and innovative ways to further advance our models."

LEAP will focus its efforts on the NCAR-based Community Earth System Model. CESM is an incredibly sophisticated collection of component models that, when connected, can simulate atmosphere, ocean, land, sea ice, and ice sheet processes that interact with and influence each other, which is critical to accurately projecting how the climate will change in the future. The result is a model that produces a comprehensive and high-quality representation of the Earth system.

Despite this, CESM is still limited by its ability to represent certain complex physical processes in the Earth system that are difficult to simulate. Some of these processes, like the formation and evolution of clouds, happen at such a fine scale that the model cannot resolve them. (Global Earth system models are typically run at relatively low spatial resolution because they need to simulate decades or centuries of time and computing resources are limited.) Other processes, including land ecology, are so complicated that scientists struggle to identify equations that accurately capture what is happening in the real world.

In both cases, scientists have created simplified subcomponents known as parameterizations to approximate these physical processes in the model. A major goal of LEAP is to improve on these parameterizations with the help of machine learning, which can leverage the incredible wealth of Earth system observations and high-resolution model data that has become available.

By training the machine learning model on these data sets, and then collaborating with Earth system modelers to incorporate these subcomponents into CESM, the researchers expect to improve the accuracy and detail of the resulting simulations.
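
As a heavily simplified illustration of what an ML-based parameterization involves, the sketch below trains a small neural network to emulate a subgrid process from paired inputs (coarse-grained model state) and outputs (the "true" tendency from observations or high-resolution simulations). The variable names and the data are placeholders, not LEAP's or CESM's actual interfaces.

```python
# Illustrative only: a tiny neural-network emulator for a subgrid
# parameterization, trained on synthetic (coarse state -> tendency) pairs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_state_vars = 50_000, 8   # e.g., temperature/humidity at a few levels
X = rng.normal(size=(n_samples, n_state_vars))                      # coarse-grained model state
y = np.tanh(X[:, 0] * X[:, 1]) + 0.1 * rng.normal(size=n_samples)   # stand-in "true" tendency

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)
emulator.fit(X_tr, y_tr)
print("R^2 on held-out data:", emulator.score(X_te, y_te))

# Inside a host model, the emulator would replace the hand-tuned
# parameterization at each grid column and time step, e.g.:
# tendency = emulator.predict(current_state[None, :])[0]
```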

"Our goal is to harness data from observations and simulations to better represent the underlying physics, chemistry, and biology of Earth's climate system," said Galen McKinley, a professor of earth and environmental sciences at Columbia. "More accurate models will help give us a clearer vision of the future."

To learn more, read the NSF announcement and the Columbia news release.

Artificial Intelligence: Should You Teach It To Your Employees? – Forbes

Back view of a senior professor giving a class to a large group of students.

AI is becoming strategic for many companies across the world. The technology can be transformative for just about any part of a business.

But AI is not easy to implement. Even top-notch companies have challenges and failures.

So what can be done? Well, one strategy is to provide AI education to the workforce.

"If more people are AI literate and can start to participate and contribute to the process, more problems, both big and small, across the organization can be tackled," said David Sweenor, who is the Senior Director of Product Marketing at Alteryx. "We call this the Democratization of AI and Analytics. A team of 100, 1,000, or 5,000 working on different problems in their areas of expertise certainly will have a bigger impact than if left in the hands of a few."

Just look at Levi Strauss & Co. Last year the company implemented a full portfolio of enterprise training programs, for all employees at all levels, focused on data and AI for business applications. For example, there is the Machine Learning Bootcamp, an eight-week program for learning Python coding, neural networks and machine learning, with an emphasis on real-world scenarios.

"Our goal is to democratize this skill set and embed data scientists and machine learning practitioners throughout the organization," said Louis DeCesari, who is the Global Head of Data, Analytics, and AI at Levi Strauss & Co. "In order to achieve our vision of becoming the world's best digital apparel company, we need to integrate digital into all areas of the enterprise."

Granted, corporate training programs can easily become a waste. This is especially the case when there is not enough buy-in at the senior levels of management.

It is also important to have a training program that is more than just a bunch of lectures. "You need to have outcomes-based training," said Kathleen Featheringham, who is the Director of Artificial Intelligence Strategy at Booz Allen. "Focus on how AI can be used to push forward the mission of the organization, not just training for the sake of learning about AI. Also, there should be roles-based training. There is no one-size-fits-all approach to training, and different personas within an organization will have different training needs."

AI training can definitely be daunting because of the many topics and the complex concepts. In fact, it might be better to start with basic topics.

"A statistics course can be very helpful," said Wilson Pang, who is the Chief Technology Officer at Appen. "This will help employees understand how to interpret data and how to make sense of data. It will equip the company to make data-driven decisions."

There also should be coverage of how AI can go off the rails. "There needs to be training on ethics," said Aswini Thota, who is a Principal Data Scientist at Bose Corporation. "Bad and biased data only exacerbate the issues with AI systems."

For the most part, effective AI is a team sport. So it should really involve everyone in an organization.

"The acceleration of AI adoption is inescapable; most of us experience AI on a daily basis whether we realize it or not," said Alex Spinelli, who is the Chief Technology Officer at LivePerson. "The more companies educate employees about AI, the more opportunities they'll provide to help them stay up-to-date as the economy increasingly depends on AI-inflected roles. At the same time, nurturing a workforce that's ahead of the curve when it comes to understanding and managing AI will be invaluable to driving the company's overall efficiency and productivity."

Tom (@ttaulli) is an advisor/board member to startups and the author of Artificial Intelligence Basics: A Non-Technical Introduction, The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems and Implementing AI Systems: Transform Your Business in 6 Steps. He also has developed various online courses, such as one on the COBOL programming language.

Leveraging Artificial Intelligence and Machine Learning to Solve Scientific Problems in the U.S. – OpenGov Asia

The U.S. Department of Energy's (DOE) advanced Computational and Data Infrastructures (CDIs), such as supercomputers, edge systems at experimental facilities, massive data storage, and high-speed networks, are brought to bear to solve the nation's most pressing scientific problems.

The problems include assisting in astrophysics research, delivering new materials, designing new drugs, creating more efficient engines and turbines, and making more accurate and timely weather forecasts and climate change predictions.

Increasingly, computational science campaigns are leveraging distributed, heterogeneous scientific infrastructures that span multiple locations connected by high-performance networks, resulting in scientific data being pulled from instruments to computing, storage, and visualisation facilities.

However, since these federated services infrastructures tend to be complex and managed by different organisations, domains, and communities, both the operators of the infrastructures and the scientists that use them have limited global visibility, which results in an incomplete understanding of the behaviour of the entire set of resources that science workflows span.

Although scientific workflow systems increase scientists productivity to a great extent by managing and orchestrating computational campaigns, the intricate nature of the CDIs, including resource heterogeneity and the deployment of complex system software stacks, pose several challenges in predicting the behaviour of the science workflows and in steering them past system and application anomalies.

"Our new project will provide an integrated platform consisting of algorithms, methods, tools, and services that will help DOE facility operators and scientists to address these challenges and improve the overall end-to-end science workflow," said a research professor of computer science and research director at the University of Southern California.

Under a new DOE grant, the project aims to advance the knowledge of how simulation and machine learning (ML) methodologies can be harnessed and amplified to improve the DOE's computational and data science.

The project will add three important capabilities to current scientific workflow systems: (1) predicting the performance of complex workflows; (2) detecting and classifying infrastructure and workflow anomalies and explaining the sources of these anomalies; and (3) suggesting performance optimisations. To accomplish these tasks, the project will explore the use of novel simulation, ML, and hybrid methods to predict, understand, and optimise the behaviour of complex DOE science workflows on DOE CDIs.

The assistant director for network research and infrastructure at RENCI stated, "In addition to creating a more efficient timeline for researchers, we would like to provide CDI operators with the tools to detect, pinpoint, and efficiently address anomalies as they occur in the complex DOE facilities landscape."

To detect anomalies, the project will explore real-time ML models that sense and classify anomalies by leveraging underlying spatial and temporal correlations and expert knowledge, combine heterogeneous information sources, and generate real-time predictions.
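
The project's models are still being developed, but the general pattern, fitting a detector on normal workflow telemetry and then flagging departures in near real time, can be sketched as follows; the metric names and the use of an isolation forest are assumptions chosen for illustration.

```python
# Illustrative anomaly detector over workflow/infrastructure telemetry.
# Feature names (transfer rate MB/s, CPU load, queue wait s) are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Telemetry collected during known-good workflow runs.
normal = rng.normal(loc=[100.0, 0.5, 30.0], scale=[10.0, 0.1, 5.0], size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Streaming use: score each new telemetry sample as it arrives.
new_samples = np.array([[98.0, 0.52, 29.0],    # looks normal
                        [15.0, 0.95, 180.0]])  # slow transfer, saturated node
print(detector.predict(new_samples))  # 1 = normal, -1 = anomaly
```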

Successful solutions will be incorporated into a prototype system with a dashboard that will be used for evaluation by DOE scientists and CDI operators. The project will enable scientists working on the frontier of DOE science to efficiently and reliably run complex workflows on a broad spectrum of DOE resources and accelerate time to discovery.

Furthermore, the project will develop ML methods that can self-learn corrective behaviours and optimise workflow performance, with a focus on explainability in its optimisation methods. Working together, the researchers behind Poseidon will break down the barriers between complex CDIs, accelerate the scientific discovery timeline, and transform the way that computational and data science are done.

As reported by OpenGov Asia, the U.S. Department of Energy's (DOE) Argonne National Laboratory is leading efforts to couple Artificial Intelligence (AI) and cutting-edge simulation workflows to better understand biological observations and accelerate drug discovery.

Argonne collaborated with academic and commercial research partners to achieve near real-time feedback between simulation and AI approaches to understand how two proteins in the SARS-CoV-2 viral genome interact to help the virus replicate and elude the host's immune system.

Machine Learning augmented docking studies of aminothioureas at the SARS-CoV-2-ACE2 interface – DocWire News

PLoS One. 2021 Sep 9;16(9):e0256834. doi: 10.1371/journal.pone.0256834. eCollection 2021.

ABSTRACT

The current pandemic outbreak clearly indicated the urgent need for tools allowing fast predictions of bioactivity of a large number of compounds, either available or at least synthesizable. In the computational chemistry toolbox, several such tools are available, with the main ones being docking and structure-activity relationship modeling either by classical linear QSAR or Machine Learning techniques. In this contribution, we focus on the comparison of the results obtained using different docking protocols on the example of the search for bioactivity of compounds containing N-N-C(S)-N scaffold at the S-protein of SARS-CoV-2 virus with ACE2 human receptor interface. Based on over 1800 structures in the training set we have predicted binding properties of the complete set of nearly 600000 structures from the same class using the Machine Learning Random Forest Regressor approach.
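
As a hedged illustration of the modeling setup the abstract names (random-forest regression over a training set of docked structures, then prediction across a much larger compound library), the sketch below uses synthetic molecular descriptors and scores in place of the paper's actual features and data:

```python
# Sketch of random-forest regression for predicted binding affinity.
# Descriptor matrix and docking scores are synthetic stand-ins, not the
# paper's data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_train, n_descriptors = 1800, 128                    # ~1,800 training structures, as in the abstract
X_train = rng.random((n_train, n_descriptors))        # stand-in molecular descriptors
y_train = -8.0 + 2.0 * rng.standard_normal(n_train)   # stand-in docking scores (kcal/mol)

model = RandomForestRegressor(n_estimators=500, n_jobs=-1, random_state=0)
print("CV R^2:", cross_val_score(model, X_train, y_train, cv=5).mean())

model.fit(X_train, y_train)
X_library = rng.random((10_000, n_descriptors))       # stand-in for the ~600,000-compound library
predicted_scores = model.predict(X_library)
top_hits = np.argsort(predicted_scores)[:50]          # most negative = strongest predicted binding
```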

PMID:34499662 | DOI:10.1371/journal.pone.0256834

How Horizon Plans To Bring Quantum Computing Out Of The Shadows – Forbes

Breakthroughs in quantum computing keep coming: the latest quantum processor designed by Google has solved a complex mathematical calculation in less than four minutes; the most advanced conventional computers would require 10,000 years to get to an answer. Here's the problem, though: even as scientists perfect the quantum computing hardware, there aren't many people with the expertise to make use of it, particularly in real-life settings.

Joe Fitzsimons, the founder of Horizon Quantum Computing, believes he is well-placed to help here. Fitzsimons left academia in 2018, following years of research at Oxford University and the Quantum Information and Theory group in Singapore, spotting an opportunity. "We're building the tools that will help people take advantage of these advances in the real world," he explains.

Understanding Horizon's unique selling point does not require a crash course in quantum computing. The key point is that while conventional computing uses a binary processing technique (a world reduced to 0 or 1), quantum computing operates using many combinations of these digits simultaneously; that means it can get results far more quickly.

The problem for anyone wanting to take advantage of this speed and power is that conventional computer programs won't run on a quantum computer. And not only do you need a different language to tell your quantum computer what to do, the program also needs to be able to work out the best way for the machine to achieve a given outcome; not every possible route will secure an advantage.
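
To make the "different language" point concrete, even a trivial quantum program is written as gates applied to qubits rather than as conventional instructions. The snippet below uses the open-source Qiskit library purely as an illustration of that style; it is not connected to Horizon's toolchain.

```python
# Illustration only (Qiskit): a two-qubit circuit expressed as gates on
# qubits rather than as a conventional program. Not Horizon's tooling.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # read both qubits out into classical bits
print(qc.draw())
```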

A further difficulty is that quantum computer programmers are in short supply. And quantum computer programmers who also understand the intricacies of commercial problems that need solving (in financial services, pharmaceuticals or energy, say) are non-existent.

Horizon aims to fill this gap. "Our role is to make quantum computing accessible by building the tools with which people can use it in the real world," he explains. "If there is a problem that can be addressed by quantum computing, we need to make it more straightforward to do so."

Think of Horizon as offering a translation service. If you have written a programme to deliver a particular outcome on a conventional computer, Horizon's translation tool will turn it into a programme that can deliver the same outcome from a quantum processor. Even better, the tool will work out the best possible way to make that translation so that it optimises the power of quantum computing to deliver your outcome more speedily.

Horizon's Joe Fitzsimons wants to drive access to quantum computing

In the absence of such tools, real-life applications for quantum computing have been developing slowly. One alternative is to use one of the libraries of programmes that already exist for quantum computing, assuming there is one for your particular use case. Another is to hire a team of experts or buy expertise in from a consultant to build your application for you, but this requires time and money, even if talent with the right skills for your outcome is actually out there.

"Instead, we are trying to automate what someone with that expertise would do," adds Fitzsimons. "If you're an expert in your particular field, we provide the quantum computing expertise so that you don't need it."

We are not quite at the stage of bringing quantum computing to the masses. For one thing, hardware developers are still trying to perfect the machines themselves. For another, we don't yet have a clear picture of where quantum computing will deliver the greatest benefits, though it is increasingly clear that the most promising commercial use cases lie in industries that generate huge amounts of data and require complex analytics to drive insight from that information.

Nevertheless, Fitzsimons believes widescale adoption of quantum computing is coming closer by the day. He points to the huge volumes of funding now going into the industry (not least, private sector investment is doubling each year) and the continuing technical breakthroughs.

From a commercial perspective, the forecasts are impressive. The consulting group BCG thinks the quantum computing sector could create $5bn-$10bn worth of value in the next three to five years and $450bn to $850bn in the next 15 to 30 years. And Horizon is convinced it can help bring those paydays forward.
