
Want To Be AI-First? You Need To Be Data-First. – Forbes

Data First

Those who implement AI and machine learning projects learn quickly that machine learning projects are not application development projects. Much of the value of machine learning projects rests in the models, training data, and configuration information that guides how the model is applied to the specific machine learning problem. The application code is mostly a means to implement the machine learning algorithms and "operationalize" the machine learning model in a production environment. That's not to say that application code is unnecessary; after all, the computer needs some way to operationalize the machine learning model. But focusing a machine learning project on the application code misses the big picture. If you want to be AI-first for your project, you need to have a data-first perspective.

Use data-centric methodologies and data-centric technologies

It follows that if you're going to have a data-first perspective, you need to use a data-first methodology. There's certainly nothing wrong with Agile methodologies as a way of iterating toward success, but Agile on its own leaves much to be desired, as it focuses on functionality and delivery of application logic. There are already data-centric methodologies that have been proven in many real-world scenarios. One of the most popular is the Cross-Industry Standard Process for Data Mining (CRISP-DM), which focuses on the steps needed for successful data projects. In the modern age, it makes sense to merge the notably non-agile CRISP-DM with Agile methodologies to make it more relevant. While this is still a new area for most enterprises implementing AI projects, we see this sort of merged methodology as more successful than trying to shoehorn all the aspects of an AI project into existing application-focused Agile methodologies.

It stands to reason that if you have a data-centric perspective on AI, then you need to pair your data-centric methodologies with data-centric technologies. This means that your choice of tooling to implement all those artifacts detailed above needs to be, first and foremost, data-focused. Don't use code-centric IDEs when you should be using data notebooks. Don't use enterprise integration middleware platforms when you should be using tools that focus on model development and maintenance. Don't use so-called machine learning platforms that are really just a pile of cloud-based technologies or overgrown big data management platforms. The tools you use should support the machine learning goals you need, which are in turn supported by the activities you need to do and the artifacts you need to create. Just because a GPU provider has a toolset doesn't mean it's the right one to use. Just because a big enterprise vendor or a cloud vendor has a "stack" doesn't mean it's the right one. Start from the deliverables and the machine learning objectives and work your way backward.

Another big consideration is where and how machine learning models will be deployed, or in AI-speak, "operationalized." AI models can be implemented in a remarkably wide range of places: from "edge" devices sitting disconnected from the internet to mobile and desktop applications; from enterprise servers to cloud-based instances; and in all manner of autonomous vehicles and craft. Each of these locations is a place where AI models and implementations can and do exist. This degree of operationalization heterogeneity highlights even more how ludicrous the idea of a single machine learning platform is. How can one platform simultaneously provide AI capabilities in a drone, a mobile app, an enterprise implementation, and a cloud instance? Even if you source all this technology from a single vendor, it will be a collection of different tools that sit under a single marketing umbrella rather than a single, cohesive, interoperable platform that makes any sense.

Build data-centric talent

All this methodology and technology can't assemble itself. If you're going to be successful at AI projects, you're going to need to be successful at building an AI team. And if the data-centric perspective is the correct one for AI, then it makes sense that your team also needs to be data-centric. The talent needed to build apps or manage enterprise systems and data is not the same talent needed to build AI models, tune algorithms, work with training data sets, and operationalize ML models. The core of your AI team needs to be data scientists, data engineers, and the people responsible for putting machine learning models into operation. While there's always a need for coding, development, and project management, finding and growing your data-centric talent is key to the long-term success of your AI initiatives.

The primary challenge with building data talent is that it's hard to find and grow, and the main reason for this is that data isn't code. You need people who know how to wrangle many data sources, compile them into clean data sets, and then extract information needles from data haystacks. In addition, the language of AI is math, not programming logic. A strong data team is therefore also strong in the right kinds of math: understanding how to select and implement AI algorithms, properly tune hyperparameters, and properly interpret testing and validation results. Simply guessing, changing training data sets and hyperparameters at random, is not a good way to create AI projects that deliver value. As such, data-centric talent grounded in a fundamental understanding of machine learning math and algorithms, combined with an understanding of how to deal with big data sets, is crucial to AI project success.

Prepare to continue to invest for the long haul

It should be pretty obvious at this point that the set of activities for AI is very much data-centric, and that the activities, artifacts, tools, and team need to follow from that data-centric perspective. The biggest challenge is that so much of this ecosystem is still being developed and is not fully available to most enterprises. AI-specific methodologies are still being tested in large-scale projects. AI-specific tools and technologies are still being developed and enhanced, with changes released at a rapid pace. AI talent remains tight, and investment in growing that skill set is only just beginning.

As a result, organizations that need to be successful with AI, even with this data-centric perspective, need to be prepared to invest for the long haul. Find your peer groups to see what methodologies are working for them and continue to iterate until you find something that works for you. Find ways to continuously update your team's skills and methods. Realize that you're on the bleeding edge with AI technology and prepare to reinvest in new technology on a regular basis, or invent your own if need be. Even though the history of AI spans at least seven decades, we're still in the early stages of making AI work for large scale projects. This is like the early days of the Internet or mobile or big data. Those early pioneers had to learn the hard way, making many mistakes before realizing the "right" way to do things. But once those ways were discovered, organizations reaped big rewards. This is where we're at with AI. As long as you have a data-centric perspective and are prepared to continue to invest for the long haul, you will be successful with your AI, machine learning, and cognitive technology efforts.

Visit link:
Want To Be AI-First? You Need To Be Data-First. - Forbes


Data Transparency and Curation Vital to Success of Healthcare AI – HealthLeaders Media

Amid advances in precision medicine, healthcare faces the twin challenges of curating patient data and tailoring its use to drive genomics-powered breakthroughs.

That was the takeaway from the AI & data sciences track of last week's Precision Medicine World Conference in Santa Clara, California.

"There aren't a lot of physicians saying, 'Bring me more AI,' " said John Mattison, MD, emeritus CMIO and assistant medical director of Kaiser Permanente. "Every physician is saying bring me a safer and more efficient way to deliver care."

Mattison recalled his prolonged conversations with the original developers of IBM's Watson AI technology. "Initially they had no human curation whatsoever," he said. "As Stanford has published over and over again, most of medical published literature is subsequently refuted or ignored, because it's wrong. The original Watson approach was pure machine curation of reported literature without any human curation."

But human curation is not without its own biases. Watson's value to Kaiser was further eroded by Watson's focus on oncology patient data from Memorial Sloan Kettering Cancer Center and MD Anderson Cancer Center, Mattison said.

"I don't really want curation from those two institutions, because they're fee for service, and you get all these biases. The amount of money the drug companies spend on lobbying doctors to use their more expensive novel drugs is remarkably influential. If you're involved in clinical care, you want to take the best output of machine learning and you want to make sure that you have good human curation," which in Kaiser's case, emphasizes value-based care over fee-for-service, he added.

A key issue in the human curation of machine learning and AI is how transparent the curation is, and how accessible the authoring environment for such curation is, so that clinicians can make appropriate substitutions for their own requirements, Mattison said.

A current challenge for health systems is being approached by machine learning and AI companies that remain in stealth mode and are not up-front about how and where their technology will share patient data, making it difficult for chief data officers to introduce the technology to the health system.

"Using [the patient data] for some commercial, unexpected purpose is very different than using it for the purpose that you have agreed with the health system that you're going to be using it with," said Cora Han, JD, chief health data officer with UC Health, the umbrella organization for UCSF, UCLA, UC Irvine, UC Davis, UC San Diego, and UC Riverside health systems.

Related: Opinion: An 'Epic' Pushback as U.S. Prepares for New Era of Empowering Patient Health Data

A recurring theme during the conference was the need for a third party to provide trusted certification that machine learning and AI algorithms are free from bias, such as confirmation bias or ascertainment bias, meaning basing algorithms on a cohort of patients who do not represent the entire population served by the health system.

"We have no certification groups right now that certify these things as being fair," said Atul Butte, MD, director of UCSF's Bakar Computational Health Sciences Institute. "Imagine a world in five to 10 years where we're only going to buy or license methods or algorithms that have been certified as being fair in our population, in the University of California."

UCLA Health has met or exceeded the goal of representing its own demographics within Atlas, the system's community health initiative that "aims to recruit 150,000 patients across the health system with the goal of creating California's largest genomic resource that can be used for translational medicine," according to the UCLA Health website.

"We are a far cry from [meeting] L.A. county" demographics, said Clara Lajonchere, PhD, deputy director of the UCLA Institute for Precision Health. Currently, 15% of Atlas patients are Latino, and 6%7% are African-American. "While those rates exceed that of some of the other large-scale studies, it still really underlies how critical diversity is going to be."

Recent alliances such as the Google/Ascension agreement, or the Mayo Clinic/nference startup for drug development are further enabling the kind of volume, velocity, and variety that will drive machine learning and AI innovations in healthcare, Han said.

HIPAA, which has enabled business associates such as nference to safely enter patient-sharing relationships with providers such as Mayo, can work against the principle of transparency. "If a tech company signs a BAA with a hospital system, [outsiders] don't get to see that contract," Butte said. "We could take it on faith that all the right terms were put in that contract, but sometimes just naming two entities in a sentence seems sinister and ominous in some ways."

Health systems with more than 100 years of trust associated with their brand find themselves partnering with startups with little or no such trust, and this creates additional tension in the healthcare system.

In addition, concerns linger that deidentified data could somehow be reidentified through the course of its use and sharing by innovative startups.

"Whole genomes, it's hard to deidentify those," Han said. "These are issues that we will be working through."

"We just need to develop a set of standards about how privacy is controlled," said Brook Byers, founder and partner with Kleiner Perkins, a Silicon Valley venture capital firm.

Related: Epic's CEO Is Urging Hospital Customers to Oppose Rules That Would Make It Easier to Share Medical Info

Scott Mace is a contributing writer for HealthLeaders.

See the original post:
Data Transparency and Curation Vital to Success of Healthcare AI - HealthLeaders Media


Patenting Considerations for Artificial Intelligence in Biotech and Synthetic Biology – Part 2: Key Issues in Patent Subject Matter Eligibility -…

In our first blog in this multi-part series, we explored key considerations for protecting artificial intelligence (AI) inventions in biotech and synthetic biology. In this part 2 of the series, we will examine some key considerations and hurdles in patenting machine learning-based biotech or synthetic biology inventions.

In this series, we are focusing on artificial intelligence inventions, but as Marvin Minsky aptly pointed out, that neologism is a "suitcase word" because you can stuff a lot of intelligence classifications and different types of technologies into it. Many of the ground-breaking AI developments in biotech are in the AI subfield of Machine Learning. First, we will briefly discuss what is meant by Machine Learning and define some relevant terms. Second, we will review some real-world challenges in patenting AI inventions.

What is Machine Learning?

Machine learning (ML) is basically a term covering algorithms that use statistics to find and apply patterns in digitally stored data, which can be images, numbers, words, etc. (For a user-friendly overview of the different terms, please see Karen Hao's article "What is Machine Learning?" from the MIT Tech Review, available here.) Deep learning is a subfield of machine learning.

Source: https://www.edureka.co/blog/ai-vs-machine-learning-vs-deep-learning/

There are three general types of ML algorithms: Supervised Learning, Unsupervised Learning, and Reinforcement Learning. The MIT Tech Review published this helpful flow chart to explain what kind of ML the algorithm is using, though if you want a more technical explanation this is a helpful resource.
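For a concrete sense of the first two types, here is a small Python sketch using scikit-learn on toy data invented for this illustration (reinforcement learning, which learns from trial-and-error rewards, does not reduce to a few lines as neatly):

```python
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Toy data: four points that form two obvious groups.
X = [[1, 2], [2, 1], [8, 9], [9, 8]]

# Supervised learning: we provide the answers (labels) to learn from.
y = [0, 0, 1, 1]
clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.5, 1.5]]))  # -> [0], it lands near the first group

# Unsupervised learning: no labels; the algorithm finds the groups itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                 # two discovered clusters
```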

An ML algorithm is a way of classifying information, and a neural network is a type of algorithm that is meant to classify information the way a human brain does. For example, a neural network can look at pictures, recognize certain elements, like pixel colors, and classify them according to what they show. Neural networks are made up of nodes. A node is an individual computation in which an algorithm assigns significance (or weight) to each input; the sum of that weighted information is then passed through an activation function, which determines what, if anything, is done with the output.

Here's a diagram of what one node might look like:

Image Credit: Skymind

A neural network is several nodes connected together. Deep Learning (DL) refers to a neural network in which more than three layers of nodes are stacked.

Image Credit: Oracle
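To make the node computation and the layer stacking concrete, here is a minimal NumPy sketch. The weights are random placeholders purely for illustration; a trained network would learn these values from data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer(inputs, weights, biases):
    """Each node weights its inputs, sums them with a bias, and passes
    the result through an activation function (here, a sigmoid)."""
    return sigmoid(weights @ inputs + biases)

rng = np.random.default_rng(0)
x = np.array([0.2, 0.7, 0.1])    # e.g., pixel-derived features

# Stacking more than three layers of nodes gives a "deep" network.
h1 = layer(x,  rng.normal(size=(4, 3)), np.zeros(4))
h2 = layer(h1, rng.normal(size=(4, 4)), np.zeros(4))
h3 = layer(h2, rng.normal(size=(4, 4)), np.zeros(4))
out = layer(h3, rng.normal(size=(2, 4)), np.zeros(2))   # e.g., two classes
print(out)
```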

DL has spawned many of the most significant advancements in biotech in the past few years and continues to drive progress. For example, DL can predict how genetic variation alters cellular processes involved in pathogenesis, use patient data to characterize disease progression, or speed up computational methods to predict protein structure.

Patenting Machine Learning Inventions

Applying for patent protection presents certain risks, especially for computer-based inventions. If your invention is merely a way to improve the functioning of a computer, without tying it to a practical application, then there is a significant risk that the patent office may ultimately reject the application because it is based on ineligible subject matter. Abstract ideas are subject matter that is ineligible for patent protection and can include mental processes (concepts performed by the human mind), methods of organizing human activity (such as fundamental economic concepts or managing interactions between people), or mathematical relationships, formulas or calculations. This last category is particularly important to AI-based inventions. For example, under U.S. law, an invention that is a stand-alone algorithm is likely to be seen as no more than abstract mathematics and, therefore, not eligible for patent protection.

Mathematical calculations that can be performed by the human mind are "the basic tools of scientific and technological work," which are "free to all men and reserved exclusively to none." Mayo Collaborative Servs. v. Prometheus Labs., 566 U.S. 66 (2012). This may seem an absurd restriction to some, as the human mind might, in principle, carry out the millions of calculations a neural network performs, even if there is no guarantee it could finish them in one lifetime. However, permitting patents on basic calculations would cripple scientific exploration and advancement. Therefore, to be eligible for patent protection, an invention centered on an algorithm must significantly advance a specific technical application, not merely use an algorithm to solve a problem. The patent application must explain in detail how the claimed algorithm interacts with the physical infrastructure of the computer, network, or both, and explain the real-world problem the invention is meant to address.

As previously discussed here and here, tying algorithms to real-world solutions is a requirement in many jurisdictions globally, including at the European Patent Office (EPO) and in Israel. For example, new guidelines issued by the EPO stress that AI inventions must have an application in a specific field of technology. In this respect, patent offices are taking a somewhat technical approach and treating the AI elements of an invention like any other software element.

Many AI patents face an uphill battle for patentability due to their use of computer systems and algorithms and the rapidly evolving law surrounding subject matter eligibility. To address the changes in law and stem the many patent application rejections, the U.S. Patent and Trademark Office (USPTO) issued Revised Patent Subject Matter Eligibility Guidance in January 2019 and a Patent Eligibility Guidance Update in October 2019, which included examples illustrating the revised subject matter eligibility analysis. USPTO director Andrei Iancu stated recently that rejections of AI-related patent applications have dropped from 60% to about 32% since the January 2019 guidance was issued.

The USPTO's Example 39 from the October 2019 Patent Eligibility Guidance Update provides a very helpful example of an allowable patent claim for a method of training a neural network for facial detection. The invention attempts to solve the problem of inaccurate facial recognition by using an expanded training set of facial images and then addressing false positives by retraining the algorithm on a new set of images.

The example claim recites "A computer-implemented method of training a neural network for facial detection comprising: [a set of digital images] training the neural network in a first stage using the first training set; creating a second training set for a second stage of training comprising the first training set and digital non-facial images that are incorrectly detected as facial images after the first stage of training; and training the neural network in a second stage using the second training set."

The USPTO analysis of this claim finds that it is patent-eligible subject matter, despite including an algorithm, because while some of the limitations may be based on mathematical concepts, "the mathematical concepts are not recited in the claims." This shows that when an invention involves a neural network, a key focus of the claims should be the inventive means of achieving the result, not the underlying mathematical concepts. While the claim does mention the computer-implemented method, it does not recite any mathematical relationships, formulas, or calculations.
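To see what the claimed two-stage procedure looks like in practice, here is a rough scikit-learn sketch using synthetic stand-in data. It mirrors the steps recited in Example 39's claim, not any actual patented system; a real facial detector would use image features and a much larger network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Synthetic stand-ins for image feature vectors (64 features each).
X_faces    = rng.normal( 1.0, 1.0, size=(200, 64))
X_nonfaces = rng.normal(-1.0, 1.0, size=(200, 64))

# First stage: train on the first training set.
X1 = np.vstack([X_faces, X_nonfaces])
y1 = np.array([1] * 200 + [0] * 200)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X1, y1)

# Find non-facial images the first-stage network wrongly calls faces.
X_new_nonfaces = rng.normal(-1.0, 1.0, size=(200, 64))
false_positives = X_new_nonfaces[net.predict(X_new_nonfaces) == 1]

# Second stage: retrain on the first set plus those false positives.
X2 = np.vstack([X1, false_positives])
y2 = np.concatenate([y1, np.zeros(len(false_positives), dtype=int)])
net.fit(X2, y2)
```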

One example of an invention that uses deep learning is U.S. Patent No. 10,196,427, "Epitope focusing by variable effective antigen surface concentration." This invention provides compositions and methods for the generation of an antibody or immunogenic composition, such as a vaccine, through epitope focusing by variable effective antigen surface concentration.

According to the disclosure and the abstract, the invention relies heavily on in silico bioinformatics, meaning scientific experiments or research conducted or produced by means of computer modeling or computer simulation for collecting and analyzing complex biological data. For example, the disclosure describes neural networks to generate a map of the protein surfaces of a particular antigen or to generate an in silico library of antigenic variants. The abstract describes one step of the invention as "generating in silico a library of potential antigens for use in the immunogenic composition."

However, the claims avoid tripping up on the subject matter eligibility requirement by not reciting the algorithms or the use of a computer in the claims. The claims merely describe what the computer is used to accomplish, without mentioning that the calculations are performed in silico.

For example, claim 1 recites "a method for eliciting an immune response in a human subject, the method comprising: delivering at least six antigens to the human subject, wherein each of the at least six antigens comprises: a target epitope that is common to each of the at least six antigens; and one or more non-conserved regions that are outside of the target epitope; wherein the at least six antigens are delivered such that each individual antigen of the at least six antigens is delivered in an amount that is insufficient to be immunogenic to the human subject on its own, while the at least six antigens are delivered in a combined amount that is sufficient to generate an immune response to the target epitope in the human subject." Claim 1 and the remaining claims, all dependent, may contain limitations that are based on mathematical concepts, but the claim language does not recite those mathematical concepts.

Researchers have made many significant advancements in the diagnosis of different kinds of cancer through ML. Patenting these types of aggregated-data inventions can be a challenge, as inventions that merely present the results of collecting and analyzing information, without additional elements that identify a particular tool for the presentation or application of the data, are likely abstract ideas. As noted above, abstract ideas are unpatentable subject matter, and inventions that involve mathematical manipulation of data without additional elements appended to that abstract idea are unpatentable.

In a recent example, the U.S. Patent Trial and Appeal Board (PTAB) affirmed an Examiner's determination that Application No. 13/417,188, aimed at using ML to modernize cancer treatment, failed subject matter eligibility. 2018 Pat. App. LEXIS 3052, *3 (PTAB April 19, 2018). In that case, the invention was a way to connect multiple genomic alterations, such as copy number, DNA methylation, somatic mutations, mRNA expression and microRNA expression, to create an "[i]ntegrated pathway analysis [] expected to increase the precision and sensitivity of causal interpretations for large sets of observations."

Claim 1 of the patent application read as follows: "1. A method of conveying biological sequence data, comprising: generating a data packet including a first header containing network routing information, a second header containing header information pertaining to the biological sequence data, and a payload containing a representation of the biological sequence data relative to a reference sequence; storing the data packet in a queue in communication with a network interface; and transmitting the data packet over a network accessible through the network interface."

The patent application was rejected by the USPTO as ineligible subject matter because the claimed method of generating a dynamic pathway map (DPM) was merely "algorithmic concepts involving the mathematical manipulation of data." The Examiner determined that the claims "do not include additional elements/steps appended to the abstract idea that are sufficient to amount to significantly more than mathematical concepts" and that, even though the additional elements appended to the abstract idea integrated multiple data sources to identify reproducible and interpretable molecular signatures of tumorigenesis and progression, those elements were routine and conventional techniques for collecting data.

In addition to the abstract idea issues, the '188 application was also rejected by the USPTO for double patenting, which means that another patent application filed by the same inventors presumably covered the same technology. Interestingly, the USPTO issued Patent No. 10,192,641 on that other patent application. That other application included a limitation in claim 1 that reads: "formulating a treatment option for the patient based on the reference pathway activity of the factor graph, wherein at least one of the above method operations is performed through a processor." This limitation may have provided the missing additional steps appended to the abstract idea to amount to sufficiently more than mathematical concepts.

Many nascent protein engineering technology companies are developing fascinating sustainably sourced products using ML. One such company is Arzeda, which is developing scratch-proof computer screens for cell phones using a renewable source you might not believe: tulips. Arzeda has ported the metabolic pathway responsible for making a natural molecule called tulipalin, found in tulips, into industrial microbes. Arzeda is harnessing the power of machine learning to combine protein design, pathway design, HT screening and strain construction, to create and improve designer fermentation strains for virtually any chemical.

Arzeda's U.S. Patent No. 10,025,900 describes its invention as providing "computational methods for engineering, selecting, and/or identifying proteins with a desired activity," but as we have seen with the other successful applications, the claims do not state the mathematical equations, but rather the process used to obtain the desired results. Here is part of claim 1 of the '900 patent:

(c) computationally selecting one or more amino acid sequences having structural homology and/or sequence homology to the template protein having the enzymatic activity;

(d) providing a structural model for each of the amino acid sequences selected in step (c);

(e) selecting the amino acid sequences satisfying the functional site description comprising steps of computationally docking a ligand and optimizing positioning of amino acid side chains and main chain atoms of the amino acid sequences; and

(f) recombinantly expressing and confirming the enzymatic activity for at least one of the amino acid sequences that satisfies the functional site description selected from step (e), thereby making the protein having the enzymatic activity.

Key Lessons in Patenting Machine Learning Inventions

There are two key takeaways from the USPTO guidelines and the successful ML-based patent applications. First, focus the claims on how the desired result is achieved. The EPO guidelines impose comparable requirements, calling for an inventive step and a "further technical effect," which you can read about here. Second, use caution when reciting specific mathematical equations within the claim language.

Our next post in this series will focus on the challenges and benefits of protecting your AI biotech inventions under trade secret law, and on how to determine which kind of IP protection, patent or trade secret, would be most beneficial for your AI biotech invention.

Excerpt from:
Patenting Considerations for Artificial Intelligence in Biotech and Synthetic Biology - Part 2: Key Issues in Patent Subject Matter Eligibility -...


How Artificial Intelligence Is Improving The Pharma Supply Chain – Forbes

Artificial intelligence (AI) will transform the pharmaceutical cold chain not in the distant, hypothetical future, but in the next few years. As the president of a company that has been actively involved in the creation of an application that will utilize machine learning to generate predictive data on environmental hazards in the biopharmaceutical cold chain cycle, I've seen firsthand the promise of this technology.

When coupled with machine learning and predictive analytics, the AI transformation goes much deeper than smarter search functions. It holds the potential to address some of the biggest challenges in pharmaceutical cold chain management. Here are some examples:

Analytical decision-making: Most companies capture only a fraction of their data's potential value. By aggregating and analyzing data from multiple sources (a drug order and weather data along a delivery route, for example), AI-based systems can provide complete visibility with predictive data throughout the cold chain. Before your cold chain starts, you can predict hurdles and properly allocate resources.

Analytical decision-making relies on companies having actionable data and real-time visibility throughout the cold chain. Just-in-time delivery of uncompromised drug product relies on predictive data analytics. With the help of analytical decision-making, cold chain logistics costs, overall drug costs, patient risk, and gaps in the pharmaceutical pipeline will all be significantly reduced.

For example, BenevolentAI in the United Kingdom is using a platform of computational and experimental technologies and processes to draw on vast quantities of mined and inferred biomedical data to improve and accelerate every step of the drug discovery process.

Supply chain management (SCM): A 2013 study by McKinsey & Company detailed a severe lack of agility in pharmaceutical supply chains. It noted that replenishment times from manufacturer to distribution centers averaged 75 days for pharmaceuticals but 30 days for other industries, and it reported the need for better transparency around costs, logistics, warehousing and inventory. Assuring drug efficacy, patient identity and chain of custody, integrated with supply chain agility, is where the true value of AI lies for the drug industry.

DataRobot is one example: an agile pharmaceutical supply chain can be implemented with an AI platform powered by open-source algorithms that automate model building using historical drug delivery data. Supply chain managers can build a model that accurately predicts whether a given drug order could be consolidated with another upcoming order to the same location or department.
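As an illustration of what such a consolidation model could look like (the feature names and data below are invented for this sketch and are not DataRobot's API), a supply chain team might train an off-the-shelf classifier on its historical delivery records:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical historical delivery records; in practice these would be
# loaded from the company's own systems, e.g. with pd.read_csv(...).
df = pd.DataFrame({
    "same_destination":      [1, 1, 0, 1, 0, 1, 0, 1, 1, 0] * 20,
    "hours_between_orders":  [2, 5, 30, 1, 48, 3, 72, 4, 6, 24] * 20,
    "temp_range_compatible": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1] * 20,
    "was_consolidated":      [1, 1, 0, 0, 0, 1, 0, 1, 1, 0] * 20,
})
X = df.drop(columns="was_consolidated")
y = df["was_consolidated"]          # 1 if the orders shipped together

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```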

Inventory management: Biomarkers are making personalized medicine mainstream. Consequently, pharmaceutical companies must stock many more therapeutics, but in much lower quantities. AI-based inventory management can determine which product is most likely to be needed (and how often), track exactly when it is delivered to a patient, and flag delays or incidents that might trigger a replacement shipment within hours.

OptumRx increasingly uses AI/ML to manage the data it collects in healthcare settings. Since becoming operational, the AI/ML system has continuously improved itself by analyzing data and outcomes, all without additional intervention. Early results indicate that AI/ML is already adding agility to the cold chain by reducing shortages and excess inventory of needed drug products.

Warehouse automation: Integrating AI into warehouse automation tools speeds communications and reduces errors in pick and pack settings. At its simplest, AI predicts which items will be stored the longest and positions them accordingly. With this approach, Lineage Logistics, a cold-chain food supplier, increased productivity by 20%. In another example, AI positions high-volume items so they are easily accessible while still reducing congestion.

FDA Embraces AI and Big Data

Historically, pharmaceutical companies have been slow to adapt to disruptive technologies because of the important oversight role played by the FDA. However, the FDA realizes AI's potential to learn and improve performance. It has already approved AI to detect diabetic retinopathy and potential strokes in patients, and updated regulations are expected soon to help streamline the implementation of this important tool.

Gain A Competitive Edge

For pharmaceutical companies looking to implement AI into their cold chain, here are some steps to take to become an early adopter:

1. Prepare your data, and ensure you own it. You need a strong pipeline of clean data and a mature logistics ecosystem with historical data on temperature, environmental conditions and packaging, as well as any other data you collect during your cold chain. If you don't have clean data stored, start collecting it now. If you think you have the data, verify that you own it. Some vendors claim ownership of the thermal data their systems generate and don't allow it to be manipulated by third-party software. In that case, it can't be combined with other data sources for AI analysis. Either negotiate ownership or change vendors.

2. Define your area of need: Where do you need a competitive edge? Start small with one factor that makes a measurable impact on your cold chain. That may be inventory control, packaging optimization, logistics, regulatory strategy or patient compliance. Track metrics, and tie them to business value.

3. Assemble the right people, and verify your internal capabilities. Implementing or supporting an AI/machine learning strategy requires skills that IT personnel typically lack. Consider upskilling your IT team or adding an AI skills requirement for your next new hires.

AI is at a turning point. In the next decade, it is expected to contribute a massive amount of money to the global economy. In the life sciences market alone, AI is valued at $902.1 million and is expected to grow at a rate of 21.1% through 2024. As part of this growth, I believe AI will also make significant contributions to the pharmaceutical supply chain.

See the original post:
How Artificial Intelligence Is Improving The Pharma Supply Chain - Forbes


Forget The ROI: With Artificial Intelligence, Decision-Making Will Never Be The Same – Forbes

People are the ultimate power behind AI.

There are a lot of compelling things about artificial intelligence, but people still need to get comfortable with it. As shown in a recent survey of 1,500 decision makers released by Cognilytica, about 40 percent indicate that they are currently implementing at least one AI project or plan to do so. Issues getting in the way include limited availability of AI skills and talent, as well as justifying ROI.

Having the right mindset is half the battle in successfully building AI into the organization. This means looking beyond traditional, cold ROI measures and looking at the ways AI will enrich and amplify decision-making. Ravi Bapna, professor at the University of Minnesota's Carlson School of Management, says attitude wins the day for moving forward with AI. In a recent Knowledge@Wharton article, he offers four ways AI means better decisions:

AI helps leverage the power and the limitations of tacit knowledge: Many organizations have data that may sit unused because it's beyond the comprehension of the human mind. But with AI and predictive modeling applied, new vistas open up. "What many executives do not realize is that they are almost certainly sitting on tons of administrative data from the past that can be harnessed in a predictive sense to help make better decisions," Bapna says.

AI spots outliers: AI quickly catches outlying factors. These algorithms fall in the descriptive analytics pillar, a branch of machine learning that generates business value by exploring and identifying interesting patterns in your hyper-dimensional data, something at which we humans are not great.

AI promotes counter-factual thinking: Data by itself can be manipulated to justify pre-existing notions, or miss variables affecting results. "Counter-factual thinking is a leadership muscle that is not exercised often enough," Bapna relates. "This leads to sub-optimal decision-making and poor resource allocation." Causal analytics encourages counter-factual thinking. "Not answering questions in a causal manner or using the highest paid person's opinion to make such inferences is a sure shot way of destroying value for your company."

AI enables combinatorial thinking: Even the most ambitious decisions are tempered by constraints, to the point where new projects may not be able to deliver. "Most decision-making operates in the context of optimizing some goal (maximizing revenue or minimizing costs) in the presence of a variety of constraints (budgets, or service quality levels that have to be maintained)," says Bapna. Needless to say, this inhibits growth. Combinatorial thinking, based on prescriptive analytics, can provide answers, he says. Combinatorial optimization algorithms are capable of predicting favorable outcomes that deliver more value for investments.
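As a toy illustration of the constraint-aware optimization Bapna describes (all numbers invented), consider allocating budget across two initiatives to maximize revenue under a shared budget and a service-quality cap:

```python
from scipy.optimize import linprog

# Maximize revenue 3x + 5y subject to a budget (x + y <= 100) and a
# service-quality constraint that caps initiative y (y <= 40).
# linprog minimizes, so we negate the objective coefficients.
result = linprog(c=[-3, -5],
                 A_ub=[[1, 1], [0, 1]],
                 b_ub=[100, 40],
                 bounds=[(0, None), (0, None)])
print(result.x)  # optimal allocation: [60. 40.]
```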

Continue reading here:
Forget The ROI: With Artificial Intelligence, Decision-Making Will Never Be The Same - Forbes


China Will Lose the Artificial Intelligence (AI) Race (And Why America Will Win) – The National Interest Online

Artificial intelligence (AI) is increasingly embedded into every aspect of life, and China is pouring billions into its bid to become an AI superpower. China's three-step plan is to pull equal with the United States in 2020, start making major breakthroughs of its own by mid-decade, and become the world's AI leader in 2030.

There's no doubt that Chinese companies are making big gains. Chinese government spending on AI may not match some of the most-hyped estimates, but China is providing big state subsidies to a select group of AI national champions: Baidu in autonomous vehicles (AVs), Tencent in medical imaging, Alibaba in smart cities, and Huawei in chips and software.

State support isn't all about money. It's also about clearing the road to success -- sometimes literally. Baidu ("China's Google") is based in Beijing, where the local government has kindly closed more than 300 miles of city roads to make way for AV tests. Nearby Shandong province closed a 16 mile mountain road so that Huawei could test its AI chips for AVs in a country setting.

In other Chinese AV test cities, the roads remain open but are thoroughly sanitized. Southern China's tech capital, Shenzhen, is the home of AI leader Tencent, which is testing its own AVs on Shenzhen's public roads. Notably absent from Shenzhen's major roads are motorcycles, scooters, bicycles, or even pedestrians. Two-wheeled vehicles are prohibited; pedestrians are comprehensively corralled by sidewalk barriers and deterred from jaywalking by stiff penalties backed up by facial recognition technology.

And what better way to jump-start AI for facial recognition than by having a national biometric ID card database where every single person's face is rendered in machine-friendly standardized photos?

Making AI easy has certainly helped China get its AI strategy off the ground. But like a student who is spoon-fed the answers on a test, a machine that learns from a simplified environment won't necessarily be able to cope in the real world.

Machine learning (ML) uses vast quantities of experiential data to train algorithms to make decisions that mimic human intelligence. Type something like "ML 4 AI" into Google, and it will know exactly what you mean. That's because Google learns English in the real world, not from memorizing a dictionary.

It's the same for AVs. Google's Alphabet cousin Waymo tests its cars on the anything-goes roads of everyday America. As a result, its algorithms have learned how to deal with challenges like a cyclist carrying a stop sign. Everything that can happen on America's roads, will happen on America's roads. Chinese AI is learning how to drive like a machine, but American AI is learning how to drive like a human -- only better.

American, British, and (especially) Israeli facial recognition AI efforts face similar real-world challenges. They have to work with incomplete, imperfect data, and still get the job done. What's more, they can't throw up too many false positives -- innocent people identified as threats. China's totalitarian regime can punish innocent people with impunity, but in democratic countries, even one false positive could halt a facial recognition roll-out.

It's tempting to think that the best way forward for AI is to make it easy. In fact, the exact opposite is true. Like a muscle pushed to exercise, AI thrives on challenges. Chinese AI may take some giant strides operating in a stripped-down reality, but American AI will win the race in the real world. Reality is complicated, and if there's one thing Americans are good at, it's dealing with complexity.

Salvatore Babones is an adjunct scholar at the Centre for Independent Studies and an associate professor at the University of Sydney.

Read the original post:
China Will Lose the Artificial Intelligence (AI) Race (And Why America Will Win) - The National Interest Online


Why 2020 Will Be the Year Artificial Intelligence Stops Being Optional for Security – Security Intelligence

Artificial intelligence (AI) isn't new. What is new is the growing ubiquity of AI in large organizations. In fact, by the end of this year, I believe nearly every type of large organization will find AI-based cybersecurity tools indispensable.

Artificial intelligence is many things to many people. One fairly neutral definition is that it's a branch of computer science that focuses on intelligent behavior, such as learning and problem solving. Now that cybersecurity AI is mainstream, it's time to stop treating AI like some kind of magic pixie dust that solves every problem and start understanding its everyday necessity in the new cybersecurity landscape. 2020 is the year large organizations will come to rely on AI for security.

AI isn't magic, but for many specific use cases, the right tool for the job will increasingly involve AI. Here are six reasons why that's the case.

The monetary calculation every organization must make is the cost of security tools, programs and resources on one hand versus the cost of failing to secure vital assets on the other. That calculation is becoming easier as the potential cost of data breaches grows. And these costs aren't stemming from the cleanup operation alone; they may also include damage to the brand, drops in stock prices and loss of productivity.

The average total cost of a data breach is now $3.92 million, according to the 2019 Cost of a Data Breach Report. That's an increase of nearly 12 percent since 2014. The rising costs are also global, as Juniper Research predicts that the business costs of data breaches will exceed $5 trillion per year by 2024, with regulatory fines included.

These rising costs are partly due to the fact that malware is growing more destructive. Ransomware, for example, is moving beyond preventing file access and toward going after critical files and even master boot records.

Fortunately, AI can help security operations centers (SOCs) deal with these rising risks and costs. Indeed, the Cost of a Data Breach Report found that cybersecurity AI can decrease average costs by $230,000.

The percentage of state-sponsored cyberattacks against organizations of all kinds is also growing. In 2019, nearly one-quarter (23 percent) of breaches analyzed by Verizon were identified as having been funded or otherwise supported by nation-states or state-sponsored actors, up from 12 percent in the previous year. This is concerning because state-sponsored attacks tend to be far more capable than garden-variety cybercrime attacks, and detecting and containing these threats often requires AI assistance.

An arms race between adversarial AI and defensive AI is coming. That's just another way of saying that cybercriminals are coming at organizations with AI-based methods sold on the dark web to avoid setting off intrusion alarms and defeat authentication measures. So-called polymorphic malware and metamorphic malware change and adapt to avoid detection, with the latter making more drastic and hard-to-detect changes to its code.

Even social engineering is getting the artificial intelligence treatment. We've already seen deepfake audio attacks where AI-generated voices impersonating three CEOs were used against three different companies. Deepfake audio and video simulations are created using generative adversarial network (GAN) technologies, where two neural networks train each other (one learning to create fake data and the other learning to judge its quality) until the first can create convincing simulations.
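The adversarial loop at the heart of a GAN is compact enough to sketch. The following minimal PyTorch example on one-dimensional toy data is nowhere near a production deepfake system, but it shows the dynamic described above: the discriminator learns to judge, and the generator learns to fool it.

```python
import torch
import torch.nn as nn

# Generator fakes 1-D samples from noise; discriminator judges them.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss = nn.BCELoss()

real_data = torch.randn(64, 1) * 0.5 + 3.0   # stand-in for "real" samples

for step in range(500):
    # Train the judge: real samples labeled 1, generated samples labeled 0.
    fake = G(torch.randn(64, 8)).detach()
    d_loss = (loss(D(real_data), torch.ones(64, 1)) +
              loss(D(fake), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the forger: push the judge to output 1 on generated samples.
    fake = G(torch.randn(64, 8))
    g_loss = loss(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```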

GAN technology can, in theory and in practice, be used to generate all kinds of fake data, including fingerprints and other biometric data. Some security experts predict that future iterations of malware will use AI to determine whether they are in a sandbox or not. Sandbox-evading malware would naturally be harder to detect using traditional methods.

Cybercriminals could also use AI to find new targets, especially internet of things (IoT) targets. This may contribute to more attacks against warehouses, factory equipment and office equipment. Accordingly, the best defense against AI-enhanced attacks of all kinds is cybersecurity AI.

Large organizations are suffering from a chronic expertise shortage in the cybersecurity field, and this shortage will continue unless things change. To that end, AI-based tools can enable enterprises to do more with the limited human resources already present in-house.

The Accenture Security Index found that more than 70 percent of organizations worldwide struggle to identify what their high-value assets are. AI can be a powerful tool for identifying these assets for protection.

The quantity of data that has to be sifted through to identify threats is vast and growing. Fortunately, machine learning is well-suited to processing huge data sets and eliminating false positives.

In addition, rapid in-house software development may be creating many new vulnerabilities, but AI can find errors in code far more quickly than humans. Embracing rapid application development (RAD) requires the use of AI for bug fixing.

These are just two examples of how growing complexity can inform and demand the adoption of AI-based tools in an enterprise.

There has always been tension between the need for better security and the need for higher productivity. The most usable systems are not secure, and the most secure systems are often unusable. Striking the right balance between the two is vital, but achieving this balance is becoming more difficult as attack methods grow more aggressive.

AI will likely come into your organization through the evolution of basic security practices. For instance, consider the standard security practice of authenticating employee and customer identities. As cybercriminals get better at spoofing users, stealing passwords and so on, organizations will be more incentivized to embrace advanced authentication technologies, such as AI-based facial recognition, gait recognition, voice recognition, keystroke dynamics and other biometrics.

The 2019 Verizon Data Breach Investigations Report found that 81 percent of hacking-related breaches involved weak or stolen passwords. To counteract these attacks, sophisticated AI-based tools that enhance authentication can be leveraged. For example, AI tools that continuously estimate risk levels whenever employees or customers access resources from the organization could prompt identification systems to require two-factor authentication when the AI component detects suspicious or risky behavior.
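As a sketch of how such continuous risk scoring might gate authentication, consider the following Python snippet. The feature names and toy risk model are invented for illustration and are not any vendor's implementation.

```python
from sklearn.linear_model import LogisticRegression

# Toy risk model trained on invented sessions; label 1 = proved malicious.
# Features: [new_device, geo_velocity_kmh/1000, failed_attempts, odd_hour]
X = [[0, 0.0, 0, 0], [0, 0.1, 1, 0], [1, 0.9, 3, 1], [1, 0.8, 4, 1]]
y = [0, 0, 1, 1]
risk_model = LogisticRegression().fit(X, y)

def require_second_factor(session, threshold=0.7):
    """Step up to a second factor only when the scored risk is high."""
    features = [[session["new_device"],           # device never seen before
                 session["geo_velocity_kmh"] / 1000,  # implied travel speed
                 session["failed_attempts"],      # recent failed logins
                 session["odd_hour"]]]            # outside usual hours
    risk = risk_model.predict_proba(features)[0][1]
    return risk >= threshold

# A login from a brand-new device, far from the last one, at 3 a.m.:
session = {"new_device": 1, "geo_velocity_kmh": 900,
           "failed_attempts": 2, "odd_hour": 1}
print(require_second_factor(session))   # likely True -> prompt for 2FA
```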

A big part of the solution going forward is leveraging both AI and biometrics to enable greater security without overburdening employees and customers.

One of the biggest reasons why employing AI will be so critical this year is that doing so will likely be unavoidable. AI is being built into security tools and services of all kinds, so it's time to change our thinking around AI's role in enterprise security. Where it was once an exotic option, it is quickly becoming a mainstream necessity. How will you use AI to protect your organization?

View original post here:
Why 2020 Will Be the Year Artificial Intelligence Stops Being Optional for Security - Security Intelligence


Gift will allow MIT researchers to use artificial intelligence in a biomedical device – MIT News

Researchers in the MIT Department of Civil and Environmental Engineering (CEE) have received a gift to advance their work on a device designed to position living cells for growing human organs using acoustic waves. The Acoustofluidic Device Design with Deep Learning is being supported by Natick, Massachusetts-based MathWorks, a leading developer of mathematical computing software.

"One of the fundamental problems in growing cells is how to move and position them without damage," says John R. Williams, a professor in CEE. "The devices we've designed are like acoustic tweezers."

Inspired by the complex and beautiful patterns in the sand made by waves, the researchers' approach is to use sound waves controlled by machine learning to design complex cell patterns. The pressure waves generated by acoustics in a fluid gently move and position the cells without damaging them.

The engineers developed a computer simulator to create a variety of device designs, which were then fed to an AI platform to understand the relationship between device design and cell positions.

"Our hope is that, in time, this AI platform will create devices that we couldn't have imagined with traditional approaches," says Sam Raymond, who recently completed his doctorate working with Williams on this project. Raymond's thesis, "Combining Numerical Simulation and Machine Learning," explored the application of machine learning in computational engineering.

"MathWorks and MIT have a 30-year-long relationship that centers on advancing innovations in engineering and science," says P.J. Boardman, director of MathWorks. "We are pleased to support Dr. Williams and his team as they use new methodologies in simulation and deep learning to realize significant scientific breakthroughs."

Williams and Raymond collaborated with researchers at the University of Melbourne and the Singapore University of Technology and Design on this project.

Go here to see the original:
Gift will allow MIT researchers to use artificial intelligence in a biomedical device - MIT News


Access Intelligence Announces Artificial Intelligence Strategist Chris Benson Will Deliver the Keynote Presentation at Connected Plant Conference 2020…

HOUSTON -- Chris Benson, principal artificial intelligence strategist for global aerospace, defense, and security giant Lockheed Martin, will give the opening keynote presentation at the Connected Plant Conference 2020, which will take place February 25 to 27, 2020, at the Westin Peachtree Plaza in Atlanta, Georgia.

Kicking off the event, Benson will shed light on the vast role and potential that artificial intelligence (AI) offers as the world embarks on a definitive fourth industrial revolution. While AI is a technology that is still emerging within the power and chemical processing sectors, it has made notable headway in other industries, including defense, security, and manufacturing, and it is commonly hailed as an integral technology evolution that will take IIoT to the next level. Some even describe AI as the software engine that will drive the fourth revolution.

Benson's address will draw on his deep knowledge of AI as a long-time solutions architect for AI and machine learning (ML) and the emerging technologies they intersect, including robotics, IoT, augmented reality, blockchain, mobile, edge, and cloud. As a renowned thought leader on AI and related fields, Benson frequently gives captivating speeches on numerous topics about the subject. He also discusses AI with an array of experts as co-host of the Practical AI podcast, which reaches thousands of AI enthusiasts each week. Benson is also the founder and organizer of the Atlanta Deep Learning Meetup, one of the largest AI communities in the world.

Before he joined Lockheed Martin, where he oversees strategies related to AI and AI ethics, Benson was chief scientist for AI and ML at technology conglomerate Honeywell SPS, and before that, he was on the AI team at multinational professional services company Accenture.

This year's Connected Plant Conference, scheduled for February 25 to 27 in Atlanta, is the only event covering digital transformation/digitalization for the power and chemical process industries. Presenters will explore the fast-paced advances in automation, data analytics, computing networks, smart sensors, augmented reality, and other technologies that companies are using to improve their processes and overall businesses in today's competitive environment.

To register or for more information, see: https://www.connectedplantconference.com

Go here to read the rest:
Access Intelligence Announces Artificial Intelligence Strategist Chris Benson Will Deliver the Keynote Presentation at Connected Plant Conference 2020...


NearShore Technology Talks About the Impact of Artificial Intelligence on Nearshoring – Science Times

(Photo: Bigstock) AI Learning and Artificial Intelligence Concept - Icon Graphic Interface showing computer, machine thinking and AI Artificial Intelligence of Digital Robotic Devices

The nearshoring technology industry is seeing rapid growth in demand from North American companies for engineering and data science services related to the advances coming from the ongoing artificial intelligence (AI) revolution. Companies find high value in working on new and sophisticated applications with nearshoring firms that are close in proximity, time zone, language, and business culture.

In recent years, the costs involved in offshoring have increased in relative comparison to nearshoring costs. Additionally, tech education opportunities in the Western hemisphere have become more advantageous. Western countries have far fewer holidays and lost workdays than in offshore countries as well. In this article, NearShore Technology examines current AI trends impacting nearshoring.

AI has been an active field for computer scientists and logicians for decades, and in recent years hardware and software capabilities have advanced to the stage allowing for the actual implementation of many AI processes. In general, AI describes the ability of a program and associated hardware to simulate human intelligence, reasoning, and analysis of real-world data. Logical algorithms are allowing for increased learning, logic, and creativity with AI processes. Increased technological capabilities are allowing AI to process information in quantities and with perceptive abilities that are beyond traditional human powers. Many industrial processes are finding great utility from machine learning, an AI-based process that allows technology systems to evolve and learn based on experience and self-development.

The huge tech companies that mainly focus on how customers use software programs are leading the way in AI development. Companies like Google, Amazon, and Facebook are positioning immense resources to advance their AI processes' abilities to understand and predict customer behavior. In addition to tech and retail firms, healthcare, financial services, and auto manufacturers (aiming at a future of autonomous cars) are all committing to developing effective AI tech. From routine activities such as customer support and billing to more intuition-based activities like investing and making strategic decisions, AI is becoming a central part of competing in almost every industry.

AI development requires experienced and skillful software engineers and programmers. The ability of an AI application to operate effectively is dependent first on the quantity and quality of data that it is provided. Algorithms must be able to perceive relevant data and also to learn and improve based on the data that is received. Programmers and engineers must be able to understand and facilitate algorithm improvement over time, as AI applications are never really completed and are constantly in development. Programmers must rely on a sufficient number of competent data scientists and analysts to sort and identify the nature and quality of information processed by an AI application to provide a meaningful understanding of how well the AI is functioning. The entire process is changing and progressing quickly, and the effectiveness of AI is determined by the abilities of the engineers and programmers who are involved.

Historically, many traditional IT services have been suited for offshoring. Most traditional IT and call center support services were routine, and the cost-efficiency of offshoring these processes around the world made economic sense in many situations. When skilled programming and data science are not a requirement, offshoring has had a place in the mix for many local companies. However, the worldwide shortage of skilled engineers and data scientists is most prevalent in the parts of the world normally used for offshore services.

Nearshoring AI technology development allows local companies to have meaningful, real-time relationships with programmers and data specialists who have the requisite skills. These nearshore relationships are vital to the ongoing nature of AI development.

Among the most important considerations in a successful nearshoring AI relationship is examining the actual skill and education of the nearshore firm's workers. A nearshore provider's team should be up to date with the latest technology developments and should have experience and a history of success in the relevant industry. Because AI depends on natural language use, it is important that AI developers are native or fluent speakers of the client company's language. Working with a nearshore firm that is proximate in time and place also helps the firm properly understand the culture and needs of a company's market and customers. A nearshore firm working on AI processes should feel like a complete partner, not just another outsourced provider of routine tasks.

NearShore Technology is a US firm headquartered in Atlanta with offices throughout North America. The company focuses on meeting all the technology needs of its clients. NearShore partners with technology officers and leaders to provide effective and timely solutions that fit each customer's unique needs. NearShore uses a family-based approach to provide superior IT, Medtech, Fintech, and related services to our customers and partners throughout North America.

Go here to read the rest:
NearShore Technology Talks About the Impact of Artificial Intelligence on Nearshoring - Science Times
