
Microsoft: This is how to protect your machine-learning applications – TechRepublic

Understanding failures and attacks can help us build safer AI applications.

Modern machine learning (ML) has become an important tool in a very short time. We're using ML models across our organisations, either rolling our own in R and Python, using tools like TensorFlow to learn and explore our data, or building on cloud- and container-hosted services like Azure's Cognitive Services. It's a technology that helps predict maintenance schedules, spots fraud and damaged parts, and parses our speech, responding in a flexible way.


The models that drive our ML applications are incredibly complex, training neural networks on large data sets. But there's a big problem: they're hard to explain or understand. Why does a model parse a red blob with white text as a stop sign and not a soft drink advert? It's that complexity which hides the underlying risks that are baked into our models, and the possible attacks that can severely disrupt the business processes and services we're building using those very models.

It's easy to imagine an attack on a self-driving car that could make it ignore stop signs, simply by changing a few details on the sign, or a facial recognition system that would detect a pixelated bandanna as Brad Pitt. These adversarial attacks take advantage of the ML models, guiding them to respond in a way that's not how they're intended to operate, distorting the input data by changing the physical inputs.

Microsoft is thinking a lot about how to protect machine learning systems. They're key to its future -- from tools being built into Office, to its Azure cloud-scale services, and managing its own and your networks, even delivering security services through ML-powered tools like Azure Sentinel. With so much investment riding on its machine-learning services, it's no wonder that many of Microsoft's presentations at the RSA security conference focused on understanding the security issues with ML and on how to protect machine-learning systems.

Attacks on machine-learning systems need access to the models used, so you need to keep your models private. That goes for small models that might be helping run your production lines as much as the massive models that drive the likes of Google, Bing and Facebook. If I get access to your model, I can work out how to affect it, either looking for the right data to feed it that will poison the results, or finding a way past the model to get the results I want.

Much of this work has been published in a paper in conjunction with the Berkman Klein Center, on failure modes in machine learning. As the paper points out, a lot of work has been done in finding ways to attack machine learning, but not much on how to defend it. We need to build a credible set of defences around machine learning's neural networks, in much the same way as we protect our physical and virtual network infrastructures.

Attacks on ML systems are failures of the underlying models: they respond in unexpected, and possibly detrimental, ways. We need to understand what the failure modes of machine-learning systems are, and then understand how we can respond to those failures. The paper describes two failure modes: intentional failures, where an attacker deliberately subverts a system, and unintentional failures, where an unsafe element in the ML model appears correct but delivers bad outcomes.

By understanding the failure modes we can build threat models and apply them to our ML-based applications and services, and then respond to those threats and defend our new applications.

The paper suggests 11 different attack classifications, many of which get around our standard defence models. It's possible to compromise a machine-learning system without needing access to the underlying software and hardware, so standard authorisation techniques can't protect ML-based systems and we need to consider alternative approaches.

What are these attacks? The first, perturbation attacks, modify queries to change the response to the one the attacker desires. They are matched by poisoning attacks, which achieve the same result by contaminating the training data. Machine-learning models often embody important intellectual property, and some attacks, like model inversion, aim to extract that data. Similarly, a membership inference attack tries to determine whether specific data was in the initial training set. Closely related is model stealing, which uses queries to extract the model itself.
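To make the first of these concrete, here is a minimal sketch of a perturbation attack in the fast-gradient-sign style, applied to a toy logistic-regression "model". Everything here -- the weights, the input, the epsilon -- is an illustrative stand-in, not taken from any real system.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)          # model weights (assumed known to the attacker)
b = 0.0
x = rng.normal(size=16)          # a legitimate input

def predict(x):
    # Probability of class 1 from a logistic-regression model
    return 1 / (1 + np.exp(-(w @ x + b)))

# The gradient of the class-1 score w.r.t. the input is proportional to w,
# so a small step along sign(w) pushes the score up without changing the
# input much -- the essence of a perturbation (evasion) attack.
eps = 0.5
x_adv = x + eps * np.sign(w)

print(predict(x), predict(x_adv))   # the perturbed input scores higher
```

The same idea, applied to image classifiers, is what lets a few stickers on a stop sign flip a model's answer.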


Other attacks include reprogramming the system around the ML model, so that either results or inputs are changed. Closely related are adversarial attacks that change physical objects, adding duct tape to signs to confuse navigation or using specially printed bandanas to disrupt facial-recognition systems. Some attacks depend on the provider: a malicious provider can extract training data from customer systems. They can add backdoors to systems, or compromise models as they're downloaded.

While many of these attacks are new and targeted specifically at machine-learning systems, they are still computer systems and applications, and are vulnerable to existing exploits and techniques, allowing attackers to use familiar approaches to disrupt ML applications.

It's a long list of attack types, but understanding what's possible allows us to think about the threats our applications face. More importantly, it provides an opportunity to think about defences and how we protect machine-learning systems: building better, more secure training sets, locking down ML platforms, controlling access to inputs and outputs, and working with trusted applications and services.

Attacks are not the only risk: we must be aware of unintended failures -- problems that come from the algorithms we use or from how we've designed and tested our ML systems. We need to understand how reinforcement learning systems behave, how systems respond in different environments, if there are natural adversarial effects, or how changing inputs can change results.

If we're to defend machine-learning applications, we need to ensure that they have been tested as fully as possible, in as many conditions as possible. The apocryphal stories of early machine-learning systems that identified trees instead of tanks, because all the training images were of tanks under trees, are a sign that these aren't new problems, and that we need to be careful about how we train, test, and deploy machine learning. We can only defend against intentional attacks if we know that we've protected ourselves and our systems from mistakes we've made. The old adage "test, test, and test again" is key to building secure and safe machine learning -- even when we're using pre-built models and service APIs.



Tecton.ai Launches with New Data Platform to Make Machine Learning Accessible to Every Company – insideBIGDATA

Tecton.ai emerged from stealth and formally launched with its data platform for machine learning. Tecton enables data scientists to turn raw data into production-ready features, the predictive signals that feed machine learning models. Tecton is in private beta with paying customers, including a Fortune 50 company.

Tecton.ai also announced $25 million in seed and Series A funding co-led by Andreessen Horowitz and Sequoia. Both Martin Casado, general partner at Andreessen Horowitz, and Matt Miller, partner at Sequoia, have joined the board.

Tecton.ai founders Mike Del Balso (CEO), Kevin Stumpf (CTO) and Jeremy Hermann (VP of Engineering) worked together at Uber when the company was struggling to build and deploy new machine learning models, so they created Uber's Michelangelo machine learning platform. Michelangelo was instrumental in scaling Uber's operations to thousands of production models serving millions of transactions per second in just a few years, and today it supports a myriad of use cases, from generating marketplace forecasts and calculating ETAs to automating fraud detection.

Del Balso, Stumpf and Hermann went on to found Tecton.ai to solve the data challenges that are the biggest impediment to deploying machine learning in the enterprise today. Enterprises already generate vast amounts of data; the problem is how to harness and refine that data into the predictive signals that power machine learning models. Engineering teams end up spending the majority of their time building bespoke data pipelines for each new project. These custom pipelines are complex, brittle, expensive and often redundant. The end result is that 78% of new projects never get deployed, and 96% of projects encounter challenges with data quality and quantity (1).

"Data problems all too often cause last-mile delivery issues for machine learning projects," said Mike Del Balso, Tecton.ai co-founder and CEO. "With Tecton, there is no last mile. We created Tecton to empower data science teams to take control of their data and focus on building models, not pipelines. With Tecton, organizations can deliver impact with machine learning quickly, reliably and at scale."

Tecton.ai has assembled a world-class engineering team with deep experience building machine learning infrastructure for industry leaders such as Google, Facebook, Airbnb and Uber. Tecton is the industry's first data platform designed specifically to support the requirements of operational machine learning. It empowers data scientists to build great features, serve them to production quickly and reliably, and do so at scale.

Tecton makes the delivery of machine learning data predictable for every company.

"The ability to manage data and extract insights from it is catalyzing the next wave of business transformation," said Martin Casado, general partner at Andreessen Horowitz. "The Tecton team has been on the forefront of this change, with a long history of machine learning/AI and data at Google, Facebook and Airbnb and building the machine learning platform at Uber. We're very excited to be partnering with Mike, Kevin, Jeremy and the Tecton team to bring this expertise to the rest of the industry."

"The founders of Tecton built a platform within Uber that took machine learning from a bespoke research effort to the core of how the company operated day-to-day," said Matt Miller, partner at Sequoia. "They started Tecton to democratize machine learning across the enterprise. We believe their platform for machine learning will drive a Cambrian explosion within their customers, empowering them to drive their business operations with this powerful technology paradigm, unlocking countless opportunities. We were thrilled to partner with Tecton along with a16z at the seed and now again at the Series A. We believe Tecton has the potential to be one of the most transformational enterprise companies of this decade."



AI, machine learning and automation in cybersecurity: The time is now – GCN.com

INDUSTRY INSIGHT

The cybersecurity skills shortage continues to plague organizations across regions, markets and sectors, and the government sector is no exception. According to (ISC)², there are only enough cybersecurity pros to fill about 60% of the jobs that are currently open -- which means the workforce will need to grow by roughly 145% just to meet current global demand.

The Government Accountability Office states that the federal government needs a qualified, well-trained cybersecurity workforce to protect vital IT systems, and one senior cybersecurity official at the Department of Homeland Security has described the talent gap as a national security issue. The scarcity of such workers is one reason why securing federal systems is on GAO's High Risk List. Given this situation, chief information security officers who are looking for ways to make their existing resources more effective can make great use of automation and artificial intelligence to supplement and enhance their workforce.

The overall challenge landscape

Results of our survey, "Making Tough Choices: How CISOs Manage Escalating Threats and Limited Resources," show that CISOs currently devote 36% of their budgets to response and 33% to prevention. However, as security needs change, many CISOs are looking to shift budget away from prevention without reducing its effectiveness. An optimal budget would reduce spending on prevention and increase spending on detection and response to 33% and 40% of the security budget, respectively. This shift would give security teams the speed and flexibility they need to react quickly in the face of a threat from cybercriminals who are outpacing agencies' defensive capabilities. Since some breaches are inevitable, it is important to stop as many as possible at the point of intrusion, but it is even more important to detect and respond to them before they can do serious damage.

One challenge in matching the speed of today's cyberattacks is that CISOs have limited personnel and budget resources. To overcome these obstacles and attain the detection and response speeds necessary for effective cybersecurity, CISOs must take advantage of AI, machine learning and automation. These technologies will help close gaps by correlating threat intelligence and coordinating responses at machine speed. Government agencies will be able to develop a self-defending security system capable of analyzing large volumes of data, detecting threats, reconfiguring devices and responding to threats without human intervention.

The unique challenges

Federal agencies deal with a number of challenges unique to the public sector, including the age and complexity of IT systems as well as the constraints of the government budget cycle. IT teams for government agencies aren't just protecting intellectual property or credit card numbers; they are also tasked with protecting citizens' sensitive data and national security secrets.

Charged with this duty but constrained by limited resources, IT leaders must weigh the risks of cyber threats against the daily demands of keeping networks up and running. This balancing act becomes more difficult as agencies migrate to the cloud, adopt internet-of-things devices and transition to software-defined networks that have no perimeter. These changes mean government networks are expanding their attack surface with no additional -- or even fewer -- defensive resources. It's part of the reason why the Verizon Data Breach Investigations Report found that government agencies were subjected to more security incidents and more breaches than any other sector last year.

To change that dynamic, the typical government set-up of siloed systems must be replaced with a unified platform that can provide wider and more granular network visibility and more rapid and automated response.

How AI and automation can help

The keys to making a unified platform work are AI and automation technologies. Because organizations cannot keep pace with the growing volume of threats through manual detection and response, they need to leverage AI/ML and automation to fill these gaps. AI-driven solutions can learn what normal behavior looks like in order to detect anomalous behavior. For instance, many employees typically access a specific kind of data or only log on at certain times. If an employee's account starts to show activity outside of these normal parameters, an AI/ML-based solution can detect these anomalies and can inspect or quarantine the affected device or user account until it is determined to be safe or mitigating action can be taken.
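As a rough illustration of this kind of behavioral anomaly detection, here is a minimal sketch using scikit-learn's IsolationForest as a stand-in for a commercial tool. The features (login hour, megabytes accessed) and the distributions are invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Simulate an account's normal behavior: logins clustered around 10:00,
# roughly 50 MB of data accessed per session.
rng = np.random.default_rng(1)
normal_sessions = np.column_stack([
    rng.normal(10, 1.5, 500),    # hour of day
    rng.normal(50, 10, 500),     # MB accessed
])

# Fit a model of "normal" so deviations can be flagged automatically.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# A 3 a.m. session pulling 500 MB should be flagged (-1);
# a typical mid-morning session should pass (+1).
print(model.predict([[3.0, 500.0], [10.0, 48.0]]))
```

In a real deployment, a -1 prediction would feed an automated response such as quarantining the account pending review, as described above.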

If the device is infected with malware or is otherwise acting maliciously, that AI-based tool can also issue automated responses. Making these tactical tasks the responsibility of AI-driven solutions frees security teams to work on more strategic problems, develop threat intelligence or focus on more difficult tasks such as detecting unknown threats.

IT teams at government agencies that want to implement AI and automation must be sure the solution they choose can scale and operate at machine speeds to keep up with the growing complexity and speed of the threat. In selecting a solution, IT managers must take time to ensure solutions have been developed using AI best practices and training techniques and that they are powered by best-in-class threat intelligence, security research and analytics technology. Data should be collected from a variety of nodes -- both globally and within the local IT environment -- to glean the most accurate and actionable information for supporting a security strategy.

Time is of the essence

Government agencies are experiencing more cyberattacks than ever before, at a time when the nation faces a roughly 40% shortfall in cybersecurity talent. Time is of the essence in defending a network, but time is what under-resourced and over-tasked government IT teams typically lack. As attacks come more rapidly and adapt to the evolving IT environment and new vulnerabilities, AI/ML and automation are rapidly becoming necessities. Solutions built from the ground up with these technologies will help government CISOs counter and potentially get ahead of today's sophisticated attacks.

About the Author

Jim Richberg is a Fortinet field CISO focused on the U.S. public sector.


How To Verify The Memory Loss Of A Machine Learning Model – Analytics India Magazine

It is well known that deep learning models get better with diversity in the data they are fed. In a healthcare use case, for instance, data would be drawn from several sources -- patient records and histories, the workflows of medical professionals, insurance providers and so on -- to ensure that diversity.

These data points, collected through people's various interactions, are fed into a machine learning model that sits in a remote data centre, producing predictions tirelessly.

However, consider a scenario where one of the providers ceases to offer data to the healthcare project and later requests to delete the provided information. In such a case, does the model remember or forget its learnings from this data?

To explore this, a team from the University of Edinburgh and the Alan Turing Institute asked what it means for a model to have forgotten some data and how that forgetting can be verified. In the process, they investigated the challenges involved and offered solutions.

The authors write that this initiative is the first of its kind, and that the only work that comes close is the Membership Inference Attack (MIA), which was also an inspiration for this work.

To verify whether a model has forgotten specific data, the authors propose a method based on the Kolmogorov-Smirnov (K-S) distance, which is used to infer whether a model was trained with the query dataset.
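The gist of a K-S comparison can be sketched in a few lines of Python. This is not the authors' algorithm; the two score distributions below are synthetic stand-ins for a model's confidence scores on data it was trained on versus data it never saw.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Models tend to produce higher confidence scores on data they were
# trained on than on unseen data -- simulated here with two beta
# distributions (purely illustrative parameters).
scores_if_trained_on_query = rng.beta(8, 2, size=1000)
scores_if_never_saw_query = rng.beta(4, 4, size=1000)

# The K-S statistic is the maximum gap between the two empirical CDFs:
# a large distance is evidence the distributions differ, i.e. evidence
# the model still retains information about the query data.
stat, _ = ks_2samp(scores_if_trained_on_query, scores_if_never_saw_query)
print(f"K-S distance: {stat:.3f}")
```

The paper's actual procedure is more involved, comparing the target model's output distribution against those of shadow models, as described below.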

The researchers ran experiments on benchmark datasets -- MNIST, SVHN and CIFAR-10 -- to verify the effectiveness of the new method. It was later also tested on the ACDC dataset, using the pathology detection component of that challenge.

The MNIST dataset contains 60,000 images of 10 digits, with image size 28 x 28. Similar to MNIST, the SVHN dataset has over 600,000 digit images obtained from house numbers in Google Street View images; its image size is 32 x 32. Since both datasets address the task of digit recognition/classification, they were considered to belong to the same domain. CIFAR-10 is used as a dataset to validate the method; it has 60,000 images (size 32 x 32) of 10 object classes, including aeroplane, bird, and so on. To train models with the same design, the images of all three datasets were preprocessed to grey-scale and rescaled to size 28 x 28.
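The preprocessing step described above can be sketched as follows; the random array is a stand-in for a real 32 x 32 RGB sample from SVHN or CIFAR-10, and the channel-averaging is one simple (not the only) way to produce grey-scale.

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
img_rgb = rng.integers(0, 256, size=(32, 32, 3)).astype(np.float32)

grey = img_rgb.mean(axis=2)      # naive luminance: average the RGB channels
resized = zoom(grey, 28 / 32)    # rescale 32x32 -> 28x28 to match MNIST

print(resized.shape)
```

After this step, models of identical architecture can be trained on all three datasets.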

Using the K-S distance, the authors said, statistics about the output distribution of a target model can be obtained without knowing the weights of the model. Since the model's training data are unknown, new models, called shadow models, were trained with the query dataset and another calibration dataset.

Then by comparing the K-S values, one can conclude if the training data contains information from the query dataset or not.

Experiments have been done before to examine the ownership one has over data on the internet. One such attempt was made by researchers at Stanford, who investigated the algorithmic principles behind efficient data deletion in machine learning.

They found that for many standard ML models, the only way to completely remove an individual's data is to retrain the whole model from scratch on the remaining data, which is often not computationally practical. A trade-off between efficiency and privacy arises because algorithms that support efficient deletion need not be private, and algorithms that are private need not support efficient deletion.

The experiments described above probe and raise new questions in the never-ending debate about AI and privacy. Their objective is to investigate how much authority an individual has over specific data, while also helping to expose the vulnerabilities that appear within a model if certain data is removed.



Could Machine Learning Replace the Entire Weather Forecast System? – HPCwire

Just a few months ago, a series of major new weather and climate supercomputing investments were announced, including a £1.2 billion order for the world's most powerful weather and climate supercomputer and a tripling of U.S. operational supercomputing capacity for weather forecasting. Weather and climate modeling are among the most power-hungry use cases for supercomputers, and research and forecasting agencies often struggle to keep up with the computing needs of models that are, in many cases, simulating the atmosphere of the entire planet as granularly and as regularly as possible.

What if that all changed?

In a virtual keynote for the HPC-AI Advisory Council's 2020 Stanford Conference, Peter Dueben outlined how machine learning might (or might not) begin to augment and even, eventually, compete with heavy-duty, supercomputer-powered climate models. Dueben is the coordinator for machine learning and AI activities at the European Centre for Medium-Range Weather Forecasts (ECMWF), a UK-based intergovernmental organization that houses two supercomputers and provides 24/7 operational weather services at several timescales. ECMWF is also the home of the Integrated Forecast System (IFS), which Dueben says is "probably one of the best forecast models in the world."

Why machine learning at all?

The Earth, Dueben explained, is big. So big, in fact, that apart from being laborious, developing a representational model of the Earth's weather and climate systems brick-by-brick isn't achieving the accuracy that you might imagine. Despite the computing firepower behind weather forecasting, most models remain at a 10-kilometer resolution that doesn't represent clouds, and chaotic atmospheric dynamics and occasionally opaque interactions further complicate model outputs.

"However, on the other side, we have a huge number of observations," Dueben said. "Just to give you an impression, ECMWF is getting hundreds of millions of observations onto the site every day." Some observations come from satellites, planes, ships, ground measurements and balloons. This data, collected over the last several decades, constitutes hundreds of petabytes if simulations and climate modeling results are included.

"If you combine those two points, we have a very complex nonlinear system and we also have a lot of data," he said. "There's obviously lots of potential applications for machine learning in weather modeling."

Potential applications of machine learning

Machine learning applications are "really spread all over the entire workflow of weather prediction," Dueben said, breaking that workflow down into observations, data assimilation, numerical weather forecasting, and post-processing and dissemination. Across those areas, he explained, machine learning could be used for anything from weather data monitoring to learning the underlying equations of atmospheric motions.

By way of example, Dueben highlighted a handful of current, real-world applications. In one case, researchers applied machine learning to detecting wildfires caused by lightning. Using observations of 15 variables (such as temperature, soil moisture and vegetation cover), the researchers constructed a machine learning-based decision tree to assess whether or not satellite observations included wildfires. The team achieved an accuracy of 77 percent, which, Dueben said, "doesn't sound too great in principle," but was actually quite good.
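The shape of that approach -- a decision tree over satellite-derived variables -- can be sketched with scikit-learn. The 15 features and the fire/no-fire labels below are synthetic stand-ins, not the researchers' data, and the decision rule generating the labels is invented for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 2000 "observations" of 15 variables (temperature, soil moisture,
# vegetation cover, ...), with a noisy synthetic fire label.
X = rng.normal(size=(2000, 15))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.8, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A shallow tree keeps the classifier interpretable, which matters
# when domain scientists need to inspect the learned rules.
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {tree.score(X_te, y_te):.2f}")
```

As in the real study, accuracy on held-out data, not training data, is the honest measure of whether the tree has learned anything useful.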

Elsewhere, another team explored the use of machine learning to correct persistent biases in forecast model results. Dueben explained that the researchers were examining a weak-constraint algorithm (in this case, 4D-Var) that would "be able to learn this kind of forecast error and correct it in the data assimilation process."

"We learn, basically, the bias," he said, "and then once we have learned the bias, we can correct the bias of the forecast model by just adding forcing terms to the system." Once 4D-Var was implemented on a sample of forecast model results, the biases were ameliorated. Though Dueben cautioned that the process is still fairly simplistic, a new collaboration with Nvidia is looking into more sophisticated ways of correcting those forecast errors with machine learning.

Dueben also outlined applications in post-processing. Much of modern weather forecasting focuses on ensemble methods, where a model is run many times to obtain a spread of possible scenarios and, as a result, probabilities of various outcomes. "We investigate whether we can correct the ensemble spread calculated from a small number of ensemble members via deep learning," Dueben said. Once again, machine learning -- when applied to a ten-member ensemble looking at temperatures in Europe -- improved the results, reducing error in temperature spreads.

Can machine learning replace core functionality or even the entire forecast system?

"One of the things that we're looking into is the emulation of different parametrisation schemes," Dueben said. Chief among those, at least initially, has been the radiation component of forecast models, which accounts for the fluxes of solar radiation between the ground, the clouds and the upper atmosphere. As a trial run, Dueben and his colleagues are using extensive radiation output data from a forecast model to train a neural network. "First of all, it's very, very light," Dueben said. "Second of all, it's also going to be much more portable. Once we represent radiation with a deep neural network, you can basically port it to whatever hardware you want."
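The emulation idea itself is simple to demonstrate: train a small network to reproduce the input/output mapping of an expensive physics routine, then call the cheap network instead. Here is a minimal sketch using scikit-learn's MLPRegressor; the "scheme" is a toy nonlinear function standing in for a real radiation code, and the two inputs are invented placeholders for atmospheric profile variables.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_scheme(X):
    # Placeholder physics: any smooth nonlinear map will do for the demo.
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(5000, 2))   # e.g. temperature, humidity inputs
y = expensive_scheme(X)                  # "truth" produced by the scheme

# Train the emulator offline on the scheme's own outputs.
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                        random_state=0).fit(X, y)

# At forecast time, the cheap network replaces the expensive call.
X_new = rng.uniform(-1, 1, size=(100, 2))
err = np.abs(emulator.predict(X_new) - expensive_scheme(X_new)).mean()
print(f"mean emulation error: {err:.4f}")
```

The portability Dueben mentions follows from this structure: once the mapping lives in a network, it runs on whatever hardware executes the network, independent of the original physics code.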

Showing a pair of output images, one from the machine learning model and one from the forecast model, Dueben pointed out that it was hard to spot significant differences, and even declined to tell the audience which was which. Furthermore, he said, the model had achieved around a tenfold speedup. ("I'm quite confident that it will actually be much better than a factor of ten," Dueben said.)

Dueben and his colleagues have also scaled their tests up to more ambitious realms. They pulled hourly data on geopotential height (Z500) -- which is related to air pressure -- and trained a deep learning model to predict future changes in Z500 across the globe using only that historical data. For this, "no physical understanding is really required," Dueben said, "and it turns out that it's actually working quite well."

Still, Dueben forced himself to face the crucial question.

"Is this the future?" he asked. "I have to say it's probably not."

There were several reasons for this. First, Dueben said, the simulations were unstable, eventually blowing up if they were stretched too far. "Second of all," he said, "it's also unknown how to increase complexity at this stage. We only have one field here." Finally, he explained, there were only forty years of sufficiently detailed data with which to work.

Still, it wasn't all pessimism. "It's kind of unlikely that it's going to fly and basically feed operational forecasting at one point," he said. "However, having said this, there are now a number of papers coming out where people are looking into this in a much, much more complicated way than we have done -- with really sophisticated convolutional networks -- and they get, actually, quite good results. So who knows!"

The path forward

"The main challenge for machine learning in the community that we're facing at the moment," Dueben said, "is basically that we need to prove now that machine learning solutions can really be better than conventional tools -- and we need to do this in the next couple of years."

There are, of course, many roadblocks to that goal. Forecasting models are extraordinarily complicated; iterations on deep learning models require significant HPC resources to test and validate; and metrics of comparison among models are unclear. Dueben also outlined a series of major unknowns in machine learning for weather forecasting: Could our explicit knowledge of atmospheric mechanisms be used to improve a machine learning forecast? Could researchers guarantee reproducibility? Could the tools be scaled effectively to HPC? The list went on.

"Many scientists are working on these dilemmas as we speak," Dueben said, "and I'm sure we will have an enormous amount of progress in the next couple of years." Outlining a path forward, Dueben emphasized a mixture of top-down and bottom-up approaches to link machine learning with weather and climate models. Per his diagram, this would combine neural networks based on human knowledge of earth systems with reliable benchmarks, scalability and better uncertainty quantification.

As far as where he sees machine learning for weather prediction in ten years?

"It could be that machine learning will have no long-term effect whatsoever -- that it's just a wave going through," Dueben mused. "But on the other hand, it could well be that machine learning tools will actually replace almost all conventional models that we're working with."


Dascena Announces Publication of Prospective Study Evaluating Effect of its Machine Learning Algorithm on Severe Sepsis Prediction – Business Wire

OAKLAND, Calif.--(BUSINESS WIRE)--Dascena, Inc., a machine learning diagnostic algorithm company targeting early disease intervention to improve patient care outcomes, today announced the publication of the company's prospective study evaluating its algorithm for the prediction of severe sepsis. The publication, "Effect of a Sepsis Prediction Algorithm on Patient Mortality, Length of Stay, and Readmission: a Prospective Multicenter Clinical Outcomes Evaluation of Real-world Patient Data from 9 US Hospitals," was published today in the peer-reviewed journal BMJ Health & Care Informatics.

"Sepsis is notoriously difficult to diagnose and treat, resulting in significant mortality and a high cost of treatment," said Ritankar Das, chief executive officer of Dascena. "Our algorithm helps clinicians identify sepsis at an earlier stage, thereby allowing for earlier intervention to improve patient outcomes and, in turn, reduce the costs associated with treatment."

Study Design

The study prospectively evaluated multiyear, multicenter real-world clinical data from 75,147 patient encounters that were monitored by the InSight machine learning algorithm for sepsis prediction at facilities ranging from community hospitals to large academic centers. Hospitalized patients, including those in intensive care units (ICUs) and emergency departments, were included. Data were evaluated to determine the algorithm's effect on outcomes including in-hospital mortality, hospital length of stay, and 30-day readmission. This study, which was conducted in both ICU and non-ICU patients, confirms the significant mortality benefit observed in a previous intensive care unit study (LINK).

During InSight algorithm operation, patient data were captured from the hospitals' electronic health records in real time, and hospital staff were informed when a patient was determined to be at high risk for sepsis.

Study Findings

Of the 75,147 patient encounters monitored by the InSight algorithm, 17,758 patient hospital stays met two or more Systemic Inflammatory Response Syndrome (SIRS) criteria and were therefore included in the analysis. Implementation of the InSight algorithm resulted in improvements across the outcomes studied.

"We partnered with Dascena, starting in 2017, to bring the latest technology in the fight against sepsis to our hospital. We have found that the machine learning algorithm can pick up subtle factors in the patient that may not be obvious until much later in the illness," said Hoyt J. Burdick, M.D., senior vice president and chief medical officer of Cabell Huntington Hospital and lead author on the study. "We are excited to report data today from one of the largest studies of its kind, showing improvements in both patient survival and healthcare costs."
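For illustration, the study's inclusion rule (two or more SIRS criteria) can be sketched in a few lines. The thresholds below follow the standard SIRS definition, simplified to four common criteria; the function and field names are hypothetical, and this is emphatically not Dascena's InSight model, which is a proprietary machine-learning algorithm:

```python
# Hypothetical sketch of the inclusion rule: a patient encounter qualifies
# for analysis when two or more SIRS (Systemic Inflammatory Response
# Syndrome) criteria are met. Thresholds follow the standard SIRS
# definition, simplified here (e.g. omitting PaCO2 and band-count
# alternatives); field names are illustrative only.

def sirs_criteria_count(temp_c, heart_rate, resp_rate, wbc_k):
    """Count how many of four common SIRS criteria a patient meets."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,   # abnormal temperature
        heart_rate > 90,                  # tachycardia
        resp_rate > 20,                   # tachypnea
        wbc_k > 12.0 or wbc_k < 4.0,      # abnormal WBC count (x10^3/uL)
    ]
    return sum(criteria)

def included_in_analysis(vitals):
    """Apply the study's >= 2 SIRS criteria inclusion threshold."""
    return sirs_criteria_count(**vitals) >= 2

patient = {"temp_c": 38.4, "heart_rate": 104, "resp_rate": 18, "wbc_k": 9.1}
print(included_in_analysis(patient))  # two criteria met -> True
```

A real deployment would evaluate such rules continuously against streaming EHR data rather than a single snapshot.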

About Dascena

Dascena is developing machine learning diagnostic algorithms to enable early disease intervention and improve care outcomes for patients. For more information, visit Dascena.com.

See original here:
Dascena Announces Publication of Prospective Study Evaluating Effect of its Machine Learning Algorithm on Severe Sepsis Prediction - Business Wire

Read More..

Global Machine Learning As A Service (Mlaas) Market : Industry Analysis And Forecast (2020-2027) – MR Invasion

The Global Machine Learning as a Service (MLaaS) Market was valued at about US$ XX Bn in 2019 and is expected to grow at a CAGR of 41.7% over the forecast period, reaching US$ 11.3 Bn by 2027.

The report has analyzed the revenue impact of the COVID-19 pandemic on the sales revenue of market leaders, market followers and disrupters, and the same is reflected in our analysis.

REQUEST FOR FREE SAMPLE REPORT:https://www.maximizemarketresearch.com/request-sample/55511/

Market Definition:

Machine learning as a service (MLaaS) is an array of services that offer ML tools as part of cloud computing services. MLaaS helps clients profit from machine learning without the associated cost, time and risk of establishing an in-house machine learning team.
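As a rough sketch of what consuming such a hosted service looks like in practice, the snippet below packages features as JSON, builds an authenticated POST request to a prediction endpoint, and parses the response. The endpoint URL, payload shape and response fields are all hypothetical; real providers (AWS, Azure, Google, etc.) each define their own APIs:

```python
# Minimal sketch of an MLaaS client. Everything about the endpoint is
# hypothetical -- this illustrates the shape of the interaction, not any
# particular vendor's API.
import json
import urllib.request

ENDPOINT = "https://api.example-mlaas.com/v1/models/churn/predict"  # hypothetical

def build_request(features, api_key):
    """Package a feature dict as an authenticated JSON POST request."""
    body = json.dumps({"instances": [features]}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )

def parse_response(response_json):
    """Extract the first prediction from a (hypothetical) response body."""
    return json.loads(response_json)["predictions"][0]

# Demonstrate parsing with a canned response rather than a live call:
sample = json.dumps({"predictions": [{"label": "churn", "score": 0.87}]})
print(parse_response(sample)["label"])  # -> churn
```

The appeal of MLaaS is that the model training, hosting and scaling behind that endpoint are entirely the provider's problem.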

Machine Learning Service Providers:

Global Machine Learning as a Service (MLaaS) Market

Market Dynamics:

The scope of the report includes a detailed study of global and regional markets for the Global Machine Learning as a Service (MLaaS) Market, with analysis of variations in the industry's growth in each region. Large enterprises and SMEs are focusing on customer experience management to keep a complete and robust relationship with their customers by using customer data, so ML needs to be integrated into enterprise applications to control and make optimal use of this data. Retail enterprises are shifting their focus to customer buying patterns with the rising number of e-commerce websites and the digital revolution in the retail industry. This drives the need to track and manage the inventory movement of items, which can be done using MLaaS. The use of MLaaS by retail enterprises for inventory optimization and behavioral tracking is expected to have a positive impact on global market growth.

Apart from this, the growing trend of digitization is driving the growth of the MLaaS market globally, and growth in the adoption of cloud-based platforms is expected to have a further positive impact. However, a lack of qualified and skilled people is believed to be one of the challenges to the growth of the MLaaS market. Furthermore, increasing concern about data privacy is anticipated to restrain the development of the global market.

Market Segmentation:

The report will provide an accurate prediction of the contribution of the various segments to the growth of the Machine Learning as a Service (MLaaS) Market. Based on organization size, the SME segment is expected to account for the largest XX% market share by 2027, as SME businesses are increasingly projected to adopt machine learning services. With the help of predictive analytics, ML algorithms not only give real-time data but also predict the future. Machine learning solutions are used by SME businesses to fine-tune their supply chains by predicting demand for a product and by suggesting the timing and quantity of supplies vital for satisfying customers' expectations.
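The demand-forecasting idea described above can be illustrated with a deliberately tiny example: fit a straight-line trend to past monthly sales with ordinary least squares and project the next month. The sales figures are invented, and a production system would use far richer features (seasonality, promotions, stock levels):

```python
# Toy demand forecast: least-squares linear trend over past monthly sales,
# projected one month ahead. Data is made up for illustration.

def fit_trend(ys):
    """Least-squares fit of ys ~ a*t + b over t = 0..n-1; returns (a, b)."""
    n = len(ys)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

monthly_sales = [120, 135, 150, 158, 172, 185]  # units sold, months 0..5
a, b = fit_trend(monthly_sales)
forecast = a * len(monthly_sales) + b  # project month 6
print(round(forecast, 1))  # -> 197.7
```

An MLaaS offering wraps this kind of modeling (at much greater sophistication) behind a hosted API, so the SME never maintains the model itself.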

Regional Analysis:

The report offers a brief analysis of the major regions in the MLaaS market, namely Asia-Pacific, Europe, North America, South America, and the Middle East & Africa. North America plays an important role in the MLaaS market, with a market size of US$ XX Mn in 2019 growing to US$ XX Mn in 2027 at a CAGR of XX%, followed by Europe. Most machine-learning-as-a-service companies are based in the U.S. and are contributing significantly to the growth of the market. Asia-Pacific has been growing at the highest rate because of rising investment, favorable government policies and growing awareness. In 2017, Google launched Google Neural Machine Translation for 9 Indian languages, which uses ML and artificial neural networks to improve the fluency and accuracy of Google Translate.

Recent Development:

The MMR research study includes the profiles of leading companies operating in the Global Machine Learning as a Service (MLaaS) Market. Companies in the global market are focused on enhancing their products and services through various strategic approaches. ML providers are competing by launching new product categories with advanced subscription-based platforms. The companies have adopted strategies of version upgrades, mergers and acquisitions, agreements, partnerships, and strategic collaborations with regional and global players to achieve high growth in the MLaaS market.

For example, in April 2019, Microsoft developed a platform that uses machine teaching to help deep reinforcement learning algorithms tackle real-world problems. Microsoft scientists and product developers pioneered this complementary approach, which relies on people's knowledge of how to break a problem into easier tasks and gives ML models important clues about how to find a solution sooner.

DO INQUIRY BEFORE PURCHASING REPORT HERE:https://www.maximizemarketresearch.com/inquiry-before-buying/55511/

The objective of the report is to present a comprehensive analysis of the Global Machine Learning as a Service (MLaaS) Market including all the stakeholders of the industry. The past and current status of the industry with forecasted market size and trends are presented in the report with the analysis of complicated data in simple language. The report covers all the aspects of the industry with a dedicated study of key players that includes market leaders, followers and new entrants by region. PORTER, SVOR, PESTEL analysis with the potential impact of micro-economic factors by region on the market has been presented in the report. External as well as internal factors that are supposed to affect the business positively or negatively have been analyzed, which will give a clear futuristic view of the industry to the decision-makers.

The report also helps in understanding the Global Machine Learning as a Service (MLaaS) Market's dynamics and structure by analyzing the market segments, and it projects the market's size. A clear representation of the competitive analysis of key players by application, price, financial position, product portfolio, growth strategies, and regional presence makes the report an investor's guide.

Scope of the Global Machine Learning as a Service (MLaaS) Market

Global Machine Learning as a Service (MLaaS) Market, By Component

Software
Services

Global Machine Learning as a Service (MLaaS) Market, By Organization Size

Large Enterprises
SMEs

Global Machine Learning as a Service (MLaaS) Market, By End-Use Industry

Aerospace & Defense
IT & Telecom
Energy & Utilities
Public Sector
Manufacturing
BFSI
Healthcare
Retail
Others

Global Machine Learning as a Service (MLaaS) Market, By Application

Marketing & Advertising
Fraud Detection & Risk Management
Predictive Analytics
Augmented & Virtual Reality
Natural Language Processing
Computer Vision
Security & Surveillance
Others

Global Machine Learning as a Service (MLaaS) Market, By Region

Asia Pacific
North America
Europe
Latin America
Middle East & Africa

Key players operating in the Global Machine Learning as a Service (MLaaS) Market

Ersatz Labs, Inc.
BigML
Yottamine Analytics
Hewlett Packard
Amazon Web Services
IBM
Microsoft
Sift Science, Inc.
Google
AT&T
Fuzzy.ai
SAS Institute Inc.
FICO
Predictron Labs Ltd.

MAJOR TOC OF THE REPORT

Chapter One: Machine Learning as a Service Market Overview

Chapter Two: Manufacturers Profiles

Chapter Three: Global Machine Learning as a Service Market Competition, by Players

Chapter Four: Global Machine Learning as a Service Market Size by Regions

Chapter Five: North America Machine Learning as a Service Revenue by Countries

Chapter Six: Europe Machine Learning as a Service Revenue by Countries

Chapter Seven: Asia-Pacific Machine Learning as a Service Revenue by Countries

Chapter Eight: South America Machine Learning as a Service Revenue by Countries

Chapter Nine: Middle East and Africa Revenue Machine Learning as a Service by Countries

Chapter Ten: Global Machine Learning as a Service Market Segment by Type

Chapter Eleven: Global Machine Learning as a Service Market Segment by Application

Chapter Twelve: Global Machine Learning as a Service Market Size Forecast (2019-2026)

Browse Full Report with Facts and Figures of Machine Learning as a Service Market Report at:https://www.maximizemarketresearch.com/market-report/global-machine-learning-as-a-service-mlaas-market/55511/

About Us:

Maximize Market Research provides B2B and B2C market research on 20,000 high growth emerging technologies & opportunities in Chemical, Healthcare, Pharmaceuticals, Electronics & Communications, Internet of Things, Food and Beverages, Aerospace and Defense and other manufacturing sectors.

Contact info:

Name: Vikas Godage

Organization: MAXIMIZE MARKET RESEARCH PVT. LTD.

Email: sales@maximizemarketresearch.com

Contact: +919607065656/ +919607195908

Website: http://www.maximizemarketresearch.com

Read the original post:
Global Machine Learning As A Service (Mlaas) Market : Industry Analysis And Forecast (2020-2027) - MR Invasion

Read More..

Machine Learning in Medicine Market 2020-2024 Review and Outlook – Latest Herald

ORBIS RESEARCH has recently announced a Global Machine Learning in Medicine Market report with critical analysis of the current state of the industry, demand for the product, the environment for investment and existing competition. The Global Machine Learning in Medicine Market report is a focused study of the various factors affecting the market and a comprehensive survey of the industry covering major aspects such as product types, applications, top regions, growth analysis, market potential, challenges for investors, opportunity assessments, major drivers and key players.

This report is directed to arm readers with conclusive judgment on the potential of the factors that propel relentless growth in the Global Machine Learning in Medicine Market. The report makes concrete headway in identifying and deciphering each of the market dimensions to evaluate logical derivatives that have the potential to set the market's growth course. Besides presenting notable insights on the factors comprising the above determinants, the report's subsequent sections state information on regional segmentation, as well as thoughtful perspectives on region-specific developments and leading market players' objectives to trigger maximum revenue generation and profits.

This study covers the following key players: Monday.com

Request a sample of this report @ https://www.orbisresearch.com/contacts/request-sample/4328851

A thorough rundown of essential elements such as drivers, threats, challenges and opportunities is discussed at length in this elaborate report on the Machine Learning in Medicine Market and eventually analyzed to document logical conclusions. The report also covers the competitive landscape, accurately identifying opportunities as well as threats and challenges, and elaborates on the innumerable factors and growth-triggering decisions that make this a highly remunerative market.

This meticulous, research-based analytical review of the Machine Learning in Medicine Market is a high-end expert handbook portraying crucial market-relevant information and developments, encompassing a holistic record of growth-promoting triggers, including trends, factors, dynamics, challenges and threats, as well as barrier analysis, that accurately direct and influence the market's profit trajectory. It renders major impetus on detailed growth facets in terms of product section, payment and transaction platforms, service portfolio and applications, as well as a specific compilation of the technological interventions that facilitate ideal growth potential in the Global Machine Learning in Medicine Market.

Access Complete Report @ https://www.orbisresearch.com/reports/index/global-machine-learning-in-medicine-market-professional-survey-2019-by-manufacturers-regions-countries-types-and-applications-forecast-to-2024

Market segment by Type, the product can be split into:
On-premises
Software-as-a-Service (SaaS)
Cloud Based

Market segment by Application, split into:
Aerospace
Automotive industry
Biotech and pharmaceutical
Chemical industry
Consumer products


The report also incorporates ample understanding of numerous analytical practices, such as SWOT and PESTEL analysis, to source optimum profit resources in the Machine Learning in Medicine Market. Other vital factors, such as scope, growth potential, profitability and structural break-down, have been distinctively documented in this report to leverage holistic market growth.

For Enquiry before buying report @ https://www.orbisresearch.com/contacts/enquiry-before-buying/4328851

Some Key TOC Points:
1 Industry Overview of Machine Learning in Medicine
2 Industry Chain Analysis of Machine Learning in Medicine
3 Manufacturing Technology of Machine Learning in Medicine
4 Major Manufacturers Analysis of Machine Learning in Medicine
Continued

About Us:

Orbis Research (orbisresearch.com) is a single point of aid for all your market research requirements. We have a vast database of reports from leading publishers and authors across the globe. We specialize in delivering customized reports as per the requirements of our clients. We have complete information about our publishers and hence are sure about the accuracy of the industries and verticals of their specialization. This helps our clients map their needs, and we produce the perfect market research study they require.

Contact Us:
Hector Costello
Senior Manager, Client Engagements
4144N Central Expressway, Suite 600, Dallas, Texas 75204, U.S.A.
Phone No.: USA: +1 (972)-362-8199 | IND: +91 895 659 5155

Originally posted here:
Machine Learning in Medicine Market 2020-2024 Review and Outlook - Latest Herald

Read More..

Rise in the demand for Machine Learning & AI skills in the post-COVID world – Times of India

The world has seen an unprecedented challenge and is battling this invisible enemy with all its might. The spread of the novel coronavirus has left global economies hanging by a thread, businesses impacted and most people locked down. But while the physical world has come to a drastic halt or slow-down, the digital world is blooming. And in addition to understanding the possibilities of home workspaces, companies are finally understanding the scope of Machine Learning and Artificial Intelligence. A trend that was already garnering attention in recent years, ML & AI have taken centre stage as more and more brands realise the possibilities of these tools. According to a research report released in February, demand for data engineers was up 50% and demand for data scientists was up 32% in 2019 compared to the prior year. Not only is machine learning being used by researchers to tackle this global pandemic, it is also being seen as an essential tool in building a post-COVID world.

This pandemic is being fought on the basis of numbers and data, which is the key reason it has driven people's interest in Machine Learning. ML helps us collect, analyse and understand vast quantities of data. Combined with the power of Artificial Intelligence, Machine Learning can help with early understanding of problems and quick resolutions. In recent times, ML & AI have been used by doctors and medical personnel to track the virus, identify potential patients and even analyse the possible cures available. Even in the current economic crisis, jobs in data science and machine learning have been among the least affected. All these factors indicate that machine learning and artificial intelligence are here to stay, and this is the key reason that data science is an area you can particularly focus on in this lockdown.

The capabilities of Machine Learning and Data Sciences

One of the key reasons that a number of people have been able to shift to working from home without much hassle has to be the use of ML & AI by businesses. This shift has also motivated many businesses, both small-scale and large-scale, to re-evaluate how they function. With companies already announcing plans for a more robust working mechanism, involving less office space and more detailed and structured online working systems, the focus on Machine Learning is bound to increase considerably.

The Current Possibilities

The world of data science has come out stronger during this lockdown, and the interest in and importance given to the subject are on the rise. AI-powered mechanics and operations have already made it easier to manage various spaces with lower risk, and this trend of turning to AI is bound to increase in the coming years. This is why being educated in this field can improve your skills in this segment. If you are someone who has always been intrigued by data science and machine learning, or are already working in this field and looking for ways to accelerate your career, there are various courses you can turn to. With the increased free time that staying at home has afforded us, you can begin an additional degree to pad up your resume, learn some cutting-edge concepts and gain access to industry experts.

Start learning more about Machine Learning & AI

If you are wondering where to begin this journey of learning, a leading online education service provider, upGrad, has curated programs that would suit you. From Data Science to in-depth learning in AI, there are multiple programs on their website covering various domains. The PG Diploma in Machine Learning and AI, in particular, has a brilliant curriculum that will help you progress in the field of Machine Learning and Artificial Intelligence. A carefully crafted program from IIIT Bangalore offering 450+ hours of learning and more than 10 practical hands-on capstone projects, it has been designed to help people get a deeper understanding of real-life problems in the field.

Understanding the PG Diploma in Machine Learning & AI

This 1-year program at upGrad has been designed especially for working professionals who are looking for a career push. The curriculum consists of 30+ case studies and assignments and 25+ industry mentorship sessions, which help you understand everything you need to know about this field. The program strikes the right balance between the practical exposure required to instil better management and problem-solving skills and the theoretical knowledge that will sharpen your skills in this category. On successful completion, learners also receive IIIT Bangalore alumni status and job placement assistance with top firms.

Originally posted here:
Rise in the demand for Machine Learning & AI skills in the post-COVID world - Times of India

Read More..

A.I. can’t solve this: The coronavirus could be highlighting just how overhyped the industry is – CNBC

Monitors display a video showing facial recognition software in use at the headquarters of the artificial intelligence company Megvii, in Beijing, May 10, 2018. Beijing is putting billions of dollars behind facial recognition and other technologies to track and control its citizens.

Gilles Sabri | The New York Times

The world is facing its biggest health crisis in decades, but one of the world's most promising technologies, artificial intelligence (AI), isn't playing the major role some may have hoped for.

Renowned AI labs at the likes of DeepMind, OpenAI, Facebook AI Research, and Microsoft have remained relatively quiet as the coronavirus has spread around the world.

"It's fascinating how quiet it is," said Neil Lawrence, the former director of machine learning at Amazon Cambridge.

"This (pandemic) is showing what bulls--t most AI hype is. It's great and it will be useful one day but it's not surprising in a pandemic that we fall back on tried and tested techniques."

Those techniques include good, old-fashioned statistics and mathematical models. The latter are used to create epidemiological models, which predict how a disease will spread through a population. Right now, these are far more useful than fields of AI like reinforcement learning and natural-language processing.
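A minimal example of such an epidemiological model is the classic SIR (Susceptible-Infected-Recovered) model, which the sketch below steps forward with simple difference equations. The parameter values are illustrative only, not fitted to any real outbreak:

```python
# Classic SIR compartmental model, stepped forward with daily Euler updates.
# beta is the transmission rate, gamma the recovery rate; R0 = beta / gamma.
# Parameters are illustrative, not fitted to any real epidemic.

def simulate_sir(s, i, r, beta, gamma, days):
    """Return the daily (S, I, R) population fractions over `days` days."""
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i   # transmission term
        new_recoveries = gamma * i      # recovery term
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Start with 1% of the population infected; R0 = 0.5 / 0.2 = 2.5.
trajectory = simulate_sir(s=0.99, i=0.01, r=0.0, beta=0.5, gamma=0.2, days=100)
peak_infected = max(i for _, i, _ in trajectory)
print(f"peak infected fraction: {peak_infected:.2f}")
```

Epidemiologists fit models like this (and far richer variants) to reported case data, which is exactly the "tried and tested" modelling the article says the pandemic response has fallen back on.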

Of course, there are a few useful AI projects happening here and there.

In March, DeepMind announced that it had used a machine-learning technique called "free modelling" to detail the structures of six proteins associated with SARS-CoV-2, the coronavirus that causes the Covid-19 disease. Elsewhere, Israeli start-up Aidoc is using AI imaging to flag abnormalities in the lungs, and a U.K. start-up founded by Viagra co-inventor David Brown is using AI to look for Covid-19 drug treatments.

Verena Rieser, a computer science professor at Heriot-Watt University, pointed out that autonomous robots can be used to help disinfect hospitals and AI tutors can support parents with the burden of home schooling. She also said "AI companions" can help with self isolation, especially for the elderly.

"At the periphery you can imagine it doing some stuff with CCTV," said Lawrence, adding that cameras could be used to collect data on what percentage of people are wearing masks.

Separately, a facial recognition system built by U.K. firm SCC has also been adapted to spot coronavirus sufferers instead of terrorists. In Oxford, England, Exscientia is screening more than 15,000 drugs to see how effective they are as coronavirus treatments. The work is being done in partnership with Diamond Light Source, the U.K.'s national synchrotron.

But AI's role in this pandemic is likely to be more nuanced than some may have anticipated. AI isn't about to get us out of the woods any time soon.

"It's kind of indicating how hyped AI was," said Lawrence, who is now a professor of machine learning at the University of Cambridge. "The maturity of techniques is equivalent to the noughties internet."

AI researchers rely on vast amounts of nicely labeled data to train their algorithms, but right now there isn't enough reliable coronavirus data to do that.

"AI learns from large amounts of data which has been manually labeled a time consuming and expensive task," said Catherine Breslin, a machine learning consultant who used to work on Amazon Alexa.

"It also takes a lot of time to build, test and deploy AI in the real world. When the world changes, as it has done, the challenges with AI are going to be collecting enough data to learn from, and being able to build and deploy the technology quickly enough to have an impact."

Breslin agrees that AI technologies have a role to play. "However, they won't be a silver bullet," she said, adding that while they might not directly bring an end to the virus, they can make people's lives easier and more fun while they're in lockdown.

The AI community is thinking long and hard about how it can make itself more useful.

Last week, Facebook AI announced a number of partnerships with academics across the U.S.

Meanwhile, DeepMind's polymath leader Demis Hassabis is helping the Royal Society, the world's oldest independent scientific academy, on a new multidisciplinary project called DELVE (Data Evaluation and Learning for Viral Epidemics). Lawrence is also contributing.

Go here to see the original:
A.I. can't solve this: The coronavirus could be highlighting just how overhyped the industry is - CNBC

Read More..