
CIO Leadership: Building Trust at the Executive Level Will Drive the Discussion at the 2021 HMG – GlobeNewswire

WESTPORT, Conn., June 16, 2021 (GLOBE NEWSWIRE) -- HMG Strategy, the World's #1 digital platform for enabling technology executives to reimagine the enterprise and reshape the business world, is excited to be hosting its 2021 HMG Live! Chicago CIO Executive Leadership Summit on June 17. HMG Strategy's highly interactive digital events bring together the world's most distinguished and innovative business technology leaders to discuss the most pressing leadership, strategic, cultural, technology and career challenges and opportunities that technology executives face today and into the future.

The 2021 HMG Live! Chicago CIO Executive Leadership Summit will focus on recommendations from CIOs and business technology executives on how to build trusting relationships across the C-suite along with advice for regaining trust after it has been lost.

"The CEO and members of the executive team are relying on the CIO and other technology executives to offer bold ideas for developing new go-to-market models and other business innovation," said Hunter Muller, President and CEO of HMG Strategy. "But in order to succeed with these efforts, technology leaders must have trusting relationships in place with members of the executive team."

Top-tier business technology executives and industry experts speaking at the 2021 HMG Live! Chicago CIO Executive Leadership Summit will include:

Valued Partners for the 2021 HMG Live! Chicago CIO Summit include Akamai, Auth0, BetterCloud, Blue Prism, Check Point Software Technologies, Darktrace, Forescout Technologies, Gigamon, Globant, Horizon3.ai, Illumio, RingCentral, SafeGuard Cyber, SIM Chicago, Skybox Security, Starburst, Upwork and Zscaler.

To learn more about the 2021 HMG Live! Chicago CIO Executive Leadership Summit and to register for the event, click here.

HMG Strategy will be hosting its 2021 HMG Live! Toronto CIO Executive Leadership Summit on June 21. Key topics to be explored at this event include recommendations for cultivating a connected culture for the future, including one that embraces diversity, equity and inclusion. Speakers at this event will also share recommendations for working with the CEO and the Board on looming cyber threats, including how best to communicate these risks and articulate where funding is needed most.

Prominent technology executives speaking at this event will include:

Valued Partners for the 2021 HMG Live! Toronto CIO Executive Leadership Summit include Akamai, Auth0, BetterCloud, Blue Prism, Check Point Software Technologies, The CIO Association of Canada, Darktrace, Forescout Technologies, Globant, Horizon3.ai, Infoblox, Illumio, RingCentral, Rubrik, SafeGuard Cyber, SIM Toronto, Skybox Security, and Zscaler.

To learn more about the 2021 HMG Live! Toronto CIO Executive Leadership Summit and to register for the event, click here.

HMG Strategy will also be hosting its 2021 HMG Live! New York Global Innovation Summit on June 24. Timely topics to be discussed at this event will include the role that business technology executives can play in reimagining how the business operates, applying security innovation to address the evolving threat landscape, along with real-world use cases for applying hot technologies such as artificial intelligence, machine learning, robotic process automation, the Internet of Things, computer vision and augmented reality.

World-class executives speaking at the 2021 HMG Live! New York Global Innovation Summit will include:

Valued Partners for the 2021 HMG Live! New York Global Innovation Summit include Akamai, Auth0, BetterCloud, Darktrace, Forescout Technologies, FPT Corporation, Globant, Horizon3.ai, Incorta, Illumio, NPower, RingCentral, SafeGuard Cyber, SIM New York Metro, Skybox Security, Starburst, Zendesk and Zscaler.

To learn more about the 2021 HMG Live! New York Global Innovation Summit and to register for the event, click here.

To learn about HMG Strategy's upcoming CIO and CISO Summits, click here.

HMG STRATEGY'S 2021 GLOBAL LEADERSHIP INSTITUTE AWARDS

The HMG Strategy 2021 Global Leadership Institute Awards honor exemplary technology leaders and leadership teams who are delivering exceptional value to their organizations. This award recognizes those who have reimagined and reinvented themselves to place their organizations on the fast track to groundbreaking transformation in dynamic times. Technology executives and their teams who receive these awards are being recognized for accomplishments in the following areas: Diversity, Equity and Inclusion; Leading into the C-suite; Creating New Go-to-Market Business Models; Modernizing Enterprise Architecture; and Building a Culture of Trust.

World-class technology executives who were recognized for their contributions to their companies and to the industry at the 2021 HMG Live! U.K. CIO Executive Leadership Summit on June 16 include:

To learn more about HMG Strategy's 2021 Global Leadership Institute Awards and to nominate a deserving executive, click here.

HMG STRATEGY'S CUSTOM WEBINARS AND DIGITAL ROUNDTABLES

On June 22, HMG Strategy will be hosting the HMG Security Innovation Webinar. Research conducted by HMG Strategy reveals that one of the top focus areas for CISOs and security leaders is working with the CEO and the executive team to enable innovation and grow the business. In this exclusive webinar, HMG Strategy President and CEO Hunter Muller asks the CEOs and leaders of innovative cybersecurity technology companies about the challenges they solve for clients, the innovation they deliver to companies, and how their solutions are differentiated in the market.

Speakers for this highly interactive event include Ritesh Agrawal, CEO, Airgap Networks Inc.

To learn more about this event and to register for the webinar, click here.

About HMG Strategy

HMG Strategy is the world's leading digital platform for connecting technology executives to reimagine the enterprise and reshape the business world. Our regional and virtual CIO and CISO Executive Leadership Series, authored books and Digital Resource Center deliver unique, peer-driven research from CIOs, CISOs, CTOs and technology executives on leadership, innovation, transformation and career ascent. HMG Strategy offers a range of peer-driven research services, such as its CIO & CISO Executive Leadership Alliance (CELA) program, which brings together the world's top CIOs, CISOs and technology executives to brainstorm on the top opportunities and challenges facing them in their roles.

HMG Strategy also produces the HMG Security Innovation Accelerator Panel, a new webinar series that's designed to connect enterprise CISOs and security leaders with the most innovative cybersecurity companies from across the world.

The HMG Strategy global network consists of over 400,000 senior IT executives, industry experts and world-class thought leaders.

To learn more about the 7 Pillars of Trust for HMG Strategy's unique business model, click here.

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/2fcc0288-cffa-40d5-8a6d-592909a8312f


Cyber and AI experts bring ideas to life to help combat national security concerns – De Montfort University

Cyber security and artificial intelligence (AI) experts at De Montfort University Leicester (DMU) have been given an opportunity to turn their ideas into commercial products as part of a government-funded accelerator programme.

The initiative, called CyberASAP (Cyber Security Academic Startup Accelerator Programme), is funded by the UK Government Department for Digital, Culture, Media & Sport (DCMS) and delivered through KTN and Innovate UK. It aims to develop and sustain a security sector that meets national security demands as part of the government's £1.9 billion national cyber security strategy.

(Image: Adi Goldstein/Unsplash)

Two teams made up of academics from DMU's Cyber Technology Institute (CTI) and Institute of Artificial Intelligence (IAI) successfully secured funding in the first phase of the programme, which requires each team to identify the value proposition of their prospective product.

With the support of KTN, the two teams have learned the skills needed to operate in a start-up environment through a series of interventions (including bootcamps, workshops and mentoring) that give participating teams vital insights into the key milestones necessary to take their product from the lab to the market.

Dr Richard Smith, Associate Professor of Cyber Security at DMU, said: "The CyberASAP programme provides academics with the skills to operate in the commercial world. It is about taking the skill base in UK universities and teaching academics how to exploit their research, skills and expertise within the commercial market."

CyberASAP was launched in 2017 and since then has helped more than 90 teams from universities across the UK to develop their innovations.

The first DMU team, called CyberAgents, is led by Dr Smith alongside Professor Daniela Romano. Together they are developing an incident response team using human/AI hybrid technology.

"Many organisations have limited Incident Response capabilities and need to improve them in a safe environment," explained Dr Smith. "We use behavioural science and data science to create an innovative human/AI team defending together."

The AI agent learns human-like creativity but with faster-than-human reasoning, and can plug any skill gaps in an organisation's capability. Also, the more the agent learns, the more it improves.

The second team, SACRED, is being led by Dr Mary He, alongside Professor Yingjie Yang and Dr Francisco J Aparicio Navarro.

They are developing a unique and innovative web security tool, driven by advanced AI technology, to detect malicious web crawlers (computer programs, also called spiders or bots, that automatically search the internet), thus protecting the online services and websites of businesses and organisations and ensuring their customers' privacy and data security.

"Both teams have made an effort to analyse the market size, competitors and potential collaborators for their prospective products," added Dr He. "The CyberASAP initiative will speed up the pace of UK cybersecurity development, tackling the global challenges in cybersecurity and therefore helping to implement the National Security Strategy of the UK: to use all our national capabilities to build Britain's prosperity, extend our nation's influence in the world and strengthen our security."

Dr Smith added: "Not only does CyberASAP help to grow the innovation produced by DMU academics, it increases our university's footprint and brand in the commercial sector."

Posted on Thursday 10th June 2021


Machine Learning Can Reduce Worry About Nanoparticles In Food – Texas A&M Today – Texas A&M University Today

Machine learning algorithms developed by researchers can predict the presence of any nanoparticle in most plant species.

Getty Images

While crop yield has achieved a substantial boost from nanotechnology in recent years, alarms over the health risks posed by nanoparticles within fresh produce and grains have also increased. In particular, nanoparticles entering the soil through irrigation, fertilizers and other sources have raised concerns about whether plants absorb these minute particles enough to cause toxicity.

In a new study published online in the journal Environmental Science and Technology, researchers at Texas A&M University have used machine learning to evaluate the salient properties of metallic nanoparticles that make them more susceptible to plant uptake. The researchers said their algorithm could indicate how much plants accumulate nanoparticles in their roots and shoots.

Nanoparticles are a burgeoning trend in several fields, including medicine, consumer products and agriculture. Depending on the type of nanoparticle, some have favorable surface properties, charge and magnetism, among other features. These qualities make them ideal for a number of applications. For example, in agriculture, nanoparticles may be used as antimicrobials to protect plants from pathogens. Alternatively, they can be used to bind to fertilizers or insecticides and then programmed for slow release to increase plant absorption.

These agricultural practices and others, like irrigation, can cause nanoparticles to accumulate in the soil. However, with the different types of nanoparticles that could exist in the ground and a staggeringly large number of terrestrial plant species, including food crops, it is not clearly known if certain properties of nanoparticles make them more likely to be absorbed by some plant species than others.

"As you can imagine, if we have to test the presence of each nanoparticle for every plant species, it is a huge number of experiments, which is very time-consuming and expensive," said Xingmao "Samuel" Ma, associate professor in the Zachry Department of Civil and Environmental Engineering. "To give you an idea, silver nanoparticles alone can have hundreds of different sizes, shapes and surface coatings, and so, experimentally testing each one, even for a single plant species, is impractical."

Instead, for their study, the researchers chose two different machine learning algorithms, an artificial neural network and gene-expression programming. They first trained these algorithms on a database created from past research on different metallic nanoparticles and the specific plants in which they accumulated. In particular, their database contained the size, shape and other characteristics of different nanoparticles, along with information on how much of these particles were absorbed from soil or nutrient-enriched water into the plant body.

Once trained, their machine learning algorithms could correctly predict the likelihood of a given metallic nanoparticle to accumulate in a plant species. Also, their algorithms revealed that when plants are in a nutrient-enriched or hydroponic solution, the chemical makeup of the metallic nanoparticle determines the propensity of accumulation in the roots and shoots. But if plants are grown in soil, the contents of organic matter and the clay in soil are key to nanoparticle uptake.
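The study's actual models (an artificial neural network and gene-expression programming trained on a literature database) are not reproduced here, but the core idea of learning a mapping from particle and soil features to uptake can be sketched with a toy gradient-descent regressor. The features, data and numbers below are invented for illustration only:

```python
# Illustrative-only sketch: a tiny linear model trained by gradient descent
# to predict a relative nanoparticle uptake index from two invented features
# (normalized particle size and soil organic-matter content). The real study
# used a neural network and gene-expression programming on literature data.

def train_linear(X, y, lr=0.2, epochs=10000):
    """Fit y ~ w0 + w1*x1 + w2*x2 by batch gradient descent on squared error."""
    w = [0.0] * (len(X[0]) + 1)          # bias plus one weight per feature
    n = len(X)
    for _ in range(epochs):
        grads = [0.0] * len(w)
        for xi, yi in zip(X, y):
            err = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)) - yi
            grads[0] += err / n
            for j, xj in enumerate(xi):
                grads[j + 1] += err * xj / n
        w = [wj - lr * g for wj, g in zip(w, grads)]
    return w

def predict(w, x):
    return w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))

# Invented training data: (size, organic matter) -> uptake index,
# mimicking the trend that smaller particles in richer soil accumulate more.
X = [(0.1, 0.8), (0.2, 0.6), (0.5, 0.4), (0.8, 0.2), (0.9, 0.1)]
y = [0.85, 0.70, 0.45, 0.25, 0.15]

w = train_linear(X, y)
print(round(predict(w, (0.3, 0.5)), 2))  # uptake estimate for an unseen particle
```

Once fitted, the model answers "how likely is this particle to accumulate?" without a wet-lab experiment, which is precisely the cost saving the researchers describe.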

Ma said that while the machine learning algorithms could make predictions for most food crops and terrestrial plants, they might not yet be ready for aquatic plants. He also noted that the next step in his research would be to investigate if the machine learning algorithms could predict nanoparticle uptake from leaves rather than through the roots.

"It is quite understandable that people are concerned about the presence of nanoparticles in their fruits, vegetables and grains," said Ma. "But instead of not using nanotechnology altogether, we would like farmers to reap the many benefits provided by this technology but avoid the potential food safety concerns."

Other contributors include Xiaoxuan Wang, Liwei Liu and Weilan Zhang from the civil and environmental engineering department.

This research is partly funded by the National Science Foundation and the Ministry of Science and Technology, Taiwan under the Graduate Students Study Abroad Program.


Akamai Unveils Machine Learning That Intelligently Automates Application and API Protections and Reduces Burden on Security Professionals – KPVI News…

CAMBRIDGE, Mass., June 16, 2021 /PRNewswire/ -- Akamai Technologies, Inc. (NASDAQ: AKAM), the world's most trusted solution for protecting and delivering digital experiences, today announces platform security enhancements to strengthen protection for web applications, APIs, and user accounts. Akamai's machine learning derives insight on malicious activity from more than 1.3 billion daily client interactions to intelligently automate threat detections, time-consuming tasks, and security logic to help professionals make faster, more trustworthy decisions regarding cyberthreats.

In its May 9 report Top Cybersecurity Threats in 2021, Forrester estimates that due to reasons "exacerbated by COVID-19 and the resulting growth in digital interactions, identity theft and account takeover increased by at least 10% to 15% from 2019 to 2020." The leading global research and advisory firm notes that we should "anticipate another 8% to 10% increase in identity theft and ATO [account takeover] fraud in 2021." With threat actors increasingly using automation to compromise systems and applications, security professionals must likewise automate defenses in parallel against these attacks to manage cyberthreats at pace.

New Akamai platform security enhancements include:

Adaptive Security Engine for Akamai's web application and API protection (WAAP) solutions, Kona Site Defender and Web Application Protector, is designed to automatically adapt protections with the scale and sophistication of attacks, while reducing the effort to maintain and tune policies. The Adaptive Security Engine combines proprietary anomaly risk scoring with adaptive threat profiling to identify highly targeted, evasive, and stealthy attacks. The dynamic security logic intelligently adjusts its defensive aggressiveness based on threat intelligence automatically correlated for each customer's unique traffic. Self-tuning leverages machine learning, statistical models, and heuristics to analyze all triggers across each policy to accurately differentiate between true and false positives.
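Akamai's scoring engine is proprietary, but the general shape of anomaly risk scoring against a per-customer traffic baseline can be sketched as follows. The features, thresholds and data here are invented for illustration and do not reflect Akamai's implementation:

```python
# Illustrative-only sketch of anomaly risk scoring: build a baseline from
# benign traffic, then score new requests by how far each feature deviates
# from that baseline (sum of absolute z-scores). Features are invented.
import statistics

def build_baseline(samples):
    """Per-feature (mean, stdev) computed from benign traffic observations."""
    keys = samples[0].keys()
    return {k: (statistics.mean(s[k] for s in samples),
                statistics.stdev(s[k] for s in samples)) for k in keys}

def risk_score(request, baseline):
    """Sum of absolute z-scores across features: larger means more anomalous."""
    score = 0.0
    for k, (mu, sigma) in baseline.items():
        if sigma > 0:
            score += abs(request[k] - mu) / sigma
    return score

# Invented benign traffic: requests-per-minute and URL-length features.
benign = [{"rpm": 10, "url_len": 30}, {"rpm": 12, "url_len": 35},
          {"rpm": 9, "url_len": 28}, {"rpm": 11, "url_len": 32}]
baseline = build_baseline(benign)

normal = {"rpm": 10, "url_len": 31}
burst = {"rpm": 300, "url_len": 200}   # scripted-attack-like pattern
print(round(risk_score(normal, baseline), 1), round(risk_score(burst, baseline), 1))
```

A real engine would add adaptive thresholds, threat profiling and self-tuning on top of a baseline like this; the sketch only shows why per-customer baselines let the same request be benign for one site and anomalous for another.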

Audience Hijacking Protection has been added to Akamai Page Integrity Manager to detect and block malicious activity in real time from client-side attacks using JavaScript, advertiser networks, browser plug-ins, and extensions that target web clients. Audience Hijacking Protection is designed to use machine learning to quickly identify vulnerable resources, detect suspicious behavior, and block unwanted ads, pop-ups, affiliate fraud, and other malicious activities aimed at hijacking your audience.

Bot Score and JavaScript Obfuscation have been added to Akamai Bot Manager, laying the foundation for ongoing innovations in adversarial bot management, including the ability to take action against bots aligned with corporate risk tolerance. Bot Score automatically learns unique traffic and bot patterns, and self-tunes for long-term effectiveness; JavaScript Obfuscation dynamically changes detections to prevent bot operators from reverse engineering detections.

Akamai Account Protector is a new solution designed to proactively identify and block human fraudulent activity like account takeover attacks. Using advanced machine learning, behavioral analytics, and reputation heuristics, Account Protector intelligently evaluates every login request across multiple risk and trust signals to determine if it is coming from a legitimate user or an impersonator. This capability complements Akamai's bot mitigation to provide effective protection against both malicious human actors and automated threats.

"At Akamai, our latest platform release is intended to help resolve the tension between security and ease of use, with key capabilities around automation and machine learning specifically designed to intelligently augment human decision-making," said Aparna Rayasam, senior vice president and general manager, Application Security, Akamai. "Smart automation adds immediate value and empowers users with the right tools to generate insight and context to make faster and more trustworthy decisions, seamlessly all while anticipating what attackers might do next."

For more information about Akamai's Edge Security solutions, visit our Platform Update page.

About Akamai

Akamai secures and delivers digital experiences for the world's largest companies. Akamai's intelligent edge platform surrounds everything, from the enterprise to the cloud, so customers and their businesses can be fast, smart, and secure. Top brands globally rely on Akamai to help them realize competitive advantage through agile solutions that extend the power of their multi-cloud architectures. Akamai keeps decisions, apps, and experiences closer to users than anyone and attacks and threats far away. Akamai's portfolio of edge security, web and mobile performance, enterprise access, and video delivery solutions is supported by unmatched customer service, analytics, and 24/7/365 monitoring. To learn why the world's top brands trust Akamai, visit http://www.akamai.com, blogs.akamai.com, or @Akamai on Twitter. You can find our global contact information at http://www.akamai.com/locations.

Contacts:

Tim Whitman

Media Relations

617-444-3019

twhitman@akamai.com

Tom Barth

Investor Relations

617-274-7130

tbarth@akamai.com

View original content to download multimedia:http://www.prnewswire.com/news-releases/akamai-unveils-machine-learning-that-intelligently-automates-application-and-api-protections-and-reduces-burden-on-security-professionals-301313433.html

SOURCE Akamai Technologies, Inc.


Data Insights and Machine Learning Take Charge of the Maritime Sales Process – Hellenic Shipping News Worldwide

While the maritime industry has been hesitant to embrace data insights and machine learning, the tables are now turning. Today, an increasing number of maritime companies actively use data insights to improve sales and supply chain activities and increase revenues, among them the world's largest ship supplier, Wrist Ship Supply.

The need for efficiency in the maritime sector has led companies to actively use data as a measure to optimize the supply chain. This has paved the way for new ship supply services centered around data insights and machine learning to increase top and bottom-line figures.

According to the leading data and analytics firm, GateHouse Maritime, data insights can make a noticeable difference in the maritime sector. With a combination of historic and real-time ocean data, machine learning, and smart algorithms, maritime supply companies can predict vessel destinations and arrivals with high precision.

Traditionally, vessel tracking has been a time consuming, manual process characterized by imprecise predictions and uncertainty. But today, the process can be automated and turn large amounts of data into tangible leads and sales:

"With the help of data insights, it is possible to predict arrivals several days in advance with almost 100 percent accuracy. This allows maritime supply companies to obtain an obvious competitive advantage, as they can operate proactively and sell services to potential customers days before a given vessel calls into port," says GateHouse Maritime CEO Martin Dommerby Kristiansen.
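Production systems such as GateHouse Maritime's train predictive models on billions of historical datapoints, but the core of an arrival prediction can be sketched from AIS-style position reports: great-circle distance to port divided by recently observed speed. The track, port coordinates and numbers below are invented for illustration:

```python
# Hypothetical sketch of a vessel ETA estimate from AIS-style reports.
# Real predictors learn corrections (weather, congestion, port-call history)
# on top of this; the data below is invented.
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles."""
    r = 3440.065  # mean Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def eta_hours(track, port):
    """Hours to arrival from the last position fix and average observed speed."""
    avg_speed = sum(s for _, _, s in track) / len(track)   # knots
    lat, lon, _ = track[-1]
    return haversine_nm(lat, lon, *port) / avg_speed

# Invented track: (lat, lon, speed_knots); destination near Rotterdam.
track = [(51.0, 2.0, 14.0), (51.2, 2.5, 15.0), (51.4, 3.0, 14.5)]
rotterdam = (51.95, 4.05)
print(round(eta_hours(track, rotterdam), 1), "hours to arrival")
```

Knowing an arrival window days in advance is what lets a supplier open the sales conversation before the vessel calls into port.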

Data analytics strengthen the world's largest ship supplier

Four years ago, the world's largest ship supplier, Wrist Ship Supply, launched a strategy to integrate data analytics into numerous business areas. The global ship supplier is a full-service provider for marine, offshore and navy operations, supplying consumables, handling owners' goods, and storing and forwarding spare parts.

Today, Wrist Ship Supply works strategically with data analytics and business intelligence to improve internal processes and increase value for customers:

"In recent years, we have experienced an increasing pull from the market, and as a market leader within ship supply, we feel obliged to take part in the digital transformation. Data analysis has proven to be a cornerstone and a very important tool for measuring and improving performance across our own as well as customers' supply chains. Now, our business model is infused with data analytics and business intelligence that strengthen efficiency and reliability in both internal and external operations," explains Birthe Boysen, Business Analysis Director at Wrist Ship Supply.

For Birthe Boysen and Wrist Ship Supply, data analytics has especially proven its worth within sales:

"It is crucial for us to know where potential customer vessels are heading and when they arrive in different ports. This allows us to coordinate our sales efforts and establish contact in advance. Not only does this make us more efficient, but it also creates value for customers, because all service activities can be planned several days ahead of arrival."

While the data-driven sales approach has increased the focus on KPIs, it has also become an important part of budgeting. Therefore, it has been a key priority for Wrist Ship Supply to be able to navigate in the ocean of available data:

"We have an almost endless amount of data available, and it easily becomes exhausting to keep track of numbers and figures. Therefore, we make it a priority to ensure that both internal and external stakeholders can make sense of the conclusions in our data insights. If employees or customers cannot fathom the overall lines in our data results, it will be difficult to use analytics in any way," remarks Nadia Hay Kragholm, Senior Business Analyst at Wrist.

According to Martin Dommerby Kristiansen, data insight has the potential to transform the entire maritime industry because efficiency has never been more important:

"The maritime industry is indeed reliant on efficiency across the value chain. Recently, we have seen how a vessel stuck in the Suez Canal for only a few days can impact not only the maritime industry, but the entire transportation and logistics sector. This goes to show how important data insight and analytics can prove to be for companies that wish to operate proactively and minimize disorder in the supply chain."

GateHouse Maritime is a leader in ocean visibility solutions. We help global maritime service providers, cargo owners and logistics companies with transparent and accurate location data and predictions, cargo transport status, and offshore asset protection and surveillance. Our powerful maritime data foundation consists of 273 billion datapoints and more than 30 analysis and predictive models used for data-driven decisions by maritime operators worldwide. GateHouse Maritime is a subsidiary of GateHouse Holding, founded in 1992 and headquartered in Denmark, which also holds the subsidiaries GateHouse SatCom and GateHouse Igniter.

Source: GateHouse Maritime A/S


Using large-scale experiments and machine learning to discover theories of human decision-making – Science Magazine

Discovering better theories

Theories of human decision-making have proliferated in recent years. However, these theories are often difficult to distinguish from each other and offer limited improvement in accounting for patterns in decision-making over earlier theories. Peterson et al. leverage machine learning to evaluate classical decision theories, increase their predictive power, and generate new theories of decision-making (see the Perspective by Bhatia and He). This method has implications for theory generation in other domains.

Science, abe2629, this issue p. 1209; see also abi7668, p. 1150

Predicting and understanding how people make decisions has been a long-standing goal in many fields, with quantitative models of human decision-making informing research in both the social sciences and engineering. We show how progress toward this goal can be accelerated by using large datasets to power machine-learning algorithms that are constrained to produce interpretable psychological theories. Conducting the largest experiment on risky choice to date and analyzing the results using gradient-based optimization of differentiable decision theories implemented through artificial neural networks, we were able to recapitulate historical discoveries, establish that there is room to improve on existing theories, and discover a new, more accurate model of human decision-making in a form that preserves the insights from centuries of research.


Mydecine Innovations kicks off machine learning-based drug discovery program with the University of Alberta – Proactive Investors USA & Canada

The program will enable the company to more rapidly screen hundreds of thousands of new molecules without the need to produce them, allowing Mydecine to focus on the strongest potential therapeutics

Mydecine Innovations Group has launched its in-silico drug discovery program in conjunction with researchers at the University of Alberta (UofA), the company announced.

Led by computer-assisted drug development expert Khaled Barakat, a UofA assistant professor at the Li Ka Shing Institute of Virology, the program is focused on utilizing artificial intelligence/machine learning (AI/ML) to support drug screenings, including both the ability to build drugs from the receptor up and to assess drugs around the receptors of Mydecine's choosing.

The in-silico (read: computer simulated) program will enable the company to more rapidly screen hundreds of thousands of new molecules without the need to produce them, allowing Mydecine to focus on the strongest potential therapeutics for its chemical and natural development programs, the company said.

Mydecine will also be able to more efficiently screen its own proprietary library of novel compounds designed by Chief Science Officer Rob Roscow and advisory board member Denton Hoyer.

"Years of research have shown that the chemical components of psychoactive and non-psychoactive mushrooms can be extremely powerful in a therapeutic setting, and yet there is still so much that we don't understand about how these molecules can affect biological systems," CEO Josh Bartch said in a statement.

"As the next evolution of drug discovery progresses forward, we strongly believe that this new age will be fully led by artificial intelligence and machine learning. Expanding our R&D efforts with the addition of our cutting-edge AI/ML drug screening program will allow our research teams to take a leading role within the psychedelic community to more efficiently expand our knowledge of these components and their pharmacological value."

At UofA, Barakat and his team specialize in understanding the nature and biophysical processes underlying protein-drug interaction, protein-protein interactions, protein-DNA interactions, drug off-target interactions and predicting drug-mediated toxicity.

"Dr. Barakat and his team have built an impressive reputation as leaders at the intersection of technology and pharmacological science," Bartch said. "Adding their specialization in developing innovative computer models and novel technologies to predict protein-protein and protein-drug interactions will bring tremendous value to Mydecine's research and enable us to more quickly bring to market effective drugs that can produce better outcomes for patients."

Contact Andrew Kessel at andrew.kessel@proactiveinvestors.com

Follow him on Twitter @andrew_kessel


2 supervised learning techniques that aid value predictions – TechTarget

This article is excerpted from the course "Fundamental Machine Learning," part of the Machine Learning Specialist certification program from Arcitura Education. It is the ninth part of the 13-part series, "Using machine learning algorithms, practices and patterns."

This article explores the numerical prediction and category prediction supervised learning techniques. These machine learning techniques are applied when the target whose value needs to be predicted is known in advance and some sample data is available to train a model. As explained in Part 4, these techniques are documented in a standard pattern profile format.

A data set may contain a number of historical observations (rows) amassed over a period of time where the target value is numerical in nature and is known for those observations. An example is the number of ice creams sold and the temperature readings, where the number of ice creams sold is the target variable. To obtain value from this data, a business use case might require a prediction of how much ice cream will be sold if the temperature reading is known in advance from the weather forecast. As the target is numerical in nature, supervised learning techniques that work with categorical targets cannot be applied (Figure 1).

The historical data is capitalized upon by first finding independent variables that influence the target dependent variable and then quantifying this influence in a mathematical equation. Once the mathematical equation is complete, the value of the target variable is predicted by inputting the values of the independent variables.

The data set is first scanned to find the best independent variables by applying the associativity computation pattern to find the relationship between the independent variables and the dependent variable. Only the independent variables that are highly correlated with the dependent variable are kept. Next, linear regression is applied.

Linear regression, also known as least squares regression, is a statistical technique for predicting the values of a continuous dependent variable based on the values of an independent variable. The dependent and independent variables are also known as response and explanatory variables, respectively. As a mathematical relationship between the response and explanatory variables, linear regression assumes that a linear correlation exists between them. This linear correlation is represented through the line of best fit, also called a regression line: a straight line that passes as closely as possible through all points on the scatter plot (Figure 2).

Linear regression model development starts by expressing the linear relationship. Once the mathematical form has been established, the next step is to estimate the parameters of the model via model fitting. This determines the line of best fit, achieved via least squares estimation, which aims to minimize the sum of squared errors (SSE). The last stage is to evaluate the model using either R squared or mean squared error (MSE).
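As a sketch of this fitting step, the line of best fit for the ice-cream example can be estimated by least squares with NumPy (the temperature and sales figures below are made-up illustration values, not data from the article):

```python
import numpy as np

# Hypothetical observations: temperature readings and ice creams sold
temps = np.array([18.0, 21.0, 24.0, 27.0, 30.0, 33.0])
sold = np.array([110.0, 135.0, 155.0, 182.0, 205.0, 228.0])

# Least squares estimation of the line of best fit (degree-1 polynomial)
slope, intercept = np.polyfit(temps, sold, 1)

# Predict sales for a forecast temperature of 25 degrees
predicted = slope * 25.0 + intercept
```

With this near-linear toy data, the fitted slope is positive (warmer days, more ice cream) and the prediction falls between the neighboring observations.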

MSE is a measure that determines how close the line of best fit is to the actual values of the response variable. Being a straight line, the regression line cannot pass through each point; it is an approximation of the actual value of the response variable based on estimated values. The distance between the actual and the estimated value of the response variable is the error of estimation. For the best possible estimate of the response variable, the errors between all points, as represented by the sum of squared errors, must be minimized. The line of best fit is the line that results in the minimum possible sum of squared errors. In other words, MSE identifies the variation between the actual value and the estimated value of the response variable as provided by the regression line (Figure 3).
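The SSE and MSE definitions above reduce to a few lines of arithmetic. A minimal sketch with invented actual and estimated values:

```python
import numpy as np

actual = np.array([3.1, 4.2, 5.0, 6.3])     # observed response values
estimated = np.array([3.0, 4.0, 5.5, 6.0])  # values on the regression line

errors = actual - estimated        # error of estimation per point
sse = np.sum(errors ** 2)          # sum of squared errors
mse = sse / len(actual)            # mean squared error
```

The regression line that minimizes SSE (equivalently, MSE) is the line of best fit.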

The coefficient of determination, called R squared, is the percentage of variation in the response variable that is predicted or explained by the explanatory variable, with values that vary between 0 and 1. A value equal to 0 means that the response variable cannot be predicted from the explanatory variable, while a value equal to 1 means the response variable can be predicted without any errors. A value between 0 and 1 provides the percentage of successful prediction.
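R squared can be computed directly from its definition as one minus the ratio of residual variation to total variation; a short sketch with illustrative numbers:

```python
import numpy as np

actual = np.array([2.0, 4.0, 6.0, 8.0])
estimated = np.array([2.2, 3.9, 6.1, 7.8])

ss_res = np.sum((actual - estimated) ** 2)      # unexplained (residual) variation
ss_tot = np.sum((actual - actual.mean()) ** 2)  # total variation in the response
r_squared = 1.0 - ss_res / ss_tot               # fraction of variation explained
```

A value near 1, as here, means the explanatory variable predicts nearly all of the variation in the response.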

In regression, more than one explanatory variable can be used simultaneously for predicting the response variable, in which case it is called multiple linear regression.
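Multiple linear regression follows the same least squares idea with an extra coefficient per explanatory variable. A sketch using NumPy's least squares solver on made-up data with two explanatory variables:

```python
import numpy as np

# Two explanatory variables per row (e.g., temperature and foot traffic; toy values)
X = np.array([[18.0, 200.0], [21.0, 250.0], [24.0, 300.0],
              [27.0, 280.0], [30.0, 350.0]])
y = np.array([110.0, 140.0, 170.0, 175.0, 215.0])

# Prepend a column of ones so the model also fits an intercept term
X1 = np.column_stack([np.ones(len(X)), X])

# Solve for [intercept, coef_1, coef_2] by least squares
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

predictions = X1 @ coef
```

The solver returns one intercept plus one coefficient per explanatory variable, quantifying each variable's influence on the response.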

The numerical prediction pattern can benefit from the application of the graphical summaries computation pattern by drawing a scatter plot to graphically validate if a linear relationship exists between the response and explanatory variables (Figure 4).

There are cases where a business problem involves predicting a category -- such as whether a customer will default on their loan or whether an image is a cat or a dog -- based on historical examples of defaulters and cats and dogs, respectively. In this case, the categories (default/not default and cat/dog) are known in advance. However, as the target class is categorical in nature, numerical predictive algorithms cannot be applied to train and predict a model for classification purposes (Figure 5).

Supervised machine learning techniques are applied by selecting a problem-specific machine learning algorithm and developing a classification model. This involves first using the known example data to train a model. The model is then fed new unseen data to find out the most appropriate category to which the new data instance belongs.

Different machine learning algorithms exist for developing classification models. For example, naive Bayes is probabilistic while K-nearest neighbors (KNN), support vector machine (SVM), logistic regression and decision trees are deterministic in nature. Generally, in the case of a binary problem -- cat or dog -- logistic regression is applied. If the feature space is n-dimensional (a large number of features) with complex interactions between the features, KNN is applied. Naive Bayes is applied when there is not enough training data or fast predictions are required, while decision trees are a good choice when the model needs to be explainable.

Logistic regression is based on linear regression and is also considered a class probability estimation technique, since its objective is to estimate the probability of an instance belonging to a particular class.
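A minimal sketch of class probability estimation with scikit-learn's logistic regression, using an invented single-feature loan-default data set:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binary problem: feature = loan-to-income ratio, target = default (1) or not (0)
X = np.array([[0.1], [0.2], [0.3], [0.6], [0.7], [0.9]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# Estimate the probability that a new applicant (ratio 0.8) belongs to the default class
proba_default = model.predict_proba([[0.8]])[0, 1]
```

Rather than outputting only a hard class label, the model yields a probability that can be thresholded or reported directly.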

KNN, also known as lazy learning and instance-based learning, is a black-box classification technique where instances are classified based on their similarity to a user-defined number (K) of examples (nearest neighbors). No model is explicitly generated. Instead, the examples are stored as-is and an instance is classified by first finding the closest K examples in terms of distance, then assigning the class based on the class of the majority of the closest examples (Figure 6).
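Because no model is explicitly generated, KNN can be sketched in a few lines: store the examples, measure distances, and take a majority vote (the 2-D points and labels below are invented for illustration):

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among the k nearest stored examples."""
    distances = np.linalg.norm(train_X - query, axis=1)  # Euclidean distance to each example
    nearest = np.argsort(distances)[:k]                  # indices of the k closest examples
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D feature space with two well-separated classes
train_X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                    [4.0, 4.0], [4.2, 3.9], [3.8, 4.1]])
train_y = np.array(["cat", "cat", "cat", "dog", "dog", "dog"])

label = knn_predict(train_X, train_y, np.array([1.1, 0.9]), k=3)
```

All of the work happens at prediction time, which is why the technique is called lazy learning.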

Naive Bayes is a probability-based classification technique that predicts class membership based on the previously observed probability of all potential features. This technique is used when a combination of a number of features, called evidence, affects the determination of the target class. Due to this characteristic, naive Bayes can take into account features that may be insignificant when considered on their own but when considered accumulatively can significantly impact the probability of an instance belonging to a certain class.

All features are assumed to carry equal significance, and the value of one feature is not dependent on the value of any other feature. In other words, the features are assumed to be independent. Naive Bayes serves as a baseline classifier for comparing more complex algorithms and can also be used for incremental learning, where the model is updated based on new example data without the need to regenerate the whole model from scratch.
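A short sketch of naive Bayes over nominal features, using scikit-learn's categorical variant on an invented play-or-not data set (the feature meanings and encodings are assumptions for illustration):

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB

# Nominal features, integer-encoded: outlook (0=sunny, 1=rain), windy (0=no, 1=yes)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0, 0], [1, 1]])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = play, 0 = don't play

# Class membership is predicted from per-feature conditional probabilities,
# treating each feature as independent evidence
model = CategoricalNB().fit(X, y)
pred = model.predict([[0, 0]])[0]  # sunny and not windy
```

Each feature contributes its observed class-conditional probability independently, which is what lets individually weak features combine into a confident prediction.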

A decision tree is a classification algorithm that represents a concept in the form of a hierarchical set of logical decisions with a tree-like structure that is used to determine the target value of an instance. [See discussion of decision trees in part 2 of this series.] Logical decisions are made by performing tests on the feature values of the instances in such a way that each test further filters the instance until its target value or class membership is known. A decision tree resembles a flowchart consisting of decision nodes, which perform a test on the feature value of an instance, and leaf nodes, also known as terminal nodes, where the target value of the instance is determined as a result of traversal through the decision nodes.
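The hierarchical tests described above can be made concrete with scikit-learn; the loan data below is invented for illustration, and `export_text` prints the learned tree as the flowchart-like set of decision and leaf nodes:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy features: [age, income in thousands]; target: 1 = loan approved, 0 = declined
X = [[25, 30], [35, 60], [45, 80], [22, 20], [50, 90], [30, 25]]
y = [0, 1, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Render the hierarchy of feature-value tests as readable rules
rules = export_text(tree, feature_names=["age", "income"])

# Traverse the decision nodes for a new applicant until a leaf is reached
pred = tree.predict([[48, 85]])[0]
```

Printing `rules` shows exactly which feature test each decision node performs, which is why decision trees are a good choice when the model must be explainable.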

The category prediction pattern normally requires the application of a few other patterns. In the case of logistic regression and KNN, applying the feature encoding pattern ensures that all features are numerical as these two algorithms only work with numerical features. The application of the feature standardization pattern in the case of KNN ensures that none of the large magnitude features overshadow smaller magnitude features in the context of distance measurement. Naive Bayes requires the application of the feature discretization pattern as naive Bayes only works with nominal features. KNN can also benefit from the application of feature discretization pattern via a reduction in feature dimensionality, which contributes to faster execution and increased generalizability of the model.
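A brief sketch of the two supporting patterns mentioned above, with invented sample data: feature standardization rescales a large-magnitude numeric column so it cannot dominate distance measurement, and feature encoding turns a nominal column into numerical columns:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, OneHotEncoder

# A numeric feature with large magnitude and a nominal feature (toy values)
salaries = np.array([[30000.0], [60000.0], [90000.0]])
cities = np.array([["chicago"], ["toronto"], ["chicago"]])

# Feature standardization: zero mean, unit variance, so KNN distances are balanced
scaled = StandardScaler().fit_transform(salaries)

# Feature encoding: one numerical column per nominal category,
# as required by algorithms like KNN and logistic regression
encoded = OneHotEncoder().fit_transform(cities).toarray()
```

After these steps, every feature is numerical and on a comparable scale, satisfying the preconditions the pattern profiles describe.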

The next article covers the category discovery and pattern discovery unsupervised learning patterns.

Read the original post:
2 supervised learning techniques that aid value predictions - TechTarget


AWS leader talks about technologies needed to take precision medicine to the next level – Healthcare IT News

One of the most significant challenges to the advancement of precision medicine has been the lack of an infrastructure to support translational bioinformatics, supporting organizations as they work to uncover unique datasets to find novel associations and signals.

By supporting greater interoperability and collaboration, data scientists, developers, clinicians and pharmaceutical partners have the opportunity to leverage machine learning to reduce the time it takes to move from insight to discovery, ultimately leading to the right patients receiving the right care, with the right therapeutic at the right time.

To get a better understanding of challenges surrounding precision medicine and its future, Healthcare IT News sat down with Dr. Taha Kass-Hout, director of machine learning at AWS.

Q: You've said that one of the most significant challenges to the advancement of precision medicine has been the lack of an infrastructure to support translational bioinformatics. Please explain this challenge in detail.

A: One of the challenges in developing and utilizing storage, analytics and interpretive methods is the sheer volume of biomedical data that needs to be transformed and that often resides on multiple systems and in multiple formats. The future of healthcare is vibrant and dynamic, and there is an opportunity for cloud and big data to take on a larger role to help the industry address these areas.

For example, datasets used to perform tasks such as computational chemistry and molecular simulations that help de-risk and advance molecules into development contain millions of data points and require billions of calculations to produce an experimental output. In order to bring new therapeutics to market faster, scientists need to move targets through development faster and find more efficient ways to collaborate both inside and outside of their organizations.

Another challenge is that large volumes of data acquired by legacy research equipment, such as microscopes and spectrometers, is usually stored locally. This creates a barrier for securely archiving, processing and sharing with collaborating researchers globally. Improving access to data, securely and compliantly, while increasing usability is critical to maximizing the opportunities to leverage analytics and machine learning.

For instance, Dotmatics' cloud-based software provides simple, unified, real-time access to all research data in Dotmatics and third-party databases, coupled with integrated, scientifically aware informatics solutions for small molecule and biologics discovery that expedite laboratory workflows and capture experiments, entities, samples and test data so that in-house or multi-organizational research teams become more efficient.

Today we are seeing a rising wave of healthcare organizations moving to the cloud, which is enabling researchers to unite R&D data with information from across the value chain, while benefiting from compute and storage options that are more cost-effective than on-premises infrastructure.

For large datasets in the R&D phase, large-scale, cloud-based data transfer services can transfer hundreds of terabytes and millions of files at speeds up to 10 times faster than open-source tools. Storage gateways ensure experimental data is securely stored, archived and available to other permissioned collaborators. Uniting data in a data lake improves access and helps to eliminate silos.

Cloud-based hyperscale computing and machine learning enable organizations to collaborate across datasets, create and leverage global infrastructures to maintain data integrity, and more easily perform machine learning-based analyses to accelerate discoveries and de-risk candidates faster.

For example, six years ago, Moderna started building databases and information-based activities to support all of their programs. Today, they are fully cloud-based, and their scientists don't go to the lab to pipette their messenger RNA and proteins. They go to their web portal, the Drug Design Studio that runs on the cloud.

Through the portal, scientists can access public and private libraries that contain all the messenger RNA that exists and the thousands of proteins they can produce. Then, they only need to press a button and the sequence goes to a fully automated, central lab where data is collected at every step.

Over the years, data from the portal and lab has helped Moderna improve their sequence design and production processes and improve the way their scientists gather feedback. In terms of research, all of Moderna's algorithms rely on computational power from the cloud to further their science.

Q: You contend that by supporting greater interoperability and collaboration, data scientists, developers, clinicians and pharmaceutical partners have the opportunity to leverage machine learning to reduce the time it takes to move from insight to discovery. Please elaborate on machine learning's role here in precision medicine.

A: For the last decade, organizations have focused on digitizing healthcare. In the next decade, making sense of all this data will provide the biggest opportunity to transform care. However, this transformation will primarily depend on data flowing where it needs to, at the right time, and supporting this process in a way that is secure and protects patients' health data.

It comes down to interoperability. It may not be the most exciting topic, but it's by far one of the most important, and one the industry needs to prioritize. By focusing on interoperability of information and systems today, we can ensure that we end up in a better place in 10 years than where we are now. And so, everything around interoperability, including security, identity management and differential privacy, is likely to be part of this future.

Machine learning models trained to support healthcare and life sciences organizations can help automatically normalize, index and structure data. This approach has the potential to bring data together in a way that creates a more complete view of a patient's medical history, making it easier for providers to understand relationships in the data and compare this to the rest of the population, drive increased operational efficiency, and have the ability to use data to support better patient health outcomes.

For example, AstraZeneca has been experimenting with machine learning across all stages of research and development, and most recently in pathology to speed up the review of tissue samples. Labeling the data is a time-consuming step, especially in this case, where it can take many thousands of tissue-sample images to train an accurate model.

AstraZeneca uses a machine learning-powered, human-in-the-loop data-labeling and annotation service to automate some of the most tedious portions of this work, resulting in at least 50% less time spent cataloging samples.

It also helps analysts spot trends and anomalies in the health data and derive actionable insights to improve the quality of patient care, make predictions for medical events such as stroke or congestive heart failure, modernize care infrastructure, increase operational efficiency and scale specialist expertise.

Numerate, a discovery-stage pharmaceutical company, uses machine learning technologies to more quickly and cost-effectively identify novel molecules that are most likely to progress through the research pipeline and become good candidates for new drug development.

The company recently used its cloud-based platform to rapidly discover and optimize ryanodine receptor 2 (RYR2) modulators, which are being advanced as new drugs to treat life-threatening cardiovascular diseases.

Ryanodine 2 is a difficult protein to target, but the cloud made that process easier for the company. Traditional methods could not have attacked the problem, as the complexity of the biology makes the testing laborious and slow, independent of the industry's low 0.1% screening hit rate for much simpler biology.

In Numerate's case, using the cloud enabled the company to effectively decouple the trial-and-error process from the laboratory and discover and optimize candidate drugs five times faster than the industry average.

Machine learning also is helping power the entire clinical development process. Biopharma researchers use machine learning to design the most productive trial protocols, study locations, recruitment and patient cohorts to enroll. Researchers not trained as programmers can use cloud-based machine learning services to build, train and deploy machine learning algorithms to help with pre-clinical studies, complex simulations and predictive workflow optimization.

Machine learning can also help accelerate the regulatory submission process, as the massive amounts of data generated during clinical trials can be captured and effectively shared to collaborate between investigators, contract research organizations (CROs) and sponsor organizations.

For example, the Intelligent Trial Planner (ITP) from Knowledgent, now part of Accenture, uses machine learning services to determine the feasibility of trial studies and forecast recruitment timelines. The ITP platform enables study design teams at pharma organizations to run prediction analysis in minutes, not weeks, allowing them to iterate faster and more frequently.

Powered by machine learning, real-time scenario planning helps to facilitate smarter trial planning by enabling researchers to determine the most optimal sites, countries and/or protocol combinations.

By eliminating poor performing sites, trial teams have the potential to reduce their trial cost by 20%. And by making data-driven decisions that are significantly more accurate, they can plan and execute clinical trials faster, leading to hundreds of thousands in cost savings for every month saved in a trial.

Additionally, purpose-built machine learning is supported by cost-effective cloud-based compute options. For example, high-performance computing (HPC) can quickly scale to accommodate large R&D datasets, orchestrating services and simplifying the use and management of HPC environments.

Data transformation tools can also help to simplify and accelerate data profiling, preparation and feature engineering, as well as enable reusable algorithms both for new model discovery and inference.

The healthcare and life sciences industry has come a long way in the last year. However, for progress and transformation to continue, interoperability needs to be prioritized.

Q: The ultimate goal of precision medicine is the right patients receiving the right care, with the right therapeutic, at the right time. What do healthcare provider organization CIOs and other health IT leaders need to be doing with machine learning and other technologies today to be moving toward this goal?

A: The first questions IT leaders need to ask themselves are: 1) If they are not yet investing in machine learning, do they plan to this year? And 2) What are the largest blockers to machine learning in their teams?

Our philosophy is to make machine learning available to every data scientist and developer without the need to have a specific background in machine learning, and then have the ability to use machine learning at scale and with cost efficiencies.

Designing a personalized care pathway using therapeutics tuned for particular biomarkers relies on a combination of different data sources such as health records and genomics to deliver a more complete assessment of a patient's condition. By sequencing the genomes of entire populations, researchers can unlock answers to genetic diseases that historically haven't been possible in smaller studies and pave the way for a baseline understanding of wellness.

Population genomics can improve the prevention, diagnosis and treatment of a range of illnesses, including cancer and genetic diseases, and produce the information doctors and researchers need to arrive at a more complete picture of how an individual's genes influence their health.

Advanced analytics and machine learning capabilities can use an individual or entire population's medical history to better understand relationships in data and in turn deliver more personalized and curated treatment.

Second, healthcare and life sciences organizations need to be open to experimenting with, learning about and embracing cloud and other technology, and many organizations across the industry are already doing this.

Leaders in precision medicine research such as UK Biobank, DNAnexus, Genomics England, Lifebit, Munich Leukemia Lab, Illumina, Fabric Genomics, CoFactor Genomics and Emedgene all leverage cloud and technology to speed genomic interpretation.

Third, supporting open collaboration and data sharing needs to be a business priority. The COVID-19 Open Research Dataset (CORD-19), created last year by a coalition of research groups, provided open access to the full body of available global COVID-19 research and data.

This was one of the primary factors that enabled the discovery, clinical trial and delivery of the mRNA-based COVID-19 vaccines in an unprecedented timeframe. Additionally, our Open Data Program makes more than 40 openly available genomics datasets accessible, providing the research community with a single documented source of truth.

Commercial solutions that have leveraged machine learning to enable large-scale genomic sequencing include organizations such as Munich Leukemia Lab, who has been able to use the Field Programmable Gate Array-based compute instances to greatly speed up the process of whole genome sequencing.

As a result, what used to take 20 hours of compute time can now be achieved in only three hours. Another example is Illumina, which is using cloud solutions to offer its customers a lower-cost, high-performance genomic analysis platform, which can help them speed their time to insights as well as discoveries.

Twitter: @SiwickiHealthIT
Email the writer: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

Read more here:
AWS leader talks about technologies needed to take precision medicine to the next level - Healthcare IT News


The CIO’s Guide to Building a Rockstar Data Science and AI Team | eWEEK – eWeek

Just about everyone agrees that data scientists and AI developers are the new superstars of the tech industry. But ask a group of CIOs to define the precise area of expertise for data science-related job titles, and discord becomes the word of the day.

As businesses seek actionable insights by hiring teams that include data analysts, data engineers, data scientists, machine learning engineers and deep learning engineers, a key to success is understanding what each role can and can't do for the business.

Read on to learn what your data science and AI experts can be expected to contribute as companies grapple with ever-increasing amounts of data that must be mined to create new paths to innovation.

In a perfect world, every company employee and executive works under a well-defined set of duties and responsibilities.

Data science isn't that world. Companies often will structure their data science organization based on project need: Is the main problem maintaining good data hygiene? Or is there a need to work with data in a relational model? Perhaps the team requires someone to be an expert in deep learning, and to understand infrastructure as well as data?

Depending on a company's size and budget, any one job title might be expected to own one or more of these problem-solving skills. Of course, roles and responsibilities will change with time, just as they've done as the era of big data evolves into the age of AI.

That said, it's good for a CIO and the data science team she or he is managing today to remove as much ambiguity as possible regarding roles and responsibilities for some of the most common roles: data analyst, data engineer, data scientist, machine learning engineer and deep learning engineer.

Teams that have the best understanding of how each fits into the company's goals are best positioned to deliver a successful outcome. No matter the role, accelerated computing infrastructure is also key to powering success throughout the pipeline as data moves from analytics to advanced AI.

It's important to recognize the work of a data analyst, as these experts have been helping companies extract information from their data long before the emergence of the modern data science and AI pipeline.

Data analysts use standard business intelligence tools like Microsoft Power BI, Tableau, Qlik, Yellowfin, Spark, SQL and other data analytics applications. Broad-scale data analytics can involve the integration of many different data sources, which increases the complexity of the work of both data engineers and data scientists, another example of how the work of these various specialists tends to overlap and complement each other.

Data analysts still play an important role in the business, as their work helps the business assess its success. A data engineer might also support a data analyst who needs to evaluate data from different sources.

Data scientists take things a step further so that companies can start to capitalize on new opportunities with recommender systems, conversational AI, and computer vision, to name a few examples.

A data engineer makes sense of messy data, and there's usually a lot of it. People in this role tend to be junior teammates who make data as nice and neat as possible for data scientists to use. This role involves a lot of data prep and data hygiene work, including lots of ETL (extract, transform, load) to ingest and clean data.

The data engineer must be good with data jigsaw puzzles. Formats change, standards change, even the fields a team is using on a webpage can change frequently. Datasets can have transmission errors, such as when data from one field is incorrectly entered into another.

When datasets need to be joined together, data engineers need to fix the data hygiene problems that occur when labeling is inconsistent. For example, if the day of the week is included in the source data, the data engineer needs to make sure that the same format is used to indicate the day, as Monday could also be written as Mon., or even represented by a number that could be one or zero depending on how the days of the week are counted.
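The day-of-week cleanup described above is a typical rule-based data-hygiene task. A minimal sketch (the alias table and function name are hypothetical, chosen for illustration):

```python
# Hypothetical cleanup: normalize inconsistent day-of-week labels to one canonical format
DAY_ALIASES = {
    "monday": "Mon", "mon": "Mon", "mon.": "Mon",
    "tuesday": "Tue", "tue": "Tue", "tues.": "Tue",
    # ... the remaining days would follow the same pattern; numeric encodings
    # (0 vs. 1 for Monday) would need a separate, source-specific rule
}

def normalize_day(raw):
    """Map a raw day label (any casing, abbreviated or not) to a canonical form."""
    key = str(raw).strip().lower()
    return DAY_ALIASES.get(key, "UNKNOWN")

cleaned = [normalize_day(v) for v in ["Monday", "Mon.", "  MON ", "Tuesday"]]
```

Flagging unmapped values as "UNKNOWN" rather than guessing makes labeling inconsistencies visible before datasets are joined.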

Expect your data engineers to be able to work freely with scripting languages like Python, and in SQL and Spark. They'll need programming language skills to find problems and clean them up. Given that they'll be working with raw data, their work is important to ensuring your pipeline is robust.

If enterprises are pulling data from their data lake for AI training, this rule-based work can be done by a data engineer. More extensive feature engineering is the work of a data scientist. Depending on their experience and the project, some data engineers may support data scientists with initial data visualization graphs and charts.

Depending on how strict your company has been with data management, or if you work with data from a variety of partners, you might need a number of data engineers on the team. At many companies, the work of a data engineer often ends up being done by a data scientist, who preps her or his own data before putting it to work.

Data scientists experiment with data to find the secrets hidden inside. It's a broad field of expertise that can include the work of data analytics and data processing, but the core work of a data scientist is done by applying predictive techniques to data using statistical machine learning or deep learning.

For years, the IT industry has talked about big data and data lakes. Data scientists are the people who finally turn these oceans of raw data into information. These experts use a broad range of tools to conduct analytics, experiment, and build and test models to find patterns. To be great at their work, data scientists also need to understand the needs of the business they're supporting.

These experts use many applications, including NumPy, SciKit-Learn, RAPIDS, CUDA, SciPy, Matplotlib, Pandas, Plotly, NetworkX, XGBoost, domain-specific libraries and many more. They need to have domain expertise in statistical machine learning, random forests, gradient boosting, packages, feature engineering, training, model evaluation and refinement, data normalization and cross-validation. The depth and breadth of these skills make it readily apparent why these experts are so highly valued at today's data-driven companies.

Data scientists often solve mysteries to get to the deeper truth. Their work involves finding the simplest explanations for complex phenomena and building models that are simple enough to be flexible yet faithful enough to provide useful insight. They must also avoid some perils of model training, including overfitting their data sets (that is, producing models that do not effectively generalize from example data) and accidentally encoding hidden biases into their models.

A machine learning engineer is the jack of all trades. This expert architects the entire process of machine and deep learning. They take AI models developed by data scientists and deep learning engineers and move them into production.

These unicorns are among the most sought-after and highly paid in the industry, and companies work hard to make sure they don't get poached. One way to keep them happy is to provide the right accelerated computing resources to help fuel their best work. A machine learning engineer has to understand the end-to-end pipeline, and they want to ensure that pipeline is optimized to deliver great results, fast.

It's not always intuitive, as machine learning engineers must know the apps, understand the downstream data architecture, and key in on system issues that may arise as projects scale. A person in this role must understand all the applications used in the AI pipeline, and usually needs to be skilled in infrastructure optimization, cloud computing, containers, databases and more.

To stay current, AI models need to be reevaluated to avoid what's called model drift, as new data impacts the accuracy of the predictions. For this reason, machine learning engineers need to work closely with their data science and deep learning colleagues, who will need to reassess models to maintain their accuracy.

A critical specialization for the machine learning engineer is deep learning engineer. This person is a data scientist who is an expert in deep learning techniques. In deep learning, AI models are able to learn and improve their own results through neural networks that imitate how human beings think and learn.

These computer scientists specialize in advanced AI workloads. Their work is part science and part art to develop what happens in the black box of deep learning models. They do less feature engineering and far more math and experimentation. The push for explainable AI (XAI), with its demands for model interpretability and explainability, can be especially challenging in this domain.

Deep learning engineers will need to process large datasets to train their models before they can be used for inference, where they apply what they've learned to evaluate new information. They use libraries like PyTorch, TensorFlow and MXNet, and need to be able to build neural networks and have strong skills in statistics, calculus and linear algebra.
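The article names PyTorch, TensorFlow and MXNet; as a library-free sketch of the mechanics those frameworks automate, here is a tiny neural network trained with hand-written backpropagation in NumPy. The XOR task, network size and learning rate are all illustrative choices, and the calculus (chain rule through sigmoid layers) is exactly the math the article says this role demands.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# XOR: the classic task a single linear layer cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units, sigmoid activations throughout.
W1 = rng.normal(scale=1.0, size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=1.0, size=(8, 1))
b2 = np.zeros((1, 1))

lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of mean squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(float(np.mean((out - y) ** 2)))  # training loss after 5000 steps
```

A deep learning framework replaces the hand-derived backward pass with automatic differentiation and runs the matrix math on accelerators, which is what makes training at real dataset scale practical.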

Given all of the broad expertise in these key roles, it's clear that enterprises need a strategy to help them grow their teams' success in data science and AI. Many new applications need to be supported, with the right resources in place to help this work get done as quickly as possible to solve business challenges.

Those new to data science and AI often choose to get started with accelerated computing in the cloud, and then move to a hybrid solution to balance the need for speed with operational costs. In-house teams tend to look like an inverted pyramid, with more analysts and data engineers funneling data into actionable tasks for data scientists, up to the machine learning and deep learning engineers.

Your IT paradigm will depend on your industry and its governance, but a great rule of thumb is to ensure your vendors and the skills of your team are well aligned. With a better understanding of the roles of a modern data team, and the resources they need to be successful, you'll be well on your way to building an organization that can transform data into business value.

ABOUT THE AUTHOR

Scott McClellan, Head of Data Science, NVIDIA

Read more here:
The CIO's Guide to Building a Rockstar Data Science and AI Team | eWEEK
