
The future of data science and risk management – Information Age

This article explores what the future of data science capabilities for risk management could entail.

What will the future of data science for risk management hold?

Data science has been vital in enhancing risk management operations in recent times. With cyber attacks, including phishing and ransomware, on the rise since the Covid-19 pandemic took hold, managing and mitigating the effects of such incidents, with the aid of network visibility, is key to business continuity. Additionally, there are IT outages and insider threats to contend with, which also require a strong risk management strategy.

In this article, we explore how the future of data science's role in risk management initiatives will take shape.

With incidents that can bring operations to a standstill becoming more diverse, it's vital that risk management measures are as agile as possible to avoid being caught out. Data science can help businesses better analyse short-term and long-term trends and respond quickly to possible risks and disruption, and this is set to receive more focus going forward.

"Whether in marketing, sales, demand, pricing or operations, the key to risk management is not only in spotting the potential risks, but in understanding their likelihood, scale and impact, and then reacting accordingly," said Matt Andrew, partner and UK managing director of Ekimetrics.

"In retail, for example, we've seen the impact of not having a thorough enough understanding of market, category and consumer trends and risks, with mitigations in place soon enough to react in the face of a market-changing pandemic. For the likes of Arcadia Group and Debenhams, factors such as the high cost of brick-and-mortar stores and a failing offer, including poor e-commerce, became increasingly impossible to deal with. Those that had already begun to invest in this area of data science will have had a better chance to regroup quickly and make better decisions, from big pivots to the ability to capitalise on micro opportunities.

"By understanding the potential range of outcomes and how they interact through data analytics, businesses can support greater agility in their decision-making about where and how to invest, and help to future-proof against other risks that are yet to emerge."


A key aspect of data science with a bright future is automation. This decreases strain on data scientists while speeding up processes, and when it comes to mitigating risks, automation can minimise errors in data reconciliation: the movement and alignment of critical company data between systems.

Douggie Melville-Clarke, head of data science at Duco, explained: "As businesses move towards making more data-first decisions, the emphasis on data automation is growing, with companies automating as much of the data reconciliation process as possible to speed up processes, help businesses scale and, crucially, mitigate risk.

"Data reconciliation has traditionally cost financial firms significant sums of money through man hours and regulatory fines. Automation takes away the human error element from data reconciliation. Manual tasks can often become tedious to a human brain, leaving room for error, but a computer can't get bored or show up to work tired. It's consistent. And this consistency is crucial when dealing with large datasets.

"Repeatable tasks can be delegated to a computer to handle more efficiently and with a lower error rate, freeing up the workforce to do jobs that add more value to the business, such as new product offerings or adapting to regulatory changes."

Data automation platforms also enable businesses to get a full view of the data transformation process, end to end. Through automated data lineage, businesses can track the cleansing and manipulation processes the data undergoes, giving them a holistic view of the data in a structured way, as opposed to an unstructured one. This aids with error spotting and reporting, both internally and to regulatory boards.
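The reconciliation idea described above can be sketched in a few lines of Python. This is an illustrative toy, not Duco's product: it compares the same records as held in two systems, keyed by record ID, and reports any breaks for a human to review.

```python
# Illustrative sketch: reconcile the same records held in two systems,
# keyed by record ID, and flag anything that doesn't line up.

def reconcile(system_a, system_b):
    """Compare two {record_id: value} mappings and report differences."""
    breaks = []
    for record_id in sorted(set(system_a) | set(system_b)):
        a, b = system_a.get(record_id), system_b.get(record_id)
        if a is None:
            breaks.append((record_id, "missing in system A"))
        elif b is None:
            breaks.append((record_id, "missing in system B"))
        elif a != b:
            breaks.append((record_id, f"value mismatch: {a} != {b}"))
    return breaks

# Hypothetical example: an internal ledger vs. a custodian's records.
ledger = {"T1": 100.0, "T2": 250.5, "T3": 75.0}
custodian = {"T1": 100.0, "T2": 250.0, "T4": 40.0}
for record_id, issue in reconcile(ledger, custodian):
    print(record_id, issue)
```

In a real automation platform, the matching rules, tolerances and exception workflows would be far richer; the point is only that consistent, repeatable comparison logic is exactly what a computer does better than a tired human.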

According to Trevor Morgan, product manager at comforte AG, the value-add that data science is set to bring to risk management in the near future is two-fold: the ability to manage more data in one go, and the ability to look to the future rather than at past events.

"Enterprise data is growing nearly exponentially, and it is also increasing in complexity in terms of data types," said Morgan.

"We have gone way past the time when humans could sift through this amount of data in order to see large-scale trends and derive actionable insights. The platforms and best practices of data science and data analytics incorporate technologies which automate the analytics workflows to a large extent, making dataset size and complexity much easier to tackle with far less effort than in years past.

"The second value-add is to leverage machine learning, and ultimately artificial intelligence, to go beyond historical and near-real-time trend analysis and look into the future, so to speak. Predictive analysis can unveil new customer needs for products and services and then forecast consumer reactions to resultant offers. Equally, predictive analytics can help uncover latent anomalies that lead to much better predictions about fraud detection and potentially risky behaviour.

"Nothing can foretell the future with 100% certainty, but the ability of modern data science to provide scary-smart predictive analysis goes well beyond what an army of humans could do manually."
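As a toy illustration of the kind of anomaly detection Morgan describes (and only that; real fraud models are far more sophisticated), a z-score filter flags values that sit far outside the historical norm:

```python
# Hedged sketch: flag candidate transaction amounts that sit more than
# `threshold` standard deviations from the mean of historical amounts.
from statistics import mean, stdev

def flag_anomalies(history, candidates, threshold=3.0):
    """Return the candidates that look anomalous relative to history."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in candidates if abs(x - mu) > threshold * sigma]

# Invented example data: typical amounts cluster near 100.
history = [102, 98, 95, 110, 99, 101, 97, 105, 103, 100]
print(flag_anomalies(history, [104, 250, 96]))  # only the 250 is flagged
```

Production systems replace the z-score with learned models, but the shape of the task is the same: score new events against what history says is normal.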

Gartner has forecast that worldwide security and risk management spending will grow 12.4% to reach $150.4 billion in 2021.

While AI has demonstrated the capability of helping to increase the agility of organisations' decision-making, there is also the matter of higher regulation of the technology to consider, with legislation in the EU being a notable example. To stay compliant, risk management aided by data science is likely to be the way forward.

"Data science and risk management professionals will work hand in hand to ensure risk and governance procedures are at a high standard," said Theresa Bercich, director of product strategy and principal data scientist at Lucinity.

"AI compliance will be more regulated, as evidenced by the EU creating legislation around this topic. This means that new job titles, positions and people will join the world of AI (which has already started), who will create frameworks for governance and risk.

"The power of AI and the demand for its value proposition is driving significant changes in the technology space, including the breakdown of traditional silos and the development of intelligent software deploying data in a productive manner."


How to get clinicians onboard with predictive analytics – Healthcare IT News

Healthcare has higher barriers to adopting data science than other industries. State-of-the-art analytics solutions are already available, but few of them are in use by clinicians.

At University of Virginia Health System, health leaders worked to establish a culture of data-driven decision-making with clinicians, in which data science guides clinicians in finding opportunities for improvement, designing and implementing interventions, and evaluating impacts.

Bommae Kim, senior data scientist at Hackensack Meridian Health, who until last year held the same role at UVA Health, said a key challenge to wider adoption is lack of interest.

"Due to their disinterest or ambivalence to data science, it may be difficult to find opportunities to work with clinicians to begin with," she said.

Kim, along with Dr. Jonathan Michel, director of data science at University of Virginia Health, will speak on the topic next month at HIMSS21. She said a lack of trust and a lack of understanding are two other challenges to the adoption of analytics solutions.

"Clinicians may disagree with analytics results due to lack of trust in data science," she said. "It may also be challenging to introduce advanced analytics due to the level of data literacy."

She explained that the key opportunities for clinicians adopting data science depend on the analytics maturity and executive leadership support at the organization.

"Of the multiple aspects to consider, I'd like to point out actionability in finding opportunities," said Kim. "Unless strong clinician support is already in place, it would be extremely challenging to succeed in purely clinical topics, for example sepsis."

She noted those clinical topics are certainly important to any health system but may not be readily actionable for many reasons.

On the other hand, Kim noted some topics are highly relevant to clinicians yet not purely clinical, such as length of stay (LOS) and readmissions.

"Their causes and interventions are not necessarily clinical, unlike sepsis, and clinicians seem more open to data scientists' suggestions in less-clinical domains," she said. "I would consider them more actionable topics. Once a strong relationship is built with clinicians, it'll be easier to move to more clinical domains with their support."

She explained that UVA Health Data Science often engages with clinicians by presenting data analysis about their patients and workflows, relevant to their projects or interests. Such sessions naturally lead clinicians to data-driven decision-making.

"Through such engagement, we built trust and improved data literacy among clinicians," said Kim.

"Moreover, in the process data scientists learned what clinicians truly want and need. What they ask for may not be what they truly want or need. With improved clinician trust and data literacy and a better understanding of clinician needs, we were able to move toward more advanced analytics."

Jonathan Michel and Bommae Kim will address the use of data science among clinicians at HIMSS21 in a session titled "Making Prescriptive Analytics Work for Clinicians." It's scheduled for Thursday, August 12 from 1-2 p.m. in room Wynn Lafite 2.


How NASA is using knowledge graphs to find talent – VentureBeat


One of NASA's biggest challenges is identifying where data science skills reside within the organization. Not only is data science a new discipline, it's also a fast-evolving one. The knowledge required for each role is constantly shifting due to technological and business demands.

That's where David Meza, acting branch chief of people analytics and senior data scientist at NASA, believes graph technology can help. His team is building a talent mapping database using Neo4j to create a knowledge graph that shows the relationships between people, skills, and projects.

Meza and his team are currently working on the implementation phase of the project. They eventually plan to formalize the end user application and create an interface to help people in NASA search for talent and job opportunities. Meza told VentureBeat more about the project.

VentureBeat: What's the broad aim of this data-led project?

David Meza: It's about taking a look at how we can identify the skills, knowledge and abilities, tasks, and technology within an occupation or a work role. How do we translate that to an employee? How do we connect it to their training? And how do we connect that back to projects and programs? All of that work is a relationship issue that can be connected via certain elements that associate all of them together, and that's where the graph comes in.

VentureBeat: Why did you decide to go with Neo4j rather than develop internally?

Meza: I think there was really nothing out there that provided what we were looking for, so that's part of it. The other part of the process is that we have specific information that we're looking for. It's not very general. And so we needed to build something that was more geared towards our concepts, our thoughts, and our needs for very specific things that we do at NASA around spaceflights, operations, and things like that.

VentureBeat: What's the timeline for the introduction of Neo4j?

Meza: We're still in the implementation phase. The first six to eight months were about research and development and making sure we had the right access to the data. Like any other project, that's probably our most difficult task: making sure we have the right access and the right information, and thinking about how everything is related. While we were looking at that, we also worked in parallel on other issues: what's the model going to look like, what algorithms are we going to use, and how are we going to train these models? We've got the data in the graph system now, and we're starting to produce a beta phase of an application. From this summer through the end of the year, we're looking towards formalizing that application to make it more of an interface that an end user can use.

VentureBeat: What's been the technical process behind the implementation of Neo4j?

Meza: The first part was trying to think about what's going to be our occupational taxonomy. We looked at: How do we identify an occupation? What is the DNA of an occupation? And similarly, we looked at that from an employee perspective, from a training perspective, and from a program or project perspective. So simply put, we broke everything down into three different categories for each occupation: a piece of knowledge, a skill, and a task.

VentureBeat: How are you using those categories to build a data model?

Meza: If you can start identifying people that have great knowledge in natural language processing, for example, and the skills they need to do a task, then from an occupation standpoint you can say that specific workers need particular skills and abilities. Fortunately, there's a database from the Department of Labor called O*NET, which has details on hundreds of occupations and their elements. Those elements consist of knowledge, skills, abilities, tasks, workforce characteristics, licensing, and education. So that was the basis for our Neo4j graph database. We then did the same thing with training. Within training, you're going to learn a piece of knowledge; to learn that piece of knowledge, you're going to get a skill; and to get that skill, you're going to do exercises or tasks to get proficient in those skills. And it's similar for programs: we can connect back to what knowledge, skills, and tasks a person needs for each project.
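Meza's "DNA of an occupation" idea can be sketched with plain Python data structures. The element names below are invented for illustration; NASA's actual system stores these relationships as nodes and edges in Neo4j, but the matching logic is conceptually similar.

```python
# Hypothetical occupation "DNA": each occupation decomposes into
# knowledge, skills, and tasks; employees carry a set of elements.
occupations = {
    "data_scientist": {
        "knowledge": {"nlp", "statistics"},
        "skills": {"python", "model_evaluation"},
        "tasks": {"train_models"},
    },
}
employees = {
    "alice": {"nlp", "statistics", "python", "model_evaluation", "train_models"},
    "bob": {"statistics", "python"},
}

def match_score(employee, occupation):
    """Fraction of an occupation's elements the employee already has."""
    required = set().union(*occupations[occupation].values())
    return len(employees[employee] & required) / len(required)

print(match_score("alice", "data_scientist"))  # 1.0: a complete match
print(match_score("bob", "data_scientist"))    # 0.4: a partial match
```

In a graph database, the same question becomes a traversal from employee nodes to occupation nodes through shared knowledge, skill, and task nodes, which also surfaces latent relationships a flat lookup would miss.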

VentureBeat: How will you train the model over time?

Meza: We've started looking at NASA-specific competencies and work roles to assign those to employees. Our next phase is to have employees validate and verify that what we infer based on the model, around knowledge, skills, abilities, tasks, and technologies, is either correct or incorrect. Then, we'll use that feedback to train the model so it can do a little bit better. That's what we're hoping to do over the next few months.

VentureBeat: What will this approach mean for identifying talent at NASA?

Meza: I think it will give the employees an opportunity to see what's out there that may interest them to further their career. If they want to make a career change, for example, they can see where they are in that process. But I also think it will help us align our people better across our organization, and it will help us track and maybe predict where we might be losing skills, or where we may need to modify skills based on the shifting of our programs and our mission due to administration changes. So I think it'll make us a little bit more agile, and it will be easier to move our workforce.

VentureBeat: Do you have any other best practice lessons for implementing Neo4j?

Meza: I guess the biggest lesson that I've learned over this time is to identify as many data sources as you can that help provide some of the information. Start small: you don't need to know everything right away. When I look at knowledge graphs and graph databases, the beauty is that you can add and remove information fairly easily compared to a relational database system, where you have to know the schema upfront. Within a graph database or knowledge graph, you can easily add information as you get it without messing up your schema or your data model. Adding more information just enhances your model. So start small, but think big in terms of what you're trying to do. Look at how you can develop relationships, and try to identify even latent relationships across your graphs based on the information you have about those data sources.


Researchers link heavy wildfire smoke in Reno to increased risk of contracting COVID-19 – Reno Gazette-Journal

Cathie Anderson | The Sacramento Bee via Associated Press

SACRAMENTO, Calif. Cases of COVID-19 rose sharply last year in Reno, Nevada, when a heavy layer of wildfire smoke settled over the city, according to scientists at the Desert Research Institute, and they and other scientists are postulating that there is a link between air pollution and increased susceptibility to the new coronavirus.

"Our results showed a substantial increase in the COVID-19 positivity rate in Reno during a time when we were affected by heavy wildfire smoke from California wildfires," said Daniel Kiser, a co-lead author of the study published in the Journal of Exposure Science and Environmental Epidemiology. "This is important to be aware of as we are already confronting heavy wildfire smoke ... with COVID-19 cases again rising in Nevada and other parts of the western U.S."

Kiser, an assistant research scientist of data science at the institute, said he became interested in studying the effect of the microscopic particulate matter from wildfires after reading a Canadian scientist's article on the dual effect of confronting both issues at the same time.

In the preface to her work, senior scientist Sarah Henderson of the British Columbia Center for Disease Control wrote: "As we enter the wildfire season in the northern hemisphere, the potential for a dangerous interaction between SARS-CoV-2 and smoke pollution should be recognized and acknowledged. This is challenging because the public health threat of COVID-19 is immediate and clear, whereas the public health threat of wildfire smoke seems distant and uncertain in comparison. However, we must start preparing now to effectively manage the combination of public health threats."

Kiser is hoping that his research results will motivate people to get vaccinated and to wear masks to reduce their exposure to the virus and to tiny wildfire particulate matter that measures 2.5 micrometers or less.

That's about 1/30th the size of a human hair at its largest. Scientists refer to it as PM 2.5 for short.
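That ratio is easy to check, assuming a human hair is roughly 75 micrometers wide (widths of about 70-100 micrometers are commonly cited):

```python
# Rough check of the "1/30th the size of a human hair" comparison.
hair_width_um = 75    # assumed hair width in micrometers
pm_size_um = 2.5      # PM 2.5 particles are 2.5 micrometers or smaller
print(round(hair_width_um / pm_size_um))  # 30: PM 2.5 is ~1/30 as wide
```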

To analyze the relationship between this fine wildfire ash and COVID-19 positivity rates, Kiser and his team collected data from the Washoe County Health District and the regions big hospital system, Renown Health.

He said they discovered that PM 2.5 was responsible for a 17.7% increase in the number of COVID-19 cases during a period of prolonged smoke between Aug. 16, 2020, and Oct. 10, 2020.

Washoe County's 450,000 residents, many of whom live in Reno, experienced 43 days of elevated PM 2.5 during that period, researchers said, compared with 26 days for residents of the San Francisco Bay Area.

"We had a unique situation here in Reno last year where we were exposed to wildfire smoke more often than many other areas, including the Bay Area," said Dr. Gai Elhanan, co-lead author of the study and an associate research scientist of computer science at the institute. "We are located in an intermountain valley that restricts the dispersion of pollutants and possibly increases the magnitude of exposure, which makes it even more important for us to understand smoke impacts on human health."

The relationship between COVID-19 positivity rates and air pollution in general has gained interest among scientists around the world, and Kiser and Elhanan cite research papers from Europe and Asia that explore the phenomenon as well.

Kent Pinkerton, an expert on air pollution on the faculty at the University of California, Davis, said there's concern among physicians and scientists about the impact of climate change on cardiopulmonary health, a topic he's currently addressing in an article he's submitting to a medical journal.

"Hotter temperatures, climate change, wildfires, air pollution all seem to have some association with a greater risk of COVID-19 cases," Pinkerton said. "If you're susceptible to air pollution, such as particulate matter, it could be that you just have a situation where you'll also be much more susceptible to viral particles that might be in the air that you're breathing. It's not that the air pollution makes the COVID-19 cases more likely to happen, but it may simply be a reflection of the fact that, where areas of high pollution are, ... the risk for COVID-19 cases may be greater."

Pinkerton said he read a paper on a study out of Turkey, which was submitted to a medical journal, and researchers there also found a terrible upswing in COVID-19 cases linked to increased air pollution.

No one has yet found the mechanism that increases the risk, Kiser and Pinkerton said, but there have been some hypotheses.

Could the new coronavirus be hitching rides on the PM 2.5 and managing to remain virulent as it is breathed into people's lungs? Certainly, PM 2.5 has been found in the smallest air sacs of people's lungs.

Kiser's team cites a study out of Northern Italy where researchers found the new coronavirus on particulate matter, and Pinkerton noted that the pathogen has been detected in water supplies and in sewage.

"We know that dust from the Mongolian desert, which comes across the Pacific Ocean, can carry at least biological material, whether it be viral or bacterial," Pinkerton said. "What people have argued is that the dust can be a carrier for microorganisms."

It raises questions, Pinkerton added, of how long a virus can survive.

Kiser and Pinkerton said researchers also have postulated that the PM 2.5 irritates nasal, throat and lung passages, creating inflammation that makes those areas ripe for infection. Some research has even suggested that the PM 2.5 increases the presence of a histamine receptor to which the COVID-19 virus attaches, Kiser said.

Elhanan said: "We believe that our study greatly strengthens the evidence that wildfire smoke can enhance the spread of SARS-CoV-2. We would love public health officials across the U.S. to be a lot more aware of this, because there are things we can do in terms of public preparedness in the community to allow people to escape smoke during wildfire events."

In fact, the U.S. Centers for Disease Control and Prevention have a website about wildfire smoke and COVID-19 that provides tips on how to prepare for wildfire season, including identifying high-efficiency air filters and maintaining a supply of N95 respirators, which filter out particulates.


Decode Your Future in Software Development With This Discounted Bundle – PCMag.com

Has your life been turned upside down by the chaos of the last year? Why not take the opportunity to switch to a career in software development?

The 2021 Google Software Engineering Manager Prep Bundle lets you train at your own pace, with more than 90 hours of content covering Java, C#, Python, data science, and more.

If you're in website development, check out two courses on user interface design: JavaFX: Build Beautiful User Interfaces and UI Design. Need extra certifications to boost your résumé? Get some important test prep in ISACA CISA (Certified Information Systems Auditor) 2021 and Certified Information Security Manager (CISM).

Software Architecture: Functional Programming in C# can certainly be helpful for coders, and there are multiple Python courses, including Machine Learning with Python and Python Engineering Animations: Bring Math & Data to Life.

Take a deep dive into the practical applications of natural language processing, such as spam detection, with Data Science: Natural Language Processing (NLP) in Python. Then follow that up with the Advanced NLP & Sequence Models with Deep Learning course, which covers neural machine translation, text classification, and more. If you want to focus on big data, you'll love the Big Data Code Optimization in Python NumPy: Sound Processing class.

Plus, this bundle has you covered with practical business applications: Learn to use the free business intelligence and data analytics tool Google Data Studio, and how to completely transform your marketing campaigns with artificial intelligence to achieve significantly improved results.

Don't miss this opportunity to gain advanced training in a variety of tech specializations. PCMag readers can get The 2021 Google Software Engineering Manager Prep Bundle on sale for $47.76, 98% off the $2,388 MSRP.

Prices subject to change.



NIH expands biomedical research in the cloud with Microsoft Azure – National Institutes of Health

News Release

Tuesday, July 20, 2021

Microsoft Azure has joined the National Institutes of Health's Science and Technology Research Infrastructure for Discovery, Experimentation, and Sustainability (STRIDES) Initiative as the newest cloud service provider to support biomedical research. The addition of this latest industry partner will further the STRIDES Initiative's aim to accelerate biomedical research in the cloud by reducing economic and process barriers, as well as providing cost-effective access to cloud platforms, training, cloud experts, and best practices for optimizing research in the cloud.

In just a few years, the STRIDES Initiative has expanded access to critical infrastructure and cutting-edge cloud resources for NIH researchers, as well as NIH-funded researchers at more than 2,500 academic institutions across the nation. To date, NIH has helped more than 425 research programs and projects leverage cloud resources through the STRIDES Initiative. Collectively, researchers have used more than 83 million hours of computational resources to access and analyze more than 115 petabytes of high-value biomedical data in the cloud. This is equivalent to 2.3 million four-drawer filing cabinets full of text.

By leveraging the STRIDES Initiative, the National Library of Medicine's Sequence Read Archive (SRA), one of the world's largest publicly available genome sequence repositories, migrated over 43 petabytes of next-generation sequencing data to the cloud, easing access for millions of researchers. Using the cloud, researchers can now search the entire catalog of genomic data and take advantage of computational tools for analysis.

"The cloud can help democratize access to high-value research data and the most advanced analytical technologies for all researchers. Expanding our network of providers and access to the most advanced computational infrastructure, tools, and services provides the agility and flexibility that researchers need to accelerate research discoveries," said Andrea T. Norris, director of NIH's Center for Information Technology and NIH chief information officer. "Partnering with Microsoft Azure as a cloud service provider furthers our goals to enhance discovery and improve efficiency in biomedical research."

"We often risk losing the value of biomedical data because of the sheer volumes being generated and digitized around the world. By leveraging cloud and artificial intelligence capabilities, biomedical researchers are able to quickly identify and extract critical, lifesaving insights from this sea of information," said Toni Townes-Whitley, president, U.S. Regulated Industries, Microsoft. "We are honored to collaborate with the NIH to help researchers solve some of today's biggest medical challenges, in support of a healthier and more sustainable global population."

A central tenet of the STRIDES Initiative is that data made available through these partnerships will incorporate standards endorsed by the biomedical research community to make data Findable, Accessible, Interoperable, and Reusable (FAIR).

"NIH has an ambitious vision of a modernized, FAIR biomedical data landscape," said Susan K. Gregurick, Ph.D., associate director for data science and director of the Office of Data Science Strategy at NIH. "By partnering with Microsoft Azure, which has over three decades of experience in the cloud space, we can strengthen NIH's data ecosystem and accelerate data-driven research and discovery."

Microsoft Azure joins Google Cloud and Amazon Web Services in supporting the STRIDES Initiative.

About the NIH Office of Data Science Strategy: The Office of Data Science Strategy (ODSS) leads implementation of the NIH Strategic Plan for Data Science through scientific, technical, and operational collaboration with the institutes, centers, and offices that comprise NIH. The office was formed in 2018 within the Division of Program Coordination, Planning, and Strategic Initiatives, which plans and coordinates trans-NIH initiatives and research supported by the NIH Common Fund. More information is available at the Office of Data Science Strategy website: datascience.nih.gov.

About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.

NIH…Turning Discovery Into Health

###


Deadline 2024: Why you only have 3 years left to adopt AI – VentureBeat


If your company has yet to embrace AI, you're in a race against the clock. And by my calculations, you have just three years left.

How did I arrive at 2024 as the deadline for AI adoption? My prediction, formulated with KUNGFU.AI advisor Paco Nathan, is rooted in our noticing that many futurists' J curves show innovations typically have a 12-to-15-year window of opportunity: a period between when a technology emerges and when it reaches the point of widespread adoption.

While AI can be traced to the mid-1950s and machine learning dates back to the late 1970s, the concept of deep learning was popularized by the AlexNet paper published in 2012. Of course, it's not just machine learning that started the clock ticking.

Though cloud computing was initially introduced in 2006, it didn't take off until 2010 or so. The rise of data engineering can also be traced to the same year. The original paper for Apache Spark was published in 2010, and it became foundational for so much of today's distributed data infrastructure.

Additionally, the concept of data science has a widely reported inception date of 2009. That's when Jeff Hammerbacher, DJ Patil, and others began getting recognized for leading data science teams and helping define the practice.

If you do the math, those 2009-2012 dates put us within that 12-to-15-year window. And that makes 2024 the cutoff for companies hoping to gain a competitive advantage from AI.
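Spelled out, the arithmetic is simple enough to sketch. This is an illustrative calculation only (the dates come from the article; the code is not):

```python
# Foundational dates cited in the article, paired with the 12-to-15-year
# window of opportunity that futurists' J-curves suggest.
foundations = {
    "data science": 2009,
    "data engineering / Spark": 2010,
    "deep learning (AlexNet)": 2012,
}
WINDOW = range(12, 16)  # a 12-to-15-year window, in years

# The latest start plus the shortest window, and the earliest start plus the
# longest window, both land on the same year.
latest_cutoff = max(foundations.values()) + min(WINDOW)        # 2012 + 12
earliest_saturation = min(foundations.values()) + max(WINDOW)  # 2009 + 15
print(latest_cutoff, earliest_saturation)  # 2024 2024
```

Either way you slice the window, the math converges on 2024.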

If you look at the graph below from Everett Rogers' Diffusion of Innovations, you'll get a sense of how those who wait to put AI into production will miss out on cornering the market. Here the red line shows successive groups adopting new technology, while the purple line shows how market share eventually reaches a saturation level.

Source: Everett Rogers, Diffusion of Innovations
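Rogers' adopter categories make the shape of that curve easy to reproduce. The percentages below are the standard ones from Diffusion of Innovations; the running total traces the saturating market-share line:

```python
# Rogers' classic adopter categories and their shares of the market.
categories = [
    ("innovators", 2.5),
    ("early adopters", 13.5),
    ("early majority", 34.0),
    ("late majority", 34.0),
    ("laggards", 16.0),
]

# The cumulative column is the S-shaped adoption curve: slow start,
# steep middle, saturation at 100%.
cumulative = 0.0
for name, share in categories:
    cumulative += share
    print(f"{name:15s} {share:5.1f}%  cumulative: {cumulative:5.1f}%")
```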

A 2019 survey conducted by the MIT Sloan Management Review and Boston Consulting Group explicitly shows how the Diffusion of Innovations theory applies to AI. Their research was based on a global survey of more than 3,000 executives, managers, and analysts across various industries.

Once the responses to questions around AI understanding and adoption were analyzed, survey respondents were assigned to one of four distinct categories:

Pioneers (20%): These organizations possess a deep knowledge of AI and incorporate it into their offerings and internal processes. They're the trailblazers.

Investigators (30%): These organizations understand AI but aren't deploying it beyond the pilot stage. They're taking more of a "look before you leap" approach.

Experimenters (18%): These organizations are piloting AI without truly understanding it. Their strategy is fake-it-until-you-make-it.

Passives (32%): These organizations have little-to-no understanding of AI and will likely miss out on the opportunity to profit from it.

The 2020 survey, which uses the same questions and methodology, gives even greater insight into how executives embrace AI: 87% believe AI will offer their companies an advantage over others, yet just 59% of companies have an AI strategy.

Comparing the MIT and BCG 2020 survey responses to those since the survey's inception in 2017, a growing number of execs recognize that competitors are using AI. Yet only one in 10 companies is using AI to generate significant financial benefits.

I anticipate this gap between leaders and laggards will continue widening, making this your company's last chance to take action before 2024 (if it hasn't already).

MIT and BCG's 2020 data reveals that companies focused on the initial steps of AI adoption (ensuring data, talent, and a strategy are in place) will have a 21% chance of becoming a market leader. When companies begin to iterate on AI solutions with their organizational users (effectively adopting AI and applying it across multiple use cases), that chance rises to 39%. And those that can orchestrate the macro and micro interactions between humans and machines (sharing knowledge amongst both and smartly structuring those interactions) will have a 73% chance of market leadership.

Building upon MIT and BCG's success predictions, McKinsey & Company has specifically broken down how AI integration impacts revenue in this 2020 chart.

Source: McKinsey & Company Global Survey, 2020

While the ROI for AI integration can be immediate, that's not typically the case. According to MIT and BCG's 2019 data, only two out of five companies that have made some investment in AI (Investigators and Experimenters) report gains within three years. This stat improves to three out of five when companies that have made significant investments in AI (Pioneers) are included.

The 2020 MIT/BCG data builds upon this, claiming companies that use AI to make extensive changes to many business processes are 5X more likely to realize a major financial benefit than those making small or no changes to a few business processes.

So where will you be in 2024? On your way to reaping the rewards of AI, or lamenting that you missed an opportunity for market advantage?

Steve Meier is a co-founder and Head of Growth at AI services firm KUNGFU.AI.


How the National Science Foundation is taking on fairness in AI – Brookings Institution

Most of the public discourse around artificial intelligence (AI) policy focuses on one of two perspectives: how the government can support AI innovation, and how the government can deter its harmful or negligent use. Yet there can also be a role for government in making it easier to use AI beneficially. In this niche, the National Science Foundation (NSF) has found a way to contribute. Through a grant-making program called Fairness in Artificial Intelligence (FAI), the NSF is providing $20 million in funding to researchers working on difficult ethical problems in AI. The program, a collaboration with Amazon, has now funded 21 projects in its first two years, with an open call for applications in its third and final year. This is an important endeavor, furthering a trend of federal support for the responsible advancement of technology, and the NSF should continue this important line of funding for ethical AI.

The FAI program is an investment in what the NSF calls use-inspired research, where scientists attempt to address fundamental questions inspired by real-world challenges and pressing scientific limitations. Use-inspired research is an alternative to traditional basic research, which attempts to make fundamental advances in scientific understanding without necessarily pursuing a specific practical goal. The NSF is better known for basic research in computer science, where it provides 87% of all federal basic research funding. Consequently, the FAI program is a relatively small portion of the NSF's total investment in AI, around $3.3 million per year, considering that Amazon covers half of the cost. In total, the NSF requested $868 million in AI spending, about 10% of its entire budget for 2021, and Congress approved every penny. Notably, this is a broad definition of AI spending that includes many applications of AI to other fields; funding for fundamental advances in AI itself is likely closer to $100 million to $150 million, by rough estimation.

The FAI program is specifically oriented towards the ethical principle of fairness (more on this choice in a moment). While this may seem unusual, the program is a continuation of prior government-funded research into the moral implications and consequences of technology. Starting in the 1970s, the federal government began actively shaping bioethics research in response to public outcry following the AP's reporting on the Tuskegee Syphilis Study. While the original efforts may have been reactionary, they precipitated decades of work towards improving the biomedical sciences. Launched alongside the Human Genome Project in 1990, there was an extensive line of research oriented towards the ethical, legal, and social implications of genomics. Starting in 2018, the NSF funded 21 exploratory grants on the impact of AI on society, a precursor to the current FAI program. Today, it's possible to draw a rough trend line through these endeavors, in which the government has become concerned with first pure science, then the ethics of the scientific process, and now the ethical outcomes of the science itself. This is a positive development, and one worth encouraging.

NSF made a conscious decision to focus on fairness rather than other prevalent themes like trustworthiness or human-centered design. Dr. Erwin Gianchandani, an NSF deputy assistant director, has described four categories of problems in FAI's domain, and each can easily be tied to present and ongoing challenges facing AI. The first category is focused on the many conflicting mathematical definitions of fairness and the lack of clarity around which are appropriate in which contexts. One funded project studied human perceptions of which fairness metrics are most appropriate for an algorithm in the context of bail decisions, the same application as the infamous COMPAS algorithm. The study found that survey respondents slightly preferred an algorithm that had a consistent rate of false positives (how many people were unnecessarily kept in jail pending trial) between two racial groups, rather than an algorithm that was equally accurate for both racial groups. Notably, this is the opposite quality of the COMPAS algorithm, which was fair in its total accuracy, but resulted in more false positives for Black defendants.
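The distinction between those two fairness metrics can be made concrete with a toy calculation. The confusion-matrix counts below are hypothetical, chosen only to show that a classifier can be equally accurate for two groups while producing very different false-positive rates, the COMPAS-style disparity described above:

```python
# Hypothetical confusion-matrix counts for two demographic groups.
counts = {
    "group_A": {"tp": 35, "fp": 5, "tn": 45, "fn": 15},
    "group_B": {"tp": 50, "fp": 20, "tn": 30, "fn": 0},
}

def accuracy(c):
    """Fraction of all decisions that were correct."""
    return (c["tp"] + c["tn"]) / sum(c.values())

def false_positive_rate(c):
    """Fraction of true negatives wrongly flagged (e.g., needlessly jailed)."""
    return c["fp"] / (c["fp"] + c["tn"])

for group, c in counts.items():
    print(group, round(accuracy(c), 2), round(false_positive_rate(c), 2))
# Both groups: accuracy 0.8. But FPR is 0.1 for group_A vs 0.4 for group_B --
# "fair" by one definition, starkly unfair by the other.
```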

The second category, Gianchandani writes, is to understand how an AI system produces a given result. The NSF sees this as directly related to fairness because giving an end user more information about an AI's decision empowers them to challenge that decision. This is an important point: by default, AI systems disguise the nature of a decision-making process and make it harder for an individual to interrogate that process. Maybe the most novel project funded by NSF FAI attempts to test the viability of crowdsourcing audits of AI systems. In a crowdsourced audit, many individuals might sign up for a tool (e.g., a website or web browser extension) that pools data about how those individuals were treated by an online AI system. By aggregating this data, the crowd can determine if the algorithm is being discriminatory, which would be functionally impossible for any individual user.
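A minimal sketch of that aggregation idea, with entirely hypothetical reports: each enrolled user contributes the decision an online system gave them plus a self-declared group attribute, and pooling reveals group-level rates no single user could observe.

```python
from collections import defaultdict

# Hypothetical reports pooled from users of a browser-extension audit tool.
reports = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

tally = defaultdict(lambda: [0, 0])  # group -> [approvals, total reports]
for r in reports:
    tally[r["group"]][0] += r["approved"]
    tally[r["group"]][1] += 1

rates = {g: approved / total for g, (approved, total) in tally.items()}
print(rates)  # group A is approved twice as often as group B --
              # a disparity visible only in the aggregate
```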

The third category seeks to use AI to make existing systems fairer, an especially important task as governments around the world continue to consider if and how to incorporate AI systems into public services. One project from researchers at New York University seeks, in part, to tackle the challenge of fairness when an algorithm is used in support of a human decision-maker. This is perhaps inspired by a recent evaluation of judges using algorithmic risk assessments in Virginia, which concluded that the algorithm failed to improve public safety and had the unintended effect of increasing incarceration of young defendants. The NYU researchers have a similar challenge in mind: developing a tool to identify and reduce systemic biases in prosecutorial decisions made by district attorneys.

The fourth category is perhaps the most intuitive, as it aims to remove bias from AI systems, or alternatively, make sure AI systems work equally well for everyone. One project looks to create common evaluation metrics for natural language processing AI, so that effectiveness can be compared across many different languages, helping to overcome a myopic focus on English. Other projects look at fairness in less-studied methods, like network algorithms, and still more look to improve fairness in specific applications, such as medical software and algorithmic hiring. These last two are especially noteworthy, since the prevailing public evidence suggests that algorithmic bias in health-care provisioning and hiring is widespread.

Critics may lament that Big Tech, which plays a prominent role in AI research, is present even in this federal program: Amazon is matching the support of the NSF, so each organization is paying around $10 million. Yet there is no reason to believe the NSF's independence has been compromised. Amazon is not playing any role in the selection of the grant applications, and none of the grantees contacted had any concerns about the grant-selection process. NSF officials also noted that any working collaboration with Amazon (such as receiving engineering support) is entirely optional. Of course, it is worth considering what Amazon has to gain from this partnership. Reading the FAI announcement, it sticks out that the program seeks to contribute to trustworthy AI systems that are readily accepted and that projects will enable broadened acceptance of AI systems. It is not a secret that the current generation of large technology companies would benefit enormously from increased public trust in AI. Still, corporate funding for genuinely independent research is good and unobjectionable, especially relative to other options like companies directly funding academic research.

Beyond the funding contribution, there may be other societal benefits from the partnership. For one, Amazon and other technology companies may pay more attention to the results of the research. For a company like Amazon, this might mean incorporating the results into its own algorithms, or into the AI systems that it sells through Amazon Web Services (AWS). Adoption into AWS cloud services may be especially impactful, since many thousands of data scientists and companies use those services for AI. As just an example, Professor Sandra Wachter of the Oxford Internet Institute was elated to learn that a metric of fairness she and co-authors had advocated for had been incorporated into an AWS cloud service, making it far more accessible for data science practitioners. Generally speaking, having an expanded set of easy-to-use features for AI fairness makes it more likely that data scientists will explore and use these tools.
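As a rough illustration of the kind of fairness metric now exposed in cloud tooling, the sketch below computes a simplified demographic disparity: a group's share of negative outcomes minus its share of positive outcomes. This is an unconditional simplification with hypothetical counts and group names; the metric Wachter and her co-authors advocated (conditional demographic disparity) additionally stratifies by a legitimate explanatory attribute.

```python
def demographic_disparity(rejected_counts, accepted_counts, group):
    """Share of all rejections that hit `group`, minus its share of acceptances.
    Positive values mean the group is over-represented among rejections."""
    rejected_share = rejected_counts[group] / sum(rejected_counts.values())
    accepted_share = accepted_counts[group] / sum(accepted_counts.values())
    return rejected_share - accepted_share

# Hypothetical outcome counts by group (e.g., loan decisions)
rejected = {"group_A": 60, "group_B": 40}
accepted = {"group_A": 30, "group_B": 70}

print(round(demographic_disparity(rejected, accepted, "group_A"), 2))  # 0.3
```

Having a metric like this one call away in a cloud service, rather than something each team must derive and implement, is exactly the accessibility gain described above.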

In its totality, FAI is a small but mighty research endeavor. The myriad challenges posed by AI are all improved by more knowledge and more responsible methods driven by this independent research. While there is an enormous amount of corporate funding going into AI research, it is neither independent nor primarily aimed at fairness, and may entirely exclude some FAI topics (e.g., fairness in the government use of AI). While this is the final year of the FAI program, one of NSF FAI's program directors, Dr. Todd Leen, stressed when contacted for this piece that the NSF is not walking away from these important research issues, and that FAI's mission will be absorbed into the general computer science directorate. This absorption may come with minor downsides; for instance, a lack of a clearly specified budget line and no consolidated reporting on the funded research projects. The NSF should consider tracking these investments and clearly communicating to the research community that AI fairness is an ongoing priority.

The Biden administration could also specifically request additional NSF funding for fairness and AI. For once, this funding would not be a difficult sell to policymakers. Congress funded the totality of the NSF's $868 million budget request for AI in 2021, and President Biden has signaled clear interest in expanding science funding; his proposed budget calls for a 20% increase in NSF funding for fiscal year 2022, and the administration has launched a National AI Research Taskforce co-chaired by none other than Dr. Erwin Gianchandani. With all this interest, earmarking $5 to $10 million per year explicitly for the advancement of fairness in AI is clearly possible, and certainly worthwhile.

The National Science Foundation and Amazon are donors to The Brookings Institution. Any findings, interpretations, conclusions, or recommendations expressed in this piece are those of the author and are not influenced by any donation.


How Hackathons Are Changing The Way Data Scientists Are Hired – Analytics India Magazine

Today, hackathons are one of the primary recruitment tools for tech companies. Organisations invest in new-age tools to hire employees by assessing their problem-solving approach and their skills in managing time and people. In machine learning hackathons, participants are given a problem statement and must work with a dataset to create an accurate model that tops the leaderboard. The gamified experience makes the hiring process more interactive and less stressful for both candidates and recruiters.
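Mechanically, a leaderboard of this kind is just submissions ranked by an evaluation metric. The names, scores, and tie-breaking rule below are hypothetical; real platforms vary:

```python
# Hypothetical submissions scored by RMSE (lower is better).
submissions = [
    {"participant": "asha", "rmse": 3.21},
    {"participant": "karan", "rmse": 2.87},
    {"participant": "lee", "rmse": 2.87},   # tie: earlier submission wins
    {"participant": "maria", "rmse": 4.05},
]

# Python's sort is stable, so ties keep submission order.
leaderboard = sorted(submissions, key=lambda s: s["rmse"])
for rank, s in enumerate(leaderboard, start=1):
    print(rank, s["participant"], s["rmse"])
```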

A recent report showed that universities which leveraged technologies like hackathons in their hiring process achieved a 70 percent onboarding rate.

"For a better perspective, think of a way of solving a problem as quickly as possible. To crack the code, you will certainly need multiple minds to work and innovate together. That's the simplest definition of a hackathon," said Nikhil Barshika, founder of Imarticus Learning.

ML hackathons allow employers to take a closer look at how potential hires deal with real-world situations. Let's take a deep dive into why hackathons are changing the way data scientists are hired.

In a bid to spotlight virtual hackathons as a non-traditional channel of recruitment, MachineHack hosted The Great Indian Hiring Hackathon (2020) in collaboration with 12 prominent companies, including Aditya Birla Group, Bridgei2i, Concentrix and Fractal.

Since its inception, MachineHack has aimed to empower data scientists to innovate. "Data scientists, despite having tremendous talent and innovation to offer, are facing unprecedented challenges during this pandemic, and we at MachineHack want to tap into that pool of talent," said Bhasker Gupta, CEO & Founder, AIM.

MachineHack has an ongoing fortnight-long hiring hackathon, Mathco.Thon, for data scientists and machine learning practitioners. TheMathCompany will interview the candidates who make it to the top leaderboard positions. The participants also stand to win a cash prize.

Organisations can also leverage hackathons for training and upskilling employees, preparing them for senior and more relevant roles within the company. The hackathon approach helps companies achieve two goals: promoting the work culture among existing employees and building substantial brand-recall value. For example, Karan Juneja, a regular participant in MachineHack hackathons and a grandmaster there, has said hackathons have helped him pick up new data science skills.

While employers get the opportunity to shortlist the best talent for their organisation, candidates get a feel for the organisation's work culture. Hackathons offer the ideal setting for both candidates and recruiters to understand whether they are the right fit for each other.

I am a Liberal Arts graduate who enjoys researching new topics and writing about them. An aspiring journalist, I love to read books, go on a drive on rainy days and listen to old Bollywood music.


Data Science Platform Market 2020: Potential Growth, Challenges, and Know the Companies List Could Potentially Benefit or Lose out From the Impact of…

Data Science Platform Market is the latest research study released by Adroit Market Research, evaluating market risk, highlighting opportunities, and providing strategic and tactical decision-making support (2021-2028). The market study is segmented by the key regions that are accelerating marketization. The report provides information on market trends and development, growth drivers, and the changing investment structure of the Data Science Platform Market. Some of the key players profiled in the study are Microsoft, IBM, Google, MathWorks, Cloudera, Altair Engineering, SAS, Wolfram, Alteryx, and SAP. Other potential players in the data science platform market are RapidMiner, Dataiku, Civis Analytics, Databricks, and Anaconda.

By end user, the market is sub-segmented by Application (Logistics, Marketing, Sales, Customer Support, Human Resource, and Others) and Industry Vertical (IT & Telecom, BFSI, Retail, Healthcare, Government & Defense, and Others).

By type, the market is categorized by Platform (Solutions and Services (Managed Services and Professional Services)).

Regional Analysis for Data Science Platform Market includes: North America, US, Canada, Mexico, Europe, Germany, France, U.K., Italy, Russia, Nordic Countries, Benelux, Rest of Europe, Asia, China, Japan, South Korea, Southeast Asia, India, Rest of Asia, South America, Brazil, Argentina, Rest of South America, Middle East & Africa, Turkey, Israel, Saudi Arabia, UAE & Rest of Middle East & Africa

The Data Science Platform Market study covers ongoing status, % share, upcoming growth patterns, development cycle, SWOT analysis, and sales channels & distribution to anticipate trending scenarios for the years to come. It aims to provide analysis of the market through trend analysis, segment breakdown, and players' contribution to Data Science Platform market upliftment. The market is sized across 5 major regions, i.e., North America, Europe, Asia Pacific (including Asia & Oceania separately), Middle East and Africa (MEA), and Latin America, and further broken down by 18+ jurisdictions or countries such as China, the UK, Germany, the United States, France, Japan, India, and groups of Southeast Asian & Nordic countries.


Consumer traits include the following patterns:

Consumer Buying patterns (e.g., comfort & convenience, economical, pride)

Customer Lifestyle (e.g., health conscious, family orientated, community active)

Expectations (e.g., service, quality, risk, influence)

Major Highlights from the Data Science Platform Market Factored into the Analysis

Data Science Platform Market Measures & Parameters Addressed in the Study: The report highlights Data Science Platform market metrics such as segment revenue, weighted average selling price by region, capacity utilization rate, production & production value, % gross margin by company, consumption, import & export, demand & supply, cost benchmarking of finished products in the Data Science Platform industry, market share, annualized growth rate (Y-o-Y), and % CAGR.
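Of the metrics listed, the compound annual growth rate is the one readers most often need to recompute for themselves. A minimal sketch, with made-up example figures:

```python
def cagr(begin_value, end_value, years):
    """Compound annual growth rate: the constant yearly growth rate that
    takes begin_value to end_value over the given number of years."""
    return (end_value / begin_value) ** (1 / years) - 1

# Hypothetical example: a market doubling from $4.0B to $8.0B over 5 years
growth = cagr(4.0, 8.0, 5)
print(f"{growth:.1%}")  # 14.9%
```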

Major Strategic Data Science Platform Market Developments: Activities such as Research & Development (R&D) by phase, ongoing and completed Mergers & Acquisitions (M&A) [deal value, purpose, effective year], joint ventures (JVs), technological tie-ups, supplier partnerships & collaborations, agreements, new launches, etc., undertaken by Data Science Platform industry players during the projected timeframe of the study.

The Data Science Platform Market report provides rigorously studied and evaluated data on the top industry players and their scope in the market by means of various analytical tools. To provide a deep-dive analysis, qualitative commentary on changing market dynamics {drivers, restraints & opportunities}, PESTLE, 5-Forces, feasibility study, BCG matrix, SWOT by players, heat map analysis, etc. have been provided to better correlate key players' product offerings in the market.

1. Data Science Platform Market Overview

Market Snapshot

Definition

Product Classification

2. Data Science Platform Market Dynamics

Drivers, Trends, Restraints

Market Factors Analysis

3. New Entrants and Entry-barriers

4. Standardization, Regulatory and collaborative initiatives

Manufacturing Process Analysis

Industrial/Supply Chain Analysis, Sourcing Strategy and Downstream Buyers

5. Data Science Platform Market Competition by Manufacturers

6. Data Science Platform Market Value [USD], Capacity, Supply (Production), Consumption, Price, Export-Import (EXIM), by Region (2016-2020)


7. Data Science Platform Revenue (Value), Production, Sales Volume, by Region (2021-2028)

8. Data Science Platform Market Trend by Type {Solutions & Services (Managed Services and Professional Services)}

9. Data Science Platform Market Analysis by Application {Logistics, Marketing, Sales, Customer Support, Human Resource & Others}

10. Data Science Platform Market Manufacturers Profiles/Analysis

Market Share Analysis by Manufacturers (2020)

Manufacturers Profiles (Overview, Financials, SWOT etc)

Connected Distributors/Traders

Marketing Strategy by Key Manufacturers/Players

Thanks for reading this Data Science Platform industry research publication; you can also get individual chapter-wise sections or region-wise report versions covering the Americas, LATAM, Europe, the Nordic nations, Oceania, Southeast Asia, or Eastern Asia.

About Us

Adroit Market Research is an India-based business analytics and consulting company incorporated in 2018. Our target audience is a wide range of corporations, manufacturing companies, product/technology development institutions and industry associations that require understanding of a markets size, key trends, participants and future outlook of an industry. We intend to become our clients knowledge partner and provide them with valuable market insights to help create opportunities that increase their revenues. We follow a code Explore, Learn and Transform. At our core, we are curious people who love to identify and understand industry patterns, create an insightful study around our findings and churn out money-making roadmaps.

Contact Us:

Ryan Johnson

Account Manager Global

3131 McKinney Ave Ste 600, Dallas,

TX 75204, U.S.A.

https://neighborwebsj.com/
