
5 Top Trends in the Data Analytics Job Market – Datamation

Data analytics jobs have been well paid and in high demand for some time.

The IT Skills and Certifications Pay Index by Foote Partners shows that such skills often merit a pay premium, and the average salary of these specialists has been steadily rising. Among the high-paying areas currently are risk analytics, big data analytics, data science, prescriptive analytics, predictive analytics, modeling, Apache Hadoop, and business analytics.

But data analytics is a broad term. It encompasses business intelligence (BI) and visualization as well as the application of analytics to other functions, such as IT and cybersecurity.

Here are five of the top trends in data analytics jobs:

See more: The Data Analytics Job Market

Experience or certification in a specific programming language or analytics discipline used to be a passport to good jobs. It will still gain people some positions, but they need more if they hope to move up the pay scale.

"For analytics professionals, listing proficiency in SAS, Python, or R may get someone past the initial HR screening, but that's about it," said Sean O'Brien, SVP of education at SAS.

Data analytics candidates need experience, certification, and other human skills to succeed in todays market.

It used to be enough to crunch some numbers and then tell the business an outcome or prediction using regular language.

These days, executives demand more. A top trend for data analytics jobs is the increasing importance of communication skills and storytelling. The rise of chief data officers and chief analytics officers is the clearest indication that analytics has moved from the backroom to the boardroom, and more often, it's data experts that are setting strategy.

"The ability to make analytics outputs relatable to stakeholders across the business will set them apart," said O'Brien with SAS.

It's not enough to be able to clean, integrate, and analyze huge amounts of data. Analytics pros have to understand how data and analytics directly support business goals and be able to communicate the story the data is telling. They need to be able to not just present trends and reports but communicate their meaning.

Cybersecurity trends apply to data analytics in two ways: Analysts need to be aware of and possess some security skills if they are to keep their platforms and models secure. But perhaps even more importantly, analytics jobs are opening up with greater frequency in security. Analysts are needed who can unlock the vast troves of data available in system logs, alerts, and organizational data to find potential incursions and isolate threats.

"Flexibly and securely viewing trusted data in context through shared applications across an industry ecosystem also enables process and governance improvement," said Jeffrey Hojlo, an analyst at IDC.

Storage, too, has transitioned into the analytics arena. Storage administrators are spending less time managing storage devices and more time managing data. This entails being more strategic about data mobility, data management, data services, and delivering the foundation for generating value from unstructured data.

"Storage administrators must leverage analytics about files, such as types of files, access times, owners, and other attributes," said Randy Hopkins, VP of global systems engineering and enablement at Komprise.

This knowledge will allow them to manage data throughout its life cycle, from creation to deletion, in a way that looks across storage and clouds to deliver the best bang for the buck as well as the best performance for distinct workflows and departmental needs.
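As a rough sketch of the kind of file-level analytics described above, the snippet below walks a directory tree with Python's standard library and aggregates size, type, and last-access information. The root path, the one-year "cold data" threshold, and the reporting choices are illustrative assumptions, not any specific vendor's approach.

```python
import time
from collections import Counter
from pathlib import Path

def summarize_files(root, cold_after_days=365):
    """Walk a directory tree and aggregate file metadata: counts and bytes
    by extension, plus how much data is 'cold' (not accessed within the
    given window) and therefore a candidate for a cheaper storage tier."""
    now = time.time()
    count_by_ext, bytes_by_ext, cold_bytes = Counter(), Counter(), 0

    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        info = path.stat()
        ext = path.suffix.lower() or "<none>"
        count_by_ext[ext] += 1
        bytes_by_ext[ext] += info.st_size
        if now - info.st_atime > cold_after_days * 86400:  # st_atime = last access
            cold_bytes += info.st_size

    return count_by_ext, bytes_by_ext, cold_bytes

counts, sizes, cold = summarize_files("/data/shared")  # assumed example path
print("files by type:", counts.most_common(5))
print("cold bytes eligible for tiering:", cold)
```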

See more: Top Data Analytics Certifications

Risk is a hot area across the business world. And it is up to risk management and risk analysts to identify, analyze, and accept or mitigate any uncertainty that may exist in business or investment decisions.

A variety of tactics are used to determine risk. For example, a common tool is standard deviation, a statistical measure of how widely data points are dispersed around a central tendency such as the mean. Management can then see how much risk might be involved and how to minimize that risk.
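As a minimal illustration of standard deviation as a risk measure, the snippet below compares two hypothetical return series using Python's statistics module; the figures are invented for the example, not real market data.

```python
import statistics

# Hypothetical monthly returns (in percent) for two investments.
stable_asset = [1.1, 0.9, 1.0, 1.2, 0.8, 1.0]
volatile_asset = [4.0, -2.5, 6.0, -1.0, 3.5, -3.0]

for name, returns in [("stable", stable_asset), ("volatile", volatile_asset)]:
    mean = statistics.mean(returns)
    risk = statistics.stdev(returns)  # dispersion around the central tendency
    print(f"{name}: mean return {mean:.2f}%, standard deviation {risk:.2f}%")
```

Both series could have a similar average return, but the larger standard deviation of the second flags it as the riskier choice.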

Those skilled in modern risk analytics are now in greater demand, as the risk management field transitions from manual or traditional methods. Accordingly, risk analytics and risk assessment jobs rose by 5.3% in value over a six-month period, according to surveys by Foote Partners. This form of business intelligence exploits structured and unstructured data as a way to model scenarios and outcomes and provide insight into potential fraud, market risk, credit risk, financial risk, supply chain risk, and other areas of risk.

As a sign that there was definite substance to the hype around big data, Foote Partners notes that big data analytics jobs continue to be in demand. They have risen in value by 13.3% over a six-month period.

Big data analytics sits at the crossroads of many BI and analytics disciplines. Traditionally, analysts would attack small subsets of structured data and then present their findings. Gradually, larger data sets were added as compute and memory resources became more available.

Today, analytics needs to be applied not just to structured but to unstructured data. Big data analysts, then, can use advanced analytics techniques on huge data sets that include structured, semi-structured, and unstructured data from many sources. Their insights fuel faster decision making, make it possible to create more accurate models, and predict future trends with a higher degree of accuracy.

See more: 10 Top Companies Hiring for Data Analytics Jobs

Excerpt from:

5 Top Trends in the Data Analytics Job Market - Datamation

Read More..

Report on National Enrollment Trends: Institutions Using Advanced Analytics and AI Fared Nearly Three Times Better Than Most Colleges – PRNewswire

PITTSBURGH, Feb. 15, 2022 /PRNewswire/ -- Othot, developer of predictive and prescriptive analytics tools for higher education, today released the 2021 Higher Ed Enrollment Trends Pulse Report, with new findings on the impact of data science and advanced analytics on enrollment over the past year. The report, drawing on insights from more than 30 colleges and universities across the United States, found that institutions using advanced analytics and data science have been able to buck national trendlines, even as enrollment and other key indicators of student progress continue to decline.

After the unprecedented effects of the pandemic in 2020, 2021 was expected to be a return to normalcy. However, the rise of new COVID-19 variants and other ongoing effects of the pandemic required many institutions to revert to virtual instruction. This resulted in less of a rebound than anticipated, as reflected in the data.

To produce this analysis, a team of researchers and data analysts at Othot tracked partner institution deposit and summer melt trends and compared final enrollment data of new, first-time students to National Student Clearinghouse (NSC) trends. Key findings in the 2021 Enrollment Trends Report include:

"Institutions are experiencing uneven enrollment fluctuations--and many of them are negative due to regional population declines and demographic changes. And the economic uncertainty caused by the pandemic may not provide an expected increase in students due to the tight labor market. The answers to today's enrollment challenges require a more sophisticated understanding of the complex variables that influence student success," said Andy Hannah, co-founder of Othot. "As institutions confront trends that threaten to undo hard-fought gains in enrollment and student success, this research highlights the ways in which institutions can successfully leverage advanced analytics to weather the difficult national climate."

The report reviews campus-specific data for institutions that are using Othot's prescriptive modeling and advanced analytics for enrollment and student success. Colleges and universities use this suite of predictive and prescriptive analytics and data visualization to understand which students are most likely to enroll, and the most effective strategies to further improve their likelihood to enroll. To view the entire 2021 Enrollment Trends Pulse Report, visit https://www.othot.com/2021-higher-ed-enrollment-trends-pulse-report.

About Othot, Inc.: Othot is the leader in advanced analytics and artificial intelligence solutions for higher education institutions across the United States. Together, Othot and its partner schools focus on each institution's specific enrollment, student success, and advancement goals. Othot's cloud-based solution provides continuous intelligence in real time and empowers schools to engage each prospective, current, and former student with the right tactic at the right time.

About Liaison International: Through a powerful combination of technology and services, Liaison helps over 31,000 undergraduate, graduate and postbac programs across more than 1,000 campuses achieve their recruitment, admissions, enrollment and student success goals. Liaison's solutions include its Centralized Application Service (CAS), TargetX and Othot as well as its Enrollment Marketing services (EMP), SlideRoom and Time2Track.

SOURCE Liaison

Read more:

Report on National Enrollment Trends: Institutions Using Advanced Analytics and AI Fared Nearly Three Times Better Than Most Colleges - PRNewswire

Read More..

How data science is leading to better network optimisation – Siliconrepublic.com

Huawei's Alex Agapitos discusses how the latest data science techniques have become necessary in the maintenance of networks.

Data science has become essential in pretty much every industry that uses data, from supply chains and healthcare to insurance and e-commerce.

In the world of telecoms, data science techniques are required to optimise networks through predictive modelling techniques. To learn more about this, SiliconRepublic.com heard from Alex Agapitos, a principal AI architect at the Huawei Ireland Research Centre.

Agapitos has a degree in software engineering and a PhD in computer science. He worked as a post-doctoral researcher in the Complex and Adaptive Systems Laboratory at University College Dublin before joining Huawei in 2016.

He said the introduction of 5G, IoT and edge computing brings new complexities to network operations, which have made manual maintenance infeasible without the latest data science.

"Dominant success stories revolve around the use cases of reactive/predictive maintenance and network optimisation," he said.

In the former, outlier detection and predictive modelling techniques mine for patterns in historical data to accurately anticipate and warn about imminent network failures. This allows operators to identify early warning signs of failure and their associated root causes, enabling early interventions before failures affect end users.

Agapitos said another important transformation that data science has brought is autonomous network optimisation.

"Deep learning-based predictive modelling allows simulation models of the network environment to be trained using historical data and then combined with data-driven optimisation algorithms that continuously reconfigure the network," he said.

The arrival of data-hungry applications including virtual reality, self-driving cars and gaming will further escalate the need for autonomous data-driven solutions in 5G and beyond.

With data science already driving autonomous network optimisation, Agapitos said he sees an era of intelligent telecommunication networks with minimal human supervision coming down the line.

"Advances in multi-agent systems will allow the network to be modelled and implemented as a collection of autonomous agents that perceive their environment and take actions to cooperatively meet a set of global goals, such as keeping the network performance at near-optimal levels at all times," he said.

To deal with ever-changing network conditions, it is crucial for autonomous agents to have the ability to continually acquire, fine-tune and transfer knowledge and skills throughout their life cycle, which is a research area known as continual or lifelong learning.

Advancing lifelong learning for machine learning systems is an ongoing challenge but Agapitos said there is plenty of emerging research in this area.

He also said the advancing complexity and sophistication of intelligent telecommunication networks will inevitably pose a challenge to the human operator in understanding the reasoning behind autonomous decision-making.

Trustworthiness of the autonomous system's internal functionality is of fundamental importance, and it will be realised through advances in explainable AI.

Explainable AI is a research area that sits at the intersection of data science, deep learning and symbolic AI. The aim is to develop methods and techniques that produce accurate, explainable models of why and how an AI algorithm or prediction model arrives at a specific decision, so that the result can be understood by a human.

While the need for data grows within society, so too does the question of privacy. Agapitos said he believes the issue of data privacy can be addressed via another area of data science: a machine learning technology known as federated learning.

"While standard machine learning approaches require centralising the training data in one machine or in the cloud, federated learning enables AI native network elements or user equipment to collaboratively learn a shared prediction model while keeping all the training data on-premise or on-device," he said.

In a nutshell, federated learning proceeds as follows: the network element or user equipment downloads the current model from a shared coordinator, it improves the model by online learning based on data generated locally at the network element or user equipment, and then summarises the model changes as a small model update.

This small update is then sent back to the coordinator using encrypted communication, where it is immediately averaged with peer model updates to improve the shared model. Federated learning allows for smarter models, lower latency, less power consumption, all while ensuring privacy.
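The loop below is a minimal sketch of the federated averaging procedure described above, with plain NumPy linear regression standing in for a real deep learning model. The simulated client data, learning rate, and unweighted averaging are illustrative assumptions, and the encrypted transport between device and coordinator is omitted.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1, epochs=5):
    """One network element refines the shared model on data that never
    leaves the device, then returns only the weight delta."""
    X, y = local_data
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of squared error (linear model)
        w -= lr * grad
    return w - global_weights              # the "small model update" sent back

def federated_round(global_weights, clients):
    """Coordinator averages the client updates into the shared model."""
    updates = [local_update(global_weights, data) for data in clients]
    return global_weights + np.mean(updates, axis=0)

# Three simulated network elements, each with its own local data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

shared = np.zeros(2)
for _ in range(20):                         # 20 federated rounds
    shared = federated_round(shared, clients)
print("shared model weights:", shared)      # converges toward [2.0, -1.0]
```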


See the article here:

How data science is leading to better network optimisation - Siliconrepublic.com

Read More..

Closing the gap between real and synthetic data – Analytics India Magazine

Synthetic data was listed among the top five biggest data science trends for 2022, and Gartner named it among the top strategic predictions for this year. In a world that is highly driven by data, privacy and process issues often limit the kind of data researchers might require. A promising way out here is artificially generated data or synthetic data.

Various algorithms and tools are used to generate this synthetic data which is then used in a wide variety of applications. When used properly, synthetic data can be a good addition to human-annotated data while maintaining the speed and cost factors of the project.

Despite the massive opportunities that synthetic data brings to the table, one of the main challenges it faces is the reality gap. A neural network can tell the difference between simulation and reality. This domain gap, also referred to as the uncanny valley, limits the real-world performance of machine learning models trained only in simulation. Closing the gap is an important research and practical challenge for the effective use of synthetic data.

Real-world data often contains a large amount of variability. To match up to this variability even in synthetic data generation, researchers are increasingly depending on domain randomisation. Speaking particularly about computer vision applications, domain randomisation can help randomise parameters like lighting, pose, object textures, etc.

Domain randomisation has been viewed as an alternative to high-fidelity synthetic images. The technique was first introduced by Josh Tobin and his team in a paper titled "Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World." In this paper, the researchers defined domain randomisation as a promising method for addressing the reality gap, where the simulator is randomised to expose the model to a range of environments instead of just one at training time. The team worked on the hypothesis that if the variability in simulation is big enough, models trained in simulation will generalise to the real world without additional training.
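A toy sketch of that idea is shown below: every synthetic training image is rendered with freshly randomised lighting, camera pose, object pose, texture, and distractor settings, so the simulated data covers a wide range of environments. The parameter names and ranges are illustrative assumptions, and render_fn stands in for whatever simulator actually produces the image and its annotation.

```python
import random

def randomized_scene_params():
    """Draw a fresh set of scene parameters for every synthetic image:
    wide, uniform variation in lighting, pose, textures, and distractors."""
    return {
        "light_intensity": random.uniform(0.2, 3.0),
        "light_azimuth_deg": random.uniform(0, 360),
        "camera_distance_m": random.uniform(0.5, 2.5),
        "camera_elevation_deg": random.uniform(-10, 60),
        "object_yaw_deg": random.uniform(0, 360),
        "texture_id": random.randrange(5000),   # deliberately non-photorealistic textures
        "num_distractors": random.randint(0, 10),
    }

def generate_dataset(render_fn, n_images):
    """render_fn is a placeholder for the simulator that turns a parameter
    dict into an (image, annotation) pair."""
    return [render_fn(randomized_scene_params()) for _ in range(n_images)]
```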

In 2018, researchers from NVIDIA presented a data randomisation approach to train a neural network to accomplish complex tasks like object detection. Results of this technique were found to be comparable with more expensive and labour intensive datasets. In this technique, synthetic images were randomly perturbed during training while focusing on relevant features. They were able to demonstrate that domain randomisation outperforms more photorealistic datasets and improves performance on results obtained using real data alone.

A slight improvement over domain randomisation is structured domain randomisation. It takes into account the structure and the context of a scene. Unlike domain randomisation, which places objects and distractors randomly according to a uniform probability distribution, structured domain randomisation places objects and distractors according to the probability distributions with respect to the specific problem at hand. This approach helps neural networks in taking the context into consideration during detection tasks.

Despite the popularity of domain randomisation, the technique requires a domain expert to define the parts that must stay invariant. Conversely, increasing photorealism requires an artist to model the specific domains in detail, which increases the cost of generating data. The whole exercise defeats the cost-effectiveness that is a major selling point of synthetic data.

Enter domain adaptation.

Domain adaptation is an approach that enables a model trained on one domain of data to work well on a different target domain. One of the most popular domain adaptation techniques is the use of GANs. Conditional GANs, in particular, take additional inputs to condition the generated output. Image-conditional GANs form a general-purpose framework for image-to-image translation problems. The conditional GAN was proposed in late 2014. In this technique, the GAN architecture is modified by adding a label y as an additional input to the generator module. This architecture generates data points corresponding to the label, while the label is also added to the discriminator input so it can better distinguish real data.
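The PyTorch-style sketch below shows the core of that modification: a one-hot label y is concatenated to the generator's noise input and to the discriminator's data input. The layer sizes, the MNIST-like data dimension, and the framework choice are illustrative assumptions, and the training loop is omitted.

```python
import torch
import torch.nn as nn

NOISE_DIM, NUM_CLASSES, DATA_DIM = 64, 10, 784  # illustrative sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # The label y (one-hot) is concatenated to the noise vector z,
        # which is the conditioning conditional GANs add to the input.
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, DATA_DIM), nn.Tanh(),
        )

    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # The same label is appended to the discriminator's input so it
        # judges whether a sample is real *for that class*.
        self.net = nn.Sequential(
            nn.Linear(DATA_DIM + NUM_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x, y_onehot):
        return self.net(torch.cat([x, y_onehot], dim=1))

# One conditioned sample: ask the generator for class index 3.
z = torch.randn(1, NOISE_DIM)
y = nn.functional.one_hot(torch.tensor([3]), NUM_CLASSES).float()
fake = Generator()(z, y)
score = Discriminator()(fake, y)
```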

Read the rest here:

Closing the gap between real and synthetic data - Analytics India Magazine

Read More..

5 Things to Consider When Operationalizing Your Machine Learning | Transforming Data with Intelligence – TDWI

5 Things to Consider When Operationalizing Your Machine Learning

Operationalizing machine learning models requires a different process than creating those models. To be successful at this transition, you need to consider five critical areas.

As machine learning teams start, most of their work is done in a laboratory mode. This means that they work through the process in a very manual yet scientific manner. They iteratively develop valuable machine learning models by forming a hypothesis, testing the model to confirm this hypothesis, and adjusting to improve model behavior. As these projects mature and evolve, it often becomes important to take them out of experimentation mode and operationalize them.

Operationalizing machine learning requires a shift in mindset and a different set of skills for those performing the work. The inquisitive state of what-ifs and trial and error gives way to practices that are predictable and stable. The goal is to reproduce the same valuable results that were generated as part of the creation process, but in a way that is more hands-off and long running. This changes the team's goals from experimentation to experience management.

To effectively operationalize your machine learning model, consider these five key areas: data collection, error management, consumption, security, and model management.

Data Collection

During the experimentation phase, much of the data collection and cleansing is done manually. A training and testing data set is pulled from the source -- that source could be a data lake, a data warehouse, or an operational system -- and is often hand curated. The merging, matching, deduping, and overall data wrangling is generally done one step at a time. This is mainly because the data scientists are not sure what will persist (and what wont) in the data set. This data management process can span from work done in programming languages such as Python and R to work performed using a spreadsheet or a text editor.

With an operational model, the uncertainty of what data is valuable is removed and all the data wrangling done during the build phase now needs to be automated and productionalized. This means that the scripts used during the development phase need to be standardized into something that can be supported in a production environment. This can mean rewriting scripts into a supported language, automating the steps performed in a spreadsheet using scripting or an ETL tool, or ensuring that all the data sources used are being updated regularly and are accessible as part of the data collection process.
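As a small, hypothetical example of that standardization step, the pandas function below captures merging, deduping, and imputation choices that might otherwise live in a notebook or spreadsheet, so they can run unattended on a schedule. The column names and cleanup rules are assumptions for illustration, not a prescribed pipeline.

```python
import pandas as pd

def build_training_frame(source_path: str) -> pd.DataFrame:
    """Reproducible version of the data wrangling that was done by hand
    during experimentation, suitable for a scheduled production job."""
    df = pd.read_csv(source_path, parse_dates=["event_date"])  # assumed columns

    # Each step that was once done interactively becomes an explicit,
    # testable transformation.
    df = (
        df.drop_duplicates(subset=["customer_id", "event_date"])  # dedupe
          .dropna(subset=["customer_id"])                         # drop unusable rows
          .assign(amount=lambda d: d["amount"].fillna(0.0))       # consistent imputation
    )
    return df
```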

Error Management

When data scientists are working through the process one step at a time, they manage the errors that arise. From dirty data to data access issues, if data scientists run into a problem, they interact with the people and systems that can resolve it. With these unanticipated challenges, the most effective path forward is to address them one at a time as they arise.

This is not the case once the models have been promoted to a production environment. As these models become integrated with an overall data pipeline, downstream processes come to rely on their output, and errors have a higher risk of business disruption. As many of these potential errors as possible need to be anticipated during the pre-operation design and development stage, and automated mechanisms need to be designed and developed to address them.
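A minimal sketch of that kind of anticipated error handling is shown below: a wrapper that retries a pipeline step on transient failures and logs each attempt instead of waiting for a person to intervene. The retry count, back-off, and exception types are illustrative assumptions that would vary by pipeline.

```python
import logging
import time

logger = logging.getLogger("pipeline")

def run_step(step_fn, *args, retries=3, backoff_seconds=30):
    """Wrap a pipeline step so anticipated failures (flaky sources,
    transient access errors) are retried and logged automatically."""
    for attempt in range(1, retries + 1):
        try:
            return step_fn(*args)
        except (ConnectionError, TimeoutError) as exc:
            logger.warning("step %s failed (attempt %d/%d): %s",
                           step_fn.__name__, attempt, retries, exc)
            if attempt == retries:
                raise  # surface to alerting only after retries are exhausted
            time.sleep(backoff_seconds * attempt)
```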

Read this article:

5 Things to Consider When Operationalizing Your Machine Learning | Transforming Data with Intelligence - TDWI

Read More..

Amperity Announces Strategic Collaboration Agreement with AWS to Deliver Customer-Centric Interactions and Drive Product Innovation – PRNewswire

SEATTLE, Feb. 15, 2022 /PRNewswire/ -- Amperity, a leading customer data platform (CDP) for consumer brands, today announced a Strategic Collaboration Agreement with Amazon Web Services, Inc. (AWS) to help support digital transformations for companies looking to deploy first-party data strategies and engage with new customer segments.

This collaboration will serve customers by enabling Amperity to deliver cloud-based customer-centric tools and artificial intelligence-driven (AI) insight. It will also support the scaling of Amperity's AWS practice with deeper cooperation in delivering innovative products and services that provide a unified view of the customer.

"At Brooks Running, truly understanding our runners is at the heart of everything we do," said Melanie Allen, Senior Vice President and Chief Marketing Officer at Brooks Running. "Based on our experience using both AWS and Amperity, this collaboration makes perfect sense as it builds upon each company's core strengths to deliver high-level actionable insights that directly impact how we serve runners with only the best brand experiences."

This strategic go-to-market collaboration arrives at a moment when large consumer-facing enterprises are accelerating their data analytics to keep pace with radical changes in consumer habits and preferences. Under this collaboration, Amperity and AWS will help customers transform the way they interact with their end-customers with industry-relevant, cloud-based solutions. Amperity's CDP is available to all AWS customers in AWS Marketplace, allowing them to optimize their technology stack and streamline in-stack operations. Together with AWS services such as Amazon Kinesis, users gain access to a scalable and durable real-time data streaming service to meet their interoperability needs. Customers can also couple Amperity's Customer 360 and analytics capabilities with Amazon Redshift and Amazon Athena to easily ingest raw data from AWS services and egress smart segments back for further analysis or activation.

"Our technology partners play a crucial role in enhancing the customer experience across every touchpoint, including marketing, sales, information technology, and data science,"said Derek Slager, CTO at Amperity. "It's an honor to enter into a Strategic Collaboration Agreement of this kind with AWS, serving the growing digital customer experience segment. Working together, we will ensure our joint customers have first-class experiences through the innovative services that both AWS and Amperity provide."

"AWS and Amperity understand that to transform faster, enterprises need a better way to store, access, and leverage customer data," said Ruba Borno, Vice President, Worldwide Channel and Alliances at AWS. "Our line-of-business customers in every industry frequently tell us that they need to leverage their data to be more agile in response to changing consumer behaviors and preferences. This collaboration with Amperity is driving modern data solutions that deliver consumer insights, faster, to our customers."

To learn more about Amperity, please visit http://www.amperity.com. For information on how Amperity and AWS are working together to support Brooks Running, read the "Brooks Running Improves Paid Search Click-Through Rate by 260% Using Amperity on AWS" case study.

About Amperity: Amperity is a leading Enterprise CDP trusted by the world's most loved brands to accelerate their shift to first-party customer relationships. Amperity has revolutionized the way brands identify, understand, and connect with their customers by leveraging AI to deliver a comprehensive and actionable Customer 360. This unified view improves marketing performance, fuels accurate customer insights, and enables world-class, real-time customer experiences. With Amperity, technical teams are freed from endless integration and data management, and business teams have direct access to the comprehensive data they need to build long-term customer loyalty and drive growth. Amperity serves many of the world's most loved brands, including Wyndham Hotels & Resorts, Alaska Airlines, Michaels, Tapestry, Lucky Brand, DICK'S Sporting Goods, Crocs, Servco Automotive Retail, First Hawaiian Bank, and many more. To learn more, visit Amperity.com.

SOURCE Amperity

View original post here:

Amperity Announces Strategic Collaboration Agreement with AWS to Deliver Customer-Centric Interactions and Drive Product Innovation - PRNewswire

Read More..

From "data for good" to "data for impact" – NationSwell

Data science has the power to accelerate social and environmental progress. Yet according to Salesforce.org's Nonprofit Trends Report, only 22% of social impact organizations have achieved high data maturity today. As a result, the data for good sector has a tendency to rely too heavily on creating flashy new tools to fix problems. But these tools often fail to move the needle on real impact, and many underserved communities, and the non-profit organizations that serve them, continue to need better access to the skills and capacity to leverage these innovations.

In order to actually deliver impact through data science at scale, what needs to change across our sector? At a recent data.org event, we convened social impact organizations, funders, and data science leaders to explore ways to address this challenge. We sought participants' insights and gained a clearer sense of what it will take for data to be accessed and applied for good. What follows are three calls to action that emerged from our conversation. We believe that realizing these calls would catalyze a shift toward scalable, sustainable, and genuinely community-driven projects that help the social good sector use data science to realize impact.

It's easy to fall for the flash and glimmer of a new AI solution, but we can't stop there. We have to deepen our understanding of the problems that we are trying to solve, and our commitment to working with the people and communities that experience real challenges every day. This might seem like a small shift, but it's seismic. It pushes us beyond thinking only about the mechanics of a technical solution and instead challenges us to ask how new technology can change the balance of power in favor of people and communities that have been systematically excluded or harmed. To be clear, passion for new technical solutions isn't bad. Many problems we face in the social impact sector do require innovation and creativity. But simply having a new approach doesn't guarantee actual impact. Our metric for success cannot simply be that we delivered a solution. That solution must meaningfully contribute to reducing suffering or improving equity.

Doing this isn't easy. It requires technical experts to diversify their networks and engage with humility. True understanding of social issues cannot be achieved without community experience and partnership. Creating technology far from the community it purports to benefit rarely works. Instead, we must partner with communities to develop solutions that are responsive and designed to scale in the real world.

Funders play a critical role in shifting the focus from novel solutions to actual impact. Much of the innovation funding ecosystem currently focuses on building new things instead of investing in long-term capacity building and problem solving. As solution builders, it can be easy to lose focus on the impact you seek in favor of amplifying what will be most attractive to funders. Change makers and funders bear a joint responsibility to honor the complexity and context of the problem at hand and continually seek to deliver impact, not getting distracted by a desire to over-index on what might be considered the shiny, data-driven technology of the moment. Disciplined focus on what specific problem data science is helping you understand or address at any one moment in time is essential when unlocking the power of this technology. Without a disciplined approach, the use of data science can be distracting and potentially dilute or derail your impact.

So, we must follow the problem. And one of the things we might learn as we follow it is that the problem is not solvable by a single data science method. For people coming from data science and engineering backgrounds, that means you might actually have to admit that you maybe aren't the biggest part of the solution. And that reflection, and the maturity around that reflection, is absolutely critical for figuring out what you can do, for figuring out an angle in, for figuring out an approach or an impact model that actually does speak to the real problem. You have to identify what problem it is that you are capable of solving and find true product-impact fit. While following the problem seems intuitive, it is inherently very difficult. But it's urgently necessary if we want to advance and truly use data to drive impact rather than just giving rise to pilots that explore emerging technologies. As social impact officers, implementers, and funders, we must honor the complexity of the problems that we seek to solve, and be committed enough to fall in love with the actual problems themselves.

Advancing our sector also means seeing and supporting projects through to the very end, to where people are applying them to their everyday lives or organizations. It is much easier to build a new product and get it to a Minimum Viable Product stage. But then, to deliver on the impact, you have to actually use the product over time. You have to build the muscle for iteration. Embracing iteration helps to solve one key challenge social impact organizations face: a lack of clarity around the metric for which they are optimizing. In profit-driven business, it's much more straightforward: does a new recommender algorithm, for example, increase engagement, conversions, and then revenue? But for social impact organizations, measurement and agreement on what the key metrics actually are can make this messier. Building a muscle for iteration means you commit to actually looking at the outcomes of deploying a new method, and that you're able to regularly and reliably measure those outcomes at a reasonable cost. And like building muscle in the gym, this process requires trial and error and an ongoing commitment.

Funders have traditionally taken a very linear, more short-term approach to supporting solutions, providing resources to get to the end of an initial pilot, for instance, but the messy nature of achieving impact goals demands that we embrace a more iterative mindset and approach. Common case studies for success, like BlueConduit's data-driven approach to helping Flint with its water crisis or GiveDirectly's efforts to use data science to target cash transfers for COVID-19 relief, all reflect an iterative narrative, reinforcing the ideal process of idea, implementation, and success, with funding and governmental support at every step of the journey. However, those seamless journeys are the exception, not the rule. The reality of driving impact outcomes is more like life: unpredictable and requiring constant course correction. Imagine an exciting new algorithm that promises to solve hunger in a community. We might expect there to be funding to build the algorithm, have the paper written about it, and get press published; but, when it comes to working through the application of it with 20 non-profits with different use cases, we may realize that the algorithm will need continuous refining, that the exercise of testing and refining will take us in new and unexpected directions around how to effectively serve diverse neighborhoods, or, at worst, that no one needs the technology in its initial form, and we'll have to go back to the drawing board and build something fundamentally different from the initial solution.

That's where our current systems for funding and support can fall apart. So, we need solution builders and funders to anticipate and embrace the 2.0s of the project, the 3.0s, and beyond. Only through the creation of Minimum Viable Products and their testing phases can we understand the component of the problem statement that we can effectively influence, improve, predict, or make more efficient.

Sustaining and scaling data science for impact requires a deep commitment to capacity building and technical education. This capacity building must happen across the ecosystem, from implementing organizations through to funders. At this stage, investing in the capacity of humans is probably the most powerful thing that we can do to move along the transformation curve. Because humans and systems are what actually move the needle on solving problems, investments in human systems ensure that innovation happens at scale, rather than just one thing at a time. Katharine Lucey, who leads Solar Sister, is a perfect example of what you unlock when you invest in the humans and internal capacity behind a solution. With data.org's support through the Inclusive Growth and Recovery Challenge, she invested in making sure she had data experts on her team and the budget to support them in the long term. As a result, her work in supporting local women entrepreneurs in Africa who work with clean energy has become a model for how data science can help steer social impact. That evolution is the direct result of investments in capacity.

As another example of building the capacity of partners: the Center for Effective Global Action (CEGA) devised a system for locating and measuring poverty. But the step that actually helped people in poverty was getting money to them, and having policy makers who understood this system and could adapt it and move it through. So the CEGA system of data measurements for poverty was important, but only in as much as it enabled a sophisticated, human-driven administrative process that was actually distributing money.

At the end of the day, it will be our subject matter experts who understand the complexity and the context of the challenges faced by the communities seeking to solve problems in their neighborhoods. We have a responsibility to make sure that this type of thinking, learning, and tooling is available. How do we train more? How do we implement more for more people? As problem solvers, and funders of problem solvers, there needs to be more consideration of the patience of capital, especially when we're talking about product-impact fit and learning around how to fund product roadmaps. We need to be asking not just, "What can the technology do?" but, "How do we train more people? How long can they sustain this work? What else do the people doing this work need? How do we build interdisciplinary teams that have the data skills, technical skills, community insight, and subject matter expertise of the problem?"

Funders or impact partners shouldn't be afraid if any of this sounds overly ambitious or daunting: it's just a different mindset, and a different set of knowledge to acquire. We can all do this together, but to do it, we must change how we build, fund, train, support, and lead the sector moving forward. We must move from being solutions-focused to being problem-focused, from launch-focused to iteration-focused, and from tech-focused to capacity-focused. These challenges require all of us, innovator, funder, and implementer alike, to contribute. They're complex challenges, but it's exactly what data.org was set up to do. For practical information and inspirational ideas to help social impact organizations use data science to solve the world's biggest problems, check out data.org's public resource library.

View post:

From "data for good" to "data for impact" - NationSwell

Read More..

Datamatch: The science behind how to find love, companionship this Valentine’s Day – UW Badger Herald

With Valentine's Day right around the corner, it feels as though everyone is frantically looking for love. COVID-19, isolation, and awkward interactions are a recipe for disaster when looking for love or even companionship.

Datamatch is a college-oriented, online matchmaking service created by a group of Harvard students in 1994. Datamatch is now offered at over 30 colleges and universities around the country, and the University of Wisconsin is lucky enough to have made the list already.

Datamatch uses artificial intelligence algorithms to pair people in a science-based way. The algorithms are extremely accurate, hence the years of success behind them.


The service runs a Python script on the night before Valentine's Day to generate matches automatically, UW Datamatch president Caelan Kleinhans said. The website is coded using JavaScript and React.

"Everyone's data and everyone's answers get compiled in a specific way to match people with other people in ways that they want," Kleinhans said. "So it matches people with other people that are looking for the same things, which is really cool. And then it also matches you with people who are similar or different to you based on your answers."
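The article does not describe Datamatch's actual algorithm, but a toy version of survey-based matching might look like the sketch below: numeric survey answers are compared by cosine similarity, restricted to people who say they are looking for the same thing, and the top matches are returned. Everything here, from the input format to the ranking rule, is an assumption for illustration.

```python
import numpy as np

def top_matches(answers, preferences, k=10):
    """Toy survey matcher: `answers` is an (n_people, n_questions) array of
    numeric responses, `preferences` lists what each person is looking for
    (e.g. "romantic" or "platonic"). People are only matched with others
    seeking the same thing, ranked by cosine similarity of their answers."""
    norms = np.linalg.norm(answers, axis=1, keepdims=True)
    unit = answers / np.clip(norms, 1e-9, None)
    similarity = unit @ unit.T                  # pairwise cosine similarity
    np.fill_diagonal(similarity, -np.inf)       # never match someone with themselves

    matches = {}
    for i in range(len(answers)):
        compatible = [j for j in range(len(answers))
                      if j != i and preferences[i] == preferences[j]]
        ranked = sorted(compatible, key=lambda j: similarity[i, j], reverse=True)
        matches[i] = ranked[:k]
    return matches

# Tiny illustrative run with made-up data for four people.
answers = np.array([[5, 1, 3], [4, 2, 3], [1, 5, 2], [2, 5, 1]], dtype=float)
preferences = ["romantic", "romantic", "platonic", "platonic"]
print(top_matches(answers, preferences, k=2))
```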


The Datamatch survey is sent out every year on Feb. 7, and people have one week to complete it. The questions include personal preferences and traits, as well as what someone is looking for in a partner or companion.

On Feb. 14, Valentine's Day, the survey closes and the matches are sent out early that morning, just in time to find love for the biggest day of the year. Everyone then receives around 10 similar matches based on their survey responses and the algorithm. Once these 10 matches are released, users can select any matches that seem of interest to them, and mutual matches can set up a date.


Once matches are made, participants in Datamatch can go to events that include watching a movie, ice skating and swing dance lessons, according to Mitchell Schroeder, a member of the UW Datamatch chapter.

"Those events are kind of geared toward anyone who's done Datamatch," Schroeder said. "You can come with your friends, you can come with people you matched with and meet on there. You can really come with anyone. We're just trying to get people together on campus and have a good time."

While you and your uber-compatible date may not be fans of math and science, Datamatch just goes to show that science can be applied in many different ways.

Go here to see the original:

Datamatch: The science behind how to find love, companionship this Valentine's Day - UW Badger Herald

Read More..

Data, Decarbonization, and The New Math of Sustainability – Transmission & Distribution World

Utilities can, and must, play a leading role in the clean-energy transition, and in their own sustainable transformation.

Depending on how you look at it, the glass is either half full or half empty. The half full view says that if utilities can rise to the challenge and meet the 2030, 2040 and 2050 sustainability goals they have laid out, worldwide emissions could fall precipitously. The half empty response counters that the utility industry is constrained by regulation that stifles innovation. Adding to the difficulty, the utility industry is constrained by downward cost pressure, meaning large rate hikes are not an option.

As someone who has dedicated 35+ years to a career in the utility space and seen the best and worst tendencies, and everything in between, in our industry, I may surprise you by saying I'm not in the half-full camp. I think there's even more in the glass. The challenge ahead is likely the biggest we've ever faced as an industry, but it is also a once-in-a-lifetime opportunity to grow, radically improve, and make a lasting positive impact on society, all fueled by the following macro-drivers:

Sustainability: Decarbonization efforts in service of reaching net zero carbon emissions are the clarion call of the industry. Mass adoption of renewables at scale, including wind, solar, battery storage, and an emerging set of new energy sources, is in motion. At the same time, collaborating with business and residential customers using distributed energy resources, such as solar and demand response, will only grow in importance. Feeding into all of this is increasing demand for EVs, part of an electrification wave that could drive utility growth for decades.

Safety and Reliability: While utilities are fighting the consequences of climate change, including more severe fires, droughts, floods, and temperature extremes, they are doing so from a position of weakness and with aging infrastructure. A smarter, more resilient grid must be built to literally weather the storm, and it will cost billions.

Equity: It has always been an issue, but because of COVID-19 and its ongoing variants, low and moderate income (LMI) residential customers and small and medium sized businesses (SMBs) have gotten clobbered, and they need, and are demanding, more from their utilities to handle energy expenses and gain access to sustainability programs.

But we can't just spend our way out of this. The limited ability of utilities to raise rates fundamentally changes the math of transformation. With hundreds of billions in needed capital investment on one side and affordable rates on the other, O&M costs have moved to the middle, becoming the fulcrum. Simply put, utilities' numbers for sustainability, safety, reliability, and equity don't and can't add up without a massive decrease in O&M expense and optimized capital expenditures.

But how is that going to happen? That is literally the multi-billion-dollar question, one that necessitates a fundamentally new-and-improved approach. That approach is rooted in data. The strategic use of data is now a must-have capability for every utility in search of increased customer engagement, infrastructure optimization, and sweeping cost reduction.

Just look at the Sustainability Transformation Plan laid out by Evergy. The plan accelerates Evergy's transition to cleaner energy (80% reduction in CO2 emissions by 2050), increases capital investment in critical utility infrastructure to nearly US $5 billion, anticipates 5-6% rate base growth and 6-8% EPS growth through 2024, and promises increased benefits for customers and communities. And it reduces O&M costs by 25%.

That's thinking BIG.

But there's a reason this big thinking is more than wishful thinking. The prevalence of AMI and other utility and customer data, the maturation of predictive data science, the evolution of AI and machine learning solutions, and the growing understanding of how to put them all together to move the needle quickly and efficiently mean there has never been a better time for utilities to play offense.

Instead of a cadence-based approach to managing the grid, the industry is moving to a data-driven risk-spend efficiency approach to optimize capital investments and O&M spend, managing reliability proactively at a more granular level. In addition to cost savings, this data-driven approach also enables a more collaborative regulatory process I call results-based compliance, which ties spending directly back to reliability measures. It relies on data, not a calendar, to optimize spending on vegetation management, for example.

With data in hand, the time is NOW for utilities to dive in and accelerate their own transformation to becoming a more sustainable utility.

Because utilities matter. They always have, just ask anyone without power or safe drinking water, but they matter now more than ever. In addition to keeping the lights on, the coolers cooling, and the water flowing, utilities are now poised to play a starring role in the very survival of the planet.

The good news is it's doable, and doable now.

Ted Schultz is the chief executive officer of ESource. He has more than 35 years of industry experience and is known for his focus on providing value to customers. Prior to joining ESource, he served as CEO of TROVE Predictive Data Science, senior vice president of utility solutions at Ecova, and vice president of marketing and energy efficiency at Duke Energy. Ted has held several advisory board positions at the national level, including with the Edison Foundation, National Action Plan for Energy Efficiency, EPRI Energy Services, and JD Power Smart Grid Services. He currently serves on the Marketing Executives Conference.

Continue reading here:

Data, Decarbonization, and The New Math of Sustainability - Transmission & Distribution World

Read More..

Why Big Data Analytics, AI/ML will be the Most In-Demand Skills in India in 2022? – Analytics Insight

Why Big Data Analytics, AI/ML will be the Most In-Demand Skills in India in 2022?

The need for skilling and upskilling reached a new high amid the pandemic, and in 2022, big data analytics, along with AI/ML, is reported to be among the most in-demand skills in India. With rapid tech adoption across industries and entirely tech-enabled sectors such as IT and BFSI, the role of AI and machine learning will only continue to grow in 2022, with a significant increase in the demand for related roles. Industry reports suggest that AI/machine learning investments in India will continue to grow at a CAGR of 33.49 percent till 2023.

To enhance customer engagement, more and more organizations are adopting chatbots, which are forecast to power approximately 45 percent of organizations' customer support services by 2022. The future of work is location-agnostic and hybrid, with increased skilling initiatives being undertaken by both employers and employees. Leading tech-enabled industries such as IT, fintech, BFSI, and crypto will continue to flourish with talent demand spikes. It is also interesting to note that employee flexibility will be critical to retaining talent in the future, and the Great Shuffle is a reinforcement of how the huge demand in the jobs market is opening the door for employees to select a career of their choice. The Indian fintech market is expanding rapidly and is estimated to become the third-largest market in the world by 2025.

According to the annual trends report, Indian IT has continued to hire through the course of the pandemic and will exhibit similar trends in 2022.

The IT industry is forecast to grow 7 percent in the current year and is likely to see gross employee additions of around 450,000 in the second half of FY22. The top skills organizations are on the lookout for are big data science, cloud computing, artificial intelligence, blockchain, and machine learning. The demand for sales professionals is estimated to increase, especially in industries such as fintech, retail, e-commerce, and social commerce. With a number of employees preferring remote working and staying in their hometowns, more and more organizations are considering setting up smaller offices in tier 2 cities or utilizing co-working spaces to provide employees with better resources and access to technology. This would, in turn, lead to increased demand for hiring across tier 2 cities in the coming months.

Hiring for freshers has picked up over the last three months of the year and is estimated to increase considerably in 2022.

The continuing emergence of startups will further contribute to the demand for entry-level professionals across industries. According to researchers, India has about 6.33 crore micro, small, and medium enterprises. The number of registered SMBs grew 18.5% YoY to 25.13 lakh units in 2020, from 21.21 lakh units in 2019. As of 2020, registered SMBs were dominated by micro-enterprises at 22.06 lakh units, up from 18.70 lakh in 2019, while small enterprise units went up from 2.41 lakh to 2.95 lakh. Midsized businesses marginally increased from 9,403 units to 10,981 units in the same period. SMBs and MSMEs are currently reported as employing more than 130 million people and contribute about 6.11% to India's manufacturing GDP, 24.63% of GDP from service activities, as well as 33.4% of manufacturing output.


See the article here:

Why Big Data Analytics, AI/ML will be the Most In-Demand Skills in India in 2022? - Analytics Insight

Read More..