
Cloud computing firm Rackspace to lay off 4% of workforce – ETCIO South East Asia

San Francisco, March 28 (IANS) US-based cloud computing company Rackspace Technology has said it would lay off about 4 per cent of its workforce globally amid tough macroeconomic conditions.

According to San Antonio Express-News, the layoffs will cut nearly 275 employees across the company's global workforce.

However, it is not confirmed how many employees would be let go from the company's San Antonio headquarters or what their roles would be.

According to a recent US Securities and Exchange Commission filing, Rackspace had about 700 employees in San Antonio in December, and its global workforce was about 6,800 across 23 countries, including 3,100 in North America.

"Like so many companies that are feeling the effects of the macroeconomic downturn, Rackspace is no exception," Casey Shilling, chief marketing officer, was quoted as saying.

"During this uncertain time, it is important that we align our cost structure to the demands of the business. This requires some elimination of roles across the company," she added.

The company said that it will provide severance and other resources to those employees who are laid off, the report mentioned.

-IANS


Why 5G and edge computing are key to retail success on Microsoft Cloud – CIO

The retail industry is transforming rapidly. Modern retailers rely heavily on automation for managing inventory, shelf design, customer service, and logistics. Video cameras and sensors that allow for unique store design help to enhance the customer experience. Technology is truly powering retail transformation, setting modern stores apart from traditional brick-and-mortar ones.

It is no easy feat sending all these video streams and sensor data to the cloud for real-time analysis. High bandwidth is required to move heavy data streams. So is low latency for quick data processing and decision making, especially when robotics is involved.

This is where edge computing and edge-native applications become relevant for retail stores. They allow computing to occur closer to the source of data, right inside the store. Coupled with a private 5G communication network, retailers can deploy cost-effective, high-performing edge-native applications.

At the same time, companies must maintain secure environments and prevent fraud. According to a recent Microsoft blog, organizations can use security and compliance solutions in Microsoft 365 E5 to have visibility into their threat landscape and leverage built-in AI and machine learning in Microsoft Sentinel and Microsoft Defender for Cloud to proactively manage threats and reduce alert fatigue.

Read the full blog post to learn more.


Introduction to Data Science with Python: How is it Beneficial? – Analytics Insight

Learn how beneficial data science with Python can be in this simplified guide with meaningful resources.

The need for more effective and efficient data storage increased significantly as the globe entered the era of big data in recent decades. Businesses utilizing big data put a lot of effort into developing frameworks that can store a lot of data. Eventually, frameworks like Hadoop were developed, aiding in the storage of enormous volumes of data.

When the storage issue was resolved, attention turned to processing the data that had already been saved. Data science has emerged as the method of the future for handling and evaluating data in this situation. Data science is becoming a crucial component of any industry dealing with massive volumes of data. Businesses currently employ experts and data scientists who take the data and transform it into a useful resource.

Let's now get into data science and the advantages of using Python for it.

Let's start with what data science is before turning to Python. Data science is fundamentally about finding and examining real-world data, then using that knowledge to address practical business problems.

Now that you know what data science is, let's briefly discuss Python before delving deeply into data science with Python.

We require a programming language or tool, such as Python, for data science. Although there are other data science tools, such as SAS and R, this post will concentrate on Python and how it may help with data science.

Python has recently gained a lot of popularity as a programming language. Its usage in data science, the Internet of Things, artificial intelligence, and other technologies has increased its appeal.

Python is used as a programming language for data science because it offers extensive mathematical and statistical capabilities. That is one of the key reasons data scientists all around the world rely on it. If you follow trends over the past few years, Python has emerged as the preferred programming language, particularly for data science.

Python is one of the most popular programming languages for data science for several additional reasons, including:

Speed: Python is comparatively faster than many other programming languages.

Availability: A large number of packages created by other users are readily available and can be reused.

Design objective: Python's syntax is simple and intuitive, making it easier to create applications with readable code.

Python is a straightforward programming language to learn, and basic operations such as arithmetic and printing statements work out of the box. But you must import certain libraries if you wish to undertake data analysis. Common examples include (a short sketch using several of them follows this list):

Pandas: Tool for working with structured data.

NumPy: A powerful library that helps you create n-dimensional arrays

SciPy: Offers scientific features like Fourier analysis and linear algebra

Matplotlib: Mostly used for visualization.

Scikit-learn: Used for machine learning tasks such as classification, regression, and clustering.
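
To make this concrete, here is a minimal sketch that uses several of these libraries together on a small synthetic dataset. The data, column names, and model choice are made up for illustration; they are not from the original article.

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from sklearn.linear_model import LinearRegression

    # Build a small structured dataset with pandas (all values are synthetic)
    rng = np.random.default_rng(42)
    hours = rng.uniform(0, 10, size=100)             # hypothetical study hours
    scores = 5 * hours + rng.normal(0, 3, size=100)  # hypothetical exam scores
    df = pd.DataFrame({"hours": hours, "score": scores})

    # Explore the data with pandas
    print(df.describe())

    # Fit a simple model with scikit-learn
    X = df[["hours"]].to_numpy()
    model = LinearRegression().fit(X, df["score"])
    print("slope:", model.coef_[0], "intercept:", model.intercept_)

    # Visualize the data and the fitted line with Matplotlib
    grid = np.linspace(0, 10, 50).reshape(-1, 1)
    plt.scatter(df["hours"], df["score"], s=10)
    plt.plot(grid, model.predict(grid), color="red")
    plt.xlabel("hours")
    plt.ylabel("score")
    plt.show()

The same pattern scales up: pandas loads and reshapes the data, NumPy handles the numeric arrays underneath, scikit-learn fits the model, and Matplotlib visualizes the result.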


The Berkson-Jekel Paradox and its Importance to Data Science – KDnuggets

If you are a Data Scientist or an aspiring one, you will know the importance of statistics in the sector. Statistics help Data Scientists to collect, analyze, and interpret the data by identifying patterns and trends, to then make future predictions.

A statistical paradox is when a statistical result contradicts expectations. It can be very difficult to pinpoint the exact cause, as it is hard to understand the data without the use of further methods. However, paradoxes are an important element for Data Scientists, because they give a lead on what could be causing misleading results.

Here is a list of statistical paradoxes relevant to data science:

In this article, we will be focusing on the Berkson-Jekel paradox and its relevance to Data Science.

The Berkson-Jekel paradox is when two variables are correlated in the data as a whole, yet when the data is grouped or subsetted, that correlation is not found. To put it in layman's terms, the correlation is different in different subgroups of the data.

The Berkson-Jekel paradox is named after the statisticians who first described it, Joseph Berkson and John Jekel. The paradox was discovered while the two statisticians were studying the correlation between smoking and lung cancer. During their study, they found a correlation between pneumonia hospitalization and lung cancer that did not appear in the general population. Further research, however, showed that the correlation was due to smokers being hospitalized for pneumonia more often than people who did not smoke.

Based on the statisticians' original research on the Berkson-Jekel paradox, you might say that more research was needed to figure out the exact reason behind the correlation. However, there are also other reasons why the Berkson-Jekel paradox occurs.

Statistical reasoning is very important in Data Science, and the main issue is dealing with misleading results. As a data scientist, you want to ensure that you are producing accurate results that can be used in decision-making and for future predictions. Incorrect predictions or misleading results are the last thing you want.

There are a few methods that you can use to avoid the Berkson-Jekel Paradox:

If you are dealing with misleading results due to the sample data not being representative of the population, a solution would be to use data from a variety of sources. This will help you to get a more representative sample of the population, research more on the variables, and get a better understanding.
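
To make the effect tangible, here is a small, hypothetical simulation (not taken from the KDnuggets article) in the spirit of the smoking example above: pneumonia hospitalization and lung cancer appear correlated in the pooled data only because smoking raises the rates of both, and the correlation disappears within each smoking subgroup. All rates are invented for illustration.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 100_000
    smoker = rng.random(n) < 0.3

    # Pneumonia hospitalization and lung cancer are both more likely for smokers,
    # but are otherwise independent of each other (rates are made up).
    pneumonia = rng.random(n) < np.where(smoker, 0.15, 0.05)
    cancer = rng.random(n) < np.where(smoker, 0.08, 0.01)

    df = pd.DataFrame({
        "pneumonia": pneumonia.astype(int),
        "cancer": cancer.astype(int),
        "smoker": smoker,
    })

    # A correlation shows up in the pooled data...
    print("overall correlation:", round(df["pneumonia"].corr(df["cancer"]), 3))

    # ...but vanishes once the data is split into smoking subgroups.
    for flag, group in df.groupby("smoker"):
        print(f"smoker={flag}:", round(group["pneumonia"].corr(group["cancer"]), 3))

Computing the same correlation in the full data and in each meaningful subgroup is a simple way to spot when a headline correlation is really being driven by how the sample was selected or by a third variable.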

Misleading outputs can hold a company back. Therefore, when working with data, data professionals need to understand the limitations of the data they're working with, the different variables and the relationships between them, and how to prevent misleading results.

If you would like to know more about Simpson's Paradox, have a read of this: Simpson's Paradox and its Implications in Data Science

If you would like to know more about the other statistical paradoxes, have a read of this: 5 Statistical Paradoxes Data Scientists Should Know

Nisha Arya is a Data Scientist, Freelance Technical Writer and Community Manager at KDnuggets. She is particularly interested in providing Data Science career advice, tutorials, and theory-based knowledge around Data Science. She also wishes to explore the different ways Artificial Intelligence can benefit the longevity of human life. A keen learner, she seeks to broaden her tech knowledge and writing skills while helping guide others.


Scientific Journeys: From genetics to the environment and back – Environmental Factor Newsletter

Last October, David Reif, Ph.D., joined what he calls his dream team in the NIEHS Division of Translational Toxicology. As head of the Predictive Toxicology Branch, he leads a multidisciplinary group focused on predicting how individuals and populations respond to environmental exposures. The group aims to improve public health through the development and promotion of cutting-edge, computer-based methods and research models.

Reif recently talked with Environmental Factor about why he transitioned from academia to NIEHS, his journey from genetics to toxicology, and what most excites him about the future.

Environmental Factor: What drew you to NIEHS?

David Reif: Team science. I'm a data scientist who enjoys tackling big problems that require expertise beyond my own. I had been a professor for 10 years. I really liked it, but I found that the projects that most motivated me were those involving long-term collaborations. I had been following and using team-built tools coming out of NIEHS, teaching them to my students and incorporating them in my own research. Then this opportunity came up, and I thought it would be great to be on the inside to advance translational toxicological research and predict how gene-by-environment interactions can influence human health.

EF: What makes the Predictive Toxicology Branch unique?

DR: We have a mix that doesn't exist anywhere else in the world. It's basically my dream super lab, my dream team. We have computational quantitative biologists, computational chemists, scientists working on geospatial health analytics, and researchers promoting in vitro [cell-based] models and new approach methodologies, all together in one branch.

EF: Can you share how your education and training shaped your career path?

DR: My graduate training was in human genetics and statistics, but I really wanted to study environmental health problems. At that time, in my view, the tools for measuring the environment's impact on health were unsophisticated. But scientists had just mapped the human genome, so it was an exciting time, and it felt like everything was possible.

I completed postdoctoral training at the U.S. Environmental Protection Agency, in the just-launched National Center for Computational Toxicology, which had the atmosphere of a startup company. The new center marked the beginning of programs to rapidly test all the chemicals we didn't know about. I was there for seven years as a statistician [principal investigator], and I really invested in the field. I took formal courses in toxicology, and I started going to Society of Toxicology conferences. I stayed in the toxicology and environmental health research space as a professor at North Carolina State University, even though I joined a genetics department.

EF: Is the shift from genetics to toxicology common?

DR: I don't know if it's common, but I think it's conducive because genetics is a mechanism for both responding to the environment in the near term and a way to transmit information across generations. And you can apply genetics and genomics to lots of different kinds of problems. In some projects here, I'm full circle back to doing clinical studies involving human exposures, but now we have a vastly more sophisticated characterization of the environment to consider.

For example, the exposome, which represents the totality of our environmental exposures, could not be effectively measured back when I was training. Huge progress has been made recently, much like genetic and genomics technologies advanced quickly when I was earning my Ph.D. I feel like we're now in the exposomics era, and it's letting new kinds of science come to the fore.

EF: What is your vision for the Predictive Toxicology Branch?

DR: We have an opportunity to use machine learning, artificial intelligence, and many other cutting-edge tools to produce scientific knowledge that translates directly to human health. It is truly predictive data science that doesn't exist elsewhere because we can make a prediction, build models based on tons of data, and then test those in new experiments. We can predict what we think is going to happen, and we can generate the data to validate that prediction.

I want our branch to be a destination for people to come and learn these skills, to establish the branch as a center of excellence for training, and to attract the best scientific talent.

EF: With many chemicals in the environment today, what should the average person know?

DR: One of the best things about government science is that it doesn't have an agenda. It's about truth and robustness. I'm hoping that we can provide those things to people, so when they ask questions about what they are exposed to and how to avoid harm, we can be a trusted source for that information.

I think we're at a cool inflection point where the confluence of technology, talent, and awareness are all coming together. Things have to change, and we're ready to change them.

(Caroline Stetler is Editor-in-Chief of the Environmental Factor, produced monthly by the NIEHS Office of Communications and Public Liaison.)


What Is An IoT Ecosystem: Examples And Diagram – Dataconomy

The emergence of the Internet of Things (IoT) has led to the proliferation of connected devices and sensors that generate vast amounts of data. This data is a goldmine of insights that can be harnessed to optimize various systems and processes. However, to unlock the full potential of IoT data, organizations need to leverage the power of data science. Data science can help organizations derive valuable insights from IoT data and make data-driven decisions to optimize their operations.

Coherence between IoT and data science is critical to ensure that organizations can maximize the value of their IoT ecosystems. It requires a deep understanding of the interplay between IoT devices, sensors, networks, and data science tools and techniques. Organizations that can effectively integrate IoT and data science can derive significant benefits, such as improved efficiency, reduced costs, and enhanced customer experiences.

An IoT (Internet of Things) ecosystem refers to a network of interconnected devices, sensors, and software applications that work together to collect, analyze, and share data. The ecosystem consists of various components, including devices, communication networks, data storage, and analytics tools, that work together to create an intelligent system that enables automation, monitoring, and control of various processes.

IoT protocols 101: The essential guide to choosing the right option

Some key characteristics of an IoT ecosystem include the following:

An IoT ecosystem diagram is a visual representation of the components and relationships that make up an IoT ecosystem. It typically includes devices, communication networks, data storage, and analytics tools that work together to create an intelligent system.

The diagram provides a high-level overview of the ecosystem and helps to visualize the various components and how they are interconnected. It can also be used to identify potential areas for improvement and optimization within the system.

IoT ecosystem architecture refers to the design and structure of an IoT system, including the various components and how they are connected.

There are several layers to an IoT ecosystem architecture, including:

IoT ecosystems play an important role in data science, as they generate vast amounts of data that can be used to drive insights and optimize processes.

Some ways that IoT ecosystems contribute to data science include:

IoT ecosystems provide a rich source of data that can be used to drive insights and optimize processes, making them a valuable tool in the data science toolkit.

IoT ecosystems are composed of various components that work together to collect, process, and transmit data.

IoT ecosystems consist of both hardware and software components that work together to enable automation, monitoring, and control of various processes. Some of the key hardware and software components of IoT ecosystems include:

Each component in an IoT ecosystem plays a critical role in enabling the system to function effectively. Understanding the role of each component is essential in designing and optimizing IoT ecosystems. Some of the key roles of each component in IoT ecosystems include:

Choosing the right components for IoT ecosystems is essential in ensuring that the system functions effectively and efficiently. Some of the key reasons why choosing the right components is important include:

Designing and implementing IoT ecosystems can be challenging due to various factors, such as the complexity of the system, the diversity of devices, and the need for interoperability. Some of the common challenges in designing and implementing IoT ecosystems include the following:

Overcoming the challenges of designing and implementing IoT ecosystems requires a combination of technical expertise, strategic planning, and effective execution. Some of the solutions for overcoming IoT ecosystem design and implementation challenges include:

Designing IoT ecosystems for data science requires careful planning and execution. Some of the best practices for designing IoT ecosystems for data science include:

Designing IoT ecosystems for data science requires a combination of technical expertise, strategic planning, and effective execution, and organizations need to adopt best practices to ensure success.

IoT and machine learning: Walking hand in hand towards smarter future

Data science plays a critical role in optimizing IoT ecosystems by enabling organizations to derive insights from the vast amounts of data generated by IoT devices and sensors. Data science can help organizations identify trends and patterns, predict future events, and optimize processes.
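
As a concrete, if simplified, illustration of this kind of optimization, the sketch below flags anomalous readings from a single simulated temperature sensor using a rolling mean and standard deviation. The sensor, thresholds, and data are assumptions made for illustration; they are not drawn from the original article.

    import numpy as np
    import pandas as pd

    # Simulated readings from one hypothetical temperature sensor
    rng = np.random.default_rng(7)
    readings = pd.Series(22 + rng.normal(0, 0.5, 500))
    readings.iloc[[120, 340, 400]] += 6  # inject a few faulty spikes

    # Rolling statistics over a short window of recent samples
    window = 30
    mean = readings.rolling(window, min_periods=window).mean()
    std = readings.rolling(window, min_periods=window).std()

    # Flag readings more than 3 standard deviations from the recent mean
    anomalies = (readings - mean).abs() > 3 * std
    print("anomalous sample indices:", readings.index[anomalies].tolist())

In a real deployment the same idea, watching each device's telemetry for departures from its recent behavior, underpins predictive maintenance: flagged devices can be inspected or serviced before they fail.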

Some of the key ways that data science can be used to optimize IoT ecosystems include:

Leveraging data science to optimize IoT ecosystem performance requires a combination of technical expertise, strategic planning, and effective execution. Some of the key steps involved in leveraging data science to optimize IoT ecosystem performance include:

There are several examples of data science applications in IoT ecosystems. Some of the key examples include:

IoT ecosystems pose significant security and privacy challenges due to the sheer volume of data generated by numerous devices and sensors. The data can include highly sensitive information, such as biometric data, personal information, and financial details, making it critical to ensure that it is secured and protected.

One of the significant concerns is device security, where the devices are vulnerable to hacking, compromising their integrity and privacy. Network security is also a concern, where the data transmitted over the networks may be intercepted and compromised. Data privacy is another critical concern where there is a risk of unauthorized access to the vast amounts of sensitive data generated by IoT devices.

Devices and sensors are vulnerable to various types of attacks, including malware, distributed denial-of-service (DDoS) attacks, and phishing scams. These attacks can compromise the security of the devices and data generated, leading to devastating consequences.

Data breaches are another concern where the vast amounts of data generated by IoT devices need to be stored and transmitted securely. Any breach of the data can expose sensitive information, leading to privacy violations, identity theft, and other serious consequences.

Security and privacy concerns can have a significant impact on data science in IoT ecosystems. Data quality can be compromised due to security and privacy concerns, leading to incomplete or inaccurate data that can affect the effectiveness of data science. The volume of data that is available for analysis may also be limited due to security and privacy concerns. Furthermore, security and privacy concerns can make it challenging to store and transmit data securely, increasing the risk of unauthorized access and misuse.

Building trust in IoT ecosystems: A privacy-enhancing approach to cybersecurity

Ensuring security and privacy in IoT ecosystems requires a combination of technical expertise, strategic planning, and effective execution. Some of the best practices for ensuring security and privacy in IoT ecosystems include:

Ensuring security and privacy in IoT ecosystems is essential in enabling organizations to leverage data science to optimize their systems. Implementing best practices can help organizations minimize security and privacy risks and derive maximum value from their IoT ecosystems.

In closing, the combination of IoT and data science offers a world of endless possibilities for organizations looking to optimize their systems and processes. However, it also presents significant challenges, particularly around security and privacy.

To ensure the coherence of IoT and data science, organizations must take a comprehensive approach to data management and security, adopting best practices and adhering to industry standards. By doing so, they can unlock the full potential of their IoT ecosystems, derive valuable insights from their data, and make data-driven decisions that drive growth and success.

As IoT continues to evolve and expand, organizations that can effectively leverage data science to analyze IoT data will be well-positioned to thrive in the digital age.


Tip of the Iceberg podcast: Kroger’s data science company on … – The Packer

You know the saying, "It's not all about you?"

Well, it is when it comes to what consumers want from their fresh fruit and vegetable shopping experience. They want personalization.

Retailers can use this knowledge to answer a central question: How can they use their produce-shopper data to sell more produce, gain and retain consumer loyalty, and keep shoppers engaged?

Kroger's retail data science, insights and media company, called 84.51, has been studying these questions. Becky Eldredge, vice president of commercial customer media and loyalty at 84.51, has some answers.

Listen to this episode of the "Tip of the Iceberg" podcast for a chat with Eldredge about the company's research and findings. And learn what retailers and produce marketers can do with this knowledge.

Expect to take away from this podcast:

Another episode to try: The Fresh Market's Gerrick Polinsky on training personal shoppers


What Delivery Apps Reveal About Food Disparities in Los Angeles … – USC Viterbi School of Engineering

A map of the LA food environment shows more nutritious options clustered in more affluent areas. Image/Abigail Horn

It's midweek, and the fridge is bare. How do you stave off the gnawing hunger? Perhaps you turn straight to your favorite local takeout or a trusty delivery app to get that food fix.

For one researcher at USC Viterbi School of Engineering, delivery apps and online menus are an even more valuable resource. The digital menu information offered by online ordering sites represents a big-data goldmine that can give us a detailed map of the nutrition of Angelenos: where disparities in food access exist and how we can address them.

Abigail Horn, research assistant professor in the Daniel J. Epstein Department of Industrial and Systems Engineering and the USC Information Sciences Institute, studies how data can increase our understanding of eating behavior, diet and health. Her latest work aims to build a map of the nutritional quality of the urban foodscape in Los Angeles, using machine learning to analyze information on digital menus for all restaurants, from top-100 chains to small mom-and-pop operations.

The results will help inform public health research, which generally relies on overly simplistic measures of food environment nutritional quality. The work will ultimately inform decisions with the potential to improve zoning policies to increase access to healthy, nutritional options in disadvantaged communities.

The NIH National Institute of Minority Health recently awarded the project the Top Poster award at the inaugural workshop of the Health Equity in Action Annual Workshop.

The research is centered on identifying where healthy and unhealthy food outlets are located. The work has a particular focus on where unhealthy outlets are clustered in the city, and where there is low access to healthy food. These areas are often known as "food swamps": neighborhoods with a high concentration of low-quality fast-food outlets, liquor stores and convenience stores, and a lower concentration of fresh food outlets and grocery stores.

A cyclist delivers for Uber Eats. Image/Robert Anasch.

Existing public health research frequently characterizes food environment nutritional quality by focusing on the number of food outlets in a neighborhood within each business listing category, for example, using the North American Industry Classification System (NAICS) categorization of food outlets, which differentiates limited-service from full-service restaurants and grocery outlets. Limited-service restaurants are often used as a proxy for fast-food outlets, which are in turn considered an indicator of poor nutritional options. This does not consider a newer generation of limited-service restaurants that offer menu alternatives such as salads, grain bowls, and wraps (think Sweetgreen, Veggie Grill or Chipotle). Under this classification system, a neighborhood's proximity to a grocery store would be the only positive indicator of nutritional outcomes.

Horn said that these types of categorizations make it difficult to gain an accurate overview of the nutrition of the city and which residents face disparities in access to healthy food.

"If it's fast food, it's unhealthy," Horn said, "or if it's a grocery store, it's healthy, because there is some fresh food there. We're really missing the diversity of nutritional quality within these categories and across categories. There is so much digital menu data available, and so the idea of the project was to use that data to create a more refined indicator of the nutritional quality of the food on offer."

The key hurdle the project faced was that restaurants are not required to publish their nutrient information, aside from chain brands with 20 or more outlets. For non-chains, the available information is limited to how these businesses describe individual dishes on their menus.

In work with Keith Burghardt, a computer scientist at the USC Information Sciences Institute, Andrés Abeliuk from the University of Chile, and USC Viterbi Master of Science in Applied Data Science student Alex Seo, Horn developed a model that estimates the nutritional value of menu items across a broad cross-section of restaurants. The team has partnered with Edamam, a food database and provider of nutrition data and analytics, which is sharing data to power the model.

"That's where the machine learning comes in. Edamam has created an impressive database of what we call generic food items, which they developed through their own proprietary analysis of over 50 million recipes online. So we use that to train a neural network to predict the nutrient distribution of a food item," Horn said.

The neural network was also refined with the help of a fastText language embedding model, a simple pre-trained text-based model. Horn said when the model is used together with data from the recipe databases, this gives additional context to food words on menus and how the terms are used together, thereby improving the predictions.
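
As a rough, hypothetical sketch of this kind of pipeline (not the team's actual code), a menu-item description can be embedded with a pre-trained fastText model and passed to a small regression model that predicts nutrient values. The model file, menu items, and nutrient numbers below are all invented for illustration.

    import numpy as np
    import fasttext  # pip install fasttext
    from sklearn.neural_network import MLPRegressor

    # Pre-trained fastText vectors, assumed already downloaded (e.g. cc.en.300.bin)
    ft = fasttext.load_model("cc.en.300.bin")

    # Hypothetical training data: menu-item text and nutrient targets per serving
    items = ["grilled chicken salad with quinoa",
             "double bacon cheeseburger with fries"]
    nutrients = np.array([[420.0, 12.0, 6.0],    # calories, fat (g), sugar (g) -- made up
                          [1150.0, 68.0, 11.0]])

    X = np.array([ft.get_sentence_vector(text) for text in items])

    # Small feed-forward regressor mapping text embeddings to nutrient estimates
    model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
    model.fit(X, nutrients)

    # Estimate nutrients for an unseen menu item
    new_item = "crispy fish tacos with slaw"
    print(model.predict([ft.get_sentence_vector(new_item)]))

A real system would train on far more labeled items, such as the generic food entries and chain-restaurant menus described in this article, and would predict a fuller nutrient profile rather than three numbers.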

Horn also used data from another company, Nutritionix, to gather detailed nutrition and ingredient content data from large chain restaurant menus.

Abigail Horn, research assistant professor in the Daniel J. Epstein Department of Industrial and Systems Engineering and the USC Information Sciences Institute.

"We are also accessing data from websites like Yelp," Horn said. "In addition, Edamam, the nutrition data company that shared the generic meal data with us, had previously done some scraping of restaurant menus from across the country. Next we will be applying the algorithm to restaurants in this database and use it to investigate disparities in the urban foodscape in LA."

The project is funded by the Southern California Center for Latino Health, and Horn said a key focus is to better understand and capture the types of restaurants and food options available in areas with predominantly Latino residents.

Initial findings, based on around 1,000 restaurants in the region, all within the limited-service NAICS category, showed a wide distribution in the nutritional quality of the menus.

"Within that one category of limited-service restaurants, we see a huge diversity of nutritional scores. When we plot those on the map you see there are some pretty clear geographical patterns in where the healthier outlets within that category are located," Horn said. "What we see is that the unhealthiest outlets are almost uniformly distributed across the map. They are everywhere. The healthiest outlets are clustered only in the more affluent neighborhoods."

"It may seem like there's a food swamp in Venice because there are still a lot of unhealthy outlets, but when you examine it, there are also a lot of salad joints there. That's something we don't see in East or South LA," she said.

As the project expands, Horn said that she hoped it could be a tool for public health researchers to paint a more accurate nutritional picture of the city.

"It would be wonderful if it could help inform policy around rezoning in neighborhoods," Horn said, "and help to create quantified metrics of where we have disparities and the true nutritional quality of food outlets."

Horn said there are federal food financing initiatives that invest money into improving food environments and incentivizing new healthy outlet options and grocery stores.

"In those kinds of financing schemes, that could be a scenario where we can help identify neighborhoods that truly have extreme disparities, instead of relying on business listing categories to tell us what might be unhealthy and needs reforming," she said.

Published on April 3rd, 2023

Last updated on April 3rd, 2023


Aware March Madness Collaboration Study Finds Office Picks on … – PR Newswire

Employee Collective Proves Highly Accurate Based on Collaboration Data

COLUMBUS, Ohio, April 3, 2023 /PRNewswire/ -- Aware, a leader in collaboration intelligence, has analyzed over 400 million anonymized employee conversations in online collaboration environments between February 1 and March 31 to provide insights on the NCAA March Madness tournament. The study is part of ongoing research to evaluate conversational data for broader insights into employee sentiment and the crossover between organizational intelligence and business impact.

The Aware Data Science team analyzed employee sentiment surrounding each NCAA team, comparing it against Vegas bookmaker odds to determine if there was a correlation. The research was conducted using the Aware collaboration intelligence platform and proprietary Natural Language Processing (NLP) models, which identify, analyze and deliver contextual intelligence sourced directly from the authentic voice of the employee in real-time collaboration environments including Slack and Teams.

Key findings include:

"The data surprised us with unexpectedly high correlations between workplace sentiments and the actual outcomes of the NCAA tournament," said Jason Morgan, PhD, and Head of Data Science at Aware. "When compared against the Vegas odds and other predictors, our models found a high degree of accuracy coming from the collective voice of the employee across collaboration channels where people where highly engaged, talking, sharing, and emoting about their March Madness predictions."

"Look at the information gleaned from the employee voice on March Madness and imagine applying the same degree of accuracy to understanding business-critical insights," said Jeff Schumann, CEO and founder of Aware. "Collaboration channels are a trove of highly accurate organizational intelligence if businesses can leverage the data, listen, analyze, and apply their workplace intelligence to inform their operations. But too often companies can't get their ear to the ground and are fixated on generating content, rather than listening."

For more information about Aware and its solutions, please visit: https://www.awarehq.com.

About Aware

Founded in 2017 and headquartered in Columbus, Ohio, Aware is a global leader in contextual intelligence that identifies and reduces risk, maintains compliance, and uncovers new business insights from conversations at scale. Consolidate, enrich, search, and manage data across tools like Slack, WorkJam, Teams and Zoom for immediate visibility across the organization. Aware's comprehensive platform empowers executives with organizational insights, while also helping legal, compliance, information security, and IT professionals to address the modern challenges of enterprise collaboration: archiving, monitoring, DLP, eDiscovery, retention and legal holds. Aware is a strategic platform integration partner to Microsoft, Slack, and Workplace from Meta. AstraZeneca, Memorial Health System, Sun Life Financial, BT Group, Farmers Insurance and Mercado Libre are among the global business leaders that trust Aware for contextual intelligence. Aware is consistently recognized by Gartner analysts as a market-leading vendor, distinguished in 2022 reports: Magic Quadrant for Enterprise Information Archiving; Market Guide for Insider Risk Management Solutions; Market Guide for Voice of the Employee Solutions; Vendor Identification for Data Loss Prevention Toolkit. Learn more at http://www.awarehq.com.

Media Contact
Lee Regal
Lumina Communications for Aware
[email protected]

SOURCE Aware


Charlotte research expenditures among top third in U.S., led by … – Inside UNC Charlotte

UNC Charlotte's upward trajectory toward top-tier research university status is supported by a recent national report of research expenditures for U.S. colleges and universities.

In the latest National Science Foundation Higher Education Research and Development Survey, Charlotte moved into the top third of all U.S. colleges and universities for fiscal research expenditures.

"During the past decade, UNC Charlotte was among the fastest-growing universities in the nation, and with increased investment from federal and state sources, our research has flourished exponentially," said Chancellor Sharon L. Gaber.

According to the report, Charlotte's research expenditures have risen 87% since 2012. The University's number of doctoral graduates has grown 62%, too, according to the NSF Earned Doctorate rankings.

CCI faculty conducting innovative research

Leading the University's efforts is the College of Computing and Informatics, which, according to the recently released 2021 NSF HERD report, now ranks 39th in the U.S. for research expenditures in the area of computer and information sciences and second among North Carolina universities.

CCI Dean Bojan Cukic said the college has seen considerable growth in research funding during the past decade.

"Our faculty expertise in cybersecurity, bioinformatics and genomics, and artificial intelligence is attracting higher levels of federal funding," he noted. "For the college, it aids in attracting new faculty in these areas of research excellence, and for students, it ensures they are receiving a high-quality education that will prepare them to excel in their professional careers or advanced studies."

Charlotte is the No. 1 producer of computer science bachelor's degrees in North Carolina, and the Queen City has become a top city for startups and a top 10 Tech Town, according to CompTIA. CCI is providing the talented, diverse tech workforce that will drive continued economic development for the region.

Key focus areas: CIPHER, cybersecurity and AI

Through investment from the General Assembly's Engineering a Smart and Secure Future for North Carolina, Charlotte will grow enrollments in computer science, data science and engineering by more than 2,000 students during the next five years. The University's School of Data Science was the first of its kind in the Carolinas.

Special state legislative funding for the Center for Computational Intelligence to Predict Health and Environmental Risks is powering its efforts to understand, prevent and combat future outbreaks of known and unknown infectious diseases. CIPHER, as it is known, features 20-plus experts from bioinformatics, engineering and data science to biology, mathematics and computer science, and more, who are combining forces to identify vital connections among the world's natural environment, interactions with animals and human health.

Cybersecurity is an area in which Charlotte has been a national leader for more than 20 years. CCI houses a Center of Academic Excellence in Cyber Defense Education and Research as designated by the National Security Agency and Department of Homeland Security. Researchers are leading the investigation not only into how to keep governments and industries safe from unwarranted electronic intrusion but private homes as well.

Artificial intelligence, increasingly in the news, has wide-ranging applications from autonomous modes of transportation to affecting the boundaries of machine intelligence and human creativity. CCI faculty and students have created unmanned aerial vehicles that can survey and report a scene of a road accident, developed automated chatbots and conversational agents, and designed computer vision and analytics applications that improve customer experience at major retail stores.

Shaping What's Next, UNC Charlotte's 10-year strategic plan, lays out a new vision as an emerging top-tier global research university to support the fast-growing Charlotte region.

"UNC Charlotte's upward trajectory is a win-win for the region," said Gaber. "As we have seen in cities across the U.S., top-tier public research universities drive innovation and supercharge businesses, helping to advance economic mobility in their regions."

Commercializing research

Charlotte is among national leaders in commercializing its research. Since 2018, University researchers have been awarded 87 new patents. According to the Association of University Technology Managers, Charlotte ranks in the top 10 nationally per research dollar spent for the number of new patents filed, new patents approved and new startup ventures formed.

Gaber noted greater support for Charlotte's burgeoning research enterprise will enable the region to better compete against other metropolitan areas when it comes to attracting new industry jobs and business relocations and other investments.

About the NSF Higher Education Research and Development Survey

The Higher Education Research and Development Survey is the primary source of information on research and development expenditures at U.S. colleges and universities. The survey collects information on R&D expenditures by field of research and source of funds and also gathers information on types of research, expenses, and headcounts of R&D personnel. The survey is an annual census of institutions that spent at least $150,000 in separately accounted for R&D in the fiscal year.
