
Q&A: Xiong – UBNow: News and views for UB faculty and staff – University at Buffalo

Jinjun Xiong's years of experience with artificial intelligence (AI) are making a dramatic impact at UB.

A SUNY Empire Innovation Professor of Computer Science and Engineering, Xiong is scientific director and co-director of the AI Institute for Exceptional Education, a national institute developing artificial intelligence systems that identify and assist young children with speech and/or language processing challenges. It was established earlier this year with a five-year, $20 million grant from the National Science Foundation.

Xiong also serves as co-director of UB's Institute for Artificial Intelligence and Data Science (IAD), where he connects investigators, including clinical and translational researchers, with the power of AI.

These efforts include:

"I am also always looking for new ideas for how we can make the IAD platform more useful and accessible for all UB investigators," Xiong says.

He believes it is important for researchers and the public to understand artificial intelligence, and the ways in which it is changing our world. In a Q&A with UBNow, Xiong discusses the impact of AI on research now and in the future, and analyzes how it will affect health care.

How is AI affecting clinical and translational research now, and how will it do so in the future?

AI is already impacting clinical research in multiple ways, such as medical imaging analyses for skin cancer detection, MRI image segmentation, understanding clinical trial data, and wearable sensors to improve patient monitoring; the list just goes on and on. The future of clinical practice will incorporate more and more intelligent solutions enabled by more efficient and intelligent algorithms, all aiming to improve the quality of patient care. One such example is the growing capability of AI, especially the recent amazing results from generative AI like ChatGPT, where it is conceivable that AI-augmented agents such as chatbots can help provide more accessible and higher-quality health literacy for patients.

To some degree, every future professional needs to understand a bit about AI and computing, by either talking to AI experts/researchers or learning online to gain a general understanding of how AI works, and what AI can do and cannot do right now and even in the near future. With that basic understanding, people working in a particular domain like medicine can revisit their daily practices and think out of the box about where AI can help in their current practice flows, and then engage with an AI expert to co-imagine and then co-design a possible AI-driven solution.

The public should realize that the impact of AI on health care is real and inevitable. There is always an ethical and moral issue around AI in health care, as it may potentially remove autonomy from humans. But that is exactly why the public should be aware of the technology, so they can be part of the conversation to find meaningful solutions. I believe the voices of the public should be heard in charting a new direction for humankind with AI.

The power of AI can only become real when it is applied to solve a particular domain problem.

For more information on IAD research initiatives, write to Xiong at jinjun@buffalo.edu.

Visit link:

Q&A: Xiong - UBNow: News and views for UB faculty and staff - University at Buffalo


Master of Science in Data Science – National University

National University's MS in Data Science program focuses on advanced topics like how to develop, implement, and maintain the hardware and software tools needed to make efficient and effective use of big data, including databases, data marts, data warehouses, machine learning, analytic programming, and artificial intelligence and optimization. With this knowledge, you'll be equipped with the industry-current credentials needed to pursue in-demand positions* like:

Employers seeking data science professionals span a large range of service and manufacturing settings. For example, top employers of computer and IT professionals include IBM, Microsoft, and Facebook, while JP Morgan Chase, Wells Fargo, and Travelers Group regularly recruit finance and insurance professionals with an MS in Data Science.

In the consulting world, Deloitte, KPMG, and Accenture are top employers, and Humana and Anthem dominate the healthcare industry. If you're interested in biotech or pharmaceutical manufacturing, keep Johnson & Johnson and Bayer on your radar. Ryder and Uber are two top employers in the transportation sector.

With your MS in Data Science, you can not only expect to be in demand, you're also likely to be well compensated. The Bureau of Labor Statistics states that the median annual wage for management analysts was $93,000 in May 2021, and the highest 10 percent earned more than $163,760.**

**SOURCE: https://www.bls.gov/ooh/computer-and-information-technology/computer-and-information-research-scientists.htm#tab-5

Excerpt from:

Master of Science in Data Science - National University


Jianwei Niu named interim dean of University College – UTSA

UTSA's Mission

The University of Texas at San Antonio is dedicated to the advancement of knowledge through research and discovery, teaching and learning, community engagement and public service. As an institution of access and excellence, UTSA embraces multicultural traditions and serves as a center for intellectual and creative resources as well as a catalyst for socioeconomic development and the commercialization of intellectual property - for Texas, the nation and the world.

To be a premier public research university, providing access to educational excellence and preparing citizen leaders for the global environment.

We encourage an environment of dialogue and discovery, where integrity, excellence, inclusiveness, respect, collaboration and innovation are fostered.

UTSA is a proud Hispanic Serving Institution (HSI) as designated by the U.S. Department of Education.

The University of Texas at San Antonio, a Hispanic Serving Institution situated in a global city that has been a crossroads of peoples and cultures for centuries, values diversity and inclusion in all aspects of university life. As an institution expressly founded to advance the education of Mexican Americans and other underserved communities, our university is committed to ending generations of discrimination and inequity. UTSA, a premier public research university, fosters academic excellence through a community of dialogue, discovery and innovation that embraces the uniqueness of each voice.

Read the rest here:

Jianwei Niu named interim dean of University College - UTSA


The Ethical Imperative of AI Differential Privacy in Data Science – Fagen wasanni


The rapid advancements in artificial intelligence (AI) and data science have opened up new horizons for various industries, from healthcare to finance. The ability to analyze massive amounts of data has led to significant breakthroughs in areas such as personalized medicine, fraud detection, and even self-driving cars. However, with these technological advancements comes the responsibility to ensure that the privacy of individuals is protected. This is where the concept of differential privacy comes into play, serving as an ethical imperative in the realm of AI and data science.

Differential privacy is a mathematical framework that allows data scientists to analyze and share data while preserving the privacy of individuals within the dataset. It works by adding a carefully calculated amount of noise to the data, ensuring that the results of any analysis remain statistically accurate while making it virtually impossible to identify any individual's information. This approach has gained significant traction in recent years, with tech giants like Apple and Google adopting differential privacy techniques to protect user data.
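As a rough illustration of how calibrated noise protects individuals, the sketch below applies the Laplace mechanism to a simple counting query; the epsilon value, dataset, and query are hypothetical, and production systems calibrate noise to each query's sensitivity and an overall privacy budget.

```python
import numpy as np

def private_count(records, predicate, epsilon=0.5):
    """Differentially private count of records matching a predicate.

    A count query has sensitivity 1 (one person changes it by at most 1),
    so Laplace noise with scale 1/epsilon hides any individual's presence.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical query: how many records mention a given condition.
records = [{"condition": "diabetes"}, {"condition": "asthma"}, {"condition": "diabetes"}]
print(private_count(records, lambda r: r["condition"] == "diabetes"))
```

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy, which is the trade-off the article alludes to.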

The ethical imperative of implementing differential privacy in AI and data science stems from the potential harm that can be caused by privacy breaches. In today's digital age, personal information is more valuable than ever, and the consequences of mishandling such data can be severe. For instance, unauthorized access to medical records could lead to discrimination based on health conditions, while financial data breaches can result in identity theft and fraud. Furthermore, the misuse of personal information can have long-lasting psychological effects on individuals, leading to a loss of trust in institutions and a sense of vulnerability.

In addition to the potential harm caused by privacy breaches, there is also a growing concern about the potential for AI algorithms to perpetuate and even amplify existing biases and inequalities. This is particularly relevant in the context of machine learning, where algorithms are trained on large datasets to identify patterns and make predictions. If the data used to train these algorithms contains biases, the resulting AI systems can inadvertently perpetuate these biases, leading to unfair and discriminatory outcomes.

Differential privacy can help mitigate these concerns by ensuring that sensitive information is protected while still allowing for valuable insights to be gleaned from the data. By preserving individual privacy, differential privacy reduces the risk of harmful consequences resulting from privacy breaches. Moreover, by allowing data scientists to work with anonymized data, differential privacy can help to identify and address potential biases in AI algorithms, leading to more fair and equitable outcomes.

The ethical imperative of differential privacy in AI and data science is further underscored by the growing body of legislation aimed at protecting individual privacy. Regulations such as the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have placed stringent requirements on organizations to safeguard personal information and provide greater transparency and control to individuals over their data. Implementing differential privacy techniques can help organizations comply with these regulations while still enabling them to leverage the power of AI and data science to drive innovation and growth.

In conclusion, the ethical imperative of AI differential privacy in data science is clear. As the world becomes increasingly data-driven, it is crucial for organizations to strike a balance between harnessing the power of AI and data science and protecting the privacy of individuals. Differential privacy offers a promising solution to this challenge, enabling data scientists to gain valuable insights from data while preserving individual privacy and mitigating the risk of harmful consequences. By adopting differential privacy techniques, organizations can not only meet their ethical obligations but also build trust with their customers and stakeholders, ensuring the long-term success and sustainability of their AI and data science initiatives.

Link:

The Ethical Imperative of AI Differential Privacy in Data Science - Fagen wasanni


$424K grant to better predict weather, climate through machine … – University of Hawaii

Improved weather and climate forecasting using machine learning and artificial intelligence is the focus of a new University of Hawaii at Mānoa project. Results are expected to have a major impact in Hawaii and other tropical climate areas around the world.

Associate Professor Peter Sadowski from the Information and Computer Sciences Department in the College of Natural Sciences earned a five-year, $424,293 CAREER grant from the National Science Foundation (NSF). CAREER grants are designed to support early-career faculty to serve as academic role models in research and education.

"One of the risks of climate change for Hawaii is extreme weather events, and current scientific models are poor at estimating these risks," Sadowski said. "This project will provide a completely new approach to modeling these risks, using the latest advancements in AI (artificial intelligence)."

Sadowski's project will develop machine-learning methods to predict the risk of adverse weather and climate events. AI will be used to develop new data-driven computational methods for modeling risk and apply these methods to weather applications.

In particular, these models will be applied to forecasting solar irradiance and precipitation, two areas that are particularly important for tropical islands such as the Hawaiian Islands. Estimating the risk of rapid changes in solar power generation is necessary for managing energy grids that are seeing a rapid increase in variable renewable sources, and floods claim hundreds of lives and billions in property damage each year in the U.S. alone.

Artificial intelligence methods have greatly improved at translating images and video into predictions. A key development is the ability to learn probabilistic models of images and video. The research will leverage existing data from numerical simulations of atmospheric variables, observations from satellites and ground-based weather station data from the NSF-funded CHANGE-HI project. The machine-learning methods developed by this project will complement existing physics-based weather prediction models by providing location-specific forecasts with increased speed, higher resolution and probabilistic accuracy.
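The article does not describe the models themselves, but probabilistic forecasts of this kind are often trained with a quantile (pinball) loss so that a model predicts a range of likely outcomes rather than a single value; the sketch below, with made-up numbers, shows how that loss is computed.

```python
import numpy as np

def pinball_loss(y_true, y_pred, quantile):
    """Quantile (pinball) loss: penalizes over- and under-prediction asymmetrically,
    so a model can learn, say, the 90th percentile of rainfall or solar irradiance
    instead of only the average."""
    diff = y_true - y_pred
    return np.mean(np.maximum(quantile * diff, (quantile - 1.0) * diff))

# Made-up example: observed solar irradiance versus a 90th-percentile forecast.
observed = np.array([120.0, 300.0, 450.0])
forecast = np.array([200.0, 350.0, 500.0])
print(pinball_loss(observed, forecast, quantile=0.9))
```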

This research will be paired with an educational outreach program that includes a summer data science course for high school students and a workshop to share data science teaching materials with Hawaii's K-12 teachers.

More here:

$424K grant to better predict weather, climate through machine ... - University of Hawaii


Argonne and University of Chicago researchers improve … – Argonne National Laboratory

The introduction of millions of electric vehicles (EVs) onto the power grid will create a transformational opportunity for America's decarbonization efforts. However, it also brings with it an important challenge. Scientists and engineers are looking for the best way to ensure that vehicles can be charged smartly, efficiently, cheaply and cleanly by a grid that may not be able to accommodate them all at once or all the time.

Researchers at the U.S. Department of Energy's Argonne National Laboratory and graduate students at the University of Chicago are collaborating on an exciting new project to tackle that challenge. This project will use a particular combination of computational rewards and punishments, a technique called reinforcement learning, to train an algorithm to help schedule and manage the charging of a diverse set of electric vehicles.

The first group of vehicles that the team is studying are those being charged by Argonne employees at the laboratory's Smart Energy Plaza, which offers both regular AC chargers and DC fast chargers. Because employees don't typically need their vehicles during the workday, there can be some flexibility in terms of when each car gets charged.

"There's a certain total amount of power that can be allocated, and different people have different needs in terms of when they need to have their cars available at the end of the day," said Argonne principal electrical engineer Jason Harper. "Being able to train a model to work within the constraints of a particular employee's departure time while being cognizant of peak demands on the grid will allow us to provide efficient, low-cost charging."

"When you have a lot of EVs charging at the same time, they can create a peak demand on the power station. This introduces increased charges, which we're trying to avoid," added Salman Yousaf. Yousaf is a graduate student in applied data science at the University of Chicago who is working on the project with three other students.

The reinforcement learning in the algorithm works by incorporating feedback from positive results, like an EV having the desired amount of charge at the designated departure time. It also incorporates negative results, like having to draw power past a certain peak threshold. Based on this data, the charge scheduling algorithm can make more intelligent decisions about which cars to charge when.
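Argonne has not published the reward function, but the positive and negative feedback described above could plausibly be scored along the following lines; the weights, threshold, and field names are illustrative assumptions only.

```python
def charging_reward(soc_at_departure, target_soc, site_load_kw, peak_threshold_kw):
    """Hypothetical per-episode reward for an EV charge-scheduling agent.

    Positive feedback for delivering the charge the driver asked for by departure;
    negative feedback for pushing total site load past a peak-demand threshold.
    The weights are illustrative, not Argonne's values.
    """
    reward = 10.0 * min(soc_at_departure / target_soc, 1.0)   # met the driver's need
    overload = max(site_load_kw - peak_threshold_kw, 0.0)     # exceeded the peak
    reward -= 0.5 * overload
    return reward

# A car that reached 90% of its target while the site briefly exceeded the peak.
print(charging_reward(soc_at_departure=0.72, target_soc=0.80,
                      site_load_kw=130.0, peak_threshold_kw=100.0))
```

An agent trained against a signal like this learns to shift charging away from peak periods while still getting each car ready by its owner's departure time.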

"Smart charge scheduling is really an optimization problem," Harper said. "In real time, the charging station is constantly having to make tradeoffs to make sure that each car is being charged as efficiently as possible."

Although the Argonne charging stations are the first location where the project's researchers are performing reinforcement learning, there is the potential to expand far beyond the laboratory's gates. "There's a lot of flexibility when it comes to charging at home, where overnight charging would allow for some ability to move around how the charging load is distributed," Yousaf said.

"True smart charging is really taking into consideration all of the actors in the ecosystem," Harper added. "That means the utility, the charging station owner and the EV driver or homeowner. We want to meet the needs of everyone while still being mindful of the restrictions that everyone faces."

Future work with the model will involve a simulation of a much larger charging network that will initially be based on data collected from Argonne's chargers.

Harper and his colleagues have also developed a mobile app called EVrest that allows users of networked charging stations (in this case, initially Argonne employees) to reserve stations and participate in smart charge scheduling. The EVrest platform collects data on charging behavior and will use that data to train future AI models to aid in smart charge management and vehicle grid integration.

Read the original post:

Argonne and University of Chicago researchers improve ... - Argonne National Laboratory


Genedata Announces Licensing Agreement with Gilead Sciences to … – Bio-IT World

Genedata, the leading provider of enterprise software solutions for biopharmaceutical R&D, today announced a licensing agreement with Gilead Sciences, Inc. to help the company leverage the power of multi-omics data in the discovery of life-changing therapeutics across multiple indications. The agreement includes a software license to Genedata Profiler, a domain-specific, state-of-the-art data integration and analytics platform, and access to consulting resources to support the adoption and integration of the solution within the company's infrastructure.

"At Gilead, we work every day to discover innovative therapeutics for people with life-threatening diseases," said Li Li, Executive Director, Research Data Sciences at Gilead. "Utilizing Genedata Profiler will enhance our capabilities for integration and analysis of large-scale multi-modal datasets as we continue to accelerate research for the discovery of transformative medicines."

As a comprehensive, end-to-end computational solution, Genedata Profiler enables biopharmaceutical organizations to conduct data-rich discovery and translational research and make more informed decisions, ultimately supporting the development of more effective, targeted treatments. It empowers data scientists to decipher complex patterns associated with a specific disease from Next Generation Sequencing (NGS) and other high-dimensional data, facilitating target identification and evaluation and biomarker research for a range of therapeutic indications.

The main benefits of Genedata Profiler include:

Beyond software, Genedata provides scientific domain expertise and 26 years of experience in supporting the biopharma industry in increasing the productivity and quality of their research output through digital transformation.

"We are excited to have signed Gilead Sciences to our rapidly growing customer list for our Genedata Profiler platform," said Othmar Pfannes, Ph.D., CEO of Genedata. "Genedata Profiler democratizes access to R&D data and includes fit-for-purpose analytics, while automating highly complex data operations, all key requirements for enabling biopharmaceutical organizations to generate precision medicines as efficiently as possible."

About Gilead Sciences

Gilead Sciences, Inc. is a biopharmaceutical company that has pursued and achieved breakthroughs in medicine for more than three decades, with the goal of creating a healthier world for all people. The company is committed to advancing innovative medicines to prevent and treat life-threatening diseases, including HIV, viral hepatitis, COVID-19, and cancer. Gilead operates in more than 35 countries worldwide, with headquarters in Foster City, California.

http://www.gilead.com

LinkedIn | Twitter

About Genedata

Genedata transforms data into intelligence with innovative software solutions that incorporate extensive biopharma R&D domain knowledge. Multinational biopharmaceutical organizations and cutting-edge biotechs around the globe rely on Genedata to digitalize and automate data-rich and complex R&D processes. From early discovery all the way to the clinic, Genedata solutions help maximize the ROI in R&D expenditure. Founded in 1997, Genedata is headquartered in Basel, Switzerland with additional offices in Boston, London, Munich, San Francisco, Singapore, and Tokyo.

http://www.genedata.com

LinkedIn | Twitter | YouTube

Contact: Allison Kurz, Genedata, Public Relations, pr@genedata.com

Disclaimer

The statements in this press release that relate to future plans, events or performance are forward-looking statements that involve risks and uncertainties, including risks associated with uncertainties related to contract cancellations, developing risks, competitive factors, uncertainties pertaining to customer orders, demand for products and services, development of markets for the Company's products and services. Readers are cautioned not to place undue reliance on these forward-looking statements, which speak only as of the date hereof. The Company undertakes no obligation to release publicly the result of any revisions to these forward-looking statements that may be made to reflect events or circumstances after the date hereof or to reflect the occurrence of unanticipated events.

All product and service names mentioned are the trademarks of their respective companies.

Follow this link:

Genedata Announces Licensing Agreement with Gilead Sciences to ... - Bio-IT World


Research Rooted in Machine Learning Challenges Conventional … – National Institute of Justice

Researchers have developed a new analytical method to better understand how individuals move toward violent extremism.

Using machine learning, a form of artificial intelligence, the method reveals clusters of traits associated with possible pathways to terrorist acts. The resource may improve our understanding of how an individual becomes radicalized toward extremist violence.

The report on a scientific study that deploys those tools and blends elements of data science, sociology, and criminology is calling into question some common assumptions about violent extremism and the homegrown individuals who are motivated to engage in behaviors supporting violent jihadist ideologies. See Table 1.

Table 1 shows select key insights from the project aimed at developing a new computational methodology that can mine multiple large databases to screen for behaviors associated with violent extremism.

The study departs from the research community's common use of demographic profiles of extremist individuals to predict violent intentions. Profiling runs the risk of relying on ethnic stereotypes in extremism studies and law enforcement practices, particularly with respect to American Muslims. According to the researchers, the method isolated the behaviors associated with potential terrorist trajectories, after being trained with thousands of text data coded by researchers.

Machine learning is an application of artificial intelligence that uses existing data to make predictions or classifications about individuals, actions, or events. The machine learns by observing many examples until it can statistically replicate them.

Researchers scanned large datasets to spot traits or experiences that are collectively associated with terrorist trajectories, employing a process that blends machine learning (see "What Is Machine Learning?") and an evidence-based behavioral model of radicalization associated with violence and other terrorism-related activities.

The machine-learning computational method analyzes, while learning from, copious data to isolate behaviors associated with potential terrorist trajectories.

The graph component depicts clusters of behavioral indicators that reveal those trajectories. The datasets generating those indicators include investigator notes, suspicious activity reports, and shared information. See "What Do We Mean by Graph? Defining It in Context."

This tool for understanding violent extremism is the work of Colorado State University and Brandeis University investigators, supported by the National Institute of Justice. The tool aims to isolate somewhat predictable radicalization trajectories of individuals or groups who may be moving toward violent extremism.

A key element of the work was the development of a Human-in-the-Loop system, which introduces a researcher into the data analysis. Because the data are so complex, the researcher mitigates difficulties by assisting the algorithm at key points during its training. As part of the process, the researcher writes and rewrites an algorithm to pick up key words, phrases, or sentences in texts. Then the researcher sorts those pieces of text with other text segments known to be associated with radicalization trajectories.
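The report describes this coding workflow in prose rather than code, but the keyword-and-phrase matching step might be sketched as follows; the cue phrases, indicator name, and function are invented for illustration and are not the researchers' actual rules.

```python
import re

# Invented cue phrases for one hypothetical behavioral indicator ("planning travel abroad").
# In the study, researchers wrote and refined rules like these, then reviewed the matches.
TRAVEL_CUES = [r"\bbook(ed)? a flight\b", r"\bpassport\b", r"\bcross the border\b"]

def flag_segments(text, cues=TRAVEL_CUES):
    """Return sentences that match any cue phrase, for a human coder to confirm or reject."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if any(re.search(c, s, re.IGNORECASE) for c in cues)]

sample = "He renewed his passport last week. He also posted about the weather."
print(flag_segments(sample))  # only the passport sentence is surfaced for review
```

The human in the loop then accepts, rejects, or recodes each flagged segment, and the rules are revised before the next pass, which is what lets the researchers code text far faster than by hand alone.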

The Human-in-the-Loop factor is designed to help researchers code data faster, build toward a law enforcement intelligence capable of capturing key indicators, and enable researchers to transform textual data into a graph database. The system relies on a software-based framework designed to help overcome challenges posed by massive data volumes and complex extremist behaviors.

The research stems from the premise that radicalization is the product of deepening engagements that can be observed in changing behaviors. This concept is based on researchers' observations that the radicalization process occurs incrementally.

The radicalization trajectory concept suggests that a linear pathway exists from an individual entertaining extremist ideas to ultimately taking extremist action marked by violence in the name of ideology.

The research findings validated that premise.

The researchers used 24 different behavioral indicators to search databases for evidence of growing extremism. Some examples of indicators are desire for action, issuance of threats, ideological rebellion, and steps toward violence. (See Figure 1 for an example of a set of cues, or behaviors, that the researchers associate with one behavioral indicator, planning a trip abroad.)

Source: Dynamic, Graph-Based Risk Assessments for the Detection of Violent Extremist Radicalization Trajectories Using Large Scale Social and Behavioral Data, by A. Jayasumana and J. Klausen, Table 5, p. 23.

Because violent extremism remains a relatively rare phenomenon, data on known individuals who committed terrorist events was mined to identify cues representing behavioral extremist trajectories. To that end, researchers collected three types of data:

The sources of collected data were public documents ranging from news articles to court documents, including indictments and affidavits supporting complaints.

Of the 1,241 individuals studied, the researchers reported that 421 engaged in domestic terrorist violence, 390 became foreign fighters, and 268 became both foreign fighters and individuals engaged in domestic terrorism. A minority (162) were convicted of nonviolent terrorism-related offenses.

Researchers analyzed time-stamped behavioral data, such as travel abroad, a declaration of allegiance, information seeking, or seeking a new religious authority, using graph techniques to assess the order of subjects' behavioral changes and the most common pathways leading to terrorism-related action. See the sidebar "What Do We Mean by Graph? Defining It in Context."

The researchers made several notable findings beyond those presented in Table 1.

Although researchers found that terrorist crimes are often the work of older (at least 25 years old, on average) individuals, the age-crime relationship varied across types of terrorist offenses. They found that, on average, people who committed nonviolent extremist acts were 10 years older than those who became foreign fighters. Although younger men (median age 23) are more likely to take part in insurgencies abroad, slightly older men (median ages 25-26) who have adopted jihadist ideologies are more likely to engage in violent domestic terrorist attacks. Individuals who did something violent at home were, on average, four years older than foreign fighters.

Researchers also found that men and a few women at any age may engage in nonviolent criminal support for terrorism. Also, men are six times more likely than women to commit violent offenses, both in the United States and abroad.

According to this study, individuals who have adopted jihadist ideologies and who are immigrants are more likely than those who are homegrown to engage in domestic extremist violence.

The dataset, comprising more than 1,200 individuals who had adopted jihadist ideologies, was used to track radicalization trajectories. It was limited by the availability of sufficiently detailed text sources, which introduced an element of bias. Much of the public data on terrorism come from prosecutions, but not all terrorism-related offenses are prosecuted in state or federal U.S. courts. Some of the subjects died while fighting for foreign terror organizations, which limited the available information on them.

Although data from public documents may be freely shared, the researchers noted that research based on public sources can be extremely time consuming.

Often public education efforts on anti-terrorism take place at schools where children learn about recruitment tactics by extremist groups and warning signs of growing extremism. However, the study found that more than half of those who commit extremist violent acts in the United States are older than 23 and typically not in school. This suggests that anti-terrorism education efforts need to expand beyond school settings.

By using machine learning to identify persons on a trajectory toward extremist violence, the research supports a further move away from relying on user profiles of violent extremists and toward the use of behavioral indicators.

The research described in this article was funded by NIJ award 2017-ZA-CX-0002, awarded to Colorado State University. This article is based on the grantee report Dynamic, Graph-Based Risk Assessments for the Detection of Violent Extremist Radicalization Trajectories Using Large Scale Social and Behavioral Data, by A. Jayasumana and J. Klausen.

A graph, in the context of this research, is a mathematical representation of a collection of connections (called edges) between things (called nodes). Examples would be a social network or a crime network, or points on a map with paths connecting the points. The concept is analogous to cities, and the roads or flight paths connecting them, on a map. The researchers in this violent extremism study isolated clusters of traits representing a more likely pathway to violent extremism. The concept is similar to a map app choosing roads that are least congested (allowing for most traffic) between two points. Graphs in this sense can be quite visual and make good conventional graphics.
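For readers who want to see the data structure concretely, a graph like the one described can be represented in a few lines of code; the nodes and edges below are invented placeholders rather than study data.

```python
# A tiny graph as an adjacency list: nodes are hypothetical behavioral indicators,
# and an edge connects indicators observed in sequence along a trajectory.
graph = {
    "information_seeking": ["declaration_of_allegiance"],
    "declaration_of_allegiance": ["travel_planning"],
    "travel_planning": ["steps_toward_violence"],
    "steps_toward_violence": [],
}

def reachable(graph, start):
    """Follow edges from a starting node and return every node that can be reached."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

print(reachable(graph, "information_seeking"))
```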


Originally posted here:

Research Rooted in Machine Learning Challenges Conventional ... - National Institute of Justice


Simplifying with Data-Driven Warehouse Optimization – Supply and Demand Chain Executive


There's a perfect storm brewing in the evolution of warehouse and DC businesses, and the optimization solution lies in creating a recipe in which technology is a main ingredient.

The industry faces a number of challenges. Increased customer expectations and the growth of ecommerce - magnified by labor issues - have put more pressure on financial performance and corporate transformation.

After the pandemic made distribution excellence even more essential to obtaining necessary goods and services, the warehouse function has been elevated to a central strategy for many companies. Companies have changed their thinking from viewing distribution as a cost center to thinking about distribution as a competitive advantage. There is an urgency to act on these changing market conditions, but some changes take a long time to implement.

So, what's the answer? In my eyes, it's a marriage of the old and new. If you use data science-led innovative software products as the platform and infrastructure for growth, you can apply them to the basic tasks in the DC: picking, batching, travel. To tie things together, it's a collaboration between workers and technology that enables workers to be more efficient and effective at their jobs.

A Lucas Systems survey found that nearly 3 out of 4 (74%) on-floor workers will consider a pay cut at another company for an opportunity to use technology if it helps them in their job. Workers also said they are physically spent, spending over a third of their day walking, and would welcome tech's help in the form of robots or other tech tools. This tells me that workers are eager to use technology in new, beneficial ways.

From constant enhancements focused on incremental process changes to continuous optimization and operational transformation, DCs need tools that can help them manage the week-to-week, day-to-day and even hour-to-hour changes that occur. It's about finding the real problems, not the symptoms of the problems. It is also about prioritizing solutions that optimize DC operations and processes and allow frontline workers to work to the top of their ability level.

The opportunity lies mainly in three areas:

There have generally been only slight gains in picker productivity or in slotting programs to reduce travel, with pickers in most facilities still spending more than 30% of their day traveling throughout the warehouse rather than picking product. But there are ways to optimize and reduce travel and generate immediate productivity gains without changing overall systems. That's where AI, software and technology come in.

Let's look at how applying technology could transform your operation. In batching work, there are quite a number of variables to consider, including:

Software can dynamically apply real-time optimization algorithms, putting work into priority sequence, based on rules that you can configure and adjust, and run millions of possible combinations in a split second, so that users are never waiting for work. For example, maybe you concentrate pick density early in the day, when there is time before the first orders are due at the dock. As the day progresses, the software can automatically transition to optimize for priority, as route cut-off times are approaching, based on the schedule for that day or week. The software then determines an optimized path for the user to take through the warehouse to complete their work, taking into account a number of factors such as:

What's important to understand is that you can generally achieve excellent results without completely changing and updating systems, like a brand-new WMS or ERP. The behind-the-scenes part of the process, and how you get to the answers, is incredibly complex and best handled by software technology, ideally using artificial intelligence and machine learning to continually iterate and enhance the process.
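The vendor does not publish its algorithms, but the kind of rule-weighted, priority-based sequencing described above might look roughly like the sketch below; the weights, fields, and scoring rule are illustrative assumptions, not Lucas Systems' actual logic.

```python
from dataclasses import dataclass

@dataclass
class Assignment:
    order_id: str
    minutes_to_cutoff: float  # how soon the order must be at the dock
    pick_density: float       # picks per aisle visited (higher means less travel)

def score(a: Assignment, urgency_weight=120.0, density_weight=1.0):
    """Hypothetical scoring rule: an order's urgency grows as its route cut-off nears,
    while dense, low-travel work is favored when nothing is urgent."""
    urgency = urgency_weight / max(a.minutes_to_cutoff, 1.0)
    return urgency + density_weight * a.pick_density

work_queue = [
    Assignment("A-100", minutes_to_cutoff=30, pick_density=0.8),
    Assignment("B-200", minutes_to_cutoff=240, pick_density=2.5),
]
# Re-sort whenever conditions change, so pickers are never waiting for work.
for a in sorted(work_queue, key=score, reverse=True):
    print(a.order_id, round(score(a), 2))
```

In practice the rules are configurable and re-evaluated continuously, which is what lets the system favor pick density in the morning and shift toward cut-off priority later in the day.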

Using AI to optimize DC processes has proven to be powerful in ecommerce pick-to-cart operations, where customers report doubling productivity without any fundamental changes to their picking processes. Although the biggest productivity gains are seen in each picking, the AI-based software tools have proven equally applicable to case picking, replenishment and other activities where workers are visiting many locations per work assignment. In grocery and food DCs, AI-based optimization has demonstrated travel savings of 15%-30% in case pick to pallet applications.

The right software optimizes hands-on work and streamlines manual processes, thus making the work easier, faster and more accurate for workers. For managers, machine learning-based applications provide new insights and recommendations, empowering them rather than taking control out of their hands. Ultimately, thats better for all, and for the overall business.

Read more from the original source:

Simplifying with Data-Driven Warehouse Optimization - Supply and Demand Chain Executive


Discover the Golden Ratio: How to Perfectly Balance Data Science … – Medium

On a regular day, I found myself standing in front of an art piece at a local museum. As my eyes scanned the grandeur of the structure, a realization hit me. Business, like this work of art, also follows a golden ratio, a harmonious blend of data science and intuition that could alter the course of your decision-making journey.

Can you imagine two seemingly distinct entities like data science and intuition being the Da Vinci Code to effective business decision making? It sounds absurd, doesn't it? The Gen Z in me wants to joke, "What's next? Yoga and quantum physics?"

Let me tell you a story. Sandra, an entrepreneur with a tech startup, was on the verge of collapse. She had all the data science insights but ignored her gut instincts. Then, she met an old mentor who revealed the golden ratio, the fusion of data and intuition. In no time, she turned the tables around. Her startup thrived, and she was hailed as a visionary.

Doesn't it sound like a plot twist? Welcome to the reality of today's business world, where data-crunching machines and human intuition come together to create magic. Here, one complements the other, filling in the gaps, and that's what we call the golden ratio.

Scientific evidence backs this approach. A study from the University of Amsterdam shows that intuitive decisions backed by data analysis are often more effective. When data science meets intuition, we get an amalgamation of raw numerical power and the human touch.

But, hold on! Don't pack your bags for a trip to a secluded mountain to hone your intuition. And don't invest all your money in the most advanced data analytics tool, either. The golden ratio isn't about extremes. It's about finding the perfect balance.

Now that we have the recipe, let's prepare the dish. Over time, with practice and reflection, you'll start seeing the golden ratio in your decisions. It's a continuous process, a journey toward harmonious decision-making.

So, are you ready to discover your golden ratio and revolutionize your decision-making?

Imagine a day when you stand before your business, seeing the golden ratio in action, driving success. Remember, the journey to finding the perfect balance can be as thrilling as standing in front of a masterpiece and realizing that it holds the secret to your business success.

Drop your experiences in the comments and let's discuss how the golden ratio is reshaping decision-making landscapes. Follow my Medium account for more unique insights and daring ideas. Let's embark on this exciting journey together!

See the article here:

Discover the Golden Ratio: How to Perfectly Balance Data Science ... - Medium
