
IBM’s AutoAI Has The Smarts To Make Data Scientists A Lot More Productive But What’s Scary Is That It’s Getting A Whole Lot Smarter – Forbes


I recently had the opportunity to discuss current IBM artificial intelligence developments with Dr. Lisa Amini, an IBM Distinguished Engineer and the Director of IBM Research Cambridge, home to the MIT-IBM Watson AI Lab. Dr. Amini was previously Director of Knowledge & Reasoning Research in the Cognitive Computing group at IBM's T.J. Watson Research Center in New York, and she earned her Ph.D. in Computer Science from Columbia University. She and her team are part of IBM Research, tasked with creating the next generation of automated AI and data science.

I was interested in automation's impact on the lifecycles of artificial intelligence and machine learning and centered our discussion around next-generation capabilities for AutoAI.

AutoAI automates the highly complex process of finding and optimizing the best ML model, features, and model hyperparameters for your data. AutoAI does what otherwise would need a team of specialized data scientists and other professional resources, and it does it much faster.
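To make that concrete, the following is a minimal, generic sketch of the kind of model and hyperparameter search AutoAI automates. It uses scikit-learn rather than IBM's tooling, and the dataset, candidate models, and search grid are illustrative assumptions rather than AutoAI's actual search space.

```python
# A generic sketch of automated model and hyperparameter search.
# This is NOT IBM's AutoAI implementation; the dataset, candidate models,
# and search space below are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One pipeline skeleton; the "model" step is swapped out during the search.
pipeline = Pipeline([("scale", StandardScaler()),
                     ("model", LogisticRegression(max_iter=1000))])

search_space = [
    {"model": [LogisticRegression(max_iter=1000)], "model__C": [0.1, 1.0, 10.0]},
    {"model": [RandomForestClassifier(random_state=0)], "model__n_estimators": [100, 300]},
]

# Rank candidate pipelines by cross-validated AUC, then score the winner on holdout data.
search = GridSearchCV(pipeline, search_space, cv=5, scoring="roc_auc")
search.fit(X_train, y_train)
print("best configuration:", search.best_params_)
print("holdout AUC:", round(search.score(X_test, y_test), 3))
```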

AI model building can be challenging

How Much Automation Does a Data Scientist Want?

Building AI and machine learning models is a multifaceted process that involves gathering requirements and formulating the problem. Before model training begins, data must be acquired, assessed, and preprocessed to identify and correct data quality issues.

Because the process is so complex, data scientists and ML engineers typically create ML pipelines to link those steps together for reuse each time data and models are refined. Pipelines handle data cleansing and manipulation operations for model training, testing and deployment, and inference. Constructing and tuning a pipeline is not only complex but also labor-intensive. It requires a team of trained resources who understand data science, plus subject-matter experts knowledgeable about the model's purpose and outputs.

It is a lengthy process because there are many design choices to be made, plus a myriad of tuning adjustments for various data processing and modeling stages.

The pipeline's high degree of complexity makes it a prime candidate for automation.

IBM AutoAI automates model building across the entire AI lifecycle


According to Dr. Amini, AutoAI does in minutes what would typically take hours to days for a whole team of data scientists. Automated functions include data preparation, model development, feature engineering, and hyperparameter optimization.


End-to-end automation of an entire model building process can result in significant resource savings. Here is a partial list of AutoAI features:

AutoAI provides a significant productivity boost. Even a person with basic data science skills can automatically select, train, and tune a high-performing ML model with customized data in just a few mouse clicks.

However, expert data scientists can rapidly iterate on potential models and pipelines, and experiment with the latest models, feature engineering techniques, and fairness algorithms. This can all be done without having to code pipelines from scratch.

Future AI automation projects

IBM Research is working on several next-generation AI automation projects, including algorithms that handle new data types, automate quality and fairness checks, and dramatically boost scale and performance.

Dr. Amini provided a deep dive into two especially interesting next-generation capabilities for scaling enterprise AI: AutoAI for Decisions and Semantic Data Science.

AutoAI for improved decision making

Time series forecasting is one of the most popular, but also one of the most difficult, forms of predictive analytics. It uses historical data to predict future results over time. Time series forecasting is commonly used for financial planning, inventory management, and capacity planning. The time dimension within a dataset makes analysis difficult and requires more advanced data handling.
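For a sense of the mechanics only, the sketch below builds lag features from a synthetic demand series and fits a plain regression to forecast held-out points. It is not IBM's implementation; the series, lag window, and model choice are assumptions.

```python
# Minimal lag-feature forecasting sketch on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
t = np.arange(200)
# synthetic monthly demand: trend + seasonality + noise (assumed data)
demand = 50 + 0.1 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, t.size)

lags = 12  # use one seasonal cycle of history as the feature window
X = np.column_stack([demand[i : i + len(demand) - lags] for i in range(lags)])
y = demand[lags:]

model = LinearRegression().fit(X[:-12], y[:-12])  # hold out the final year
print("forecast for held-out points:", np.round(model.predict(X[-12:])[:3], 1))
print("actual values:               ", np.round(y[-12:][:3], 1))
```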


IBM's AutoAI product already supports time series forecasting. It automates the following steps of building predictive models:

Dr. Amini explained that in many settings, after a time series forecast is created, the next step is to leverage that forecast for improved decision-making.

For example, a data scientist might build a time series forecasting model for product demand, but the model can also serve as input to inventory restocking decisions, with the goal of maximizing profit by reducing costly overstocking and avoiding lost sales due to stock outages.

Simple heuristics are sometimes used for inventory restocking decisions, such as determining when inventory should be restocked and by how much. In other cases, a more systematic approach, called decision optimization, is leveraged to build a prescriptive model to complement the predictive time series forecasting model.
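To illustrate how a forecast can feed such a prescriptive step, here is a toy, newsvendor-style restocking sketch that picks the order quantity with the best expected profit under an assumed demand forecast. It is a generic illustration, not IBM's Decision Optimization; the demand distribution and cost figures are invented.

```python
# Toy prescriptive step: choose an order quantity from a demand forecast.
import numpy as np

rng = np.random.default_rng(1)
# Samples from an assumed demand forecast distribution for the next period.
forecast_samples = rng.normal(loc=120, scale=15, size=5000)

price, unit_cost, salvage = 10.0, 6.0, 2.0  # invented economics for the example

def expected_profit(order_qty):
    sold = np.minimum(forecast_samples, order_qty)
    leftover = order_qty - sold
    return float(np.mean(price * sold + salvage * leftover - unit_cost * order_qty))

# The "decision optimization" step in miniature: rank candidate order quantities.
candidates = np.arange(80, 181)
best_qty = max(candidates, key=expected_profit)
print("order quantity that maximizes expected profit:", int(best_qty))
```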

Prescriptive analytics (as opposed to predictive analytics) uses sophisticated mathematical modeling techniques and data structures for decision optimization, and it relies on expertise that is in short supply. However, products that automatically generate decision optimization pipelines directly from data, the way AutoAI does for predictive models, do not exist today.

Multi-model pipelines


Dr. Amini explained that the best results are obtained by using both machine learning and decision optimization. To support that capability, IBM researchers are working on multi-model pipelines that could accommodate the needs of predictive and prescriptive models. Multi-models will allow business analysts and data scientists to use a common model to discuss aspects of the problem from each other's perspectives. Such a product would also promote and improve collaboration between diverse but equally essential resources.

Automation for Deep Reinforcement Learning

The new capability to automate pipeline generation for decision models is now available through the Early Access program from IBM Research. It leverages deep reinforcement learning to learn an end-to-end model from data to decision policy. The technology, called AutoDO (Automated Decision Optimization), leverages reinforcement learning (RL) models and gives data scientists the capability to train machine learning models to perform sequential decision-making under uncertainty. Automation for reinforcement learning (RL) is critical because RL algorithms are highly sensitive to internal hyperparameters. Therefore, they require significant expertise and manual effort to tune them to specific problems and data sets.

Dr. Amini explained that the technology automatically selects the best reinforcement learning model to use according to the data and the problem. Using advanced search strategies, it also selects the best configuration of hyperparameters for the model.

The system can automatically search historical data sets or any gym-compatible environment to automatically generate, tune, and rank the best RL pipeline. The system supports various flavors of reinforcement learning, including online and offline learning and model-free and model-based algorithms.
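As a miniature illustration of that kind of automation (not AutoDO itself), the sketch below grid-searches a tabular Q-learning agent's learning rate and exploration rate on a tiny chain environment and ranks the configurations by late-training reward; the environment, algorithm, and search grid are assumptions.

```python
# Toy hyperparameter search for a reinforcement learning agent (illustrative only).
import numpy as np

N_STATES, N_ACTIONS, GOAL = 6, 2, 5  # a tiny chain MDP: move left/right, reward at the goal

def train_and_score(alpha, epsilon, episodes=300, gamma=0.95, seed=0):
    """Train tabular Q-learning with one configuration; return late-training reward."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, N_ACTIONS))
    returns = []
    for _ in range(episodes):
        state, total = 0, 0.0
        for _ in range(50):
            if rng.random() < epsilon:
                action = int(rng.integers(N_ACTIONS))      # explore
            else:
                action = int(np.argmax(Q[state]))           # exploit
            next_state = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
            reward = 1.0 if next_state == GOAL else -0.01
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
            total += reward
            state = next_state
            if state == GOAL:
                break
        returns.append(total)
    return float(np.mean(returns[-50:]))

# The automation step: rank candidate hyperparameter configurations by score.
configs = [(a, e) for a in (0.05, 0.2, 0.5) for e in (0.05, 0.2, 0.5)]
best = max(configs, key=lambda c: train_and_score(*c))
print("best (learning rate, exploration rate):", best)
```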

Scaling AI with automation

Automation for reinforcement learning tackles two pressing problems for scaling AI in the enterprise.

First, it provides automation for sequential decision-making problems where uncertainty may weaken heuristic and even formal optimization models that don't utilize historical data.

Secondly, it brings an automated, systematic approach to the challenging reinforcement learning model building domain.

Semantic Data Science

State-of-the-art automated ML products like AutoAI can efficiently analyze historical data to create and rank custom machine learning pipelines. This includes automated feature engineering, which expands and augments the feature space of the data to optimize model performance. Automated methods currently rely on statistical techniques to explore the feature space.

However, if a data scientist understands the semantics of the data, it is possible to leverage domain knowledge to expand the feature space and increase model accuracy. This expansion can be done using complementary data from internal or external data sources. The feature space is the set of features used to characterize the data; for example, if the data is about cars, the feature space might include the make (Ford, Tesla, BMW), model year, and mileage.

Complementary feature transformations may be found in existing Python scripts or in relationships described in the literature. Even so, knowing which features and transformations are relevant requires sufficient technical skill to decipher and translate them from code and documents.
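As a small illustration of the idea (not IBM's Semantic Data Science), the sketch below "detects" that a column holds dates and expands the feature space with calendar-derived features; the dataframe and the simple detection rule are assumptions.

```python
# Miniature semantics-driven feature expansion (illustrative only).
import pandas as pd

# Assumed toy dataset; the real capability infers concepts from column names,
# values, existing code, and literature rather than a simple parse check.
df = pd.DataFrame({"sale_date": ["2022-01-03", "2022-01-08", "2022-02-14"],
                   "units": [12, 30, 7]})

parsed = pd.to_datetime(df["sale_date"], errors="coerce")
if parsed.notna().all():                      # "detected" the date concept
    df["month"] = parsed.dt.month             # expand the feature space with
    df["day_of_week"] = parsed.dt.dayofweek   # semantically meaningful features
    df["is_weekend"] = parsed.dt.dayofweek >= 5

print(df)
```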


New semantic power for data scientists

Dr. Amini described another powerful new capability created by IBM Research, called Semantic Data Science, that automatically detects semantic concepts for a given dataset. Semantic concepts characterize what the data means, providing a way for that meaning to be represented and reasoned over. Once AutoAI has detected the proper semantic concepts, it uses those concepts in a broad search for relevant features and feature engineering operations that may be present in existing code, data, and literature.

AutoAI can use these new, semantically rich features to improve the accuracy of generated models and to provide human-readable explanations for the generated features.


Even without having domain expertise to assess these semantic concepts or new features, a data scientist can still run AutoAI experiments. However, data scientists who want to understand and interact with the discovered semantic concepts can use the Semantic Feature Discovery visual explorer to explore discovered relationships.

Users can go directly from the visual explorer into the Python code or document where a new feature originated simply by clicking the Sources hyperlink.


The Semantic Data Science capability is also available as an IBM Research Early Access offering. Some of the capabilities are even available for experimentation on IBM's API Hub.

Dr. Amini concluded our conversation by summing up the vast research effort IBM is pouring into AutoAI in a single, efficient sentence:

"We want AutoAI and Semantic Data Science to do what an expert data scientist would want to do but may not always have the time or domain knowledge to do by themselves."

Wrap-up key points

Analyst Notes:

For more information and comments about quantum computing and artificial intelligence, follow Paul Smith-Goodson on Twitter @Moor_Quantum

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, or speaking sponsorships. The company has had or currently has paid business relationships with 88, A10 Networks, Advanced Micro Devices, Amazon, Ambient Scientific, Anuta Networks, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), AT&T, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, Calix, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, Echelon, Ericsson, Extreme Networks, Flex, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Mesophere, Microsoft, Mojo Networks, National Instruments, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), ON Semiconductor, ONUG, OpenStack Foundation, Oracle, Panasas, Peraso, Pexip, Pixelworks, Plume Design, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Residio, Samsung Electronics, SAP, SAS, Scale Computing, Schneider Electric, Silver Peak (now Aruba-HPE), SONY Optical Storage, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, TE Connectivity, TensTorrent, Tobii Technology, T-Mobile, Twitter, Unity Technologies, UiPath, Verizon Communications, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zoho, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is a personal investor in technology companies dMY Technology Group Inc. VI and Dreamium Labs.


Building The Future Of AI-Powered Retail Starts With Trust – Forbes

Jiazhen Zhu, lead data engineer and machine learning engineer

Jiazhen Zhu leads the end-to-end data team at Walmart Global Governance DSI, a diverse group of data engineers and data scientists united in building a better platform through data-driven decisions and data-powered products. Zhu first joined Walmart Global Tech in 2019 to oversee both data engineering and machine learning, giving him a unique vantage point into the interrelated worlds of DataOps, MLOps and data science. Before Walmart Global Tech, Zhu was a data scientist and software engineer at NTT Data.

Can you briefly introduce yourself and outline your role at Walmart Global Tech?

Currently, I am the lead data engineer and machine learning engineer at Walmart Global Tech. I work with data end-to-end, starting with where we get the data on through to how we clean the data, transfer the data, feed the data into model training and then ultimately move models into the production layer. I enjoy overseeing this process and bring a decade of experience working in both data and machine learning, building platforms across both.

What was your career journey getting up to this point?

After completing my bachelor's degree in computer science, I worked as a software engineer at Citi focused on the data warehouse used to build models and support scalable data modeling. Then, I completed a master's degree in data science and worked as both a software engineer and data scientist. All of this is interrelated, as data engineering and machine learning engineering are really just part of software engineering. Typically, the software engineer will be focused on the application, UI, or full-stack tasks, whereas the machine learning engineer and data engineer are more focused on the model and data, respectively.

How does Walmart Global Tech fit into Walmart overall?

Walmart Global Tech works on cutting-edge technologies that create unique and innovative experiences for our associates, customers and members across Walmart, Sam's Club and Walmart International. We solve the myriad challenges that every retailer faces, whether it's dealing with suppliers, distribution, ordering, innovation, shopping experience, or after-sales service. The one commonality across all of these is that they all benefit from technology.

You oversee both data engineering and machine learning. Any lessons for others in terms of the benefits of structuring the organization this way? This must give you a unique vantage point on data-centric AI, per your recent blog.

In other companies, these functions are often separated into different organizations. My own experience is that if we can combine the different roles, particularly the data scientists, research scientists, data engineers, machine learning engineers, and software engineers, in one team, it can speed up product development. Since most of the domains require specialized knowledge, combining many different roles into one team can also help bring new innovative ideas to the product.

How do you think about the build-versus-buy calculus when it comes to ML platforms?

For MLOps platforms, which is obviously a new area, it varies; it's not as simple as saying we have one tech stack that we follow every time. What we do is approach these decisions based on requirements, then we make sure that each component will be easy to replace or rebuild, so down the road we don't have to rebuild the whole thing just because one component no longer fits our needs.

What types of models is Walmart Global Tech deploying into production and why?

It depends on the area, requirements, and end customers. At the outset, I always start with the question: do we need machine learning at all to solve this problem, or is there an easier way to fix it that we should implement instead? If machine learning is necessary, then it's often much easier and better to pick a simple model, like a regression or linear regression, to ensure good performance. We leverage those types of models for base cases. When there is a good existing model to use, we often will adapt or use it, like BERT for natural language processing.

I want to emphasize that for the model itself, trust is critical. Not everyone will trust the model. This is why I said at the beginning that the simplest is often the best. Not using machine learning, or, if you do need machine learning, leveraging a model that offers easier explanations, like a linear regression model, is preferable. The black-box nature of BERT or deep learning makes the task of helping people or customers understand the model more difficult.

Ultimately, if customers or people don't trust the model, it's useless. So building a process to explain the model is critical. It's also important to troubleshoot the model itself.

Sounds like model explainability and being able to trust a model's decisions are really important to your team?

Yes, it's important not only for the model but also for the product and its customers. If you can explain a model to a customer or a user, you can explain it to yourself too, so it's a win-win that way as well. No one likes a black box.

What is your strategy for model monitoring and model performance management?

Since change is always happening, monitoring is really the key to successful MLOps. Whether it's from a data engineering or machine learning engineering perspective, we always task the role with monitoring all processes across the pipeline or infrastructure. The data engineer, for instance, will look at whether there are data quality issues, data mismatches, missingness, and more.

For machine learning, monitoring spans both the data and the model itself. We look at data drift, concept drift, and performance across key metrics (e.g., AUC) to get to the bottom of issues and inform the retraining process. There is a lot you can track, so having access to key metrics for root cause analysis and getting notifications and alerts really helps.
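For illustration only (this is not Walmart's monitoring stack), a minimal data-drift check might compare a feature's live distribution against its training baseline with a two-sample test; the synthetic data and alert threshold below are assumptions.

```python
# Minimal data-drift check on synthetic data (illustrative only).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # baseline from training data (assumed)
live_feature = rng.normal(loc=0.4, scale=1.0, size=1000)   # shifted production data (assumed)

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # alert threshold is an assumption; tune per use case
    print(f"possible data drift (KS={stat:.3f}, p={p_value:.1e}); review for retraining")
else:
    print("no significant drift detected")
```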

This must be a really interesting time at Walmart given record demand, supply chain challenges, inflation and a lot more. Have you experienced any interesting issues with production models reacting to a new environment?

Definitely, yes. The one constant is that the data are always changing. A model trained on social network data, for example, may see broad impacts to model performance when the social network data drastically changes or disappears overnight. Issues like these are very common.

Half of data scientists we recently surveyed (50.3%) say that their business counterparts do not understand machine learning. How have you successfully navigated this hurdle to scale your ML practice?

This kind of situation is common in the industry. As discussed, some models are black boxes and few trust black boxes that are unopened, which is why explainability is so important. If your customers look at it and understand why a model made a particular decision, trust will grow over time.

For models that directly impact customers, how do you incorporate customer feedback into your models?

Customer feedback is so important. The data might change or the concept might change, but if customer feedback is part of the ML process then we can use that customer data to retrain the model in near real-time and have better model performance and a better ability to predict reality as a result. Having that human-in-the-loop process to check things can help ensure that models are relevant and performing well.

What are your favorite and least favorite parts of your current role?

I love data and I love playing with data, so that's really one of my favorite aspects of my current role. Incidentally, it's also one of the more difficult parts of the job, because you need to know the data well before you fit it into a model. In terms of machine learning, one of the hardest things is knowing how to pick the right approach, not just for the model, not just for the data, but also for the tech stack, scalability, and everything about the ML pipeline.


Data technology comes to the construction industry – VentureBeat

To further strengthen our commitment to providing industry-leading coverage of data technology, VentureBeat is excited to welcome Andrew Brust and Tony Baer as regular contributors. Watch for their articles in the Data Pipeline.

It's no secret that data is changing the world as we know it. Like every other industry, more data from more sources are coming to architecture, engineering and construction (AEC).


Toric is using data and analytics to transform the AEC industry. They provide real-time insights to help AEC firms and owners, as well as operators, reduce waste and increase sustainability. This data platform enables construction professionals to make better, data-driven decisions at a much faster pace than what has previously been possible.

I worked in the construction industry from 2007 to 2010. Then, only a few of the most forward-thinking firms were using data and technology to improve business processes.

Over the past 10 years, AEC firms hired more people who grew up with technology and were comfortable using technology in their jobs.

COVID-19 has further awakened a sleeping giant. Leading AEC firms and their project managers have realized the critical need to capture data from job sites remotely.

Every construction firm is at a different place in its use of data and digitalization. The use of ERP, scheduling, project management, BIM, drones, scans and photography for open space varies by company. More data and technology solutions for the industry are being introduced every year. Procore led the way two decades ago. Today, Turner Construction is using robots to handle dangerous tasks on job sites.

Most of the AEC companies using data today are doing so with old data. When data is collected it needs to be cleaned, structured and analyzed to improve safety, quality, productivity and profitability. Data value and accuracy decrease over time. For instance, a photo of an active job site on Monday will no longer be accurate on Tuesday.

Ad-hoc processes result in a number of challenges for AEC firms:

Toric is at the forefront of this data-driven transformation. The company is working to address the aforementioned challenges by providing real-time data for real-time decision-making to reduce errors, mistakes, costs and risks.

The data landscape is chaotic. There is more data, more sources, tools and solutions including artificial intelligence/machine learning, data mining, data science, predictive and prescriptive analytics, et al. Toric has created a no-code data platform to take advantage of all these tools.

The platform integrates, transforms, visualizes and automates data across projects. Data is then consolidated in one workspace for analysis. They have more than 100 tools available to clean, transform and augment data. Additionally, the data is updated in real-time, so project managers can make immediate, well-informed decisions to properly equip the project to move forward.

The construction analytics platform helps deliver more accurate bids, tracks progress and improves digital delivery by referencing all past project data. It integrates Procore, Autodesk, ERP systems and spreadsheets. Estimation and project tracking are all analytics-driven. Historical data is leveraged for data applications.

Architecture and engineering firms can build an analytics model for their BIM design process. They do this by tapping into BIM models and other data sources to support data-driven design, QA, quantification and change management. Users can perform continuous data modeling, track design to project requirements and create data apps to improve customer experience.

Owners and operators use Toric to track key building metrics during design, integrate and compare bids against design and create a complete data lifecycle for digital twins.

The average AEC firm with 100 projects is adding 1PB of data every year. Much of this data is unstructured. It's expensive and hard to find data analysts and scientists to get value from the data, in addition to capturing, cleaning and integrating it.

Suffolk Construction is a 40-year-old, $3.9 billion firm based in Boston. It's one of the most mature firms regarding its data strategy, with 30 data scientists on staff. Suffolk has integrated three of its 20 systems with Toric's platform, replacing their home-grown systems for data ingestion and data capture.

Hensel Phelps, a 2,000-person construction management firm founded in 1937, has one data analyst on staff. With Toric, their data analyst will be able to automate thousands of projects using just one tool.

Data quality, data cataloging and real-time data analysis for the AEC industry did not exist three years ago. Advances in the industry will see data leaders making significant impacts on several fronts. Environmental concerns are a key issue today and can only be addressed with data. More sophisticated and precise proposals will result in reduced costs and a stronger competitive position as data tools are used to evaluate and project costs.

Conscientious owner/operators who own a lot of real estate care about the health and environmental compatibility of their properties. They're requiring designers, engineers and contractors to know the amount of carbon that goes into constructing a new building.

Embodied carbon is a major issue for owners that want to be environmentally conscientious. They want to see the analysis of the carbon footprint and to know if it's more efficient to build their new or reclaimed building with steel or concrete. They want to know the environmental impact of a design change.

Previously, information about embodied carbon was subjective. Today, it's objective, and it's incumbent on AEC firms to have a firm grasp of their data to be competitive.



APAC Big Data and Analytics Spending to Grow 19% in 2022 – CDOTrends

Asia Pacific spending on big data and analytics (BDA) solutions will grow by 19% in 2022 and rise 1.6 times to USD 53.3 billion by 2025, says analyst firm IDC. The prediction was made in the latest version of IDC's Worldwide Big Data and Analytics Spending Guide, which maps global spending across 19 industries.

According to the analyst firm, investments are driven by the evolving needs of enterprises to gain operational efficiency and operational resilience. Businesses are increasing their operational resilience with the ability to provide a real-time response to external market disruptions such as the pandemic, supply chain vulnerabilities, or rapidly evolving customer needs.

Extracting value from data

"Organizations need to have a clear strategy to extract value from their data asset and enable evidence-based decision making, data science explorations, and actionable insight delivery at scale," says IDC associate research director Jessie Danqing Cai.

Some key findings include:

It is worth noting that while on-premises deployments captured the largest revenue share, at $6.0 billion, in 2022, this is expected to change in the future. Specifically, public cloud services are expected to capture the lion's share of the spending at 52.6% in 2025, at USD 9.17 billion.

"Organizations' priority areas for investment in data and analytics vary by industry," says Abhik Sarkar, a market analyst at IDC.

"Finance and government sectors prioritize automation and cybersecurity, while retail and wholesale focus more on customer experience, and manufacturing focuses on worker productivity/quality. Hence, it is imperative that vendors target outcomes of their solutions by industries," said Sarkar.

The full report can be accessed here (Paid subscription).



Sidetrade reveals B2B global payment trends from its unique Data Lake with an industry-first predictive payment intelligence map of 20.7 million…


Data aggregated and anonymized from over 593 million B2B transactions worth $4.6 trillion, from 20.7 million buying companies worldwide

Global payment trends over the last three years revealed:

Economists at Cebr say increasing costs and interest rate hikes are making late payments more likely

Sidetrade, the global AI-powered Order-to-Cash platform, has today revealed global payment trends from the Sidetrade Data Lake. Never before seen in the industry, these global payment trends, based on the payment behavior of 20.7 million buying companies worldwide, enable companies, governments, and analysts to make data-driven decisions in a time of uncertainty.

Developed by Sidetrade's best-in-class engineers and leading data scientists since 2015, the Sidetrade Data Lake is the heart of Sidetrade's innovative Order-to-Cash AI solutions to enhance cash optimization. It is used by data-driven companies (i.e. suppliers) to understand and predict buyer payment behaviors.

A unique tool within the Order-to-Cash market, it is a repository of past B2B payment transaction data. It comprises more than 593 million payment experiences worth a total of $4.6 trillion, processed over the last three years from 20.7 million buyers worldwide. Today's launch provides users with a free and interactive map of aggregated and anonymized insights from the Sidetrade Data Lake.

A resource for companies, governments, and analysts

The publication of global payment trends from the Sidetrade Data Lake is an industry first and comes as global uncertainties and increasing inflationary pressures place a greater emphasis on cash and credit management.

Updated on a quarterly basis, the global payment trends give users access to never-before-available insights into how buyers pay their suppliers depending on country and industry, enabling them to better monitor customers' payment trends across the world, predict payment behavior and decrease collection time. This provides invaluable support for effective decision-making and payment negotiations, thereby cutting days sales outstanding (DSO).


Sidetrade CEO, Olivier Novasque, commented: "Protecting and accelerating cash flow has never been so critical as bad debt risk and inflation are dramatically increasing everywhere. This can best be achieved if companies harness enough customer behavior data and become more forward-looking. Data science and artificial intelligence are essential to fighting against late payment. Since each buyer's payment behavior is different, a dedicated and automated collection strategy is needed to be efficient.

"We're delighted today to offer a first look into the Sidetrade Data Lake and bring the power of true AI to companies at a time when they need it the most."

Innovation and digital transformation to support businesses

The Sidetrade Data Lake harnesses the power of Sidetrade's next-generation AI technology (aka Aimie) to predict the likelihood that a customer will pay late. Aimie then recommends a customer-specific collection strategy based on buyer payment patterns and other data insights.

Sidetrade Chief Product Officer, Rob Harvey, added: "In B2B companies, every CFO wants industry benchmarking data on their customer payments; they thrive on data. But how well do finance teams actually know their customers? Do they know who is likely to pay them late? Which industries wait the longest to get paid? Now, CFOs can finally understand their customer behaviors, better understand their competition, and make smarter decisions; much like Tesla uses driver data to make innovative changes, enhancing and optimizing the driver experience.

"With the Sidetrade Data Lake, we unlock enterprise data and simplify access to payment data management. At Sidetrade, we are committed to saving time for businesses, exploring smart efficient processes in the Order-to-Cash cycle, assisting in decision-making, and sharing information in real-time. Sidetrade is the Waze for business payments!"

The Sidetrade Data Lake is the result of years of hard work of collecting, cleaning up, matching, and enriching huge volumes of data. Transactions between Sidetrade's customers (the suppliers) and their buyers within the Sidetrade Data Lake are aggregated, anonymized, and layered with algorithms to determine a predictive payment metric for each analyzed buyer company. This is the engine behind Sidetrade's AI technology. It takes decision-making support to the next level, providing data-based recommendations on the most effective collection, dispute resolution and risk control actions.
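As a generic sketch of the kind of predictive payment metric described above (not Sidetrade's Aimie or its actual features), a classifier can be trained on historical invoices to score the probability that a new invoice will be paid late; the features and synthetic labels below are assumptions.

```python
# Generic late-payment probability sketch on synthetic invoice data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000
amount = rng.lognormal(mean=8, sigma=1, size=n)      # invoice amount (assumed feature)
terms = rng.choice([14, 30, 45, 60], size=n)         # agreed payment terms in days (assumed)
buyer_late_rate = rng.beta(2, 5, size=n)             # buyer's historical late-payment rate (assumed)
X = np.column_stack([np.log(amount), terms, buyer_late_rate])
y = (buyer_late_rate + rng.normal(0, 0.15, n) > 0.45).astype(int)  # synthetic "paid late" label

model = LogisticRegression().fit(X, y)
new_invoice = np.array([[np.log(12000.0), 30, 0.6]])
print("estimated probability of late payment:",
      round(float(model.predict_proba(new_invoice)[0, 1]), 2))
```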

Mark Sheldon, Chief Technology Officer of Sidetrade: "We are proud to offer a first look at our Data Lake today and showcase the depth of our data, technology and true AI capabilities.

"What is particularly exciting about the Sidetrade Data Lake technology is the fact we've not only succeeded in getting large amounts of data cleansed, matched and into one place, but that we've also layered that with next-generation AI and predictive capabilities, and are able to measure its value. This latter part in particular is something that many companies often struggle with, and it is why we're proud to be able to clearly demonstrate the value to our customers."

Best and worst payment behaviors around the world revealed

Leveraging its Data Lake, Sidetrade reveals the best and worst markets and industries for late payments over the last three years.

A global snapshot

On average, companies around the world pay supplier invoices 21 days late, taking the total time between an invoice being issued and getting paid to 53 days on average.

Scandinavian businesses tend to pay the quickest overall, in particular Sweden, with a delay of just 7 days, well below the global average.

Getting back to pre-pandemic unpaid invoice ratios

Over two years on from the start of the pandemic, the US, France, and Italy still haven't fully reverted to their pre-pandemic unpaid invoice rates, according to the Sidetrade Unpaid Invoice Tracker. Conversely, Belgium, Spain, the UK, and the Netherlands now have better unpaid invoice ratios than before the start of the pandemic.

France holds the title for the worst amongst the seven countries tracked, with 25% of the value of all overdue invoices as of April 16, 2022.

27-day delays in the US

US companies rank tenth globally for longest payment delays, with a mean of 27 days.

The industry with the shortest payment delays in the US is Manufacturing: 22 days. The worst industries are Financial Services & Insurance, and Leisure & Hospitality: 30 and 31 days, respectively.

Payment disparities in Europe

Across Europe, there are significant disparities in payment delays from country to country, ranging from 7 days (Sweden) to 26 days late (Ireland).

On average, it takes companies in Europe 45 days to collect payment, a mean delay of 16 days.

UK & Ireland exceeding European average for payment terms and delays

Companies in the UK and Ireland are among the top four worst in Europe for payment delays, with mean delays of 21 and 26 days, respectively. This is despite both countries having longer payment terms (34 days in the UK, 31 days in Ireland) than the European average of 29 days.

In the UK, the industries with the shortest payment delays are Utilities & Energy at 19 days, and Retail & CPG at 20 days. Industries in the UK with the longest payment delays are Financial Services & Insurance at 25 days, and Public Services, HR Services and Transportation & Logistics at 24 days.

Looking at the Sidetrade Unpaid Invoice Tracker, almost all UK industries appear to have recovered from the pandemic in terms of late payments, with most showing pre-pandemic unpaid invoice rates. The exceptions are Public Services, Other Services and the Food industry. The latter has risen from 14% at pre-pandemic levels to 21% of invoice values being deemed late (10+ days after their due date) as of 16 April 2022.

France is amongst the worst-performing European countries for late payments. Its sectors with the shortest payment delays are Retail & CPG and Manufacturing, both at 17 days. The worst industries in France for late payments are Leisure & Hospitality and Public Services, at 24 days. France still hasn't recovered from the Covid-19 crisis when it comes to late payments. The rate of unpaid invoices across every industry in France is still above pre-pandemic levels, at 22.7%.

Josie Dent, Managing Economist at the Centre for Economics and Business Research (Cebr): After the hit businesses took during the pandemic and lockdowns, which caused a significant rise in late payments, the global economy is now experiencing a new source of pressure. Across the world, inflation is accelerating, driven by energy prices and supply chain disruptions. While both of these factors were already at play at the start of the year, the conflict in Ukraine and sanctions on Russia, as well as recent lockdowns in key Chinese cities, have further added to the inflationary environment and the outlook for price growth. These increasing costs for fuel, energy and raw materials add to businesses' financial strain at a time when many are still recovering from the pandemic, making late payments more likely.

Therefore, as businesses' bills come due, often at higher prices than previously, many will find themselves having to prioritize which they can afford to pay immediately, and which will need to wait.

In the US, consumer price inflation rose to 8.5% in March, up from just 2.6% a year earlier. In the meantime, the average share of unpaid invoices by value rose from 14.5% on 31 March 2021 to 17.5% on the same day in 2022. Businesses will also be affected by the Federal Reserve's interest rate rises expected this year. In March, the Fed voted to increase interest rates for the first time since 2018, with projections pointing to a further six rate rises in 2022 alone, raising costs for those with debt.

Businesses in other countries are also facing rising costs and interest rates. The share of late payments by value stands at 24.7% in France in the latest data (16th April), compared to 21.0% at the start of the year, while inflation picked up to 5.1% in March.

The Sidetrade Data Lake: Data science analysis
20.7 million buyer companies
593 million+ invoices processed in the last three years
$4.5 trillion worth of invoices

Methodology: Daily analysis of the data hosted in the Sidetrade Cloud, leveraging machine learning algorithms

Reference period: 1-Jan-2019 through 31-Dec-2021

How the Sidetrade Data Lake works

The Sidetrade Data Lake leverages data collected from multi-tenant SaaS software. The data within the Sidetrade Data Lake has gone through a complex and layered process of matching, crawling, enriching and multi-step machine learning. Highly effective machine learning is made possible by the huge volume of supplier data within Sidetrade's cloud-based Data Lake. Sidetrade's AI technology (aka Aimie) utilizes a combination of machine learning and algorithms to offer recommendations on cutting DSO. Because the system and models are deployed in real-world scenarios, Sidetrade is uniquely able to accurately prove and monitor the Data Lake's value and performance.

This announcement comes shortly after Sidetrade's recognition as a Leader in the first Gartner Magic Quadrant for Integrated Invoice-to-Cash Applications.

Investor relations: Christelle Dhrif, +33 6 10 46 72 00, cdhrif@sidetrade.com

Media relations: Rebecca Parlby, +44 7824 505 584, bparlby@sidetrade.com

About Sidetrade (www.sidetrade.com)

Sidetrade (Euronext Growth: ALBFR.PA) provides a SaaS platform dedicated to securing and accelerating cash flow. Sidetrade's next-generation AI, nicknamed Aimie, analyzes $4.6 trillion worth of B2B payment transactions daily in the Sidetrade Cloud to predict customer payment behavior and attrition risk for more than 21 million companies worldwide. Aimie recommends the best cash collection strategies, intelligently automates actions on the Order-to-Cash process, and dematerializes customer transactions to enhance productivity, performance and working capital management. Sidetrade has a global reach, with 300 talented employees based in Paris, London, Birmingham, Dublin, Houston, and Calgary, serving global businesses in more than 85 countries. Amongst them: Tech Data, KPMG, Nespresso, Hearst, Expedia, Manpower, Securitas, Randstad, Engie, Veolia, Inmarsat, and Bidfood.

For further information, visit us at http://www.sidetrade.com and follow us on Twitter @Sidetrade.

In the event of any discrepancy between the French and English versions of this press release, only the English version is to be considered.



Dotmatics Biology Solution for Antibody Discovery Streamlines Data Collection and Workflows to Accelerate Antibody Research and Therapies – Yahoo…

BOSTON, May 03, 2022--(BUSINESS WIRE)--Dotmatics, a leader in R&D scientific software connecting science, data, and decision-making, today announced the release of its Biology Solution for Antibody Discovery, a unique end-to-end software and data management package. The antibody workflow combines Dotmatics' enterprise scientific platform with advanced applications software used routinely in antibody discovery laboratories, including Geneious, SnapGene, and GraphPad Prism. The fully integrated solution will streamline data collection and processing across the antibody discovery workflow, facilitating a pathway to more effective and efficient discovery, and accelerating progress towards potentially life-changing therapies for patients.

"The launch of the Dotmatics Biology Solution for Antibody Discovery is the industrys first true end-to-end solution for antibody workflows and represents a huge step forward for innovation in antibody discovery," said Mike McKee, president of Dotmatics. "Whether its R&D for biology or chemistry or formulations, Dotmatics helps scientists and their organizations harmonize data science with the science data."

Finding research data across a siloed enterprise and local systems remains a critical challenge for drug discovery researchers. Scientists are forced to compromise either speed or accuracy. With many disconnected workflows the outcome is often manual data handling, leading to risks of data errors and inefficiencies.

Dotmatics' antibody workflow supports the breadth of antibody discovery by addressing these issues, giving organizations a cloud-based data management platform that is centralized and fully integrated with proven scientific applications and productivity tools that end-users trust. Critically, it automates manual, error-prone data tasks and centralizes data management, helping organizations make smarter decisions and ultimately accelerating antibody research.


Growth in antibody research for the purpose of fighting chronic diseases with antibody therapies is highlighted by the projected investment in the market. A 2022 research report by Future Market Insights projected that the antibody therapy market is expected to reach $700 billion by 2032, up from $186 billion. Another report by the Insight Partners projects that the global monoclonal antibody market will reach $243 billion by 2028 from $111 billion in 2021.

"As scientists pursue critical antibody therapies to fight disease, Dotmatics is connecting them with their data in ways that accelerate innovation and help make the world a healthier, cleaner, safer place to live," said Michael Swartz, senior vice president of enterprise product at Dotmatics. "We are at the forefront of a digital transformation in the scientific R&D industry with an emphasis on providing researchers end-to-end solutions with connected data and workflows. They should be able to use the tools they know and love, while still centralizing data in order to satisfy the needs of the organization."

The Dotmatics antibody workflow incorporates FAIR data practices and provides a uniquely broad and deep solution, traversing early candidate selection through cycles of innovation to deliver highly refined antibody candidates ready to progress into animal trials and translational medicine.

For more information about Dotmatics and its new Biology Solution for Antibody Discovery, visit the newly launched and rebranded Dotmatics.com or come along to booth 612 at the Bio-IT World Conference & Expo in Boston, MA, May 3-5. Additionally, Andrew LeBeau, associate vice president of product integrations at Dotmatics, will be presenting at 4:10 pm ET on Wednesday, May 4, in room 209, in the track "Intelligent Workflow Automation Through Digital Solutions," where attendees can see firsthand Dotmatics' new antibody workflow.

About Dotmatics

Dotmatics is a leader in R&D scientific software connecting science, data, and decision-making. Its enterprise R&D platform and scientists' favorite applications drive efficiency and accelerate innovation. More than 2 million scientists and 10,000 customers trust Dotmatics to help them create a healthier, cleaner, safer world. Dotmatics is a global team of more than 800 employees dedicated to supporting its customers in over 180 countries. The company's principal office is in Boston, with 14 offices and R&D teams located around the world.

Dotmatics is backed by Insight Partners, a leading global venture capital and private equity firm investing in high-growth technology and software scaleup companies. Learn more about Dotmatics, its platform, and applications including GraphPad Prism, Geneious, SnapGene, Protein Metrics, and LabArchives at https://dotmatics.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20220503005721/en/

Contacts

Zyme Communications: Sarah Jeffery, sarah.jeffery@zymecommunications.com


Anaconda Brings Python to the Browser With PyScript – CDOTrends

Anaconda over the weekend took the wraps off its PyScript project, which allows data scientists and programmers to embed Python programs in HTML pages and execute them within the browser.

Traditionally, Python runs on an application server and returns results as HTML that is rendered by the browser. An alternative is a development platform such as the one Anaconda offers.

Python from a browser

The new offering announced at PyCon US 2022 allows Python to be run within the Web browser for greater flexibility.

"PyScript is a framework that allows users to create rich Python applications in the browser using a mix of Python with standard HTML. PyScript aims to give users a first-class programming language that has consistent styling rules, is more expressive, and is easier to learn," explained Fabio Pliger, principal software architect at Anaconda, in a blog post.

PyScript acts as a wrapper around the Pyodide project, which loads the CPython interpreter as a WebAssembly browser module, explained Bleeping Computer. This means that users no longer need to use a different syntax but can run Python code pasted into a local HTML page.
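As a rough sketch of the kind of snippet this enables, the Python below is the sort of code that could be pasted inside a py-script block in an HTML page. The surrounding HTML scaffolding that loads PyScript is omitted, and the assumption that print output is rendered into the page is based on the description above rather than PyScript's documentation.

```python
# Python intended to be embedded in an HTML page via a <py-script> block.
# The HTML that loads PyScript is omitted; rendering of print() output into
# the page is an assumption based on the article's description.
import statistics

samples = [3.2, 4.1, 5.0, 4.7, 3.9]  # data "processed in the browser"
summary = {
    "mean": round(statistics.mean(samples), 2),
    "stdev": round(statistics.stdev(samples), 2),
}
print("summary:", summary)  # appears in the page when embedded, or in the console when run locally
```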

With PyScript developers can:

Developers can also load a code editor directly using the py-repl tag to input code and execute it immediately, with no server-side configuration needed.

To be clear, PyScript has no mechanism yet for building platform binaries. However, it is a compelling offering for data scientists looking to quickly test out a Python code snippet or process some data on a machine lacking their normal assortment of data science tools.

"[We] believed that the browser would be a great platform that we could adapt to achieve our goals of providing a reliable, accessible, and fun experience for PyScript users. The browser works everywhere (from laptops, tablets, to phones), and is secure, powerful, and stable," wrote Pliger.

Examples of PyScript in action can be found here, while PyScript itself can be downloaded from GitHub here.



Hammerspace Joins Global Alliance for Genomics and Health – PR Web

"Scientific research is evolving at a rapid pace. More than ever, scientists and IT teams need to rely on technology and data science to be able to find, access and share data across their research ecosystems." ~ Ari Berman, CEO of BioTeam

BOSTON and SAN MATEO, Calif. (PRWEB) May 03, 2022

Hammerspace, the pioneer of the Global Data Environment, today announced that it has joined the Global Alliance for Genomics and Health. Hammerspace will partner with more than 600 of the world's leading healthcare, research, patient advocacy, life science and information technology organizations to create the frameworks and standards for sharing genomic and health-related data.

Genomics, microscopy, imaging and precision medicine communities have long faced challenges around data management and the infrastructure needed to support their research. These challenges have grown by orders of magnitude in complexity as more powerful and higher-resolution instruments generate data at unprecedented rates. The computational resources required for discovery continue to increase, and both applications and people are scattered worldwide.

The full value of scientific data can only be achieved when the data can be used by the applications and users to derive insights from it. However, tools to efficiently make this data available to those who need it in decentralized environments have lagged.

"Scientific research is evolving at a rapid pace. Devices are generating massive volumes of increasingly diverse data, analytics and simulation capabilities continue to develop, and data is increasingly distributed globally across labs, the cloud and HPC centers," said Ari Berman, CEO of BioTeam. "More than ever, scientists and IT teams need to rely on technology and data science to be able to find, access and share data across their research ecosystems."

Hammerspace brings the ability for decentralized workflows to come together within a single Global Data Environment to accelerate time to insights, collaboration and data sharing. With Hammerspace, organizations can meet all of their data storage, data management and data protection requirements within a single data environment.

Hammerspace is committed to working together with member organizations to help bring the vision of the Global Alliance for Genomics and Health to fruition.

About the Global Alliance for Genomics and Health

The Global Alliance for Genomics and Health (GA4GH) is an international, nonprofit alliance formed in 2013 to accelerate the potential of research and medicine to advance human health. Bringing together 600+ leading organizations working in healthcare, research, patient advocacy, life science, and information technology, the GA4GH community is working together to create frameworks and standards to enable the responsible, voluntary, and secure sharing of genomic and health-related data. All of our work builds upon the Framework for Responsible Sharing of Genomic and Health-Related Data.


About Hammerspace

Hammerspace delivers a Global Data Environment that spans across on-prem data centers and public cloud infrastructure. With origins in Linux, NFS, open standards, flash and deep file system and data management technology leadership, Hammerspace delivers the world's first and only solution to connect global users with their data and applications, on any existing data center infrastructure or public cloud services.

Press Contact Details: IGNITE Consulting, on behalf of Hammerspace. Linda Dellett, 303-439-9398; Kathleen Sullivan, 303-439-9365; Hammerspace@igniteconsultinginc.com



Sustainability and the cloud – FCW.com

Consolidating federal data centers and encouraging cloud computing adoption is supposed to support sustainability, but according to a government watchdog, little is known about the impact of these policies on federal energy consumption. Acquisition leaders are trying to inject sustainability policies into government buying practices and will get advice from a new federal advisory committee that is being established to look at issues of climate and sustainability in federal procurement.

"When we begin to talk about things such as transitioning to the cloud, not only are we talking about the capabilities, the scalability, we also are beginning to integrate the importance of sustainability into that," Laura Stanton, the General Services Administration's Assistant commissioner for the Information Technology Category, said at an industry event on Tuesday.

"The pandemic sped up the public sector adoption of the cloud solutions and the as-a-service models," Stanton said at the GITEC Emerging Technology conference on Tuesday. "This greater reliance on industry-provided infrastructure through cloud and other as-a-service solutions consolidates and shrinks the footprint of the government data centers, also a key goal, and reduces the government's consumption of raw materials and energy."

The Biden administration laid down sustainability goals in a 2021 executive order that committed the government to achieving net-zero emissions for federal procurement and government operations by 2050.

When asked about the relative impact of cloud versus traditional data centers, a GSA spokesperson told FCW that "properly managed cloud environments offer the greatest reduction of energy consumption."

The trend towards cloud computing has resulted in "significantly larger data centers," according to a 2016 report by the Department of Energy, that are more efficient than traditional data centers in terms of server utilization and power usage.

Cloud environments also offer tools to monitor usage and adjust among over- and under-used resources "without any capital expense," the GSA spokesperson explained.

The current energy usage of data centers still owned by agencies isn't well-known, Carol Harris, director of GAO's IT and cybersecurity team, told FCW in an email.

"There is very little data on federal data center energy usage," she said. "Based on current OMB guidance, agencies are required to report average electricity usage and average IT electricity usage for agency-owned data centers that have installed electricity metering."

But only about 22% of data centers have metering, "meaning that electricity usage is unknown for the other 78%," she said.

Around 90% of federal data centers fall under 500kW or less in terms of their energy usage, meaning that "these facilities are too small to implement modern technology to monitor energy usage, unable to establish renewable energy sources, and unable to dictate to energy providers what type of power is being provided," according to GSA.

GSA is implementing environmental requirements into major contracts and into its strategic solutions vehicle for computers and laptops, said Stanton. She also pointed to efforts at GSA to work with the Department of Defense through Defense Enterprise Office Solutions, an effort she says "enables [DoD] to reduce their physical footprint of on-premise infrastructure."

Soon the agency will also be getting input from a new Government Acquisition Policy Federal Advisory Committee focused on sustainability, announced last month. According to GSA, the committee will advise the agency on acquisition tools and authorities to target sustainability initiatives.

"We'll be continuing those conversations and looking at how we continue to both understand the opportunities that sustainability brings, and the opportunity to integrate those into future acquisitions," Stanton said. "We will be working with our suppliers to address climate change and risk in the delivery of critical products and services to the federal customer base."


To Win the Next War, the Pentagon Needs Nerds – WIRED

When Russia invaded Ukraine, the US Department of Defense turned to a team of machine learning and artificial intelligence experts to make sense of an avalanche of information about the conflict.

"We have surged data scientists forward," Deputy Secretary of Defense Kathleen Hicks told WIRED in a recent interview. These tech experts crafted code and machine learning algorithms, creating systems that are "especially valuable for synthesizing the complex logistics picture," she said.

Due to the sensitive nature of operations in Ukraine, Hicks says she cannot provide details of what the data team has done. But Hicks says this helps prove a point that she and others have been making within the Pentagon for some time: that technology is fundamentally changing the nature of war, and the US needs to adapt in order to maintain its edge.

"I like to say that bits can be as important as bullets," Hicks says, in reference to the importance of software, data, and machine learning. It isn't only that technology is advancing more rapidly and in different ways; the US also faces fresh international competition in emerging areas like AI. Russia might be less of a technological threat, but China has emerged as a formidable new near-peer rival. "We know by the Chinese government's statements in writing that they're looking very much to advance on the AI front," Hicks says.

During the ongoing conflict in Ukraine, AI algorithms have been used to transcribe and interpret Russian radio chatter, and to identify Russian individuals in videos posted on social media, using facial recognition tech. Low-cost drones that use off-the-shelf algorithms to sense and navigate are also proving a potent new weapon against more conventional systems and strategies. An unprecedented hacking campaign against Russia shows how cybersecurity skills have become a potent weapon against a nation-state adversary. New weapons can now be developed at breakneck speed, too, as was shown earlier this month when the US said it had developed a custom drone specifically for use by Ukrainian forces. By contrast, the US Air Force's latest fighter jet, the F-35, has been in development for over 20 years, at an estimated lifetime cost of $1.6 trillion.

Although the US is helping Ukraine punch above its weight by providing financial aid, conventional weapons, and new technologies, there are those, inside and outside of the Pentagon, who worry that the US is ill-equipped to adapt to the challenges presented by war in the future.

"Every large company has the same problem," says Preston Dunlap, who resigned last week as chief architect of the Department of the Air Force, a role that involved modernizing technology development and acquisition. Dunlap compares the situation to the way big successful businesses can be disrupted by technological change and more nimble competitors, a phenomenon that the business school professor Clayton Christensen called the "innovator's dilemma."

Dunlap penned an open resignation letter in which he recommended steps that the Department of Defense should take to embrace a more rapid, experimental, and technology-focused culture. He says that, just like a business faced with technological disruption and more nimble competitors, the US military struggles to change direction because it encompasses so many people, systems, and ingrained ways of doing things. He suggests that advocates for change, such as Hicks, can only do so much. "I am concerned about operators having to go into some kind of contingency [conflict] without the available technology," he says. "That's just not a place I want us to be."
