Industry 4.0 And IT/OT Convergence: Crossing The Digital Lines To Success – Forbes

If you've been around Industry 4.0 for a while, or even read about it, you know it embraces new technologies like IIoT, cloud computing, edge computing, artificial intelligence, machine learning and a whole lot more. People have defined Industry 4.0 in terms of orchestration and optimization, physical and digital, and manufacturing and supply chain. But most of these definitions are so complicated that Industry 4.0 itself is not all that well-defined.

My simple definition for Industry 4.0 is that it uses digital technology to solve business problems. But maybe that's a little too simple. So how about this? Industry 4.0 uses digital technology to transform the business for the future, creating new and better ways of doing business.

The best part of this definition is that it's open-ended. The new and better ways of doing business could be just about anything. New capabilities. New processes. Reducing costs. Empowering teams. Improving decision-making. Creating new and better ways of serving customers. Identifying new things that haven't been thought about yet. It's all Industry 4.0!

Another topic regularly mentioned alongside Industry 4.0 is IT/OT convergence. IT (information technology) and OT (operational technology) have both been around for a while. Both trace their lineage back to the space program of the '50s, '60s and '70s.

IT and OT have traditionally been at opposite ends of the technology spectrum. IT has been all about big computers, big databases and processing lots of transactions. In manufacturing, think of the computers used to process sales orders.

OT has been about small and fast computers, operating in real time to control machines and equipment. Think of the computers on the manufacturing lines that run the equipment.

For the longest time, all we've been technically able to do is connect IT and OT systems together. It started in the early '90s with simple connectivity and data exchanges between ERP systems (IT) and automation and control systems (OT). Since then, these ideas of connectivity and data exchange have been the be-all and end-all of IT and OT convergence.

Interestingly enough, as technology has improved, the two ends of the spectrum are converging. The lines between IT and OT are no longer distinct, and, perhaps more importantly, no longer even matter.

Networks: Both IT and OT systems use similar networks with the same types of switches, routers, hubs, wireless access points, firewalls and such.

Computers: Whether you use end-user devices, servers, displays, personal computers, mobile devices or something else, they're pretty much the same for both IT and OT. Only some specialty computers, like programmable logic controllers (PLCs) and distributed control systems (DCSs), are unique to the OT world.

Identification: Whether you use barcodes, radio-frequency identification (RFID), magnetic stripes or other tracking devices, the same technologies are ubiquitous across both IT and OT.

Mobile devices: For tablets, pads or smartphones, the technology is the same for both worlds.

Software: For operating systems, virtual machine (VM) software, database software and such, it's also the same.

But beyond this underlying technology, which in many cases already has converged, the good news is that the landscape from a user's perspective is also converging. New infrastructures, new platforms, new standards and new plug-and-play options are all coming together that are completely transforming (and eliminating) the differences between IT and OT technologies.

Roles: Solutions are based on roles in the organization. Many roles cross between the traditional boundaries of IT and OT, making the distinctions obsolete and requiring solutions to provide the capabilities needed for the role regardless of artificial IT and OT boundaries.

Apps: Apps that perform specific functions, which can be run anytime, anywhere, by anybody, also make the distinctions between IT and OT obsolete. They perform the required function regardless of traditional IT and OT boundaries.

Machines: In manufacturing, machines still exist because we still have to make stuff. Most machines, however, are now smart and have their own networking and databases. They generate more useful data than many IT systems of just a few years ago, again blurring the distinctions between IT and OT.

Workflow: One of the biggest weaknesses of the old-school connectivity-and-data-exchange world was that many business processes and workflows cut across both IT and OT. Now those workflows and processes can be implemented as the business needs them, completely ignoring the old-school boundaries between IT and OT.

This is all great stuff. IT and OT really are converging to the point where distinctions between IT and OT technologies are all but gone. But, before wrapping this up, why are we doing all this? What is the business value behind IT and OT convergence?

Improved customer experience: For most companies, it's a lot more than just providing good products. It's about delivering an excellent customer experience, and that may mean offering services and data with the products, changing the way the company interacts with the customer to provide higher levels of customer service, and ultimately using the complete capabilities of the company to help customers solve their toughest business problems.

Improved business operations: Increase business velocity, agility and flexibility, leverage new technology to free up people's time, use new technology to eliminate repetitive tasks, increase productivity, reduce costs, expand employees' skill sets and ultimately create new and better ways of doing business.

So, whether you call it Industry 4.0, smart manufacturing or digital transformation, or whether you consider IIoT, cloud computing, edge computing, AI, machine learning and now IT/OT convergence, the goal is still the same: Digitally transform the business for the future and create new and better ways of doing business.

The good news is that's what Industry 4.0 and IT/OT convergence are already doing. You just need to find ways to make it happen faster.

How emerging technologies helped tackle COVID-19 in China – World Economic Forum

COVID-19 is a major global public health challenge. Its outbreak in China presented the fastest spread, the widest scope of infections and the greatest difficulty in controlling infections of any public health emergency since the founding of the People's Republic of China in 1949.

In the battle against the outbreak, China actively leveraged digital technologies such as artificial intelligence (AI), big data, cloud computing, blockchain, and 5G, which have effectively improved the efficiency of the country's efforts in epidemic monitoring, virus tracking, prevention, control and treatment, and resource allocation.

Here are a few of the ways information technologies were effectively leveraged:

In a crisis, collaboration is key. During the outbreak, a range of companies made their algorithms publicly available to improve efficiency and to support coronavirus testing and research.

Baidu Research, a world leader in AI R&D, open-sourced LinearFold, its linear-time AI algorithm, to epidemic prevention centers, gene-testing institutions, and scientific research institutions around the world. The algorithm reduced the time needed to predict and study the coronavirus's RNA secondary structure from 55 minutes to just 27 seconds, a roughly 120-fold speedup that cuts researchers' waiting time by two orders of magnitude and brings much-improved efficiency in virus detection and diagnosis compared with traditional algorithms.

Additionally, Zhejiang Provincial Center for Disease Control and Prevention (Zhejiang CDC) launched an automated genome-wide testing and analysis platform. Based on the AI algorithm developed by the Alibaba DAMO Academy (a platform funded by Jack Ma for science research), the group has shortened the genetic analysis of suspected cases from several hours to half an hour and can accurately detect virus mutations.

Image: A security guard looks at a screen at Wuhan's Hankou Railway Station as travel restrictions for leaving the city, the epicentre of the global COVID-19 outbreak, are lifted, allowing people to leave via road, rail and air. Wuhan, Hubei, China, April 8, 2020. (REUTERS/Aly Song)

Artificial intelligence was also leveraged in subway stations, train stations and other public places with high concentrations of people and a high degree of mobility. Traditional temperature measurement is time-consuming and increases the risk of cross-infection because people must cluster together, so companies such as Wuhan Guide Infrared Co. Ltd introduced new temperature-measurement technology based on computer vision and infrared sensing. This technology made it possible to take body temperatures in a contactless, reliable, and efficient manner, often without people even being aware of it. With it in place, anyone whose body temperature exceeded the threshold could be quickly and accurately located.

After the outbreak, big data played an important role in prediction, early warnings, and analyzing the flow of people and the distribution of materials. Qihoo 360, a leading internet company in China, released its Big Data Migration Map in February 2020, which users can access through mobile phones or computers to view migration trends across the Chinese mainland from January 1, 2020 onward. The tool became an important means of understanding and predicting changes in the epidemic situation nationwide.

Image: A student attends an online class at home in Fuyang, Anhui province, China, March 2, 2020, after students' return to school was delayed due to the novel coronavirus outbreak. (China Daily via REUTERS)

In the epidemic response, relatively mature cloud computing technologies became as essential as water or electricity. Alibaba Cloud made its AI computing power available to public research institutions around the world for free to accelerate the development of new pneumonia drugs and vaccines. Meanwhile, Didi offered GPU cloud computing resources and technical support for combating the novel coronavirus to domestic scientific research institutions, medical and rescue platforms, for free.

As the virus spread, demand for cloud-based video conferencing and online teaching skyrocketed, and various cloud service vendors actively upgraded their offerings. For example, Youku and DingTalk (an all-in-one platform under Alibaba Group) launched the "Attending Class at Home" program to provide students with a secure learning environment and convenient learning tools. The Online Classroom function, made available free of charge during the epidemic to students of universities, primary schools and middle schools across China, can support millions of students taking online classes simultaneously and also reaches schools in vast rural areas.

Furthermore, other enterprise companies expanded access to their tools. Tencent Meeting made unlimited-length meetings for up to 300 participants free until the end of the epidemic, and WeChat Work offered free, stable HD video conferences for up to 300 participants during the epidemic, accessible from phones and supporting document and screen sharing.

Blockchain technology eliminates intermediaries, prevents data loss and tampering, and provides traceability. It can play an important role in ensuring the openness and transparency of epidemic information and the traceability of epidemic materials. For example, blockchain technology can be used to record epidemic information and ensure that information sources are open, transparent, and traceable, thus effectively reducing rumors.

Lianfei Technology launched the nation's first blockchain epidemic monitoring platform, which can track the progress of COVID-19 in all provinces in real time, and register the relevant epidemic data on the chain so that the data can be traced and cannot be tampered with. The data links based on transparent monitoring and accountability are initially established to ensure that epidemic information is open and transparent.
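The tamper-evidence property described here rests on hash chaining: each record embeds a hash of the previous one, so altering any entry breaks every later link. A minimal sketch of just that mechanism (real blockchains add consensus, signatures, and distribution, all omitted here; the sample records are illustrative):

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    # Canonical JSON (sorted keys) so the same record always hashes identically.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(chain: list, data: dict) -> None:
    # Link the new record to the hash of the previous one.
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def verify(chain: list) -> bool:
    # Every stored link must match a fresh hash of the preceding record.
    return all(chain[i]["prev"] == record_hash(chain[i - 1]) for i in range(1, len(chain)))

chain = []
append(chain, {"region": "A", "confirmed": 12})
append(chain, {"region": "B", "confirmed": 3})
assert verify(chain)

chain[0]["data"]["confirmed"] = 1                 # tamper with an early record...
print("chain valid after edit:", verify(chain))   # chain valid after edit: False
```

Because later records commit to the hashes of earlier ones, tampering is detectable but not prevented; prevention comes from replicating the chain across parties.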

5G, which has only just been commercialized, has also played an important role in epidemic prevention and control, mainly in live-streaming video and telemedicine. China Mobile set up 5G base stations at Huoshenshan and Leishenshan hospitals and delivered 5G high-definition live broadcasts of the two hospitals' construction, providing round-the-clock real-time views of the construction sites to more than 20 mainstream media platforms such as People's Daily and Xinhua News Agency. The content was simultaneously distributed overseas by China Daily, and the number of online viewers exceeded 490 million.

In addition, the epidemic also witnessed the transition of 5G + health from "experimental phase" to "clinical phase". In order to make full use of the resources of experts in large cities and hospitals, the 5G + remote consultation system has been quickly implemented in many hospitals across the country. The first remote consultation platform of Huoshenshan Hospital allows medical experts far away in Beijing to work with front-line medical staff of Huoshenshan Hospital through remote video connections and conduct remote consultations with patients, thus further improving the efficiency and effectiveness of diagnosis and treatment.

"New generation information technologies have unique advantages and can play an important role in responding to major public health challenges."

China's practice has proven that the new-generation information technologies have unique advantages and can play an important role in responding to major public health challenges.

The COVID-19 outbreak is a common challenge faced by mankind with all countries' interests closely intertwined. Countries continue to develop new solutions as the epidemic spreads. As it does, countries must share their learnings and work together. By doing so, they can collectively find the solutions needed to fight the virus and save lives.

Written by

QI Xiaoxia, Director General, Bureau of International Cooperation, Cyberspace Administration of China

The views expressed in this article are those of the author alone and not the World Economic Forum.

What Is The Difference Between Artificial Intelligence And …

Artificial Intelligence (AI) and Machine Learning (ML) are two very hot buzzwords right now, and often seem to be used interchangeably.

They are not quite the same thing, but the perception that they are can sometimes lead to some confusion. So I thought it would be worth writing a piece to explain the difference.

Both terms crop up very frequently when the topic is Big Data, analytics, and the broader waves of technological change which are sweeping through our world.

In short, the best answer is that:

Artificial Intelligence is the broader concept of machines being able to carry out tasks in a way that we would consider smart.

And,

Machine Learning is a current application of AI based around the idea that we should really just be able to give machines access to data and let them learn for themselves.

Early Days

Artificial Intelligence has been around for a long time: the Greek myths contain stories of mechanical men designed to mimic our own behavior. Very early European computers were conceived as "logical machines," and by reproducing capabilities such as basic arithmetic and memory, engineers saw their job, fundamentally, as attempting to create mechanical brains.

As technology, and, importantly, our understanding of how our minds work, has progressed, our concept of what constitutes AI has changed. Rather than pursuing increasingly complex calculations, work in the field of AI has concentrated on mimicking human decision-making processes and carrying out tasks in ever more human ways.

Artificial intelligence, meaning devices designed to act intelligently, is often classified into one of two fundamental groups: applied or general. Applied AI is far more common; systems designed to intelligently trade stocks and shares, or to maneuver an autonomous vehicle, fall into this category.

Generalized AIs, systems or devices which can in theory handle any task, are less common, but this is where some of the most exciting advancements are happening today. It is also the area that has led to the development of Machine Learning. Often referred to as a subset of AI, it's really more accurate to think of it as the current state of the art.

The Rise of Machine Learning

Two important breakthroughs led to the emergence of Machine Learning as the vehicle which is driving AI development forward with the speed it currently has.

One of these was the realization, credited to Arthur Samuel in 1959, that rather than teaching computers everything they need to know about the world and how to carry out tasks, it might be possible to teach them to learn for themselves.

The second, more recently, was the emergence of the internet, and the huge increase in the amount of digital information being generated, stored, and made available for analysis.

Once these innovations were in place, engineers realized that rather than teaching computers and machines how to do everything, it would be far more efficient to code them to think like human beings, and then plug them into the internet to give them access to all of the information in the world.

Neural Networks

The development of neural networks has been key to teaching computers to think and understand the world in the way we do, while retaining the innate advantages they hold over us such as speed, accuracy and lack of bias.

A Neural Network is a computer system designed to work by classifying information in the same way a human brain does. It can be taught to recognize, for example, images, and classify them according to elements they contain.

Essentially, it works on a system of probability: based on data fed to it, it is able to make statements, decisions or predictions with a degree of certainty. The addition of a feedback loop enables learning: by sensing or being told whether its decisions are right or wrong, it modifies its approach in the future.
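The probability-plus-feedback mechanics described here can be sketched in a few lines: a toy single-layer network (a logistic unit) outputs a probability, the error signal tells it how wrong it was, and the weights are nudged accordingly. The task, dimensions, and learning rate below are illustrative assumptions, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task: classify points by whether their coordinates sum to a positive number.
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)

w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(100):                  # the feedback loop
    p = sigmoid(X @ w + b)            # probabilistic prediction
    grad = p - y                      # error signal: right or wrong, and by how much
    w -= lr * (X.T @ grad) / len(y)   # modify the approach for the future
    b -= lr * grad.mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Real neural networks stack many such units in layers, but the learn-from-feedback loop is the same idea at scale.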

Machine Learning applications can read text and work out whether the person who wrote it is making a complaint or offering congratulations. They can also listen to a piece of music, decide whether it is likely to make someone happy or sad, and find other pieces of music to match the mood. In some cases, they can even compose their own music expressing the same themes, or which they know is likely to be appreciated by the admirers of the original piece.
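As a toy illustration of the complaint-versus-congratulations example above, here is a trivial lexicon-based classifier. The hand-made word lists are illustrative assumptions; real ML systems learn such cues from labeled data rather than from fixed lists:

```python
# Tiny illustrative lexicons (assumptions, not a real trained model).
COMPLAINT = {"broken", "refund", "disappointed", "late", "terrible"}
PRAISE = {"congratulations", "excellent", "thank", "love", "great"}

def classify(text: str) -> str:
    # Normalize words and count hits against each lexicon.
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & PRAISE) - len(words & COMPLAINT)
    return "praise" if score > 0 else "complaint" if score < 0 else "neutral"

print(classify("Congratulations on an excellent launch!"))  # praise
print(classify("My order arrived late and broken."))        # complaint
```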

These are all possibilities offered by systems based around ML and neural networks. Thanks in no small part to science fiction, the idea has also emerged that we should be able to communicate and interact with electronic devices and digital information as naturally as we would with another human being. To this end, another field of AI, Natural Language Processing (NLP), has become a source of hugely exciting innovation in recent years, and one which is heavily reliant on ML.

NLP applications attempt to understand natural human communication, either written or spoken, and communicate in return with us using similar, natural language. ML is used here to help machines understand the vast nuances in human language, and to learn to respond in a way that a particular audience is likely to comprehend.

A Case Of Branding?

Artificial Intelligence, and in particular ML today, certainly has a lot to offer. With its promise of automating mundane tasks as well as offering creative insight, industries in every sector, from banking to healthcare and manufacturing, are reaping the benefits. So it's important to bear in mind that AI and ML are also something else: they are products which are being sold, consistently and lucratively.

Machine Learning has certainly been seized as an opportunity by marketers. After AI has been around for so long, it's possible that it started to be seen as something that's in some way "old hat," even before its potential has ever truly been achieved. There have been a few false starts along the road to the AI revolution, and the term Machine Learning certainly gives marketers something new, shiny and, importantly, firmly grounded in the here and now, to offer.

The fact that we will eventually develop human-like AI has often been treated as something of an inevitability by technologists. Certainly, today we are closer than ever, and we are moving towards that goal with increasing speed. Much of the exciting progress that we have seen in recent years is thanks to the fundamental changes in how we envisage AI working, which have been brought about by ML. I hope this piece has helped a few people understand the distinction between AI and ML. In another piece on this subject I go deeper, literally, as I explain the theories behind another trending buzzword: Deep Learning.

Check out these links for more information on artificial intelligence and many practical AI case examples.

Machine Learning: Making Sense of Unstructured Data and Automation in Alt Investments – Traders Magazine

The following was written by Harald Collet, CEO at Alkymi, and Hugues Chabanis, Product Portfolio Manager, Alternative Investments, at SimCorp.

Institutional investors are buckling under the operational constraint of processing hundreds of data streams from unstructured data sources such as email, PDF documents, and spreadsheets. These data formats bury employees in low-value copy-paste workflows and block firms from capturing valuable data. Here, we explore how Machine Learning (ML), paired with a better operational workflow, can enable firms to more quickly extract insights for informed decision-making and help govern the value of data.

According to McKinsey, the average professional spends 28% of the workday reading and answering an average of 120 emails, on top of the 19% spent searching for and processing data. The issue is even more pronounced in information-intensive industries such as financial services, as valuable employees are also required to spend needless hours every day processing and synthesizing unstructured data. Transformational change, however, is finally on the horizon. Gartner research estimates that by 2022, one in five workers engaged in mostly non-routine tasks will rely on artificial intelligence (AI) to do their jobs. And embracing ML will be a necessity for the digital transformation demanded both by the market and by the changing expectations of the workforce.

For institutional investors operating in an environment of ongoing volatility, tighter competition, and economic uncertainty, using ML to transform operations and back-office processes offers a unique opportunity. In fact, institutional investors can capture 15-30% efficiency gains by applying ML and intelligent process automation in operations (Boston Consulting Group, 2019), which in turn creates operational alpha through improved customer service and agile, front-to-back process redesign.

Operationalizing machine learning workflows

ML has finally reached the point of maturity where it can deliver on these promises. AI has been studied for decades, but the deep learning breakthroughs of the last decade have played a major role in the current AI boom. When it comes to understanding and processing unstructured data, deep learning solutions provide much higher levels of potential automation than traditional machine learning or rule-based solutions. Rapid advances in open-source ML frameworks and tools, including natural language processing (NLP) and computer vision, have made ML solutions for data extraction more widely available.

Asset class deep-dive: Machine learning applied to Alternative investments

In a 2019 industry survey conducted by InvestOps, data collection (46%) and efficient processing of unstructured data (41%) were cited as the top two challenges European investment firms faced when supporting Alternatives.

This is no surprise, as Alternatives assets present an acute data management challenge and are costly, difficult, and complex to manage, largely due to the unstructured nature of Alternatives data. This data is typically received by investment managers in the form of email, with a variety of PDF documents or Excel templates that require significant operational effort and human understanding to interpret, capture, and utilize. For example, transaction data is typically received by investment managers as a PDF document via email or an online portal. To make use of this mission-critical data, the investment firm has to manually retrieve, interpret, and process documents in a multi-level workflow involving three to five employees on average.

The exceptionally low straight-through-processing (STP) rates already suffered by investment managers working with alternative investments are a problem that will further deteriorate as Alternatives become an increasingly important asset class, predicted by Preqin to rise from $10 trillion in assets under management (AUM) today to $14 trillion by 2023.

Specific challenges faced by investment managers dealing with manual Alternatives workflows are:

Within the Alternatives industry, various attempts have been made to use templates or standardize the exchange of data. However, these attempts have so far failed or are progressing very slowly.

Applying ML to process the unstructured data will enable workflow automation and real-time insights for institutional investment managers today, without needing to wait for a wholesale industry adoption of a standardized document type like the ILPA template.
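To make the extraction idea concrete, here is a minimal sketch of turning an unstructured transaction notice into normalized fields. The notice text, the field taxonomy, and the regex rules are all illustrative assumptions; a production system would use ML models rather than hand-written patterns, precisely because real documents vary far more than this:

```python
import re

# A hypothetical capital call notice, as it might arrive in an email body.
NOTICE = """Capital Call Notice
Fund: Example Growth Fund III
Amount Due: USD 1,250,000.00
Due Date: 2020-06-15"""

def extract(text: str) -> dict:
    # Pull each field and normalize it to a simple taxonomy.
    out = {}
    fund = re.search(r"Fund:\s*(.+)", text)
    amount = re.search(r"Amount Due:\s*([A-Z]{3})\s*([\d,]+\.\d{2})", text)
    due = re.search(r"Due Date:\s*(\d{4}-\d{2}-\d{2})", text)
    if fund:
        out["fund"] = fund.group(1).strip()
    if amount:
        out["currency"] = amount.group(1)
        out["amount"] = float(amount.group(2).replace(",", ""))
    if due:
        out["due_date"] = due.group(1)
    return out

print(extract(NOTICE))
# {'fund': 'Example Growth Fund III', 'currency': 'USD', 'amount': 1250000.0, 'due_date': '2020-06-15'}
```

The point of ML here is to replace the brittle patterns with learned extraction, while keeping the same normalized output that downstream systems consume.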

To date, the lack of straight-through processing (STP) in Alternatives has resulted in investment firms either putting in significant operational effort to build out an internal data-processing function or reluctantly going down the path of adopting an outsourcing workaround.

However, applying a digital approach, more specifically ML, to workflows in the front, middle and back office can drive a number of improved outcomes for investment managers, including:

Trust and control are critical when automating critical data-processing workflows. This is achieved with a human-in-the-loop design that puts the employee squarely in the driver's seat, with features such as confidence-scoring thresholds, randomized sampling of the output, and second-line verification of all STP data extractions. Validation rules on every data element can ensure that high-quality output data is generated and normalized to a specific data taxonomy, making data immediately available for action. In addition, processing documents with computer vision allows all extracted data to be traced to its exact source location in the document (such as a footnote in a long quarterly report).
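The confidence-threshold and random-sampling design described here amounts to a simple routing rule: high-confidence extractions go straight through, everything else is queued for a human, and a random slice of the straight-through items is double-checked. The threshold, sample rate, and records below are illustrative assumptions:

```python
import random

THRESHOLD = 0.95     # minimum confidence for straight-through processing
SAMPLE_RATE = 0.1    # fraction of high-confidence items still sent for review
random.seed(0)       # fixed seed so the sketch is reproducible

extractions = [
    {"field": "NAV", "value": "102.4", "confidence": 0.99},
    {"field": "commitment", "value": "5,000,000", "confidence": 0.71},
    {"field": "as_of_date", "value": "2020-03-31", "confidence": 0.98},
]

stp, review = [], []
for item in extractions:
    if item["confidence"] >= THRESHOLD and random.random() > SAMPLE_RATE:
        stp.append(item)      # auto-accepted (STP)
    else:
        review.append(item)   # human verification queue

print(len(stp), "auto-accepted;", len(review), "for review")
```

Tuning THRESHOLD and SAMPLE_RATE is how such a system trades automation rate against the amount of second-line verification the business requires.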

Reverse outsourcing to govern the value of your data

Big data is often called the "new oil" or a superpower, and there are, of course, many third-party service providers standing at the ready, offering to help institutional investors extract and organize the ever-increasing amount of unstructured big data that is not easily accessible, whether because of its format (emails, PDFs, etc.) or its location (web traffic, satellite images, etc.). To overcome this, some firms turn to outsourcing, but while this removes the heavy manual burden of data processing, it creates other challenges, including governance issues and a lack of control.

Embracing ML and unleashing its potential

Investment managers should think of ML as an in-house co-pilot that can help employees in several ways. First, it is fast: documents are processed instantly, and when confidence levels are high, processed data requires only minimal review. Second, ML acts as an initial set of eyes, initiating the proper workflows based on the documents that have been received. Third, instead of collecting only the minimum data required, ML can collect everything, giving users options to gather and reconcile data that would otherwise have been ignored and lost for lack of resources. Finally, ML will not forget the format of any historical document, whether from yesterday or from 10 years ago, safeguarding institutional knowledge that is commonly lost through cyclical employee turnover.

ML has reached the maturity where it can be applied to automate narrow and well-defined cognitive tasks and can help transform how employees work in financial services. However, many early adopters have paid a price for focusing too much on the ML technology and not enough on the end-to-end business process and workflow.

The critical gap has been in planning for how to operationalize ML for specific workflows. ML solutions should be designed collaboratively with business owners and target narrow and well-defined use cases that can successfully be put into production.

Alternatives assets are costly, difficult, and complex to manage, largely due to the unstructured nature of Alternatives data. Processing unstructured data with ML is a use case that generates high levels of STP through the automation of manual data extraction and data processing tasks in operations.

Using ML to automatically process unstructured data for institutional investors will generate operational alpha; a level of automation necessary to make data-driven decisions, reduce costs, and become more agile.

The views represented in this commentary are those of its author and do not reflect the opinion of Traders Magazine, Markets Media Group or its staff. Traders Magazine welcomes reader feedback on this column and on all issues relevant to the institutional trading community.

See the original post here:
Machine Learning: Making Sense of Unstructured Data and Automation in Alt Investments - Traders Magazine

Read More..

Machine Learning Improves Weather and Climate Models – Eos

Both weather and climate models have improved drastically in recent years, as advances in one field have tended to benefit the other. But there is still significant uncertainty in model outputs that is not quantified accurately. That's because the processes that drive climate and weather are chaotic, complex, and interconnected in ways that researchers have yet to describe in the complex equations that power numerical models.

Historically, researchers have used approximations called parameterizations to model the relationships underlying small-scale atmospheric processes and their interactions with large-scale atmospheric processes. Stochastic parameterizations have become increasingly common for representing the uncertainty in subgrid-scale processes, and they are capable of producing fairly accurate weather forecasts and climate projections. But it is still a mathematically challenging method. Now researchers are turning to machine learning to make these parameterizations more efficient.

Here Gagne et al. evaluate the use of a class of machine learning networks known as generative adversarial networks (GANs) with a toy model of the extratropical atmosphere, a model first presented by Edward Lorenz in 1996 (and thus known as the L96 system) that has frequently been used as a test bed for stochastic parameterization schemes. The researchers trained 20 GANs, with varied noise magnitudes, and identified a set that outperformed a hand-tuned parameterization in L96. The authors found that the success of the GANs in providing accurate weather forecasts was predictive of their performance in climate simulations: the GANs that provided the most accurate weather forecasts also performed best in climate simulations, but they did not perform as well in offline evaluations.
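For reference, the one-scale Lorenz '96 system mentioned above can be integrated in a few lines of Python. (Gagne et al. work with the two-scale variant, which adds fast subgrid variables; this minimal sketch shows only the base model with the standard forcing F = 8.)

```python
import numpy as np

def l96_tendency(x, forcing=8.0):
    """One-scale Lorenz '96: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def step_rk4(x, dt=0.01, forcing=8.0):
    """Advance the state one step with classical 4th-order Runge-Kutta."""
    k1 = l96_tendency(x, forcing)
    k2 = l96_tendency(x + 0.5 * dt * k1, forcing)
    k3 = l96_tendency(x + 0.5 * dt * k2, forcing)
    k4 = l96_tendency(x + dt * k3, forcing)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Spin up a small 8-variable system from a perturbed rest state; the tiny
# perturbation is enough to trigger the model's chaotic behaviour.
state = np.full(8, 8.0)
state[0] += 0.01
for _ in range(1000):
    state = step_rk4(state)
```

Because the system is cheap to run yet genuinely chaotic, it makes a convenient test bed: a parameterization scheme (hand-tuned or GAN-based) can be judged by how well it reproduces the statistics of trajectories like this one.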

The study provides one of the first practically relevant evaluations for machine learning for uncertain parameterizations. The authors conclude that GANs are a promising approach for the parameterization of small-scale but uncertain processes in weather and climate models. (Journal of Advances in Modeling Earth Systems (JAMES), https://doi.org/10.1029/2019MS001896, 2020)

Kate Wheeling, Science Writer

Follow this link:
Machine Learning Improves Weather and Climate Models - Eos

Read More..

What Will Be the Future Prospects Of the Machine Learning Software Market? Trends, Factors, Opportunities and Restraints – Science In Me

Regal Intelligence has added the latest report on the Machine Learning Software Market to its offering. The global market for Machine Learning Software is expected to grow at an impressive CAGR during the forecast period. Furthermore, this report provides a complete overview of the Machine Learning Software Market, offering comprehensive insight into historical market trends, performance, and the 2020 outlook.

The report sheds light on the highly lucrative Global Machine Learning Software Market and its dynamic nature. The report provides a detailed analysis of the market to define, describe, and forecast the global Machine Learning Software market, based on components (solutions and services), deployment types, applications, and regions with respect to individual growth trends and contributions toward the overall market.

Request a sample of Machine Learning Software Market report @ https://www.regalintelligence.com/request-sample/102477

Market Segment as follows:

The global Machine Learning Software Market report focuses closely on key industry players to identify potential growth opportunities; increased marketing activity is also projected to accelerate market growth throughout the forecast period. Additionally, the market is expected to grow immensely throughout the forecast period owing to several primary factors fueling its growth. Finally, the report provides detailed profiles and data analysis of the leading Machine Learning Software companies.

Key Companies included in this report: Microsoft, Google, TensorFlow, Kount, Warwick Analytics, Valohai, Torch, Apache SINGA, AWS, BigML, Figure Eight, Floyd Labs

Market by Application: Application A, Application B, Application C

Market by Types: On-Premises, Cloud Based

Get Table of Contents @ https://www.regalintelligence.com/request-toc/102477

The Machine Learning Software Market research presents a study combining primary and secondary research. The report gives insights into the key factors driving and limiting Machine Learning Software market growth. Additionally, the report studies competitive developments, such as mergers and acquisitions, new partnerships, new contracts, and new product developments in the global Machine Learning Software market. The past trends and future prospects included in this report make it highly comprehensible for analysis of the market. Moreover, the latest trends, product portfolio, demographics, geographical segmentation, and regulatory framework of the Machine Learning Software market have also been included in the study.

Global Machine Learning Software Market Research Report 2020

Buy The Report @ https://www.regalintelligence.com/buyNow/102477

To conclude, the report presents SWOT analysis to sum up the information covered in the global Machine Learning Software market report, making it easier for the customers to plan their activities accordingly and make informed decisions. To know more about the report, get in touch with Regal Intelligence.

See original here:
What Will Be the Future Prospects Of the Machine Learning Software Market? Trends, Factors, Opportunities and Restraints - Science In Me

Read More..

How Microsoft Teams will use AI to filter out typing, barking, and other noise from video calls – VentureBeat

Last month, Microsoft announced that Teams, its competitor to Slack, Facebook's Workplace, and Google's Hangouts Chat, had passed 44 million daily active users. The milestone overshadowed its unveiling of a few new features coming later this year. Most were straightforward: a hand-raising feature to indicate you have something to say, offline and low-bandwidth support to read chat messages and write responses even with poor or no internet connection, and an option to pop chats out into a separate window. But one feature, real-time noise suppression, stood out: Microsoft demoed how the AI minimized distracting background noise during a call.

We've all been there. How many times have you asked someone to mute themselves or relocate from a noisy area? Real-time noise suppression will filter out someone typing on their keyboard while in a meeting, the rustling of a bag of chips (as you can see in the video above), and a vacuum cleaner running in the background. AI will remove the background noise in real time so you can hear only speech on the call. But how exactly does it work? We talked to Robert Aichner, Microsoft Teams group program manager, to find out.

The use of collaboration and video conferencing tools is exploding as the coronavirus crisis forces millions to learn and work from home. Microsoft is pushing Teams as the solution for businesses and consumers as part of its Microsoft 365 subscription suite. The company is leaning on its machine learning expertise to ensure AI features are one of its big differentiators. When it finally arrives, real-time background noise suppression will be a boon for businesses and households full of distracting noises. How Microsoft built the feature is also instructive to other companies tapping machine learning.

Of course, noise suppression has existed in the Microsoft Teams, Skype, and Skype for Business apps for years. Other communication tools and video conferencing apps have some form of noise suppression as well. But that noise suppression covers only stationary noise, such as a computer fan or air conditioner running in the background. The traditional noise suppression method is to look for speech pauses, estimate the baseline of noise, assume that the continuous background noise doesn't change over time, and filter it out.
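The traditional method described above (estimate a noise baseline during speech pauses, then filter it out) is often implemented as spectral subtraction. A minimal NumPy sketch, assuming the audio has already been split into windowed frames and the noise-only frames are known:

```python
import numpy as np

def spectral_subtraction(frames, noise_frames):
    """Classical stationary noise suppression: estimate a fixed noise
    spectrum from known speech pauses, then subtract it from every frame.
    Both arguments are 2-D arrays of windowed time-domain frames."""
    noise_mag = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)
    cleaned = []
    for frame in frames:
        spectrum = np.fft.rfft(frame)
        mag = np.abs(spectrum)
        # Subtract the noise estimate; floor at a small fraction of the
        # original magnitude to limit "musical noise" artifacts.
        new_mag = np.maximum(mag - noise_mag, 0.05 * mag)
        # Keep the original phase, resynthesize the time-domain frame.
        cleaned.append(np.fft.irfft(new_mag * np.exp(1j * np.angle(spectrum)),
                                    n=len(frame)))
    return np.array(cleaned)

# Demo on synthetic data: 20 noise-only frames estimate the baseline,
# then 10 further noisy frames are cleaned.
rng = np.random.default_rng(0)
noise_frames = 0.5 * rng.standard_normal((20, 256))
noisy_frames = 0.5 * rng.standard_normal((10, 256))
cleaned = spectral_subtraction(noisy_frames, noise_frames)
```

Because the noise spectrum is estimated once and assumed constant, this approach works for fans and air conditioners but fails on the non-stationary noises discussed next, which is exactly the gap the ML model targets.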

Going forward, Microsoft Teams will suppress non-stationary noises like a dog barking or somebody shutting a door. "That is not stationary," Aichner explained. "You cannot estimate that in speech pauses. What machine learning now allows you to do is to create this big training set, with a lot of representative noises."

In fact, Microsoft open-sourced its training set earlier this year on GitHub to advance the research community in that field. While the first version is publicly available, Microsoft is actively working on extending the data sets. A company spokesperson confirmed that as part of the real-time noise suppression feature, certain categories of noises in the data sets will not be filtered out on calls, including musical instruments, laughter, and singing.

Microsoft can't simply isolate the sound of human voices because other noises also happen at the same frequencies. On a spectrogram of a speech signal, unwanted noise appears in the gaps between speech and overlapping with the speech. It's thus next to impossible to filter out the noise: if your speech and noise overlap, you can't distinguish the two. Instead, you need to train a neural network beforehand on what noise looks like and what speech looks like.

To get his points across, Aichner compared machine learning models for noise suppression to machine learning models for speech recognition. For speech recognition, you need to record a large corpus of users talking into the microphone and then have humans label that speech data by writing down what was said. Instead of mapping microphone input to written words, in noise suppression you're trying to get from noisy speech to clean speech.

"We train a model to understand the difference between noise and speech, and then the model is trying to just keep the speech," Aichner said. "We have training data sets. We took thousands of diverse speakers and more than 100 noise types. And then what we do is we mix the clean speech without noise with the noise. So we simulate a microphone signal. And then you also give the model the clean speech as the ground truth. So you're asking the model, 'From this noisy data, please extract this clean signal, and this is how it should look like.' That's how you train neural networks [in] supervised learning, where you basically have some ground truth."

For speech recognition, the ground truth is what was said into the microphone. For real-time noise suppression, the ground truth is the speech without noise. By feeding in a large enough data set (in this case, hundreds of hours of data), Microsoft can effectively train its model. "It's able to generalize and reduce the noise with my voice even though my voice wasn't part of the training data," Aichner said. "In real time, when I speak, there is noise that the model would be able to extract the clean speech [from] and just send that to the remote person."

Comparing the functionality to speech recognition makes noise suppression sound much more achievable, even though it's happening in real time. So why has it not been done before? Can Microsoft's competitors quickly recreate it? Aichner listed challenges for building real-time noise suppression, including finding representative data sets, building and shrinking the model, and leveraging machine learning expertise.

We already touched on the first challenge: representative data sets. The team spent a lot of time figuring out how to produce sound files that exemplify what happens on a typical call.

They used audiobooks to represent male and female voices, since speech characteristics differ between the two. They used YouTube data sets with labeled data specifying that a recording includes, say, typing or music. Aichner's team then combined the speech data and noise data using a synthesizer script at different signal-to-noise ratios. By amplifying the noise, they could imitate different realistic situations that can happen on a call.
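The synthesizer step, mixing clean speech with noise at a chosen signal-to-noise ratio to produce (noisy input, clean ground truth) training pairs, can be sketched roughly as follows. The function name and demo signals are illustrative, not Microsoft's actual script:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Mix clean speech with noise at a target signal-to-noise ratio (dB).
    Returns the noisy mixture plus the clean speech, i.e. one
    (input, ground-truth) training pair for supervised learning."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale the noise so 10*log10(speech_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    noisy = speech + scale * noise
    return noisy, speech

# Demo: one second of a 440 Hz tone standing in for speech, white noise
# standing in for a labeled background recording, mixed at 5 dB SNR.
rng = np.random.default_rng(1)
speech = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
noise = rng.standard_normal(16000)
noisy, clean = mix_at_snr(speech, noise, snr_db=5.0)
```

Sweeping `snr_db` over a range of values is what lets one pair of recordings stand in for many realistic call conditions, from faint background hiss to a vacuum cleaner drowning out the speaker.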

But audiobooks are drastically different from conference calls. Would that not affect the model, and thus the noise suppression?

"That is a good point," Aichner conceded. "Our team did make some recordings as well to make sure that we are not just training on synthetic data we generate ourselves, but that it also works on actual data. But it's definitely harder to get those real recordings."

Aichner's team is not allowed to look at any customer data. Additionally, Microsoft has strict privacy guidelines internally. "I can't just simply say, 'Now I record every meeting.'"

So the team couldn't use Microsoft Teams calls. Even if they could (say, if some Microsoft employees opted in to have their meetings recorded), someone would still have to mark down when exactly distracting noises occurred.

"And so that's why we right now have some smaller-scale effort of making sure that we collect some of these real recordings with a variety of devices and speakers and so on," said Aichner. "What we then do is we make that part of the test set. So we have a test set which we believe is even more representative of real meetings. And then we see, if we use a certain training set, how well does that do on the test set? So ideally, yes, I would love to have a training set which is all Teams recordings and has all types of noises people are listening to. It's just that I can't easily get the same number, the same volume of data that I can by grabbing some other open source data set."

I pushed the point once more: How would an opt-in program to record Microsoft employees using Teams impact the feature?

"You could argue that it gets better," Aichner said. "If you have more representative data, it could get even better. So I think that's a good idea to potentially, in the future, see if we can improve even further. But I think what we are seeing so far is even with just taking public data, it works really well."

The next challenge is to figure out how to build the neural network, what the model architecture should be, and iterate. The machine learning model went through a lot of tuning, which required a lot of compute. Aichner's team was of course relying on Azure, using many GPUs. Even with all that compute, however, training a large model with a large data set could take multiple days.

"A lot of the machine learning happens in the cloud," Aichner said. "So, for speech recognition for example, you speak into the microphone, that's sent to the cloud. The cloud has huge compute, and then you run these large models to recognize your speech. For us, since it's real-time communication, I need to process every frame. Let's say it's 10 or 20 millisecond frames. I need to now process that within that time, so that I can send that immediately to you. I can't send it to the cloud, wait for some noise suppression, and send it back."

For speech recognition, leveraging the cloud may make sense. For real-time noise suppression, it's a nonstarter. Once you have the machine learning model, you then have to shrink it to fit on the client. You need to be able to run it on a typical phone or computer. A machine learning model only for people with high-end machines is useless.

There's another reason why the machine learning model should live on the edge rather than the cloud. Microsoft wants to limit server use. Sometimes, there isn't even a server in the equation to begin with. For one-to-one calls in Microsoft Teams, the call setup goes through a server, but the actual audio and video signal packets are sent directly between the two participants. For group calls or scheduled meetings, there is a server in the picture, but Microsoft minimizes the load on that server. Doing a lot of server processing for each call increases costs, and every additional network hop adds latency. It's more efficient from a cost and latency perspective to do the processing on the edge.

"You want to make sure that you push as much of the compute to the endpoint of the user because there isn't really any cost involved in that. You already have your laptop or your PC or your mobile phone, so now let's do some additional processing. As long as you're not overloading the CPU, that should be fine," Aichner said.

I pointed out there is a cost, especially on devices that aren't plugged in: battery life. "Yeah, battery life, we are obviously paying attention to that too," he said. "We don't want you now to have much lower battery life just because we added some noise suppression. That's definitely another requirement we have when we are shipping. We need to make sure that we are not regressing there."

It's not just regression that the team has to consider, but progression in the future as well. Because we're talking about a machine learning model, the work never ends.

"We are trying to build something which is flexible in the future because we are not going to stop investing in noise suppression after we release the first feature," Aichner said. "We want to make it better and better. Maybe for some noise tests we are not doing as good as we should. We definitely want to have the ability to improve that. The Teams client will be able to download new models and improve the quality over time whenever we think we have something better."

The model itself will clock in at a few megabytes, but it won't affect the size of the client itself. He said, "That's also another requirement we have. When users download the app on the phone or on the desktop or laptop, you want to minimize the download size. You want to help the people get going as fast as possible."

"Adding megabytes to that download just for some model isn't going to fly," Aichner said. "After you install Microsoft Teams, later in the background it will download that model. That's what also allows us to be flexible in the future, that we could do even more, have different models."

All the above requires one final component: talent.

"You also need to have the machine learning expertise to know what you want to do with that data," Aichner said. "That's why we created this machine learning team in this intelligent communications group. You need experts to know what they should do with that data. What are the right models? Deep learning has a very broad meaning. There are many different types of models you can create. We have several centers around the world in Microsoft Research, and we have a lot of audio experts there too. We are working very closely with them because they have a lot of expertise in this deep learning space."

The data is open source and can be improved upon. A lot of compute is required, but any company can simply leverage a public cloud, including the leaders: Amazon Web Services, Microsoft Azure, and Google Cloud. So if another company with a video chat tool had the right machine learning talent, could they pull this off?

"The answer is probably yes, similar to how several companies are getting speech recognition," Aichner said. "They have a speech recognizer where there's also lots of data involved. There's also lots of expertise needed to build a model. So the large companies are doing that."

Aichner believes Microsoft still has a heavy advantage because of its scale. "I think that the value is the data," he said. "What we want to do in the future is, like what you said, have a program where Microsoft employees can give us more than enough real Teams calls so that we have an even better analysis of what our customers are really doing, what problems they are facing, and customize it more towards that."

Read more:
How Microsoft Teams will use AI to filter out typing, barking, and other noise from video calls - VentureBeat

Read More..

Titled Tuesday Now Every Week With Increased Prizes – Chess.com

With immediate effect, Chess.com's Titled Tuesday tournaments will be held every week instead of every month. Each tournament will feature a $1,600 prize fund.

In response to COVID-19, Chess.com is creating opportunities for all titled players to stay active and engaged in this time of social distancing and self-quarantine. We are expanding Titled Tuesday to a weekly event, with an increased prize fund each week.

The start time of 10 a.m. Pacific Time (19:00 Central Europe) has been chosen to accommodate players from around the world. Titled Tuesday will maintain its nine-round Swiss format, with a time control of three minutes plus a one-second increment. Each week's tournament will be broadcast live on Chess.com/TV.

Titled Tuesday's expansion will mean the 2020 season's monthly prize fund will increase from $2,200 to $6,400. With $1,600 in prizes available every week, the prize fund will be distributed as follows:

In addition to the increased prize fund, Chess.com is proud to offer its first-ever Titled Tuesday prize for female players. In keeping with tradition, the Best Stream prize will be awarded as 20 gifted subs to the streamer's channel.

Titled Tuesday's expansion means that all competitors in this event are required to have their full legal name in their Chess.com profile. Anonymous titled player accounts or accounts found to be using a fake name will not be eligible to win prizes during the event.

All players must also abide by all rules and site policies found at Chess.com/agreement and cooperate fully with Chess.com's fair-play detection team. Participants should be prepared to join a Zoom call for proctoring at the arbiter's discretion; this request may be made between rounds via direct chat in live chess by a Chess.com staff member.

This month's Titled Tuesday was by far the biggest edition ever held, with nearly 900 titled players participating, including top GMs Fabiano Caruana, Ian Nepomniachtchi,Maxime Vachier-Lagrave, andHikaru Nakamura. Weekly editions are expected to be just as star-studded as the monthly versions have been since Titled Tuesday's inception.

GM Simon Williams playing and streaming Titled Tuesday.

The next Titled Tuesday is set to start next week on Tuesday, April 14 at 10 a.m. Pacific Time (19:00 Central Europe). Titled players may register for the tournament up to one hour before it begins in the tournament tab located at Chess.com/live.

Find more information about Titled Tuesdays here.

Read this article:
Titled Tuesday Now Every Week With Increased Prizes - Chess.com

Read More..

New chess body CPF looks to learn from failure – Times of India

A similar platform had existed with the Chess Players Association of India (CPAI), formed in 2004. However, that body did not last.

"CPAI was formed when AICF's former secretary general Ummer Koya decided to reduce players' prize money," Barua, who was also a member of that body, told TOI on Wednesday. That forum did not last long, but CPAI was instrumental in Koya's defeat in the AICF polls that followed, Barua said.

India's second GM added that although there was no such immediate reason behind CPF's formation, the present instability within the AICF did play a part.

"Some of the players approached the Sports Ministry regarding this turmoil a few months back. We did not get the necessary response but felt the need for a forum to put up the players' voice in an organized manner," Barua said.

Continued here:
New chess body CPF looks to learn from failure - Times of India

Read More..

Bitcoin Dominance: 2-Year Uptrend Breaking Could Spark Altcoin Boom – newsBTC

Bitcoin dominance has not only broken down from a two-year-long uptrend, but it has also retested the trend line as resistance and failed to reclaim the key level.

With trend line support now confirmed as resistance, it's likely that Bitcoin dominance could see an extended downtrend in the days, weeks, and months ahead. But what does this mean for altcoins and the rest of the cryptocurrency market?

Bitcoin dominance is a metric that weighs Bitcoin's market cap against the rest of the cryptocurrency landscape.
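Conventionally, the metric is just Bitcoin's market cap as a percentage of the total crypto market cap. A minimal sketch, with made-up market-cap figures for illustration:

```python
def bitcoin_dominance(market_caps):
    """BTC market cap as a percentage of the total crypto market cap."""
    total = sum(market_caps.values())
    return 100.0 * market_caps["BTC"] / total

# Illustrative figures in billions of USD (not real data).
caps = {"BTC": 130.0, "ETH": 18.0, "XRP": 8.0, "others": 44.0}
dominance = bitcoin_dominance(caps)  # 65.0
```

Note that dominance can fall even while Bitcoin's own market cap rises, if altcoin market caps grow faster; the metric measures relative share, not absolute value.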

As the first-ever cryptocurrency, Bitcoin enjoys a first-mover advantage, brand power, and the most institutional interest and financial world support compared to the rest of the crypto market.

Related Reading | Peak Fear Crypto Market Shows Big Bitcoin Recovery is Imminent: Analyst

This has helped Bitcoin's market cap grow massively in relation to the thousands of altcoins making up the rest of the industry.

Prior to 2017, Bitcoin dominance had never fallen below 94%. It was that year's hype bubble that sent BTC dominance into a brutal downtrend, falling to as low as 35%.

But then the bubble popped, and the irrational speculative valuations of these untested altcoins fell by 99% or more in most cases. Even the strongest altcoins, like Ethereum and Ripple, fell by over 90%.

It resulted in an uptrend forming in both Bitcoin and BTC dominance, bringing the metric to a peak of 73% during the last two years.

The uptrend support line, however, was finally breached in February 2020 during the massive altcoin market breakout, supported by historic trading volume.

BTC dominance has since retested the two-year-long uptrend line and confirmed it as support-turned-resistance.

With a bearish retest confirmed, further downside in BTC dominance is probable. And when Bitcoin dominance drops, it is time for altcoins to shine.

When altcoins outperform Bitcoin, crypto analysts and traders refer to this as an altcoin season. However, no such alt season has occurred since before 2018, long before most crypto investors bought in at the top of the bubble.

Crypto investors and traders may not know what to expect from altcoins when dominance breaks down. The last major breakdown resulted in BTC dominance falling from 95% to just 35%.

Related Reading | Crypto Titanic: Altcoin Investors Must Prepare to Sink With The Ship

A similar, 60% move down from the recent peak of 73% dominance, would mean Bitcoin has just 13% market share at the end of the next downtrend. A move like that would mean that altcoins have taken over the crypto market in a major way, and Bitcoin will be left at risk of falling out of its leadership position for the first time ever.

Originally posted here:
Bitcoin Dominance: 2-Year Uptrend Breaking Could Spark Altcoin Boom - newsBTC

Read More..