
Cloud Computing in Higher Education Market 2020 Global Analysis, Trends, Forecast up to 2025 – 3rd Watch News

The Cloud Computing in Higher Education market research added by Market Study Report, LLC, offers a comprehensive analysis of growth trends prevailing in the global business domain. This report also provides definitive data concerning the market size, commercialization aspects and revenue forecast of the industry. In addition, the study explicitly highlights the competitive status of key players within the projection timeline while focusing on their portfolio and regional expansion endeavors.


Request a sample Report of Cloud Computing in Higher Education Market at: https://www.marketstudyreport.com/request-a-sample/2745198?utm_source=3wnews.org&utm_medium=AG

The report exemplifies industry tendencies as well as presents revenue forecast, sales volume, market size and upcoming opportunities. In addition, information pertaining to the market drivers that will positively affect the profitability graph and the respective segmentations influencing the market size during the study period is delivered in the report.

The Cloud Computing in Higher Education market study provides a comprehensive analysis of the industry vertical. As per the report, this market is poised to acquire commendable returns and record a significant growth rate during the estimated timeframe.

Major takeaways from the Cloud Computing in Higher Education market size report on the basis of geographical landscape:

Ask for Discount on Cloud Computing in Higher Education Market Report at: https://www.marketstudyreport.com/check-for-discount/2745198?utm_source=3wnews.org&utm_medium=AG

This report addresses the following key questions:

Q.1. What are some of the most favorable, high-growth prospects for the global Cloud Computing in Higher Education market?

Q.2. Which product segments will grow at a faster rate throughout the forecast period, and why?

Q.3. Which geography will grow at a faster rate, and why?

Q.4. What are the major factors impacting market prospects? What are the driving factors, restraints, and challenges in this Cloud Computing in Higher Education market?

Q.5. What are the challenges and competitive threats to the market?

Q.6. What are the evolving trends in this Cloud Computing in Higher Education market, and what are the reasons behind their emergence?

Q.7. What are some of the changing customer demands in the Cloud Computing in Higher Education industry?

Key aspects of the Cloud Computing in Higher Education market entailed in the report are listed below:

For More Details On this Report: https://www.marketstudyreport.com/reports/global-cloud-computing-in-higher-education-market-2020-by-company-regions-type-and-application-forecast-to-2025

Some of the Major Highlights the TOC Covers:

Chapter 1: Methodology & Scope

Definition and forecast parameters

Methodology and forecast parameters

Data Sources

Chapter 2: Executive Summary

Business trends

Regional trends

Product trends

End-use trends

Chapter 3: Cloud Computing in Higher Education Industry Insights

Industry segmentation

Industry landscape

Vendor matrix

Technological and innovation landscape

Chapter 4: Cloud Computing in Higher Education Market, By Region

Chapter 5: Company Profile

Business Overview

Financial Data

Product Landscape

Strategic Outlook

SWOT Analysis

Related Reports:

1. Global Cloud Product Lifecycle Management Market 2020 by Company, Regions, Type and Application, Forecast to 2025: The Cloud Product Lifecycle Management market report offers a complete picture of the industry and an assessment of upcoming trends for the future market. It also analyzes the enduring growth factors, trends and statistics of the Cloud Product Lifecycle Management industry, outlined with comprehensive information and analysis. Read More: https://www.marketstudyreport.com/reports/global-cloud-product-lifecycle-management-market-2020-by-company-regions-type-and-application-forecast-to-2025

2. Global Cloud Mobile Music Services Market 2020 by Company, Regions, Type and Application, Forecast to 2025: The Cloud Mobile Music Services market report covers manufacturers' information, including shipment, price, revenue, gross profit, interview records and business distribution, enabling buyers to understand the competition better. The report also covers all regions and countries of the world, showing regional development status, including market size, volume and value, as well as price data. It additionally covers customer data from different industries, which is critical for producers. Read More: https://www.marketstudyreport.com/reports/global-cloud-mobile-music-services-market-2020-by-company-regions-type-and-application-forecast-to-2025

Contact Us:
Corporate Sales, Market Study Report LLC
Phone: 1-302-273-0910
Toll Free: 1-866-764-2150

See the rest here:
Cloud Computing in Higher Education Market 2020 Global Analysis, Trends, Forecast up to 2025 - 3rd Watch News

Read More..

Reaching for the Cloud: Embracing digital channels in cooperative societies within Africa – Techpoint.ng

By Obong Idiong

The Covid-19 pandemic has fueled the migration of group interactions to online channels. Virtual meeting platforms such as Zoom, Skype, Microsoft Teams and Google Meet have become an essential element of modern culture, enabling family members, professional colleagues and other groups of people to interact safely amid the outbreak of the highly contagious disease. Additionally, there is a growing appetite for virtual channels for investments, commerce and recreation. According to the African Union Commissioner, Amani Abou-Zeid, the Covid-19 crisis has become the biggest catalyst for digital transformation and has moved digitization from a niche market into mass adoption.

As voluntary associations of individuals with common economic and social interests, cooperative societies in Africa should embrace the trend and take deliberate steps to migrate their operations to digital channels. Cloud and related technology provide an opportunity for cooperative societies not only to improve the user experience of members and ease administration but also to deliver additional services efficiently.

The adoption of digital technology holds the key for cooperatives in Africa to fulfil their potential as agents of economic growth. In Kenya, for example, the Ministry of Industry, Trade and Cooperatives estimates that cooperatives account for over 30 percent of GDP, compared to a much smaller fraction of Nigeria's GDP. Technology has played an important role in organizing the activities of cooperatives in nations where they are making a significant economic impact.


Cloud computing is a technology that is very relevant to cooperatives today. It refers to the delivery of computing services such as servers, storage, databases, networking, software and analytics over the internet and can be used to deliver faster and more flexible services to a variety of users. Service providers create cloud computing systems to serve common business or research needs.

These solutions are generally designed to support large customer or user numbers and are therefore an ideal solution for cooperative societies that seek to operate more efficiently. Adopters typically pay only for the cloud services they use and can lower their operating costs, enhance security, run infrastructure more efficiently and scale operations. One way to achieve this cost efficiency is by partnering with digital solutions companies.

By adopting cloud technology, cooperatives can revolutionize their operations in several ways. The collection and organization of members' data on a cloud platform allows cooperatives to create financial accounts efficiently. This endears the cooperative to financial service companies, making them more willing to offer loans and other services. Sadly, many cooperatives in Africa struggle to provide financial accounts and data on members' contributions. As a result, they are unable to take advantage of financing opportunities such as the loans currently being offered by governments in these countries through various intervention schemes.

Technology is also changing how members relate to each other. Meeting participation, voting and notifications can now be handled virtually, saving time and resources, increasing participation and making engagement more convenient for members. The adoption of digital technology also opens up a new approach to customer service and customer experience: members are now able to access information about their contributions and receive personalized services on the go.

The use of digital methods to organize the operations of cooperative societies also enables a range of prescriptive and predictive analyses that are otherwise not possible without data. On a platform made possible by cloud computing technology, the cooperative's managers can analyze the membership and the peculiar needs and circumstances of members.

For example, through data mining, cooperative societies can more accurately ascertain the products and services that could interest their members. Also, through the processing of data and the creation of appropriate algorithms, the cooperative is more likely to improve member satisfaction by anticipating their needs and product preferences. Data processing can also be a valuable tool in risk management when designing rules around the provision of loans and credit to members. The power of data can only be truly unleashed when organizations adopt digital methods in the provision of their services and the management of their operations.

The application of digital technology to drive innovation in the activities of cooperatives is a growing trend across the world. Credit unions in the United States are offering services via online and mobile channels to young adults. Millions of credit union members in Asia are being connected to digital financial tools. Digital technology is not only supporting the efficient distribution of loans to credit union members but enabling peer-to-peer rental of farming and other equipment in many countries. Given the large number of cooperative societies in Africa, there is significant scope for these groups to enhance their impact on society, better serve their members, and navigate the current public health concerns.

Another reason why cooperatives should embrace digital technology is the threat of the potential disruption of their businesses by outsiders. The success of Uber shows how the business of taxi service cooperatives across the world can be disrupted. Cooperatives in Africa need to wake up to the threat of technology to their business and embrace it before advancements in technology render them irrelevant in the future.

While access to digital tools and affordable broadband connectivity remains a challenge, the World Economic Forum calls for localizing digital content and services through new models of investment in digital infrastructure, digital skills development, digitization of public services and the creation of technology offerings tailored towards SMEs, all of which create further opportunities.

Covid-19 has created uncertainties across economies around the globe. However, this dire outlook also creates new opportunities to do things differently and harness new growth.

Obong Idiong is the CEO of Africa Prudential Plc, an NSE-listed digital solutions company with a particular focus on Nigeria's capital market.


Read this article:
Reaching for the Cloud: Embracing digital channels in cooperative societies within Africa - Techpoint.ng

Read More..

CGG GeoSoftware and Alibaba Cloud to support E&P companies with cloud geoscience technologies – Oilfield Technology

CGG GeoSoftware has collaborated with Alibaba Cloud to deploy its suite of proprietary geoscience software solutions on the Alibaba Cloud platform.

All applications across the entire GeoSoftware portfolio also run on the Azure and AWS cloud platforms, offering advanced capabilities in machine learning and artificial intelligence, as well as connections to Python ecosystem notebooks.

Alibaba Cloud, the data intelligence backbone of Alibaba Group, is the third largest cloud service provider in the world, and holds the number one market share position in Asia Pacific, according to Gartner, a leading research and advisory company.

CGG GeoSoftware, through its reservoir characterisation, interpretation and rock physics modelling solutions, such as Jason, HampsonRussell, InsightEarth, PowerLog and RockSI, enables geoscientists to work with greater efficiency and optimal productivity. Initial pilot studies have shown that using Jason Workbench software on Alibaba Cloud can bring dramatic improvements in computing efficiency compared to use of the same software on local compute resources.

Li Qiang, Director of Auto & Energy Business Group, Alibaba Cloud Intelligence, said: "CGG GeoSoftware has become an Alibaba Cloud partner in oil and gas exploration and production in China. We look forward to continuing our cooperation with CGG GeoSoftware so that Chinese oil and gas exploration and production players can expand to full cloud capability and reap the benefits of digital technology."

Kamal al-Yahya, Senior Vice President, Geoscience Software & Smart Data Solutions, said: "Usage of cloud computing technology in the energy industry continues to expand as E&P companies recognise the business value of cloud computing for streamlining operations and optimising decision-making. CGG GeoSoftware and Alibaba Cloud will reinforce their cooperation to provide best-in-class advanced cloud technology to serve Chinese oil and gas exploration and production companies."

Read the article online at: https://www.oilfieldtechnology.com/digital-oilfield/01072020/cgg-geosoftware-and-alibaba-cloud-to-support-ep-companies-with-cloud-geoscience-technologies/

See more here:
CGG GeoSoftware and Alibaba Cloud to support E&P companies with cloud geoscience technologies - Oilfield Technology

Read More..

Huawei: 5G’s ‘five opportunities’ | – Advanced Television

By Colin Mann

July 1, 2020

At the New Infrastructure and Business Digitalization Forum held during the first GSMA Thrive event, Gan Bin, Chief Marketing Officer for Huawei's Wireless Network Solutions, delivered a keynote speech entitled "5G Brings Five Opportunities with New Value", elaborating on why 5G is the digital foundation of new infrastructure to upgrade connectivity, AI, cloud, computing, and industrial applications and inject new vitality into economic development.

"5G significantly improves the experience of connectivity, expanding 4G's people-centred connections with smartphones to a full range of scenarios that span not only smartphones, but also smart wearables and homes. This will add greater convenience to daily lives," Gan told participants.

Huawei's latest Global Industry Vision (GIV) report projects a per-person average possession of five smart devices by 2025. Apart from eight billion smartphones, there will be more than 20 billion PCs, tablets, VR headsets, smart watches, and smart screens among other smart devices.

5G's high bandwidth makes it possible to switch video calls, video entertainment, and personal data across these devices seamlessly. In addition, over 20 billion real-time online smart home devices, such as sound boxes, projectors, printers, and blood pressure meters will be operated and managed with 5G connections to enrich personal and home services.

As a result of 5G connections, these physical devices for homes or offices can be systematically integrated with the digital world to build new smart infrastructure platforms for consumption, education, travel, and work. This, in turn, will create new types of smart services.

5G's full-scenario connectivity also facilitates the application and expansion of AI functionality, suggests Huawei, eliminating data upload limitations, meaning that a massive amount of data can be transferred from hundreds of millions of devices to cloud servers to provide AI operations with tremendous data, which will greatly reduce the training period.

It also suggests that 5G enables devices to make the best of the powerful computing in the cloud to relax requirements on local computing, reducing device costs. Furthermore, 5G enables the transfer of AI operation results to devices to greatly expand the availability of AI-based functionality.

It says that 5G will catalyse the maturity of AI technologies, playing an increasingly important role in our personal and professional lives. Technologies like real-time speech recognition, real-time machine translation, license plate recognition, and production line detection will penetrate every aspect of life and work due to ubiquitous 5G networks. Based on industry forecasts, 5G will potentially add a trillion US dollars to the AI market space by 2025.

"Cloud technologies and business models are maturing, with hundreds of millions of connections and availability of AI services acting as the catalyst for the rapid development of the cloud industry," noted Gan.

5G will stimulate the demand for massive storage worldwide, suggests Huawei. Constrained by insufficient local capabilities, less than 2 per cent of the nearly 40 ZB of data generated in 2019 was saved. 5G's high bandwidth offers a new option to implement cloud storage to save this massive data. According to Huawei's GIV report, 92 per cent of the world's newly created 180 ZB of data will be stored on the cloud by 2025.

5G also stimulates the demand for enormous computing power. Limited by current technology, less than 10 per cent of all data has been analysed and applied so far. 5G enables devices to leverage powerful cloud computing capabilities anytime, anywhere. According to Huawei's GIV report, the demand for computing power in 2025 will have increased by 150,000 times compared to 2019, with the share of data analysed and applied surging from 10 per cent to 80 per cent.

Cloud technology is expanding beyond the Internet industry to government services, financial sectors, and other industries, spreading across production, operations, and management procedures in various enterprises. In China alone, one million enterprises will embrace the cloud in 2020, demonstrating an inevitable trend across industries towards cloud migration.

Computing capabilities have evolved into a crucial extension of human capabilities. Constantly growing computing requirements are driving the development of computing models. 5G will push computing into a new intelligent era featuring efficient collaboration among cloud nodes, edge nodes, and devices. With this new cloud-edge-device collaborative computing model, the cloud end focuses on general computing for brute force processing of massive data, and the edge end specifically addresses dedicated computing demands, including ultra-low latency and security and privacy protection, while devices are responsible for implementing the computing requirements for personalised devices, such as headsets and smartphones.

"We are now facing a vast computing market with tremendous potential. Global research firm Gartner has forecast that the computing industry will exceed two trillion US dollars by 2023," advised Gan. "While 4G has changed lives, 5G is set to change society. 5G has proven an indispensable enabler for business digitalisation and will greatly improve the operational efficiency across industries."

According to Huawei's GIV report, the contribution of the mobile industry to the world's economy is growing year by year. 5G is emerging as the major driving force, projected to produce a return 6.7 times that of non-ICT investment in the long run.

5G has developed mature use cases for smart healthcare, distance education, and smart campuses, as well as in the business, travel, and cultural sectors. In China, 5G networks have been deployed in more than 300 hospitals to support remote diagnosis, remote monitoring, remote B-mode ultrasound examinations, and many other online services, which have proven important in fighting against the ongoing pandemic. Currently, 30,000 hospitals across China have planned to introduce 5G networks within one to two years. In addition, the commercial deployment of 5G has been initiated in 500,000 school campuses, 300,000 industrial parks, 100,000 hotels, and tens of thousands of shopping malls across China.

Furthermore, many other industries are working actively to promote the integration of 5G. Examples include electrical power, mining, port, oil and gas, manufacturing, and iron and steel.

"Everyone will benefit from the opportunities brought by 5G. We call for concerted efforts across the entire society to explore business and delivery models, policy support, innovative applications, and device diversification, so as to embrace a 5G integrated world," concluded Gan.

More:
Huawei: 5G's 'five opportunities' | - Advanced Television

Read More..

Taking the middleware out of high performance computing – TechRadar

Harnessing the full power of cloud computing can be difficult for developers as cost, complexity and time constraints often get in the way. Adding to this dilemma is the fact that traditional high performance computing relies on excessive amounts of middleware, orchestration and over engineering.

To make it easier for developers to create distributed applications which can solve large, complex and computationally intensive challenges, Hadean has created a new distributed computing platform that allows them to write scalable cloud-native applications.

To learn more about Hadean and its new platform, TechRadar Pro spoke with the company's VP of operations, Mimi Keshani.

Hadean was founded to overcome the challenges posed by distributed computing and unleash the potential of the cloud. Taking a first-principles approach we developed the Hadean platform, which enables programmers to quickly and easily write networked, dynamically-scaling cloud-native applications.

Hadean removes computational bottlenecks by eliminating excessive middleware, orchestration and overengineering, enabling developers to build, ship and scale their applications quickly and cost-effectively.

Hadean sits close to the metal and implements a unique process model that transforms the reliability and scalability of distributed computing, ensuring any application built on it is distributed and dynamically scalable by default.

Our simulation application, Aether Engine, will be used by the Francis Crick Institute to carry out COVID-19 simulations of models that combine analysis of person-to-person interaction with insight into how the virus transmits within an individual, providing a multiscale picture of the pathogen's spread. The project will involve computationally complex data sets, which will be handled by Aether Engine's distributed octree data structure. It dynamically partitions the simulation to provide additional computing power as required, reducing the expense and engineering complexity typically associated with running hyper-scale simulations.
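For readers unfamiliar with the data structure, the sketch below shows the general idea of an octree that subdivides space as point density grows. It is a minimal Python illustration only, not Aether Engine's actual implementation or API; the capacity threshold and point format are invented for the example.

```python
# Minimal illustrative octree: each node holds points until it exceeds a
# capacity, then splits its cube into eight children and redistributes.
class OctreeNode:
    def __init__(self, center, half_size, capacity=8):
        self.center = center        # (x, y, z) midpoint of this cube
        self.half_size = half_size  # half the edge length of the cube
        self.capacity = capacity    # max points before subdividing
        self.points = []
        self.children = []          # eight child cubes once subdivided

    def insert(self, point):
        if self.children:
            self._child_for(point).insert(point)
            return
        self.points.append(point)
        if len(self.points) > self.capacity:
            self._subdivide()

    def _subdivide(self):
        cx, cy, cz = self.center
        h = self.half_size / 2
        for dx in (-h, h):
            for dy in (-h, h):
                for dz in (-h, h):
                    self.children.append(
                        OctreeNode((cx + dx, cy + dy, cz + dz), h, self.capacity))
        for p in self.points:
            self._child_for(p).insert(p)
        self.points = []

    def _child_for(self, point):
        cx, cy, cz = self.center
        # Index matches the creation order above: x selects the group of 4,
        # y the pair within it, z the final slot.
        index = (point[0] > cx) * 4 + (point[1] > cy) * 2 + (point[2] > cz)
        return self.children[index]
```

Densely populated regions end up deeper in the tree, which is what allows compute to be concentrated where the simulation is busiest.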

Ultimately, we hope to accurately predict how susceptible an individual is to infection and the likelihood that they can pass it on. The project will provide near real-time analysis, and enable governments and health organisations to make more informed decisions when planning protective measures, which will ultimately ease the burden on the healthcare infrastructure and save lives.

Technology is in a constant state of rapid evolution, and the recent pandemic has highlighted the need for greater computing power, and the ability to access it quickly and cheaply. In pandemics we also see a proliferation of data becoming available, and therefore greater demand for cloud technologies to store, structure, and analyse these outputs.

Unleashing the potential of both big data and big compute means that we are now able to rigorously explore avenues such as personalised and precision medicine. In order to make the most of the computation power available we need to build cloud-native applications - hence why we created Hadean.

In 2019 we partnered with the Francis Crick Institute to explore Hadean's applicability to the life sciences via an Innovate UK grant. We used Aether Engine, our spatial simulation engine, to investigate how we could accelerate computational tools for docking two protein structures together, including a number of interactions involved in certain cancers.

We were able to significantly reduce the computation time required to sample millions of possible protein structures - as sampling on Aether Engine can be easily parallelised with no extra effort for the developer, and using more diverse inputs meant fewer docking runs were needed. We published this work in the journal Proteins and renewed the contract to build on this work and model COVID-19 transmission.

Aether Engine is a spatial simulation application. It scales across different processors and physical machines, utilising more computing power as the simulations grow in complexity and size. It can run complex simulations, quickly and at massive scale; drastically improving the speed, scalability and reliability of cloud and distributed computing systems.

The Hadean Platform (formerly Hadean OS) is a cloud-native operating system, which exists to implement the distributed process model. Applications built on the platform, such as our cardinal application Aether Engine, are distributed by default, without the need for containers. The Hadean platform dynamically scales as more or less computing power is required by efficiently splitting a computational task and allocating it out to CPUs across a given cloud system, rather than those of a single server or cluster.
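The platform itself is proprietary, but the generic fan-out pattern being described, splitting a computation and farming the pieces out to available CPUs, can be sketched with Python's standard library. The work function and chunk sizes below are placeholders and have nothing to do with Hadean's actual APIs.

```python
# Illustrative only: split a computation across local CPU cores and combine
# the partial results. A distributed platform applies the same pattern across
# many machines instead of one.
from multiprocessing import Pool
import os

def simulate_chunk(chunk):
    # Stand-in for one partition of a larger computational task.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    workers = os.cpu_count()
    chunk_size = len(data) // workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    with Pool(processes=workers) as pool:
        partial_results = pool.map(simulate_chunk, chunks)

    print(sum(partial_results))
```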

We've got a number of exciting projects in the pipeline. Most recently we've been helping epidemiological researchers at Imperial College via the RAMP initiative on their work to create massive spatial social networks of the UK to inform models being used to develop the NHSX contact tracing app.

See the original post here:
Taking the middleware out of high performance computing - TechRadar

Read More..

Quantum Computing gains as it kicks off commercialization phase with its Mukai quantum computing software – Proactive Investors USA & Canada

Set up at the beginning of 2018, Quantum bills itself as the first publicly traded pure-play quantum computing company

Quantum Computing Inc, an advanced technology company developing quantum-ready applications and tools, said Wednesday that it is set to gain as it has entered the key commercialization phase as the only public pure-play in the quantum computing space.

The Leesburg, Virginia-based company has kicked off the official commercial launch of its Mukai quantum computing software execution platform. Last week, the company introduced a new trial access program that demonstrates Mukai's power to solve real-world problems.

Quantum's stock recently traded 1.3% higher at $3.91 a share in New York.

READ:Quantum Computing launches free trial of Mukai quantum computing application platform

According to the company, the trial will enable developers to discover how they can migrate their existing applications to quantum-ready solutions and realize superior performance even when running their solutions on classical Intel or AMD processor-based computers.

"The trial is designed to encourage and facilitate quantum application development to solve real-world problems at breakthrough speed, and not tomorrow, but today," the company said in a statement.

"There are only a handful of quantum software experts in the world, and fortunately for us, this includes Mike and Steve," commented Quantum CEO Robert Liscouski. "They have been doing an outstanding job building out our software engineering teams, developing our first quantum-ready products, and preparing QCI for commercial success."

Quantum kicked off 2020 with the public release of its first quantum-ready software product, the QCI Quantum Asset Allocator (QAA). This solution is designed to help portfolio managers maximize returns by calculating their optimal asset allocations, said the company.

"QAA is the first of a series of Quantum products that will leverage quantum techniques to provide differentiated performance on both classical computers and on a variety of early-stage quantum computers," added the company. Naturally, Quantum is looking to convert its QAA beta users into long-term customers.

"The core of our strategy has been to anticipate the direction of the market and be ahead of it by offering unique solutions that establish QCI as a market leader," said Liscouski. "We will be driven by the market, but in turn will drive the market by helping our customers realize their quantum-enabled future."

The company said that while quantum computing is typically a high-dollar investment given the "sophisticated and costly hardware," Mukai makes quantum application development affordable and scalable compared to running solutions on intermediate quantum computers, like those offered by D-Wave, Fujitsu, IBM and Rigetti.

Mukai addresses the quantum computing market which is tipped to grow at a 23.2% compound annual growth rate to $9.1 billion by 2030, according to Tractica.

Contact the author Uttara Choudhury at [emailprotected]

Follow her on Twitter: @UttaraProactive

Continued here:

Quantum Computing gains as it kicks off commercialization phase with its Mukai quantum computing software - Proactive Investors USA & Canada

Read More..

Is quantum computing ready to leap into the real world? – ARNnet

Market research firm IDC predicts that by 2023, 25% of Fortune 500 companies will gain a competitive advantage from quantum computing.

It's a bold prediction given the current dearth of real-world examples of quantum computing in action. However, there's plenty of industry activity to back up IDC's forecast. In fact, early this year at the Consumer Electronics Show the biggest buzz wasn't the newest smartphone, wearable device or autonomous-driving technology, but rather unprecedented computing power based on an area of quantum physics Albert Einstein described as "spooky action at a distance."

While quantum computing hasn't yet factored into solving worldwide problems such as the coronavirus pandemic, that is exactly the type of problem quantum has the potential to address. That potential will turn into a reality, according to IBM, one of a handful of tech giants leading the quantum charge. "This is the decade that quantum computing gets real," says Katie Pizzolato, director at IBM QStart.

For that reason, Pizzolato said, it was important to keep quantum public-facing rather than keep it a technology buried in research facilities. "We wanted to get quantum out of the labs and into the real world," she said in reference to IBM's strong presence at CES.

Companies such as Google, Microsoft, D-Wave and Rigetti are also eager to move quantum forward, and based on IDC's recent report, Quantum Computing Adoption Trends: 2020 Survey Findings, the technology is building momentum.

According to responses from 520 IT and line-of-business professionals, quantum computing budgets and implementations will increase in the next 18-24 months. Half of all respondents to the IDC survey reported that funds allocated for quantum computing accounted for just 0-2% of the annual IT infrastructure budget in 2019, but will account for 7-10% in the next 24 months. For companies with more than 10,000 employees, the spending increase is more dramatic: more than half of respondents will spend between 9% and 14% on quantum technology over the next two years.

Respondents to the IDC survey were clear where they are focusing their attention: 65% of respondents use or plan to use cloud-based quantum computing, followed by 45% who use or plan to use quantum algorithms (which includes simulators, optimizations, artificial intelligence, machine learning and deep learning). Quantum networks (44%), hybrid quantum computing (40%) and quantum cryptography (33%) round out the top five, according to the IDC survey.

Heather West, IDC senior research analyst, Infrastructure Systems, Platforms and Technology, and one of the report's authors, says that quantum computing excels at solving large problems where there's so much data. The initial areas of focus will be AI, business intelligence and overall productivity and efficiency, according to the IDC report.

"Very few companies have actually operationalized [quantum computing]. The skillsets are so advanced, and few people really understand quantum," West said, adding that we're still at the experimentation stage with algorithms as companies also look to overcome challenges such as cost, security and data transfers between vendors. West points out, however, that there are already practical use cases in areas such as manufacturing and finance.

Right now, West says, the focus is on how to optimize processes. However, in the future, quantum will be applied to larger problems such as how to address climate change and cure diseases.

As IDC's West says, quantum computing isn't without its challenges. IDC cites complex technology, skillset limitations, a lack of available resources, cost, security, and data transfer among vendors as barriers to adoption. With so many challenges, it's not surprising that, when selecting vendors to support quantum technology initiatives, big names dominate the responses in the IDC survey. Google tops the list with 37% of respondents citing it as the vendor of choice, followed by Microsoft with 32%, IBM with 27% and Intel with 23%.

What makes quantum computing more powerful than classical computing is that rather than relying on binary bits (i.e., either a 1 or a 0), quantum computing uses qubits. Qubits can process more data because they can exist in many possible combinations of 1 and 0 simultaneously, known as superposition, processing an enormous number of outcomes.

In addition to superposition, pairs of qubits can be "entangled." This entanglement is what makes quantum computers as powerful as they are. What makes it even more intriguing is that no one knows how or why it works, prompting that "spooky action" description from Einstein.

In classical computing, doubling the number of bits gives you, as you'd expect, twice the computing power. However, thanks to entanglement, adding more qubits gives you exponentially more processing power.
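A rough way to see the difference is to count states: n classical bits hold one of 2^n values at a time, while an n-qubit register is described by 2^n amplitudes at once. The toy NumPy sketch below illustrates this counting, along with a two-qubit entangled (Bell) state; it only simulates the arithmetic and is not an example of programming a real quantum machine.

```python
# Toy illustration: state counting for bits vs. qubits, plus a Bell state.
import numpy as np

n = 3
print(f"{n} classical bits -> one of {2**n} states at a time")
print(f"{n} qubits -> a state vector of {2**n} complex amplitudes")

# Two-qubit Bell state (maximally entangled): (|00> + |11>) / sqrt(2)
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)

# Measurement probabilities: only 00 and 11 ever occur, and they are correlated.
probs = np.abs(bell) ** 2
for basis, p in zip(["00", "01", "10", "11"], probs):
    print(basis, round(p, 3))
```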

If processing power potential is the good news on qubits, their fragile nature is the bad news. "Not all qubits are created equal," IBM's Pizzolato says. Qubits are unpredictable and susceptible to environmental noise and errors. After an error they fall back to a binary state of 1 or 0, so the longer the calculation runs without an error, the greater the calculation. "The goal is to protect against errors to solve the most challenging problems," Pizzolato says.

How common are these errors? A slight fluctuation in temperature or vibration can cause what's known as "decoherence." And, once a qubit is in decoherence, its calculation has failed and must be run again. For that reason, quantum computers are housed in environments of near absolute zero and with little outside disruption.

More qubits help. "The 50-qubit range is when you start to supersede what you can achieve on a supercomputer," says Pizzolato. IBM last fall announced its 14th quantum computer, a 53-qubit system. Its previous quantum computers were 20 qubits. However, quantum is more than qubits. "Hardware is at the center of the circle, but then you have the algorithms and the applications," says Pizzolato. More sophisticated algorithms are critical to quantum computing's real-world success. "Quantum is all about the algorithms you can run and the complexity of those algorithms," she says.

Skills gaps are a challenge for IT in general. With quantum computing, it's magnified. Where will the quantum development come from? Peter Rutten, research director and one of the authors of the IDC report, says that the algorithms and application development will come from three distinct personas:

Developers who are intrigued with quantum computing, developers with a physics background (because there are not many jobs in physics) and those working in high-performance-computing operations. "It's a seamless transition from HPC algorithms to quantum," Rutten says.

On the one hand, Google, IBM and others appear to be jostling for position in achieving quantum advantage (the point at which quantum computing can solve a problem faster than classical computing) and quantum supremacy (when quantum computing solves a problem that no conventional computer can solve). In fact, IBM recently publicly refuted Google's claim of achieving quantum supremacy with its 53-qubit computer, its researchers saying that Google failed to fully estimate the resources of a supercomputer, publishing this in an IBM Research blog last October:

"Building quantum systems is a feat of science and engineering, and benchmarking them is a formidable challenge," according to an IBM quantum-computing blog. "Google's experiment is an excellent demonstration of the progress in superconducting-based quantum computing, showing state-of-the-art gate fidelities on a 53-qubit device, but it should not be viewed as proof that quantum computers are supreme over classical computers."

On the other hand, despite the top-tier vendors seemingly jockeying for quantum positions, IDC's Rutten said, it's not about competitors going head-to-head. "It's hard to compare. No one can tell you [who's ahead] because they are measuring progress in different ways," he says. "The notion of quantum being a race is silly."

IDC's West concurs, saying that quantum advances will come from the developer community and technology partnerships. "It's not so much a race to the end, because there may not be just one answer."

For its part, IBM has a network of 100 partnerships spanning the commercial (e.g., Goldman Sachs, ExxonMobil, Accenture and others), academic (e.g., MIT, Virginia Tech, Johns Hopkins and dozens of others), startup, government and research sectors.

Even with the likes of Google, IBM and Microsoft pushing quantum computing to go from advantage to supremacy, no one knows where the big innovation will come from, Pizzolato says. "The MVP is probably a guy in a lab."


See the rest here:

Is quantum computing ready to leap into the real world? - ARNnet

Read More..

2 books to deepen your command of python machine learning – TechTalks

Image credit: Depositphotos

This post is part of AI education, a series of posts that review and explore educational content on data science and machine learning. (In partnership with Paperspace)

Mastering machine learning is not easy, even if you're a crack programmer. I've seen many people come from a solid background of writing software in different domains (gaming, web, multimedia, etc.) thinking that adding machine learning to their roster of skills is another walk in the park. It's not. And every single one of them has been dismayed.

I see two reasons why the challenges of machine learning are misunderstood. First, as the name suggests, machine learning is software that learns by itself, as opposed to being instructed on every single rule by a developer. This is an oversimplification that many media outlets with little or no knowledge of the actual challenges of writing machine learning algorithms often use when speaking of the ML trade.

The second reason, in my opinion, is the many books and courses that promise to teach you the ins and outs of machine learning in a few hundred pages (and the ads on YouTube that promise to net you a machine learning job if you pass an online course). Now, I don't want to vilify any of those books and courses. I've reviewed several of them (and will review some more in the coming weeks), and I think they're invaluable sources for becoming a good machine learning developer.

But they're not enough. Machine learning requires both good coding and math skills and a deep understanding of various types of algorithms. If you're doing Python machine learning, you have to have in-depth knowledge of many libraries and also master the many programming and memory-management techniques of the language. And, contrary to what some people say, you can't escape the math.

And all of that can't be summed up in a few hundred pages. Rather than a single volume, the complete guide to machine learning would probably look like Donald Knuth's famous The Art of Computer Programming series.

So, what is all this tirade for? In my exploration of data science and machine learning, I'm always on the lookout for books that take a deep dive into topics that are skimmed over by the more general, all-encompassing books.

In this post, I'll look at Python for Data Analysis and Practical Statistics for Data Scientists, two books that will help deepen your command of the coding and math skills required to master Python machine learning and data science.

Python for Data Analysis, 2nd Edition, is written by Wes McKinney, the creator of pandas, one of the key libraries used in Python machine learning. Doing machine learning in Python involves loading and preprocessing data in pandas before feeding it to your models.

Most books and courses on machine learning provide an introduction to the main pandas components, such as DataFrames and Series, and some of the key functions, such as loading data from CSV files and cleaning rows with missing data. But the power of pandas is much broader and deeper than what you see in a chapter's worth of code samples in most books.

In Python for Data Analysis, McKinney takes you through the entire functionality of pandas and manages to do so without making it read like a reference manual. There are lots of interesting examples that build on top of each other and help you understand how the different functions of pandas tie in with each other. You'll go in-depth on things such as cleaning, joining, and visualizing data sets, topics that are usually only discussed briefly in most machine learning books.
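To give a flavour of that workflow, here is a minimal, hypothetical pandas example (the file and column names are made up) that loads two CSVs, handles missing values, joins them and summarizes the result:

```python
# A hedged sketch of a typical pandas cleaning-and-joining workflow.
import pandas as pd

users = pd.read_csv("users.csv")             # e.g. columns: user_id, age, country
events = pd.read_csv("events.csv")           # e.g. columns: user_id, event, value

users = users.dropna(subset=["age"])         # drop rows missing a key field
events["value"] = events["value"].fillna(0)  # or impute instead of dropping

# Join the two tables and summarize per country.
merged = events.merge(users, on="user_id", how="inner")
summary = merged.groupby("country")["value"].agg(["count", "mean"])
print(summary.head())
```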

You'll also get to explore some very important challenges, such as memory management and code optimization, which can become a big deal when you're handling very large data sets in machine learning (which you often do).
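On the memory-management side, two techniques the library supports are explicit dtypes (including the category type for low-cardinality strings) and chunked reading. A small sketch, again with placeholder file and column names:

```python
# Aggregate a large CSV incrementally instead of loading it all into memory.
import pandas as pd

dtypes = {"country": "category", "value": "float32", "user_id": "int32"}

total = 0.0
for chunk in pd.read_csv("big_events.csv", dtype=dtypes, chunksize=1_000_000):
    total += chunk["value"].sum()  # running aggregate over each chunk
print(total)
```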

What I also like about the book is the finesse that has gone into choosing subjects to fit in the 500 pages. While most of the book is about pandas, McKinney has taken great care to complement it with material about other important Python libraries and topics. You'll get a good overview of array-oriented programming with NumPy, another important Python library often used in machine learning in concert with pandas, and some important techniques for using Jupyter Notebooks, the tool of choice for many data scientists.

All this said, don't expect Python for Data Analysis to be a very fun book. It can get boring because it just discusses working with data (which happens to be the most boring part of machine learning). There won't be any end-to-end examples where you'll get to see the result of training and using a machine learning algorithm or integrating your models in real applications.

My recommendation: You should probably pick up Python for Data Analysis after going through one of the introductory or advanced books on data science or machine learning. Having that introductory background on working with Python machine learning libraries will help you better grasp the techniques introduced in the book.

While Python for Data Analysis improves your data-processing and -manipulation coding skills, the second book we'll look at, Practical Statistics for Data Scientists, 2nd Edition, will be the perfect resource to deepen your understanding of the core mathematical logic behind many key algorithms and concepts that you often deal with when doing data science and machine learning.

The book starts with simple concepts such as different types of data, means and medians, standard deviations, and percentiles. Then it gradually takes you through more advanced concepts such as different types of distributions, sampling strategies, and significance testing. These are all concepts you have probably learned in math class or read about in data science and machine learning books.
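For instance, the opening concepts translate directly into a few lines of NumPy on a made-up sample:

```python
# Mean, median, standard deviation and a percentile on a toy, skewed sample.
import numpy as np

incomes = np.array([32, 35, 38, 41, 45, 52, 61, 75, 110, 250], dtype=float)

print("mean:", incomes.mean())                    # pulled up by the outliers
print("median:", np.median(incomes))              # robust to the outliers
print("std (sample):", incomes.std(ddof=1))
print("90th percentile:", np.percentile(incomes, 90))
```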

But again, the key here is specialization.

On the one hand, the depth that Practical Statistics for Data Scientists brings to each of these topics is greater than you'll find in machine learning books. On the other hand, every topic is introduced along with coding examples in Python and R, which makes it more suitable than classic statistics textbooks. Moreover, the authors have done a great job of disambiguating the way different terms are used in data science and other fields. Each topic is accompanied by a box that provides all the different synonyms for popular terms.

As you go deeper into the book, you'll dive into the mathematics of machine learning algorithms such as linear and logistic regression, K-nearest neighbors, trees and forests, and K-means clustering. In each case, like the rest of the book, there's more focus on what's happening under the algorithm's hood than on using it for applications. But the authors have again made sure the chapters don't read like classic math textbooks, and the formulas and equations are accompanied by nice coding examples.
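As a concrete counterpart to that treatment, here is a short scikit-learn sketch of one of those algorithms, logistic regression, fitted on a purely synthetic dataset (the data and parameters are illustrative, not from the book):

```python
# Fit logistic regression on synthetic data and inspect the learned weights.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("coefficients:", model.coef_)        # the weights the math chapters derive
print("test accuracy:", model.score(X_test, y_test))
```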

Like Python for Data Analysis, Practical Statistics for Data Scientists can get a bit boring if you read it end to end. There are no exciting applications or a continuous process where you build your code through the chapters. But on the other hand, the book has been structured in a way that you can read any of the sections independently without the need to go through previous chapters.

My recommendation: Read Practical Statistics for Data Scientists after going through an introductory book on data science and machine learning. I definitely recommend reading the entire book once, though to make it more enjoyable, go topic by topic in between your exploration of other machine learning courses. Also keep it handy. You'll probably revisit some of the chapters from time to time.

I would definitely count Python for Data Analysis and Practical Statistics for Data Scientists as two must-reads for anyone who is on the path of learning data science and machine learning. Although they might not be as exciting as some of the more practical books, you'll appreciate the depth they add to your coding and math skills.

View original post here:
2 books to deepen your command of python machine learning - TechTalks

Read More..

Deep learning’s role in the evolution of machine learning – TechTarget

Machine learning had a rich history long before deep learning reached fever pitch. Researchers and vendors were using machine learning algorithms to develop a variety of models for improving statistics, recognizing speech, predicting risk and other applications.

While many of the machine learning algorithms developed over the decades are still in use today, deep learning -- a form of machine learning based on multilayered neural networks -- catalyzed a renewed interest in AI and inspired the development of better tools, processes and infrastructure for all types of machine learning.

Here, we trace the significance of deep learning in the evolution of machine learning, as interpreted by people active in the field today.

The story of machine learning starts in 1943 when neurophysiologist Warren McCulloch and mathematician Walter Pitts introduced a mathematical model of a neural network. The field gathered steam in 1956 at a summer conference on the campus of Dartmouth College. There, 10 researchers came together for six weeks to lay the ground for a new field that involved neural networks, automata theory and symbolic reasoning.

The distinguished group, many of whom would go on to make seminal contributions to this new field, gave it the name artificial intelligence to distinguish it from cybernetics, a competing area of research focused on control systems. In some ways these two fields are now starting to converge with the growth of IoT, but that is a topic for another day.

Early neural networks were not particularly useful -- nor deep. Perceptrons, the single-layered neural networks in use then, could only learn linearly separable patterns. Interest in them waned after Marvin Minsky and Seymour Papert published the book Perceptrons in 1969, highlighting the limitations of existing neural network algorithms and causing the emphasis in AI research to shift.

"There was a massive focus on symbolic systems through the '70s, perhaps because of the idea that perceptrons were limited in what they could learn," said Sanmay Das, associate professor of computer science and engineering at Washington University in St. Louis and chair of the Association for Computing Machinery's special interest group on AI.

The 1973 publication of Pattern Classification and Scene Analysis by Richard Duda and Peter Hart introduced other types of machine learning algorithms, reinforcing the shift away from neural nets. A decade later, Machine Learning: An Artificial Intelligence Approach by Ryszard S. Michalski, Jaime G. Carbonell and Tom M. Mitchell further defined machine learning as a domain driven largely by the symbolic approach.

"That catalyzed a whole field of more symbolic approaches to [machine learning] that helped frame the field. This led to many Ph.D. theses, new journals in machine learning, a new academic conference, and even helped to create new laboratories like the NASA Ames AI Research branch, where I was deputy chief in the 1990s," said Monte Zweben, CEO of Splice Machine, a scale-out SQL platform.

In the 1990s, the evolution of machine learning made a turn. Driven by the rise of the internet and increase in the availability of usable data, the field began to shift from a knowledge-driven approach to a data-driven approach, paving the way for the machine learning models that we see today.

The turn toward data-driven machine learning in the 1990s was built on research done by Geoffrey Hinton at the University of Toronto in the mid-1980s. Hinton and his team demonstrated the ability to use backpropagation to build deeper neural networks.

"This was a major breakthrough enabling new kinds of pattern recognition that were previously not feasible with neural nets," Zweben said. This added new layers to the networks and a way to strengthen or weaken connections back across many layers in the network, leading to the term deep learning.

Although possible in a lab setting, deep learning did not immediately find its way into practical applications, and progress stalled.

"Through the '90s and '00s, a joke used to be that 'neural networks are the second-best learning algorithm for any problem,'" Washington University's Das said.

Meanwhile, commercial interest in AI was starting to wane because the hype around developing an AI on par with human intelligence had gotten ahead of results, leading to an AI winter, which lasted through the 1980s. What did gain momentum was a type of machine learning using kernel methods and decision trees that enabled practical commercial applications.

Still, the field of deep learning was not completely in retreat. In addition to the ascendancy of the internet and increase in available data, another factor proved to be an accelerant for neural nets, according to Zweben: namely, distributed computing.

Machine learning requires a lot of compute. In the early days, researchers had to keep their problems small or gain access to expensive supercomputers, Zweben said. The democratization of distributed computing in the early 2000s enabled researchers to run calculations across clusters of relatively low-cost commodity computers.

"Now, it is relatively cheap and easy to experiment with hundreds of models to find the best combination of data features, parameters and algorithms," Zweben said. The industry is pushing this democratization even further with practices and associated tools for machine learning operations that bring DevOps principles to machine learning deployment, he added.

Machine learning is also only as good as the data it is trained on, and if data sets are small, it is harder for the models to infer patterns. As the data created by mobile, social media, IoT and digital customer interactions grew, it provided the training material deep learning techniques needed to mature.

By 2012, deep learning attained star status after Hinton's team won ImageNet, a popular data science challenge, for their work on classifying images using neural networks. Things really accelerated after Google subsequently demonstrated an approach to scaling up deep learning across clusters of distributed computers.

"The last decade has been the decade of neural networks, largely because of the confluence of the data and computational power necessary for good training and the adaptation of algorithms and architectures necessary to make things work," Das said.

Even when deep neural networks are not used directly, they indirectly drove -- and continue to drive -- fundamental changes in the field of machine learning, including the following:

Deep learning's predictive power has inspired data scientists to think about different ways of framing problems that come up in other types of machine learning.

"There are many problems that we didn't think of as prediction problems that people have reformulated as prediction problems -- language, vision, etc. -- and many of the gains in those tasks have been possible because of this reformulation," said Nicholas Mattei, assistant professor of computer science at Tulane University and vice chair of the Association for Computing Machinery's special interest group on AI.

In language processing, for example, a lot of the focus has moved toward predicting what comes next in the text. In computer vision as well, many problems have been reformulated so that, instead of trying to understand geometry, the algorithms are predicting labels of different parts of an image.
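A toy example of that "predict what comes next" framing is a bigram model that, given a word, proposes the most frequent word that followed it in a small corpus. The sentence below is made up purely for illustration:

```python
# Count word-to-next-word transitions and predict the most common follower.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # 'cat' -- the most frequent continuation here
```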

The power of big data and deep learning is changing how models are built. Human analysis and insights are being replaced by raw compute power.

"Now, it seems that a lot of the time we have substituted big databases, lots of GPUs, and lots and lots of machine time to replace the deep problem introspection needed to craft features for more classic machine learning methods, such as SVM [support vector machine] and Bayes," Mattei said, referring to the Bayesian networks used for modeling the probabilities between observations and outcomes.

The art of crafting a machine learning problem has been taken over by advanced algorithms and the millions of hours of CPU time baked into pretrained models so data scientists can focus on other projects or spend more time on customizing models.

Deep learning is also helping data scientists solve problems with smaller data sets and to solve problems in cases where the data has not been labeled.

"One of the most relevant developments in recent times has been the improved use of data, whether in the form of self-supervised learning, improved data augmentation, generalization of pretraining tasks or contrastive learning," said Juan José López Murphy, AI and big data tech director lead at Globant, an IT consultancy.

These techniques reduce the need for manually tagged and processed data. This is enabling researchers to build large models that can capture complex relationships representing the nature of the data and not just the relationships representing the task at hand. López Murphy is starting to see transfer learning being adopted as a baseline approach, where researchers can start with a pretrained model that only requires a small amount of customization to provide good performance on many common tasks.
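As an illustration of that baseline approach, the sketch below uses PyTorch and torchvision as one assumed choice of framework (not something specified in the article): load a pretrained image model, freeze its body and retrain only a small new head for the downstream task.

```python
# Hedged transfer-learning sketch: reuse pretrained features, train a new head.
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="DEFAULT")   # pretrained on ImageNet
for param in backbone.parameters():
    param.requires_grad = False                 # freeze the learned features

num_classes = 5                                 # hypothetical downstream task
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
# Only the new head's parameters are trainable; fine-tune it with a normal
# training loop on the small, task-specific dataset.
```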

There are specific fields where deep learning provides a lot of value, in image, speech and natural language processing, for example, as well as time series forecasting.

"The broader field of machine learning is enhanced by deep learning and its ability to bring context to intelligence. Deep learning also improves [machine learning's] ability to learn nonlinear relationships and manage dimensionality with systems like autoencoders," said Luke Taylor, founder and COO at TrafficGuard, an ad fraud protection service.

For example, deep learning can find more efficient ways to auto encode the raw text of characters and words into vectors representing the similarity and differences of words, which can improve the efficiency of the machine learning algorithms used to process it. Deep learning algorithms that can recognize people in pictures make it easier to use other algorithms that find associations between people.
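The idea of encoding words as vectors and comparing them can be shown with a toy example; the three-dimensional vectors below are invented for demonstration (real embeddings are learned and have hundreds of dimensions), with similarity measured by cosine distance.

```python
# Toy word vectors compared by cosine similarity.
import numpy as np

embeddings = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.78, 0.70, 0.12]),
    "apple": np.array([0.10, 0.05, 0.90]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print("king vs queen:", round(cosine(embeddings["king"], embeddings["queen"]), 3))
print("king vs apple:", round(cosine(embeddings["king"], embeddings["apple"]), 3))
```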

More recently, there have been significant jumps using deep learning to improve the use of image, text and speech processing through common interfaces. People are accustomed to speaking to virtual assistants on their smartphones and using facial recognition to unlock devices and identify friends in social media.

"This broader adoption creates more data, enables more machine learning refinement and increases the utility of machine learning even further, pushing even further adoption of this tech into people's lives," Taylor said.

Early machine learning research required expensive software licenses. But deep learning pioneers began open sourcing some of the most powerful tools, which has set a precedent for all types of machine learning.

"Earlier, machine learning algorithms were bundled and sold under a licensed tool. But, nowadays, open source libraries are available for any type of AI applications, which makes the learning curve easy," said Sachin Vyas, vice president of data, AI and automation products at LTI, an IT consultancy.

Another factor in democratizing access to machine learning tools has been the rise of Python.

"The wave of open source frameworks for deep learning cemented the prevalence of Python and its data ecosystem for research, development and even production," Globant's López Murphy said.

Many of the different commercial and free options got replaced, integrated or connected to a Python layer for widespread use. As a result, Python has become the de facto lingua franca for machine learning development.

Deep learning has also inspired the open source community to automate and simplify other aspects of the machine learning development lifecycle. "Thanks to things like graphical user interfaces and [automated machine learning], creating working machine learning models is no longer limited to Ph.D. data scientists," Carmen Fontana, IEEE member and cloud and emerging tech practice lead at Centric Consulting, said.

For machine learning to keep evolving, enterprises will need to find a balance between developing better applications and respecting privacy.

Data scientists will need to be more proactive in understanding where their data comes from and the biases that may inadvertently be baked into it, as well as develop algorithms that are transparent and interpretable. They also need to keep pace with new machine learning protocols and the different ways these can be woven together with various data sources to improve applications and decisions.

"Machine learning provides more innovative applications for end users, but unless we're choosing the right data sets and advancing deep learning protocols, machine learning will never make the transition from computing a few results to providing actual intelligence," said Justin Richie, director of data science at Nerdery, an IT consultancy.

"It will be interesting to see how this plays out in different industries and if this progress will continue even as data privacy becomes more stringent," Richie said.

More:
Deep learning's role in the evolution of machine learning - TechTarget

Read More..

What I Learned From Looking at 200 Machine Learning Tools – Machine Learning Times – machine learning & data science news – The Predictive…

Originally published in Chip Huyen Blog, June 22, 2020

To better understand the landscape of available tools for machine learning production, I decided to look up every AI/ML tool I could find. The resources I used include:

After filtering out applications companies (e.g. companies that use ML to provide business analytics), tools that aren't being actively developed, and tools that nobody uses, I got 202 tools. See the full list. Please let me know if there are tools you think I should include but aren't on the list yet!

Disclaimer

This post consists of 6 parts:

I. Overview
II. The landscape over time
III. The landscape is under-developed
IV. Problems facing MLOps
V. Open source and open-core
VI. Conclusion

I. OVERVIEW

In one way to generalize the ML production flow that I agreed with, it consists of 4 steps:

I categorize the tools based on which step of the workflow they support. I don't include Project setup since it requires project management tools, not ML tools. This isn't always straightforward since one tool might help with more than one step. Their ambiguous descriptions don't make it any easier: "we push the limits of data science, transforming AI projects into real-world business outcomes", "allows data to move freely, like the air you breathe", and my personal favorite: "we lived and breathed data science".

I put the tools that cover more than one step of the pipeline into the category that they are best known for. If they're known for multiple categories, I put them in the All-in-one category. I also include the Infrastructure category to cover companies that provide infrastructure for training and storage. Most of these are cloud providers.

To continue reading this article click here.

Continue reading here:
What I Learned From Looking at 200 Machine Learning Tools - Machine Learning Times - machine learning & data science news - The Predictive...

Read More..