
Best of VMworld: Harnessing Data Center Modernization and the Public Cloud with SBG-SMIT – Marketscreener.com

VMworld 2021 was full of incredible content across our general sessions, technical demos, and hands-on labs. Among the highlights were our customer conversations, in which we learned how organizations around the globe are leveraging VMware technologies to reach unprecedented heights.

We heard from Josef Schmid, Group IT infrastructure manager at SGB-SMIT, on the organization's journey to the cloud and data center modernization. SGB-SMIT is a power transformer manufacturer headquartered in Regensburg, Germany, with 13 locations around the world. The challenge for the IT team at SGB-SMIT was the need to quickly scale data and processes to keep up with doubling production growth while keeping employees connected across locations.

How it All Started

Under the new leadership of CIO Salvatore Cassara in 2016, the first step in SGB-SMIT's cloud journey was consolidating the Office platforms and moving to Office 365, a common first step for many organizations. After the Office 365 migration, the team realized that their existing virtual desktop environment wouldn't support their growing need for high-performance capabilities such as 3D CAD, especially for their new subsidiary in Romania.

SGB-SMIT then made a bold decision to consolidate their data centers and create a single global data center operating as a private cloud. As their first cloud project, the organization opted for a high-end, 3D CAD virtual desktop infrastructure to support their engineers who needed a high-performance environment to fulfil their roles.

Public Cloud Migration

To achieve their public cloud migration goals, SGB-SMIT used both VMware Cloud on AWS and Google Cloud VMware Engine, managing both as one entity from a single vCenter and vSphere environment. In fact, with Google Cloud VMware Engine, SGB-SMIT was able to deploy nearly 1,000 desktops into the cloud within 8 weeks.

Future Plans

The end goal for SGB-SMIT is to move everything that is currently on-prem to the cloud, including virtual servers and end user desktops. This move will result in the best user experience since there will be no need to transfer big data loads over their local network.

The plan for the on-premises infrastructure is that it will run only production-related systems that need low-latency access, for critical situations that require accessing data in their private cloud.

The Human Side

Sometimes the "human" element can be the most difficult part of change, even harder than the technical side. To combat this during their shift to a central private cloud and the public cloud, SGB-SMIT created a robust plan to ensure that all employees' technical needs would be met and that their experience would be seamless.

As the IT team already had a lot of experience with virtualization, the team received support and training to the point that motivation was high. But to ensure that all end users would also have a smooth experience with the new central private cloud, SGB-SMIT set up VDI "test teams" across departments to guarantee that users would be able to work as efficiently as they did previously.

Ultimately, SGB-SMIT has built infrastructure that is highly scalable, sustainable, and innovative, with the potential to leverage new cloud services such as data analytics and machine learning. Becoming an enterprise with multi-cloud infrastructure has enabled SGB-SMIT to accomplish incredible technological feats such as building 3D-accelerated VDI, supporting rapid desktop deployment, and more. To learn more about the successes and challenges of SGB-SMIT's multi-cloud journey, watch their VMworld session here:

Go here to see the original:
Best of VMworld: Harnessing Data Center Modernization and the Public Cloud with SBG-SMIT - Marketscreener.com

Read More..

PSA | Unauthorized iPhone 13 Screen Replacements Will Break Face ID – iDrop News

Over the past few years, Apple has been making unauthorized DIY repairs more difficult with its latest iPhones, and now it looks like it's crossed another big line with the iPhone 13 and iPhone 13 Pro.

According to iFixit, replacing a screen on Apple's newest iPhone models has become even more challenging, since you now risk breaking Face ID in the process.

This isn't the first time we've seen something like this happen, but it's definitely a first for Apple's Face ID authentication system. It has long been a problem with Apple's older Touch ID-equipped iPhones, where the Touch ID sensor would fail to work after the display was replaced.

Since the display, home button, and Touch ID components are all connected, the new sensor needs to be cryptographically paired up using a process only available to authorized Apple repair shops. If you have your display replaced by anybody else, they can't perform this step, so the iPhone won't recognize the Touch ID sensor.

To make matters worse, a security feature in iOS actually took this to an extreme a few years ago, bricking the iPhones of users who had unauthorized display repairs done. Apple eventually fixed this issue following regulatory fines and threats of class-action lawsuits, but now some are fearing that it may be rearing its head again in the iPhone 13.

Apple has been chipping away at iPhone repair work outside their control for years now. With new changes to the iPhone 13, they may be aiming to shatter the market completely.

In its experiments with the new iPhone 13 models, iFixit discovered that replacing the screen on an iPhone 13 disables its Face ID functionality even if you simply swap two screens between two otherwise identical models.

The problem seems to stem from a chip about the size of a Tic-Tac that's tucked away at the bottom of the screen. This small microcontroller is uniquely linked to the specific iPhone 13 that it came with, using a technique known as serialization.

During startup, the iPhone 13 checks to see if this chip matches what it's supposed to be, and if it doesn't find the correct chip, it disables Face ID.

What's most onerous about this, according to iFixit, is that the screen technically has nothing to do with the Face ID hardware, so there doesn't appear to be any justifiable reason for this except as a means for Apple to shut down unauthorized repairs of its newest iPhone models.

This unprecedented lockdown is unique to Apple. It's totally new in the iPhone 13, and hard to understand as a security measure, given that the Face ID illuminator is entirely separate from the screen. It is likely the strongest case yet for right to repair laws.

Authorized Apple service technicians have access to proprietary tools that allow them to sync the serial numbers of the iPhone and the display via Apple's cloud servers. However, these tools naturally aren't available to independent repair shops.

Although Apple introduced a new Independent Repair Provider (IRP) program two years ago, the terms of the program are said to be ridiculously Draconian, giving Apple the right to randomly inspect participating companies, without notice, to search for and identify the use of prohibited repair parts, even up to five years after they leave the program.

Needless to say, there aren't too many independent repair shops interested in signing away their rights just to gain access to Apple's authorized repair parts, especially since they can't claim to be Apple Authorized Service Providers (that's a much harder designation to achieve), and they're even required to make their customers sign waivers acknowledging that they're not getting "real" Apple repairs.

According to iFixit, some of the most sophisticated repair shops have been able to work around this limitation by swapping the actual chip from the original display into a new one, but that's a procedure that's not for the faint of heart.

For one thing, it requires a microscope, and then on top of that, you need somebody who is not only skilled with microsoldering but also has the necessary equipment to do this work.

"Three out of 10 shops solder. One out of [those] three can do BGA [microsoldering] work."

That said, for some repair shops, this isn't a new problem. At least one told iFixit they've been doing these kinds of screen chip swaps since the 2017 iPhone X, simply to avoid potential screen calibration issues and genuine part warnings. They've managed to get the process down to 15 minutes, so this new requirement with the iPhone 13 is unlikely to be a problem.

In fact, this particular repair shop has gone so far as to build an inventory of refurbished and third-party replacement screens that already have the chip slot carved out and ready to go.

For customers who want to fix their iPhone 13 themselves, the options are grim. You could live without any kind of biometric login, like you might have in 2012. Or you could try to move the chip, after buying yourself a microscope or high-resolution webcam, a hot air rework station, a fine-tip soldering iron, and the necessary BGA stencils, flux, and other supplies.

This isn't the only problem that independent technicians face, however, as there are still other things that remain the exclusive domain of Apple's authorized technicians. For instance, not only can they make an iPhone 13 accept a new screen with only a few clicks inside their secret software, but they're also still the only ones who can keep True Tone working, something that still eludes independent repair techs working on the iPhone 12 and iPhone 13.

One experienced repair tech, Dusten Mahathy, shared third-hand information from Apple support suggesting that this issue was a bug that would be fixed in a future software update, but the problem persists in iOS 15.1.

In fact, as iFixit explains, the only real change from iOS 15.0 to iOS 15.1 is that the iPhone now shows an explicit error message that Face ID has been disabled, rather than just silently failing as it did before. It's possible that this was the fix that Apple was talking about when it spoke with Mahathy's sources.

iFixit is naturally skeptical that this is an accident, especially considering Apple's previous track record. It's done this with Touch ID, batteries, and cameras, so why shouldn't the display be next?

Technically, yes: Face ID failure could be a very specific hardware bug for one of the most commonly replaced components, one that somehow made it through testing, didn't get fixed in a major software update, and just happens to lock out the kind of independent repair from which the company doesn't profit.

Apple has staunchly opposed Right to Repair laws for years, spending millions of dollars lobbying against them in various states, so it certainly has no reason to make things easier for DIYers and small independent repair shops. However, with this latest change, it would appear it's going out of its way to make things harder instead.

There's still a possibility that Apple could fix this in a future iOS update, likely by simply warning the user about an unverified screen rather than shutting down a core feature. It wouldn't even be the first time it's done this, as we saw the same restriction on iPhone 12 camera replacements last year, which went from being non-functional in iOS 14 to simply showing an "Unable to Verify" warning in iOS 14.4.

Even if Apple makes that move, however, it's hard to rule out the possibility that the company has merely been testing the waters to see how much of this kind of component locking it can get away with.

Read the original here:
PSA | Unauthorized iPhone 13 Screen Replacements Will Break Face ID - iDrop News

Read More..

India’s cloud end-user spending to grow by 35% in 2021: Gartner – Business Today

Reflecting a structural shift in the operation of Indian businesses post-pandemic, end-user spending on public cloud services in the country is forecast to grow by another 30 per cent in 2022, according to a report by the tech research and consulting firm Gartner. It will be the fourth straight year of double-digit growth in this space.

End-user cloud spending in 2021 is estimated to grow by nearly 35 per cent as per the latest forecast, compared with the roughly 31 per cent estimated in April.

The public cloud services market in India is estimated to grow to $7.3 billion in 2022.

"Public cloud services adoption has accelerated since the onset of the global pandemic. The pandemic was a tipping point for Indian businesses to realise the true value of public cloud," said Sid Nag, research vice president at Gartner.

"In India, the policy infrastructure is emerging as an important contributor to public cloud growth. For example, the recently launched public cloud government initiatives Meghraj and Cloud Vision for India 2022 will prove useful for small and medium businesses, or those in the early stage of cloud adoption, to benefit from this technology," he added.

Cloud computing enables users to hire or use software, storage and servers as required, instead of purchasing the entire system. With the growing adoption of new-age technologies such as Big Data, analytics, artificial intelligence (AI) and the Internet of Things (IoT), the Indian cloud infrastructure market has seen a tremendous rise in demand.

Gartner said that initiatives targeted towards building a skilled cloud workforce in partnership with private IT service providers will contribute to the government's effort of strengthening the public cloud ecosystem in the country.

Business entities are increasingly migrating their data to the cloud, and IT companies are investing heavily in the cloud business to cater to accelerating demand across the globe. Wipro, through its FullStride Cloud Services, has committed an investment of $1 billion in cloud technologies, capabilities, acquisitions and partnerships over the next three years, and employs over 79,000 cloud professionals. Infosys, through its one-year-old cloud platform Cobalt, is offering 35,000 cloud assets and over 300 industry cloud solution blueprints.

Several start-ups in India are also driving cloud adoption, and traditional businesses in sectors like education and retail are migrating to the cloud as well.

Indian CIOs are expected to focus their cloud investment on cloud system infrastructure services (IaaS). This segment is forecast to total $2.4 billion in 2022, up 40 per cent from 2021, and IaaS will make up 32.3 per cent of total investments in public cloud services in 2022.
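As a quick back-of-the-envelope check, the IaaS projection and its stated share imply a total public cloud figure close to the $7.3 billion cited above. The Python sketch below uses only the figures quoted in this article, not Gartner's model:

```python
# Sanity-check the quoted figures against each other.
iaas_2022 = 2.4          # IaaS forecast for 2022, US$ billions
iaas_growth = 0.40       # stated growth over 2021
iaas_share_2022 = 0.323  # stated share of total public cloud spending in 2022

iaas_2021 = iaas_2022 / (1 + iaas_growth)          # roughly US$1.7 billion
implied_total_2022 = iaas_2022 / iaas_share_2022   # roughly US$7.4 billion

print(f"Implied IaaS spend in 2021: ${iaas_2021:.1f}B")
print(f"Implied total public cloud spend in 2022: ${implied_total_2022:.1f}B")
# The implied total (~$7.4B) is broadly consistent with the ~$7.3B figure above.
```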

"Public cloud growth continues to be driven by organisations that want to modernise their IT and reduce their capital expenditure spend," said Nag. He further added that the desire for agility and innovation in both business transformation and IT operations is also fueling the growth of public cloud.

"The next step in the growth of cloud in India will be the adoption of cloud-native technologies. Indian CIOs will look to reimagine and refashion their applications and workloads using containers and microservices, as well as artificial intelligence (AI) and machine learning (ML)," said Nag.

Read the original:
India's cloud end-user spending to grow by 35% in 2021: Gartner - Business Today

Read More..

The Worldwide Cloud Music Services Industry is Expected to Reach $19+ Billion by 2026 – PRNewswire

DUBLIN, Nov. 4, 2021 /PRNewswire/ -- The "Cloud Music Services Market - Forecasts from 2021 to 2026" report has been added to ResearchAndMarkets.com's offering.

The cloud music services market is expected to grow at a compound annual growth rate of 29.48% over the forecast period to reach a market size of US$19.158 billion in 2026 from US$3.140 billion in 2019.
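For readers who want to check the arithmetic, the quoted CAGR follows directly from the 2019 and 2026 figures. A minimal Python sketch, using only the numbers in the sentence above:

```python
# Verify the compound annual growth rate implied by the quoted market sizes.
start_value = 3.140   # 2019 market size, US$ billions
end_value = 19.158    # 2026 forecast, US$ billions
years = 2026 - 2019   # seven-year forecast horizon

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # prints 29.48%, matching the figure quoted
```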

Cloud music services allow users to save their music collections on the cloud as well as on local storage devices and stream music across many devices. These services usually operate on a freemium basis, which allows users to store data for free up to a specific amount before having to pay a small fee to utilise cloud services. Many cloud services feature user interfaces that let users access music from a range of devices, including MP3 players, laptops, cellphones, gaming consoles and set-top boxes. Using the servers, one can create a number of playlists and listen to them from anywhere.

The cloud has revolutionized the music industry and how people listen to digital music. The smartphone has largely become the device of choice for accessing cloud-based music services due to its increasing penetration and coverage. According to the IFPI Global Music Report 2021, total streaming revenues increased by 19.9% to $13.4 billion, accounting for 62.1% of total global recorded music revenue.

Cloud music services are generally used to expand music access by overcoming constraints imposed by device storage space or lack of ownership. Streaming, subscription, and other cloud services provide listeners service agreements that allow them to rent music for a charge or under specific circumstances. One of the main trends predicted to gain momentum in the cloud music services market is the rising incorporation of analytics in the music business. Record labels can give clients a personalised music experience thanks to the help of curators, editors, and

Growth Factors

Rising use of smartphones

The adoption of cloud music services is rising at a faster rate as the use of smartphones grows. It solves the problem of saving tracks on a smartphone: with a smartphone, users can listen to their favourite music whenever they want. As the number of smartphones grows, so does the use of mobile cloud services, resulting in growth in the mobile cloud services market. Furthermore, with enhanced mobile network technology, users can have high-speed internet access to playlists on the cloud, which is expected to drive the cloud music services market.

Restraints

High bandwidth with fast streaming

High-speed network bandwidth is required to stream music online, and a stable network is required to maintain continuous streaming. Many developing nations lack strong network connections, which might have a negative impact on the market for cloud music services. In the coming years, however, efforts by telecommunications providers and government services to establish extensive network connectivity may overcome these constraints. In addition, because it is customary for users to share their IDs and passwords, the privacy concerns connected with music collections might be a big stumbling block for the market.

COVID-19 Impact

The COVID-19 pandemic is forecasted to have a positive influence on the cloud music services market, as all kinds of entertainment have gone online due to the outbreak. People were compelled to adapt to new kinds of media due to the lack of movement and the closing of all entertainment centres, resulting in the rise of streaming services.

Key Topics Covered:

1. Introduction

2. Research Methodology

3. Executive Summary

4. Market Dynamics
4.1. Market Drivers
4.2. Market Restraints
4.3. Porter's Five Forces Analysis
4.3.1. Bargaining Power of Suppliers
4.3.2. Bargaining Power of Buyers
4.3.3. The Threat of New Entrants
4.3.4. Threat of Substitutes
4.3.5. Competitive Rivalry in the Function
4.4. Function Value Chain Analysis

5. Cloud Music Services Market Analysis, By Type
5.1. Introduction
5.2. Downloadable
5.3. Subscription
5.4. Streaming

6. Cloud Music Services Market Analysis, By Geography

7. Competitive Environment and Analysis
7.1. Major Players and Strategy Analysis
7.2. Emerging Players and Market Lucrativeness
7.3. Mergers, Acquisitions, Agreements, and Collaborations
7.4. Vendor Competitiveness Matrix

8. Company Profiles
8.1. Apple, Inc.
8.2. Spotify
8.3. Amazon
8.4. Pandora
8.5. Sound Cloud
8.6. KKBOX
8.7. Youtube
8.8. Deezer SA
8.9. Saavn LLC
8.10. Gaana.com

For more information about this report visit https://www.researchandmarkets.com/r/bfn74x

Media Contact:

Research and Markets Laura Wood, Senior Manager [emailprotected]

For E.S.T. Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907 Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

http://www.researchandmarkets.com

Continued here:
The Worldwide Cloud Music Services Industry is Expected to Reach $19+ Billion by 2026 - PRNewswire

Read More..

Tips and tricks to minimise downtime while you migrate to the cloud – ITWeb

If you've been following this migration article series, you should be ready to move on to the next phase of your journey: getting to the good stuff and migrating your workloads.

Before we get there, let's quickly recap all the things you should have in place before you kick off your migration.

1. You know what your security and compliance goals are, and what services you can (or can't) use in your cloud architecture.

2. You have deployed a couple of tools that will make the migration journey just a little bit easier.

3. You understand your total cost of ownership (TCO).

4. You know what you will be doing with each application during the migration and which of the six migration strategies you'll follow for each application.

5. You've reviewed your operations and determined if any changes need to be made to effectively migrate to the cloud and manage the environment when you are there.

6. Shared responsibility: this is key. You might have read about this before; go read it again.

Okay, so you are now ready to kick off the migration, but how do you make it as easy as possible while ensuring there is very little to no downtime of your applications?

Jaco Venter, head of cloud management at BBD, an international software development company with expertise in cloud enablement services, says that while there are quite a few options that could help make the move to the AWS cloud a little easier, there are two strategies he believes work very well.

CloudEndure

A tool from AWS, CloudEndure has two primary functions: disaster recovery and migrations.

The migration component is called AWS Application Migration Service (AWS MGN).

Using a component such as AWS MGN allows you to maintain normal business operations throughout the replication process while you migrate to the cloud. Because it continuously replicates source servers, there is little to no performance impact on your operations during this process. Continuous replication also makes it easy to conduct non-disruptive tests and shorten cutover windows while you move the network identity to the cloud environment.

"AWS MGN reduces overall migration costs with no need to invest in multiple migration solutions, specialised cloud development or application-specific skills, because it can be used to lift and shift any application from any source infrastructure that runs supported operating systems (OS)," explains Venter.

Here's an example of how the process works:

Implementation begins by installing the AWS Replication Agent on your source servers. Once it's installed, you can view and define the replication settings. AWS MGN uses these settings to create and manage a staging area subnet with lightweight Amazon EC2 instances that act as replication servers and low-cost staging Amazon EBS volumes.

These replication servers receive data from the agent running on your source servers and write this data to the staging Amazon EBS volumes. Your replicated data is compressed and encrypted in transit and can be encrypted at rest using EBS encryption. AWS MGN keeps your source servers up to date on AWS using continuous, block-level data replication. It uses your defined launch settings to launch instances when you conduct non-disruptive tests or perform a cutover.

When you launch test or cutover instances, the service automatically converts your source servers to boot and run natively. After confirming that your launched instances are operating properly, you can decommission your source servers. You can then choose to modernise your applications by leveraging additional services and capabilities. In a nutshell, utilising components such as the AWS Application Migration Service during your migration means that you can keep your operations running while simultaneously moving them to the cloud; it's a great trick for ensuring a smooth transition.
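For teams that prefer to script this flow, the same test-then-cutover sequence can be driven through the AWS SDK. The sketch below (Python, boto3 'mgn' client) is an illustration rather than BBD's or AWS's reference implementation: it assumes the replication agent is already installed and credentials are configured, and the exact parameter shapes should be checked against the current MGN API documentation.

```python
# Illustrative sketch: drive an AWS MGN test/cutover wave with boto3.
import boto3

mgn = boto3.client("mgn")

# 1. List the source servers MGN knows about and their replication state.
servers = mgn.describe_source_servers(filters={"isArchived": False})["items"]
for server in servers:
    state = server.get("dataReplicationInfo", {}).get("dataReplicationState")
    print(server["sourceServerID"], state)

# 2. Launch non-disruptive test instances for a wave of servers.
wave_ids = [s["sourceServerID"] for s in servers][:5]  # e.g. the first five servers
mgn.start_test(sourceServerIDs=wave_ids)

# 3. Once testing is signed off, launch the cutover instances. Source servers
#    keep replicating until you finalize the cutover and decommission them.
mgn.start_cutover(sourceServerIDs=wave_ids)
```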

He adds that AWS MGN is also a great option for lift-and-shift projects, but in some migrations you might be modernising your application, and therefore can't simply replicate and decommission on one side and switch on the new environment without any intervention.

That's where services such as Route 53 come in.

Amazon Route 53 is a highly available and scalable cloud domain name system (DNS) web service. It is designed to give software engineers and businesses an extremely reliable and cost-effective way to route end users to internet applications by translating names like http://www.example.com into numeric IP addresses such as 192.0.2.1. Why does this need to happen? DNS web services are important because they are what computers use to connect to each other.

Using Route 53's DNS routing function means you can manage which environment your users connect to, a handy tip if you want to make sure that your end users experience no disruption or break in what they're viewing during your migration. At the point where you have completed your internal testing and are ready to switch over to your new cloud environment, DNS routing allows you to redirect traffic so that your users seamlessly connect to the new environment. During this process your users won't be impacted and can simply continue operating as if nothing changed.

In the event that something does not quite go as planned ("And it can happen!" remarks Venter), you can redirect traffic back to your previous environment, allowing you to address any challenges and then switch back to the new cloud set-up once again.
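To make the cutover, and a possible rollback, concrete, here is a hedged sketch using weighted Route 53 records via boto3. The hosted zone ID, record names and endpoints are hypothetical placeholders; shifting traffic in either direction is just a matter of changing the weights.

```python
# Sketch: weighted Route 53 records for cutover and rollback (placeholder values).
import boto3

route53 = boto3.client("route53")
ZONE_ID = "Z0000000EXAMPLE"  # hypothetical hosted zone ID


def set_traffic_split(on_prem_weight: int, cloud_weight: int) -> None:
    """Upsert two weighted CNAME records so Route 53 splits traffic by weight."""
    changes = []
    for set_id, target, weight in [
        ("on-prem", "app.onprem.example.com", on_prem_weight),
        ("cloud", "app.cloud.example.com", cloud_weight),
    ]:
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": set_id,
                "Weight": weight,
                "TTL": 60,  # short TTL so weight changes take effect quickly
                "ResourceRecords": [{"Value": target}],
            },
        })
    route53.change_resource_record_sets(
        HostedZoneId=ZONE_ID,
        ChangeBatch={"Comment": "migration traffic shift", "Changes": changes},
    )


set_traffic_split(on_prem_weight=0, cloud_weight=100)    # cut over to the cloud
# set_traffic_split(on_prem_weight=100, cloud_weight=0)  # roll back if needed
```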

A migration plan is all about mitigating risk and ensuring there are clearly defined metrics of what "good" looks like. It's also about knowing at what point you should deem the project a success and when to roll back. What's that old adage? Failing to plan is planning to fail.

One last tip from Venter and BBD: Make it easy to roll back if you have to.

If you're in need of a cloud partner to guide you through the tools, compliance, strategies, operational changes and downtime dependencies, reach out to BBD for more on its cloud enablement services.

More here:
Tips and tricks to minimise downtime while you migrate to the cloud - ITWeb

Read More..

Global Cloud Music Services Market (2021 to 2026) – Featuring Apple, Spotify and Amazon Among Others – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "Cloud Music Services Market - Forecasts from 2021 to 2026" report has been added to ResearchAndMarkets.com's offering.

The cloud music services market is expected to grow at a compound annual growth rate of 29.48% over the forecast period to reach a market size of US$19.158 billion in 2026 from US$3.140 billion in 2019.

Companies Mentioned

Cloud music services allow users to save their music collections on the cloud as well as on local storage devices and stream music across many devices. These services usually operate on a freemium basis, which allows users to store data for free up to a specific amount before having to pay a small fee to utilise cloud services. Many cloud services feature user interfaces that let users access music from a range of devices, including MP3 players, laptops, cellphones, gaming consoles and set-top boxes. Using the servers, one can create a number of playlists and listen to them from anywhere.

The cloud has revolutionized the music industry and how people listen to digital music. The smartphone has largely become the device of choice for accessing cloud-based music services due to its increasing penetration and coverage. According to the IFPI Global Music Report 2021, total streaming revenues increased by 19.9% to $13.4 billion, accounting for 62.1% of total global recorded music revenue.

Cloud music services are generally used to expand music access by overcoming constraints imposed by device storage space or lack of ownership. Streaming, subscription, and other cloud services provide listeners service agreements that allow them to rent music for a charge or under specific circumstances. One of the main trends predicted to gain momentum in the cloud music services market is the rising incorporation of analytics in the music business. Record labels can give clients a personalised music experience thanks to the help of curators, editors, and

Growth Factors

Rising use of smartphones

The adoption of cloud music services is rising at a faster rate as the use of smartphones grows. It solves the problem of saving tracks on a smartphone: with a smartphone, users can listen to their favourite music whenever they want. As the number of smartphones grows, so does the use of mobile cloud services, resulting in growth in the mobile cloud services market. Furthermore, with enhanced mobile network technology, users can have high-speed internet access to playlists on the cloud, which is expected to drive the cloud music services market.

Restraints

High bandwidth with fast streaming

High-speed network bandwidth is required to stream music online, and a stable network is required to maintain continuous streaming. Many developing nations lack strong network connections, which might have a negative impact on the market for cloud music services. In the coming years, however, efforts by telecommunications providers and government services to establish extensive network connectivity may overcome these constraints. In addition, because it is customary for users to share their IDs and passwords, the privacy concerns connected with music collections might be a big stumbling block for the market.

COVID-19 Impact

The COVID-19 pandemic is forecasted to have a positive influence on the cloud music services market, as all kinds of entertainment have gone online due to the outbreak. People were compelled to adapt to new kinds of media due to the lack of movement and the closing of all entertainment centres, resulting in the rise of streaming services.

Key Topics Covered:

1. Introduction

2. Research Methodology

3. Executive Summary

4. Market Dynamics

4.1. Market Drivers

4.2. Market Restraints

4.3. Porter's Five Forces Analysis

4.4. Function Value Chain Analysis

5. Cloud Music Services Market Analysis, By Type

5.1. Introduction

5.2. Downloadable

5.3. Subscription

5.4. Streaming

6. Cloud Music Services Market Analysis, By Geography

7. Competitive Environment and Analysis

7.1. Major Players and Strategy Analysis

7.2. Emerging Players and Market Lucrativeness

7.3. Mergers, Acquisitions, Agreements, and Collaborations

7.4. Vendor Competitiveness Matrix

8. Company Profiles

For more information about this report visit https://www.researchandmarkets.com/r/9uat8q

Read more:
Global Cloud Music Services Market (2021 to 2026) - Featuring Apple, Spotify and Amazon Among Others - ResearchAndMarkets.com - Business Wire

Read More..

Kyndryl set for IBM spin-off: Can it grow ecosystem, innovation and revenue? – SmartPlanet.com

Kyndryl, a managed services giant spun off from IBM, will officially become a publicly traded independent company on Wednesday, and the company has a long to-do list that includes boosting innovation, delivering revenue growth and forging a cohesive employee culture.

Martin Schroeter, CEO of Kyndryl, said at the company's inaugural investor day that Kyndryl will "ramp up our focus on innovation, going after new market opportunity and using our experience and our IP to benefit our customers."

In the meantime, Kyndryl will remain known for being the largest integrator, with $19.1 billion in revenue and 90,000 employees. According to Gartner, Kyndryl will be the largest implementation services provider, followed by DXC, Atos, Fujitsu and Accenture.

Kyndryl operates in 63 countries, manages 750,000 virtual servers, 270,000 network devices and 25,000 SAP and Oracle systems.

Schroeter's plan revolves around extending its implementation and managed services into other areas with more growth. Here's a look at the plan, markets and potential growth through 2024. In short, Kyndryl will ride intelligent automation, data services, cloud services and security to deliver more value and enable digital transformation.

The argument for Kyndryl is that companies are starting their digital transformations and the company has time to expand even as it simplifies customer infrastructure. Schroeter also said Kyndryl will offer an ESG platform and strategy to address customers' environmental, social and governance challenges.

Among the key areas Kyndryl aims to address:

Indeed, Kyndryl has the customer base to expand. It has more than 4,000 customers and only 15% of revenue comes from the top 10. Kyndryl counts 75% of the Fortune 100 as customers and the average customer relationship is more than 10 years.

But the challenge will be pivoting Kyndryl's story from implementation to innovation.

Kyndryl's investor day revolved around convincing Wall Street that the company was a solid investment. IBM shareholders will receive one Kyndryl share for every five IBM shares held. Kyndryl shares are distributed after market close on Nov. 3, with trading under the KD ticker beginning on Nov. 4.

As for the balance sheet, Kyndryl will start with $2 billion in cash and $3.2 billion of debt with an incremental $3 billion credit facility.

The revenue streams for Kyndryl are also predictable. The company said that about 85% of its expected revenue is under contract at the start of every year. In addition, ABN Amro recently announced a $400 million tech services deal with Kyndryl.

Wall Street analysts were generally cautious following Kyndryl's investor day. For instance, Wedbush analyst Moshe Katri said in a research note that Kyndryl will need to manage cannibalization to its services business and cut costs with restructuring. "We see a long and challenging road for a recovery at Kyndryl," said Katri.

Perhaps the biggest issue facing Kyndryl is that it must operate in an environment that's moving toward cloud models with little capital investment up front and a heavy dose of automation. Simply put, Kyndryl has its own transformation to deliver.

Kyndryl doesn't expect revenue growth until 2025 and there is potential sales contraction leading up to that date. Stifel Nicolaus analyst David Grossman said there are multiple opportunities to expand as Kyndryl expands its ecosystem and partnerships.

Kyndryl's management team is roughly split between IBM executives, IBM alums and external hires. The diversified set of opinions and experiences is something that can set Kyndryl apart, said Schroeter.

Indeed, Kyndryl's executive team includes former CIOs of State Street, GE and NBC Universal.

The company's name is derived from the words "kinship" and "tendril" to evoke growth and working together well.

At the Kyndryl investor day, executives emphasized that culture and people were the core assets for success. Kyndryl noted that its employees are continually learning, earning certifications and badges and reskilling on the fly.

More importantly, Kyndryl has been expanding its skillsets in Amazon Web Services, Microsoft Azure and Google Cloud. Those skills will be critical to making Kyndryl a broader player.

To celebrate the spin-off, Kyndryl will plant a tree for each employee. The company will also aim to build a purpose-driven firm from the ground up.

Excerpt from:
Kyndryl set for IBM spin-off: Can it grow ecosystem, innovation and revenue? - SmartPlanet.com

Read More..

The buzz about cloud-based document management systems and why it is likely to become mainstream – YourStory

In 2001, IDC reported that workers creating, managing or editing documents were spending up to 2.5 hours a day on average searching for what they needed. By 2012, IDC's Information Worker Survey reported that workers were spending about five hours per week searching for documents. The reduction in the time to find information can be attributed to technology solutions in the area of document management. Despite all the technological advances, documents continue to be stored and managed electronically in an unstructured way, making accessibility and security a key challenge. A more recent survey conducted by the Economist Intelligence Unit (EIU) shows that employees spend 25 percent of their time searching for information to do their jobs.

Most documents continue to remain disconnected across organisations of all sizes. The information is often spread across emails, chats, documents, spreadsheets, slides, each of which may reside with different users and in silos, making data gathering extremely challenging.

In addition, with many users wanting easy accessibility, documents are copied, often multiple times, leading to a growing volume. Industry studies point out that 30 percent of document accesses are unsuccessful because the document has been misfiled or has disappeared, or because of challenges with access controls. All of this translates into considerable costs, either directly or in the form of lost employee work time. This is where a digital document management solution (DMS) becomes relevant.

"A digital DMS can address key challenges where manual document management falls short," explains K Bhaskhar, Senior Vice President, Canon India, BIS Division. "With a cloud-based document management system, there is almost 100 percent uptime, so documents can be accessed anytime, anywhere. The system is secure because there is a constant upgradation of the firewall and the overall security software. Most importantly, it is a cost-effective proposition for any organisation as it works on a shared services model, thereby contributing to an organisation becoming more collaborative, agile and efficient."

And sometimes, because work processes are not transparent enough, it might be extremely difficult to find a particular document that is needed, or to establish who is in charge of that document. Not just that: with employees managing too many documents, it is quite possible for them to lose track of documents, resulting in missed deadlines. In a cloud-based DMS, there is version control, and changes made to any document can be easily tracked.

In contrast, a DMS, especially a cloud-based DMS, not only addresses these challenges but also ensures process efficiency and secure, confidential access to data, and is cost-effective. Today, cloud-based DMS, with its high-performance servers and automatic text recognition, makes it possible to store, manage and access large volumes of documents easily. A user can get to the desired document almost instantly, without having to search for hours in file servers or folders for a single piece of information they need.

Venkatesh agrees, saying cloud-based DMS is better than having a dedicated internal SPOC to manage the access to and management of official documents. However, he says that the advantages of cloud-based DMS depend on two critical factors. One, the cloud-based DMS should integrate with existing systems to make the experience and scale seamless, as opposed to providing another set of siloed tools for document management. Any technology solution, no matter how impactful it is, will not get used unless it weaves into the existing way of working, be it integrating with e-mails, social collaboration tools, identity management tools or access control systems, among others. Second, a critical element is ensuring that there is a push for adoption.

"And, when adoption of document management is successful, benefits are more appreciated if we look at the intangibles, in terms of the ease with which teams can now get access to the information they need," says Venkatesh. Sharing an example, he says, "With a DMS in place, a sales SPOC can now get access to authorised information and playbooks of other customers that can be shared at any given point of conversation to drive home the point for a potential customer." The DMS, with its in-built security features, ensures that only relevant content is shared without compromising on security. He explains that this immediate access to relevant information can be critical in driving the customer delight factor in a sales conversation. "With DMS, there is no dependency on people for access to information they need to do their job, which would otherwise take anywhere between a few hours and a few days," he adds.

Agami, a network organisation that was dependent on a paper-based document management system prior to the pandemic, has today made the shift to digital solutions.

Shifting to a digital DMS not only eliminated the need for dedicated office space for managing the paperwork but also brought efficiency to the workflow. "For instance, the immediacy of being able to get access to a document and get it signed digitally translates into a huge advantage," he says.

Today, amidst the pandemic, cloud-based DMS adoption has seen an accelerated rise, as has been the case across technology solutions addressing different use cases in digitisation. "The world today has become fast, from fast food to fast disbursal of cash through ATMs. So why shouldn't access to documents be fast? While adoption has seen an upward curve over the last few years, the curve has been steeper since 2019. And the adoption of DMS has increased significantly since the onset of the pandemic and the shift to a hybrid work environment," says Bhaskhar. He adds that the adoption has been primarily driven to meet the needs of Finance, HR, and IT functions in particular.

And, with organisations shifting to a hybrid workplace structure for the long term, Bhaskhar opines that DMS will become a mainstay technology for companies. Here, he shares that it is a best practice for organisations, especially startups, to start with the digital management of their documents as early as possible. "This will create structure and transparency of information at all levels of the company. It will be easier to manage the flow of information when the startups start achieving economies of scale," he says.

Today, DMS is witnessing interesting innovations. "The innovations with respect to digital rights management, context-aware access, and user provisioning and deprovisioning will further enhance the impact of DMS solutions," says Venkatesh. Bhaskhar points to Canon's DMS, Therefore, as a further illustration.

The increasing shift to a hybrid workplace, work becoming more collaborative and complex, and innovations in DMS are likely to make the case even stronger for businesses of all sizes across sectors to adopt DMS.

See more here:
The buzz about cloud-based document management systems and why it is likely to become mainstream - YourStory

Read More..

Virospack acquires IFSs technology to improve its operational efficiency – Cosmetics Business

4-Nov-2021

Packaging

Virospack, the specialist in droppers for cosmetics, has reached an agreement with the software provider IFS to use their IFS Cloud solution, with the aim of optimising their production and material traceability, as well as reinforcing their environmental commitment.

The production of dropper packs may seem simple and, in fact, just three pieces make up most of these droppers. The Spanish company Virospack has made these products its business, and with 65 years of experience employs 500 people, who design and make high-end cosmetic packs for well-known international brands.

However, there are numerous challenges facing the production of these droppers, from optimising the design and managing the materials used in production, to the development of increasingly sustainable products, which reduce the environmental impact.

This is one of the reasons that has led Virospack to reach an agreement with the Swedish business software provider IFS to integrate the IFS Cloud solution, which will be implemented in its entirety, including the HR, Finance, Business Intelligence, Project Management, CRM, Quality, Logistics and Purchasing modules.

"Companies can use our software to measure how sustainable they are. Thanks to IFS Cloud, Virospack will have multiple parameters that will allow it to reduce its carbon footprint," highlighted the IFS Country Manager for Spain and Portugal, Juan González, in an interview with Europa Press given to mark the beginning of this collaboration with the Barcelona-based company.

Technology and digitalisation are an increasing reality for companies, with 52% affirming that they will increase their spending on digital transformation, according to an IFS study.

However, in addition to improving efficiency, this may also be a lever for environmental change.

Although servers consume electricity, the impact on energy consumption can be reduced by using central servers, in addition to other benefits, such as avoiding the use of paper.

In line with this, Virospack is looking to further improve its current environmental commitments. The Badalona-based company already has the ISO 14001 certificate for environmental management systems, uses 100% renewable energy at source at all of its facilities and 100% certified wood, and recycles 80% of its hazardous waste.

As part of the collaboration agreement, both companies will work together over the next 12 months to develop tools adapted to Virospack's operation. It is expected that IFS's solutions will begin to be used in December 2022 as a result of this strategic agreement, and their implementation will begin this December.

This is according to Virospack's Service Manager, Montserrat Florencio, who explained that there will be a complete integration of all the transactions of IFS Cloud. Antalis Consulting Services, an IFS partner, will also be involved in this implementation, providing their experience in production.

Another cornerstone of the agreement focuses on efficiency, which can be improved thanks to the use of the IFS Cloud solution, launched in March to replace IFS Applications. It allows customers to manage all of their products in a unified way, on a single API-based platform designed for the cloud, but which can be run anywhere.

According to IFS's Country Manager, the company stands out from other business software providers as it provides a single solution, which brings three products together in one platform: the company's ERP (Enterprise Resource Planning), field management and fixed asset management. This integration may result in improved efficiency and avoid integration problems.

"The improvements in efficiency will result from improved interdepartmental communications, providing the benefit of more streamlined flows and greater efficiency," according to Montserrat Florencio.

In Virospack's case, traceability was another aspect that warranted the agreement with IFS. This aspect affects both the environmental commitment (it allows the carbon footprint of each product to be traced) and other aspects, such as the materials used, or even the fact that documents can be accessed with a single click.

"Better traceability will increase our efficiency. We are already working in this area, but it is difficult to obtain data as we have to use different programs, applications and tools," said Virospack's Service Manager.

The arrival of the Covid-19 pandemic put a strain on many manufacturing sectors, such as semiconductors, and the global industry is already suffering from the shortage of processors. However, Virospack managed to obtain a growth in sales of more than 20% in 2020 compared to the previous year.

The company that specialises in dropper packs has highlighted its commitment to innovation, with its agreement to use IFS's ERP, but also with a parallel project to create a new automated warehouse, which will be launched simultaneously.

"Innovation is not something that is improvised; it is not just for the production, design and industrialisation of a product. We also look to provide the company with technological innovation that allows us to improve our current processes," added Florencio.

"The arrival of the pandemic benefitted us, despite its impact," agreed González, who highlighted the efforts of companies to provide their employees with professional software tools. "Software companies have to commit to making the employee's life easier," he concluded.

See the rest here:
Virospack acquires IFSs technology to improve its operational efficiency - Cosmetics Business

Read More..

IonQ Is First Quantum Startup to Go Public; Will It be First to Deliver Profits? – HPCwire

On October 1 of this year, IonQ became the first pure-play quantum computing start-up to go public. At this writing, the stock (NYSE: IONQ) was around $15 and its market capitalization was roughly $2.89 billion. Co-founder and chief scientist Chris Monroe says it was fun to have a few of the company's roughly 100 employees travel to New York to ring the opening bell of the New York Stock Exchange. It will also be interesting to listen to IonQ's first scheduled financial results call (Q3) on November 15.

IonQ is in the big leagues now. Wall Street can be brutal as well as rewarding, although these are certainly early days for IonQ as a public company. Founded in 2015 by Monroe and Duke researcher Jungsang Kim, who is the company's CTO, IonQ now finds itself under a new magnifying glass.

How soon quantum computing will become a practical tool is a matter of debate, although there's growing consensus that it will, in fact, become such a tool. There are several competing flavors (qubit modalities) of quantum computing being pursued. IonQ has bet that trapped ion technology will be the big winner. So confident is Monroe that he suggests other players with big bets on other approaches (think superconducting, for example) are waking up to ion traps' advantages and are likely to jump into ion trap technology as direct competitors.

In a wide-ranging discussion with HPCwire, Monroe talked about ion technology and IonQ's (roughly) three-step plan to scale up quickly; roadblocks facing other approaches (superconducting and photonic); how an IonQ system with about 1,200 physical qubits and home-grown error correction will be able to tackle some applications; and why IonQ is becoming a software company and that's a good thing.

In ion trap quantum computing, ions are held in position by electromagnetic forces where they can be manipulated by laser beams. IonQ uses ytterbium (Yb) atoms. Once the atoms are turned into ions by stripping off one valence electron, IonQ uses a specialized chip called a linear ion trap to hold the ions precisely in 3D space. Literally, they sort of float above the surface. This small trap features around 100 tiny electrodes precisely designed, lithographed, and controlled to produce electromagnetic forces that hold our ions in place, isolated from the environment to minimize environmental noise and decoherence, as described by IonQ.

It turns out ions have naturally longer coherence times and therefore require somewhat less error correction and are suitable for longer operations. This is the starting point for IonQ's advantage. Another plus is that system requirements themselves are less complicated and less intrusive (noise producing) than systems for semiconductor-based, superconducting qubits; think of the need to cram control cables into a dilution refrigerator to control superconducting qubits. That said, all of the quantum computing paradigms are plenty complicated.

For the moment, an ion trap using lasers to interact with the qubits is one of the most straightforward approaches. It has its own scaling challenge, but Monroe contends modular scaling will solve that problem and leverage ion traps' other strengths.

"Repeatability [in manufacturing superconducting qubits] is wonderful but we don't need atomic scale deposition, like you hear of with five nanometer feature sizes on the latest silicon chips," said Monroe. "The atoms themselves are far away from the chips, they're 100 microns, i.e. a 10th of a millimeter away, which is miles atomically speaking, so they don't really see all the little imperfections in the chip. I don't want to say it doesn't matter. We put a lot of care into the design and the fab of these chips. The glass trap has certain features; [for example] it's actually a wonderful material for holding off high voltage compared to silicon."

IonQ started with silicon-based traps and is now moving to evaporated glass traps.

"What is interesting is that we've built the trap to have several zones. This is one of our strategies for scale. Right now, at IonQ, we have exactly one chain of atoms, these are the qubits, and we typically have a template of about 32 qubits. That's as many as we control. You might ask, how come you're not doing 3200 qubits? The reason is, if you have that many qubits, you better be able to perform lots and lots of operations and you need very high quality operations to get there. Right now, the quality of our operation is approaching 99.9%. That is a part per 1000 error," said Monroe.

"This is sort of a back of the envelope calculation, but that would mean that you can do about 1000 ops. There's an intuition here [that] if you have n qubits, you really want to do about n² ops. The reason is, you want these pairwise operations, and you want to entangle all possible pairs. So if you have 30 qubits, you should be able to get to about 1000 ops. That's sort of where we are now. The reason we don't have 3200 yet is that if you have 3200 qubits, you should be able to do 10 million ops, and that means your noise should be one part in 10⁷. We're not there yet. We have a strategy to get there," said Monroe.
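Monroe's intuition reduces to a couple of lines of arithmetic. The sketch below is a back-of-the-envelope reading of the figures he quotes (gate error sets an ops budget of roughly 1/error, and an n-qubit machine wants about n² ops), not an IonQ formula:

```python
# Back-of-the-envelope version of the qubits-vs-error-rate intuition quoted above.
import math

def ops_budget(gate_error: float) -> float:
    """Rough number of operations before errors dominate: about 1/error."""
    return 1.0 / gate_error

def useful_qubits(gate_error: float) -> float:
    """If an n-qubit machine wants about n^2 ops, n scales as sqrt(ops budget)."""
    return math.sqrt(ops_budget(gate_error))

print(round(useful_qubits(1e-3)))  # ~32 qubits at a 99.9% gate fidelity
print(round(useful_qubits(1e-7)))  # ~3,160 qubits, roughly the 3200 Monroe cites
```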

While you could put more ions in a trap, controlling them becomes more difficult. Long chains of ions become soft and squishy. A smaller chain is really stiff [and] much less noisy. So 32 is a good number. 16 might be a good number. 64 is a good number, but it's going to be somewhere probably under 100 ions, said Monroe.

The first part of the strategy for scaling is to have multiple chains on a chip that are separated by a millimeter or so, which prevents crosstalk and permits local operations. "It's sort of like a multi-core classical architecture, like the multi-core Pentium or something like that. This may sound exotic, but we actually physically move the atoms, we bring them together, the multiple chains, to connect them. There's no real wires. This is sort of the first [step] in rolling out a modular scale-up," said Monroe.

In proof of concept work, IonQ announced the ability to arbitrarily move four chains of 16 atoms around in a trap, bringing them together and separating them without losing any of the atoms. "It wasn't a surprise we were able to do that," said Monroe. "But it does take some design in laying out the electrodes. It's exactly like surfing, you know, the atoms are actually surfing on an electric field wave, and you have to design and implement that wave. That was the main result there. In 2022, we're going to use that architecture in one of our new systems to actually do quantum computations."

There are two more critical steps in IonQs plan for scaling. Error correction is one. Clustering the chips together into larger systems is the other. Monroe tackled the latter first.

"Think about modern datacenters, where you have a bunch of computers that are hooked together by optical fibers. That's truly modular, because we can kind of plug and play with optical fibers," said Monroe. He envisions something similar for trapped ion quantum computers. Frankly, everyone in the quantum computing community is looking at clustering approaches and how to use them effectively to scale smaller systems into larger ones.

This interface between individual atom qubits and photonic qubits has been done. In fact, my lab at University of Maryland did this for the first time in 2007. That was 14 years ago. We know how to do this, how to move memory quantum bits of an atom onto a propagating photon and actually, you do it twice. If you have a chip over here and a chip over here, you bring two fibers together, and they interfere and you detect the photons. That basically makes these two photons entangled. We know how to do that.

"Once we get to that level, then we're sort of in manufacturing mode," said Monroe. "We can stamp out chips. We imagine having rack-mounted chips, probably multicore. Maybe we'll have several hundred atoms on that chip, and a few of the atoms on the chip will be connected to optical conduits, and that allows us to connect to the next rack-mounted system," he said.

The key enabler, said Monroe, is a nonblocking optical switch. "Think of it as an old telephone operator. They have, let's say they have 100 input ports and 100 output ports. And the operator connects any input to any output. Now, there are a lot of connections, a lot of possibilities there. But these things exist, these automatic operators using mirrors, and so forth. They're called n-by-n, nonblocking optical switches and you can reconfigure them," he said.

"What's cool about that is you can imagine having several hundred rack-mounted, multi-core quantum computers, and you feed them into this optical switch, and you can then connect any multi-core chip to any other multi-core chip. The software can tell you exactly how you want to network. That's very powerful as an architecture because we have a so-called full connection there. We won't have to move information to a nearest neighbor and shuttle it around to swap; we can just do it directly, no matter where you are," said Monroe.
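
A toy software model helps show what "nonblocking" buys you: any idle input can be patched straight to any idle output, and the routing table can be rewritten on the fly, so no chip has to relay traffic for its neighbors. The class and port numbers below are made up for illustration; this is not a real photonic switch driver.

```python
# Minimal model of an n-by-n nonblocking optical switch: any free input port
# can be routed to any free output port, and routes can be reconfigured.

class NonblockingSwitch:
    def __init__(self, n_ports: int) -> None:
        self.n_ports = n_ports
        self.routes: dict[int, int] = {}      # input port -> output port

    def connect(self, inp: int, out: int) -> None:
        """Patch input `inp` straight through to output `out` if both are free."""
        if inp in self.routes or out in self.routes.values():
            raise ValueError("port already in use")
        self.routes[inp] = out

    def disconnect(self, inp: int) -> None:
        """Reconfigure: tear down the path starting at `inp`."""
        self.routes.pop(inp, None)

# Connect chip 0 directly to chip 73; no nearest-neighbor shuttling required.
switch = NonblockingSwitch(100)
switch.connect(0, 73)
switch.connect(5, 12)
print(switch.routes)                          # {0: 73, 5: 12}
```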

The third leg is error correction, which without question is a daunting challenge throughout quantum computing. The relative unreliability of qubits means you need many redundant physical qubits (estimates vary widely on how many) to have a single reliable logical qubit. Ions are among the better-behaved qubits. For starters, all the ions are literally identical and not subject to manufacturing defects. A slight downside is that ion qubit switching speed is slower than that of other modalities, which some observers say may hamper efficient error correction.

Said Monroe, "The nice thing about trapped ion qubits is their errors are already pretty good natively. Passively, without any fancy stuff, we can get to three or four nines[i] before we run into problems."

What are those problems? "I don't want to say they're fundamental, but there are brick walls that require a totally different architecture to get around," said Monroe. "But we don't need to get better than three or four nines because of error correction. This is sort of a software encoding. The price you pay for error correction, just like in classical error correction encoding, is you need a lot more bits to redundantly encode. The same is true in quantum. Unfortunately, with quantum there are many more ways you can have an error."

Just how many physical qubits are needed for a logical qubit is something of an open question.

"It depends on what you mean by logical qubit. There's a difference in philosophy in the way we're going forward compared to many other platforms. Some people have this idea of fault-tolerant quantum computing, which means that you can compute infinitely long if you want. It's a beautiful theoretical result. If you encode in a certain way, with enough overhead, you can actually run gates as long as you want. But to get to that level, the overhead is something like 100,000 to one, [and] in some cases a million to one, but that logical qubit is perfect, and you get to go as far as you want [in terms of number of gate operations]," he said.

IonQ is taking a different tack, one that leverages software more than hardware, thanks to the ions' stability and the comparatively quiet supporting system [the ion trap]. He frames improving qubit quality as "buying a nine" of reliability.

"We're going to gradually leak in error correction only as needed. So we're going to buy a nine with an overhead of about 16 physical qubits to one logical qubit. With another overhead of 32 to one, we can buy another nine. By then we will have five nines and several hundred logical qubits. This is where things are going to get interesting, because then we can do algorithms that you can't simulate classically, [such] as some of these financial models we're doing now. This is optimizing some function, but it's doing better than the classical result. That's where we think we will be at that point," he said.
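
That arithmetic can be tabulated in a few lines. The starting point of roughly three nines and the 16:1 and 32:1 overheads come from Monroe's quotes; treating each encoding step as buying exactly one nine is an illustrative assumption, not an IonQ specification.

```python
# Rough tabulation of Monroe's "buying a nine" arithmetic. The native fidelity
# (~three nines) and the 16:1 / 32:1 overheads come from his quotes; the
# one-nine-per-step bookkeeping is an illustrative assumption.

native_nines = 3                      # ~99.9% per-op fidelity "without any fancy stuff"
steps = [(16, "first encoded nine"), (32, "second encoded nine")]

nines = native_nines
for overhead, label in steps:
    nines += 1                        # each encoding step buys one more nine
    print(f"{label}: ~{overhead}:1 physical-to-logical, "
          f"~{nines} nines (logical error ~ 1e-{nines:02d})")

# first encoded nine: ~16:1 physical-to-logical, ~4 nines (logical error ~ 1e-04)
# second encoded nine: ~32:1 physical-to-logical, ~5 nines (logical error ~ 1e-05)
```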

Monroe didn't go into detail about exactly how IonQ does this, but he emphasized that software is the big driver now at IonQ. "Our whole approach at IonQ is to throw everything up to software as much as we can. That's because we have these perfectly replicable atomic qubits, and we don't have manufacturing errors, we don't have to worry about a yield or anything like that; everything is a control problem."

So how big a system do you need to run practical applications?

"That's a really good question, because I can safely say we don't exactly know the answer to that. What we do know is [that] if you get to about 100 qubits, maybe 72, or something like that, and these qubits are good enough, meaning that you can do tens of thousands of ops. Remember, with 100 qubits you want to do about 10,000 ops to [do] something you can't simulate classically. This is where you might deploy some machine learning techniques that you would never be able to do classically. That's probably where the lowest-hanging fruit are," said Monroe.

"Now for us to get to 100 [good] qubits and, say, 50,000 ops, that requires about 1,000 physical qubits, maybe 1,500 physical qubits. We're looking at 1,200 physical qubits, and this might be 16 cores with 64 ions in each core before we have to go to photonic connections. But the photonic connection is the key because [it's] where you start to have a truly modular data center. You can stamp these things out. At that point, we're just going to be making these things like crazy and wiring them together. I think we'll be able to do interesting things before we get to that stage, and it will be important if we can show some kind of value (application results/progress) and that we have the recipe for scaling indefinitely; that's a big deal," he said.
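
The sizes in that quote are easy to sanity-check. In the sketch below, the core, ion, qubit and op counts are taken from the quote; the implied overhead is simple division, not a figure IonQ has stated.

```python
# Sanity check of the first application-class machine Monroe sketches.
# Counts are from the quote; the overhead ratio is just arithmetic.

cores = 16
ions_per_core = 64
good_qubits_target = 100
ops_target = 50_000

physical_qubits = cores * ions_per_core
print(f"{cores} cores x {ions_per_core} ions = {physical_qubits} physical qubits")
print(f"implied overhead ~ {physical_qubits / good_qubits_target:.0f} physical per good qubit")
print(f"target workload ~ {ops_target:,} ops on ~{good_qubits_target} good qubits")

# 16 cores x 64 ions = 1024 physical qubits
# implied overhead ~ 10 physical per good qubit
# target workload ~ 50,000 ops on ~100 good qubits
```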

It is probably going too far to say that Monroe believes scaling up IonQ's quantum computer is now just a straightforward engineering task, but it sometimes sounds that way. The biggest technical challenges, he suggests, are largely solved. Presumably, IonQ will successfully demonstrate its modular architecture in 2022. He said competing approaches (superconducting and all-photonic, for example) won't be able to scale. "They are stuck," he said.

"I think they will see atomic systems as being less exotic than they once thought. I mean, we think of computers as built from silicon and as solid state. For better or for worse, you have companies that forgot that they're supposed to build computers, not silicon or superconductors. I think we're going to see a lot more fierce competition on our own turf," said Monroe. There are ion trap rivals; Honeywell, which has announced plans to merge with Cambridge Quantum, is one such rival, he noted.

His view of the long term is interesting. As science and hardware issues are solved, software will become the driver. IonQ already has a substantial software team. The company uses machine learning now to program its control system, elements such as the laser pulses and connectivity. "We're going to be a software company in the long haul, and I'm pretty happy with that," said Monroe.

IonQ has already integrated with the three big cloud providers' (AWS, Google, Microsoft) quantum offerings, embraced the growing ecosystem of software and tools providers, and has APIs for use with a variety of tools. Monroe, like many in the quantum community, is optimistic but not especially precise about when practical applications will appear. Sometime in the next three years is a good guess, he suggests. As for which application area will be first, it may not matter, in the sense that he thinks as soon as one domain shows benefit (e.g., finance or ML), other domains will rush in.

These are heady times at IonQ, as they are throughout quantum computing. Stay tuned.

[i] He likens improving qubit quality to buying a nine, in the commonly used "five nines" vernacular of reliability. Five nines (99.999 percent) is used to describe availability or, put another way, the time between shutdowns because of error.

See the original post here:
IonQ Is First Quantum Startup to Go Public; Will It be First to Deliver Profits? - HPCwire