Letting the Cat Out of the Bag: Public Cloud has Latency Issues!

Technology, like everything else, has trends and cycles. Cloud started more than 10 years ago as the hot new tech trend. But now, are things starting to shift again? Are organizations thinking twice before automatically moving essential workloads to the public cloud?

The answer is yes, and for a variety of reasons. A few born-in-the-cloud companies have now moved from the public cloud back to on-premises data centers; Dropbox is a high-profile example. And public cloud performance (or the lack thereof) was a big reason why.

Letting the cat out of the bag: Public cloud is all about capacity, not performance

When businesses choose to put their applications in the public cloud, they are sharing infrastructure with a lot of other people. Of course, this can be a good solution because it means that you only pay for what you need when you need it. Public cloud also gives businesses the ability to scale up or down based upon demand.

But don't forget the whole business model of public cloud: time-sharing. The provider is giving everyone a slice of the timeshare pie, which means the provider is promising capacity, not performance. I am not the first person to let this particular cat out of the bag; I just want to reiterate it: yes, public cloud providers do place performance limits on the services they provide.

Of course, for workloads you deploy on premises, you get to decide what the performance slice should be. Having this choice is imperative for applications that require low latency, such as those for big data and financial services.

Are new technologies making data centers new again?

Looking forward, two new technologies are now available that can boost application performance: containers and composable infrastructure. Running containers on composable infrastructure can deliver better performance for all applications.

Containers are a form of OS-level virtualization: they share a common, lightweight Linux OS on the host and keep only the pieces that are unique to each application inside the container. Because the OS is shared, you can fit far more containers on a given server than virtual machines (VMs).
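If you want to see that shared-kernel model for yourself, here is a minimal sketch using Python and the Docker SDK (docker-py). It assumes Docker and the SDK are installed, and the Alpine image tag is just an example. The kernel release reported inside the container matches the host's, because there is no separate guest OS at all.

```python
# Minimal sketch: containers share the host kernel (OS-level virtualization),
# so `uname -r` inside a container matches the host's kernel release.
# Assumes Docker is running and the docker-py SDK is installed.
import platform

import docker

client = docker.from_env()

# Run a throwaway Alpine container (example image) and read its kernel release.
container_kernel = client.containers.run(
    "alpine:3.19", ["uname", "-r"], remove=True
).decode().strip()

print("Host kernel:     ", platform.release())
print("Container kernel:", container_kernel)  # same value: no guest OS involved
```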

A big benefit of containers is increased performance, and running containers on bare metal increases it even more. That's because containers on bare metal don't require a hardware emulation layer separating the applications from the server.

HPE and Docker recently tested the performance of applications running inside a single, large VM versus directly on top of a Linux operating system installed on an HPE server. When bare-metal Docker servers were used, performance of CPU-intensive workloads increased by up to 46%. For businesses where performance is paramount, these results tell a compelling story.
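Your mileage will vary by workload, so it's worth measuring on your own hardware. Here is a hypothetical sketch of that kind of CPU-bound comparison, again using the Docker SDK; the sysbench image name and flags are assumptions on my part, not the setup HPE and Docker used.

```python
# Hypothetical sketch: run the same CPU-bound sysbench workload on a bare-metal
# Docker host and on a Docker host inside a VM, then compare the reported
# events/sec. The image name is an assumption; any image with sysbench works.
import docker

client = docker.from_env()

output = client.containers.run(
    "severalnines/sysbench",                                # assumed sysbench image
    ["sysbench", "cpu", "--threads=4", "--time=30", "run"],
    remove=True,
).decode()

print(output)  # check the "events per second" line; repeat on both hosts and compare
```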

Yet some companies have hesitated to move containers out of virtual machines and onto bare metal because of perceived drawbacks, such as the difficulty of managing physical servers. Those drawbacks are certainly relevant for yesterday's data center technologies. Composable infrastructure helps overcome them by making management simple through highly automated operations controlled through software.

Composable infrastructure consists of fluid pools of compute, storage, and fabric that can dynamically self-assemble to meet the needs of an application or workload. These resources are defined in software and controlled programmatically through a unified API, thereby transforming infrastructure into a single line of code that is optimized to the needs of the application.
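To make the "single line of code" idea concrete, here is an illustrative sketch of composing a server from a template over a unified REST API. The host name, endpoint paths, and payload fields are hypothetical, loosely modeled on an HPE OneView/Synergy-style server-profile request; check the actual API documentation before attempting anything like this.

```python
# Illustrative sketch only: requesting a "composed" server through a unified REST
# API. The host name, endpoint paths, and payload fields are hypothetical,
# loosely modeled on an HPE OneView/Synergy-style server-profile request.
import requests

COMPOSER = "https://composer.example.com"        # hypothetical composer address
session = requests.Session()
session.headers["Auth"] = "<session-token>"      # token obtained from a prior login call

profile = {
    "name": "analytics-node-01",
    "templateUri": "/rest/server-profile-templates/<template-id>",  # hypothetical URIs
    "serverHardwareUri": "/rest/server-hardware/<bay-id>",
}

# A single request asks the infrastructure to assemble compute, storage, and
# fabric according to the template: the "single line of code" idea from above.
resp = session.post(f"{COMPOSER}/rest/server-profiles", json=profile)
resp.raise_for_status()
print("Composition request accepted:", resp.status_code)
```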

Because composable infrastructure is so simple to deploy and easy to use, it removes many of the drawbacks you would traditionally encounter when deploying containers on bare metal. The end result is better performance at lower cost within your own data center. The combination of containers and composable infrastructure is a marriage made in heaven.

A hybrid IT cloud strategy solves the performance problem of public cloud

When considering where to deploy, first consider the performance needs of your application. Then compare those needs against the service levels offered by public cloud vendors and against what you can deliver on premises. As I wrote in a previous article, businesses need to determine which workloads should be in the public cloud and which ones should remain on traditional IT or a private cloud. And thanks to today's new technologies, containers and composable infrastructure, staying with traditional data center deployments may just be the better choice.

To learn more about containers running on HPE bare-metal servers, click here. To read about the benefits of HPE's first composable infrastructure, HPE Synergy, read HPE Synergy for Dummies. To find out how HPE can help you determine a workload placement strategy and how to best meet your service level agreements, check out HPE Pointnext.
