Bouncing Back To Private Clouds With OpenStack

March 1, 2017 Timothy Prickett Morgan

There is an adage, not quite yet old, suggesting that compute is free but storage is not. Perhaps a more accurate and, as far as public clouds are concerned, apt adaptation of this saying might be that computing and storage are free, and so is inbound networking within a region, but moving data out of a public cloud is brutally expensive, and it is even more costly when spanning regions.

So much so that, at a certain scale, it makes sense to build your own datacenter and create your own infrastructure hardware and software stack that mimics the salient characteristics of one of the big public clouds. What that tipping point in scale is really depends on the business and the sophistication of the IT organization that supports it; Intel has suggested it is somewhere around 1,200 to 1,500 nodes. But clearly, just because a public cloud has economies of scale does not mean that it passes all of those benefits on to customers. One need only look as far as the operating profits of Amazon Web Services to see this. No one is suggesting that AWS does not provide value for its services. But in its last quarter, it brought nearly $1 billion to its middle line out of just under $3.5 billion in sales, and that is a software-class margin for a business that is very heavily into building datacenter infrastructure.

Some companies, say the folks that run the OpenStack project, are ricocheting back from the public cloud to build their own private cloud analogues, and for economic reasons. Luckily, it is getting easier to use tools like OpenStack to support virtual machine, bare metal, and container environments. This is, of course, a relative thing, too. No one would call OpenStack easy, but the same holds true for any complex piece of software such as the Hadoop data analytics stack or the Mesos cluster controller, just to call out two.

"People are realizing that the public cloud, and in particular the hyperscale providers like AWS, Google, and Microsoft, are really in some cases the most expensive way to do cloud computing," Mark Collier, chief operating officer at the OpenStack Foundation, tells The Next Platform. "There was this misconception early on that, because of economies of scale, the hyperscale clouds would be cheaper. But if you look at the earnings releases of AWS and others, their growth rates are slowing in the public cloud, and we think that has a lot to do with cost. So we are starting to see some major users of AWS standing up private clouds powered by OpenStack and moving certain strategic workloads off of AWS and repatriating them internally. There are a lot of reasons for this, but cost is the biggest driver."

OpenStack users have, over the past three years, moved from tire kicking to production

The public cloud is worth a premium over private clouds for a bunch of reasons, not the least of which is that customers using public clouds do not have to pay the capital costs of infrastructure or the management costs of making it run well. And having the ability to do utility-priced, instant-on-and-off capacity is also worth a premium, and we know this because steady-state rental of capacity on clouds costs less than on-demand capacity. (As we would expect.) But, says Collier, a lot of customers have steady-state workloads that just run, and even though there are ways to bring the costs down on public clouds where the virtual machines just sit there, day in and day out, customers moving off AWS to a private cloud can see anywhere from a 50 percent to 70 percent cost reduction for these constantly running jobs.
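To make that math concrete, here is a back-of-the-envelope sketch of the steady-state comparison in Python. Every price, server count, and consolidation ratio in it is an assumption we made up for illustration, not a quoted rate from Collier or from any cloud's price list.

```python
# Back-of-the-envelope comparison of a steady-state workload running 24x7
# on public cloud on-demand instances versus an amortized private cloud.
# All figures below are illustrative assumptions, not quoted rates.

HOURS_PER_MONTH = 730

def public_cloud_monthly(instances, hourly_rate):
    """On-demand cost for VMs that never shut off."""
    return instances * hourly_rate * HOURS_PER_MONTH

def private_cloud_monthly(servers, server_capex, amort_months, opex_per_server):
    """Hardware amortized over its service life plus monthly operations."""
    return servers * (server_capex / amort_months + opex_per_server)

if __name__ == "__main__":
    # Assumption: 100 always-on VMs at $0.20/hour, consolidated onto
    # 10 private servers costing $10,000 each, amortized over 36 months,
    # with $400/month per server for power, space, and admin labor.
    public = public_cloud_monthly(100, 0.20)
    private = private_cloud_monthly(10, 10_000, 36, 400)
    savings = 1 - private / public
    print(f"public:  ${public:,.0f}/month")
    print(f"private: ${private:,.0f}/month")
    print(f"savings: {savings:.0%}")   # roughly 54% under these assumptions
```

Under these made-up numbers the private cloud comes out a bit more than half the price, which lands near the low end of the range Collier cites; the real answer obviously swings with utilization, staffing, and how hard a shop negotiates on either side.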

Those are some big numbers, and we would love to see this further quantified and qualified.

Collier also points out that OpenStack is used to create dozens of big public clouds, so it is not just a private cloud technology. (Rackspace Hosting, one of the co-founders of OpenStack along with NASA, operates what is probably the largest OpenStack cloud for its Cloud Servers and Cloud Storage services.)

"Public and private clouds are all growing, but customers are getting more strategic about where to place their workloads so they get their money's worth," says Collier. "And if you are paying to turn resources on and off, and you are not doing that, then you are wasting your money. People are no longer wondering when they are moving to clouds; they pretty much know everything is going in a cloud environment. But now they are thinking about which type makes sense. People are starting to dig into the numbers."

It is hard to say how much of the compute, storage, and networking capacity installed worldwide (and only running enterprise applications at that) is on private clouds versus public clouds versus traditional, uncloudy infrastructure. And Collier was not in a mood to take a wild guess about how, years hence, this pie chart might shake out. But he concurred with us that it might look like a 50-50 or 60-40 split between private and public cloud capacity over the long haul. A lot will depend on economics, both in terms of what the public clouds charge and what enterprises can afford in terms of building their own cloud teams and investing in infrastructure.

If Amazon Web Services offered a private cloud you could plunk into a datacenter, and at a private cloud price, this would certainly change things. But it also might make AWS a whole lot less money, which is why we think maybe the top brass at AWS are not so keen on the idea. They might be able to double, triple, or quadruple their aggregate compute and storage, but not make more money doing it unless customers decide to use AWS management on their baby, private AWS clouds, should they ever come to pass.

And having a private VMware cloud running in AWS datacenters, as will be done this year, does not count. We are not sure of much in this world, but we fully expect capacity on this VMware Cloud on AWS service to cost considerably more than hosting a private cloud based on the ESXi hypervisor, vCenter management tools, and some of the vRealize cloud management tools.

There are a couple of things that are giving OpenStack a second wind, and it is not just the backdraft effect off of big public clouds by enterprise customers.

For one thing, OpenStack is getting more refined and more polished, as demonstrated by the Ocata release that was put out by the community two weeks ago. This release had a relatively short development cycle, coming out about two months ahead of the usual cadence, but the future Pike release will get back to the normal six-month release cadence that OpenStack has adhered to for years now.

One big change with the Ocata release of OpenStack is that the horizontal scaling mechanism for the Nova compute portion of OpenStack, called Cells, has gotten a V2 update and is not only ready for primetime, but is running with Nova by default starting with Ocata. In essence, Cells allows for multiple instances of the Nova compute controller (including its database and queue) to be distributed in a single cluster and be federated for management. Cells was developed by Rackspace, has been used in production since August 2012, and has been formally in development for OpenStack since the Grizzly release back in 2012, and it can be used to federate clustered Nova controllers within a datacenter or region or across regions.
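For readers who want a picture of how Cells V2 hangs together, here is a small illustrative sketch in Python, not Nova's actual code: a top-level API database keeps one record per cell, each cell brings its own database connection and message-queue transport URL, and instances are mapped to the cell they live in. The class and field names are ours, chosen to mirror the concepts rather than Nova's internals.

```python
# Illustrative sketch (not Nova's actual code) of the Cells V2 idea:
# an API-level database records one entry per cell, and each cell carries
# its own database connection and message-queue transport URL, so compute
# capacity can be partitioned yet still managed as one cloud.
from dataclasses import dataclass
from typing import Dict

@dataclass
class CellMapping:
    name: str
    database_connection: str   # each cell has its own Nova database
    transport_url: str         # and its own message queue (RabbitMQ, etc.)

@dataclass
class InstanceMapping:
    instance_uuid: str
    cell_name: str             # which cell the instance lives in

class ApiDatabase:
    """Hypothetical API-level registry that routes requests to the right cell."""

    def __init__(self) -> None:
        self.cells: Dict[str, CellMapping] = {}
        self.instances: Dict[str, InstanceMapping] = {}

    def add_cell(self, cell: CellMapping) -> None:
        self.cells[cell.name] = cell

    def add_instance(self, mapping: InstanceMapping) -> None:
        self.instances[mapping.instance_uuid] = mapping

    def cell_for_instance(self, instance_uuid: str) -> CellMapping:
        # The API layer only needs this lookup to know which cell's
        # database and queue to talk to for a given instance.
        return self.cells[self.instances[instance_uuid].cell_name]
```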

Nova also now includes a feature called the placement and resource scheduler. It does not yet have a funky name because it has not been busted free of Nova, but Jonathan Bryce, executive director of the OpenStack Foundation, says that this scheduler could eventually be broken free and used to control certain aspects of other portions of the OpenStack stack. This is a new way of managing the assets that comprise a cloud (servers, storage devices, networking equipment, and so on), adding intelligence to their placement. So, for instance, it tracks the kinds of devices and their capacities and performance, and with a set of APIs you can request that a workload be deployed on a specific collection of resources, and this scheduler can find it and make it happen through Nova.
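To give a feel for what those APIs look like, here is a minimal sketch that queries the placement service directly over HTTP with Python's requests library. The endpoint URL and token are placeholders we invented for illustration; in a real deployment you would discover the endpoint from Keystone's service catalog and fetch a scoped token rather than hard-coding either.

```python
# A minimal sketch of talking to the placement API directly over HTTP.
# PLACEMENT_URL and TOKEN are assumptions: in practice both come from
# Keystone, not from constants in a script.
import requests

PLACEMENT_URL = "http://controller:8778"   # assumed placement endpoint
TOKEN = "gAAAAAB..."                        # assumed Keystone token

HEADERS = {
    "X-Auth-Token": TOKEN,
    "Accept": "application/json",
}

def list_resource_providers():
    """Each compute node shows up as a resource provider."""
    resp = requests.get(f"{PLACEMENT_URL}/resource_providers", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["resource_providers"]

def inventories(provider_uuid):
    """Inventory records how much VCPU, MEMORY_MB, and DISK_GB a provider offers."""
    resp = requests.get(
        f"{PLACEMENT_URL}/resource_providers/{provider_uuid}/inventories",
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json()["inventories"]

if __name__ == "__main__":
    for rp in list_resource_providers():
        print(rp["name"], inventories(rp["uuid"]))
```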

The first and second generations of cloud, according to OpenStack

"The idea is that we are on the second generation of clouds, and they are easier to run, and that makes them more cost effective and also opens them up for deployment by more people," says Bryce, "which sets up a virtuous cycle. But the other attribute of Gen 2 clouds is that they do more things. When OpenStack was just starting, it was basic virtualization with self-service and elastic provisioning. When you look at it now, what you see are cloud native applications, but also things like SAP and network function virtualization workloads. So the private cloud today costs less, but it also does more. So having a more intelligent scheduler that makes sure you put an NFV workload onto a server that has high performance networking gear, or you put a data analytics workload onto something that has high performance I/O, these are the things that end up making these new clouds extremely capable and able to run these new workloads."

And this is also why OpenStack use is exploding in new markets, particularly China, where there is no established virtualization player and lots of companies are doing greenfield installations.

With OpenStack now seven years old, it has become a reasonably mature platform thanks to the hard work of thousands of software engineers and the enlightened self-interest of their employers. And it is reasonable to ask if OpenStack, like other open source infrastructure components such as the Linux kernel and the bits that wrap around it to make it an operating system, is largely done.

OpenStack has thousands of marquee enterprise customers, and this is just a sampling

"There's always something more to do," says Bryce. "OpenStack is an interesting animal in some ways because it has these very primitive core functions such as virtualization and networking, and those are necessary for every single workload, every single application that runs on any platform. Those are key, and fairly stable and mature. Where we are seeing exciting work still happen is how you leverage and integrate these infrastructure primitives to meet new workloads."

For instance, a lot is happening in the OpenStack community with software containers right now. Not only is OpenStack being containerized itself so it can be deployed and managed better, but containers are being added atop either virtualized or bare metal OpenStack clouds so they can be used to manage other applications that in turn run on OpenStack.

"When you layer dynamic application management through containers on top of programmable infrastructure, you really get the best of both worlds," Bryce explains. "But in order to achieve this, you need tight integration between the two."

Just as was the case with server virtualization based on hypervisors when it became popular on X86 platforms a decade ago, there is much weeping and gnashing of teeth with regard to both the networking and the storage underpinning container environments. So OpenStack shops are combining the Neutron virtual networking with Cinder block storage and the Kubernetes container scheduler, or gluing together Nova compute with Cinder block storage and Docker container runtimes. The Kuryr project provides the link between Docker and Neutron, hence its name, a play on courier, and a subproject called Fuxi connects Cinder block storage and Manila shared file systems to Docker in a similar fashion.
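As a rough sketch of what that glue looks like from the Docker side, the snippet below uses the Docker SDK for Python to create a container network backed by the Kuryr driver, which in turn should materialize a matching Neutron network and subnet. It assumes kuryr-libnetwork is installed and registered with the local Docker daemon, and the network name, subnet, image, and container name are made-up examples.

```python
# A rough sketch of wiring Docker to Neutron through Kuryr, using the
# Docker SDK for Python. Assumes the kuryr-libnetwork plugin is installed
# and registered with the local Docker daemon; names and subnet are
# illustrative examples only.
import docker

client = docker.from_env()

# Creating a Docker network with the "kuryr" driver asks Kuryr to create a
# corresponding Neutron network and subnet behind the scenes.
ipam = docker.types.IPAMConfig(
    driver="kuryr",
    pool_configs=[docker.types.IPAMPool(subnet="10.10.0.0/24")],
)
net = client.networks.create("demo-net", driver="kuryr", ipam=ipam)

# Containers attached to this network are plugged into Neutron ports,
# putting them on the same virtual networks that Nova VMs use.
container = client.containers.run(
    "nginx:alpine", detach=True, network=net.name, name="web0"
)
print(container.name, "attached to", net.name)
```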

Categories: Cloud, Compute, Enterprise

Tags: container, Docker, OpenStack
