Private cloud: Orchestration, storage and containers

Cloud, to most people, means public cloud infrastructure, such as that offered by Amazon Web Services (AWS) and Microsoft Azure.

In contrast, private cloud tends to refer to on-premises infrastructure that functions in the same way as public cloud, offering access to resources on a service-based delivery model.


So, what is involved in building a private cloud, and how does it affect storage in the datacentre?

Before we dive into how a private cloud can be implemented, we should take a moment to review the features of cloud in general.

Cloud, whether private or public, is expected to meet the following criteria: elasticity, on-demand provisioning, multi-tenancy and service-based delivery.

In a private cloud, some features seen in the public arena are not necessarily required.

Some organisations may choose not to implement billing (or even chargeback), for example, as the financial mechanism for directly charging business units for their usage may not exist.

The idea of unlimited capacity is typically not a private cloud requirement, as budgets are generally restricted, whether that refers to the spending of the business or of the IT department.

A key part of private cloud is the orchestration framework that provisions resources based on customer requests. Options include open source platforms such as OpenStack and CloudStack, proprietary tools such as VMware's vRealize Suite cloud management platform, and the upcoming Azure Stack, Microsoft's on-premises implementation of public Azure.

There are also open source platforms, such as Mesosphere, Kubernetes and Docker Swarm, that go part of the way towards implementing private cloud.

Finally, we should mention suppliers, such as ZeroStack and Platform9, that are looking to optimise some of the tools we've already mentioned.

When we look at the way storage is consumed by these platforms, we see a wide variety of implementations.

Possibly the most mature and easily understood storage options are for VMware.

VMware vSphere, which is a core component of the vRealize Suite, supports a range of existing block (Fibre Channel, iSCSI) and file (NFS) based systems. Supplier support is widespread, and once storage is configured into the infrastructure, it can be provisioned out automatically as part of virtual machine creation.
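As an illustration, the sketch below uses the community pyVmomi SDK to enumerate the datastores a vCenter server exposes, the kind of inventory step that precedes automated provisioning. The vCenter address and credentials are placeholders, not values from the article.

```python
# Sketch: list vSphere datastores and their free capacity via pyVmomi.
# Host, user and password are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Build a view over all datastore objects in the inventory and report free space
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    s = ds.summary
    print(f"{s.name}: {s.freeSpace / 2**30:.0f} GiB free ({s.type})")

Disconnect(si)
```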

OpenStack has a range of storage support options, described as projects, that cover block storage (Cinder), file (Manila) and object (Swift). Each project defines a set of APIs that suppliers can code to, automating the provisioning and mapping of their storage platforms to instances (virtual machines) by the OpenStack platform.
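By way of illustration, the sketch below uses the openstacksdk Python library to ask Cinder for a volume and attach it to an existing instance. The cloud name, instance name and sizes are hypothetical, and exact call signatures vary between SDK releases.

```python
# Sketch: provision a Cinder volume and attach it to an instance with openstacksdk.
# Assumes a clouds.yaml entry named "private-cloud"; names and sizes are illustrative.
import openstack

conn = openstack.connect(cloud="private-cloud")

# Cinder (block storage): create a 20 GB volume and wait for it to become available
volume = conn.block_storage.create_volume(size=20, name="app-data")
conn.block_storage.wait_for_status(volume, status="available")

# Nova (compute): attach the volume to an existing instance
# (the attachment call's keyword arguments differ slightly between SDK versions)
server = conn.compute.find_server("app-server-01")
conn.compute.create_volume_attachment(server, volume_id=volume.id)
```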

The same level of application programming interface (API) support is also available within Kubernetes, enabling the provisioning of persistent storage to container instances.

This can include traditional protocols (such as iSCSI and NFS), but also scale-out storage such as GlusterFS and Ceph. There's more of an assumption here that users need to manage their storage manually (perhaps simply as local JBODs), but storage hardware suppliers are starting to add support for Kubernetes volumes. One example is NetApp, with its Trident project to automate storage provisioning.
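For example, a PersistentVolumeClaim submitted through the official Kubernetes Python client is all an application team needs to request capacity. The "fast-block" StorageClass named below is hypothetical; it would map to whatever backend (a Trident-managed NetApp array, Ceph and so on) the cluster administrator has configured.

```python
# Sketch: request persistent storage for a container workload via a PVC.
# StorageClass and sizes are illustrative; API object names may shift between client versions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="fast-block",
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```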

Docker deployments tend to use storage provisioned to the host running the containers, which could be a local file system or block devices mounted to the server. Flocker is an open source tool that automates the provisioning of block devices to containers through a set of APIs for which suppliers can provide support. Through its EMC {code} efforts, Dell EMC supports connectivity for many of its existing storage platforms to the Docker ecosystem. Much of this work is best-effort and only community supported, so perhaps not suited to critical production environments.
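The sketch below uses the Docker SDK for Python to create a named volume through a third-party volume driver and mount it into a container, so the data outlives the container itself. The "rexray/ebs" driver and the database image are illustrative choices, not recommendations from the article.

```python
# Sketch: provision a named volume via a volume plugin and mount it into a container.
# Driver name, size option and image are illustrative.
import docker

client = docker.from_env()

# Ask the volume plugin to provision backing storage for a named volume
volume = client.volumes.create(name="db-data", driver="rexray/ebs",
                               driver_opts={"size": "20"})

# Run a container with the volume mounted at the database's data directory
client.containers.run(
    "postgres",
    detach=True,
    name="db",
    volumes={"db-data": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
```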

We can see from the way storage is consumed by private clouds that there are certain differences that must be considered. The most obvious is that of automation. Storage is expected to plug into orchestration frameworks in a way that makes it easy to auto-provision logical unit numbers (LUNs) and file shares through the orchestration tools.

This automation raises two issues.

First, there is security. Can storage be presented from a pool of resources, or does the orchestration platform get access to the whole storage array or environment?

Second, what controls are in place to limit the consumption of storage resources? We can envisage a scenario where the on-demand capability of private cloud could exhaust the storage capacity available.
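One way to impose such limits, where Kubernetes is the orchestration layer, is a namespace ResourceQuota. The sketch below, using the official Python client, caps requested capacity and the number of claims; the namespace and limits are illustrative.

```python
# Sketch: cap storage consumption for one namespace with a ResourceQuota.
# Namespace name and limits are illustrative.
from kubernetes import client, config

config.load_kube_config()

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="storage-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={"requests.storage": "500Gi", "persistentvolumeclaims": "20"}
    ),
)

client.CoreV1Api().create_namespaced_resource_quota(namespace="team-a", body=quota)
```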

This takes us on to the issue of maintenance. How easily can a storage solution be expanded or replaced?

Typically, storage migrations are implemented as big projects that consume significant resources. This doesn't arise as an issue in the public cloud environment, where storage is abstracted away from the customer.

How can storage migrations be managed in private cloud?

As instances are created or destroyed, new applications can be provisioned from new storage capacity. That leaves the migration of existing instances to take care of. These may have to be migrated offline or through some sort of snapshot process. Either way, the introduction of a new storage platform needs to be implemented with a minimal impact to service.

It's worth noting that most of the above issues haven't been fully resolved. As a result, an entire storage platform may be presented to a cloud solution, for example, rather than shared with other users, because the required security controls don't exist. This may be practical in large environments, but could cause issues for smaller IT organisations.

Picking a supplier or storage product requires thought about the orchestration platform and supportability from the storage supplier.

Older (and more likely to be classed as legacy) platforms will be harder to integrate, as they usually have no native automation capabilities; these tend to have been added on later. NetApp SolidFire is an example of a storage platform that provides a native API, with supported integration into all the common provisioning platforms.

An alternative is to go for a scale-out storage solution, either an open source platform or one from an SDS provider. Ceph is one solution directly integrated into OpenStack that can provide object, block and file support.

There are also scale-out solutions like StorPool (block storage), StorageOS (specifically for containers), ScaleIO from EMC, StoreVirtual from HPE and of course native Virtual SAN from VMware.

The advantage of using these platforms is that storage can be deployed as part of the server, rather than on dedicated storage hardware. Solutions such as Virtual SAN, for example, are directly supported within vSphere, enabling policy-based provisioning of storage for virtual instances.

The options for building out private clouds and persistent storage are wide-ranging and potentially confusing at first glance. But the main tenets of public cloud (elasticity, on-demand provisioning, multi-tenancy and service-based delivery) provide a good basis for choosing the most appropriate solution for your private cloud infrastructure.
