Prepare your server fleet for a private cloud implementation

Private cloud services promise flexibility and scalability, while allowing organizations to maintain full control of their enterprise data centers. It's a compelling goal -- but private cloud implementation can be challenging and frustrating.

The path from a traditional data center to a private cloud starts at the lowest levels of the infrastructure. IT leaders must evaluate their current server fleet to ensure that each system offers the features needed to support virtualization and the subsequent cloud stack. Here are some considerations that can help you sanity-check whether your data center infrastructure is ready for private cloud implementation.

It's important to understand individual processor technologies and properly enable each feature before you deploy hypervisors and, eventually, the private cloud stack. For example, hypervisors invariably require hardware virtualization support through processor extensions, such as Intel Virtualization Technology (VT-x) and AMD Virtualization (AMD-V). These extensions typically include support for second level address translation, which lets the processor map guest memory addresses to host physical memory at hardware speed.

Enable the AMD No eXecute (NX) or Intel eXecute Disable (XD) bit on each processor, which marks memory pages to prevent buffer overflow attacks and other malicious software exploits. You can typically enable processor extensions and NX/XD bits through the system BIOS or the Unified Extensible Firmware Interface (UEFI).
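
On Linux hosts, a quick way to verify that these features are exposed is to inspect the CPU flags the kernel reports. The following Python sketch is illustrative, not vendor-specific -- it assumes a Linux host, and the remediation hints in the comments are general guidance:

```python
# Check /proc/cpuinfo for hardware virtualization and NX support on a Linux host.
# Flag names are as exposed by the Linux kernel.

def read_cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported by the kernel."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = read_cpu_flags()
has_virt = bool(flags & {"vmx", "svm"})   # Intel VT-x or AMD-V
has_slat = bool(flags & {"ept", "npt"})   # second level address translation
has_nx = "nx" in flags                    # NX/XD support

print(f"Hardware virtualization: {'yes' if has_virt else 'NO -- check BIOS/UEFI'}")
print(f"SLAT (EPT/NPT):          {'yes' if has_slat else 'no'}")
print(f"NX/XD bit:               {'yes' if has_nx else 'NO -- check BIOS/UEFI'}")
```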

Consider the processor core/thread count for each server. Hypervisors, such as ESXi 6.0, demand a host server with at least two processor cores, but this is a bare minimum system requirement. Additional processor cores vastly expand the number of VMs and workloads each server can handle, and the hypervisor can treat each additional processor thread as a separate logical core. For example, an AMD Opteron 6200 Series processor can support VMware ESXi 6.5 with eight cores and a total of 16 threads; an Intel Xeon E5-2600 v4 Series processor offers up to 22 cores and a total of 44 threads.
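
To turn core and thread counts into a rough capacity estimate, compare the host's logical processor count against a per-VM vCPU budget. The sketch below is a back-of-the-envelope calculation; the per-VM allocation and overcommit ratio are assumptions you should tune to your own workloads:

```python
# Compare logical processor count against a hypervisor minimum and estimate
# a rough VM ceiling from a per-VM vCPU budget.
import os

MIN_CORES = 2       # bare-minimum host requirement for a hypervisor like ESXi
VCPUS_PER_VM = 2    # assumed average vCPU allocation per VM
OVERCOMMIT = 4      # assumed vCPU:pCPU overcommit ratio -- tune per workload

logical_cpus = os.cpu_count() or 0
print(f"Logical processors: {logical_cpus}")
if logical_cpus < MIN_CORES:
    print("Below the hypervisor minimum -- not a virtualization candidate.")
else:
    print(f"Rough ceiling: {logical_cpus * OVERCOMMIT // VCPUS_PER_VM} VMs")
```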

Finally, consider the availability of UEFI on the server. UEFI is a later type of system firmware -- a kind of advanced BIOS -- that allows more flexible boot choices. For example, UEFI allows servers to boot from hard disk drives, optical discs and USB media, including boot drives larger than 2 TB. However, it's important to evaluate the boot limitations of the hypervisor. As an example, ESXi 6.0 does not support network booting or provisioning with VMware Auto Deploy under UEFI -- those features require traditional BIOS. And if you change from BIOS to UEFI after you install a hypervisor, the system may fail to boot. Consequently, it's a good idea to evaluate the firmware at the same time you evaluate the processors on each server.
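
On Linux hosts, one reliable signal is how the running system was booted: the /sys/firmware/efi directory exists only when the kernel was started by UEFI. A minimal check:

```python
# Detect whether a Linux host booted via UEFI or legacy BIOS.
# /sys/firmware/efi is only present when the kernel was started by UEFI.
from pathlib import Path

firmware = "UEFI" if Path("/sys/firmware/efi").exists() else "legacy BIOS"
print(f"This host booted via {firmware}.")
```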

Every VM or container exists and runs in a portion of a server's physical memory space, so memory capacity plays a critical role in server virtualization and in private cloud implementation. Hypervisor vendors typically recommend a system with at least 8 GB of memory to host a hypervisor such as ESXi and leave capacity for at least some VMs in production environments. Private cloud stacks such as OpenStack are even lighter, recommending only 2 GB for the platform -- though each VM will demand more memory.

However, such memory recommendations are almost trivial compared to the memory capacity of modern servers. As an example, a Dell R610 rackmount server supports up to 192 GB of memory, while a Dell R720 supports up to 768 GB. This means existing enterprise-class servers already possess far more than the minimum memory needed for virtualization and a private cloud implementation. The real question becomes: how many VMs or containers do you intend to operate on the server, and how much memory will you provision to each instance? These considerations can vary dramatically between organizations.
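
A simple worked example makes the arithmetic concrete. The figures below are illustrative assumptions drawn from the numbers above:

```python
# Back-of-the-envelope memory sizing: how many VMs fit on a host once the
# hypervisor's share is reserved? All figures are illustrative assumptions.

HOST_MEMORY_GB = 192        # e.g., a fully populated Dell R610
HYPERVISOR_RESERVE_GB = 8   # recommended floor for a hypervisor like ESXi
VM_MEMORY_GB = 4            # assumed per-VM allocation

usable = HOST_MEMORY_GB - HYPERVISOR_RESERVE_GB
print(f"Usable for VMs: {usable} GB -> roughly {usable // VM_MEMORY_GB} VMs "
      f"at {VM_MEMORY_GB} GB each, before any memory overcommit")
```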

As you virtualize physical servers and place more workloads on them, network utilization increases dramatically. Network limitations can cause contention between workloads and result in bandwidth bottlenecks that impair the performance and stability of other workloads. This can be particularly troublesome during high-bandwidth tasks like VM backups, especially when multiple VMs attempt the same high-bandwidth tasks simultaneously.

This makes adequate bandwidth and network architecture choices critical on the road to private cloud implementation. A hypervisor, such as ESXi, typically demands at least one Gigabit Ethernet (GbE) port. Although a faster Ethernet port, such as 10 GbE, can alleviate bandwidth bottlenecks, it is often preferable to deploy two or more GbE ports instead. Multiple physical ports present several important benefits. For example, you can combine multiple GbE ports to aggregate the bandwidth of slower, less expensive network adapters and cabling infrastructure. Multiple ports also build resilience, since traffic can fail over to another port if a port fails at the server or at the corresponding switch.
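
On Linux hosts, you can take a quick inventory of physical ports and link speeds from sysfs before deciding whether to aggregate. This sketch assumes the standard /sys/class/net layout and skips virtual interfaces:

```python
# Inventory physical NICs and link speeds on a Linux host by reading
# /sys/class/net/<iface>/speed (Mb/s; unreadable or -1 when the link is down).
from pathlib import Path

gbe_ports = 0
for iface in Path("/sys/class/net").iterdir():
    if not (iface / "device").exists():  # skip lo, bridges, VLANs, bonds
        continue
    try:
        speed = int((iface / "speed").read_text().strip())
    except (OSError, ValueError):
        speed = -1
    print(f"{iface.name}: {speed} Mb/s")
    if speed >= 1000:
        gbe_ports += 1

print(f"{gbe_ports} port(s) at GbE or faster -- two or more allows teaming and failover")
```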

Storage is another core attribute of virtualization, so pay close attention to issues like storage capacity. A hypervisor like ESXi typically needs about 10 GB of storage divided between a boot device -- on which the installer creates a VMFS volume -- and a scratch partition on the boot device. Private cloud platforms like OpenStack recommend at least 50 GB of disk space. The real capacity issue depends on the number of VMs and the amount of storage you allocate to each VM instance. An environment that uses a few fixed VM disk images may need less capacity than an environment that deploys many different VM images with various storage requirements. As a rule, 1 TB should be adequate for a typical virtualized server.
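
A capacity check against these figures is easy to script. The datastore path below is a hypothetical example; the thresholds mirror the numbers above:

```python
# Check whether a datastore path meets the capacity figures discussed above.
import shutil

DATASTORE_PATH = "/var/lib/datastore"  # hypothetical local datastore mount
MIN_FREE_GB = 50                       # OpenStack's recommended minimum
COMFORTABLE_GB = 1024                  # the 1 TB rule of thumb

total, used, free = shutil.disk_usage(DATASTORE_PATH)
free_gb = free // (1024 ** 3)
print(f"Free space: {free_gb} GB")
if free_gb < MIN_FREE_GB:
    print("Below the 50 GB OpenStack recommendation.")
elif free_gb < COMFORTABLE_GB:
    print("Meets the minimum, but below the 1 TB rule of thumb.")
else:
    print("Comfortable capacity for a typical virtualized server.")
```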

Local storage capacity is typically not a gating issue with modern servers and storage equipment. In actual practice, however, enterprise servers rarely depend on local per-server storage and instead use shared storage systems. In this case, the server may only need enough local storage to boot the system, deferring to a storage area network (SAN) for VM and workload data retention. This means the server should include adequate SAN support, such as two or more dedicated Ethernet ports (for iSCSI or FCoE) or Fibre Channel ports for redundant SAN connectivity. Disks should always provide some level of RAID support -- RAID 5 or even RAID 6, paired with hot spare disks, can offer strong data protection and rebuild performance.
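
On Linux hosts with Fibre Channel HBAs, the kernel exposes one fc_host entry per port, which makes the redundancy requirement easy to verify. A minimal sketch:

```python
# Count Fibre Channel HBA ports for redundant SAN connectivity on a Linux host.
# The kernel exposes one /sys/class/fc_host entry per FC port; two or more
# ports are needed for the redundant pathing described above.
from pathlib import Path

fc_root = Path("/sys/class/fc_host")
ports = sorted(p.name for p in fc_root.iterdir()) if fc_root.exists() else []
print(f"FC ports found: {ports or 'none'}")
if len(ports) >= 2:
    print("Redundant SAN connectivity is possible.")
else:
    print("Fewer than two FC ports -- check for iSCSI/FCoE alternatives.")
```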

As more VMs coexist on fewer physical servers, a server fault or failure can impact more VMs, which can be disruptive. As a business embraces virtualization and moves toward private cloud implementation, the underlying server hardware should include an array of resiliency features that can forestall failures.

Critical server hardware should include redundant power supplies and intelligent, firmware-based self-diagnostics that can help technicians identify and isolate faults. Modern servers typically include a baseboard management controller capable of system monitoring and management. If a server fails, it may be crucial to remove and replace the failed unit quickly.
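
Many baseboard management controllers can be queried in-band with the standard ipmitool utility. The sketch below assumes ipmitool is installed and the local BMC is reachable; it simply flags any sensor the BMC does not report as healthy:

```python
# Poll the baseboard management controller for sensor health via ipmitool.
# Assumes ipmitool is installed and the local BMC responds to in-band requests.
import subprocess

result = subprocess.run(
    ["ipmitool", "sdr", "list"],  # dump sensor data records from the BMC
    capture_output=True, text=True, check=True,
)
# Flag any sensor whose status column is not "ok".
for line in result.stdout.splitlines():
    fields = [f.strip() for f in line.split("|")]
    if len(fields) == 3 and fields[2] != "ok":
        print(f"Attention: {fields[0]} -> {fields[2]}")
```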

Inside the server, select and enable memory resilience features: advanced error-correcting code that catches single- and multi-bit errors; memory mirroring; hot spares that can swap in a backup DIMM if one DIMM fails; and memory scrubbing -- sometimes called demand and patrol scrubbing -- which searches for and addresses memory errors on demand or at regular intervals.
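
On Linux, the EDAC subsystem surfaces corrected and uncorrected memory error counts, which offers a quick way to confirm that ECC reporting is active. A minimal sketch, assuming an EDAC driver is loaded for your memory controller:

```python
# Read corrected/uncorrected memory error counts from the Linux EDAC subsystem.
# One mc<N> directory appears per memory controller when an EDAC driver is loaded.
from pathlib import Path

edac = Path("/sys/devices/system/edac/mc")
controllers = sorted(edac.glob("mc*")) if edac.exists() else []
if not controllers:
    print("No EDAC memory controllers found -- ECC reporting may be unavailable.")
for mc in controllers:
    ce = (mc / "ce_count").read_text().strip()  # corrected (single-bit) errors
    ue = (mc / "ue_count").read_text().strip()  # uncorrected (multi-bit) errors
    print(f"{mc.name}: corrected={ce} uncorrected={ue}")
```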

Any capable configuration management tool or framework can summarize and report many of these attributes for you directly from the local configuration management database. This can ease the time-consuming and error-prone manual review of physical systems and hypervisors. But a review of servers and hypervisors is really just the start of a private cloud implementation -- they form the cornerstone on which the other components of the infrastructure, like storage, networks and software stacks, are built.
