Monday, April 2, 2018

Hyperconvergence: What it is, how it works, and why it matters

Hyperconvergence is a marketing term referring to a style of data center architecture that focuses the attention of IT operators and administrators on the operating conditions of workloads rather than on the systems beneath them. The main objective of hyperconverged infrastructure (HCI) has been to simplify the management of data centers by recasting them as transportation systems for software and transactions, rather than as networks of processors with storage devices and memory caches dangling from them.

The "convergence" that HCI makes feasible in the data center comes from the following:

- Applications and the servers that host them are managed together, using a single platform that focuses on the health and accessibility of those applications.
- Compute capacity, file storage, memory, and network connectivity are pooled together and managed as shared resources, like public utilities.
- Workloads are treated like customers whose needs must be satisfied, even if satisfying them requires the decommissioning and shutdown of hardware.
- Each workload is packaged within the same class of construct: usually virtual machines (VMs) designed to be hosted by hypervisors such as VMware's ESX and ESXi, Xen, KVM, and Microsoft's Hyper-V. These constructs enable HCI platforms to treat workloads as essentially equivalent software components, albeit with different operating requirements.

(A brief scheduling sketch at the end of this post illustrates the workload-first idea these points describe.)

SERVICES VS SERVERS

Since the dawn of information technology, the key task of computer operators has been to monitor and maintain the health of their machines. At some point, the value of keeping software accessible and functional for users -- especially for customers -- came to exceed the cost of extending the lifespan of hardware. The key variables in the cost/benefit equation were flipped: the functionality of services became more valuable than the reliability of servers.

Although the ideal of hyperconverged infrastructure has always been the radical simplification of workload management in enterprise data centers, in every enterprise whose data centers predate the installation of HCI, the issue of integration has reared its ugly head. Ops managers have insisted that pre-existing workloads co-exist with hyperconverged workloads. At the opposite end of the scale, developers working with newer container technologies such as Docker, and orchestrators such as Kubernetes, have insisted that their distributed, VM-free workloads co-exist with hyperconverged ones.

Read also: What is Docker and why is it so darn popular?

WHAT'S THE VALUE PROPOSITION?

So the "hyper" part of hyperconvergence typically gets tamped down somewhat. The lack of a single, emergent co-existence strategy for any permutation of HCI has given the major vendors in this space an opening -- not just to establish competitive advantage, but to insert specialized hardware and proprietary services into the mix. Architects of open-source data center platforms such as OpenStack cite this response from vendors as the re-emergence of locked-in architectures, in making their case for hybrid cloud architectures as effective alternatives.

The question for many organizations: Is there value in adopting an architecture whose very name suggests the incorporation of everything, when reality dictates it can only be adopted partway and integrated with the rest? Put another way, is there any good to be gained from embracing an ideal that started out as all-inclusive, but which in practice ends up being exclusive after all?
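To make the workload-first idea above concrete, here is a minimal sketch, in Python, of a pool of nodes that places VMs and containers through the same construct and drains a node when its hardware must be shut down. This is a toy model, not any vendor's actual API; every name in it (Workload, Node, place, decommission) is invented for illustration.

# A hypothetical sketch of HCI's workload-first scheduling idea.
# All names here are invented for illustration; no real platform
# exposes this API.
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    kind: str      # "vm" or "container" -- treated identically here
    cpus: int
    mem_gb: int

@dataclass
class Node:
    name: str
    cpus: int
    mem_gb: int
    healthy: bool = True
    workloads: list = field(default_factory=list)

    def free_cpus(self):
        return self.cpus - sum(w.cpus for w in self.workloads)

    def free_mem(self):
        return self.mem_gb - sum(w.mem_gb for w in self.workloads)

    def fits(self, w):
        return self.healthy and self.free_cpus() >= w.cpus and self.free_mem() >= w.mem_gb

def place(workload, pool):
    """Place a workload on any node that can satisfy its needs.
    The workload never names a server; the pool acts as one utility."""
    for node in pool:
        if node.fits(workload):
            node.workloads.append(workload)
            return node
    raise RuntimeError(f"no capacity in pool for {workload.name}")

def decommission(node, pool):
    """Drain a failing node: the workloads survive, the hardware doesn't."""
    node.healthy = False
    evacuees, node.workloads = node.workloads, []
    for w in evacuees:
        place(w, pool)

pool = [Node("node-a", 16, 64), Node("node-b", 16, 64)]
place(Workload("db", "vm", 4, 16), pool)
place(Workload("web", "container", 2, 4), pool)
decommission(pool[0], pool)   # workloads move; node-a can now be shut down
print([(n.name, [w.name for w in n.workloads]) for n in pool])

The design point worth noticing is the inversion the article describes: the workload states only its requirements, the pool decides where it runs, and the workload outlives the hardware that first hosted it.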
