Hyperconverged infrastructure (HCI) is quickly infiltrating enterprise capital spending plans, which is naturally leading to increased competition among vendors who have seen their hardware revenues diminish steadily over the past decade. And while it is too soon to start calling it a “war” akin to the PC, server, storage and cloud wars of the past, there is definitely some jockeying for position in the market as manufacturers seek the right combination of form, function, performance and price.

One of the main themes at Dell EMC World this month was the company’s new portfolio of hyperconverged systems. In large part, it consisted of updated versions of previous systems like the VxRail and XC series appliances, but the company is also rolling out new purchasing and deployment options to help the enterprise manage its costs as infrastructure scales in conjunction with data demands. At the same time, Dell EMC is outfitting its systems with new virtual networking options that allow organizations to implement fully software-defined data center (SDDC) architectures within tightly converged hardware footprints.

Barely a week after the Dell EMC announcements, however, IBM and Nutanix inked a deal that puts the Nutanix Enterprise Cloud Platform on hyperconverged infrastructure built around IBM’s Power architecture. The companies say this provides a scalable solution for mission-critical workloads while also providing an avenue to deploy artificial intelligence and machine learning for emerging cognitive computing applications. The deal is a challenge to companies like AMD and Intel as much as it is to Dell EMC, which also offers Nutanix on its HCI systems, in that it provides an option for organizations looking to diversify infrastructure away from x86 processing.

Meanwhile, developers like Pivot3 are tweaking their software stacks to make HCI more amenable to small and mid-sized enterprises, which have just as much incentive as larger organizations, if not more, to consolidate their hardware footprints. The company recently released the Acuity management stack, which allows its systems to simultaneously handle greater numbers of increasingly complex applications. Key elements in Acuity are support for Non-Volatile Memory Express (NVMe), which provides faster access to flash storage, and a policy-based management engine that improves coordination between multiple virtual machines (a rough sketch of the general idea appears below).

But as tech consultant (and ITBE contributor) Rob Enderle pointed out on TG Daily recently, the main challenge with HCI is not boosting features or controlling costs but managing entropy. Using leading auto companies like Jaguar and Mercedes as examples, he notes that legacy IT staff have built their careers around tweaking and customizing vendor platforms. This is bad enough in standard data center infrastructure, but converged systems, at least the hardware components, should be left alone. It’s like buying a car from stock and outfitting it with all manner of after-market products: you may see improvement at first, but eventually the complexity starts to hamper performance. His recommendation is that HCI be reserved for greenfield deployments, with IT staff trained from the get-go to work in converged infrastructure environments.
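To make the policy-driven idea a bit more concrete, here is a minimal sketch, written in Python purely for illustration. It is an assumption about how a policy-based engine might behave, not Pivot3’s actual Acuity interface: each workload declares a latency and IOPS target, and the engine places it on the least expensive storage tier (NVMe flash, SATA SSD or hybrid) that still satisfies the policy.

```python
# Hypothetical illustration only -- not Pivot3's actual Acuity API.
# Shows the general idea of a policy-based engine: each VM workload
# declares a performance target, and the engine maps it to a storage tier.

from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    max_latency_ms: float   # latency ceiling the workload can tolerate
    min_iops: int           # sustained IOPS the workload expects

# Assumed tiers: NVMe flash for latency-sensitive VMs, hybrid for the rest.
# Each entry is (tier name, latency it can deliver in ms, IOPS it can deliver).
TIERS = [
    ("nvme-flash", 1.0, 100_000),
    ("sata-ssd",   5.0, 30_000),
    ("hybrid",    20.0, 5_000),
]

def place_workload(policy: Policy) -> str:
    """Pick the cheapest (slowest acceptable) tier that satisfies the policy."""
    for tier, latency_ms, iops in reversed(TIERS):   # cheapest tier first
        if latency_ms <= policy.max_latency_ms and iops >= policy.min_iops:
            return tier
    return TIERS[0][0]   # nothing matched: fall back to the fastest tier

if __name__ == "__main__":
    oltp = Policy("oltp-db", max_latency_ms=2.0, min_iops=50_000)
    backup = Policy("nightly-backup", max_latency_ms=50.0, min_iops=1_000)
    print(place_workload(oltp))    # -> nvme-flash
    print(place_workload(backup))  # -> hybrid
```

The point of the sketch is simply that placement decisions are driven by declared policies rather than by hand-tuning individual VMs, which is what lets a small IT team run a larger and more varied set of applications on the same converged footprint.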
As with any piece of data hardware or software, the key to a successful HCI deployment is more in how you use it than what vendor it comes from. With data operations expected to become increasingly autonomous in the coming years, workloads should naturally gravitate toward an optimal mix of cost and performance. Hyperconverged infrastructure should provide both, regardless of whether it resides in the enterprise or the cloud, and this should finally give organizations the flexibility to process and store data wherever it makes the most sense, not merely wherever limited resources dictate.