Enter disaggregation. Separating an aggregate into its components isn't a new concept, but the approach is assuming greater importance. For IT, disaggregation means breaking a computer down into its core elements -- compute, memory, I/O, storage, cache, network fabric and so on -- to build more cost-effective, agile infrastructures and allocate storage more efficiently.

But why do this now, after a decade of aggregating IT resources through concepts like hyper-convergence? Because scaling resources individually is often more cost-effective than scaling them in fixed combinations. The idea behind disaggregation is to create pools of individual resources across many computers and then allot the appropriate combination of resources -- memory, CPU, cache, network fabric and storage -- on demand to each application. Done correctly, disaggregation lets you stand up and tear down infrastructure almost instantaneously, with resource utilization soaring past 80% and management costs decreasing as storage allocation is automated.

Let's explore five products that illustrate how disaggregation is being used and the benefits that are accruing. Three are normally associated with storage: Nutanix, Pivot3 and Datrium. The remaining two -- DriveScale and Hewlett Packard Enterprise's Synergy -- apply disaggregation on a broader front.
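The pooling-and-allotment idea above can be sketched in a few lines of code. The following is a minimal, hypothetical model (the class and resource names are illustrative, not any vendor's API): a shared pool tracks total CPU, memory and storage across many machines, applications claim combinations on demand, and utilization is measured per resource rather than per server.

```python
from dataclasses import dataclass


@dataclass
class Resources:
    """A bundle of disaggregated resources (illustrative units)."""
    cpu_cores: int
    memory_gb: int
    storage_tb: int


class DisaggregatedPool:
    """Toy model of a disaggregated pool: resources from many
    machines are treated as one pool and allotted per application."""

    def __init__(self, total: Resources):
        self.total = total
        self.allocations: dict[str, Resources] = {}

    def _used(self) -> Resources:
        return Resources(
            sum(a.cpu_cores for a in self.allocations.values()),
            sum(a.memory_gb for a in self.allocations.values()),
            sum(a.storage_tb for a in self.allocations.values()),
        )

    def allocate(self, app: str, req: Resources) -> bool:
        """Grant the request only if every resource still fits the pool."""
        used = self._used()
        if (used.cpu_cores + req.cpu_cores <= self.total.cpu_cores
                and used.memory_gb + req.memory_gb <= self.total.memory_gb
                and used.storage_tb + req.storage_tb <= self.total.storage_tb):
            self.allocations[app] = req
            return True
        return False

    def release(self, app: str) -> None:
        """Tear down an application's slice, returning it to the pool."""
        self.allocations.pop(app, None)

    def utilization(self) -> dict[str, float]:
        used = self._used()
        return {
            "cpu": used.cpu_cores / self.total.cpu_cores,
            "memory": used.memory_gb / self.total.memory_gb,
            "storage": used.storage_tb / self.total.storage_tb,
        }


# Pool the resources of many servers, then carve out per-app combinations.
pool = DisaggregatedPool(Resources(cpu_cores=128, memory_gb=1024, storage_tb=100))
pool.allocate("analytics", Resources(cpu_cores=64, memory_gb=512, storage_tb=40))
pool.allocate("web", Resources(cpu_cores=40, memory_gb=300, storage_tb=10))
print(pool.utilization())
```

The point of the sketch is the shape of the problem: because each resource is tracked independently, a CPU-hungry application and a storage-hungry one can share the same pool, which is how utilization can be driven past the levels typical of fixed server-by-server allocation.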
TechNewSources is a one-stop shop for all the latest datacenter TechNews you can use.
Sunday, August 6, 2017
Disaggregating network, compute and storage allocation demystified