According to CEO @MohitAron, the goal for #Cohesity storage products right from the beginning was to develop a secondary storage platform that had deduplication built in and could support unlimited scaling. Aron also wanted it to be easy to deploy and manage. To that end, when Cohesity launched its DataPlatform in 2015, it was built on hyper-converged infrastructure (#HCI) principles: sets of nodes are installed to form storage clusters, and all of the resources are made available through a hypervisor, which also provides simplified management.

With universal deduplication baked in, Cohesity was one of the first companies to offer global, block-level data reduction that keeps scaling as nodes are added to expand a customer's secondary storage (a simplified sketch of the idea appears below). With an innovative snapshot system, Cohesity claims there is no performance hit no matter how many snapshots the system takes; one of the ways it does this is by spreading the snapshots across multiple nodes. Cohesity storage even offers a consistency guarantee, in which DataPlatform writes to multiple nodes before it acknowledges a successful write (also sketched below).

Test your knowledge of Cohesity's converged secondary storage by taking our quiz.
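To make the idea of global, block-level deduplication concrete, here is a minimal Python sketch of content-addressed deduplication in general. It is a hypothetical illustration, not Cohesity's implementation: the DedupStore class, its methods, the object names, and the 4 KB block size are all assumptions made for the example.

import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size for the example; real systems vary

class DedupStore:
    """Toy content-addressed block store: identical blocks are kept only once."""

    def __init__(self):
        self.blocks = {}    # fingerprint -> block bytes, stored once per unique block
        self.objects = {}   # object name -> ordered list of block fingerprints

    def write(self, name, data):
        fingerprints = []
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            # A duplicate block only adds a reference; no new space is consumed.
            self.blocks.setdefault(fp, block)
            fingerprints.append(fp)
        self.objects[name] = fingerprints

    def read(self, name):
        return b"".join(self.blocks[fp] for fp in self.objects[name])

store = DedupStore()
store.write("backup-monday", b"A" * 8192 + b"B" * 4096)
store.write("backup-tuesday", b"A" * 8192 + b"C" * 4096)   # shares two blocks with Monday
print(len(store.blocks))                                    # 3 unique blocks stored, not 6
print(store.read("backup-tuesday") == b"A" * 8192 + b"C" * 4096)  # True: data reconstructed intact

Because the fingerprint index is global rather than per-node, the same reduction keeps working as a cluster grows; that is the property the article attributes to Cohesity's design.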
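The consistency guarantee can be sketched the same way: acknowledge a write only after it has landed on multiple nodes. Again this is a generic, hypothetical illustration under assumed names (Node, Cluster, replication_factor), not a description of DataPlatform's actual replication protocol.

class Node:
    """Toy storage node that keeps key -> value pairs in memory."""
    def __init__(self, name):
        self.name = name
        self.data = {}

    def write(self, key, value):
        self.data[key] = value
        return True   # a real node would report failure instead of always succeeding

class Cluster:
    """Acknowledges a write only after every chosen replica has stored it."""
    def __init__(self, nodes, replication_factor=2):
        self.nodes = nodes
        self.rf = replication_factor

    def write(self, key, value):
        # Pick rf nodes for this key; a simple hash determines placement here.
        start = hash(key) % len(self.nodes)
        targets = [self.nodes[(start + i) % len(self.nodes)] for i in range(self.rf)]
        acks = sum(1 for node in targets if node.write(key, value))
        # The caller only sees success once all chosen replicas have confirmed the write.
        return acks == self.rf

cluster = Cluster([Node("n1"), Node("n2"), Node("n3")], replication_factor=2)
print(cluster.write("vm-snapshot-42", b"...block data..."))  # True: both replicas stored the data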