
Tuesday, March 14, 2017

How Rackspace Optimized Red Hat Ceph Storage

Protecting data was front and center at Rackspace when we worked with Red Hat to create our Red Hat Ceph Storage reference architecture. When designing any solution, it is easy to get carried away and select only the individual components considered best in class. That approach can cause costs to spiral, as would building a bespoke "snowflake" solution for every deployment. Being pragmatic and balancing capacity, performance, operational complexity, support burden, and cost is tough with competing priorities, but it is critical to building a repeatable, scalable system that is highly optimized and remains affordable.

The Rackspace reference architecture for Ceph Storage provides two types of storage node: one designed for performance and another for capacity. We deploy a minimum of five of either node type to create a cluster. To gain the greatest density, we like to use servers that support 24 x 2.5" disks. For the performance offering, we populate those slots with SSDs; while it could be argued that the node itself then becomes a bottleneck, we find this provides a great low-latency offering at a reasonable density. For the capacity offering, we use the same chassis, populate it with a small number of SSDs for performance, and fill the remaining slots with NL-SAS HDDs. This delivers dense capacity with good throughput, since a large number of spindles remain available.

We typically distribute the Ceph monitor daemons either onto the management nodes of the cloud or, in larger deployments, onto some of the storage nodes themselves. We have found that dedicating physical servers to monitor duties is usually unnecessary, except in the largest of clusters.
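To make the trade-off between the two node types concrete, here is a minimal back-of-the-envelope sizing sketch. The 24-bay chassis and the five-node minimum come from the architecture above; the per-disk capacities, the four-SSD split in the capacity node, and the 3x replication factor are illustrative assumptions, not figures from the reference architecture.

```python
# Rough sizing for the two node types described above. The 24-bay
# chassis and five-node minimum come from the post; disk sizes and
# the replication factor are illustrative assumptions.

REPLICATION_FACTOR = 3   # common default for Ceph replicated pools (assumed)
BAYS_PER_NODE = 24       # 24 x 2.5" chassis
MIN_NODES = 5            # minimum cluster size


def usable_capacity_tb(nodes, data_disks_per_node, disk_tb,
                       replication=REPLICATION_FACTOR):
    """Usable capacity is raw capacity divided by the replication factor."""
    return nodes * data_disks_per_node * disk_tb / replication


# Performance node: every bay filled with SSDs (assumed 1.9 TB each).
perf = usable_capacity_tb(MIN_NODES, BAYS_PER_NODE, 1.9)

# Capacity node: a handful of SSD bays for performance, the rest NL-SAS
# HDDs (assumed 4 SSDs and 20 x 2 TB 2.5" NL-SAS drives).
cap = usable_capacity_tb(MIN_NODES, BAYS_PER_NODE - 4, 2.0)

print(f"Five-node performance cluster: ~{perf:.0f} TB usable")
print(f"Five-node capacity cluster:    ~{cap:.0f} TB usable")
```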
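On the monitor-placement point, one quick way to see how many monitors a running cluster has, wherever they are colocated, is to ask the monitors for their own map. This sketch uses the python-rados bindings shipped with Ceph; the conffile path and implicit client keyring are assumptions about the host it runs on.

```python
import json

import rados  # python-rados bindings, packaged with Ceph

# Path is an assumption; adjust for your deployment.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# 'mon dump' is a standard monitor command; request JSON output.
ret, outbuf, errs = cluster.mon_command(
    json.dumps({"prefix": "mon dump", "format": "json"}), b'')
monmap = json.loads(outbuf)

mons = [m["name"] for m in monmap["mons"]]
print(f"{len(mons)} monitor(s): {', '.join(mons)}")
# An odd monitor count keeps quorum simple, which is part of why
# colocating mons on management or storage nodes works well.

cluster.shutdown()
```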

https://blog.rackspace.com/how-rackspace-optimized-red-hat-ceph-storage
