
Thursday, December 1, 2016

Pushing Back Against Cheap and Deep Storage

It is not always easy, but several companies dedicated to the supercomputing market have managed to retune their wares to fit more mainstream market niches. This has been true across both the hardware and software subsets of high performance computing, and such efforts have been aided by a well-timed shift in enterprise needs toward more robust, compute- and data-intensive workhorses as new workloads, most of them driven by dramatic increases in data volumes and analytical capabilities, keep emerging.

For supercomputer makers, the story is a clear one. On the storage side, however, especially for the select few companies that have a unique, proprietary parallel file system and hardware partnerships to roll into appliances, the move from pure HPC to a wider enterprise market base can be more challenging. Unlike compute systems for HPC, which are seeing broader reach due to increased computational and data requirements, parallel file systems have a tougher row to hoe, especially against the "cheap and deep" options enabled by software-defined or object-based storage.

We have talked in depth about how humble HPC roots have propelled some storage companies into the broader enterprise storage fray (DDN is a prime example), and about HPC systems makers that sell storage (think Cray's Sonexion line with burst buffers and such), but these companies mostly base their offerings on commercial-grade distributions of Lustre and GPFS rather than going the fully customized route. That is already a tough sell for enterprise shops that have no experience with high performance parallel file systems, but for a company that has been around for decades with a closed-source file system locked inside an appliance, as in the case of Panasas, the enterprise reach seems like more of a stretch, at least in theory.

According to Tom Shea, COO at Panasas, this might sound logical, but what enterprise-grade commercial HPC shops need is something that simply works out of the box. He says that while there are plenty of sites that choose to deploy Lustre or another file system on their own commodity hardware, the management overhead of doing so is far more complex than it might appear, and that users are far less interested in what is under the hood of the Panasas appliance-based approach and far more focused on getting to work on real applications. While open sourcing the file system is not on any foreseeable roadmap (which bucks the trend), the focus will be on building a broader base of commercial high performance computing users through scalability, reliability, and manageability, all areas where Shea says the open source, DIY storage approach fails users with mission-critical applications at scale.

As a quick refresher, Panasas got its start in 1999 under the technical direction of co-founder Garth Gibson, of RAID fame, and found significant footing in the emerging HPC market, in government and academia in particular, beginning with the PanFS file system and its first scale-out NAS appliances. While it is difficult to tell whether the privately held company is profitable or growing overall, especially after undergoing some major transitions, the company's Jim Donovan gave some key insight into what growth looks like in the shift from a government- and academia-focused HPC business to a more enterprise-geared one.
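To make the parallel file system idea concrete: systems such as PanFS, Lustre, and GPFS break a single file into stripes and spread those stripes across many storage nodes so that clients can read and write pieces of the file in parallel. The sketch below is a minimal, purely illustrative Python model of that striping concept; the stripe size, node count, and function names are assumptions made for this example and are not taken from PanFS, Lustre, or any vendor's actual layout scheme.

```python
# Illustrative sketch of file striping in a parallel file system.
# Stripe size, node count, and round-robin placement are assumptions
# for this example, not details of PanFS, Lustre, or GPFS.

STRIPE_SIZE = 1 << 20   # 1 MiB per stripe (assumed)
NUM_NODES = 4           # number of storage nodes (assumed)


def stripe_layout(file_size: int, stripe_size: int = STRIPE_SIZE,
                  num_nodes: int = NUM_NODES):
    """Return (node_id, offset_within_node, length) tuples describing
    where each stripe of a file of `file_size` bytes would land."""
    layout = []
    offset = 0
    stripe_index = 0
    while offset < file_size:
        length = min(stripe_size, file_size - offset)
        node_id = stripe_index % num_nodes                    # round-robin across nodes
        node_offset = (stripe_index // num_nodes) * stripe_size
        layout.append((node_id, node_offset, length))
        offset += length
        stripe_index += 1
    return layout


if __name__ == "__main__":
    # A 3.5 MiB file striped 1 MiB at a time across 4 nodes:
    for node, node_off, length in stripe_layout(int(3.5 * (1 << 20))):
        print(f"node {node}: offset {node_off}, {length} bytes")
```

Because each stripe lands on a different node, several clients or threads can pull pieces of the same file at once, which is the aggregate-bandwidth advantage parallel file systems hold over a single NAS head, and it is also the source of the coordination and management overhead Shea points to when sites run this kind of setup themselves on commodity hardware.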

https://www.nextplatform.com/2016/11/29/pushing-back-cheap-deep-storage/
