
Tuesday, June 7, 2016

Hadoop rigs are really supercomputers waiting for the right discipline

IBM reckons the rigs assembled to run the likes of Hadoop and Apache Spark are really just supercomputers in disguise, so has tweaked some of its supercomputer management code to handle applications that sprawl across x86 fleets.

As explained to The Register by IBM's veep for software-defined infrastructure Bernie Sprang, apps resting on clusters need their workloads optimised across pools of compute and storage resources, and can benefit from templates that make them easier to deploy without dedicated hardware. That second point, Sprang says, is important because he's starting to see “cluster creep”, a phenomenon whereby different teams inside an organisation each cook up their own compute clusters that could perhaps be shared instead of hoarded.

Big Blue thinks workload deployment and scheduling tools can give organisations the chance to acquire and operate fewer clusters without running out of data-crunching grunt.

Enter the new IBM “Spectrum” tools: the “Conductor” product is just such a resource-sharing tool, able to “templatise” clustered apps, while the “Spectrum LSF” tool handles workload scheduling. IBM already offers Spectrum Scale, the filesystem formerly known as GPFS, to support large-scale workloads.
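For a sense of the sort of job these schedulers juggle, here's a minimal PySpark word-count sketch. It's purely illustrative: the master URL, input path and app name are placeholders, and nothing below is Spectrum Conductor's or LSF's actual API.

from pyspark.sql import SparkSession

# Build a Spark session; the resource manager (YARN, a Conductor-managed
# cluster, or similar) decides where the executors land on the shared rig.
spark = (
    SparkSession.builder
    .appName("shared-cluster-wordcount")  # the name the scheduler reports for this job
    .master("yarn")                       # placeholder: whatever resource manager runs the cluster
    .getOrCreate()
)

# Classic word count: the kind of data-crunching grunt these clusters exist for.
lines = spark.read.text("hdfs:///data/input.txt").rdd.map(lambda row: row[0])
counts = (
    lines.flatMap(lambda line: line.split())
         .map(lambda word: (word, 1))
         .reduceByKey(lambda a, b: a + b)
)

for word, count in counts.take(10):
    print(word, count)

spark.stop()

In practice a job like this would be packaged and submitted through the scheduler's own tooling rather than run by hand, which is exactly the gap the templating pitch is aimed at.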

http://www.theregister.co.uk/2016/06/07/ibm_spectrum/
