Sunday, August 21, 2016

How to optimize Hadoop performance by getting a handle on processing demands

The promise of mainframe technology is that systems management within these mega-machines is so well orchestrated that optimal resource allocation and the management of diverse processing and applications across multiple operating systems are possible under one computer frame. That promise is a major reason why mainframes continue to hold their own as preferred bastions of mission-critical applications in enterprises.

So what does this have to do with big data platforms like Hadoop, which most often run in distributed computing environments?

As a preferred platform for big data processing, Hadoop doesn't have the same robust systems management tools as mainframes. Part of the reason is distributed computing's heritage as a commodity platform: it gave sites a cheaper hardware alternative to buying mainframes, but without many of the advanced systems management techniques that were, and still are, available on mainframes. The trade-off for sites choosing distributed hardware was simply to add more servers to compensate for poorer optimization of system throughput and storage.
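Getting a handle on processing demands starts with measuring them. As a minimal sketch, the short Python script below polls the YARN ResourceManager's cluster metrics REST endpoint and reports memory utilization, one rough signal for deciding whether a cluster is genuinely under-provisioned or merely poorly tuned. It assumes YARN is the cluster's resource manager and that the ResourceManager web service is reachable at the hypothetical host rm-host on the default port 8088; adjust both for your environment.

import json
import urllib.request

# Hypothetical ResourceManager address; replace with your cluster's RM host.
RM_METRICS_URL = "http://rm-host:8088/ws/v1/cluster/metrics"

def cluster_memory_utilization(url=RM_METRICS_URL):
    """Fetch YARN cluster metrics and return memory utilization as a fraction."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        metrics = json.load(resp)["clusterMetrics"]
    allocated = metrics["allocatedMB"]   # memory currently granted to running containers
    total = metrics["totalMB"]           # total memory managed by YARN across the cluster
    return allocated / total if total else 0.0

if __name__ == "__main__":
    utilization = cluster_memory_utilization()
    print(f"Cluster memory utilization: {utilization:.0%}")

If a reading like this shows utilization staying low while jobs still queue up, adding more servers treats the symptom; tuning scheduler queues and container sizes treats the cause.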

http://www.techrepublic.com/article/how-to-optimize-hadoop-performance-by-getting-a-handle-on-processing-demands/
