#IBM (IBM) announced that it will launch bare metal servers powered by #Intel ’s (INTC) #Xeon Scalable processors. IBM is also trying to compete against Intel’s “scalable system framework” by forming the #OpenCAPI Consortium, of which Intel is not a member. OpenCAPI stands for “open coherent accelerator processor interface,” and the consortium’s aim is to boost datacenter efficiency for #AI ( #artificialintelligence ) and #ML ( #machinelearning ) applications in #HPC ( #highperformancecomputing ) environments. #Nvidia (NVDA) is one of the board members of #OpenCAPI.

At the end of 2016, IBM announced that it was teaming up with Nvidia to build the world’s fastest AI/ML solutions. IBM’s goal was to pair Nvidia’s NVLink technology, optimized for IBM’s Power processors, with Nvidia’s latest GPU technology. Clearly, these developments were conducive to Nvidia’s unprecedented growth story.

Nvidia All Over The Place: OpenCAPI

The aim of OpenCAPI is to move data faster across datacenters so that extracting insights from large datasets becomes quick and easy. Its goal is to support storage-class memory, similar to Intel’s #3DXPoint memory. It is noteworthy that Intel’s memory chip partner #Micron (MU) is an OpenCAPI member at the board level. In addition, OpenCAPI aims to deliver high bandwidth at very low latency compared with PCIe-based interconnects.
https://seekingalpha.com/article/4089044-intel-ibm-making-nvidia-breakfast-sandwich
Thursday, July 20, 2017
Intel And IBM Making Nvidia Their Breakfast Sandwich