IBM Research says it has developed a new approach to in-memory computing that could give it an answer to the hardware accelerators Microsoft and Google are pursuing for high-performance and machine-learning applications. IBM's researchers describe the new 'mixed-precision in-memory computing' approach in a paper published today in the peer-reviewed journal Nature Electronics. The company is pursuing an alternative to traditional computing architectures, in which software requires data transfers between separate CPU and RAM units. According to IBM, that design, known as the von Neumann architecture, creates a bottleneck for data analytics and machine-learning applications, which require ever-larger data transfers between processing and memory units. Transferring data is also an energy-intensive process.
https://www.zdnet.com/article/ibm-our-in-memory-computing-breakthrough-will-cut-cost-of-training-ai/
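The general idea behind mixed-precision computing is that an imprecise but fast and energy-efficient unit (in IBM's case, analogue in-memory hardware) performs the bulk of the arithmetic, while a conventional high-precision digital unit iteratively corrects the result. As a rough illustration only, not IBM's implementation, here is a minimal NumPy sketch of mixed-precision iterative refinement for solving a linear system Ax = b, where a half-precision copy of the matrix stands in for the imprecise unit:

```python
import numpy as np

def mixed_precision_solve(A, b, iters=20):
    """Illustrative iterative refinement: a low-precision solve does the bulk
    of the work, a high-precision residual step corrects the answer."""
    # Rounding the matrix to float16 stands in for the imprecise in-memory unit.
    A_lo = A.astype(np.float16).astype(np.float32)
    x = np.zeros_like(b, dtype=np.float64)
    for _ in range(iters):
        # High-precision residual (the digital correction step).
        r = b - A @ x
        # Low-precision approximate solve of the correction equation.
        dx = np.linalg.solve(A_lo, r.astype(np.float32))
        x += dx.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64)) + 64 * np.eye(64)   # well-conditioned system
b = rng.standard_normal(64)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))   # small residual despite low-precision solves
```

The point of the sketch is that the expensive inner solve never needs full precision; as long as the outer residual is computed accurately, repeated correction drives the error down.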