
Wednesday, October 25, 2017

IBM scientists demonstrate in-memory computing with 1 million devices for applications in AI

A million processes are mapped to the pixels of a 1000 × 1000 pixel black-and-white sketch of Alan Turing. The pixels turn on and off in accordance with the instantaneous binary values of the processes. Credit: Nature Communications

"In-memory computing," or "computational memory," is an emerging concept that uses the physical properties of memory devices for both storing and processing information. This runs counter to current von Neumann systems and devices, such as standard desktop computers, laptops and even cellphones, which shuttle data back and forth between memory and the computing unit, making them slower and less energy efficient.

Today, IBM Research is announcing that its scientists have demonstrated that an unsupervised machine-learning algorithm, running on one million phase-change memory (PCM) devices, successfully found temporal correlations in unknown data streams. Compared to state-of-the-art classical computers, this prototype technology is expected to yield 200x improvements in both speed and energy efficiency, making it highly suitable for enabling ultra-dense, low-power, and massively parallel computing systems for applications in AI.

The researchers used PCM devices made from a germanium antimony telluride alloy sandwiched between two electrodes. When the scientists apply a tiny electric current to the material, they heat it, which alters its state from amorphous (with a disordered atomic arrangement) to crystalline (with an ordered atomic configuration). The IBM researchers used these crystallization dynamics to perform computation in place.

"This is an important step forward in our research of the physics of AI, which explores new hardware materials, devices and architectures," says Dr. Evangelos Eleftheriou, an IBM Fellow and co-author of the paper.
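The correlation-detection scheme described here can be sketched in ordinary software. The sketch below is a hypothetical simplification, not the researchers' actual method: it assumes each PCM device's conductance grows by a small increment with every partial-crystallization ("SET") pulse, and that a pulse is applied to device i whenever process i is active at the same instant the aggregate population activity exceeds its expected level. Processes that spike together then crystallize their devices faster than independent ones, so reading out the conductances reveals the correlated group. All names and parameters are illustrative.

```python
import random

def detect_correlations(n_proc=50, n_corr=10, steps=2000, p=0.05, seed=0):
    """Software sketch of PCM-style temporal correlation detection.

    Each of n_proc binary processes maps to one simulated PCM device.
    The first n_corr processes share a hidden common driver (correlated);
    the rest spike independently with probability p. A device receives a
    small conductance increment (a stand-in for a partial crystallization
    pulse) whenever its process is active while the instantaneous
    population activity exceeds its expected value.
    """
    rng = random.Random(seed)
    conductance = [0.0] * n_proc              # simulated device conductances
    for _ in range(steps):
        driver = rng.random() < p             # hidden common event
        spikes = [
            (driver if i < n_corr else rng.random() < p)
            for i in range(n_proc)
        ]
        # Aggregate activity, thresholded at the expected spike count.
        if sum(spikes) > p * n_proc:
            for i, spiked in enumerate(spikes):
                if spiked:
                    conductance[i] += 1.0     # partial SET pulse
    return conductance

g = detect_correlations()
mean_corr = sum(g[:10]) / 10      # devices of the correlated processes
mean_uncorr = sum(g[10:]) / 40    # devices of the independent processes
print(mean_corr, mean_uncorr)
```

Because the correlated processes always spike in unison, every one of their joint events clears the population threshold, while independent processes only occasionally coincide with enough others; the correlated devices therefore end up with distinctly higher accumulated conductance.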
"As the CMOS scaling laws break down because of technological limits, a radical departure from the processor-memory dichotomy is needed to circumvent the limitations of today's computers. Given the simplicity, high speed and low energy of our in-memory computing approach, it's remarkable that our results are so similar to our benchmark classical approach run on a von Neumann computer."

https://m.phys.org/news/2017-10-ibm-scientists-in-memory-million-devices.html
