
Tuesday, August 8, 2017

In-memory computing: Where fast data meets big data

Traditionally, databases and big data software have been built mirroring the realities of hardware: memory is fast, transient and expensive, disk is slow, permanent and cheap. But as hardware is changing, software is following suit, giving rise to a range of solutions focusing on in-memory architectures.

The ability to have everything done in memory is appealing, as it bears the promise of massive speedups. However, there are also challenges in designing new architectures that make the most of memory availability, and there is a wide range of approaches to in-memory computing (IMC).

Some of these approaches were discussed this June in Amsterdam, at the In-Memory Computing Summit EMEA. The event featured sessions from vendors, practitioners and executives, and offered an interesting snapshot of this space. As in-memory architectures gain adoption, we'll be covering them more often, kicking off with the IMC Summit's organizers: GridGain.

SLIDING IN

First off, IMC is not new. Caching has long been used to speed up data-related operations. However, as memory technology evolves and the big data mantra spreads, some new twists have been added: memory-first design and HTAP.

HTAP stands for hybrid transactional and analytical processing, a term introduced by Gartner. HTAP basically means having a single database backend support both transactional and analytical workloads, which sounds tempting for a number of reasons. Many IMC solutions emphasize HTAP, seeing it as a way to build their case.
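To make the HTAP idea concrete, here is a minimal sketch using an in-memory SQLite database as a stand-in for an HTAP backend. This is purely illustrative, not GridGain's product or API: the point is that one store serves a point update (transactional) and an aggregate query (analytical) with no ETL step in between. The table and data are hypothetical.

```python
import sqlite3

# One in-memory database acts as the single backend for both workload types.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("alice", 120.0), ("bob", 80.0), ("alice", 40.0)],
)

# Transactional workload: a point update on a single row.
conn.execute("UPDATE orders SET amount = 130.0 WHERE id = 1")
conn.commit()

# Analytical workload: an aggregate over the very same, freshly updated data.
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 250.0
```

In a separated OLTP/OLAP setup, the aggregate would run against a warehouse copy that lags behind the update; here the analytical query sees the transaction immediately.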

As to the second point, IMC turns traditional database thinking on its head. As Abe Kleinfeld, GridGain's CEO, puts it: "traditionally in databases memory was a valuable resource, so you tried to use it with caution. In our case, we always go to memory first and avoid touching the disk at all cost. The algorithms we use may be the same -- it's all about cache and hits and misses after all -- but the thinking is different."
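The contrast Kleinfeld draws can be sketched in a few lines. This is a simplified illustration, not any vendor's implementation; the class names are hypothetical and plain dicts stand in for disk and RAM tiers. The traditional path rations memory as a cache in front of disk, while the memory-first path keeps the primary copy in RAM and never touches disk on a read.

```python
class CacheAsideStore:
    """Traditional: disk is the system of record; memory is a bounded cache."""

    def __init__(self, disk):
        self.disk = disk      # dict standing in for disk storage
        self.cache = {}       # small, carefully rationed memory

    def read(self, key):
        if key in self.cache:          # cache hit: serve from memory
            return self.cache[key]
        value = self.disk[key]         # cache miss: go to disk...
        self.cache[key] = value        # ...then populate the cache
        return value


class MemoryFirstStore:
    """Memory-first: RAM holds the primary copy; disk is only a backup tier."""

    def __init__(self, data):
        self.memory = dict(data)       # the full dataset lives in memory

    def read(self, key):
        return self.memory[key]        # the hot path never touches disk
```

As the quote notes, the mechanics (lookups, hits, misses) are the same; what changes is the assumption about which tier is primary.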

http://www.zdnet.com/article/in-memory-computing-where-fast-data-meets-big-data/
