
Monday, October 17, 2016

Thanks for the memory: How cheap RAM changes computing

RAM (random access memory) is a component of every computer system, from tiny embedded controllers to enterprise servers. In the form of SRAM (static RAM) or DRAM (dynamic RAM), it's where data is held temporarily while some kind of processor operates on it. But as the price of RAM falls, the model of shuttling data back and forth between large persistent storage and a small pool of RAM may no longer hold. (#phasechangememory) RAM prices are highly susceptible to market fluctuations, but the long-term trend is steadily downward: as recently as 2000, a gigabyte of memory cost over $1,000 (£800 in those days); today it's just under $5 (~£5). That opens up very different ways of thinking about system architecture.
Databases are traditionally held on disk, from which the required information is read into memory as needed and then processed. Memory size is usually assumed to differ from disk size by several orders of magnitude (say, gigabytes versus terabytes). But as memory sizes grow, it becomes more efficient to load more data into memory up front, reducing the number of disk reads and writes. As RAM prices continued to fall, we began to see whole databases loaded from disk into memory, operated on, and then written back to persistent storage. Now, however, we're at the point where some databases are never written back to persistent storage at all, existing entirely in volatile RAM.
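As a concrete sketch of that in-memory pattern (the article doesn't name a specific product), SQLite can hold an entire database in RAM via its special ":memory:" path, and the contents can be copied back out to a file only if persistence is wanted later:

```python
import sqlite3

# Minimal sketch of an in-memory database: SQLite's special ":memory:" path
# keeps the whole database in RAM, with no backing file on disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (year INTEGER, usd_per_gb REAL)")
conn.executemany(
    "INSERT INTO prices VALUES (?, ?)",
    [(2000, 1000.0), (2016, 5.0)],  # the per-gigabyte figures quoted above
)

# Queries touch RAM only; no disk reads or writes happen here.
for year, price in conn.execute("SELECT year, usd_per_gb FROM prices"):
    print(year, price)

# If the data should survive a power cycle after all, the in-memory contents
# can be written back to persistent storage explicitly (Python 3.7+).
disk_copy = sqlite3.connect("prices_backup.db")
conn.backup(disk_copy)
disk_copy.close()
conn.close()
```

The same shape, operating in RAM and persisting explicitly only when needed, is what dedicated in-memory databases do at much larger scale.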
Memory access times are measured in nanoseconds (billionths of a second), while disk seek times are usually measured in milliseconds, making memory roughly a million times faster in terms of latency. RAM's transfer rates aren't a million times higher, of course: gigabytes per second versus a few hundred megabytes per second for a quick hard drive. But even on throughput, RAM clearly has persistent storage beat by at least an order of magnitude.
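As a back-of-the-envelope check of those ratios, here is a small sketch; the specific bandwidth figures (10 GB/s for DRAM, 200 MB/s for a hard drive) are illustrative assumptions rather than numbers from the article:

```python
# Back-of-the-envelope check of the gaps described above. The exact figures
# are illustrative assumptions, not measurements from the article.

nanosecond = 1e-9
millisecond = 1e-3

# Latency: access times measured in nanoseconds versus seek times in milliseconds.
print(f"Latency gap:   ~{millisecond / nanosecond:,.0f}x")  # ~1,000,000x

# Throughput: gigabytes per second for RAM versus a few hundred megabytes
# per second for a quick hard drive (assumed 10 GB/s vs 200 MB/s).
ram_bytes_per_s = 10e9
hdd_bytes_per_s = 200e6
print(f"Bandwidth gap: ~{ram_bytes_per_s / hdd_bytes_per_s:,.0f}x")  # ~50x
```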

http://arstechnica.co.uk/gadgets/2016/10/how-cheap-ram-changes-computing/
