Monday, June 26, 2017

Flash storage and persistent memory may require application re-writes

Significant changes are needed to optimize enterprise software for flash storage and persistent memory. That's according to experts at Microsoft, Dell EMC, and other companies, who said their firms are already preparing major products for the transitions, but who also agreed that more work will be needed, some of it yet to be determined.

Flash storage, which replaces hard disks built around spinning media with drives built around chips, is already establishing its place in data centers and on individual computers. Persistent memory, also known as storage-class memory, is a computer architecture in which all data lives in non-volatile RAM, with no hard disk of any kind; it is on the cusp of moving from laboratories to product development.

Both technologies are disruptive, but in different ways: flash storage simply makes reads and writes faster, while persistent memory challenges computer architecture as we know it. That change will affect how software is developed, how it interacts with operating systems, and how applications such as databases mediate between records and hardware.
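To make that contrast concrete, here is a minimal C sketch (an assumption for illustration, not from the article) of what byte-addressable persistence looks like to an application: durable data is updated with ordinary loads and stores through a memory mapping, not with read/write system calls. An ordinary file stands in for the persistent region, and msync() stands in for the cache-flush-and-fence instructions real persistent memory would use.

    /* Minimal sketch (assumption, not from the article): with persistent
     * memory exposed through a DAX-style mapping, durable data is updated
     * with ordinary loads and stores. Here a plain file stands in for the
     * persistent region. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct record {
        long id;
        char name[56];
    };

    int main(void) {
        int fd = open("records.pmem", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, sizeof(struct record)) != 0)
            return 1;

        /* Map the region so it is addressable like ordinary memory. */
        struct record *r = mmap(NULL, sizeof *r, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
        if (r == MAP_FAILED)
            return 1;

        /* The write path is now a store, not a system call. */
        r->id = 42;
        strcpy(r->name, "persistent record");

        /* Make the update durable; on real persistent memory this would
         * be a cache-line flush plus fence rather than msync(). */
        msync(r, sizeof *r, MS_SYNC);

        printf("record %ld: %s\n", r->id, r->name);
        munmap(r, sizeof *r);
        close(fd);
        return 0;
    }

On a true persistent-memory system with a DAX-capable file system, the same mapping would bypass the page cache entirely, so the store itself becomes the write path.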

"It's one of those discussions that could go on several hours with the details," said Rohan Kumar, general manager of database systems at Microsoft.

Microsoft's database management system, SQL Server, gained an in-memory engine in its 2014 release and a memory-based logging function in 2016; by that point, storage management had become separate from the main program, Kumar noted. Windows Server 2016 also received modifications in preparation for persistent memory.

"Imagine a world where your memory is as big as your flash array right now, what do you do?," Kumar said. At some point applications may need to be largely rewritten, rather than just modified, he acknowledged.

"We have to," Kumar said. But, "If every application has to change whenever there's a storage platform innovation, then that's very disruptive," he said, referring to bad disruption, not the productive kind. As such, any companies building their own applications should look at data platforms that handle the abstraction to operating systems as much as possible, he advised.

Kumar said he is already seeing customers install high-end application and database servers with as much as 3 TB to 6 TB of memory, capacities that match the hard disk storage levels of not long ago.

Dell EMC's Dan Cobb, VP of global technology strategy, expressed similar views. "That's a great topic and a rich one to dig into. We know and sort of take it as an act of faith that when the infrastructure gets [faster] we can expect a similar bump in workload capability," he noted. "We now have tens-of-microsecond and hundreds-of-microsecond tiers available. [Even] if the software stack does absolutely nothing, they've had the benefit of Moore's Law," Cobb continued.

"New architectures that will co-evolve with the hardware is the next phase. We are going to have a blast with new architectures to mix-and-match these things appropriately. They'll be... applications that are more evolutionary [and] move the workload to systems whose performance is dictated more by memory than by performance."

That doesn't necessarily mean software will be any better programmed than it is today. "It can run infinite loops in half the time," Cobb joked. "If you decide to aggressively go in-memory, then you need a different way to ensure consistency and coherence and those sorts of things. That's not new software," he said.
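Cobb's consistency point can be made concrete with a hedged sketch: once memory is durable, the order in which stores become persistent decides what a crash leaves behind. A common pattern, shown below in C with msync() again standing in for hardware flush-and-fence sequences, is to make the payload durable before publishing a commit flag, so a reader never trusts a half-written entry.

    /* Hedged sketch of the consistency problem: if memory is durable,
     * the order in which stores reach it decides whether a crash leaves
     * valid state. Write the payload, force it durable, and only then
     * publish the commit flag. */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct entry {
        char payload[120];
        int  committed;   /* reader trusts payload only when this is set */
    };

    /* Durably update one entry in a mapped persistent region. */
    void update_entry(struct entry *e, const char *data) {
        e->committed = 0;
        msync(e, sizeof *e, MS_SYNC);      /* invalidate before rewriting */

        strncpy(e->payload, data, sizeof e->payload - 1);
        msync(e, sizeof *e, MS_SYNC);      /* payload durable first... */

        e->committed = 1;
        msync(e, sizeof *e, MS_SYNC);      /* ...then publish the flag */
    }

    int main(void) {
        int fd = open("log.pmem", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, sizeof(struct entry)) != 0)
            return 1;
        struct entry *e = mmap(NULL, sizeof *e, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
        if (e == MAP_FAILED)
            return 1;
        update_entry(e, "crash-consistent write");
        munmap(e, sizeof *e);
        close(fd);
        return 0;
    }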

Even the languages used to develop modern software may need to change, Cobb said. C++, Java, and so on: "Those languages never really thought about data that was persistent. Imagine a new clause for the compiler in these languages that says 'pragma persistent,'" he said, referring to directives that tell a compiler about the programmer's intentions. "Now I can provide instructions to the compiler, and the compiler can pass things to the corresponding run-time library that says this data structure Danny just defined is a persistent thing."
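No such clause exists in C or C++ today, so the sketch below is purely hypothetical: a macro stands in for the imagined pragma and hands placement off to a tiny runtime library that allocates from a memory-mapped persistent arena, roughly the compiler-to-run-time hand-off Cobb describes. Every name in it is invented for illustration.

    /* Hypothetical illustration of the "pragma persistent" idea. The
     * PERSISTENT macro fakes the imagined compiler clause by placing a
     * variable in a mapped persistent arena via a tiny runtime library.
     * All names here are invented. */
    #include <fcntl.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define ARENA_SIZE 4096
    static char *arena;          /* mapped persistent region */
    static size_t arena_used;

    /* Runtime-library allocator the "compiler" would target. */
    static void *pmem_alloc(size_t n) {
        void *p = arena + arena_used;
        arena_used += n;
        return p;
    }

    /* Stand-in for the imagined pragma: declare a pointer whose target
     * lives in the persistent arena rather than ordinary RAM. */
    #define PERSISTENT(type, name) type *name = pmem_alloc(sizeof(type))

    int main(void) {
        int fd = open("arena.pmem", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, ARENA_SIZE) != 0)
            return 1;
        arena = mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
        if (arena == MAP_FAILED)
            return 1;

        PERSISTENT(long, counter);  /* "this data structure is persistent" */
        *counter += 1;              /* value survives across runs */
        msync(arena, ARENA_SIZE, MS_SYNC);
        printf("counter = %ld\n", *counter);

        munmap(arena, ARENA_SIZE);
        close(fd);
        return 0;
    }

Run it twice and the counter resumes where it left off, which is the behavioral difference a persistent declaration would buy.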

Hewlett-Packard, Intel, SAP, the Storage Networking Industry Association (SNIA), and others are among the organizations also leading the charge toward a world where hard disks may be limited to cold storage, or gone entirely, and memory never forgets. Some of the companies involved are behind a related open-source programming effort, and SNIA has an entire working group devoted to the subject.

"We're just in for a tremendously fun time," Cobb added. "Applications in the storage domain have become accustomed to having infinite storage. Memory applications haven't had the luxury of doing that."

As for the developers themselves, "Let me talk about a programmer who's essentially beginning her career today," Cobb suggested. "She may want to do a modern-day data memory framework that's only ever run in a persistent memory world. That would be a great thought experiment."

http://www.techrepublic.com/article/flash-storage-and-persistent-memory-may-require-application-re-writes/
