
Monday, April 30, 2018

NVMe™ and NVMe-oF™ Take Center Stage at Flash Memory Summit

SANTA CLARA, Calif., April 30, 2018 (GLOBE NEWSWIRE) -- The rapidly emerging #NVMe and #NVMeoF standards for high-speed data storage will play a lead role in the 2018 Flash Memory Summit ( #FMS ), scheduled for August 7-9, 2018 at the Santa Clara Convention Center. According to G2M Research, the NVMe market is expected to reach $60 billion by 2021 as designers make it their preferred solution for high-speed storage. NVMe offers data transfer rates up to 10 times greater than SATA, runs on the popular PCIe interface and offers a rapidly growing ecosystem of products and technology. The NVMe-oF standard offers similar advantages for networked storage, allowing applications to use flash anywhere in an enterprise network or computer system. “NVMe should be designers’ first choice among storage interfaces,” said Chuck Sobey, Flash Memory Summit Conference Chair. “It offers much higher throughput than disk interfaces while still using an interface familiar to all data centers. NVMe and NVMe-oF meet the needs of enterprise, computer, high-performance computing, embedded, mil/aero, mobile, and many other applications as well.” FMS 2018 will present a range of in-depth technical sessions on NVMe and NVMe-oF, including a series organized by the standards group NVM Express, Inc., as well as pre-conference seminars, invited talks, exhibits and market trends. The Flash Memory Summit will also feature 3D NAND flash and 3D XPoint, emerging nonvolatile memory technologies including applications for the latest MRAM developments, persistent memory and NVDIMM, controllers, flash storage networking, and the software that makes it all work together. The FMS conference offers an upgraded exhibit hall, including a large demonstration area and more than 100 exhibitors. The conference is still accepting exhibitor applications for booth space and proposals for panel sessions and speaking slots.
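For a rough sense of where the "up to 10 times" figure comes from, the back-of-the-envelope sketch below compares SATA III's effective bandwidth with PCIe 3.0 links of a few widths. The numbers are assumed nominal figures (8b/10b encoding for SATA, roughly 985 MB/s of usable throughput per PCIe 3.0 lane); real device throughput varies.

```python
# Back-of-the-envelope interface bandwidth comparison (assumed nominal figures).
SATA3_RAW_GBPS = 6.0                                     # SATA III raw line rate, Gb/s
SATA3_EFFECTIVE_MBPS = SATA3_RAW_GBPS * 1000 / 8 * 0.8   # 8b/10b encoding -> ~600 MB/s
PCIE3_LANE_MBPS = 985                                    # approx. usable MB/s per PCIe 3.0 lane

for lanes in (2, 4, 8):
    pcie_mbps = PCIE3_LANE_MBPS * lanes
    ratio = pcie_mbps / SATA3_EFFECTIVE_MBPS
    print(f"PCIe 3.0 x{lanes}: ~{pcie_mbps / 1000:.1f} GB/s ({ratio:.1f}x SATA III)")

# Typical output: x4 works out to roughly 6-7x SATA III and x8 to roughly 13x,
# which is the range behind "up to 10 times greater than SATA" claims.
```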

https://globenewswire.com/news-release/2018/04/30/1489650/0/en/NVMe-and-NVMe-oF-Take-Center-Stage-at-Flash-Memory-Summit.html

Huawei developing its own OS in case it’s barred from Android?

@Android is the go-to operating system for those that aren’t into @iOS, but that may not be the case forever. @Samsung has been developing its own #Tizen OS for some time now, even using it on some smartphones. Now it could be @Huawei ’s turn: The South China Morning Post says Huawei has been working on an Android alternative. There are a few reasons an Android alternative might be important for Huawei. For starters, trade and security tensions between the U.S. and China have placed ZTE under threat of losing its Android license from Google. Huawei, another Chinese company, could be under the same threat. Even without U.S.-China tensions, there are plenty of reasons Huawei might want to explore its own technology. For example, the company may be looking to reduce its dependence on other companies in general — as it has done in the mobile processor business. While it does still use third-party processors, many of its devices use chips built by the company itself. According to the report, the plan to build a new operating system was set in motion by Huawei founder Ren Zhengfei, and was really a failsafe in case of a “worst-case scenario.” The operating system hasn’t seen the light of day yet because it simply isn’t as good as Android — but another problem with it could be that it doesn’t have enough third-party support. Even if it is developing its own operating system, it’s possible that Google’s apps and services will still be available on it — though that will depend on the operating system itself.

https://www.digitaltrends.com/mobile/huawei-operating-system-news/

VMware Certified Design Expert 6 - Network Virtualization (VCDX6-NV)

@VMware Certified Design Expert (VCDX) is the highest level of VMware certification. This elite group of VCDX professionals comprises design architects highly skilled in VMware enterprise deployments.  Earning the new VCDX6-NV certification from VMware will validate your world-class vSphere and NSX 6.x skills and prove you can deliver true business value by designing a VMware NSX®-platform-based data center networking infrastructure that meets customer objectives and constraints.
https://www.vmware.com/education-services/certification/vcdx6-nv.html

Nutanix’s Amazon Cloud Rival Delayed by Engineering Issues

@Nutanix Inc. planned to release a public cloud offering to rival market leader @Amazon Web Services in the second half of 2018. But engineering challenges have put the launch timing in doubt. Chief Executive Officer Dheeraj Pandey has found that targeting Amazon.com Inc. is harder than expected, even for an underdog like him. San Jose, California-based Nutanix pioneered a type of enterprise computing, known as hyperconverged systems, that combines server, storage and virtualization software in a single machine. The company was the second best-performing initial public offering of 2016. Lately Nutanix’s stock has hovered near record highs on investor excitement that it will be able to notch massive growth rates with its upcoming Xi Cloud Services product to compete against Amazon, Microsoft Corp. and Alphabet Inc.’s Google. Stretching beyond its core business hasn’t been easy, however.

https://www.bloomberg.com/news/articles/2018-04-27/nutanix-s-amazon-cloud-killer-delayed-by-engineering-problems

Ex-NetApp Exec Strubel To Focus On Simplicity As Commvault's New Head Of Worldwide Channels

For @Commvault 's new head of worldwide channels @ScottStrubel, simplification is the key to helping channel partners grow their businesses. Over the next few months, Commvault plans to introduce a significantly simplified licensing model to make it easier for solution providers to quote and sell Commvault offerings, Strubel said. The company will also continue to refine its Admin Console to make it easier for partners to implement. "We want simplicity for partners," said Strubel, a former @NetApp channel executive who joined Commvault as vice president of worldwide channels Tuesday. "If we focus on simplification of how partners work with Commvault, we believe we'll see an increase in transaction volume both for our software and our HyperScale appliance." Strubel, who left his role as channel leader for NetApp's Americas business in March 2017, has a knack for doing what he says he will do, said John Woodall, vice president of engineering at Integrated Archive Systems, a Palo Alto, Calif.-based solution provider that works with both Commvault and NetApp. Woodall, who has known Strubel for years, told CRN that a focus on simplicity is a great starting point for Commvault. Simplifying the quoting process is a priority for Commvault's channel, Woodall said. "Commvault has a good offering that could benefit from operational efficiencies and simplicity," he said. "It's good for the channel's business, and helps stay interesting to partners." And Strubel, an executive who really understands the channel business, is just the man for the job, Woodall said. "He gets things done," he said. "In meetings, he wants straight feedback. He takes notes and follows up on the feedback. Not everybody is like that. He understands our business, how to do business with us, how to set goals that are measurable, and how to follow up on it." The hiring of Strubel will pay off well for Commvault, Woodall said. "The data protection business has a lot of competition for mindshare," he said. "It's important for Commvault to have someone who understands what the channel is doing. And Strubel does." Strubel was the third new partner-focused executive hired so far this year by Commvault, the Tinton Falls, N.J.-based developer of data protection software and appliances. The company in January named Owen Taraniuk as its new head of worldwide partnerships and market development, and followed up in February with the appointment of Andy Vandeveld as the company's new vice president of worldwide alliances. Strubel takes over for Ralph Nimergood, Commvault's vice president of indirect strategy for channel sales and chief of staff within the worldwide partnerships and market development group. Nimergood had been running channel sales and is remaining with the company. Going forward, both Strubel and Nimergood report to Taraniuk, with Strubel focused on channel sales activities and Nimergood focused on all the ways channel partners interact with Commvault, Strubel told CRN. "Ralph will lead our shared services organization, and I will use that organization," he said. "That includes our partner portal and partner programs, which Ralph will build."
Strubel said he will be focusing on partner profitability and on co-selling opportunities at Commvault. His initial focus will be on simplifying how Commvault works with its channel partners. Partners have been telling Commvault for a long time that the company has outstanding technology, but that technology is too complex, Strubel said. "Maybe there were too many levers to pull and knobs to turn," he said. "But the latest version was downloaded by over 7,000 people who now average only 35 minutes until their first backup. So the idea that Commvault is complex is not true. We need partners to understand they can get to the first backup in under 35 minutes. We can prove they can drive a Rolls Royce without having to tune the engine." Commvault is bringing on a significant sales, engineering, and delivery team for channel partners, he said. "To be a true partner-led company, we need to change the way we interact with partners," he said.

https://www.crn.com/news/storage/300102806/ex-netapp-exec-strubel-to-focus-on-simplicity-as-commvaults-new-head-of-worldwide-channels.htm

Microsoft narrows Amazon's lead in cloud, but the gap remains large

@Microsoft continued to eat into @Amazon 's lead in the #cloud infrastructure market in the first quarter, though there remains a big gap between the two. @Amazon Web Services held 33 percent of the cloud infrastructure market in the quarter, which is flat from a year earlier, according to data published Friday by Synergy Research Group. The market includes raw computing and storage, services for running applications and hosted private cloud. Microsoft's share of the market jumped to 13 percent from 10 percent a year earlier. @Google increased to 6 percent from 5 percent. Sequentially, Amazon's market share fell by a percentage point. All three companies reported financial results this week and each topped analysts' estimates, highlighting expansion in their cloud businesses as drivers of overall growth. Investors are still most bullish on Amazon, pushing the stock up 4 percent on Friday, following the earnings report, and lifting its rally for the year to 35 percent. Microsoft recently instituted organizational changes to better capitalize on its cloud business, pushing more resources to Scott Guthrie, who has overseen the rise of the company's Azure unit. Research firm Canalys published findings that were similar to Synergy's on Friday, showing that while AWS has stayed above 30 percent market share, Microsoft made a clear gain in the past year, with Google notching a modest improvement. Amazon is the only one of the three companies that breaks out its cloud division in terms of sales. The company said on Thursday that AWS revenue jumped 49 percent to $5.44 billion in the first quarter. Microsoft said Azure revenue jumped 93 percent. KeyBanc analysts estimated Azure had $1.76 billion in revenue, while Raymond James analysts predicted the number was $2.05 billion. One thing to watch: cloud growth could start to more clearly impact Microsoft's margins, because Azure is less profitable than the company's legacy software products. "You'll see some gross margin percentage pressure in that segment through 2019, but with significant dollar growth," said Microsoft CFO Amy Hood on the company's conference call on Thursday.

https://www.cnbc.com/2018/04/27/microsoft-gains-cloud-market-share-in-q1-but-aws-still-dominates.html

What Machine Learning Can and Can't Do for Enterprise Networks

The enterprise is taking a hard look at #machinelearning (ML) as a means to bring network infrastructure into the 21st century, and many leading organizations have already implemented the technology in production environments. But is ML the panacea vendors make it out to be? And can it produce the kind of autonomous, intent-based environments that currently populate the hype cycle? The good news about machine learning is that it does not necessarily require a huge upfront investment. Leading cloud providers are rolling out ML-as-a-Service ( #MLaaS ) options. These implement the technology on the very infrastructure that will likely support the next-generation applications that can make the best use of machine learning, namely, the cloud. Google, for one, recently rolled out an MLaaS offering based on technology it acquired from a UK company called DeepMind way back in 2014. The goal is to allow network administrators to create autonomous virtual networks on the Google Cloud that would leverage the reams of unstructured data, such as telemetry, historical patterns and traffic analysis, that are already flowing around network deployments. But is this enough to craft a truly smart network that could create optimized environments on the fly based on the needs of applications? Not exactly, according to Juniper’s Kireeti Kompella. In an interview with Data Center Knowledge, Kompella said that such a vision is possible, but it will require careful coordination among a number of cutting-edge technologies and the vendors that design them. By itself, machine learning can craft network configurations from a list of profiles using past performance and other data. But to build a truly self-driving network, the enterprise will also need a closed-loop monitoring system overseeing SLAs, peering, LSPs and other parameters. In this way, the network can measure current performance against evolving objectives and adapt itself accordingly, without human intervention or input. We also shouldn’t expect ML to improve on basic networking attributes like latency, says Savvius Inc.’s Jay Botelho. By nature, latency is highly difficult to predict. In any given millisecond, a surge of traffic can hit a specific switch or router, resulting in the inevitable packet queue. The best way to fight latency, then, is not to micromanage every bit in transit but to push traditional monitoring capabilities as close to the user as possible. As data becomes more distributed, even a highly intelligent management regime will be of little benefit if it remains locked in a centralized deployment model. Probably the one area where ML can already make a major contribution to network performance is security. As enterprise software strategist Louis Columbus noted on Forbes recently, ML can address the five key threats facing the enterprise these days without incurring the millions in costs associated with traditional measures. ML’s constraint-based pattern-matching capabilities, for example, are ideal for thwarting compromised credential attacks, while its ability to scale Zero Trust Security (ZTS) measures allows it to support enterprise-wide risk scoring models. In addition, it provides for highly customized multi-factor authentication workflows to keep tabs on a changing workforce, while at the same time enabling predictive analytics and insight into emerging threat profiles. And lastly, it can remain a step ahead of rapidly evolving malware as hackers try to rework their code to subvert reactive defensive measures.
Machine learning will prove to be an invaluable tool for enterprise network management, but only if it is deployed as part of an overarching framework that also incorporates software-defined networking (SDN), network functions virtualization (NFV), automation and broad policy and governance architectures. Giving a machine the ability to learn means it will soon be able to do a lot on its own, but ultimately it must rely on related systems — and the people who guide them — to put that learning to the most productive use.
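As a deliberately simplified illustration of the pattern-based anomaly detection described above, the sketch below trains an unsupervised model on synthetic flow-level features (bytes, packets, distinct ports, failed logins). The feature set, thresholds and data are hypothetical and do not represent any vendor's product.

```python
# Minimal sketch of unsupervised anomaly detection on flow-style telemetry.
# All features and data are synthetic placeholders, not a vendor implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Columns: bytes transferred, packet count, distinct destination ports, failed logins
normal_flows = np.column_stack([
    rng.normal(5e5, 1e5, 1000),    # typical per-session byte volume
    rng.normal(400, 80, 1000),     # typical packet count
    rng.integers(1, 5, 1000),      # a handful of destination ports
    rng.integers(0, 2, 1000),      # almost no failed logins
])

# Train only on "normal" traffic; the model learns what ordinary flows look like.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# A flow that sprays 120 ports and racks up 40 failed logins (e.g. credential abuse)
suspicious = np.array([[2e6, 5000, 120, 40]])
print(model.predict(suspicious))   # -1 flags an anomaly, 1 means "looks normal"
```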

http://mobile.enterprisenetworkingplanet.com/datacenter/datacenter-blog/what-machine-learning-can-and-cant-do-for-enterprise-networks.html

Sunday, April 29, 2018

Microsoft releases Office 2019 preview

@Microsoft is starting to preview the features and changes to Office 2019 today. The software maker is making its first Office 2019 preview available to business customers, with features that are typically found in Microsoft’s Office 365 subscription. Office 2019 will include the usual Word, Excel, PowerPoint, and Outlook client apps, alongside server versions of Exchange, SharePoint, and Skype for Business. Microsoft is targeting this Windows version of Office 2019 towards businesses that aren’t running Office 365, and the productivity suite will only run on Windows 10. A separate Office 2019 for Mac version will also be available for macOS. “Updates include new and improved inking features across the apps, like the roaming pencil case, pressure sensitivity, and tilt effects; more powerful data analysis in Excel, like new formulas, new charts, and Power BI integration; and sophisticated presentation features in PowerPoint, like Morph and Zoom,” explains Microsoft’s Jared Spataro.

https://www.theverge.com/2018/4/27/17290112/microsoft-office-2019-preview

Dell EMC Channel Chief Mullen On What Partners Can Expect At Dell Technologies World And Global Partner Summit 2018

Thousands of @DellTechnologies channel partners will flock to Las Vegas next week for the highly anticipated Dell Technologies World 2018, which also will include Dell's Global Partner Summit. "The product announcements are pretty darn compelling," said Dell EMC channel chief @JoyceMullen in an interview with @CRN. "We're going to fast-track our partners' abilities to deliver across transformational areas." CRN spoke to Mullen about what partners can expect to learn and hear about during the summit, which takes place from April 30 to May 3. Keynote speakers at the event include Dell Chairman and CEO Michael Dell, VMware CEO Pat Gelsinger and Mullen, as well as guest speakers such as actor Ashton Kutcher. Here, Mullen talks to CRN about what partners need to know heading into the conference next week.

https://www.crn.com/slide-shows/data-center/300102766/dell-emc-channel-chief-mullen-on-what-partners-can-expect-at-dell-technologies-world-and-global-partner-summit-2018.htm

Affordable Optical Technology Needed Says HPE’s Daley

While not new, the challenges presented by computer cabling/PCB circuit routing design – cost, performance, space requirements, and power management – have coalesced into a major headache in advanced #HPC system design, said @JimmyDaley, head of HPC engineering for #HPE, at the HPC User Forum last week. What’s required, and sooner rather than later, is broader adoption of optical cable technology as well as mid-board optics, said Daley. Daley’s ~15-minute jaunt through the major copper-versus-optical issues was a good reminder of this persistent problem area. He bulleted out three sets of challenges posed by cabling and reviewed briefly how optical solutions could help overcome them: cost and signaling; density and egress; and power and thermals. The slide below contrasts the size difference of copper versus optical cable. Like many, Daley is looking for cabling options, and currently there aren’t many. His presentation was both a refresher and a look-ahead at cable interconnect issues and opportunities. Copper cable, of course, predominates, but it’s not cheap and has plenty of constraints. Active optical cable (AOC) is powerful but also on the order of 4-6X more expensive than copper; Daley called AOC HPC’s nemesis. Broader use of passive optical would solve many problems and be less expensive, but there’s a fair amount of work to be done before that is practical. Today, AOC is the backbone for most very large supercomputers. “Even though these AOC cables struggle with costs and other things, the adoption of these cables is still very aggressive. We have to [use them] in order to get our science done,” he said. “At 50 gigs (see chart below), we as an industry pretty much only use them sparingly. As we get to 100 gig speeds we are already at about a 40 percent mix of active optical cables (AOC) and copper, and as we move to 200 gigs and beyond, we are predicting that most of what we deploy is going to have to be active optical cable.” Daley pointed out it is not the fiber in AOC that’s so expensive; the fiber is actually less expensive than copper, but everything else required for active optical cable (various components and materials) boosts its cost. “[Given] the cost of copper cable I’m not sure I am going to be able to do a meaningful 400 gig copper cable, which means I am now at a list price of $2,000-plus for AOC. Go build a fabric out of that and see how far the wallet goes,” said Daley. The big win would be in being able to deploy passive cable, and efforts are ongoing to develop better passive optical cable solutions. Printed circuit boards present similar challenges as signaling demands rise. “Where we live today (PCIe Gen 3) I can get everything using fairly standard PCB material. As we start talking about PCIe Gen 4…[w]e are going to have to start looking at more and more exotic routing material. We have a lot of smart guys in HPE and industry and this box will start to shift up ever so slightly, but it will not shift up meaningfully. And my red box (chart below) here is just a lost cause,” he said. Clearly optical cable has many advantages – physical size and bandwidth are the obvious ones. The growth in systems has exacerbated pressure on interconnect throughout systems, and Daley’s slide illustrating the difference between the copper cabling and the optical cabling required for a 256-node system makes the point nicely; the cross section of the copper cable is roughly the size of a CD while the optical counterpart is the size of a watch face.
Moving to optical as well as “mid-board optics,” he said, is a necessary step for larger systems. He also argued for accelerated development of passive optical approaches, which are not only cheaper, but more suitable for modular systems. “If I move to passive optical cable that also frees us up. Forget the economics and physics of things. It frees you guys up to wire the datacenter one time, you put the wires in and the optics actually come with the thing that you are plugging in. Today we are very driven to making sure that within a rack that you hit the right switch radix because if you leave switch ports empty, those are costs that have to get amortized across all of the other nodes in the rack and that becomes very, very painful. “If I move to optical cable where distances aren’t [the issue] then I don’t have to have a top-of-rack switch, I have a mid-row switch and we just make sure that within that row we design and get the right things there. Then I can start to design to power constraints and thermal constraints besides switch radixes. The biggest thing in my mind is that it allows us to start to optimize our topologies for the workload we are trying to do as opposed to optimizing for ‘how do I get as many copper cables into this picture as I can because I can’t afford optical cables as we move forward.’”
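Daley's "see how far the wallet goes" point is easy to make concrete. The sketch below runs the back-of-the-envelope numbers for a small two-tier fabric, using the ~$2,000 list price for a 400G AOC cited above and assuming copper at roughly one-fifth of that (the 4-6X premium he quotes); the node count and oversubscription ratio are placeholders, and the article itself doubts a useful 400G copper cable is even feasible.

```python
# Rough fabric cabling cost estimate; every number here is an illustrative assumption
# except the ~$2,000 AOC list price and the 4-6x copper premium cited in the article.
NODES = 256
OVERSUBSCRIPTION = 1            # 1 = full bisection bandwidth between leaf and spine

AOC_400G_PRICE = 2000           # per-cable AOC list price mentioned above, USD
COPPER_400G_PRICE = 2000 / 5    # assumed, using the ~4-6x AOC premium (if 400G copper existed)

# Two-tier leaf/spine: one downlink per node, plus roughly one uplink per node
# at 1:1 oversubscription.
downlinks = NODES
uplinks = NODES // OVERSUBSCRIPTION
total_cables = downlinks + uplinks

for name, price in (("copper", COPPER_400G_PRICE), ("AOC", AOC_400G_PRICE)):
    print(f"{name:6s}: {total_cables} cables -> ${total_cables * price:,.0f}")

# With these placeholder numbers the all-optical fabric costs on the order of
# $800K more than copper for just 256 nodes, which is the economic pressure
# behind the push for cheaper passive optical cabling.
```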

https://www.hpcwire.com/2018/04/26/affordable-optical-technology-needed-says-hpes-daly/

Top Lawyers at Cisco, NetApp and HPE Say Legal Ops Has Been a Game Changer

The general counsel role is one that’s evolved immensely in the past few years, and that’s in no small part due to the growth of legal operations, according to three top in-house lawyers at major tech companies. At the final panel discussion at the Corporate Legal Operations Consortium’s 2018 institute, @Yahoo ’s legal ops leader @JeffFranke got @NetApp Inc. GC @MattFawcett, @Cisco Systems Inc. GC @MarkChandler and @Hewlett Packard Enterprise Co. CLO @JohnSchultz to open up about the changes they’ve seen due to the rise of legal ops. Schultz said he started an ops function at HPE because he wanted greater process efficiency, but the function has changed the department in many other ways too. “The discipline that [legal operations] brings around dashboarding and analytics really does drive a different level of excellence into the business,” he said. “We’re not all that comfortable with numbers and scorecards and so forth and the ops group has done a tremendous job at creating that rigor that I think has improved the excellence of what we do in a demonstrable way.” The ability to improve a department’s performance is important, according to Schultz, but legal ops’ role in measuring that success in a visible way has also been crucial in expanding the influence of in-house legal throughout HPE. “When you can [provide metrics] on a consistent basis, it increases so greatly the credibility that you have inside the company,” he said. “It then allows you to have an expanded role.” Fawcett echoed these statements. At NetApp, he and Connie Brenton, the company’s senior director of legal operations, have played a role in beta testing new technologies for other departments, created workflows for processes outside of legal and served on the company’s technology investment board. Chandler said legal operations has helped companies tap into and better utilize the legal department, as it’s one of the few groups that works with the entire organization. “They see every transaction the company does, and that’s why the legal operations role is so important,” he said. Chandler added that operations’ ability to turn the data from these transactions into meaningful dashboards has been useful at Cisco. The panelists also noted legal operations’ role in pushing their departments to be more tech-oriented, while increasing efficiency and lowering cost. Fawcett said these changes were made possible by reassessing risk levels and not being afraid to take a chance. In his experience, many legal departments may be overcautious, leading to expensive and inefficient strategies. GCs that want to see real improvement in their departments should be more open to taking risks and trying new technologies and processes, he explained. “One of my jobs is being the person who says, ‘We can take that risk,’” Fawcett said. “Let’s take that risk. You’re not going to break NetApp with one bad transaction.”
https://www.law.com/therecorder/2018/04/26/top-in-house-lawyers-from-cisco-netapp-and-hpe-say-legal-ops-rise-has-been-game-changer-403-14451/

iXsystems TrueNAS M-Series Blows Away Veeam Backup Certification Tests

SAN JOSE, Calif., April 26, 2018 (GLOBE NEWSWIRE) -- @iXsystems, the industry leader in storage and servers driven by #OpenSource, announced the certification of its #TrueNAS M-Series for @Veeam #Backup and #Replication. The TrueNAS M40 and M50 models performed up to five times better in certification tests than Veeam requirements. With this certification, the complete line of iXsystems hybrid storage products, which includes the TrueNAS X10, X20, M40 and M50 storage arrays, has passed the Veeam Backup and Replication v9.5 for VMware certification tests, confirming that TrueNAS integrates with Veeam software to provide VM-aware backup and storage. The TrueNAS X10 and X20 are cost-effective enterprise storage solutions that are designed to help small and midsize businesses modernize their storage infrastructures and were announced in the middle of 2017. The TrueNAS M40 and M50 are the newest high-performance models in the iXsystems hybrid, unified storage product line, and were announced in early April of this year. Veeam certification tests the speed and effectiveness of the data storage repository using a testing methodology defined by Veeam for Full Backups, Full Restores, Synthetic Full Backups, and Instant VM Recovery from within the Veeam Backup and Recovery environment. As a unified file, block, and object sharing solution, TrueNAS can meet the needs of backup and virtualization use cases thanks to its unified support for the SMB, NFS, AFP, iSCSI, and Fibre Channel protocols.

https://globenewswire.com/news-release/2018/04/26/1488068/0/en/iXsystems-TrueNAS-M-Series-Blows-Away-Veeam-Backup-Certification-Tests.html

Why SATA flash drives are being left in the dust

I noted years ago that if flash had been available in 1956 - the year the first disk drive shipped - we would never be building SSDs with SATA interfaces today. SATA works well for disks, because disks are slow. SATA responded by upping its raw data rate from 3 Gb/sec to 6 Gb/sec. But bandwidth wasn't the problem - IOPS and latency were. While drive vendors will continue to produce SATA SSDs for cost-sensitive users, the real action today is in NVMe SSDs. These are extremely high performance SSDs that provide hundreds of thousands of IOPS, gigabytes per second of bandwidth, and extremely low latency - performance that only a few years ago required a $100,000-plus storage array. CASE STUDIES: Most customers are loath to share the secrets of their configuration choices, rightfully considering them trade secrets in an intensely competitive environment. But I recently spoke to a Kingston Technology senior systems engineer, Cameron Crandall, about customer deployments that showed how NVMe SSDs are changing how people configure systems and manage their workflows. Kingston is the storage industry's best kept secret. They are a privately held, $6.6 billion US company. Started by a couple of engineers in the late 80s, they focus on well-engineered products rather than marketing. Which may explain why they have a 59 percent add-on server memory share - not counting their OEM business. Color me impressed. Kingston also makes NVMe SSDs. That's where my discussion with Cameron came in.
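To make the IOPS-and-latency point concrete, here is a minimal Python sketch that measures random 4 KiB read latency at queue depth 1 against a file or device. The target path is a placeholder, raw devices need root, and O_DIRECT is used so the page cache doesn't flatter the numbers; a real benchmark would use a purpose-built tool, so treat this as illustrative only.

```python
# Minimal random-read latency probe at queue depth 1 (Linux, Python 3.7+).
# TARGET is a placeholder; point it at a large file or a raw device (needs root).
import mmap
import os
import random
import statistics
import time

TARGET = "/dev/nvme0n1"   # hypothetical device path; any multi-GB file also works
BLOCK = 4096              # 4 KiB, the usual small-I/O unit
SAMPLES = 2000

fd = os.open(TARGET, os.O_RDONLY | os.O_DIRECT)  # O_DIRECT bypasses the page cache
size = os.lseek(fd, 0, os.SEEK_END)
buf = mmap.mmap(-1, BLOCK)                       # page-aligned buffer, required by O_DIRECT

latencies = []
for _ in range(SAMPLES):
    offset = random.randrange(0, size // BLOCK) * BLOCK   # block-aligned random offset
    start = time.perf_counter()
    os.preadv(fd, [buf], offset)
    latencies.append(time.perf_counter() - start)
os.close(fd)

lat_us = sorted(t * 1e6 for t in latencies)
print(f"median latency: {statistics.median(lat_us):.1f} us")
print(f"p99 latency:    {lat_us[int(0.99 * len(lat_us))]:.1f} us")
print(f"approx. QD1 IOPS: {1e6 / statistics.mean(lat_us):.0f}")
```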

https://www.zdnet.com/article/why-sata-flash-drives-are-being-left-in-the-dust/

Friday, April 27, 2018

Google Charts Two-Dimensional Quantum Course

Quantum error correction, essential for achieving universal fault-tolerant quantum computation, is one of the main challenges of the quantum computing field, and it’s top of mind for @Google ’s @JohnMartinis. Delivering a presentation last week at the #HPC User Forum in Tucson, Martinis, one of the world’s foremost experts in #quantumcomputing, emphasized that building a useful quantum device is not just about the number of qubits; getting to 50 or 1,000 or 1,000,000 qubits doesn’t mean anything without quality error-corrected qubits to start with. Martinis compares focusing on merely the number of qubits to wanting to buy a high-performance computer and only specifying the number of cores. How to create quality qubits is something that the leaders in the quantum space at this nascent stage are still figuring out. Google, as well as IBM, Intel, Rigetti, and Yale, is advancing the superconducting qubit approach to quantum computing. Microsoft, Delft, and UC Santa Barbara are involved in topological quantum computing. Photonic quantum computing and trapped ions are other approaches. The reason quality is difficult in the first place is that qubits – the processing unit of the quantum system – are fundamentally sensitive to small errors, much more so than the classical bit. Martinis explains with a coin-on-a-table analogy: “If you want to think about classical bits – you can think of that as a coin on a table; we can represent classical information as heads or tails. Classical information is inherently stable. You have this coin on the table, there’s a restoring force, there’s dissipation, so even if there’s a little bit of noise it’s going to be stable at zero or one. In a quantum computer you can represent [a quantum bit] not as a coin on a table but a coin in free space, where say zero is up, and one is down, and rotated 90 degrees is zero plus one; and in fact you can have any amount of zero and one and it can rotate in this way to change something called quantum phase. You see, since it’s kind of an analog system, it can point in any direction. This means that any small change in this is going to give you an error. “Error correction in quantum systems is a little bit similar to what you see in classical systems where you have clocked logic, so you have a memory source, where you have a clock and every clock period you can compute through some arithmetic logic and then you sequence through this and the clock timing kind of takes care of all the different delays you have in the logic. Similar here, you have kind of repetition of the error correction, based on taking the qubit and encoding it in many other qubits and doing parity measurements to see if you’re having both bit-flip errors going like this or phase-flip errors going like that.” The important thing to remember, says Martinis, is that if you want to have small errors, exponentially small errors, of 10^-9 or 10^-12, you need a lot of qubits, i.e., quantity, and pretty low error rates of about one error in one thousand operations, i.e., quality. In Martinis’s view, quantum computing is “a two-dimensional horse race,” where the tension between quality and quantity means you can’t think in terms of either/or; you have to think about doing both of them at the same time. Progress of the field can thus be charted on a two-dimensional plot. The first things to note when assessing progress in the field are the limiting error rate and the number of qubits for a single device, says Martinis.
The chart depicts, for a single device, the worst error rate, the limiting error rate, and the number of qubits. Google is aiming for an error correction of 10^-3 in about 10^3 qubits. “What happens,” says Martinis, “is as that error rate goes up, the number of qubits you have to have to do error correction properly goes up and blows up at the error correction threshold of about 1 percent. I call this the error correction gain. It’s like building transistors with gain; if you want to make something useful you have to have an error correction that’s low enough. If the error correction is not good enough, it doesn’t matter if you have a billion qubits, you are never going to be able to make them more accurate.” Up to about 50 qubits can be simulated classically; a higher error rate makes a device easier to simulate, but not useful. Pointing to the lower half of the chart, Martinis says “we want to be down here and making lots of qubits. It’s only once you get down here [below the threshold] that talking quantity by itself makes sense.” One of the challenges of staying under that error correction threshold is that scaling qubits itself can impede error correction, due to undesired cross-talk between qubits. Martinis says that the UC Santa Barbara technology it is working with was designed to reduce cross-talk to produce a scalable technology. For flux cross-talk, fledgling efforts were at 30-40 percent cross-talk. “The initial UC Santa Barbara device was between 1 percent and 0.1 percent cross-talk and now it’s 10^-5,” says Martinis, adding “we barely can measure it.” The solid black dot on the chart (above) represents that UC Santa Barbara chip. It is 9 qubits and dips just beneath the error correction threshold. Now with its follow-on Bristlecone chip architecture, Google is working to scale the UCSB prototype to >50 qubits to show quantum supremacy, the point at which it would no longer be feasible to classically simulate it. The Google team is focused on improving error correction with the expectation that near-term applications will then be feasible. Martinis says the next step is to move out to ~1,000 qubits with exponentially small errors. The end goal is to scale up to a million-or-so qubits with low error rates to solve real-world problems that are intractable on today’s best supercomputers. The Bristlecone chip consists of 72 qubits, arranged in a 2D array. The device has been made and is now undergoing testing to make sure it is operating correctly. Google uses its Qubit Speckle algorithm to validate its quantum supremacy experiments. Martinis reports that progress on quantum algorithms is also advancing. One of the most compelling applications for quantum computers is quantum chemistry. It’s a natural application for quantum computing, says Martinis. The algorithm, though, is exponentially hard. In 2011, Microsoft’s quantum computing group documented an O(n^11) quantum chemistry algorithm, which would take the age of the universe to run. Work has since progressed, and recently the Google theory group showed an algorithm that is Õ(N^2.67) for the exact solution and O(N) for the approximate one. “[The exact implementation] would take about 100 logical qubits, requiring a million physical qubits,” Martinis notes. “It’s beyond what we can do now, but now the numbers are reasonable so we can think about doing it.” In closing, Martinis points out that the metrics for assessing the progress of quantum computing, in addition to quality and quantity, also include speed and connectivity.
In different technologies, speeds can differ by a factor of 10^5 or so. For networking, he says you need at least 2D nearest-neighbor connections to do the error correction properly. Referring to the chart on Google’s key metrics (at left), Martinis says the company isn’t ready to talk about Bristlecone’s error correction or speed yet, but it anticipates good numbers and hopes to show quantum supremacy “very soon.”
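Martinis's quality-versus-quantity argument is often summarized with the standard surface-code scaling estimate below (a textbook relation, not something quoted in the talk write-up): below the roughly 1 percent threshold, the logical error rate falls exponentially with the code distance d, while the physical-qubit count per logical qubit grows only as d^2.

```latex
% Standard surface-code scaling estimate (illustrative; not quoted from the talk).
%   p     : physical error rate per operation
%   p_th  : error-correction threshold (~1% for the surface code)
%   d     : code distance; one logical qubit needs on the order of 2d^2 physical qubits
\[
  P_{\mathrm{logical}} \sim \left(\frac{p}{p_{\mathrm{th}}}\right)^{\lfloor (d+1)/2 \rfloor},
  \qquad
  N_{\mathrm{physical}} \approx 2d^{2}.
\]
% Example: with p = 10^{-3} and p_th = 10^{-2}, reaching P_logical ~ 10^{-12}
% needs (1/10)^k ~ 10^{-12}, i.e. k ~ 12, so d ~ 23 and roughly 10^3 physical
% qubits per logical qubit; overheads such as magic-state distillation push the
% practical figure toward the "100 logical / one million physical" ratio above.
```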

https://www.hpcwire.com/2018/04/26/google-frames-quantum-race-as-two-dimensional/

Pure Storage Announces Third Annual Technology Conference, Pure//Accelerate 2018

MOUNTAIN VIEW, Calif., April 26, 2018 /PRNewswire/ -- @PureStorage (NYSE: PSTG), the all-flash storage platform that helps innovators build a better world with data, invites technologists and innovators from around the globe to explore the possibilities of data at its third annual technology conference, Pure//Accelerate. Over the course of three days, attendees will engage with tech and business leaders to learn how to implement modern data strategies and ultimately transform their organizations. Pure//Accelerate 2018 will provide a comprehensive set of opportunities for customers, prospects, partners, press and analysts to learn about the latest innovations in all-flash storage and data management. Attendees will have the opportunity to discuss features with Pure Storage engineers, learn new skills to scale IT operations and deliver cloud agility, and network with peers from a broad representation of world-class innovators. Main stage speakers will include Pure Storage CEO Charlie Giancarlo and MIT researcher Kate Darling, one of the world's foremost experts in the intersection of emerging technologies and society. With more than 60 breakout sessions, 50 theatre sessions, 24 demo stations and interactive product booths, attendees will learn: how to simplify production to accelerate applications, save money, enable developers with on-demand infrastructure and automate clouds; proven best practices to get the best performance out of Pure technologies; how to deliver real-time, next-generation analytics and AI through a modern, all-flash data pipeline; and how to become a Pure Storage Certified Professional (the Pure Storage Foundation Exam covers administration, architecture and components, support and the Evergreen business model). The final day of the conference will also feature a special focus on DevOps, with an emphasis on how to speed time to market and drive developer productivity. Sessions, demos and theatre sessions will focus on how to enable self-service and instantly accessible developer environments. Pure//Accelerate will be held May 22-24, 2018 at The Bill Graham Civic Auditorium in San Francisco. Register now for a conference and exposition pass. Global Partner Forum: Alongside Pure//Accelerate, all partners are also invited to the Global Partner Forum, May 21-22. Attendees will gain firsthand insight into Pure Storage's vision, strategy and upcoming product announcements. The dedicated technical and business breakouts will equip partners to take advantage of market opportunities, gain competitive selling knowledge and delight customers. Pure//Catalyst CIO Summit: Pure//Catalyst is an exclusive experience for forward-thinking CIOs with original research, advisory content, tools and invitation-only events designed to educate and inspire. Participants learn from leading innovators, engaging thought-leadership content and unique peer support from the most brilliant minds in the industry. The CIO Summit will include welcome remarks from CEO Charles Giancarlo, customer-led discussions on data-centric infrastructure strategies, and a breakout session led by Pure Storage CIO Yousuf Khan. Pure Appreciation Night: Pure//Accelerate attendees are invited to attend an appreciation party on May 23 featuring food, drinks and musical entertainment from the Goo Goo Dolls. Additional details to follow. For more information, including the conference agenda, schedule and a complete roster of speakers, visit us here.

https://www.prnewswire.com/news-releases/pure-storage-announces-third-annual-technology-conference-pureaccelerate-2018-300637014.html
 

Big Switch Unveils Advanced Network Telemetry for Pervasive Security and Deeper Visibility

SANTA CLARA, Calif., April 26, 2018 (GLOBE NEWSWIRE) -- @BigSwitch Networks®, The Next-Generation Data Center Networking Company, today unveiled new innovations for Big Monitoring Fabric™ (Big Mon) that enable network and security operations teams to capture cloud-native data center network traffic at scale and provide deep application-level analytics. Big Mon™ Recorder Node allows high-performance packet recording, querying and replay functions, and Big Mon™ Analytics Node provides unprecedented network visibility to monitor, discover and troubleshoot network and application performance issues as well as accelerate root-cause discovery of security breaches. With Big Mon Recorder and Analytics Nodes, customers can now achieve deep network telemetry for both cloud-based and traditional data center environments and have the ability to replay past conversations across users and applications with a single click. Architecture flexibility allows Big Mon to easily extend to multi-cloud environments, including hybrid cloud and public cloud deployments. “Big Switch has pioneered the use of cloud-native networking architectures to achieve pervasive network visibility and enhanced network security,” said Prashant Gandhi, Chief Product Officer, Big Switch Networks. “With the launch of Analytics Node and Recorder Node, Big Switch further distances itself from legacy box-based solutions, offering unmatched network telemetry to achieve root cause analysis, breach mitigation, and faster mean time to resolution.” It is estimated that nearly 8.5 billion devices are connected to the Internet today. In order to support the digital economy, cloud-native applications are being developed and deployed at increasing rates to collect and optimize the vast amounts of data being generated. The nature of cloud-native applications generates a significant increase in East-West network traffic, providing new attack vectors beyond the hardened, DMZ-protected North-South traffic. Unfortunately, the volume, velocity and sophistication of cyber attacks are also intensifying, resulting in the need for pervasive network security and visibility solutions. To mitigate against cyber attacks, network packet brokers (NPBs) are required as part of a holistic network security strategy. Traditional box-based, hardware-centric NPBs are architecturally constrained to meet the emerging security and visibility demands of cloud-native data centers. The University of Oklahoma deployed Big Monitoring Fabric nearly two years ago. Initially the University made the decision to replace an incumbent legacy NPB solution with Big Mon due to its ease of use and versatility, which appealed to its innovative IT team. Additionally, the University has achieved twice the visibility at nearly 50% CAPEX savings. The University has since deployed both Big Mon Analytics Node and Recorder Node in order to address the following challenges: Unauthorized Application and Device Sprawl, Capacity Planning, and Threat Mitigation. The University already used multiple security tools; Analytics Node is being leveraged to optimize tool usage and make them more efficient. According to the University, when it gets a phish, its team tries to find out how many users replied, visited the website in the email, or submitted their credentials to a website.
Given BRO doesn’t capture the POST data by default, the University relies on Big Mon Recorder Node, to enable a full packet capture device to match users to IP’s on its network in order to easily determine who submitted credentials. While Phished user passwords are immediately reset, the University leverages Big Mon to determine if credentials were used for malicious activity, and to monitor accounts for possible login from external IP’s. “As a leading public University, which serves a very large number of users, Analytics Node and Recorder Node have provided us with an efficient, cost-effective and scalable way to address multiple challenges that we faced due to having an open network,” said Aaron Baillio, Managing Director, Security Operations and Architecture, University of Oklahoma. “Analytics Node together with the packet capture capability of the Recorder Node has allowed us to reinforce security posture by rapid impact analysis and mitigation of compromised user credentials.” Inspired by the design principles of hyperscale operators, Big Switch’s Big Mon leverages SDN controls, a fabric architecture based on open networking (britebox/whitebox) switches and DPDK-powered x86 servers to deploy highly scalable, agile, flexible and cost-effective network visibility and security solutions. This next-generation NPB architecture is highly flexible, with fabric switches providing L2-L4 filtering at line rate, while DPDK-based x86 nodes provide high-speed packet-level and flow-level services with Big Mon Service Node, recording and playback with Big Mon Recorder Node and deep application-level visibility with Big Mon Analytics Node. All of the components are fully controlled and managed by the Big Mon Controller, thus enabling organizations to deploy large scale monitoring fabrics within existing OpEx budget. Big Mon architecture is also inherently scale-out, allowing customers to start with a single monitoring switch, and grow the fabric on an as-needed basis. Big Mon Service Node, Recorder Node and Analytics Node can be deployed in clusters to enable independent scale-out. "Big Switch is bringing cloud-scale analytics and integrated packet flow recording to multi-cloud software-defined infrastructure,” said Stephen Collins, Principal Analyst, ACG Research Network Visibility and Analytics. “The Analytics Node and Recorder Node offer organizations a simple and more cost-effective solution for rapidly detecting performance anomalies, identifying security issues and conducting forensic analysis.” Big Mon Recorder Node - Next-gen Packet Recorder With vast amounts of data traversing the data center network, traditional packet capture solutions are unable to efficiently and affordably scale. The need for a next-gen packet capture solution for forensic analysis of events and inefficiencies is clear. Recorded data allows network IT teams to replay the specifics of an event and provides them with the necessary context to solve operational and security threats, by retrieving a historical record of the exact moment a service anomaly occurred, in order to derive root cause and predict future trends. Big Mon Recorder Node is high-performance packet recorder software, deployed on a commodity x86-based server. The Big Mon SDN controller automatically discovers the Recorder Node, ensuring a single point of configuration and device lifecycle management. 
Multiple Recorder Nodes can be strung together, allowing end users to store more network traffic for longer periods and retrieve it via the Big Mon Controller or Analytics Node with agility and simplicity.
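The University of Oklahoma phishing workflow described above (matching credential submissions to source IPs from recorded traffic) can be illustrated offline with generic tooling. The sketch below uses the scapy library on an exported pcap and a placeholder phishing hostname; it is not Big Mon's API, and it does no TCP reassembly, so it only catches POSTs visible at the start of a packet payload.

```python
# Illustrative offline triage of recorded traffic (generic tooling, not Big Mon's API).
# Assumes a pcap exported from any packet recorder and the scapy library (pip install scapy).
from scapy.all import rdpcap, IP, TCP, Raw

PCAP_FILE = "export.pcap"          # placeholder capture file
PHISH_HOST = b"phish.example.com"  # placeholder phishing hostname

packets = rdpcap(PCAP_FILE)
submitters = set()

for pkt in packets:
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw)):
        continue
    payload = bytes(pkt[Raw].load)
    # A plain-HTTP credential submission shows up as a POST naming the phishing host;
    # HTTPS traffic would need decryption upstream of the recorder.
    if payload.startswith(b"POST ") and PHISH_HOST in payload:
        submitters.add(pkt[IP].src)

print(f"{len(submitters)} source IPs posted to {PHISH_HOST.decode()}:")
for ip in sorted(submitters):
    print(" ", ip)
```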

https://globenewswire.com/news-release/2018/04/26/1487781/0/en/Big-Switch-Unveils-Advanced-Network-Telemetry-for-Pervasive-Security-and-Deeper-Visibility.html

Juniper Unveils Contrail Software to Compete With Cisco, VMware

@Juniper Networks today rolled out @Contrail Enterprise Multi-Cloud, which it’s positioning as a more open alternative to Cisco’s ACI and @VMware’s #NSX. Juniper’s technology controls both the underlying devices of the data center infrastructure as well as the policy control overlay. “We’re unifying the underlay and overlay into a single controller, whether an enterprise is looking for device or fabric management or overlay policy and control,” said Mike Bushong, Juniper’s VP of enterprise cloud marketing. He said Contrail Enterprise Multi-Cloud is somewhat comparable to Cisco’s ACI and “its APIC controller that manages the underlay and overlay.” But he added, “We’re more open to multi-vendor. Our goal is to make it so enterprises can move to multi-cloud. The days when you’re going to have an end-to-end solution with proprietary control, those days are over.” Contrail Enterprise Multi-Cloud enables policy and control capability for any workload across multi-vendor networking environments and most server deployments in any cloud, whether public, private, or both. At the end of 2017, Juniper moved its OpenContrail software-defined networking (SDN) controller to the Linux Foundation. The project has since been renamed Tungsten Fabric and become part of the Linux Foundation Networking Fund (LFN). Juniper plans to continue participating in the open-source Tungsten Fabric project and mirror its updates in its commercial Contrail products. “We’ll be one of the biggest consumers of Tungsten Fabric,” said Randy Bias, VP of technology and strategy at Juniper Networks. The company has architected Contrail Enterprise Multi-Cloud to support multi-vendor environments based on open interfaces and standard protocols. “When you look at data center and IP fabrics, you’ve got to have a way of stitching these together,” said Bushong. In addition to overlay and fabric orchestration, the software provides a single tool to manage bare metal servers, traditional networking devices, virtual machines, containers, private and public clouds, and security software. It also provides real-time infrastructure performance monitoring for data center networking devices as well as application monitoring. Today’s news from Juniper is the latest in the roll-out of its multi-cloud strategy. Earlier this year, the company announced several products focused on connectivity. “We asserted in February that multi-cloud needed to be end-to-end,” said Bushong. “But today we’re saying it also needs to be top-to-bottom; the underlying connectivity but also visibility, security, and orchestration.” While today’s announcement was about the data center, Bushong hinted that the next piece of Juniper’s strategy would be about multi-cloud in other places. “Migration has to happen across the data center, branch, and public cloud,” he said. “Juniper is committed to the secure, multi-cloud journey.”

https://www.sdxcentral.com/articles/news/juniper-unveils-contrail-software-to-compete-with-cisco-vmware/2018/04/

Nexenta to Showcase Broadest SDS Portfolio on DellEMC at Dell Technology World 2018

LAS VEGAS and SAN JOSE, Calif., April 26, 2018 (GLOBE NEWSWIRE) -- Nexenta (@Nexenta), the global leader in Open Source-driven #Software-Defined Storage ( #OpenSDS ), today announced its participation at #DellTechnologyWorld 2018, April 30-May 3, at The Venetian. Event attendees are invited to meet with Nexenta executives at booth #1754 and find out more about Nexenta’s product portfolio for file, block and object storage software and its long-standing partnership with Dell. Dell and Nexenta's Software-Defined Storage solutions offer a comprehensive suite of storage management services that run on Dell’s world-class servers and storage hardware. Together Dell and Nexenta offer: Dell Certified Reference Architectures for ease of deployment and support; one software stack for all-disk, hybrid and all-flash storage configurations; petabytes of storage deployed at leading worldwide technology, research, healthcare, and other industry organizations; and flexible deployment, including on Dell 14G servers, on virtual machines or on the public cloud. These high-performance yet cost-effective data storage solutions take advantage of features such as inline deduplication, unlimited snapshots and cloning, and high availability support. Dell Certified Reference Architectures enable ease of deployment and support, and being fully integrated with Dell ProSupport means that Nexenta customers receive the premium support services they have come to demand and expect. Dell World attendees can learn more about this at Nexenta's booth #1754, where they can find Nexenta's solution experts. Those who stop by Nexenta’s booth will be given the chance to win one of five $100 Amazon gift cards, courtesy of Nexenta.

https://globenewswire.com/news-release/2018/04/26/1488214/0/en/Nexenta-to-Showcase-Broadest-SDS-Portfolio-on-DellEMC-at-Dell-Technology-World-2018.html

Cohesity SD-Storage Automates Data Backup in Public Clouds

@Cohesity ’s latest software-defined storage  (SD-storage) updates target cloud-based application testing and development and automated backup for data stored in public clouds. The company today said its on-premises data backup platform, aptly named DataPlatform, now includes a capability called #CloudSpin that allows companies to instantly access their backup data in the cloud for testing and development purposes. It converts backup data stored in the appliance into a new virtual machine (VM) for use in the cloud.

https://www.sdxcentral.com/articles/news/cohesity-sd-storage-automates-data-backup-public-clouds/2018/04/

Cisco touts Application Centric Infrastructure success

@Cisco (NASDAQ: CSCO) says that a range of service providers around the world have chosen to deploy the company's Application Centric Infrastructure ( @Cisco ACI). The software-defined networking (SDN) platform helps enable the delivery of cloud services in an automated, policy-based manner, the company says. Cisco ACI also offers improved scalability and security while providing centralized multi-site management capabilities, the company adds. Customers who have deployed the SDN platform include: NTT Communications (United States); T-Systems (Germany); Hosters (Denmark), a certified Microsoft Azure Managed Service Provider; scanplus GmbH (Germany), a managed cloud services provider; Tieto Oyj (Finland), a Nordic software and services company that has deployed Cisco ACI as a Layer 2 fabric in six data centers in three countries; and Ritter (United States), which provides residential and business services to more than 45,000 customers in rural Arkansas and Tennessee.

http://www.lightwaveonline.com/articles/2018/04/cisco-touts-application-centric-infrastructure-success.html

Red Hat Accelerates Software-Defined Storage Adoption with Red Hat Storage One

RALEIGH, N.C.--(Business Wire)-- @RedHat, Inc. (NYSE: RHT), the world's leading provider of open source solutions, today announced the general availability of Red Hat Storage One, a new approach to web-scale enterprise storage aimed at providing customers the convenience of hardware-optimized systems while preserving the flexibility and scale of software-defined storage. Red Hat Storage One is designed to provide more of a plug-and-play approach to the procurement, deployment, and performance tuning of software-defined storage, ushering in a new wave of enterprise storage to meet the varying demands of modern workloads. Red Hat Storage One is a platform of pre-engineered systems, created in close collaboration with Red Hat’s server hardware partners, that delivers tightly packaged workload-optimized storage solutions. The fulfillment and support is offered via Red Hat’s hardware partner ecosystem. Supermicro is the first Red Hat server partner offering a single part number for software, hardware, and support of the solution. With rapidly increasing data volumes, many organizations are facing pressure to modernize traditional systems. As a result, many enterprises now want the benefits of software-defined storage without investing in the deep skill sets that may be required to configure and tune the systems. Unlike many proprietary, monolithic storage systems that are labeled software-defined storage only in name, Red Hat Storage One offers an open, flexible, and modular solution that can easily be extended to meet the evolving needs of the modern enterprise. Red Hat Storage One can help customers get the flexibility of software-defined storage and the convenience of storage appliances, with features including:

http://m.digitaljournal.com/pr/3747643

Dell EMC storage strategy talk buzzes Dell Tech World

Dell EMC World is no more. The combined vendor has renamed its annual user summit Dell Tech World, where data storage product upgrades will touch flash, HCI and software-defined storage.

On Monday @Dell and @EMC will kick off their third annual user conference since merging. The conference is happening amid a swirl of questions about the technology and business directions of the world's largest storage vendor. @DellTechnologies World will take place on the same Las Vegas site that hosted EMC World through 2016 and @DellEMC World in 2017. The new name is a reminder of the gradual absorption of the EMC Federation brands into Dell Technologies. On the product front, Dell EMC is expected to beef up its most successful emerging storage platforms -- all-flash arrays and hyper-converged infrastructure -- as well as legacy Dell platforms. Other expected themes include Dell EMC's long-range storage roadmap for non-volatile memory express (NVMe) flash, plans for revitalizing sales of the vendor's midrange storage arrays and a viable hybrid cloud strategy. Will Dell and VMware merge, or not? Is Dell EMC an IPO target? One thing to listen for is whether Dell chairman and CEO Michael Dell addresses ongoing reports of a potential reverse merger with publicly traded VMware during his Monday keynote. Dell owns an 81% stake in VMware after buying EMC. Allowing VMware to acquire Dell Technologies would shift merger-related debt to VMware's books and make Dell a public company again. Such a move requires approval from VMware shareholders. Dell has been considering ways to reduce its debt burden of about $40 billion, coming mostly from its $60 billion-plus EMC acquisition. It recently spun off its Pivotal Software subsidiary (acquired in the EMC deal) through an initial public offering. Other options reportedly under discussion include taking Dell EMC public or doing nothing and keeping the existing corporate structure intact. As a paper restructuring, a merger involving VMware makes a lot of sense, said Greg Schulz, chief advisory analyst at Server StorageIO, a consulting firm in Stillwater, Minn. "Candidly, I'd be surprised if they announce it at the show, but I wouldn't rule it out," Schulz said. "There are some people who look at this as Dell unraveling the deal with EMC, as if to say 'I told you so.' Either way, though, Michael Dell has to address the VMware rumors. The less he says about it, the more Dell EMC is going to get grilled about it." How will Dell tie storage refreshes to emerging hybrid cloud workflows? Aside from expected product news, Dell EMC needs to articulate how its data storage helps customers adapt in a hybrid cloud era, said Scott Sinclair, a storage analyst at Enterprise Strategy Group in Austin, Texas. The message takes on added significance given the revenue growth posted by rival NetApp since inaugurating its hybrid cloud Data Fabric strategy three years ago. "Organizations are trying to wrap their head around what it means to compete in a digital economy," Sinclair said. "I'm looking for Dell to lay out its vision for the data center of the future and how they plan to help customers get there. They need the new product launches and features, but for a company of Dell's breadth, the strategic message at the end of the show should be a sum greater than the components of each individual product release." Dell EMC storage revenue has declined for several quarters, indicative of an industry-wide pattern. Dell executives partly blame the slump on sales execution in midrange storage. That prompted changes to its compensation structure and a realignment of the Dell Infrastructure Solutions Group (ISG), which encompasses networking, servers and storage.
Dell EMC storage accounted for $4.2 billion in revenue last quarter, a drop of 11% year over year. A good chunk of that revenue stemmed from consumption-based purchases and recurring maintenance contracts on existing storage gear. Reshuffling ISG is an acknowledgement by Dell EMC that legacy networked storage is slowly giving way to flash and hyper-converged infrastructure. Longtime Dell executive Jeff Clarke heads the realigned ISG, having taken over following the retirement of EMC veteran David Goulden last year. Clarke also runs the Dell client solutions group as vice chairman of products and operations. "What we're seeing is the aftermath of the EMC acquisition," said Eric Burgener, a research vice president for the storage practice at analyst firm IDC, based in Framingham, Mass. "They were very open at the time that it was going to take several years for them to streamline the organization. There are certain products that they're getting rid of and they have to choose between some [others]. I think we'll probably see that going for at least another year." Camberley Bates, a managing director and analyst at Evaluator Group, based in Boulder, Colo., said she wants to hear a clear strategy to stabilize post-merger turnover in the Dell EMC storage sales team. "We understand EMC has lost quite a few salespeople," she said. "EMC had the best storage sales force for a very long time, but changes in customer relationships and shifts within the larger Dell company have caused some EMC customers to pause as they make decisions" on product refreshes. "Can they be a storage growth company? Yes, if they have a clear roadmap discussion on investments they're making in the product. Second, they need to retool the sales force, which I believe they are doing, such that there is an expert focus on storage. We have seen several firms combine server and storage sales, without success," Bates added. A storage portfolio top-heavy with midrange offerings may also be weighing on sales. Prior to the merger, EMC had the market-leading VNX and Unity platforms, while Dell sold the SC Series (formerly Compellent) and PS Series (formerly EqualLogic) SAN arrays. Dell EMC plans to phase out the PS Series, but it added all-flash options to its SC Series last year. All-flash XtremIO arrays also sell into the midrange market. Ultimately, Schulz said, Dell EMC will have to consolidate to one or two midrange storage platforms that can provide a webscale architecture. "Customers at the very high end will continue to buy Dell EMC high-end VMAX," he said. "There won't be many new customers in that area, but it's in the midrange where they are most vulnerable. Ultimately, Dell EMC will have to consolidate to one or two midrange storage platforms" to address emerging cloud and webscale workloads. Will Dell EMC formalize its strategy on NVMe? This seems like the year for Dell EMC to make a splash with NVMe storage, with analysts predicting that NVMe over Fabrics host-to-target connectivity will boost deployments by 2019. Dell EMC was an early entrant in NVMe storage with its DSSD D5 Rack-Scale Flash Appliance, which it designed with a proprietary mesh fabric. It was also the first to scrap an NVMe flash product, taking DSSD D5 out of circulation in May 2017 after meager sales performance.
Company product executives have said the plan is to integrate DSSD as a fast flash tier in existing platforms, making it a candidate for insertion in VMAX, XtremIO or server-side flash with Dell PowerEdge, but Dell EMC has yet to indicate how, if at all, it will actually use the DSSD technology. EMC was an early contender in all-flash arrays, but that market has since plateaued, IDC's Burgener said. "Even as late as last year, the Dell EMC VMAX all-flash was the fastest-growing revenue stream in their external array portfolio. It overtook the XtremIO. But a lot of that growth is starting to level off, because they're not leveraging their installed base [for] replacements," he said.

https://searchstorage.techtarget.com/news/252439896/Dell-EMC-storage-strategy-talk-buzzes-Dell-Tech-World

NetApp Finds Its Cloud Feet

@NetApp appears to have turned the corner and finally found a solid story to tell about its cloud plans. I've been frustrated by NetApp's cloud approach for a few years. The ideas were solid enough, but the delivery on them, and the communication to customers, lagged the potential. We saw the result in flat and declining revenues. Now it looks like NetApp has managed to acquire enough outside talent who truly understand the non-ONTAP world that it can start to move past its own heritage and become an innovative company again. The portfolio of cloud products and services is now broad enough to provide useful options to enterprises that like NetApp on-site but also want to be using more cloud. NetApp is adding features and functions to suit what customers are looking for today, rather than coasting on past glories. [Photo: Eiki Hrafnsson, Technical Director of Cloud Data Services, presents at Cloud Field Day 3. Image credit: Stephen Foskett] "Having built a public cloud from scratch […] I can tell you, storage is hard," said Eiki Hrafnsson, Technical Director of Cloud Data Services. Hrafnsson came to NetApp about 8 months ago when NetApp purchased Greenqloud, the Icelandic hybrid cloud management platform founded by Hrafnsson and Tryggvi Larusson and steered into acquisition by ex-CEO Jonsi Stefansson, who is now VP of Cloud Services at NetApp. Greenqloud had tried to do things with more bleeding edge solutions, and found all of them wanting. "It was only when we started to use NetApp that we saw real stability, and real performance at the same time," he said. Stability and performance are fairly important for storage systems, regardless of whether they're on-site, in a colo, or in the cloud. What's been lacking, however, has been the modern set of features that application developers have been looking for. Developers have been willing to put up with lower performance, and sometimes less stability, in order to gain access to features. "A lot of people, including myself, would ask 'Why would I use ONTAP as a controller in the cloud? Just use EBS, right?'" said Hrafnsson. "But when I looked into it, it's actually cheaper and higher performance to use an ONTAP cluster in AWS than to use EBS." NetApp's new Cloud Volumes service builds on this foundation to provide the same high-performance and featureful storage layer, but as a consumption service with APIs to drive it all. The full suite of cloud options is clearly laid out at cloud.netapp.com, and it seems to me that NetApp has done a good job of clearly articulating what it offers people, and why they should want it. Not everyone will, and that's okay. What's been lacking until now is a clear message from NetApp about why cloud people should entertain NetApp as a solution, and there is still a lot of work to be done to emerge from the long shadow of the appliance-based ONTAP that made the company successful. The network side of cloud continues to be the thorn in the side of NetApp's cloud aspirations.
Those pesky egress charges create a substantial switching cost if you invest in one particular cloud and then change your mind. It works great for the cloud provider, of course, but not so great for customers trying to be agile and responsive, which is what cloud is supposed to be about. For customers with a lot of data—or plans to use a lot of data—this is an important planning consideration when deciding where, or even if, to go to cloud.
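To put a rough number on that switching cost, the sketch below estimates the one-time egress bill for moving a data set out of a public cloud. The data volume and the per-gigabyte egress rate are assumptions chosen for illustration; real pricing is tiered and varies by provider, region, and contract.

```python
# Rough illustration of cloud egress cost as a switching cost.
# Both numbers below are assumptions for illustration, not figures from the article.

DATA_TB = 500                   # data you would have to move out
EGRESS_PRICE_PER_GB = 0.05      # assumed blended egress rate, $/GB

egress_cost = DATA_TB * 1024 * EGRESS_PRICE_PER_GB
print(f"One-time egress bill to move {DATA_TB} TB out: ${egress_cost:,.0f}")
# -> roughly $25,600, before paying to ingest and re-platform the data elsewhere
```

Even at modest per-gigabyte rates, the bill scales linearly with data volume, which is why data-heavy customers feel locked in.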

https://www.forbes.com/sites/justinwarren/2018/04/25/netapp-finds-its-cloud-feet/#61722cd4699f

1 Headwind IBM Will Face This Year

International Business Machines' (NYSE: @IBM ) return to growth after five years of declining revenue has been driven by a few different things. A weakening U.S. dollar has boosted the top line thanks to currency translation effects, the continued double-digit expansion of IBM's growth businesses has offset slumping sales in some legacy businesses, and the launch of the newest z14 mainframe system has provided a boost to the hardware business. The timing of the new mainframe system was just about perfect, coming when IBM absolutely needed to prove that it could grow. A 71% jump in mainframe sales during the fourth quarter of 2017 was enough to push IBM's total revenue, adjusted for currency, about 1% higher. Momentum continued for the mainframe in the first quarter of 2018 with 54% growth, but currency-adjusted total revenue was just about flat. [Image: the z14 mainframe. Source: IBM.] Unfortunately, this strong mainframe growth probably won't last. The initial spike in sales following a mainframe launch typically gives way once that launch is lapped. The z14 started shipping toward the end of the third quarter of 2017, so the second half of 2018 will likely feature a mainframe headwind that will test IBM's ability to produce revenue growth. The mainframe cycle: IBM refreshes its mainframe lineup every few years. The z14 was announced in July 2017, about two and a half years after the launch of the z13. Each new mainframe triggers some existing customers to upgrade. This creates a swell in mainframe sales, often lasting a few quarters. After that initial surge, mainframe sales start to slump as the launch is lapped. That slump lasts until the next mainframe refresh, which starts the process over again.

https://www.fool.com/investing/2018/04/24/1-headwind-ibm-will-face-this-year.aspx

Former Nimble CEO Becomes New Leader Of Open-Source Container Vendor Sysdig

Former @Nimble Storage president and CEO @Suresh Vasudevan on Tuesday became the new president and CEO of @Sysdig, maker of an #opensource #container intelligence platform. The appointment comes one year after @Hewlett Packard Enterprise completed its $1.2 billion acquisition of all-flash storage vendor Nimble. Vasudevan, a storage industry veteran, led Nimble for six years, from its startup days in 2011 through its acquisition by HPE in April 2017. The company went public during that time, completing an IPO in 2013. Prior to Nimble, he served as CEO of server and storage vendor Omneon and as senior vice president and general manager of the networked storage and manageability group at NetApp as part of a 10-year tenure there. "[Sysdig] has unlocked a new data source at the heart of applications and created a unified platform that solves both security and monitoring challenges," said Vasudevan in a statement. "No other platform can offer something so elegant and powerful."
https://www.crn.com/news/applications-os/300102561/former-nimble-ceo-becomes-new-leader-of-open-source-container-vendor-sysdig.htm

Thursday, April 26, 2018

Dell And HPE Battle For Cloud Infrastructure Supremacy As Public Cloud Drives $12.8B In Q4 Spending

Huge growth in infrastructure sales: @Dell Technologies is widening the market share gap with its infrastructure rival @Hewlett Packard Enterprise as #publiccloud infrastructure spending from the likes of @Amazon, @Google and @Facebook continues to soar, according to new data from research firm @IDC. Worldwide cloud infrastructure sales for public and private cloud grew more than 27 percent year over year to $12.8 billion in the fourth quarter of 2017, according to IDC. For the full year of 2017, cloud deployment revenue reached $43.4 billion, representing 22 percent growth compared to 2016. Here are the top five vendors that led the global cloud infrastructure market during the fourth quarter of 2017.
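For context, the growth rates above imply the following approximate 2016 baselines. This is a rough back-calculation from IDC's published figures, treating "more than 27 percent" and "22 percent" as exact rates.

```python
# Rough back-calculation of the 2016 baselines implied by IDC's growth figures.
# The quoted growth rates are treated as exact, so these are approximations.

q4_2017 = 12.8   # $B, public + private cloud infrastructure, Q4 2017
fy_2017 = 43.4   # $B, full-year 2017

q4_2016 = q4_2017 / 1.27
fy_2016 = fy_2017 / 1.22
print(f"Implied Q4 2016 spending: ~${q4_2016:.1f}B")        # ~$10.1B
print(f"Implied full-year 2016 spending: ~${fy_2016:.1f}B")  # ~$35.6B
```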

https://www.crn.com/slide-shows/data-center/300101596/dell-and-hpe-battle-for-cloud-infrastructure-supremacy-as-public-cloud-drives-12-8b-in-q4-spending.htm

HPE Denies Huawei Partnership Reports

PALO ALTO, Calif. -- "This announcement and resulting news coverage is inaccurate. @HPE does not have a partnership with @Huawei to develop #SDWAN or any other technology for general market availability. As a strong supporter of open standards and interfaces, HPE works together with all major vendors on customer-specific projects." @Hewlett Packard Enterprise

Overcoming Space and Power Limitations in HPC Data Centers

In companies of all sizes, critical applications are being adopted to accelerate product development, make forecasts based on predictive models, enhance business operations, and improve customer engagements. As a result, there is a growing need for Big Data analytics in many businesses, more sophisticated and more granular modeling and simulation, widespread adoption of AI (and the need to train neural nets), and new applications such as the use of genomic analysis in clinical settings and personalized medicine. These applications generate workloads that overwhelm the capacity of most installed data center server systems. Simply put, today's compute-intensive workloads require access to significant HPC resources. Challenges bring HPC to the mainstream: Many of today's new and critical business applications are pushing the limits of traditional data centers. As a result, most companies that previously did not need HPC capabilities now find such processing power is required to stay competitive. Unfortunately, several problems prevent this from happening. When attempting to upgrade infrastructure, most organizations face inherent data center limitations with space and power. Specifically, many data centers lack the physical space to increase compute capacity significantly. And all organizations incur high electricity costs to run and cool servers, while some data centers have power constraints that cannot be exceeded. Additionally, there is a lack of internal HPC expertise. IT staff may not have the knowledge base to determine which HPC elements (including processors, memory, storage, power, and interconnects) are best for the organization's workloads, or the expertise to carry out HPC system integration and optimization. These skills have not been required in mainstream business applications until now. As a result, most organizations need help when selecting an HPC solution to ensure it is the right match for the organization's compute requirements and budget constraints, and one that fits into an existing data center. Selecting the right technology partner: Traditional clusters consisting of commodity servers and storage will not run the compute-intensive workloads being introduced into many companies today. Fortunately, HPC systems can be assembled using the newest generation of processors, high-performance memory, high-speed interconnect technologies, and high-performance storage devices such as NVMe SSDs. However, to address data center space and power issues, an appropriate solution must deliver not just HPC capabilities, but the most compute power per watt in a densely packed enclosure. To achieve this, it makes sense to find a technology partner with deep HPC experience who can bring together optimized systems solutions with rack hardware integration and software solution engineering to deliver ultimate customer satisfaction. This is an area where Super Micro Computer, Inc. can help. Supermicro® has a wide range of solutions to meet the varying HPC requirements found in today's organizations. At the heart of its HPC offerings are the SuperBlade® and MicroBlade™ product lines, which are advanced high-performance, density-optimized, and energy-efficient solutions for scalable, resource-saving HPC applications. Both lines offer industry-leading performance, density, and energy efficiency. They support optional BBP® (Battery Backup Power) modules, so the systems provide extra protection to the data center when a power outage or UPS failure occurs.
This feature is ideal for critical workloads, ensuring uptime in the most demanding situations. SuperBlade and MicroBlade solutions are offered in several form factors (8U, 6U, 4U, 3U) to meet the various compute requirements in different business environments. At the high end of the spectrum is the 8U SuperBlade: the SBE-820C series enclosure supports 20x 2-socket (Intel® Xeon® Scalable processor) blade servers with 40 hot-plug NVMe SSDs or 10x 4-socket (Intel® Xeon® Scalable processor) blade servers with 80 hot-plug NVMe SSDs, a 100Gbps EDR InfiniBand or 100Gbps Intel Omni-Path switch, and 2x 10GbE switches. This SKU is best for HPC, enterprise-class applications, cloud computing, and compute-intensive applications. The SBE-820J series enclosure supports 20x 2-socket (Intel® Xeon® Scalable processor) blade servers with 40 hot-plug NVMe SSDs or 10x 4-socket (Intel® Xeon® Scalable processor) blade servers with 80 hot-plug NVMe SSDs, and 4x Ethernet switches (25GbE/10GbE). This SKU is similar to the one above, except it is built to operate at 25G/10G Ethernet instead of 100G InfiniBand or Omni-Path, making it most suitable for HPC workloads in IT environments that leverage Ethernet switches with 40G or 100G uplinks. The 8U SuperBlade offering includes the highest-density x86-based servers, supporting up to 205W Intel® Xeon® Scalable processors. One Supermicro customer, a leading semiconductor equipment company, is using 8U SuperBlade systems for HPC applications with 120x 2-socket (Intel® Xeon® Scalable processor) blade servers per rack, saving a significant amount of space and investment dollars in its data center. Supermicro solutions also helped a Fortune 50 company scale its processing capacity to support rapidly growing compute requirements. To address space limitations and power consumption issues, the company deployed over 75,000 Supermicro MicroBlade disaggregated, Intel® Xeon® processor-based servers at its Silicon Valley data center. Both SuperBlade and MicroBlade are equipped with advanced airflow and thermal design and can support free-air cooling. As a result, this data center is one of the world's most energy-efficient, with a Power Usage Effectiveness (PUE) of 1.06. Compared to a traditional data center running at a PUE of 1.49, the Supermicro-powered Silicon Valley facility cuts power and cooling overhead by roughly 88 percent. When the build-out is complete at a 35-megawatt IT load, the company is targeting $13.18M per year in savings on total energy costs across the entire data center.
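As a sanity check on those figures, the short calculation below reproduces the roughly 88 percent overhead reduction and the $13.18M annual savings from the stated PUE values and the 35 MW IT load. The electricity rate of about $0.10/kWh is an assumption inferred to make the published savings figure work out; it is not stated in the article.

```python
# Back-of-the-envelope check of the PUE savings cited above.
# Assumptions (not stated in the article): 8,760 hours/year of operation and
# an electricity price of about $0.10/kWh, which makes the published
# $13.18M/year figure work out.

IT_LOAD_MW = 35.0              # IT load at full build-out
PUE_NEW, PUE_OLD = 1.06, 1.49  # new facility vs. traditional data center
PRICE_PER_KWH = 0.10           # assumed utility rate, $/kWh
HOURS_PER_YEAR = 8760

overhead_new = IT_LOAD_MW * (PUE_NEW - 1)   # 2.1 MW of power/cooling overhead
overhead_old = IT_LOAD_MW * (PUE_OLD - 1)   # 17.15 MW of overhead
overhead_reduction = (overhead_old - overhead_new) / overhead_old
print(f"Overhead reduction: {overhead_reduction:.0%}")              # ~88%

saved_mw = (PUE_OLD - PUE_NEW) * IT_LOAD_MW                          # 15.05 MW
saved_kwh_per_year = saved_mw * 1000 * HOURS_PER_YEAR
print(f"Annual savings: ${saved_kwh_per_year * PRICE_PER_KWH / 1e6:.2f}M")  # ~$13.18M
```

In other words, the headline 88 percent figure describes the reduction in non-IT (power and cooling) energy, not total facility energy, and the dollar savings scale directly with the assumed electricity rate.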

https://www.hpcwire.com/2018/04/23/overcoming-space-and-power-limitations-in-hpc-data-centers/