
Friday, March 30, 2018

VMware vSAN with Intel Optane Preview

We've tweeted about it and it's been featured in our newsletter, but to share the news with the rest of our audience, we have been working on prepping a four-node #vSAN cluster. In partnership with @Supermicro, @VMware and @Intel, we have several reviews and content pieces forthcoming that highlight vSAN capabilities across a broad spectrum of storage, interconnect, CPU and RAM configurations. Primarily we'll be working within what's dubbed Intel Select Solutions for VMware vSAN; but depending on timing and components, we may be able to expand into other areas, including VMware tools like replication or VMware cloud options within @AWS, for instance.

Thursday, March 29, 2018

Jeremy Burton Stepping Down As Dell EMC CMO, Allison Dew Taking Over

@JeremyBurton, chief marketing officer of @DellTechnologies and one of the best-known public faces of the company, is stepping down from his position, Dell Technologies said Thursday. Taking over from Burton as chief marketing officer is @AllisonDew, a 10-year Dell marketing veteran who most recently has been leading marketing for Dell's Client Solutions Group, the Round Rock, Texas-based company said. Dew also spent several years in various advertising and marketing roles at @Microsoft. In a statement about Burton's departure, Dell Technologies CEO @MichaelDell said, "I cannot thank Jeremy enough for his vision and leadership, and for everything he's done for our company. He departs on a very high note, having played a leading role in establishing Dell Technologies among the world's most highly trusted and essential IT companies. He now hands the baton to Allison Dew, who has proven herself as one of the industry's most highly respected, experienced and capable marketing leaders." Burton is one of a group of top executives who joined Dell Technologies when that company acquired EMC. During the two years since, Burton spearheaded Dell's global marketing structure and strategy and managed the company's enterprise marketing. Just as important, Burton played a key role in corporate development, including leading its merger-and-acquisition and venture capital investment activities. Burton spent four years as executive vice president and chief marketing officer at EMC starting in 2010 before transitioning to president of that company's products and marketing business. Prior to EMC, he was president and CEO of Serena Software for nearly three years, and focused on security and data management at Symantec and Veritas. Jamie Shepard, managing director for IT transformation and cloud at New York-based Accenture, said he has known Burton since he joined EMC. "He's a software guy and a techie," Shepard told CRN. "I've always applauded his marketing efforts." Shepard said that prior to EMC first introducing its VSPEX converged infrastructure reference architecture in 2012, Burton was hands-on not only with the marketing of the new offering but also on the technical side. "It's one thing to have great engineering with intelligent software built in," he said. "There were a lot of ideas going on technically. But Burton was hands-on. He said customers were not ready for a lot of the features. He was unusual for a marketing guy. He knew the technology, and helped paint a proper vision for partners and customers." When EMC formed its early Partner Advisory Council of eight partners, including Shepard, who was then with International Computerware, it was looking for ways to better work with partners with their own individual needs, Shepard said. "Jeremy walked into a room and knew we were different sizes, had different types of customers, and were competing with each other," he said. "But he knew how to talk with all of us. He was a channel advocate. More importantly, he was an industry advocate. He genuinely liked to see people succeed."

What's Behind VMware Stock's Fall

@Dell’s decision is hanging over VMware

@VMware (VMW) has been in the news as its stock has fluctuated significantly in the last two months due to speculation regarding its parent company, Dell. Prior to the acquisition by Dell, #EMC held an 80% stake in VMware. Following EMC’s acquisition in late 2016, VMware became a part of @DellTechnologies, which is currently considering several strategic options, including a reverse merger with VMware. At the time, the Dell-EMC deal was not only the tech industry’s (QQQ) biggest buyout but also the largest financing commitment for a technology deal.
Though VMware has chosen to remain tight-lipped about the rumored reverse merger plans, VMware chief financial officer Zane Rowe said that fiscal 2018 brought $400 million in synergies, as Dell helped VMware sell its offerings. During the company’s 4Q18 earnings call, he said, “The $400 million was an increase from the prior guide that we gave you of $250 million which was the annual synergy capture, annual bookings synergy capture for FY 2018.”

How Does Micron Look to Tap Secular Memory Demand Trends?

@Micron well positioned to tap secular demand

Memory demand is becoming more secular, growing beyond PCs (personal computers) and into the #datacenter, #autonomouscars, #IoT (Internet of Things), and #AI (artificial intelligence) spaces. Micron Technology (MU) is uniquely placed to tap this opportunity by leveraging its broad memory portfolio of advanced #DRAM (dynamic random-access memory), #NAND (negative-AND), #NORFlash, and #3DXPoint technologies. That portfolio helped it achieve record design wins in the automotive space in its fiscal 1H18, which ended on March 1, 2018. In the NAND space, Micron’s management is focusing on delivering high-value solutions. In the DRAM space, it’s investing in technology advancements to achieve cost competitiveness with Samsung (SSNLF) and SK Hynix.

HPE Aruba President Melkote On The Value Of A Multi-Vendor, API-First Approach And The Endgame For Cisco-Style Vendor Lock-In

API First, Lock-In Last

A new #machinelearning-based #networkanalytics offering, an acquisition, and an entrance into the red-hot market for software-defined WAN solutions: @HPE @Aruba President @KeertiMelkote is aggressively pushing the company into an API-first strategy that he says provides more value to customers and partners, and deals a well-placed blow to @Cisco's approach to the new era of networking. Melkote highlighted the launch of HPE Aruba's NetInsight analytics offering, the acquisition of @CapeNetworks, and its planned midyear push into the #SDWAN market during the company's Atmosphere conference in Las Vegas this week. The moves are part of a strategy to help the company and its partners capitalize on the market's move toward software-based infrastructures as customers seek to quickly monitor and manage an increasingly mobile workforce. HPE Aruba's API-first, multi-vendor approach to network architecture allows customers to meet those needs in mixed, best-of-breed environments rather than being locked into proprietary solutions from traditional players like Cisco, Melkote said. And while the old and the new will likely co-exist for the foreseeable future, customers are clearly demanding a flexible, secure and efficient software solution, and partners need to double down on meeting those demands to make the most of the huge opportunity in front of them, Melkote said. What follows is an edited excerpt of Melkote's conversation with CRN.

Wednesday, March 28, 2018

Amazon AWS Cloud Services Achieve GDPR Compliance

@Amazon Web Services has achieved General Data Protection Regulation ( #GDPR ) compliance, the public cloud services provider (CSP) disclosed in a blog post. GDPR is a data privacy regulation from the European Union. Businesses and organizations worldwide — from large enterprises to small businesses — are striving to meet a GDPR compliance deadline set for May 25, 2018. Amazon insists that AWS is ready for the regulation. In a blog post this week, AWS VP of Security Chad Woolf stated: “This announcement confirms we have completed the entirety of our GDPR service readiness audit, validating that all generally available services and features adhere to the high privacy bar and data protection standards required of data processors by the GDPR. We completed this work two months ahead of the May 25, 2018 enforcement deadline in order to give customers and APN partners an environment in which they can confidently build their own GDPR-compliant products, services, and solutions.”

Nexenta Achieves Red Hat OpenStack Platform Certification

SANTA CLARA, Calif., March 28, 2018 (GLOBE NEWSWIRE) -- Nexenta (@Nexenta), the global leader in #OpenSource-driven #SoftwareDefined Storage ( #OpenSDS), today announced that it has joined the @RedHat Connect for Technology Partner Program and has collaborated with Red Hat to achieve Red Hat @OpenStack certification. This certification demonstrates that #NexentaStor, a leading enterprise-grade unified file and block storage solution, has been tested and certified for use with Red Hat OpenStack Platform to provide storage services, performance and compatibility. With this announcement, Nexenta is excited to enable customers with block and file storage services for Red Hat OpenStack Platform, and with the availability of NexentaStor Cinder and Manila drivers via Red Hat OpenStack Platform 10. "We’re excited to join the Red Hat Connect for Technology Partner program with this certification," said Don Lopes, VP of Channels and Marketing at Nexenta. "Our market strength in Software-Defined Storage combined with Red Hat's work in OpenStack can deliver enhanced value and benefits to our mutual customers and prospects across the Enterprise, Telco and Cloud segments.” "We are pleased to have Nexenta join us in the Red Hat Connect for Technology Partner Program," said Mike Werner, senior director, Global Technology Ecosystems at Red Hat. “Red Hat works with partners like Nexenta that are committed to helping customers get the most out of our open hybrid cloud solutions, including Red Hat OpenStack Platform. We look forward to continuing our collaboration with Nexenta as an important member of Red Hat's OpenStack ecosystem." To view additional details about NexentaStor certification with RHOSP, please visit http://bit.ly/NexentaOpenStack

Four Providers of Datacenter Software-Defined Networking Named IDC Innovators

FRAMINGHAM, Mass.--(BUSINESS WIRE)--International Data Corporation (IDC) today published an IDC Innovators report focusing on four technology providers that are considered key emerging vendors in the software-defined networking (SDN) market. The four companies named as IDC Innovators are @Apstra, Inc., @Big Switch Networks, Inc., @Plexxi, and @Pluribus Networks. #SDN, and its evolutionary extension into intent-based networking ( #IBN), represents an architectural approach to datacenter networking in the cloud era. It is defined by an abstracted architectural model that better aligns network infrastructure with the needs of application workloads through automated (thereby faster) provisioning, programmatic network management, pervasive application-oriented visibility and, where needed, direct integration with cloud orchestration platforms. These capabilities can translate into significant operational savings while providing organizations with the means of faster time to business outcomes and revenue realization. "Given the imperative of digital transformation and the need to support cloud-native applications and to deliver cloud-like agility, enterprises are compelled to modernize their datacenter-network architectures and operational models. Fortunately, practical and viable options for datacenter SDN and IBN are available, addressing requirements associated with a broad range of workloads and use cases. This IDC Innovators report profiles four vendors that take varied approaches to addressing the need for datacenter networking in the cloud era," said Brad Casemore, research vice president, Datacenter Networks. Apstra offers a vendor-agnostic datacenter-network operating system and intent-based analytics that provide real-time, continuous closed-loop validation. Big Switch Networks offers a fabric-based SDN solution that is designed to make the datacenter network intelligent, agile, and flexible. Plexxi offers a programmable, workload-driven Ethernet/IP-based datacenter fabric that enables convergence of discrete networks into an intelligent datacenter fabric serving all workloads. Pluribus Networks offers a dynamic, scale-out architecture that enables organizations to build a next-generation private or public cloud that is optimized to support mission-critical environments.

The Linux Foundation Hosts 'DANOS' Project, a Unified Network Operating System

LOS ANGELES, March 27, 2018 /PRNewswire-USNewswire/ -- #OpenNetworking Summit -- The #LinuxFoundation today announced the Disaggregated Network Operating System ( #DANOS) project to enable community collaboration across network hardware, forwarding and operating system layers. DANOS is initially based on @AT&T's "dNOS" software framework, an open, cost-effective and flexible alternative to traditional network operating systems. As part of The Linux Foundation, it will incorporate contributions from complementary open source communities to build a standardized distributed Network Operating System (NOS). NOS creation has historically been challenging due to the complexity and magnitude of software and hardware requirements -- as each device in the network needs its own operating system. However, the growth of open source, combined with advances in both software and hardware, has fostered an ecosystem of advanced networking applications, new and open technologies, and building blocks on which to host an open and flexible NOS. "We are pleased to welcome DANOS to The Linux Foundation community of open, collaborative innovation," said Arpit Joshipura, General Manager of Networking, The Linux Foundation. "DANOS will provide an open NOS framework that leverages existing open source resources and complementary platforms such as switches and white box routers. We invite others in the broader ecosystem to join the effort to accelerate innovation and creation of an industry-standard disaggregated NOS." "As far as we know, DANOS is an industry first: an open-source, carrier-grade operating system for wide area networks," said John Medamana, Vice President, Packet Optical Network, AT&T. "DANOS is a milestone for us and the industry, and we're excited to see how developers and other users implement and build upon it." Supporting Quotes: "The FRRouting team welcomes DANOS to the Linux Foundation," said J.R. Rivers, CTO of Cumulus Networks. "With 3,413 commits from 68 authors and 42 organizations in the last year, the team continues to build the most full-featured, high-performance open routing stack available." "On behalf of OpenSwitch I'd like to welcome DANOS to open source NOS and Disaggregated Networking," said Alley Hansen, Board Chair, OpenSwitch, and Director, Strategy Networking Dell. "We look forward to the collaboration with DANOS on leveraging the work OPX has accomplished in order to drive more value for operators and to accelerate the adoption of composable networks." "ONF has been a champion of disaggregation, white boxes and open source and in this regard, we are happy to see DANOS launched as a carrier-grade, open source network OS for white boxes," said Guru Parulkar, Executive Director, ONF. "We are looking forward to DANOS and ONF's recent open source platform Stratum, a thin switch OS, working together." "SDKLT is a revolutionary, feature-rich open source Software Development Kit, which enables a new approach to switch configuration," said Eli Karpilovski, Director of Marketing, Switch Products at Broadcom. "Broadcom is thrilled to showcase how this mature switch SDK can advance Linux Foundation projects such as DANOS." "We're excited to see the Switch Abstraction Interface (SAI) as the hardware abstraction layer for DANOS, which will enable DANOS to work across the wide set of switches and switch ASICs that implement SAI," said Dave Maltz, Distinguished Engineer, Microsoft Corp.

https://www.prnewswire.com/news-releases/the-linux-foundation-hosts-danos-project-a-unified-network-operating-system-300620140.html

Microsoft Creates Wild Half-Electron Quasiparticle for Its Future Quantum Computer

@Microsoft just announced a #quantumcomputing advancement: a measurement that looks like an electron split in half in a piece of wire. It will be of central importance if the company hopes to create a working #quantumcomputer. Big companies like @IBM, @Google, and @Intel (and even a few startups) have built quantum computers with multiple qubits. It may appear that Microsoft is lagging behind—it hasn’t even debuted a single qubit yet! But Microsoft is working on its own quantum computer that incorporates brain-melting physics to overcome challenges plaguing its competitors. If it gets everything working, this could be a really big deal.
Quantum computers are machines that use the basics of quantum physics, the physics of the smallest particles, to perform calculations difficult or impossible for regular computers to do. While you may have heard of machines with as many as Google’s 72 qubits, those are imprecise. The tiniest vibrations or energy from the outside environment could lead to an error in the calculation. But Microsoft’s ‘topological’ quantum computers might drastically reduce that noise. Its researchers are making some advances this year, including a paper published today in Nature, and they think they’ll have working qubits by the end of the year.
“One of our qubits will be as powerful as 1,000 or 10,000 of the noisier qubits,” Microsoft’s Julie Love, director of quantum computing business development, told me a few weeks ago in a Microsoft boardroom near New York’s Times Square.
Computers calculate with bits—two-state systems, like a coin that can either be heads or tails. A quantum bit, or qubit, is the same, except the coin is flipping in a black box during the calculation. You’re allowed to set some initial values on each side of the coin—complex numbers of the form a+bi, like you learned about in high school, that, when manipulated, output how likely the coin is to land on heads or tails. You only know the value of the coin when you open the box. Computation is done by putting several coins tied together in the box at the same time and letting them interact in a way such that those initial values combine mathematically. The output now relies on all of the coins, which makes certain combinations of heads or tails more likely and certain ones forbidden.
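To make the coin analogy concrete (this is the standard textbook formulation, not something specific to Microsoft's paper), a single qubit's state can be written as

    $|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle$, with $\alpha, \beta \in \mathbb{C}$ and $|\alpha|^2 + |\beta|^2 = 1$

Opening the box (measuring) yields "heads" ($|0\rangle$) with probability $|\alpha|^2$ and "tails" ($|1\rangle$) with probability $|\beta|^2$. Tying $n$ coins together gives a single state carrying $2^n$ such amplitudes, which is where the computational headroom of a quantum machine comes from.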

Dell EMC to resell Metaswitch Composable Network Protocols (CNP)

LOS ALTOS, Calif., March 28, 2018 (GLOBE NEWSWIRE) -- #Cloud native communications software leader @Metaswitch today announced a collaboration with @Dell EMC, who will resell its Composable Network Protocols ( #CNP). Dell EMC will combine Metaswitch’s CNP IP routing and MPLS networking stacks, which draw from more than 35 years of protocol development and are deployed in the products of more than 250 network OEMs, with its own Dell EMC OS10 Open Edition on ONIE-enabled platforms. The combination of these well-established, time-tested technologies separates this from alternative offerings and enables true software disaggregation. “The combination of Metaswitch CNP and Dell EMC OS10 Open Edition significantly raises the viability of software disaggregation in production deployments,” said Drew Schulke, VP of networking at Dell EMC. “With this comprehensive portfolio of hardened network protocol stacks, we’ll work closely on addressing the complexities of data center interconnect and wide-area IP/MPLS routing at scale. We’re pleased to be working closely with Metaswitch with its long and successful track record of development and deployment of protocol stacks.” Dell EMC OS10 Open Edition combines the best of Linux, open computing and networking to advance open networking disaggregation. OS10 Open Edition embodies the evolution of networking software and was designed to introduce new levels of software flexibility and programmability in large-scale data center environments for cloud and communications services providers (CSPs). It easily integrates with DevOps tools and serves as a foundation for the OpenSwitch project within The Linux Foundation. Metaswitch CNP provides completely disaggregated network software solutions for deployments ranging from Layer 2 to Layer 3 fabrics, VPN interconnect and carrier-grade MPLS infrastructures. Running stand-alone or combined with third-party commercial or open source stacks, Metaswitch routing and control plane protocols install and operate as binary applications on top of network operating systems including OpenSwitch. “This combined solution introduces new capabilities and flexible purchasing options when moving to a fully-disaggregated architecture,” said Shriraj Gaglani, EVP of business and corporate development for Metaswitch. “Dell EMC has a very strong track record as a pioneer in open networking and we share their vision for the future of true software disaggregation.” For more information about the Composable Network Protocol packages available through Dell EMC and to download an evaluation version, visit the Metaswitch web site here. Solutions are also being demonstrated at Dell EMC booth #C16 at ONS 2018. About Metaswitch Metaswitch is the world’s leading cloud native communications software company. The company develops commercial and open-source software solutions that are constructively disrupting the way that service providers build, scale, innovate and account for communication services. By working with Metaswitch, visionary service providers are realizing the full economic, operational and technology benefits of becoming cloud-based and software-centric. Metaswitch’s award-winning solutions are powering more than 1,000 service providers in today’s global, ultra-competitive and rapidly changing communications marketplace. For more information, please visit www.metaswitch.com.

HP Unveils Workstations For Machine Learning

Today @HP unveiled new workstations specifically built around machine learning (ML). The new ML platforms leverage the power of HP’s Z Workstations, including graphics technology and software tools, and use them to optimize workflows in ML. Included in this announcement is the HP Z8, which HP refers to as the world’s most powerful workstation for ML development (based on power and hardware specs). #MachineLearning and other potential #ArtificialIntelligence ( #AI) technologies can come with tremendous benefits. ML can drastically reduce design times while simultaneously increasing performance, productivity, and quality. This is especially useful in areas such as facial identification, sentiment analysis, fraud detection, and predictive analytics, where time is of the essence. To create these powerful ML algorithms and data sets, users will need equally powerful workstations at the edge. People tend to think that complex models that need tons of data are built in the cloud. While this is true in many cases, the cost-effectiveness of this method drops rapidly when performance is needed; enter the workstation. Aside from a cost advantage, developers will also have more control on a workstation, along with better security. However, most workstations don’t have quite the power needed for ML workloads. HP’s new Z Workstations provide state-of-the-art tools, such as HP curated software stacks, as well as support for NVIDIA GPU Cloud. The latest HP Z Workstations also offer high security; they are self-healing, with hardware-enforced and manageable security.

Availability

The new ML solutions are available now as part of HP’s Device as a Service (DaaS) offering.

Lightspeed just filed for $1.8 billion in new funding, as the race continues

Just a day after @General Catalyst, the 18-year-old venture firm, revealed plans in an SEC filing to raise a record $1.375 billion in capital, another firm that we’d said was likely to file any second has done just that. According to a fresh SEC filing, Lightspeed Venture Partners, also 18 years old, is raising a record $1.8 billion in new capital commitments from its investors, just two years after raising what was then a record for the firm: $1.2 billion in funding across two funds (one early-stage and the other for “select” companies in its portfolio that had garnered traction). Still on our watch list: news of bigger-and-better-than-ever funds from other firms that announced their latest funds roughly two years ago, including Founders Fund, Andreessen Horowitz and Accel Partners. The supersizing of venture firms isn’t a shock, as we wrote yesterday — though it’s also not necessarily good for returns, as we also noted. Right now, venture firms are reacting in part to the $100 billion SoftBank Vision Fund, which SoftBank has hinted is merely the first of more gigantic funds it plans to raise, including from investors in the Middle East who’d like to plug more money into Silicon Valley than they’ve been able to do historically. The game, as ever, has also changed, these firms could argue. For one thing, the size of rounds has soared in recent years, making it easy for venture firms to convince themselves that to “stay in the game,” they need to have more cash at their disposal. Further, so-called limited partners from universities, pension funds and elsewhere want to plug more money into venture capital, given the lackluster performance some other asset classes have produced. When they want to write bigger checks to the funds in which they are already investors, the funds often try accommodating them out of loyalty. (We’re guessing the greater management fees they receive, which are tied to the amount of assets they manage, are also persuasive.) What’s neglected in this race is the fact that the biggest outcomes can usually be traced to the earlier rounds in which VCs participate. Look at Sequoia’s early investment in Dropbox, for example, or Lightspeed’s early check to Snapchat. No matter the outcome of these companies, short of total failure, both venture firms will have made a mint, unlike later investors that might not be able to say the same. There is also ample evidence that it’s far harder to produce meaningful returns to investors when managing a giant fund. (This Kauffman study from 2012 is among the most highly cited, if you’re curious.) Whether raising so much will prove wise for Lightspeed is an open question. What is not in doubt: @Lightspeed is right now among the best-performing venture firms in Silicon Valley. In addition to being the first institutional investor in now publicly traded @Snap, the company wrote early checks to @MuleSoft, which staged a successful IPO in 2017; to @StitchFix, which staged a successful IPO in 2017; and to @AppDynamics, which sold to @Cisco for $3.7 billion last year. It was an early investor in @Nimble Storage, which sold to @Hewlett Packard Enterprise for just north of $1 billion in cash last March. And just two weeks ago, another of its portfolio companies, @Zscaler, also staged a successful IPO.
At a StrictlyVC event hosted last year by this editor, firm co-founders Ravi Mhatre and Barry Eggers talked about their very long “overnight” success story, and about the importance of funding companies early to help them set up durable businesses. It will be interesting to see whether this new capital is invested in more early-stage deals, or the firm sees growing opportunity to compete at the growth stage. Probably both? Stay tuned.

Supermicro's New Scale-Up Artificial Intelligence and Machine Learning Systems with 8 NVIDIA Tesla V100 with NVLink GPUs Deliver Superior Performance and System Density

SAN JOSE, Calif., March 27, 2018 /PRNewswire/ -- Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in enterprise computing, storage, networking solutions and green computing technology, today is showcasing the industry's broadest selection of GPU server platforms that support NVIDIA® Tesla® V100 PCI-E and V100 SXM2 GPU accelerators at the GPU Technology Conference in the San Jose McEnery Convention Center, booth 215, through March 29.

@Supermicro offers the best-performing #GPU servers with #Tesla V100 PCI-E and V100 SXM2 32GB GPUs.

For maximum acceleration of highly parallel applications like #artificialintelligence ( #AI ), #deeplearning, #selfdriving cars, #smartcities, #healthcare, #bigdata, #HPC, #virtualreality and more, Supermicro's new 4U system with next-generation @NVIDIA #NVLink™ interconnect technology is optimized for maximum performance. SuperServer 4029GP-TVRT supports eight NVIDIA Tesla V100 32GB SXM2 GPU accelerators with maximum GPU-to-GPU bandwidth for cluster and hyper-scale applications. Incorporating the latest NVIDIA NVLink technology with over five times the bandwidth of PCI-E 3.0, this system features independent GPU and CPU thermal zones to ensure uncompromised performance and stability under the most demanding workloads. "On initial internal benchmark tests, our 4029GP-TVRT system was able to achieve 5,188 images per second on ResNet-50 and 3,709 images per second on InceptionV3 workloads," said Charles Liang, President and CEO of Supermicro. "We also see very impressive, almost linear performance increases when scaling to multiple systems using GPU Direct RDMA. With our latest innovations incorporating the new NVIDIA V100 32GB PCI-E and V100 32GB SXM2 GPUs with 2X memory in performance-optimized 1U and 4U systems with next-generation NVLink, our customers can accelerate their applications and innovations to help solve the world's most complex and challenging problems." "Enterprise customers will benefit from a new level of computing efficiency with Supermicro's high-density servers optimized for NVIDIA Tesla V100 32GB data center GPUs," said Ian Buck, vice president and general manager of accelerated computing at NVIDIA. "Twice the memory with V100 drives up to 50 percent faster results on complex deep learning and scientific applications and improves developer productivity by reducing the need to optimize for memory." "At Preferred Networks, we continue to leverage Supermicro's high-performance 4U GPU servers to successfully power our private supercomputers," said Ryosuke Okuta, CTO of Preferred Networks. "These state-of-the-art systems are already powering our current supercomputer applications, and we have already begun the process of deploying Supermicro's optimized new 4U GPU systems loaded with NVIDIA Tesla V100 32GB GPUs to drive our upcoming new private supercomputers." Supermicro is also demonstrating the performance-optimized 4U SuperServer 4029GR-TRT2 system that can support up to 10 PCI-E NVIDIA Tesla V100 accelerators with Supermicro's innovative and GPU-optimized single root complex PCI-E design, which dramatically improves GPU peer-to-peer communication performance. For even greater density, the SuperServer 1029GQ-TRT supports up to four NVIDIA Tesla V100 PCI-E GPU accelerators in only 1U of rack space and the new SuperServer 1029GQ-TVRT supports four NVIDIA Tesla V100 SXM2 32GB GPU accelerators in 1U.
With the convergence of big data analytics and machine learning, the latest NVIDIA GPU architectures, and improved machine learning algorithms, deep learning applications require the processing power of multiple GPUs that must communicate efficiently and effectively to expand the GPU network. Supermicro's single-root GPU system allows multiple NVIDIA GPUs to communicate efficiently to minimize latency and maximize throughput as measured by the NCCL P2PBandwidthTest.
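For readers curious how images-per-second figures like the ones Liang cites are produced, here is a minimal, hypothetical throughput probe in Python with PyTorch. It is not Supermicro's benchmark harness (their numbers came from training runs; this times inference-style forward passes only, so results are not comparable), but it shows the basic shape of such a measurement:

    # Rough ResNet-50 forward-pass throughput probe (illustrative only).
    import time
    import torch
    import torchvision.models as models

    def images_per_second(batch_size=64, iters=50):
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        model = models.resnet50().to(device).eval()
        batch = torch.randn(batch_size, 3, 224, 224, device=device)
        with torch.no_grad():
            for _ in range(5):                  # warm-up passes
                model(batch)
            if device.type == "cuda":
                torch.cuda.synchronize()        # drain queued GPU work before timing
            start = time.time()
            for _ in range(iters):
                model(batch)
            if device.type == "cuda":
                torch.cuda.synchronize()
        return batch_size * iters / (time.time() - start)

    print(f"{images_per_second():.0f} images/sec")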

10 hot storage companies to watch

Innovations such as software-defined storage ( #SDS), #hyperconverged infrastructures ( #HCI), and #blockchain have investors flocking to enterprise storage startups, and this market shows no signs of slowing down. Collectively, the 10 startups featured in this roundup have raised more than $736 million in VC funding. This total is even more impressive when you factor in two startups not included in that calculation. One of them is entirely self-funded, while the other has a unique business model and an equally unique source of non-VC funding: an ICO, or Initial Coin Offering.
According to research firm IDC, the worldwide enterprise storage market expanded by 13.7 percent year over year to just under $13.6 billion in the fourth quarter of 2017. Other research firms believe the growth rate will accelerate in the near term. Research and Markets, for instance, predicts that one fast-growing segment of the overall enterprise storage market, cloud storage, will expand to become a $92.5 billion market by 2022.
The startups in this roundup are positioning themselves to ride that growth wave, offering everything from hybrid cloud storage with smart-tiered caching, to platforms intended to deliver 100 percent uptime with zero data loss after a recovery event, to a system designed in a way that could enable it to become the Airbnb of distributed storage.

Zones Named One of 2018 Tech Elite Solution Providers by CRN

AUBURN, Wash.--(BUSINESS WIRE)-- @Zones, Inc., announced today that #CRN®, a brand of @The Channel Company, has named Zones to its 2018 #TechElite 250 list. This annual list honors an exclusive group of North American IT solution providers that have earned the highest number of advanced technical certifications from leading technology suppliers, scaled to their company size. To compile the annual list, The Channel Company’s research group and CRN editors work together to identify the most customer-beneficial technical certifications in the North American IT channel. Companies that have obtained these elite designations—which enable solution providers to deliver premium products, services, and customer support—are then selected from a pool of online applicants. Zones, in conjunction with Zones nfrastructure, the Services Company of Zones, provides top-tier products coupled with world-class services. This winning combination allows them to deliver advanced IT solutions for enterprises looking to complete their digital transformation quickly and seamlessly. The implementation of a robust enablement program provides a comprehensive platform for training and certifications to ensure continuous learning and technical advancement. Zones has invested in its people, processes, and tools to consistently produce exceptional outcomes for all their customers. “Being named to CRN’s Tech Elite 250 list is no small feat,” said Bob Skelley, CEO of The Channel Company. “These companies have distinguished themselves with multiple, top-level IT certifications, specializations and partner program designations from the industry’s most prestigious technology providers. Their pursuit of deep expertise and broader skill sets in a wide range of technologies and IT practices demonstrates an impressive commitment to elevating their businesses—and to providing the best possible customer experience.” “From the beginning, we have been committed to maintaining only the highest level of technical knowledge, capabilities, and service,” said Firoz Lalji, Chairman, President, and CEO at Zones. “We are focused on ensuring we employ and develop the best solutions and services experts in the industry by placing great emphasis on the right partners and specializations. We are thankful that CRN has recognized these efforts.” Coverage of the Tech Elite 250 will be featured in the April issue of CRN and online at www.crn.com/techelite250. About Zones Zones, Inc., a $2 billion global IT solutions provider, delivers products and services to help companies make a complete digital transformation – from desktop to data center – to the cloud and beyond. Our expertise is reflected in our solutions environments: Workplace Modernization, Network Optimization, Data Center Transformation and Security Fortification. We leverage a deep portfolio from top-tier technology brands to enable scalability, security, and efficient IT deployments, utilizing the highest levels of technical certification from key partners, including Cisco Gold, Dell EMC Titanium, HPE Platinum, among others. Zones was named to the 2018 CRN Managed Services Provider 500 list as a member of the Elite 150, and is an eleven-time recipient of the Microsoft Excellence in Operations award.
Headquartered in Auburn, Wash., Zones operates in more than 80 countries, with multiple regional and executive offices, several distribution facilities, IT partners, and a number of service affiliates around the world. As a certified Minority Business Enterprise (MBE), and holder of an elite Corporate Plus Certification, we’ve proven our ability to support many of the world’s largest corporations. For additional information, visit the company’s website at www.zones.com or call (800) 408-9663. Follow Zones on Twitter @Zones and on LinkedIn at Zones, Inc. About The Channel Company The Channel Company enables breakthrough IT channel performance with our dominant media, engaging events, expert consulting and education, and innovative marketing services and platforms. As the channel catalyst, we connect and empower technology suppliers, solution providers and end users. Backed by more than 30 years of unequaled channel experience, we draw from our deep knowledge to envision innovative new solutions for ever-evolving challenges in the technology marketplace. www.thechannelco.com

Azure SQL Data Warehouse now generally available in all Azure regions worldwide

@Microsoft is excited to announce the general availability of #Azure #SQL Data Warehouse in three additional regions: Japan West, Australia East, and India West. These additional locations bring the product's worldwide availability to 33 regions, more than any other major #cloud data warehouse provider. With general availability, you can now provision SQL Data Warehouse across 33 regions with a financially backed SLA of 99.9 percent availability.
SQL Data Warehouse is a high-performance, secure, and compliant SQL analytics platform offering you a SQL-based view across data and a fast, fully managed, petabyte-scale cloud solution. It is elastic, enabling you to provision in minutes and scale up to 60 times larger in seconds. It comes standard with Geo-Backups, which enable geo-resiliency of your data and allow your data warehouse to be restored to any region in Azure in the case of a region-wide failure.
Azure regions provide multiple, physically separated and isolated availability zones connected through low latency, high throughput, and highly redundant networking. Starting today, customers can leverage these advanced features across 33 regions.
Begin today and experience the speed, scale, elasticity, security, and ease of use of a cloud-based data warehouse for yourself. You can see this blog post for more info on the capabilities and features of SQL Data Warehouse.
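To give a feel for that elasticity, here is a hypothetical sketch (server, database, and credential names below are invented placeholders, not Microsoft documentation) of resizing a warehouse from Python via pyodbc; in SQL Data Warehouse, scaling is a single T-SQL statement that changes the service objective:

    # Hypothetical example: resize an Azure SQL Data Warehouse by changing
    # its service objective. All server/database/credential values are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver.database.windows.net;"    # placeholder server name
        "DATABASE=master;UID=admin_user;PWD=...",  # scale operations run against master
        autocommit=True,                           # ALTER DATABASE cannot run in a transaction
    )
    # Scale the warehouse 'mydw' up to the DW1000 service objective.
    conn.execute("ALTER DATABASE mydw MODIFY (SERVICE_OBJECTIVE = 'DW1000')")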

Cisco emboldens its disaggregation strategy

The notion of disaggregation — separating the operating system and applications from the underlying hardware — has always been a conundrum for Cisco. In a nutshell, why would the company risk losing all of the millions of dollars in development and the key networking features tied up in current Cisco hardware and software packages? But in the new world of all-things software in which Cisco plans to be king, the disaggregation strategy is gaining momentum.

This week the company took things a step further, announcing a variety of disaggregation moves that enterprise and service provider customers could find interesting.
For the enterprise, Cisco bolstered its IOS XE software with Cisco Validated Design (CVD) Configuration Management Templates for IOS XE, which automate Cisco reference design deployment through the use of declarative configurations, the company said.
“The templates provided in this repository let users provision portions of various CVDs. One such example is leveraging popular open source configuration management tools to drive declarative configuration, reducing ‘fat finger’ mistakes that can be made from transcribing documentation into device configuration,” Cisco said.
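To give a flavor of what declarative, template-driven configuration looks like in practice, here is a generic, hypothetical sketch in Python with Jinja2 (not code from Cisco's repository; every hostname, interface, and address below is invented): the operator edits structured data, and the rendered CLI configuration falls out of the template, removing the "fat finger" transcription step.

    # Minimal sketch of declarative, template-driven device configuration.
    from jinja2 import Template

    CONFIG_TEMPLATE = Template("""\
    hostname {{ hostname }}
    interface {{ uplink }}
     description uplink to {{ peer }}
     ip address {{ ip }} {{ mask }}
     no shutdown
    """)

    # The declarative "source of truth": edit data, not live CLI sessions.
    desired_state = {
        "hostname": "branch-rtr-01",          # hypothetical values throughout
        "uplink": "GigabitEthernet0/0/0",
        "peer": "core-sw-01",
        "ip": "10.0.12.1",
        "mask": "255.255.255.252",
    }

    print(CONFIG_TEMPLATE.render(**desired_state))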
IOS XE is Cisco's Linux-based operating system, which the company says can run in multivendor virtualized environments.

Apollo Is Considering IPO of Cloud-Hosting Firm Rackspace

@Apollo Global Management LLC is considering an initial public offering of cloud-computing company @Rackspace Hosting Inc., people with knowledge of the matter said. The private equity firm has held early talks with advisers and may seek to begin the process before the end of the year, said the people, who asked not to be identified because they weren’t authorized to speak publicly. Rackspace could have an enterprise value of as much as $10 billion in a U.S. listing, one of the people said. A final decision hasn’t been made and Apollo could elect to keep the business, the people said.  Apollo took Rackspace private in 2016 for $4.3 billion, equivalent to a price of $32 a share, with peer Searchlight Capital Partners also making an investment. Rackspace completed its purchase of Datapipe Inc. last year, raising an additional $800 million in debt to finance the deal. Datapipe’s previous owner, Abry Partners, kept a minority position in the combined business. Rackspace has performed very well over the past two years, one of the people said. A representative for Apollo declined to comment, while Searchlight didn’t immediately respond to messages seeking comment. Rackspace didn’t have an immediate comment. Rackspace, based in San Antonio, Texas, offers services that help companies and public authorities shift their IT operations to cloud platforms, including Amazon Web Services, Google parent Alphabet Inc. and Microsoft Corp., according to the Rackspace website.

Micron: You Don’t Know How Big This Memory Stuff Is, Says Instinet

Fears of a downturn in memory chips such as DRAM and NAND are ill-founded, argues Romit Shah of Instinet, as the raging demand for parts in cloud computing and artificial intelligence should continue to boost sales for Micron, as well as for its equipment suppliers, Lam Research, Applied Materials, and KLA-Tencor.
@Instinet analyst @RomitShah today reiterates Buy ratings on @Micron Technology (MU), and the chip equipment makers that help it, @Applied Materials (AMAT) and @Lam Research (LRCX), writing that investors still underestimate how strong the trend is in #DRAM and #NAND flash chips, which have become the “choke point” of all computing.

Cloud computing companies are gobbling up both NAND and DRAM to speed up their efforts, as memory has become the critical part, writes Shah:
DRAM and NAND storage have become the choke point in system level performance across multiple applications; cloud vendors, for example, are boosting memory content to speed up performance. These cloud companies are very sophisticated about hardware architecture. Vendors are spending tremendous amounts of capital to reduce wait times in servers. This means maximizing the amount of memory around the processor and greater use of NAND flash.
Chip supply is exacerbating the situation, writes Shah, as indicated by remarks from Micron CEO @SanjayMehrotra:
According to commentary by Lam mgmt., 3x the capex is required to achieve the same level of DRAM bit growth today as in 2012; 5x the capex is needed in NAND. Along these lines, although DRAM WFE spend is estimated to come in at $13bn in CY18 (flat vs. CY07), that level of investment would only generate 20% bit growth vs. 90% bit growth a decade ago. Similarly, NAND WFE is estimated at $15bn in CY18, up 2.5x vs. $6bn in CY07; however, that level of investment would only yield 45% bit growth vs. 125% bit growth 10 years ago. The factor worth noting is that the memory market is more diversified. Demand drivers for both DRAM and NAND are far more balanced today than they were in 2007. A decade ago, demand for PC DRAM was 7x server DRAM; demand for mobile NAND flash was 10x SSD NAND. In 2018, demand for server DRAM is 2x PC DRAM while demand for enterprise SSDs exceeds that of mobile NAND flash.
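Running the quick arithmetic on those figures (taken straight from the quote above) shows how much more capital each point of bit growth now consumes:

    # WFE dollars per percentage point of annual bit growth, per the quote.
    dram = {"CY07": (13e9, 90), "CY18": (13e9, 20)}   # ($ WFE spend, % bit growth)
    nand = {"CY07": (6e9, 125), "CY18": (15e9, 45)}

    for name, series in (("DRAM", dram), ("NAND", nand)):
        for year, (wfe, growth) in series.items():
            print(f"{name} {year}: ${wfe / growth / 1e6:,.0f}M per point of bit growth")
    # DRAM: ~$144M (CY07) vs. ~$650M (CY18), roughly 4.5x more capital-intensive.
    # NAND: ~$48M (CY07) vs. ~$333M (CY18), roughly 7x more capital-intensive.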
Best of all, the high demand and tight supply are translating into much higher prices for the chips, he notes, including DRAM but also other chips such as the "graphics processing units,” or GPUs, sold by Nvidia (NVDA).

NVIDIA to Unleash Deep Learning in Hyperscale Datacenters

@NVIDIA CEO unveils Volta-based GV100 for workstations, new inferencing software, technologies providing a 10x boost for deep learning, a self-driving car simulator, and more. Millions of servers powering the world’s hyperscale data centers are about to get a lot smarter. NVIDIA CEO @JensenHuang Tuesday announced new technologies and partnerships that promise to slash the cost of delivering deep learning-powered services. Speaking at the kickoff of the company’s ninth annual GPU Technology Conference, Huang described a “Cambrian Explosion” of technologies driven by GPU-powered deep learning that are bringing support for new capabilities that go far beyond accelerating images and video. “In the future, starting with this generation, starting with today, we can now accelerate voice, speech, natural language understanding and recommender systems as well as images and video,” Huang, clad in his trademark leather jacket, told an audience of 8,500 technologists, business leaders, scientists, analysts and press gathered at the San Jose Convention Center. Over the course of a two-and-a-half hour keynote, Huang also unveiled a series of advances to NVIDIA’s deep learning computing platform that deliver a 10x performance boost on deep learning workloads from just six months ago; launched the GV100, transforming workstations with 118.5 TFLOPS of deep learning performance; and introduced DRIVE Constellation to run self-driving car systems for billions of simulated miles.

Power to the Pros

Huang’s keynote got off to a brisk start, with the launch of the new Quadro GV100. Based on Volta, the world’s most advanced GPU architecture, Quadro GV100 packs 7.4 TFLOPS double-precision, 14.8 TFLOPS single-precision and 118.5 TFLOPS deep learning performance, and is equipped with 32GB of high-bandwidth memory capacity. GV100 sports a new interconnect called NVLink 2 that extends the programming and memory model out of our GPU to a second one. They essentially function as one GPU. These two combined have 10,000 CUDA cores, 236 teraflops of Tensor Cores, all used to revolutionize modern computer graphics, with 64GB of memory.

Deep Learning’s Swift Rise

The announcements come as deep learning gathers momentum. In less than a decade, the computing power of GPUs has grown 20x — representing growth of 1.7x per year, far outstripping Moore’s law, Huang said. “We are all in on deep learning, and this is the result,” Huang said. Drawn to that growing power, in just five years the number of GPU developers has risen 10x to 820,000. Downloads of CUDA, our parallel computing platform, have risen 5x to 8 million. “More data, more computing are compounding together into a double exponential for AI; that’s one of the reasons why it’s moving so fast,” Huang said.

Bringing Deep Learning Inferencing to Millions of Servers

The next step: putting deep learning to work on a massive scale. To meet this challenge, technology will have to address seven challenges: programmability, latency, accuracy, size, throughput, energy efficiency and rate of learning. Together, they form the acronym PLASTER.

Tuesday, March 27, 2018

Google Could Owe Oracle $8.8 Billion in Android Fight

@Google could owe @Oracle Corp. billions of dollars for using Oracle-owned @Java programming code in its @Android operating system on mobile devices, an appeals court said, as the years-long feud between the two software giants draws to a close. Google’s use of Java shortcuts to develop Android went too far and was a violation of Oracle’s copyrights, the U.S. Court of Appeals for the Federal Circuit ruled Tuesday. The case -- first filed in 2010 -- was remanded to a federal court in California to determine how much the Alphabet Inc. unit should pay. Oracle had been seeking $8.8 billion, though that number could grow. Google expressed disappointment and said it’s considering its next steps in the case. The dispute, which could have far-reaching implications for the entire software industry, has divided Silicon Valley for years between those who develop the code that makes software steps function and those who develop software programs and say their “fair use” of the code is an exception to copyright law. “It’s a momentous decision on the issue of fair use,” said lawyer Mark Schonfeld of Burns & Levinson in Boston, who has been following the case but isn’t involved in it. “It is very, very important for the software industry. I think it’s going to go to the Supreme Court because the Federal Circuit has made a very controversial decision.”

Computer Instructions

At issue are pre-written directions known as application program interfaces, or APIs, which can work across different types of devices and provide the instructions for things like connecting to the internet or accessing certain types of files. By using the APIs, programmers don’t have to write new code from scratch to implement every function in their software or change it for every type of device. “The Federal Circuit’s opinion upholds fundamental principles of copyright law and makes clear that Google violated the law,” Oracle General Counsel Dorian Daley said in a statement. “This decision protects creators and consumers from the unlawful abuse of their rights.” Google and its supporters contend that the ruling, if left to stand, would harm development of new software programs and lead to higher costs for consumers. “We are disappointed the court reversed the jury finding that Java is open and free for everyone,” Google said in a statement. “This type of ruling will make apps and online services more expensive for users.”
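For readers outside software, here is a rough illustration of the declaration-versus-implementation split at the heart of the case. It is sketched in Python purely for readability (the actual dispute concerned Java API declarations, and these class names are invented):

    # The "API declaration": names, signatures, and promised behavior that
    # calling programs rely on.
    class FileStore:
        def read(self, path: str) -> bytes:
            """Return the contents of the file at 'path'."""
            raise NotImplementedError

    # One vendor's implementation of those same declarations. A rival could
    # ship a different implementation, and programs written against FileStore
    # would still run. Reusing the declarations while writing a fresh
    # implementation is, loosely, what Google did with the Java APIs.
    class LocalFileStore(FileStore):
        def read(self, path: str) -> bytes:
            with open(path, "rb") as f:
                return f.read()

The ruling turns on whether copying those declarations, even with an independent implementation underneath, is "fair use."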

New Amulet Hotkey CoreStation Solutions Simplify Windows Migration and Enhance Compute Performance

For the first time, powerful @NVIDIA #Tesla #datacenter GPUs can be used in the industry-leading @DellEMC #PowerEdge FX architecture. The #AmuletHotkeyCoreStation #VFC640 GPU-accelerated blade server provides market-leading density and efficiency. It is a powerful and agile platform that can handle a broad range of workloads: support up to 128 graphics-accelerated virtual desktops to simplify migration to Windows 10, or use it with compute frameworks or #HPC applications to enhance compute performance. Unique architecture: combines @Intel® Xeon® Scalable processors and NVIDIA® Tesla® data center GPUs with the efficiency of blades, and the cost benefits of rack-based systems, to handle today's evolving workloads. Market-leading density: drives efficiency and performance with up to 128 GPU-accelerated virtual desktops across four dual-socket servers in only 2U of rack space. Powerful performance: handles graphics and GPU-accelerated compute for virtual workstation, high-performance computing, big data analytics, machine learning and deep learning. NEW YORK, March 27, 2018 /PRNewswire/ - Amulet Hotkey Inc., a leader in design, manufacturing and system integration for remote physical and virtual workstation solutions, today announced a significant addition to the CoreStation blade family based on the Dell EMC PowerEdge FX architecture. For the first time, powerful NVIDIA Tesla data center GPUs can be used in the industry leading Dell EMC PowerEdge FX architecture. The Amulet Hotkey CoreStation VFC640 GPU accelerated blade server uses a unique PCIe expansion module and GPU card developed in collaboration with Dell EMC and NVIDIA product engineering teams. Two Amulet Hotkey DXF-EXP-V modules support up to eight NVIDIA Tesla P6 GPUs in the FX2 server chassis, while maintaining the benefits of the FC640 blades and FX2 architecture. The result is a powerful and agile platform that can handle a broad range of workloads. "The Amulet Hotkey CoreStation VFC640 expands upon the market leading blade workstation portfolio designed to meet the graphics and compute performance needs of professionals while driving customers' IT transformations," said Andrew Jackson, president, Amulet Hotkey Inc. "Our unique solution enables up to eight powerful GPUs, with up to four dual-socket servers in a PowerEdge FX2s chassis. Delivering this capability in an industry standard 2U rackmount form factor demonstrates our commitment to use innovative design and manufacturing to meet enterprise IT needs for a truly flexible and scalable computing architecture." "NVIDIA Tesla data center GPUs and NVIDIA virtual GPU software dramatically increase the graphics and compute performance of blade servers to handle evolving workflows with AI, photorealism and unlimited mobility," said Bob Pette, vice president, professional visualization, NVIDIA. "Customers working with large datasets, complex models or graphics-intensive productivity applications found in Windows 10 can leverage the benefits of high-density blades integrating multiple Tesla GPUs, NVIDIA virtualization software, Dell EMC OEM Solutions and Amulet Hotkey technology." "With the increasing adoption of Artificial Intelligence and Machine Learning, customers are looking for enhanced compute solutions that optimize these workloads," said Ron Pugh, vice president and general manager for the Americas, Dell EMC OEM Solutions. "We are excited to work with Amulet Hotkey to develop game changing solutions for our customers that accelerate IT transformation."

Enterprise Data Protection for FlashBlade with Cohesity

@PureStorage #FlashBlade is the industry's first all-flash, #scaleout file storage, purpose-built for high performance and immediate access to mission-critical file and object data. FlashBlade provides unparalleled performance across a broad range of environments. @Cohesity, the leader in hyperconverged secondary storage, makes your data work for you by consolidating secondary storage silos onto a web-scale data platform that spans both private and public clouds. Join us on April 19th to learn how FlashBlade accelerates data intelligence across modern analytics, machine learning and rich metadata databases, with simplified data protection, high performance, application-consistent backups and file-level recoverability from Cohesity. We look forward to you joining us for the live event! Sponsor: Cohesity & Pure Storage

The hyper-converged network is becoming more common in HCI

It is commonly said of hyper-converged infrastructure systems that they virtualize storage, compute and networking resources. That last one, networking, is not the same as the other two in an HCI implementation. In an HCI system, it is easy to take physical compute and storage resources and turn them into logical pools of virtual resources to be delivered to the virtual machines (VMs) at the end-user level. In each of those two cases, you are virtualizing just a few things -- CPU cycles and RAM for compute, or data memory space for storage. In contrast, trying to implement a hyper-converged network means virtualizing the physical switches and controllers and dividing up among the VMs the bandwidth, throughput, access and other features they provide. Virtualizing how the end user accesses network resources, however, is not the same thing as making the resource itself virtual. Because the network is vital to all the other virtualized resources, it is arguably the most important aspect of an HCI system. If the network function doesn't operate properly, the VMs can experience everything from slowdowns to simply not being available to the end user. In other words, no network means no VMs to work on. One way a hyper-converged network is used by nearly all HCI platforms is by virtualizing the network interface card (NIC) in the VMs. This is also known as I/O virtualization, because it virtualizes the network input and output of the VM. Both @Microsoft's #HyperV hypervisor and VMware's ESX support this, and Microsoft added NIC teaming to Windows Server 2016, allowing virtual NICs to be combined into a pooled virtual network adapter to reduce connectivity and throughput issues for any one VM.

Virtualizing networks

To be truly on the same level as the compute and storage resources in a hyper-converged platform, a hyper-converged network would need to take over the function of the physical switches and controllers. It would have to turn them into a logical pool from which any set amount could be assigned to each VM. That would require much of the network hardware to be contained in the HCI appliance or replicated via software in the appliance. VMware is one of the vendors with the most fully formed hyper-converged network technology, via its NSX product. While NSX was originally developed to work solely in a VMware HCI system, last year the company released NSX-T, which can bring network virtualization to operations running the Kernel-based Virtual Machine hypervisor or ones based on OpenStack. Nutanix is working on adding networking to its HCI platform through its Acropolis infrastructure stack. These networking features include microsegmentation services and orchestration through open APIs.

Software-defined networking and HCI

Software-defined networking (SDN) is exactly what it sounds like: software that virtualizes the management of typical network hardware, such as switches, firewalls and routers. SDN is often based on the OpenFlow protocol, although other technologies, such as OpenDaylight and even OpenStack, can be used. One of the advantages of SDN is that it allows a network admin to manage the flow of data through a network from a central control console, instead of having to adjust the individual hardware components separately.
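Here is a conceptual sketch of that pooling idea (invented code, not any hypervisor's real API): physical NIC capacity becomes one logical pool, and each VM's virtual NIC reserves a slice of it, which is essentially what NIC teaming plus per-VM virtual NICs accomplish.

    # Conceptual sketch: carve per-VM virtual NICs out of one logical pool
    # of physical NIC bandwidth. Names and numbers are hypothetical.
    class VirtualNicPool:
        def __init__(self, physical_nics_gbps):
            self.capacity = sum(physical_nics_gbps)   # e.g. two teamed 25 GbE ports -> 50
            self.allocations = {}                     # VM name -> reserved Gbps

        def attach_vnic(self, vm, gbps):
            free = self.capacity - sum(self.allocations.values())
            if gbps > free:
                raise RuntimeError(f"only {free} Gbps left in pool")
            self.allocations[vm] = gbps

    pool = VirtualNicPool([25, 25])      # hypothetical teamed adapters
    pool.attach_vnic("sql-vm", 10)
    pool.attach_vnic("vdi-vm", 5)
    print(pool.allocations)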

Pivot3 Jumps Into Hybrid Cloud Game, Moves HCI Stack to AWS

Offering (or at least talking about) a #hybridcloud service has become almost a requirement for hyperconverged infrastructure ( #HCI) software vendors. And now @Pivot3 has joined the party with its Cloud Edition on @Amazon Web Services (AWS). The platform allows customers to run the same HCI stack in AWS and in on-premises data centers. “A lot of the features customers like in our hyperconverged software platform running in on-premises environments around policy automation and workload prioritization, they can now take advantage of that in a cloud environment,” said Ben Bolles, VP of products at Pivot3.
Support for @Microsoft #Azure and @Google Cloud will come later this year, he said.
The primary use cases for this edition, which will be generally available in the second half of this year, include backup and disaster recovery. Using AWS eliminates the need to stand up a second data center or co-location facility.
Intelligent Cloud Engine
The new platform also uses something that Pivot3 calls its Intelligent Cloud Engine. It builds on the company’s policy-based quality of service (QoS) engine, which automates data placement and data protection tasks based on service level agreements and business priorities.
“The Intelligent Cloud Engine essentially is putting a name on all the capabilities we have for our policy-based management, and all the work we do around data protection and data management,” Bolles said.
The company plans to add an archive use case.
“We’ll have policies built around that so that our software will automatically figure out what data should reside on premises and what data should go up to the cloud,” Bolles said. “Being able to automate workloads’ movements based on the policies customers set within our software — that would be a main difference compared to VMware or others that might run on AWS or Azure.”
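As a rough illustration of what such policy-driven placement can look like, here is a toy Python sketch that tiers datasets between on-premises storage and a cloud archive based on business priority and last access time. The Dataset fields, priority labels and 90-day threshold are invented for this example; Pivot3's actual engine is proprietary and far more sophisticated.

```python
"""Toy placement policy in the spirit of policy-driven data management.

A hedged sketch only: the fields, priorities and thresholds below are
invented for illustration, not taken from any vendor's software.
"""
from dataclasses import dataclass


@dataclass
class Dataset:
    name: str
    days_since_access: int
    priority: str  # "mission-critical" | "business-critical" | "non-critical"


def place(ds: Dataset, archive_after_days: int = 90) -> str:
    # Keep anything mission-critical on premises regardless of age.
    if ds.priority == "mission-critical":
        return "on-premises"
    # Cold, lower-priority data becomes a candidate for the cloud tier.
    if ds.days_since_access > archive_after_days:
        return "cloud (archive)"
    return "on-premises"


if __name__ == "__main__":
    for ds in [
        Dataset("oltp-db", days_since_access=1, priority="mission-critical"),
        Dataset("q3-logs", days_since_access=200, priority="non-critical"),
        Dataset("crm", days_since_access=10, priority="business-critical"),
    ]:
        print(f"{ds.name:10s} -> {place(ds)}")
```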
HCI Stacks for Hybrid Cloud
Most leading HCI software vendors already have a hybrid cloud play. VMware Cloud on AWS launched about six months ago. Microsoft Azure Stack, which lets enterprises build a private-cloud version of the Azure public cloud in their own data centers, began shipping last September.
And Nutanix has been talking about its Xi public cloud service since last summer (even though it’s not slated to launch until later this year).
Is Pivot3 too late to the hybrid cloud game?
“It is definitely not too late for Pivot3,” said Mike Leone, a senior analyst at Enterprise Strategy Group. “The Pivot3 hybrid cloud approach is slightly different and more intelligent than many of its competitors.”
The difference, he said, is the company’s policy-based management, which can be used to optimize resource utilization. “And with resource utilization being the key cost component in the cloud, the more intelligent that resource utilization can be managed the better,” Leone said.
Smarter Hybrid Cloud?
Eric Slack, a senior analyst at Evaluator Group, agreed.
“Pivot3’s taking the right approach, and that’s what’s important in the long term,” Slack said. “VMware has a pretty good story. Nutanix has a head start on a lot of folks, too. But in terms of what Pivot3 is doing, they’ve been talking about automated policy management and their quality of service engine for three years. They are using that same intelligence, that same decision making [in the new hybrid cloud platform] so that cloud just becomes one more destination to put stuff.”
Slack said Pivot3’s technology is sound, and its policy engine differentiates it from other HCI software stacks.
“I don’t think they’ve gotten as much traction in the market on their quality of service engine,” Slack said. “But one of the things about that kind of technology: it lends itself to analytics. And that’s the kind of thing people are looking for in a cloud infrastructure.”

Hewlett Packard Enterprise (HPE) to Acquire Cape Networks

@Hewlett Packard Enterprise (NYSE: #HPE) today announced a definitive agreement to purchase privately held @CapeNetworks, based in Cape Town, South Africa, and San Francisco. Cape Networks will become a part of Aruba, a Hewlett Packard Enterprise company, and will expand @Aruba's Artificial Intelligence ( #AI) powered networking capabilities with a sensor-based service assurance solution that gives customers a simple, proactive, and network-agnostic tool for measuring and monitoring SaaS, application, and network services. The solution helps organizations deliver the best possible end-user experience by enabling IT to get ahead of service quality issues before they occur, accelerate time to resolution, and lower the cost of operations.

SonicWall Launches MSSP Program Focused On Enabling, Simplifying Security Operations

@SonicWall has debuted a new partner program that provides solution providers with blueprints showing them how to operate a managed security practice most effectively. The Milpitas, Calif.-based security vendor said its three-tier #MSSP program will be an add-on to SonicWall's flagship #SecureFirst initiative, providing financial, marketing and enablement support to partners with an emerging or established managed security practice, according to @StevePataky, SonicWall's SVP and chief revenue officer. "It's dangerous out there, so you need to have this level of enhanced service capability," said @ScottMcCrady, vice president of global strategic partnerships. [Related: SonicWall Unveils Major Product Updates Across Its Entire Portfolio] Blueprints provided as part of SonicWall's MSSP program will span the maturity lifecycle from health and performance monitoring to high-end analytics across multiple security technologies, Pataky said. Blueprints currently available as part of the program address health and performance monitoring (i.e., is the box up and running?), as well as advanced email protection capabilities against emerging threats. Many more blueprints are coming, Pataky said, including ones for audit and compliance, advanced integrated offerings, advanced ransomware protection capabilities across multiple threat vectors, and custom high-end solutions. Forty partners have already begun the vetting process to enter SonicWall's MSSP program since the Feb. 28 soft launch, McCrady said. SonicWall wants to ensure a solution provider is sincerely committed to delivering managed security services before granting it admission to the program, according to McCrady. The MSSP program will be segmented into three tiers, according to McCrady: Protect, Powered, and PoweredPlus. Protect is targeted at partners just getting into managed security services, McCrady said, and will provide lots of tools to get them up and running as quickly as possible and addressing the biggest and baddest threat vectors. Protect partners are expected to have personnel focused on managed security operations and sales, as well as the ability to move into security-related professional services, McCrady said. These partners will receive training from SonicWall for their field sales and sales engineers around the MSSP model and how to provide support through consulting-based engagements, according to McCrady. PoweredPlus partners, on the other hand, already have a foothold in high-end security and are often attempting to address challenges such as how to get more visibility and insight into encrypted traffic, McCrady said. These partners will typically have two SOCs for redundancy and failover, a large number of certified staff, and the ability to provide security services directly to customers, he said. Top-tier partners will have access to specialists with advanced capabilities as well as expertise around the different financial models needed to launch and support a multi-vector offering, McCrady said. Powered partners, meanwhile, are expected to deliver email, UTM and SSL security as a service, McCrady said, and will have extensive access to models, frameworks, and direct engagement from SonicWall specialists.
SonicWall plans to focus on helping MSSP partners move up the value stack, according to McCrady, and believes that solution providers could go from Protect to PoweredPlus in two to three years in a perfect world. The MSSP program is the second add-on initiative launched around the flagship SecureFirst program, Pataky said, and follows a program focused on partner professional services. SonicWall has embraced a 100 percent channel strategy since spinning off from Dell in November 2016. MSSPs are typically looking for broader and deeper support around tools, enablement and products as compared with a traditional VAR or MSP, according to Michael Crean, CEO of Woodbridge, Va.-based Solutions Granted. "It's important for them to address our needs," Crean said. "SonicWall has got its swagger back." These partners also want pricing that allows them to be competitive in the marketplace, Crean said, as well as education that allows them to speak knowledgeably to the changes that are taking place in the market. "The educational component is going to be critical," Crean said.

How to transition to NVMe SSDs from SAS/SATA

Getting ahead of the SSD interface performance curve requires foresight and careful planning. As NVMe technology evolves, new adopters are looking for ways to ease the changeover. Nonvolatile memory express has emerged as the open logical device interface specification that future generations of storage systems will be built upon, promising significant performance acceleration at economics that earlier-generation SAS/SATA systems cannot equal. In fact, compared with NVMe SSDs, SAS and SATA have become legacy interfaces. And IT experts are taking notice.
A growing number of enterprises are beginning to recognize that, if they truly want to benefit from SSD performance, they need to move toward NVMe technology with a direct attachment to the Peripheral Component Interconnect Express (PCIe) bus.
"This ... gets you as close as possible to the CPU," explained Ian McClarty, president of PhoenixNAP Global IT Services. The approach, he added, dramatically reduces latency and increases speeds through multiple PCIe lanes, thereby relieving the bottlenecks faced by SAS/SATA interface users.
Making the move
The transition to NVMe SSD technology will likely take place gradually over the next several years. Luckily for current SAS/SATA users, they don't have to face the immediate prospect of ripping and replacing their existing storage technology or attempting a forklift upgrade. "Instead, [new adopters] should look to implement future-proof architectures that will allow them to scale with new innovations in flash technology," said Jeff Baxter, chief evangelist of ONTAP at storage and data management software provider NetApp. "Some existing NVMe-ready solutions will allow customers to nondisruptively add new NVMe storage arrays directly into existing scale-out architectures and expand or move workloads as appropriate from current HDD or SAS/SATA-attached SSD arrays to NVMe SSD-based arrays over time," he added.
Before launching an NVMe transition, IT leaders should first determine whether their incumbent storage array is NVMe-capable. "In other words, is the array capable of seamlessly adding #NVMe SSDs without requiring a major reconfiguration or data migration?" asked @NarayanVenkat, @WesternDigital's vice president of #datacenter systems for @Tegile #flash storage arrays. "If so, then simply replacing a few of the SAS/SATA SSDs with NVMe provides the benefits of instant performance acceleration without any disruption to applications."
Unfortunately, Venkat added, most incumbent storage systems aren't capable of seamless NVMe SSD support, since the products weren't designed from the outset with NVMe SSD compatibility in mind.
Navigating compatibility pitfalls
If an incumbent storage array isn't fully NVMe-compatible, as is typically the case, the only practical solution is to upgrade to an NVMe-based storage system that has been designed from the outset to take advantage of the speed and performance of NVMe SSDs. With this approach, managers can easily migrate applications and data to their new NVMe-based storage systems. "Migration is simple for virtualized environments, like VMware, as administrators can move virtual machines, along with applications and data, with a few clicks," Venkat observed.
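As one hedged illustration of those "few clicks" expressed in code, the pyVmomi sketch below starts a Storage vMotion that relocates a VM's disks to a datastore backed by a new NVMe array. The vCenter host, credentials, and VM and datastore names are placeholder assumptions; certificate validation and error handling are omitted for brevity.

```python
"""Minimal pyVmomi sketch of the migration step described above:
kick off a Storage vMotion that moves a VM's disks to a datastore
backed by the new NVMe array. All names below are placeholders.
"""
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)
content = si.RetrieveContent()


def find(vimtype, name):
    """Look up a managed object by name using a container view."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()


vm = find(vim.VirtualMachine, "app-vm-01")         # VM to migrate
target = find(vim.Datastore, "nvme-datastore-01")  # NVMe-backed datastore

spec = vim.vm.RelocateSpec(datastore=target)       # storage-only relocation
task = vm.RelocateVM_Task(spec=spec)
print("Relocation task started:", task.info.key)

Disconnect(si)
```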