Global banking and financial services company @DeutscheBank initiated its coverage of @DellTechnologies' tracking stock with a 'buy' rating and a price target of $114 a share on Wednesday. The bank's analysts say @Dell is giving investors more exposure to @VMware. Shares of the Round Rock, Texas-based company were up $1.55 (2.13%) to $74.29 in regular trading on Wednesday. Dell's debt and business fundamentals are "challenged but not as dire as most assume," wrote Deutsche Bank analyst Karl Keirstead. Deutsche Bank's analysts see upside in Dell giving more exposure to VMware after it bought a majority stake in the virtualization leader through its blockbuster $67 billion acquisition of EMC. The higher price target and buy rating come as Dell Technologies is exploring two strategic options, one of which is merging with VMware. [Related: Dell EMC Stepping Up Global Dell EMC-VMware Sales Offensive] Dell has been driving tighter alignment with VMware to make it easier and more profitable for solution providers to sell combined Dell EMC-VMware offerings. Earlier this month Dell Technologies CEO Michael Dell confirmed that the $80 billion industry behemoth is currently evaluating an initial public offering or a "business combination" with VMware. "As part of our ongoing multi-year strategic planning, Dell Technologies is evaluating a number of potential business opportunities. As you can see from our results, we do this from a position of strength, with a desire to grow Dell Technologies and its businesses even faster and thrive in the very dynamic IT marketplace," said Dell in an 8-K filing with the Securities and Exchange Commission. Deutsche Bank's report comes one day before VMware reports its fiscal fourth quarter earnings on March 1, followed by Dell's on March 8.
TechNewSources is a one-stop shop for all the latest datacenter TechNews you can use.
Dell, EMC, Dell Technologies, Cisco,
Wednesday, February 28, 2018
Windows Server 2016 hyper-converged options are here
The latest edition of Windows Server from Microsoft includes options for implementing a hyper-converged infrastructure in a Microsoft-only IT operation. By adding hyper-converged infrastructure capabilities to Windows Server 2016, Microsoft makes it possible for organizations to implement a software-defined data center based on Windows Server and Microsoft-validated hardware.
@Microsoft 's Windows Server 2016 hyper-converged infrastructure ( #HCI ) story begins with the Windows Server Software Defined ( #WSSD ) program, a system for creating validated software-defined data center ( #SDDC) designs in conjunction with Microsoft partners, such as @Dell EMC, @Hewlett Packard Enterprise, @Fujitsu and @Lenovo. With an SDDC, virtualized, clustered servers function as consolidated resource pools.
The goal of the WSSD program is to ensure that customers deploy and operate their Windows Server SDDC implementations on validated hardware. To this end, the program provides guidance for designing reference architectures and performing SDDC-specific testing. The program also offers "experience guides" and automation scripts to deploy and manage HCI systems.
Some Microsoft partners already offer WSSD-approved systems that are configured and tuned for specific data center workloads. As of this writing, the WSSD program supports three categories of SDDC implementations: Converged Software-Defined Storage (SDS), HCI Standard and HCI Premium.
OpenStack’s Latest Release: All Hail to Queens
The @OpenStack community issued its #Queens software release today. The 17th version of the #opensource #cloud software includes some hefty updates, such as software-defined storage functionality, GPU compatibility, and tracking of container workloads. The OpenStack software is evolving fast in response to user needs. “Cloud started out as a fast way to get a virtual machine,” said Jonathan Bryce, executive director of the OpenStack Foundation. “Now, people expect OpenStack clouds to solve a lot more problems. The scope of cloud continues to move out and gobble up every possible use case.”
https://www.sdxcentral.com/articles/news/openstacks-latest-release-hail-queens/2018/02/
CRN Exclusive: NetApp CEO George Kurian On Flash Storage Momentum, Software-Defined Storage Moves, And The Growing Cisco Partnership
@GeorgeKurian On What's Behind @NetApp's Growth NetApp Feb. 14 reported that company revenue in its third fiscal quarter 2018 grew 8 percent year over year to reach $1.52 billion. That included product revenue growth of 17 percent, and an all-flash array annual revenue run rate of $2.0 billion, up 50 percent over last year. @NetApp has in the past couple of years become the only major storage vendor to consistently show significant growth, driven in large part by the flash storage market where it is a strong second to @DellEMC and rapidly climbing toward the No. 1 spot. In addition, NetApp is far outdistancing its rivals in the converged and hyper-converged market, especially with its strong and growing #FlexPod relationship with @Cisco. NetApp CEO @George Kurian recently spoke with @CRN to discuss NetApp's growth, especially in terms of its flash storage business and Cisco relationship. Here's what he said is behind the company's rapid rise.
VMware's Growth Spree Likely To Continue For Q4
@VMware (NYSE: #VMW ) is scheduled to announce its Q4 fiscal 2018 earnings on March 1. The virtualization and cloud computing provider has demonstrated strength in its business through the fiscal year, with fast-growing segments such as #networkvirtualization, #hybridcloud, #hyperconverged software and #vSAN driving growth. VMware's license business and services business have both grown at a steady pace this year - a trend consistent with recent years.
Head-To-Head Contrast: Dell Technologies (DVMT) vs. Fusion-IO (FIO)
@DellTechnologies (NYSE: #DVMT) and @FusionIO (NYSE: #FIO) are both technology companies, but which is the superior stock? We will compare the two companies based on the strength of their earnings, institutional ownership, dividends, risk, valuation, profitability and analyst recommendations.
VMware unveils new edge computing solutions to bolster enterprise IoT efforts
@VMware continued its enterprise Internet of Things ( #IoT) push at Mobile World Congress Wednesday, unveiling a host of new edge computing solutions that address use cases in asset management and surveillance. The solutions are being developed through industry partnerships with firms such as @AxisCommunications, @Wipro Limited, and more, a press release said. By focusing on edge computing, the VMware solutions could make it easier for business users to securely and effectively harness the data created by IoT deployments. On the VMware side of things, products such as VMware #vSAN hyper-converged infrastructure (HCI) software, VMware #vSphere, and VMware #PulseIoTCenter will all play a role in the new solutions, the release said. At this point, the solutions will address specific needs in industrial and manufacturing sectors, as well as within certain brick-and-mortar retail stores.
Nutanix (NTNX) to Report Q2 Earnings: What's in the Cards?
@Nutanix Inc. #NTNX is scheduled to report second-quarter fiscal 2018 earnings on Mar 1. The question lingering on investors' minds is whether this enterprise #cloud platform provider will be able to deliver a positive surprise. We expect the results to benefit from an enriched product portfolio, which is expanding the company's client base. Notably, the company beat the Zacks Consensus Estimate for earnings in the trailing four quarters, with an average positive surprise of 18.5%. In the previous quarter, the company delivered a positive earnings surprise of 38.5%. Its top-line performance has also been consistent. The company beat the Zacks Consensus Estimate for revenues in each of the trailing four quarters. Nutanix expects second-quarter fiscal 2018 revenues in the range of $280-$285 million. Non-GAAP gross margins are projected to be in the range of 62.5% to 63.5%. Nutanix forecasts a non-GAAP net loss of between 20 cents and 22 cents per share for the quarter.
http://m.nasdaq.com/article/nutanix-ntnx-to-report-q2-earnings-whats-in-the-cards-cm927740
Tuesday, February 27, 2018
Rigetti Rolls Out Latest Forest Quantum Developer Environment
@RigettiComputing last week reported upgrading its #Forest developer environment for #quantumcomputing. Forest provides developers with access to Rigetti’s cloud-based quantum simulator, the quantum #virtualmachine ( #QVM ), as well as access to its quantum hardware systems. According to the company, the latest version has improved tools for debugging and optimizing quantum programs. Founded in 2013 and based in Berkeley, Calif., Rigetti bills itself as “a full-stack quantum computing company.” It drew attention in December with claims of being the first to solve an unsupervised machine learning problem on a gate model quantum computer. “We did this by connecting one of our recent superconducting quantum processors, a 19-qubit system, to our software platform, Forest. In the ten weeks since then, researchers have already used Forest to train neural networks, program benchmarking games, and simulate nuclear physics,” wrote Rigetti software developer Will Zeng in a blog announcing the Forest 1.3 upgrade. Key improvements in version 1.3, reported Zeng, include:
· Access to the compiler through a dedicated API that allows users to experiment with compiling programs to different hardware architectures.
· Testing programs on an improved QVM that more accurately mimics actual quantum hardware, accelerating development time. “We’ve released preconfigured noise models based on the behavior of our QPU,” wrote Zeng.
· Post-testing, it is now easier to port programs to the QPU, which now supports the .run pyQuil command.
· Improved readout fidelity via a toolset that compensates for readout errors in the QPU; Rigetti says this “can dramatically improve” the performance of programs.
“In any given quantum program, researchers make multiple calls to our API. In these circumstances, network latency can become an issue and slow down overall execution time. In Forest 1.3 we provide a 2x speedup in cloud job execution on our QPUs. This makes running hybrid classical/quantum algorithms with many API calls much faster. We’ve also made a new two-qubit gate instruction available on our 19Q-Acorn quantum processor: CPHASE(theta). This introduces our first parameterized two-qubit gate, offering more powerful and flexible entanglement between qubits,” according to Rigetti.
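To make the Forest workflow described above more concrete, here is a minimal sketch in Python using pyQuil. It assumes the pyQuil 1.x interface that shipped alongside Forest at the time (QVMConnection, Program, and the gates module); exact module paths and signatures may differ across Forest releases, so treat the names as illustrative rather than definitive.

```python
from math import pi

# Assumes the pyQuil 1.x API from the Forest 1.x era; later releases changed
# these entry points, so treat the imports and signatures as illustrative.
from pyquil.quil import Program
from pyquil.gates import H, CPHASE
from pyquil.api import QVMConnection

qvm = QVMConnection()  # connection to Rigetti's cloud-hosted quantum virtual machine

# Small two-qubit program using the parameterized CPHASE(theta) gate mentioned above.
p = Program()
p.inst(H(0))                  # put qubit 0 into superposition
p.inst(H(1))                  # put qubit 1 into superposition
p.inst(CPHASE(pi / 4, 0, 1))  # parameterized two-qubit entangling gate

# Run 100 shots on the QVM and read both qubits out.
results = qvm.run_and_measure(p, qubits=[0, 1], trials=100)
print(results)  # one [bit0, bit1] sample per trial
```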
https://www.hpcwire.com/2018/02/27/rigetti-rolls-latest-forest-quantum-developer-environment/
Link to Rigetti blog:
https://medium.com/rigetti/forest-1-3-upgraded-developer-tools-improved-stability-and-faster-execution-561b8b44c875
Dell EMC focuses on precision medicine, connected health, security at HIMSS18
Two years ago, @Dell and @EMC made headlines for completing one of the largest tech mergers in history, the $67 billion acquisition by Dell of EMC’s #cloudcomputing and #bigdata business. Now, at HIMSS18, @DellEMC will be demonstrating how companies can make digital transformation a reality in such arenas as accountable care and the EHR. “Our focus is under four initiatives: #healthIT, #precisionmedicine, #connectedhealth and #securitytransformation,” said @RobertaKatz, director of Global Industries. “We will have demos based on our four pillars, based on the healthcare cloud. We’ll be demonstrating #virtualreality.” Many clients are still spending a lot of time on their clinical applications, Katz said. Hospitals can have 500 or 600 applications and want to know how to streamline those and become more efficient. “The way they do it is around infrastructure,” she said. This can be a multi-cloud environment for a business application or clinical application. Dell EMC is also working on projects around precision medicine, genomics and managing risk. This includes working with independent software vendors on clinical applications, EHRs and working with clinical partners to build apps for healthcare exchanges. “As part of our value proposition, we have a significant set of healthcare partners,” Katz said. Dell will be in Booth 3613.
Dell EMC PowerEdge R940 review: A beast of a server
Sitting at the pinnacle of @DellEMC 's #PowerEdge #14G server family, the R940 delivers a powerful @Intel Xeon Scalable package in a compact 3U rack chassis. It's the first 4-socket (4P) system in the 14G lineup and has the thermal capacity to handle every Xeon Scalable model right up to the high-end 28-core chips.
Targeting a broad range of mission critical workloads, it offers an impressive expansion potential with support for 6TB of DDR4 memory, 12 high-performance NVDIMM-N modules and 13 PCI-Express (PCIe) expansion slots. Storage looks good too, as the chassis has room for 24 SFF drives and is geared up for PCIe NVMe SSDs.
It builds hugely on its predecessor, the 13G R930, offering 50% more NVMe drives plus a third more I/O capability and, being 1U shorter, a higher rack density. Dell EMC claims the R940 offers 27% more cores - although it must have based this on an R930 with four 22-core E7-8800 v4 Xeons and not with the 24-core models, which only yield a 17% increase.
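The core-count comparison is easy to check from the CPU specs quoted above; this quick sketch simply reproduces the review's arithmetic for fully populated four-socket configurations and shows where the 27% and 17% figures come from.

```python
# Four-socket core counts implied by the review's comparison.
r940_max = 4 * 28  # R940 with top-end 28-core Xeon Scalable parts
r930_22c = 4 * 22  # R930 with four 22-core E7-8800 v4 Xeons
r930_24c = 4 * 24  # R930 with the 24-core E7-8800 v4 models

print(f"vs 22-core R930: {r940_max / r930_22c - 1:.0%}")  # ~27%, Dell EMC's figure
print(f"vs 24-core R930: {r940_max / r930_24c - 1:.0%}")  # ~17%
```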
http://www.itpro.co.uk/server-storage/30651/dell-emc-poweredge-r940-review-a-beast-of-a-server
Global Markets for Software-defined Storage to Double to $18.2 Billion
WELLESLEY, Mass., Feb. 26, 2018 (GLOBE NEWSWIRE) -- Growing demand for flexible, application-specific scalability is driving growth in the market for #softwaredefinedstorage products and services. In a new report, Software-defined Storage: Global Markets to 2022, BCC Research estimates the global market will grow from $8.5 billion in 2017 to almost $18.2 billion by 2022, indicating a compound annual growth rate (CAGR) of 16.4%. The storage industry is one of the most crucial and fastest growing segments in the high technology industry. Storage market leaders such as @IBM and @DellEMC forecast that storage data will increase exponentially in size, volume and complexity. For example, @EMC projects that data will grow to 40 zettabytes by 2020, a 50-fold increase from the amount of data stored at the beginning of 2010. Storage volume is increasing in IT environments due to digital transformation, mobile computing and more complex cloud architectures, growing by more than 40% annually. Organizations are increasingly turning to software-defined storage as the primary economic alternative to simply adding more storage racks to keep pace with the burgeoning volume of data. Software-defined architectures are becoming viable across major IT platforms, paving the way for a software-defined data center that leverages the combination of improved operational control and cost efficiency. Thus, software-defined storage has become a key element of the hyper-converged infrastructure that is at the core of the software-defined architecture. It presents great value on its own and enables greater transformation of the IT infrastructure.
Research Highlights:
· Fully 80% of the world’s data has been created in the past two years. Much of that data is unstructured, meaning it does not reside in traditional row-and-column databases and is typically more storage intensive.
· Companies in the North American market are learning how to adapt to regulatory requirements to maximize the value of software-defined storage deployments.
· Software-defined storage is expected to surpass traditional storage in terms of market size beginning in 2020.
“On average, 40% of corporations are increasing their storage capacity by 11% to 30% annually while another 20% of firms are upping capacity by 41% to over 100% year over year,” said Michael Sullivan, Senior Editor and report author, BCC Research. “Significantly, the storage increases are occurring irrespective of whether or not business revenues or employee headcount remains the same or rises. Legacy solutions become cost prohibitive at these high growth rates, and the combination of data analytics, regulatory requirements and other business critical mandates make it impossible for IT organizations to reduce the inevitable exponential expansion of storage infrastructure.”
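As a quick sanity check on the report's headline numbers, the implied compound annual growth rate can be recomputed directly from the 2017 and 2022 market-size figures; the short sketch below just applies the standard CAGR formula to the values quoted in the release.

```python
# CAGR = (end / start) ** (1 / years) - 1, using the BCC Research figures above.
start, end, years = 8.5, 18.2, 5  # $ billions, 2017 -> 2022
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~16.4%, matching the reported CAGR
```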
Lenovo Dishes On 3D XPoint DIMMS, Apache Pass In ThinkSystem SD650
Can you imagine a 512GB memory stick? @Intel's #ApachePass DIMMs make that a reality in @Lenovo's new ThinkSystem #SD650 servers, which are the first servers to support the new devices. These new DIMMs will feature #3DXPoint memory addressed as normal system memory in a RAM slot, but they aren't on the market yet. The new DIMMs require specialized accommodations, but Lenovo said the SD650 will support them when the Cascade Lake Xeons, which will be drop-in compatible with the server, come to market next year. RAM is one of the most important components in a system, be it a desktop PC or a server, but the relationship between capacity and cost has long been a serious drawback. For instance, the densest memory sticks come with hefty price premiums. The largest single DDR4 stick you can buy is 128GB and carries an eye-watering $4,000 price tag. You can reach the same level of capacity for much less money if you spread it across multiple DIMMs, so that type of density isn't attractive from a cost standpoint.
http://www.tomshardware.com/news/intel-xd-xpoint-dimm-lenovo-thinksystem,36573.html
Don’t Lift and Shift Big Data Analytics, Lift and Optimize
Increasing volumes and varieties of data, combined with self-service data access, are overwhelming existing reporting tools and infrastructure. To scale for digital-era demands, organizations are adopting new cloud #Hadoop -based #datalake architectures and next-generation OLAP semantic layers. As early adopters make this move, they are learning from expensive “lift and shift” failures. “Lift and optimize” approaches are proving to be far more cost-effective and successful.
#OLAP is not dead
Contrary to the rumors of OLAP’s impending death when in-memory analytics entered the market years ago, OLAP is still not dead. The rules never changed. Exponential growth in data sources, varieties and types did rapidly surpass what traditional OLAP solutions were designed to handle. Thus, new OLAP-on-Hadoop technologies such as Apache Kylin, AtScale, Kyvos Insights, and other similar solutions were invented for the big data analytics world. Today OLAP is in high demand. Last year, the CEO of a large system implementation firm in the United States remarked that the real unicorn was not a data scientist: the firm received over 300 data scientist resumes for one open role but could not find any OLAP talent with classic Kimball-style dimensional modeling skills. Surprisingly, the unicorn was someone with OLAP design skills.
Assess and optimize for the cloud
There are many benefits for both the business and IT in using new OLAP-on-Hadoop solutions. Speed, scale, simplicity and saving money on cloud bills are usually cited as the top reasons for migration from legacy on-premises OLAP offerings. OLAP-on-Hadoop solutions make big data analytics easy and familiar. The business can use them within mainstream self-service BI tools such as Excel, SAS, Tableau, Qlik, and TIBCO Spotfire that they already know and love. Without a user-friendly, governed semantic layer that can intelligently cache and pre-aggregate, self-service big data analytics would be overwhelming, frustrating, and expensive. Keep in mind that cloud analytics can be billed by compute or query scan usage.
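To illustrate the pre-aggregation idea behind an OLAP semantic layer, here is a small, hypothetical Python/pandas sketch (the table, column names, and values are invented for illustration): it rolls raw event rows up into a compact aggregate keyed by the dimensions a semantic layer would expose, so downstream queries scan the aggregate rather than the raw data; this is the same trade-off OLAP-on-Hadoop engines such as Apache Kylin or AtScale apply at far larger scale, which matters when cloud analytics is billed by query scan.

```python
import pandas as pd

# Hypothetical raw fact table: one row per sale event. In practice this would
# live in a data lake and hold billions of rows.
raw = pd.DataFrame({
    "region":  ["east", "east", "west", "west", "west"],
    "product": ["a", "b", "a", "a", "b"],
    "month":   ["2018-01", "2018-01", "2018-01", "2018-02", "2018-02"],
    "revenue": [100.0, 250.0, 80.0, 120.0, 300.0],
})

# Pre-aggregate once along the dimensions the semantic layer exposes.
# This compact "cube" is what BI tools query instead of the raw rows.
cube = (
    raw.groupby(["region", "product", "month"], as_index=False)
       .agg(revenue=("revenue", "sum"), orders=("revenue", "count"))
)

# A dashboard query ("revenue by region for January") now scans the small
# aggregate, not the raw event data, which keeps query-scan billing and
# latency under control in the cloud.
january = (
    cube[cube["month"] == "2018-01"]
    .groupby("region", as_index=False)["revenue"].sum()
)
print(january)
```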
Dell EMC targets another startup rival with trade secret lawsuits
Hopkinton-based @DellEMC, formerly @EMC Corp., has filed a series of lawsuits against former employees who now work for a Silicon Valley-based data storage competitor called @Rubrik Inc., continuing a pattern of aggressively enforcing its strict employee contracts in court, but these appear to be the first such lawsuits filed since the organization came under the @DellTechnologies umbrella. Dell EMC accused the three former employees...
Monday, February 26, 2018
JPMorgan Buys Rights for HQ From Michael Dell Partnership
@JPMorgan Chase & Co. has acquired the bulk of the air rights it needs to build its new headquarters tower at 270 Park Ave. in midtown Manhattan from a partnership that controls Grand Central Terminal’s unused space, a person with knowledge of the transaction said. The bank bought about 650,000 square feet of air rights from a partnership controlled by @MSDCapital, the investment firm for @Dell Inc. founder @MichaelDell and his family, and New York-based real estate company TF Cornerstone, said the person, who asked not to be identified because the person wasn’t authorized to speak about the deal. The person didn’t disclose the price paid. Todd Fogarty, a spokesman for MSD, declined to comment on the transaction, as did Jeremy Soffin, a TF Cornerstone spokesman. Andrew Gray, a JPMorgan spokesman, didn’t immediately respond to an email seeking comment. A call to Argent Ventures, which owned a minority stake in the air rights, wasn’t immediately returned. The air-rights sale was reported earlier Monday by Crain’s New York Business. JPMorgan last week announced plans to knock down its headquarters, the former Union Carbide Building, and replace it with a 2.5-million-square-foot (232,000-square-meter) skyscraper that could be as much as 500 feet taller than the current building. The project was made possible by a New York City rezoning of the east Midtown area last year, which allowed for bigger buildings as a way to compete with other financial capitals around the world that have built more modern office towers. Air rights can be transferred from one building to another, enabling a higher structure than zoning regulations would otherwise allow. The east Midtown rezoning requires the city to collect either $61.49 a square foot or 20 percent of the total cost of the air rights, whichever is higher, to go into an account to fund streetscape and transit improvements in the area. The rezoned area has about 3.6 million square feet of air rights from landmarks, including 1.2 million square feet linked to Grand Central Terminal.
How Michael Dell turned $1,000 into billions, starting from his college dorm room
@MichaelDell 's $21 billion net worth lands him among the 50 richest people in the world, according to @Bloomberg, in line with @Elon Musk. That wealth has afforded Dell, 53, the luxury of calling several properties home, like a record-setting $100.47 million Manhattan penthouse, a $40 million home at the Four Seasons in Boston and a residence on the Kona Coast of Hawaii with 7 bedrooms and 12 bathrooms. But when the founder and CEO of @DellTechnologies began his personal computing empire as a 19-year-old, it was from a much less glamorous residence: a dorm room.
"[I] started the company with $1,000 a week before I was taking my final exams as a freshman," Dell tells CNBC's Jon Fortt on his podcast, "Fortt Knox."
"Maybe that wasn't such a good idea, but when you're 19 years old, you haven't developed all the skills you need in terms of judgement and rational thinking," he adds.
Dell had always been interested in math and technology as a kid, but began to develop a substantial interest in computers in college at the University of Texas in Austin. As a hobby and side hustle, he started peddling computers he customized on campus with disk drives and extra memory.
"I was in this mode of buying computers and souping them up with more capability and then re-selling them," he says on NPR's "How I Built This." "It was just sort of a fun thing to do and a way to make some money."
Harry Cabluck | AP
In this Friday, Feb. 26, 1999, file photo, Michael Dell, foreground, sits in the dorm room at the University of Texas in Austin, Texas, where he launched his enterprise as a college freshman.
Then, he realized how big the business could be.
"I start exploring this whole computer thing further, and one of the things I noticed about the computer business was that it was very inefficient," Dell explains. "It took a really long time for the technology to get from the people that made it to the people that were buying it, and it was actually rather expensive and slow."
In his dorm, Dell says he was making from $50,000 to $80,000 a month by speeding that process up.
The business even soon caught the attention of a then 25-year-old Mark Cuban, now also a billionaire. At the time, Cuban was building his own business with computers, MicroSolutions, in Dallas.
"I remember driving down to pick up some hard drives that I was going to put into my customers' PCs," Cuban writes on his blogreferring to a trip to Dell's Austin business, then called PC's Limited.
"They had just moved from the owner's dorm room into a little office/warehouse space," he continues. "I was so impressed by this young kid [Dell] (I was a wise old 25 at the time), that I actually wrote a letter thanking him for the great job he was doing, and … I'm embarrassed to say now, I told him that if he kept up what he was doing, he was destined for far bigger and better things."
It seems Cuban was right: Dell Computer went public in 1988 with an $85 million market capitalization just four years after launching, and Dell himself later became the youngest CEO of a Fortune 500 company in 1992. The business went private in 2013.
For ambitious college students today, Dell says there are numerous opportunities available to create the next technology giant.
"We're in an incredibly exciting time in terms of artificial intelligence, deep learning, machine learning, unsupervised learning, the fifth generation cellular network, the digital transformation, the internet of things," and the list goes on, he tells Fortt. The possibilities are "tremendous."
"If you think the last 30 years have been exciting in technology, I think the next 30 years will make it look like child's play," he explains.
But in order to succeed, you'll need genuine curiosity.
"I think you have to do stuff that you are actually, incredibly passionate and excited about — and you know something about," Dell says. "The 'oppor-tuneurs' don't do as well as the entrepreneurs."
That means jumping on hot buzzwords isn't enough to create success. "I think you have to really believe in what you're doing," he explains.
For Dell, the best businesses start with something like this: "It excites me, it interests me, and I've got an idea."
Why overcrowding in tech stocks is still risky
Favoring large-cap technology stocks proved a winning strategy during the recent bout of volatility that sent the stock market into correction. But the risk of a crowded trade, one that could be ripe for reversal, in tech remains, according to analysts at RBC Capital Markets. The S&P 500 (SPX) has recovered more than half of the losses from the correction earlier this month, when it dropped more than 10%. The benchmark index is up 3.4% since the start of the year, trading near 2,765. The much-beloved tech sector — up nearly 9% year to date — accounted for more than 60% of those S&P gains. Over the past 12 months, the S&P 500 is up 16.4%, while the tech sector gains are more than double that at 35%.
In fact, mega-tech names including @Microsoft Corp. (MSFT), @Apple Inc. (AAPL), @Netflix Inc. (NFLX), @NVIDIA Corp. (NVDA) and @Alphabet Inc. (GOOG) contributed the lion’s share, according to Howard Silverblatt, senior index analyst at S&P Dow Jones Indexes.
@Amazon.com (AMZN), technically a consumer discretionary company but often lumped in with tech, gained nearly 30% since the start of the year and contributed nearly a fifth of all the gains on the S&P 500, according to Silverblatt. So-called FAANG stocks, including Amazon, all hit intraday records (though they did not close at all-time highs) on Friday, an event that many technical analysts see as a bullish signal.
Read: Big Tech shake-up — three stocks to buy and three to sell
Solid performance of the sector is one of the main reasons why these stocks continue to be popular with both sell-side analysts and buy-side managers.
According to Lori Calvasina, head of U.S. equity strategy at RBC Capital Markets, sell-side analysts still overwhelmingly recommend buying tech stocks.
“Net ‘buy’ ratings (the percent of technology sector buy ratings less the percent of S&P 500 buy ratings) remain extremely high relative to history, though not quite back to tech bubble peaks,” Calvasina wrote in a note to investors.
https://www.marketwatch.com/story/why-overcrowding-in-tech-stocks-is-still-risky-2018-02-26
This Teacher in Ghana Uses Chalkboard to Help Students Learn Microsoft Word
A teacher in Ghana has been teaching “Information and Communications Technology” on a chalkboard and has gained global attention for doing so. After his Facebook post, the school, which lacked the resources to buy computers, has been inundated with offers of donated laptops. The school in rural Ghana @OwuraKwadwo worked in could not afford a computer. But Kwadwo, who is originally from Kumasi, Ghana, wanted to help his students get familiar with the look and function of a computer. So Kwadwo, who has a background in visual arts, started by drawing a perfect copy of a blank @Microsoft Word document on a chalkboard. Kwadwo put an impressive amount of detail into the recreation of the blank Word document, including every feature you would find in it, along with an explanation of what each part of the document does so that students can really understand how it works. If students can see what MS Word looks like now, they will be prepared for when they have the opportunity to use it on a real computer, Kwadwo says.
ICT as important as maths education
The teacher’s work is impressive on a technical level, but it is also an effective way to help students grasp the basics of Microsoft Word and computers in general. In the 21st century, understanding how to use technology is as important as math, science, or history in terms of education. But many students in the world lack the resources to access and use technology on a regular basis. That is why Kwadwo is willing to put in this level of work to replicate a Word document. “I do it to make my students understand what I‘m teaching,” he said. “At least to give a picture of what they will see assuming they were behind a computer.”
Industrial Networking Solutions Market Worth 23.84 Billion USD by 2022
According to a new market research report "Industrial Networking Solutions Market by Technology (SDWAN, WLAN, & IIoT), Service (Network Planning & Optimization, & Integration), Application (Predictive Maintenance, & Emergency & Incident Management), Vertical, and Region - Global Forecast to 2022", published by MarketsandMarkets™, the market size is expected to grow from USD 9.18 Billion in 2017 to USD 23.84 Billion by 2022, at a Compound Annual Growth Rate (CAGR) of 21.0% during the forecast period (2017-2022).
Some of the major vendors in the Industrial Networking Solutions Market include @Cisco (US), @Dell EMC (US), @Juniper Networks (US), @Sierra Wireless (Canada), @Aruba Networks (US), @Veryx Technologies (US), @Moxa (US), @Belden (US), and @Rockwell Automation (US).
Pure Storage (PSTG) to Report Q4 Earnings: What's in Store?
@PureStorage Inc. PSTG is scheduled to report fourth-quarter fiscal 2018 earnings on Mar 1. Notably, the company beat the Zacks Consensus Estimate for earnings in the trailing four quarters, with an average positive surprise of 49.67%. Last quarter, the company delivered a positive earnings surprise of 66.7%, reporting a non-GAAP loss of 1 cent per share, which was narrower than the Zacks Consensus Estimate of a loss of 3 cents. The figure was also much lower than the year-ago loss of 10 cents per share. Total revenues were $277.7 million, up 41% year over year and ahead of the Zacks Consensus Estimate of $272 million. Revenues surpassed the upper end of the guided range. The company's strong product portfolio, with the likes of the FlashArray and FlashBlade business segments, bolstered the top line. Additionally, the company's data platform for cloud is gaining traction.
http://m.nasdaq.com/article/pure-storage-pstg-to-report-q4-earnings-whats-in-store-cm926542
Micron Announces Leading-Edge, Mobile 3D NAND Solutions for Flagship Smartphones
BARCELONA, Spain, Feb. 26, 2018 (GLOBE NEWSWIRE) -- @Micron Technology, Inc. (Nasdaq:MU) announced today three new 64-layer, second-generation #3DNAND storage products, which support the high-speed Universal Flash Storage (UFS) 2.1 standard. Micron's new mobile 3D NAND products are available in 256GB, 128GB and 64GB capacities. These new mobile solutions are based on Micron's industry-leading triple level cell (TLC) 3D NAND technology, empowering smartphone makers to enhance the user experience with next-generation mobile features such as artificial intelligence (AI), virtual reality and facial recognition. The emergence of AI in flagship phones is driving the need for more advanced storage solutions that enable faster and more efficient access to data. Analyst firm Gartner predicts that by 2022, 80 percent of all smartphones will have on-device AI capabilities, increasing the requirement to process and store more data locally. Additionally, storage capacity has continued to increase significantly as smartphones have become the go-to devices for photography and multimedia sharing, with up to 256GB in flagship phones today and projections for that to grow to a terabyte by 2021. Micron's new 64-layer TLC 3D NAND storage solutions address these needs by utilizing a mobile-optimized architecture to deliver consistent high performance and low latency, while offering more capacity in a smaller space. "Memory plays an increasingly critical role in delivering the bold new features we have all come to expect from our smartphones," said Gino Skulick, Micron vice president of marketing, Mobile Business Unit. "Micron uniquely delivers both mobile DRAM and 3D NAND, and our leading-edge designs continue to provide the performance demanded by the most advanced smartphones."
64-Layer TLC 3D NAND: Powering the Future Mobile Landscape
The new mobile 3D NAND products pack more storage cells into a smaller die area, and by utilizing Micron's CMOS under Array (CuA), they deliver a best-in-class die size. Micron's unique approach places all the flash memory layers on top of the logic array, maximizing the use of space in the smartphone design. Micron's second-generation TLC 3D NAND for mobile offers several competitive technical advantages, including the new features described below:
· Micron's mobile-optimized architecture delivers consistent high performance and low latency for an enhanced user experience, while minimizing power consumption through the use of an efficient peak power management system.
· Micron's new 64-layer TLC 3D NAND products are 50 percent faster than previous-generation TLC 3D NAND.
· Micron's 64-layer 3D NAND technology doubles the storage density of previous-generation TLC 3D NAND while maintaining the same package size.
· The UFS 2.1 G3-2L interface specification provides compelling performance for mobile applications, delivering up to 200 percent higher bandwidth versus e.MMC 5.1 and providing simultaneous read and write capabilities. This underpins the speed of data access needed when capturing bursts of high-resolution photos or recording 4K video to storage.
The new products are based on a 32GB die, which measures 59.341mm2 — the industry's smallest 32GB TLC 3D NAND die on the market.
Metaswitch Inks Comprehensive Global Reseller Agreement with HPE
LOS ALTOS, Calif. and MOBILE WORLD CONGRESS, BARCELONA, Feb. 26, 2018 (GLOBE NEWSWIRE) -- Cloud native communications software leader @Metaswitch today announced a global resale agreement through @Hewlett Packard Enterprise ( #HPE ). In this agreement, Metaswitch’s core network software products, including the Clearwater IMS core and the #Perimeta session border controller ( #SBC ), will be integrated with HPE’s own #virtualnetworkfunctions and management platforms to deliver end-to-end virtualized #VoLTE, #VoWiFi and advanced communications services. This reseller relationship builds on the previous announcement of the integration of Metaswitch’s Clearwater core with HPE’s infrastructure platforms and NFV Director orchestration platform. The integration accelerates fixed and mobile network operators’ deployments of advanced services including VoLTE and VoWiFi on private, public or hybrid cloud infrastructures.
Sunday, February 25, 2018
Software-Defined Storage Design Considerations: 5 Key Tips
#Softwaredefinedstorage ( #SDS) decouples storage intelligence from the underlying storage devices. The environment orchestrates multiple storage devices into a software storage management layer that operates above the physical storage layer. By moving intelligence up the stack, customers can buy commodity hardware that supports SDS policy-driven workload processing, load balancing, dedupe, replication, snapshots, and backup. Instead of purchasing expensive proprietary NAS or SAN, SDS runs on commodity hardware and standard operating systems. This is true as far as it goes. However, some SDS vendors – especially the software-only sellers – claim that the hardware doesn’t matter. But when it comes to software-defined storage design, hardware choices are critical. It’s true that software-defined storage users can use commodity hardware and avoid expensive SAN or NAS with built-in storage intelligence. But software-defined storage users still need to integrate SDS with hardware, and design the physical infrastructure so that it optimizes the software-defined storage layer.
Optimize Software-Defined Architecture
Storage IO paths are complex and pass through multiple stages, and pathing issues easily compromise quality of service (QoS) and performance. When admins add software-defined storage on top of this already-complicated architecture, complexity increases and performance and QoS may suffer even more. SDS users can avoid pathing issues by carefully integrating their software-defined storage with the virtual data center and its physical infrastructure. For example, coordinate the SDS design with the SDN manager to optimize virtual paths for packet routing and quality of service. However, many SDS admins spend the majority of their resources on integrating the software-defined data center, and don’t pay enough attention to integrating SDS with the physical layer. This inattention can seriously affect your SDS project.
Benefits of Integrating Software-Defined Storage with Hardware
The benefits of SDS design that properly integrates your hardware can be substantial.
· Optimize application workloads with policy-driven performance and RTO/RPO settings.
· SDS pools storage capacity from storage systems and provides it to applications running in the software-defined layer. Provisioning is simplified, and the policy-driven software manages SLAs and QoS for different applications.
· Admins centrally manage the logical storage pools instead of logging in and out of device-level data services. Centralized management enables IT to create security and policies across a single logical infrastructure instead of different interfaces at the device level.
· SDS simplifies scaling with dynamic provisioning. Admins can easily add servers without manual data distribution and load balancing.
· SDS is capable of strong security. Not all SDS offerings are developed equally, but these environments should at a minimum enable encryption, multi-tenant logical layers, and strong logging and monitoring with understandable reports at the management interface.
These are strong benefits. But none of this will work well for you if your software-defined storage design does not integrate smoothly with the underlying physical devices. Let’s talk about how you can ensure that it does.
Software-Defined Storage Considerations: Key Tips
These are the critical points to observe when you design and implement your software-defined storage:
1. Know your storage performance and capacity requirements.
2. Understand vendor and hardware compatibilities.
3. Design for current and future resilience.
4. Finalize the design.
5. Manage your software-defined storage.
SDS Design Tip 1: Know Your Storage Requirements
The first thing to do is to identify workloads and their respective applications, servers, and clients. Typical characteristics to look at include baseline IOPs and peak IOPs, throughput, depth of the queue waiting to be processed, latency, the portion of data that is actively changing (working IO), variation patterns, and how much throughput performance you need to maintain backup windows. You might also want to look at additional characteristics like the presence of encryption and the ratio of sequential to random data. Now determine their performance and storage capacity requirements, and how data protection will work. Identify application workloads by performance, capacity, and data protection needs, including RTO and RPO. Assign priority applications to fast SSD and disk. Do the same for secondary workloads. Identify characteristics such as where the backup stream is stored, where it is replicated, and the amount of storage capacity to preserve for backup workloads. Plan the storage capacity you need for backup and archive. Although long-term data retention is likely on tape or in the cloud, you might want to continuously back up some high-priority applications to the storage hardware underlying the SDS. Typical capacity factors include the existing size of data and its monthly, bi-yearly, or yearly growth rate, and its scheduled retention period. (A short capacity-planning sketch follows Tip 3 below.)
SDS Design Tip 2: Understand Vendor and Hardware Compatibilities
Once you have identified workloads, performance characteristics, and capacity needs, you can look for your SDS vendor and optimize the physical storage and networking. As you deploy new software and hardware, be careful to deploy recommended firmware and drivers. Keep in mind that as recently as 2015, most new storage hardware did not automatically work with SDS software. Consequences included SDS not recognizing new devices or media, disk sector and whole-disk failures, and difficulty troubleshooting integration problems using separate software and hardware management interfaces. Since then, SDS vendors who also build hardware have been developing for SDS stacks, and software-only developers are improving reference architectures. This raises questions about going with commodity hardware and avoiding vendor lock-in, but there has been improvement. Still, assume nothing when you research the storage hardware for your SDS system. Look deeply into compatibilities and best practices.
SDS Design Tip 3: Design for Current and Future Resilience
Once you have chosen your SDS and physical storage vendors, you will need to design your SDS environment. Design for resilience so your environment will cost-effectively scale to match your developing storage requirements. When you design for resilience, understand your current and future thresholds before you start. Know your uptime requirements and resilience objectives for your storage configurations, including cluster nodes, redundant network connections for uninterrupted traffic, and performance. Pay attention to scalability and growth characteristics of your SDS, storage media, processors, and RAM. And be sure to know how simple (or not) it will be to update firmware and drivers. The last thing you want is to put your whole admin team on alert and down your SDS system for hours every time you need to upgrade firmware.
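To make Tip 1's capacity math concrete, here is a minimal capacity-projection sketch in Python; the starting size, growth rate, horizon, and backup multiplier are invented placeholders rather than recommendations, so substitute the figures you gathered for your own workloads.

```python
# Hypothetical capacity projection for Tip 1. All inputs are placeholders:
# plug in the data size, growth rate, and retention headroom you measured.
current_tb     = 50.0   # primary data today, in TB
monthly_growth = 0.03   # 3% growth per month
months         = 36     # planning horizon
backup_factor  = 1.5    # headroom reserved for backup/retention copies

primary_tb = current_tb * (1 + monthly_growth) ** months
total_tb   = primary_tb * backup_factor
print(f"primary after {months} months: {primary_tb:.1f} TB")
print(f"with backup/retention headroom: {total_tb:.1f} TB")
```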
SDS Design Tip 4: Finalize the Design
Remember that your SDS runs on top of physical storage. Carefully design performance and capacity so the physical layer will properly support your software-defined storage environment. Decide how many storage pool clusters to deploy and understand provisioning and the process of adding new pools. There is no right or wrong way to do this: large storage pools allow for centralized management and fault troubleshooting, but failing storage media can ripple through the entire SDS. In that case, you may prefer to maintain smaller storage pools if the admin management time is not overwhelming. Build in redundancies by mirroring high-performance applications to different storage devices in the pool or across multiple pools, or use parity for low-write applications like archival. Both parity and mirroring protect data sets against device failure; also consider reserving pool capacity for redistributed data should an SSD or HDD fail. (A short usable-capacity comparison follows this tip.) Decide what storage media sizes you will need as well as RAM and CPU requirements. Enclosures enter the picture here: you want sufficient size to house your current drives, and easy scalability so you can simply add more enclosures and redistribute disk drives as needed. Performance and capacity are critical. Remember that calculating SSD performance has two dimensions: the number of SSDs that can be accessed simultaneously, and the capacity of each SSD, which determines how much data it can hold and serve at once. Also think about the types and sizes of your SSDs and HDDs. Storage tiering comes into play here, with many admins opting for SSDs for Tier 0 and SSDs plus fast disk for Tier 1. Processors and RAM are not necessarily as critical as media performance and capacity, but may be very important depending on the processing point for data traffic. If traffic is processed at the physical networking layer, such as on RDMA network cards, then your storage processors don't have to do all the heavy lifting. If the storage processors do process the traffic, then you'll need high-performance processors for high-priority traffic. RAM makes a difference if you are doing operations such as deduping on clustered nodes. Now that you have planned the underlying storage foundation, plan the virtual one just as carefully. Know how many virtual disks you plan on starting with and understand how to scale them over time. Remember that the more virtual disks you create in cluster nodes, the more time it will take to manage and load balance. The more intelligence your SDS offers for virtual disk management, the easier your job becomes. Also remember that the more virtual disks you are using, the more capacity you must reserve for write-back caches. The total size of virtual disks will of course depend on the actual size of your storage tiers. Factor in reserved capacity for dynamic data distribution and write-back caches. Virtual disks can be terabytes in size, but peak workloads should not spike over threshold performance or capacity settings. Smaller disk sizes may be more reliable when running large workloads. Understand before you buy what your disk sector sizes are. For example, if your virtual OSs and applications support native 4KB-sector disks, then you will get better write performance and some benefits to reliability and capacity. You can of course use 4K disks with 512-byte emulation for backwards compatibility, but this will not get you the same level of benefits as native 4KB sectors.
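As a rough illustration of the mirroring-versus-parity trade-off mentioned in Tip 4, the sketch below compares usable capacity for a hypothetical pool; the drive count and size are placeholders, and real SDS implementations add their own metadata and rebuild-reserve overhead on top of these idealized figures.

```python
# Idealized usable capacity under two protection schemes for a hypothetical pool.
drives   = 8      # placeholder drive count
drive_tb = 4.0    # placeholder drive size in TB
raw_tb   = drives * drive_tb

mirrored_tb = raw_tb / 2                      # two-way mirror: half the raw capacity
parity_tb   = raw_tb * (drives - 1) / drives  # single parity: one drive's worth reserved

print(f"raw: {raw_tb} TB")
print(f"two-way mirror usable: {mirrored_tb} TB")
print(f"single parity usable:  {parity_tb} TB")
```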
SDS Design Tip 5: Manage the Software and Physical Layers
Monitoring the software and hardware aspects of the SDS architecture means working with two different interfaces – more if you are separately managing multiple storage devices. SDS management abstracts the storage from its physical infrastructure and manages it at a logical layer. It administers policies for the hardware storage devices but does not manage the hardware itself. Common events like failed interconnects and disks can take down an entire software-defined storage environment. The disadvantage is that admins must separately troubleshoot and manage the physical layer instead of centrally managing the whole storage stack. This does not have to be a deal breaker for deploying SDS. SAN and NAS hardware are already highly complex, and attempting to change application pathways and configurations is time and resource intensive. SDS can certainly simplify these changes, which cuts down on the resources required for managing disk devices and SDS. Simplify hardware troubleshooting by using robust hardware architecture with redundant clusters, dynamic scaling, and self-healing mechanisms. Also look for reference architectures and storage devices that are purpose-built to run SDS stacks. Invest in additional software like drive monitoring utilities.
Software Defined Storage: Powerful but Not Simple
SDS architectures can save money and time. When admins successfully integrate storage software and hardware and customize them to their storage needs, the business may realize significant improvements and cost savings. No one is saying this is easy. One of the beauties of virtualized environments is that they run on a variety of storage, server, and networking components. Yet when admins want to combine these platforms into a single virtual SDS, they cannot simply expect them to work together seamlessly. This is why storage admins need to spend time and attention making sure their hardware is reliable and fault-tolerant and integrates well with their software-defined storage design. Only this level of integration allows SDS to properly manage stored data with flexible and dynamic policies. You can do it by creating a clear knowledge and implementation plan for SDS and hardware integration and giving your team enough time to optimize the infrastructure right from the start. Expending the right resources at the start of the project will help ensure that your SDS deployment will be everything you need, now and into the future.
CRN Exclusive: Cisco Earns Top Ratings For Response To Spectre, Meltdown Security Crisis
In the days after the #Spectre and #Meltdown bugs hit the news, Faisal Bhutto, vice president of corporate strategy at Computex, a solution provider in Houston, was struck by the level of confusion in the market. "The fact is there's no easy fix, and in the first couple of days there was a lot of noise in the market, a lot of confusion in the market," Bhutto said. But as far as Computex's Cisco customers were concerned, the bugs and the strategies for handling them were not a big deal thanks to the networking giant's focused, fast-moving and complete response to the chip vulnerabilities. "They did a bang-up job, and we were able to get customers aware of what's happening and calm their nerves," Bhutto said. [Read on: CRN's coverage of the Spectre and Meltdown Response] In a recent @CRN survey of solution providers, @Cisco earned top marks among hardware vendors for its response to the Spectre and Meltdown vulnerability discovery; it bested other industry titans like @IBM, @HPE, and @Dell EMC. CRN conducted an online poll of 190 members of the CRN Channel Intelligence Council, a panel of solution providers representing the broad channel ecosystem in North America. In the survey, solution providers ranked the vendor responses to the Spectre and Meltdown vulnerability issue on a scale of 1 to 5, with 5 being the top mark, or "excellent." Cisco scored a mean rating of 3.66 out of 5 in the survey, which asked solution providers currently involved with the vendor to score vendors' technical, patching, support service and partner communication in the wake of the Spectre and Meltdown bugs. IBM scored 3.58 in the survey, while HPE scored 3.57 and Dell EMC scored 3.57. Lenovo scored 3.55. HP Inc. scored 3.54 and Apple scored 3.47. NetApp scored 3.28. "The major manufacturers we work with – Cisco, HPE, Dell EMC – they're all world-class organizations, but the engagement we had with [Cisco's] data center team sets them apart," Bhutto said. "When it comes to field engagement, it's not just newsletters and blogs," he said. "That content is there, but at the end of the day, a phone call, a Spark message is more enriching, and that's what stands out." The Spectre and Meltdown bugs affected CPUs industry-wide, prompting Cisco to issue security advisories covering dozens of products, including UCS servers. Cisco's response, Bhutto said, is helped by the fact that its channel-focused engineers act as specialists rather than generalists. "We do a lot of UCS business, and you have engineers who are focused on a particular architecture," Bhutto said. "There's a concentrated effort of assigning partner engineers. That's helped, and you could see the results when Spectre and Meltdown happened. For all of our customers who are UCS clients, it was fairly easy." Close communication and focused engineering expertise have enabled Computex to calm customers and provide clear, comprehensive advice, Bhutto said. "We held education sessions with each large customer of ours," Bhutto said. "We give them a breakdown, and we assigned three of our experts on data center to put together a very concise summary and present it to customers, and that was a huge help. It's about continual education, making sure customers take care of patches and have a disciplined patching cycle.
This is a problem you don't solve overnight, but we had enough data to begin communicating with customers right away."
Software-Defined Storage Design Considerations: 5 Key Tips
#Softwaredefinedstorage ( #SDS) decouples storage intelligence from the underlying storage devices. The environment orchestrates multiple storage devices into a software storage management layer that operates above the physical storage layer. By moving intelligence up the stack, customers can buy commodity hardware that supports SDS policy-driven workload processing, load balancing, dedupe, replication, snapshots, and backup. Instead of purchasing expensive proprietary NAS or SAN, SDS runs on commodity hardware and standard operating systems. This is true as far as it goes. However, some SDS vendors – especially the software-only sellers – claim that the hardware doesn’t matter. But when it comes to software-defined storage design, hardware choices are critical. It’s true that software-defined storage users can use commodity hardware and avoid expensive SAN or NAS with built-in storage intelligence. But software-defined storage users still need to integrate SDS with hardware, and design the physical infrastructure, so it optimizes the software-defined storage layer. Optimize Software-Defined Architecture Storage IO paths are complex and pass through multiple stages, and pathing issues easily compromise quality of service (QoS) and performance. When admins add software-defined storage on top of this already-complicated architecture, complexity increases and performance and QoS may suffer even more. SDS users can avoid pathing issues by carefully integrating their software-defined storage with the virtual data center and its physical infrastructure. For example, coordinate the SDS design with the SDN manager to optimize virtual paths for packet routing and quality of service. However, many SDS admins spend the majority of their resources on integrating the software-defined data center, and don’t pay enough attention to integrating SDS with the physical layer. This inattention can seriously affect your SDS project. Benefits of Integrating Software-Defined Storage with Hardware The benefits of SDS design that properly integrates your hardware can be substantial. · Optimize application workloads with policy-driven performance and RTO/RPO settings. · SDS pools storage capacity from storage systems and provides them to applications running in the software-defined layer. Provisioning is simplified, and the policy-driven software manages SLAs and QoS for different applications. · Admins centrally manage the logical storage pools instead of logging in and out of device-level data services. Centralized management enables IT to create security and policies across a single logical infrastructure instead of different interfaces at the device level. · SDS simplifies scaling with dynamic provisioning. Admins can easily add servers without manual data distribution and load balancing. · SDS is capable of strong security. Not all SDS offerings are developed equally, but these environments should at a minimum enable encryption, multi-tenant logical layers, and strong logging and monitoring with understandable reports at the management interface. These are strong benefits. But none of this will work well for you if your software-defined storage design does not integrate smoothly with the underlying physical devices. Let’s talk about how you can ensure that it does. Software-Defined Storage Considerations: Key Tips These are the critical points to observe when you design and implement your software defined storage: 1. Know your storage performance and capacity requirements. 2. 
SDS Design Tip 1: Know Your Storage Requirements
The first thing to do is to identify workloads and their respective applications, servers, and clients. Typical characteristics to look at include baseline and peak IOPS, throughput, queue depth, latency, the portion of data that is actively changing (the working set), variation patterns, and how much throughput you need to maintain backup windows. You might also want to look at additional characteristics such as the presence of encryption and the ratio of sequential to random data.
Now determine the workloads' performance and storage capacity requirements, and how data protection will work. Identify application workloads by performance, capacity, and data protection needs, including RTO and RPO. Assign priority applications to fast SSD and disk, and do the same for secondary workloads. Identify characteristics such as where the backup stream is stored, where it is replicated, and the amount of storage capacity to reserve for backup workloads.
Plan the storage capacity you need for backup and archive. Although long-term data retention is likely on tape or in the cloud, you might want to continuously back up some high-priority applications to the storage hardware underlying the SDS. Typical capacity factors include the existing size of the data, its monthly, bi-yearly, or yearly growth rate, and its scheduled retention period. (A rough capacity-planning sketch appears after Tip 3 below.)
SDS Design Tip 2: Understand Vendor and Hardware Compatibilities
Once you have identified workloads, performance characteristics, and capacity needs, you can choose your SDS vendor and optimize the physical storage and networking. As you deploy new software and hardware, be careful to deploy the recommended firmware and drivers. Keep in mind that as recently as 2015, most new storage hardware did not automatically work with SDS software. Consequences included SDS not recognizing new devices or media, disk sector and whole-disk failures, and difficulty troubleshooting integration problems across separate software and hardware management interfaces. Since then, SDS vendors who also build hardware have been tuning their platforms for SDS stacks, and software-only developers have been improving their reference architectures. This raises questions about how far commodity hardware really frees you from vendor lock-in, but the situation has improved. Still, assume nothing when you research the storage hardware for your SDS system. Look deeply into compatibilities and best practices.
SDS Design Tip 3: Design for Current and Future Resilience
Once you have chosen your SDS and physical storage vendors, you will need to design your SDS environment. Design for resilience so your environment will cost-effectively scale to match your developing storage requirements. When you design for resilience, understand your current and future thresholds before you start. Know your uptime requirements and resilience objectives for your storage configurations, including cluster nodes, redundant network connections for uninterrupted traffic, and performance. Pay attention to the scalability and growth characteristics of your SDS, storage media, processors, and RAM. And be sure to know how simple (or not) it will be to update firmware and drivers. The last thing you want is to put your whole admin team on alert and take your SDS system down for hours every time you need to upgrade firmware.
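The capacity math behind Tip 1 is simple to sketch. The following is a minimal, hypothetical example; the growth rate, backup copy count, and reserve fraction are placeholders, not recommendations, and should be replaced with figures measured in your own environment.

# Hypothetical capacity-planning sketch for SDS sizing (Tip 1).
# All inputs are placeholders; substitute measured values from your environment.

def projected_capacity_tb(current_tb: float,
                          monthly_growth_rate: float,
                          months: int,
                          backup_copies: int = 2,
                          reserve_fraction: float = 0.25) -> float:
    """Estimate raw capacity needed after `months` of compound growth.

    backup_copies counts full copies kept on the SDS-backed hardware;
    reserve_fraction holds back headroom for rebuilds and write-back caches.
    """
    grown = current_tb * (1 + monthly_growth_rate) ** months
    with_backups = grown * (1 + backup_copies)
    return with_backups * (1 + reserve_fraction)

if __name__ == "__main__":
    # Example: 120 TB today, 3% monthly growth, 36-month planning horizon.
    need = projected_capacity_tb(current_tb=120, monthly_growth_rate=0.03, months=36)
    print(f"Plan for roughly {need:.0f} TB of raw capacity")

Even a rough projection like this makes the later pool and enclosure decisions far easier, because you know the ceiling you are designing toward.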
SDS Design Tip 4: Finalize the Design
Remember that your SDS runs on top of physical storage. Carefully design performance and capacity so the physical layer will properly support your software-defined storage environment.
Decide how many storage pool clusters to deploy, and understand provisioning and the process of adding new pools. There is no right or wrong way to do this: large storage pools allow for centralized management and fault troubleshooting, but failing storage media can ripple through the entire SDS. In that case, you may prefer to maintain smaller storage pools if the admin management time is not overwhelming. Build in redundancy by mirroring high-performance applications to different storage devices in the pool, or across multiple pools, or use parity for low-write applications such as archives. Both parity and mirroring let you copy data sets; also consider reserving pool capacity for redistributed data should an SSD or HDD fail. (A short sketch of the usable-capacity trade-off between mirroring and parity follows this tip.)
Decide what storage media sizes you will need, as well as RAM and CPU requirements. Enclosures enter the picture here: you want sufficient size to house your current drives, and easy scalability so you can simply add more enclosures and redistribute disk drives as needed. Performance and capacity are critical. Remember that calculating SSD performance has two dimensions: the number of SSDs that data can access simultaneously, and the capacity of those SSDs, which determines how much data can be processed at the same time. Also think about the types and sizes of your SSDs and HDDs. Storage tiering comes into play here, with many admins opting for SSDs for Tier 0 and SSDs plus fast disk for Tier 1.
Processors and RAM are not necessarily as critical as media performance and capacity, but they may be very important depending on where data traffic is processed. If it is processed at the physical networking layer, such as by RDMA network cards, then your storage processors don't have to do all the heavy lifting. If the storage processors do process the traffic, then you'll need high-performance processors for high-priority traffic. RAM makes a difference if you are doing operations such as deduplication on clustered nodes.
Now that you have planned the underlying storage foundation, plan the virtual one just as carefully. Know how many virtual disks you plan to start with, and understand how to scale them over time. Remember that the more virtual disks you create in cluster nodes, the more time it will take to manage and load balance them. The more intelligence your SDS offers for virtual disk management, the easier your job becomes. Also remember that the more virtual disks you are using, the more capacity you should reserve for write-back caches. The total size of the virtual disks will of course depend on the actual size of your storage tiers. Factor in reserved capacity for dynamic data distribution and write-back caches. Virtual disks can be terabytes in size, but peak workloads should not spike over threshold performance or capacity settings. Smaller disk sizes may be more reliable when running large workloads.
Understand before you buy what your disk sector sizes are. For example, if your guest operating systems and applications support native 4KB-sector disks, then you will get better write performance and some benefits to reliability and capacity. You can of course use 4K disks with 512-byte emulation for backwards compatibility, but this will not give you the same level of benefit as native 4KB disks.
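Here is a minimal sketch of the usable-capacity trade-off between mirroring and parity mentioned above. The overhead figures are deliberately simplified (two or three full copies for mirroring, one parity drive per group for parity, plus a flat write-back cache reserve); real SDS products apply their own resiliency math, so treat this only as a back-of-the-envelope aid.

# Hypothetical sketch of resiliency overhead vs. usable capacity (Tip 4).
# Efficiency figures are simplified illustrations, not any product's formula.

def usable_tb(raw_tb: float, scheme: str, drives_per_group: int = 8,
              cache_reserve_fraction: float = 0.10) -> float:
    """Rough usable capacity after resiliency overhead and cache reserve."""
    if scheme == "mirror-2":          # two full copies of the data
        efficiency = 1 / 2
    elif scheme == "mirror-3":        # three full copies
        efficiency = 1 / 3
    elif scheme == "parity":          # one parity drive per group (RAID-5-like)
        efficiency = (drives_per_group - 1) / drives_per_group
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return raw_tb * efficiency * (1 - cache_reserve_fraction)

if __name__ == "__main__":
    for scheme in ("mirror-2", "mirror-3", "parity"):
        print(scheme, f"{usable_tb(200, scheme):.1f} TB usable from 200 TB raw")

Running the numbers this way makes the design choice explicit: mirroring buys rebuild speed and write performance at a steep capacity cost, while parity preserves capacity for low-write workloads such as archives.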
SDS Design Tip 5: Manage the Software and Physical Layers
Monitoring the software and hardware aspects of the SDS architecture means working in two different interfaces – more if you are separately managing multiple storage devices. SDS management abstracts the storage from its physical infrastructure and manages it at a logical layer. It administers policies for the hardware storage devices but does not manage the hardware itself, and common events like failed interconnects and disks can take down an entire software-defined storage environment. The disadvantage is that admins must separately troubleshoot and manage the physical layer instead of centrally managing the whole storage stack.
This does not have to be a deal breaker for deploying SDS. SAN and NAS hardware are already highly complex, and attempting to change application pathways and configurations is time- and resource-intensive. SDS can certainly simplify these changes, which cuts down on the resources required for managing disk devices and SDS. Simplify hardware troubleshooting by using a robust hardware architecture with redundant clusters, dynamic scaling, and self-healing mechanisms. Also look for reference architectures and storage devices that are purpose-built to run SDS stacks, and invest in additional software such as drive monitoring utilities. (A minimal two-layer health-check sketch appears at the end of this article.)
Software Defined Storage: Powerful but Not Simple
SDS architectures can save money and time. When admins successfully integrate storage software and hardware and customize them to their storage needs, the business may realize significant improvements and cost savings.
No one is saying this is easy. One of the beauties of virtualized environments is that they run on a variety of storage, server, and networking components. Yet when admins want to combine these platforms into a single virtual SDS layer, they cannot simply expect everything to work together seamlessly. This is why storage admins need to spend the time and attention to make sure their hardware is reliable and fault-tolerant and integrates well with their software-defined storage design. Only this level of integration allows SDS to properly manage stored data with flexible and dynamic policies. You can do it by creating a clear knowledge and implementation plan for SDS and hardware integration and giving your team enough time to optimize the infrastructure right from the start. Expending the right resources at the start of the project will help ensure that your SDS deployment will be everything you need, now and into the future.
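To close out Tip 5, here is a minimal sketch of the two-layer monitoring idea: poll the SDS's logical pools and the physical drives separately, then merge the results into a single report. The collector functions are stand-ins for whatever management APIs or CLI tools your SDS and hardware actually expose; nothing here is a real product interface. (The list[...] annotations assume Python 3.9 or later.)

# Hypothetical two-layer health check: logical (SDS) plus physical (drives).
from dataclasses import dataclass

@dataclass
class Health:
    name: str
    layer: str      # "logical" or "physical"
    healthy: bool
    detail: str

def collect_logical_health() -> list[Health]:
    # Placeholder: query the SDS management layer for pool and virtual-disk status.
    return [Health("pool-01", "logical", True, "all virtual disks online")]

def collect_physical_health() -> list[Health]:
    # Placeholder: query SMART data or the enclosure controller for drive status.
    return [Health("node-02/ssd-7", "physical", False, "reallocated sector count rising")]

def report(items: list[Health]) -> None:
    # Unhealthy items sort first so warnings are seen before the all-clear lines.
    for item in sorted(items, key=lambda h: h.healthy):
        flag = "WARN" if not item.healthy else "OK  "
        print(f"[{flag}] {item.layer:<8} {item.name}: {item.detail}")

if __name__ == "__main__":
    report(collect_logical_health() + collect_physical_health())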
IT turns to VMware encryption for added VM security
Encryption features in recent #vSphere and #vSAN updates aim to balance security with simplicity and performance. For @VMware shops, there is little standing in the way of adoption.
Virtualization admins demand a balance between security, performance and usability in their data protection tools, and VMware aims to address that need with encryption capabilities in vSphere 6.5 and vSAN 6.6.
News Bits: Virtual Instruments, Dell EMC, Pivot3, Cisco, Rubrik, Flexxible IT, Commvault, & More
In this week’s News Bits we look at a number of small announcements, small in terms of the content, not the impact they have. @Virtual Instruments rebrands #LoadDynamiX as #WorkloadWisdom. @DellEMC announces availability of its Proven Professional Certifications. @Pivot3 expands partnerships. @Cisco expands its automation portfolio. @Rubrik approaches a $300 million global bookings run rate. @Flexxible IT launches a channel program with @HPE. @Commvault and #INFINIDAT partner on PB-scale data protection. Packet deploys @AMD #EPYC CPUs in its Bare Metal Cloud.
Virtual Instruments Rebrands Load DynamiX As WorkloadWisdom
Virtual Instruments has rebranded and enhanced its storage performance validation solution, Load DynamiX Enterprise, as WorkloadWisdom. The rebrand comes with new performance and features as well as deeper integration with VirtualWisdom, the company’s performance monitoring and analytics platform. Key features and benefits of WorkloadWisdom 6.0 include:
· Higher Speed Fabrics: Availability of a new 25GbE Series Workload Generator, complementing the existing 32GbFC and 40GbE Workload Generators
· Full storage technology coverage for the new Workload Generators: SAN: iSCSI, Fibre Channel; NAS: SMB 1.0/2.x/3.x, NFSv2/v3/v4.x; Object: HTTP/HTTPS, S3, Swift, Cinder
· Unprecedented Testing Scale: Dramatic increase in the number of workload generation ports supported, and a 5-10X performance improvement in results reporting over the previous version of the platform
· Redesigned User Interface: Significantly reduces the time needed to create complex test beds for iSCSI environments and the time needed to find older test results amongst thousands of result sets
· Expanded Scale-Out NAS / Cloud NAS: Introduction of production-to-the-lab automation for the SMB file protocol: SMB file environments can access production workload data using the Workload Data Importer; users can also obtain SMB production workload data from VirtualWisdom ProbeNAS, which monitors SMB and NFS environments; ability to deduplicate workloads for NAS deployments
· VirtualWisdom Integration: WorkloadWisdom fully integrates with VirtualWisdom to comprise Virtual Instruments’ new app-centric approach to IPM
· Broader Array Support for Workload Data Importer: The Workload Data Importer analysis policies now cover all major storage arrays from NetApp, Dell EMC, IBM, Oracle, HPE and Pure Storage – dramatically simplifying the workload creation process
Dell EMC Announces Availability Of Its Proven Professional Certifications
Dell EMC announced its Proven Professional program in an effort to increase knowledge, which will lead to increased experience in IT departments. As technology changes at a rapid pace, IT admins need the knowledge to address it. The new certifications include:
· Dell EMC Certified Master - Enterprise Architect is the highest-level, experience-based certification. It uses a board review process to validate that candidates have both the technical acumen and the holistic understanding of business drivers to deliver secure ITaaS solutions that further digital transformation. The Dell EMC Education Services website has the full details, eligibility criteria and application to be considered for a board review.
· Dell EMC Certified Expert Systems Administrator - Multi-Cloud recognizes expertise in administration of multi-cloud solutions.
This certification validates an administrator’s experience managing workloads across multi-cloud environments and their expertise in Dell EMC infrastructure, VMware vRealize Suite, and integration with popular public cloud providers such as Dell EMC and VMware Cloud Provider Partners, Virtustream, Microsoft Azure and Amazon Web Services.
· Dell EMC Certified Specialist Systems Administrator - Infrastructure Security validates that IT professionals have the knowledge and skills to take a security-first approach to administering Dell EMC infrastructure across the data center and cloud. Administrators will learn best practices in configuring and managing Dell EMC enterprise solutions in alignment with their organization’s overall security strategy.
· Dell EMC Certified Associate - Converged Systems and Hybrid Cloud Platforms establishes foundational knowledge in integrated infrastructure and cloud strategies. Business or technical professionals learn through real-life case studies and an interactive experience in which they play the role of a CIO transforming an organization from a traditional data center to a modernized IT environment.
Pivot3 Expands Partnerships
Pivot3 announced that it is expanding partnerships in order to bring more capabilities to its customers. The three companies it is expanding partnerships with are Zerto, VMware, and Redington Value. The expansion includes:
· Zerto will be supported across Pivot3’s entire range of solutions.
· VMware Horizon 7 is now available on Pivot3.
· Redington Value is now offering Pivot3 with Splunk to support enterprise IT security and analytics use cases.
Cisco Expands Its Automation Portfolio
Cisco is expanding its automation portfolio with the release of Cisco Crosswork Network Automation. This new software portfolio is aimed at helping service providers run their large-scale networks. The new software technologies include:
· Cisco Crosswork Change Automation: Automated operations application that enables large-scale change and closed-loop control
· Cisco Crosswork Health Insights: Smart sensors, smart alerts and smart remediation to monitor and optimize networks
· Cisco Crosswork Data Platform: Featuring both an open-source and a commercial-class data analytics platform
· Cisco Crosswork Network Insights: Cloud-based analytics solution for solving large-scale routing issues
· Cisco Crosswork Situation Manager: Machine learning-based event correlation with social operations
Rubrik Approaches $300 Million Global Bookings Run Rate
Rubrik announced that it is approaching a global bookings run rate of $300 million due to the pace of adoption it is seeing in large enterprises. This is a pretty good number for a company that emerged on the scene two years ago. It will be interesting to see where this goes moving forward.
Flexxible IT Launches Channel Program With HP
Flexxible IT launched a channel program with HP, Inc. featuring integrated workspace solutions targeting SMEs.
The program delivers:
· Citrix and Microsoft software, pre-integrated and ready to deploy
· Several options of Mobile Thin Clients or Desktop Thin Clients with keyboard and display for the client side
· The Flexxible|SmartWorkspaces solution, delivering automated and rapid deployment and single-pane-of-glass administration
Commvault & INFINIDAT Partner On PB-Scale Data Protection
Commvault is partnering with INFINIDAT to deliver a single data protection solution to large enterprises for primary and secondary storage use cases. The two companies say the partnership will help customers protect data on-premises and in the cloud, while increasing business agility, IT efficiency and reliability. The partnership will also allow INFINIDAT to sell and deliver Commvault software to its customers.
Packet Deploys AMD EPYC CPUs In Its Bare Metal Cloud
Packet, the bare metal cloud company, announced that it is adding AMD EPYC processors to its lineup. The system will be built on Dell EMC’s PowerEdge R6415 and includes 64GB of RAM, two 480GB SSDs, and the AMD EPYC 7401P CPU (with 24 physical cores). The new system is ideal for use cases such as scale-out SaaS platforms, Kubernetes-based cloud native applications, and enterprise workloads leveraging virtualization.