
Thursday, June 29, 2017

Veeam and Nutanix Accelerate Digital Transformation for the Always-On Enterprise

WASHINGTON--(BUSINESS WIRE)--#Nutanix .NEXT, Booth No. S3 – #Veeam Software, the innovative provider of solutions that deliver Availability for the Always-On Enterprise™, today announced an expanded partnership with Nutanix, a leader in enterprise cloud computing and #hyperconverged infrastructure, under which Veeam becomes the premier Availability solution provider for Nutanix virtualized environments. In addition, Veeam is adding support for Nutanix AHV to its flagship Veeam Availability Suite, allowing joint Nutanix and Veeam customers to benefit from an enterprise-class Availability solution that helps dramatically accelerate their Digital Transformation strategies, knowing that they can deliver on user expectations for a seamless digital experience.

http://www.businesswire.com/news/home/20170629005954/en/Veeam-Nutanix-Accelerate-Digital-Transformation-Always-On-Enterprise

Nutanix Xi Cloud Services revealed

#XiCloudServices brings #hyperconverged vendor #Nutanix into the #publiccloud game as it seeks to tie on-premises data to the cloud with the help of its #Calm orchestration software.

OXON HILL, Maryland -- Nutanix laid out the next phase of its cloud vision at Nutanix .NEXT 2017, introducing a set of public cloud services the hyper-converged vendor will host beginning in 2018.

Nutanix's strategy is to build its market-leading hyper-convergence technology into an enterprise cloud platform. At .NEXT 2017 Wednesday night, Nutanix previewed its Nutanix Xi Cloud Services and Calm cloud orchestration software.

Nutanix will manage Xi Cloud Services on its public cloud, beginning with a disaster recovery service in early 2018. The Nutanix Xi DR Service ties into the Nutanix Prism management interface, which lets customers set up protection for applications and data on whatever hardware they use with Prism, and on popular public clouds. The Nutanix hyper-converged software stack can run on its branded appliances or servers from Dell EMC, Lenovo, IBM, Cisco and Hewlett Packard Enterprise. Nutanix also supports Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform public clouds.

Nutanix executives say they will enable hybrid cloud services by tying on-premises data to a public cloud.

"How we meld the Xi Cloud Services with on-prem is basically beginning the journey of hybrid cloud services," Nutanix CEO Dheeraj Pandey said during the .NEXT opening keynote.

#NutanixCalm is another piece of its multi-cloud strategy.

Calm, which comes from #Nutanix 's acquisition of #Calmio a year ago, is an orchestration tool built into #Prism. It captures an application in a blueprint with all of the app's #containers, virtual machines, binaries and runbook orchestration. The blueprint can be provisioned, managed and scaled in #AWS or on Nutanix appliances on-premises, with support planned for #Azure and #GoogleCloudPlatform soon after the initial release. Calm is scheduled for availability near the end of 2017.

"We think the hybrid cloud needs a reboot; a fresh approach," said Greg Smith, Nutanix senior director of technical marketing. "One challenge of the hybrid cloud is that the different public and private clouds are built with different technology stacks, different tools and different management constructs. You wind up with separate IT silos. The public cloud does not meld with the private cloud."

http://searchconvergedinfrastructure.techtarget.com/news/450421672/Nutanix-Xi-Cloud-Services-revealed

Examining the OpenText-Documentum purchase six months later

Six months after purchasing #DellEMC 's enterprise content products, including #Documentum, content management company #OpenText is promising future innovation and integration. Features such as #AI functionality figure into keeping Documentum customers in the fold during the transition.

The OpenText-Documentum purchase sent waves through the enterprise content management (ECM) industry when the $1.62 billion deal was announced last September and finalized in January. Questions were raised about the future of Documentum and whether customers would need to migrate to OpenText products, but OpenText executives have been adamant that both product lines will continue to exist and be developed, including integration with OpenText's upcoming AI offering, Magellan.

Adam Howatson, OpenText CMO, explained: "That's part of the benefit we're articulating to customers: not only the active investment in the platform you're currently invested in, but also the integration with the broader OpenText product portfolio, from customer experience products to our business network offering and our discovery and analytics solutions, including OpenText Magellan, our new AI platform that we will be unveiling on July 11."

Howatson added that Magellan is expected to help uncover insights from both numerical data and unstructured content and will be accessible in repositories such as Documentum or InfoArchive, a document archiving platform OpenText received in the Dell EMC ECD acquisition.

http://searchcontentmanagement.techtarget.com/news/450421739/Examining-the-OpenText-Documentum-purchase-six-months-later

New VMware channel boss calls on partners to see past compute virtualisation

#VMware 's new UK channel boss has called on partners to move more quickly into VMware's new product set and offer a range of the portfolio beyond its traditional compute virtualisation heritage. Alanzo Blackstock joined VMware at the start of the year from #Cisco and has set the target of speeding up partner transition from VMware's old business into what it dubs phase two.

"As a business we need to scale and we need to scale faster. In the market we're still known as the leader in compute virtualisation but we need to do much, much more to get the new strategy and the new portfolio out to our customers. We can only do that through our partners," he said.

"What we see in our partner landscape is a bit of legacy in terms of partners that have been used to selling our legacy products and we're trying to get them to start to adapt their business.

"We are starting to see the change, but I think from a channel perspective we probably need to be a bit more aggressive in terms of getting the right mix of partners to align to our strategy."

To usher partners along in this process, VMware is introducing more attractive incentives to partners selling newer products, as it looks to increase awareness of its other virtualisation solutions in areas such as networking and storage.

Alongside this, VMware is seeking partners with proven cloud capabilities, as its partnership with Amazon Web Services moves closer to launching. A UK launch is pencilled in for early next year, Blackstock said.

To do this, Blackstock wants VMware to improve the contact it has with partners further down its pecking order, in the hope of uncovering hidden talent that can rapidly build business in these new areas.

He is also seeking AWS partners that are not yet working with VMware, ahead of the collaboration, and is looking to tap into the Dell Technologies partner base.

"We need to do more to get the new strategy and portfolio and we can only do that with our partners," he said.

"The biggest challenge for me is reaching and touching more partners.

"Currently our model is two tier with distribution and partners, but we need to find a better way of working with distribution and our own resources to touch more partners than we currently do.

"That's the biggest challenge that I have at the moment because my business isn't saying 'here's another 10 people'. I have to find a way of doing that with the resources I have."

https://www.channelweb.co.uk/crn-uk/news/3012906/new-vmware-channel-boss-calls-on-partners-to-see-past-compute-virtualisation

Top 5 Vendors in the Quantum Computing Market from 2017 to 2021: Technavio

The global #quantumcomputing market is forecast to grow at a CAGR of over 35% during the forecast period. The market will see significant growth from 2018 because of the extensive research being carried out to develop a full-fledged quantum computer and related hardware. In 2017, #IBM announced an initiative known as #IBMQ, which is focused on building the world's first universal quantum computer.

Competitive vendor landscape

According to the report, the global quantum computing market is dominated by #DWave Systems, as it is the only company to have commercialized quantum computers. However, ongoing investment in R&D by several market participants will lead to more competition once quantum computers are commercialized more broadly. "The only firm that has developed a quantum computer as of 2017 is D-Wave Systems. The company's most recent product offering is the D-Wave 2X that has over 2,000 qubits. Organizations such as #IBM, #Google, and #Microsoft have ramped up their research work for developing a #quantumcomputer within the next few years," says Raghu Raj Singh, a lead computing devices research analyst at #Technavio.

One trend in the global quantum computing market is that major vendors are forming strategic partnerships to develop quantum computers. For example, Google is working closely with NASA and USRA to understand quantum computing and develop universal quantum computers.

Top five quantum computing market vendors

D-Wave Systems: a quantum computing firm and the first company to launch a series of commercially viable quantum computers. It released its first quantum computer, the D-Wave One, in 2010.

Google: a multinational corporation, now a subsidiary of Alphabet, that specializes in Internet-related services and products including online advertising technologies, search engines, cloud computing, and software.

IBM: a conglomerate that operates in over 170 countries, manufacturing computer hardware and software and offering various computing services. Its hardware includes the POWER microprocessors, which have been used in gaming consoles such as the PlayStation 3, Nintendo's Wii U, and the Xbox 360.

Intel: designs and manufactures advanced, integrated digital technology platforms, each comprising a microprocessor and chipset that can be enhanced by additional hardware, software, and services. It sells these platforms primarily to OEMs, original design manufacturers (ODMs), and manufacturers of computing and communications equipment.

Microsoft: develops, licenses, markets, and supports software, services, and devices worldwide, designed to increase individual, team, and organizational productivity and efficiency. Its products are grouped into two categories: commercial licensing and others.
http://www.businesswire.com/news/home/20170629005889/en/Top-5-Vendors-Quantum-Computing-Market-2017

Graphene Could Help Develop 1,000-Times-Faster Computers Based On Spintronics

Using #grapheneribbons and #carbonnanotubes, engineers have created and tested a new type of computing logic structure based on #spintronics that could one day make computers 1,000 times faster and use 100 times less power.

With the help of graphene, engineers believe that they can develop a new type of digital circuit that uses electron spin to vastly improve processing speed and reduce power consumption.

Whereas electronics is limited by the fact that it uses only one property of electrons, their (negative) electric charge, spintronics adds the use of another property, electron spin.

Digital circuits and cascade of logic

Today’s digital circuits are made of transistors that act as logic gates that work in a binary fashion.

For example, an AND gate always outputs “TRUE” or “1” when all of its inputs are set to “TRUE” or “1,” otherwise it outputs “FALSE” or “0”.

In most cases the 1 is represented by high voltage (usually +5V) and the 0 is represented by low voltage (usually 0V).

The transistors are connected to each other by wires so that the output of one becomes the input of another.

In this way, one can envisage that the processing of data flows like a massive cascade of logic through the millions of transistors contained on a computer chip.
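
To make that cascade concrete, here is a minimal, purely illustrative Python sketch (not from the article) that models gates as functions and wires them together, so the output of one gate becomes the input of the next:

```python
# Illustrative only: logic gates modeled as Python functions.
# A 1 stands in for the high voltage level, a 0 for the low one.

def and_gate(a: int, b: int) -> int:
    return 1 if a == 1 and b == 1 else 0

def or_gate(a: int, b: int) -> int:
    return 1 if a == 1 or b == 1 else 0

def not_gate(a: int) -> int:
    return 0 if a == 1 else 1

def half_adder(a: int, b: int):
    # Cascade: outputs of gates feed the inputs of other gates,
    # just as wires connect transistors on a chip.
    total = and_gate(not_gate(and_gate(a, b)), or_gate(a, b))  # XOR built from AND/OR/NOT
    carry = and_gate(a, b)
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s} carry={c}")
```

Chain enough of these simple stages together and you get the "massive cascade of logic" described above; how fast each stage can settle into its 1 or 0 state is exactly the speed limit discussed next.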

Electrons also have spin

But no matter how much you improve the materials and miniaturize the components, the physical speed of such a system is limited by how fast the voltage can flip between the two states of 1 and 0.

This limitation exists because the technology manipulates only one property of the electron, its electrical charge. By creating a voltage potential, it makes the electrons move along to create a current.

But electrons also have another property, aside from their mass and charge, called angular momentum, or spin. And spin has to do with magnetism, not electricity.

Imagine an electron as a tiny billiard ball spinning on its axis. Spinning one way produces a magnetic field with the north pole at the top, and spinning the other way creates a north pole at the bottom.

Spintronics is like electronics “with extras,” where the extras are the ability to manipulate electron spin as well as charge.

Carbon gates can cascade directly

There are at least two ways that spintronics increases the range of things you can do with electrons: working at a very small scale, you can use electrons to generate a magnetic field, and you can use a magnetic field to move electrons (like passing a magnet over iron filings).

Already one can see a potential advantage for digital circuits: you could make things happening in one logic gate influence another logic gate without having to connect them physically – you can get rid of the wires. And that is essentially what the engineers in the new study have shown.

They discovered that electrons moving in carbon nanotubes (extremely thin wires made of carbon) generate a magnetic field that affects the flow of electrons in an adjacent graphene nanoribbon (essentially a very flat carbon ribbon that is only one atom thick). Effectively, this is a carbon transistor or gate.

“These carbon gates can be cascaded directly; no additional intermediate devices are required between logic gates,” note the engineers.

Computers working 1,000 times faster

Ryan M. Gelfand, an assistant professor at the University of Central Florida, and one of the team working on the new carbon-based spintronics system, says:

“If you want to continue to push technology forward, we need faster computers to be able to run bigger and better simulations for climate science, for space exploration, for Wall Street. To get there, we can’t rely on silicon transistors anymore.”

While it is early days and a lot more research and testing needs to be done, he and his colleagues believe that their carbon-based spintronics could lead to computers working some 1,000 times faster (at clock speeds measured in terahertz) and using much less power.


http://marketbusinessnews.com/graphene-help-develop-1000-faster-computers-based-spintronics/161614

Dell EMC Unity Rockets Past $1 Billion Milestone

#DellEMC announced that just 419 days after it began shipping, its Dell EMC #Unity family of #AllFlash and #hybridflash storage arrays has achieved $1 billion in cumulative bookings, making it one of the fastest-growing storage products in company history.
Dell EMC Unity delivers best-in-class All-Flash and hybrid storage performance and simplicity for the midrange with integrated support for block and file data. Since May 2016, nearly 6,000 customers have selected Dell EMC Unity to address block and file storage workloads such as databases, file systems, server virtual machines and virtual desktop infrastructures.
"In just over a year since it was introduced, customer adoption of Dell EMC Unity has picked up incredible momentum, becoming one of the fastest-growing storage array families in company history," said Jeff Boudreau, President of Storage, Dell EMC. "Designed for All-Flash performance and cloud-like simplicity, Dell EMC Unity has taken the midrange storage segment by storm and is helping thousands of SMEs and enterprises modernize their data centers."
At Dell EMC World 2017 in Las Vegas, Dell EMC introduced four new Dell EMC Unity All-Flash models – Dell EMC Unity 350F, 450F, 550F and 650F – providing up to 500TB of effective storage capacity per single U, and the introduction of a highly dense 80-drive, 3U footprint. All new Dell EMC Unity models available in July include a 4x larger file system with inline file compression, integrated Copy Data Management (iCDM) with snapshot mobility, introduction of Dynamic Pools offering simpler mapped RAID protection and support for external encryption key management via KMIP (Key Management Interoperability Protocol). Additionally, Dell EMC Unity features an 8x increase in density and 8x more effective file system capacity than its predecessor, as well as the ability to install in under 10 minutes, 33% faster than previous generations.
Notable customers that have recently embraced Dell EMC Unity storage arrays for their midrange and enterprise workloads include:
North Carolina State University (NC State) is the largest university in North Carolina and considers its storage infrastructure critical to supporting its growing population of more than 33,000 students. The public university faces the challenge of doing more with less and was therefore looking to effectively manage its large-scale environment with limited resources. With storage growth rates at 30% year-over-year, NC State required a modern data center solution that would enable consolidation of multiple legacy arrays into a smaller footprint. NC State selected Dell EMC Unity storage to leverage its compact size and data efficiency services such as inline file compression and simplified management. The university's IT department relies on the performance of its Dell EMC Unity All-Flash arrays to support a wide range of applications used by administration, faculty and students, with each application having different performance and availability requirements. Dell EMC CloudIQ provides cloud-based storage analytics for NC State's Dell EMC Unity environment, giving it a near real-time and long-term view of its data storage requirements to help the IT department stay ahead of demand for storage growth.
Watch a video of NC State's Dell EMC Unity customer experience: https://www.emc.com/video-collateral/demos/microsites/mediaplayer-video/emc-unity-beta-customer-experience-north-carolina-state-university.htm

Cambridge University Hospitals (CUH) in Cambridge, England recently deployed a Dell EMC Unity All-Flash storage system to support its InterSystems HealthShare informatics platform to consolidate localized data about 3.2 million patients into a single data warehouse. Dell EMC Unity supports the HealthShare platform and its powerful analytics tools that enable CUH to effectively and efficiently mine almost every type of data found in a hospital environment including structured and unstructured data, images, documents and messages from clinical, administrative and patient sources. With Dell EMC Unity All-Flash storage as its big data repository, CUH is realizing dramatic gains in complex query response times that now resolve in minutes and seconds instead of hours. The deployment has also resulted in a reduction of data center floor space and a 60% reduction in power consumption at CUH based on the replacement of two full racks of legacy storage with less than one rack containing Dell EMC Unity.
Additional Resources:
Learn more about Dell EMC Unity All-Flash Storage: https://www.emc.com/en-us/storage/unity.htm#collapse
Direct2DellEMC Blog, "Billion Dollar Blast-off: Dell EMC Unity Rockets Past $1B": https://blog.dellemc.com/en-us/billion-dollar-blast-off-dell-emc-unity-rockets-past-1b-in-just-over-a-year/
Experience the new standard for affordable All-Flash storage with the Dell EMC Unity Simulator: https://www.emc.com/en-us/storage/all-flash-unity-simulator/index.htm
Learn more about Dell EMC's All-Flash storage portfolio: https://www.emc.com/en-us/storage/discover-flash-storage/index.htm?pdf=analyst-reports/vmax-all-flash-idc-lab-validation
Connect with Dell EMC via Twitter, Facebook, YouTube, LinkedIn and DECN

Optimized HPC Solutions Driving Performance, Efficiency, and Scale

Technology is transforming nearly every human and business process, from driving business growth, to translating documents in real time, to enhancing decision-making in areas like financial services and scientific research. High performance computing ( #HPC ) is helping organizations harness the full power of their data to perform faster and smarter than ever.

Spearheading these technological advancements, #artificialintelligence ( #AI ) is introducing a new breed of intelligent machines that can solve highly complex problems quickly, while simplifying IT management and reducing time to insight. Today's organizations are leveraging deep learning, a powerful component of AI, to analyze data and derive actionable intelligence at lightning speed. Backed by HPC performance, deep learning applications use predictive analytics to forecast future activity, behaviors, and trends, as well as #BigData analytics to uncover and resolve hidden problems in data. However, evolving industry requirements are pushing organizations to the brink of their IT capabilities, challenging them to process rising volumes of data with greater efficiency and precision.

Hewlett Packard Enterprise ( #HPE ) offers industry-leading technologies and services to accelerate HPC and AI applications. This comprehensive, purpose-built portfolio for #deeplearning supports massively parallel processing of large, unstructured datasets and far exceeds the throughput of typical industry-standard servers. Featuring density-optimized systems, HPE's innovations enable new levels of performance, efficiency, and scale.

INTRODUCING OPTIMIZED HPC SOLUTIONS

HPE's goal is to create a superior IT environment marked by enhanced security, agility, flexibility, and economic control. Optimized HPC platforms are purpose-built to execute increasingly complex workloads as business requirements continue to evolve.

The HPE SGI 8600 is rapidly transforming compute capabilities. This petascale system can scale to thousands of nodes, making it the fastest distributed-memory server in the world and an ideal choice for managing AI and HPC applications. With liquid cooling and closed-loop airflow, users can capitalize on dramatically lower energy costs without sacrificing performance. Organizations that invest in this technology can speed innovation with the ease of a comprehensive infrastructure: the HPE SGI Management Suite allows provisioning of thousands of nodes in minutes, provides detailed system health monitoring, and offers fine-grained power management.

In addition to the 8600, HPE Apollo systems provide a bridge from traditional to scale-out architectures, expressly designed to maximize performance, streamline data center management, and optimize resource consumption such as power, cooling, and space. HPE is turbocharging AI operations with three Apollo HPC systems. The HPE Apollo 6500 offers automated real-time intelligence to run massively parallel applications such as deep learning. The HPE Apollo 2000 is optimized to save data center floor space while delivering the compute performance and flexibility to support a wide range of configurations and workload requirements. The HPE Apollo 6000 Gen10 is a next-generation server that enables faster processing, high-speed memory access, updated software-defined management and security, and HPC cluster performance with 100 Gbps node-to-node connectivity.
Announced earlier this month, the HPE Apollo 6000 Gen10 is billed as the world's most secure industry-standard server. The system promotes optimal productivity, efficiency, and security, and only HPE promises a Secure Compute Lifecycle, anchoring the server's firmware in the silicon itself. This Silicon Root of Trust provides hardware and software protection throughout manufacturing, distribution, installation, and usage. And with automated task-scheduling and management applications, organizations can operate and scale with confidence.

https://www.hpcwire.com/solution_content/hpe/government-academia/optimized-hpc-solutions-driving-performance-efficiency-scale/

VxRail Appliance Now in vCAN Program

Learn more about the only fully integrated, pre-configured, and pre-tested #VMware #hyperconverged infrastructure appliance family on the market. #VxRail dramatically simplifies IT operations while lowering overall capital and operational costs. Combined with Dell PowerEdge server options, VxRail offers highly configurable appliances that meet any use case. Watch the vmLIVE replay (introducing the product to #vCAN partners) and/or visit vCAN Edge for specific assets.

https://blogs.vmware.com/partnernews/2017/06/vxrail-vcan.html

Intel, Nutanix partner to accelerate business transformation, data center

With a stronghold on a niche market, #Nutanix Inc.’s #hyperconverged infrastructure technology has succeeded largely because of its ability to integrate so well with data center service providers. As the company looks to expand beyond its niche into more cloud-based environments, this partner ecosystem becomes a crucial component of Nutanix’s efforts to maintain momentum. Hyperconverged solutions are especially favored in modern data management scenarios, as businesses across a myriad of sectors embrace the digital transformation. From manufacturing to agriculture, data has become an undeniable asset in need of vendor-agnostic processes. “How do you start to use technology to solve business problems? That’s a big part of what Intel is helping companies do,” said Rachel Mushahwar (pictured, right), general manager and head of Americas industry sales at #Intel Corp.

https://siliconangle.com/blog/2017/06/28/intel-nutanix-partner-accelerate-business-transformation-data-center-nextconf/

TensorFlow to Hadoop By Way of Datameer

Companies that want to use #TensorFlow to execute deep learning models on big data stored in #Hadoop may want to check out the new #SmartAI offering unveiled by #Datameer today. Deep learning has emerged as one of the hottest techniques for turning massive sets of unstructured data into useful information, and Google's #TensorFlow is arguably the most popular programming and runtime framework for enabling it. So it makes sense that Datameer, one of the first vendors to develop a soup-to-nuts Hadoop application for big data analytics, has now added support for TensorFlow to its Hadoop-based application. With today's unveiling of SmartAI, Datameer is providing a way to execute and operationalize TensorFlow models.

"The objective here is to take the stuff that mad scientists are coming up with, and actually take it to the business," Datameer's senior director of product marketing John Morrell tells Datanami. SmartAI, which is still in technical preview, does not help data scientists create the models; they will still do that in their favorite coding environment. Nor is it set up to train the models. (If you're interested in learning how that can be accomplished on Hadoop, Hortonworks has a good blog post on integrating TensorFlow assemblies into YARN.) Rather, Datameer's new app is all about solving some of the thorny "last mile" problems that organizations often encounter as they're moving a trained TensorFlow model from the lab into production.

"AI today has had some problems in terms of operationalization," Morrell says. "When a data scientist comes up with a formula using their data science tools, they just chuck it over the wall to the IT guy, who then tries to turn it into code, and custom code the whole thing."
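
As a rough illustration of that "last mile" step, here is a minimal sketch of executing an already-trained model using TensorFlow's standard Keras APIs. To be clear, this is not Datameer SmartAI code; the model path and feature layout are hypothetical stand-ins for whatever a data scientist hands over:

```python
# Illustrative sketch: load a trained model a data scientist exported,
# then score a batch of incoming records. Not Datameer's actual API;
# the path and features below are made up for demonstration.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("/models/churn_model")  # hypothetical export path

# A batch of new records to score (feature layout is invented here).
batch = np.array([[0.3, 1.0, 42.0],
                  [0.9, 0.0,  7.0]], dtype="float32")

scores = model.predict(batch)
print(scores)  # e.g. one prediction per record
```

Operationalization tooling then wraps this scoring step with scheduling, data access and monitoring so the model runs against production data instead of sitting in a notebook.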

https://www.datanami.com/2017/06/28/tensorflow-hadoop-way-datameer/

Embedded MRAM Can Take the Heat

TORONTO – On the heels of several foundries publicly announcing plans to put #MRAM into production by the end of this year and into 2018, one of them has outlined how it can significantly improve data retention for embedded applications.

At the recent 2017 International Symposium on VLSI Technology, Systems and Applications in Japan, GlobalFoundries outlined in a technical paper its progress with Everspin Technologies in moving embedded MRAM (eMRAM) forward to the 22nm process node. In a telephone interview with EE Times, Dave Eggleston, GlobalFoundries' vice president of embedded memory, said the key breakthrough highlighted in the paper is eMRAM's ability to retain data through solder reflow at 260 degrees Celsius, and for more than 10 years at 125 degrees Celsius, plus read/write with outstanding endurance at 125 degrees Celsius. This will enable eMRAM to be used for general-purpose MCUs and automotive SoCs, he said. "The thermal stability has not been there for the magnetic layers. If you solve that data retention problem then it opens up much wider markets," he added.

MRAM had demonstrated non-volatility, high reliability and manufacturability in previous technology nodes, Eggleston said, but has been challenged to scale to 2x nm node geometries and to meet BEOL-compatible process temperatures for embedded memories. As outlined in the paper, the magnetic tunnel junction (MTJ) stack and integration have been optimized for a 400 degree Celsius, 60-minute post-MTJ patterning thermal budget and are compatible with CMOS BEOL.

The three major foundries are all introducing eMRAM as a product, and customers are picking up GlobalFoundries' PDK to start designing for it, said Eggleston. The major fab equipment makers got involved several years ago because they believed there was enough business, so the tools are available for deposition and etch of the MTJ. "They invested and developed products in conjunction with large fabs like us and with small companies such as Everspin," Eggleston said.

MCU customers, in the meantime, have started seriously looking at how MRAM can enhance their architectures, said Eggleston. "They get faster write speed and they get higher endurance," he said. This gives them the capability to use embedded MRAM where they might have previously used SRAM. Eggleston said the 2x nm node is the sweet spot in terms of circuitry simplicity and manufacturing costs.

The market opportunity for eMRAM is not dissimilar to those of other emerging memory technologies as well as incumbents: new high-volume markets include mobility, networking, data centers, Internet of Things (IoT) and automotive, said Eggleston. For Globalfoundries, the latter two are the most relevant. “We used to say they were largely the same, but as a foundry we got lots of traction in automotive," he said.

eFlash has been the incumbent embedded memory, but there are many emerging options that can potentially address these markets, said Eggleston. In addition to eMRAM, there are phase change memory (PCM), embedded resistive RAM (eRRAM), carbon nanotube (CNT) memory and ferroelectric FETs (FeFET). All have tradeoffs when it comes to retention, efficiency and speed, he said. Both CNT and FeFET show promise but are too immature, while PCM is too specialized and is fading away from embedded applications.

“MRAM and RRAM both have similar capabilities," said Eggleston. “They're both backend aligned memories so that gives you the ability to pretty easily implement them into a logic process." Processes include those that require bulk silicon, FD-SOI or FinFET. eFlash is built down into the silicon, he said, and would be more challenging to build into all of those variants.

RRAM is a simpler stack, said Eggleston, as there are fewer materials required between the electrodes. “It also doesn't require the same investment in equipment MRAM does," he said. "MRAM certainly requires unique capital equipment to do that complex stack." However, he said, RRAM hasn't shown the ability to provide the data retention, speed, and endurance balance required by the broader market.

What MRAM offers over RRAM is versatility, said Eggleston, because its material composition can be tuned between electrodes. “You can tune it either for great data retention or for really fast write speed and endurance," he said. This tuneability, he added, will enable Globalfoundries to address the space at advanced nodes previously addressed by eFlash, as well as tune it for speed for use as a non-volatile cache in server processors and storage controllers.

http://www.eetimes.com/document.asp?doc_id=1331958

Tegile Releases Its Latest IntelliFlash HD

Today #Tegile Systems released the latest version of its flagship #IntelliFlash HD multi-tiered flash storage platform. The new version enables customers to take advantage of #NVMe technology without the risk or resources typically associated with it. According to Tegile, the new IntelliFlash HD packs the highest-density flash available on the market, provides full encryption, and can store 30% more data for any application use case and every workload, at a fraction of the normal cost.

http://www.storagereview.com/tegile_releases_its_latest_intelliflash_hd

Dell EMC Wins ISC Supercomputer Conference Vendor Showdown Track 2

In recent years, traditional #supercomputing competitors like #SGI (now part of #HPE), Cray, and #IBM have seen mainstream server brands like #DellEMC, HPE, and #Lenovo gain end-user attention in HPC. For the past few years, the International Supercomputing Conference (ISC), a leading conference for high-performance computing ( #HPC ) held last week in Frankfurt, Germany, has run a vendor showdown: an audience-judged competition among ISC's Platinum, Gold and Silver sponsors. Audience judging matters at ISC because its audience is so highly qualified. [You can read the details for 2016's event here. I assume this year was similar, but ISC had not yet posted the 2017 split before this was published.] ISC's vendor showdown is a timed event; each vendor has a few minutes to present its strategy, product or research developments using only three slides, and must then answer questions from a panel of moderators. This year, the 22 participating companies were split evenly into two tracks, and Dell EMC participated in track 2.

Dell EMC's Ed Turkel gave the winning presentation for track 2. Dell EMC has been experimenting and innovating around the company's mission to "democratize HPC" (remember that the PC mission now belongs to Dell, a separate corporate entity from Dell EMC). Democratizing HPC means making HPC accessible to small and medium businesses to accelerate science, engineering, and analytics applications. Dell EMC is doing this by working with partners to advance the technology base and by optimizing HPC solutions via its Dell EMC HPC Innovation Lab.
https://www.forbes.com/sites/tiriasresearch/2017/06/28/dell-emc-wins-isc-vendor-showdown-track-2/?c=0&s=NewTech

Wednesday, June 28, 2017

IBM and Lightbend Build Integrated Platform for Cognitive Development

ARMONK, N.Y., June 28, 2017 /PRNewswire/ -- #IBM (NYSE: IBM) today announced a collaborative development initiative with #Lightbend, provider of the world's leading #ReactiveApplicationDevelopmentPlatform, to help advance the development of #artificialintelligence ( #AI ) and #cognitivesolutions in the enterprise. The resulting joint solutions will be designed to provide a complete toolchain for #Java and #Scala developers to easily build and deploy AI and cognitive applications in both on-premises and cloud environments.

Developers are turning to Scala for their most complex use cases due to its ability to handle streaming data at massive scale. Scala is the language of cognitive development: modern frameworks such as Spark, Kafka and Akka are written in it. The confluence of growing data volumes and the introduction of cognitive/AI technologies is increasing both opportunity and complexity for developers building enterprise applications in the new era of cognitive computing.

Lightbend's popular Reactive Platform is powered by an open source core and brings developers important capabilities across reactive programming, data and microservices. IBM will integrate these capabilities across its cloud platform and portfolio of cloud services, including data analytics, cognitive and machine learning, and collaborative data science tools, extending new capabilities to the Java and Scala communities. Developers across enterprises and startups will be able to take advantage of the latest cognitive and AI technologies while leveraging existing WebSphere investments.

"Working with IBM on an integrated platform for cognitive development is a natural progression of our work to support advanced cognitive application development. As one of the leaders in cognitive/AI, IBM brings important code and tools to our customers and the Java and Scala communities," said Mark Brewer, president and CEO, Lightbend. "IBM's support of Lightbend and the Reactive Platform is a validation of where we think cognitive development is going and what is required by the developers building that future."

Bob Lord, IBM's chief digital officer, added: "We believe the use of the Lightbend Reactive Platform is essential to building today's modern infrastructures. Lightbend represents IBM's continuous commitment to the Java and Scala communities. Java and Scala are the languages of cognitive and AI development, and cognitive development is the future. The collaboration between IBM and Lightbend can help enterprise developers build cognitive applications and accelerate the era of cognitive computing."

IBM has a long history of supporting the millions of developers in the Java and Scala communities. IBM spearheaded the creation of an open environment and open tools for Java developers, which resulted in Eclipse and then the Eclipse Foundation. It has also supported the 1.2 million developers in the Scala community for many years, including an ongoing collaboration with Lightbend that has trained hundreds of Scala developers. IBM and Lightbend count Scala among today's most popular programming languages, with major deployments at Apple, Verizon, Twitter and more. The strategic collaboration between IBM and Lightbend will create new code, tools and documentation designed to help developers build Java- and Scala-based applications on the Lightbend Reactive Platform.
The collaboration is expected to integrate WebSphere and key offerings across IBM Cloud, including app management services, the Watson Data Platform, and the more than 150 services spanning cognitive intelligence, data analytics, Internet of Things, security, DevOps and blockchain available to developers through the IBM Cloud. IBM and Lightbend have a history of collaboration, but this strategic investment and relationship will allow enterprise developers to reap the benefits of the Lightbend Reactive Platform while integrating IBM's cognitive development tools, resources and support.

http://www.prnewswire.com/news-releases/ibm-and-lightbend-build-integrated-platform-for-cognitive-development-300481007.html

Microsegmentation Is the Future

#Virtualization, arguably describable as "software-defined workloads," has become inextricably intertwined with both Software-Defined Storage ( #SDS ) and Software-Defined Networking ( #SDN ). The past decade saw storage wars that redefined the IT landscape, and in the coming decade networking looks set to do the same. One term that will soon become commonplace for all virtualization administrators is microsegmentation. To understand microsegmentation, we first need to understand Virtual LANs ( #VLAN s). VLANs are a way to simulate separate physical networks without actually having to wire up separate physical networks. All devices on VLAN 10, for example, can communicate with one another. Those VLAN 10 devices cannot, however, communicate with devices on VLAN 20. As stated, VLANs were designed to simulate multiple physical networks atop a single physical infrastructure. Physical networks in this conceptualization usually consist of hundreds, thousands or even tens of thousands of devices in a single Layer 2 switch fabric.
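
As a toy illustration of that rule, the following Python sketch (hypothetical names, not from the article) models the Layer 2 behavior: devices can exchange frames only when they share a VLAN ID:

```python
# Toy model of VLAN isolation: devices communicate only within a VLAN.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    vlan: int

def can_communicate(a: Device, b: Device) -> bool:
    # In a pure Layer 2 fabric, frames are forwarded only within a VLAN;
    # crossing VLANs requires a router or Layer 3 switch (not modeled here).
    return a.vlan == b.vlan

web = Device("web01", vlan=10)
app = Device("app01", vlan=10)
db = Device("db01", vlan=20)

print(can_communicate(web, app))  # True  -- both on VLAN 10
print(can_communicate(web, db))   # False -- VLAN 10 cannot reach VLAN 20
```

Microsegmentation takes this idea much further, enforcing policy per workload rather than per broadcast domain, which is why it matters so much to virtualization administrators.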

https://virtualizationreview.com/articles/2017/06/28/microsegmentation-is-the-future.aspx?m=1

Linux Foundation announces new project for software-defined networks

The #LinuxFoundation is announcing a new open-source project designed to bring automated protection to software-defined networks. The #OpenSecurityController (OSC) Project is a software-defined security orchestration solution with a focus on multi-cloud environments.

" #Softwaredefinednetworks are becoming a standard for businesses, and open source networking projects are a key element in helping the transition, and pushing for a more automated network," said Arpit Joshipura, general manager of networking and orchestration at The Linux Foundation. "Equally important to automation in the open source community is ensuring security. The Open Security Controller Project touches both of these areas. We are excited to have this project join The Linux Foundation, and look forward to the collaboration this project will engender regarding network security now and in the future."

According to the Linux Foundation, the project will automate the deployment of virtualized network security functions such as next-generation firewalls, intrusion prevention systems and application delivery controllers. The founding members of the #OpenSecurityController Project include #Huawei, #Intel, #McAfee, #Nuage Networks from #Nokia, and #PaloAlto Networks. The project's technology will be licensed under Apache 2, and a governing board and technical steering committee will be put in place to handle business decisions and provide leadership.

According to the project's website, "The Open Security Controller (OSC) code base: is a software-defined security orchestration solution that automates deployment of virtualized network security functions, like next-generation firewall, intrusion prevention systems and application delivery controllers; enables East-West data center security, is scalable and reduces threats in software defined network environments; simplifies and automates security management and compliance; and because it is open, offers organizations the flexibility to choose the security technology that is best suited to their needs."

http://sdtimes.com/linux-foundation-announces-new-project-software-defined-networks/

If You Don't Buy Tech Stocks Now, You're Missing Out On A Bargain

Tech stocks right now are going through a textbook correction within an ongoing bull market. Tech stocks have been the market leaders in 2017, and a correction is only natural. There is no "tech stock bubble": compared to the whole market, tech stocks are still nowhere near 1999-2000 levels. Today's low-interest-rate, low-inflation environment is very favorable for tech companies to convert their high gross margins into high profit margins. If you believe in this bull market and its market leaders, right now looks like an excellent opportunity to buy tech stocks at a discount from their recent highs.

The financial headlines have been scary about tech stocks (XLK) this week. The NASDAQ (PowerShares QQQ Trust (QQQ)) has been in the red, even on days when the S&P 500 (SPY) and Dow Jones Industrial Average (DIA) are in the green. Talking heads speculate about sector rotations out of tech and into other stocks. Actually, this is just what a textbook market correction looks like: a normal pullback of about 10% within an ongoing bull market. That is simply what tech stocks are going through this month. Looking at how far the market's biggest stocks, and biggest tech stocks, had declined from their 52-week highs as of yesterday's (June 27) close, the drop is getting close to -10% for many big-name tech stocks.

https://seekingalpha.com/article/4084584-buy-tech-stocks-now-missing-bargain

Rubrik and AHV: Say No to Proxies

For the last couple of years I have been a huge fan of backup software that removes the need for proxies. #Rubrik provides a proxy-less backup solution by using the #Nutanix Data Services Virtual IP address to talk directly to each individual virtual disk that it needs to back up. Rubrik and Nutanix have some key advantages with this solution:

• AOS 5.1+ with version 3 APIs provides change region tracking, allowing quick, efficient backups with no hypervisor-based snapshot (see the sketch after this list).

• With AHV and data locality, Rubrik can grab the most recently changed data without flooding the network, which can happen when the copy and the VM don't live on the same host. With Nutanix, the reads happen locally.

• Rubrik has access to every virtual disk by making an iSCSI connection, bypassing the need for proxies.

• AOS can redirect the second RF copy away from a node with its advanced data placement if the backup load becomes too great during a backup window, protecting your mission-critical apps that run 24/7.

• Did I mention no proxies?
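
To show why change region tracking matters, here is a purely schematic Python sketch of incremental, proxy-less backup logic. Every function here is a hypothetical stand-in; this is not the Nutanix v3 API or Rubrik's implementation, just the shape of the idea:

```python
# Schematic only: copy just the disk regions reported as changed since the
# last snapshot, instead of reading the whole disk through a proxy VM.
# All helpers are hypothetical stand-ins, not a real vendor API.

def changed_regions(disk_id: str, since_snapshot: str):
    """Pretend change-region-tracking call: yields (offset, length) pairs."""
    yield from [(0, 1 << 20), (512 << 20, 4 << 20)]  # fake: 1 MiB @ 0, 4 MiB @ 512 MiB

def read_region(disk_id: str, offset: int, length: int) -> bytes:
    """Pretend direct iSCSI read of one region of the virtual disk."""
    return b"\x00" * length

class BackupTarget:
    def __init__(self):
        self.blocks = {}
    def write(self, offset: int, data: bytes):
        self.blocks[offset] = data

def backup_incremental(disk_id: str, since_snapshot: str, target: BackupTarget):
    copied = 0
    for offset, length in changed_regions(disk_id, since_snapshot):
        target.write(offset, read_region(disk_id, offset, length))
        copied += length
    print(f"copied {copied / 2**20:.1f} MiB of changed data only")

backup_incremental("vm-disk-1", "snap-42", BackupTarget())
```

The win is that only the changed regions cross the wire, read directly from each virtual disk, with no proxy VM in the data path.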

http://itbloodpressure.com/2017/06/28/rubrik-and-ahv-say-no-to-proxies/

Supercomputing Centers Have Become Showcases for Competing HPC Technologies

Even though there wasn't much turnover in the latest TOP500 list, several new #petascale #supercomputer s appeared that reflect interesting trends in the way #HPC architectures are evolving. For the purposes of this discussion, we'll focus on three of these new systems: #Stampede2, #TSUBAME 3.0, and #MareNostrum 4. First, let's dispense with the notion that these #supercomputers are mere sequels to their namesakes. What they really have in common is their architectural diversity and their use of the very latest componentry. Notably, none of these three systems has been completed yet, since they rely on hardware that is not yet generally available or could not be procured in the needed quantities in 2017.

https://www.top500.org/news/supercomputing-centers-have-become-showcases-for-competing-hpc-technologies/

NetApp lauds 'Run to NetApp' scheme in poaching business from Dell EMC, HPE and IBM

#NetApp is seeing around 22 per cent of net new accounts in EMEA come through its 'Run to NetApp' scheme, which offers rewards for partners that bring over flash accounts from #EMC, Hewlett Packard Enterprise ( #HPE ) and #IBM. The programme launched in August 2016 but was initially known as 'Run from EMC', designed to pinch customers while EMC was going through its integration into Dell. NetApp has since rebranded the programme, and Alfred Manhart, senior EMEA channel director at NetApp, said the scheme is capitalising on increasing consolidation and disruption in the storage market. Alongside Dell's acquisition of #EMC, #HPE has snapped up #SimpliVity and #Nimble Storage over the last six months.

"We started a year ago when it was Run from EMC, so if a customer decides to move from EMC to #NetApp we have a programme that is beneficial for the migration services," he said. "The partner gets a financial benefit out of it and next to it they're selling their own services and selling the gear.

"As the competition and marketplace is getting a bit broader we call it Run to NetApp, instead of Run from EMC, and we've included other competitors in this phase."

As part of the programme, partners get a $60,000 reimbursement for deals over $500,000, with smaller reimbursements available for smaller deals (details can be found on the website of distributor Arrow ECS). The programme is now set to run until April 2018, and Manhart said that NetApp is primed to continue taking market share from competitors, having recently launched its first hyper-converged infrastructure (HCI) solution.

"These kinds of huge mergers - Nimble and SimpliVity with HPE, and Dell and EMC - are really positive," he said. "The reason our flash figures are extraordinary is because the marketplace was so disrupted that we took the advantage, and also from a financial perspective as well. I think it's very positive.

"We have Nutanix and SimpliVity partners in whatever flavour - flash, HCI [for example] - that are trained in [this technology], and I think we have a better product than they do now."

Paul Stringfellow, technical director at NetApp partner Gardner Systems, said the programme is one of a number of NetApp initiatives that have been well received by the channel.

"With anything like that it's nice to have an incentive to make it easier to win business, particularly when it's competitive," he said. "Anything that eases the path of doing business is always welcome in the channel.

"It's another step from NetApp in making sure that they are better at articulating their message and making sure partners understand the data play and where that data fabric message is going. It falls alongside some really good announcements with ONTAP (data management software) and the HCI piece.

"There's lots of positive stuff around NetApp at the moment. Their IDC figures look good and they had a decent last quarter, so all those messages are translating well with the channel."

https://www.channelnomics.eu/channelnomics-eu/news/3012809/netapp-lauds-success-of-run-to-netapp-scheme-in-poaching-business-from-dell-emc-hpe-and-ibm

HPE Joins Investors In OpenStack Startup Platform9's $22 Million Funding Round

#Platform9, a startup that's seen spectacular growth over the last year by delivering on-premises #OpenStack-based infrastructure managed by a cloud software layer, revealed on Wednesday that it has closed another funding round. The $22 million Series C round, led by #CanvasVentures with #HPE also participating, will allow Platform9 to continue scaling a business whose revenue grew 300 percent last year, with a 360 percent expansion of its customer base, CEO Sirish Raghuram told CRN.

"That's all been driven by where the market is today," Raghuram said. "Enterprises are struggling to navigate the journey to the cloud. You have a lot of legacy VMware environments, developers using multiple public clouds, and trying to bring that agility to all of your deployments, all fragmented in individual silos."

http://m.crn.com/news/cloud/300087906/hpe-joins-investors-in-openstack-startup-platform9s-22-million-funding-round.htm

VMware prepping NSX-as-a-service running from the public cloud

The content catalog for #VMworld 2017 has appeared and, as usual, offers a few hints about announcements at the show and the company's future plans. Perhaps most interesting are the sessions pertaining to #VMware 's partnership with #AmazonWebServices. One is titled "VMware #NSXaaS - Secure Native Workloads in #AWS ". The session description says "VMWare NSXaaS provides you the ability to manage Networking and Security policies in Public Cloud environments such as AWS." Once we saw that "NSXaaS" reference we quickly spotted job ads that say "VMware #NSX Team is building an elite team of Devops/SRE engineers to run our crown jewel project "NSXaaS" on Public Cloud." Whoever gets the gig will be "... responsible to run NSX as a Service Reliably with no down time. This will include proactively finding service reliability issues & resolving them as well as responding to customer tickets as a line of defense before involving development engineering." Suffice to say, it looks like VMware is going to offer NSX-as-a-service, which is interesting!

Another session, "VMware Cloud on AWS - Getting Started Workshop", offers the chance to "Be among the first to see the new VMware on AWS solution. You will interact with the VMware Cloud interface to perform basic tasks and manage your public cloud capacity." That description is similar to other AWS-related sessions in that it offers demos of actual services, which suggests to The Register's virtualization desk that come VMworld USA in late August, VMware-on-AWS will either have been launched or be very close to a debut. Session titles like "VMware Cross Cloud Services - Getting Started" suggest Cross Cloud will also debut at or before the show.

A session titled "VMware Integrated OpenStack 4.0: What's New" suggests a new release is in the works, given that we're currently on version 3.1. "VMware Cloud Foundation Futures" promises to show off "exciting new work being done using VCF as a platform in the areas of edge computing clusters, network function virtualisation, predictive analytics, and compliance."

"Storage at Memory Speed: Finally, Nonvolatile Memory Is Here" looks like being VMware's explanation of how it will put byte-addressable non-volatile memory, which it calls "PMEM" and the rest of us call Optane and pals, to work. The session promises "an overview of VMware virtualization for PMEM that is now running on real PMEM products." Speed improvements from PMEM aren't automatic, so it will be interesting to see what Virtzilla's come up with.

VMware's meat and potatoes – vSphere, vCenter and the like – don't look to have a lot new to discuss other than enhancements to PowerCLI and the vSphere HTML5 client. Desktop hypervisors usually get a VMworld refresh, and the catalog mentions "innovations being added to VMware Fusion, VMware Workstation, and VMware Horizon FLEX" in a session titled "What's New with ..." the above-mentioned products.

There's no session description we could find that mentions VMware App Defence, the long-awaited security product The Register believes will emerge in Q3, but the catalog is sprinkled with mentions of endpoint security and VMware's willingness to make it better with virtualization. VMworld Europe is in September this year, so it also fits the Q3 timeframe if VMware wanted to keep the announcement of its new security offering as the big news for its continental followers.

https://www.theregister.co.uk/2017/06/28/nsxaas_and_other_vmware_news_from_the_vmworld_2017_content_catalog/

Hyper-convergence will be commoditised, says Nutanix president

The pendulum of IT has always swung between two polar opposites – centralised control of IT operations (think mainframes) and decentralised management via servers and PCs in what is known as client-server architecture.

As more organisations bought into the client-server story, server and storage sprawl started to creep up, leading to the advent of virtualisation software, which helps to consolidate servers and improve efficiency.

In recent years, the rise of hyper-convergence has taken consolidation to a new level by abstracting and embedding storage and networking into the compute tier via software, complete with backup, snapshots, data deduplication and other capabilities.

In an interview with Computer Weekly, Sudheesh Nair, president of hyper-convergence software supplier #Nutanix, talks about the future of #hyperconvergence, its relation with the cloud, and the company’s challenges in building its business in Asia.

Hyper-convergence seems to be all the rage these days. Do you see it being commoditised at some point?

Hyper-convergence will become commoditised – there’s no doubt about it. But that is not new in our industry. About 10 years ago, when #DataDomain was started, deduplication was a business that no one else had and they built on it. Today, deduplication has become a feature for many companies in the data storage business. Commoditisation, however, is happening significantly faster. What used to take 10 years to commoditise could take just two to three years – and that’s good news for customers.

Nutanix’s view is very different. We did not begin this company with the end-game of delivering hyper-converged solutions. Rather, we started with the idea that infrastructure complexity had to be eliminated and made storage invisible. Then, three years ago, we went to the next level and made virtualisation invisible. The next step for us would be to make the cloud invisible. So, in our journey, we are not afraid if people expect hyper-convergence to just work and become commoditised, because our approach and destination are different.

Is that why you are rallying the company around the Enterprise Cloud Platform?

Yes, but there are two things about cloud that people need to internalise. First, cloud is something that lives inside a datacentre owned by the likes of Amazon, Microsoft and Singtel. It’s infrastructure that I rent from someone else. Tomorrow’s cloud will be different – it will not be contained within a single datacentre. This has happened before – mainframes used to sit inside datacentres, accessible only via dumb terminals. But what happened next was the emergence of client-server architecture that eventually disrupted mainframes. Similarly, the capabilities of cloud today will bleed out, be miniaturised and reside in internet of things (IoT) devices, such as cars and drones.

When we think about the Enterprise Cloud Platform, it’s basically the data fabric that starts in the cloud and extends to edge devices, as well as the control fabric that manages, automates and orchestrates workloads between the datacentre and the edge. Second, we think cloud should be powered by applications. Take Apple, for example. It doesn’t sell you the cloud. It sells you unlimited music, but when you listen to that music, you are consuming it through the cloud. That invisibility of the cloud will be the future.

Today, some enterprises still operate their own datacentres and private cloud infrastructure for reasons such as security and data sovereignty. Do you think we will ever come to a point where most companies will rely entirely on the public cloud to run their business?

First, there will always be public cloud companies that will build their own technology, but there are also what we call community cloud providers catering to specific industries and geographies, such as an insurance cloud service provider that complies with insurance regulations in Singapore. Nutanix is focused on community cloud providers that want a cloud out of the box.

Second, think of public cloud suppliers as operating a buffet restaurant where you can have a spread. But not everyone wants to eat the same food, and Amazon is not going to change for those with unique requirements. But that does not mean you should go back to the stone age with a three-tiered architecture, and so on. Our bet is that whether you use the public cloud or private cloud, you can have the same experience.

http://www.computerweekly.com/feature/Hyper-convergence-will-be-commoditised-says-Nutanix-president

Google deal with Nutanix shows its cloud strategy is broadening

#Google is ready to accept that some big enterprises need help getting their critical workloads to the cloud. Its parent #Alphabet on Wednesday announced a partnership with #Nutanix, whose data center hardware and software provides a hybrid approach that bridges the gap between on-premise servers and machines in big cloud data centers. The deal is another indication that Nutanix, which held its stock market debut last year, is now embracing the public cloud as a viable infrastructure choice. Simultaneously it reflects how Google is becoming more receptive to the needs of enterprises.

"With public cloud, you have to meet them where they are -- that's becoming increasingly clear," Nan Boden, Google's head of global alliances, told CNBC in an interview. She said that in recent years Google had not cut major deals with data center hardware providers.

In the public cloud, Microsoft Azure and Amazon Web Services represent Google's biggest competition. Microsoft has been working to make its public cloud and private cloud tools match, while AWS has developed hardware and even trucks to facilitate the migration of data into its cloud.

Google, to date, has been less active in supporting cloud architectures that involve companies' existing on-premises infrastructure.

Nutanix does intend to support application deployment on AWS and Azure, although at this point it's working most closely with Google.

Nutanix CEO Dheeraj Pandey told CNBC that he's been particularly impressed with the head of Google's cloud, Diane Greene, who was previously co-founder and CEO of VMware.

"She always talked about VMware being the most non-disruptive disruptive technology company," Pandey said. Under Greene's leadership the Google cloud is more "aware of the mundane," he said.

The first integration resulting from the partnership, which will enable applications to move from on-premise data centers to Google's, will become available in the first quarter of 2018. Pricing details aren't available.

Most recently, Greene announced that the consumer version of Gmail would no longer scan the contents of users' email to personalize ads -- matching the policy already in place for the version of Gmail offered to businesses through the G Suite portfolio of applications.

And #Nutanix 's #Google partnership comes a few months after Google started offering the #SAP #HANA database software atop its cloud. Additional SAP-Google integrations are on the way.

Nutanix software will enable applications to be wrapped in digital containers -- many of which can run on a single physical server, similar to virtual machines -- and deployed on Nutanix hardware or the Google cloud using #Kubernetes, an open-source tool created by Google. Alternatives to #Kubernetes include #Docker 's #Swarm container cluster management software.

"We talked to a lot of folks and developers in the community and so on, and most everybody is saying it's going to be Kubernetes" as the top tool, Pandey said.
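For readers unfamiliar with Kubernetes, here is a minimal sketch, using the official Kubernetes Python client, of how an application is described and handed to a cluster for deployment. The image, labels and namespace are illustrative assumptions; this is generic Kubernetes usage, not Nutanix's or Google's own tooling.

```python
from kubernetes import client, config  # pip install kubernetes

# Load credentials from the local kubeconfig, as kubectl does.
config.load_kube_config()

# Describe a Deployment that runs two replicas of an (illustrative) app image.
container = client.V1Container(name="demo-app", image="nginx:1.13")
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
    spec=client.V1PodSpec(containers=[container]),
)
spec = client.V1DeploymentSpec(
    replicas=2,
    selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
    template=template,
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=spec,
)

# Submit it; the cluster schedules the containers across its nodes.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The same Deployment object can be submitted to any conformant cluster, which is part of what makes Kubernetes attractive as a portability layer between on-premise hardware and clouds such as Google's.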

http://www.cnbc.com/2017/06/27/googles-nutanix-partnership-indicates-more-hybrid-cloud-interest.html

Storage Software Market Share 2017-2022: IBM, NetApp, Huawei Technologies, Hewlett-Packard, Dell

The Global Storage Software Market 2017 report examines the performance and size of the #StorageSoftware market. It encloses an in-depth study of the state of the Storage Software market and of the competitive landscape globally, analysing the market's present potential and future prospects from various angles in detail. The report covers Storage Software market size, revenue, market share, industry volume, market trends and growth aspects. A wide range of applications, utilization ratios, and supply and demand analysis are also included, along with manufacturing capacity and Storage Software pricing for the forecast period from 2017 to 2022.

Manufacturers Analysis and Top Sellers of Global Storage Software Market 2017: 

#IBM
#NetApp
#Huawei Technologies
#HewlettPackard
#Dell
#Hitachi
#Symantec
#CATechnologies
#OracleCorporation
#EMC

Firstly, the report covers the top Storage Software manufacturing industry players from regions like the United States, EU, Japan, and China. It also characterizes the market based on geographical regions.

Further, the Storage Software report gives information on company profiles, market share and contact details, along with a value chain analysis of the Storage Software industry, industry rules and policies, the circumstances driving the market's growth and the constraints blocking it. The market's development scope and various business strategies are also covered in this report.

Inquiry Before Purchasing Report: bit.ly/2skYeLS

The Storage Software research report includes the products that are currently in demand and available in the market, along with their cost breakdown, manufacturing volume, import/export figures and contribution to worldwide Storage Software market revenue.

Finally, the Storage Software market report presents the research findings and conclusions, helping readers develop profitable market strategies and gain competitive advantage.

http://www.openpr.com/news/599820/Storage-Software-Market-Share-2017-2022-IBM-NetApp-Huawei-Technologies-Hewlett-Packard-Dell.html

Global Graphene and 2-D Materials Market 2017 – BASF, Bluevine, FGV Cambridge Nanosystems

Global #Graphene and 2-D Materials Market 2017 report presents fresh industry data and forthcoming industry trends, allowing you to pinpoint the products and clients driving revenue growth and profitability. The report offers a comprehensive, executive-level overview of the global Graphene and 2-D Materials market, including definitions, classifications and applications. The market is expected to reflect a positive growth trend in the coming years, and the pivotal driving forces behind its growth and popularity are analysed in depth. The report also lists the preeminent competitors and presents analysis of the key factors influencing the global market.

Key Manufacturers Analysis of Graphene and 2-D Materials:

#BASF
#Bluevine
#FGVCambridge Nanosystems
#Grafen Co.
#Grafoid
#Graphene3D Lab Inc
#GrapheneTechnologies
#GwentElectronics
#Haydale
#CambridgeNanosystems
#Huawei
#IBM
#Samsung
#Stratasys

In-depth data associated with the global Graphene and 2-D Materials market is included in this report, covering business tactics, development plans and import/export details. The report also analyses the dominant market players, along with their company profiles, contact information, contribution to market share, consumer volume and so on. Analysis of the Graphene and 2-D Materials market is broken down by region (other regions can be added as per requirement): USA, Europe, Japan, China, India and South East Asia.

http://importantevents24.com/2017/06/28/global-graphene-and-2-d-materials-market/

China’s Tech Moguls Warn of AI’s Troubling Trajectory

Never mind the #singularity; #artificialintelligence could eliminate countless jobs, and perhaps reshape global politics in the process.

China has seen incredible economic and social changes over the past few decades. But could the impact of artificial intelligence take the country by surprise?

I recently spent a few weeks in the country, talking to researchers and entrepreneurs developing cutting-edge AI technologies and products.

What stuck with me—beyond the growing ambition of China’s researchers and the overall vibrancy of its tech scene—is how much people are starting to talk about the potential for AI to eliminate jobs across the country. Just a few years ago this seemed like much less of a concern. Now the issue of jobs comes up regularly during Q&A sessions and panel discussions at industry conferences.

This is perhaps partly a reflection of the country’s economic slowdown after years of spectacular rise. But I think it also reflects a growing realization among some of the world’s smartest entrepreneurs and innovators that the economic and social impact of AI is likely to be profound.

The sentiment can certainly be detected among some of China’s most prominent and influential technologists.

Speaking at an event organized in Detroit by his company last week, the CEO of Alibaba, Jack Ma, said that artificial intelligence could displace many workers in both China and the U.S., thereby heightening tensions that some fear could lead the two countries toward armed conflict. This prospect is the subject of several new books, although they do not consider a coming technological shift that could further complicate the picture.

In a compelling op-ed piece in the New York Times this week, Kai-Fu Lee, a renowned technical expert, entrepreneur, and educator and the chairman of an AI lab run by his VC firm Sinnovation Ventures, argues that AI will cause widespread job displacement in coming years.

He, too, raises the prospect that this could reshape global relations by exacerbating international inequalities. China and the U.S. may see job losses as well, but because they are so dominant in the field, they are likely to emerge as the primary beneficiaries of this technological revolution. This could turn them into global AI-fueled superpowers, generating massive amounts of wealth by hoovering up billions of users’ data and providing software-based services that touch every aspect of our lives. Other countries, meanwhile, could be left to rethink their position in the world order.

The economic and social implications of automation and artificial intelligence are, of course, notoriously hard to predict. But this in and of itself may be a cause for concern, as public perception regarding jobs and economic prospects in both the U.S. and China will be incredibly important in coming years.

https://www.technologyreview.com/s/608183/chinas-tech-moguls-warn-of-ais-troubling-trajectory/

Google Stakes Its Future on a Piece of Software

Early in 2015, #artificialintelligence researchers at #Google created an obscure piece of software called ­#TensorFlow. Two years later the tool, which is used in building machine-­learning software, underpins many future ambitions of Google and its parent company, #Alphabet. TensorFlow makes it much easier for the company’s engineers to translate new approaches to artificial intelligence into practical code, improving services such as search and the accuracy of speech recognition. But just months after TensorFlow was released to Google’s army of coders, the company also began offering it to the world for free.
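To give a sense of what "practical code" looks like in TensorFlow, here is a minimal sketch using the graph-and-session API that was current at the time (TensorFlow 1.x); the model and data are illustrative, not drawn from any Google service.

```python
import tensorflow as tf  # TensorFlow 1.x graph-and-session API

# A tiny linear model y = W*x + b, trained by gradient descent.
x = tf.placeholder(tf.float32)
y_true = tf.placeholder(tf.float32)
W = tf.Variable(0.0)
b = tf.Variable(0.0)
y_pred = W * x + b

# Mean squared error, minimized by a built-in optimizer.
loss = tf.reduce_mean(tf.square(y_pred - y_true))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        # Learn W ~= 2, b ~= 1 from four sample points.
        sess.run(train_op, feed_dict={x: [0, 1, 2, 3], y_true: [1, 3, 5, 7]})
    print(sess.run([W, b]))  # values should approach [2.0, 1.0]
```

Once defined, the same graph can be executed on CPUs, GPUs or Google's custom accelerators without changes to the model code, which is part of why the tool spread so quickly inside and outside the company.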

https://www.technologyreview.com/s/608094/google-stakes-its-future-on-a-piece-of-software/

Tuesday, June 27, 2017

Red Hat Releases Hyper-Converged Infrastructure Stack

#RedHat executive Irshad Raihan recently spoke to me about a software package -- Red Hat Hyper-Converged Infrastructure ( #HCI ) -- that the company put together to address the needs of enterprises using #hyperconverged systems. The package is made up of previously available Red Hat products, is completely open source and production ready, and is designed to address "… challenges by integrating compute and storage together on a single server, making it a well-suited solution for low-footprint remote or branch office installations and edge computing." It has the following components:

Red Hat Virtualization. The company's KVM-based virtual machine software platform.

Red Hat Gluster Storage. The company's storage virtualization technology, described as "highly scalable software-defined storage that can be converged on the same hardware as Red Hat Virtualization hosts, removing the need for additional compute infrastructures and simplifying deployment."

Red Hat Enterprise Linux. Red Hat's Linux distribution.

Ansible by Red Hat. An IT automation framework recently acquired by Red Hat, designed to provide "automated installation and configuration from a central point."
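As an illustration of the "central point" automation Ansible provides, here is a minimal sketch that drives a playbook from Python with the ansible-runner library. The directory layout and playbook name are illustrative assumptions, not part of Red Hat's documented HCI installer.

```python
import ansible_runner  # pip install ansible-runner

# Run a (hypothetical) playbook that configures a set of hosts listed in an
# inventory file, all from a single control machine. The private_data_dir is
# expected to contain inventory/ and project/deploy_hci.yml.
result = ansible_runner.run(
    private_data_dir="/opt/hci-deploy",  # illustrative path
    playbook="deploy_hci.yml",           # hypothetical playbook name
)
print(result.status, result.rc)  # e.g. "successful", 0
```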

https://virtualizationreview.com/articles/2017/06/27/red-hat-releases-hyper-converged-infrastructure-stack.aspx?m=1

How do OpenStack Neutron, Dragonflow enable SDN?

New switch and service options surface regularly, and #OpenStack #Neutron, the platform's core networking component, continues to evolve toward a more standard, interchangeable model. For admins, this simplifies more advanced network operations, such as service chaining.

OpenStack Neutron uses network functions virtualization ( #NFV ) and creates layers of network abstraction above the physical network. Protocols are attachable via overlay networking, a method of encapsulating one protocol inside another. While the overlay approach uses considerable compute power, it creates high levels of flexibility.

Because Neutron relies on a network node that becomes a bottleneck at scale, the open source community introduced the Dragonflow submodule, a software-defined networking (SDN) controller, to act as a distributed virtual router for OpenStack clusters. Dragonflow touches only a small amount of OpenStack Neutron code, yet it improves scalability and speed and simplifies management. It has an elastic architecture that reacts as virtual LANs in a cluster scale up or down.
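To give a flavour of the abstraction Neutron provides, here is a minimal sketch using the openstacksdk Python library to request a tenant network and subnet; the cloud name and address range are illustrative assumptions.

```python
import openstack  # pip install openstacksdk

# Connect using credentials from clouds.yaml (cloud name is illustrative).
conn = openstack.connect(cloud="mycloud")

# Ask Neutron for a tenant network; with an overlay driver such as VXLAN,
# this network is carved out of the encapsulated virtual address space
# rather than the physical fabric.
network = conn.network.create_network(name="demo-net")
subnet = conn.network.create_subnet(
    network_id=network.id,
    name="demo-subnet",
    ip_version=4,
    cidr="10.0.0.0/24",  # illustrative address range
)
print(network.id, subnet.cidr)
```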

http://searchcloudcomputing.techtarget.com/answer/How-do-OpenStack-Neutron-Dragonflow-enable-SDN

Dell EMC - We've cut more than 100 distributors and we plan to keep cutting

#DellEMC has cut more than 100 distributors from its global line-up, according to global channel chief #JohnByrne, who said that he intends to continue slimming down the firm's distribution fleet. In October last year, Byrne told partners at Dell EMC's first-ever partner summit in Austin that the firm was over-distributed. EMEA channel boss Michael Collins later told us in December that Dell EMC had asked its distributors to submit a request for proposal as part of the review process.

Speaking to Channelnomics Europe, Byrne confirmed that the firm has now cut 105 distributors globally, bringing Dell EMC's distribution count from 375 in October 2016 to a present-day 270. The channel chief said that he intends to reduce its distribution ranks even further and that cuts were "ongoing", but could not put a number on how many more will be removed.

According to Sarah Shields, general manager of UK channels at Dell EMC, the UK has seen very few distribution cuts and is likely to remain largely unaffected by the firm's distribution review.

"Just to be clear, in the UK we are pretty clean. Especially when you look at Exertis and Hammer, two become one, then you have Azlan, Tech Data and Avnet, so three become one, so we are in a pretty good position right now," she said. "It has become far easier for partners to navigate our portfolio, which is vast. It has been perfect timing."

Dell EMC intends to maintain a mix of regional value-added distributors as well as relationships with the three global names: Ingram, Tech Data and Arrow. Collins had previously explained that the firm intends to adopt a "two-plus-one" model, where each country will be catered for by two global players plus a regional one. Byrne said that he intends to keep the same combination on a global basis.

"There will be some regional specialisation: some countries will be two-to-one, some will be three-to-one and some will be three-to-two. But nonetheless we are looking at significant distribution rationalisation, and we want the distributors with the capability. If you're not a specialist, we want you to be selling more of our portfolio and we want to be taking share from HP, Lenovo and Cisco."

Byrne echoed Collins' sentiments in claiming that the mega-merger between distribution titans Avnet and Tech Data has helped Dell EMC with its distribution review. Byrne claims that since Dell's marriage to EMC, the "big three" distributors have been "all in" with the united company.

"When we came together, for EMC, way over 60 per cent of their distribution business was done with the big four, and now the big three, so Arrow, Ingram and Tech Data. If you look at VMware, which is also part of Dell Technologies, they are also heavily weighted towards the big three. At Dell we weren't; the big three were less of our business," he said.

"Those big disties didn't really look at us as an enterprise play and I think now there is no confusion; we are absolutely an enterprise play, truly end to end. We are definitely going big with the big three; however, we are also looking for regional specialists."

He added: "Those big three are all in with us. If you speak to them independently, our channel has been growing double digit, our distribution business has been growing double digit since we became a private organisation. We have the widest portfolio on the planet, our channel is growing fast and our share of wallet with partners is only in the low teens. None of their other existing vendors are throwing those points on the board."

https://www.channelnomics.eu/channelnomics-eu/news/3012664/dell-emc-weve-cut-more-than-100-distributors-and-we-plan-to-keep-cutting