Dell, EMC, Dell Technologies, Cisco,

Showing posts with label Elastic. Show all posts

Tuesday, January 30, 2018

10 Hot Dell EMC Storage Products

Storage For A Modern #DataCenter World

The key to @Dell EMC storage sales returning to growth lies not only in a robust storage market, but also in a robust product line, and in the second half of the year the company proved it could go toe-to-toe with the best with major enhancements across its offerings. While some of the enhancements stemmed from improved hardware, including new Intel Xeon processors and higher-performance flash storage, the key enhancements were software-related. CRN looks at 10 key products in the Dell EMC storage lineup, one that offers the broad capabilities to match the company's position as the world's largest storage vendor.

Elastic Cloud Storage

Elastic Cloud Storage is a software-defined, cloud-scale, object storage platform that brings the availability and serviceability of traditional storage arrays to commodity server infrastructures in a private cloud offering. ECS provides comprehensive protocol support for unstructured object and file workloads on a single platform that lets customers manage globally distributed storage infrastructure under a single global namespace with anywhere access to content. Dell EMC in May updated the platform to include the ECS Dedicated Cloud Service to provide hybrid deployment models. It also introduced ECS Next, which provides enhanced data protection, management and analytics.

Integrated Data Protection Appliance

Dell EMC in May introduced its Integrated Data Protection Appliance, or IDPA, a purpose-built, pre-integrated appliance bringing together data protection storage, software, search and analytics to protect data across a wide range of applications and platforms. According to Dell EMC, the IDPA offers fast deployment of data protection for physical and virtual workloads, and lets coverage seamlessly extend to the cloud with native cloud-tiering for long-term retention to private or public clouds. The appliances feature encryption, fault detection and healing, along with an average 55:1 deduplication rate for data both on-premises and in the cloud.
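The quoted 55:1 average deduplication rate translates directly into physical capacity savings. A quick back-of-envelope sketch (the 550 TB figure is a made-up example, not from the announcement):

```python
def physical_tb(logical_tb, dedup_ratio=55):
    """Physical capacity needed for a given amount of logical (protected) data."""
    return logical_tb / dedup_ratio

# At the quoted 55:1 average rate, 550 TB of logical backup data
# needs only ~10 TB of physical capacity.
print(physical_tb(550))  # 10.0
```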

Isilon

Dell EMC provides scale-out NAS offerings aimed at fast-growing unstructured data in its Isilon line. The Intel Xeon-powered Isilon portfolio, which helps manage demanding enterprise file workloads, is available with all-flash, hybrid flash or spinning disk. It provides high-capacity utilization and automated tiering, and can scale from tens of terabytes to tens of petabytes. Dell EMC in May enhanced its software-only Isilon SD Edge to deploy on a single Dell EMC PowerEdge 14th generation server to make it available at a lower cost point for smaller remote offices. The updates also include support for VMware vSphere 6.5, as well as deployment using virtual storage platforms like ScaleIO and VMware vSAN.

http://www.crn.com/slide-shows/storage/300098041/10-hot-dell-emc-storage-products.htm/pgno/0/3

Thursday, December 28, 2017

Red Hat takes different path in app design as enterprise scales out

The year-long embrace of the @Kubernetes container orchestration management system across the enterprise culminated in @Amazon Web Services Inc.’s announcement last month of its @Elastic #Container Service for the #opensource orchestration platform. The announcement was not a major surprise, given the news in August that AWS would be joining the #CloudNativeComputingFoundation, but it was well received by developers at the recent KubeCon conference in Austin, Texas, nonetheless. One major reason for the positive response is that Kubernetes has increasingly become a significant element in the design and deployment of applications. For open-source companies like Red Hat Inc., this will be a key focus as the container revolution marches boldly into 2018. “Kubernetes has been the single biggest driving force to shift all enterprise architecture from scale up to scale out. That has created a whole number of ripple effects in how applications are designed within the enterprise,” said Steve Watt (pictured), chief architect of emerging technologies at Red Hat.

https://siliconangle.com/blog/2017/12/27/red-hat-takes-different-path-in-app-design-as-enterprise-scales-out-kubecon/

Tuesday, September 26, 2017

BlueData Provides Ability to Deploy Containerized Hadoop and Spark for Dell EMC Elastic Data Platform

SANTA CLARA, CA--(Marketwired - Sep 26, 2017) - #BlueData®, a leading #BigDataasAService company, announced that #DellEMC is using BlueData EPIC software to help power its new Elastic Data Platform. Using BlueData EPIC and the #DellEMCElasticDataPlatform, customers can spin up instant clusters for #Hadoop, #Spark, and other Big Data tools running on #Docker #containers.

"We've been working together with Dell EMC across our product, services, and sales teams to help our joint customers get the most out of their Big Data investments," said Kumar Sreekanti, CEO of BlueData. "This latest collaboration is further evidence that our software has become the solution of choice for containerized Big Data deployments in the enterprise."

The Dell EMC Elastic Data Platform provides on-demand access to Big Data analytics and data science workloads (e.g. Hadoop, Spark, machine learning, and other use cases) in a highly scalable, flexible, and secure multi-tenant architecture. The platform delivers fast and easy provisioning, simplified deployments, cost-efficiency, and assurance that governance and security requirements are being met for Big Data. The Dell EMC Elastic Data Platform includes Dell EMC infrastructure, software from BlueData as well as BlueTalon for data-centric security, and Dell EMC Professional Services:

- Deploying Big Data Environments: BlueData provides the ability to quickly create elastic, multi-tenant Big Data environments for data science and analytics using Docker containers -- for Big-Data-as-a-Service whether on-premises, in the cloud, or in a hybrid architecture.
- Separating Compute and Storage: When aggregate datasets grow larger than a few hundred terabytes, it makes sense to separate compute from storage to allow both to scale independently. For enterprises needing scale-out storage, Dell EMC Isilon offers a compelling ROI and ease of use with scalability.
- Enforcing Centralized Policy: BlueTalon provides the consistent creation and enforcement of data access policies across environments supporting a diverse set of users, tools and data systems.
- Automating and Integrating: Dell EMC Professional Services have automated the deployment of the above components and provided an open and flexible set of interfaces to integrate into existing Big Data environments.

To learn more about the Elastic Data Platform and BlueData's collaboration with Dell EMC, visit the BlueData booth (#433) at the Strata Data Conference in New York City this week, September 26th to 28th.

http://m.marketwired.com/press-release/bluedata-provides-ability-deploy-containerized-hadoop-spark-dell-emc-elastic-data-platform-2235027.htm

Tuesday, July 25, 2017

IBM Combines All-Flash and Storage Software Optimized for Hortonworks

ARMONK, N.Y., July 24, 2017 /PRNewswire/ -- #IBM (NYSE: IBM) today announced a new #allflash, high-performance data and file management solution for enterprise clients running exabyte-scale #bigdata analytics, cognitive and #AI applications. The combined flash and storage software solution has been certified with the #Hortonworks Data Platform (HDP) to provide clients with more choice in selecting the right platform for their big data analytics on data processing engines like Hadoop and Spark.

The new IBM All-Flash #ElasticStorage Server ( #ESS ), powered by IBM #SpectrumScale exabyte-scale software-defined storage, increases data throughput by 60 percent [1] over previous solutions. Performance improvements produce a storage solution powerful enough to run complex applications, from machine and deep learning clusters to image recognition of cancer cells. The #IBM #SpectrumScale storage software combines data from silos across the enterprise into a single view in order to run analytics on #Hadoop and #Spark. By unifying file, object and Hadoop #HDFS data storage, #IBMESS with IBM Spectrum Scale shortens time to results while reducing storage capacity requirements by more than 55 percent [2] for Hadoop and Spark workload data.

Beyond performance and economic benefits, the solution provides user data with essential features such as redundant data paths, distributed erasure coding and end-to-end checksums. Based upon millions of hours of production-system data reported over the past two years, the IBM ESS design has delivered five nines of availability against unscheduled downtime.

"We're seeing an increasing appetite on the part of clients to unlock value within their own business data where competitive advantage can be found, but too often they face challenges that inhibit ready access to this information," said Ed Walsh, general manager, IBM Storage and Software Defined Infrastructure.
"IBM ESS with Spectrum Scale has been developed to make data available by reducing obstacles that prevent users from quickly accessing and processing the growing oceans of differing data types stored within their disparate, isolated storage systems." IBM All-Flash ESS represents a new generation of data and file management. To help clients more effectively manage their hot and cold data. IBM ESS with IBM Spectrum Scale, an exabyte-scale software defined storage solution, supports a wide variety of network protocols while providing clients the elasticity to transparently tier data across flash, disk, tape and cloud storage systems. With all IBM ESS and IBM Spectrum Scale models certified on Hortonworks, we fulfill a commitment to deliver higher exabyte scale storage densities and related performance throughput required for programs like IBM's support for the Department of Energy CORAL project. IBM ESS integration with IBM Spectrum Scale, which is being developed to support anticipated NVMe-based storage solutions from IBM, sets the stage for increased proficiency in managing throughput of Big Data analytics workloads. Availability IBM will be taking orders on the new IBM All-Flash Elastic Storage Server (ESS) today but the solution will be generally available on August 25, 2017. To learn more about the IBM ESS portfolio please visit our website. For more details about the latest IBM All-Flash ESS, please see our blog "Fast data: the future of big data." 1 - Comparing the IBM ESS GS6 25GB/s against the new IBM ESS GS6S 40GB/s maximum throughput 2 - Comparing open source HDFS 3x data replication v. IBM Spectrum Scale RAID 1.22x erasure coded data protection.
http://www.prnewswire.com/news-releases/ibm-combines-all-flash-and-storage-software-optimized-for-hortonworks-300492966.html

Thursday, May 4, 2017

Amazon Web Services cuts EC2 prices once again, as much as 21 percent in some cases

It’s once again cheaper to use #AmazonWebServices. The leading cloud computing company announced price cuts Wednesday across the services it provides through its #ElasticCompute Cloud. This is the 61st time AWS has cut prices on its services, the company said in a blog post. AWS price cuts seemed to arrive monthly at one point a few years ago, and while the pace of those cuts has slowed down a bit, the overall trend continues as AWS usage grows and it exerts pricing power over its suppliers.

Depending on what type of instance you wish to purchase, and the region in which you prefer to run your workloads, the cuts range between 5 percent and 21 percent. More details are available here.

As overall usage of AWS increases, the company is able to cut prices at a fairly steady clip. It most recently cut prices in November, and the cuts don’t seem to have affected the division’s bottom line: AWS operating income has increased sharply year-over-year during the last two quarters.

https://www.geekwire.com/2017/aws-cuts-ec2-prices-much-21-percent-cases/

Monday, November 14, 2016

Dell EMC unveils new additions to its all-flash portfolio

#Dell #EMC has announced multiple new additions and updates to its all-flash storage portfolio designed to support customers’ storage transformation initiatives. New offerings include the new #VMAX 250F all-flash storage, #DataDomain protection storage systems with Cloud Tier software and updates to the Dell EMC #Unity family of storage systems.

According to Dell EMC, businesses are putting unprecedented pressures on their midrange storage arrays, with ever-increasing sizes and numbers of files that need to be stored and accessed instantly. To get the most out of an array – in terms of value for money and capacity per drive – it’s important for organisations to demand high-capacity drives, data compression and tiering to the cloud. Smart businesses are choosing storage platforms that act as a hub, compressing data and directing it to the storage target that best meets their business objectives.

The new offerings include a new generation of Dell EMC Data Domain protection storage systems with Cloud Tier software, able to deliver industry-leading speed and scalability as well as enhanced protection for virtual environments. The deployment of the new Data Domain Cloud Tier software within Data Domain, according to Dell EMC, increases the total volume of data that can be managed through a single appliance by 200 percent, with a maximum logical capacity of 150PB. Data Domain Cloud Tier establishes Data Domain as the only protection storage to natively tier de-duplicated data to public, private, or hybrid clouds for long-term retention, including Dell EMC #ElasticCloud Storage and #Virtustream Storage Cloud. The new Data Domain family features four new models: DD6300, DD6800, DD9300 and DD9800.

“Protecting the modern data centre is a critical pillar of any IT transformation strategy,” said Beth Phalen, SVP and GM, Data Protection Solutions, Dell EMC.
“Business agility requires customers to protect rapidly growing data sets and minimise downtime no matter what happens in their data centers. To meet these needs we are introducing four new Data Domain systems with integrated flash technology that deliver new levels of scale and performance, allow customers to instantly restore large application environments and further reduce the time it takes to protect large volumes of data. Our data protection everywhere strategy ensures data is protected – affordably and securely – no matter where it lives.”

Dell EMC has also broadened its portfolio of all-flash storage with the VMAX 250F, which delivers inline compression and data reduction yielding 4x improvements in storage economics, scales up to 1PB and supports the industry’s latest 7.6TB and 15TB enterprise flash drives. According to the company, it will be delivering improved storage economics across its entire VMAX All Flash array portfolio with its latest HYPERMAX operating system update, which now includes inline compression to increase storage efficiency. Dell EMC also introduced the new Dell EMC #VPLEX for All-Flash platform, which includes the software and capacity licenses required to provide the continuous availability and non-disruptive data mobility of the latest VPLEX VS6 platform to the modern all-flash data centre.

“The VMAX platform is known for and relied upon for mission-critical applications by companies worldwide,” said Fidelma Russo, SVP and GM, Enterprise Storage and Software, Dell EMC. “Recognising that application requirements vary in scale from very large to mid-size, we designed the VMAX 250F to bring the VMAX brand and proven reliability to a broader set of customers.
And, with the introduction of the updated Dell EMC HYPERMAX operating system, the entire VMAX All Flash family will enjoy the benefits of inline compression and non-disruptive migration.”

Lastly, Dell EMC introduced significant, free, non-disruptive, data-in-place software updates to the Unity family of storage systems that deliver 4x storage efficiencies within the same footprint via inline compression and other all-flash optimisations. Expanding Unity’s capabilities for the modern data center, Dell EMC also introduced integrated file tiering to the public cloud (including Virtustream), along with intelligent and predictive analytics through the new CloudIQ cloud-based storage analytics platform. Unity customers will gain inline compression capabilities with the latest version of UnityOE software, further enhancing data storage efficiency with features such as thin provisioning, snapshots and “file system shrink”. With the ability to compress block-based LUNs hosted in All-Flash pools, Unity’s new compression capabilities are designed to help customers save up to 70 percent in storage capacity costs.

Unity All-Flash now provides up to 384TB in a 2U rack through new support for 15.36TB 3D NAND (SAS Flash 4) flash drives, doubling Unity’s drive density. In addition, the Unity 600(F) doubles the number of drives supported, increasing total usable capacity up to 10PB. Extending the life of mixed SSDs, Dell EMC is also introducing intelligent wear leveling with Unity, allowing customers to mix different types of flash drives within an All-Flash pool.
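As a rough sanity check of the quoted Unity density figure, 384TB in 2U is consistent with a 25-drive enclosure of the 15.36TB drives mentioned above. Note the 25-drive count is an assumption inferred from the arithmetic, not stated in the article:

```python
# Assumed drive count per 2U enclosure (inferred, not from the article).
drives_per_2u = 25
drive_tb = 15.36  # 3D NAND (SAS Flash 4) drive capacity quoted above

total_tb = round(drives_per_2u * drive_tb, 2)
print(total_tb)  # 384.0
```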

http://www.cnmeonline.com/news/dell-emc-unveils-new-additions-to-its-all-flash-portfolio/


Sunday, October 23, 2016

The History of Apache Hadoop's Support for Amazon S3

#Hadoop ’s ability to work with #Amazon S3 storage goes back to 2006 and the issue HADOOP-574, “FileSystem implementation for Amazon S3”. This filesystem client, “s3://”, implemented an inode-style filesystem atop S3: it could support bigger files than S3 itself then allowed, and some of its operations (directory rename and delete) were fast. The s3 filesystem allowed Hadoop to be run in Amazon’s EMR infrastructure, using S3 as the persistent store of work. This piece of open source code predated Amazon’s release of EMR, “ #Elastic #MapReduce”, by over two years. It’s also notable as the piece of work which gained Tom White, author of “Hadoop: The Definitive Guide”, committer status.

A weakness of the s3:// filesystem client was that it wasn’t compatible with any other form of data stored in S3: there was no easy way to write data into S3 for Hadoop MapReduce to read, or for the results to be written back. (Remember — at the time, Hadoop meant MapReduce only.) This was addressed in 2008 by the HADOOP-931 work and the “S3 Native Filesystem” client, “S3N”. S3N paths have URLs which begin “s3n://”, followed by the name of the S3 “bucket” and the path underneath. S3N made collecting data for Hadoop-in-EC2 clusters easier, as well as allowing the output of work to be published directly for other applications.

Since that date, s3n:// has been the ubiquitous prefix on URLs used when Apache Hadoop reads data from S3. But not, notably, from Amazon’s EMR: it uses a scheme, “s3:”, which resembles s3n but has a closed-source implementation underneath. Amazon have done some good work there, and the Apache code has lagged. The S3N code has been relatively stable since 2008, with intermittent updates to the underlying jets3t library. It didn’t get much attention, however. The functionality of the jets3t library slowly fell behind that offered by Amazon’s own SDK — which added better authentication, advanced upload operations, and more.
There was also work going on by Hadoop-in-cloud users such as Netflix, whose S3mper code addressed S3’s eventual consistency problem —so allowing it to be used as a direct output of analytics jobs.
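The URL layout described above (scheme, bucket, path underneath) can be illustrated with a small sketch. This uses only the Python standard library and is not Hadoop's actual Path/FileSystem code; the example URL is made up:

```python
from urllib.parse import urlparse

def split_s3_url(url):
    """Split a Hadoop-style S3 URL into (scheme, bucket, key)."""
    parts = urlparse(url)
    return parts.scheme, parts.netloc, parts.path.lstrip("/")

print(split_s3_url("s3n://my-bucket/logs/2008/part-00000"))
# ('s3n', 'my-bucket', 'logs/2008/part-00000')
```

The same bucket/key split applies whether the scheme is s3://, s3n://, or EMR's closed-source s3:; only the client implementation behind the scheme differs.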

https://dzone.com/articles/the-history-of-apache-hadoops-support-for-amazon-s-1

Wednesday, August 3, 2016

Latest Amazon Elastic MapReduce release supports 16 Hadoop projects

#Amazon announced the release of #Elastic #MapReduce (EMR) 5.0.0 today, which includes, among other things, support for 16 open source #Hadoop projects. As #AWS continues to hone its various tools to help customers manage myriad enterprise functions in the cloud, this latest one is aimed at data scientists and other interested parties looking to manage big data projects with Hadoop. For those of you unfamiliar with Hadoop, “[It’s] fundamentally infrastructure software for storing and processing large data sets,” according to Mike Gualtieri, a Forrester analyst who covers this space. It’s different from conventional data processing software in that it distributes both the storage and processing over a set of nodes (which can scale to the thousands), providing a much more efficient system for processing large amounts of data. What’s more, it’s a tremendously popular open source Apache project (with a really cute mascot) with a massive ecosystem around it, which is continually adding projects to help fill in holes and requirements.
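The distributed storage-and-processing model described above can be illustrated with a toy, in-process version of the map/shuffle/reduce pattern that Hadoop runs across many nodes; word count is the classic example. This sketch is illustrative only, not Hadoop or EMR code:

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group all values by key (Hadoop does this across nodes)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values into a final result."""
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "big deal"])))
print(counts)  # {'big': 2, 'data': 1, 'deal': 1}
```

In a real cluster, the map and reduce functions run in parallel on many nodes against distributed storage; the logical shape of the computation is the same.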

https://techcrunch.com/2016/08/02/latest-aws-elastic-mapreduce-release-supports-16-hadoop-projects/

Sunday, July 10, 2016

Spark on AWS helps ignite big data workloads

Developers turn to #Hadoop for big data workloads, and #Spark is a particularly enticing Hadoop service on #AWS. Spark teams up with #Elastic #MapReduce for fast processing and versatility. AWS users have access to a growing portfolio of application services, especially those related to data analysis.
And with most organizations awash in more data than they know how to handle, AWS has become an important name in #bigdata tools that provide simpler and cost-effective ways to build an otherwise complex system, such as running Spark on AWS.

Thursday, June 9, 2016

NTP Software Expands Platform Convergence and Earns Certification on EMC's Elastic Cloud Storage (ECS) S3 Connector

BILLERICA, Mass., June 8, 2016 /PRNewswire/ -- #NTP Software announced today its certification on #EMC® #Elastic Cloud Storage (ECS™) S3 Connector. NTP Software VFM™ continues to offer flexibility and convergence for enterprise customers with multiple storage platforms.

"We are pleased to extend our support to additional EMC technologies with solutions that are integrated across platforms and highly scalable," saidBruce Backa, CEO of NTP Software. "With EMC certifications, we offer compelling options for on-premise and private cloud archiving.  Clients will see immediate relief managing their backups and a reduction in on-going operating costs that will more than pay for the new technology."

EMC's ECS delivers all the benefits of a public cloud in end-customer datacenters at lower costs than public cloud solutions.  This shared global storage can store both small and large files, and it is built from the ground up to support multiple protocols.  Ultimately, ECS simplifies unstructured data storage for traditional and next-generation apps with global access and protection.

Simple Storage Service (S3) on ECS is a web-based interface that allows users to upload and download practically any file or object using web protocols that pass smoothly from location to location.
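Under the hood, S3-style web access is signed HTTP. As an illustration of what an S3-compatible client speaks, here is a sketch of the classic AWS Signature Version 2 scheme (HMAC-SHA1 over a canonical string). The key, date and resource are made-up examples, the optional amz-headers portion is omitted, and a real client would use an SDK rather than hand-rolling this:

```python
import base64
import hashlib
import hmac

def sign_v2(secret_key, verb, content_md5, content_type, date, resource):
    """Compute an AWS Signature v2 over the canonical string-to-sign."""
    string_to_sign = "\n".join([verb, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical credentials and request, for illustration only.
sig = sign_v2("example-secret", "GET", "", "",
              "Thu, 09 Jun 2016 12:00:00 GMT", "/my-bucket/report.csv")
print("Authorization: AWS example-access-key:" + sig)
```

Because ECS exposes the same S3 interface, a client signing requests this way can talk to an ECS endpoint the same way it talks to Amazon S3.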

http://www.prnewswire.com/news-releases/ntp-software-expands-platform-convergence-and-earns-certification-on-emcs-elastic-cloud-storage-ecs-s3-connector-300279338.html

Tuesday, March 22, 2016

Converged Data Platforms: Part of a Larger Trend

The modern computing paradigm began with a completely converged data center: the mainframe. As soon as the required compute capacity exceeded the largest mainframe, distributed computing was born. What started as #SunMicrosystems’ notion that “the network is the computer” now implies that the cloud—public, private or hybrid—is the computer; a fantastically complex, powerful, and mammoth computer. But that computer is incomplete, because it is not yet the completely integrated, boundless, or frictionless application and data platform that one would get from a “limitless mainframe.”

That is the direction the software industry, and open source software movement, has been heading since the network became the computer: the converged (virtual) data center. You can tick off a progression of technologies that enable the converged data center: a common microprocessor architecture (x86), an open source server operating system ( #Linux ), an open source way to converge server instances (Xen/KVM hypervisor), an open source containerization of applications (Docker/Mesos) and a way to store and manage data at a vast scale ( #Hadoop ). You can add software-defined networking, database services (SQL/NoSQL), application execution engines (#Spark, #Elastic, #MapReduce), and a management harness ( #OpenStack ) to more or less complete the picture.

http://insidebigdata.com/2016/03/21/converged-data-platforms-part-of-a-larger-trend/