Thursday, December 21, 2017

Dell EMC PowerEdge R740xd Server Review

In the spring of 2017, @DellEMC launched the much-anticipated refresh of the #PowerEdge lineup, moving the line from #Broadwell to the new Intel Xeon Scalable processors. The refresh included the new R740 server family, which encompasses the mainstream R740 as well as the "extreme disk" version dubbed the R740xd, which we will be looking at in this review. This powerhouse server supports a wide range of storage options, stretching up to eighteen 3.5" or thirty-two 2.5" disks for incredible capacity, or up to twenty-four 2.5" NVMe SSDs if breakneck storage I/O is more your forte. Compute and DRAM aren't left behind either, with the R740xd supporting dual Intel Xeon Scalable processors with up to 28 cores each and a peak memory footprint of 3TB. There are few applications in which this server wouldn't excel, which is exactly the direction Dell EMC took when designing this highly modular platform.

The PowerEdge R740 servers represent a great middle ground of performance and storage in a 2U box. The server can be configured with up to two Intel Xeon Scalable CPUs and 24 DDR4 DIMMs (or 12 NVDIMMs), but where these servers really shine is in how they approach storage. While the R740 offers up to 16 storage bays, the xd offers up to 32 2.5" bays, 24 of which can be NVMe. The R740xd also offers some unique storage layouts beyond the typical front-loading bays, including mid- and rear-loading bays that fit all the extra storage into the same 2U footprint. This layout lets users tailor storage to their application by mixing NVMe, SSD, and HDD in the same chassis, creating storage tiers within a single box. The R740xd supports up to 192GB of NVDIMM as well, and it can boot from RAIDed internal M.2 SSDs via an add-on card, freeing up the front-accessible bays for workload storage. Both versions are well suited for SDS, service providers, and VDI, with total storage and NVMe support being the key differences.

Also new to the R740/R740xd is increased support for GPUs and FPGAs. Both servers can support up to three 300W or six 150W cards. In this generation, Dell EMC engineered the BIOS to automatically register the airflow required by each card and provide individually tailored airflow through a feature called multi-vector cooling.

With every refresh of any server line, there are new CPUs, more RAM, and better storage and networking options. What sets many companies apart, though, is the full lifecycle management of the product. Within reason, any server with the same hardware specs will score roughly the same. The difference quickly becomes apparent in the quality of the hardware, the breadth of support software, and how easily the system can be rapidly deployed in a given environment. This is a key area in which Dell EMC sets itself apart from others in the marketplace. Dell EMC gives users key tools such as Lifecycle Controller, iDRAC, OpenManage Mobile and others. We've leveraged many of these tools in our own environment, and time and time again have walked away impressed with how simple and mature the platform has become over time.

The new PowerEdge servers have support for software-defined storage (SDS) built in from the start, lending themselves to use cases such as hyperconverged infrastructure. In its own enterprise product lineup, Dell EMC leverages the R740 in pre-built and validated solutions such as Ready Nodes for ScaleIO or vSAN, as well as the PowerEdge XC line.
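Much of that management story is also scriptable. iDRAC 9 exposes the standard DMTF Redfish REST API, so basic inventory and health checks can be pulled without booting an OS or even opening the web GUI. The snippet below is only a minimal sketch of that kind of check: the management address and credentials are placeholders, and it assumes the Redfish service is enabled and that the system resource uses Dell's usual System.Embedded.1 ID.

```python
# Minimal Redfish inventory/health check against an iDRAC 9 (sketch).
# Assumptions: Redfish is enabled, "System.Embedded.1" is the system resource
# ID (Dell's default), and IDRAC_IP / AUTH are placeholders for your own values.
import requests

IDRAC_IP = "192.168.0.120"   # placeholder management address
AUTH = ("root", "calvin")    # placeholder credentials

url = f"https://{IDRAC_IP}/redfish/v1/Systems/System.Embedded.1"
resp = requests.get(url, auth=AUTH, verify=False)  # lab use only; verify certs in production
resp.raise_for_status()
system = resp.json()

print("Model:        ", system.get("Model"))
print("Service tag:  ", system.get("SKU"))
print("Power state:  ", system.get("PowerState"))
print("Health:       ", system.get("Status", {}).get("Health"))
print("CPU count:    ", system.get("ProcessorSummary", {}).get("Count"))
print("Memory (GiB): ", system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"))
```

Whether a box is destined for a Ready Node role or general virtualization duty, a quick check like this is an easy way to confirm what actually shipped before deployment.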
For SDS deployments, the R740xd enables configurations that leverage all of the external drive bays for the SDS product itself, keeping the boot volume on internal M.2 SSDs.

The new Dell EMC PowerEdge R740xd is available now and is highly customizable. For this review we leveraged an individual R740xd with a near top-end configuration, as well as a cluster of 12 R740xds with a more modest configuration. The single R740xd we are using is built with the following:

Dual Intel Xeon Platinum 8180 CPUs
384GB of DDR4-2667 RAM (12 x 32GB)
4 x 400GB SAS SSDs
2 x 1.6TB NVMe SSDs
Mellanox ConnectX-4 Lx dual-port 25GbE DA/SFP rNDC
LCD bezel with Quick Sync 2 and OpenManage capabilities
iDRAC 9 Enterprise

Dell EMC PowerEdge R740xd Server specifications:

Form factor: 2U rackmount
Processors: up to 2 Intel Xeon Scalable CPUs with up to 28 cores each
Memory: 24 x DDR4 RDIMM/LRDIMM (3TB max)
NVDIMM support: up to 12 NVDIMMs (192GB max)
Drive bays:
Front bays: up to 24 x 2.5" SAS/SSD/NVMe (max 153TB), or up to 12 x 3.5" SAS (max 120TB)
Mid bay: up to 4 x 3.5" drives (max 40TB), or up to 4 x 2.5" SAS/SSD/NVMe (max 25TB)
Rear bays: up to 4 x 2.5" (max 25TB), or up to 2 x 3.5" (max 20TB)
Storage controllers:
Internal controllers: PERC H730P, H740P, HBA330, Software RAID (SWRAID) S140
Boot Optimized Storage Subsystem: hardware RAID of 2 x M.2 SSDs (120GB or 240GB)
External PERC (RAID): H840
External HBAs (non-RAID): 12Gbps SAS HBA
Ports:
Network daughter card options: 4 x 1GbE, 2 x 10GbE + 2 x 1GbE, 4 x 10GbE, or 2 x 25GbE
Front ports: VGA, 2 x USB 2.0, dedicated iDRAC Direct micro-USB
Rear ports: VGA, serial, 2 x USB 3.0, dedicated iDRAC network port
Video card: VGA
PCIe: up to 8 x Gen3 slots, up to 4 x16
GPU options: NVIDIA Tesla P100, K80, K40, GRID M60, M10, P4, Quadro P4000; AMD S7150, S7150x2
Supported OS: Canonical Ubuntu LTS, Citrix XenServer, Microsoft Windows Server with Hyper-V, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, VMware ESXi
Power: Titanium 750W; Platinum 495W, 750W, 1100W, 1600W, and 2000W; 48VDC 1100W; 380HVDC 1100W; 240HVDC 750W; hot-plug power supplies with full redundancy
Cooling: up to 6 hot-plug fans with full redundancy; high-performance fans available

Design and Build

The new PowerEdge servers have been redesigned not only to look slick (which they really do), but to reflect how users and applications interact with them. Across the front is the new bezel with Quick Sync 2 support and its wireless OpenManage capabilities. The same design language is matched across new Dell EMC storage offerings, including systems such as the Unity 450F all-flash array. Beneath the bezel are 24 2.5" bays that support SATA, SAS, nearline SAS, and NVMe (if configured to do so). The front can also be configured for 12 3.5" drives, if maximum capacity is a higher priority than performance. On the left-hand side are indicator lights for health and ID and the iDRAC Quick Sync 2 wireless activation button. On the right-hand side are the power button, VGA port, iDRAC Direct micro-USB port and two USB 2.0 ports.

Where others in the market are removing components in the name of cutting costs, one item Dell EMC has kept as an option for the R740xd and R740 is its front bezel. Some might be saying "Who cares?!" But the little LCD and its three buttons are incredibly useful in a datacenter environment. For instance, in a scenario where you can't access iDRAC remotely because the management network settings have changed, and you don't want to power cycle the server or go in manually with a crash cart and keyboard, the front bezel comes in very handy.
On a Dell EMC server, you can work through that little interface to reach the iDRAC settings and switch the management IP back to DHCP from static, all through the front panel. Without that functionality, many systems would need a reboot to change the setting manually. On the R740xd, this is handled completely out of band through the front-panel controls.

Taking the top cover off shows the inner workings and the massive attention to detail Dell EMC has put into the new PowerEdge servers. Many of the server components can be swapped out easily if the need arises, and clutter is kept to a minimum to improve airflow. In the system we reviewed, you can see the dual-slot M.2 boot SSD card, two RAID cards, as well as two PCIe pass-through adapters for the NVMe slots up front. Our build also includes the internal dual-slot microSD boot device for hypervisor storage. Not as apparent (but very important) is all of the cooling duct work that keeps air moving across all hardware, keeping hotspots to a minimum and allowing the server to minimize excess fan noise.

Throughout the course of our testing, we noted very little excess fan noise. Even under extreme load with the CPUs saturated, fan noise stayed far below other whitebox systems in our lab. Another interesting item we found was how the system handled higher ambient air temperatures. In our lab we enjoy using fresh air for cooling servers, so systems here can see a huge range of air temperatures. When the R740xd operated in an environment with high ambient air temperatures, it gracefully increased fan speeds but still kept noise to a minimum. This is in stark contrast to other servers and hardware in our lab that can be heard through closed doors or that drown out conversations held around them.

In both of our configurations, mid-bay storage options were not part of the build. We pulled an example shot from the PowerEdge R740xd technical manual which shows the internal 3.5" bays, as well as the 2.5" drive mounts. Few, if any, other mainstream servers offer this level of density in a single system configuration. While there are unique server builds floating around the market, many are custom built for the application, and that makes a world of difference in how such systems are managed and deployed, as well as who is administering them in the datacenter.

Flipping around to the rear of the R740xd, customers looking for maximum expansion potential should take note. Starting in the upper left-hand corner, there are 3 full-height PCIe expansion slots, and beneath them are a system identification button, a dedicated iDRAC networking port, a serial port, a VGA port, and 2 USB 3.0 ports. In the middle are two more full-height PCIe slots, in addition to a half-height slot used for the RAID card on this build. Below those is the rNDC slot, which is populated with a dual-port 25Gb Mellanox NIC. On the upper right-hand side are two more full-height PCIe slots above the dual power supplies. With two full-height PCIe slots still to spare, Dell EMC loaded in support for 4 2.5" NVMe SSDs, dual RAID cards, dual M.2 boot SSDs, as well as a dual-port 25Gb Ethernet NIC.

The rNDC slot serves as the onboard primary network interface. It can be pre-populated with a number of offerings, ranging from a quad-port 1GbE NIC up to dual-port 25Gb options from both Mellanox and Broadcom.
None of these options takes away from the server's available PCIe slots, keeping them completely open for other uses. As we've shown in our rNDC upgrade guide, this bay is easy to upgrade and quite helpful at keeping networking devices out of the main PCIe slots.

Management

The PowerEdge R740xd offers a wide range of management options, including some traditional ones as well as others that fit in the palm of your hand. The R740xd can be deployed by leveraging Dell EMC's OpenManage Mobile app, or locally like previous-generation servers. OpenManage Mobile can really make a difference, especially when you are setting up several servers in one datacenter, or you just want to get the job finished on the floor without going back and forth to your desk or bringing out a crash cart. Leveraging pre-built profiles to rapidly deploy a server with nothing more than a cell phone dramatically speeds up a process that frequently requires a crash cart in a datacenter.

An onboard WiFi radio connects users to the R740xd, and access is clamped down tight. You need local, physical access to the server, first to switch on the wireless radio from the front panel and then to scan the information tag on the front of the chassis. Once the network is turned on, you are given access to a private LAN, reachable from your phone or mobile workstation, to interface with iDRAC through the mobile app or a web browser. This blends handheld access for quick status checks or system polling with more advanced functionality and iKVM work, all without connecting any wires or crash carts. The very short range (5 to 10 feet from the server in a datacenter environment) also helps minimize the chance of anyone hopping onto the system unnoticed. When your work is completed, turning off the wireless radio disables any further access.

A welcome addition built into iDRAC is Group Manager, which allows IT admins to manage a group of R740 servers from within iDRAC itself. In our environment, we have the first R740xd acting as the group leader, requiring just one login to remotely manage multiple servers. From a central point you can get server status, power toggle each server, and quickly jump into its local iDRAC interface without having to type in additional login information.

iDRAC has been the heart of Dell management for some time now. Just recently the company announced a slew of enhancements to further improve the user experience, as well as the overall functionality of iDRAC. iDRAC9 adds a more powerful processor to quadruple its performance. It also comes with more automation, saving time for IT admins while reducing errors. All BIOS settings can now be adjusted through iDRAC instead of booting into the BIOS. The new iDRAC also brings enhanced storage configuration options such as online capacity expansion, RAID level migration, cryptographic physical drive erase, rebuild/cancel rebuild of physical drives, revertible hot-spares, and virtual disk renaming.

When we say that performance within iDRAC has dramatically improved, it's not overstated in the slightest. The new HTML5 interface is much faster in all areas, from initial login to full interaction through the iDRAC web GUI. Compared to the R730 (which wasn't a slouch when it came out), it's night and day. Among the new features encountered directly when logging into iDRAC, the management interface now has a remote view called Connection View.
This gives IT admins a look at various aspects of the server right away. Along with this is a new dashboard for remote management with iDRAC Group Manager. For further direct-connected accessibility, there is now a port for iDRAC Direct on the front of the server.

Additional features have been brought into iDRAC that allow users to better customize each server for its given application. BIOS-level customizations can now be set through iDRAC itself, without requiring a console login. This makes it easier to change a few key settings before the initial deployment, all through a simple web browser or an app on your mobile phone. For deploying a number of servers at a time, users can also build a server profile file to quickly deploy across multiple servers.

Management of installed hardware also took an interesting path with this latest-generation server. Dell EMC made it easier for users to manage PCIe add-on cards: the server detects the type of card and automatically adjusts fan speed for proper cooling. Airflow can be further tweaked with a custom LFM fan-speed setting per installed device, as well as a master offset adjustment at the server level. Many of these cooling tweaks aren't about cooling installed hardware better than previous-generation servers; instead, this is about *perfectly* cooling the hardware with the least amount of airflow required. In many servers you can set the fans to full speed and not worry about overheating equipment, but at the cost of excess power and noise. Minimizing airflow goes a long way toward cutting the power wasted spinning fans needlessly fast. At the end of the day, it also makes the datacenter a more pleasant place to work, without fans buzzing at deafening levels.

Performance

When comparing the R740xd to prior-generation systems, compute and storage potential have skyrocketed. In the Broadwell-based R730 series, the top-spec CPU (E5-2699 v4) offers an aggregate 96.8GHz in a dual-processor configuration. With the Intel Xeon Scalable line inside the PowerEdge R740xd, the top-end CPU (Platinum 8180) pushes that number to 139.66GHz. At face value that's a 44% jump (a quick back-of-the-envelope check of those figures follows below), and it doesn't even account for clock-speed improvements at those higher core counts or faster DRAM. On the storage side, NVMe SSDs have also taken on a bigger role in R740xd configurations, with offerings now topping out at 24 NVMe SSDs, where four used to be the peak on the R730xd.

As we look at the improvements made in the latest-generation Dell EMC PowerEdge server, we will touch on local performance here, and in a later review cover clustered performance across a group of eight servers leveraging storage from a Dell EMC Unity 450F all-flash array. This layout is geared to help interested buyers see how a well-equipped server performs on its own, as well as how these systems interact in a highly virtualized environment within a Dell EMC ecosystem. Bringing all of these systems together are Mellanox ConnectX-4 25Gb rNDC NICs as well as Dell EMC Networking Z9100 100Gb switches.

In our section looking at local system performance, we have a well-equipped R740xd which we are testing with two different NVMe combinations. One uses two Samsung 1.6TB PM1725a NVMe SSDs, while the second uses four Toshiba 1.6TB PX04P NVMe SSDs.
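For reference, the aggregate clock figures quoted above work out to roughly sockets times cores times base clock. A quick check using the published base clocks (2.2GHz for the 22-core E5-2699 v4, 2.5GHz for the 28-core Platinum 8180) lands at 140GHz and about a 45% gain, slightly above the 139.66GHz and 44% quoted, so treat this as back-of-the-envelope math rather than an official figure:

```python
# Back-of-the-envelope aggregate clock comparison (sketch).
# Aggregate GHz is taken here as sockets * cores * base clock, using the
# published base clocks: 2.2GHz (E5-2699 v4, 22 cores), 2.5GHz (Platinum 8180, 28 cores).
def aggregate_ghz(sockets: int, cores: int, base_ghz: float) -> float:
    return sockets * cores * base_ghz

r730_top = aggregate_ghz(2, 22, 2.2)   # dual E5-2699 v4    -> 96.8 GHz
r740_top = aggregate_ghz(2, 28, 2.5)   # dual Platinum 8180 -> 140.0 GHz

print(f"R730 (2x E5-2699 v4):      {r730_top:.1f} GHz")
print(f"R740xd (2x Platinum 8180): {r740_top:.1f} GHz")
print(f"Increase: {100 * (r740_top / r730_top - 1):.0f}%")   # ~45%
```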
With the Intel Platinum 8180 CPUs inside, we had plenty of CPU cycles to throw at our storage workloads, giving us a chance to show the difference in moving from 2 to 4 NVMe SSDs within the same application workload. We also push storage to the brink inside an ESXi 6.5 environment with a multiple-worker vdbench test, with workloads ranging from basic four-corners testing up to VDI traces.

Sysbench MySQL Performance

Our first local-storage application benchmark consists of a Percona MySQL OLTP database measured via SysBench. This test measures average TPS (transactions per second), average latency, and average 99th percentile latency. Each Sysbench VM is configured with three vDisks: one for boot (~92GB), one with the pre-built database (~447GB), and a third for the database under test (270GB). From a system resource perspective, we configured each VM with 16 vCPUs and 60GB of DRAM, and leveraged the LSI Logic SAS SCSI controller.

Sysbench Testing Configuration (per VM)
CentOS 6.3 64-bit
Percona XtraDB 5.5.30-rel30.1
Database Tables: 100
Database Size: 10,000,000
Database Threads: 32
RAM Buffer: 24GB
Test Length: 3 hours (2 hours preconditioning at 32 threads, 1 hour at 32 threads)

We compared the performance of two Sysbench runs on the PowerEdge R740xd: one with 4 VMs hosted on two NVMe SSDs, and another with 4 VMs each given a dedicated NVMe SSD. In neither test was CPU load pushed to the breaking point of 100%; we saw roughly 60% and 80% CPU utilization for the two benchmarks, meaning there was still room to grow with additional VMs and more DRAM. In the first test, with 2 NVMe SSDs hosting the Sysbench VMs, aggregate TPS came to 11,027; in the second test, with 4 NVMe SSDs, aggregate TPS increased to 13,224. This compares to 10,683 TPS from the PowerEdge R630 we benchmarked about a year ago, also with E5-2699 v4 CPUs and 4 NVMe SSDs.

Looking at average latency in our Sysbench workload, the 2 NVMe SSD result came in at 11.61ms, whereas the 4 NVMe SSD result came in at 9.69ms.

In our worst-case 99th percentile latency measurement, 2 NVMe SSDs measured 24.5ms, while 4 NVMe SSDs came in at a very stable 20.7ms.

SQL Server Performance

StorageReview's Microsoft SQL Server OLTP testing protocol employs the current draft of the Transaction Processing Performance Council's Benchmark C (TPC-C), an online transaction processing benchmark that simulates the activities found in complex application environments. The TPC-C benchmark comes closer than synthetic performance benchmarks to gauging the performance strengths and bottlenecks of storage infrastructure in database environments. Each SQL Server VM is configured with two vDisks: a 100GB volume for boot and a 500GB volume for the database and log files. From a system resource perspective, we configured each VM with 16 vCPUs and 64GB of DRAM, and leveraged the LSI Logic SAS SCSI controller. While the Sysbench workloads above saturate the platform in both storage I/O and capacity, the SQL test focuses on latency performance. This test uses SQL Server 2014 running on Windows Server 2012 R2 guest VMs, stressed by Dell's Benchmark Factory for Databases. While our traditional usage of this benchmark has been to test large 3,000-scale databases on local or shared storage, in this iteration we focus on spreading four 1,500-scale databases evenly across our servers.
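Both the Sysbench numbers above and the SQL Server numbers that follow are reported as aggregates across the four VMs. Assuming aggregate TPS is the simple sum of the per-VM results and the reported latencies are per-VM averages (a reasonable reading, though the exact reduction isn't spelled out here), the bookkeeping amounts to something like the sketch below; the per-VM split shown is invented purely for illustration, chosen so the totals line up with the 4 x NVMe Sysbench aggregates.

```python
# Aggregating per-VM benchmark results (sketch; hypothetical helper, not part
# of the actual test harness). Assumes aggregate TPS is the per-VM sum and the
# reported latencies are per-VM means.
from statistics import mean

def aggregate(per_vm_results):
    """per_vm_results: list of (tps, avg_latency_ms, p99_latency_ms) tuples,
    one per Sysbench or SQL Server VM."""
    tps = sum(r[0] for r in per_vm_results)
    avg_lat = mean(r[1] for r in per_vm_results)
    p99_lat = mean(r[2] for r in per_vm_results)
    return tps, avg_lat, p99_lat

# Made-up per-VM split, chosen only so the totals match the reported
# 4 x NVMe Sysbench aggregates (13,224 TPS, ~9.7ms avg, ~20.7ms p99):
example = [(3300, 9.7, 20.5), (3310, 9.6, 20.9), (3290, 9.8, 20.6), (3324, 9.7, 20.8)]
print(aggregate(example))   # -> (13224, ~9.7, ~20.7)
```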
SQL Server Testing Configuration (per VM)
Windows Server 2012 R2
Storage Footprint: 600GB allocated, 500GB used
SQL Server 2014
Database Size: 1,500 scale
Virtual Client Load: 15,000
RAM Buffer: 48GB
Test Length: 3 hours (2.5 hours preconditioning, 30-minute sample period)

Similar to how we ran our Sysbench benchmark, we tested configurations with 2 NVMe SSDs as well as 4 NVMe SSDs. With 4 VMs spread over 2 drives, aggregate TPS in Benchmark Factory measured 12,631, whereas with 4 NVMe SSDs it measured 12,625. While this is a bit counterintuitive for our particular configuration of the benchmark, the latency measured below tells the real story.

With 2 NVMe SSDs, we saw average latency of 6.5ms across our four SQL Server workloads, while with 4 NVMe SSDs that number dropped to just 4ms. Both tests achieved this while using just 20% and 22% of the CPU, respectively. The PowerEdge R740xd with dual Intel Platinum 8180 CPUs has an immense amount of compute and storage potential to throw at these types of database workloads without breaking a sweat.

VDBench Workload Analysis

Our last section of local performance testing focuses on synthetic workload performance. Here we leveraged four NVMe SSDs in VMware ESXi 6.5 and evenly spread out 16 worker VMs, each with two 125GB vmdks mounted, for a total storage footprint of 4TB. This type of test is useful for showing what real-world storage metrics look like with the overhead of a virtualized environment. When it comes to benchmarking storage, application testing is best, and synthetic testing comes in second place. While not a perfect representation of actual workloads, synthetic tests do help to baseline storage devices with a repeatability factor that makes it easy to do apples-to-apples comparisons between competing solutions. These workloads offer a range of testing profiles, from "four corners" tests and common database transfer-size tests to trace captures from different VDI environments. All of these tests leverage the common vdbench workload generator, with a scripting engine to automate and capture results over a large compute testing cluster. This allows us to repeat the same workloads across a wide range of storage devices, including flash arrays and individual storage devices. On the array side, we use our cluster of Dell PowerEdge R730 servers.

Profiles:
4K Random Read: 100% read, 128 threads, 0-120% iorate
4K Random Write: 100% write, 64 threads, 0-120% iorate
64K Sequential Read: 100% read, 16 threads, 0-120% iorate
64K Sequential Write: 100% write, 8 threads, 0-120% iorate
Synthetic Database: SQL and Oracle
VDI Full Clone and Linked Clone Traces

Looking at peak read performance, the Dell EMC PowerEdge R740xd offered sub-millisecond 4K read latency up to just over 800K IOPS, starting at 0.21ms. At its peak, the R740xd measured 978K IOPS at a latency of 3.8ms.

Looking at 4K peak write performance, the R740xd started off with a latency of 0.12ms and stayed below 1ms until it hit around 730K IOPS. At its peak, the R740xd hit over 834K IOPS at 2.4ms.

Switching to 64K peak read, the R740xd started off at 0.27ms latency and stayed below 1ms until it hit around 150K IOPS. It peaked at just over 170K IOPS with 3ms of latency, finishing with a bandwidth of 10.644GB/s.

For 64K sequential peak write, the R740xd started off at 0.14ms and stayed under 1ms until it hit just over 65K IOPS. The R740xd hit its peak at 93K IOPS with a latency of 2.7ms.
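As a quick sanity check on the sequential numbers, throughput is roughly IOPS multiplied by the transfer size. The peak-IOPS and peak-bandwidth points aren't necessarily sampled at the same instant, so the arithmetic only lands in the right ballpark of the measured figures, but it's a useful cross-check; the sketch below is ours, not part of the vdbench tooling.

```python
# Rough bandwidth cross-check for the 64K sequential results (sketch).
# Bandwidth ~= IOPS * transfer size; measured peak IOPS and peak bandwidth
# may come from different sample points, so expect only ballpark agreement.
BLOCK = 64 * 1024  # 64KiB transfer size in bytes

def gb_per_s(iops: int, block_bytes: int = BLOCK) -> float:
    return iops * block_bytes / 1e9  # decimal GB/s

print(f"64K read  @ 170K IOPS ~= {gb_per_s(170_000):.2f} GB/s (measured 10.644 GB/s)")
print(f"64K write @  93K IOPS ~= {gb_per_s(93_000):.2f} GB/s (measured 5.83 GB/s)")
```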
At that peak, the 64K write workload corresponded to a bandwidth of 5.83GB/s.

In our SQL workload, the R740xd started at 0.21ms of latency and stayed under 1ms until between 700K and 750K IOPS. It peaked at 760K IOPS at just 1.29ms.

In the SQL 90-10 benchmark, the R740xd started with a latency of 0.2ms and stayed under 1ms until just under 600K IOPS. The R740xd peaked at over 634K IOPS with 1.57ms of latency.

The SQL 80-20 test saw the R740xd start with a latency of 0.2ms and stay under 1ms until it moved past 481K IOPS. The R740xd peaked at nearly 538K IOPS with 1.7ms of latency.

With the Oracle workload, the R740xd started with a latency of 0.2ms and stayed under 1ms until just over 400K IOPS. The R740xd peaked at 470K IOPS with a latency of 2.5ms.

With the Oracle 90-10, the R740xd started off at a latency of 0.2ms and stayed under 1ms for the entire benchmark. It peaked at 636K IOPS with a latency of 0.98ms.

With the Oracle 80-20, the R740xd started off at a latency of 0.2ms and stayed under 1ms until just under 529K IOPS. It peaked at 533K IOPS with a latency of 1.14ms.

Switching over to VDI Full Clone, the boot test showed the R740xd starting with a latency of 0.21ms and staying under 1ms until around 490K IOPS. The R740xd peaked at 539K IOPS with a latency of 1.9ms.

The VDI Full Clone initial login started off at 0.17ms latency and stayed under 1ms until around 175K IOPS. The R740xd peaked at 218K IOPS with a latency of 4.1ms.

The VDI Full Clone Monday login started off at 0.2ms latency, staying under 1ms until over 180K IOPS. It peaked at 215K IOPS with 2.36ms of latency.

Moving over to VDI Linked Clone, the boot test showed performance staying under 1ms up to roughly 350K IOPS, later topping out at a peak of 376K IOPS with an average latency of 1.36ms.

In the Linked Clone profile measuring initial login performance, we saw sub-millisecond latency up to around 130K IOPS, before a climb to a peak of 154K IOPS at 1.64ms.

In our last profile, VDI Linked Clone Monday login, the 1ms barrier was crossed at around 109K IOPS, with the workload continuing to a peak of 151K IOPS and 3.36ms average latency.

Conclusion

The new Dell EMC PowerEdge R740xd is the "extreme disk" version of the R740. Within its 2U footprint it can house up to 32 2.5" drives, including up to 24 NVMe drives. The server can bring out the potential of all that high-performance storage by leveraging up to two Intel Xeon Scalable processors and up to 3TB of memory. Dell EMC didn't stop at hardware improvements, either: the new server comes with support for SDS built in, making it ideal for use cases such as HCI, and the platform is modular and highly configurable to meet nearly any customer's needs.

In our application performance benchmarks, the Dell EMC PowerEdge R740xd was tested in two configurations: one with 4 VMs hosted on two NVMe SSDs, and another with 4 VMs each given a dedicated NVMe SSD. For Sysbench, the 4 NVMe test scored 13,224 TPS with 10ms average latency and 21ms worst-case latency, while the 2 NVMe configuration scored 11,027 TPS with 12ms average latency and 24ms worst-case latency. For our SQL Server test, the 4 NVMe configuration hit an aggregate 12,625 TPS with an aggregate latency of 4ms, while the 2 NVMe configuration produced an aggregate 12,631 TPS with an aggregate latency of 6.5ms.

In our VDBench workload analysis, the R740xd really shone in a virtualized ESXi 6.5 environment.
In our 4K random tests we saw sub-millisecond performance in reads up to 800,000 IOPS and in writes up to 730,000 IOPS. In 64K sequential reads the R740xd held sub-millisecond latency up to 150,000 IOPS and finished with a bandwidth of 10.644GB/s. For 64K writes, the server stayed sub-millisecond up to 65,000 IOPS with a bandwidth of 5.83GB/s. In our SQL workloads we again saw strong sub-millisecond performance (up to 700,000 IOPS, 600,000 IOPS, and 481,000 IOPS for the base workload, 90-10, and 80-20, respectively), but what was most impressive was that each topped out well over 500,000 IOPS with latency between only 1.29ms and 1.7ms. The Oracle workloads also showed strong sub-millisecond performance, with the 90-10 profile running the entire benchmark under 1ms and peaking at 636,000 IOPS. The R740xd peaked at 539,000 IOPS, 218,000 IOPS, and 215,000 IOPS in the VDI Full Clone boot, initial login, and Monday login tests (with peak latencies of 1.9ms, 4.1ms, and 2.36ms), and in our Linked Clone benchmarks the server peaked at 376,000 IOPS, 154,000 IOPS, and 151,000 IOPS (with peak latencies of 1.36ms, 1.64ms, and 3.36ms).

Dell EMC is clearly excited about the launch of the new server line, and specifically about the R740xd, the centerpiece of the PowerEdge lineup. We've logged many weeks with the new systems; thirteen R740xds make up the core backbone of our test lab. From the work we've done, the servers have impressed everywhere, from manageability via iDRAC and OpenManage Mobile to performance with the NVMe bays. With all of the additional flexibility the xd flavor of the #R740 offers, it's no surprise that @DellEMC is using it as the linchpin in several of its SDS offerings, including #vSAN #ReadyNodes, #ScaleIO Ready Nodes, #StorageSpacesDirect Ready Nodes, #VxRail and the #XC740xd (@Nutanix), for instance.

In total, the #PowerEdge R740xd is the most complete server offering we've seen to date in terms of build quality, system design, storage flexibility, performance and ease of management, making it a clear leader in the space and our first-ever Editor's Choice in the server category.

http://www.storagereview.com/dell_emc_poweredge_r740xd_server_review
