
Monday, October 30, 2017

Unified Threat Management Market Size, Share, Trend & Growth Forecast to 2022

MarketStudyReport.com has added a new report, "Global Unified Threat Management Market Size, Status and Forecast 2022," to its research database. The report spans 104 pages and contains 116 tables and figures. It provides information on industry trends, demand, top manufacturers, countries, materials and applications. Global Unified Threat Management Market Research Report 2017 estimates the drivers, restraints, and opportunities pertaining to the global Unified Threat Management market over the 2016-2022 timeframe. Delivering the key insights pertaining to this industry, the report provides an in-depth analysis of the latest trends, present and future business scenario, market size and share, and commercialization potential of the Unified Threat Management industry over the coming five years. Request a free sample PDF of this research @ https://www.marketstudyreport.com/request-a-sample/654156//?utm_source=MEDG-SP

The competitive landscape of the Unified Threat Management market profiles the key players participating in the global business space. These include @CheckPointSoftware, @WatchGuardTechnologies, @Cisco, @JuniperNetworks, @IBM, @SonicWall, @HillstoneNetworks, @Fortinet, @Gateprotect, @Huawei, @Kerio, @NETASQ, @PandaSecurity, WatchGuard, @Clavister, @Cyberoam and @Trustwave. The report also details the market positioning, strategies, and revenue procured by each of these manufacturers. The industry is analyzed in terms of products, applications, regions, and the key manufacturers holding a prominent share of the global revenue. The product landscape of the Unified Threat Management market comprises Software Platforms and Support Services. Each product has been analyzed in terms of its production, capacity, and consumption. The report further details the market share to be procured by each of these products over the coming years. Enquire about a discount on the Global Unified Threat Management Market Research Report @ https://www.marketstudyreport.com/check-for-discount/654156//?utm_source=MEDG-SP

In terms of end users or applications, the report is segmented into BFSI, IT & Telecom, Government, Healthcare, Manufacturing and Other. A detailed analysis of the contribution of these application segments to the overall market revenue and volume has been included in the report. The regional segmentation of the report includes the United States, EU, Japan, China, India and Southeast Asia. Detailing the consumption and production rate of Unified Threat Management in these regions, the report outlines the growth curve of this industry in terms of market volume and share over 2016-2022.
Major Points in the Table of Contents of Global Unified Threat Management Market Size, Status and Forecast 2022:
- Industry Overview of Unified Threat Management Market
- Global Unified Threat Management Market Competition Analysis by Players
- Global Unified Threat Management Market Company (Top Players) Profiles
- Global Unified Threat Management Market Size by Type and Application (2012-2017)
- United States Unified Threat Management Market Development Status and Outlook
- EU Unified Threat Management Market Development Status and Outlook
- Japan Unified Threat Management Industry Development Status and Outlook
- China Unified Threat Management Market Development Status and Outlook
- India Unified Threat Management Industry Development Status and Outlook
- Southeast Asia Unified Threat Management Industry Development Status and Outlook
- Market Forecast by Regions, Type and Application (2017-2022)
- Unified Threat Management Industry Dynamics
- Market Effect Factors Analysis
- Research Findings and Conclusion

Related Reports: Global and Chinese Unified Threat Management Industry, 2017 Market Research Report. The 'Global and Chinese Unified Threat Management Industry, 2012-2022 Market Research Report' is a professional and in-depth study on the current state of the global Unified Threat Management industry with a focus on the Chinese market. The report provides key statistics on the market status of the Unified Threat Management manufacturers and is a valuable source of guidance and direction for companies and individuals interested in the industry.

https://www.medgadget.com/2017/10/unified-threat-management-market-size-share-trend-growth-forecast-to-2022.html

Windocks releases first Data Delivery Platform based on Docker's Container Technology and SQL Server Containers

@Windocks announced the beta release of Windocks 3.0, the first data delivery platform built on @Docker 's container technology, with support for #SQLServer #containers. Windocks presents a new option for enterprises as a modern, open data delivery platform.

Modern: Windocks is the first enterprise data delivery platform built on Docker's container technology. In contrast with outdated, expensive @Solaris #UNIX systems that require dedicated storage administrators, Windocks runs on Windows and provides developers and DBAs with self-service data access and management. Windocks is priced for wide adoption, at less than $20/month per data environment, and runs wherever Windows is supported, for use on any public cloud and on premises.

Open: Windocks integrates with existing enterprise storage systems, providing simplified access to existing enterprise data. Windocks also adds support for creating and managing SQL Server clones on Windows file shares, supports delivery of cloned data environments to any enterprise application, and is the first to provide support for delivery of clones to Microsoft's new Docker containers. Windocks 3.0 also adds support for MySQL, with other enterprise data environments pending.

Enterprise Data Delivery: Windocks delivers terabyte-class data environments in seconds, with incremental updates instantly available. Teams simplify operations by using environments on a shared server, reducing VM usage by 5:1 or more. Windocks is popular for supporting Jenkins and Continuous Integration strategies, simplifying the delivery of data environments within multi-tier application environments, including .NET and Java with SQL Server and other databases. Finally, Windocks enhances data governance and regulatory compliance with data images stored in versioned, auditable data image repositories.

"Enterprise customers are asking for an alternative to expensive, complex solutions built on Solaris UNIX," said Paul Stanton, Windocks co-founder. "Windocks 3.0 delivers the first container-native data delivery solution that fits any budget. Windocks empowers software developers and database administrators with tools to create, manage, and deliver data environments more simply and affordably than ever. In a single step SQL Server DBAs create clonable images, and users self-service environments with one click on the Windocks web application."

About Windocks: Windocks delivers cloned database environments for organizations around the globe, enabling them to access, manage, and protect data faster, more efficiently, and more simply than existing data systems. Database cloning combined with Docker-based containers enables modern software development, delivery, and reporting. Windocks is the first open, modern platform for enterprise data delivery.
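For readers new to the idea of databases in containers, the sketch below shows the general pattern using the Docker SDK for Python and Microsoft's public SQL Server image. It is only a generic illustration of running SQL Server in a container, not Windocks' own API or clone-delivery workflow; the image tag and environment variables are assumptions based on Microsoft's published images.

```python
# Generic sketch: start a SQL Server instance in a container via the Docker SDK
# for Python. This illustrates the "database in a container" idea only; it is
# not Windocks' API or its clone-delivery workflow.
import docker

client = docker.from_env()

container = client.containers.run(
    "mcr.microsoft.com/mssql/server:2017-latest",  # assumed public image tag
    environment={
        "ACCEPT_EULA": "Y",
        "SA_PASSWORD": "YourStrong!Passw0rd",      # placeholder credential
    },
    ports={"1433/tcp": 1433},                      # expose the SQL Server port
    name="sql-dev-environment",
    detach=True,
)
print(container.name, container.status)
```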

http://www.prweb.com/releases/2017/10/prweb14854867.htm

IBM Simulates a 56-Qubit Machine

#Quantumcomputers can, in theory, vastly outperform conventional computers using components known as qubits. Now @IBM says it has simulated a 56- #qubit quantum computer on an old-fashioned #supercomputer, a task some had previously suggested was beyond the capabilities of conventional machines. These findings do not mean that @Google and others should abandon their quantum computer projects, the researchers add. Instead, they suggest that conventional supercomputers could help make sure quantum computers actually work by double-checking their results. Classical computers flick transistors either on or off to symbolize data as ones and zeroes. In contrast, quantum computers use quantum bits or qubits that, because of the surreal nature of quantum physics, can be in a state of superposition where they can essentially act as both 1 and 0. The superpositions that qubits adopt let them each perform two calculations at once. If two qubits are quantum-mechanically linked, or entangled, they can help perform 2^2 or four calculations simultaneously; three qubits, 2^3 or eight calculations; and so on. In principle, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the visible universe. "Eventually quantum computers will get to so many qubits that conventional computers can't catch up," says study lead author Ed Pednault at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York. Previous research suggested that at roughly 50 qubits, quantum computers would achieve "quantum supremacy," solving problems beyond the practical limits of conventional machines, in terms of either computational complexity, available memory, or both. By the end of 2017, Google hopes to make a 49-qubit chip in a push toward quantum supremacy. Now IBM says it has simulated a 56-qubit quantum computer using the Vulcan supercomputer at Lawrence Livermore National Laboratory in California. The scientists detailed their findings Oct. 16 on the arXiv preprint server. Whereas a 56-qubit quantum computer can theoretically perform 2^56 operations simultaneously, IBM's accomplishment involved dividing this task into 2^19 slices that each essentially consisted of 2^36 operations. This strategy meant the researchers only needed about 3 terabytes of memory for their simulated quantum computer. In contrast, earlier in 2017, a 45-qubit simulation at the Swiss Federal Institute of Technology in Zurich required 500 terabytes of memory. "During our work, we were able to consistently accomplish things we thought impossible a week earlier," says study senior author Robert Wisnieff at IBM's Thomas J. Watson Research Center. "It was like having your birthday every day you came to work." The IBM researchers stressed these kinds of simulated quantum computers are not meant to replace quantum computers. For instance, whereas a perfect 56-qubit quantum computer can perform the experiments "in 100 microseconds or less, we took two days, so a factor of a billion times slower," Wisnieff says. Instead of deflating experiments aimed at achieving quantum supremacy, such as Google's, "this sort of simulation is actually necessary to verify the type of experiment they're planning," says theoretical computer scientist Scott Aaronson at the University of Texas at Austin, who did not take part in this research. By simulating quantum computers, conventional supercomputers can double-check the results of actual quantum computers to see if they are working properly.
"We're at the point where we're able to start fabricating machines on the order of 50 qubits, but we know they're not at all ideal in their behavior," Wisnieff says. If the results from quantum computers fail to match those of simulations, researchers know they may have something to fix. Simulated quantum computers can also help researchers explore the best applications for actual quantum computers by finding out which problems they solve better than conventional machines, Wisnieff says. Simulating errors in quantum computers could also help scientists figure out the causes of problems in actual quantum computers, he adds. It remains uncertain what the limit is for how many qubits conventional machines can simulate. "At this point, we don't know exactly how far we can go," Wisnieff says. "But it's just a matter of time before quantum computers ultimately win."

https://spectrum.ieee.org/tech-talk/computing/hardware/ibms-quantum-leap-simulates-56qubit-machine

Comparing Top Hyper-Converged Infrastructure Solutions: the 2017 Edition

I love working with #converged ( #CI) and #hyperconverged ( #HCI) infrastructure technologies. These types of #datacenter systems ( @DELLEMC @Hewlett @Scale @Nutanix @Simplivity @Cisco @VMware etc.) have allowed administrators from all sorts of environments to realize some big benefits when it comes to optimization and architecture. First, it’s important to define and understand HCI and note that there are numerous similarities between HCI and CI environments. Both are deployed as blocks, and both converge critical resources to deliver higher levels of density. However, the biggest difference comes in how these environments are managed. In HCI, the management layer (storage, for example) is controlled at the virtual layer. Specifically, HCI incorporates a virtual appliance that runs within the cluster. This virtual controller runs on each node within the cluster to ensure better failover capabilities, resiliency, and uptime.

Benefits of hyper-converged infrastructure include:

- Integrated VDI, convergence, and hypervisor management
- Rapid-scale deployment of VMs and applications
- Smaller data center footprint
- Greater levels of application, desktop, and user densities
- Direct integration with the software layer
- Creation of hyper-scale capabilities
- Leveraging all-flash systems
- Integration with cloud systems
- Increased capabilities around resiliency, business continuity, and disaster recovery

HCI is a quickly growing market. According to IDC’s Worldwide Quarterly Converged Systems Tracker, worldwide converged systems vendor revenues increased 6.2 percent year over year to $3.15 billion during the second quarter of 2017. The market consumed 1.78 exabytes of new storage capacity during the quarter, which was up 5.6 percent compared to the same period a year ago.

Related: Can Windows Server Make Hyper-Converged Infrastructure a Boring Commodity?

"The converged systems market is benefiting from an expansion into new environments and a new set of customers," said Eric Sheppard, research director, Enterprise Storage & Converged Systems. "This expansion is driven by products that are offering new levels of automation, tighter integration between technologies, and, in many cases, software-defined solutions based on scale-out architectures."

What’s Changed?

IDC’s numbers indicate that Dell EMC has taken the top spot when it comes to hyper-converged infrastructure. However, it needs to be noted that the Dell EMC XC appliances are powered by Nutanix software, which is arguably the engine that’s driving a lot of that growth. Also, let’s not forget that Nutanix sells its own hyper-converged infrastructure appliances too.

Here’s an overview of the top players in hyper-converged infrastructure and their solutions:

Cisco HyperFlex

In April of last year, I wrote a post about what HyperFlex was and wasn’t good at. Well, at version 2.5, I can honestly say that HyperFlex has come a long way. First of all, HyperFlex Connect, a standalone HTML5 interface for the management and orchestration of HyperFlex from any device, makes management way simpler. Connect acts as an extensible interface that is hypervisor agnostic and has built-in automation with RESTful API.

They also added higher-capacity all-flash nodes, which are now coupled with their 40-Gbps UCS fabric. All of this translates to big performance enhancements, more density, better VDI multi-tenancy, and optimized resource controls. Another big addition was native replication of clusters, which helps protect data and applications. HyperFlex now also includes data-at-rest security options using self-encrypting drives.


Finally, remember that CLiQr acquisition? We’re seeing even deeper integration with Cisco CloudCenter and HyperFlex. That being said, integration with existing Cisco systems has been made much easier as well. That is, working with existing UCS domains and incorporating HyperFlex has been greatly simplified. So, if you’re a Cisco shop that wants to support remote office or leverage Cisco’s hyper-converged infrastructure, HyperFlex is a great option!

HPE (SimpliVity)

HPE has been in the CI space for quite some time. However, they became a real HCI player with the 2017 acquisition of SimpliVity. In its own space, SimpliVity was a solid product, going head-to-head with Nutanix. Starting out in 2009, they quickly gained more than a thousand partners, with customers worldwide. They had some very cool innovations that were key to their success, which HPE is now leveraging.

What is now called the HPE OmniStack Data Accelerator Card performs inline deduplication, compression, and data optimization across primary and backup storage repositories, offloading this processing so VMs suffer no performance penalty. According to HPE SimpliVity, the median data efficiency rate is 40:1.


From there, HPE SimpliVity Data Virtualization Platform operates as a virtual controller on vSphere ESXi and abstracts data from the underlying hardware. Designed for a bunch of use cases, the HPE SimpliVity 380 HCI architecture is a solid option for organizations looking to support remote office or new virtualization deployments.

Dell EMC

We’re seeing some real muscle-flexing from Dell EMC (and VMware). At the last VMworld, Dell EMC and VMware announced two joint solutions, VxRail 4.5 and VxRack. In its newest version, VxRail 4.5 includes automation and lifecycle management for VMware’s vSAN and vSphere. The really cool part here is that upgrading and patching software is now highly automated. This helps reduce configuration errors and allows admins to focus on more valuable operations. This level of automation is awesome for DevOps, higher levels of scale, and fast deployments.

Updates also include multi-node scaling, which automates the scaling of a single VxRail appliance to multi-node environments. Finally, you’ll see some cool updates around REST-based APIs for programmatic lifecycle management. You can now manage a single appliance or entire clusters.

I didn’t forget about VxRack, the beefier version of VxRail. At VMworld we saw improved capabilities around a self-contained system via integration with VMware Cloud Foundation for simplified management of VMware vSphere 6.5, vSAN 6.6, and the network virtualization product NSX 6.3.


The other cool part here is the hybrid cloud option. You can now run Dell EMC’s Enterprise Hybrid Cloud (EHC) on top of VxRack. When it comes to Dell EMC, whether you’re a smaller shop or a large data center, there are options for your use cases here. Plus, deep integration with your underlying VMware environment makes this tech a must when examining HCI.

Nutanix

I’d call Nutanix one of the original companies behind the hyper-converged infrastructure revolution. And they’re still here and making waves. The Acropolis Operating System (AOS), formerly known as the Nutanix Operating System, has continued to see updates and improvements. Their recent 5.1 release allows customers to add performance to their clusters simply by increasing their SSD tier, for example. This is accomplished by adding an all-flash node to an existing hybrid cluster, and the new SSDs are seamlessly added to existing storage containers.

Furthermore, instead of doing forklift migrations from hybrid systems to all-flash systems, users can add all-flash nodes to existing clusters and retire their older hybrid gear.

In 5.1, we also saw capacity optimization improvements. According to Nutanix, the erasure coding algorithm is more intelligent in 5.1: every time a node gets added, new EC strips (and existing EC strips, on writes) automatically take advantage of the new nodes. This functionality improves capacity utilization across the cluster while still maintaining the same protection levels as the cluster grows and shrinks.
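For readers unfamiliar with the term, the toy sketch below shows the basic idea behind erasure coding with a single XOR parity block: you get rebuild capability at a fraction of the capacity cost of mirroring. This is only the general concept; it does not reproduce Nutanix's actual EC-X strip sizes or placement logic.

```python
# Toy illustration of erasure coding with a single XOR parity block. This shows
# the general capacity-vs-protection trade-off only; Nutanix's EC-X uses its own
# strip sizes and placement logic, which this sketch does not attempt to model.
from functools import reduce

def xor_blocks(blocks):
    # XOR all equal-length blocks together, byte by byte.
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

data_blocks = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]  # 4 data blocks
parity = xor_blocks(data_blocks)                     # 1 parity block (a 4+1 strip)

# Lose any single block and rebuild it from the survivors plus parity:
lost_index = 2
survivors = [b for i, b in enumerate(data_blocks) if i != lost_index]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data_blocks[lost_index]

# Capacity overhead: a 4+1 strip stores 5 blocks for 4 blocks of data (1.25x),
# versus 2x for straight mirroring of the same data.
print("rebuilt:", rebuilt, "| overhead: 1.25x vs 2x for mirroring")
```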


Another really cool function has been the further enhancement around containerization. In 5.0 we saw some cool support for things like Docker. In 5.1 we see even deeper integration with the Acropolis Container Services.

Another cool addition is general availability for support of XenServer. This helps further support workloads like XenApp, XenDesktop, and virtual NetScaler appliances.

Overall, Nutanix is an absolute leader in the hyper-converged infrastructure space. However, their strength isn’t just in the hardware. Their software architecture around AOS is truly impressive. Nutanix should be a consideration in almost any HCI scenario.

Scale Computing

Starting out in 2007, Scale Computing is one of the last standalone HCI vendors on the market. With thousands of customers and deployments, this is a mature solution offering serious benefits to the customer. The new HC3 architecture has big improvements around storage deduplication, multi-cluster remote management, disaster recovery capabilities, and even user management. Plus, HC3 allows you to deploy single appliances – instead of the previously required minimum of three. You’d still want a cluster for HA and primary production systems, but if you’re a smaller business and have no need for all that extra horsepower, the single appliance will work for you.

Scale has also done a solid job getting into the automation space. They’ve created an automated, intelligent mechanism to allocate storage across the tiers. According to the company, this tuning capability allows you to increase the allocation of flash for workloads that have higher I/O requirements while decreasing the allocation for virtual disks and workloads that have minimal I/O needs. I’ve always been a fan of Scale Computing. If you’re looking to support smaller offices and are on a budget (but still want awesome tech), look to Scale as a solid option.


Let’s be clear -- I know this isn’t the full list of hyper-converged infrastructure vendors. Plus, there are going to be more hardware vendors supporting CI using software (like Pivot3 or Nutanix OEM) to deliver HCI solutions. Lenovo is a great example of that. Furthermore, we’re seeing a broadening market around whitebox integration with HCI software options. Whichever way you approach it, the HCI landscape continues to change and evolve.

Goodbye Atlantis Computing; Hello Hive-IO

Atlantis Computing has been in the market for a long time. If you’ve worked with virtualization technologies (XenApp and XenDesktop in particular), you’ll know about Atlantis. They came to market with their ILIO products and then further impacted the industry with USX. Then they released their Hyperscale HCI appliance and attempted to enter a very volatile market. Sometimes it works, and sometimes it doesn’t. The concepts behind Hyperscale were actually really awesome, but there were challenges with the hardware, where it could be deployed, and issues with the deployment itself.

And so, Atlantis Computing’s assets were sold to Hive-IO, a young software-defined-focused organization. According to Hive-IO, they’ll continue to support all of Atlantis’ products and work to retain their essence and technology to help expand Hive-IO's storage offering to include intelligent software-defined solutions. The focus will revolve around an area which both Hive-IO and Atlantis know very well: VDI.

Final Thoughts

Over the past few quarters, I can honestly say that CI and HCI have dominated a lot of the projects we’ve been working on. We’ve seen use cases in healthcare, government, pharma, education, manufacturing, and other verticals. Furthermore, we’re seeing growth in how HCI is being deployed within remote and branch locations.

For HCI to be successful, make sure you know your use case and where you’re deploying the environment. Do your best to reduce complexity and fragmentation by leveraging hyper-converged infrastructure systems that easily integrate with existing data center and virtualization components. Finally, I always recommend testing these systems out. Deploying HCI in parallel with your existing environment can help you better understand utilization, best practices, and where the design can positively impact your specific requirements.

http://www.datacenterknowledge.com/hyper-convergence/comparing-top-hyper-converged-infrastructure-solutions-2017-edition

Free Your Data Intensive Applications in the Flash Era

Today’s modern data intensive applications, covering all the current buzz-generating markets like #IOT, #analytics, #machinelearning, and multifaceted research initiatives, require unique approaches to #processing, #networking and #storage. These applications call on diverse processor strategies to address the different workloads, with @AMD, @Intel Core and #XeonPhi, #GPUs, and Power processors being deployed heterogeneously to address the different requirements. Obviously, higher performance requirements are a perpetual trend, but a super-linear increase in I/O pressure due to tougher I/O patterns, higher concurrency, and heavy read access is outstripping the default high-performance I/O infrastructure’s ability to keep up. Extremely parallel file systems have dealt well with homogeneous large sequential I/O, a workload pattern that is just not found in these emerging applications.

Instead of taking the destructive approach of wholesale replacement of existing file system technologies, DDN took the approach of leveraging the rapid commoditization of flash memory with a software-defined storage layer that enables applications by sitting between the application and the file system. In fact, IME can be deployed to cost-effectively extend the life of existing file system solutions. With a scale-out approach, DDN’s Infinite Memory Engine® (IME®) presents an I/O interface that sits above, but remains tightly integrated with, the file system to transparently eliminate I/O bottlenecks. IME unlocks previously blocked applications by delivering predictable job performance, faster computation against data sets too large to fit in memory, and acceleration for I/O-intensive applications. By doing this in a completely software-defined approach, IME is server- and storage-agnostic, and application transparent, maintaining file system semantics so no code changes, scheduler changes or API usage are required.

Not only does IME serve as an accelerating shim between application and file system, it also helps address the data center challenges found in high performance environments. IME’s strategic use of flash can reduce space, power and cooling requirements by 10x to 300x over legacy storage approaches. This allows administrators to continue to scale applications to meet workload demands while maintaining high performance, independent of the amount of storage capacity behind the file system. Developed from scratch specifically for the flash storage medium, IME delivers a unique approach to performance, while also delivering data security and durability. IME eliminates traditional I/O system slowdowns during extreme loading, offers dial-in resilience through erasure coding on a per-file or per-client basis, and delivers lightning-fast rebuilds thanks to its fully declustered, distributed data architecture. These capabilities combine to bring freedom to complex applications through lower cost, while delivering deeper insight and smarter productivity. To learn how you can leverage the power of IME to improve the efficiency of your compute, storage and networking, visit the DDN website.

https://www.hpcwire.com/2017/10/30/free-data-intensive-applications-flash-era/

Sunday, October 29, 2017

Watch: In-Depth Interview With HPE's Meg Whitman

@HPE CEO @MegWhitman sat down for an in-depth interview that touched on the company's new field compensation strategy, her reluctance to run for elected office and why hardware still matters. Published Oct. 27, 2017

http://crntv.crn.com/detail/videos/most-viewed/video/5627259596001/watch:-in-depth-interview-with-hpe-s-meg-whitman

PlexxiPulse – Plexxi Hyperconverged Networking Fabric Integrates Real-Time Analytics

“HCI is a good way to build a private cloud very efficiently and simply, and a lot of our customers are looking at building a private cloud with HCI. Our focus is on data center networking and building very converged networking systems that are easy to scale out.” – Mat Mathews, SDxCentral In case you missed it, this summer I had the opportunity to catch up with editor Jessica Hardcastle and share my thoughts on #hyperconverged infrastructure ( #HCI ) and our recent partnership with #bigdata firm @DecisionLab. My comments appeared in SDxCentral, where we discussed how HCI is on the rise, intent-based networking, and the benefits of @Plexxi ’s partnership with Decision Lab which allows for the integration of real-time analytics into our platform. You can check out the full story here for more details. Below please find a few of our top picks for our favorite news articles of the week. ComputerWeekly: Hyper-converged infrastructure and disaster recovery By Chris Evans Hyper-converged infrastructure delivers cost savings through a combination of slimmed-down hardware and greater operational improvements. The operational benefits of hyper-converged allow organisations to focus on the needs of the application rather than feed the infrastructure beast. Disaster recovery is a major component of any infrastructure design, including hyper-converged infrastructure. So, can hyper-converged architecture reduce the impact of implementing disaster recovery? TechTarget: HCI software platforms come with challenges, benefits By Clive Longbottom Hyper-converged infrastructure comes with a degree of vendor lock-in and can lack the basic freedom to grow each resource separately. But now, with software-only hyper-converged, it is possible for users to provide their own hardware systems and apply HCI software over the top. This essentially mimics the functionality of an HCI appliance but offers more flexibility. Hyper-converged infrastructure is good for running multiple workloads on the same underlying hardware and is often marketed as a cloud-in-a-box system with elastic resource allocation, high resource utilization and low manual systems management requirements. What do you think? Share your opinions about these stories with us on Twitter. The post PlexxiPulse – Plexxi Hyperconverged Networking Fabric Integrates Real-Time Analytics appeared first on Plexxi.

http://news.sys-con.com/node/4187293

Worldwide Software Defined Data Center Market 2021 Overview: Opportunities, Impact of Drivers, Key Vendors, Types, Applications, Forecast 2017-2021

The Software Defined Data Center Market report aims to provide information on opportunities, market size, shares and growth factors, helping decision makers make sound investment estimations. The global market report also identifies and analyses emerging trends along with major drivers, challenges and opportunities in the Software Defined Data Center industry, with analysis of vendors, geographical regions, types and applications. The Software Defined Data Center market is projected to grow at a CAGR (compound annual growth rate) of 20.17% during 2017-2021.

Software Defined Data Center market, top vendors: @Cisco Systems, @Citrix Systems, @DellTechnologies, @Hewlett Packard Enterprise, @Pluribus Networks, @6WIND, @Arista Networks, @Avaya, @BigSwitch Networks, @Brocade Communications Systems, @Ciena, @DataCore, and many more. Get a sample PDF of the report @ https://www.absolutereports.com/enquiry/request-sample/10672446

Software Defined Data Center Market dynamics:
- Market driver: elimination of vendor lock-in issues. For a full, detailed list, view our report.
- Market challenge: configuring legacy infrastructure. For a full, detailed list, view our report.
- Market trend: emergence of software-defined WAN (SD-WAN). For a full, detailed list, view our report.

Geographical regions: APAC, Europe, North America, ROW. Browse the detailed TOC, tables, figures, charts and companies mentioned in the Software Defined Data Center Market report @ https://www.absolutereports.com/global-software-defined-data-center-market-2017-2021-10672446

The TOC of the Software Defined Data Center Market report covers: opportunity in the market; market research methodology; market landscape; growth rate by 2021; market segmentation by type and application; geographical segmentation; market drivers; market challenges; market trends; vendor landscape; strengths and weaknesses of the key vendors; list of exhibits.

http://www.satprnews.com/2017/10/28/worldwide-software-defined-data-center-market-2021-overview-opportunities-impact-of-drivers-key-venders-types-applications-forecast-2017-2021/

IoT May Need Sub-50-Cent SoCs

SANTA CLARA, Calif. — The future of the #InternetofThings could depend on a chip that sells for less than 50 cents, according to a panel of technologists at #ARM TechCon here. SoCs will need new kinds of memories, connectivity and sensors to scale to the dimensions the #IoT will demand, but the path to get there is still unclear, they said. Today’s #SRAM and #flashmemories, Bluetooth interfaces and sensors consume too much power to serve volume IoT nodes in 2027, panelists said. They sketched out a few possibilities for what may replace them. Ideally, a 2027 end-node SoC will consume just 10 microwatts/MHz and send and receive data on a radio drawing only 1 or 2 milliwatts, said Jason Hillyard, a director of software in ARM’s wireless group. His “slideware SoC” used a new architecture built of subthreshold circuits suited for its energy-harvesting power source. One of the widest chasms in getting to such a device is the lack of suitable memories, said Lucian Shifren, a senior principal research engineer at ARM. (Chart: New memories won't hit power and cost needs for IoT, ARM said. Images: ARM.) “Memory energy is a major problem going forward,” Shifren said. “Everyone is betting on ReRAM and STT to replace SRAM and flash, but I don’t think either one will work…and I don’t see any viable replacements out there,” he said, noting STT is expensive and uses too much write energy while ReRAM and phase-change alternatives use relatively high voltages. ARM has made test chips using new memories based on academic research, said Mike Muller, ARM’s chief technologist, in an interview with EE Times. However, he declined to share any milestones from the test chips. “The ideal is a non-volatile, low-power design that’s good for logic as well as memory...there will be an expanded range of memories but they will not replace flash,” Muller said. Meanwhile, microcontrollers based on subthreshold circuits are already hitting a memory wall, said Scott Hanson, CTO of Ambiq Micro. “There are use cases where I’d like to write a lot of data locally, but I can’t do that today,” he said.
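To see why figures like 10 microwatts/MHz and a 1-2 milliwatt radio matter, here is a back-of-envelope power budget for a heavily duty-cycled end node. The core and radio numbers come from the article; the clock rate, duty cycles, sleep floor and harvested-power budget are illustrative assumptions, not figures from the panel.

```python
# Back-of-envelope energy budget for a 2027-class IoT end node, using the
# article's 10 uW/MHz core and ~2 mW radio figures. The clock rate, duty
# cycles, sleep floor and harvesting budget are illustrative assumptions.
CORE_UW_PER_MHZ = 10        # from the article
CLOCK_MHZ = 24              # assumed clock rate
RADIO_MW = 2                # from the article (upper end)

core_active_uw = CORE_UW_PER_MHZ * CLOCK_MHZ   # 240 uW while computing
radio_active_uw = RADIO_MW * 1000              # 2000 uW while transmitting

core_duty = 0.01            # assumed: core awake 1% of the time
radio_duty = 0.001          # assumed: radio on 0.1% of the time
sleep_uw = 1                # assumed deep-sleep floor

avg_uw = core_active_uw * core_duty + radio_active_uw * radio_duty + sleep_uw
print(f"average draw: {avg_uw:.1f} uW")        # ~5.4 uW

harvested_uw = 10           # assumed indoor-solar/vibration harvesting budget
print("fits the harvesting budget:", avg_uw <= harvested_uw)
```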

https://www.eetimes.com/document.asp?doc_id=1332517

Dell EMC altering deal registration to be independent

TORONTO – @DellEMC ’s ongoing channel journey took an interesting side route. Dell EMC worldwide channel chief @JohnByrne told Canadian partners at the 2017 #DellForum that the deal registration program will be moved out of the channel organization and will be run independently. Also, Dell EMC channel partners should expect more data analytics to come their way through the new deal registration group. Some of the issues that led to this decision stem from the old EMC channel program. Byrne told CDN that partners did not use it much, while Dell had a robust deal registration program. The KPIs (key performance indicators) in deal registration were based around time of response, which was roughly four hours. “The journey we are on is about selling more of the infrastructure portfolio. I do not need to respond in four hours. That is too responsive and it goes from direct and channel so it did not make sense for deal registration to sit in the channel and so we are now running it independently,” Byrne said. The new independent team will have the same rules of engagement. Byrne went on to say that when something bad happens, such as a conflict, it is not a channel rep versus a direct rep; instead, it will be an independent body with human resources personnel involved. The two-strike rule Byrne instituted earlier this year will still be there.

https://www.computerdealernews.com/news/dell-emc-altering-deal-registration-to-be-independent/57180

Firmly disagree that AI needs to be regulated: VMware’s Pat Gelsinger

@PatGelsinger has been chief executive officer of @VMware Inc.—a unit of @EMC Corp., which is now part of @DellTechnologies—for the last five years. EMC was acquired by @Dell last September but VMware continues to be run as an independent unit. Gelsinger, who earlier worked with Intel Inc. for 30 years and even became its first chief technology officer (CTO), holds seven patents in the areas of VLSI (very large-scale integration) design, computer architecture and communications. Gelsinger has been visiting India at least once a year for the past 25 years. In an interview on the sidelines of VMware vForum 2017 held in Mumbai on 23 October, he shared, among other things, his views on digital disruption and newer technologies like artificial intelligence (AI). Edited excerpts: Now that you are part of Dell Technologies, how is the synergy working out with Dell products? It is exceeding our expectations. We built a business plan before the merger was announced, with regard to the detailed models, number of servers sold, number of regions and market share. I have already announced that we have increased the synergy targets for Dell and VMware. There are places like China where Dell has over $6 billion of business. How big is the market for VMware in India? We have had great success in the Indian market, with the market being our largest site. We have every function at VMware represented in India, as well as the R&D (research and development) vertical. You say all businesses are digital businesses—that there is no longer a distinction between a traditional and a digital business. Give us some examples of this trend. Consider General Electric (GE). They are into engines, oil rigs, power plants and are now focused on software skills because they need analytics to adjust their turbines. GE is also building smart power grids that both store and deliver energy, which accounts for up to 3-4% of the improvement that makes an extraordinary difference to efficiencies. Another example is Bosch, which was once an old-line industry but is now becoming connected and Internet of Things (IoT)-centric, and needs to be managed and automated. Honeywell works on HVAC (heating, ventilation and air conditioning) systems that are now being connected, automated and secured. Retailers today are innovating for new shopping experiences, to compete with players like Amazon. Hence, their online store experience becomes a digital experience. Some of the big disruptions are Uber and Airbnb. One is the largest transportation company but owns no cars. The other is the largest hotel chain that owns no hotel rooms. Every industry is being revolutionized by technology in some manner. But even as we talk about digital, we refer to the phygital (blending the physical and digital) model. Do you see this as a contradiction or a logical conclusion? I think in many cases they started out as contradictions and almost thought about their business in two worlds. On the one hand, you have these new technologies you want to work on. On the other hand, you may want to build on the old technologies. It is not like the power engine in-charge at GE is no longer important, but now we are bringing digitization, new analytics, machine learning and intelligence into that aspect. Everything needs to be innovative if you are truly going to transform the business. And to do that, you have to integrate those two worlds together. Speaking of new technologies, cloud computing companies are integrating their services with AI to differentiate themselves. 
The field of AI is definitely exciting and has been around for 30 years. Almost all of the core innovations in AI happened in 1980 and in the early 80s. What we are seeing happen over again is that these technologies need about 30 years to mature. That is exactly what we have seen happen with AI. What we are also seeing in the industry right now is that cloud, Big Data and IoT are being applied at a scale unlike ever before, owing to technologies like machine learning and image recognition. Today you can put millions of photographs together and be able to run the algorithms on those millions of photographs to identify whether it is a human or an animal. Cloud will be an important aspect, but we also think that it needs to be applied not just in cloud, but also in self-driving smart cars. The sense I get is that you do not fear AI and are rather comfortable with the kind of progress that AI is making. Yes, absolutely. I firmly disagree that AI needs to be regulated and in that sense, I believe that it becomes an augmentation. The fact that a doctor can now have an AI assistant to look at a cancerous cell image, augmentation is definitely a powerful tool. Imagine the world we would have today without the spreadsheet. I think of AI the same way. Why wouldn’t one use this powerful tool to its best advantage, one that doesn’t replace but instead augments? I am thrilled by the possibility of being able to take several simplistic things and do them at a scale and speed unlike ever before. As we go forward, I think we will find AI being increasingly integrated into products everywhere. Having also served as the CTO of Intel, what are your thoughts on quantum computing? You do not seem to think that it will replace general purpose silicon-based computing. Quantum computing is very exciting in terms of a research area and lots of people are starting to show some more progress, but it will never be a mass-market technology. Quantum computing is exciting because there are a few problems that it can tackle uniquely, better than anything else. Some of the security problems are those where we are able to have many parallel quantum states that allow you to probabilistically look at certain problems in ways that were never possible before. So how do envision the future of computing, given that Moore’s Law has its limits? Moore’s Law continues but it is moderating. It used to double every two years—now it doubles every three years as we’re getting close to the fundamental limits, but computing hasn’t slowed down. What is increasingly happening is that we have applied more computers in a distributed fashion, rather than having one computer that is twice as fast. That is why AI is interesting, because I can now have a cloud—so instead of having one big computer or supercomputer, I now have everybody who can rent a supercomputer, which is essentially thousands of little computers tied together. How is your strategy, comprising private cloud, the software-defined data centre, hybrid cloud and unified communications, working for you? The first one is foundational—moving from virtualization and computing, to the full data centre. That is what you call the software-defined data centre. We launched our virtual networking product with NSX, our virtual storage product with vSAN and our automation with vRealize. Our automation ranks number one. NSX is clearly seen as the market leader today. For virtual SAN, we just announced 150%-plus growth rates in the last quarter. 
Compared to where we were five years ago when I took over (as CEO of VMware), we did not have any of the other products. For end users, our Workspace ONE offering is doing extremely well because we had brought together the virtual desktop, AirWatch mobility, identity and common management. We can now go to customers with the mindset that it does not matter what kind of device you have or work with—be it HP, Samsung, Apple, Dell or Microsoft—we can manage all of them in the same manner and provide you with the best user experience across all those devices. Today, customers increasingly want fewer vendors to do more. They do not want to integrate all the pieces together, and instead want to get a larger solution from their key partners. That is something we are now able to do. But security remains a big challenge… If I think of the technology industry as a whole, security has failed customers in this area. Over the last five years, vendors have spent more on security than any other technology, and yet there have been several costly breaches. Customers have spent more and they’ve fallen further behind, which is not good. The technology industry needs to build security directly into our products and not ask customers to turn things on. So, we have started to announce products and capabilities in this area specifically, and I think over the next year you’ll see us step up in a big way. Partnerships are clearly one of the foundations of your growth—for example, your tie-ups with companies like Amazon Web Services, Google, International Business Machines Corp. (IBM) and Fujitsu. What’s the way ahead? The idea of partnering to deliver a more complete solution is very important for VMware. Our technologies are often part of the bigger solution. For example, with Fujitsu, they are becoming a big partner for delivering IoT. They are focused on auto companies like Toyota, as well as industrial companies where they are strong—for factory automation and retail. We also announced Google as a key partner for containers (ensures that applications run quickly and reliably regardless of where they are deployed). This idea of containers as a new way of developing and deploying applications is gaining a lot of momentum in the marketplace. Kubernetes is a key technology that Google has created. We are taking our technologies and combining them with Kubernetes and Pivotal (a Dell Technologies unit). We believe this will be an important way for us to reach the market. IBM is taking VMware and adding its technologies like security solutions and Watson analytics, and taking it straight to the marketplace. We just announced a major customer win with IBM at Lloyds Bank in London. In terms of the hybrid cloud or multi-cloud, we have the IBM partnership and the vCloud Air Network partners but the Amazon partnership was a game changer because the number one private cloud company was coming together with the number one public cloud company to deliver a seamless customer experience to move from private to public or public to private. Speaking about new technologies, your thoughts on 5G networks. We are very excited about it. 5G will be the largest capital investment cycle for the rest of our careers, because it will be a new spectrum. It will be new to antenna, base station, radio and networks. It will also be new to how people operate their core networks and that is where technologies like NFV become very important. 
NFV definitely excites me the most because we are moving on from virtualizing the data centre to virtualizing the network. (Network functions virtualization, or NFV, is expected to reduce the amount of proprietary hardware needed to launch and operate network services.) When you talk about 5G, I think about it as a 100x and a 100x. On one dimension, you want bandwidth that is a 100 times greater and on the other end, you are going to have a 100 times more things connected to it with IoT—right from smart devices to smart cars to smart cities to connected machines and analytics. Today we have about half a billion people and about eight billion smart devices. By 2025, it is expected that we will have 30 billion connected devices on those networks. So, it’s going to be the explosion of machine-connected devices that we are going to see. Hence, this presents a major new opportunity as well as a challenge for building and operating in those networks

http://www.livemint.com/Companies/7Qw04kw3svOICon7eJX4QP/Firmly-disagree-that-AI-needs-to-be-regulated-VMwares-Pat.html

Intel Corp. Wants to Be a $300 Billion Company

The Oregonian recently got ahold of internal @Intel (NASDAQ:INTC) documents that go over the company's ambition to become a much more valuable company than it currently is. Per the document, Intel management aims to increase the company's market capitalization to $300 billion, up from approximately $191 billion today. Translating that into a stock price, Intel's gunning for a price of $63 per share (assuming no change in share count between now and then). Let's go over the key financial goals that management laid out in that document.

Earnings per share

Intel apparently aims to grow its earnings per share to $4, up from the $2.12 that it enjoyed in 2016. Although Intel can certainly accelerate its path to $4 per share in earnings through share repurchases -- buying back stock reduces the total number of shares outstanding, which means that earnings per share for a certain level of net income goes up -- Intel will, for the most part, need to grow its revenue and ultimately its net income substantially to reach this goal.

Revenue growth and spending reductions

In the document, Intel apparently shared revenue growth targets for its key business units as well as a target for its operating expenses as a percentage of its revenue. In 2016, the note says, operating expenses made up 35% of the company's revenue; Intel aims to bring that down to 30% by 2021. While it may seem that some draconian cost cutting would have to take place for Intel to achieve this goal, that's not the right way to think about it. Spending as a percentage of revenue depends on both total spending and revenue. A company can bring that percentage down through cost cutting, revenue growth, or a combination of both. Intel clearly has ambitions to grow its major businesses over time, as management indicated in the documents that The Oregonian obtained. Intel's targeting annual revenue growth of 10% for its data center business. In its client computing group, which primarily sells personal computer processors and related platform components, the company is planning for a 1% annual decline. Intel's betting on 33% annual growth in its memory business. This part of Intel's business primarily sells NAND flash-based solid-state drives into the data center and recently began selling storage drives based on its 3D XPoint technology, which Intel says offers significant speed and reliability advantages over traditional NAND flash. And, finally, Intel expects 13% annual revenue growth in its blossoming Internet of Things business. Applying these desired growth rates to Intel's 2016 results in each of these businesses, we get the following revenue:

Business Segment          2016             2021
Client Computing Group    $32.9 billion    $31.3 billion
Data Center               $17.2 billion    $27.8 billion
Memory                    $2.6 billion     $10.7 billion
IoT                       $2.6 billion     $4.9 billion
Total                     $55.3 billion    $74.7 billion
SOURCE: INTEL EARNINGS REPORT, THE OREGONIAN.

The numbers above exclude Intel's Programmable Solutions Group (formerly known as Altera) as well as the Intel Security Group, since Intel apparently didn't mention the former in the documents that The Oregonian got ahold of and has divested itself of the latter. The point, though, is this: If Intel achieves its revenue growth goals, then that should quite easily bring down Intel's spending as a proportion of its overall revenue.
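For what it's worth, the 2021 column follows directly from compounding those growth rates over five years from the 2016 base; the quick sketch below reproduces the table to within rounding.

```python
# Reproduce the 2021 revenue targets by compounding the stated annual growth
# rates over five years from the 2016 base. Small differences vs. the article's
# table are rounding in the source.
segments = {
    # name: (2016 revenue in $B, assumed annual growth rate)
    "Client Computing Group": (32.9, -0.01),
    "Data Center":            (17.2,  0.10),
    "Memory":                 ( 2.6,  0.33),
    "IoT":                    ( 2.6,  0.13),
}

total = 0.0
for name, (base, cagr) in segments.items():
    projected = base * (1 + cagr) ** 5
    total += projected
    print(f"{name:<24} 2016: ${base:5.1f}B   2021: ${projected:5.1f}B")
print(f"{'Total':<24} 2021: ${total:5.1f}B")   # roughly $74-75B, matching the table
```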
Foolish takeaway

Intel clearly wants to deliver significant value to its stockholders over the next five years, and it wants a large part of that value creation to come from share price appreciation. With proper execution, Intel can achieve these goals, but it won't be easy, particularly as so much of Intel's plan seems to bet on a breakout in its memory business. Not only does Intel face substantial competition in the memory market, but the market itself is highly volatile and is therefore very hard to predict. But, if Intel can deliver on this revenue growth, and if it can maintain healthy gross profit margins and maintain some operating expense discipline, then it wouldn't surprise me to see Intel trading somewhere north of $60 per share by the time 2021 closes out.

https://www.fool.com/investing/2017/10/27/intel-corp-wants-to-be-a-300-billion-company.aspx

HPE Names New CIO To Oversee Massive Next IT Restructuring Initiative

@Hewlett Packard Enterprise has hired Archana "Archie" Deskus, previously the CIO of GE's Baker Hughes oilfield services business, to oversee a massive IT restructuring. Deskus, a 17-year IT veteran who spent the last four years at Baker Hughes, will be responsible for dramatically revamping HPE's internal IT systems. [Related: #HPE President Neri On #DellEMC 's Server-Storage Trade-Off, Cisco's AI Strategy, Compensation Changes, And The 'Massive' Partner Opportunity Ahead] Under the Next restructuring, HPE is moving from 1,000 business processes supported by 10 ERP systems and more than 950 applications to just 100 business processes supported by a single global ERP system and 350 applications. The IT restructuring is one of many initiatives that include massive internal IT system changes, including a reduction in the number of internal compensation plans from 400 to 25 and a cut in the number of active server configurations from 50,000 to 10,000. Deskus, who before taking the CIO job at Baker Hughes was CIO at Ingersoll-Rand and Timex, will report to HPE President Antonio Neri, who is overseeing the Next initiative. "Archie has a depth of knowledge about how best to leverage technology to drive business value," said Neri in a blog post. "For the past four years, Archie was CIO at Baker Hughes where she transformed their IT organization into a strategic business partner, while delivering operational performance and business value." The HPE Next initiative is aimed at simplifying the structure of the company, with plans to reduce costs by as much as $200 million to $300 million in the current fiscal year. HPE is aiming for $1.5 billion in cost savings over a three-year period. Deskus replaces interim CIO Joseph Schulz, a 10-year HPE IT veteran and former GE IT executive, who will continue to play a "crucial role in Global IT delivering the NextGen IT program," said Neri. "I’d like to thank Joe for leading the Global IT organization over the last several months," said Neri. "As you all know, he has been the driving force behind the work we’ve done in IT preparing for HPE Next. Joe has a proven track record from his role behind the launch of HPE and the two spin-mergers. I know his expertise will be a great asset as we advance the NextGen IT project." Michael Haley, president of Edge Solutions, an Alpharetta, Ga., solution provider, No. 333 on the 2017 CRN Solution Provider 500, said the Deskus appointment strikes at the heart of HPE CEO Meg Whitman's philosophy that HPE must improve its business processes to make it easier to do business with the company. "Meg told us at the BoB (Best of Breed) conference that 'complexity equals cost' and there is no room for inefficient processes in today's IT systems," said Haley. Haley also said he was "excited and proud" of HPE's decision to name Deskus – who was honored by the National Diversity Council this year – to this "strategic and important role that is critical to the success of HPE and its partners."

http://www.crn.com/news/data-center/300094621/hpe-names-new-cio-to-oversee-massive-next-it-restructuring-initiatives.htm

Nutanix Appoints Former Cisco Leader Inder Sidhu as Executive Vice President of Customer Success and Business Operations to Help Drive Next Phase of Growth

SAN JOSE, Calif.--(BUSINESS WIRE)--@Nutanix ® (NASDAQ:NTNX), a leader in enterprise #cloudcomputing, today announced that it has appointed @Cisco leader @InderSidhu as Executive Vice President of Global Customer Success and Business Operations. In this position, Sidhu will draw on his extensive expertise in business strategy, digital transformation and business operations to help drive the next phase of growth at Nutanix. “With 30 years of technology industry experience, including a 20-year track record of success in driving global go-to-market and customer success at Cisco, Inder brings a wealth of business knowledge and experience to Nutanix. I’m delighted to have him join our team at a critical point in our journey to becoming a major force in enterprise technology,” said Dheeraj Pandey, CEO, Nutanix. “Inder’s addition to the team further strengthens our exceptional executive bench, and will help us expand and deepen our customer relationships and grow our business in the multi-cloud era.” Accelerating Customers’ Digital Transformations: Sidhu will lead a team that will be singularly focused on customers’ success throughout their engagement lifecycle with Nutanix. He will also lead the company’s business operations, strategy, and planning teams, working closely with the company’s sales, marketing and finance leaders. Sidhu brings deep levels of industry knowledge, customer experience, strategic insight and operational expertise to the company. As Senior Vice President of Strategy & Planning for Worldwide Sales at Cisco, he architected Cisco’s global go-to-market engine that helped the company grow from $1 billion in annual revenue to $50 billion. Sidhu co-led Cisco’s highly-profitable $16 billion Enterprise business, its rapidly-growing $7 billion Emerging Countries business, and its 1,200-person digitally-driven Worldwide Professional Services business. He also spearheaded the company’s digital transformation initiatives. Prior to Cisco, Sidhu was a consultant with McKinsey & Company, an engineer with Intel and an entrepreneur at Excelan, a successful Silicon Valley start-up, through its IPO. Sidhu is also a New York Times best-selling author of the books The Digital Revolution: How Connected Digital Innovations are Transforming Your Industry, Company, and Career and Doing Both: Capturing Today’s Profit and Driving Tomorrow’s Growth. He is a frequently requested speaker, and has taught at Harvard, Stanford, Wharton and UC Berkeley. “A digital revolution is taking place, and Nutanix is at the forefront in leading this revolution,” said Sidhu. “Nutanix customers love how its enterprise cloud platform allows them to transform IT so they can focus on improving business outcomes. My role is to expand and scale our customer success, while ensuring continued operational excellence, so that we can become even more deeply involved in accelerating our customers’ digital transformation initiatives.”

http://www.businesswire.com/news/home/20171026005438/en/Nutanix-Appoints-Cisco-Leader-Inder-Sidhu-Executive

Amazon, Google, Microsoft and Intel find billions more in profit

Technology companies have been dominating the news and investors’ attention, and a series of earnings reports Thursday afternoon showed why. Four of the most valuable tech companies in the world — @Amazon.com Inc. (#AMZN, +13.22%), @Google parent company @Alphabet Inc. (#GOOGL, +4.26%; GOOG, +4.80%), @Microsoft Corp. (#MSFT, +6.41%) and @Intel Corp. (INTC, +7.38%) — destroyed expectations for profit and revenue in third-quarter reports Thursday afternoon, collectively bringing in about $2.2 billion more profit and $19 billion more revenue than in the same quarter a year ago. Stock in those four companies jumped across the board in late trading, which would make them worth even more than the combined market cap of nearly $2 trillion they enjoyed at the end of Thursday’s trading session.

https://www.marketwatch.com/story/amazon-google-microsoft-and-intel-find-billions-more-in-profit-2017-10-26

Microsoft says something more hyper than Storage Spaces is coming 'very soon'

@Microsoft has offered an explanation, of sorts, for the unexpected disappearance of #StorageSpacesDirect from #WindowsServer version 1709. A new FAQ on Windows Server, version 1709, posted on Thursday, offers the following verbiage: Storage Spaces Direct was introduced in Windows Server 2016 and remains the foundation for our hyper-converged story. Following our successful Windows Server 2016 release, we’ve listened to our customers and are already working to deliver the next set of hyper-converged innovation. While we elected to keep these features in preview for the Windows Server, version 1709 release, there will be another release available very soon. Whether that “very soon” means the next semi-annual release or something out-of-band isn't explained. Microsoft has revealed a validated solutions program for Windows-Server-powered hyperconverged boxen, so maybe there'll be news on that front. Windows Server users may also be interested to know that System Center Operations Manager has had a crypto upgrade, as it now supports TLS 1.2. Adding support for the protocol means Microsoft is getting ahead of the June 30th, 2018 deadline, after which PCI regulations won't certify users of the known-to-be-insecure TLS 1.0 and SSL.
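Since the PCI deadline is ultimately about endpoints refusing the older protocols, a quick way to check a given server is to force a TLS 1.2-or-newer handshake and see what gets negotiated. Below is a minimal, generic sketch in Python (3.7+), not anything specific to System Center; the host name is a placeholder.

```python
# Minimal check that an endpoint negotiates TLS 1.2 or newer (Python 3.7+).
# "example.internal" is a placeholder host, not a real management endpoint.
import socket
import ssl

HOST, PORT = "example.internal", 443

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSL and TLS 1.0/1.1

with socket.create_connection((HOST, PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # If the server only speaks TLS 1.0/SSL, wrap_socket raises ssl.SSLError instead.
        print("Negotiated protocol:", tls.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'
```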

https://www.theregister.co.uk/2017/10/27/microsoft_teases_new_hypercoverged_software/

Microsoft’s plan to become a major cloud competitor is working

@Microsoft hit a new milestone in its #cloudservices business this quarter: it is now on pace for more than $20 billion in annualized revenue. While that may seem like a big but arbitrary milestone, it’s an important one for Microsoft. In 2015, a year after being named the company’s CEO, @SatyaNadella said he wanted Microsoft to hit that target sometime in 2018 — a big jump up from the $6.3 billion business it was back then. Depending on how you’re counting (Microsoft’s 2018 fiscal year started in July), Nadella delivered right on time or slightly ahead of schedule.

https://www.theverge.com/2017/10/26/16553052/microsoft-q1-2018-earnings-cloud-revenue-milestone

Friday, October 27, 2017

Term Sheet Rigetti -- Wednesday, October 25

TERM SHEET: You led Rigetti Computing’s Series A round. The quantum computing company has raised nearly $70 million in venture funding. Why did you decide to invest?

PANDE: @Rigetti has a full-stack operation where they create their own chips, they build their own computers, and they write their own software and applications. That full-stack approach will be critically important at this juncture of quantum computing, where we are still working to design the first machines. Many companies, like @Google and @Intel, feel that there is a huge opportunity for #quantumcomputing right now to do certain tasks dramatically faster than traditional computers can. People have been talking about quantum computing for at least 20 years, but I think we’re now seeing a big shift. For the last two decades, we had been trying to work out the fundamental science of quantum computing, and right now, a lot of the scientific advances are done. The next steps are to figure out engineering advances. In other words, we’re asking questions about how we scale up chips, rather than how we build the fundamental devices themselves.

More and more companies are working on building a brain-computer interface, which would allow the mind to connect with artificial intelligence. Facebook is building a BCI that would let people type with their mind, and Elon Musk launched Neuralink to create devices that can be implanted in the brain. What do you think about the future of these innovations?

I think it’s a super exciting area. This is another example where tech is meeting biology in stride. There have been so many advances in our understanding of neuroscience and the brain in just the last five or 10 years. It’s very natural to apply machine learning to this brain-computer interface, because the computer will have to understand and decode our thoughts — and that’s something that would be very hard to achieve without machine learning. This is very much the stuff of past science fiction, but like any other technology, it will start off simple and evolve from there. The simple things will have a huge impact on human health. A brain-computer interface could dramatically change the lives of quadriplegics and paraplegics, for example.

The cancer immunotherapy market is projected to reach $111.23 billion by 2021, and there’s been plenty of VC activity there lately. Why are we seeing more of that now?

The oncology area is super interesting to me because many of us have felt that drugs don’t really kill or cure people — your immune system cures you. Even something like an antibiotic, which will kill bacteria in a Petri dish, is largely administered in doses designed to weaken the bacteria so that your immune system can go after it. The immune system is really key. Machine learning will come into play more and more, and we’re seeing biology companies add a layer of tech that will help them accelerate innovation.

What are some interesting innovations going on in the biotech space that Term Sheet readers should know about?

There are a couple of different spaces that are getting interesting. One is diagnostics. We’re seeing diagnostics companies use AI to create new tests that have much higher accuracy, much lower cost, and typically diagnose things much earlier. In a sense, the genome sequencer is almost like a smartphone: with that one piece of hardware, you can run many different types of tests. That trend is very much blowing up.

What do the next 10 years look like? Will we all have brain implants and be able to edit our genes as we wish?

I think a lot of medicine will become like dentistry. Dentistry is a great example of doing things preventively. When you get an X-ray, for example, maybe you have a cavity and you just treat it. It’s not like you’re waiting until you’re 80 years old to finally see the dentist. I think in 10 years, we’ll have cancer tests such that you take them once or twice a year, find out you have early-stage cancer, take care of it early, and move on. The big trend is that we’ve got all these new data sources and machine learning to take advantage of, and the ability to do something actionable.
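On the software layer of that full stack: Rigetti ships its own Python SDK, pyQuil, as part of its Forest toolkit for writing quantum programs. A purely illustrative sketch of a two-qubit entangling circuit, assuming the Forest QVM simulator and quilc compiler are running locally (for example via "qvm -S" and "quilc -S"):

```python
# Illustrative only: a Bell-state circuit with Rigetti's pyQuil SDK.
# Assumes the Forest QVM and quilc servers are running locally.
from pyquil import Program, get_qc
from pyquil.gates import H, CNOT

program = Program(
    H(0),        # put qubit 0 into an equal superposition
    CNOT(0, 1),  # entangle qubit 1 with qubit 0
)

qc = get_qc("2q-qvm")  # a simulated 2-qubit quantum computer
results = qc.run_and_measure(program, trials=10)
print(results)  # per-qubit measurement outcomes; the two qubits should agree on every trial
```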
http://fortune.com/2017/10/25/term-sheet-wednesday-october-25/

Thursday, October 26, 2017

Amazon Web Services crosses $1B in operating profit as revenue jumps 41 percent to $4.6B

@Amazon Web Services continues to grow quite strongly despite its size, and it crossed an interesting milestone during the third quarter: it contributed more than $1 billion in operating profit to the overall corporation. Revenue from the public cloud leader rose 41 percent to $4.6 billion during the quarter, a tad less than the 42 percent growth posted last quarter, but they’ll probably take it. #AWS continues to defend its turf despite renewed competition from the likes of @Microsoft, which also reported earnings Thursday showing #Azure revenue grew 90 percent in its first fiscal quarter. And it’s no surprise that AWS once again kept Amazon.com out of the red during the quarter, a trend that dates back a while. But its operating profit of $1.2 billion jumped 36 percent compared to the previous year, showing that competitive pressures and the massive investments required to maintain a world-class cloud computing platform aren’t taking a toll on its bottom line. AWS is now on an $18 billion revenue run rate, Chief Financial Officer Brian Olsavsky said on a call with the media following the release of its results. He implied later, on a separate call with analysts, that AWS would have a fair amount of new stuff to talk about at its big annual conference, re:Invent, taking place the week after Thanksgiving this year.
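For readers who want to sanity-check the figures quoted above, the "$18 billion run rate" is simply the quarter's revenue annualized. A quick sketch using only the article's rounded numbers:

```python
# Back-of-the-envelope check using the article's rounded figures.
q3_2017_revenue_b = 4.6   # AWS Q3 2017 revenue, in $ billions
yoy_growth = 0.41         # 41% year-over-year growth

run_rate_b = q3_2017_revenue_b * 4                    # annualized run rate
q3_2016_revenue_b = q3_2017_revenue_b / (1 + yoy_growth)

print(f"Annualized run rate: ~${run_rate_b:.1f}B")            # ~$18.4B, the "$18 billion run rate"
print(f"Implied Q3 2016 revenue: ~${q3_2016_revenue_b:.2f}B")  # ~$3.26B
```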
https://www.geekwire.com/2017/amazon-web-services-crosses-1b-operating-profit-revenue-jumps-41-percent-4-6b/

AWS Announces Availability of P3 Instances for Amazon EC2

SEATTLE--(BUSINESS WIRE)--Today, @Amazon Web Services, Inc. (#AWS), an Amazon.com company (NASDAQ: #AMZN), announced #P3 instances, the next generation of #Amazon Elastic Compute Cloud (#AmazonEC2) #GPU instances designed for compute-intensive applications that require massive parallel floating point performance, including #machinelearning, #computationalfluiddynamics, #computationalfinance, #seismicanalysis, #molecularmodeling, #genomics, and #autonomousvehiclesystems. The first instances to include @NVIDIA #Tesla #V100 #GPUs, P3 instances are the most powerful GPU instances available in the cloud. To get started with P3 instances, visit https://aws.amazon.com/ec2/instance-types/p3/. P3 instances allow customers to build and deploy advanced applications with up to 14 times better performance than previous-generation Amazon EC2 GPU compute instances, and reduce training time for machine learning applications from days to hours. With up to eight NVIDIA Tesla V100 GPUs, P3 instances provide up to one petaflop of mixed-precision, 125 teraflops of single-precision, and 62 teraflops of double-precision floating point performance, as well as a 300 GB/s second-generation NVIDIA NVLink interconnect that enables high-speed, low-latency GPU-to-GPU communication. P3 instances also feature up to 64 vCPUs based on custom Intel Xeon E5 (Broadwell) processors, 488 GB of DRAM, and 25 Gbps of dedicated aggregate network bandwidth using the Elastic Network Adapter (ENA). “When we launched our P2 instances last year, we couldn’t believe how quickly people adopted them,” said Matt Garman, Vice President of Amazon EC2. “Most of the machine learning in the cloud today is done on P2 instances, yet customers continue to be hungry for more powerful instances. By offering up to 14 times better performance than P2 instances, P3 instances will significantly reduce the time involved in training machine learning models, providing agility for developers to experiment, and optimizing machine learning without requiring large investments in on-premises GPU clusters. In addition, high performance computing applications will benefit from up to 2.7 times improvement in double-precision floating point performance.” Airbnb’s community marketplace provides access to millions of unique accommodations and local experiences in more than 65,000 cities and 191 countries. “At Airbnb, we’re using machine learning to optimize search recommendations and improve dynamic pricing guidance for hosts, both of which translate to increased booking conversions. These use cases are highly specific to our industry and require machine learning models that use several different types of data sources, such as guest and host preferences, listing location and condition, seasonality, and price,” said Nick Handel at Airbnb. “With Amazon EC2 P3 instances, we have the ability to run training workloads faster, enabling us to iterate more, build better machine learning models and reduce cost.” Schrödinger’s mission is to improve human health and quality of life by developing advanced computational methods that transform the way scientists design therapeutics and materials. “Our industry has a pressing need for performant, accurate, and predictive models to extend the scale of discovery and optimization, complementing and going beyond the traditional experimental approach,” said Robert Abel, Senior Vice President of Science at Schrödinger. “Amazon EC2 P3 instances with their high performance GPUs allow us to perform four times as many simulations in a day as we could with P2 instances. This performance increase, coupled with the ability to quickly scale in response to new compound ideas, gives our customers the ability to bring lifesaving drugs to market more quickly.” AWS Deep Learning AMIs (Amazon Machine Images) are available in AWS Marketplace to help customers get started within minutes. The Deep Learning AMI comes preinstalled with the latest releases of Apache MXNet, Caffe2 and TensorFlow with support for Tesla V100 GPUs, and will be updated to support P3 instances with other machine learning frameworks such as Microsoft Cognitive Toolkit and PyTorch as soon as these frameworks release support for Tesla V100 GPUs. Customers can also use the NVIDIA Volta Deep Learning AMI that integrates deep learning framework containers from NVIDIA GPU Cloud, or start with AMIs for Amazon Linux, Ubuntu 16.04, Windows Server 2012 R2, or Windows Server 2016. With P3 instances, customers have the freedom to choose the optimal framework for their application. “We are excited to support Caffe2 on the new Amazon EC2 P3 instances. The unparalleled power and capability of P3 instances allow developers to train and run models very efficiently at high scale,” said Yangqing Jia, Research Scientist Manager at Facebook. “It will help new innovations get to customers in hours instead of days by taking advantage of the speed of P3 and our modular, scalable deep learning framework with Caffe2.” Customers can launch P3 instances using the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. Amazon EC2 P3 instances are generally available in the US East (N. Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo) regions, with support for additional regions coming soon. They are available in three sizes, with one, four, and eight GPUs, and can be purchased as On-Demand, Reserved, or Spot Instances.
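As a quick illustration of the SDK route mentioned above, the sketch below launches a single-GPU P3 instance with boto3, the AWS SDK for Python; the AMI ID and key pair name are placeholders, and the p3.2xlarge, p3.8xlarge, and p3.16xlarge size names correspond to the one-, four-, and eight-GPU configurations.

```python
# Minimal sketch: launch a one-GPU P3 instance via the AWS SDK for Python (boto3).
# The AMI ID and key pair below are placeholders; pick a Deep Learning AMI in your region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # US East (N. Virginia), one of the launch regions

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder: e.g. an AWS Deep Learning AMI
    InstanceType="p3.2xlarge",   # 1x Tesla V100; p3.8xlarge and p3.16xlarge offer 4 and 8 GPUs
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",       # placeholder key pair name
)
print(response["Instances"][0]["InstanceId"])
```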

http://www.businesswire.com/news/home/20171025006533/en/AWS-Announces-Availability-P3-Instances-Amazon-EC2

Cumulus Networks Inducted into JPMorgan Chase Hall of Innovation

MOUNTAIN VIEW, Calif.--(BUSINESS WIRE)--@CumulusNetworks, the leading provider bringing #webscale #networking to the #enterprisecloud, today announced that it has been inducted into the @JPMorgan Chase Hall of Innovation. Cumulus Networks joins 20 other companies in the JPMorgan Chase Hall of Innovation, which recognizes select emerging tech companies for their innovation, business value and disruptive nature. Cumulus Networks was honored for providing JPMorgan Chase with Cumulus Linux, a powerful open network operating system that allows businesses to automate, customize and scale their networks using web-scale principles, much like the world's largest data centers. Cumulus Networks is currently the networking standard for Gaia, JPMorgan Chase’s private cloud, and is also pursuing other use cases within the firm. The Hall of Innovation Award was presented at the 9th Annual J.P. Morgan Technology Innovation Symposium in Menlo Park, California. “We work with many innovative companies and every year, we recognize those that stand out for the impact they’ve helped us make on our business – both in terms of value and service to our clients,” said Matthew Liste, Head of Cloud Engineering. “Cumulus Networks has helped the firm reduce capital expenditures, optimize data center network costs, and bring infrastructure to the market faster.” “For a company with the size and reputation of JPMorgan Chase, it is critical that the network operates with efficiency to keep the business running as smoothly as possible,” said Josh Leslie, CEO of Cumulus Networks. “We take great pride in partnering with JPMorgan Chase to help the company access a scalable, efficient architecture approach, so we feel extremely honored to be recognized with this award and look forward to introducing more companies to the power of web-scale networking.” JPMorgan Chase is a leader in financial services, engaging and partnering with pioneering companies that help drive positive customer and employee experiences in new ways, such as digital, security, and data and analytics. Since 2010, 20 companies have been inducted into the JPMorgan Chase Hall of Innovation.

http://www.businesswire.com/news/home/20171026005374/en/Cumulus-Networks-Inducted-JPMorgan-Chase-Hall-Innovation

Dell EMC points to portfolio scale to solve customer conundrums

@DellEMC believes its vast enterprise portfolio is what #channelpartners and end-users alike are seeking as challenges such as workforce transformation loom large. The vendor's partner program hinges on a commitment to simplicity, an aim it believes is satisfied by bringing together Dell's and EMC's offerings under a single channel set-up. "We are now the largest IT company in the industry with the broadest product portfolio," @CherylCook, VP global channels and alliances at Dell EMC, told Channelnomics. "We are all talking about the transformational challenges in the industry. I believe our partners, just like our customers, are looking to narrow the number of vendors they are doing business with." Cook said organisations are working hard just to keep pace with the various challenges that their lines of business are presenting. "When you look at the breadth of our portfolio, which includes all the client and PC-based edge devices, we recognise workforce transformation is one of those challenging areas," said Cook. "So whether it be new advances like converged infrastructure and core data centre infrastructure to support all those new workloads, we find ourselves in a unique place where our solution provider partner ecosystem can solve more areas of the customers' challenges by working with Dell EMC than having to source multiple vendor partners to do that." Cook also claimed that Dell EMC has the industry's largest R&D budget as it looks to continually innovate. "We have a commitment to invest for the long term and to invest $4.5 billion a year in R&D. So we are also a thought-leadership innovation company that will continue to innovate around, for example, the software-enabled data centre and around new technologies and capabilities," she said. "Partners can have confidence working with Dell EMC in the breadth of the solutions we have to help address customers' challenges today. But we also have a forward-looking innovative agenda to be driving and delivering this transformational change. "We are helping to shape the new technologies of tomorrow that will help customers along this transformational journey. "Consolidation is occurring in the PC client landscape and we are one of those consolidators. We have a company at extreme scale and we think this scale helps in the consolidation process," added Cook.

https://www.channelnomics.com/channelnomics-us/sponsored/3019931/sponsored-dell-emc-points-to-portfolio-scale-to-solve-customer-conundrums

HPE To Step Back From 'Tier 1 Service Provider' Server Business While Focusing On Higher Margin Growth Areas

@Hewlett Packard Enterprise is planning to end production of what it calls customizable commodity servers targeting the largest #hyperscale #cloud providers as part of a move to focus on more profitable growth areas. #HPE President @Antonio Neri outlined several initiatives the company is taking to sharpen its focus on growth during last week's HPE securities analyst meeting. As part of his presentation, which CRN reported on last week, Neri said that HPE will discontinue sales of custom-designed, commodity servers while continuing to sell higher-margin products. An HPE spokesperson confirmed that the company will end that product line in fiscal year 2018. "In FY18, HPE will cease selling custom-designed, commodity servers to #Tier1 #ServiceProviders, which we define as @Amazon, @Google, @Microsoft, @Facebook, @Apple, @Tencent, @Alibaba and @Baidu. We will continue selling our higher margin products like storage, networking and higher value servers to these companies," the spokesperson wrote. Neri said HPE will stop selling custom-designed commodity servers to the Tier 1 service provider segment, including providers like AWS, Microsoft Azure, and Google, but will continue selling them higher-margin products. "These commodity server deals come at very low margins, and there is no services pull-through opportunity for the company. So, we are modifying our strategy to reinvest these resources towards solutions and services that will drive profitable growth," he said. HPE will focus on gaining share with a standardized compute platform for Tier 2 and Tier 3 service providers, including companies such as Dropbox, eBay, and Salesforce.com, which have built their businesses in the cloud, Neri said. "These SKUs will be delivered without a custom design (including software and services), providing value to our customers while ensuring profitability for our business," he said. HPE offers channel partners many server options, including its ProLiant DL line, its blade servers, and its Synergy composable infrastructure, so a move to exit the Tier 1 cloud server business would probably have little direct impact on server sales through the channel, said Dan Molina, chief technology officer at Nth Generation Computing, a San Diego-based solution provider and long-time HPE channel partner. However, Molina told CRN, the move will benefit the channel in other ways. "This will free up HPE resources to support its other server lines," he said. "It can be hard for a company to keep up with so many server lines. HPE can now better focus on its roadmap for its DL and Synergy servers, or its SimpliVity hyper-converged infrastructure."

http://www.crn.com/news/data-center/300094536/hpe-to-step-back-from-tier-1-service-provider-server-business-while-focusing-on-higher-margin-growth-areas.htm