Dell, EMC, Dell Technologies, Cisco,

Sunday, December 31, 2017

Progress to turning silicon transistors into qubits which could enable billion qubit quantum computers

Japanese RIKEN researchers are trying to adapt the existing #silicon metal–oxide–semiconductor #fieldeffecttransistors ( #MOSFETs) to integrate #qubits with current electronics, offering the potential for scaling up #quantumdevices and bringing #quantumcomputing closer to becoming a reality. @Keiji Ono and colleagues from the #RIKEN Center for Emergent Matter Science and the Toshiba Corporation in Japan, in collaboration with researchers from the United States, are investigating the properties of qubits produced by imperfections or defects in silicon MOSFETs. In particular, they are exploring their potential for developing quantum computing devices that are compatible with current manufacturing technologies. “Companies like IBM and Google are developing quantum computers that use superconductors,” explains Ono. “In contrast, we are attempting to develop a quantum computer based on the silicon manufacturing techniques currently used to make computers and smart phones. The advantage of this approach is that it can leverage existing industrial knowledge and technology.” After cooling a silicon MOSFET to 1.6 kelvin (−271.6 degrees Celsius), the researchers measured its electrical properties while applying a magnetic field and a microwave field. They found that when the silicon MOSFET was neither fully turned on nor off, a pair of defects in the silicon MOSFET formed two quantum dots in close vicinity to each other. This ‘double quantum dot’ generated qubits from the spin of electrons in the dots. It also produced quantum effects that can be used to control these qubits. These observations are an important step toward controlling the quantum state of qubits in silicon MOSFETs and could pave the way for coupling qubits and making quantum devices using existing manufacturing techniques. The researchers intend to raise the temperature at which the phenomena occur. “The work was carried out at temperatures an order of magnitude higher than previously reported,” says Ono. “So one important direction for our future research will be to achieve the same outcomes at even higher temperatures, of say 10 or 100 kelvin, or even at room temperature.”

https://www.nextbigfuture.com/2017/12/progress-to-turning-silicon-transistors-into-qubits-which-could-enable-billion-qubit-quantum-computers.html

15 top science & tech leaders offer surprising predictions for 2018

The past year has been a momentous one for science and technology. From the detection of #gravitationalwaves (predicted almost a century ago by Einstein) to the rise of virtual currencies like #Bitcoin to the creation of genetically modified human embryos, 2017 was marked by all sorts of remarkable discoveries and innovations. What will 2018 bring? No one knows for sure. But as we did for 2017, we asked top scientists and thought leaders in innovation what they expect to see in the new year. Here, lightly edited, are their predictions.

SEAN CARROLL: UNDERSTANDING QUANTUM SPACETIME

Dr. Sean Carroll is a theoretical physicist at the California Institute of Technology in Pasadena. His most recent book is "The Big Picture: On the Origins of Life, Meaning, and the Universe Itself." I'm going to go out on a limb and predict that we'll see dramatic advances in understanding the quantum nature of spacetime itself. I won't make any large bets on this possibility, since theoretical research is notoriously gradual and unpredictable. But the ingredients are in place for moving our understanding substantially forward. Quantum mechanics is the wonderfully successful theory of how the world behaves at the microscopic level, while on large scales space and time are wedded together in Einstein's famous general theory of relativity. Reconciling how both of these ideas can be true at the same time has been a longstanding puzzle for theoretical physicists. Recently, we have been bringing new tools to bear: information theory, the many-worlds interpretation of quantum mechanics, and an improved understanding of black-hole entropy. The time is right to finally figure out the quantum ingredients out of which space and time are made.

https://www.nbcnews.com/mach/science/15-top-science-tech-leaders-offer-surprising-predictions-2018-ncna814196

The Evolution and Future of Hypervisors

#Hypervisors are a #virtualization technique that powers #cloudcomputing infrastructure like @Amazon #EC2 and @Google #ComputeEngine. Although #container virtualization technologies like @Docker and @Kubernetes have taken the spotlight recently, #containers are often deployed on top of hypervisors on the cloud. In this article, we will first outline the architecture of classical trap-and-emulate hypervisors that were invented in the 1970s. We then describe how hypervisors evolved from the 1970s to today's cloud computing era. Finally, we will look at future trends that affect hypervisor design. (This article was inspired by an awesome talk on Amazon's Nitro project by Anthony Liguori, which I highly recommend to everyone interested in hypervisors and cloud infrastructure.)

Architecture

A hypervisor is a piece of system software that provides virtual machines (VMs), on which users can run their OS and applications. The hypervisor provides isolation between VMs, which run independently of each other, and also allows different VMs to run their own OS. Like other virtualization techniques, hypervisors provide multitenancy, which simplifies machine provisioning and administration. One of the main criticisms against hypervisors is that they tend to be heavyweight compared to other virtualization techniques like containers (Morabito et al., 2015). However, it's also possible to build hypervisors that are lightweight (Manco et al., 2017) and to make the guest OS more lightweight when running under a hypervisor (Madhavapeddy, 2013). A hypervisor can be decomposed into two major parts: the virtual machine monitor (VMM) and the device model. The VMM is responsible for setting up VMs and handling traps (a.k.a. VM exits) caused by the guest OS executing privileged instructions like I/O access. The device model, on the other hand, is responsible for implementing I/O interfaces for all the devices the hypervisor supports, such as networking cards and storage. (Diagram: hypervisor architecture, decomposed into the VMM and the device model.) The terms hypervisor and VMM are often used interchangeably; here, we refer to the hypervisor as the combination of a VMM and a device model.

Virtual machine monitor (VMM)

A VMM must satisfy three properties (Popek and Goldberg, 1973). The equivalence property states that program execution has identical observable behavior on bare metal and under a VMM, except for timing and resource availability, which are difficult to preserve because of shared physical hardware. The efficiency property states that the majority of program instructions are executed directly on a physical CPU without interference from the hypervisor. The resource control property states that the VMM manages all hardware resources; virtual machines require permission from the hypervisor to directly access hardware. As a side note, emulators satisfy both the equivalence and resource control properties, but do not satisfy the efficiency property. The KVM subsystem in the Linux kernel (and other OS'es it has been ported to), for example, provides the building blocks for implementing a VMM. The KVM subsystem is effectively a portable abstraction over CPU hardware virtualization capabilities, which can be leveraged by userspace applications like QEMU to implement a VMM or a full hypervisor.
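To make the trap-and-emulate flow concrete, here is a rough sketch in Python of the run loop a VMM executes: enter the guest, wait for an exit, and delegate privileged operations such as I/O to the device model. The Vcpu and DeviceModel classes are made-up stand-ins for illustration, not the real KVM ioctl interface that a userspace VMM such as QEMU drives via /dev/kvm.

    # Conceptual sketch of a trap-and-emulate VMM run loop (illustrative only).

    class Vcpu:
        """Stand-in for a hardware-assisted virtual CPU."""
        def __init__(self, exits):
            self._exits = iter(exits)

        def run(self):
            # In a real VMM this call enters guest mode and returns only when
            # the guest does something the hardware cannot handle directly
            # (a trap / VM exit), e.g. a privileged I/O instruction.
            return next(self._exits, ("HALT", None))

    class DeviceModel:
        """Stand-in for the device model: emulates I/O for the guest."""
        def handle_io(self, port, value):
            print(f"device model: guest wrote {value:#x} to port {port:#x}")

    def vmm_run_loop(vcpu, devices):
        # Resource control: only the VMM touches real hardware; the guest's
        # privileged operations are reflected back here as exits.
        while True:
            reason, payload = vcpu.run()
            if reason == "IO":
                devices.handle_io(*payload)   # delegate to the device model
            elif reason == "HALT":
                break                         # guest halted; stop the loop
            # Efficiency: everything that did not exit ran directly on the CPU.

    if __name__ == "__main__":
        fake_exits = [("IO", (0x3F8, 0x41)), ("IO", (0x3F8, 0x42)), ("HALT", None)]
        vmm_run_loop(Vcpu(fake_exits), DeviceModel())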
Device Model

The device model is the part of a hypervisor that provides I/O interfaces for virtual machines. While the VMM is responsible for handling traps, it delegates I/O requests to the appropriate device model. Examples of device models are virtualized NICs and storage devices. A device model can provide an interface for either a real hardware device or a paravirtualized device. The device model can be implemented either in software, like the virtio family of drivers, or in hardware, using SR-IOV, for example.

I/O Virtualization

To implement a device model, I/O virtualization is needed. The two approaches to I/O virtualization are software-based and hardware-assisted. Software-based I/O virtualization implements I/O interfaces in software to allow sharing the same physical devices across multiple virtual machines. Software-based I/O virtualization can be implemented on top of various different backends. For example, a software-based storage device can be layered on top of a block device or a filesystem. One issue with the software-based approach is that the device model uses the same CPU resources as the vCPUs, which reduces available CPU capacity and causes jitter. Hardware-assisted I/O virtualization implements I/O interfaces in hardware. This approach requires hardware support for sharing the same physical device across multiple virtual machines. SR-IOV is a PCI extension that allows a physical PCI function to be partitioned into multiple virtual PCI functions.
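As a small illustration of the point above that a software-based storage device model can be layered on top of a filesystem, the following Python sketch serves guest sector reads and writes from an ordinary host file. This is illustrative only and not any hypervisor's actual code; a real hypervisor would expose such a backend to the guest through an emulated controller or a paravirtualized interface such as virtio-blk.

    # Sketch of a software-based block device model backed by a host file.
    import os
    import tempfile

    SECTOR_SIZE = 512

    class FileBackedDisk:
        """Serves guest sector reads/writes from an ordinary host file."""
        def __init__(self, path, size_mb=16):
            if not os.path.exists(path):
                with open(path, "wb") as f:
                    f.truncate(size_mb * 1024 * 1024)   # sparse backing file
            self.fd = os.open(path, os.O_RDWR)

        def read_sectors(self, lba, count):
            os.lseek(self.fd, lba * SECTOR_SIZE, os.SEEK_SET)
            return os.read(self.fd, count * SECTOR_SIZE)

        def write_sectors(self, lba, data):
            os.lseek(self.fd, lba * SECTOR_SIZE, os.SEEK_SET)
            os.write(self.fd, data)

    if __name__ == "__main__":
        disk = FileBackedDisk(os.path.join(tempfile.gettempdir(), "guest-disk.img"))
        disk.write_sectors(0, b"hello from the guest".ljust(SECTOR_SIZE, b"\x00"))
        print(disk.read_sectors(0, 1)[:20])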
Evolution

The semantics of a trap-and-emulate VMM were formalized in the early 1970s (Popek and Goldberg, 1973) and made popular again in the mid-1990s for running commodity OS'es on multicore machines (Bugnion et al., 1997). However, the most popular machine architecture at the time, Intel x86, was not virtualizable because some of its privileged instructions did not trap. The VMware hypervisor, which targeted x86, was first released in 1999. It used binary translation to rewrite privileged instructions into traps into the hypervisor, while still running unprivileged instructions directly on the physical CPU, which solved x86's virtualization issues (Adams and Agesen, 2006). This allowed the VMware hypervisor to run unmodified commodity OS'es on x86 hardware in virtual machines without the performance penalty of emulation. The Xen hypervisor, first released in 2003, took a different approach to solving the x86 virtualization issue. Instead of binary translation, it required modifying the source code of the guest OS to trap to the hypervisor instead of executing non-trapping privileged instructions. Intel and AMD released x86 CPUs with virtualization extensions in 2005 and 2006, which made classic trap-and-emulate virtualization possible. KVM, initially developed for Linux, implements a kernel subsystem that in combination with QEMU's device model provides a full hypervisor. Initially, the KVM project provided a software-based device model that emulated full hardware devices, but it later gained a paravirtualized I/O device model when virtio was introduced.

Future

The classic hypervisor architecture has stood the test of time, but there are some trends that affect hypervisor design. Hardware virtualization is becoming more widespread. For example, the Amazon Nitro project (talk by Anthony Liguori) takes an unconventional approach to hypervisor design, which replaces all of the software-based device model with hardware virtualization. Amazon's Nitro hypervisor also uses a custom-designed VMM that leverages Linux KVM. Operating systems have also started to evolve to accommodate hypervisors better. Unikernels are an interesting OS design approach that packages the OS and the application into one bundle, which runs in the same CPU protection level (Madhavapeddy, 2013). This eliminates the traditional separation between kernel and user space, which reduces context switch and system call overheads at the expense of losing some OS functionality. The basic idea was pioneered earlier in the form of library OSes, but the much simpler device model of a hypervisor compared to bare metal made the idea much more feasible for real world use. Lightweight virtualization is becoming more and more important as the use of cloud computing grows. Containers are an excellent technology for providing lightweight virtualization. However, containers are unable to provide the full isolation capabilities of VMs, and have various security problems because containers share the same host OS and have access to the large OS system call interface (Manco et al., 2017). Hypervisors can be slimmed down significantly (Manco et al., 2017), and unikernels provide an even larger opportunity to optimize the hypervisor if we relax the equivalence property requirement of VMMs (Williams, 2016). Serverless computing is a new computing model, better described as Functions as a Service (FaaS), that allows application developers to deploy functions instead of applications to a managed platform. One approach to serverless computing is to use hypervisors and unikernels for packaging and deploying the functions (Koller and Williams, 2017). Energy efficiency is another important future direction for hypervisor design. Communications technology, which cloud computing is a large part of, is forecast to consume around 20% of global electricity by 2030, or as much as 50% in the worst case (Andrae and Edler, 2015)! The energy overhead of a hypervisor can be extremely high depending on workload. One experiment reports between 59% and 273% energy overhead for KVM (Jin et al., 2012)! Kernel-bypass networking has become important recently because NICs are getting faster and the traditional TCP/IP stack and POSIX socket abstraction are proving to have high overheads (Han et al., 2012; Young et al., 2014; Yasukata et al., 2016). Hypervisors that implement the device model using paravirtualized I/O effectively introduce another layer in the networking data path, which increases networking overheads. In Linux, the vhost architecture is one solution to the problem. Vhost moves the virtio paravirtualized I/O device model from QEMU (the userspace VMM) to the host kernel (which also hosts the KVM module), which eliminates the exit from the host kernel to the userspace VMM. Another solution is full kernel bypass of the hypervisor using hardware NIC virtualization, as introduced by the Arrakis project (Peter et al., 2014).

Summary

The hypervisor architecture invented in the 1970s has stood the test of time. The x86 architecture's quirks meant that the first successful hypervisors had to resort to binary translation to handle privileged instructions. Binary translation solutions were followed by paravirtualization (popularized by Xen), but hypervisor architectures consolidated around the classic model as Intel and AMD added virtualization extensions to the x86 architecture.
Although containers have recently become a very popular virtualization technique, emerging computing paradigms like serverless computing could make hypervisors an attractive technique again. Lightweight hypervisor designs, unikernels, and hardware-assisted virtualization all reduce hypervisor overheads, which also makes hypervisors more competitive with containers.
https://medium.com/@penberg/the-evolution-and-future-of-hypervisors-999f568f9a5d

The best emerging storage tech of 2017

My consulting focuses on #emergingtechnology, especially in #datastorage. Storage is taking a larger and larger piece of the IT budget, and with the growth of #IoT, #BigData, and #AI, that trend will only continue. That's why we need new technologies to improve our not-terribly-resilient storage infrastructure.

FLASH

While flash is not new, the advent of quad-level cells ( #QLC) - four bits per cell - and the widespread adoption of #3Dflash are. The two generally go hand in hand, since QLC needs larger cell sizes, which 3D flash makes economical. It will take several years for both technologies to reach full maturity, but together they show how promising the future is for flash.
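For readers keeping track of the cell types, the capacity math behind QLC is simple: a cell storing b bits must distinguish 2^b voltage levels, which is why QLC both raises density and demands larger, more forgiving cells. A quick illustrative calculation:

    # Illustrative arithmetic: bits per cell vs. voltage levels and relative density.
    for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
        levels = 2 ** bits
        print(f"{name}: {bits} bit(s)/cell -> {levels} voltage levels, "
              f"{bits}x the capacity of SLC for the same cell count")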

STORAGE CLASS MEMORY

SCM is byte-addressable memory that persists data after the power is turned off, so it can be used for storage. Intel finally shipped some of its much-hyped 3D XPoint this year, but only as block-addressable storage and not as byte-addressable memory.
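A quick sketch of that distinction, using nothing beyond the Python standard library: byte-addressable memory lets you update a few bytes in place through a memory mapping, while block-addressable storage forces a whole-block read-modify-write. The temp file here is only a stand-in; real persistent-memory code would map a file on a DAX-mounted pmem device, typically through libraries such as PMDK.

    # Byte-addressable vs. block-addressable access (illustrative sketch).
    import mmap
    import os
    import tempfile

    path = os.path.join(tempfile.gettempdir(), "scm-demo.bin")
    with open(path, "wb") as f:
        f.truncate(4096)

    # Byte-addressable: update 4 bytes in place through a memory mapping.
    with open(path, "r+b") as f:
        buf = mmap.mmap(f.fileno(), 4096)
        buf[100:104] = b"ABCD"
        buf.flush()   # on real pmem, persistence needs a cache flush/msync

    # Block-addressable: the same 4-byte change costs a full-block round trip.
    with open(path, "r+b") as f:
        block = bytearray(f.read(512))   # read the whole 512-byte block
        block[100:104] = b"ABCD"         # modify 4 bytes
        f.seek(0)
        f.write(block)                   # write the whole block back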

But Intel and Micron are not the only players. Everspin has increased its MRAM part density to 1 gigabit - still well below what flash offers - but enough to attract many new users, given MRAM's high performance, ruggedness, and density.

Also in the running are Adesto, Crossbar, and Nantero. Expect to see significant announcements from these in 2018.

PCIE STORAGE NETWORKS

One company, Liqid, has taken a commercially available PCIe switch chip to create a rack-mountable PCIe switch. Why is this cool? Basically, it allows you to extend a server's PCIe bus outside the box, with all the bandwidth and low latency that come with PCIe.

Liqid is a software company whose main product allows you to create virtual servers out of PCIe-connected physical servers. The idea is to build high-density racks of compute, network, and storage and use software to create virtual servers that have whatever the application needs for optimum performance.

I expect this concept to catch fire in 2018. Every server has PCIe, so why not use it as an interconnect? That, by the way, was the original concept for PCI years ago.

http://www.zdnet.com/article/the-best-emerging-storage-tech-of-2017/

Big Data: Back To The Future

There is a big data narrative that new technology will sweep away all that has come before it, eradicating older technology. The problem is, it might not do as great a job as it's supposed to. For an alternative view, I talked to Rohit De Souza and Jeff Veis of Actian Corporation about their concept of hybrid data, which goes beyond big data and, at the same time, gives new life to some very established technology. Alastair Dryburgh: Tell me, just to start off, a little bit about the history of Actian, where it started, how it started, why you started it, and where you've currently got to. Rohit De Souza: Actian was started quite some time ago by a private equity firm that bought Ingres. Ingres was one of the early relational database pioneers, in fact arguably the superior relational database technology that lost the marketing war to Oracle. About a couple of years ago, the company went through some internal soul-searching. Big data was all the rage, and they were trying to understand whether or not these core data management assets that still ran thousands of core business systems were going to be relevant. The company looked at new emerging use cases to harness the growing sea of data we see today. The investors in the firm at the time decided that the old data assets were interesting, but chose to pivot to the shiny new object on the hill called big data. Fast-forward to the end of last year, and the discussion I had with them at that time was, "Look, big data by itself is meaningless. Yes, data is growing, we know all of that. The issue is, because it's getting so diverse and diffuse in both how it is created and consumed, it's not just the analytics that's important, it's the fundamental management of these data assets, and the ability to harness information in any form and extract actionable insights from this data that's really a consequence." Dryburgh: What would the different forms of data be typically? De Souza: You've got data in relational tables, #SQL engines, #Hadoop lakes, or Hadoop swamps as I might call them, and the list goes on. You've got embedded data, because as computing moves down to the edge of devices with the latest Internet of Things movement, you now need a control system down at the edge, so that your thermostat isn't pinging you every 30 seconds saying, "I'm here, I'm here, I'm here."

https://www.forbes.com/sites/alastairdryburgh/2017/12/30/big-data-back-to-the-future/#2fe8bbed3339

Hyperscale data centers reached over 390 worldwide in 2017

#Hyperscale operators are defined as enormous companies like @Amazon, @Apple, @Facebook and @Google that need to provide computing on a massive scale. You would think that there would be a limited number of this type of highly specialized data center, but recent research from Synergy Research found that 2017 was actually a breakout year for new hyperscale data centers across the world — with no sign of slowing down in 2018. Synergy reported that the year closed with over 390 web-scale data centers worldwide, up from 300 just one year earlier. Google was particularly active. Chinese companies Tencent and Baidu also built hyperscale data centers this year. Still, the vast majority are in the US, with 44 percent of the total. China is a distant second with 8 percent, followed by Japan and the UK with 6 percent each and Australia and Germany with 5 percent each. Synergy reports that, on average, the 24 hyperscale firms have 16 data centers each. The companies with the largest number won't come as a surprise to anyone, with Amazon/AWS, Microsoft, IBM and Google each having at least 45 worldwide. The definition of hyperscale varies, but IDC says it requires at least 5,000 servers and 10,000 square feet of available space, and is often much larger. Synergy defines it as having "several hundreds of thousands of servers — or sometimes millions." These operators often build their own equipment to deal with the specific needs of their immense computing requirements. By designing their own hardware and software, these companies can control every aspect of the computing experience to squeeze out the maximum amount of efficiency, which is crucial when you are dealing with the massive scale of these organizations.

https://techcrunch.com/2017/12/30/hyperscale-data-centers-reached-over-390-worldwide-in-2017/

Rack-scale flash

Rack-scale flash is flash-only storage that uses a high-speed interface to connect the storage more directly to the CPU than with a traditional storage array. The most common connection is nonvolatile memory express (NVMe) connected via a network fabric. This interface is called NVMe over Fabrics (NVMe-oF).

Rack-scale flash is also known as shared flash storage because it creates a pool of storage that is shared by servers over the high-speed interconnect inside one or more racks. Others call the broader concept of pooling compute, storage and networking resources composable infrastructure.

By using the NVMe protocol, rack-scale flash vendors can deliver much lower latency while keeping the same throughput levels available from network transports such as 32 gigabit Fibre Channel (FC) or 40/50/100 Gigabit Ethernet (GbE). The lower latency improves the IOPS of the overall system, making rack-scale flash best suited for high-performance computing needs. Examples include artificial intelligence, real-time analytics of big data and in-memory database systems.
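A back-of-the-envelope way to see the latency-to-IOPS relationship is Little's Law: sustained IOPS is roughly the number of outstanding I/Os divided by average latency. The latency figures in the sketch below are illustrative assumptions, not vendor measurements.

    # Little's Law illustration: IOPS ~= outstanding I/Os / average latency.
    def iops(queue_depth, latency_us):
        return queue_depth / (latency_us / 1_000_000)

    for label, latency_us in [("SAS SSD over FC", 200), ("NVMe-oF flash", 30)]:
        print(f"{label:>16}: {iops(32, latency_us):>12,.0f} IOPS at queue depth 32")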

Rack-scale flash vendors

There are two main types of rack-scale flash suppliers: those that use off-the-shelf solid-state drives (SSDs) and those that use custom flash modules. The former are generally startups such as Apeiron Data Systems, E8 Storage, Excelero and Mangstor. The latter are rack-scale flash vendors such as tech giants IBM and Hitachi Vantara.

Two early startups are Pure Storage, founded in 2009, and DSSD Inc., founded in 2010. Pure began life as a flash-only storage provider and entered the rack-scale flash market in 2016 with its FlashBlade product.

#DSSD, in comparison, was acquired by EMC Corp. in 2014 for its rack-scale technology. In 2016, the newly merged @Dell EMC launched DSSD D5, a #rackscale #flash appliance based on the acquired technology. The company cancelled the line in March 2017, stating it made more sense to embed the technology into existing storage arrays.

Benefits/drawbacks of rack-scale flash

#Rackscale #flash boosts the performance of any system, even more than #allflash arrays that don't connect via #lowlatency #NVMe protocols. All-flash array vendors claim IOPS ranging from 200,000 to the low millions. @Dell EMC, for example, claimed 10 million IOPS for its now cancelled #DSSD D5 product. While any application can benefit from that speed, it is a necessary feature for financial applications, big data technology such as #Hadoop, and #inmemory #database systems such as #SAP #HANA.

http://searchstorage.techtarget.com/definition/rack-scale-flash

All 50 states vote yes on AT&T’s $40 billion emergency response network FirstNet

From wildfires in California to hurricanes on the Gulf and Atlantic coasts, communications are the bedrock of emergency response and management. However, those communications can be challenging when quickly evolving situations cross multiple jurisdictions — a truth painfully learned on 9/11, when more than a dozen agencies found it difficult to relay critical information to the right people at the right time. Today, AT&T announced that all 50 states, Puerto Rico and the District of Columbia have officially signed on to FirstNet, a government program operated by AT&T to provide universal emergency response communications across the country. States had until yesterday to officially opt-in or opt-out of the FirstNet system. California, Florida, Mississippi and New York were among the states that waited until the last minute to confirm their participation. This is a major win for AT&T, which officially won the FirstNet contract this past March. The contract stipulated that AT&T would manage the network for 25 years, and the company committed to spending $40 billion to manage and operate the network. In exchange, the company would receive 20 MHz of critical wireless spectrum from the FCC, as well as payments from the government totaling $6.5 billion for the initial network rollout. The true win for AT&T though is in the actual spectrum itself, which is in the 700 MHz band commonly used for LTE signals. While the FirstNet spectrum is prioritized for first responders, it also can be used for consumer wireless applications when an emergency is not taking place, which should improve cellular reception and bandwidth for AT&T customers, particularly in urban areas. The bigger loss, though, is with the U.S. taxpayer. FirstNet has had something of a painful birth and maturation process. Originally created as part of the Middle Class Tax Relief and Job Creation Act of 2012, it was designed by Congress to create an exclusive network for first responders, who presumably couldn't use consumer technology like smartphones to communicate with each other. That was following recommendations from the 9/11 Commission that encouraged Congress to allocate a dedicated public safety spectrum. The program has had a glacial implementation process ever since. As Steven Brill described in The Atlantic last year: "FirstNet is in such disarray that 15 years after the problem it is supposed to solve was identified, it is years from completion—and it may never get completed at all. According to the GAO, estimates of its cost range from $12 billion to $47 billion, even as advances in digital technology seem to have eliminated the need to spend any of it." At issue is whether the rapid improvement of consumer wireless technology — which is available today — far outweighs the performance of a hypothetical public safety network that remains a glimmer in the mind's eye. Most interoperability problems have been solved by modern technology, and so the question becomes what the buildout is really for anyway. Why did the government give exclusive access to a critical part of the spectrum that could have benefited millions of consumers, while also providing expedited access for first responders? For AT&T, the victory provides a new source of revenue from local police and fire departments, who will presumably come to rely on FirstNet for their emergency communications. It also gets a serious boost in its spectrum, along with free cash from taxpayers.
But for all of us, it seems billions of dollars will be spent to create a specialist comm channel, when existing technologies are more than up to the task of providing these highly reliable services.

https://techcrunch.com/2017/12/29/all-50-states-vote-yes-on-atts-40-billion-emergency-response-network-firstnet/

Friday, December 29, 2017

How Virtualization Supports Healthcare Data Storage Needs

December 29, 2017 - Entities need to consider how to best address future healthcare data storage needs as the amount of data collected by digital devices continues to grow exponentially. Organizations have several options, including public and #privatecloud storage. However, #softwaredefinedstorage ( #SDS ) has emerged as another way to store vast amounts of data using #virtualization. Research and Markets predicted the SDS market will reach $42 billion by 2023, up from $4 billion in 2017, and grow at a CAGR of 39 percent in that time. The lack of workforce available to accommodate the growth of traditional on-premises #datacenters is one of the main drivers for virtualized storage techniques. Virtualization scales back the amount of physical space needed to store data, meaning that organizations don't need to hire additional staff. SDS is one of the architectural components that typically make up a software-defined data center ( #SDDC). The other components include compute virtualization, such as workspace-as-a-service and virtual desktop infrastructure (VDI), as well as software-defined networking. SDDC, sometimes referred to as a virtual data center, covers virtualization concepts geared toward automating and abstracting data center resources. Each element of the infrastructure - including networking, security, and storage - is virtualized and implemented as a service. SDDC uses abstraction to bring different components of infrastructure architecture together, usually managed through an application programming interface (API). The number of virtualized solutions within an SDDC makes it complicated to manage without an API assisting the developer. APIs abstract the layers of virtual technology within the data center, only displaying functions critical to the developer, making the SDDC easier to manage. SDDC is not a technology organizations can adopt in one deployment, and it's rare to see any organization running on a fully virtual environment. Gartner outlined in an SDDC market overview that many organizations may be several years away from implementing an SDDC infrastructure or any other virtual infrastructure. "Simply changing a legacy infrastructure for a set of software-defined products is unlikely to yield the desired benefits," Gartner analysts stated. "Before an activity is automated and self-service is implemented, the process associated with the IT service needs to be completely rethought and optimized. This may require new skills and a different culture to what is currently available within certain IT organizations." Gartner Vice President and Analyst Dave Russell suggested examining all solutions making up an organization's current infrastructure and figuring out which component of the infrastructure will benefit the organization the most from being virtualized. Starting with the most mission-critical components and planning to adapt the rest of the infrastructure over time is the best way to approach SDDC. Entities that are struggling with their storage infrastructure can look into virtualizing their storage tools to get the most out of their hardware and gain more control. SDS solutions are currently available for deployment in the healthcare industry, but the industry tends to hesitate when adopting new technology, particularly virtualization technology.
A report released by ZK Research earlier this year cited evolving IT infrastructure technology as one of the major reasons why healthcare organizations have been slower to adopt software-defined solutions. "Historically, hospitals and other healthcare institutions have been slow to adopt new technology, as maintaining the status quo was viewed as safer than risking disruption from new technologies," the report stated. As the amount of healthcare data being produced continues to grow, healthcare organizations will need to consider different ways of approaching storage. Storage solutions that allow rapid expansion and flexibility need to be considered to accommodate large and complex data files. Virtualization is a good technique for organizations to get the most out of their IT infrastructure tools.

https://hitinfrastructure.com/news/how-virtualization-supports-healthcare-data-storage-needs

IBM Making Progress in Quantum Computing With Q Systems

A lot has been written about @IBM's efforts in such #emergingtechnologies as #artificialintelligence, #machinelearning, #analytics, #blockchain and the #cloud, much of it part of the company's larger strategy around what officials call " #cognitivecomputing ." But IBM is also making major investments in #quantumcomputing, an area that has gotten a lot of attention for the tech giant in the past several months. IBM has continued to make strides in this area throughout 2017—for example, the company in March rolled out #IBMQ, a quantum computing environment accessible via the IBM Cloud platform, and an API that enables Quantum Experience users to run sophisticated experiments in the cloud-based quantum environment. In November, IBM announced that the first Q systems available online will have a 20-qubit processor and that engineers will upgrade the systems throughout 2018. They also said IBM had created an operational prototype 50-qubit processor, which will run the next generation of IBM Q systems. Earlier this month, IBM unveiled the first dozen businesses that will have early access to the vendor's IBM Q quantum computing system. Those customers include financial services company JPMorgan Chase, auto makers Daimler AG and Honda, Samsung and the Oak Ridge National Lab. The companies are part of IBM's new IBM Q Network and illustrate the broad customer interest in quantum computing and the ongoing drive to commercialize the technology. "We are currently in a period of history when we can prepare for a future where quantum computers offer a clear computational advantage for solving important problems that are currently intractable," Dario Gil, vice president of AI and IBM Q at IBM Research, wrote in a post on the company blog when the partner companies were announced. "This is the 'quantum ready' phase," he wrote. "We all know the reasons why quantum computing has attracted so much excitement. Despite the enormous progress we've achieved as a society with 'classical' computers, they simply don't have enough memory or processing power to solve historically intractable problems. Quantum computers, working with classical computers via the cloud, could be the answer for at least some of these," Gil wrote. The IBM Q Network extends IBM's "sphere of collaborators with whom we will advance quantum computing—from exploring practical business and scientific applications, to developing the tools needed to make the systems more accessible as they grow in power and performance," Gil wrote. Quantum computing has been talked about for decades, promising systems that are multiple times faster than current supercomputers. Organizations for several years have been relying on faster CPUs and accelerators including graphics processing units (GPUs) from Nvidia and Advanced Micro Devices, field-programmable gate arrays (FPGAs) like those from Intel and Xilinx, and faster interconnects to accelerate workloads that leverage AI, machine learning and neuromorphic computing (systems that can mimic the functions of the brain). At the heart of quantum computing are qubits. Current systems use bits that hold values of 0 or 1. But qubits—or quantum bits—can be 0 and 1 at the same time, which opens up the possibility of systems running through millions of calculations simultaneously and at high speeds and running workloads that are far too complex for today's systems.
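A tiny illustration of why that matters: describing an n-qubit register classically takes 2^n complex amplitudes, which is the state space a quantum computer manipulates natively. The numpy sketch below (illustrative only) counts those amplitudes and builds a small uniform superposition.

    # Size of the classical description of an n-qubit state (illustrative).
    import numpy as np

    def uniform_superposition(n_qubits):
        dim = 2 ** n_qubits
        return np.full(dim, 1 / np.sqrt(dim))   # equal amplitude on every basis state

    for n in (1, 20, 50):
        print(f"{n} qubits -> state vector of {2**n:,} amplitudes")

    state = uniform_superposition(3)
    print("3-qubit example:", state, "| probabilities sum to", float(np.sum(state**2)))

The point is only that the classical description grows exponentially; that by itself does not make every problem faster on a quantum computer.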
A growing number of top-tier tech vendors including Google, Microsoft and Intel are pushing quantum computing initiatives while smaller vendors such as Rigetti Computing also are gaining traction. D-Wave, which Google has invested in, offers commercial quantum computers, though critics say they aren’t true quantum systems. Most recently, Microsoft earlier in December announced a free preview of its Quantum Development Kit, while Intel in October unveiled a 17-qubit processor. Rigetti this month introduced Q19, a 19-qubit quantum processor. Homeland Security Research analysts said in a report earlier this year that the quantum computing market will hit $10.7 billion by 2024, with $8.45 billion coming from products sales and services and $2.25 billion from government-funded programs. However, it won’t be an easy lift. There are plenty of challenges in quantum computing. For example, qubits are fragile and while they may be entangled—sharing the same state with two or more qubits—they can revert back to one of the two states if affected by an external factor, such as quantum noise. That reversion can cause problems, such as data loss. Intel officials said that to reduce the risk, qubits need to operate in extremely cold environments—about 20 millikelvin, or 250 times colder than deep space—which places critical requirements on how qubits are packaged. IBM’s Gil in his blog post noted the work that still needs to be done in quantum computing, but pointed to the IBM Q systems as examples of what is being achieved now. “It is true that much of what you have read about the promise of quantum computers will require fault-tolerance, a goal likely still a decade or more away,” he wrote. “But we believe that the initial signs of a computational advantage will be achievable with nearer-term approximate quantum computers, which are now emerging.”

http://www.eweek.com/enterprise-apps/ibm-making-progress-in-quantum-computing-with-q-systems

ABB and HPE join forces in industrial automation

@ABB and @Hewlett Packard Enterprise ( #HPE ) said they have agreed to combine the 'ABB Ability' digital operations technologies (OT) services with HPE hybrid information technology (IT) products. This will generate actionable insights from large amounts of #industrialdata, enabling customers to increase the efficiency and flexibility of their operations, they said. The partnership will enable ABB Ability products to run on hybrid platforms such as HPE #ProLiant for @Microsoft #Azure Stack, enabling customers to deploy applications to their preferred location. ABB and HPE will also provide data centre automation, integrating ABB Ability Data Center Automation with the HPE OneView software. The two firms are working on secure edge data centre services, too, in conjunction with IT rack manufacturer Rittal. ABB CEO Ulrich Spiesshofer said the partnership will bring intelligence from cloud-based services to site deployments in industrial plant and data centres for greater uptime, speed and yield. ABB said it has an installed base of 70 million connected devices, 70,000 digital control systems and 6,000 enterprise software products in the industrial sector. Meg Whitman, CEO of HPE, said the two companies will shape a digital industrial future where everything computes, equipping machines with intelligence to collaborate, allowing plants to adapt to changing demands, and enabling global supply chains to react instantly to incidents.

https://www.telecompaper.com/news/abb-and-hpe-join-forces-in-industrial-automation--1226131

The Verge 2017 tech report card: Microsoft

If 2016 was the year of putting failures behind it, @Microsoft's 2017 was a year of focus. Over the past 12 months we've seen the software giant successfully execute on its plan to become a major cloud competitor. It's a massive part of Microsoft's business now, and it will only continue to grow into 2018. Microsoft CEO @Satya Nadella also revealed that the company's future is in artificial intelligence, #MixedReality, and #quantumcomputing. We've seen early work in all three areas this year, even if quantum computing and true artificial intelligence are still a ways off in the industry. Microsoft might have hinted at its future and started winning in cloud, but it didn't stop doing what it has been best at over the years: Windows. This year saw two big "Creators" updates to Windows 10, with a focus on Mixed Reality, creating in Paint 3D, and pen / touch improvements. Microsoft has continued to update Windows 10 regularly with new features, and as we approach its three-year anniversary it doesn't look like a "Windows 11" successor is likely any time soon.

https://www.theverge.com/2017/12/28/16820404/2017-tech-recap-microsoft-surface-book-xbox-one-x-pubg-windows-10-arm

China's Huawei Flags Slower Smartphone and Overall Revenue Growth

HONG KONG (Reuters) - China's @Huawei Technologies Co Ltd [HWT.UL] on Friday flagged overall and smartphone revenue figures for 2017 that represented its slowest growth in four years, and vowed to extend its global reach with more premium products next year. The telecom equipment and smartphone maker expects 2017 revenue to rise 15 percent to 600 billion yuan ($92.08 billion), Chief Executive Ken Hu said in his New Year’s message. That represents the slowest growth since 2013 for Huawei. Its fast revenue growth in recent years has been slowing as Chinese telecom carriers complete the construction of the world's largest 4G mobile network and as competition intensifies in the smartphone market. Hu said Huawei’s smartphone shipments in 2017 totaled 153 million units and its global market share topped 10 percent, cementing its position as the world's third-largest smartphone maker after Samsung Electronics Co Ltd and Apple Inc. Huawei said it would focus on profit after posting near-flat annual profit growth in March, weighed down by its fast-growing but thin-margin smartphone business and marketing spending. Richard Yu, CEO of Huawei's consumer business group, in a separate New Year's message said the group will strive to obtain a larger share in the high-end market globally, after recording "significant" growth in markets such as Italy and Germany in the past year. "In 2018, we will have disruptive products and innovative technology to lead the global market. I believe that 2018 will be the first year that we will truly be walking the road to global prominence," Yu said. In October Huawei launched its Mate 10 series, its most expensive model to date and powered with AI-enhanced chips which the company says are faster than Apple's iPhones. The model "sold extremely well", Yu said, without disclosing a number. Yu said the division's revenue is expected to rise 30 percent to 236 billion yuan - also its slowest growth since 2013. "We need to better understand the needs of high-end users outside China and fashion-savvy young consumers in China," Yu said. Industry tracker IDC forecasts China’s total smartphone shipments in 2017 to shrink slightly versus a year earlier. Hu also said Huawei’s enterprise business needs to “maintain mid-to-high growth speed and become a pillar business for the company in five years”. He also called on the company's consumer business to improve profitability, its new public cloud business to increase in scale, and its core carrier business to outperform the industry.

https://www.usnews.com/news/technology/articles/2017-12-28/chinas-huawei-says-expects-2017-revenue-up-15-percent-to-92-billion

How Dell EMC’s modern infrastructure is helping CIOs in IT mutation

"Today IT transformation is the foremost requirement for every business growth. The legacy PSUs like HPCLs and BPCLs of the world are parallelly running modern infrastructure services with their in-house software to implant a new edge technology. Hence, IT transformation becomes a revolutionary step before acquiring digital mutation in any vertical. Before getting into the digital transformation journey, businesses need to secure their IT operations. Modern infrastructure is thus required to evangelize their data center for a better and secure IT environment.", Niladri Saha, GM- Modern Infrastructure, @Dell EMC India embarked. With businesses aggressively embracing the digital transformation, it is becoming important for them to stay competitive and ahead of changing customer expectations. CIOs play an important role and it is essential for them to adopt the technology systems and processes that will transform their internal operations, to meet the demands of fast moving businesses. This requires building agile infrastructures. “As the business drives new digital initiatives, IT must transform to accommodate massive growth in modernizing data centers, maintain the performance and reliability of legacy business applications, deploy a new breed of storage strategy and support increasingly complex workflows” Saha added. Further Saha discussed points that CIOs must keep in mind before initiating the modernizing their data centers:

1. High costs to operate: As companies expand into new markets, they also gain or build data centers along the way. Each data center operates with its own set of practices and on its own hardware. Managing across these disparate systems and inconsistent processes is not at all cost-effective.

2. Aging technology: Enterprises have continued to operate using traditional data centers that weren't up to modern standards — utilization and virtualization rates were low, automation was non-existent, etc. Over time, this practice has not only contributed to high operating costs but also led to many inefficiencies, making it nearly impossible for IT to meet the basic demands of the business.

3. High risk due to lack of redundancy: An organization's fast growth and lack of standardization often lead to gaps in disaster recovery protection. Many applications — including key business systems — had zero redundancy. Hence, to close these gaps, firms need to quickly modernize their operations, as this risk to the business is too great to ignore.

4. Time lag in modernization: There is a huge time cost involved in migrating to new-age data centers. Without the right partner, an organization can easily spend twice as much time as the migration should take.

5. Resource cost avoidance: The alternative to choosing the right partner is running the migration project in-house. This would require bringing in contractors, experts and IT admins, at exorbitant cost. Hence, conducting the project in-house will not only demand significant resources but also raise the chances that the project gets extended.

When migrating legacy data centers to modern technology, CIOs prefer not to scale out their old servers. "In such deployments, SDS helps in consolidating all the servers, which brings the storage piece out of a heterogeneous state and gives a merged solution for old servers. And that's how software-defined storage helps organizations to accommodate new, better technology," he added. Software-defined data centers (SDDC) will lay a better foundation of storage infrastructure, equipping the business with tools to analyse this data for better business plans and improved consumer deliverables, eventually increasing the scale of the business. Not only this, SDDC will also prove to be a game changer for businesses by introducing them to the idea of innovation. With rising consumer demands, data is bound to grow exponentially, making it impossible for businesses to provide real-time solutions with manual workforces/processes. "Therefore, businesses need to switch from manual to automated workforce/processes to reduce cost and optimize operations, which will eventually drive growth. Software-defined storage data centres will act as a catalyst in this transition as businesses will be able to transfer as well as streamline their processes on digital platforms, thereby increasing efficiency," he further explained. Not only this, automation will improve communication and reduce human errors, enabling businesses to provide effective, instant solutions to consumers' demands.

https://cio.economictimes.indiatimes.com/news/strategy-and-management/how-dell-emcs-modern-infrastructure-is-helping-cios-in-it-mutation/62289621

How Serverless Technologies Impact Kubernetes

#Containers may serve as functions in a @Kubernetes environment—an approach in contrast to hosted cloud function services. On today's episode of The New Stack Makers streamed live from KubeCon + CloudNativeCon, we were joined by @Alex Ellis, Principal Application Developer at @ADP and Director at @OpenFaaS, and @Kassandra Perch, @IOpipe Developer Evangelist. We discussed the differences, similarities and the relative overall maturity of #serverless technologies. For those used to working with more traditional cloud-native environments and who are looking to utilize serverless technologies, having the ability to work with containers provides familiarity and stability. "Having the @Docker container as that primitive means that we can get a buy-in from the whole cloud-native community. It's what they know, and it's what they trust. It's battle tested. You can just take that to production," said Ellis. OpenFaaS gives developers the ability to package both code and binaries as serverless functions, with the flexibility to choose either Docker or Kubernetes as one's serverless platform of choice. Perch later went on to highlight the ways in which IOpipe is focusing on AWS Lambda, and is interested in seeing the serverless community evolve and grow. "You lose a lot of control over your application when you go with a cloud function provider like @AWS Lambda, or @Google Cloud Functions," Perch said. "I think that's why Kubernetes-based systems are so interesting, because you get a little bit of that control back, for better or for worse."
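For readers who have not seen the model Ellis describes, an OpenFaaS-style function is typically just a small handler packaged into a container image. The sketch below follows the shape of the OpenFaaS Python template's handler; the exact template layout varies by version, so treat it as illustrative rather than definitive.

    # handler.py - minimal function in the style of the OpenFaaS Python template.
    # The platform passes the request body in as `req` and returns whatever the
    # handler returns as the HTTP response body.
    import json

    def handle(req):
        """Echo the caller's name from a JSON request body."""
        try:
            name = json.loads(req).get("name", "world")
        except (ValueError, AttributeError):
            name = "world"
        return json.dumps({"message": f"Hello, {name}!"})

Building and deploying the function is then typically handled by the OpenFaaS CLI (faas-cli), which wraps the handler in the template's Docker image and pushes it to the Kubernetes or Docker Swarm cluster.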

https://thenewstack.io/serverless-technologies-impact-kubernetes/

Amazon Web Services CEO: We're going to see an 'explosion' of voice apps

For @Amazon ( #AMZN ), 2017 was another blockbuster stretch — and not just because it was @Yahoo Finance's Company of the Year. The Seattle tech giant saw its stock soar nearly 58%, driven by strong revenues, the $13.7 billion acquisition of @Whole Foods Market and the popularity of voice assistant @Alexa. Far less buzzed about was Amazon Web Services' ongoing hot streak. In the first three quarters of 2017, AWS generated $12.3 billion in revenues — up from nearly $8.7 billion in revenues during the same period in 2016 — and continued to have cloud computing cornered with a 35% market share. And during its sixth-annual re:Invent conference in Las Vegas this November, AWS released over 60 new features and product updates, including Alexa for Business. "AWS is focused on giving companies the opportunity to iterate and innovate faster and more easily than anywhere else," Andy Jassy, CEO of AWS, told Yahoo Finance. "Our significantly broader functionality and ecosystem, as well as performance maturity, are why so many more companies continue to choose AWS as their long-term, strategic technology partner." Jassy contends there's much more untapped potential in the cloud computing space as more companies traverse the big shift from managing their own data servers to storing and crunching their data remotely in the cloud. One piece of cloud-based technology Jassy remains particularly bullish about is voice-activated applications including Alexa, which runs off AWS. On its own, Alexa is the undisputed market leader, with hundreds of third-party Alexa-enabled devices on the market now, alongside competitors such as Google Home, Microsoft's Cortana and Apple's Siri. But the AWS chief executive foresees a future where voice becomes even more intertwined in people's everyday lives as voice technology becomes more sophisticated and one day able to perform more tasks, like, say, booking an entire trip from beginning to end. "When we first started being able to use applications on the phone, tapping a few times felt pretty handy," Jassy adds. "Then you used an application on something like Alexa, a voice-driven application, and it actually seems awfully inconvenient to have to tap three or four times. So we're going to see an explosion of voice applications over time, as well."

Moving at a breakneck pace

AWS wasn't always a surefire moneymaker or the go-to cloud computing provider for millions of businesses. When it launched in 2006, pundits at the time wondered why CEO Jeff Bezos strayed so far from the company's retail roots to build data centers that stored and processed information for other businesses. In the BusinessWeek cover story "Jeff Bezos' Risky Bet" from October 2006, one analyst griped that then-new investments like AWS were "more of a distraction than anything else." Fast-forward to the present, and AWS is obviously more than a mere "distraction." Although AWS isn't as flashy as Amazon's booming retail business, it nonetheless accounted for 10% of Amazon's overall revenues during the first three quarters of 2017. Rolling out myriad new features and updates at a breakneck pace is a large part of the reason AWS stays on top. (Chart: Synergy Research Group.) According to Jassy, the cloud computing business follows a 90-10 rule when it comes to development: 90% of what AWS builds is in direct response to what clients ask for, while the remaining 10% is what he calls a "strategic interpretation" of customer needs.
One example of a feature AWS launched as a direct result of many AWS customers asking for it is Amazon Redshift, a service rolled out in February 2013 that basically serves up a fully automated virtual data warehouse that processes and spits out data at brisk speeds. According to Jassy, Redshift became one of the fastest-growing services in the history of AWS, although he did not specify exactly what Redshift growth looks like. "We weren't really thinking about a data warehouse, but customers were unhappy with the options that were available at the time," Jassy recalled. "They were too hard to use and too costly. So they said to us, 'Why don't you guys do this?'" Likewise, an example of something AWS launched that customers were not asking for is AWS Lambda, which lets customers upload some code that is automatically run once certain parameters are met. AWS Lambda is growing 300% year over year, according to Jassy.

Untapped market potential

The way AWS' chief executive sees it, the cloud computing market is still in the early innings. In 10 to 15 years, Jassy sees far fewer businesses managing their own data centers and relying more and more on services crunched remotely in the cloud, like machine learning and artificial intelligence, connected devices and the so-called Internet of Things. That's a view largely in line with third parties like Gartner, which estimates worldwide revenues for overall public cloud services — which AWS is a part of — will grow 58% to $411 billion between now and 2020. The challenge for AWS, in particular, remains increasing competition from Azure and Google Cloud. The two may still be trailing AWS by a wide margin, but conversely, both Azure and Google Cloud's businesses are seeing faster growth. AWS must continue to remain competitive by aggressively rolling out new updates and features, keeping pricing competitive and investing heavily in data centers around the world. (Chart: Canalys.) In typical Amazon fashion, however, Jassy downplays concerns about the competition and maintains the same "customer-obsessed" attitude shared by other Amazon executives. "The key is to be focused on customers, not competitors, because that's what matters to the people making purchasing decisions," Jassy contends. "AWS is focused on giving companies the opportunity to iterate and innovate faster and more easily than anywhere else, and our significantly broader functionality and ecosystem, as well as performance maturity are why so many more companies continue to choose AWS as their long-term, strategic technology partner." If in the process AWS keeps its lead over the competition, Jassy likely wouldn't mind that, either.
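For readers unfamiliar with the Lambda model Jassy describes, a function is just a handler that the service invokes with the triggering event and a context object. The sketch below assumes an S3-style event, and the fields it reads are purely illustrative.

    # lambda_function.py - the shape of a minimal AWS Lambda function in Python.
    # Lambda calls lambda_handler(event, context) whenever the configured
    # trigger (here assumed to be an S3 notification) fires.
    import json

    def lambda_handler(event, context):
        records = event.get("Records", [])
        keys = [r["s3"]["object"]["key"] for r in records if "s3" in r]
        return {
            "statusCode": 200,
            "body": json.dumps({"objects_seen": keys}),
        }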

https://www.google.com/amp/s/www.yahoo.com/amphtml/finance/news/amazon-web-services-ceo-going-see-explosion-voice-apps-152533321.html#ampshare=https://finance.yahoo.com/news/amazon-web-services-ceo-going-see-explosion-voice-apps-152533321.html

Current and near term quantum computers

The (noisy) 50-100 qubit #quantumcomputer is coming soon ( #NISQ = noisy intermediate-scale quantum computer). NISQ devices cannot be simulated by brute force using the most powerful currently existing #supercomputers. NISQ will be an interesting tool for exploring physics. It might also have useful applications, but we're not sure about that. NISQ will not change the world by itself; rather, it is a step toward more powerful quantum technologies of the future. Potentially transformative scalable quantum computers may still be decades away, and we're not sure how long it will take.

Qubit "quality"
The number of qubits is an important metric, but it is not the only thing that matters. The quality of the qubits, and of the "quantum gates" that process the qubits, is also very important. All quantum gates today are noisy, but some are better than others. Qubit measurements are also noisy. For today's best hardware (superconducting circuits or trapped ions), the probability of error per (two-qubit) gate is about 1 per 1000, and the probability of error per measurement is about 1 per 100 (or better for trapped ions). We don't yet know whether systems with many qubits will perform that well. Naively, we cannot do many more than 1000 gates (and perhaps not even that many) without being overwhelmed by the noise. Actually, that may be too naive, but in any case the noise limits the computational power of NISQ technology. Eventually we'll do much better, either by improving (logical) gate accuracy using quantum error correction (at a hefty overhead cost), by building much more accurate physical gates, or both. But that probably won't happen very soon. Other important features: the time needed to execute a gate (or a measurement). E.g., the two-qubit gate time is about 40 ns for superconducting qubits and 100 µs for trapped ions, a significant difference. Also qubit connectivity, fabrication yield, and more.

Quantum speedups?
When will quantum computers solve important problems that are beyond the reach of the most powerful classical supercomputers? We should compare with post-exascale classical hardware, e.g. 10 years from now, or more (over 10^18 FLOPS). We should compare with the best classical algorithms for the same tasks. Note that, for problems outside NP (e.g. typical quantum simulation tasks), validating the performance of the quantum computer may be difficult. Even if classical supercomputers can compete, the quantum computer might have advantages, e.g. lower cost and/or lower power consumption.

Quantum optimizers
Eddie Farhi: "Try it and see if it works!" We don't expect a quantum computer to solve worst-case instances of NP-hard problems, but it might find better approximate solutions, or find them faster. Hybrid quantum/classical algorithms combine quantum evaluation of an expectation value with a classical feedback loop that seeks a quantum state with a lower value. The quantum approximate optimization algorithm (QAOA), in effect, seeks low-energy states of a classical spin glass. Variational quantum eigensolvers (VQE) seek low-energy states of a quantum many-body system with a local Hamiltonian H (much easier than algorithms which require simulation of time evolution governed by H). Classical optimization algorithms (for both classical and quantum problems) are sophisticated and well-honed after decades of hard work. Will NISQ be able to do better?
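To make the hybrid quantum/classical loop concrete, here is a minimal VQE-style sketch, a toy example of my own rather than anything from the talk. The "quantum" expectation value is simulated classically with NumPy for a one-qubit Hamiltonian; on real NISQ hardware the energy would instead be estimated by repeatedly preparing and measuring the parameterized state.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Pauli matrices and a toy one-qubit Hamiltonian (illustrative choice only).
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = Z + 0.5 * X

def ansatz(theta):
    """One-parameter trial state RY(theta)|0> = (cos(theta/2), sin(theta/2))."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """<psi(theta)|H|psi(theta)> -- the quantity a NISQ device would estimate
    from repeated measurements; here it is computed exactly."""
    psi = ansatz(theta)
    return float(psi @ H @ psi)

# Classical outer loop: adjust theta to drive the measured energy down.
result = minimize_scalar(energy, bounds=(0.0, 2 * np.pi), method="bounded")
exact = np.linalg.eigvalsh(H).min()
print(f"VQE estimate: {result.fun:.4f}   exact ground energy: {exact:.4f}")
```

The same pattern scales up: a classical optimizer proposes circuit parameters, the quantum processor returns a noisy energy estimate, and the loop repeats until the energy stops improving.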
How quantum testbeds might help
Peter Shor: "You don't need them [testbeds] to be big enough to solve useful problems, just big enough to tell whether you can solve useful problems." Classical examples: the simplex method for linear programming (experiments showed it's fast long before theorists could prove that it's fast); the Metropolis algorithm (experiments showed it's useful for solving statistical physics problems before theory established criteria for rapid convergence); deep learning (mostly tinkering so far, without much theory input). Possible quantum examples: quantum annealers, approximate optimizers, variational eigensolvers, … playing around may give us new ideas. But in the NISQ era, imperfect gates will place severe limits on circuit size. In the long run, quantum error correction will be needed for scalability. In the near term, better gates might help a lot! What can we do with, say, fewer than 100 qubits and depth less than 100? We need a dialog between quantum algorithm experts and application users.

Quantum annealing
The D-Wave machine is a (very noisy) 2000-qubit quantum annealer (QA), which solves optimization problems. It might be useful. But we have no convincing theoretical argument that QAs are useful, nor have QA speedups been demonstrated experimentally. QA is a noisy version of adiabatic quantum computing (AQC), and we believe AQC is powerful: any problem that can be solved efficiently by noiseless quantum computers can also be solved efficiently by noiseless AQC, using a "circuit-to-Hamiltonian map." But in contrast to the quantum circuit model, we don't know whether noisy AQC is scalable. Furthermore, the circuit-to-Hamiltonian map has high overhead: many more qubits are needed by the (noiseless) AQC algorithm than by the corresponding quantum circuit algorithm which solves the same problem. Theorists are more hopeful that a QA can achieve speedups if the Hamiltonian has a "sign problem" (is "non-stoquastic"). Present-day QAs are stoquastic, but non-stoquastic versions are coming soon. Assessing the performance of QA may already be beyond the reach of classical simulation, and theoretical analysis has not achieved much progress. Further experimentation should clarify whether QAs actually achieve speedups relative to the best classical algorithms. QAs can also be used for solving quantum simulation problems rather than classical optimization problems (D-Wave, unpublished).
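For comparison, the classical baseline that a quantum annealer has to beat looks roughly like the following Metropolis-style simulated-annealing sweep over a toy Ising spin glass. The random couplings and cooling schedule are assumptions of mine for illustration, not anything from the talk or from D-Wave.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                    # number of spins (toy problem size)
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                         # symmetric random couplings
np.fill_diagonal(J, 0.0)                  # no self-coupling

def energy(s):
    """Spin-glass energy E(s) = -1/2 * sum_ij J_ij s_i s_j for s_i in {-1,+1}."""
    return -0.5 * s @ J @ s

s = rng.choice([-1, 1], size=n)           # random initial spin configuration
T = 2.0                                   # start hot, cool gradually
for _ in range(20000):
    i = rng.integers(n)
    dE = 2.0 * s[i] * (J[i] @ s)          # energy change from flipping spin i
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]                      # accept the flip (Metropolis rule)
    T = max(0.01, T * 0.9997)             # geometric cooling schedule

print("lowest energy found:", energy(s))
```

A quantum annealer attacks the same kind of energy landscape, but by tunneling through barriers rather than hopping over them thermally; whether that yields a genuine speedup on hard instances is exactly the open question described above.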
Quantum speedups in the NISQ era and beyond
Can noisy intermediate-scale quantum computing (NISQ) surpass exascale classical hardware running the best classical algorithms? Near-term quantum advantage for useful applications is possible, but not guaranteed. Hybrid quantum/classical algorithms (like QAOA and VQE) can be tested. Near-term algorithms should be designed with noise resilience in mind. Quantum dynamics of highly entangled systems is especially hard to simulate, and is therefore an especially promising arena for quantum advantage. Experimentation with quantum testbeds may hasten progress and inspire new algorithms. NISQ will not change the world by itself. Realistically, the goal for near-term quantum platforms should be to pave the way for bigger payoffs using future devices. Lower quantum gate error rates will lower the overhead cost of quantum error correction, and also extend the reach of quantum algorithms which do not use error correction. Truly transformative quantum computing technology may need to be fault tolerant, and so may still be far off. But we don't know for sure how long it will take. Progress toward fault-tolerant QC must continue to be a high priority for quantum technologists.

Quantum hardware: state of the art
IBM Quantum Experience in the cloud: now 16 qubits (superconducting circuit); 20 qubits by end of 2017; 50-qubit device "built and measured." Google: 22-qubit device (superconducting circuit), 49 qubits next year. Harvard: 51-qubit quantum simulator (Rydberg atoms in optical tweezers); dynamical phase transition in Ising-like systems; puzzles in defect (domain wall) density. UMD: 53-qubit quantum simulator (trapped ions); dynamical phase transition in Ising-like systems; high-efficiency single-shot readout of many-body correlators. IonQ: 32-qubit processor planned (trapped ions), with all-to-all connectivity. Microsoft: is 2018 the year of the Majorana qubit? And many other interesting platforms: spin qubits, defects in diamond (and other materials), photonic systems, and more. There are other important metrics besides the number of qubits; in particular, the two-qubit gate error rate (currently about 1 per 1000) determines how large a quantum circuit can be executed with a reasonable signal-to-noise ratio.
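A back-of-the-envelope sketch (my own illustration, not from the article) shows why that error rate caps circuit size: if each two-qubit gate fails independently with probability p, a circuit of G gates finishes without any error with probability roughly (1 - p)^G.

```python
# Rough error budget: probability that a circuit of G two-qubit gates runs
# without a single gate error, assuming independent failures with probability
# p per gate (an idealized model, for illustration only).
p = 1e-3  # roughly today's best two-qubit gate error rate
for G in (100, 1000, 10000):
    print(f"{G:>6} gates: error-free probability ~ {(1 - p) ** G:.4f}")
```

With p around one per thousand, the error-free probability is still about 0.9 at 100 gates, falls to roughly 0.37 at 1,000 gates, and essentially vanishes at 10,000, which is the sense in which today's gate fidelities limit NISQ circuits to on the order of a thousand gates.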

https://www.google.com/amp/s/www.nextbigfuture.com/2017/12/current-and-near-term-quantum-computers.html/amp#ampshare=https://www.nextbigfuture.com/2017/12/current-and-near-term-quantum-computers.html

Thursday, December 28, 2017

Here’s How AMD Is Giving Intel Sleepless Nights

@Intel 's (NASDAQ: #INTC ) monopolistic position in the server processor market is under threat now that @Advanced Micro Devices (NASDAQ: #AMD) has landed its first big customer win in this space. @Microsoft 's (NASDAQ: #MSFT) #Azure is the first public cloud platform that has decided to deploy AMD's #EPYC server processors, which were launched with a lot of hype earlier this year. AMD has spoken at length about how its EPYC server processors are more powerful and cost-efficient than the competition. Not surprisingly, they have helped the company rack up a string of partnerships with big cloud service providers and data center companies, and it wasn't long before one of them decided to commercially deploy the chip. The fact that Microsoft has become the first to use EPYC servers is great news for AMD, and it should start giving Intel sleepless nights. AMD kick-starts Microsoft's Project Olympus Microsoft launched its ambitious next-generation cloud hardware project, known as Project Olympus, just over a year ago with the aim of creating a standard, open-source hardware ecosystem for providing cloud and enterprise services. In simple terms, Microsoft is creating a scalable architecture (known as a hyperscale cloud) that can be deployed by organizations of all sizes, so that they can change components to increase capacity when needed. Microsoft made this platform CPU (central processing unit) independent, allowing it to strike partnerships with multiple vendors including AMD and Intel. The fact that Microsoft selected AMD's EPYC processor over Intel's Xeon server processors, which hold a 99% share of this market, is a big vote of confidence for AMD's fledgling platform. AMD claims that its EPYC processors provide 33% more connectivity options when compared to the competing two-socket solutions available on the market. As a result, they will allow Microsoft Azure's latest L-Series virtual machines to connect to a large number of non-volatile memory express (NVMe) drives, enabling them to tackle storage-intensive workloads. Microsoft executive Corey Sanders said in a blog post that AMD's EPYC processor was the right fit for Azure's latest Lv2-Series instances (an instance refers to a virtual server in cloud computing) because it can "support customers with demanding workloads like MongoDB, Cassandra, and Cloudera that are storage intensive and demand high levels of I/O." Microsoft's endorsement could help AMD land more wins in the cloud computing space, as more than half of enterprise servers and over 60% of enterprise storage appliances are expected to be running NVMe drives by 2020. Moreover, this is just the beginning of the AMD-Microsoft partnership, as both companies are planning to bring more cloud computing instances to the market in 2018. AMD's EPYC processors look like a good fit for deployment in hyperscale data centers, a market that's expected to grow at more than 26% a year over the next five years. What next for AMD? It won't be surprising if Microsoft selects Intel's server processors for other cloud computing instances as a part of Project Olympus. Both companies have collaborated on the deployment of Chipzilla's Xeon server processors and programmable chips in the cloud, so it remains to be seen how many more wins AMD lands at Microsoft. But it is likely that AMD can claw back more market share from Intel in this space, according to comments made by Intel VP and CFO Robert Swan at a recent conference.
Fellow fool.com writer Ashraf Eassa pointed out in a recent article that Swan talked about "increasing competitive dynamics" in the data center space. This isn't entirely unexpected, as AMD has struck partnerships with a number of Chinese cloud computing players and has racked up a roster of OEMs (original equipment manufacturers) such as Dell, Asus, and Acer. Baidu, for instance, has committed to using the EPYC chip in the cloud to power artificial intelligence applications and search. Moreover, AMD's win at Microsoft isn't the first big one for the EPYC processors. Last month, Hewlett Packard Enterprise became the first major server OEM to deploy a two-socket server based on the EPYC, boasting that it can tackle a wide range of workloads including analytics and big data. According to @HPE, the #ProLiant #DL385 Gen10 server, based on the #EPYC, can slash operational costs per #virtualmachine by half, while supporting more #cloudcomputing instances per server. This is another important contract win for @AMD, as HPE controls 19.5% of the server market, followed closely by @DellTechnologies at 18.1%. It won't be surprising if Dell follows HPE with a commercial deployment of its own server based on the EPYC processor. This is because Dell made EPYC a part of its server platform earlier this year, promising availability in the second half of 2017. Though there is no news of that availability yet, industry watchers expect a launch soon to counter HPE. AMD, therefore, looks set to gain more traction in the data center space, and it could gradually chip away at Intel's near-monopoly.

http://host.madison.com/business/investment/markets-and-stocks/here-s-how-amd-is-giving-intel-sleepless-nights/article_7e61b4e8-daf0-5de0-9593-ac04e4a61773.html

iXsystems: 'weird' but profitable NAS vendor

Here’s something you rarely hear from high tech companies today: “We’re a hardware company in our heart and soul.” That is how executive vice president Brett Davis introduced iXsystems during a press tour in early December at the NAS vendor’s San Jose, California headquarters. The company sells open-source based TrueNAS enterprise hardware and FreeNAS desktop systems.

Self-identifying as a hardware company in this software-defined world is only one reason why iXsystems seems out of place in Silicon Valley. The vendor also bootstrapped its financing, turning a profit without accepting outside investment.

“We’re private, profitable and self-funded,” Davis said. “Our heritage goes back to the ‘90s. We’ve just been here. We say we’re unique, but you can say we’re weird.”

But it’s the hardware tag that provides most of the weirdness these days. The iXsystems headquarters includes a manufacturing facility in the back, where Davis said the 130-person company can fulfill 3,000 orders in a day.

The vendor claims more than 4,000 customers, including Sony, NBC, Duke University and NASA.

But the iXsystems strategy of bundling open-source storage software on commodity hardware isn't that unusual. Plenty of others do that and label it software-defined storage. But only iXsystems boasts that it's a hardware company, even though its value comes from open-source projects.

The company's roots date to Berkeley Software Design, Inc. (BSDi), which started in 1991. iXsystems founders Mike Lauth and Matt Olander acquired the hardware business from BSDi in 2002. Lauth has been the CEO and Olander the CTO since then. From the start, iXsystems was heavily involved in the FreeBSD project, and it is the project lead for the FreeNAS storage and TrueOS desktop open-source operating systems.

Davis said 70% of @iXsystems appliances are custom configurations. The vendor uses @Intel, @AMD and #ARM chips inside. The systems support @VMware, @Microsoft #HyperV, #Citrix, @KVM and @OpenStack #virtualization software, #Hadoop, @Docker and #MySQL data and container platforms, and @FreeNAS, @FreeBSD, @CentOS, @RedHat @Linux and  @Ubuntu #opensource software.

“We’ve been doing open source since way before it was cool,” Davis said. “We give away the number one software-defined storage (FreeNAS), but software and hardware are inseparable.”

iXsystems re-sold storage systems from Dot Hill, Infortrend and others in the late 2000s, but had to rely on those vendors for support. Now iXsystems provides end-to-end support for its storage. The company acquired the FreeNAS project in late 2009, and then spent two years re-writing the operating system before making it commercially available. iXsystems ported the OpenZFS open-source enterprise file system to FreeNAS. That gave FreeNAS file, block and object support, triple parity RAID, support for flash and unlimited instant snapshots.

@FreeNAS is file-only storage while @TrueNAS is unified storage with #FibreChannel networking support. TrueNAS competes with the likes of @DellEMC #VNXe and #Unity, @NetApp #FAS, @Hewlett Packard Enterprise’s #MSA2040 and @Nimble arrays, and @Western Digital’s @Tegile platform.

A more souped-up @iXsystems platform with #NVMe support is planned for March, but the vendor is keeping publicly quiet about it for now.

Davis said iXsystems is staying out of hyper-convergence, even though it has all the pieces – including FreeBSD’s open-source bhyve hypervisor.

“We can run virtual machines and containers in FreeNAS,” he said. “We have the capabilities to do it, and we have our own hypervisor. But it’s a competitive space, and we have other plans.”

http://searchstorage.techtarget.com/blog/Storage-Soup/iXsystems-weird-but-profitable-NAS-vendor

3 Takeaways From Cloudera's Strong Earnings Report

@Cloudera (NYSE:CLDR), the #Hadoop software distributor, recently reported third-quarter earnings, sending shares up about 7%. Cloudera went public back in April at $15 and traded as high as $23 earlier this year. #Bigdata #analytics and #opensource Hadoop are becoming key tools for large enterprises around the world, and the recent quarter's results showed this in spades. However, as is the case with many high-growth, young tech companies, Cloudera also continues to rack up losses, with $24.4 million in non-GAAP operating losses. Including stock-based compensation, operating losses were $56.6 million. Here's what management wants us to know about the company's key achievements last quarter. Going into the cloud Cloudera's biggest announcement last quarter was a product called the Shared Data Experience, or SDX. Basically, Cloudera has made a product that allows large enterprises to work across Amazon (NASDAQ:AMZN) Web Services, Microsoft (NASDAQ:MSFT) Azure, or a private data center running the Hadoop Distributed File System. Traditionally, Hadoop deployments occurred within a company's data center, but as more and more large enterprises turn to the cloud, they run into a problem: how do you govern the company's data as it moves around, and how do you cleanse and analyze all of it across environments? Said Chief Strategy Officer Mike Olson: If you're an enterprise CIO or CBO, you're required by law to protect sensitive user data. You're generally under the gaze of regulators, making sure you comply with rules governing how data is used and by whom... But that data is essential for high value business applications. You need to be able to process new data as it arrives, to clean it and enhance it. Then, you need to be able to query it or to do statistical analyses or to train machine learning models... At every step, there are new security and identity systems. The result is that operations staff are forced to create multiple copies of data. This data replication and movement increases risk, management complexity and cost. There's no way to look at data lineage, or to set policies for end-to-end data governance. There's no support for answering the questions that regulators ask. And that's precisely the problem that SDX solves... This is good news, given that Hadoop has traditionally run in a data center. Last quarter, for instance, only 20% of Cloudera's revenue came from cloud deployments. Since large enterprises -- which make up most of Cloudera's revenue -- are accelerating their adoption of cloud and hybrid cloud environments, it is essential that Cloudera's software works seamlessly across the major clouds. The SDX product not only does that but also helps companies ensure security. Torrid growth These enterprise-oriented initiatives may help explain the growing confidence corporations are putting in Cloudera. Here are some eye-popping growth figures from the past quarter: Subscription growth of 48%, accelerating from 46% in the second quarter. Net expansion rate (which is same-customer sales versus one year ago) of 135%, which is still strong, compared to 140% in the second quarter. The difference was that the company landed a higher proportion of Global 8000 customers, which have a higher initial spend rate. This increased the initial customer spend from $65,000 to $75,000. 50 customers had annual run rates of $1 million or greater, up from 40 just last quarter.
Subscription non-GAAP gross margin expanded 250 basis points to 86%. Clearly, the company is seeing positive momentum, increasing usage, and bigger wins, which helped drive the optimism following the earnings announcement. But can it be profitable? Despite this rapid pace of revenue growth, the company is still unprofitable. Cloudera doesn't believe it will be operating-cash-flow break-even for another four to six quarters. Investors should remember that stock-based compensation (SBC), a real expense, isn't counted in operating cash flow, so real profitability lies even beyond that horizon. SBC was a whopping $31 million in this past quarter alone, so keep an eye on that number going forward. Before you discount Cloudera on that basis, however, keep in mind that these Hadoop companies operate a "land and expand" strategy, which means existing customers pay Cloudera according to how much data they use. Since data projects should continue to grow (as evidenced by the expansion rate), the company should become more profitable over time just based on increasing usage from current customers. Clearly, the company is aggressively investing ahead of that. The big picture Cloudera, like its rival Hortonworks (NASDAQ:HDP), reported a strong third quarter, showing that Hadoop and big data science tools are being used more and more across enterprises of all types. Just keep in mind that profitability is still a ways off.

https://www.fool.com/investing/2017/12/27/3-takeaways-from-clouderas-strong-earnings-report.aspx

7 Tech Predictions For 2018 And Beyond

The technology trends of 2017 were big and impactful, with #ArtificialIntelligence ( #AI ) and many self-service platforms enhancing the human experience. Today technology helps people and businesses accomplish far more than they could have imagined a decade ago. Technology is evolving and meshing seamlessly into business processes and everyday life. As technology reaches a new level, we can expect the following trends in the coming year: Integrating New Data into Existing Data Streams From wearables to facial and voice recognition, there are new types of data that haven't been utilized yet and can provide fresh insight. Integrating a wide range of datasets into marketing campaigns can drive customer signups, lead generation and sales. Companies can integrate new data streams from activity trackers, gym networks and grocery stores to reward customers for loyalty. This new data can also help correctly time tailored notifications, messages and promotions across platforms for consumers. IoT is no longer about Things The convergence of the digital and physical worlds makes staying connected inevitable. Businesses harnessing the power of IoT will gradually shift from product-centric to service-centric business models to deliver convenience to customers throughout the product's life cycle. Companies selling products will also become service-oriented as their business value moves from products to the enhanced customer experiences they enable. This inclusion of IoT will fundamentally change how businesses operate, interact with customers and earn profits. In the coming years, the Internet of Things will not just be about things but about services positioned to meet new customer demands, and it will unlock new sources of revenue. 360-degree Videos and Augmented Reality for 2018 Pokémon Go feels like years ago now, and Augmented Reality is ready to move on to the next level. We saw a surge of AR apps launch in 2017, and there will be more to come in 2018. Apple has built in support for augmenting whatever the phone's camera captures with additional information. This means that AR is suddenly within the reach of hundreds of app developers as well as customers. Expect VR, AR and 360-degree videos to be the next tech trend to become popular, since they can be a great medium for showcasing or previewing products and services or encouraging product trials. 360-degree video is becoming the most popular, quick and influential form of digital content for businesses today. Wearable wallet on your wrist In 2018, a growing number of our devices can become payment devices: car keys, vending machines, smartphones, connected cars, sports watches and more. Artificial Intelligence and IoT will generate intelligent, personalized, context-aware customer services, enabling more tailored solutions that help customers make better financial decisions and help companies reduce their operational costs. In 2018, several new device payment systems can be expected; you may have your wallet on your wrist or control payments remotely from your wearables. Self-driven cars on the roads Thanks to what technology and artificial intelligence have brought to the table, we now have the self-driving car. The concept is still being developed by companies like Google and Tesla Motors, but significant progress has been made, and soon we may see self-driving cars in commercial use.
The idea of self-driving cars revolves around eliminating the human factor in driving, reducing driving stress and thus cutting the number of traffic accidents and traffic jams. Your assistant isn't virtual anymore Now it's time to explore another form of user interface that might feel even more natural and intuitive: speech. We have already experienced speech recognition, whether in the form of the iPhone's Siri, Android's Google Now, or Windows' Cortana. These services are designed to let you interact with your phone and handle your leisure, digital and informational activities. It's an amazing feat of engineering, but it isn't quite perfect yet. With the innovations happening in microchips and cloud computing, though, we can expect virtual assistants to become frighteningly accurate soon. In the coming years, you can expect virtual assistants to understand your speech perfectly and grasp the context behind the questions you ask; they will recognize the indirect signals given off by your tone of voice; they will even engage in long-form conversations with you. Data storage innovations in the pipeline There are limits to the capacity we can squeeze into, and the performance we can squeeze out of, traditional hard disks using existing designs and manufacturing processes. But to our relief, science is coming to the rescue. A handful of near- and long-term innovations from hard disk drive manufacturers will help satisfy our storage-hungry society. Better hard disk drives, solid-state drives, #PhaseChangeMemory ( #PCM ), #ResistiveRandomAccessMemory ( #RRAM ) and #3Dflash memory will usher in the next generation of #datastorage, offering more capacity, speed, endurance and power efficiency. These innovations will take the data storage industry by storm and will create the need to shift to cloud-based platforms.

http://www.cxotoday.com/story/7-tech-predictions-for-2018-and-beyond/