Dell, EMC, Dell Technologies, Cisco

Thursday, July 27, 2017

A deep dive on VMware vSAN 6.6 and Veeam integration

Hyper-Converged Infrastructure ( #HCI ) is starting to disrupt traditional storage markets (SAN, NAS and DAS) as enterprise IT begins to replicate #hyperscale public cloud providers’ infrastructure. Public cloud giants such as #AWS and #Microsoft #Azure have developed their own scale-out, software-defined enterprise storage built on commodity servers and disks. Moving the enterprise features of existing SAN and NAS devices into software allows the use of commodity hardware while shifting storage directly onto the host to improve performance and scalability. Because storage scales out alongside the compute resources, the result is a truly scalable and cost-effective solution. A report from 2016 predicts the shift from traditional storage to HCI and hyperscale technologies over the next 10 years.

Different vendors provide HCI solutions, but I am not here to compare features between them. VMware’s HCI offering is VMware vSphere combined with its Software-Defined Storage (SDS) solution, VMware vSAN. I will cover vSAN and Veeam’s recent announcement of further support for it.

VMware vSAN: What is it and what challenges does it overcome?

With traditional storage, customers faced several challenges: the hardware was not commodity, and it often created storage silos that lacked granular control. Deploying traditional storage could also be time-consuming, because it often involved multiple teams and lacked automation.

By moving the storage into software, vSAN provides a linear scalable solution using the same management and monitoring tools VMware admins are already using, while also providing a modern, policy-based and automated solution.

What does the VMware vSAN architecture look like?

vSAN is an object-based storage system in which VMs and snapshots are broken down into objects, and each object has multiple components. The vSAN objects are:

- VM Home Namespace (VMX, NVRAM)
- VM Swap (Virtual Memory Swap)
- Virtual Disk (VMDK)
- Snapshot Delta Disk
- Snapshot Memory Delta

Other objects can exist as well, such as the vSAN performance service database or VMDKs that belong to iSCSI targets. An object can be made up of one or more components, depending on factors such as the size of the object and the storage policy assigned to it. A storage policy defines settings such as Failures to Tolerate (FTT) and stripe size. For an object protected with RAID 1 and FTT=1, two full copies of the data are distributed across two hosts, with a witness component on a third host, allowing the object to tolerate a full host failure. Rack-awareness and multiple-site fault domains can also be configured, which dictate how the objects are distributed.
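
To make the placement math concrete, here is a minimal Python sketch of the RAID-1 arithmetic described above (illustrative only, assuming the usual vSAN mirroring rule of FTT + 1 replicas plus witness components for quorum; this is not a vSAN API):

```python
# Sketch: how vSAN RAID-1 mirroring distributes an object's components.
# Assumption: classic mirroring math (replicas = FTT + 1, plus witness
# components to keep a quorum of votes). Illustrative only.

def raid1_layout(ftt: int) -> dict:
    """Return the component layout for a RAID-1 object with a given
    Failures to Tolerate (FTT) setting."""
    if ftt < 1:
        raise ValueError("RAID-1 mirroring needs FTT >= 1")
    replicas = ftt + 1          # full data copies, each on a separate host
    witnesses = ftt             # tie-breaker components for quorum
    return {
        "replicas": replicas,
        "witnesses": witnesses,
        "components": replicas + witnesses,
        "min_hosts": 2 * ftt + 1,   # hosts needed to tolerate ftt failures
    }

# FTT=1, as described above: two full copies plus one witness on a third host.
print(raid1_layout(1))
# {'replicas': 2, 'witnesses': 1, 'components': 3, 'min_hosts': 3}
```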

vSAN uses the concept of disk groups to pool flash devices and magnetic disks into a single management construct. A disk group consists of one flash device for the read cache/write buffer and up to seven capacity devices, which can be magnetic disks (hybrid mode) or flash (all-flash mode). A disk group must pair a cache tier with a capacity tier, and a host can have up to five disk groups in total.
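
A quick sketch of what those per-host limits mean for raw capacity; the device counts come from the paragraph above, while the 2 TB drive size is a hypothetical example:

```python
# Sketch: vSAN disk-group limits expressed as a raw-capacity calculator.
# The device counts are the maxima described above; the drive size used in
# the example at the bottom is a hypothetical value.

MAX_DISK_GROUPS_PER_HOST = 5
MAX_CAPACITY_DEVICES_PER_GROUP = 7   # plus exactly 1 cache device per group

def host_raw_capacity_tb(disk_groups: int, devices_per_group: int,
                         device_tb: float) -> float:
    """Raw (pre-policy) capacity contributed by one host."""
    if not 1 <= disk_groups <= MAX_DISK_GROUPS_PER_HOST:
        raise ValueError("vSAN supports 1-5 disk groups per host")
    if not 1 <= devices_per_group <= MAX_CAPACITY_DEVICES_PER_GROUP:
        raise ValueError("vSAN supports 1-7 capacity devices per group")
    return disk_groups * devices_per_group * device_tb

# A fully populated host with hypothetical 2 TB capacity drives:
print(host_raw_capacity_tb(5, 7, 2.0))  # 70.0 TB raw, before FTT overhead
```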

Any supported hardware can be used for vSAN. VMware maintains an extensive HCL, and it also offers vSAN Ready Nodes from a multitude of server hardware vendors, which come pre-built entirely from supported hardware.

Other HCI providers require a virtual appliance to run on each host to handle storage operations, which typically means reserving CPU and memory on every host in the cluster. vSAN, by contrast, is embedded directly into the vSphere hypervisor kernel through deployment of a vSphere Installation Bundle (VIB). vSAN still consumes resources, typically up to 10% of a host's compute, but it doesn't compete with VMs for them. Because it is integrated with vSphere, admins manage vSAN with the same tools they already use for vSphere, and vSAN fully supports native vMotion and DRS.

Standard and 2-Node Cluster deployments are supported with the vSAN Standard license, while Stretched Cluster deployment is enabled through the vSAN Enterprise license. Because vSAN is implemented entirely in software, these deployments can be scaled as required.

https://www.veeam.com/blog/deep-dive-on-vmware-vsan-6-6.html

State of Louisiana Builds Technology 'Systems of Tomorrow' with VMware NSX

PALO ALTO, CA--(Marketwired - July 27, 2017) - #VMware, Inc. (NYSE: VMW), a global leader in cloud infrastructure and business mobility, today announced that the State of Louisiana Office of Technology Services (OTS) is taking advantage of VMware Cross-Cloud Architecture™ and VMware #NSX® to transform security and modernize data center operations as part of a statewide cloud transformation initiative. VMware is empowering OTS with a common operating model that spans both private and public clouds to more securely manage and scale its newly consolidated IT operations. According to OTS, to date NSX has saved OTS upwards of $1 million in capital expenditures and even more in operational expenses by better equipping IT teams with the tools they need to scale systems with better outcomes and at lower costs.

Born out of legislation, the State of Louisiana OTS consolidated the IT operations of 16 state-wide agencies, including healthcare, education, and public safety, into a single entity, led by Richard "Dickie" Howze, the State's Chief Information Officer. Using this as a catalyst, the State embarked on an IT transformation initiative to shape and change how IT services and infrastructure were built, delivered and managed across the State of Louisiana. The first major enterprise system to benefit from this initiative is the Louisiana Department of Health's Medicaid Eligibility and Enrollment system, which is funded by the Centers for Medicare & Medicaid Services (CMS) using a 90/10 match. The department implemented a software-defined data center with VMware NSX to deliver a service-first approach that prioritizes end-user security and ease-of-use while enabling easy scalability across various aspects of its government.

OTS first deployed VMware NSX as part of its SDDC solution to support two of Louisiana's largest healthcare services, the Medicaid Eligibility and Enrollment System and the Medicaid Management Information System. By implementing VMware NSX network virtualization, OTS said it can better serve the 1.4 million and growing constituents who use these programs by guaranteeing high availability, application continuity and streamlined IT workloads.

"The work we've done with VMware and NSX will serve as a model for other states looking to consolidate and streamline their IT operations while improving security," said Michael Allison, Chief Technology Officer at State of Louisiana, Division of Administration/OTS. "We knew that we needed a virtualization solution to have agility and security across IT systems statewide. Through micro-segmentation, NSX offers a single pane of glass view into our data center that gives us insight into security and operational efficiencies, enabling us to scale quickly and securely. NSX was the piece of the puzzle that brought our vision to life in a way that didn't compromise security, agility, and most importantly, service."

With VMware NSX, OTS said it is able to extend a software-defined data center across a metro availability cluster, providing OTS with four-nines availability in an active-active topology and near-zero RTOs and RPOs for application continuity. OTS can fail over some traditional clusters between different data centers and use NSX for micro-segmentation. NSX enables OTS to automate application deployment and security, creating a more secure infrastructure across the entire environment.
With NSX, OTS said its systems can also scale on demand to handle peaks in traffic, when volumes can increase by as much as 20X, as was the case with the availability of a new Medicaid service. With NSX's built-in load balancing, OTS is able to move many of those capabilities into software, further reducing the requirements on hardware-defined approaches. OTS is responsible for often 20- to 30-year-old legacy systems and thousands of applications spanning public safety, education, healthcare, tax and many other government services. Using VMware NSX and a software-defined, cloud-centric approach, OTS is building modernized "systems of tomorrow" today that are stable, scalable and can help the organization address not only technology and service requirements, but also procurement cycles that can put IT behind the technology curve. "We're looking at automation and containers, we're looking at continuous integration, continuous deployment, and the whole continuum of modern application approaches," Allison said. "Being able to have a single platform to bring them on and tie them together, without risking security policies or having to handle two separate environments is a game changer for OTS. NSX is the piece that allows us to achieve all of this. I honestly don't understand how people can go into a hyper-converged environment or build an enterprise cloud solution without a solution like NSX."
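
For readers unfamiliar with the term, here is a rough Python illustration of the micro-segmentation idea referenced above: security policy follows the workload rather than the network segment. This is a conceptual sketch only, not how NSX's distributed firewall is actually configured:

```python
# Conceptual sketch of micro-segmentation: per-workload rules with a
# default deny, instead of one perimeter firewall. Purely illustrative;
# NSX expresses such rules through its own distributed firewall, not
# through this hypothetical structure.

RULES = [
    # (source tier, destination tier, port, action)
    ("web", "app", 8443, "ALLOW"),
    ("app", "db",  5432, "ALLOW"),
    ("*",   "db",  "*",  "DENY"),    # nothing else may reach the database
    ("*",   "*",   "*",  "DENY"),    # default deny between all workloads
]

def evaluate(src_tier: str, dst_tier: str, port) -> str:
    """First-match evaluation, the way distributed firewall rule sets are
    typically ordered."""
    for rule_src, rule_dst, rule_port, action in RULES:
        if (rule_src in (src_tier, "*")
                and rule_dst in (dst_tier, "*")
                and rule_port in (port, "*")):
            return action
    return "DENY"

print(evaluate("web", "app", 8443))  # ALLOW
print(evaluate("web", "db", 5432))   # DENY: web can't bypass the app tier
```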

http://m.marketwired.com/press-release/state-of-louisiana-builds-technology-systems-of-tomorrow-with-vmware-nsx-nyse-vmw-2227880.htm

HTBASE to Serve as Key Provider of Composable Infrastructure Software Solutions for Dell EMC Customers

SUNNYVALE, CA-July 27, 2017- (Newswire.com) #HTBASE today announced a new partnership with #DellEMC to provide its #Maestro composable infrastructure platform to Dell EMC enterprise customers of all sizes. Bundled with new or existing Dell EMC #PowerEdge servers and storage hardware, HTBASE Maestro turns compute, storage and network into a fluid infrastructure that is hardware-agnostic. The platform empowers IT teams to quickly provision resources and deploy self-designed solutions to create or add onto any infrastructure, composing it into one resource pool that can scale up or down as business requirements dictate. Maestro is a software-defined infrastructure that offers flexible, independent scaling, as memory and/or storage capacity can be added without deploying new hardware. A unique component of HTBASE Maestro is its OneCloud hypervisor, which enables true multi-cloud integration by supporting live, seamless movement of workloads between private and public clouds to the lowest-cost and best-performing cloud. This results in significantly simplified and quicker IT operations, and cost reductions.

http://m.digitaljournal.com/pr/3428942

Look Who’s MeriTalking: Dell EMC Federal’s Cameron Chehreh on the IoT, Cloud, and What’s Next

#MeriTalk sat down with Cameron Chehreh, Chief Technology Officer at #DellEMC Federal, to discuss the Internet of Things ( #IoT ), cloud, and what’s next for Federal agencies. MeriTalk: The number of Internet-driven devices continues to grow every day. How can cloud computing help manage the growing IoT landscape? Cameron Chehreh: We are an “always on, always connected” society–whether it’s by phone, laptop, smart watch, or tablet–consumers are used to instant access to information. The Federal workforce brings these expectations to their jobs. And, as we have more connected devices, we have more information–Federal IT leaders are having a hard time keeping up with the surplus of data created by the IoT. Cloud provides Federal agencies with the improved flexibility, scalability, and storage capacity they need to successfully manage the mountains of data being collected each day. By providing this foundation, cloud enables agencies to focus their resources on innovation–fueling successful digital transformation.

https://www.meritalk.com/look-whos-meritalking-dell-emc-federals-cameron-chehreh-on-the-iot-cloud-and-whats-next/

Why HPE Might Want to Move on Gigamon

#Gigamon (GIMO) could sooner or later sell itself to the highest bidder. According to a Reuters report, Gigamon hired Goldman Sachs (GS) to help it explore a sale. Shortly after the report, Needham analysts raised their price target on Gigamon to $50 from $34, saying that the odds that Gigamon could be acquired had significantly increased. Needham went further, listing possible buyers of Gigamon such as #Cisco (CSCO), Hewlett Packard Enterprise ( #HPE ), #PaloAlto Networks (PANW), and #Fortinet (FTNT).

One thing the companies mentioned in relation to a Gigamon acquisition have in common is an interest in cybersecurity. They are betting on security to add appeal to their core products, and thus could benefit from Gigamon’s network monitoring and security technology.

Strategic fit for HP Enterprise

Of these potential buyers of Gigamon, HP Enterprise could be the hungriest. The company is not only busy hunting for strategic tools that could boost demand for its core server and storage products, but its CEO Meg Whitman has also recently talked about acquisitions becoming a bigger part of its strategy.

Therefore, #Gigamon, with its traffic monitoring and security software, could be a strategic fit for #HPE as it seeks to transform its fortunes.

Need for tuck-in acquisitions after pruning season

#HPE has eliminated several units since it separated from its computer and printer sibling HP Inc. (HPQ). The pruning hasn’t just unlocked funds for reinvestment, but it has also created room for tuck-in acquisitions to help push back against rivals such as #DellTechnologies (DVMT) and #Amazon (AMZN).

http://marketrealist.com/2017/06/analyzing-american-electric-powers-dividend-yield/

MapR Technologies Launches Top-Tier Channel Program To Recruit, Enable Systems Integrators

#Bigdata platform developer #MapR Technologies is expanding its channel partner initiatives with a new Elite Premier program for the company's global and regional systems integrator partners. The main focus of the new Elite Premier program, part of the company's larger MapR Converge Partners program, is to recruit and enable systems integrators to work with the vendor's MapR Converged Data Platform, said Geneva Lake, vice president of worldwide alliances. "Systems integrators are becoming a more critical part of MapR's business," Lake said in an interview with CRN, noting that the company began a push to recruit systems integrators last year. "We really need to make sure they understand the product and how to implement it. It's very important for us to have that high-quality, high-performing group of systems integrators."

The #MapR Converged Data Platform integrates #Hadoop, #Spark and #Apache Drill with a real-time database, event streaming and other capabilities for running big data applications. The company, based in San Jose, Calif., competes with #Cloudera and #Hortonworks in the Hadoop/big data platform arena. MapR currently works with one global systems integrator and is developing enablement plans with two more. Lake said MapR would probably cap the number of global systems integrators in the program at three. The vendor is working with about 10 additional boutique, regional and super-regional systems integrators and is looking to recruit more, especially in geographies and vertical industries where the company doesn't have a presence.

Some systems integrators focus on implementation work while others provide additional services or have consulting practices in areas such as business analytics and business process management. Enabling them with the proper training and resources is key, Lake said, because projects incorporating the MapR Converged Data Platform tend to be complex. Through the Elite Premier program, systems integrators will be offered comprehensive training on MapR's software, including a boot camp training session on installing and deploying the product.

One systems integrator MapR is already working with is RCG Global Services, based in Edison, N.J. The company incorporates MapR's software into its data analytics practice, which includes big data strategic planning, systems development and implementation services, said John Bender, senior vice president and general manager. The company develops use cases and analytical applications for system maintenance, health care and other tasks. The new Elite Premier program signals a "greater level of commitment" by MapR to its systems integration partners and will "further the strategic alignment between our two companies," Bender said. The program will provide RCG with access to more training and certification opportunities "to keep our skills up to date," he added, and help the company stay on top of new software releases. And he expects the program to provide more joint sales and marketing investments, although he said the two already hold joint marketing events for prospects.

Lake noted that while MapR Technologies does have a professional services business, it only accounts for about 15 percent of the company's revenue and it has no plans to expand that business, hence the need for experienced systems integrators. MapR has a range of consulting/integration, ISV, OEM and distribution partners. More than 50 percent of the vendor's sales involve a partner in some way, according to Lake.
Systems integrators play an influencer role in about 20 percent of MapR's sales, a number that Lake said has been increasing.

http://m.crn.com/news/channel-programs/300089346/mapr-technologies-launches-top-tier-channel-program-to-recruit-enable-systems-integrators.htm

Big Switch Networks Raises $30 Million To Scale And Expand Channel

Software-defined networking ( #SDN ) specialist Big Switch Networks has raised $30.7 million in funding with plans to scale and expand its channel partners and strategic alliances. "The investment for this round is going to be concentrated and focused on how we scale the company from a go-to-market channel point-of-view," said Big Switch CEO Douglas Murray, in an interview with CRN. The privately-held Santa Clara, Calif.-based vendor announced on Wednesday it secured $30.7 million to support further sales, product and geographical expansion with financing from Dell Technologies Capital, Silverlake Waterman, MSD Capital, Intel Capital and an unnamed Tier 1 service provider, to name a few. The company has now raised more than $120 million to date.

"What we're really excited about is in the past 18 months, we've been growing incredibly fast and as a result, we're now at a point where this round is primarily based in looking at what we do from the investment perspective to really continue to build out our sales channel and strategic alliance investments," said Murray. "As we continue to grow, we do very little direct, the push of our company is to do anything we can in partnership with the channel." Murray said business has doubled over the past 12 months thanks to greater adoption of open, white-box networking where customers can buy the hardware and software separately.

Chris Becerra, president and CEO of Terrapin Systems, a San Jose, Calif.-based solution provider and Big Switch partner, said the open networking approach is receiving "more and more traction each and every day." "We have multiple customers that are looking to deploy the technology in 2018," said Becerra. "Although we haven’t seen huge revenue numbers with these types of solutions yet, I expect that to change by the end of the year." Becerra said he has high hopes that the $30 million funding will enable Big Switch to expand its channel organization "so they better drive regional partner campaigns and regional partner revenue."

In June, Big Switch hired Claudio Perugini as its new vice president of worldwide channels. Perugini was previously executive vice president of worldwide sales at LightCyber, which was acquired by Palo Alto Networks in February. He also spent nearly five years at competitor Gigamon as vice president of sales in the Americas. "[Perugini] has been hard at work putting together a comprehensive strategy for our channel play and looking at what we can do to effectively scale and get much broader in the ecosystem – both on the strategic side with companies like #Dell, but very importantly in terms of what we do with distribution and VARs," said Murray. Murray said his company has been working on revamping partner programs around training, certifications and MDF. "We're working through the program and program timeline so we'll be in the position to launch that later this year," he said.

For the first time, #BigSwitch made it onto Gartner's Magic Quadrant for Data Center Networking in 2017, dubbed a 'visionary' in the space. Big Switch's flagship products include its network packet broker, Big Monitoring Fabric, and its data center switching fabric, Big Cloud Fabric. The vendor's software runs on top of hardware from vendors like Dell, Accton and Hewlett Packard Enterprise, to name a few.
Murray said Big Switch will be "incredibly focused" on increasing its US market presence over the next year, and then look at channel expansion abroad in geographies such as Japan, Australia and the U.K.

http://m.crn.com/news/networking/300089301/crn-exclusive-big-switch-networks-raises-30-million-to-scale-and-expand-channel.htm

Huawei sees annual smartphone shipment growth slowing amid focus on profit

HONG KONG (Reuters) - China's #Huawei Technologies [HWT.UL] forecast smartphone shipments of 140 million-150 million this year, a sharp slowdown in annual growth as the world's third-largest smartphone maker chases profits instead of volumes by selling more high-end phones. Shenzhen-based Huawei's profit growth has been under pressure with its low margin smartphone business facing strong competition at home and abroad, forcing Huawei earlier this year to de-emphasize revenue growth.

Richard Yu, CEO of Huawei's consumer business group, said on Thursday the company expected to ship 140 million-150 million smartphones for the full year compared to 73 million achieved in the first half.

The full-year forecast would compare to the 139 million smartphones Huawei shipped last year, which was up 29 percent compared to the year before.

And though its smartphone shipments surged by more than a fifth in the first half, Yu emphasized that Huawei was not aiming for volumes.

"We want to focus on the high-end and mid-range market. We give up the ultra low-end," he said, adding the average selling price of Huawei phones rose 28 percent in the first half.

Yu acknowledged intensifying competition at the high-end of smartphones from Apple Inc's planned release of the iPhone 8 but said Huawei was well equipped to hold its ground.

"We have the Mate 10 series that can compete head to head against the iPhone 8," Yu told reporters, referring to Huawei's next premium flagship that is expected to be released in the fall. According to Yu, it will be superior to Apple's anticipated smash-hit product with "much longer" battery life, "much faster" charging speed and better camera and photo-shooting algorithm.

Huawei plans to unveil this autumn an "AI processor" as it seeks to move from making smartphones to "intelligent phones", Yu said.

"Huawei will launch our own AI processor. We could be the first phone maker in the world to introduce AI processor for our products," he said, but declined to give more details or comment on whether that will be used for the Mate 10.

The company, which is also one of the world's largest telecom equipment makers, posted on Thursday a 15 percent rise in first-half group revenue to 283.1 billion yuan ($42.03 billion), compared with a 40 percent rise last year. That represented the slowest first-half revenue growth for Huawei since 2013.

Its operating margin fell to 11 percent from 12 percent a year ago. Huawei does not release half-year profits.

Yu forecast the consumer business group's full-year revenue at over $33 billion, compared to about $26.7 billion last year.

https://www.reuters.com/article/us-huawei-results-idUSKBN1AC0YX

Microsoft’s new Azure Container Instances make using containers fast and easy

Barely a day passes without some news about containers and that speaks to how quickly this technology is being adopted by developers and the platforms and startups that serve them. Today it’s #Microsoft ’s turn to launch a new container service for its #Azure cloud computing platform: #AzureContainerInstances (ACI). The company also today announced that it is joining the Cloud Native Computing Foundation as a platinum member (that’s a $370,000/year commitment). While we’ve seen our fair share of container-centric services from the major cloud vendors, ACI is different from the likes of Azure’s existing Container Service, #AWS ’s #EC2 #Container Service and the #GoogleContainerEngine. ACI, which is now in preview, is all about simplicity. It lets you spin up a single container with your choice of memory and CPU cores in a few seconds and usage is billed by the second. As Microsoft stresses, these containers are first-class objects on Azure and get all of the same role-based access controls, billing tags and other features that you’d expect on the platform. These containers are isolated from other customers using “proven virtualization technology,” Microsoft says. What you don’t get, however, is the hassle of having to manage VMs or learning about container orchestration. If you do want to use orchestration, though, you can do that with the help of Microsoft’s new open source Kubernetes connector. This allows Kubernetes clusters to deploy containers directly to ACI and lets developers mix and match VMs and ACI as needed.
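
Since usage is billed by the second, the economics of short-lived containers are easy to model. A minimal sketch, with placeholder rates rather than Microsoft's published ACI prices:

```python
# Sketch: what per-second container billing implies for short-lived work.
# The rates below are placeholders, NOT Microsoft's published ACI pricing;
# the point is the billing model (per second, per core, per GB), not the
# specific numbers.

HYPOTHETICAL_PRICE_PER_CORE_SECOND = 0.0000125   # placeholder rate, USD
HYPOTHETICAL_PRICE_PER_GB_SECOND = 0.0000125     # placeholder rate, USD

def aci_style_cost(seconds: int, cores: int, memory_gb: float) -> float:
    """Cost of one container instance billed by the second."""
    return seconds * (cores * HYPOTHETICAL_PRICE_PER_CORE_SECOND
                      + memory_gb * HYPOTHETICAL_PRICE_PER_GB_SECOND)

# A 90-second batch job on 1 core / 1.5 GB costs fractions of a cent,
# which is the economics that make "spin up, run, tear down" attractive.
print(f"${aci_style_cost(90, 1, 1.5):.6f}")
```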

https://techcrunch.com/2017/07/26/microsofts-new-azure-container-instances-make-using-containers-fast-and-easy/

Lenovo CEO's 'Lighthearted' Vow To Quit If Goals Aren't Met Belies Daunting Datacenter, PC Challenges

Lenovo Chairman and CEO Yang Yuanqing made a "lighthearted comment" when he said he'd step down if the company failed to hit sales targets, but the Beijing, China-based firm's competitive position is no laughing matter. Known to Lenovo employees as YY, Yang this week reportedly told Chinese media in Shanghai at the company's annual Tech World conference that he'd leave his position if the company doesn't hit the aggressive $12 billion annual online PC revenue goal he's set within three years.

Lighthearted as Yang's comment may have been, his online PC sales goal represents nearly half of the company's current PC revenue, and it'll have to reach for that goal while facing significant market headwinds.

Michael Goldstein, CEO of LAN Infotech, a Ft. Lauderdale, Fla., solution provider that works with Lenovo, said Lenovo has products that compete strongly in the IT market, but those products are up against stiff competition and tend to be expensive.

"I do see good momentum, and we just got reassigned a new rep, and from the past, the reps have been pretty aggressive in what we need to sell the product, and they have a great set of products," Goldstein said. "I have #Microsoft, #Dell and #Lenovo, and on mobile, I think Lenovo is ahead of the others. The majority of our mobile customers love the #X1 line. If you're a road warrior, the X1 is amazing, but it's not the least expensive solution. The Tiny Workstation is great, too. You can slap it on the back of a monitor, or tuck it under a desk. On the other side of it, it's a very competitive space. Those things distinguish them, but everything in between becomes about whether you like vanilla or chocolate."

Worldwide PC and data center hardware markets are in decline. Lenovo, which became the largest PC manufacturer in the world under Yang's leadership, registered revenue declines in all of its major businesses during its 2017 fiscal year ended March 31.

Lenovo PC sales dipped 2.3 percent year-over-year, while sales in its mobile and data center businesses fell 5.4 percent and 10.6 percent, respectively. In the second quarter of the calendar year, Lenovo's PC shipments fell nearly 6 percent year-over-year as it ceded the worldwide lead to Hewlett-Packard Inc., according to research firm IDC. In the first quarter – the most recent period available from IDC – Lenovo's server sales saw a year-over-year decline of 16.5 percent while shipments fell more than 27 percent.

In addition to online PC sales, Lenovo is making a shift toward the burgeoning market for artificial intelligence [AI] technology. AI is the focus of a $1 billion investment effort by Lenovo, and last week the company introduced several AI concept products, including a virtual assistant, a VR headset, a "SmartVest" for monitoring heart function, an AI-based services platform and a smart speaker.

Yang, in addition to making #Lenovo the world's largest PC manufacturer through the acquisition of #IBM 's PC business, also kicked open the door of the data center market with the $2.3 billion acquisition of IBM's x86 server business about two years ago, but the company has had trouble formulating an effective strategy for that business. In June, the company also introduced a new data center line, including several new server, storage and networking products, aimed at turning that business around.

http://m.crn.com/news/data-center/300089461/lenovo-ceos-lighthearted-vow-to-quit-if-goals-arent-met-belies-daunting-datacenter-pc-challenges.htm

Oak Ridge National Laboratory to use D-Wave cloud service

Department of Energy science and energy laboratory, Oak Ridge National Laboratory (ORNL), has signed a deal with #DWave for cloud access to its #2000Q system. #ORNL plans to use the #quantumcomputer as part of a larger research initiative into advancing #hybridcomputing applications and #exascale computing.

[Image: D-Wave quantum processing unit. Source: D-Wave]

Wave hello

“ORNL researchers are investigating the use of quantum, neuromorphic, and other new computing architectures with the potential to accelerate applications and programs important to the Department of Energy,” said Dr Jeff Nichols, associate laboratory director of computing and computational sciences at ORNL. “This agreement fits squarely within our objective of providing distinctive equipment and unique facilities to our researchers to solve some of the nation’s most compelling computing challenges. This program is also a natural extension of the lab’s leadership in high-performance computing, with the next step being to accelerate the nation’s exascale program.”

D-Wave computers have been met with widespread skepticism over whether they are indeed true quantum computers. John Morton, a professor of quantum computing at University College London, told Bloomberg that, with a version of the D-Wave system he saw, its qubits only remained in a quantum state for about 10 nanoseconds, not enough time to solve a very complex equation. Entanglement, where qubits function collectively, also appeared to be limited. But its systems have found favor despite the criticism and limitations; early customers include Google, NASA and Lockheed Martin.

Earlier this year, when it launched its $15 million 2000Q system, D-Wave announced its first customer for the 2,000-qubit machine: the secretive cyber security firm Temporal Defense Systems. But while TDS has close ties to the US government, directly selling to the US government has long been an ambition of D-Wave. In an effort to enter that market, the company last year launched the subsidiary D-Wave Government Inc, filled with senior appointees from the NSA, the Navy, the Air Force and the CIA. In March 2017, it then partnered with Virginia Tech to create a permanent quantum computing center housing a D-Wave system at the Hume Center for National Security and Technology.

Speaking last year, company president Robert Ewald told Defense News: “The US government is the largest procurer of computers on the planet. They also are the ones who typically apply technologies before others do for programmatic needs, whether it be the Pentagon programs or Department of Energy or the intelligence community.” But selling to the Department of Energy may become more troublesome in the future, with large cuts expected should the Trump administration persist with its planned economic strategy. D-Wave already sold one older system to the Los Alamos National Laboratory, but this time sold a presumably more affordable cloud service.

The two groups will also work together to map applications to the D-Wave architecture. “Advancing the problem-solving capabilities of quantum computing takes dedicated collaboration with leading scientists and industry experts,” Ewald said. “Our work with ORNL’s exceptional community of researchers and scientists will help us understand the potential of new hybrid computing architectures, and hopefully lead to faster and better solutions for critical and complex problems.” ORNL and D-Wave aim to share some of these results with the scientific user community.
Funds for the project appear to come from the DOE’s Exascale Computing Project, a $350 million per year scheme that recently saw business awarded to AMD, Cray, Nvidia and more.

The Google in the room

While D-Wave has pushed forward, earning contracts along the way, it is far from alone in the quantum computing space: IBM offers a prototype 17-qubit quantum processor cloud service, Microsoft is working on a topological quantum computer, and Alibaba is partnering with the Chinese government, to name but a few. The one to watch, many agree, is Google. Reports surfaced last year that Google was aiming for ‘quantum supremacy’ by the end of 2017, that is, to be on par with, or better than, existing high performance computers. This month, Bloomberg reportedly saw internal Google slides that reiterated this claim. They also revealed a new lab it calls an “Embryonic quantum data center,” and it is thought that the quantum computer will be offered on Google’s Cloud Platform. Jonathan DuBois, a scientist at Lawrence Livermore National Laboratory, told the publication that Google staff have been clear about plans to open up quantum service on cloud, and have pledged that government and academic researchers will get free access.
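
For context on what "mapping applications to the D-Wave architecture" involves: D-Wave's annealers search for low-energy states of QUBO problems (minimize x^T Q x over binary variables x), so applications must first be recast in that form. A toy classical brute-force version, using a hypothetical 3-variable instance:

```python
# Sketch: the problem class a quantum annealer works on. A QUBO assigns an
# energy to every binary vector; the hardware searches this landscape with
# quantum annealing, while this toy version simply enumerates it.

from itertools import product

# Hypothetical 3-variable QUBO matrix (upper triangular entries only).
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,
     (0, 1): 2.0, (1, 2): 2.0}

def energy(x, Q):
    """E(x) = sum over (i, j) of Q[i, j] * x[i] * x[j]."""
    return sum(q * x[i] * x[j] for (i, j), q in Q.items())

best = min(product((0, 1), repeat=3), key=lambda x: energy(x, Q))
print(best, energy(best, Q))  # (1, 0, 1) with energy -2.0
```
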
http://www.datacenterdynamics.com/content-tracks/servers-storage/oak-ridge-national-laboratory-to-use-d-wave-cloud-service/98688.article

Meg Whitman is stepping down from HP's board

#HewlettPackardEnterprise CEO @MegWhitman is stepping down from #HP 's board, the company announced on Wednesday. HP split into two companies in 2015: HP and #HPE. Whitman will remain the CEO of HPE. It is the board of the other company, HP, which makes PCs and printers, that she is leaving. Whitman was the chairman of HP after the split. That seat, and the chairman title, will go to Chip Bergh, Levi Strauss & Co. President and CEO. Dion Weisler, who is HP's CEO, will continue in his role. Whitman's move comes as rumors circulate that she's in the running for the Uber CEO job. But the reality may be less dramatic. Since HP and HPE split in 2015, the two companies have continued to sever ties, freeing both businesses up to pursue their own deals. Whitman stepped down from SurveyMonkey's board in May, replaced by Serena Williams. The HPE CEO remains on HPE's board. "HP has now been a standalone company for almost two years and I'm proud to have helped get the company off to a great start," Whitman said in a company release. "The HP board is among the most diverse in our industry, and the company has established a proven track record of successfully delivering on its commitments. I'm very confident in Dion's continued vision and leadership as well as Chip's ability to help guide the organization."

https://www.google.com/amp/s/amp.businessinsider.com/hpe-ceo-meg-whitman-steps-down-from-hewlett-packard-board-2017-7#ampshare=http://www.businessinsider.com/hpe-ceo-meg-whitman-steps-down-from-hewlett-packard-board-2017-7

Wednesday, July 26, 2017

Dell EMC PowerEdge R740xd review

#Intel 's Purley platform promises the biggest shakeup in the server world since the ' #Nehalem ' launch over eight years ago, and in this exclusive review we look at the first production server to show off Intel's new Skylake-SP Xeon Scalable Processor family. On review we have #DellEMC 's #PowerEdge #R740xd, which ushers in its #14th Generation PowerEdge series. It showcases a completely new architecture with even more storage options, enhanced system security and a radical revamp in the server management department.

http://www.itpro.co.uk/server/29110/dell-emc-poweredge-r740xd-review

OpenText to acquire Guidance Software for $222 million

#OpenText said Wednesday that it plans to acquire #GuidanceSoftware, the maker of forensic security software, for about $222 million. Guidance Software is known for its digital discovery and endpoint information security products. Waterloo, Ontario-based OpenText plans to use the deal to bolster its overall infosec portfolio, with the primary gain centering on Guidance's digital investigation and forensic products. Under the terms of the agreement, Guidance Software will become a wholly owned subsidiary of OpenText. The purchase is expected to close in the third quarter of this year.

http://www.zdnet.com/article/opentext-to-acquire-guidance-software-for-222-million/

Solarflare Opens Market for 10 Million Firewalls Per Year with World’s First NIC-Based Hardware Server Firewall

LAS VEGAS--(BUSINESS WIRE)-- #Solarflare, a pioneer in the development of neural-class networks, today introduced a NIC-based platform for security “inside” the data center, where 90% of all IP traffic exists. Proven by hyperscale cloud service providers with millions of servers, Solarflare is transforming the enterprise server security market, currently based on appliances, FPGAs and NPUs costing thousands of dollars, into a 10-million-server-per-year market leveraging security that is “just there” on every standard NIC costing less than $300. Available September 1, 2017, #SolarSecure ® can be enabled concurrently with Ethernet NIC, acceleration, monitoring and capture solutions in a Solarflare #XtremeScale ™ Smart NIC fabric.

The foundation of SolarSecure is innovative Solarflare chip hardware with the power to inspect packets at line speed, which cannot be breached with root access to a software application or operating platform. Deployed as a fully featured firewall platform, SolarSecure provides integrated packet surveillance, filtering, and server cloaking. The high-speed chip hardware solution is tamper-resistant, scales out with every server, and allows micro-segmentation down to virtual machines or microservice workloads for fine-grain security policy management. SolarSecure is also a platform for machine learning. The software-defined SolarSecure NIC hardware and applications maintain surveillance of every packet, which allows SolarSecure to provide network flow analytics with application granularity, and to feed data lakes with the massive volumes of the right traffic data needed for effective big data security analytics.

Trading Technologies and CDL are two managed service providers integrating SolarSecure hardware technology into their service platforms. Trading Technologies makes the tools which the world’s premier traders trust to maintain their edge, and their TT® Platform is a next-generation trading framework that allows trading anywhere, on virtually any device. “The whole challenge in providing a low-latency trading platform is to make the infrastructure as thin as possible,” said Dan Feldman, vice president of systems and network engineering at Trading Technologies. “Adding firewall appliances adds hops and latency. By placing a Smart NIC with SolarSecure in line with the transactions, we eliminate hops and deliver a similar security narrative.”

CDL provides a managed security solution scaled for the enterprise, providing server cloaking and packet-capture-everywhere solutions for compliance, risk mitigation, security and forensic preservation by securing packets within data lakes in the cloud. “Our business is modernizing data security for scale-out environments,” said Mark Schreiber, General Manager at CDL. “With software defined network processing on every server, SolarSecure provides both the granularity and scalability we need in a security platform to cloak data lakes from prying eyes.”

SolarSecure: A platform for security micro-segmentation and machine learning

SolarSecure represents a trend which will soon be a best practice for enterprise IT: micro-segmentation of security policies down to a VM or container microservice workload, combined with NIC-based security services which can be delivered on a per-VM or per-microservice basis.
SolarSecure solutions are made possible by the XtremePacket™ Engine inside every plug-and-play Solarflare XtremeScale™ 8000 Series NIC, which can inspect every packet at line speed with no loss of performance. Solarflare has also published a rich set of traffic engineering APIs for Solarflare and third-party security application developers. The portfolio of SolarSecure security services is shipped with every XtremeScale™ 8000 Series Solarflare NIC. Using the SolarSecure Manager, security policies can be micro-segmented, packet surveillance can be initiated, and firewalls can be configured for each local TCP/IP address, including learn and enforce modes, white listing or black listing, alerts, and cloaking a server by dropping packets.

With extensive information about traffic flows provided by the XtremePacket Engine, Solarflare provides a software-defined machine learning platform for Solarflare and third-party security analytics applications. SolarSecure offers the unique ability to supply a security data lake with a rich set of data about 100% of the packet traffic in a data center using a low-power, low-cost, standard form-factor NIC. Security data scientists have access to the data and traffic engineering tools through open APIs. According to Ahmet Houssein, Vice President of Marketing at Solarflare, “A new class of standard NICs which can inspect packets at line speed is making it possible for security to follow in the footsteps of networking, storage and databases--all of which migrated to distributed architectures that scale with each server, and all of which will incorporate machine learning for self-driving capabilities in the future.”

Hyperscale Model for Granular, Scalable Security Inside Every Server

With millions of servers inside their data centers, hyperscale cloud service providers need a security solution they can trust, that is affordable, and that can scale. The answer for major cloud service providers like Google is to instrument thousands of servers with their own security chips delivering line-speed packet inspection, chips that cannot be hacked with root access to the server OS and that allow network engineers to define application-specific firewall policies down to a single server. Google is publishing information about how it is instrumenting servers with security chips to offload a variety of functions, such as performance acceleration and security, from the servers themselves. In the Google Infrastructure Security Design Overview, the company says that both the server boards and the networking equipment are custom-designed by Google. Google also discloses that it designs custom chips, including a hardware security chip that is currently being deployed on both servers and peripherals. These chips allow Google to securely identify and authenticate legitimate Google devices at the hardware level.

Solarflare XtremeScale Architecture, Smart NIC Fabric and Software Defined NICs

The Smart NIC hardware and software deployed by hyperscale cloud service providers is proprietary. Solarflare has developed an XtremeScale Smart NIC platform which provides server-level granularity and scalability and is now commercially available. The platform consists of:

1) XtremeScale Architecture: all new Solarflare chips, adapters and software are designed under an architecture built for scalable and granular Ethernet traffic engineering.

2) XtremeScale Software Defined NICs: these join FPGAs and NPUs in the Smart NIC class of server adapters. The powerful, software-defined products are the industry’s first Smart NICs with FPGA-like capabilities at the cost of a NIC.

3) XtremeScale Smart NIC Fabric: once XtremeScale NICs are installed, XtremeScale Fabric applications are available to provision performance, security and visibility applications tailored for specific workloads running on physical servers, VMs and container microservices.
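
A conceptual Python sketch of the per-server firewall behaviors described above: learn and enforce modes, white listing, and cloaking by silently dropping packets. This illustrates the model only; the real SolarSecure policies run in NIC hardware and are managed through Solarflare's own tooling:

```python
# Conceptual sketch of a per-server NIC firewall with learn/enforce modes.
# Purely illustrative; not Solarflare's API.

class NicFirewall:
    def __init__(self, mode: str = "learn"):
        self.mode = mode          # "learn" or "enforce"
        self.allowed = set()      # learned/whitelisted (src_ip, dst_port) flows

    def handle(self, src_ip: str, dst_port: int) -> str:
        flow = (src_ip, dst_port)
        if self.mode == "learn":
            self.allowed.add(flow)    # observe traffic, build the white list
            return "PASS"
        if flow in self.allowed:
            return "PASS"
        return "DROP"                 # cloaking: no reject, just silence

fw = NicFirewall()
fw.handle("10.0.0.5", 443)           # flow learned during the learn window
fw.mode = "enforce"
print(fw.handle("10.0.0.5", 443))    # PASS
print(fw.handle("203.0.113.9", 22))  # DROP: server appears dark to a scanner
```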

http://www.businesswire.com/news/home/20170725006583/en/Solarflare-Opens-Market-10-Million-Firewalls-Year

Cavium Programmable Switch Family Reaches for Hybrid Cloud Market

#Cavium expanded its #Xpliant programmable Ethernet switch family to include new products targeting the hybrid cloud environment. Cavium said the XP70 series can scale from 1 Tb/s to 1.8 Tb/s, and is targeted as a connectivity upgrade for enterprises, data centers, and carrier networks. The XP60 series can scale from 280 Gb/s to 720 Gb/s, and is designed for the campus, embedded, and carrier access markets.

The new products slide in under the XP80 line, which can scale up to 3.2 Tb/s.

John Harrsen, director for Cavium’s marketing switch platform group, said the new products fit into a number of market segments, including hybrid cloud, radio access, and the gigabit passive optical network (GPON) market. The products are targeted at deployments looking to scale from 1-gigabit Ethernet to 10-GbE support for the XP60, and up to 25-GbE support for the XP70.

The new products leverage the same programmable engine as the XP80. Cavium said this supports software migration for those currently using the XP80 platform and access to the Xpliant software tools.

Cavium explained that its platform enables “true” software-defined networking (SDN) support that it claims is the first of its kind at the supported speeds. The company said the support also applies to the flexible routing and header formatting required for scaling network functions virtualization (NFV) solutions.

“What is relevant to us is that companies are starting to use SDN control framework for PON aggregation,” Harrsen said. “We are seeing environments where services are being provisioned using an SDN controller. This is a network management decision that was made and basically changes the paradigm on how the chips work. We are able to differentiate ourselves in the GPON aggregation market to support the control plane as an SDN controllable plane.”

Harrsen noted the new products also take into account the growing virtualization of the server environment, with greater use of hybrid cloud and containers.

“It’s interesting to watch how servers are being virtualized,” Harrsen said. “The new things we are seeing are the use of containers. A basic server can support more containers than [virtual machines]. This allows for better utilization of that server. We have enhanced our products to support this increased utilization.”

Programmable switch platforms continue to gain steam in the market.

#Barefoot Networks recently scored deals with China-based #Alibaba Group, #Baidu, and #Tencent Holding to deploy its #Tofino switches running the open source P4 language and targeted at programmable forwarding plane technology. Those deals followed up on a domestic win with #ATT, which signed on to become the first telecommunications provider to deploy the Barefoot switches.

#Broadcom last month said it was adding programmability to its chip-based switches designed for data centers, enterprises, and service provider networks. The market giant said the programmable switch series includes support for protocol parsing, processing, and virtualization services.

https://www.sdxcentral.com/articles/news/cavium-programmable-switch-family-reaches-for-hybrid-cloud-market/2017/07/

White Graphene: It is white but not graphene

White #graphene is the trivial name given to a material called hexagonal boron nitride (hBN). It occurs in several forms; the hexagonal form is the one of primary interest to us. Think of a basic hexagonal unit that repeats to create a flat sheet: just like graphene, it is another two-dimensional (2D) material. There are some differences though, so read on to find out more.

Structure

In graphene there is a carbon atom at each of the six corners; in hBN, three boron atoms and three nitrogen atoms alternate around the six corners. This hexagonal structure is made possible by a special type of atomic bond called sp2 hybridisation. This sp2 bonding is very stable and gives the flat sheet its strength as well as its flexibility; hBN is nearly as strong as graphene. These bonds also allow the transmission of heat through the material, and hBN is a very good conductor of heat as a result.

Properties

As well as conducting heat, graphene and hBN also have very high melting points. Graphene has the higher melting point at over 4000 degrees centigrade, and hBN is not far behind at 2800 degrees centigrade. In fact neither material actually melts; they both decompose rather than go through the liquid phase as water would do. Like its sister material graphene, hBN is a flat sheet of tightly bound atoms that are connected together so closely that they form an impenetrable barrier to gases and liquids. This means that hBN is resistant to chemical attack, making it a good candidate material for corrosion-resistant coatings.

Applications

Graphene is permeable to hydrogen ions, and it turns out that hBN is too, as recent research shows. Being permeable to ions is a rather useful property for the electric battery business, so it should come as no surprise that researchers have found a niche application for white graphene electrolytes in batteries where high temperatures would decompose lesser alternatives.

https://investorintel.com/sectors/technology/technology-intel/white-graphene-white-not-graphene/

IonQ Raises $20M Series B Round Led By NEA, GV To Advance Quantum Computing For Commercial Applications

COLLEGE PARK, Md., July 26, 2017 /PRNewswire/ -- #IonQ, an early-stage company developing #quantumcomputing for commercial applications, today announced completion of a $20 million series B round led by New Enterprise Associates (NEA) and GV (formerly #GoogleVentures), with participation from new strategic investors. Building on the pioneering work of world-renowned experts at the University of Maryland and Duke University, IonQ plans to bring general-purpose quantum computers to market by late 2018. "Efforts to harness quantum mechanics to accelerate computation have gained tremendous momentum in recent years, fueled by significant scientific progress and growing interest—and investment—by some of the world's largest governments, organizations and technology companies," said David Moehring, PhD, CEO of IonQ. "While there is much yet to be discovered about specific applications, there is a massive market clamoring for practical #quantumcomputers. Building on the groundbreaking work of Chris Monroe and Jungsang Kim, IonQ's mission is to make reliable, scalable quantum computing a reality." IonQ's team comprises some of the world's foremost experts in the field of quantum computing. Co-founders Chris Monroe of University of Maryland and Jungsang Kim of Duke University are longtime collaborators, well-known for their pioneering work in ion-trap experiments. Ion trapping is the hallmark of IonQ's approach and a key differentiator from other efforts underway. Moehring, who joined as CEO in 2016, was most recently at U.S. Intelligence Advanced Research Projects Activity (IARPA), charged with oversight of government-sponsored quantum computing initiatives. "There is simply no team in the world with more expertise in trapped ion technology, or with greater understanding of the broader quantum computing landscape," said Ron Bernal, Venture Partner at NEA and a member of IonQ's board of directors. "We are incredibly excited to partner with Dave, Chris and Jungsang as they advance the field of quantum computing and develop a platform for its commercial applications." IonQ's trapped ions represent one of multiple approaches being explored to power a quantum computer. Most large technology companies initiating quantum computing research attempt to leverage their core semiconductor technology by developing superconducting qubits. IonQ believes trapped ion technology, which uses lasers to cool and isolate individual ions, will prevail because trapped ions are identical, more stable, can be better controlled, and are therefore likely to scale with better performance and greater predictability. "Quantum computers are the next giant leap in the ability to process information, and IonQ's promising approach builds on nearly two decades of research by highly regarded physicists and engineers in the field," said Blake Byers, General Partner at GV and a member of IonQ's board of directors. "Quantum computers have the potential to solve data-intensive problems in areas like healthcare, machine learning, and cryptography, and we're excited to partner with IonQ as it accelerates development." IonQ has raised approximately $22 million in total, with an earlier $2 million invested by NEA in 2016. The company has demonstrated significant momentum over the last year, adding several new board members and strategic advisors, expanding its product development team, and attracting growing interest from both the scientific and startup communities. 
In addition to Bernal and Byers, new board members include Jeong Kim, former president of Bell Labs, and Andrew Schoen, Principal at NEA. John Preskill, the Director of the Institute for Quantum Information at Caltech, and Jagdeep Singh, founder and CEO of QuantumScape, Lightera Networks, and Infinera, are serving as advisors to the company.

About IonQ

IonQ is developing world-leading general-purpose quantum information processors. Our unique trapped ion approach combines unmatched physical performance, perfect qubit replication, optical networkability, and highly-optimized algorithms to create a quantum computer that is as scalable as it is powerful, and that will support a broad array of applications across a variety of industries. IonQ is headquartered in College Park, MD and backed by New Enterprise Associates (NEA) and GV (formerly Google Ventures). Co-founders Christopher Monroe and Jungsang Kim are leading experts in the field of trapped-ion quantum computing. Professors Monroe and Kim also maintain large academic research groups at the University of Maryland and Duke University, respectively, with combined government funding of roughly $10M per year. This pioneering work is the cornerstone of IonQ's uniquely scalable technology.

About Quantum Computing

Quantum computing represents a radical departure from classical computing and has the potential to overcome the limitations of classical machines by embracing uniquely quantum phenomena, such as superposition and entanglement, that emerge at atomic scale. Quantum computers' ability to perform incredibly complex calculations and data processing exponentially faster than classical computers opens up a world of possible commercial and scientific applications. The potential is staggering, and though we are still in the early days of quantum computing, we anticipate that the technology will transform industries and facilitate breakthroughs in areas as disparate as machine learning, molecular interaction simulation, cryptography, financial analysis, logistics optimization, and big data search.

http://markets.businessinsider.com/news/stocks/IonQ-Raises-20M-Series-B-Round-Led-By-NEA-GV-To-Advance-Quantum-Computing-For-Commercial-Applications-1002206209

HPE boss Whitman among candidates for Uber CEO job – report

#HewlettPackardEnterprise CEO @MegWhitman is reportedly high among the names shortlisted to take over as the boss of #Uber. Those oh-so-loquacious sources told Bloomberg a handful of candidates were in the running for the top job at the app-based taxi business, and that Whitman had met with the leadership team on more than one occasion in recent weeks. The report further claimed that Uber’s HR head, Liane Hornsey, had told staff a brand spanking new chief exec will be hired inside six weeks.

HPE dismissed the suggestion that Whitman will be leaving for cash-burning Uber, which turned over $6.5bn in fiscal ’17 and went through $2.8bn in cash. “As Meg has said several times before, she is fully committed to HPE and plans to stay with the company until her work is done,” a PR handler told The Reg. Uber had not responded to an email request for comment at the time of writing.

Whitman, who was made CEO at HP in 2011, knows a thing or two about trying to turn around a high-profile, failing business, not that it is a fait accompli at her current employer. HP is almost unrecognisable compared to the business she inherited, what with the break-up of the group; PCs and printers were split off first, then Enterprise Services and then the Software arm. During her time, she laid off more than 100,000 employees globally to counter sliding sales and profits, and put pressure on staff to repeatedly cut costs in areas including travel. The company has bet the farm on IoT, hybrid cloud and big data being areas that will stand HPE in good stead for the long term, yet the business has not yet managed to find its groove, continually reporting declining top and bottom lines in each quarter since it listed as a standalone business on the New York Stock Exchange in November 2015.

Industry talk has suggested that Whitman may decide to exit HPE once the software business is finally offloaded to Micro Focus after the summer, largely because that will signal the final major piece of the divestiture she began. Uber has the profile to attract Silicon Valley execs, but the new CEO (the previous boss, Travis Kalanick, quit under pressure from investors last month) will need to fight battles on multiple fronts: from regulators, privacy lawsuits and accusations Uber stole self-driving car tech, to accusations it deliberately targets government officials tasked with investigating it. Uber has hired recruitment firm Heidrick & Struggles to help the Uber board identify the right person to take over as CEO.
https://www.theregister.co.uk/2017/07/26/whitman_hpe_ceo_shortlisted_for_uber_ceo/

Datos IO Intros Rebellion Channel Program, Looks To Take Advantage Of Investment From Cisco And NetApp To Grow

#DatosIO, an application-centric data management technology startup in which #Cisco Systems and #NetApp have an equity stake, Tuesday unveiled its first formal channel program to help recruit partners interested in working with customers' next-generation cloud applications. Datos IO, founded in 2014, came to market in 2016 with a technology that manages many thousands of copies of data across multi-cloud and hybrid cloud environments.

http://m.crn.com/news/storage/300089281/datos-io-intros-rebellion-channel-program-looks-to-take-advantage-of-investment-from-cisco-and-netapp-to-grow.htm

Why Seagate's Big Earnings Miss Isn't Terrible News for Western Digital

If you're going to refrain from making big investments to up your exposure to a new technology that's gradually eating away at your core market, it's important to at least have a very strong competitive position in that core market. #Seagate Technology PLC (STX) has failed to do that, and together with the steady cannibalization of large portions of that core market (hard drives) by a newer technology (SSDs), this has left the company in a much tougher spot than archrival #WesternDigital Corp. (WDC) . On Tuesday morning, Seagate reported June quarter (fiscal fourth quarter) revenue of $2.41 billion (down 9% annually) and adjusted EPS of $0.65, easily missing consensus analyst estimates of $2.56 billion and $0.98. And on its earnings call, the company guided for September quarter revenue of $2.5 billion to $2.6 billion, below a $2.71 billion consensus and implying a 9% annual decline at the midpoint. Seagate added that while it expects revenue to grow sequentially in the seasonally strong December quarter, it's "not on a trajectory" to hit prior calendar 2017 EPS guidance of $4.50.
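The guidance math checks out with a quick back-of-envelope sketch. The guidance range comes from the article; the prior-year September-quarter revenue of roughly $2.8bn is my assumption, inferred from the quoted "9% annual decline at the midpoint" rather than stated in the piece:

# Rough check of the annual decline implied by Seagate's guidance.
guidance_low, guidance_high = 2.5, 2.6   # $bn, September-quarter guidance
midpoint = (guidance_low + guidance_high) / 2

prior_year_revenue = 2.8                 # $bn, assumed year-ago figure

decline = 1 - midpoint / prior_year_revenue
print(f"Midpoint ${midpoint:.2f}bn implies a {decline:.1%} annual decline")  # ~9%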

https://www.thestreet.com/story/14242077/1/why-seagate-s-big-earnings-miss-isn-t-terrible-news-for-western-digital.html

Conversation Starters: 5 Keys To Lenovo's AI Future

#Lenovo is betting big on smart devices' ability to catapult the company into a future IT market dominated by #artificialintelligence, and the company is giving a peek at the concept products it says will get it there. Lenovo CTO Yong Rui, in a blog post published as the company's Tech World conference kicked off in Shanghai, China, this week, said the company is investing heavily in AI, including spending on personnel and an Artificial Intelligence Lab staffed by more than 100 researchers around the world. Rui writes that in order to enable AI to "change how we live and work and how our societies operate," Lenovo must get algorithms, big data and computing power right. Lenovo this week is showing off a handful of concept products intended to show that the company has done just that.

http://m.crn.com/slide-shows/components-peripherals/300089261/conversation-starters-5-keys-to-lenovos-ai-future.htm?itc=ticker

The Rise of AI Is Forcing Google and Microsoft to Become Chipmakers

By now our future is clear: We are to be cared for, entertained, and monetized by #artificialintelligence. Existing industries like healthcare and manufacturing will become much more efficient; new ones like augmented reality goggles and robot taxis will become possible. But as the tech industry busies itself with building out this brave new artificially intelligent, and profit-boosting, world, it's hitting a speed bump: Computers aren't powerful and efficient enough at the specific kind of math needed. While most attention to the AI boom is understandably focused on the latest exploits of algorithms beating humans at poker or piloting juggernauts, there's a less obvious scramble going on to build a new breed of computer chip needed to power our AI future. One datapoint that shows how great that need is: software companies #Google and #Microsoft have become entangled in the messy task of creating their own chips. They're being raced by a new crop of startups peddling their own #AI -centric silicon - and probably #Apple, too. As well as transforming our lives with intelligent machines, the contest could shake up the established chip industry. Microsoft revealed its AI chip-making project late on Sunday. At a computer vision conference in Hawaii, Harry Shum, who leads Microsoft's research efforts, showed off a new chip created for the HoloLens augmented reality goggles. The chip, which Shum demonstrated tracking hand movements, includes a module custom-designed to efficiently run the deep learning software behind recent strides in speech and image recognition. Microsoft wants you to be able to smoothly reach out and interact with the virtual objects overlaid on your vision, and says nothing on the market could run machine learning software efficiently enough for the battery-powered device that sits on your head. Microsoft's project comes in the wake of Google's own deep learning chip, announced in 2016. The TPU, for tensor processing unit, was created to make deep learning more efficient inside the company's cloud. Google told WIRED earlier this year that the TPU saved it from building 15 new datacenters as demand for speech recognition soared. In May Google announced it had made a more powerful version of its TPU and that it would be renting out access to the chips to customers of its cloud computing business. News that Microsoft has built a deep learning processor for HoloLens suggests Redmond wouldn't need to start from scratch to prep its own server chip to compete with Google's TPUs. Microsoft has spent several years making its cloud more efficient at deep learning using so-called field-programmable gate arrays, a kind of chip that can be reconfigured after it's manufactured to make a particular piece of software or algorithm run faster. It plans to offer those to cloud customers next year. But when asked recently if Microsoft would make a custom server chip like Google's, Doug Burger, the technical mastermind behind Microsoft's rollout of FPGAs, said he wouldn't rule it out. Pieces of the design and supply chain process used for the HoloLens deep learning chip could be repurposed for a server chip.

https://www.wired.com/story/the-rise-of-ai-is-forcing-google-and-microsoft-to-become-chipmakers/

How Apple, Microsoft, Google, and Facebook Plan to Eliminate Flash

Jonathan Vanian, Jul 25, 2017. After over twenty years serving videos and animations to people over the web, the #AdobeFlash media player is being put to rest. #Adobe's (ADBE, -0.88%) decision, announced Tuesday, to stop supporting Flash comes after several big technology companies like #Microsoft (MSFT, +0.80%) and #Google (GOOG, -3.02%) had already said they would phase out the software in their own products due to its numerous security holes and bugs. These tech giants supported Adobe in its plans to close the curtain on Flash, and all issued their own takes on the announcement and how it will affect their respective customers. Here's a rundown on what they are saying and how it affects you.

Google: As Google notes in a blog post, Flash's popularity has greatly declined over the years. The search giant noted that three years ago, 80% of people who used its Chrome browser on desktop computers visited a website that incorporated Flash in some way. Now, however, only 17% of desktop Chrome users visit websites with Flash, and that percentage "continues to decline." The company said that it would "remove Flash completely from Chrome toward the end of 2020." Until then, if people visit a site that uses Flash, they will be prompted to give the website permission to run Flash.

Microsoft: Microsoft gave a timeline of how it plans to stop supporting Flash for its old-school Internet Explorer browser and its newer Microsoft Edge browser. From 2017 through 2018, people will need to give permission to the Edge browser to run Flash when they visit a website that uses the tech. The Edge browser will remember their choice so that they only need to give permission once. Internet Explorer users can continue using Flash without needing to give permission. In mid-to-late 2018, however, the Edge browser will require permission from users to run Flash each time they visit a website that uses Flash. Again, Internet Explorer users can continue to visit websites with Flash like they normally do. Microsoft will then disable Flash by default in its two browsers in mid-to-late 2019, but people will be allowed to re-enable Flash in those browsers. After re-enabling Flash, people "will continue to require approval for Flash on a site-by-site basis." Microsoft plans to eliminate all support for Flash by the end of 2020, after which people will no longer be able to run Flash in either the Edge or Internet Explorer browser.

Apple: As Apple (AAPL, +0.47%) notes in its blog post on the announcement, Apple has never supported Flash on its iOS-powered mobile devices like the iPhone and iPad. For its Mac lineup of personal computers, Apple said it began moving away from Flash in 2010 when it stopped pre-installing Flash on those computers. Now, if people install Flash onto their Macs, Flash will still remain "off by default." This means that each time a person uses Apple's Safari browser to visit a website with Flash, they must give the website permission to run the tech.

http://fortune.com/2017/07/25/apple-microsoft-google-facebook-flash/

Wipro and Hewlett Packard Enterprise announce partnership

BENGALURU: #Wipro Limited today announced a partnership with Hewlett Packard Enterprise ( #HPE ) to offer IT infrastructure solutions in a consumption-based or pay-per-use business model for enterprises. This model for IT infrastructure procurement and provisioning will be offered to both Wipro and HPE's customers, globally. As part of the alliance, Wipro will leverage HPE Flexible Capacity to offer flexible and scalable IT infrastructure services in a consumption-based IT model, accelerate growth and enable digital transformation for its customers, Wipro said in a statement here.

http://m.economictimes.com/tech/ites/wipro-and-hewlett-packard-enterprise-announce-partnership/articleshow/59754858.cms

Symbolic IO CEO cuffed by cops, vanishes from his storage startup

#SymbolicIO CEO and founder Brian Ignomirello was arrested last week on outstanding warrants and for allegedly violating a restraining order, it is reported. According to the Asbury Park Press, Ignomirello was cuffed last Tuesday at the six-bedroom mansion he shared with his girlfriend and their two children in Colts Neck, New Jersey, USA. No further charges were laid against Ignomirello after he was collared, it is reported. On Monday this week, we noticed he had disappeared from Symbolic's website amid rumors he is no longer CEO nor at the company at all. A spokesperson for the storage biz told us on the phone it had "no comment" to make, and hung up. A spokesperson for Colts Neck police was not available for comment. A spokesman for the Monmouth County Prosecutor's Office also declined to comment due to the sensitive nature of the case, which involves an allegation of domestic violence. Symbolic IO is a storage hardware-slash-software upstart developing an efficient way of storing and encoding data that results in faster processing. It was founded by Ignomirello in 2012, has about 30 staff, and has taken in around $15m in funding. Timeline: May 2017: Ignomirello is reportedly arrested, and is accused of giving his girlfriend a black eye. He is charged with domestic assault, and denies any wrongdoing. "We intend to plead not guilty, and we're going to vigorously defend this case," said Mitchell Ansell, the founder's attorney. July 2017: Ignomirello is arrested again. In May, Ignomirello returned from a restaurant in a drunken state, knocked his girlfriend to the ground, and beat her up, it is claimed. After the tech boss was arrested and charged, his lawyer Mitchell Ansell was quoted as saying: "Just because facts are alleged does not mean they occurred the way they were alleged. And we are confident once all the facts come out, Brian will be vindicated." After last week's kerfuffle, Ansell said: "Brian was not charged with any new offenses ... He surrendered to police. We have defenses to those charges and Brian looks forward to being exonerated in a court of law. We believe he'll be eventually exonerated." It was reported that Ignomirello had barricaded himself in his luxury home when the cops showed up on Tuesday, although Ansell insisted the next day: "I would not use the word 'standoff' to describe what happened." Not only is Ignomirello no longer listed as Symbolic IO's CEO, the company is not showing any chief exec at all. The only two executives listed on the website are chief operating officer Jonah Ninger and chief technology officer Rob Peglar. With Ignomirello being the founder and guiding light of the company, there now has to be concern about its future.

https://www.theregister.co.uk/2017/07/25/symbolic_io_ceo_arrested/

Tuesday, July 25, 2017

T. Rowe Price Group reports second-quarter profit

#TRowe Price Group Inc. reported a profit of $373.9 million in the second quarter, up 84 percent from the April-to-June period last year. Earnings at the Baltimore-based investment management company rose to $1.50 per share, up 71 cents compared to the same quarter in 2016. Adjusted to remove what T. Rowe considers non-core operations, per-share earnings were $1.28, beating analyst estimates and boosting the firm’s stock price Tuesday.
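As a quick sanity check on those comparisons, the year-ago figures implied by the article's numbers can be backed out directly; the inputs are from the article, and the derived values are mine:

# Back out the implied Q2 2016 figures from the reported growth.
eps_q2_2017 = 1.50      # $ per share, reported
eps_increase = 0.71     # $ per share, increase vs. Q2 2016
profit_q2_2017 = 373.9  # $ millions, reported
profit_growth = 0.84    # up 84% year over year

eps_q2_2016 = eps_q2_2017 - eps_increase               # $0.79
profit_q2_2016 = profit_q2_2017 / (1 + profit_growth)  # ~$203.2M

print(f"Implied Q2 2016 EPS: ${eps_q2_2016:.2f}")
print(f"Implied Q2 2016 profit: ${profit_q2_2016:.1f}M")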

“I do think this was a strong quarter,” said President and CEO William J. Stromberg, citing strong sales, earnings growth and investment performance. “Those are all really good milestones for us.”

Four analysts polled by Zacks Investment Research had estimated adjusted per-share earnings of $1.26.

T. Rowe’s stock closed at $83.83 a share, up 3.4 percent in Tuesday trading.

Macrae Sykes, a research analyst for Gabelli & Co., said T. Rowe's performance is a sign of the firm's resilience amid ongoing industry pressures, such as a shift among investors to low-fee products and market volatility brought on by political and geopolitical uncertainty.

“What we’re seeing is the more scaled players such as T. Rowe, with solid investment performance, take share,” Sykes said. “The strong are getting stronger.”

Stromberg said T. Rowe is beginning to see the results of investments in new technology and international expansion. Earlier this year, T. Rowe signed agreements to sell its products through no-fee platforms operated by Charles Schwab & Co. and Fidelity, a move that will give T. Rowe more exposure, Stromberg said.

http://www.baltimoresun.com/business/bs-bz-t-rowe-second-quarter-20170725-story.html

Texas Advanced Supercomputing Center Taps Latest HPC Tech

Building on the successes of the Stampede1 supercomputer, the Texas Advanced Computing Center ( #TACC ) has rolled out its next-generation #HPC system, #Stampede2. Over the course of 2017, Stampede2 will undergo further optimization phases with the support of a $30 million grant from the National Science Foundation ( #NSF ). With the latest Intel Xeon Phi and Xeon "Skylake" processors, and enhanced networking provided by the Omni-Path architecture, the new flagship system is expected to deliver approximately 18 petaFLOPS, nearly doubling Stampede1's performance. Stampede2 continues Stampede1's mission: enabling thousands of scientists and researchers across the United States to deliver breakthrough discoveries in science, engineering, artificial intelligence, the humanities, and more. Dr. Dan Stanzione, Executive Director at TACC, describes the breadth of the center's work. "TACC provides large-scale support for our users who make significant engineering and research discoveries. As research goals advance, the problems supercomputers help solve are getting bigger and more complicated every year. For our team at TACC, delivering greater speed and capacity with Stampede2 is a top concern." A faster system means TACC can help scientists gain insights from their data more quickly. It also means Stampede2 can offer support for many more projects. In 2016, TACC received five times more requests to use Stampede than its capacity could handle. Unfortunately, that meant many important scientific projects did not have access to the resources they needed. "Stampede2 will be a big step forward, since we can accommodate more projects and complete them more quickly," Stanzione continued. TACC's Stampede2 is deployed with support from vendor partners #Dell, #Intel, and #Seagate Technology, and operated by a team of cyberinfrastructure experts at TACC, UT Austin, Clemson University, Cornell University, the University of Colorado at Boulder, Indiana University, and Ohio State University. "An HPC system the scope of Stampede represents an incredible team effort," says Stanzione. "We are thankful for the many partner organizations and experts across the country who supported us." Asked about the challenges of rolling out such a major HPC system upgrade, Stanzione describes key considerations. "Much of the work in science and engineering is computation-focused, especially in fields such as weather, astrophysics, and aerodynamics. However, we are seeing exciting growth in demand from fields like biology and the humanities, and these new communities bring different needs. Today we are providing familiar web and application interfaces to supercomputing systems that naturally fit into established workflows. And we are building systems from the ground up with big data in mind. Stampede has grown from being a big engine to being an end-to-end solution." Several other benefits give Stampede2 a huge advantage over earlier systems. New applications require greater computational throughput. Stampede2 supports more than 40 major science and engineering applications at any given time, alongside thousands of applications used by small groups or individual researchers. Stampede2 is a heterogeneous system: one third of its processors are Intel Xeon processors, and the rest are Intel Xeon Phi processors. This combination allows a broader range of applications to take advantage of the improved system configuration and speeds up parallel workloads.
For intensive computing applications such as visualization and ray tracing, Intel processors can also replace GPUs when tapping OSPRay interactive rendering capabilities. Given the scope of the design, Stampede2 is being deployed in three phases. Phase one, based on the Intel Scalable System Framework, is already in place. Each node is a Dell PowerEdge C6320P server with an Intel Xeon Phi 7250 processor (code name Knights Landing). As with Stampede1.5, Intel Omni-Path Architecture provides 100Gbps bandwidth in a low-latency network fabric. Though only part of the system has been deployed, phase one of Stampede2 ranks #12 on the June 2017 Top500.org rankings of the fastest supercomputers worldwide. Comprising 4,200 nodes – each with 68 cores, 96GB of DDR4 RAM, and 16GB of high-speed MCDRAM – Stampede2 achieved a peak performance of nearly 13 petaFLOPS. Phase two will add 1,736 nodes built on the newest Intel Xeon processors (formerly code named Skylake). As part of the Intel Xeon Processor Scalable family (formerly code named Purley), these new 28-core processors offer significantly faster performance for compute- and data-intensive workloads, due also to significant increases in memory and I/O bandwidth. Additional efforts will take the system's capability even further. Incorporation of 3D XPoint non-volatile memory will provide an important opportunity to evaluate the impact of this new approach to memory on a large scale, and is expected to nudge performance even higher. When complete, Stampede2 is expected to deliver about 18 petaFLOPS of computing capacity for open science, nearly doubling the maximum performance of Stampede1. Researchers have used the system since May, conducting diverse research on Stampede2 including automated tumor identification from MRI data (The University of Texas at Austin), simulations supporting the LIGO gravitational wave discovery (Cambridge University), and real-time weather forecasting to direct storm chaser trucks (University of Oklahoma). When asked about the planning and rollout of Stampede2, Stanzione replies, "Stampede2 has been TACC's smoothest deployment so far. Our team has already accomplished many system upgrades, and we are working hard on further optimizations. One element that makes the deployment process easier is our early collaboration with Intel. We always want to be among Intel's earliest adopters so we can realize the benefits of their new technologies right away. In turn, we can also offer feedback to Intel when we encounter unexpected challenges." Intel, working with TACC and other early adopters, uses this feedback to ensure products have been tested thoroughly in production systems before broad release. While Stanzione and his team strive for the greatest possible performance, they must also ground their ambitions in the reality of a fixed budget. "We are very fortunate to have the financial support of the NSF, and we want to use their grant very wisely," reflects Stanzione. "By experimenting with the latest Intel technologies, we can better understand the price-performance impact of each, consider bottlenecks, and determine where Stampede will benefit most from upgrades. Working early with Intel helps us identify and adopt technologies that maximize our system's capability while remaining cost effective. The latest Intel Xeon processors will give us quite a boost in performance, and we are excited to implement phase two very soon." Once phase two is complete, Stanzione expects a much greater number of applicants seeking time on the supercomputer. "Our team here at TACC is very excited to get our work done so we can better serve engineers and researchers who require the system's full potential," he reflects. "We are the people behind the scenes; our reward comes when scientists achieve breakthrough discoveries." While the team at TACC cannot anticipate the multitude and diversity of scientific projects which will ultimately run on Stampede2, one thing is certain: HPC systems like Stampede2 serve as foundations for innovation, taking science forward into a very optimistic future.
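A quick back-of-envelope sketch connects those node counts to the quoted petaFLOPS figures. The 68 cores per node and the 4,200-node count come from the article; the 1.4 GHz clock and 32 double-precision FLOPs per core per cycle (two AVX-512 FMA units) are my assumptions about the Xeon Phi 7250, so treat this as an estimate rather than TACC's own math:

KNL_CORES = 68           # cores per Xeon Phi 7250 node (from the article)
KNL_CLOCK_GHZ = 1.4      # assumed base clock for the 7250
DP_FLOP_PER_CYCLE = 32   # assumed: 2 AVX-512 VPUs x 8 doubles x 2 (FMA)
PHASE1_NODES = 4200      # Knights Landing nodes (from the article)

node_tflops = KNL_CORES * KNL_CLOCK_GHZ * DP_FLOP_PER_CYCLE / 1000
system_pflops = node_tflops * PHASE1_NODES / 1000

print(f"Per-node peak: {node_tflops:.2f} TFLOPS")     # ~3.05 TFLOPS
print(f"Phase one peak: {system_pflops:.1f} PFLOPS")  # ~12.8, i.e. "nearly 13"

Under the same assumptions, the remaining roughly 5 petaFLOPS needed to reach the quoted 18 would come from the 1,736 phase-two Skylake nodes, or about 3 TFLOPS per node.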

https://www.nextplatform.com/2017/07/25/texas-advanced-supercomputing-center-taps-latest-hpc-tech/