
Wednesday, August 31, 2016

All-flash VSAN gaining speed

#VSAN became more flash-friendly with version 6.2 earlier this year, with data deduplication enabling customers to gain more usable capacity from their expensive flash media. That has prompted customers to use all-flash VSAN nodes for applications that demand significant performance. At #VMworld this week, Lee Caswell, VMware's vice president of storage and availability products, called VSAN "the fastest-growing product in the fastest-growing market in storage." He said VSAN is lining up 100 customers per week -- triple its growth rate from last year -- and now has 5,000 customers. Caswell said, with flash and hyper-converged, VSAN is bringing "the two biggest disrupters in the market together." It helps that storage and server vendors, such as VMware's parent company #EMC and EMC's new parent #DellTechnologies, sell hyper-converged systems running VSAN. EMC representatives said its VSAN-powered #VxRail has beaten internal sales forecasts since its February launch. Dell is among the partners who sell all-flash VSAN on Ready Node prepackaged hardware.

http://searchconvergedinfrastructure.techtarget.com/news/450303501/All-flash-VSAN-gaining-speed

VMware and HPE: BFFs, Best Frenemies Forever!!

Don’t let the title fool you. We love #VMware. Everybody loves VMware, even if they don’t want to admit it. VMware has created a world where virtualization is more than just part of a company’s ecosystem; it’s practically the cornerstone on which infrastructures are built. The first question asked in any sales meeting is usually, “What percentage of your infrastructure is virtualized?”

Hewlett Packard Enterprise ( #HPE ) and VMware have enjoyed a long, successful history together. It’s often been suggested that HPE is VMware’s biggest partner, biggest reseller, biggest integrator, etc. Think about it: What industry benefits most from virtualization if not servers? And who holds the #1 market share for servers?

There’s clearly no question how much VMware and HPE have helped each other and benefited from being BFFs, and it’s more than just servers. A vast majority of VMs are connected to shared storage, which is why the top conversations at VMworld are typically related to storage integration into that ecosystem.

We’re so tight with VMware that in 2014, a full year before VVols were actually released, HPE was running demos on a #3PAR StoreServ storage array at VMworld. (See Calvin Zito’s post VMware VVols and HP 3PAR StoreServ at VMworld.)

But getting to the heart of the matter: virtualization

In spite of the lovefest, we do compete with VMware, and that’s OK. VMware competes with almost everyone on some level.

HPE StoreVirtual is at the heart of HPE Composable Infrastructure, our Hyper Converged systems, and our Composable Data Fabric. While VMware offers VSAN as their software-defined storage, HPE StoreVirtual offers much more. It’s more flexible, more mature, more resilient. It offers unlimited replication targets – infinitely valuable where software-defined and hyperconverged products are used – and it isn’t limited to VMware.

On to composability: making “IT” synonymous with “easy”

Composable infrastructure is removing the tedious planning and preparation that goes into traditional IT decisions. This way, admins can focus on the things that are really important: the business and the apps they need to keep it moving.

Remember, this is what composable infrastructure is:

- Unified APIs across compute, fabric, and storage
- Software-defined intelligence
- Physical, virtual, and container fluid resource pools

We’re finally getting to the place where everyone’s been wanting to go since the dotcom era. No more complicated IT and no more wasted cycles on ensuring interoperability and deciding whose “XYZ feature” is best. The world today loves simple management interfaces. That’s why as part of our approach to composability, we made HPE OneView, the management platform, as simple as it gets – without sacrificing a smidgen of its robustness and power. (You’re welcome.)
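
To ground the “unified API” and OneView claims, here is a minimal sketch of what driving compute, fabric, and storage through one REST interface looks like. It is modeled on the style of the OneView REST API, but the appliance address, credentials, API version, and exact endpoints here should all be treated as assumptions for illustration; check HPE’s documentation for the real contract.

```python
import requests

APPLIANCE = "https://oneview.example.com"  # hypothetical appliance address
HEADERS = {"X-Api-Version": "300", "Content-Type": "application/json"}  # version assumed

# Authenticate once; the same session token then covers every resource type.
login = requests.post(f"{APPLIANCE}/rest/login-sessions",
                      json={"userName": "administrator", "password": "secret"},
                      headers=HEADERS, verify=False)
login.raise_for_status()
HEADERS["Auth"] = login.json()["sessionID"]

# One API style across compute, storage, and fabric -- the "unified API" pillar.
for resource in ("server-hardware", "storage-volumes", "logical-interconnects"):
    listing = requests.get(f"{APPLIANCE}/rest/{resource}", headers=HEADERS, verify=False)
    listing.raise_for_status()
    print(resource, "->", listing.json().get("count", 0), "members")
```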

Maybe one of the best assets composable infrastructure brings to the table is the depth of its partner ecosystem capabilities. It integrates Docker, Microsoft, and a whole slew of other automation technologies. Why bother picking between “XYZs” when you can simply say, “Yes, please” and enjoy all the latest, greatest technologies available with a composable infrastructure?

http://www.computerworld.com/article/3114231/data-center/vmware-and-hpe-bffs-best-frenemies-forever.html

Why VMware, Inc. (NYSE:VMW), Pivotal Will Replace Quest as Dell Software: Mechel PAO (NYSE:MTL), Sino-Global Shipping America (NASDAQ:SINO), Interactive Intelligence Group (NASDAQ:ININ)

#VMware, Inc. (NYSE:VMW) traded 4.46 million shares in the last trading session, closing at $74.22. The company’s gross margin stands at 84.80%, while its return on investment (ROI) is 10.40%. The stock has traded between $43.15 and $82.47 over the last year. Analysts’ mean target price for VMware, Inc. (NYSE:VMW) is $3.00, while the analysts’ mean recommendation is 2.70. VMW EPS growth this year and next year is 14.80% and 6.50%, respectively. VMware, Inc. (NYSE:VMW) and #Dell aren’t admitting it, but VMware and #Pivotal are now, in effect, the new #DellSoftware. When Dell bought #Quest Software in July 2012 for $2.36 billion, its largest acquisition ever, and anointed it as the new crown jewel of the company, Michael Dell told eWEEK: “Dell is now an international-class software company.” Fast forward four years: on June 20 of this year, after basically renting Quest for 16 fiscal quarters, Dell changed its mind about the value of its acquisition and announced it was selling it to a private equity firm for $2 billion and change. Mechel PAO (NYSE:MTL) shares moved to $1.91 on Tuesday. MTL’s market capitalization is $397.54 million, and it has 445 employees. Analysts’ mean target price for Mechel PAO (NYSE:MTL) is 56.00, while the analysts’ mean recommendation is 3.50. Its weekly performance is 5.52%, while its year-to-date (YTD) performance is 16.46%. Mechel PAO (NYSE:MTL) reported that positive trends appeared in 1Q2016, although the overall price level in the period was significantly lower than the previous year’s. In the first and second quarters of this year, contract prices for hard coking coal were $81 and $84 per tonne, while in the first and second quarters of 2015 those prices were $117 and $109.5 per tonne. Weakening market demand in 1Q2016 also reduced sales of several types of products, which had an additional negative impact on the division’s revenue. Nevertheless, the company said, optimization of its cost structure enabled it, despite the drop in revenue and prices, to increase the division’s EBITDA by 10% to 14.4 billion rubles, while the EBITDA margin went from 23% to 26% year-on-year.

http://www.benchmarkmonitor.com/why-vmware-inc-nysevmw-pivotal-will-replace-quest-as-dell-software-mechel-pao-nysemtl-sino-global-shipping-america-nasdaqsino-interactive-intelligence-group-nasdaqinin/1887757/

Revealed: Google’s plan for quantum computer supremacy

SOMEWHERE in California, #Google is building a device that will usher in a new era for computing. It’s a #quantumcomputer, the largest ever made, designed to prove once and for all that machines exploiting exotic physics can outperform the world’s top supercomputers. And New Scientist has learned it could be ready sooner than anyone expected – perhaps even by the end of next year. The quantum computing revolution has been a long time coming. In the 1980s, theorists realised that a computer based on quantum mechanics had the potential to vastly outperform ordinary, or classical, computers at certain tasks. But building one was another matter. Only recently has a quantum computer that can beat a classical one gone from a lab curiosity to something that could actually happen. Google wants to create the first. The firm’s plans are secretive, and Google declined to comment for this article. But researchers contacted by New Scientist all believe it is on the cusp of a breakthrough, following presentations at conferences.

“They are definitely the world leaders now, there is no doubt about it,” says Simon Devitt at the RIKEN Center for Emergent Matter Science in Japan. “It’s Google’s to lose. If Google’s not the group that does it, then something has gone wrong.”

We have had a glimpse of Google’s intentions. Last month, its engineers quietly published a paper detailing their plans (arxiv.org/abs/1608.00263). Their goal, audaciously named quantum supremacy, is to build the first quantum computer capable of performing a task no classical computer can.

“It’s a blueprint for what they’re planning to do in the next couple of years,” says Scott Aaronson at the University of Texas at Austin, who has discussed the plans with the team.

So how will they do it? Quantum computers process data as quantum bits, or qubits. Unlike classical bits, these can store a mixture of both 0 and 1 at the same time, thanks to the principle of quantum superposition. It’s this potential that gives quantum computers the edge at certain problems, like factoring large numbers. But ordinary computers are also pretty good at such tasks. Showing quantum computers are better would require thousands of qubits, which is far beyond our current technical ability.
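
The scaling at work here is standard textbook quantum mechanics rather than anything specific to Google’s paper: a single qubit is a superposition of the two basis states, and an n-qubit register carries one complex amplitude per n-bit string, so a classical simulator’s bookkeeping doubles with every added qubit.

\[
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
\]
\[
|\psi_n\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle \quad\Longrightarrow\quad 2^n \text{ complex amplitudes to track}
\]

This is why “thousands of qubits” is far beyond classical simulation, and why even 50 qubits is enough to strain the largest supercomputers.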

Instead, Google wants to claim the prize with just 50 qubits. That’s still an ambitious goal – publicly, they have only announced a 9-qubit computer – but one within reach.

To help it succeed, Google has brought the fight to quantum’s home turf. It is focusing on a problem that is fiendishly difficult for ordinary computers but that a quantum computer will do naturally: simulating the behaviour of a random arrangement of quantum circuits.

Any small variation in the input into those quantum circuits can produce a massively different output, so it’s difficult for the classical computer to cheat with approximations to simplify the problem. “They’re doing a quantum version of chaos,” says Devitt. “The output is essentially random, so you have to compute everything.”

To push classical computing to the limit, Google turned to Edison, one of the most advanced supercomputers in the world, housed at the US National Energy Research Scientific Computing Center. Google had it simulate the behaviour of quantum circuits on increasingly large grids of qubits, up to a 6 × 7 grid of 42 qubits.

This computation is difficult because as the grid size increases, the amount of memory needed to store everything balloons rapidly. A 6 × 4 grid needed just 268 megabytes, less than found in your average smartphone. The 6 × 7 grid demanded 70 terabytes, roughly 10,000 times that of a high-end PC.

Google stopped there because going to the next size up is currently impossible: a 48-qubit grid would require 2.252 petabytes of memory, almost double that of the top supercomputer in the world. If Google can solve the problem with a 50-qubit quantum computer, it will have beaten every other computer in existence.
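
Those memory figures follow directly from the exponential amplitude count above. Here is a quick sketch of the arithmetic, assuming a dense state-vector simulation with 16 bytes per complex amplitude (double precision); note that the 2.252-petabyte figure quoted for 48 qubits matches 8-byte single-precision amplitudes instead, so treat the constant as an assumption:

```python
def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory for a dense n-qubit state vector: 2**n complex amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude

for label, n in [("6 x 4 grid", 24), ("6 x 7 grid", 42), ("48-qubit grid", 48)]:
    print(f"{label}: {n} qubits -> {statevector_bytes(n):,} bytes")

# 6 x 4 grid: 24 qubits -> 268,435,456 bytes         (~268 MB, as quoted)
# 6 x 7 grid: 42 qubits -> 70,368,744,177,664 bytes  (~70 TB, as quoted)
# 48-qubit grid: ~4.5 PB at 16 B/amplitude; the quoted 2.252 PB matches 8 B.
```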

Eyes on the prize

By setting out this clear test, Google hopes to avoid the problems that have plagued previous claims of quantum computers outperforming ordinary ones – including some made by Google.

Last year, the firm announced it had solved certain problems 100 million times faster than a classical computer by using a D-Wave quantum computer, a commercially available device with a controversial history. Experts immediately dismissed the results, saying they weren’t a fair comparison.

Google purchased its D-Wave computer in 2013 to figure out whether it could be used to improve search results and artificial intelligence. The following year, the firm hired John Martinis at the University of California, Santa Barbara, to design its own superconducting qubits. “His qubits are way higher quality,” says Aaronson.

It’s Martinis and colleagues who are now attempting to achieve quantum supremacy with 50 qubits, and many believe they will get there soon. “I think this is achievable within two or three years,” says Matthias Troyer at the Swiss Federal Institute of Technology in Zurich. “They’ve showed concrete steps on how they will do it.”

Martinis and colleagues have discussed a number of timelines for reaching this milestone, says Devitt. The earliest is by the end of this year, but that is unlikely. “I’m going to be optimistic and say maybe at the end of next year,” he says. “If they get it done even within the next five years, that will be a tremendous leap forward.”

The first successful quantum supremacy experiment won’t give us computers capable of solving any problem imaginable – based on current theory, those will need to be much larger machines. But having a working, small computer could drive innovation, or augment existing computers, making it the start of a new era.

Aaronson compares it to the first self-sustaining nuclear reaction, achieved by the Manhattan project in Chicago in 1942. “It might be a thing that causes people to say, if we want a full-scalable quantum computer, let’s talk numbers: how many billions of dollars?” he says.

Solving the challenges of building a 50-qubit device will prepare Google to construct something bigger. “It’s absolutely progress to building a fully scalable machine,” says Ian Walmsley at the University of Oxford.

For quantum computers to be truly useful in the long run, we will also need robust quantum error correction, a technique to mitigate the fragility of quantum states. Martinis and others are already working on this, but it will take longer than achieving quantum supremacy.

Still, achieving supremacy won’t be dismissed.

“Once a system hits quantum supremacy and is showing clear scale-up behaviour, it will be a flare in the sky to the private sector,” says Devitt. “It’s ready to move out of the labs.”

“The field is moving much faster than expected,” says Troyer. “It’s time to move quantum computing from science to engineering and really build devices.”

https://www.newscientist.com/article/mg23130894-000-revealed-googles-plan-for-quantum-computer-supremacy/

SVA Software Launches With New Partner Program to Capitalize on Demand for Storage Performance Management Solutions to Optimize Virtualized Data Center Operations

#SVA Velocity Partner Program Delivers Compelling New Revenue Opportunities With Industry-Leading Incentives and Enablement Resources for #IBM Spectrum Storage Channel Partners

CHARLOTTE, NC--(Marketwired - Aug 31, 2016) - SVA Software, a leading provider of storage performance management software, today announced its company launch in North America along with a new partner program to capitalize on the demand for its BVQ storage optimization solutions, which simplify the management of virtualized data center operations. The SVA Velocity Partner Program delivers new revenue opportunities for system integrators and channel partners to deliver the industry's most comprehensive solutions for performance and capacity monitoring and analysis for the IBM SAN Volume Controller (SVC) and the IBM Storwize family, including the V9000, V7000, V5000 and V3700. SVA Software is a recognized leader, worldwide, in delivering solutions that optimize data center efficiency and prevent unplanned IT outages. Its flagship product, BVQ (Business Volume Qualicision), provides visibility into the performance, health and utilization of the entire IBM virtualized infrastructure. By delivering granular analysis information, it improves storage efficiency, which reduces cost and operational risk while also delivering the insights that enable proactive action before business operations are impacted.

Partners Underscore the Value of BVQ Storage Optimization Solutions

"BVQ is a solution that we recommend to our clients with IBM Spectrum Storage environments," said Matt Cleland, Director, Converged Infrastructure and Storage Solutions at Sirius Computer Solutions, a leading national solutions integrator. "BVQ gives us the visibility and deep inspection to help our clients properly architect and optimize their virtualized storage. The Sirius team will provide assessments using BVQ to help clients resolve issues, optimize efficiencies and manage their infrastructure." "BVQ is a stellar and unparalleled deep monitoring and troubleshooting solution for Storwize and Spectrum Virtualize," said Joe Jurek, Solution Architect, Mark III Systems. "As a storage architect who is intimately involved with large SAN Volume Controller (Spectrum Virtualize) and Storwize storage infrastructures in the most critical of environments, I can say without hesitation that this tool is a must-have for the mature and proactive data center that desires the ultimate in control and understanding of its Spectrum Virtualize environments."

To support the accelerated adoption of BVQ, the SVA Velocity Partner Program provides channel partners with new revenue streams, sales and support training, product resources and marketing support to build a successful practice selling BVQ. Included in the program is a turnkey offering to deliver the BVQ Health Check: a limited engagement that analyzes a customer's current storage state and produces a comprehensive assessment report with findings, recommendations and actionable insights for storage environment optimization. "Today's growing data volumes place increased strain on storage infrastructure while adding cost, complexity and management requirements at every step," said Don Mead, SVP Marketing & Partner Enablement, SVA Software. "To increase efficiency and make the most of existing storage investments, enterprises need to find the right storage mix to optimize performance, health and utilization.
SVA Software's BVQ solution works to guide strategic investments and configuration decisions based on actionable performance information -- taking the guesswork out of storage procurement. With our new partner program, we are extending this opportunity to IBM Spectrum Storage channel partners, so that they can deliver the valuable insights customers need to optimize their virtualized storage environments with confidence and ease." SVA Software was named this month by CRN®, a brand of The Channel Company, to its 2016 list of Emerging Vendors. This annual list recognizes recently founded, up-and-coming technology suppliers who are shaping the future of the IT channel through unique technological innovations. The SVA Software Velocity Global Partner Program is available now for qualified partners. For more information, visit svasoftware.com/partner.
http://m.marketwired.com/press-release/sva-software-launches-with-new-partner-program-capitalize-on-demand-storage-performance-2154603.htm

Machine learning boosts HPE’s big data analytics portfolio

#HPE is continuing its push into the big data market with the release of #Vertica 8 and HPE #Haven OnDemand Combinations. OnDemand Combinations, originally launched in March, aims to give developers the ability to apply the power of machine learning to build next-generation applications on the cloud-based offering. In addition to a pre-built catalogue of cognitive services and a drag-and-drop interface, the offering allows developers to chain together multiple machine learning APIs into combinations that can then be copied and pasted into development projects; the idea is that this will help speed up the process of building new and exciting mobile and enterprise applications. Further capabilities include a publish-and-reuse function, which ensures that custom combinations are securely published to the organisation's account for reuse and augmentation. Developers can also build new combinations for prototype apps free of charge by signing up for a Freemium account, and make use of more than 70 APIs to create, test, and publish new services. All of this is hosted in the #Microsoft #Azure cloud, which will help to increase the developer reach for HPE. In addition to the HOD offering, HPE has released Vertica 8 to provide a unified architecture and advanced in-database analytics that help users conduct sophisticated analysis. One of the issues the latest release aims to combat is that of extracting intelligence from data that resides in multiple silos across the data centre. Vertica 8 offers "analyse-in-place" functionality that will let businesses continually gain intelligence from their information, wherever it lives, the company said. Vertica 8 can be deployed across on-premise, private, and public clouds, and in Hadoop data lakes. "Integrated with broad in-database applied machine learning analytic capabilities and open source innovation, Vertica 8 gives companies a competitive edge to solve problems faster and at lower costs," said Colin Mahony, SVP and GM, Big Data Platform, HPE Software. The company said that, thanks to core data movement and orchestration enhancements, Vertica 8 will provide up to 700% faster data loading for hundreds of thousands of columns, simplified data loading from Amazon S3, and visual monitoring of Apache Kafka data streams. To bolster the offering further, HPE Vertica is getting Azure support and is available immediately from the Microsoft Azure Marketplace. The latest features include in-database machine learning, analyse-in-place analytics on Hadoop, and an optimised #Apache #Spark adapter. The analyse-in-place feature uses Vertica's Parquet and ORC readers to help users access and analyse data that resides in Hadoop data lakes without copying or moving the data. HPE Vertica 8, codenamed "Frontloader", will be generally available during the company's fourth quarter.
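
As a sketch of what "chaining" machine learning APIs means in practice, the snippet below strings two hosted services together so that one call's output becomes the next call's input. The base URL, API names, and response fields are assumptions for illustration, not the documented Haven OnDemand contract:

```python
import requests

BASE = "https://api.havenondemand.com/1/api/sync"  # assumed base URL
APIKEY = "YOUR_API_KEY"

def call_api(name: str, **params) -> dict:
    """Invoke one hosted ML API synchronously and return its JSON result."""
    params["apikey"] = APIKEY
    resp = requests.get(f"{BASE}/{name}/v1", params=params)
    resp.raise_for_status()
    return resp.json()

# Chain two services: pull the text out of a page, then score its sentiment.
# Both API names and both response fields are illustrative assumptions.
page = call_api("extracttext", url="https://example.com/press-release")
sentiment = call_api("analyzesentiment", text=page.get("text", ""))
print(sentiment.get("aggregate", {}))
```
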
http://www.cbronline.com/news/big-data/analytics/machine-learning-boosts-hpes-big-data-analytics-portfolio-4992771

How eBay Uses Apache Software to Reach Its Big Data Goals

#eBay ’s ecommerce platform creates a huge amount of data. It has more than 800 million active listings, with 8.8 million new listings each week, 162 million active buyers, and 25 million sellers. “The data is the most important asset that we have,” said Seshu Adunuthula, eBay’s head of analytics infrastructure, during a keynote at #Apache #BigData in Vancouver in May. “We don’t have inventory like other ecommerce platforms; what we’re doing is connecting buyers and sellers, and data plays an integral role in how we go about doing this.” Inside eBay, hordes of hungry product teams want to make use of all the transactional and behavioral data the platform creates to do their jobs better, from surfacing the most interesting items to entice buyers to helping sellers understand the best way to get their stuff sold. Adunuthula said that about five years ago, eBay made the conscious choice to go all-in with open source software to build its big data platform and to contribute back to the projects as the platform took shape. “The idea was that we would not only use the components from Apache, but we would also start contributing back,” he said. “That has been a key theme in eBay for years: how do we contribute back to the open source community.”

Repository, Streams, and Services

Adunuthula said there are three main components to eBay’s data platform: the data repository, data streams, and data services. Starting with data repositories, eBay is making use of #Hadoop and several of the surrounding projects, like Hive and HBase, along with hardware from Teradata, to store the data created by millions of daily transactions on eBay. “A decade ago we used to call them data warehouses; now, for the last five years, because the type and structure of the data are changing, we are calling them data lakes,” Adunuthula said. “Apache Hadoop is a big component of how we’re implementing the data lakes. It essentially is a place where you store your denormalized data, your aggregated data, and historical data.” The data streams are a key portion of the strategy; product teams and analysts desperately want to see data as it comes in so they can pull insights that much quicker. eBay has built connectors to Hadoop, processes the streaming data with Storm and Spark clusters, and accesses it via Kafka. “Today we have deployed 300-400 Kafka brokers,” he said. “LinkedIn probably has the biggest Kafka deployment, but we might get there soon. The amount of data that the product teams are requesting to be available in streams is high. We’ll get a lot of Kafka topics with lots of data available; stream processing is happening on Storm, but Spark 2.0 looks very promising.” For data services, eBay has created its own distributed analytics engine with an SQL interface and multi-dimensional analysis on Hadoop, and made it open source: the Apache Kylin project. “The realization was: now we’ve got this commodity-scale computation platform, and I have MOLAP-style cubes, and they were never operational at scale before,” Adunuthula said. “You could never take a 100TB cube and keep scaling it at the rate at which the data is growing.
“But now with all the components that are available to us: the raw data in Hive, the processing capabilities using MapReduce or Spark, and then storing the cubes in HBase, with a very limited effort we were able to build out these MOLAP cubes. We have more than a dozen MOLAP cubes operational within eBay; the largest are around 100TB, with around 10 billion rows of data in them.” eBay’s latest work is making the Kylin cubes “completely streaming aware,” Adunuthula said. “Instead of taking three hours to do daily refreshes on the data, these cubes refresh every few minutes, or even to the second,” he said. “So there is a lot of interesting work going into Kylin, and we think this will be a valuable way of getting to the data.” The final piece is creating “Notebook” views on all that complex data being processed, using Apache Zeppelin to allow analysts to work together collaboratively and quickly. “The product teams love this a lot: there is always that one analyst among them that knows how to write that best query,” he said. “We can take that query and put it into this workspace so others can use it.”
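
To give a concrete feel for the streaming layer described above, here is a minimal consumer sketch using the open source kafka-python client. The topic name, broker address, and message schema are illustrative assumptions; eBay's real pipelines feed Storm and Spark clusters rather than a single Python loop:

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Subscribe to a hypothetical listings topic and decode JSON payloads.
consumer = KafkaConsumer(
    "new-listings",                      # assumed topic name
    bootstrap_servers=["broker1:9092"],  # assumed broker address
    auto_offset_reset="latest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    listing = message.value
    # A real stream processor would aggregate, enrich, or index here.
    print(listing.get("category"), listing.get("price"))
```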

https://www.linux.com/news/how-ebay-uses-apache-software-reach-its-big-data-goals

Words to go: Microsoft Azure big data services

#Microsoft #Azure big data services are gaining traction as the company refocuses its mission on building an intelligent cloud platform. The Azure platform offers capabilities including information management, storage, machine learning, analytics and cognitive services. Additionally, enterprises can access applications from big data and advanced analytics partners in the Azure Marketplace. But before getting started, use this list of terms to get acquainted with #Microsoft Azure big data services: Azure Data Lake Analytics: Data Lake Analytics is a query service for big data in Microsoft's public cloud. The service allows users to analyze data to gain insights, and it automatically scales resources. According to Microsoft, organizations can use Data Lake Analytics with their existing identity, management, security and warehousing tools. Azure Active Directory is integrated with the service to provide further management of user permissions. Azure Data Lake Analytics has access to Azure SQL Data Warehouse, Power BI and Data Factory, and is part of the Cortana Analytics Suite. The service uses U-SQL, a Microsoft query language derived from SQL and C#.

http://searchcloudcomputing.techtarget.com/feature/Words-to-go-Microsoft-Azure-big-data-services

Teradata ports Aster analytics to Hadoop

With the new version of #Aster analytics, #Teradata has finally done the inevitable. The new release makes the unique analytic functions of Aster available as software-only for #Hadoop and the #Amazon Web Services cloud ( #AWS ). It decouples Aster analytics from the Aster database; until now, you had to buy the Aster database to get the Aster analytics capabilities.

This comes just over five years after Teradata acquired Aster, a specialized columnar analytic database that had over a hundred analytic functions, some of them patented. Specifically, Aster analytics allows SQL developers to work with what they know -- SQL -- to get access to advanced analytic techniques that they would use to explore data. Examples of Aster analytic functions include SQL- #MapReduce computation for complex multi-step problems; SQL-GR for graph analysis, used to decipher the interrelationships of people and/or things; and nPath, which provides path analysis for problems such as analyzing and optimizing how customers navigate online sites.
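
To make the path-analysis idea concrete, here is a toy sketch of the kind of clickstream question nPath answers in a single SQL clause. This is illustrative Python over made-up session data, not Aster syntax:

```python
from collections import Counter

# Hypothetical clickstreams: each session is an ordered list of page events.
sessions = [
    ["home", "search", "item", "cart", "checkout"],
    ["home", "search", "item", "exit"],
    ["home", "item", "cart", "exit"],
]

def paths_before(event: str, depth: int = 2) -> Counter:
    """Count the length-`depth` event paths that immediately precede `event`."""
    counts = Counter()
    for events in sessions:
        for i, e in enumerate(events):
            if e == event and i >= depth:
                counts[tuple(events[i - depth:i])] += 1
    return counts

# Which two-step paths most often lead straight to an abandoned session?
print(paths_before("exit").most_common())
# [(('search', 'item'), 1), (('item', 'cart'), 1)]
```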

The new release acknowledges a reality in the market: the columnar database technology that was leading edge and unique five years ago is no longer so, and that the real value-add of the Aster platform is its SQL-based exploratory analytics.

Aster analytics was originally developed for columnar database table architectures and used to differentiate Aster Data from then-rivals like #Greenplum and #Netezza. Columnar tables are better suited than traditional row-based counterparts for analytic queries, where you care about ranges of values as opposed to individual records. The technology had been pioneered by IQ Systems (later Sybase, and now SAP IQ) a decade earlier; a wave of new providers capitalized on columnar at the time with innovations in data compression and in-memory processing that enabled data warehouses to boost their scalability to the gigabyte range while Hadoop was breaking barriers with terabytes and petabytes.

While providers like Teradata initially had to buy their way into columnar technology, today columnar is an add-on option for most major household-brand relational databases. Columnar has also become the de facto standard for cloud-based data warehousing platforms such as AWS #Redshift or #Snowflake.

In the meantime, Hadoop has become a much more capable platform, and the logical target for utilizing Aster analytics. In the early days of Aster Data, Hadoop was strictly a MapReduce batch analytics platform. Today, Hadoop can run a variety of workloads from batch to interactive and real-time. And the scalability of Hadoop compute and storage makes it a natural target for Aster analytics.
http://www.zdnet.com/article/teradata-ports-aster-analytics-to-hadoop/

Cisco is the new container for ContainerX

The container market just got a new 900-pound gorilla: #Switchzilla has landed in the segment with its acquisition of eighteen-month-old #ContainerX. The acquisition target is a container management specialist whose first beta landed in November 2015, with general availability arriving in June this year. While #Cisco digests its target, ContainerX has switched off product downloads, webinars, and product support. Blogging the buyout, Cisco's Rob Salvagno says the attraction is a technology that gives it “enterprise-class container management” across various target platforms. “The ContainerX team will join Cisco’s Cloud Platform and Services Group led by vice president Kip Compton”, he writes. ContainerX's site says it can manage bare metal, virtual machine, Windows and Linux systems on public or private clouds. The Register would expect Cisco's UCS platforms to get added to the list in short order. Salvagno notes that the company wants to develop its own “comprehensive cloud-native stack” for container users. Switchzilla's new CEO Chuck Robbins has launched the company down the same track as much of the industry: in a world where hardware growth is difficult to sustain, he wants a pivot to software and services. That can hardly be more risky than business-as-usual: it's suffered flat or marginal revenue growth for years, and in its Q3 2016 turned in revenue of US$12 billion, down $100 million from the same quarter in 2015. Things were better in Q4 for the bean-counters, but not for staff in the “legacy” businesses: 5,500 had their summer holiday turned into a job-hunt.

http://www.theregister.co.uk/2016/08/31/cisco_is_the_new_container_for_containerx/

Google Is Winning the Fight for PayPal's Business

#Alphabet ’s Google is close to winning PayPal as a client for its cloud business, potentially beating out #Amazon and #Microsoft, CNBC reported on Tuesday. While Google is the front-runner, the online payments processor is evaluating the other providers and hasn’t made any decision yet, CNBC reported, citing people familiar with the matter. However, #PayPal may not move its technology infrastructure in the fourth quarter, the peak period for online commerce, CNBC said. PayPal has some existing business with Amazon Web Services, according to the CNBC report.

http://fortune.com/2016/08/30/google-paypal-business/

AT&T drone strategy targets down-to-earth applications and sky-high ambitions

When you think of the up-and-coming players in the commercial drone market, you might think of Amazon, or Google … but how about AT&T? “ #AT&T is going to be one of the biggest users of drones in the United States,” Art Pregler, who heads AT&T’s drone program and serves as director of national mobility systems, told GeekWire in an interview.

That may sound like a bold statement – but Pregler is just reinforcing what John Donovan, AT&T’s chief strategy officer and president of technology and operations, said last month at the company’s Shape conference in San Francisco.

Long before #Amazon gets its drone delivery fleet in operation in the United States, AT&T will be deploying fleets of robo-fliers across the nation, thanks to regulatory changes that took effect this week.

Because of those changes, AT&T is now able to use unmanned aircraft systems to inspect cellular towers and check cellphone reception in urban areas – including the University of Washington’s Husky Stadium, where the procedure is being demonstrated this week.

The drone inspection routine already has been tested in rural areas, where AT&T workers face less likelihood of coming in contact with the uninvolved public. Repair crews can send a drone flying up to the top of the tower and have it beam back live video to an operations center for review.

The idea is to give the crews a better idea what they’ll be dealing with before they climb up to make repairs. “The results have been very favorable,” said Pregler, who’s based in the Seattle area.

http://www.geekwire.com/2016/att-drone-strategy/

Microsoft to sell MSN China to XiChuang Technology

Three months after saying it would shutter its #MSNChina web portal, #Microsoft Corp. has decided instead to sell the business to a Chinese entity run by a former general manager at Microsoft Online. Microsoft said Tuesday that it entered into a “management buyout agreement” on Aug. 10 with #XiChuangTechnology (Beijing) Co. under which XiChuang would acquire Microsoft Online Network Communications Technology (Shanghai) Co. The companies didn’t disclose the terms of the deal but said that Anderson Liu, a former general manager of Microsoft Online, is the chief executive of XiChuang.

http://www.marketwatch.com/story/microsoft-to-sell-msn-china-to-xichuang-technology-2016-08-30

Verizon's Master Plan to Take On Google and Facebook

The U.S. wireless market is at a mature stage, smartphone sales growth has slowed, and the probability of finding someone without a wireless plan is slim. Though wireless carriers will still be able to increase their revenues as data consumption grows, without subscriber growth their user base numbers will be stable at best.
The country's two largest wireless carriers, #Verizon and #AT&T (NYSE:T), have gone in two different directions, more so because of Verizon's push to create a content empire. But why is Verizon going down this road?

Why Verizon needs this new line of business

The first clue to that is that online advertising is expected to grow at double-digit rates for the next four years. The second is that cable is also approaching a tipping point because of heavy competition from video streaming sites like #Netflix (NFLX), #YouTubeRed, #Amazon (AMZN) Prime Video, #Hulu, #HBO Now and so on.

So Verizon found itself facing two mature service industries that weren't going to yield the kind of growth it was looking for, and online advertising became the desirable third option.

Therefore, Verizon went on a buying spree and grabbed up as many of the top content sites as it could. With all these brands under its belt Verizon is now ready to take on the monsters of online advertising - #Google and #Facebook (FB).

Five years ago the online advertising market was at the mercy of Google. Then Facebook came along asking for equal rights, and it has grown its advertising business to nearly $17 billion as of last year. Now Verizon is looking to become the third player in the market. Wherever the money goes, businesses follow, and Verizon did.

http://finance.yahoo.com/news/verizons-master-plan-google-facebook-201255189.html

Samsung launches first Exynos chip with all radios built in

#Samsung has revealed a new chip that could have a ripple effect on its high-end smartphones, and that will make #IoT devices and smartphones for developing markets faster, slimmer and cheaper. The quad-core 7570 is the first #Exynos chip to have all wireless tech, including Cat.4 LTE, WiFi, Bluetooth, FM and GNSS (GPS), built into a single chip. It offers 70 percent more performance and uses 30 percent less battery power than its predecessor, with everything squeezed into a 20 percent smaller package. The chip can also handle signal processing for up to 8-megapixel front and 13-megapixel rear cameras, Full HD video, and a WXGA screen (1,366 x 768 resolution). Samsung was able to pack all that in by using 14-nanometer manufacturing for the first time on a budget chip. So far, that's been reserved for its higher-end processors, including the top-of-the-line Exynos 8890.
https://www.engadget.com/2016/08/30/samsung-launches-first-exynos-chip-with-all-radios-built-in/

Tuesday, August 30, 2016

Historic Dell and EMC Transaction Set to Close on September 7, 2016

Today’s announcement follows regulatory approval of the #Dell and EMC transaction by China’s Ministry of Commerce (MOFCOM), which has granted clearance for the companies’ proposed combination. MOFCOM approval was the final regulatory condition to closing the transaction. EMC shareholders approved the transaction on July 19, with approximately 98 percent of voting #EMC shareholders casting their votes in favor of the merger, representing approximately 74 percent of EMC's outstanding common stock. “This is an historic moment for both Dell and EMC. Combined, we will be exceptionally well-positioned for growth in the most strategic areas of next generation IT including digital transformation, software-defined data center, converged infrastructure, hybrid cloud, mobile and security,” said Michael Dell, chairman and CEO of Dell Technologies. “Our investments in R&D and innovation, along with our 140,000 team members around the world, will give us unmatched scale, strength and flexibility, deepening our relationships with customers of all sizes.” “I am proud of everything we’ve built at EMC – from humble beginnings as a Boston-based startup to a global, world-class technology company with an unyielding dedication to our customers,” said Joe Tucci, chairman and chief executive officer of EMC. “The combination of Dell and EMC creates a new powerhouse in the industry – providing the essential technology for the next era in IT.” At closing, EMC shareholders will receive $24.05 per share in cash in addition to a newly issued tracking stock linked to a portion of EMC’s economic interest in the VMware business. Based on the estimated number of EMC shares outstanding at the close of the transaction, EMC shareholders are expected to receive approximately 0.111 shares of new tracking stock for each EMC share. Upon close of the transaction, EMC shares under the ticker symbol “EMC” will be suspended from trading on the New York Stock Exchange. Shares of the tracking stock, trading under the ticker symbol “DVMT,” are expected to begin trading on the New York Stock Exchange on Wednesday, September 7.

http://www.businesswire.com/news/home/20160830006294/en/Historic-Dell-EMC-Transaction-Set-Close-September

Dell Unveils New Converged Infrastructure, Integrates VMware VSAN

#Dell Tuesday unveiled a new converged infrastructure strategy with the introduction of a platform designed to provide flexible options to meet the widest range of customer requirements. That platform, the Dell Validated System for Virtualization, provides a full range of customization options targeting a wide range of on-premise and cloud applications, said Glenn Keels, executive director of product management for engineered systems, high performance computing, and cloud at Round Rock, Texas-based Dell. Nearly six years of experience in converged infrastructure solutions have shown Dell there is a huge need in the middle of the market, between hyper-converged infrastructure and reference architectures, where channel partners can leverage their existing integration, deployment and services capabilities, Keels told CRN.
"In between is a missing middle where customers don't have to compromise their existing policies," he said. "There's a need for form-factor-independent platforms that are easy to deploy, with some options, but still driven by automation."

Dell's answer is the Dell Validated System for Virtualization, a series of converged infrastructure appliances built around the company's PowerEdge servers or FX2 server chassis, SC-series storage appliances or raw hard drives and SSDs, and Dell or #Brocade networking.

Management, monitoring, orchestration and data protection are accomplished by various Dell tools and applications.

The two primary storage options for the Dell Validated System for Virtualization are either iSCSI or Fibre Channel Dell SC-series arrays, formerly known as Compellent, and VMware's VSAN software-defined storage technology, Keels said. The first versions will feature VSAN, he said.

At this week's #VMworld conference, Dell is demonstrating a four-node solution based on VSAN 6.2, and showing how it can be downsized to a three-node solution and configured for SQL databases.

Dell appears to be looking at what comes beyond converged and hyper-converged infrastructure, said Michael Tanenhaus, principal at Mavenspire, an Annapolis, Md.-based solution provider and Dell partner.

 "Dell is starting to pull together the next generation of pre-validated, pre-configured systems," he said. "It's starting small. But it will grow in the future."

http://m.crn.com/news/storage/300081919/dell-unveils-new-converged-infrastructure-integrates-vmware-vsan.htm

Dell poised to seal $63.4bn EMC purchase

The largest-ever technology industry acquisition is set to close next week, following the announcement on Tuesday that Chinese regulators have approved Dell’s $63.4bn cash and stock purchase of computer storage company #EMC.

However, news of the deal’s imminent completion failed to resolve lingering Wall Street scepticism about some aspects of the complex, highly-leveraged transaction that has persisted since it was announced in October.

EMC’s shares were still changing hands after the news at an 11 per cent discount to the value implied by #Dell ’s cash and stock offer, apparently reflecting doubts about the value of an unusual type of equity that is being issued to help finance it. The so-called tracking stock is designed to reflect the performance of #VMware, the datacentre technology company majority owned by EMC, though investors will have no direct ownership stake or say in how it is run.

https://www.ft.com/content/942a50e8-6ef7-11e6-a0c9-1365ce54b926

Dell Unveils New Systems and Solutions to Accelerate Adoption of Converged Infrastructure

- #Dell Validated System for Virtualization delivers on the promise of service-defined infrastructure, resulting in the industry’s most flexible converged system to date
- Dell Solutions for #VMware Virtual SAN ( #VSAN ) Ready Nodes for desktop and application virtualization deliver a simple, elastic and efficient hyperconverged solution optimized for virtual machines
- Support for the VMware Blast Extreme remote protocol on Dell #Wyse Windows-based thin clients brings increased flexibility and manageability
- #DellFinancialServices named preferred financing partner for VMware

http://www.businesswire.com/news/home/20160830005256/en/Dell-Unveils-Systems-Solutions-Accelerate-Adoption-Converged

Michael Dell Plans to Keep On Making Deals After EMC Acquisition

Michael #Dell, who is preparing to complete the largest merger in technology history after taking his own company private three years ago, says there’s even more deal-making in his future. "We’ll continue to do acquisitions," Dell said in an interview Monday in Las Vegas at #VMworld, an annual conference put on by #VMware Inc., which is majority-owned by #EMC and will become part of the new #DellTechnologies when Dell and EMC complete their $67 billion merger later this year. "I also think that alliances will continue to be super important to us -- alliances and partnerships, both for Dell Technologies and for VMware." Dell, chief executive officer and founder of his namesake personal-computer company, agreed to buy EMC in October 2015, aiming to broaden both companies’ product lineups -- from servers to storage devices and key software services -- amid intensifying competition. Hardware companies are facing pressure from cloud-based products that enable customers to rent computing power from Amazon.com Inc. or Google, rather than buying and maintaining their own systems from the likes of Dell and EMC.

Dell has been making progress with key product lines. Its server line grabbed 18.3 percent of the global market in the first quarter, up from 18 percent in the year-earlier period, according to IDC. Still, the overall market for those powerful machines shrank by 3.6 percent. In PCs, Dell desktop and laptop computers accounted for 16 percent of shipments in the second quarter, up from 14.6 percent. The total PC market declined, though at a slower pace. The external storage business, including products Dell will gain with the EMC acquisition, is under pressure, shrinking by 3.7 percent industrywide in the first quarter, according to IDC.

"The market will do what it’s going to do," Dell said. "What we see is a tremendous opportunity here, bringing together servers, storage, virtualization, software-defined, converged, hyper-converged.”

VMware, which specializes in virtualization software that makes data centers more efficient, has fared better of late, reporting second-quarter sales that topped estimates, while EMC met analysts’ projections for unchanged quarterly revenue. Michael Dell and VMware CEO Pat Gelsinger both said they expect VMware to continue to devote a big portion of its resources to research and development. At its event in Las Vegas, VMware touted new products that help customers manage applications hosted in the cloud.

http://www.bloomberg.com/news/articles/2016-08-29/michael-dell-plans-to-keep-on-making-deals-after-emc-acquisition

Nutanix, the company driving VMware crazy, just made a brilliant acquisition

On Monday, #Nutanix officially confirmed that it bought a startup called #PernixData. News of the deal had leaked a few weeks ago when PernixData's outgoing CTO Frank Denneman told The Register's Chris Mellor that the sale was already a done deal. This is a brilliant move by Nutanix and a curious one by PernixData's leadership. It's brilliant for Nutanix for a bunch of reasons. For one, Nutanix has nabbed a key person familiar with the tech of its biggest rival.

Nutanix helped usher in a new computer storage market known as "hyperconverged," which combines storage and compute power with the special software that manages virtual machines, known as a hypervisor.

VMware is the biggest maker of hypervisors - that's its flagship, bread-and-butter product - although #Microsoft also has its own, as do #RedHat, #Citrix, and others.

Nutanix's products work with VMware's hypervisor, of course, but the company has also built its own, known as #Acropolis, to take on VMware head-on.

Given Nutanix's enormous popularity with businesses, this was a tricky move on Nutanix's part to unseat the hypervisor king, so much so that VMware tried to counter-punch by releasing its own hyperconverged storage product, known as #EVO:RAIL. It didn't sell well, and The Reg reported in February that VMware had quietly shelved EVO:RAIL in favor of different storage software.

One of the co-founders of PernixData, Satyam Vaghani, was an early VMware employee who built some of VMware's key storage technology and then ran VMware's storage business. (Storage giant EMC grew so nervous about these projects that it acquired a controlling interest in 2003 for about $635 million.)

So bringing Vaghani into the Nutanix fold is a big score.

Meanwhile, PernixData's other founder, Poojan Kumar, was a key engineer who developed Oracle's compute and storage product, #Exadata, which has become a multi-billion-dollar product for #Oracle.

http://www.businessinsider.com/nutanix-just-made-a-brilliant-acquisition-2016-8

Citrix: Want Access to Skype for Business on Dell’s ThinOS? You Need Citrix

Why does #Dell Thin OS Support the #Skype for Business Solution virtualized by #Citrix … instead of Others? ThinOS for Dell Wyse thin and zero clients now supports the only #Microsoft-endorsed solution in the market for delivering Skype for Business in virtualized environments, the new “v2” HDX RealTime Optimization Pack of Citrix XenApp and XenDesktop. With Microsoft Skype for Business now seen as a strategic application by many enterprise customers, the new ThinOS 8.3 release brings to market a valuable new option for your users who demand high quality real-time voice and video. The Receiver for ThinOS is developed and maintained by Dell under a technology license from Citrix. This arrangement enables Dell to port features that we introduce in our Citrix Receiver to this proprietary OS. And it empowers the team at Dell to develop differentiating capabilities, as they have done many times over the years.

http://www.dabcc.com/citrix-want-access-to-skype-for-business-on-dells-thinos-you-need-citrix/

Broadcom Is Powering Ahead

#Broadcom (AVGO) reports fiscal 2016 third-quarter earnings on Thursday. I think the company will beat the consensus estimate and raise guidance. In the last year, shares of Broadcom are up 38%. Aided by the merger with #Avago, Broadcom is poised to increase sales 90% in fiscal 2016 and 23% next year. Growth is coming from a broad array of industries, including wireline, wireless and the industrial market. The company took on $14 billion in debt to complete the merger between Avago and Broadcom, but it should be able to realize some cost savings from the merger and drive double-digit earnings growth. Management said it plans to cut $750 million in operating costs over the next 18 months. Toward that end, earlier this month Broadcom was able to chop its annual interest expense by $100 million by refinancing some of its older high-rate debt. That savings should flow down to earnings in the coming quarters.

Broadcom competes in four market segments that should see solid growth this year and next. The largest segment, wireline, represents approximately half of the company's revenue. The wireline business, which is mostly fiber optic components and semiconductor interface chips for enterprise data centers and telecom equipment, is expected to be a $6.6 billion business this year. The wired business looks like it can grow by mid-single digits over the next two years.

The enterprise storage business is about 15% of revenue and is another mid-single-digit grower. Storage and wireline are solid, profitable businesses that should continue their current trajectory over the next two years.

https://www.thestreet.com/story/13687247/2/broadcom-is-powering-ahead.html

Tegile Launches IntelliFlash Cloud Platform at VMworld 2016

#VMworld, Booth #2057 – #Tegile Systems, the leading provider of multi-tiered flash storage arrays that enable enterprises to transition to an all-flash data center, today announced its #IntelliFlash Cloud Platform (CP) at VMworld 2016. This rack scale all-flash platform is designed for Fortune 1000 enterprise companies looking to achieve both the cost benefits of cloud architectures and the reliability and control of on-premises storage solutions. IntelliFlash CP touts scale-out functionality through multi-controller clusters and Non-Volatile Memory Express ( #NVMe ) to equip enterprises with the optimal storage performance to cost-effectively build a private cloud.
“High-density flash combined with a very fast storage access layer will be paramount to data center transformation,” said Tim Stammers, senior analyst at 451 Research. “We expect architectures like IntelliFlash CP to deliver a strong combination of performance, capacity and economics.”

IntelliFlash CP will stitch together multiple controllers to provide the ability to scale out horizontally and vertically across protocols. Controllers will be shared across a global namespace and have shared access to multiple grades of media that data is intelligently placed upon. As a result, enterprises will be able to build massively scalable infrastructure with up to tens of petabytes of flash storage. Enterprises will experience the resulting performance gains and benefit from pricing that is better than public cloud offerings.

IntelliFlash CP lays the foundation for an integrated system that will use NVMe to massively reduce network latency to meet the enterprise need for real-time access to data. By leveraging NVMe to connect SSDs to the controller plane, IntelliFlash CP will be able to provide memory-to-memory transfers nearly as fast as accessing local file storage at sub-millisecond latencies. This is unprecedented latency in the storage industry.

http://finance.yahoo.com/news/tegile-launches-intelliflash-cloud-platform-150000476.html

Can IBM Win The Storage War Against Dell-EMC?

Summary: #IBM has announced its new #Storwize all-flash storage portfolio. The move is aimed at grabbing market share from #Dell / #EMC post-merger, and the strategy can translate into long-term success for IBM given its software expertise. IBM (NYSE:IBM) has launched its new Storwize all-flash storage portfolio targeted at enterprises that run cloud or cognitive applications and workloads. The company also announced its "Flash In" storage migration program to help enterprises migrate to its all-flash storage platform from the platforms of privately held Dell and EMC (NYSE:EMC), the companies that are set to merge this fall. As a result of this mega-merger, Dell will be able to focus on the enterprise storage marketplace, an area where EMC enjoys the leadership position. IBM is playing an opportunistic game here. Many people, including storage customers, anticipate that Dell and EMC won't experience a smooth transition toward a unified company post-merger. IBM's plan is to grab a part of Dell-EMC's market share via its "Flash In" migration program. I believe this is a rare opportunity for IBM to boost its storage market share, and Big Blue is heading in the right direction by targeting cloud and cognitive computing customers.

http://seekingalpha.com/article/4002896-can-ibm-win-storage-war-dell-emc

How one company is bringing Storage-as-a-Service to enterprise | #VMworld

Many of the #VMware, Inc. customers present at this year’s conference are using a fairly new offering in enterprise storage: #StorageAsAService. VMware itself has been attempting to get its own version of this type of service off the ground, but for now it is being handled through third-party providers, such as #ZadaraStorage, Inc. Nelson Nahum, cofounder and CEO of Zadara, sat down with John Furrier (@furrier) and John Troyer (@jtroyer), host and guest host of theCUBE, from the SiliconANGLE Media team, during VMworld in Las Vegas, to discuss the company’s Storage-as-a-Service business model and why it is relevant in today’s enterprise market.

The Storage-as-a-Service model

Zadara is a true pay-as-you-go enterprise storage service provider. The company offers a choice of public cloud, through either #Amazon or #Azure, or private cloud, where it ships the customer all the necessary equipment at no charge and manages it remotely. Once the customer is set up on Zadara’s system, they can then replicate and move data easily between public and private cloud, according to Nahum. “When we say enterprise storage, it’s primary and secondary storage with the same capabilities that you will find in traditional storage arrays, but because it’s coming through the cloud, it can be provisioned as a service in one minute,” explained Nahum.

The benefits of using Zadara

Many existing VMware enterprise customers are used to more traditional storage solutions, such as SAN or #VSAN and direct-attached storage. Furthermore, these companies are used to allocating a great deal of overhead for the more traditional methods, not just in hardware but in staffing teams to manage it. With Zadara’s Storage-as-a-Service model, any hardware required for private cloud is provided and the system is managed remotely, so there is no need to provide resources for either. Customers can spend their capital and manpower on more worthwhile projects, Nahum explained. “We manage the system for the customer, even if it’s on premise,” said Nahum. “This is part of the advantages of using Zadara, because they can free up their time to work on more strategic things.”
http://siliconangle.com/blog/2016/08/29/how-one-company-is-bringing-storage-as-a-service-to-enterprise-vmworld/

Supermicro® All-Flash Server and Storage Systems Deliver Virtualization at the Speed of Flash at VMworld 2016

LAS VEGAS, Aug. 29, 2016 /PRNewswire/ -- #Super Micro Computer, Inc. (SMCI), a global leader in compute, storage, and networking technologies and green computing, is introducing its latest virtualization-optimized server and storage systems at #VMworld 2016 in booth 2170, Mandalay Bay Hotel and Convention Center. The new all- #NVMe flash #VSAN solution is based on the 1U Ultra SuperServer® 1028U-TN10RT+, which supports ten NVMe SSDs with the industry's first-to-market true hot-swap capability for enterprise mission-critical RAS applications. These Supermicro VSAN Ready Nodes are ideal for customers looking to create a simple-to-deploy, blazing-fast, highly available hyper-converged cluster targeted at high-performance database and big data analytics applications that demand a high degree of compatibility, reliability and serviceability. For even higher-density all-NVMe flash systems, Supermicro offers a 2U SuperServer that supports 24 hot-swap NVMe SSDs and another that supports 48 hot-swap NVMe SSDs. The company has also introduced a new high-availability 2U dual-node system with support for 20 dual-port NVMe SSDs.
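The density claim is easy to quantify per rack unit. A quick check in Python, using only the drive counts quoted in the release (the dictionary labels are shorthand descriptions, not official Supermicro model names except the first):

```python
# NVMe SSD density per rack unit (U) for the systems quoted in the release.
systems = {
    "1U Ultra SuperServer 1028U-TN10RT+": (10, 1),
    "2U SuperServer (24-drive)": (24, 2),
    "2U SuperServer (48-drive)": (48, 2),
}
for name, (drives, rack_units) in systems.items():
    print(f"{name}: {drives / rack_units:.0f} NVMe SSDs per U")
```

On those figures the 48-drive 2U system works out to 24 drives per U, 2.4x the per-U density of the 1U Ultra system.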

http://finance.yahoo.com/news/supermicro-flash-server-storage-systems-130500742.html

European Commission press release - State aid: Ireland gave illegal tax benefits to Apple worth up to €13 billion

The #EuropeanCommission has concluded that Ireland granted undue tax benefits of up to €13 billion to #Apple. This is illegal under #EU state aid rules, because it allowed Apple to pay substantially less tax than other businesses. Ireland must now recover the illegal aid. Commissioner Margrethe Vestager, in charge of competition policy, said: "Member States cannot give tax benefits to selected companies – this is illegal under EU state aid rules. The Commission's investigation concluded that Ireland granted illegal tax benefits to Apple, which enabled it to pay substantially less tax than other businesses over many years. In fact, this selective treatment allowed Apple to pay an effective corporate tax rate of 1 per cent on its European profits in 2003 down to 0.005 per cent in 2014." Following an in-depth state aid investigation launched in June 2014, the European Commission has concluded that two tax rulings issued by Ireland to Apple have substantially and artificially lowered the tax paid by Apple in Ireland since 1991. The rulings endorsed a way to establish the taxable profits for two Irish incorporated companies of the Apple group (Apple Sales International and Apple Operations Europe), which did not correspond to economic reality: almost all sales profits recorded by the two companies were internally attributed to a "head office". The Commission's assessment showed that these "head offices" existed only on paper and could not have generated such profits. These profits allocated to the "head offices" were not subject to tax in any country under specific provisions of the Irish tax law, which are no longer in force. As a result of the allocation method endorsed in the tax rulings, Apple only paid an effective corporate tax rate that declined from 1% in 2003 to 0.005% in 2014 on the profits of Apple Sales International. This selective tax treatment of Apple in Ireland is illegal under EU state aid rules, because it gives Apple a significant advantage over other businesses that are subject to the same national taxation rules. The Commission can order recovery of illegal state aid for a ten-year period preceding the Commission's first request for information in 2013. Ireland must now recover the unpaid taxes in Ireland from Apple for the years 2003 to 2014 of up to €13 billion, plus interest. In fact, the tax treatment in Ireland enabled Apple to avoid taxation on almost all profits generated by sales of Apple products in the entire EU Single Market. This is due to Apple's decision to record all sales in Ireland rather than in the countries where the products were sold. This structure is however outside the remit of EU state aid control. If other countries were to require Apple to pay more tax on profits of the two companies over the same period under their national taxation rules, this would reduce the amount to be recovered by Ireland.

Apple's tax structure in Europe

Apple Sales International and Apple Operations Europe are two Irish incorporated companies that are fully owned by the Apple group, ultimately controlled by the US parent, Apple Inc. They hold the rights to use Apple's intellectual property to sell and manufacture Apple products outside North and South America under a so-called 'cost-sharing agreement' with Apple Inc. Under this agreement, Apple Sales International and Apple Operations Europe make yearly payments to Apple in the US to fund research and development efforts conducted on behalf of the Irish companies in the US.
These payments amounted to about US$ 2 billion in 2011 and significantly increased in 2014. These expenses, mainly borne by Apple Sales International, contributed to fund more than half of all research efforts by the Apple group in the US to develop its intellectual property worldwide. These expenses are deducted from the profits recorded by Apple Sales International and Apple Operations Europe in Ireland each year, in line with applicable rules. The taxable profits of Apple Sales International and Apple Operations Europe in Ireland are determined by a tax ruling granted by Ireland in 1991, which in 2007 was replaced by a similar second tax ruling. This tax ruling was terminated when Apple Sales International and Apple Operations Europe changed their structures in 2015.

Apple Sales International

Apple Sales International is responsible for buying Apple products from equipment manufacturers around the world and selling these products in Europe (as well as in the Middle East, Africa and India). Apple set up its sales operations in Europe in such a way that customers were contractually buying products from Apple Sales International in Ireland rather than from the shops that physically sold the products to customers. In this way Apple recorded all sales, and the profits stemming from these sales, directly in Ireland. The two tax rulings issued by Ireland concerned the internal allocation of these profits within Apple Sales International (rather than the wider set-up of Apple's sales operations in Europe). Specifically, they endorsed a split of the profits for tax purposes in Ireland: under the agreed method, most profits were internally allocated away from Ireland to a "head office" within Apple Sales International. This "head office" was not based in any country and did not have any employees or own premises. Its activities consisted solely of occasional board meetings. Only a fraction of the profits of Apple Sales International were allocated to its Irish branch and subject to tax in Ireland. The remaining vast majority of profits were allocated to the "head office", where they remained untaxed. Therefore, only a small percentage of Apple Sales International's profits were taxed in Ireland, and the rest was taxed nowhere. In 2011, for example (according to figures released at US Senate public hearings), Apple Sales International recorded profits of US$ 22 billion (ca. €16 billion) but under the terms of the tax ruling only around €50 million were considered taxable in Ireland, leaving €15.95 billion of profits untaxed. As a result, Apple Sales International paid less than €10 million of corporate tax in Ireland in 2011 – an effective tax rate of about 0.05% on its overall annual profits. In subsequent years, Apple Sales International's recorded profits continued to increase, but the profits considered taxable in Ireland under the terms of the tax ruling did not, so the effective tax rate decreased further, to only 0.005% in 2014.

Apple Operations Europe

On the basis of the same two tax rulings from 1991 and 2007, Apple Operations Europe benefitted from a similar tax arrangement over the same period of time. The company was responsible for manufacturing certain lines of computers for the Apple group. The majority of the profits of this company were also allocated internally to its "head office" and not taxed anywhere.

Commission assessment

Tax rulings as such are perfectly legal.
They are comfort letters issued by tax authorities to give a company clarity on how its corporate tax will be calculated or on the use of special tax provisions. The role of EU state aid control is to ensure Member States do not give selected companies a better tax treatment than others, via tax rulings or otherwise. More specifically, profits must be allocated between companies in a corporate group, and between different parts of the same company, in a way that reflects economic reality. This means that the allocation should be in line with arrangements that take place under commercial conditions between independent businesses (the so-called "arm's length principle"). In particular, the Commission's state aid investigation concerned two consecutive tax rulings issued by Ireland, which endorsed a method to internally allocate profits within Apple Sales International and Apple Operations Europe, two Irish incorporated companies. It assessed whether this endorsed method to calculate the taxable profits of each company in Ireland gave Apple an undue advantage that is illegal under EU state aid rules. The Commission's investigation has shown that the tax rulings issued by Ireland endorsed an artificial internal allocation of profits within Apple Sales International and Apple Operations Europe, which has no factual or economic justification. As a result of the tax rulings, most sales profits of Apple Sales International were allocated to its "head office" when this "head office" had no operating capacity to handle and manage the distribution business, or any other substantive business for that matter. Only the Irish branch of Apple Sales International had the capacity to generate any income from trading, i.e. from the distribution of Apple products. Therefore, the sales profits of Apple Sales International should have been recorded with the Irish branch and taxed there. The "head office" did not have any employees or own premises. The only activities that can be associated with the "head offices" are limited decisions taken by their directors (many of whom were at the same time working full-time as executives for Apple Inc.) on the distribution of dividends, administrative arrangements and cash management. These activities generated profits in terms of interest that, based on the Commission's assessment, are the only profits which can be attributed to the "head offices". Similarly, only the Irish branch of Apple Operations Europe had the capacity to generate any income from trading, i.e. from the production of certain lines of computers for the Apple group. Therefore, sales profits of Apple Operations Europe should have been recorded with the Irish branch and taxed there. On this basis, the Commission concluded that the tax rulings issued by Ireland endorsed an artificial allocation of Apple Sales International and Apple Operations Europe's sales profits to their "head offices", where they were not taxed. As a result, the tax rulings enabled Apple to pay substantially less tax than other companies, which is illegal under EU state aid rules. This decision does not call into question Ireland's general tax system or its corporate tax rate. Furthermore, Apple's tax structure in Europe as such, and whether profits could have been recorded in the countries where the sales effectively took place, are not issues covered by EU state aid rules. If profits were recorded in other countries, this could, however, affect the amount of recovery by Ireland.
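The effective-rate arithmetic can be reproduced directly from the 2011 figures the press release quotes (a minimal Python sketch, using only the US Senate hearing numbers cited above):

```python
# Effective corporate tax rate implied by the 2011 figures quoted above.
profits_eur = 16_000_000_000   # ~ €16 billion recorded by Apple Sales International
tax_paid_eur = 10_000_000      # "less than €10 million" of Irish corporate tax

effective_rate = tax_paid_eur / profits_eur
print(f"effective rate: {effective_rate:.2%}")  # 0.06% -- an upper bound, since
# the tax figure is itself an upper bound; consistent with the ~0.05% stated
```

The same method applied to 2014, with recorded profits still growing but taxable Irish profits essentially flat, yields the 0.005% figure in the text.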

http://europa.eu/rapid/press-release_IP-16-2923_en.htm

AMD: Zen Likely Not Good Enough

The CPU Market

When it comes to the CPU market, a great deal of confidence has been placed in #AMD 's new " #Zen " core. This confidence was recently boosted by AMD showing an 8-core Zen chip supposedly beating a top-of-the-line Intel 6900K 8-core chip in a Blender rendering job. However, I believe there's a very large reason to doubt that Zen-based chips will live up to the expectations put on them. These expectations are more or less as follows:

Zen will gain ~40% in IPC (instructions per clock) compared to previous "Excavator"-based AMD chips.
Zen will enable AMD to make chips competitive with directly spec'd (number of cores and frequency) Intel chips.
Zen will allow for much better power efficiency.

Put these together and what comes out is that AMD would be positioned to bring the battle to Intel (NASDAQ:INTC). If this were true, Intel would take a beating (as I've said in my previous article). However, in this article I will put forward a reason why I believe the Zen core-based chips will surprisingly disappoint (based on existing information).

The Reason - The Market Is Assuming What It Can't Assume

When we see comparisons between the existing AMD chips and the Zen-based ones, they usually go like this: OMG, look how much faster Zen goes! For instance, in the Ashes of the Singularity benchmark, the 8-core Zen chip attains a 58 "CPU framerate" whereas the AMD FX8350, also 8-core, attains just 42. That is already a 38% gain for Zen versus the older 8-core AMD CPU. This is followed by "and this is just an engineering sample": the Zen CPU is running at 2.8 GHz base/3.2 GHz boost, while the FX8350 is a 4.0/4.2 GHz chip, so the performance advantage would supposedly be even greater once Zen's frequencies are ramped up.
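The implied arithmetic is easy to check. A minimal Python sketch using only the frame rates and clocks quoted above (treating each chip's boost clock as the benchmark frequency is an assumption; the sustained clocks were not disclosed):

```python
# Clock-normalized check of the Ashes of the Singularity numbers quoted above.
# Assumption: each chip ran near its boost clock during the benchmark run.
zen_fps, zen_ghz = 58, 3.2   # Zen engineering sample (8 cores)
fx_fps, fx_ghz = 42, 4.2     # FX8350 (8 cores)

raw_gain = zen_fps / fx_fps - 1
per_clock_gain = (zen_fps / zen_ghz) / (fx_fps / fx_ghz) - 1

print(f"raw gain:     {raw_gain:.0%}")        # ~38%, as quoted
print(f"per-GHz gain: {per_clock_gain:.0%}")  # ~81%
```

On those figures the per-clock gain works out to roughly 81%, about double the ~40% IPC claim, which shows how heavily the "even greater once frequencies ramp" extrapolation depends on which clocks the engineering sample actually sustained.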

http://seekingalpha.com/article/4002864-amd-zen-likely-good-enough

Huawei Announces Availability of FusionServer with VMware Virtual SAN Ready Node

#Huawei today announced at #VMworld ® 2016 the availability of 12 VMware Virtual SAN™ Ready Node configurations on Huawei #FusionServer. Huawei's FusionServer RH2288H V3 and RH5885H V3 Virtual SAN ( #VSAN ) Ready Nodes are validated server configurations for Virtual SAN deployment. VMware Virtual SAN is a hyper-converged software-defined storage solution that can be deployed on various x86 servers. It abstracts and pools server-side flash and disk into a shared virtual datastore with policy-based management and application-centric data services. "Enterprise customers who want a converged platform with Virtual SAN Ready Node for cloud computing and key services with databases can turn to Huawei's FusionServer RH2288H V3 and RH5885H V3," said Qiu Long, President of Huawei IT Server Product Line. "Huawei's participation in the VMware Virtual SAN Ready Node program accelerates our customers' time-to-value."

Top reasons IT departments and enterprises choose Huawei FusionServer for Virtual SAN implementations include:

Choice of hyper-converged infrastructure solutions
Choice of Gen1 (6G) or Gen2 (12G) profiles
Choice of small, medium or large workloads
Virtual desktop infrastructure (VDI) linked-clone and full-clone configurations
Option for all-flash configurations
Support for Intel Xeon v3 processors

The VMware Virtual SAN certification is part of Huawei's overall strategy to provide enterprises with Huawei's highly efficient converged infrastructure, software-defined storage (SDS) capabilities and a pathway to hyper-converged infrastructure for virtualization and cloud applications. FusionServer RH2288H V3 is a 2U, two-socket rack server that delivers high performance, large storage capacity and scalability, and is ideal for Internet, Big Data, cloud computing, virtualization, telecom services and mission-critical applications. FusionServer RH5885H V3 is a 4U, four-socket rack server that supports Intel® Xeon® E7 v3 or v4 series processors with up to 96 cores and 6 TB of memory, boosting performance. It is ideal for mission-critical applications such as databases, Enterprise Resource Planning (ERP), Business Intelligence (BI), Big Data and virtualization. Huawei, which is participating as a partner and Silver sponsor at VMworld 2016, will have product experts on hand in Booth 961 to provide more information on Huawei's solutions for Virtual SAN Ready Node. View all of Huawei's VMware Ready Node solutions on VMware's compatibility guide website: http://www.vmware.com/resources/compatibility/search.php.
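Since policy-based management drives the sizing of these Ready Nodes, the underlying capacity math is worth spelling out: with Virtual SAN's default RAID-1 mirroring, a failures-to-tolerate (FTT) value of n keeps n+1 copies of each object, and VMware's design guidance for 6.x-era clusters suggests leaving roughly 30% slack space. A minimal Python sketch under those assumptions (the node count and per-node capacity are illustrative, not Huawei-published figures):

```python
# Rough usable-capacity estimate for a RAID-1 (mirroring) Virtual SAN cluster.
# Inputs below are illustrative, not Huawei-published figures.
def vsan_usable_tb(raw_per_node_tb: float, nodes: int, ftt: int = 1,
                   slack: float = 0.30) -> float:
    """FTT=n stores n+1 replicas of each object; 'slack' reserves the
    free-space headroom commonly recommended in VMware design guidance."""
    raw_total_tb = raw_per_node_tb * nodes
    return raw_total_tb * (1 - slack) / (ftt + 1)

# Example: four Ready Nodes with 10 TB of raw flash each, tolerating one
# host failure (FTT=1 requires at least three hosts, so four is comfortable):
print(f"{vsan_usable_tb(10, 4):.1f} TB usable")  # 14.0 TB
```

Deduplication on all-flash configurations would raise the usable figure by whatever reduction ratio the workload achieves, which is why the all-flash option features in the list above.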

http://www.prnewswire.com/news-releases/huawei-announces-availability-of-fusionserver-with-vmware-virtual-san-ready-node-300319134.html