
Friday, September 30, 2016

Meet the New Authoritarian Masters of the Internet

President Barack Obama’s drive to hand off control of Internet domains to a foreign multi-national operation will give some very unpleasant regimes equal say over the future of online speech and commerce.

In fact, they are likely to have much more influence than America, because they will collectively push hard for a more tightly controlled Internet, and they are known for aggressively using political and economic pressure to get what they want.

Here’s a look at some of the regimes that will begin shaping the future of the Internet in just a few days, if President Obama gets his way.

China

China wrote the book on authoritarian control of online speech. The legendary “Great Firewall of China” prevents citizens of the communist state from accessing global content the Politburo disapproves of. Chinese technology companies are required by law to provide the regime with backdoor access to just about everything.

The Chinese government outright banned online news reporting in July, granting the government even tighter control over the spread of information. Websites are only permitted to post news from official government sources. Chinese online news wasn’t exactly a bastion of freedom before that, of course, but at least the government censors had to track down news stories they disliked and demand the site administrators take them down.

Unsurprisingly, the Chinese Communists aren’t big fans of independent news analysis or blogging, either. Bloggers who criticize the government are liable to be charged with “inciting subversion,” even when the writer in question is a Nobel Peace Prize winner.

Chinese citizens know better than to get cheeky on social media accounts, as well. Before online news websites were totally banned, they were forbidden from reporting news gathered from social media, without government approval. Spreading anything the government decides is “fake news” is a crime.

In a report labeling China one of the worst countries for Internet freedom in the world, Freedom House noted they’ve already been playing games with Internet registration and security verification:

The China Internet Network Information Center was found to be issuing false digital security certificates for a number of websites, including Google, exposing the sites’ users to “man in the middle” attacks.

The government strengthened its real-name registration laws for blogs, instant-messaging services, discussion forums, and comment sections of websites.

A key feature of China’s online censorship is that frightened citizens are not entirely certain what the rules are. Huge ministries work tirelessly to pump out content regulations and punish infractions. Not all of the rules are actually written down. As Foreign Policy explained:

Before posting, a Chinese web user is likely to consider basic questions about how likely a post is to travel, whether it runs counter to government priorities, and whether it calls for action or is likely to engender it. Those answers help determine whether a post can be published without incident — as it is somewhere around 84 percent or 87 percent of the time — or is instead likely to lead to a spectrum of negative consequences varying from censorship, to the deletion of a user’s account, to his or her detention, even arrest and conviction.

This was accompanied by a flowchart demonstrating “what gets you censored on the Chinese Internet.” It is not a simple flowchart.

Beijing is not even slightly self-conscious about its authoritarian control of the Internet. On the contrary, its censorship policies are trumpeted as “Internet sovereignty,” and it aggressively promotes the idea that the entire world should follow its model, as the Washington Post reported in a May 2016 article entitled “China’s Scary Lesson to the World: Censoring the Internet Works.”

China already has a quarter of the planet’s Internet users locked up behind the Great Firewall. How can anyone doubt they will use the opportunity Obama is giving them to pursue their openly stated desire to lock down the rest of the world?

http://www.breitbart.com/national-security/2016/09/29/meet-new-authoritarian-masters-internet/

Thursday, September 29, 2016

Dell, HPE reveal systems for Microsoft’s forthcoming Azure Stack software

#DellTechnologies Inc. and Hewlett-Packard Enterprise Co. ( #HPE ) yesterday unveiled a bunch of new integrated systems and services for #Microsoft ’s upcoming #Azure Stack software, in moves aimed at boosting each company’s hybrid cloud efforts. Azure Stack is a piece of software that will, when it’s finally released, allow organizations to replicate Microsoft’s Azure cloud in their own on-premises data centers. It’s believed to be a central part of Microsoft’s hybrid cloud strategy going forward, which envisions a future wherein most enterprises will choose to host some of their applications and workloads in public clouds, while keeping more sensitive ones secured in their own data centers, or “private clouds.” The idea behind Dell’s and HPE’s new systems is that they’ll help businesses that wish to follow the hybrid cloud approach build on-premises infrastructure that has the same agility and flexibility as public cloud platforms like Azure, Amazon Web Services and Google Cloud Platform. “Many enterprises would like to host Azure within their data centers for performance, security or compliance reasons, and many service providers would like to offer Azure-compatible services for data sovereignty or other targeted services,” Paul Miller, HPE’s vice president of marketing for converged data center infrastructure, said in a statement. “HPE is teaming up with Microsoft to deliver the right mix of on- and off-premises Azure services with built-in security, operations management, pay-as-you-go pricing and the expertise that provides unmatched flexibility and cost savings for enterprise workloads.” Microsoft’s General Manager of Product Marketing Mike Schutz agreed, saying that his company and HPE share the view that “hybrid cloud capabilities and consistency across public and private cloud environments help customers accelerate their cloud strategy.” Just this week, Microsoft announced the release of Azure Stack Technical Preview 2. However, the official release of Azure Stack won’t come until summer of next year, when it will be bundled with pre-validated integrated solutions, with Dell, HPE and #Lenovo Group Ltd. all named as official partners. Dell and HPE both already sell systems capable of running Microsoft’s Azure software, but today’s announcement is a significant expansion of both companies’ portfolios.

http://siliconangle.com/blog/2016/09/29/dell-hpe-reveal-their-forthcoming-integrated-azure-stack-systems/

Microsoft Storage Spaces Direct powers the new StarWind HyperConverged Appliance

Middleton, Sept. 28, 2016 – #StarWindSoftware, a leading #hyperconverged solutions vendor, has presented the new release of its StarWind HyperConverged Appliance powered by #Microsoft S2D. Now, in addition to choosing a preferred hypervisor, StarWind HCA also provides Software-Defined Storage options. Thus, Microsoft users can enjoy the environment they are used to. The news is being presented at Microsoft Ignite 2016 in Atlanta, along with a non-disruptive way to add Azure to the setup as a storage tier. StarWind is a pioneer of virtualization and has been in the market for 11 years. StarWind developed its own software-defined storage product and was one of the first to move software-defined storage out of the “test and development” niche into the mainstream. In 2015, the company shifted from being a software-only vendor to also being a hardware vendor. With the release of the HyperConverged Appliance, StarWind has brought enterprise-class hyperconvergence to SMB and ROBO. StarWind HyperConverged Appliance (HCA) is a turnkey hyperconverged platform that includes commodity hardware, disks and flash, a hypervisor of choice, Software-Defined Storage of choice and associated software, all unified into an easily manageable tier. It is pre-built, pre-configured and pre-tested by StarWind engineers. StarWind HCA is designed with unlimited interoperability in mind, which means it can seamlessly merge with any IT infrastructure and allows the user to recycle existing or legacy hardware and software to save money. Another recent StarWind development now being presented at Microsoft Ignite is AcloudA, the plug-and-play Azure gateway that fits into an HDD bay, providing SMB3 and iSCSI cloud connectivity. Basically, it adds Azure or any other cloud storage as a tier for cold data non-disruptively, thus reducing capacity CapEx and OpEx.
http://www.prurgent.com/2016-09-28/pressrelease415964.htm

Enterprise Storage: Still A Land Of Opportunity For Those With The Right Vision

As someone well-placed at a major storage vendor pointed out to me recently, big box storage arrays are going the way of the dinosaur because a single all flash array can replace multiple disk arrays. And for that reason, the all flash revenue potential isn’t anywhere near that of disk even though solid state storage is a hot market. Add to that the fact that storage software can’t make up the shortfall in revenue from exiting disk arrays either and you have a glum near-term outlook for the well-known big box storage vendors. Nevertheless, #enterprisestorage is still a land of opportunity for those with the right vision. Remember that those who have traditionally predicted meteoric data growth haven’t backed off. #IoT is only the latest data volume generator and more will come when we get to some others now appearing on the horizon like blockchain. So there are now some interesting startups looking to capitalize on some hot storage trends. Cases in point include:

http://www.forbes.com/sites/johnwebster/2016/09/29/enterprise-storage-still-a-land-of-opportunity-for-those-with-the-right-vision/#24354d093d0e

Dell EMC offers first look at merged plans in the region

DUBAI: #Dell #EMC on Thursday revealed a “transformational road map” for the region that was big on strategy and low on nitty-gritty detail. Aongus Hegarty, EMEA regional president at the recently amalgamated company, sidestepped questions about its market share and projected growth within the Middle East, instead quoting global figures of $74 billion in revenue and 140,000 employees worldwide. Mohammad Amin, senior vice-president for Turkey, Eastern Europe, Africa and the Middle East at EMC, said, however, that Dell EMC was the market leader in the region “with a big gap between us and number two”. Hegarty, speaking at a media round-table in Dubai following a meeting with Dell EMC enterprise clients, said the firm would be seeking growth both through expanding its market share and through expanding the overall market through new technology and innovation, and that constraints on growth varied from country to country. “We’ve invested ahead of the curve extensively in the region,” Hegarty said. “We are committed to do that. As Dell and EMC I’ve seen that, and as a privately controlled company we make decisions with more medium- to long-term horizons. We don’t look at short-term things. That’s why even in some constrained environments in different markets here we’ve not cut back — if anything we’ve actually incrementally invested.”

m.gulfnews.com/business/sectors/technology/dell-emc-offers-first-look-at-merged-plans-in-the-region-1.1904405

Dell-EMC's dual server supply problem... is not a problem, it reckons

You might have thought it did not compute, #Dell having its own in-house server operation for its acquired #EMC kit to use while EMC’s #VCE operation uses #Cisco server-powered #Vblocks in its converged systems. But no, having a dual server supply is fine. In a blog post, Chad Sakac, president of the Converged Platforms and Solutions operation at Dell EMC, says #Cisco servers will continue to be used in Vblocks and #VxBlocks, the Converged Infrastructure (CI) systems, while the hyper-converged infrastructure ( #HCI ) products will use Dell PowerEdge servers. Sakac writes: “We ARE (not WILL be) working behind the scenes to leverage the power of our Dell EMC family to welcome Dell PowerEdge Servers into our HCI portfolio soon.” Why? On the server hardware front, Sakac says CI systems, with an integrated external storage array, need “the peak of blade system design with the most dense CPU/memory model” and he’s fine with Cisco supplying that. HCI systems, with no external storage array, need a rackmount as well as modular server design with dense storage, compute and memory components. Dell is providing the PowerEdge R-series for rackmount needs and the PowerEdge FX-series for modular needs. Sakac believes that the hardware element “of the [HCI] system-level offer depends on having an awesome, fast, flexible, and cost effective x86 rackmount, modular set of systems, and an incredible supply chain,” support model, and “configure to order” flexibility. Which Dell has. He plays the why-fix-what-isn’t-broken card: “It’s a no brainer to ensure the continuity and strength of our CI leadership Vblock and VxBlock with Cisco UCS.” And implies Dell is better than Cisco for HCI because “it’s a no brainer for us to combine PowerEdge servers with our HCI #VxRack and #VxRail offers to give our customers more choice, more configure to order, faster access to the whole Intel ecosystem, and ultimately better flexibility at a better price point.” Dell EMC is working to deeply integrate PowerEdge server design, roadmap, manufacturing, supply chain and support into the VxRack and VxRail products. On the reference architecture front, we are told: “ #VSAN -Ready Nodes and ScaleIO-Ready Nodes, and validated systems – Dell EMC PowerEdge, Dell EMC Storage, and Dell EMC Networking will be at the heart of reference architectures, bundles, and validated systems.”
Comment: Dell does have its own blade servers, which a Cisco white paper says are not as good as its UCS blades: "Dell has tried to implement this vision twice: first with the Dell M1000e blade enclosure and now with the Dell FX2 architecture. Meanwhile, the Cisco Unified Computing System is the only blade system on the market today that delivers “true unity computing” while also giving you choice. This document describes the benefits of the blade server vision and how Dell and Cisco measure up to the vision." Conversely, a Dell document (PDF) says its own blade servers outperform Cisco gear. We might suppose there are no substantial technical obstacles to Dell servers being used in the Vblock and VxBlock CI systems, but it suits both Dell and Cisco from an enterprise selling and support point of view to continue with the battle-hardened, successful and proven VCE operation. El Reg would also like to gently point out that Dell EMC’s Vblock and VxRack CI offerings are based on VMware. It’s not inconceivable that Sakac’s operation could develop Hyper-V virtualisation-based HCI and CI offerings.
These could use Dell server and networking technology in both the CI and HCI categories. Cisco’s sales reps for Dell EMC must know they face a continuing threat of Dell servers being used instead of Cisco UCS boxes in the future. They have a continuing mountain to climb.

http://www.theregister.co.uk/2016/09/29/dell_emc_dual_server_supply_problem/

Intel Corporation (NASDAQ:INTC) Potentially Threatens IBM on Growing Popularity of Hyper-Converged Infrastructure

#Intel Corporation (NASDAQ:INTC) [Trend Analysis] is posing a new threat to International Business Machines Corp. (NYSE: #IBM ). The growing popularity of hyper-converged infrastructure based on Intel’s x86 processors is a new threat for IBM. Hyper-converged appliances running on Intel’s commodity x86 processors are fast gaining traction in datacenters around the world, which could threaten IBM’s (NYSE:IBM) server business. For Intel, however, this could be a boon because it will support demand for Xeon chips. IBM therefore cannot afford to sit idle without venturing into hyper-converged infrastructure; what IBM needs is a change in outlook and in its current server strategy. The growing popularity of hyper-converged infrastructure could become a missed opportunity for the firm if it continues to be reluctant to offer customers hyper-converged software and appliances. On the other hand, chip analysts at Citigroup (C), Roland Shu and Christopher Danely, recently reviewed the outlook for Intel (INTC) competing directly with Taiwan Semiconductor Manufacturing (TSM) in building chips for other firms, and declared TSM the likely winner in any such face-off. Apple will still use Intel processors for its next generations of Mac computers; the suggestion that Apple would use ARM-based A10 processors inside the new MacBook or iMac is unrealistic, as software compatibility issues would render ARM-based Macs unattractive to users who have already bought several x86-only Mac software products.
Intel Corporation (NASDAQ:INTC) has kept up with changing market appetites; the stock inched up around 0.70% in the early session on volume of 18.9 million shares, compared with its average volume of about 22.61 million shares, and traded at $37.44.
Analysts’ Remarks and Ratings
The Wall Street Journal’s analyst data show 18 BUY ratings on the stock in the previous month and 21 BUY ratings in the current month. One analyst rated the stock Underweight and two gave it a SELL rating, while the majority of analysts rated it Overweight. The consensus EPS estimate for the first quarter of next year currently stands at $0.72, compared with $0.66 one month ago; EPS trends give a clearer picture of the company’s profitability. On an annual basis, the FY 2017 EPS estimate currently stands at $2.81, compared with $2.71 one month ago and $2.61 three months ago. INTC’s highest price target is $49.00 and its lowest is $31.00, with an average price target of $40.21 against a current price of $37.18.
http://streetwisereport.com/intel-corporation-nasdaqintc-potentially-threaten-ibm-on-growing-popularity-of-hyper-converged-infrastructure/3410864/

SAP Aims to Deliver Big Data as a White Glove Service

When it comes to data, #SAP has been pursuing a dual strategy. On the one hand, SAP is driving adoption of in-memory computing using the #HANA database to enable both faster transactions and real-time analytics. The other half of the strategy is anchored by SAP HANA #Vora, an instance of #Apache #Hadoop and #Spark that SAP deploys to create a data lake. This week at the #Strata + Hadoop conference, SAP announced it is bolstering its data lake strategy by acquiring #Altiscale, a provider of a managed cloud service based on Hadoop and the Spark in-memory computing framework. Mike Eacrett, vice president of product development for #BigData at SAP, says SAP is adding what amounts to a “white glove service” for deploying Hadoop and Spark. Many IT organizations don’t have the skills required to deploy Hadoop and Spark on their own. SAP will continue to give customers the option of deploying Hadoop and Spark as implemented by Altiscale or deploying the SAP HANA Vora implementation as a service on the Altiscale cloud. But there will be no requirement to use HANA in order to subscribe to the Altiscale service. Eacrett says the customers most interested will be organizations trying to operationalize Big Data within the context of an overall business process strategy that employs Big Data analytics to make better informed decisions. “Organizations want to be able to use Big Data to drive repeatable processes,” says Eacrett.

http://mobile.itbusinessedge.com/blogs/it-unmasked/sap-aims-to-deliver-big-data-as-a-white-glove-service.html

Microsoft HDInsight gets Spark 2.0, faster Hive, and better security

When #Microsoft started out dipping its toes into the #Hadoop waters, it worked with #Hortonworks to port #Hadoop to #Windows and run it in the #Azure cloud. But running Hortonworks Data Platform (HDP) for Windows meant #HDInsight (as Hadoop on Azure was eventually branded) was always a step behind the more mainstream #Linux distributions, and constantly playing catch-up. When Microsoft decided to offer HDInsight clusters running on Linux, everything changed. Support from across the industry materialized and the newest Hadoop features were added to the service in much faster timeframes.
Still, HDInsight has been due for a polishing, and today Microsoft is announcing just that. A new version of HDInsight, based on HDP 2.5, is launching today and, along with it, some Microsoft-specific security and application integrations that make HDInsight a contender for the leading cloud Hadoop offering.

Spark in its eye
So what's inside? Apache Spark 2.0, to start. This version of Spark includes technology from Project Tungsten, giving Spark the power of vectorized computations. Along with the new version of Spark itself, HDInsight will now include support for Apache Zeppelin notebooks, which let developers build scrapbook-like compositions of code and data visualizations that run on Spark.

Also read: Spark comes to Azure HDInsight

HDInsight had already offered similar capabilities using Jupyter, another open source notebook technology. But it's nice to see HDInsight include both notebook technologies, in parity with most other Hadoop offerings. Another nice Spark-related addition is that of a Spark-HBase connector, allowing Spark SQL to be used -- from notebooks or elsewhere -- to query data in Apache HBase.
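To make the Spark-HBase connector point above a little more concrete, here is a minimal PySpark sketch of the kind of Spark SQL query a Zeppelin or Jupyter notebook cell might run. It assumes the Hortonworks shc connector package is available on the cluster, and the HBase table, column families and column names are hypothetical, not taken from the article.

from pyspark.sql import SparkSession
import json

# Hypothetical HBase table mapping ("catalog") used by the shc connector
catalog = json.dumps({
    "table": {"namespace": "default", "name": "Contacts"},
    "rowkey": "key",
    "columns": {
        "rowkey": {"cf": "rowkey", "col": "key", "type": "string"},
        "personalName": {"cf": "Personal", "col": "Name", "type": "string"},
        "officeAddress": {"cf": "Office", "col": "Address", "type": "string"}
    }
})

spark = SparkSession.builder.appName("hbase-sql-sketch").getOrCreate()

# Load the HBase table as a DataFrame through the connector's data source
df = (spark.read
      .options(catalog=catalog)
      .format("org.apache.spark.sql.execution.datasources.hbase")
      .load())

# Register it for Spark SQL, exactly as a notebook cell would, then query it
df.createOrReplaceTempView("contacts")
spark.sql("SELECT personalName, officeAddress FROM contacts").show()

In a notebook the SparkSession is usually pre-created for you, so only the catalog definition and the read itself would normally be needed.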

Hive moves into express lane
Using HDP 2.5 under the hood also means that Microsoft can ship Apache Hive's new LLAP ("Live Long And Process") mode, stemming from the "Stinger.Next" initiative around Hive. As I reported a year and a half ago, the technology combines Hive running on Apache Tez with caching, vectorization, and other optimizations to deliver what both Microsoft and Hortonworks claim are sub-second response times.
http://www.zdnet.com/article/microsoft-hdinsight-gets-spark-2-0-faster-hive-and-better-security/

Hadoop, Big Data ecosystem shifting, analysts say | #BigDataNYC

#Hadoop has always been seen as the “poster child” for #BigData, and indeed the vast majority of Big Data customers use it in at least some variation. But recently there have been some changes within the Hadoop ecosystem, as well as some outside competition, and it is creating permanent changes to the Big Data landscape. As day two of the BigDataNYC 2016 event kicked off at the Mercantile Annex in New York, NY, Dave Vellante (@dvellante) and Jeff Frick (@JeffFrick), cohosts of theCUBE, from the SiliconANGLE Media team, sat down to discuss highlights from the event, the big changes to the Hadoop ecosystem and the evolving Big Data landscape.
Big changes for Hadoop
One of the big themes from BigDataNYC is how many users are starting to find alternatives to the standard Hadoop ecosystem. The various #Hadoop distributions, #Cloudera, #MapR and #Hortonworks, have always been at the heart of Hadoop, but recently developers have been looking for other solutions for Big Data, both inside the Hadoop ecosystem and outside of it. The complexity of Hadoop and the long learning curve to become functional with it are among the reasons developers are seeking alternatives. “This whole Hadoop ecosystem is really shifting,” said Vellante. “We used to spend all the time talking about Cloudera, MapR and Hortonworks, [but now] those guys are sort of like infrastructure plumbing.”

http://siliconangle.com/blog/2016/09/28/hadoop-big-data-ecosystem-shifting-analysts-say-bigdatanyc/

AI? Hadoop? Companies still don’t have the data basics right, says Birst

Birst’s core strategy focuses on being a seller of ‘networked BI’ in the cloud – the idea being that most enterprises have little, or no, insight into the data across their entire organisation. Companies build data silos in an attempt to get some structure, but ultimately they’re still silos, which either sit in multiple systems, data warehouses or on users’ desktops. As a product, think of Birst as a fabric that sits on top of all these silos, tapping into them and linking them all together to provide predictive insights. Sitting down with Birst’s VP of product strategy, Southard Jones, in San Francisco this week, he said: You want to solve a use case problem, like address the way that purchasing happens across a 78 country company that’s highly decentralised, across 29 ERPs. That’s not a desktop discovery tool. That’s an analytics solution. It’s great to have a discovery tool that can give an end user a faster, beautiful way to answer questions on a spreadsheet. But to address and create value for the organisation and to deliver millions of dollar of value, we need a platform that’s going to address a larger problem. And IT and business are both going to team together to make that purchasing decision. Jones believes that although enterprises are interested in the latest data technologies, such as #Hadoop and #AI, ultimately they’re more interested in fixing the fundamentals of their data structures. He argues – and I tend to agree with him – that companies seek relief from just getting some governance in place, and getting a better understanding of their entire data portfolio. AI and Hadoop are nice to have, but ultimately they can’t do a great deal if you don’t even know what data sits where. Jones said: Birst was built on delivering automated machine learning capabilities. So, under the covers, we had this already. What we are hearing is that companies want a pragmatic approach to enable those end users, who are operational in nature, to leverage the intelligence of machines and AI. I wouldn’t say it’s the number one thing they’re asking for though. We talk about it, but the chief purchasing officer, for example, says ‘that’s great, that’s nice, I want to do that some day’. ‘But let me at least understand how many purchases, by supplier, in all countries, across all ERPs, are happening on a daily basis. I will be ecstatic if I can get that’. That’s a hard problem for me to solve. After you solve that problem, yeah let’s talk about this AI stuff. Let’s not get the cart before the horse.

http://diginomica.com/2016/09/28/ai-hadoop-companies-still-dont-have-the-data-basics-right-says-birst/

Newly Diversified Broadcom Gets "Wired" After Avago Merger

#Broadcom (AVGO) has a lot of different tentacles at this point, but the company's recent transformation seems to be going along quite nicely. Earlier this year Avago Technologies completed its $37 billion acquisition of Broadcom. It's hard to find a more diversified chip name than Broadcom these days. The company now has a formidable presence in the wireless, wired, enterprise storage and industrial segments. Broadcom makes radios for cellphones and other consumer devices, plus chips used in networking equipment for data centers. Avago makes analog and mixed signal chips for automotive, communications and industrial applications. In late 2013, Avago made a bet on increased use of cloud and mobile software when it struck a deal to acquire #LSI Logic for $6.6 billion. The purchase gave #Avago a big presence in the networking and storage chip market.

http://www.investors.com/stock-lists/sector-leaders/newly-diversified-broadcom-gets-wired-after-avago-merger/

Amazon is eating away at Google's core business

For more and more people, #Amazon is the first port of call when it comes to researching potential purchases — and that's bad news for #Google. Over half of Americans now go to Amazon to carry out their first search for products, turning away from search engines and other online retailers, according to a new study from the marketing company BloomReach. (The research was previously reported on by Bloomberg.) Fifty-five percent of those surveyed made their first search on Amazon, up from 44% a year ago. At the same time, just 27% of people began at search engines, down from 34%. Retailers also saw a decline, dropping to 16% from 21%. (The study took place on Labor Day, May 1, and surveyed 2,000 US consumers. There's no word on data from other countries, but it seems reasonable to assume that the data might be similar in Western markets where Amazon has a similar presence as in the US.) It's yet another sign of how fully Amazon is dominating online shopping — but it's also particularly bad news for Google. Google's original, core business is a search engine. But more and more consumers are now opting to bypass it in favor of heading straight to the ultimate destination.

http://www.businessinsider.com/55-people-start-product-searches-amazon-google-search-share-2016-9

Facebook, Amazon, Google, IBM and Microsoft come together to create historic Partnership on AI

The world’s largest technology companies hold the keys to some of the largest databases on our planet. Much like goods and coins before it, data is becoming an important currency for the modern world. The data’s value is rooted in its applications to artificial intelligence. Whichever company owns the data, effectively owns #AI. Right now that means companies like #Facebook, #Amazon, #Alphabet, #IBM and #Microsoft have a ton of power. In an act of self-governance, these five companies came together today to announce the launch of the new Partnership on AI. The group is tasked with conducting research and promoting best practices. Practically, this means that the group of tech companies will come together frequently to discuss advancements in artificial intelligence. The group also opens up a formal structure for communication across company lines. It’s important to remember that on a day-to-day basis, these teams are in constant competition with each other to develop the best products and services powered by machine intelligence. Financial support will be coming from the initial tech companies that are members of the group, but in the future, membership and involvement is expected to increase. User activists, nonprofits, ethicists and other stakeholders will be joining the discussion in the coming weeks. “We want to involve people impacted by AI as well,” said Mustafa Suleyman, co-founder and head of applied AI at DeepMind, a subsidiary of Alphabet. The organizational structure has been designed to allow non-corporate groups to have equal leadership side-by-side with large tech companies. As of today’s launch, companies like Apple, Twitter, Intel and Baidu are missing from the group. Though Apple is said to be enthusiastic about the project, their absence is still notable because the company has fallen behind in artificial intelligence when compared to its rivals — many of which are part of this new group. The new organization really seems to be about promoting change by example. Rather than preach to the tech world, it wants to use a standard open license to publish research on topics, including ethics, inclusivity and privacy. “The power of AI is in the enterprise sector,” said Francesca Rossi, an AI ethics researcher at IBM Research. “For society at-large to get the benefits of AI, we first have to trust it.” The focus of the organization is a refreshing juxtaposition to more pop-culture discussions about the risks of artificial intelligence. While the jury is still out as to whether a singularity event could threaten mankind, we already face a long list of challenges in today’s world of AI. While computers are not at a point yet where they can take all of our jobs, they can amplify the negative tendencies that humans already possess. A biased world can result in biased data sets and, in turn, bias artificial intelligence frameworks. To combat this, companies like Microsoft have already formed AI ethics advisory boards. But, rather than override existing efforts, the new group augments projects already undertaken at individual companies and provides a forum for sharing valuable advice. The group plans to make discussions and minutes from meetings publicly available.

https://techcrunch.com/2016/09/28/facebook-amazon-google-ibm-and-microsoft-come-together-to-create-historic-partnership-on-ai/

Wednesday, September 28, 2016

Dell Boomi’s Salesforce Integration Delivers Critical Visibility and Collaboration for Dell Technologies

ROUND ROCK, Texas--(BUSINESS WIRE)--#Dell #Boomi today announced its data and application integration technology is supplying critical interoperability between the #Salesforce Sales Cloud CRM instances that were in place at Dell and #EMC before the historic merger. By using Dell Boomi’s market-leading #AtomSphere integration platform as a service (iPaaS) to unify the disparate Salesforce systems, Dell Technologies has achieved vital visibility and collaboration to drive immediate business impact in such areas as cross-sell of complementary Dell and EMC products, revenue forecasting, sales pipeline management and supply chain planning.

http://www.businesswire.com/news/home/20160928005262/en/Dell-Boomi’s-Salesforce-Integration-Delivers-Critical-Visibility

D-Wave's 2,000-Qubit Quantum Annealing Computer Now 1,000x Faster Than Previous Generation

#D-Wave, a Canadian company developing the first commercial “ #quantumcomputer ,” announced its next-generation quantum annealing computer with 2,000 #qubits, which is twice as many as its previous generation had.
History Of D-Wave
D-Wave was created more than a decade ago, when it first developed a 16-qubit prototype. The company unveiled its 28-qubit version publicly for the first time in 2007. Since then, it has increased its number of qubits at a steady pace, more than doubling every two years or so. In 2013, the company announced its 512-qubit computer and a collaboration between #Google and #NASA, who were going to test various algorithms on it and see how fast it could get compared to conventional computers. Last year, D-Wave announced a 1,000-qubit generation, and now the company is previewing its 2,000-qubit computer, which will likely go on sale next year.

http://www.tomshardware.com/news/d-wave-2000-qubit-1000x-faster,32768.html

Cloudera Accelerates Customer Success by Enabling Production-Ready Big Data Analytics Across Multiple Cloud and Hybrid Cloud Environments

#STRATA + #HADOOP WORLD NEW YORK, NY, Sept. 27, 2016 (GLOBE NEWSWIRE) -- #Cloudera, the global provider of the fastest, easiest, and most secure data management and analytics platform built on #Apache Hadoop and the latest open source technologies, today announced new technology enhancements to its core platform that will make it easier for companies to use elastic, on-demand cloud infrastructure to gain significant business value from all their data. The company also announced it is generating significant momentum with customers running production environments on public cloud infrastructure. The move to cloud is a top priority for CIOs in 2016 across the globe. According to a Gartner report, “Survey Analysis: Cloud Adoption Across Vertical Industries Exhibits More Similarities Than Differences” (February 2015), IT spending on public cloud is growing at a five-year CAGR of 18% through 2018 – further evidence that cloud spend is far outpacing IT spend with no signs of slowing down.

http://finance.yahoo.com/news/cloudera-accelerates-customer-success-enabling-200000856.html

Docker storage: how to get persistent storage in Docker

#Docker, by default, doesn’t come with persistent storage, which presents an issue for some workloads that customers want to run in containers – however, there are ways to achieve persistent storage. #Container technology provides a great way to create, manage and run applications with a high degree of efficiency. A physical host running Docker, for example, can manage hundreds of containers at the same time, through efficient use of scheduling and memory resources.
A container is effectively a collection of processes needed to run an application that uses features of the underlying operating system to access networking and storage functions.
Operating system features such as cgroups and namespaces provide process isolation between containers, making it appear to each container that it is the only instance running on the server.

The initial intention was for containers to be transient or temporary in nature, spun up to manage a particular demand or workload. While this is an efficient way to run applications for the time they are needed, the fact a container (by default) loses all of its data when deleted represents a problem for running applications like databases, where persistent storage is essential, rather than just desirable.

Docker provides mechanisms to enable persistent data across the lifetime of a container. (This article refers to features up to and including Docker 1.12. The Docker ecosystem is changing rapidly and new features are released all the time, so be sure to check which runtime version you are using.)

There are four ways to provide persistent storage to Docker containers: data volumes, data volume containers, mounting a host directory and Docker storage plugins.

Docker data volumes
A data volume is a directory within the file system of the host that is used to store persistent data for a container (typically under /var/lib/docker/volumes). The directory appears as a mount point specified by the administrator when starting the container up (eg /data).

By default, volumes are given 64-character randomly generated UUIDs, unless a friendlier name is provided.

Pro tip: It is definitely worth providing a friendly name that relates to the name of the associated container, as this becomes especially helpful when doing clean-up on orphan volumes.

Docker data volumes created in this way will persist as the container is stopped and started. Details of the volumes attached to a container can be found by running docker inspect against the container. Attached volumes are shown in the “Mounts” section. A list of volumes can be found using the docker volume ls command; however, there’s no direct way to show the container associated with a volume using the docker volume commands.

Information written to a data volume is managed outside the storage driver that is normally used to manage Docker images. This means the volume is accessed much faster than it would be when writing to a directory within the container. This is because the storage driver doesn’t have to maintain update/change differences from the container image itself.

Unfortunately, data volumes aren’t really that useful because existing volumes can’t be attached to either a running or a new container. This can lead to orphan volumes (volumes with no associated container) and can be an issue to clean up, especially when friendly volume names haven’t been used.
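As a rough illustration of the data volume workflow described above, here is a hedged sketch using the Docker SDK for Python (docker-py) rather than the docker CLI the article refers to; the volume name, image and mount path are hypothetical, and the SDK must be installed separately (pip install docker).

import docker

client = docker.from_env()  # connect to the local Docker daemon

# Create a named ("friendly") volume instead of relying on a random ID
volume = client.volumes.create(name="appdata")

# Start a container with the volume mounted at /data inside the container
container = client.containers.run(
    "alpine:3.4",
    "sh -c 'echo hello > /data/greeting.txt && sleep 60'",
    detach=True,
    name="volume-demo",
    volumes={"appdata": {"bind": "/data", "mode": "rw"}},
)

# Equivalent of docker inspect: attached volumes appear under "Mounts"
container.reload()
print(container.attrs["Mounts"])

# Equivalent of docker volume ls
print([v.name for v in client.volumes.list()])

Because the volume is named, it survives the container being stopped and removed, and it is easy to spot later when cleaning up orphan volumes.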

Data volume containers
An alternative to volumes is to create a container used specifically to manage volumes. This container doesn’t run any code, but instead acts as a single access point from which other containers can access a data volume.
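To make the data volume container pattern concrete, here is a hedged sketch, again using the Docker SDK for Python; the container names, image and paths are hypothetical. The same pattern with the docker CLI would use docker create together with the --volumes-from flag.

import docker

client = docker.from_env()

# Named volume that the data volume container will expose at /dbdata
client.volumes.create(name="dbdata")

# The "data volume container": created but never started; it exists only to
# act as a single access point for the volume
client.containers.create(
    "alpine:3.4",
    name="dbstore",
    volumes={"dbdata": {"bind": "/dbdata", "mode": "rw"}},
)

# Other containers borrow the volume via volumes_from and share the same data
client.containers.run(
    "alpine:3.4",
    "sh -c 'date > /dbdata/last_run.txt'",
    volumes_from=["dbstore"],
    remove=True,
)

output = client.containers.run(
    "alpine:3.4",
    "cat /dbdata/last_run.txt",
    volumes_from=["dbstore"],
    remove=True,
)
print(output.decode())  # both containers saw the same /dbdata contents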
http://www.computerweekly.com/feature/Docker-storage-how-to-get-persistent-storage-in-Docker

Hortonworks Tightens Focus on the Cloud with Connected Data Architecture

SANTA CLARA, Calif., and NEW YORK, Sept. 27, 2016 /PRNewswire/ -- #Hortonworks, Inc. ® (HDP), a leading innovator of open and connected data platforms, will showcase its latest technology solutions for streaming analytics, security, governance, and #Apache #Spark at scale at #Strata + #Hadoop World. As customers increasingly look to deploy cloud-based solutions, #Hortonworks' connected data architectures and mission-critical security, governance, and data access innovations provide unparalleled integration across multiple clouds and the data center.

http://finance.yahoo.com/news/hortonworks-tightens-focus-cloud-connected-130000271.html

Splice Machine Introduces PL/SQL Support in Support of Oracle-to-Hadoop Migration

#SpliceMachine, provider of an #SQL RDBMS powered by #Hadoop and #Spark, now supports native PL/SQL on Splice Machine. The goal with PL/SQL support, the company says, is to reduce the time and cost for companies to move big data workloads from Oracle databases. Announced at #Strata + Hadoop World in NYC, the new capabilities are available through the Splice Machine Enterprise Edition. The Splice Machine PL/SQL support has two components, according to the vendor. The compiler converts the PL/SQL into a type-checked and optimized runtime representation. In addition, the interpreter executes the optimized runtime representation with PL/SQL semantics so that users can ensure their application will behave the same on Splice Machine as it does on #Oracle. It maintains a procedural context and handles scoping for variable de-referencing, iteration and conditional testing, and also dispatches DDL and DML to the Splice Machine RDBMS for execution. Describing a key use case for the technology, Monte Zweben, CEO, Splice Machine, said that many customers are developing artificial intelligence applications that deploy machine learning models, but this real-time intelligence can be difficult to achieve in PL/SQL applications because of the time it takes to get data out of the engine into an analytical framework via ETL. Now, he said, with this PL/SQL capability, applications can run their native application logic while running new machine learning processes on Splice Machine's RDBMS powered by #Apache Spark and Apache Spark MLlib.

http://www.dbta.com/Editorial/News-Flashes/Splice-Machine-Introduces-PL-SQL-Support-In-Support-Oracle-to-Hadoop-Migration-113780.aspx

IBM, WanDisco, DataTorrent and pals sign Hadoop interoperability pact

The initiative created to standardise #Apache #Hadoop applications has netted a handful of large enterprise vendors that have committed to its interoperability programme. The Open Data Platform initiative ( #ODPi ) has announced that #DataTorrent, #IBM, #Pivotal, #SAS, #Syncsort, #WanDisco and #Xavient have committed themselves to only providing applications and platforms which are interoperable with each other. ODPi published its first runtime specification to establish the technical necessities of that interoperability back in May. Although the initiative received a warm reception from Hortonworks, the other large Hadoop businesses, #MapR and #Cloudera, have both treated it with scepticism. Hadoop creator and Cloudera man Doug Cutting told The Register he didn't regard the ODPi as worthy of praise. ODPi “is actually something which isn't part of the open source community; really, it's a separate foundation,” said Cutting. Talking to The Register ahead of this announcement, the ODPi's John Mertic said that whatever questions were raised when the initiative first launched, the changes in its governance and direction have made it more likely than before that Cloudera and MapR might change their minds about refusing to be involved. As of today, a dozen ODPi interoperable applications have been tested to successfully run on multiple runtime compliant platforms, and currently the Altiscale, ArenaData, Hortonworks, IBM, and Infosys Apache Hadoop platforms are compliant. Additionally, ODPi announced today that version 2.0 of its runtime specification will add Apache Hive and Hadoop Compatible File System support. While specification version 1.0 included Yarn, MapReduce and HDFS, the ODPi hopes that by including Hive in 2.0 – used to read, write, and manage large datasets in distributed storage – the specification will reduce SQL query inconsistencies across Hadoop platforms. Ritika Gunnar, IBM veep of Offering Management, Data and Analytics, said: "ODPi is providing a common platform to develop big data apps, enabling interoperability across different distributions and application offerings. In that spirit, IBM is ensuring many of our Apache Hadoop related offerings are interoperable including IBM Big SQL, IBM SPSS Analytic Server, IBM Big Replicate, and others. "By supporting interoperable applications, the value to clients of ODPi Compliant distributions is being enhanced," added Gunnar.
http://www.theregister.co.uk/2016/09/27/odpi_nabs_enterprise_companies_in_hadoop_interoperability_scheme/

Four things to know before Nutanix goes public

#Nutanix, an enterprise cloud company and a tech unicorn, is expected to debut on the public market Friday. The company on Wednesday raised the price range for its initial public offering to $13 to $15 a share from a previous $11 to $13. The company plans to sell 14 million shares to raise up to $241.5 million. Underwriters have the option to buy 2.1 million additional shares of class A stock.

Nutanix had 3,768 end customers as of July 31, 2016, which includes more than 300 Global 2000 companies.

The company filed to list on the Nasdaq Global Select Market under the symbol “NTNX.”

The lead underwriters on the offerings are Goldman, Sachs & Co., Morgan Stanley and J.P. Morgan.

Here are four things to know before it goes public:

A possible down round

Nutanix received a valuation of more than $2 billion in August 2014, at the time of its latest funding round, according to The Wall Street Journal.

Even if the company prices at the top of its stated price range, it would net a market cap of $1.8 billion, based on the shares outstanding.

Growing revenue

Nutanix has posted growing revenue of $127 million in 2014, $241 million in 2015 and $445 million in 2016.

But its net losses have also widened from $84 million in 2014, to $126 million in 2015 and $168 million in 2016.

The company said it expects to see net losses for the foreseeable future.

The competitors

Nutanix exists in the category of “hyperconverged infrastructure,” which it says it pioneered when it sold the first version of its operating system in October 2011. Overall, it offers a server that provides networking and storage for businesses.

Still, it has a number of competitors including software company #VMware Inc. and large companies such as Hewlett Packard Enterprise Company, #Cisco Systems Inc. and #DellTechnologies, which offer their own versions of #hyperconverged infrastructure and software storage products.

The investors

Nutanix is a venture-backed company, backed by firms such as Lightspeed Venture Partners, which owns 22.7% of the company before the offering, and Khosla Ventures, which owns 10.8%.

Other investors include Blumberg Capital II, which owns 5.7% and Riverwood Capital Partners, which owns 5%.

http://www.marketwatch.com/story/four-things-to-know-before-nutanix-goes-public-2016-09-27

Dell EMC adds to Microsoft systems and services

#Dell #EMC has announced new #Azure-based services along with new systems validated for #SQL Server and #Exchange. Dell EMC has announced three new Azure-based services. Azure Backup provides cloud-based backup for physical servers, virtual machines, and systems running in Azure. Azure Dev/Test Labs allows developers and testers to create and provision #Windows and #Linux environments quickly using reusable templates and pre-provisioned environments in Azure.
http://www.itwire.com/hardware-and-storage/74990-dell-emc-adds-to-microsoft-systems-and-services.html

Surge Seen for Software-Defined Datacenter Market

The booming #bigdata market along with growing demand for automated datacenter operations are spurring adoption of management and other services offered by #softwaredefineddatacenters, according to a new report that also forecasts a 32-percent annual growth rate for virtualized computing, networking and storage. In a report released this week, Allied Market Research pegs the software-defined datacenter market at $139 billion by 2022. While cost savings and flexibility continue to drive global adoption, the market researcher also noted that integration complexity and security threats still represent a drag on wider adoption. Among the adoption drivers for virtualized datacenters is the steady embrace of hybrid cloud models as enterprises look to leverage cloud flexibility and savings while securing proprietary data and applications on-premises. The Asian-Pacific region is expected to be the fastest growing region during the forecast period as more datacenters are built and major players like the Chinese government make heavy investments in IT infrastructure. The region accounted for an estimated 20 percent of the global market for virtualized datacenters in 2014. Exponential growth in big data projects also is driving demand for virtualized storage, the report noted. Hence, software-defined infrastructure is seen as the most cost effective way of securely storing the projected 40 zettabytes to be generated by big data initiatives by 2020. Soaring data volumes also are expected to place a premium on hyper-scale infrastructure that can be properly sized for data analytics efforts. Data security remains an ongoing concern, as does the growing complexity of data and application integration. Among the service segments analyzed in the datacenter report, integration, deployment and migration accounted for the highest percentage of revenues since 2014 "as it involves new hardware setup cost such as servers and automated storage systems," the report notes. Meanwhile, managed services are expected to grow faster over the next several years, accounting for about 36 percent of the current market while forecast to expand rapidly as demand grows from large enterprises requiring assistance in deploying software-defined infrastructure.

http://www.enterprisetech.com/2016/09/27/surge-seen-software-defined-datacenter-market/

Datera Announces Next-Generation Elastic Data Fabric to Simplify Data Storage for Enterprise and Service Providers

#Datera Continues to Elevate the Data Storage Landscape by Enabling Continuous and Faster App Delivery, Massive Scalability and Price-Performance Flexibility for All Apps and Infrastructure Stacks
SUNNYVALE, CA--(Marketwired - Sep 27, 2016) - Datera, the application-driven data infrastructure company, today announced new enhancements to its Datera Elastic Data Fabric™ 2.0 platform. Datera has developed the industry's leading universal data platform that supports any application, on any orchestration stack, at any scale. Enterprises and service providers rely on Datera to build large-scale, fully-automated private and public clouds for faster application delivery with industry-leading economic flexibility and shorter time to market.
http://www.marketwatch.com/story/datera-announces-next-generation-elastic-data-fabric-to-simplify-data-storage-for-enterprise-and-service-providers-2016-09-27-8160051

Seagate Showcases Latest Enterprise SSD and HDD Portfolios at Open Data Center Summit 2016

#Seagate Technology plc (STX), a world leader in storage solutions, today showcased its latest and most advanced enterprise solid state drives at the Open Data Center Summit 2016, which is being held at the Beijing International Convention Center on September 27, 2016. The full range of products displayed includes recently launched SSD solutions like the 8TB #Nytro® #XP7200 #NVMe #SSD, the 60TB SAS SSD and the 2TB Nytro® XM1440 M.2 NVMe SSD, and Seagate enterprise HDD solutions including the 8TB Archive HDD, the Enterprise Performance HDD, and the 10TB helium-based Enterprise Capacity 3.5 HDD. Seagate will also showcase an NVMe Over Fabrics storage system technology, achieving millions of IOPS and providing extremely low latency storage to host applications. This technology demo utilizes Seagate NVMe SSDs, storage systems, and software.
http://finance.yahoo.com/news/seagate-showcases-latest-enterprise-ssd-070100050.html

High capacity SSDs: How Big Can They Grow?

Think high capacity #SSD s are far off on a distant horizon? Not so. New manufacturing techniques mean that petabyte, high capacity SSDs are probably a question of when, not if. The world of SSDs – and what we think of as a "high capacity SSD" – is going to change radically over the next few years. That's because a single SSD will soon be able to store such vast quantities of data that the very term "high capacity SSD" may end up being wholly inadequate and in need of replacement. That's good news for many businesses, because the high capacity SSD market has been quite dull for the last few years and technology advances have been moderate. The main change has been a move from single level cell ( #SLC ) technology to multi-level cell ( #MLC ) and enterprise multi-level cell ( #eMLC ) technology. That's made it possible to offer cheap consumer grade SSDs to sit alongside the more expensive enterprise class ones, and also allowed a general fall in the price per GB stored on high capacity SSDs.

http://mobile.enterprisestorageforum.com/storage-hardware/high-capacity-ssds-how-big-can-they-grow.html

Russia wants nothing to do with Microsoft’s technology

The city of Moscow is getting rid of Microsoft. Russia’s capital is ditching Microsoft’s technology after president Vladimir Putin urged state officials and local companies to reduce their reliance on foreign technology. As a result, it is replacing #Microsoft #Exchange and #Outlook on 6,000 computers with email systems developed by state-run carrier #Rostelecom #PJSC, reports Bloomberg. It may also consider installing software developed by Russia’s New Cloud Technologies to replace Microsoft Windows and Office on nearly 600,000 devices. Russia began pushing for technological self-sufficiency after the EU and US enacted sanctions against the country for its illegal annexation of Crimea in September 2014. While both countries are large enough to survive without the other, there’s one obvious loser here: #SiliconValley

http://qz.com/793258/russias-vladimir-putin-is-replacing-microsofts-msft-products-and-servers-with-local-alternatives/

More Lenovo staffers to be buried in the redundancy pit

#Lenovo has told around 1,100 staff they are surplus to requirements as it battens down the hatches following continued weakness in smartphone demand. The Chinese tech giant confirmed the “resource action” will affect “less than two per cent” of the roughly 55,000 people employed worldwide. “The majority of the positions being eliminated are part of the ongoing strategic integration between Lenovo and its #Motorola smartphone business as the company further aligns its organisation and streamlines its product portfolio to best compete in the global smartphone market.” Job cuts are also happening elsewhere across the business “as part of a continued effort to manage costs, drive efficiency and support ongoing improvement in overall financial performance”. It didn’t specify the divisions where the axe was set to swing. “While these actions are never easy, they are a necessary part of our continued efforts to ensure long-term, profitable growth across all our businesses,” Lenovo added. In the 2016 financial year - not exactly a vintage one - Lenovo made 3,200 people redundant as it targeted cost savings of $650m, cutting its cloth accordingly to match the sickly PC sales environment. It wasn't exactly a great start to this FY’17, either, with revenue sliding six per cent to $10.05bn for Q1 ended 30 June, though the planned drop in operating expenses boosted the bottom line. The smartphone market is edging closer to global saturation, a trend that is hitting every brand from the premium players to the low-end makers.

http://www.theregister.co.uk/2016/09/27/lenovo/

Google's AI translation system is approaching human-level accuracy

#Google is one of the leading providers of artificial intelligence-assisted language translation, and the company now says a new technique for doing so is vastly improving the results. The company’s AI team calls it the Google Neural Machine Translation system, or #GNMT, and it initially provided a less resource-intensive way to ingest a sentence in one language and produce that same sentence in another language. Instead of digesting each word or phrase as a standalone unit, as prior methods do, GNMT takes in the entire sentence as a whole. "The advantage of this approach is that it requires fewer engineering design choices than previous Phrase-Based translation systems," write Quoc V. Le and Mike Schuster, researchers on the #GoogleBrain team. When the technique was first employed, it was able to match the accuracy of those existing translation systems. Over time, however, GNMT has proved capable of both producing superior results and working at the speed required of Google’s consumer apps and services. These improvements are detailed in a new paper published this week.

http://www.theverge.com/2016/9/27/13078138/google-translate-ai-machine-learning-gnmt

Google launches offline YouTube app and Wi-Fi expansion in India

#Google wants to do a lot more for mobile users in India. The tech giant laid out its plans on Tuesday at an event in New Delhi to help people save data and get online. To start, Google is adapting its apps to handle slower mobile connections and is looking to add more Wi-Fi hotspots across the country.

http://money.cnn.com/2016/09/27/technology/google-india/

Rumors hint Nvidia, Apple may get back together

For the last few years, #Apple has been an #AMD-only shop. Graphics may not be a core area of focus for Apple (#OpenGL support in the latest version of #macOS remains stuck in 4.1 territory), but the company does include discrete graphics cards in several of its #MacBook Pro and #iMac products, as well as in all of the Mac Pro SKUs. Now, there’s talk that Apple might switch back to Nvidia. Bloomberg first noticed multiple job listings at #Nvidia (some of which have since been edited) that talk about “help[ing] produce the next revolutionary Apple products” and claim the role would require “working in partnership with Apple” and writing code that will “define and shape the future” of the Mac’s graphics software. There are also openings on the Mac driver team, and Nvidia doesn’t bother writing drivers and software for a platform it doesn’t intend to sell.

http://www.extremetech.com/gaming/236349-rumors-hint-nvidia-apple-may-get-back-together

Rep. McCaul: US Must Gain Decryption Edge

House Homeland Security Committee Chairman Michael McCaul is calling on Congress to increase spending on #quantumcomputing research to ensure that the United States is the first nation to employ quantum computing as a tool to decrypt data. "We can't lose this one to the Chinese," he says. #Google #Dwave

Speaking at a U.S. Chamber of Commerce cybersecurity summit Sept. 27, McCaul, R-Texas, noted: "If China develops quantum computing first, it would be a national security disaster. If we want to be first in this area, the federal government has to be driving this."

McCaul compared the quantum computing race to the space race in which the United States landed the first person on the moon. "We want to be first in this; it's really important," he said.

A Long-Term Effort

The National Institute of Standards and Technology has begun work on post-quantum cryptography, anticipating that quantum computers could be used to crack current approaches to encryption (see Tackling Quantum Computing Threats to Cryptography).

Quantum computers will exploit the quantum states of subatomic particles to solve certain classes of problems exponentially faster than today's devices. That capability, in theory, could break the public-key encryption in wide use today, whose security rests on the difficulty of problems such as factoring very large numbers.
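
To make that threat concrete, here is a small hedged sketch using deliberately tiny textbook RSA numbers, not anything NIST-specific: recovering an RSA private key is exactly the factoring problem that Shor's algorithm would solve efficiently on a large quantum computer. Trial division stands in for the quantum attack only because the modulus here is absurdly small.

```python
# Toy illustration of why quantum computing threatens public-key crypto:
# RSA's private key falls out immediately once n = p * q is factored.
# The primes are textbook-tiny; real keys use ~2048-bit moduli that only
# a large quantum computer (running Shor's algorithm) could factor.
from math import isqrt

def toy_rsa_keypair(p=61, q=53, e=17):
    """Textbook RSA with deliberately tiny primes (never use in practice)."""
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e), pow(e, -1, phi)      # public key, private exponent d
                                        # (modular inverse; Python 3.8+)

def break_by_factoring(n, e):
    """Recover the private exponent from the public key alone."""
    for p in range(2, isqrt(n) + 1):    # stand-in for Shor's algorithm
        if n % p == 0:
            q = n // p
            return pow(e, -1, (p - 1) * (q - 1))
    raise ValueError("no factor found")

(n, e), d = toy_rsa_keypair()
recovered_d = break_by_factoring(n, e)
assert recovered_d == d

message = 42
ciphertext = pow(message, e, n)
assert pow(ciphertext, recovered_d, n) == message   # attacker decrypts
print("private key recovered by factoring:", recovered_d)
```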

Work on refining cryptography so it's effective in the era of quantum computing must start now because "it will take 10 to 20 years to get new algorithms selected, standardized and implemented out into the field," says NIST mathematician Dustin Moody, who co-authored a NIST report on post-quantum cryptography.

http://www.bankinfosecurity.com/rep-mccaul-us-must-gain-decryption-edge-a-9422

Tuesday, September 27, 2016

Dell EMC Expands Broad Microsoft Support, Delivering New Innovations across Cloud and Converged Infrastructure

New services and validated systems help customers build individualized cloud solutions that increase business agility, improve efficiency and lower costs.

September 27, 2016, ATLANTA--(BUSINESS WIRE)--MICROSOFT #IGNITE -- “Dell and Microsoft have a long history of working together to speed customers’ journey to hybrid cloud. We are pleased to extend our alliance with Dell EMC to help even more customers leverage hybrid cloud on their terms with Microsoft Cloud Services.”

News Highlights:
New Microsoft Azure Cloud Services from Dell EMC help customers easily adopt hybrid cloud, streamline operations and extend the functionality of their IT infrastructure.
Dell EMC customers worldwide experience the positive benefits and improved business results from the Dell EMC Hybrid Cloud System for Microsoft.
New Dell EMC Validated Systems for Microsoft Exchange and Validated Systems for Microsoft SQL provide fast and flexible infrastructure for business intelligence and collaboration.

Full Story: Dell EMC today announced broadened support of Microsoft environments with new Microsoft Azure Services, Validated Systems for Microsoft Exchange and SQL, as well as customer momentum for the Dell EMC Hybrid Cloud System for Microsoft. The new services and validated systems can enable customers to increase business agility, improve efficiencies and lower costs with highly customizable hybrid cloud and infrastructure solutions. “For more than 30 years, Dell EMC and Microsoft have focused on delivering best-in-class, innovative solutions that span the entire Microsoft product portfolio to organizations all over the world,” said Jim Ganthier, senior vice president, Validated Solutions Organization, Dell EMC Converged Platforms Solution Division. “Together, we are working to both simplify and accelerate customers’ journey to the cloud, helping them build individualized cloud solutions that are future-proofed for business and IT transformation.”

Through this alliance, Dell EMC today announced availability of a new suite of Azure Cloud Services, which enables customers to easily adopt hybrid cloud services, increasing efficiencies and extending the capabilities of their IT infrastructure. These cloud services are hosted, managed and delivered from Azure. Backed by Dell EMC enterprise-grade service level agreements, 24/7 tech support and service health monitoring, customers pay only for the services they consume, with no long-term contracts. The new services include:

Azure Backup – a flexible, secure, scalable backup solution in Microsoft’s cloud management portfolio. With no capital investment and consumption-based pricing, this service delivers significant cost savings while protecting data running in virtual machines or on physical servers, as well as in Azure.

Azure Dev/Test Labs – a service that allows developers and testers to create and provision Windows and Linux environments quickly using reusable templates and pre-provisioned environments in Azure. As a result, customers can minimize time and waste, control costs and scale up load testing efficiently.

Azure Business Continuity Solution – a solution that utilizes multiple Azure services delivered with Dell EMC consulting, provisioning, break/fix support, and single-source pay-as-you-go billing. It brings enterprise-class business continuity to small businesses by making it accessible and radically simpler, without their having to design, build or maintain it on their own.

Dell EMC Hybrid Cloud System for Microsoft Gains Momentum, Helps Businesses Simplify Cloud Delivery, Management and Scale

Since its introduction in October 2015, customers from financial services to technology providers have selected the Dell EMC Hybrid Cloud System (DHCS) for Microsoft to power their hybrid cloud environments. These customers are benefiting from the industry’s first integrated hybrid cloud solution that offers simplified, automated deployment and maintenance capabilities and unparalleled hybrid cloud governance, control, and policy-based automation for Azure, with an optional, unique payment solution to reduce investment risk:

Glarner Kantonalbank, an innovative digital Swiss bank, wanted to upgrade its already successful hybrid cloud. It chose the DHCS, installed with the help of Dell EMC Deployment Services, in the first rollout of its kind in Europe. As a result, it can expand and become a full-service provider of digital services for other banks.

CGI Group Inc., the fifth-largest independent information technology and business process services firm in the world, offers private cloud deployments with a customized approach to meet client requirements. With DHCS, the company significantly improves time-to-value for clients by simplifying deployments, reducing risks and cutting deployment time by up to 50 percent.

PeeringOne, a Malaysia-based cloud provider, needed to meet growing customer demand for cloud services while guaranteeing reliability and performance. The modular scale-out architecture of the DHCS enabled PeeringOne to minimize capital expenditure and leverage cost savings to offer compelling customer price points.

Pulsant, a UK-based cloud hosting company, increasingly required a hybrid cloud based on Microsoft technology. After running a successful proof of concept, the company deployed two DHCS appliances and expects its new hybrid cloud offerings to drive up to $2 million (US) in new business.

http://www.businesswire.com/news/home/20160927005295/en/Dell-EMC-Expands-Broad-Microsoft-Support-Delivering

Mellanox and Micron Smash 1 Terabit per Second Storage Performance Record with Windows Server 2016

SUNNYVALE, Calif. & YOKNEAM, Israel--(BUSINESS WIRE)-- #Mellanox® Technologies, Ltd. (NASDAQ:MLNX), a leading supplier of high performance cloud and storage networking solutions, today announced a high performance #WindowsServer2016 storage solution that delivers 1.2Tb/s data communication bandwidth, maximizing cloud infrastructure performance at the lowest possible cost. This solution eliminates the traditional bottlenecks associated with software-defined data center deployments, while enabling all enterprise applications to run over #Windows Server 2016 with extremely high efficiency.

http://www.businesswire.com/news/home/20160927005382/en/Mellanox-Micron-Smash-1-Terabit-Storage-Performance

Surge Seen for Software-Defined Datacenter Market

The booming #bigdata market and growing demand for automated datacenter operations are spurring adoption of management and other services offered by software-defined datacenters, according to a new report that also forecasts a 32-percent annual growth rate for virtualized computing, networking and storage. In a report released this week, Allied Market Research pegs the software-defined datacenter market at $139 billion by 2022. While cost savings and flexibility continue to drive global adoption, the market researcher also noted that integration complexity and security threats still represent a drag on wider adoption. Among the adoption drivers for virtualized datacenters is the steady embrace of hybrid cloud models as enterprises look to leverage cloud flexibility and savings while securing proprietary data and applications on-premises. The Asia-Pacific region is expected to be the fastest-growing region during the forecast period as more datacenters are built and major players like the Chinese government make heavy investments in IT infrastructure. The region accounted for an estimated 20 percent of the global market for virtualized datacenters in 2014. Exponential growth in big data projects also is driving demand for virtualized storage, the report noted. Hence, software-defined infrastructure is seen as the most cost-effective way of securely storing the projected 40 zettabytes to be generated by big data initiatives by 2020. Soaring data volumes also are expected to place a premium on hyper-scale infrastructure that can be properly sized for data analytics efforts.
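
A quick sanity check on the report's figures, under the assumption that the 32 percent CAGR runs over a 2016-2022 forecast window (the article does not state the base year), shows what such growth implies:

```python
# Implied base-year market size if $139B in 2022 results from 32% annual
# growth over six years. The 2016 base year is an assumption for illustration.
cagr, end_value, years = 0.32, 139e9, 2022 - 2016

growth_factor = (1 + cagr) ** years
implied_base = end_value / growth_factor
print(f"Growth factor over {years} years: {growth_factor:.1f}x")   # ~5.3x
print(f"Implied 2016 market size: ${implied_base / 1e9:.1f}B")     # ~$26B
```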

http://www.enterprisetech.com/2016/09/27/surge-seen-software-defined-datacenter-market/

RAID Inc., ClearPointe And Chelsio Unveil NVMe-Tuned Hyper-Converged Solution For Windows Server 2016 Storage Spaces Direct

ATLANTA, Sept. 27, 2016 /PRNewswire/ -- RAID Inc., a high performance computing solutions company, #ClearPointe, a Gold level #Microsoft datacenter partner, and #Chelsio Communications, a leading provider of high performance Ethernet adapters for storage networking, virtualized enterprise datacenters, cloud service installations, and cluster computing environments, today announced that they will demonstrate a hyper-converged infrastructure based on Microsoft #StorageSpaces Direct in Windows Server 2016, at Microsoft Booth #MS34 and Lenovo Booth #1010 at the Microsoft Ignite Conference in Atlanta, Georgia, from September 26 to September 30, 2016. The new 4-node cluster solution will be presented and managed by RAID Inc. and ClearPointe to validate the high-performance computing capabilities achieved by pairing storage-at-scale expertise with Windows Server design expertise. It integrates best-of-breed Chelsio 40GbE iWARP RDMA-enabled networking adapters that offload network stack processing from the CPU, providing a proof point that higher-performance networking enables faster data communication.

http://www.prnewswire.com/news-releases/raid-inc-clearpointe-and-chelsio-unveil-nvme-tuned-hyper-converged-solution-for-windows-server-2016-storage-spaces-direct-300334958.html

SUSE Enterprise Storage Review

#SUSE Enterprise Storage is a software-defined storage solution powered by #Ceph, designed to help enterprises manage ever-growing data sets. Further, SUSE aims to help customers take advantage of favorable storage economics as hard drives continue to get larger and flash prices continue to fall. While typically configured as a hybrid, Ceph is ultimately as flexible as customers demand it to be. While much of the software-defined gusto these days is focused on primary storage and hyperconverged offerings, Ceph is helping to fuel significant hardware development as well. #HP, #Dell, #Supermicro and others have all invested heavily in dense 3.5" chassis with multiple compute nodes in an effort to provide the underlying hardware platforms Ceph requires. In the case of this particular review, we leveraged #HPE gear including ProLiant servers and Apollo chassis, but SUSE Enterprise Storage can be deployed on just about anything.
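
For a sense of what "deployed on just about anything" looks like from the application side, the sketch below uses the python-rados bindings that ship with Ceph to store and read back an object. The pool name, configuration path and cluster details are placeholders for whatever a given SUSE Enterprise Storage deployment provides, so treat this as an outline rather than a tested recipe.

```python
# Minimal object round-trip against a Ceph cluster via python-rados.
# Paths and pool names below are assumptions; adjust to your cluster.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")   # assumed default config
cluster.connect()
try:
    print("Cluster stats:", cluster.get_cluster_stats())

    ioctx = cluster.open_ioctx("demo-pool")             # hypothetical pool name
    try:
        ioctx.write_full("hello-object", b"stored via librados")
        print(ioctx.read("hello-object"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```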

http://www.storagereview.com/suse_enterprise_storage_review