
Wednesday, July 25, 2018

Dell EMC Updates Unity & SC Series Storage Software

Today @DellEMC announced major software updates to its #Unity and SC Series midrange storage lines. For Unity, the company is moving to Operating Environment (OE) 4.4, which brings a boost in file-based replication and data mobility as well as new security capabilities for block and file workloads. Dell EMC also announced that it has shipped 1 exabyte of raw flash storage capacity since Unity's May 2016 debut. For the SC Series Operating System (SCOS) 7.3, users gain a new Unisphere HTML5 interface and a big boost in performance. Together, today's updates underscore the company's commitment to innovation for midrange solutions. The updates, built into both Dell EMC Unity and SC Series all-flash and hybrid storage arrays, give customers new features and better performance. The upgrades can be done non-disruptively and at no cost for those with an active ProSupport agreement. With over 1 exabyte of raw flash storage capacity shipped in two years, Dell EMC Unity is quite a popular midrange storage option. It is designed to be simple: easy to deploy, with streamlined management and the ability to seamlessly tier unified storage workloads to the cloud. The Unity OE 4.4 update adds:

- Array-based file replication: new MetroSync array-based file replication protects transactional file applications with rapid recovery and automated failover to a secondary data center; 2-way NDMP improves performance for better NAS backup times while reducing WAN/LAN network congestion
- Data mobility: built-in data mobility capabilities to migrate block data from third-party storage arrays directly to Dell EMC Unity
- Security enhancements: LDAP/AD enhancements for performance, availability, and search flexibility for Microsoft environments

Dell EMC is also releasing an update to its SCOS, up to version 7.3. The update is all about performance, and we had a review of its new performance here; feel free to check out the review. The company states that the update also delivers the following:

- Major IOPS performance boost: now up to 2.2M max IOPS per array and over 20M in federated multi-array environments; a 2X increase in maximum IOPS across every current SC array model and 50,000-100,000 more mixed-workload IOPS across every SC array
- Improved management for single and multi-array environments: the new Unisphere for SC HTML5 web UI simplifies and automates common storage management tasks and shares a common look and feel with Unisphere for Dell EMC Unity
- Easier, more cost-effective upgrades and expansion: up to a 2X increase in maximum capacity per array; data-in-place upgrades for the popular SC4020 offer a cost-effective option for customers to migrate data to newer SC models; the SCv2000 can now federate/replicate with other SC Series arrays
- More enterprise-class efficiency and availability: support for 100Gb and 25Gb iSCSI network hardware offers customers future-proof solutions and buying assurance; distributed sparing for greater I/O efficiency and up to 500% faster rebuilds; Live Volume enhancements designed to deliver lower latency and reduced network traffic in high-availability environments

Availability: Dell EMC Unity OE 4.4 and SC Series SCOS 7.3 are available now as no-cost software upgrades for customers with an active Dell ProSupport agreement.

https://www.storagereview.com/dell_emc_updates_unity_sc_series_storage_software

Tuesday, July 24, 2018

DARPA Effort Pushes Beyond Moore’s Law

Responding to market forces that are driving the otherwise vibrant global electronics industry toward narrower computing applications and away from the next wave of innovation, the Defense Advanced Research Projects Agency (DARPA) is pouring $1.5 billion into “foundational enhancements” designed to jumpstart the post-Moore’s Law electronics industry. The five-year roadmap, dubbed the Electronics Resurgence Initiative (ERI), hits the ground running this week with a summit in San Francisco focusing on what program managers identify as ERI’s three pillars: new chip architectures, IC design, and materials and integration. “What we are trying to do are foundational enhancements that are beyond what any one corporate partner would want to do or try to do on their own,” said William Chappell, director of DARPA’s Microsystems Technology Office. “The double-edged sword of Moore’s Law is that while the train is kept moving, the costs across the board from fab to design to verification have all gone up precipitously and even exponentially,” Chappell added. “What we are trying to do is look at other avenues, for example in fabrication, where you can get benefit through either novel materials or novel integration processes that don’t rely just on the scaling of transistors.” DARPA also announced a list of university and corporate research efforts aimed at advancing its “three-pillar” chip strategy. For example, ERI places a large bet on maintaining U.S. and European leadership in chip design and the valuable intellectual property it generates. “We need to make sure that we have new inventions occurring across our country and other allied countries where we invent new processes as older processes are being replicated en masse.” While chip maker Intel Corp. and challengers such as GPU specialist Nvidia Corp. and ARM increasingly focus on specific applications, many driven by AI and other emerging machine learning applications, DARPA is attempting to reboot the semiconductor R&D cycle in response to huge global chip investments. Chappell specifically cited China’s estimated $150 billion investment in an indigenous chip industry. That initiative would underpin broader efforts, including Beijing’s national strategy to become the world leader in AI by 2030. The technological battleground in this global competition, DARPA reckons, revolves around new materials, automating the design of new systems-on-chips (SoCs), and novel architectures that could, for example, be mixed and matched as building blocks in larger systems. One effort, dubbed Intelligent Design of Electronic Assets, or IDEA, seeks to develop a platform supporting an automated design flow for SoCs along with the chip packages and interconnects to larger systems. These future chip design flows would leverage machine learning, analytics and automation technologies used to verify chip designs before they are mass produced. Contractors for the IDEA program include Cadence Design Systems, Nvidia and Carnegie Mellon University. ERI’s chip architecture research focuses on reconfigurable frameworks that leverage specialized hardware to tackle specific computing problems. Contractors for the Software Defined Hardware (SDH) initiative include Intel, Nvidia and top U.S. engineering schools. The hardware effort and a separate “domain-specific” SoC effort were launched last fall.
Meanwhile, Nvidia will lead a team of industry and university researchers under the SDH program seeking performance gains akin to today’s ASICs “without sacrificing programmability for data-intensive algorithms,” the company said Tuesday (July 24). Among the goals of the chip architecture effort is bringing an open-source software approach to hardware development. Among the applications are machine vision and machine learning, Chappell added. The semiconductor materials and circuit integration theme addresses looming performance issues such as “memory bottlenecks” that plague current big data applications. One task is finding new ways to combine dissimilar blocks of chip intellectual property, many based on different materials and circuit designs. The initiative, which includes researchers from MIT, Stanford and other leading engineering schools, would focus on both emerging 3-D SoC designs and future approaches under the rubric of the Foundations for Novel Compute (FRANC) program. Among the contractors for the 3-D chip effort is Skywater Technology Foundry, a pure-play 200-mm chip fab in Minnesota with roots dating back to Control Data Corp. and, more recently, Cypress Semiconductor. Ultimately, DARPA’s Chappell said, the chip initiative is designed “to see what the art of the possible is.” He added: “It’s more important than ever that we have new inventions coming out of the pipeline to make sure the semiconductor space does not become a commodity.”

https://www.hpcwire.com/2018/07/24/darpa-effort-pushes-beyond-moores-law/

Google Cloud CEO Diane Greene: ‘We’re playing the long game here’

@Google is hosting its annual #CloudNext conference in San Francisco this week. With 25,000 developers in attendance, Cloud Next has become the cloud-centric counterpart to Google I/O. A few years ago, when the event only had about 2,000 attendees and Google still hosted it on a rickety pier, @DianeGreene had just taken over as the CEO of Google’s #cloud businesses and Google had fallen a bit behind in this space, just as @Amazon and @Microsoft were charging forward. Since then, Google has squarely focused on bringing business users to its cloud, both to its cloud computing services and to G Suite. Ahead of this year’s Cloud Next, I sat down with Diane Greene to talk about the current state of Google Cloud and what to expect in the near future. As Greene noted, a lot of businesses first approached cloud computing as an infrastructure play — as a way to get some cost savings and access to elastic resources. “Now, it’s just becoming so much more. People realize it’s a more secure place to be, but really, I feel like in its essence it’s all about super-charging your information to make your company much more successful.” It’s the cloud, after all, where enterprises get access to globally distributed databases like Cloud Spanner and machine learning tools like AutoML (and their equivalent tools from other vendors). When she moved to Google Cloud, Greene argued, Google was missing many of the table stakes that large enterprises needed. “We didn’t have all the audit logs. We didn’t have all the fine-grained security controls. We didn’t have the peer-to-peer networking. We didn’t have all the compliance and certification,” she told me. People told her it would take Google ten years to be ready for enterprise customers. “That’s how long it took Microsoft. And I was like, no, it’s not 10 years.” The team took that as a challenge and now, two years later, Greene argues that Google Cloud is definitely ready for the enterprise (and she’s tired of people calling it a ‘distant third’ to AWS and Azure). Today, when she thinks about her organization’s mission, she sees it as a variation on Google’s own motto. “Google’s mission is to organize the world’s information,” she said. “Google Cloud’s mission then is to supercharge our customers’ information.” When it comes to convincing large enterprises to bet on a given vendor, though, technology is one thing, but a few years ago, Google also didn’t have the sales teams in place to sell to these companies. That had to change, too, and Greene argues that the company’s new approach is working as well. And Google needed the right partners, too, which it has now found with companies like SAP, which has certified Google’s Cloud for its Hana in-memory database, and the likes of Cisco.

https://techcrunch-com.cdn.ampproject.org/v/s/techcrunch.com/2018/07/24/google-cloud-wants-enterprises-to-think-beyond-infrastructure/amp/?amp_js_v=a2&amp_gsa=1#amp_tf=From%20%251%24s&ampshare=https%3A%2F%2Ftechcrunch.com%2F2018%2F07%2F24%2Fgoogle-cloud-wants-enterprises-to-think-beyond-infrastructure%2F

How edge networking and IoT will reshape data centers

The Internet as we have all known it mirrors the design of old #mainframes with dumb terminals: the data path is almost entirely geared toward data coming down the network from a central location. It doesn’t matter if it’s your iPhone or a green text terminal, the fast pipe has always been down, with relatively little data sent up. The arrival of IoT threatens to turn that on its head. IoT will mean a massive flood of endpoint devices that are not consumers of data, but producers of it, data that must be processed and acted upon. That means sending lots of data back up a narrow pipe to data centers. For example, an autonomous car may generate 4TB of data per day, mostly from its sensors, but 96% of that data is what is called true but irrelevant, according to Martin Olsen, vice president of global edge and integrated solutions at Vertiv, a data center and cloud computing solutions provider. “It’s that last 4% of what’s not true that is the relevant piece. That’s the data we want to take somewhere else,” he said. So does this mean a massive investment in rearchitecting your network for fatter pipes into the data center? Or can the advent of edge computing take the load off central data centers by doing much of the processing work at the edge of the network? What is edge computing? Edge computing is decentralized data processing specifically designed to handle data generated by the Internet of Things. In many cases, the compute equipment is housed in a physical container or module about the size of a cargo shipping container, and it sits at the base of a cell tower, because that’s where the data is coming from. Edge computing has mostly been used to ingest, process, store and send data to cloud systems. It is at the edge where the wheat is separated from the chaff and only relevant data is sent up the network. If the 4% Olsen talks about can be processed at the edge of the network rather than in a central data center, it reduces bandwidth needs and allows for faster response than sending it up to the central server for processing. All of the major cloud providers – like @AWS, #Azure or @Google #Compute Engine – offer #IoT services and process what is sent to them. In many cases, the edge can perform that processing and discard the unneeded data. Since cloud providers charge by how much data they process, it is in the customer’s financial interest to reduce the amount they send up for processing.
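To make that filtering pattern concrete, here is a minimal, illustrative sketch of the idea described above: an edge node inspects raw sensor readings locally and forwards only the small relevant fraction upstream. The field names, thresholds and the notion of an upstream collector are hypothetical placeholders for this example, not part of any particular vendor's edge stack.

```python
import json

def is_relevant(reading):
    # Keep only readings that deviate from the expected baseline by more
    # than the allowed tolerance; everything else is "true but irrelevant".
    return abs(reading["value"] - reading["expected"]) > reading["tolerance"]

def filter_at_edge(readings):
    """Return only the readings worth sending up to the central data center."""
    return [r for r in readings if is_relevant(r)]

if __name__ == "__main__":
    batch = [
        {"sensor": "lidar-1", "value": 0.98, "expected": 1.00, "tolerance": 0.05},
        {"sensor": "lidar-1", "value": 1.42, "expected": 1.00, "tolerance": 0.05},
        {"sensor": "temp-3", "value": 21.1, "expected": 21.0, "tolerance": 0.50},
    ]
    relevant = filter_at_edge(batch)
    # In a real deployment this payload would be sent to a cloud IoT
    # endpoint; the discarded readings never leave the edge site.
    print(f"forwarding {len(relevant)} of {len(batch)} readings upstream")
    print(json.dumps(relevant, indent=2))
```

The point of the sketch is only the shape of the data path: the decision about what is relevant happens next to the devices, so the narrow pipe back to the data center carries a fraction of the raw volume.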

https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html

World-first quantum computer simulation of chemical bonds using trapped ions

An international group of researchers has achieved the world's first multi-qubit demonstration of a quantum chemistry calculation performed on a system of trapped ions, one of the leading hardware platforms in the race to develop a universal quantum computer. The research, led by University of Sydney physicist Dr. Cornelius Hempel, explores a promising pathway for developing effective ways to model chemical bonds and reactions using quantum computers. It is published today in Physical Review X, a prestigious journal of the American Physical Society. Dr. Hempel said: "Even the largest supercomputers are struggling to model accurately anything but the most basic chemistry. Quantum computers simulating nature, however, unlock a whole new way of understanding matter. They will provide us with a new tool to solve problems in materials science, medicine and industrial chemistry using simulations." With quantum computing still in its infancy, it remains unclear exactly what problems these devices will be most effective at solving, but most experts agree that quantum chemistry is going to be one of the first 'killer apps' of this emergent technology. Quantum chemistry is the science of understanding the complicated bonds and reactions of molecules using quantum mechanics. The 'moving parts' of anything but the most simple chemical processes are beyond the capacity of the biggest and fastest supercomputers. By modelling and understanding these processes using quantum computers, scientists expect to unlock lower-energy pathways for chemical reactions, allowing the design of new catalysts. This will have huge implications for industries such as fertiliser production. Other possible applications include the development of organic solar cells and better batteries through improved materials, and using new insights to design personalised medicines. Working with colleagues at the Institute for Quantum Optics and Quantum Information in Innsbruck, Austria, Dr. Hempel used just four qubits on a 20-qubit device to run algorithms simulating the bond energies of molecular hydrogen and lithium hydride. These relatively simple molecules were chosen because they are well understood and can be simulated using classical computers, which allows scientists to check the results provided by the quantum computers under development. Dr. Hempel said: "This is an important stage of the development of this technology as it is allowing us to set benchmarks, look for errors and plan necessary improvements."
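As a rough illustration of that classical cross-check, the sketch below uses NumPy to exactly diagonalise a small, made-up two-level Hamiltonian and read off its ground-state energy. The matrix values are placeholders rather than the actual molecular-hydrogen Hamiltonian from the paper, but the principle is the same: for molecules this small, the exact answer is cheap to compute classically and can be compared against the energy estimated on the trapped-ion hardware.

```python
import numpy as np

# Toy 2x2 Hamiltonian in an effective two-level basis (placeholder values,
# not the Hamiltonian used in the experiment).
H = np.array([
    [-1.05, 0.18],
    [ 0.18, -0.35],
])

# Exact diagonalisation is trivial at this size, which is why small
# molecules make good benchmarks for early quantum hardware.
eigenvalues, eigenvectors = np.linalg.eigh(H)
ground_energy = eigenvalues[0]
ground_state = eigenvectors[:, 0]

print(f"exact ground-state energy: {ground_energy:.4f} (arbitrary units)")
print("ground-state amplitudes:", np.round(ground_state, 4))

# A quantum processor estimating the same quantity (for example with a
# variational algorithm) should converge toward this classical reference.
```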

https://phys-org.cdn.ampproject.org/v/s/phys.org/news/2018-07-world-first-quantum-simulation-chemical-bonds.amp?amp_js_v=a2&amp_gsa=1#amp_tf=From%20%251%24s&ampshare=https%3A%2F%2Fphys.org%2Fnews%2F2018-07-world-first-quantum-simulation-chemical-bonds.html

Iron Mountain Breaks Ground On A $430 Million Data Center

The new 550,000-square-foot, three-story, 48-megawatt multi-tenant facility will double the size of the existing adjacent campus, which was formerly the flagship location of IO Data Centers. Iron Mountain acquired the existing 38-megawatt data center and the adjacent land parcel when it purchased the U.S. operations of Phoenix-based IO Data Centers LLC for $1.34 billion in January 2018. The new facility, at the northeast corner of 48th and Van Buren streets, will be built in two phases over five years. The first phase, which will add 24 megawatts of capacity, is scheduled to open in June 2019. IO Data Centers bought the nine-acre parcel in 2016 for $8.55 million, intending to expand its Phoenix campus over the next six to eight years. Planning and approvals for the site were complete prior to IO’s acquisition by Iron Mountain, according to Data Center Frontier. As the largest provider of data center services in the Phoenix market, IO Data Centers developed an impressive customer base, including enterprise customers in finance, aerospace and technology. With its acquisition of IO, Iron Mountain gained four data centers—two in Arizona, one in New Jersey and one in Ohio—with a combined total of 728,000 square feet and 62 megawatts of capacity. The company already owned six data centers in Boston, Denver, Kansas, Pennsylvania and Virginia. Originally founded in 1951, Boston-headquartered Iron Mountain now owns more than 85 million square feet of real estate in 53 countries. Its 1,400-plus facilities serve more than 230,000 organizations around the world. The company’s total data center portfolio potential represents more than 285 megawatts. And the expansion continues. Iron Mountain has been both building and buying additional capacity in multiple key markets. It entered the Croatian market with its June 2017 purchase of Zagreb-based Arhiv Trezor, a provider of secure records management, destruction and transportation. The next month, Iron Mountain acquired Denver-based FORTRUST for $128 million, gaining a 210,000-square-foot data center in that city.

https://www-forbes-com.cdn.ampproject.org/v/s/www.forbes.com/sites/ellenbarber/2018/07/23/iron-mountain-breaks-ground-on-a-430-million-data-center/amp/?amp_js_v=a2&amp_gsa=1#amp_tf=From%20%251%24s&ampshare=https%3A%2F%2Fwww.forbes.com%2Fsites%2Fellenbarber%2F2018%2F07%2F23%2Firon-mountain-breaks-ground-on-a-430-million-data-center%2F

Monday, July 23, 2018

Google Cloud Launches Blockchain Toolkit for App Developers

Developing #blockchain solutions for enterprise clients just got easier for @Google Cloud users, thanks to a collaboration between the computing giant and a distributed ledger technology (DLT) startup led by former @JPMorgan executive @BlytheMasters. Announced on Monday, the partnership will see Digital Asset provide a software development kit (SDK) to developers working on Google Cloud, enabling them to test and build blockchain applications without having to code the entire platform themselves. Leonard Law, Head of Financial Services Platform at Google Cloud, said in a statement that DLT has “great potential” for customers in a number of industries.

https://www-ccn-com.cdn.ampproject.org/v/s/www.ccn.com/google-cloud-launches-blockchain-toolkit-for-app-developers/amp/?amp_js_v=a2&amp_gsa=1#amp_tf=From%20%251%24s&ampshare=https%3A%2F%2Fwww.ccn.com%2Fgoogle-cloud-launches-blockchain-toolkit-for-app-developers%2F

Five Reasons Data Center Liquid Cooling Is on the Rise

Traditionally reserved for mainframes and academic supercomputers, liquid cooling may soon be seeping into more enterprise data centers. New, more demanding enterprise workloads are pushing up power densities, leaving data center managers looking for more efficient alternatives to air-based cooling systems. We’ve asked a number of data center operators and vendors about the applications that are driving liquid cooling into the mainstream. Some of them didn't want to disclose specific applications, saying they viewed those workloads and the way they’re cooled as a competitive advantage. #Hyperscale #cloud operators, including @Microsoft, @Alphabet’s @Google, @Facebook, and @Baidu, have formed a group working on an open specification for #liquidcooled server racks without saying exactly what they would use them for. At least one category of workloads in the hyperscalers’ arsenal, however, clearly calls for liquid cooling: #machinelearning systems accelerated by #GPUs, or, in Google’s case, also #TPUs, which the company has said publicly are now cooled using a direct-to-chip liquid cooling design. Despite operators’ caginess around this subject, some usage trends are starting to emerge. If you're supporting any of the following workloads in your data center, liquid cooling may be in your future too:

http://www.datacenterknowledge.com/power-and-cooling/five-reasons-data-center-liquid-cooling-rise

Assessing Dell EMC’s aggressive channel plans

@DellEMC is delivering on the promise of increased opportunities through the channel by rolling out a fresh round of partner incentives in markets experiencing growth. Forming part of aggressive indirect plans both locally and globally, the tech giant is actively rewarding competitive take-outs, heavily compensating individual sellers and driving business growth across multiple business lines. The goal is to create an ecosystem of partners turning over US$50 billion globally, up rapidly from the US$35 billion posted only 14 months ago. Driven by partner program refinements, Dell EMC is tabling a new strategy for the channel which, on the one hand, emphasises a shift in thinking for the vendor, but also recognises the successes of past endeavours. “There’s plenty of focus on how we grow the channel,” said Joyce Mullen, president of global channels at Dell EMC. “And there’s plenty of focus on how we grow direct. “We don’t have a goal to say that X per cent of our business will go through the channel, and X per cent will be direct. “We want to ensure our customers can choose how they want to transact. Our goal is to grow both parts of the business as fast as possible, to gain as much share as possible and to get our solutions to market.” In a reference to Dell EMC’s chequered channel past, Mullen - when speaking to ARN - accepted that conflict can occur between direct sellers and partners, but insisted “rules of engagement” are in place to ensure transparency across both sides of the business. “That’s why we have a deal registration process,” Mullen explained. “We’ve got to keep simplifying this process and make our approach predictable so our partners know when they have a responsibility to drive the sale, and when the direct team should.” Working from a black and white playbook, the tech giant aims to drive profitability by incentivising partners to acquire new contracts, grow existing customers and expand into new lines of business, specifically services and storage, billed as two blockbuster markets for Dell EMC in 2018 with the “most upside” for partners. “From a business point of view, partners are trying to control their costs,” said Geoff Wright, general manager of channels across Australia and New Zealand (A/NZ) at Dell EMC. “They are trying to simplify their own business which is why we’ve created a single program which incorporates multiple lines of business. We have a team that can help partners sell across different segments through one sale. “It means the channel is no longer dealing with this vendor, or that vendor, and we’re seeing partners want to lock this process down because it’s proving successful for them.” A case in point is Nutanix, which revealed new partner incentives for the channel earlier this year, with the vendor expanding its standalone software license offerings with Dell EMC. As reported by ARN, the new incentives include a rebate of up to five per cent for winning new customers and a $3,000 per node rebate for selling qualified Dell EMC XC core nodes with Nutanix software. Closer to the Dell Technologies family, VMware is another example of changes designed to simplify how partners engage and interact on a daily basis. Specifically, Dell EMC was officially appointed as a distributor of VMware technologies across A/NZ in March, in a move open to the vendor’s leading local partners.
Revealed by Reseller News, Titanium Black and Titanium partners are now eligible to purchase VMware solutions from Dell EMC, as the vendor aligns technology offerings across the wider Dell Technologies portfolio. “We’re seeing high growth locally across the emerging technologies markets,” Wright added. “Hyper-converged is a good example as we’re seeing strong adoption, alongside our server and storage businesses. “But then also consumption models are changing which I think is an important factor to consider. The way our customers are consuming our technology is miles ahead of other parts of the world, and that’s because we’re leveraging our early adopter status globally.” Program changes Changes come as part of a global partner program refresh, which resulted in a new rebate structure, training improvements and MDF changes. Delving deeper, the changes are designed to increase profitability with refined rebates through a simpler structure, which includes “flexible” pay options. Meanwhile, training requirements have been simplified and unified, with the roll-out of eight solutions competencies in 2018, starting with software-defined infrastructure, hybrid cloud and connected workforce competencies in the first quarter. “We’re always listening to what our partners are telling us about what needs to be different or what needs to change,” Mullen acknowledged. “We made some changes at the beginning of the year to drive more emphasis on services and to simplify those offerings. “We had a pretty complicated services rebate structure and people couldn’t figure it out. So we simplified all of that.” Alongside a need for increased simplification, Mullen said the new rounds of incentives are designed to drive a specific behaviour in partners, chiefly around services. “We want to drive adoption of specific services offerings,” Mullen said. “We’re trying to be super targeted because simplification is always a good thing.” The vendor has also initiated changes around certification, revamping training credentials and requirements in a bid to better enable partners from a technology standpoint. “We drove a bunch of new incentives to help our partners focus on storage to make sure that’s an even more profitable line of business for them,” Mullen added. “There’s big rewards for competitive swaps, technology refreshes and that kind of thing.” Such changes have been made with the individual in mind, with targeted campaigns aimed at sellers, who can now earn “big bucks” by selling Dell EMC storage solutions, in addition to taking out rival business. “We’re trying to make sure that we’re engaging not only the partners and the owners through the rebate structure but also the sellers on the ground,” Mullen said. The move comes following a bumper quarter of growth for Dell EMC both globally and locally, with the vendor’s server business growing 41 per cent year-on-year during the first quarter of 2018. “We’re already number one and it’s a huge base we’re working from,” Mullen explained. “We definitely took share. “Going forward we’ve got to continue this trend because we did go through a period of losing share in storage.
Getting back up to that 32 per cent was really important for our business and we’ve now got 68 per cent to go.” For Mullen, a further 68 per cent growth in a market tracking close to US$3 trillion is not a “high percentage point” to chase, with Dell EMC currently accounting for US$86 billion of the market. In short, opportunities are still rife for resellers, system integrators and managed service providers. “There’s a lot of business to be won,” Mullen said. “And we’ve changed to make it easier for the channel to do more business with us. It’s back to focusing on the simple, predictable and profitable mantra.” Evolution of the ecosystem For Mullen, however, this means more than simply being operationally easier to transact and engage with. This is table stakes for a channel-focused vendor, and seldom considered value-add given that partners now demand more. “We want to fast track the ability of our partners to sell across the Dell Technologies portfolio and this is centred around transformation,” Mullen said. “We also want to facilitate selling around new technologies such as artificial intelligence, machine learning and the Internet of Things (IoT). “Our goal is to ensure that Dell Technologies is the essential infrastructure company to support those environments. This means we must continue investing in research and development to drive growth across the market.” Alongside traditional technology deployments - such as servers, storage and infrastructure - partners are now beginning to branch out in pursuit of new and emerging technologies, tapping customer curiosity and confusion in equal measure. “This represents a huge opportunity for our partners because it’s not exactly obvious how everyone should adopt these technologies,” Mullen acknowledged. “Where there’s confusion in the market, this creates a need for consulting and services. “Our partners are getting smart around this part of the market and we’re already seeing great examples through the channel.” As the current channel chases new opportunities through new technologies, new partners are also entering the market in parallel, creating a new-look competitor for traditional players. Whether it be specialised players, born-in-the-cloud providers, start-ups or developers, such partners are disrupting the supply chain, redefining the very notion of the channel in the process. “There will be new channel partners and we’re working on trying to identify what those characteristics are,” Mullen said. “We’re calling them operating technology partners internally. “These could be partners who understand manufacturing or oil and gas or transportation. There are some new ones there. “Some of our existing partners are also trying to build skills in those areas or have been building skills and in some instances, this is not really that new. “We have partners who are doing that type of work already and we want to figure out how to work together to create the right kind of solutions in the market.” Despite the direction of travel, Mullen acknowledged that challenges remain around identifying the system integrators operating with specialised industry knowledge, with Dell EMC currently operating on a “country by country” basis.
And of course, size isn’t always a factor when searching for niche capabilities, with the onus now on Dell EMC to strike up combinations between vendor, distributor, partner and end-user. “Together with our distributors we’ve identified combinations between independent software vendors (ISVs) and infrastructure providers,” Mullen explained. “We’re bundling different types of solutions and driving them through distribution to partners and onto customers. “We have a handful of those bundles currently in market and will show off a few more in the coming months.” At this stage, Dell EMC works closely with around “80 or 90” ISVs globally, with plans in place to more “thoroughly develop” this area of the ecosystem. “We’ve got to figure out the right structure because some new-look partners may never buy a thing from us,” Mullen said. “But they could be really important influencers and crucial to delivering on the outcomes that our customers are looking for.”

https://www.arnnet.com.au/article/644203/assessing-dell-emc-aggressive-channel-plans/

PCMag Breaks Down Dell's Return to a Public Company

After a period of restructuring and digital transformation following its $67 billion acquisition of EMC Corporation in 2015, Dell Technologies is re-entering the public market. Five years ago, Dell went private due to uncertainty regarding the PC market. The company had yet to develop a strong presence in cloud computing and had a limited IT infrastructure lineup in some areas as well. When Dell went private in the $25 billion deal, the company was able to focus on pleasing customers rather than investors. Dell founder and CEO Michael Dell had fought off a rival bid from investor Carl Icahn to complete the buyout and regain control of the company on Oct. 29, 2013. "Today, Dell enters an exciting new chapter as a private enterprise," Dell said in a statement that day. "Our 110,000 team members worldwide are 100-percent focused on our customers and aggressively executing our long-term strategy for their benefit." At that time, he told a conference room of Dell employees, "It's great to be here and to not have to introduce Carl Icahn to you." Anatomy of The Dell Deal In a $21.7 billion deal announced on July 2, 2018, Dell will exchange a Class V tracking stock called DVMT that monitors the performance of VMware for a new class of common shares called the Dell Technologies Class C stock. The company will exchange each share of Class V common stock for 1.3665 shares of Dell Technologies Class C common stock. The Class V shares will be worth $109 per share. The deal will close in the fourth quarter of 2018 and is subject to Class V stockholder approval. Dell generated $21.4 billion in revenue in the first quarter of 2018, a 19-percent increase year over year, according to the company. Silver Lake Partners, the private equity firm that gained a share of Dell when the company went private in 2013, will maintain its existing stake in Dell. In a statement, Egon Durban, Managing Partner and Managing Director at Silver Lake Partners, said that the deal would keep Dell "strategically positioned to take full advantage of the new era of emerging technology trends, including Internet of Things (IoT), artificial intelligence (AI), machine learning (ML), 5G, cloud computing, and mobility." In a call with investment analysts on July 2, 2018, Dell boasted that his company has increased PC share for 21 straight quarters and leads in revenue for that category as well as x86 servers. In addition, in the first quarter of fiscal year 2019, he said the company's non-Generally Accepted Accounting Principles (GAAP) revenue was up 17 percent year over year, the company's highest mark for quarterly year-over-year growth since 2011. "You know, earnings were up, strong double digits, and this is about simplifying our capital structure and exposing the value that we've created to shareholders," Dell told CNBC on July 2. Dell owns 72 percent of Dell Technologies common stock and Silver Lake Partners owns 24 percent. "It makes it easier for Dell, as they become public, to start exercising some of the benefits of being a public company with public equity, but still allowing Michael and Silver Lake to have a large degree of control," said J. Craig Lowery, Research Director for Cloud Service Providers at Gartner Research and a former Dell executive. Why Dell Made the Deal The company declined to make a Dell executive available for this article. However, in the July 2 call, Dell said that the move to go public would be a way to simplify the company's structure and provide more flexibility for the business.
This was in response to a question from Shannon Cross, analyst and owner of Cross Research. "As you know, earlier this year, we kicked off this process, looked at the various alternatives, and concluded that this was a great way to not only simplify the structure but create flexibility for us and expose the great businesses that we have here back to the public markets," Dell said. @TomSweet, Chief Financial Officer at @DellTechnologies, said the company plans to simplify its product lineup roadmap for storage and data protection solutions. Dell has been consolidating its storage offerings following its acquisition of @EMC, and is a formidable maker of network attached storage ( #NAS ) appliances and other #datacenter infrastructure. After the successful integration of EMC's businesses, including @Pivotal, @RSA, @Virtustream, and @VMware, into the @Dell Technologies portfolio, analysts say now was a good time to go public. "And this timing of going back into the public market, I think, is based on the execution of the integration of the EMC deal. It's gone very well," Gartner's Lowery said. "The value of having the EMC portfolio, and the synergy of VMware and all the other companies that come in the EMC portfolio aligned with Dell, that's all been shown to be real." Following the EMC merger, Dell Technologies paid off debt and sold off Dell Services and its software businesses, including Quest Software and SonicWall. "They started selling some of the smaller divisions, and they had good operating profit and cash flow. They were paying off debt regularly there, and were paying it off in big chunks," said Roger Kay, founder and President of market research firm Endpoint Technology Associates. "They got past the risky part of the transaction, and now they're in a pretty stable position. "Maybe it was a good time to go back to the public market," Kay continued. "It doesn't quite explain why Michael was so happy to have gone private in the first place, and why he's equally happy with going public again. I expect it's because he made money personally and that's the answer to the question." Analysts also say the Trump administration's 2017 tax legislation, the Tax Cuts and Jobs Act (TCJA), was a factor in making the deal happen. "The Republican tax plan that was implemented a few months ago included provisions that reduced or eliminated companies' ability to deduct the interest they pay on loans and debt," said Charles King, President and Principal Analyst at Pund-IT. "That directly impacts Dell, even though it has paid down a substantial portion of the debt it took on to purchase EMC." In addition, the deal will position Dell for future acquisitions, according to industry insiders. "As Dell goes public, it's probably going to start pursuing more mergers and acquisitions," Gartner's Lowery said. "So I expect that we will see them shake the market up and start going to look for targets." What It Means for Customers The deal will likely keep Dell focused on the same long-term growth plan and end-to-end product strategy it had while private, the company reported. By offering end-to-end IT hardware along with effective infrastructure management tools, the company says it will be positioned strongly enough to back up its marketing slogan of servicing customers "from the edge to the core to the cloud." Dell, along with Lenovo, is one of two Tier 1 IT vendors that have this type of end-to-end strategy, but will going public affect Dell's customers?
"I don't think that customers are going to see an immediate change," Lowery said. "I think, long term, it improves things for them because it puts Dell on a path to continue to expand its capabilities, products, and services." The real change is how Dell focused on its customers when the company went private, according to Endpoint's Kay. "When Dell was able to focus on fewer constituencies rather than keeping a weather eye out on investors, they could keep both eyes focused on customers and they did," Kay said. He doesn't expect this focus on customers to change after Dell goes public. What the Deal Means for VMware Under the deal, VMware, which owns the mobile device management (MDM) company AirWatch, will remain independent as a separate, publicly traded company, with Dell owning 81 percent of VMware common stock. "This transaction simplifies our capital structure while maintaining VMware's independence," said Tom Sweet, Chief Financial Officer at Dell, in the July 2 call. Sweet explained that letting VMware maintain its own currency on the market and financial flexibility could help with recruiting talent for the virtualization software company, which specializes in software-defined IT infrastructure. Dell will continue to invest in software-defined technology through its stake in VMware, Dell said. Via a software-defined architecture, the software layer not only controls management features, but also directly controls the data center's compute, network, and storage infrastructure on the same virtual layer. This represents not only cost and implementation advantages, but also a new level of response time should the larger business' needs change abruptly. "Prior to Dell's announcement, there was speculation that Dell and VMware might merge into a single entity," Pund-IT's King said. "That seems to be off the table." Moving Forward Without Big Changes Dell plans to continue to grow in multi-cloud management, application development, and data analytics. The plans to go public will not lead to big changes in product strategy in the short term, the analysts said. "I don't think it'll have anything to do with the asset side of the balance sheet," Endpoint's Kay said. "The project strategy, goals, all that stuff is the same." Pund-IT's King agreed that Dell likely wouldn't introduce significant changes in its data center-focused products and services in the short term. "However, that's likely to change as emerging technologies, like IoT and AI, become mature," he said. "Dell is well positioned to pursue those opportunities." In response to a question about whether or not there will be management changes following the move to go public, Dell said he doesn't foresee changes in management strategy. "The simple answer is we don't anticipate any changes," Dell said. "I think if you look at the last five years, we've been consistently investing for growth and it's been working. We've had steady, strong share gains across our businesses. And we intend to continue to do that."

http://uk.pcmag.com/vmware-airwatch/116521/feature/pcmag-breaks-down-dells-return-to-a-public-company

7 ways Teradata’s CMO plans to turnaround the 40-year old technology brand

If there’s one attribute @MartynEtherington is counting on to realise his ambitious brand transformation plans at @Teradata, it’s bravery. “As a CMO, you cannot have fear, you have to be fearless. You can’t blink otherwise the team senses that, other functions will sense that, and you’re dead in the water,” the recently appointed chief marketing officer tells CMO. “You have to have conviction you’re going to do this. I’ve done this a couple of times before, and without arrogance or conceitedness, I believe this is achievable.” Etherington has been in marketing for most of his career, working in international and European roles for technology companies such as DEC, Sequent, @IBM, @Tektronix, @Danaher, @Mitel, @Cisco and now @Teradata. He’s also built “and had near-death professional experiences” working on three startups during the height of the dotcom boom and bust. And that’s not to mention running business divisions regionally and globally and taking up P&L responsibility along the way. In March, Etherington took on his next big turnaround challenge: transforming market and customer perceptions of Teradata from a data warehouse appliance technology provider 20 years out of date to a data-analytics-at-scale powerhouse with customers’ interests at its heart.

https://www.cmo.com.au/article/643919/7-ways-teradata-cmo-plans-turnaround-40-year-old-technology-brand/

Complete Solutions from A to Zones

@Zones @nfrastructure™ delivers complete #lifecycleservices for all your IT needs. Our services include everything from #advancedtechnology to #servicedelivery, #consulting, branch and store #modernization, and practically everything in between. Our team of certified technicians is ready to design, deploy, and manage all of your IT solutions across multiple distributed enterprise locations with absolute precision and efficiency.

Our proprietary #nterprise platform allows you to track the progress of the deployment and implementation process in real time, giving you complete project transparency, detailed inventory levels, field technician status, and up-to-the-minute views at any location. You can even view and print downloadable photos to verify and document project completion every step of the way.

Our partnership doesn't end at the point of sale.

Today's business challenges are more complex than ever before. Companies need to be more flexible, scalable, and secure, without disrupting workflows or employee productivity. And when you add in things like the scalability of people, processes, and project intake, it's easy to see why having a technology partner who can provide end-to-end services is so important.

And that's where we can help.

Not only do we offer the types of solutions businesses need today, we can deliver them with the speed, agility, and precision you need to stay competitive long into tomorrow.

http://www.zones.com/site/statics/static_page.html?name=solutions/nfrastructure&utm_source=LinkedIn&utm_medium=Zones&utm_campaign=Solutions

Ready-to-deploy deep learning solutions

Accelerate your deep learning project deployments with Radeon Instinct™ powered solutions. Deep learning adoption is lagging as companies struggle with how to make it work. Now a new ecosystem is rising to deliver the integrated pieces that ultimately will be part of one turnkey system for deep learning. Automation has proved its worth in meeting IT and business objectives. Even so, efficiencies in automation and work augmentation software can be greatly enhanced with deep learning. Yet deep learning adoption rates are low. That’s in part because the tech is difficult, and the talent pool is thin. The good news is that an ecosystem is forming and already beginning to resolve some of these issues as it continues to grow towards becoming a single turnkey system. Why it takes an ecosystem A Deloitte report found that fewer than 10% of the companies surveyed across 17 countries invested in machine learning. The chief reasons for the adoption gap are a lack of understanding of how to use the technology, an insufficient amount of data to train it with, and a shortage of talent to make it all work. Translated into the simplest of terms, deep learning is perceived by some to be too hard to deploy for practical use. The solution for that dilemma is what it has always been for any new technology requiring esoteric skill sets and faced with a talent shortage – build an easy-to-use, turnkey system. That is, of course, easier said than done. “The ongoing digital revolution, which has been reducing frictional, transactional costs for years, has accelerated recently with tremendous increases in electronic data, the ubiquity of mobile interfaces, and the growing power of artificial intelligence (AI),” according to a McKinsey & Company report. “Together, these forces are reshaping customer expectations and creating the potential for virtually every sector with a distribution component to have its borders redrawn or redefined, at a more rapid pace than we have previously experienced.” That’s why today’s sophisticated and complex systems are commonly constructed not by a single vendor but by a strong and diverse ecosystem capable of delivering the many moving parts needed to make a single turnkey system. Especially when said systems must be equally workable for companies across industries and with diverse needs. As a result, ecosystems are growing at breathtaking speeds. McKinsey & Company analysts predict that new ecosystems are likely to entirely replace many traditional industries by 2025. Such an ecosystem is forming for machine learning. It’s seeded with four recently launched, ready-to-deploy solutions. They center on AMD’s Radeon Instinct training accelerator for machine learning, and its ROCm Open eCosystem (ROCm), an open source HPC/Hyperscale-class platform for GPU computing. AMD takes open source all the way down to the graphics card level. Open source is key to successfully wrangling machine learning systems as it leverages the skills and coding work from entire communities and makes an ecosystem functional across technologies and applications. The ROCm open ecosystem This newly forming ecosystem is optimal for beginning or expanding your deep learning efforts, whether you are the IT person looking to get pre-configured deep learning technologies in place, or the scientist who just needs access to HPC systems with one of the frameworks loaded. Either way, users can quickly get to work with their data.
Developers also have full and open access to the hardware and software, which speeds their work in developing frameworks. Everything AMD develops for its Radeon Instinct system is open source and available on GitHub. The company also has Docker containers for easier installs of ROCm drivers and frameworks, which can be found on the ROCm Docker site. Caffe and TensorFlow machine learning frameworks are offered now, with more to follow soon. A deep learning solutions page has gone live, which features the four systems that serve as the bud of the blooming ecosystem rooted in AMD technologies. The framework Docker containers will be listed there as well. This budding machine learning ecosystem is already bearing fruit for organizations looking to launch machine learning training and applications with a minimum of technical effort and expertise by combining: fast and easy server deployments; the ROCm Open eCosystem and infrastructure; deep learning framework Docker containers; and optimized MIOpen framework libraries. The four systems forming the ecosystem center “Data science is a mix of art and science—and digital grunt work. The reality is that as much as 80 percent of the work on which data scientists spend their time can be fully or partially automated,” according to a Deloitte report. This newly forming ecosystem is focused on automating much of the machine learning process. While complicated to achieve, the end results are far easier for organizations to use. Deloitte identified five key vectors of progress that should help foster significantly greater adoption of machine learning by making it more accessible. “Three of these advancements—automation, data reduction, and training acceleration—make machine learning easier, cheaper, and/or faster. The others—model interpretability and local machine learning—open up applications in new areas,” according to the Deloitte report. There are four prebuilt systems shaping this ecosystem early on. Each is provided by an independent partner and built on or for AMD’s Radeon Instinct and ROCm platforms, but their initial presentations are at varying levels of integration. While more partners will join the ecosystem over time, these four provide a solid bedrock for organizations looking to get started in machine learning now. 1) AMAX is providing systems with preloaded ROCm drivers and a choice of framework, either TensorFlow or Caffe, for machine learning, advanced rendering and HPC applications. 2) Exxact is similarly providing multi-GPU Radeon Instinct-based systems with preloaded ROCm drivers and frameworks for deep learning and HPC-class deployments, where performance per watt is important. 3) Inventec provides optimized high performance systems designed with AMD EPYC™ processors and Radeon Instinct compute technologies capable of delivering up to 100 teraflops of FP16 compute performance for deep learning and HPC workloads. 4) Supermicro is providing SuperServers supporting Radeon Instinct machine learning accelerators for AI, big data analytics, HPC, and business intelligence applications. The payoff from leveraging the technologies in a machine learning ecosystem potentially comes in many forms. “A growing number of tools and techniques for data science automation, some offered by established companies and others by venture-backed start-ups, can help reduce the time required to execute a machine learning proof of concept from months to days.
And, automating data science means augmenting data scientists’ productivity in the face of severe talent shortages,” say the Deloitte researchers.
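As a practical aside, once one of these preloaded systems is up, a short script along the following lines can confirm that the installed framework actually sees a GPU device before any training jobs are scheduled. This is a generic TensorFlow 1.x-style sketch written for illustration, not an AMD-supplied tool, and it assumes the framework and GPU drivers were installed per the vendor's own container or system documentation.

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# List every compute device the framework can see (CPUs plus any GPUs
# exposed by the underlying driver stack).
devices = device_lib.list_local_devices()
for d in devices:
    print(d.device_type, d.name)

if not any(d.device_type == "GPU" for d in devices):
    print("No GPU visible to TensorFlow; check the driver and container setup.")

# Run one trivial op so we know the runtime is usable, not just importable.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
product = tf.matmul(a, b)

with tf.Session() as sess:
    print(sess.run(product))
```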

https://www.hpcwire.com/2018/07/23/ready-to-deploy-deep-learning-solutions/

The quantum meltdown of encryption

@ShlomiDolev is the Chair Professor and founder of the Computer Science department of Ben-Gurion University of the Negev. He is the author of Self-Stabilization. Shlomi also is a cybersecurity entrepreneur and the co-founder and chief scientist of Secret Double Octopus. The quantum computing apocalypse is imminent The world stands at the cusp of one of the greatest breakthroughs in information technology. Huge leaps forward in all fields of computer science, from data analysis to machine learning, will result from this breakthrough. But like all of man’s technological achievements, from the combustion engine to nuclear power, harnessing quantum comes with potential dangers as well. #Quantumcomputers have created a slew of unforeseen vulnerabilities in the very infrastructure that keeps the digital sphere safe. The underlying assumption behind nearly all encryption ciphers used today is that their complexity precludes any attempt by hackers to break them, as it would take years for even our most advanced conventional computers to do so. But quantum computing will change all of that. Quantum computers promise to bring computational power leaps and bounds ahead of our most advanced machines. Recently, scientists at @Google began testing their cutting edge 72 #qubit quantum computer. With this machine, the researchers expect to demonstrate quantum supremacy, or the ability to perform a calculation that is impossible with traditional computers. Chink in the Armor Today’s standard encryption techniques are based on what’s called Public Key Infrastructure or PKI, a set of protocols brought to the world of information technology in the 1970s. PKI works by generating a complex cipher through random numbers that only the intended recipient of a given message, the one in possession of the private key, can decode. As a system of encoding data, PKI was sound and reliable. But in order to implement it as a method to be used in the real world, there was still one question that needed to be answered: how could individuals confirm the identity of a party reaching out and making a request to communicate? This vulnerability left the door open for cybercriminals to impersonate legitimate servers, or worse, insert themselves into a conversation between users and intercept communications between them, in what’s known as a Man-in-the-Middle (MITM) attack. The industry produced a solution to this authentication problem in the form of digital certificates, electronic documents the contents of which can prove senders are actually who they claim to be. The submission of certificates at the initiation of a session allows the parties to know who it is they are about to communicate with. Today, trusted third party companies called Certificate Authorities, or CAs, create and provide these documents that are relied upon by everyone from private users to the biggest names in tech. The problem is that certificates themselves rely on public-key cryptographic functions for their reliability, which, in the not too distant future, will be vulnerable to attack by quantum machines. Altered certificates could then be used by cyber criminals to fake their identities, completely undermining certificates as a method of authentication. Decentralizing the Threat This isn’t the first time we’ve had to get creative when it comes to encryption.
Decentralizing the Threat

This isn’t the first time we’ve had to get creative when it comes to encryption. When Bitcoin creator Satoshi Nakamoto, whose true identity is still unknown, revealed his revolutionary idea in a 2008 white paper, he also introduced the beginnings of a unique peer-to-peer authentication system that today we call blockchain. At its core, the blockchain is an open ledger that records transactions between two parties in a permanent way without needing third-party authentication. Blockchain provided the global record-keeping network that has kept Nakamoto’s digital currency safe from fraudsters.

Blockchain is based on the concept of decentralization, spreading the authentication process across a large body of users. No single piece of data can be altered without altering every block that follows it, which would require the collusion of the majority of the network.

For years, blockchain and Bitcoin remained one and the same. About five years ago, innovators in the industry began to realize that blockchain could be used for more than just securing cryptocurrency. Altering the original system designed for Bitcoin could produce programs applicable to a wide range of industries, from healthcare, to insurance, to political elections. Gradually, new decentralized systems began to emerge, such as those of Ripple and Litecoin. In 2015, Vitalik Buterin, who got his start in the Bitcoin community, released his Ethereum project, also based on blockchain. What these new platforms added to the picture was the ability to record new types of data in addition to currency exchanges, such as loans and contractual agreements.

The advantages of the blockchain concept quickly became apparent. By 2017, nearly fifteen percent of all financial institutions in the world were using blockchain to secure aspects of their operations. The number of industries incorporating decentralized systems continues to grow.

Saving PKI

The best solution for protecting encryption from our ever-growing processing power is integrating decentralization into Public Key Infrastructure. What this means, essentially, is that instead of keeping digital certificates in one centralized location, which makes them vulnerable to being hacked and tampered with, they would be spread out in a world-wide ledger, one fundamentally impervious to alteration. A hacker attempting to modify certificates would be unable to pull off such a fraud, as it would mean changing data stored on innumerable, diversified blocks spread out across the cyber sphere.

Decentralization has already proven to be a highly effective way of protecting recorded data from tampering. Similarly, using a blockchain-type system to replace the single-entity Certificate Authority can keep our digital certificates much safer. It is in fact one of the only foreseeable solutions to keep the quantum revolution from undermining the foundation of PKI.
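To make the tamper-evidence argument concrete, here is a toy hash-chained ledger in Python. It is a sketch of the general blockchain idea only, not Bitcoin's or any production system's implementation, and it omits the distributed-consensus half of the story: altering one block breaks every link after it, which is what would force an attacker to rewrite most of a real network.

import hashlib
import json

def block_hash(block):
    # Hash the block's contents, including the previous block's hash,
    # so every block commits to the entire history before it.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash}

# Build a tiny three-block chain of (illustrative) certificate records.
chain = []
prev = "0" * 64  # genesis placeholder
for record in ["cert A -> key 1", "cert B -> key 2", "cert C -> key 3"]:
    block = make_block(record, prev)
    prev = block_hash(block)
    chain.append(block)

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

print(verify(chain))           # True
chain[1]["data"] = "tampered"  # alter one block...
print(verify(chain))           # False: the next block's stored link no longer matches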

https://techcrunch.com/2018/07/22/the-quantum-meltdown-of-encryption/

Pure Storage Inc (PSTG) President David Hatfield Sells 150,000 Shares

@PureStorage Inc (NYSE:PSTG) President @DavidHatfield sold 150,000 shares of the stock in a transaction that occurred on Wednesday, July 18th. The stock was sold at an average price of $24.21, for a total value of $3,631,500.00. The sale was disclosed in a document filed with the SEC. David Hatfield also recently made the following trades: on Thursday, July 5th, he sold 4,963 shares of Pure Storage stock at an average price of $23.41, for a total value of $116,183.83, and on Monday, June 18th, he sold 150,000 shares at an average price of $24.74, for a total value of $3,711,000.00.

Pure Storage opened at $24.04 on Friday, according to MarketBeat.com. The company has a debt-to-equity ratio of 0.68, a current ratio of 4.34 and a quick ratio of 4.22. Pure Storage Inc has a fifty-two week low of $11.99 and a fifty-two week high of $25.62. The firm has a market cap of $5.58 billion, a price-to-earnings ratio of -28.62 and a beta of 1.32.

Pure Storage (NYSE:PSTG) last released its earnings results on Monday, May 21st. The technology company reported ($0.07) earnings per share for the quarter, topping the consensus estimate of ($0.12) by $0.05. Pure Storage had a negative return on equity of 35.13% and a negative net margin of 16.38%. The firm had revenue of $255.90 million during the quarter, compared to analysts’ expectations of $251.06 million. During the same period in the prior year, the company earned ($0.14) earnings per share. The company’s quarterly revenue was up 40.1% on a year-over-year basis. Analysts predict that Pure Storage Inc will post -0.75 EPS for the current fiscal year.

Several institutional investors and hedge funds have recently made changes to their positions in PSTG. Bank of New York Mellon Corp lifted its stake in shares of Pure Storage by 5.6% in the 4th quarter and now owns 393,122 shares of the technology company’s stock valued at $6,234,000 after acquiring an additional 20,730 shares during the last quarter. UBS Asset Management Americas Inc. lifted its stake by 55.0% in the 4th quarter and now owns 66,448 shares valued at $1,054,000 after acquiring an additional 23,572 shares. Geode Capital Management LLC lifted its stake by 47.5% in the 4th quarter and now owns 1,117,750 shares valued at $17,726,000 after acquiring an additional 359,886 shares. Alliancebernstein L.P. lifted its stake by 9.6% in the 4th quarter and now owns 141,500 shares valued at $2,244,000 after acquiring an additional 12,400 shares. Finally, Deutsche Bank AG lifted its stake by 26.4% in the 4th quarter and now owns 1,411,774 shares valued at $22,390,000 after acquiring an additional 295,206 shares. Institutional investors own 57.38% of the company’s stock.

A number of equities research analysts recently issued reports on PSTG shares. Stifel Nicolaus started coverage on Pure Storage in a research report on Tuesday, June 5th, setting a “hold” rating and a $23.00 price target on the stock.
Zacks Investment Research raised Pure Storage from a “hold” rating to a “buy” rating and set a $27.00 price target on the stock in a research report on Tuesday. Summit Insights raised Pure Storage to a “buy” rating in a research report on Tuesday, May 22nd. BMO Capital Markets increased their price target on Pure Storage from $25.00 to $29.00 and gave the stock an “outperform” rating in a research report on Tuesday, May 22nd. Finally, ValuEngine downgraded Pure Storage from a “buy” rating to a “hold” rating in a research report on Tuesday, May 29th. One analyst has rated the stock with a sell rating, eight have issued a hold rating and seventeen have issued a buy rating. The company presently has a consensus rating of “Buy” and a consensus target price of $23.62.

Pure Storage Company Profile

Pure Storage, Inc. engages in building a data platform that enables businesses to enhance performance and reduce complexity and costs worldwide. The company delivers its data platform through Purity Operating Environment, an optimized software for solid-state memory that offers enterprise-class storage and protocol services; FlashArray and FlashBlade, optimized hardware products for solid-state memory that enhance the performance and density of flash, optimize its advanced software services, and reduce solution cost for customers; Pure1, a cloud-based management and support software; and FlashStack, a converged infrastructure solution.
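For readers double-checking the insider-sale figures above, each reported total is simply shares multiplied by the average sale price; the throwaway Python below reproduces the three amounts.

# Quick sanity check of the reported insider-sale totals (shares x average price).
sales = [
    (150_000, 24.21),  # July 18th
    (4_963, 23.41),    # July 5th
    (150_000, 24.74),  # June 18th
]
for shares, avg_price in sales:
    print(f"{shares:>7} shares x ${avg_price} = ${shares * avg_price:,.2f}")
# 150000 shares x $24.21 = $3,631,500.00
#    4963 shares x $23.41 = $116,183.83
# 150000 shares x $24.74 = $3,711,000.00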

https://pressoracle.com/2018/07/21/pure-storage-inc-pstg-president-david-hatfield-sells-150000-shares.html

Artificial Intelligence Is Powering Worldwide Tech Bubble

Jason Burack of Wall St. for Main St. interviews first-time guest Cris Sheridan, Webmaster, Senior Editor, and co-host of the FS Insider podcast at Financial Sense, to discuss the unfolding tech bubble 2.0 led by developments in artificial intelligence and machine learning. First, before asking Cris about the technology and recent innovations in the AI industry, Jason asks him for a definition of artificial intelligence. Drawing on his years of research and interviews with AI experts on his FS Insider podcast, Cris gives a different definition of AI than what's popularly conceived by the public. Jason and Cris also discuss the arms race among entire countries and their militaries regarding quantum computing, AI, robotics, and other leading technologies. Cris talks about how AI is being combined with other technologies like Big Data to help AI learn faster. Most of the largest technology companies have research and development (R&D) budgets for AI in the billions of dollars per year.
https://seekingalpha.com/article/4188893-artificial-intelligence-powering-worldwide-tech-bubble

How This Founder Went from $0 to $300M In Just Four Years

For @BipulSinha, there seems to be only one speed: super fast. He is the co-founder and CEO of @Rubrik, which is considered the fastest-growing infrastructure software company … ever. The company has built a software-defined cloud platform that seamlessly manages backups, instant recovery, replication, search and analytics. For example, the technology can help with recovery from ransomware attacks, protect data from hurricanes and comply with the requirements of GDPR. And yes, Rubrik is a big-time disruptive threat to incumbents like @Dell and @Commvault. Some of the company’s backers include Lightspeed Ventures, Greylock Partners, Khosla Ventures and IVP. Hey, even NBA MVP Kevin Durant is an investor! So what can entrepreneurs learn from Sinha? How can you find ways to pick up the growth ramp? Well, let’s take a look:

https://www.forbes.com/sites/tomtaulli/2018/07/21/how-this-founder-went-from-0-to-300m-in-just-four-years/#7245ab365c03

Sunday, July 22, 2018

Google, Facebook, Microsoft, and Twitter partner for ambitious new data project

Today, @Google, @Facebook, @Microsoft, and @Twitter joined to announce a new standards initiative called the Data Transfer Project, designed as a new way to move data between platforms. In a blog post, Google described the project as letting users “transfer data directly from one service to another, without needing to download and re-upload it.”

The current version of the system supports data transfer for photos, mail, contacts, calendars, and tasks, drawing from publicly available APIs from Google, Microsoft, Twitter, Flickr, Instagram, Remember the Milk, and SmugMug. Many of those transfers could already be accomplished through other means, but participants hope the project will grow into a more robust and flexible alternative to conventional APIs. In its own blog post, Microsoft called for more companies to sign onto the effort, adding that “portability and interoperability are central to cloud innovation and competition.”

The existing code for the project is available open source on GitHub, along with a white paper describing its scope. Much of the codebase consists of “adapters” that can translate proprietary APIs into an interoperable transfer, making Instagram data workable for Flickr and vice versa. Between those adapters, engineers have also built a system to encrypt the data in transit, issuing forward-secret keys for each transaction. Notably, that system is focused on one-time transfers rather than the continuous interoperability enabled by many APIs. “The future of portability will need to be more inclusive, flexible, and open,” reads the white paper. “Our hope for this project is that it will enable a connection between any two public-facing product interfaces for importing and exporting data directly.”

The bulk of the coding so far has been done by Google and Microsoft engineers, who have long been tinkering with the idea of a more robust data transfer system. According to Greg Fair, product manager for Google Takeout, the idea arose from a frustration with the available options for managing data after it’s downloaded. Without a clear way to import that same data to a different service, tools like Takeout were only solving half the problem. “When people have data, they want to be able to move it from one product to another, and they can’t,” says Fair. “It’s a problem that we can’t really solve alone.”

Most platforms already offer some kind of data-download tool, but those tools rarely connect with other services. Europe’s new GDPR legislation requires tools to provide all available data on a given user, which means it’s far more comprehensive than what you’d get from an API. Along with emails or photos, you’ll find thornier data like location history and facial recognition profiles that many users don’t even realize are being collected. There are a few projects trying to make use of that data — most notably Digi.me, which is building an entire app ecosystem around it — but for the most part, it ends up sitting on users’ hard drives. Download tools are presented as proof that users really do own their data, but owning your data and using it have turned into completely different things.

The project was envisioned as an open-source standard, and many of the engineers involved say a broader shift in governance will be necessary if the standard is successful.
“In the long term, we want there to be a consortium of industry leaders, consumer groups, government groups,” says Fair. “But until we have a reasonable critical mass, it’s not an interesting conversation.”

This is a delicate time for a data-sharing project. Facebook’s API was at the center of the Cambridge Analytica scandal, and the industry is still feeling out exactly how much users should be trusted with their own data. Google has struggled with its own API scandal, facing outcry over third-party email apps mishandling Gmail users’ data. In some ways, the proposed consortium would be a way to manage that risk, spreading the responsibility out among more groups.
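The project's published code is Java, so the following Python fragment is only an illustrative sketch of the adapter pattern described above, with hypothetical service names and no real API calls: each adapter converts a service's proprietary data into a shared model, and a one-time transfer pipes an exporter into an importer.

from dataclasses import dataclass
from typing import Iterable, List

# A minimal common data model: every adapter converts its service's
# proprietary representation to and from this shape.
@dataclass
class Photo:
    title: str
    url: str
    album: str

class Exporter:
    def export_photos(self) -> Iterable[Photo]:
        raise NotImplementedError

class Importer:
    def import_photos(self, photos: Iterable[Photo]) -> None:
        raise NotImplementedError

# Hypothetical adapters; a real one would call the service's public API.
class ServiceAExporter(Exporter):
    def export_photos(self) -> List[Photo]:
        return [Photo("sunset", "https://service-a.example/1.jpg", "vacation")]

class ServiceBImporter(Importer):
    def import_photos(self, photos: Iterable[Photo]) -> None:
        for photo in photos:
            print(f"uploading '{photo.title}' into album '{photo.album}'")

def transfer(exporter: Exporter, importer: Importer) -> None:
    # One-time transfer: read everything from the source, write it to the destination.
    importer.import_photos(exporter.export_photos())

transfer(ServiceAExporter(), ServiceBImporter())

A real adapter would also wrap the payload in the project's encrypted, forward-secret transfer format rather than passing objects around in memory.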

https://www.theverge.com/2018/7/20/17589246/data-transfer-project-google-facebook-microsoft-twitter

IoT hacker builds Huawei-based botnet, enslaves 18,000 devices in one day

How long does it take to build a #botnet? Not long, if you consider #Anarchy's 18,000-device-strong creation, brought to life in only 24 hours. The botnet was first spotted by researchers from @NewSky Security, as reported by @Bleeping Computer, and other security firms including @Rapid7 and Qihoo 360's @Netlab quickly jumped on the case and confirmed the existence of the new threat.

The security teams realized there had been a huge recent uptick in @Huawei device scanning. The traffic surge was due to scans seeking devices vulnerable to CVE-2017-17215, a critical security flaw which can be exploited through port 37215. Scans to find routers vulnerable to the issue began on 18 July.

If a Huawei router is exploited in this fashion, attackers can send malicious packets of data, launch attacks against the device, and remotely execute code -- code which can be crafted to control and enslave these devices and add them to botnets.

Botnets are vast networks of enslaved devices, which can include standard PCs, routers, smartphones, and, more recently, compromised Internet of Things (IoT) devices ranging from smart lights to fridges.

The LizardStresser botnet, a distributed denial-of-service (DDoS)-for-hire system, for example, was able to launch 400Gbps attacks thanks to our vulnerable IoT devices.

After the source code was released to the public in 2015, LizardStresser botnet variants were discovered which targeted IoT products using telnet brute-force logins against random IP addresses with a hard-coded list of user credentials.

Hard-coded credentials are a common problem with IoT products even today, and all it often takes is a simple scanner to compromise such devices.
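As a purely defensive illustration, the short Python check below tests whether TCP port 37215, the port abused in these scans, is reachable on a router you own and administer; the 192.168.1.1 address is a placeholder, not taken from the article.

import socket

# Defensive check only: test whether TCP port 37215 (the port abused in these
# Huawei router scans) is reachable on a device you own and administer.
def port_open(host: str, port: int = 37215, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except (socket.timeout, OSError):
        return False

router = "192.168.1.1"  # placeholder: replace with your own router's address
if port_open(router):
    print(f"Port 37215 is reachable on {router}; review firmware and firewall rules.")
else:
    print(f"Port 37215 is not reachable on {router}.")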

https://www.zdnet.com/article/iot-hacker-builds-huawei-based-botnet-using-18000-devices-in-one-day/

High Point company files lawsuit against Dell 

A @HighPoint company specializing in the manufacturing of tablet enclosures has filed a lawsuit against @DellTechnologies.

https://www.bizjournals.com/triad/news/2018/07/20/high-point-company-files-lawsuit-against-dell.html

OpenSDS Project Issues First Release for Software-Defined Storage

The @LinuxFoundation is hosting a little-known open source project focused on #softwaredefinedstorage. The #OpenSDS project officially launched with the #Linux Foundation in November 2016, and it recently issued its first code release, #Aruba. The impetus for the project was to bring together different companies to collaborate on an open storage platform, said Steven Tan, chairman of the OpenSDS technical steering committee and CTO of cloud storage solutions at Huawei. There are many open source projects for compute and networking; the founders of OpenSDS wanted something comparable for storage.

Some of the vendors behind the start of OpenSDS were @Hitachi, @Huawei, and @Fujitsu. @DellEMC joined a bit later. The group also includes end users of #softwaredefinedstorage such as @Vodafone, @YahooJapan, and @NTTCommunications.

“A lot of end users have a traditional IT set-up, and if they have cloud-native apps they still need to connect to the existing IT infrastructure,” said Tan. “A lot of projects are just addressing one environment. We’re trying to connect the dots and make sure to provide a framework that connects traditional with cloud-native.”

Similar to SDN, software-defined storage separates the control plane from the data plane. OpenSDS is creating the control plane for storage. Tan said it doesn’t matter whether the underlying storage is handled by hardware or software. The OpenSDS code provides a management and control interface.

“It’s definitely a very challenging endeavor,” said Tan. “In terms of more traditional storage like block or file, each vendor does stuff differently. OpenSDS is a consistent way to manage different storage.”
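OpenSDS itself is written in Go and exposes REST APIs, so the Python below is nothing more than a sketch of the control-plane idea Tan describes, with made-up driver names: callers talk to one management interface, and per-backend drivers hide whether the storage underneath is a hardware array or a software-defined pool.

from abc import ABC, abstractmethod
from typing import Dict

# Illustrative control-plane abstraction (hypothetical names, not OpenSDS code):
# one management interface in front of many backend drivers.
class StorageDriver(ABC):
    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str: ...

    @abstractmethod
    def delete_volume(self, volume_id: str) -> None: ...

class VendorArrayDriver(StorageDriver):
    def create_volume(self, name: str, size_gb: int) -> str:
        # A real driver would call the vendor's management API here.
        return f"array-{name}-{size_gb}gb"

    def delete_volume(self, volume_id: str) -> None:
        print(f"array driver deleting {volume_id}")

class SoftwarePoolDriver(StorageDriver):
    def create_volume(self, name: str, size_gb: int) -> str:
        return f"pool-{name}-{size_gb}gb"

    def delete_volume(self, volume_id: str) -> None:
        print(f"pool driver deleting {volume_id}")

class Controller:
    """The 'control plane': one API call path regardless of backend."""

    def __init__(self, drivers: Dict[str, StorageDriver]) -> None:
        self.drivers = drivers

    def create_volume(self, backend: str, name: str, size_gb: int) -> str:
        return self.drivers[backend].create_volume(name, size_gb)

ctl = Controller({"array": VendorArrayDriver(), "pool": SoftwarePoolDriver()})
print(ctl.create_volume("pool", "db-data", 100))  # pool-db-data-100gb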

https://www.sdxcentral.com/articles/news/opensds-project-issues-first-release-for-software-defined-storage/2018/07/


Amazon Won't Be Entering the Enterprise Switch Market After All

Alarm bells rang for @Cisco Systems (NASDAQ:CSCO) investors last Friday when The Information reported that e-commerce and cloud computing giant @Amazon.com (NASDAQ:AMZN) was considering selling cheap #whitebox #networking hardware to third parties. Amazon entering a market should never be taken lightly, and Wall Street reacted by sending shares of Cisco and other networking stocks plunging.

It was all much ado about nothing. Speaking to MarketWatch on Wednesday, a Cisco spokesman shot down the report: "Cisco and AWS have a long-standing customer and partner relationship, and during a recent call between Cisco CEO @ChuckRobbins and AWS CEO @AndyJassy, Andy confirmed that AWS is not actively building a commercial network switch." An AWS spokeswoman later confirmed that statement.

The Cisco Catalyst 9000 family of switches. Image source: Cisco Systems.

What it means

I argued last week that Amazon would face an uphill battle stealing market share away from Cisco. Cisco controls more than 50% of the market for networking switches, even after fighting off low-priced challengers for years. Amazon was reportedly considering undercutting Cisco on price by as much as 80%, which sounds like a disruptive move. But Cisco CEO Chuck Robbins said back in July that companies often spend $15 on operating costs over five years for each dollar spent on networking equipment. The price of the equipment, then, is far less important than the ongoing costs of using that equipment. Any cost savings from a lower-priced switch can easily be swamped by higher operating costs.

Cisco's latest line of switches, the Catalyst 9000 series, uses machine learning to reduce operating costs. When the company was testing the switches with large organizations last year, it cited an operating expense savings of 61% for those participants. That wasn't the only benefit -- Cisco also said that the impact of security breaches was reduced by 48%.
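To see why the operating-cost figure dominates, the quick Python arithmetic below normalizes the incumbent switch price to $1 and applies the numbers quoted above; the scenario values are illustrative, not actual prices.

# Illustrative five-year cost comparison using the figures quoted above:
# normalize the incumbent switch price to $1 and assume ~$15 of operating
# cost per $1 of equipment over five years. All values are hypothetical.
hardware = 1.0
opex = 15.0

incumbent_tco = hardware + opex                   # 16.00
cheap_box_tco = hardware * (1 - 0.80) + opex      # 15.20 (80% cheaper hardware, same opex)
lower_opex_tco = hardware + opex * (1 - 0.61)     # 6.85  (61% lower operating costs)

print(f"incumbent switch:         {incumbent_tco:.2f}")
print(f"80% cheaper hardware:     {cheap_box_tco:.2f}")
print(f"61% lower operating cost: {lower_opex_tco:.2f}")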

https://www.fool.com/investing/2018/07/19/amazon-wont-be-entering-the-enterprise-switch-mark.aspx

Meet the New Airlander 10: World's Largest Aircraft with Luxurious Bedrooms and Glass Floors

Ever wanted to travel the world in a giant balloon with open-air views and the added perks of luxury accommodations? The world's largest aircraft is about to be transformed into a ship that will let you do just that! British aerospace firm Hybrid Air Vehicles (HAV) and automotive and aviation design consultancy Design Q have partnered to develop a luxury tourism version of HAV's industrial Airlander 10, currently the world's largest aircraft.

Simulations of what the revamped model's interior is set to look like were revealed at this week's Farnborough International Airshow. The massive ship's new design will feature en-suite bedrooms, an Infinity Lounge with glass panels in both the floor and the roof, and an Altitude Bar for fine dining and drinks. The project, entitled ‘Airlander Luxury Tourism Design Development Project,’ was awarded a £60,000 Design Foundations Round 2 grant by the UK’s Innovation Agency. “We are excited with the prospect of working on such a unique project; not only is it the largest flying aircraft in the world, but it demands an interior that truly breaks new ground and provides an experience that will be unlike anything seen before.

This will be something that passengers will treasure all their lives,” said Howard Guy, CEO and co-founder of Design Q.

The Airlander was chosen as an ideal vehicle for high-end luxury tourism due to its unique ability to stay in the air for up to five days at a time in virtual silence. The ship's architecture, which also allows for floor-to-ceiling windows, makes it particularly well suited for navigating locations with breathtaking views.

Airlander 10 is also a hyper-efficient aircraft capable of flying for weeks at a time. The ship can also land virtually anywhere, including on desert sand, ice, and water, which means passengers can travel in luxury to some of the world's hardest-to-reach regions. The new ship's first trial flight is set for 2018. Once approved as safe for all passengers, the luxury craft will be available to travel to extreme and varied destinations ranging from the North Pole to the Namib Desert.

Traveling to hidden corners around the world

The ship's team has signed an agreement with luxury travel agency Henry Cookson Adventures to be the first to trial an expeditionary journey on the new craft.

The agency will collaborate with scientists and conservationists to offer tours to exotic, largely inaccessible locations hidden from most of the world.

“I have flown Airlander a number of times now, and am really excited about the possibility of taking the first passengers on board. I can imagine the awe and excitement of seeing the world in luxury, with amazing views, quietly and whilst respecting the environment,” said Dave Burns, Airlander chief test pilot.

https://interestingengineering.com/meet-the-new-airlander-10-worlds-largest-aircraft-with-luxurious-bedrooms-and-glass-floors?_source=newsletter&_campaign=o0Dkd3npXnDa3&_uid=Vyb82O6jev&_h=7458de4a4b7c504e517c638c1d978f9f00c336d4&utm_source=newsletter&utm_medium=mailing&utm_campaign=Newsletter-20-07-2018