
Friday, July 29, 2016

Samsung NAND flashes its hot measurements: 64L 4G 3D 2C by EY

#Samsung will have 64-layer #3DNAND in mass production before #WesternDigital (WD)/ #Toshiba and is looking into #3DXPoint memory alternatives. With WD/Toshiba announcing that their 64-layer 3D NAND has started production, the timing is exactly right for Sammy to say it will have its own 64-layer 3D NAND in mass production by the end of the year. WD/Toshiba's initial 256Gbit chip should be in mass production before the middle of 2017, up to 6 months later than Samsung. A Samsung representative on an earnings call said: "Our goal is to mass-produce SSDs that use our 4G V-NAND within the year." In the call Sewon Chun, from Samsung's memory marketing team, said: "Our development plan for fourth-generation V-NAND and mass production of 18nm DRAM is progressing smoothly." This 4th-generation V-NAND is a 64-layer technology. He added: "Although overall demand is expected to increase, thanks to increased demand for high-density solution products in major SSD and mobile applications, supply and demand is expected to become much tighter in the second half due to supply restrictions caused by 3D NAND ramp-up delays in the market."
http://www.theregister.co.uk/2016/07/29/samsungs_64l_4g_3d_2c/

Microsoft’s Iowa data center cluster to reach 3.2M square feet

From a data center perspective, Iowa has a lot going for it. Its industrial electric rates are 5.71 cents per kWh, near the bottom nationally, according to U.S. data. It has experienced “minor earthquake activity” since the U.S. took control in 1803 under the Louisiana Purchase.

It’s also far from hurricane trouble, but Iowa is at higher risk for tornadoes, and that may help explain #Microsoft ’s data center building plans.

Microsoft is building a 1.7 million-square-foot data center complex in the West Des Moines area. The project, announced last week, is on top of two other Microsoft data center projects for the area that were announced in 2008 and 2014.

http://www.computerworld.com/article/3101386/data-center/microsoft-s-iowa-data-center-cluster-to-reach-3-2m-square-feet.html

Microsoft to Cut Thousands of Jobs

Job cuts primarily affect #Microsoft ’s smartphone business and global sales unit. Microsoft is cutting more jobs. The business technology giant said in a regulatory filing on Thursday that it plans to lay off an additional 2,850 workers on top of the 1,850 job cuts it announced in May. In total, Microsoft will cut 4,700 jobs worldwide by the end of the company’s fiscal year 2017. Microsoft said that the job cuts will mainly target its smartphone hardware business and global sales unit.

http://fortune.com/2016/07/28/microsoft-layoffs-thousands-phone/

Consett's Thomas Swan & Co helps develop wonder material graphene for electronics

A FAMILY-OWNED chemical firm is working with research bosses to pave the way for increased use of a wonder material in electronics. Thomas Swan & Co is supporting the National Graphene Institute ( #NGI ). The company, based in Consett, County Durham, is known for working on #graphene, which experts say is an ultra-light carbon material capable of adding toughness to plastics and cutting friction in lubricants. Swan previously tailored its work to use graphene in inks and electronics displays, but has now supplied materials for NGI to create a thermal paste using the chemical compound boron nitride.

Officials say the paste allows electronics to run at much lower temperatures, meaning high-performance products can last longer.

Andy Goodwin, Swan’s commercial director in its advanced materials division, said: “This is a great opportunity to work with world-class scientists to increase our understanding of material technologies and accelerate their adoption into real-world applications.”

James Baker, NGI graphene business director, added: “It is now important we look to scale up and develop these amazing new materials, which could have an untold impact on industry in conjunction with graphene.”

Swan began as a company in the 1920s and was known for converting steel industry slag waste into road surfacing material.

However, when road builders switched from tar to bitumen as a binder, the company developed a surfactant, which became its first chemical product.

Its expertise now includes tyre and rubber additives in a performance chemicals division, while it also supplies an active ingredient that goes into cleaning agent Dettol.

http://m.thenorthernecho.co.uk/business/14646045.Consett_s_Thomas_Swan___Co_helps_develop_wonder_material_graphene_for_electronics/

Thursday, July 28, 2016

Wise Words From the "Hill"


A new fund backed by George Soros and Michael Dell is off to a stellar start

#OwlRock Capital Partners, a credit fund started by private equity veterans Marc Lipschultz and Doug Ostrover, has raised almost $1 billion. The firm aims to lend to small and medium businesses through Owl Rock Capital Corp., and has attracted investors including @GeorgeSoros, @MichaelDell, and #Alibaba Group cofounder Joseph Tsai, Bloomberg first reported. Big banks are retreating from the lending space amid tougher regulations in the wake of the financial crisis, and alternative lenders like Owl Rock, #ApolloGlobal Management, and Oaktree Capital Group are eager to fill the void. In an April filing, Owl Rock said there are nearly 200,000 middle-market companies in need of "access ... to refinance existing debt, support growth and finance acquisitions." Owl Rock's credit investments will range from $20 million to $250 million with maturities between three and 10 years, the filing shows. The firm's dealmaking team features alumni of Wall Street banks and alternative-investment giants. Lipschultz is a former head of energy and infrastructure investing at KKR, while Ostrover is an ex-Blackstone Group credit executive. Craig Packer, a former senior leveraged-finance banker at Goldman Sachs, joined as a cofounder in February and became CEO of Owl Rock Capital Corp.

http://www.businessinsider.com/owl-rock-capital-partners-hitting-1-billion-fundraising-mark-2016-7

Riverbed To Acquire Aternity To Bolster End-To-End Digital Visibility

Today, #Riverbed Technology announced that it is acquiring privately-held #Aternity. Aternity provides end-user experience (EUE) solutions as part of a broader application performance monitoring (APM) offering. The deal is expected to close in August 2016, and the financial terms of the deal were not disclosed. The acquisition will extend the reach of Riverbed’s SteelCentral performance monitoring solutions into end-user devices, rounding out an end-to-end solution that is critical for enterprise digital efforts.

“Aternity has a highly complementary value proposition with respect to the SteelCentral portfolio,” said Nik Koutsoukos, Vice President, Product Marketing at Riverbed Technology. “With Aternity, SteelCentral offers end-to-end performance monitoring, including the network infrastructure.”

http://www.forbes.com/sites/jasonbloomberg/2016/07/28/riverbed-to-acquire-aternity-to-bolster-end-to-end-digital-visibility/#6f2d2ad97fce

How Project Calico Transcends the Limits of Software-Defined Networking

#ProjectCalico is an open source layer three virtual networking system for containers, which can be deployed across a variety of today's platforms and setups, including hybrid environments. While centralized control can quickly cause an underlying software-defined network ( #SDN ) to reach load capacity, Project Calico's removal of a central controller helps ease this pain point for developers. In this episode of The New Stack Makers, we explore how Project Calico hopes to streamline and evolve software-defined networking, as well as Calico's collaboration with Flannel to build the Canal policy-based secure networking software. "With Project Calico, there is no centralized controller. It uses etcd as a high-level key value store. Then we have an agent that runs on every host, that has an algorithm that calculates in a distributed fashion exactly what a host has to do. This is good for horizontal scale, which becomes important with the move to containers," said Pollitt.

https://www.linux.com/news/how-project-calico-transcends-limits-software-defined-networking
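
Pollitt's controller-less design is easy to sketch. In the toy Python below (not Calico's actual code; the key layout and route commands are invented for illustration), every host runs the same agent and independently derives only the routes it must program from a shared key-value store, so no controller has to push per-host state:

```python
# Illustrative sketch of Calico's distributed-agent idea. A plain dict
# stands in for etcd; the key layout and route formats are made up.
STORE = {
    "/calico/host/web-1/endpoint/c1": {"ip": "10.0.1.5"},
    "/calico/host/web-1/endpoint/c2": {"ip": "10.0.1.6"},
    "/calico/host/db-1/endpoint/c3":  {"ip": "10.0.2.7"},
}

def agent_routes(hostname: str) -> list[str]:
    """Compute the L3 routes this one host is responsible for programming."""
    routes = []
    for key, meta in STORE.items():
        _, _, host, _, endpoint = key.strip("/").split("/")
        if host == hostname:
            # Local workload: route its /32 to a local interface (simplified).
            routes.append(f"route add {meta['ip']}/32 dev veth-{endpoint}")
        else:
            # Remote workload: route via the host that owns it (simplified).
            routes.append(f"route add {meta['ip']}/32 via {host}")
    return routes

# Each host calls this with its own name; no coordinator is involved.
for r in agent_routes("web-1"):
    print(r)
```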

Oracle buys NetSuite for $9.3bn to further strengthen its cloud story

Updated 1758 BST #Oracle has announced the acquisition of cloud ERP software provider #NetSuite for $9.3 billion (£7.07bn). The move had been rumoured strongly for the past couple of weeks – and certainly in the back of people’s minds for a significantly longer period of time – yet the agreement was confirmed today, and is expected to close later this year subject to usual regulatory approvals. “Oracle and NetSuite cloud applications are complementary, and will co-exist in the marketplace forever,” said Mark Hurd, Oracle CEO. “We intend to invest heavily in both products – engineering and distribution.” Evan Goldberg, founder and CTO of NetSuite, said: “NetSuite has been working for 18 years to develop a single system for running a business in the cloud. This combination is a winner for NetSuite’s customers, employees and partners.” The deal represents the latest move by Oracle to beef up its cloud operations. “This acquisition is all about strengthening Oracle’s cloud story,” John Dinsdale, chief analyst at Synergy Research told CloudTech. “It will push Oracle a couple of places higher in the enterprise SaaS market share rankings and will strengthen its position as one of the two leading ERP SaaS vendors, alongside #SAP.”

http://www.cloudcomputing-news.net/news/2016/jul/28/oracle-buys-netsuite-93bn-further-strengthen-its-cloud-story/

New Dell Global Study Shows Business and IT Decision Makers are Aligned on Key IT Trends

(BUSINESS WIRE)--#Dell today announced the results of a new Dell State of IT Trends 2016 global study, which finds business decision makers’ (BDMs) and IT decision makers’ (ITDMs) understanding of current IT trends is much closer than it is generally perceived to be.

“To meet our clients’ requirements, we must continually fine-tune our operations, which requires the close collaboration of our business and IT leaders to support how we want to evolve

In the past, business and IT leaders had different levels of understanding of IT trends and technologies. However, over time, business and IT leaders’ perceptions of technology have evolved and become more closely aligned as new technologies have entered the market and become increasingly critical drivers of an organization’s success.

The Dell State of IT Trends 2016 global study reveals a greater sophistication and alignment in understanding of IT trends between the two groups. The results indicate that IT and business leadership are better collaborating and having in-depth conversations about not only how technology works but how it can propel the business forward.

“There is a lingering misperception that business leaders are disconnected during strategic IT discussions, but times have changed,” said Matt Baker, executive director, Enterprise Strategy, Dell. “This study reveals that there is an increasingly common understanding between business and IT decision makers on the key IT trends and the growth opportunities that IT can deliver.”

http://www.businesswire.com/news/home/20160728006105/en/Dell-Global-Study-Shows-Business-Decision-Makers

Malaysia's Sedania Innovator signs five-year regional cloud analytics deal with Cloudera

Following the signing of a five-year partnership agreement with cloud data analytics company #Cloudera, Malaysia's platform services firm Sedania Innovator has distribution rights in Malaysia and access to the Southeast Asian market. Under the agreement signed recently, Sedania Innovator has the right to provide Cloudera data analytics services to corporations in Malaysia and the South East Asia region. Cloudera was represented by regional director, ASEAN & India, Joseph Lee and Sedania Innovator managing director Datuk Azrin Mohd Noor, who said, "The exponential adoption of advanced technologies by end-users across the South East Asia region has generated enormous amounts of digital data, and simultaneously created huge demand for enterprise solutions to gain insight on consumer behaviour to formulate future strategies." "We certainly recognize the immense potential in this space, and are already actively engaging with prospective customers across the telecommunications, utilities and banking sectors to provide data analytics services. We look forward to a successful rollout together with Cloudera in the near future," he said. Cloudera's Lee said: "Southeast Asia, Malaysia included, presents very exciting prospects for Cloudera. We believe that the data analytics market in this region is poised for a boom, given the rate of the population's embrace of digital frontiers in commerce and various lifestyle-disruption technologies."

http://www.mis-asia.com/tech/applications/malaysias-sedania-innovator-signs-five-year-regional-cloud-analytics-deal-with-cloudera/

Centrify and Cloudera Extend Open Data Model for Cybersecurity

Palo Alto, Calif., July 28, 2016 (GLOBE NEWSWIRE) -- #Cloudera, the global provider of the fastest, easiest, and most secure data management and analytics platform built on #Apache #Hadoop and the latest open source technologies, and Centrify, the leader in securing enterprise identities, today announced that Centrify has joined the Open Network Insight ( #ONI ) project. ONI is an open source, Apache 2.0 licensed cybersecurity project that leverages big data and machine learning to detect advanced threats. ONI provides an open data model for network data and, with the addition of Centrify to the project, will be extended to include identity. By extending ONI’s open data model into identity, Centrify is allowing users to centralize identity and account credential data, integrate it into cybersecurity applications that leverage ONI, and share related threat analytics and intelligence among industry peers. “Centrify’s participation marks a major milestone for ONI,” said Tom Reilly, chief executive officer of Cloudera. “By adding the ability to integrate information about user identities alongside of data about network traffic and endpoints, the project is fulfilling on its promise to support a broad range of cybersecurity data sources that can be used to identify advanced threats and cyberattacks.”

http://finance.yahoo.com/news/centrify-cloudera-extend-open-data-110000838.html

NetApp signs deal for Varanasi's Smart City initiatives

BENGALURU: Storage and data management company #NetApp has signed MoUs with Varanasi and Karnal Municipal Corporation for their Smart City initiatives. Anil Velluri, president-India & Saarc, NetApp, said the company will provide storage, surveillance and security infrastructure for the projects. "Part of the process will be to get the specifications, design elements and architecture right," said Velluri. NetApp has partnered with #Cisco for smart city solutions and is open to more partnerships with other technology vendors. "Each vendor brings a different dimension to the table. The whole idea is to collaborate at the core design and architecture-level and help government," he said.
http://m.economictimes.com/tech/internet/netapp-signs-deal-for-varanasis-smart-city-initiatives/articleshow/53428062.cms

Red Hat Unveils JBoss Data Grid 7 as Platform for Real-Time Data Analytics

RALEIGH, N.C.--(BUSINESS WIRE)--#RedHat, Inc. (NYSE: RHT), the world's leading provider of open source solutions, today announced the general availability of Red Hat #JBossDataGrid7. The latest version of Red Hat's leading in-memory data management technology, which can be used as a distributed cache, #NoSQL database, or event broker, introduces enhancements to help organizations generate insights for continuous business optimization through real-time data analytics, contributing to greater agility and competitiveness. #Apache #Spark #Hadoop @RedHatNews

In today's digital world, opportunities to engage with customers can be fleeting. Organizations that are unable to quickly recognize and respond to these opportunities risk losing customers and revenue to competitors. This competitive edge is driven by the ability to collect, analyze, and act on data as it is generated, during the moment of customer interaction, and the speed necessary to seize these business opportunities can only be obtained through in-memory computing.

Cited as a leader in The Forrester Wave™: In-Memory Data Grids, Q3 2015, JBoss Data Grid combines high performance with enterprise-class scale and flexibility, offering customers a powerful tool that can deliver data at the moment it is needed for real-time analysis.

JBoss Data Grid 7 addresses the need for faster data processing and more responsive applications by introducing new integration with #Apache #Spark, an open source framework for developing data-intensive applications. Spark processes data in-memory, and when paired with an in-memory data store such as JBoss Data Grid, helps eliminate traditional bottlenecks. JBoss Data Grid is also now fully supported as a #Hadoop-compliant data source. Support for Hadoop InputFormat and OutputFormat enables the use of many analytics tools that integrate with the Hadoop I/O format.

http://www.businesswire.com/news/home/20160727005267/en/Red-Hat-Unveils-JBoss-Data-Grid-7
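
To make the "Hadoop-compliant data source" point concrete: any store that exposes a Hadoop InputFormat can be read straight into Spark. The PySpark sketch below shows the general pattern only; the Infinispan class and property names are placeholders for illustration, not confirmed JBoss Data Grid API, so check the product documentation for the real ones:

```python
# Generic pattern for reading a Hadoop-compliant source into Spark.
from pyspark import SparkContext

sc = SparkContext(appName="grid-analytics")

# Hypothetical connection properties for the grid (names assumed).
conf = {
    "infinispan.input.remote.server": "datagrid-host:11222",
    "infinispan.input.cache.name": "events",
}

rdd = sc.newAPIHadoopRDD(
    "org.infinispan.hadoop.InfinispanInputFormat",  # assumed class name
    "java.lang.String",   # key class (placeholder)
    "java.lang.String",   # value class (placeholder)
    conf=conf,
)

# From here the grid's contents are ordinary Spark data.
print(rdd.take(5))
```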

Cask Congratulates Apache Twill on Top-Level Project Status

PALO ALTO, CA--(Marketwired - July 27, 2016) - #Cask (cask.co), the company that makes building and deploying big data applications easy, congratulates the #Apache Software Foundation ( #ASF ) for its graduation of Apache #Twill to a Top Level Project (TLP). (See press release dated July 27, 2016, "Apache Software Foundation Announces Apache® Twill™ as a Top-Level Project.") As the original Apache Twill developer, contributing the code to the ASF in 2013, Cask is proud to have supported the technical vision and collaboration of Apache Twill within the family of Apache projects, all the way to its graduation. Apache Twill (http://twill.apache.org/) is an abstraction over Apache #Hadoop ® #YARN that reduces the complexity of developing distributed Hadoop applications, allowing developers to focus more on their application logic. Twill provides rich built-in features for common distributed applications for development, deployment, and management, greatly easing Hadoop cluster operation and administration. "It's exciting that Apache Twill is graduating to become an ASF Top-Level Project," said Jonathan Gray, Cask CEO. "We look forward to working further with the highly skilled and talented Apache Twill community, continually adding enhancements that benefit enterprises with distributed Hadoop application development needs and broadening support for other Apache projects. We're also excited for people to see Twill in action at several upcoming events, including JavaOne in San Francisco and #Strata + Hadoop World in New York."

New CTERA Platform Enhancements Simplify Cloud Migration and User Security for Enterprise File Services and Data Protection

NEW YORK and PETACH TIKVAH, Israel, July 27, 2016 /PRNewswire/ -- #CTERA Networks today introduced new cloud data management capabilities that allow enterprise organizations to easily and securely implement comprehensive file service and data protection strategies. The enhancements to the CTERA Enterprise File Services Platform enable IT organizations to migrate data across any data center and cloud infrastructure, optimize storage infrastructure costs through intelligent file management, and simplify user access to enterprise data through new identity management integrations. The CTERA Enterprise File Services Platform enables enterprise IT to protect data and manage files across endpoints, offices, and the cloud – all within the organization's on-premises or virtual on-premises cloud storage. The platform is powered by CTERA's cloud service delivery middleware that IT-as-a-Service ( #ITaaS ) organizations leverage to create, deliver, and manage cloud storage-based services such as enterprise file sync and share ( #EFSS ), in-cloud data protection, endpoint and remote server backup, and office storage modernization. New enhancements to the CTERA Enterprise File Services Platform include:

http://www.prnewswire.com/news-releases/new-ctera-platform-enhancements-simplify-cloud-migration-and-user-security-for-enterprise-file-services-and-data-protection-300304556.html

Enterprise SSD DWPD: Calculating in Advance

Calculating SSD DWPD is essential. Sure, you can calculate how much data you can write and still be within the vendor’s warranty, but how do you determine, in advance of buying the SSDs, how much data you are going to write a day? First, some background on Enterprise SSD DWPD. Drive writes per day (DWPD) means how many times you can completely rewrite all the data on your SSDs within a 24-hour period. All enterprise and most consumer SSD vendors I could find online give specifications for how many DWPD can be done on their SSDs, or the drive endurance in GB. Either way, the warranty math is straightforward; the hard part is estimating your daily write volume before you buy.
http://mobile.enterprisestorageforum.com/storage-hardware/enterprise-ssd-dwpd-calculating-in-advance.html
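
To make the DWPD arithmetic concrete, here is a minimal Python sketch of both directions of the calculation (the figures are illustrative, not any vendor's spec sheet):

```python
def dwpd_from_endurance(endurance_tbw: float, capacity_tb: float,
                        warranty_years: float) -> float:
    """Convert an endurance rating in TB written (TBW) to drive writes per day."""
    return endurance_tbw / (capacity_tb * warranty_years * 365)

def required_dwpd(daily_writes_tb: float, capacity_tb: float) -> float:
    """The DWPD you actually need, given how much you write per day."""
    return daily_writes_tb / capacity_tb

# Example: a 1.6 TB drive rated for 8,760 TBW over a 5-year warranty...
print(dwpd_from_endurance(8760, 1.6, 5))  # -> 3.0 DWPD

# ...and a workload writing 2.4 TB/day to that drive needs:
print(required_dwpd(2.4, 1.6))            # -> 1.5 DWPD, comfortably inside
```

The second function is the hard part in practice: you have to measure or estimate your daily writes (including write amplification) before the purchase, which is exactly the question the article raises.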

Scality: Why object storage has a nice ring to it

#Scality 's software-defined storage #RING allows customers to store and access billions of objects, or even petabyte-sized objects, across standard hardware. The company's proposition is that it can do this with very large files (it claims to have over 800 billion objects stored on its servers) and at lightning speed.

The French company has spent seven years first developing and then delivering the product. Users and partners include blue-chip companies from Comcast to Time Warner Cable, Orange to RTL, and Renault to the Los Alamos National Laboratory.

We talked to CEO Jérôme Lecat to find the secret of its secret sauce. #HPE, #Dell, #Cisco

http://www.zdnet.com/article/scality-why-object-storage-has-a-nice-ring-to-it/

Is Microsoft Massively Overstating Its Cloud Revenues?

" #Microsoft Gets Lift From Cloud Gains" --The Wall Street Journal, July 19, 2016 Tech investors should be able to glean two important takeaways from the WSJ headline above. The first, that Microsoft's (NASDAQ:MSFT) stock popped in reaction to last week's earnings announcement, can be quickly verified elsewhere. See for yourself.

However, the second apparent takeaway, about the current state of affairs within Microsoft's cloud business, remains far less clear than that headline might lead you to think.

The tech giant uses an odd mix of reporting metrics to illustrate, or perhaps obfuscate, the progress of its cloud business -- its most important growth initiative against rivals like #Amazon (NASDAQ:AMZN), #IBM (NYSE:IBM) and #Alphabet (NASDAQ:GOOG)(NASDAQ:GOOGL).

Microsoft's confounding cloud metrics

On the surface, Microsoft's cloud numbers give the appearance of impressive growth. In the conference call, Microsoft touted that its "commercial cloud" business achieved a $12 billion run rate, up nicely from $10 billion in its previous quarter. 

Microsoft went on to report that sales from its "intelligent cloud" segment -- the actual cloud computing operating segment -- for the quarter totaled $6.7 billion, up 7%.

Muddling matters for investors trying to figure out how Microsoft's core #Azure infrastructure-as-a-service (IaaS) business -- which competes with Amazon, IBM, and #Alphabet -- is doing is that both the commercial cloud grouping and the intelligent cloud reporting segment contain other cloud-related pieces aside from Azure. Where exactly is the important Azure-only metric? Nowhere, unfortunately.


LaunchKit team heads to Google and open-sources its tools for helping devs launch their apps

The team behind #LaunchKit, a set of tools that helps developers launch their apps, is heading to #Google and joining the Developer Product Group. It doesn’t look like LaunchKit’s products are moving over to Google, so the team decided to open-source its products and make them available on #GitHub. LaunchKit’s hosted services will be available for the next 12 months. After that, they will be discontinued. LaunchKit currently offers four tools and developers will now be able to take them and run them themselves: Screenshot Builder for easily creating annotated screenshots for #Apple ’s and Google’s store, App Website Builder for creating responsive landing pages for new apps, Review Monitor for — well… — tracking reviews in Apple’s App Store, and Sales Reporter for keeping track of sales. The team has also written a couple of how-to guides for developers. LaunchKit itself launched in early 2015 and was co-founded by Brenden Mulligan, Taylor Hughes, and Rizwan Sattar. They previously built Cluster and a number of other apps for both iOS and #Android.

https://techcrunch.com/2016/07/27/launchkit-team-heads-to-google-and-open-sources-its-tools-for-helping-devs-launch-their-apps/

Radeon Pro Duo SSG Has A Pair of SAMSUNG SSDs

The last few days have seen a few crazy graphics cards announced, with the likes of #Nvidia ’s #Quadro #P6000 offering ridiculous levels of performance at a hefty price tag. You haven’t seen anything yet though, as #AMD ’s #Radeon Pro Duo Solid State Graphics ( #SSG ) GPU comes packing a pair of M.2 #Samsung 950 Pro SSDs, giving it a terabyte of on-board storage. Much like Nvidia’s recently unveiled Quadro cards, this Radeon Pro Duo SSG isn’t designed for the consumer. With a price tag of $10,000 (£7,650), that should be obvious, but there are likely those still tempted by the idea of having a terabyte of local memory on their graphics card. For almost everyone though, even the idea of that will be complete overkill. When the average Steam gamer is just coming on board with 4GB of VRAM, it might seem a little ridiculous to consider a terabyte necessary. That’s why it has such a professional focus. As it stands, AMD’s top-memory GPU has around 32GB, so this offers something unique.

https://www.kitguru.net/components/graphic-cards/jon-martindale/amds-radeon-pro-duo-ssg-has-a-pair-of-samsung-ssds-in-it/

MEDIA ADVISORY: Samsung Keynote to Highlight Latest V-NAND Innovations at 2016 Flash Memory Summit

SAN JOSE, Calif.--(BUSINESS WIRE)--At Flash Memory Summit 2016, Samsung will discuss the latest #3DVNAND solutions aimed at meeting the growing requirements of big data networks, cloud computing and other means of real-time analysis. The remarks will be delivered during a keynote at 11:30 a.m. on Wednesday, August 10th in the Mission City Ballroom, Santa Clara (CA) Convention Center. #Samsung
http://www.businesswire.com/news/home/20160727006496/en/MEDIA-ADVISORY-Samsung-Keynote-Highlight-Latest-V-NAND

Wednesday, July 27, 2016

Dell Security Releases SonicOS 6.2.6

Today #DellSecurity announced that it has released the latest firmware update to its #SonicWALL operating system, SonicOS 6.2.6. This latest update comes with a couple of new features, including SonicWALL Capture, the industry’s first multi-engine sandbox that enables customers to block suspicious files until a verdict is reached. The new update also comes with Content Filtering Service 4.0, which enables companies to enforce protection and productivity policies by controlling access to inappropriate or unproductive web content.

http://www.storagereview.com/dell_security_releases_sonicos_626

Napatech Teams with Dell OEM to Keep Networks Secure and Compliant with New Packet Capture Solution

COPENHAGEN, Denmark, July 27, 2016 /PRNewswire/ -- In order to keep networks secure and compliant, IT teams require greater visibility into the critical data they are monitoring, both in real time and for forensic purposes. #Napatech announced today the partnership with #Dell OEM to provide customers with a reliable, stable and predictable network solution for high-speed packet capture and storage. The combined solution, available through the #DellOEM sell-through program, enables customers to capture and offload 100 percent of their data for post-analysis of all traffic, regardless of data type.

http://finance.yahoo.com/news/napatech-teams-dell-oem-keep-104700841.html

Talend CEO Mike Tuchen is hoping to have the next successful tech IPO on Friday

On Friday, big-data startup #Talend will launch itself as a public company in this anemic year for tech IPOs, and many other tech companies will be watching how investors react. Talend offers a service that helps a company take the data it stores in all sorts of apps and clouds, and cleans it up so that it can be used by popular big data software like #Hadoop and #Spark. Talend was born a French company but its current headquarters are in Redwood City, California. Its CEO, Mike Tuchen, joined in 2013. He was previously CEO of security company Rapid7 during its startup growth years, but left before its successful IPO in 2015. He cut his teeth at #Microsoft.

http://www.businessinsider.com/talend-hopes-for-big-ipo-on-friday-2016-7

FalconStor Software Announces Second Quarter 2016 Results

MELVILLE, NY--(Marketwired - July 27, 2016) - #FalconStor Software® Inc. (FALC), a market leader in software-defined storage ( #SDS ), today announced financial results for its second quarter ended June 30, 2016. "The storage industry landscape continued its transformation during the second quarter of 2016. We saw a reasonable recovery in overall performance sequentially from Q1 2016, and we saw significant growth in our new business initiatives for the FreeStor® subscription model, which grew over 200% sequentially," said Gary Quinn, President and CEO. "That said, we saw improvements in all routes to market for OEM, MSPs, and new enterprise customers, as well as the beginning of some existing enterprise customer conversions to FreeStor. We remain committed to driving our new business initiatives with FreeStor through the rest of 2016 and are adjusting to the marketplace with fiscal responsibility."

http://finance.yahoo.com/news/falconstor-software-announces-second-quarter-200500586.html

ScaleCare Remote Recovery Service Provides Offsite Protection to Ensure Business Continuity

#Scale Computing, the market leader in #hyperconverged storage, server and virtualization solutions for midsized companies, today launched its #ScaleCare Remote Recovery Service, a Disaster Recovery as a Service ( #DRaaS ) offering that provides offsite protection for businesses at a price that fits the size and budget of their datacenter needs. Building on the resiliency and high availability of the HC3 Virtualization Platform, ScaleCare Remote Recovery Service is the final layer of protection from Scale Computing needed to ensure business continuity for organizations of all sizes. ScaleCare Remote Recovery Service is a cost-effective alternative to backup and offsite shipping of physical media or third-party vendor hosted backup options. Built into the HC3 management interface, users can quickly and easily set up protection for any number of virtual machines to Scale Computing's SSAE 16 SOC 2 certified, PCI compliant, remote datacenter hosted by LightBound. "The ScaleCare Remote Recovery Service has put my mind at ease when it comes to recovery," said David Reynolds, IT manager at Lectrodryer LLC. "Setting up automatic monthly, weekly, daily and minute snapshots of my VMs is unbelievably easy. All these are pushed to the cloud automatically and removed on the date you set them to expire. Highly recommended." ScaleCare Remote Recovery Service provides all the services and support businesses need without having to manage and pay for a private remote disaster recovery site. Whether protecting only critical workloads or an entire HC3 environment, users pay for only the VM protection they need without any upfront capital expense. Based on snapshot technology already built into the HC3 HyperCore architecture, ScaleCare Remote Recovery Service allows users to customize their replication schedules to maximize protection, retention and bandwidth efficiency. After the initial replica is made, only changed blocks are sent to the remote data center. Remote availability of failover VMs within minutes, failback to on-site local HC3 clusters and rollbacks to point-in-time snapshots as needed provide ultimate data protection and availability.

 http://www.hostreview.com/news/160727-scale-computing-radically-simplifies-disaster-recovery-with-launch-of-draas-offering#ixzz4FeuXyuD9
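
To see why snapshot replication only ships deltas, here is a toy Python sketch (not Scale Computing's implementation) that compares per-block hashes between the last replicated snapshot and the current one and reports which blocks would be sent:

```python
import hashlib

BLOCK_SIZE = 4096  # bytes per block in this toy model

def block_hashes(image: bytes) -> list[str]:
    """Hash every fixed-size block of a disk image."""
    return [hashlib.sha256(image[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(image), BLOCK_SIZE)]

def changed_blocks(prev: bytes, curr: bytes) -> list[int]:
    """Indices of blocks that differ between two equal-sized snapshots."""
    return [i for i, (a, b) in enumerate(zip(block_hashes(prev),
                                             block_hashes(curr))) if a != b]

# A 40 KB "disk" where a single 4 KB block changed between snapshots:
snap1 = bytearray(40 * 1024)
snap2 = bytearray(snap1)
snap2[8192:8196] = b"edit"

print(changed_blocks(bytes(snap1), bytes(snap2)))  # -> [2]; only block 2 ships
```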

White boxes are now ready for prime time

White box switches have been around for years, but adoption has been limited to niche companies that have large engineering departments. The rise of software-defined networking ( #SDN ) has brought them into the public eye, though, as a lower-cost alternative to traditional network hardware. In fact, some of the early messaging around SDN revolved around using white boxes as a complete replacement for all network hardware. Despite the promise that SDN brought, the use of white boxes has been limited for a couple of reasons. The first is that historically, any organization that wanted to leverage a white box switch needed to have a number of technical specialists that many enterprises do not have. This would include network programmers and engineers fluent in #Linux. These skills are commonly found in companies such as #Facebook, #Google and #Amazon, but not so much in your average enterprise.

http://www.networkworld.com/article/3100927/network-switch/white-boxes-are-now-ready-for-prime-time.html

This material can make dirty water safe to drink

A team of researchers has come up with a way to use "veritable wonder material" #graphene oxide sheets to turn dirty water clean for a thirsty world. According to the engineers at Washington University in St. Louis, the novel hybrid nanomaterials could be a global game-changer. "We hope that for countries where there is ample sunlight, such as India, you'll be able to take some dirty water, evaporate it using our material, and collect fresh water," said researcher Srikanth Singamaneni. The new approach combines bacteria-produced cellulose and graphene oxide to form a bi-layered biofoam. "The process is extremely simple," Singamaneni said. "The beauty is that the nanoscale cellulose fiber network produced by bacteria has an excellent ability to move the water from the bulk to the evaporative surface while minimizing the heat coming down, and the entire thing is produced in one shot." "The design of the material is novel here," Singamaneni said. "You have a bi-layered structure with light-absorbing graphene oxide filled nanocellulose at the top and pristine nanocellulose at the bottom. When you suspend this entire thing on water, the water is actually able to reach the top surface where evaporation happens." He added, "Light radiates on top of it, and it converts into heat because of the graphene oxide, but the heat dissipation to the bulk water underneath is minimized by the pristine nanocellulose layer. You don't want to waste the heat; you want to confine the heat to the top layer where the evaporation is actually happening." The cellulose at the bottom of the bi-layered biofoam acts as a sponge, drawing water up to the graphene oxide where rapid evaporation occurs. The resulting fresh water can easily be collected from the top of the sheet.

http://wap.business-standard.com/article/news-ani/this-material-can-make-dirty-water-safe-to-drink-116072700277_1.html

Apache Software Foundation Announces Apache® Twill™ as a Top-Level Project

Forest Hill, MD, July 27, 2016 (GLOBE NEWSWIRE) -- The #Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today that Apache® Twill™ has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the project's community and products have been well-governed under the ASF's meritocratic process and principles. Apache Twill is an abstraction over Apache #Hadoop ® #YARN that reduces the complexity of developing distributed Hadoop applications, allowing developers to focus more on their application logic. "The Twill community is excited to graduate from the Apache Incubator to a Top-Level Project," said Terence Yim, Vice President of Apache Twill and Software Engineer at Cask. "We are proud of the innovation, creativity and simplicity Twill demonstrates. We are also very excited to bring a technology so versatile in Hadoop into the hands of every developer in the industry." Apache Twill provides rich built-in features for common distributed applications for development, deployment, and management, greatly easing Hadoop cluster operation and administration. "Enterprises use big data technologies - and specifically Hadoop - to drive more value," said Patrick Hunt, member of the Apache Software Foundation and Senior Software Engineer at Cloudera. "Apache Twill helps streamline and reduce complexity of developing distributed applications and its graduation to an Apache Top-Level Project means more people will be able to take advantage of Apache Hadoop YARN more easily." “This is an exciting and major milestone for Apache Twill,” said Keith Turner, member of the Apache Fluo (incubating) Project Management Committee, which used Twill in the development of Fluo, an Open Source project that makes it possible to update the results of a large-scale computation, index, or analytic as new data is discovered. "Early in development, we knew we needed a standard way to launch Fluo across a cluster, and we found Twill. With Twill, we quickly and easily had Fluo running across many nodes on a cluster."

http://globenewswire.com/news-release/2016/07/27/859217/0/en/Apache-Software-Foundation-Announces-Apache-Twill-as-a-Top-Level-Project.html

3 reasons Hadoop is a perfect fit for your Big Data environment

We live in a world that is driven by information. There is literally a flood of information flowing into our organizations today. It is known as big data. Traditional database software no longer has the capacity to manage this immense volume of data. Thankfully, innovators have come up with new database software that can store, manage and disseminate this data. This software makes it necessary for database administrators (DBAs) and application developers to learn new skills so that they can manage it. What is this new database software technology? Traditionally, we used relational databases to store and manage data. These database systems relied on a Structured Query Language (SQL) framework to accomplish this. Examples of such database software are #Microsoft Access, #MySQL and #Oracle RAC. Since the emergence of big data, relational database software is being phased out. This is because it is inefficient to organize big data into the structured tables used in this type of database software. Only small and medium amounts of data can be organized into this structured format. The immense volume of big data would take forever to organize in this way. Therefore, traditional database software is not scalable enough to handle big data. For this purpose, #NoSQL database software was invented.

http://www.ciol.com/3-reasons-hadoop-is-a-perfect-fit-for-your-big-data-environment/
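
A toy Python contrast of the structured-versus-unstructured point above: a relational table demands one schema declared up front, while the records big data pipelines ingest vary from row to row, which is what document-style NoSQL stores accommodate:

```python
import sqlite3

# Relational: every row must fit the declared columns.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (user TEXT, action TEXT)")
db.execute("INSERT INTO events VALUES (?, ?)", ("alice", "login"))

# Document-style: heterogeneous records coexist with no schema migration.
events = [
    {"user": "alice", "action": "login"},
    {"user": "bob", "action": "purchase", "cart": ["sku-1", "sku-2"]},
    {"sensor": "t-17", "reading": 21.4, "unit": "C"},  # no user field at all
]

# The third record would force new columns (or a redesign) in the SQL
# table; a NoSQL store simply accepts it as-is.
for e in events:
    print(e)
```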

Seagate Announces Industry’s First 2TB M.2 NVMe Enterprise SSD

#Seagate Technology plc, a world-leading producer of data storage devices, is announcing a first-of-its-kind high-capacity M.2 solid state drive (SSD) that enables data centers to more easily accommodate the exponential growth of data requiring storage while still achieving a high degree of computing power and performance. The 2TB version of Seagate’s #Nytro® #XM1440 M.2 non-volatile memory express ( #NVMe ) SSD is the highest-capacity enterprise-class M.2 NVMe SSD available today, making it ideal for demanding enterprise applications that benefit from fast data access, fast processing, and higher capacity.

http://www.thessdreview.com/daily-news/latest-buzz/seagate-announces-industrys-first-2tb-m-2-nvme-enterprise-ssd/

Mizuho analyst believes that the 8-K filing does introduce the possibility of potential acquisitions or mergers from China or potentially in the storage HDD space

A Mizuho Securities analyst offered commentary on the latest step taken by #Micron Technology, Inc. (NASDAQ:MU) to make an acquisition more difficult. The largest American memory chipmaker has adopted a rights agreement, known as a #poisonpill, that would prevent a change of ownership if a person or group buys 5% or more of its outstanding stock.

According to a regulatory filing from Monday, this step would allow Micron to retain tax benefits related to previous losses. The news rekindled market speculation that Micron could be an acquisition target. Analyst Vijay Rakesh has affirmed a price target of $14 on Micron stock, which calls for a 0.5% potential upside over the last close of $13.92. Micron stock rose to a six-month high in yesterday’s trading session, gaining 6% to close at $13.92.

Mr. Rakesh commented, “The 8-K notes that the board can approve exceptions to the agreement for any investor or acquirer, if the board determines that it does not jeopardize the operating loss carry-forwards or if the board determines that the rights agreement is no longer in the best interest of the company and stockholders, and so we believe a friendly acquirer or merger could see benefits.”

http://www.thecountrycaller.com/55695-micron-technology-inc-nasdaqmu-heres-why-analyst-sees-possibility-of-potential-ma-from-china/

You can now buy the world's largest SSD for the price of a new car

Remember that 15.36TB SSD #Samsung launched a year ago? You can now buy it, for just $9,690 (about £7,400 or AU$13,000), from a couple of US retailers.

At £480 (about $630, AU$840) per TB, that's actually not far off the selling price of Samsung's next biggest SSD, the Samsung 850 EVO 4TB model, which works out at about £325 (about $430, AU$570) per TB in the UK.

The 15.36TB SSD is not only more capacious than the largest hard disk drive (models from #Seagate and #WD currently top 10TB), but also far smaller (2.5-inch compared to 3.5-inch). That means Samsung could easily produce a 30TB HDD-sized SSD giant if it wanted to.

http://www.techradar.com/news/computing-components/storage/you-can-now-buy-the-world-s-largest-ssd-for-the-price-of-a-new-car-1325458

Why Yahoo Lost And Google Won

Many of the #Yahoo obituaries published over the last couple of days have contrasted its demise with the flourishing of #Google, another Web pioneer. Why was Google’s attempt to “organize all the world’s information” vastly more successful than Yahoo’s? The short answer: Because Google did not organize the world’s information. Google got the true spirit of the Web, as it was invented by Tim Berners-Lee.

http://www.forbes.com/sites/gilpress/2016/07/26/why-yahoo-lost-and-google-won/#270d256d4518

Tuesday, July 26, 2016

A $30 billion merger is more evidence of the tech market's most dominant trend right now

#AnalogDevices has struck a deal to buy chipmaker #LinearTechnology, the companies announced on Tuesday. The deal values Linear Technology at $14.8 billion, or $60 per share. The value of the combined enterprise will be $30 billion. Shares of Linear Technology were halted for trade, up about 29% at $62 per share, after news of the deal was broken by Bloomberg. Analog Devices shares were up about 4%. Stocks across the semiconductor space were broadly higher on Tuesday, with notable gainers including Maxim Integrated (+5%), NXP Semiconductors (+4%), Microsemi (+4%), and Semtech (+3%). Texas Instruments, one of the closest competitors to a combined Analog-Linear, also jumped, with shares up more than 9%. On Monday, after the market closed, Texas Instruments reported earnings that beat expectations. There has been a frenzy of multibillion-dollar deals in the tech industry over the past 18 months, including #Intel 's takeover of #Altera, #Avago 's acquisition of #Broadcom, and #Dell 's $67 billion takeover of #EMC. More recently, Dutch chip-making company #ASML bought Taiwan's Hermes #Microvision for $3.1 billion, and #NXPSemiconductors sold its standard-products business to a group of Chinese investors for $2.75 billion.

http://www.businessinsider.com/linear-technology-analog-devices-merger-talks-2016-7

Google Joins New SDN/NFV Open Source Networking Project

Open source groups focusing on software-defined networking ( #SDN ) and network functions virtualization ( #NFV ) have formed a new project, attracting new partners such as #Google, #Radisys and #Samsung. The project is called #CORD (Central Office Re-architected as a Data Center), formed by Open Networking Lab (ON.Lab) and The #LinuxFoundation, who are building out the fledgling CORD initiative -- conceived to bring "datacenter economics and cloud flexibility to the telco central office and to the entire access network" -- into a separate open source project with independent governance, partners, collaborators and contributors. The CORD Web site says: "CORD lets the operator manage their central offices using declarative modeling languages for agile, real-time configuration of new customer services. Major service providers like #AT&T, #SKTelecom, #Verizon, #ChinaUnicom and #NTTCommunications are already supporting CORD."

https://virtualizationreview.com/articles/2016/07/26/cord-sdn-nfv-project.aspx?m=1

Sedania, Cloudera tie up for data analytics services

PETALING JAYA: Technology empowerment company #Sedania Innovator Bhd has entered into a partnership with global data analytics giant #Cloudera to be the first and only gold partner in big data analytics services in Southeast Asia. Under the five-year partnership agreement signed yesterday, Sedania Innovator would have rights to provide Cloudera data analytics services to corporations in Malaysia, as well as access to the rest of the Southeast Asia region. Founded in 2008, Cloudera is the world’s leader in data analytics, providing #Apache #Hadoop-based software, support and services, and training to business customers the likes of #Oracle, #IBM and #HP. As a market leader, Cloudera commands more than 50% share of the Hadoop market. “This venture would enable both Sedania and Cloudera to capture sizable market share in the big data analytics market, in light of Cloudera’s comprehensive service offerings and IDC’s prediction of almost US$80 million (RM325.8 million) spending in Malaysia for Big Data Analytics by the year of 2018,” Cloudera’s regional director, Asean and India, Joseph Lee said. Industry research by the International Data Corporation (IDC) revealed that the global data analytics market is projected to grow at a compound annual growth rate of 23.1% per year from 2014 to 2019, with the market value increasing from US$17.2 billion to US$48.6 billion in the same period. The agreement would be effective for a period of five years, commencing this quarter.

http://m.thesundaily.my/node/383022

IBM Intros DeepFlash 150 All-Flash Storage For Big Data, Other Unstructured Workloads

#IBM Tuesday expanded its #allflash storage line with a new offering targeting big data and other unstructured data applications.

The new IBM #DeepFlash150, which is scheduled to start shipping late this week, is aimed at the types of applications that require capabilities not found on standard all-flash storage solutions, said Alex Chen, IBM's director of storage systems and offering executive for file and #objectstorage.

The majority of all-flash storage arrays are focused on more traditional applications such as virtual desktop infrastructure, block storage and on-line transaction processing, Chen told CRN.

"Analysts tell us 80 percent of data is unstructured," he said. " #Bigdata is a different problem for flash storage. With big data, data is measured in petabytes, not terabytes. And unstructured data is growing twice as quickly as structured data. So scalability is a big factor."

Big data and other unstructured data also require low-cost storage, Chen said. "A lot of unstructured data like videos is already compressed, so it's hard to count on data reduction to save capacity," he said.

For that reason, the IBM DeepFlash 150 is based on a different architecture than IBM's current FlashCore-based all-flash arrays, Chen said.

IBM, Armonk, N.Y., said the new offering has a price of about $1 per GB. Chen did say that many vendors, including IBM, already have all-flash solutions at that price point. "But the others, including the IBM solutions, include compression and deduplication, which don't work with unstructured data," he said.

The IBM DeepFlash 150 is based on a 3U chassis, each of which can be configured with 128 TB to 512 TB of capacity. When combined with IBM's Spectrum Scale software for file, object and integrated data analytics, the IBM DeepFlash 150 can scale to multiple exabytes of capacity, Chen said.

Combining the two as a complete solution will help customers and channel partners with workloads such as in-memory analytics, media and entertainment, real-time analytics, high-performance computing, life sciences and genomics, he said.

http://m.crn.com/news/storage/300081470/ibm-intros-deepflash-150-all-flash-storage-for-big-data-other-unstructured-workloads.htm

Stratoscale ramps up its challenge to VMware and AWS

There's no question that #Amazon Web Services has set the standard for enterprise cloud services and that other public cloud providers are taking note. The startup Stratoscale, meanwhile, says it's gaining traction with customers who want the simplicity of #AWS without having to rely on the public cloud.

"What we're seeing today is people are trying to move away from the outdated #VMware infrastructure," said CEO Ariel Maislos. "They love what they're seeing in AWS -- they want to have more of that, but in their own environment, on their own terms."

Stratoscale has offered that solution since December with Symphony, a hardware-agnostic, hyperconverged software layer designed to manage a collection of x86 servers together as a single cloud infrastructure. Now, with a couple dozen customers under its belt -- coming from both legacy systems and the public cloud -- and $70m in funding from backers like #Cisco and #Qualcomm, the three-year-old firm is rolling out an updated version of Symphony.

"In order to set up a cloud infrastructure, it used to be a huge project," Maislos said. "What we've managed to do in Symphony is transform it into a simple experience."

The updated version has new capabilities to support significant investments already made in storage infrastructure. It supports storage arrays from #EMC, #NetApp, #IBM, #Nimble, #Pure Storage, #Infinidat, #Oracle, #HDS, and #Dell. #Stratoscale has also added built-in data protection, ensuring the cloud infrastructure, applications, and data are always available.

http://www.zdnet.com/article/stratoscale-ramps-up-its-challenge-to-vmware-and-aws/

EMC insiders say Salesforce has ordered $75m of its kit

#EMC landed a punch on behalf of tech's old guard after it won a $75m deal to furnish #Salesforce.com with shiny new on-premises storage hardware, sources have claimed. Salesforce last month took infrastructure services away from its cloud server staff and put them into the hands of AWS in a contract valued at $400m. But company insiders are talking up the relationship between the storage titan, soon to be assimilated by #Dell, and Salesforce. “Any talk of Salesforce.com aggressively pursuing #AWS is a lot of hoo-ha for the time being,” one told us. El Reg understands the order included 120 new #VMAX arrays, which are mostly all-flash with a few SAS, along with 50 DataDomain 9500s. “The deal came in very late in the quarter and depleted inventory for common parts,” a source added. EMC told us it was unable to comment and Salesforce has yet to respond. ®

http://www.theregister.co.uk/2016/07/26/emc_aws_salesforce_75m_kit_deal/

New Offering Allows FNTS Enterprise Customers to Scale to Billions of Files with Unlimited Capacity

OMAHA, Neb. (PRWEB) - First National Technology Solutions ( #FNTS ), the recognized leader in the managed IT services industry, today announced its cloud object storage offering, based on #EMC ’s third generation Elastic Cloud Storage ( #ECS ) technology. FNTS’ service-based cloud object storage model provides a low-cost alternative to traditional storage in order to meet the overwhelming data demands enterprises face today. “With the proliferation of the Internet of Things ( #IoT ) and the increasing demands of cloud and mobile applications today, IT leaders are struggling to control costs and manage data volume with traditional storage methods,” said James O’Neil, chief technology officer at FNTS. “Leveraging EMC’s technology and our state-of-the-art facilities, FNTS’ cloud object storage makes storing massive amounts of data easier and more cost effective than ever before.” According to Enterprise Strategy Group (ESG), enterprises’ data storage requirements are growing up to 40 percent each year, while IT budgets are only growing 5-7 percent annually. However, traditional storage platforms were not designed to accommodate modern cloud applications and cloud scalability. To stay ahead of the growing demands of enterprise data, FNTS’ cloud object storage is designed to store billions, if not trillions, of files in a simple, unstructured manner.

http://www.hostreview.com/news/160725-first-national-technology-solutions-partners-with-emc-to-deliver-cloud-object-storage#ixzz4FZC5FXB9

Altiscale and QuakeFinder Announce Strategic Partnership for Earthquake Forecasting Research

PALO ALTO, CA--(Marketwired - Jul 26, 2016) - #Altiscale, ( #Hadoop ) the leading provider of Big-Data-as-a-Service, and #QuakeFinder, a humanitarian research and development project to forecast earthquakes, today announced a strategic partnership to analyze QuakeFinder's large scientific data set to uncover correlations between earth-emitted electromagnetic signals and earthquakes. The goal of this project is to enable an earthquake forecasting system that provides days to weeks of advance notice of major earthquakes, ultimately saving lives. QuakeFinder is among a growing group of international scientists and researchers seeking to understand the earth's electromagnetic activity in the weeks prior to major earthquakes. Since 2005, QuakeFinder has developed and deployed a network of 165 sensors along fault lines in California, Chile, Peru, Indonesia, Taiwan, and Greece. Based on the theory that rock near its fracture point releases electrical energy, QuakeFinder's remote stations employ highly sensitive magnetometers to collect Ultra Low Frequency (ULF) magnetic disturbances resulting from electrical pulses near the future earthquake hypocenter. QuakeFinder has amassed approximately 70 terabytes (TB) of data, including detailed data from over 140 earthquakes. QuakeFinder is a "moon-shot" initiative sponsored by Stellar Solutions, Inc., a leading provider of aerospace-related engineering services to the commercial and government sectors.

http://m.marketwired.com/press-release/altiscale-quakefinder-announce-strategic-partnership-earthquake-forecasting-research-2145223.htm
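
As a purely illustrative sketch of the kind of correlation hunt described (synthetic data only; the real 70 TB analysis is far more involved), one can ask whether average ULF pulse activity rises in a window before quake days:

```python
import random

random.seed(1)
days = 365
pulses = [random.gauss(10, 2) for _ in range(days)]  # synthetic daily ULF pulse counts
quake_days = [120, 250]

# Inject a synthetic precursor: elevated pulse counts for 14 days before each quake.
for q in quake_days:
    for d in range(q - 14, q):
        pulses[d] += 8

def mean(xs):
    return sum(xs) / len(xs)

pre_quake = [p for q in quake_days for p in pulses[q - 14:q]]
print(f"baseline mean:  {mean(pulses):.1f}")
print(f"pre-quake mean: {mean(pre_quake):.1f}")  # elevated here by construction
```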

Teradata acquires Big Data Partnership

NEW DELHI: #Bigdata analytics company #Teradata on Monday announced the acquisition of London-based firm Big Data Partnership to broaden its open source analytics services across the international marketplace, including Asia. "Big Data Partnership brings exciting new capabilities and broadens our analytic services portfolio, enhancing Think Big's expertise and giving our customers more choices and outcomes tailored to their goals," said Rick Farnell, Senior Vice President, Think Big, a Teradata Company, in a statement. Big Data Partnership has expertise in disruptive technologies, including #Apache #Hadoop, and helps its clients discover how to become more data-driven and data-savvy through data science and the adoption of the latest big data technologies, Teradata said.

Building the Enterprise-Ready Data Lake: What It Takes To Do It Right

The last year has seen significant growth in the number of companies launching data lake initiatives as their first mainstream production #Hadoop -based project. This isn't surprising given the compelling technical and economic arguments in favor of Hadoop as a data management platform, and the continued maturation of Hadoop and its associated ecosystem of open source projects. The value is undeniable: providing a true "Data as a Service" solution within the enterprise keeps business users engaged, productive and driving immediate value. Cloudera's and Hortonworks' (NASDAQ: HDP) continued work on the #Atlas, #Sentry, #Ranger, #RecordService, #Knox and #Navigator projects signals ongoing efforts to improve data security, metadata management, and data governance for data in Hadoop. The problem is that despite these incremental improvements, Hadoop alone still lacks many of the essential capabilities required to securely manage and deliver data to business users through a data lake at enterprise scale. For example, the significant and critical task of automatically and accurately ingesting data from a diverse set of traditional, legacy, and big data sources into the data lake (on HDFS) can only be addressed by custom coding or tooling. Even with tooling, challenges such as data validation, character set conversion and history management, to name a few, are often not fully understood or, worse, neglected altogether. The open source projects also don't give business users an easy way to collaborate on creating and sharing insights about data in the lake through crowd-sourced business metadata, analytics and data views.
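
As a concrete illustration of the ingestion problems listed above, here is a minimal, hypothetical sketch of one landing-zone step covering character-set conversion and basic validation (the file paths, the `customer_id` rule, and the date-partitioned layout are assumptions for illustration, not any vendor's tooling):

```python
import csv
from datetime import date
from pathlib import Path

def ingest(src: Path, lake: Path) -> int:
    """Decode a legacy Latin-1 extract, validate rows, and write UTF-8
    output into a date-partitioned path (a stand-in for an HDFS dir)."""
    partition = lake / f"ingest_date={date.today():%Y-%m-%d}"
    partition.mkdir(parents=True, exist_ok=True)
    kept = 0
    with src.open(encoding="latin-1", newline="") as fin, \
         (partition / src.name).open("w", encoding="utf-8", newline="") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames or [])
        writer.writeheader()
        for row in reader:
            if not row.get("customer_id"):   # minimal validation rule
                continue                     # real pipelines would quarantine
            writer.writerow(row)
            kept += 1
    return kept

# e.g. ingest(Path("exports/customers_latin1.csv"), Path("lake/raw/customers"))
```
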
http://www.datanami.com/2016/07/25/building-enterprise-ready-data-lake-takes-right/

T. Rowe Price’s Profit Falls on Dell-Vote Error

T. Rowe Price Group Inc. said its profit fell by 41% in the second quarter, as the asset manager accounted for the nearly $200 million proxy-voting error it made related to the 2013 buyout of #Dell Inc. #TRowePrice Group unintentionally voted in favor of the $25 billion deal, despite having publicly argued that @MichaelDell and private-equity backer #SilverLake were buying the company on the cheap just as it was poised for a rebound. A Delaware judge ruled in May that the buyers had underpaid and ordered them to compensate dissenting investors, a windfall for which T. Rowe's mistaken vote made it ineligible. The firm could otherwise have received about $190 million for its 30 million shares.
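
The per-share stakes follow directly from the article's own figures; a quick back-of-envelope check (a sketch using only the numbers cited above):

```python
# Back-of-envelope check of the figures cited above.
forgone_usd = 190_000_000   # windfall T. Rowe was ineligible for
shares = 30_000_000         # its Dell shares covered by the ruling
print(f"Forgone recovery: ~${forgone_usd / shares:.2f} per share")  # ~$6.33
```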

http://www.wsj.com/articles/t-rowe-prices-profit-falls-on-dell-vote-error-1469536935

Shares of Western Digital Corporation (WDC) See Large Net Money Inflow

#WesternDigital Corporation (WDC) : Friday's money flow showed an uptick-to-downtick ratio of 1.21. The total value of inflow transactions on upticks was $11.12 million, while the total value of outflow trades on downticks was $9.19 million. The net money flow was $1.94 million, indicating a mild bullish bias. Money flow in block trades totaled $2.96 million, all of it traded on upticks. Western Digital Corporation (WDC) closed down 0.45% from the previous day's close, falling $0.23 to $51.42, and was down 0.46% from the previous week's close.
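
Both headline metrics reduce to simple arithmetic on the cited dollar values; a quick sketch (the small mismatch with the article's $1.94 million figure is presumably rounding in its inputs):

```python
# Recompute the money-flow metrics from the cited values.
uptick_value = 11.12e6     # value traded on upticks (USD)
downtick_value = 9.19e6    # value traded on downticks (USD)

ratio = uptick_value / downtick_value
net_flow = uptick_value - downtick_value
print(f"Uptick/downtick ratio: {ratio:.2f}")      # 1.21
print(f"Net money flow: ${net_flow / 1e6:.2f}M")  # $1.93M vs article's $1.94M
```
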
http://www.themarketdigest.org/201607/shares-of-western-digital-corporation-wdc-sees-large-inflow-of-net-money-flow/3111875/

Intel and Samsung invest in fast memory start-up

#Intel and #Samsung have identified a new fast memory access technology and have made venture investments in the company behind it. #WesternDigital has also invested in US-based #KazanNetworks, which has closed a $4.5m Series A funding round. The communications interface technology is used to access solid-state drives (SSDs) in datacentre storage and cloud service farms. Kazan Networks has developed a sub-1µs-latency technique, demonstrated at last year's Intel Developer Forum, for connecting non-volatile memory express ( #NVMe ) over Fabrics storage to racks of servers in the datacentre. It is an Ethernet-based interface that uses #RDMA acceleration, and simultaneous #RoCE and iWARP support means customers need not worry about choosing one protocol over the other. NVMe was developed in 2010 as a high-speed interface for solid-state storage devices and an alternative to legacy device interconnects such as Fibre Channel, SAS, or SATA. Hooked into the PCI Express technology roadmap, it takes advantage of the parallelism of solid-state devices. “NVMe over Fabrics is a key enabler to support attaching thousands of NVMe SSDs in modern rack-scale designs,” said Amber Huffman, Intel Fellow in the Non-Volatile Memory Solutions Group at Intel Corporation and chair of the NVMe Workgroup.
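
To put the sub-microsecond claim in context, a rough sketch comparing it to local flash read latency (the ~80µs figure is an illustrative assumption for a typical NVMe SSD, not from the article):

```python
# Rough illustration of why ~1us of fabric overhead preserves
# near-local performance. The 80us local read latency is an
# assumed, illustrative figure for a typical NVMe SSD.
local_read_us = 80.0   # assumed local NVMe random-read latency
fabric_us = 1.0        # Kazan's claimed added fabric latency

total = local_read_us + fabric_us
print(f"Remote read ~{total:.0f}us; fabric adds "
      f"~{fabric_us / local_read_us:.1%} over a local read")  # ~1.2%
```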

http://www.electronicsweekly.com/news/products/memory/fast-memory-start-up-intel-and-samsung-invest-2016-07/

India takes first steps towards utility-scale energy storage

In the upcoming tenders for 100 MW of PV generation capacity in Andhra Pradesh and 200 MW in Karnataka, each 50 MW project would be connected to a storage capacity of 2.5 MWh. According to Bridge to India, these tenders could draw the attention of key storage system suppliers such as #NGKInsulators, #AES #EnergyStorage, #Sumitomo Electric, #LGChem, #Samsung #SDI, #NECEnergy, #BYD, #Toshiba, #GE and #Saft. BYD is already known to be exploring this opportunity and may offer a joint bid with #SkyPower. The primary commercial objective of these tenders would be to showcase India as an upcoming market for utility-scale energy storage solutions, Bridge to India notes; the projects should be seen as the start of a process to acclimatize project developers and grid operators to utility-scale storage.

From a technical standpoint, the need for energy storage is plainly obvious in a scenario where grid penetration of renewables is increasing rapidly. India expects to get 15% of all its power from renewables by 2022, against about 5.5% today. Greater storage capacity will be required in future to address the intermittency of renewable energy by storing surplus electricity to meet short-term demand-supply mismatches. Storage will also be critical in supporting the local grid through ancillary services such as frequency regulation, voltage support and peak demand shaving.

That said, the proposed energy storage systems in the SECI tenders are rather small, equivalent to just 3 minutes of production from a 50 MW project at full capacity, so their technical benefits will be limited, says Bridge to India. Nonetheless, the pilot projects would provide useful technical, operational and financial learning for the entire power sector. Assuming a price of INR 15,000/kWh (~USD 220/kWh) for lithium-ion batteries, a 2.5 MWh storage unit will cost around INR 38 million (USD 0.6 million). With India expected to become the third largest market for solar deployment after the U.S. and China from next year onwards, Bridge to India expects much more focus on storage in the coming years.
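
Bridge to India's sizing and cost figures can be verified with simple arithmetic; a sketch using the numbers quoted above (the INR/USD rate of ~67 is an assumption reflecting mid-2016):

```python
# Verify Bridge to India's figures for one 50 MW project.
plant_mw = 50.0
storage_mwh = 2.5
price_inr_per_kwh = 15_000   # lithium-ion price assumed in the report
inr_per_usd = 67.0           # assumed mid-2016 exchange rate

minutes_at_full_output = storage_mwh / plant_mw * 60
cost_inr = storage_mwh * 1_000 * price_inr_per_kwh
print(f"Storage covers {minutes_at_full_output:.0f} min at full output")  # 3
print(f"Cost: INR {cost_inr / 1e6:.1f}M "
      f"(~USD {cost_inr / inr_per_usd / 1e6:.2f}M)")  # INR 37.5M, ~USD 0.56M
```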

http://www.solarserver.com/solar-magazine/solar-news/current/2016/kw30/india-takes-first-steps-towards-utility-scale-energy-storage.html