#Hadoop's ability to work with #Amazon S3 storage goes back to 2006 and the issue HADOOP-574, "FileSystem implementation for Amazon S3". This filesystem client, "s3://", implemented an inode-style filesystem atop S3: it could support bigger files than S3 itself could then store, and some of its operations (directory rename and delete) were fast. The s3 filesystem allowed Hadoop to be run in Amazon's EC2 infrastructure, using S3 as the persistent store of its work. This piece of open source code predated Amazon's release of EMR, "#Elastic #MapReduce", by over two years. It's also notable as the piece of work which gained Tom White, author of "Hadoop: The Definitive Guide", committer status.

A weakness of the s3:// filesystem client was that it wasn't compatible with any other form of data stored in S3: there was no easy way to write data into S3 for Hadoop MapReduce to read, or for the results to be written back. (Remember: at the time, Hadoop meant MapReduce only.) This was addressed in 2008 by the HADOOP-931 work and the "S3 Native Filesystem" client, "S3N". S3N paths have URLs which begin "s3n://", followed by the name of the S3 "bucket" and the path underneath (a short usage sketch follows at the end of this post). S3N made collecting data for Hadoop-in-EC2 clusters easier, as well as allowing the output of work to be published directly for other applications.

Since that date, s3n:// has been the ubiquitous prefix on URLs used when Apache Hadoop reads data from S3. But not, notably, from Amazon's EMR: it uses a scheme, "s3:", which resembles s3n but has a closed source implementation underneath. Amazon have done some good work there, and the Apache code has lagged.

The S3N code has been relatively stable since 2008, with intermittent updates to the underlying jets3t library. It didn't get much attention, however. The functionality of the jets3t library slowly fell behind that offered by Amazon's own SDK, which added better authentication, advanced upload operations, and more. There was also work going on by Hadoop-in-cloud users such as Netflix, whose S3mper code addressed S3's eventual consistency problem, allowing S3 to be used as a direct output of analytics jobs.
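As a rough illustration of how an s3n:// path was consumed, here is a minimal sketch using the Hadoop FileSystem API. The bucket name, file path, and credential values are placeholders of my own, not anything from the projects discussed above; the fs.s3n.* configuration properties are the ones the S3N connector historically read.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3NReadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Credentials for the S3N connector (placeholder values).
        conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY");
        conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY");

        // An s3n:// URL: the scheme, then the bucket name, then the path underneath.
        // "my-bucket" and the object key are hypothetical.
        Path path = new Path("s3n://my-bucket/datasets/input.txt");
        FileSystem fs = path.getFileSystem(conf);

        // Read the object back line by line, exactly as with any other Hadoop filesystem.
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(fs.open(path)))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```

The point of the sketch is that, to MapReduce jobs and other Hadoop applications, an s3n:// path behaves like any other FileSystem URL, which is what made publishing job input and output directly in S3 practical.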
https://dzone.com/articles/the-history-of-apache-hadoops-support-for-amazon-s-1