AWS Big Data Blog
Migrate data from an on-premises Hadoop environment to Amazon S3 using S3DistCp with AWS Direct Connect
This post demonstrates how to migrate nearly any amount of data from an on-premises Apache Hadoop environment to Amazon Simple Storage Service (Amazon S3) by using S3DistCp on Amazon EMR with AWS Direct Connect.
With traditional Hadoop DistCp, the copy must be run on the source cluster to move data to the target cluster. That invokes a MapReduce job on the source cluster and can consume a lot of its resources (depending on the data volume). To avoid this problem and minimize the load on the source cluster, you can use S3DistCp with Direct Connect to migrate terabytes of data from an on-premises Hadoop environment to Amazon S3. This approach runs the job on the target EMR cluster, minimizing the burden on the source cluster.
This post provides instructions for using S3DistCp to migrate data to the AWS Cloud. Apache DistCp is an open source tool that you can use to copy large amounts of data. S3DistCp is similar to DistCp, but optimized to work with AWS, particularly Amazon S3. Compared to Hadoop DistCp, S3DistCp is more scalable, offers higher throughput, and is more efficient at copying large numbers of objects in parallel across S3 buckets and across AWS accounts.
Solution overview
The architecture for this solution includes the following components:
- Source technology stack:
- A Hadoop cluster with connectivity to the target EMR cluster over Direct Connect
- Target technology stack:
- Amazon Virtual Private Cloud (Amazon VPC)
- EMR cluster
- Direct Connect
- Amazon S3
The following architecture diagram shows how you can use S3DistCp from the target EMR cluster to migrate large volumes of data from an on-premises Hadoop environment to Amazon S3 over a private network connection, such as Direct Connect.
This migration approach uses the following tools to perform the migration:
- S3DistCp – S3DistCp is similar to DistCp, but optimized to work with AWS, particularly Amazon S3. The command for S3DistCp in Amazon EMR version 4.0 and later is s3-dist-cp, which you add as a step in a cluster or at the command line. With S3DistCp, you can efficiently copy large amounts of data from Amazon S3 into Hadoop Distributed File System (HDFS), where it can be processed by subsequent steps in your EMR cluster. You can also use S3DistCp to copy data between S3 buckets or from HDFS to Amazon S3. S3DistCp is more scalable and efficient for parallel copying of large numbers of objects across buckets and across AWS accounts.
- Amazon S3 – Amazon S3 is an object storage service. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web.
- Amazon VPC – Amazon VPC provisions a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you’ve defined. This virtual network closely resembles a traditional network that you would operate in your own data center, with the benefits of using the scalable infrastructure of AWS.
- AWS Identity and Access Management (IAM) – IAM is a web service for securely controlling access to AWS services. With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which AWS resources users and applications can access.
- Direct Connect – Direct Connect links your internal network to a Direct Connect location over a standard Ethernet fiber-optic cable. One end of the cable is connected to your router, and the other to a Direct Connect router. With this connection, you can create virtual interfaces directly to public AWS services (for example, to Amazon S3) or to Amazon VPC, bypassing internet service providers in your network path. A Direct Connect location provides access to AWS in the AWS Region with which it's associated. You can use a single connection in a public Region or in AWS GovCloud (US) to access public AWS services in all other public Regions.
In the following sections, we discuss the steps to perform the data migration using S3DistCp.
Prerequisites
Before you begin, you should have the following prerequisites:
- Amazon EMR version 4.0 or greater. Refer to Plan and configure an Amazon EMR cluster for steps to create an EMR cluster.
- S3DistCp installed on the target EMR cluster (installed by default).
- An active AWS account with a private network connection between your on-premises data center and the AWS Cloud.
- A Hadoop user with access to the migration data in the HDFS.
- The AWS Command Line Interface (AWS CLI) installed and configured.
- An IAM role with permissions to put objects into an S3 bucket. For more information, refer to Customize IAM roles.
Get the active NameNode from the source Hadoop cluster
Sign in to any of the nodes on the source cluster and run the following commands in bash to get the active NameNode on the cluster.
In newer versions of Hadoop, run the following command to get the service status, which will list the active NameNode on the cluster:
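A minimal sketch, assuming a release that includes the -getAllServiceState subcommand of hdfs haadmin:

```bash
# Prints the HA service state (active or standby) of every configured NameNode
hdfs haadmin -getAllServiceState
```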
On older versions of Hadoop, run the following commands in bash to get the active NameNode on the cluster:
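The following sketch queries each NameNode ID individually; nn1 and nn2 are placeholders for the IDs defined in dfs.ha.namenodes.<nameservice> in hdfs-site.xml:

```bash
# Query the HA state of each NameNode ID defined for the nameservice
# (replace nn1 and nn2 with the IDs from dfs.ha.namenodes.<nameservice>)
for nn in nn1 nn2; do
  echo -n "$nn: "
  hdfs haadmin -getServiceState "$nn"
done
```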
Validate connectivity from the EMR cluster to the source Hadoop cluster
As mentioned in the prerequisites, you should have an EMR cluster with a custom IAM role for Amazon EMR attached. Run the following command to validate the connectivity from the target EMR cluster to the source Hadoop cluster:
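As a sketch, you can check from the EMR primary node that the active NameNode's RPC port (8020 by default) is reachable, assuming nc is available; the host name is a placeholder:

```bash
# Verify that the source NameNode's RPC port is reachable from the EMR cluster
# (replace the host name with your active NameNode; 8020 is the default RPC port)
nc -zv active-namenode.onprem.example.com 8020
```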
Alternatively, you can run the following command:
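For example, you can list the source HDFS root directly over the private connection (the NameNode host is again a placeholder):

```bash
# List the root of the source HDFS file system from the target EMR cluster
hdfs dfs -ls hdfs://active-namenode.onprem.example.com:8020/
```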
Validate if the source HDFS path exists
Check if the source HDFS path is valid. If the following command returns 0, indicating that it’s valid, you can proceed to the next step:
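A sketch of this check using hdfs dfs -test (the NameNode host and source path are placeholders):

```bash
# -test -d returns exit code 0 if the directory exists on the source HDFS
hdfs dfs -test -d hdfs://active-namenode.onprem.example.com:8020/data/source_dir
echo $?
```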
Transfer data using S3DistCp
To transfer the source HDFS folder to the target S3 bucket, use the following command:
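A minimal sketch of the copy command, run on the target EMR cluster (the NameNode host, HDFS path, and bucket name are placeholders):

```bash
# Copy the source HDFS directory to the target S3 prefix
s3-dist-cp \
  --src hdfs://active-namenode.onprem.example.com:8020/data/source_dir \
  --dest s3://my-target-bucket/data/source_dir/
```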
To transfer large files in multipart chunks, use the following command to set the chunk size:
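For example, the following sketch sets a 1,024 MiB part size with the --multipartUploadChunkSize option (the value is in MiB; the paths and bucket name are placeholders):

```bash
# Copy with a larger multipart upload part size for large files
s3-dist-cp \
  --src hdfs://active-namenode.onprem.example.com:8020/data/source_dir \
  --dest s3://my-target-bucket/data/source_dir/ \
  --multipartUploadChunkSize=1024
```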
This invokes a MapReduce job on the target EMR cluster. Depending on the data volume and the available bandwidth, the job can take anywhere from a few minutes to a few hours to complete.
To get the list of running YARN applications on the cluster, run the following command:
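For example, using the standard YARN CLI:

```bash
# List YARN applications currently in the RUNNING state
yarn application -list -appStates RUNNING
```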
Validate the migrated data
After the preceding MapReduce job completes successfully, use the following steps to validate the data copied over:
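One way to validate (a sketch; the paths and bucket name are placeholders) is to compare the total size of the source HDFS directory with the total size of the objects under the target S3 prefix:

```bash
# Total size (in bytes) of the source HDFS directory
hdfs dfs -du -s hdfs://active-namenode.onprem.example.com:8020/data/source_dir

# Total object count and size of the copied data in Amazon S3
aws s3 ls s3://my-target-bucket/data/source_dir/ --recursive --summarize
```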
If the source and target size aren’t equal, perform the cleanup step in the next section and repeat the preceding S3DistCp step.
Clean up partially copied or errored out partitions and files
S3DistCp doesn’t clean up partially copied files and partitions if it fails while copying. Clean up partially copied or errored out partitions and files before you reinitiate the S3DistCp process. To clean up objects on Amazon S3, use the following AWS CLI command to perform the delete operation:
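For example (the bucket name and prefix are placeholders; double-check the prefix before running a recursive delete):

```bash
# Recursively delete the partially copied objects under the target prefix
aws s3 rm s3://my-target-bucket/data/source_dir/ --recursive
```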
Best practices
To avoid copy errors when using S3DistCp to copy a single file (instead of a directory) from Amazon S3 to HDFS, use Amazon EMR 5.33.0 or later, or Amazon EMR 6.3.0 or later.
Limitations
The following are limitations of this approach:
- If S3DistCp is unable to copy some or all of the specified files, the cluster step fails and returns a non-zero error code. If this occurs, S3DistCp doesn’t clean up partially copied files.
- S3DistCp doesn’t support concatenation for Parquet files. Use PySpark instead. For more information, see Concatenating Parquet files in Amazon EMR.
- VPC limitations apply to Direct Connect for Amazon S3. For more information, see AWS Direct Connect quotas.
Conclusion
In this post, we demonstrated how to use S3DistCp to migrate large volumes of data from a source Hadoop cluster to a target S3 bucket or to HDFS on an EMR cluster. Unlike Hadoop DistCp, S3DistCp lets you migrate terabytes of data without consuming compute resources on the source cluster.
For more information about using S3DistCp, see the following resources:
- S3DistCp (s3-dist-cp)
- Seven Tips for Using S3DistCp on Amazon EMR to Move Data Efficiently Between HDFS and Amazon S3
About the Author
Vicky Wilson Jacob is a Senior Data Architect with the AWS Professional Services Analytics Practice. Vicky specializes in Big Data, Data Engineering, Machine Learning, Data Science, and Generative AI. He is passionate about technology and solving customer challenges. At AWS, he works with customers to implement big data, machine learning, analytics, and generative AI solutions on the cloud. Outside of work, he enjoys spending time with family, singing, and playing guitar.