AWS Storage Blog
Transferring files securely at the Foxtel Group using AWS Transfer Family and Amazon EFS
The Foxtel Group is Australia’s leading, next-generation subscription television company providing movies, entertainment, lifestyle, documentaries, news, and live sports through a range of streaming and broadcast services to over four million subscribers.
Enterprises such as the Foxtel Group rely upon secure file transfer solutions for the timely and secure transfer of data between themselves and external parties. They want to migrate from their existing on-premises file transfer platforms to AWS whilst ensuring a consistent experience and minimal disruption for existing clients.
Using AWS Transfer Family and Amazon Elastic File System (Amazon EFS) together provided an easy way for the Foxtel Group to make the transition to AWS. However, they faced a challenge replicating the NFSv4 access control lists (ACLs) that were used to restrict access to sensitive files and directories for both external SFTP users and internal hosts mounting the on-premises file system.
In this post, we walk through how AWS Partner Tata Consultancy Services (TCS) replaced the Foxtel Group’s on-premises SFTP platform with an AWS Transfer Family SFTP server backed by Amazon EFS storage.
We show how capabilities provided by AWS Transfer Family and Amazon EFS deliver an equivalent level of security to on-premises NFSv4 ACLs.
Solution overview
The following diagram shows the key components that we used to deliver an AWS Transfer Family SFTP server backed by Amazon EFS and make it available to hundreds of third parties over the internet without compromising security.
AWS Transfer Family is a fully managed FTP, FTPS, and SFTP service backed by either Amazon S3 or Amazon EFS. It removes the need to manage any server infrastructure, so customers can migrate without disrupting their existing file transfer workflows.
Amazon EFS provides a simple, serverless, set-and-forget elastic file system for use with AWS Cloud services and on-premises resources. Amazon EFS elastically scales on demand from zero to petabytes with no disruptions, growing and shrinking automatically as you add and remove files, removing the need to provision and manage capacity. Amazon EFS works with all AWS compute services, including Amazon EC2 instances, AWS container services, and AWS Lambda.
Amazon EFS is a Regional service providing strong file system consistency across three Availability Zones, matching the AWS Transfer Family support for high availability across up to three Availability Zones.
We migrated the external SFTP users to AWS Transfer Family by adding them as service-managed Transfer Family users. If you want to use password authentication or an external identity provider rather than service-managed users, you should follow the approach documented on the AWS Storage Blog. We created the users with their on-premises POSIX user and group IDs and set their home directory attribute within Transfer Family to their directory on the Amazon EFS file system to constrain their access to the file system.
To manage application access to shared datasets, we used Amazon EFS Access Points. Access points provide application-specific entry points into an EFS file system and can enforce a particular POSIX user identity, including the user’s groups, for all file system requests that are made through the access point, irrespective of the identity of the client. Access points can also enforce a different root directory for the file system so that clients can only access data in the specified directory or its subdirectories.
At the Foxtel Group, the application hosts reside in different VPCs from the Amazon EFS file system mount targets, and connectivity between the VPCs was enabled using an AWS Transit Gateway. However, the approach will also work if you have a VPC peering connection between your VPCs or if the workloads run in the same VPC as the file system.
Solution walkthrough
We demonstrate the approach with a simplified example of two external parties transferring files via AWS Transfer Family to dedicated Amazon EFS directories and two internal applications consuming that data. The first application has read and write access to the directory used by the first external party, and the second application has read-only access to both directories.
The walkthrough consists of the following steps:
- Creating an Amazon EFS file system.
- Creating IAM roles for Amazon EFS access.
- Creating an AWS Transfer Family SFTP server and Amazon EFS directories.
- Adding SFTP users to AWS Transfer Family.
- Creating EFS Access Points to enable access for Amazon EC2 instances.
- Updating the Amazon EFS file system IAM policy to enable access points.
- Mounting Amazon EFS via access points from Amazon EC2 instances.
Step 1: Creating an Amazon EFS file system
Create an Amazon EFS file system as a location for our AWS Transfer Family SFTP server to store the files it receives. For full details on how to do this, refer to the blog post “Making it even simpler to get started with Amazon EFS.”
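If you prefer the command line, here is a minimal sketch of the equivalent AWS CLI call; the file system ID it returns is assumed to be fs-0123456789abcdef0 throughout the rest of this walkthrough, the Name tag is an arbitrary example, and you still need a mount target in each Availability Zone (aws efs create-mount-target) before clients can connect.

```
# Create an encrypted, General Purpose mode EFS file system. The Region
# and credentials are taken from your CLI configuration.
$ aws efs create-file-system \
    --encrypted \
    --performance-mode generalPurpose \
    --tags Key=Name,Value=TransferFamilyFileSystem
```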
Set a file system IAM policy to lock down access by checking the boxes Prevent root access by default, Enforce read-only access by default, and Prevent anonymous access.
Also check Enforce in-transit encryption for all clients to enforce encryption in transit for NFS client access to Amazon EFS, ensuring that all traffic is encrypted using Transport Layer Security (TLS) 1.2 with an industry-standard AES-256 cipher.
The resulting file system policy allows no actions against the file system, but when a client connects to Amazon EFS, any identity-based IAM permissions associated with the client are evaluated along with the file system policy to determine the appropriate file system access permissions to grant.
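For illustration, the in-transit encryption option works by adding a deny statement along the following lines to the file system policy. This is a sketch of the documented pattern for enforcing encryption in transit; the console-generated policy also carries wizard-generated Id and Sid values not shown here.

```
{
    "Effect": "Deny",
    "Principal": { "AWS": "*" },
    "Action": "*",
    "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
    }
}
```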
Step 2: Creating IAM roles for Amazon EFS access
This step creates IAM roles required to administer the EFS file system, to upload files via Transfer Family, and to access the file system from EC2 instances.
To add an IAM role that provides administrative rights to EFS, navigate to the IAM console and choose Create role.
Select AWS service for Trusted entity type and, under Use case, choose Transfer from the Use cases for other AWS services dropdown.
Search for and select the AmazonElasticFileSystemClientFullAccess permissions policy.
Choose Next, enter AWSTransferFileSystemAdminRole for the Name and choose Create role.
Now create a role to provide Transfer Family SFTP users with the permissions required to read and write files to Amazon EFS via Transfer Family.
Follow the same steps as above for the administrative user but search for and select the AmazonElasticFileSystemClientReadWriteAccess policy when adding permissions.
Choose Next, enter AWSTransferExternalUserRole for the Name and choose Create role.
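As a sketch, the same role can also be created with the AWS CLI; the trust policy below uses the standard Transfer Family service principal, and the local file name is arbitrary.

```
# Trust policy that lets AWS Transfer Family assume the role.
$ cat > transfer-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "transfer.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
$ aws iam create-role --role-name AWSTransferExternalUserRole \
    --assume-role-policy-document file://transfer-trust.json
$ aws iam attach-role-policy --role-name AWSTransferExternalUserRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonElasticFileSystemClientReadWriteAccess
```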
Next, create two new IAM roles to assign to the EC2 instances hosting applications that will consume the file system.
Navigate to the IAM section of the console, select Roles, and choose Create role. Select AWS service as the Trusted entity type and EC2 as the Use case.
On the following screen, add the permissions policy for AmazonSSMManagedInstanceCore.
Choose Next and name the role as ApplicationOneRole before saving.
Repeat the process for a second role and name it ApplicationTwoRole.
These roles allow EC2 instances to interact with AWS Systems Manager, which is used to log in to EC2 instances in subsequent steps. However, neither role explicitly allows any actions against Amazon EFS. To enable access to the file system, we will later add statements for each role to the file system IAM policy to provide permissions against the access points.
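The equivalent CLI sketch for the application roles follows; note that an EC2 launch consumes the role through an instance profile, so one is created and attached here as well. The trust policy uses the EC2 service principal, and the second role can be created by repeating these commands with ApplicationTwoRole.

```
# Trust policy that lets Amazon EC2 assume the role.
$ cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
$ aws iam create-role --role-name ApplicationOneRole \
    --assume-role-policy-document file://ec2-trust.json
$ aws iam attach-role-policy --role-name ApplicationOneRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
# EC2 instances consume roles via an instance profile.
$ aws iam create-instance-profile --instance-profile-name ApplicationOneRole
$ aws iam add-role-to-instance-profile \
    --instance-profile-name ApplicationOneRole --role-name ApplicationOneRole
```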
Step 3: Creating an AWS Transfer Family SFTP server and Amazon EFS directories
Now that you have an EFS file system, use AWS Transfer Family to create an SFTP server, choosing Amazon EFS as the AWS Storage Service to store and access your data.
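If you are scripting the setup, a minimal sketch of the equivalent CLI call is:

```
# Public SFTP endpoint with service-managed users and EFS storage.
$ aws transfer create-server \
    --protocols SFTP \
    --domain EFS \
    --identity-provider-type SERVICE_MANAGED \
    --endpoint-type PUBLIC
```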
Once the status for your new server changes to Online, select the checkbox next to the SFTP server, and choose Add user.
Enter a Username of AdminUser, enter 0 for both the User ID and Group ID, and choose the AWSTransferFileSystemAdminRole role created earlier from the Role dropdown.
Select the EFS file system created earlier from the Home directory dropdown, leave the Enter optional folder field blank, and leave the Restricted check box unchecked.
Then add the public SSH key for the user and click Add.
This admin user can now be used to recreate the on-premises directory structure in Amazon EFS.
Use an SFTP client to connect to the Transfer Family SFTP server as the AdminUser.
```
$ sftp -i /private/key/location AdminUser@s-abcdef01234567890.server.transfer.region.amazonaws.com
Connected to s-abcdef01234567890.server.transfer.region.amazonaws.com.
sftp> pwd
Remote working directory: /fs-0123456789abcdef0
```
Create a directory for a user from the first external party with POSIX user and group IDs of 2001.
```
sftp> mkdir ExternalOneDirectory
sftp> chown 2001 ExternalOneDirectory
Changing owner on /fs-0123456789abcdef0/ExternalOneDirectory
sftp> chgrp 2001 ExternalOneDirectory
Changing group on /fs-0123456789abcdef0/ExternalOneDirectory
sftp> chmod 0770 ExternalOneDirectory
Changing mode on /fs-0123456789abcdef0/ExternalOneDirectory
```
Create a directory for a user from the second external party with POSIX user and group IDs of 3001.
```
sftp> mkdir ExternalTwoDirectory
sftp> chown 3001 ExternalTwoDirectory
Changing owner on /fs-0123456789abcdef0/ExternalTwoDirectory
sftp> chgrp 3001 ExternalTwoDirectory
Changing group on /fs-0123456789abcdef0/ExternalTwoDirectory
sftp> chmod 0770 ExternalTwoDirectory
Changing mode on /fs-0123456789abcdef0/ExternalTwoDirectory
```
Confirm that the directories have been created successfully and the permissions set correctly.
```
sftp> ls -la
drwxrwx---   2 2001   2001   6144 Mon D HH:MM ExternalOneDirectory
drwxrwx---   2 3001   3001   6144 Mon D HH:MM ExternalTwoDirectory
```
Step 4: Adding SFTP users to AWS Transfer Family
Navigate to AWS Transfer Family, Servers in the console, select the SFTP server previously created, and choose Add user to add a user for each of our two external parties.
In the User Configuration section, create a user to represent the first external party named ExternalOneUser, and enter 2001 for both the User ID and Group ID to match the POSIX values used earlier to create the ExternalOneDirectory directory on the EFS file system.
Select the AWSTransferExternalUserRole from the Role dropdown when creating each Transfer Family user.
Under Home Directory, select the file system, enter ExternalOneDirectory as the home directory, and check the Restricted box. This performs what is known as a chroot operation, and in this mode, users are not able to navigate to a directory outside of the home or root directory that you’ve configured for them.
If your directory structure requirements are more complex than a simple home directory, you can use logical directories to provide a virtual directory structure with user-friendly names. The use of logical directories is described in the blog post Simplify your AWS SFTP Structure with chroot and logical directories.
Finally, add the public SSH key for the external user and click Add.
Repeat the process to create a new user named ExternalTwoUser, but use the User ID of 3001 and Group ID of 3001 to match the POSIX configuration for the ExternalTwoDirectory created previously and set the Home directory value to ExternalTwoDirectory.
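For reference, a restricted user such as ExternalTwoUser can also be created from the CLI; in the API, the Restricted setting corresponds to a logical home directory with a single mapping from / to the user's directory. The server ID, account ID, and key file name below are the placeholders used in this post.

```
$ aws transfer create-user \
    --server-id s-abcdef01234567890 \
    --user-name ExternalTwoUser \
    --role arn:aws:iam::111122223333:role/AWSTransferExternalUserRole \
    --posix-profile Uid=3001,Gid=3001 \
    --home-directory-type LOGICAL \
    --home-directory-mappings Entry=/,Target=/fs-0123456789abcdef0/ExternalTwoDirectory \
    --ssh-public-key-body file://external_two_key.pub
```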
Now when you connect over SFTP as ExternalOneUser, you are able to upload a dummy file.
```
$ touch VendorOne.data
$ sftp -i /private/key/location ExternalOneUser@s-abcdef01234567890.server.transfer.region.amazonaws.com
Connected to s-abcdef01234567890.server.transfer.region.amazonaws.com.
sftp> put VendorOne.data
Uploading VendorOne.data to /VendorOne.data
```
However, you are constrained to ExternalOneDirectory as your root directory.
```
sftp> pwd
Remote working directory: /
sftp> ls
VendorOne.data
sftp> cd ..
sftp> ls
VendorOne.data
```
When you connect as ExternalTwoUser, you can only see the contents of ExternalTwoDirectory, which is set as the root directory for that SFTP user.
```
$ sftp -i /private/key/location ExternalTwoUser@s-abcdef01234567890.server.transfer.region.amazonaws.com
Connected to s-abcdef01234567890.server.transfer.region.amazonaws.com.
sftp> pwd
Remote working directory: /
sftp> ls
sftp>
```
This demonstrates that SFTP users are contained within the appropriate part of the EFS file system.
Step 5: Creating EFS Access Points to enable access for EC2 hosts
Create an access point for each of two applications, one that should only have access to data from the first external party and another that will be able to access the data uploaded by both external parties.
Choose Create Access Point in the Elastic File System, Access Points section of the EFS console.
On the resulting screen, enter the details of the Amazon EFS file system, set the Name for the access point to ApplicationOneAccessPoint, and set the Root directory path that the application should be constrained to as /ExternalOneDirectory.
In the POSIX user section, add 4001 as the User ID and Group ID, and in Secondary group IDs, enter 2001, the POSIX group ID associated with the ExternalOneDirectory directory previously created on the Amazon EFS file system.
Leave the remaining fields blank and create the access point.
Repeat the process to create an access point for the second application, but assign a Root directory path of /, User ID of 5001, Group ID of 5001, and Secondary group IDs of 2001,3001.
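The same two access points can be created with the CLI; a sketch follows, passing the POSIX user as JSON and reusing the file system ID and access point names from above.

```
# Access point for the first application: enforced identity 4001 with
# secondary group 2001, rooted at /ExternalOneDirectory.
$ aws efs create-access-point \
    --file-system-id fs-0123456789abcdef0 \
    --posix-user '{"Uid": 4001, "Gid": 4001, "SecondaryGids": [2001]}' \
    --root-directory Path=/ExternalOneDirectory \
    --tags Key=Name,Value=ApplicationOneAccessPoint

# Access point for the second application: identity 5001 with secondary
# groups 2001 and 3001, rooted at the top of the file system.
$ aws efs create-access-point \
    --file-system-id fs-0123456789abcdef0 \
    --posix-user '{"Uid": 5001, "Gid": 5001, "SecondaryGids": [2001, 3001]}' \
    --root-directory Path=/ \
    --tags Key=Name,Value=ApplicationTwoAccessPoint
```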
Step 6: Updating the Amazon EFS file system IAM policy to enable access points
Update the EFS file system IAM policy by adding extra statements to only allow certain actions against our file system when accessed via the access points and to restrict the use of each access point to only the IAM roles created for each application.
Edit the following JSON, replacing the role and access point ARNs with those you created in your environment.
```
{
    "Sid": "ApplicationOneAccess",
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:role/ApplicationOneRole" },
    "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite"
    ],
    "Condition": {
        "StringEquals": {
            "elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:region:111122223333:access-point/fsap-1234567890abcdef0"
        }
    }
},
{
    "Sid": "ApplicationTwoAccess",
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:role/ApplicationTwoRole" },
    "Action": "elasticfilesystem:ClientMount",
    "Condition": {
        "StringEquals": {
            "elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:region:111122223333:access-point/fsap-2345678901abcdef0"
        }
    }
}
```
These statements allow EC2 instances with the ApplicationOneRole attached to mount and write to the file system via the first access point, while those with the ApplicationTwoRole attached can only mount the file system, and only via the second access point.
Navigate to the Amazon EFS console, select File Systems, then select the previously created file system. Choose the File system policy tab, and select Edit.
Add the updated JSON to the array of policy statements.
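If you prefer to apply the policy from the command line, one option (a sketch; the file name is arbitrary) is to save the complete policy document, existing statements plus the two above, to a file and apply it in a single call:

```
$ aws efs put-file-system-policy \
    --file-system-id fs-0123456789abcdef0 \
    --policy file://filesystem-policy.json
```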
Step 7: Mounting Amazon EFS via access points from Amazon EC2 instances
Launch an Amazon EC2 instance with the previously created ApplicationOneRole role into the same VPC used to create the EFS file system and use AWS Systems Manager to connect to the instance.
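You can connect through the console's Connect option or from the CLI; a minimal CLI sketch, assuming the Session Manager plugin is installed and using a placeholder instance ID:

```
# Open a shell on the instance without SSH keys or open inbound ports.
$ aws ssm start-session --target i-0123456789abcdef0
```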
Install the amazon-efs-utils package and create a directory to use as the EFS file system mount point using the following commands:
```
AppOneHost$ sudo yum install -y amazon-efs-utils
AppOneHost$ sudo mkdir /mnt/efs
```
Note that the instance is not able to mount the EFS file system directly:
```
AppOneHost$ sudo mount -t efs -o tls,iam fs-0123456789abcdef0:/ /mnt/efs
b'mount.nfs4: access denied by server while mounting 127.0.0.1:/'
```
Nor is it able to access the file system via the access point that requires the ApplicationTwoRole role:
```
AppOneHost$ sudo mount -t efs -o tls,iam,accesspoint=fsap-2345678901abcdef0 fs-0123456789abcdef0:/ /mnt/efs
b'mount.nfs4: access denied by server while mounting 127.0.0.1:/'
```
An EC2 instance with the ApplicationOneRole assigned is only able to mount the file system through the access point created for that role. When using that access point, the ExternalOneDirectory directory is presented as the root directory.
An application running on this instance can both read from and write files to the directory, because the file system IAM policy allows both the mount and write actions for the role when using the access point. In addition, the access point contains a secondary group ID with matching file system permissions.
```
AppOneHost$ sudo mount -t efs -o tls,iam,accesspoint=fsap-1234567890abcdef0 fs-0123456789abcdef0:/ /mnt/efs
AppOneHost$ touch /mnt/efs/AppOne.data
AppOneHost$ ls -la /mnt/efs/
total 8
drwxrwx--- 2 2001 2001 6144 MON D HH:MM .
drwxr-xr-x 4 root root   29 MON D HH:MM ..
-rw-r--r-- 1 2001 2001    0 MON D HH:MM VendorOne.data
-rw-r--r-- 1 4001 4001    0 MON D HH:MM AppOne.data
```
Now launch a second EC2 instance with the ApplicationTwoRole assigned and observe that only the second access point can be mounted. Both ExternalOneDirectory and ExternalTwoDirectory are visible because the access point's root directory path was set to the file system root.
```
AppTwoHost$ sudo mkdir /mnt/efs
AppTwoHost$ sudo mount -t efs -o tls,iam,accesspoint=fsap-2345678901abcdef0 fs-0123456789abcdef0:/ /mnt/efs
AppTwoHost$ ls /mnt/efs
ExternalOneDirectory  ExternalTwoDirectory
```
This second host can read the contents of both directories because the file system policy statement for its access point grants mount permissions, and the POSIX group with read rights on each directory is included in the access point's secondary group IDs.
However, the host cannot write to either directory, even though the access point's secondary group IDs match the groups with write permissions on those directories, because the file system IAM policy only allows the EFS ClientMount action for the ApplicationTwoRole IAM role against the access point, not the ClientWrite action.
```
AppTwoHost$ sudo touch /mnt/efs/ExternalOneDirectory/AppTwo.data
touch: cannot touch ‘/mnt/efs/ExternalOneDirectory/AppTwo.data’: Read-only file system
AppTwoHost$ sudo touch /mnt/efs/ExternalTwoDirectory/AppTwo.data
touch: cannot touch ‘/mnt/efs/ExternalTwoDirectory/AppTwo.data’: Read-only file system
```
These examples have shown how you can replace NFSv4 ACLs used on-premises and restrict access to data transferred by SFTP using combinations of:
- AWS Transfer Family POSIX profiles and home directories.
- Amazon EFS file system permissions and IAM policies.
- Amazon EFS access point POSIX profiles and root directories.
Cleaning up
If you’ve been following along with the steps in this post, then don’t forget to delete the resources set up to avoid any future recurring charges, including your EC2 hosts, AWS Transfer Family SFTP server, Amazon EFS file system, and IAM roles.
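As a sketch, most of the cleanup can be done with the CLI using the placeholder IDs from this post; delete the mount targets before the file system, and detach managed policies and remove instance profiles before deleting the IAM roles.

```
$ aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
$ aws transfer delete-server --server-id s-abcdef01234567890
$ aws efs delete-access-point --access-point-id fsap-1234567890abcdef0
$ aws efs delete-access-point --access-point-id fsap-2345678901abcdef0
# Mount targets must be removed before the file system can be deleted.
$ aws efs delete-mount-target --mount-target-id fsmt-0123456789abcdef0
$ aws efs delete-file-system --file-system-id fs-0123456789abcdef0
```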
Conclusion
In this post, we showed you how Tata Consultancy Services was able to build a secure file transfer solution for the Foxtel Group using AWS Transfer Family and Amazon EFS.
We were able to lock down access to sensitive datasets using a combination of AWS Transfer Family users with home directories and POSIX profiles alongside Amazon EFS access points with POSIX profiles and a file system IAM policy. This approach replaced the NFSv4 ACLs that were present in the on-premises platform and enabled a seamless migration without any compromise in security.
Thanks for reading this blog post. If you have any questions, leave a comment in the comments section. To learn more about the components of this solution, check out the AWS Transfer Family and Amazon EFS documentation.