AWS Transfer Family FAQs
General
What is AWS Transfer Family?
AWS Transfer Family offers fully managed support for the transfer of files over SFTP, AS2, FTPS, and FTP directly into and out of Amazon S3 or Amazon EFS. You can seamlessly migrate, automate, and monitor your file transfer workflows by maintaining existing client-side configurations for authentication, access, and firewalls — so nothing changes for your customers, partners, and internal teams, or their applications.
What is SFTP?
SFTP stands for Secure Shell (SSH) File Transfer Protocol, a network protocol used for secure transfer of data over the internet. The protocol supports the full security and authentication functionality of SSH, and is widely used to exchange data between business partners in a variety of industries including financial services, healthcare, media and entertainment, retail, advertising, and more.
What is FTP?
FTP stands for File Transfer Protocol, a network protocol used for the transfer of data. FTP uses separate channels for control and data transfers. The control channel remains open until it is terminated or times out due to inactivity, while the data channel is active for the duration of the transfer. FTP uses cleartext and does not support encryption of traffic.
What is FTPS?
FTPS stands for File Transfer Protocol over SSL, and is an extension to FTP. Like FTP, FTPS uses separate channels for control and data transfers. The control channel remains open until it is terminated or times out due to inactivity, while the data channel is active for the duration of the transfer. FTPS uses Transport Layer Security (TLS) to encrypt traffic, and allows encryption of both the control and data channel connections either concurrently or independently.
What is AS2?
AS2 stands for Applicability Statement 2, a network protocol used for the secure and reliable transfer of business-to-business data over HTTP/HTTPS on the public internet (or any TCP/IP network).
What are SFTP connectors?
AWS Transfer Family’s SFTP connectors are used to easily and reliably copy files at scale between externally hosted SFTP servers and AWS storage services.
Why should I use AWS Transfer Family?
AWS Transfer Family supports multiple protocols for business-to-business (B2B) file transfers so data can easily and securely be exchanged with your stakeholders, third-party vendors, business partners, or customers. Without Transfer Family, you have to host and manage your own file transfer service, which requires you to invest in operating and managing infrastructure, patching servers, monitoring for uptime and availability, and building one-off mechanisms to provision users and audit their activity. AWS Transfer Family solves these challenges by providing fully managed and secure connectivity options over SFTP, AS2, FTPS, and FTP for B2B file transfers, eliminating the need for you to manage file transfer related infrastructure.
What are the benefits of using AWS Transfer Family?
AWS Transfer Family provides you with a fully managed, highly available file transfer service with auto-scaling capabilities, eliminating the need for you to manage file transfer related infrastructure. Your end users’ workflows remain unchanged, while data uploaded and downloaded over the chosen protocols is stored in your Amazon S3 bucket or Amazon EFS file system. With the data in AWS, you can now easily use it with the broad array of AWS services for data processing, content management, analytics, machine learning, and archival, in an environment that can meet your compliance requirements.
Can I use AWS Transfer Family to build event-driven managed file transfer (MFT) workflows in AWS?
Yes. AWS Transfer Family publishes event notifications in Amazon EventBridge for each file transfer operation. You can subscribe to AWS Transfer Family events in Amazon EventBridge, and use them to orchestrate event-driven MFT workflows using Amazon EventBridge or any other orchestration engine of your choice that integrates with these events. Refer to the File Processing Automation section for more details.
How do I get started with AWS Transfer for SFTP, FTPS, and FTP server endpoints?
In three simple steps, you get an always-on server endpoint enabled for SFTP, FTPS, and/or FTP. First, select the protocol(s) your end users will use to connect to your endpoint. Next, configure user access using AWS Transfer Family's built-in authentication manager (service managed), Microsoft Active Directory (AD), or by integrating your own or a third-party identity provider such as Okta or Microsoft Azure AD ("BYO" authentication). Finally, set up the server to access S3 buckets or EFS file systems. Once the protocol(s), identity provider, and access to file systems are enabled, your users can continue to use their existing SFTP, FTPS, or FTP clients and configurations, while the data they access is stored in the chosen file systems.
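As a sketch of what the first step looks like programmatically, the helper below assembles the parameters for the CreateServer API. The parameter values shown (protocols, S3 domain, public endpoint) are illustrative assumptions, and `build_server_request` is a hypothetical helper, not part of any AWS SDK.

```python
def build_server_request(protocols, identity_provider_type="SERVICE_MANAGED",
                         domain="S3", endpoint_type="PUBLIC"):
    """Collect CreateServer parameters: which protocols the endpoint speaks,
    how users authenticate, and which storage service backs it."""
    return {
        "Protocols": protocols,                          # e.g. ["SFTP", "FTPS"]
        "IdentityProviderType": identity_provider_type,  # or "AWS_DIRECTORY_SERVICE", "API_GATEWAY"
        "Domain": domain,                                # "S3" or "EFS"
        "EndpointType": endpoint_type,                   # "PUBLIC" or "VPC"
    }


def create_server(params):
    """Send the request; requires AWS credentials and the boto3 SDK installed."""
    import boto3  # imported here so the builder above stays usable offline
    return boto3.client("transfer").create_server(**params)
```

A client would then call `create_server(build_server_request(["SFTP"]))` and use the returned server ID when creating users.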
How do I get started with AWS Transfer for AS2?
You can start using AS2 to exchange messages with your trading partners in three simple steps: First, import your certificates and private keys and your trading partners’ certificate and certificate chain. Next, create profiles using your and your partner’s AS2 IDs. Finally, pair up your own and your partner’s profile information using an agreement for receiving data and connector for sending data. At this point you are ready to exchange messages with your trading partner’s AS2 server.
How do I get started with AWS Transfer SFTP connectors?
You can start using SFTP connectors to copy files between remote SFTP servers and Amazon S3 in three simple steps. First, create a secret that stores the credentials the SFTP connector will use to authenticate to the remote server. Second, create an SFTP connector by supplying the secret and the remote server's URL. Third, once the connector is created, start using it to copy files between the remote server and your Amazon S3 bucket by invoking the StartFileTransfer API.
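The connector steps above can be sketched against the Transfer Family API. The URL, role ARN, secret ARN, and host key string below are placeholder assumptions, and the request-builder helpers are hypothetical; the actual calls would go through boto3's `create_connector` and `start_file_transfer`.

```python
def connector_request(remote_url, access_role_arn, secret_arn, trusted_host_keys):
    """Parameters for CreateConnector: where to connect, which IAM role may
    access your bucket, and which Secrets Manager secret holds credentials."""
    return {
        "Url": remote_url,  # e.g. "sftp://partner.example.com"
        "AccessRole": access_role_arn,
        "SftpConfig": {
            "UserSecretId": secret_arn,
            "TrustedHostKeys": trusted_host_keys,  # remote server's public host keys
        },
    }


def transfer_request(connector_id, send_file_paths):
    """Parameters for StartFileTransfer when sending files from S3 outward."""
    return {"ConnectorId": connector_id, "SendFilePaths": send_file_paths}
```

A client created with `boto3.client("transfer")` would pass these dicts to `create_connector(**...)` and `start_file_transfer(**...)` respectively.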
What is the difference between SFTP and FTPS? Which should I use when?
FTPS and SFTP can both be used for secure transfers. Since they are different protocols, they use different clients and technologies to offer a secure tunnel for transmission of commands and data. SFTP is a newer protocol and uses a single channel for commands and data, requiring fewer port openings than FTPS.
What is the difference between the SFTP, FTPS, and AS2 protocols? When should I use the AS2 protocol?
SFTP, FTPS, and AS2 can all be used for secure transfers. Since they are different protocols, they use different clients and technologies to offer secure transmission of data. Aside from support for encrypted and signed messages, AS2's built-in mechanism for Message Disposition Notification (MDN) alerts the sender that the message has been successfully received and decrypted by the recipient. This provides proof to the sender that their message was delivered without being tampered with in transit. AS2 is prevalent in retail, e-commerce, payments, and supply chain workflows for interacting with business partners who also use AS2, so that messages are securely transmitted and delivered. AS2 provides you with options to ensure the identity of the sender and receiver, the integrity of the message, and confirmation of whether the message was successfully delivered and decrypted by the receiver.
Can my users continue to use their existing file transfer clients and applications?
Yes. Any existing file transfer client application will continue to work as long as you have enabled your endpoint for the chosen protocols. Examples of commonly used SFTP/FTPS/FTP clients include WinSCP, FileZilla, CyberDuck, lftp, and OpenSSH clients.
How do I access files stored in an external SFTP site?
You can use AWS Transfer Family SFTP connectors to access files stored on external SFTP sites. Refer to the SFTP connectors documentation to get started.
How can I move files from my trading partner's business systems to my S3 bucket?
You can use AWS Transfer Family’s fully managed SFTP/FTPS/AS2 capabilities to receive EDI documents that are generated from your trading partner’s business systems. EDI documents received using AWS Transfer Family’s connectivity capabilities are automatically uploaded to Amazon S3, where they can then be transformed into JSON and XML formatted outputs using AWS B2B Data Interchange. Alternatively, you can use any other EDI connectivity tool to upload EDI documents to S3.
Can my users use SCP commands to transfer files using this service?
Yes, you can use Transfer Family to support SCP commands through the SFTP protocol, meeting the majority of SCP use cases for file transfers using S3 and EFS storage. To support SCP commands, ensure that your SCP client uses SCP over SFTP by default, as OpenSSH 9.0 or newer does. Note that the SCP protocol itself has been deprecated and is not supported by the service. To learn more, please visit our documentation.
Server endpoint options
Can I customize the login banners for users connecting to my Transfer Family server?
Yes. You can configure your Transfer Family server to display customized banners such as organization policies or terms and conditions to your users. You can also display a customized Message of the Day (MOTD) to users who have successfully authenticated. To learn more, visit the documentation.
Can I use my corporate domain name (sftp.mycompanyname.com) to access my endpoint?
Yes. The service supplies a domain name by default to access your endpoint. If you already have a domain name, you can use Amazon Route 53 or any DNS service to route your users’ traffic from your registered domain to the server endpoint in AWS. Refer to the documentation on how AWS Transfer Family uses Amazon Route 53 for custom domain names (applicable to internet facing endpoints only).
Can I set up my server to be accessible to resources only within my VPC?
Yes. When you create a server or update an existing one, you have the option to specify whether you want the endpoint to be accessible over the public internet or hosted within your VPC. By using a VPC hosted endpoint for your server, you can restrict it to be accessible only to clients within the same VPC, other VPCs you specify, or in on-premises environments using networking technologies that extend your VPC such as AWS Direct Connect, AWS VPN, or VPC peering. You can further restrict access to resources in specific subnets within your VPC using subnet Network Access Control Lists (NACLs) or Security Groups. Refer to the documentation on creating your server endpoint inside your VPC using AWS PrivateLink for details.
Can I use FTP with an internet facing endpoint?
No, when you enable FTP, you will only be able to use the VPC hosted endpoint’s internal access option, because FTP transmits data in cleartext. If traffic needs to traverse the public network, secure protocols such as SFTP or FTPS should be used.
Can I use FTP without a VPC?
No. VPC is required to host FTP server endpoints. Please refer to the documentation for CloudFormation templates to automate creation of VPC resources to host the endpoint during server creation.
Can my end users use fixed IP addresses to allowlist access to my server’s endpoint in their firewalls?
Yes. You can enable fixed IPs for your server endpoint by selecting the VPC hosted endpoint for your server and choosing the internet-facing option. This will allow you to attach Elastic IPs (including BYO IPs) directly to the endpoint, which is assigned as the endpoint’s IP address. Refer to the section on creating an internet-facing endpoint in the documentation: Creating your server endpoint inside your VPC.
Can I restrict incoming traffic by end users’ source IP addresses?
Yes. You have three options to restrict incoming traffic by users’ source IP address. If you are hosting your server endpoint within your VPC, refer to this blog post on using Security Groups to allowlist source IP addresses, or use the AWS Network Firewall service. If you are using a server with a public endpoint type and API Gateway to integrate your identity management system, you can use AWS WAF to allow, block, or rate limit access by your end users’ source IP address.
Can I host my server’s endpoint in a shared VPC environment?
Yes. You can deploy your server endpoint in shared VPC environments, typically used when segmenting your AWS environment using tools such as AWS Landing Zone for security, cost monitoring, and scalability. Refer to this blog post on using VPC hosted endpoints in shared VPC environments with AWS Transfer Family.
How do I improve performance of file transfers for remotely located end users?
You can use AWS Global Accelerator with your Transfer server endpoint to improve file transfer throughput and round-trip time. Visit this blog post for more information.
Can I select which cryptographic algorithms can be used when my end users’ clients connect to my server endpoint?
Yes. Based on your security and compliance requirements, you can select one of our available service managed security policies to control the cryptographic algorithms that will be advertised by your server endpoints. When your end users’ file transfer clients attempt to connect to your server, only the algorithms specified in the policy can be used to negotiate the connection. Refer to the documentation on pre-defined security policies.
Does AWS Transfer Family support quantum-safe exchange of public-keys?
Yes. AWS Transfer Family supports quantum-safe public-key exchange for SFTP file transfers. You can associate one of the pre-defined hybrid post-quantum (PQ) security policies with your SFTP server, enabling quantum-safe key exchange with clients that support PQ encryption algorithms.
Can my end users use fixed IP addresses to access my server whose endpoint type is PUBLIC?
No. Fixed IP addresses, which are usually used for firewall allowlisting purposes, are currently not supported on the PUBLIC endpoint type. Use VPC hosted endpoints to assign static IP addresses to your endpoint.
What IP ranges would my end users need to allowlist to access my SFTP server whose endpoint type is PUBLIC?
If you are using the PUBLIC endpoint type, your users will need to allowlist the AWS IP address ranges published here. Refer to the documentation for details on staying up to date with AWS IP Address Ranges.
Will my AWS Transfer for SFTP server's host key ever change after I create the server?
No. The server’s host key that is assigned when you create the server remains the same, unless you add a new host key and manually delete the original.
What types of SFTP server host keys are supported?
RSA, ED25519, and ECDSA key types are supported for SFTP server host keys.
Can I import keys from my current SFTP server so my users do not have to verify the authenticity of my server again?
Yes. You can import a host key when creating a server or import multiple host keys when updating a server. Refer to the documentation on managing host keys for your SFTP-enabled server.
Can multiple host keys be used to verify the authenticity of my SFTP server?
Yes. The oldest host key of each key type can be used to verify the authenticity of an SFTP server. By adding RSA, ED25519, and ECDSA host keys, three separate host keys can be used to identify your SFTP server.
Which host keys are used to verify authenticity of my SFTP server?
The oldest host key of each key type is used to verify authenticity of your SFTP server.
Can I rotate my SFTP server host keys to ensure secure connections?
Yes. You can rotate your SFTP server host keys at any time by adding and removing host keys. Refer to the documentation on managing host keys for your SFTP-enabled server.
How do my end users’ FTPS clients verify the identity of my FTPS server?
When you enable FTPS access, you will need to supply a certificate from AWS Certificate Manager (ACM). This certificate is used by your end users’ clients to verify the identity of your FTPS server. Refer to the ACM documentation on requesting new certificates or importing existing certificates into ACM.
Do you support active and passive modes of FTPS and FTP?
We only support passive mode, which allows your end users’ clients to initiate connections with your server. Passive mode requires fewer port openings on the client side, making your server endpoint more compatible with end users behind protected firewalls.
Do you support Explicit and Implicit FTPS modes?
We only support explicit FTPS mode.
Can I transfer files over FTPS/FTP protocols if I have a firewall or a router configured between the client and the server?
Yes. File transfers traversing a firewall or a router are supported by default using extended passive connection mode (EPSV). If you are using an FTPS/FTP client that does not support EPSV mode, visit this blog post to configure your server in PASV mode to expand your server’s compatibility to a broad range of clients.
Does AWS Transfer Family support non-default ports for SFTP servers?
Yes. In addition to the standard port 22, AWS Transfer Family also supports alternate ports 2222 and 22000. Port 22 is configured by default for your SFTP servers. To enhance the security of your server, you can configure SSH traffic to use port 22, 2222, or both. Refer to our documentation here.
SFTP connectors
What authentication methods are supported to establish connection with remote SFTP servers?
You can authenticate connections to remote servers using SSH key pairs, passwords, or both, depending on the remote server’s requirements. To learn more about storing and managing your connector’s authentication credentials in AWS Secrets Manager, visit the documentation.
Which AWS storage services are supported to transfer files using SFTP connectors?
You can transfer files between Amazon S3 and remote SFTP servers using SFTP connectors.
Which SSH host key algorithms are supported by SFTP connectors?
We support RSA and ECDSA host key algorithms. For more details on the supported key types, please visit the documentation here.
How do I validate the identity of a remote SFTP server when creating a connection?
The connector uses the host key fingerprint to validate the identity of the remote server. If the fingerprint presented by the remote server does not match the one uploaded to the connector’s configuration, the connection will fail and error details will be logged in CloudWatch. To learn more about how to upload the public portion of a remote server’s SSH key for identification, visit the SFTP connectors documentation here.
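To see what is being compared, you can compute the OpenSSH-style SHA256 fingerprint of a host key yourself. This is a minimal sketch assuming a key in `ssh-keyscan` output form ("ssh-ed25519 &lt;base64&gt; comment"); `host_key_fingerprint` is a hypothetical helper, not a Transfer Family API.

```python
import base64
import hashlib


def host_key_fingerprint(authorized_key_line: str) -> str:
    """Return the OpenSSH-style fingerprint ("SHA256:<base64>", padding
    stripped) of a public key given in authorized_keys / ssh-keyscan form."""
    key_blob = base64.b64decode(authorized_key_line.split()[1])
    digest = hashlib.sha256(key_blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
```

Comparing this value against the fingerprint in your CloudWatch error logs (or the output of `ssh-keygen -lf`) helps confirm you uploaded the right key.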
Can I select which cryptographic algorithms can be used with my SFTP connectors to connect to remote SFTP servers?
Yes. Based on your security and compatibility requirements, you can select one of our available service managed security policies to control the cryptographic algorithms that will be advertised by your SFTP connector. When your connector attempts to connect to the remote server, only the algorithms specified in the policy attached to your connector will be used to negotiate the connection. Refer to the documentation on pre-defined security policies.
Can I create an SFTP connector in one AWS account and use it to transfer files from a different AWS account?
Yes. You can create an SFTP connector in one AWS account and use it to transfer files from another account by providing the appropriate access permissions in the IAM role attached to your connector.
How can I check connectivity to the remote server without transferring files?
You can test connectivity to the remote server using the AWS Management Console or TestConnection API/CLI/CDK command. We recommend you test the connectivity to the remote server as soon as you create your connector to ensure that it is configured correctly. Make sure that the static IP addresses associated with your connectors are allow-listed by the remote server if needed. To learn more, visit SFTP connectors documentation.
What file transfer operations are supported by SFTP connectors?
SFTP connectors can be used to send files from Amazon S3 to a remote SFTP server, retrieve files from a remote SFTP server to Amazon S3, and list files stored in a directory on remote SFTP server. To learn more about using SFTP connectors, visit SFTP connectors documentation.
How do I retrieve files from remote SFTP servers when the file names are not known in advance?
You can list the files stored in a directory on remote SFTP servers by using the SFTP connector StartDirectoryListing API operation. You can then retrieve target files from the remote server by passing file names from the list to the StartFileTransfer API operation. For more information, refer to the sample solution for synchronizing new files from remote SFTP servers, or take the self-paced MFT workshop.
How can I use wildcards to specify file name patterns of the files to be copied using SFTP connectors?
You can list all files from a directory on remote SFTP server using SFTP connectors, and build custom logic to filter the file list based on your wildcard criteria for filename patterns. You can then use the StartFileTransfer API operation to transfer those files using SFTP connectors.
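A minimal sketch of that custom filtering logic, using Python's standard `fnmatch` module for shell-style wildcards; `filter_listing` is a hypothetical helper you would run on the directory listing before calling StartFileTransfer.

```python
from fnmatch import fnmatch


def filter_listing(remote_paths, pattern):
    """Keep only the remote file paths whose file name (the part after the
    last "/") matches a shell-style wildcard pattern such as "*.csv"."""
    return [p for p in remote_paths if fnmatch(p.rsplit("/", 1)[-1], pattern)]
```

The filtered list can then be passed as the file paths to retrieve when invoking StartFileTransfer.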
How do I track status of my file transfers?
You can monitor the current status of your file transfer operations using the ListFileTransferResults API command. In addition, SFTP connectors emit detailed logs in Amazon CloudWatch, including status of your file transfers, operation type (send or retrieve), timestamp, file path, and error description (if any) to help you maintain data lineage.
Can I schedule my file transfers using SFTP connectors?
Yes. You can schedule file transfers using Amazon EventBridge Scheduler. Create a schedule that meets your business needs using EventBridge Scheduler and specify AWS Transfer Family’s StartFileTransfer API, or StartDirectoryListing API, as the universal target for your schedule.
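As a sketch, a Scheduler target invoking StartFileTransfer through EventBridge Scheduler's universal API target syntax might look like the following. The connector ID, bucket path, and role ARN are placeholder assumptions, and `build_schedule_target` is a hypothetical helper.

```python
import json


def build_schedule_target(connector_id, send_file_paths, role_arn):
    """Target for EventBridge Scheduler's universal API target syntax
    (arn:aws:scheduler:::aws-sdk:<service>:<action>), invoking Transfer
    Family's StartFileTransfer on whatever schedule you define."""
    return {
        "Arn": "arn:aws:scheduler:::aws-sdk:transfer:startFileTransfer",
        "RoleArn": role_arn,  # role Scheduler assumes to call the API
        "Input": json.dumps({
            "ConnectorId": connector_id,
            "SendFilePaths": send_file_paths,
        }),
    }
```

This dict would be supplied as the Target when creating the schedule, together with a name and a schedule expression such as `cron(0 2 * * ? *)`.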
Can I invoke file transfers using SFTP connectors from my state machine in AWS Step Function?
Yes. AWS Step Functions integrates with various AWS services, including AWS Transfer Family, enabling you to invoke SFTP connector’s StartFileTransfer action directly from your state machine. Once you have created your SFTP connector with AWS Transfer Family, leverage Step Functions’ AWS SDK integrations to call the StartFileTransfer API. To learn more about orchestrating your file transfer and file processing workflows using AWS Step Functions and SFTP connectors, take the self-paced event-driven MFT workshop.
Can I orchestrate event-driven processing of my files transferred using SFTP connectors?
Yes. Every file transfer operation using SFTP connectors publishes an event notification in your default event-bus in Amazon EventBridge. You can subscribe to SFTP connector events, and use them to orchestrate event-driven processing of your transferred files using Amazon EventBridge or any other workflow orchestration service of your choice that integrates with these events.
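A sketch of an EventBridge rule pattern for these notifications follows; the detail-type strings are assumptions modeled on the connector event names, so check the Transfer Family EventBridge documentation for the exact values before relying on them.

```python
import json

# Matches Transfer Family SFTP connector notifications on the default event
# bus. The detail-type strings below are assumed, illustrative values.
CONNECTOR_EVENT_PATTERN = json.dumps({
    "source": ["aws.transfer"],
    "detail-type": [
        "SFTP Connector File Send Completed",
        "SFTP Connector File Retrieve Completed",
    ],
})
```

The pattern string would be supplied as the event pattern when creating the rule (for example with `put_rule` in boto3), with your file-processing target attached to the rule.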
Can I use a static IP address for my SFTP connector so that my business partner can allow list the connector’s IP address in their firewall?
Yes. Static IP addresses are associated with your connectors by default and can be used to allowlist connections on your business partner’s firewall. You can identify the static IP addresses associated with your connectors by navigating to the connector details page in the AWS Transfer Family Console, or by using the DescribeConnector API/CLI/CDK command.
Are static IP addresses the same for all SFTP connectors in my account?
Yes. All SFTP connectors in an AWS account and Region share a set of static IP addresses. Sharing IP addresses between connectors of a given type reduces the amount of allowlist documentation as well as the onboarding communications needed with your external partners.
Can I use SFTP connectors to connect with servers within a private network?
No. Currently, SFTP connectors can only be used to connect with servers that offer an internet accessible endpoint. If you need to connect to servers that are only accessible via a private network, please let us know via AWS Support or through your AWS account team.
Multi-protocol access
Can I enable multiple protocols on the same server endpoint?
Yes. During setup, you can select the protocol(s) you want to enable for clients to connect to your endpoint. The server hostname, IP address, and identity provider are shared across the selected protocols. Similarly, you can also enable additional protocol support to existing AWS Transfer Family endpoints, as long as the endpoint configuration meets the requirements for all the protocols you intend to use.
When should I create separate server endpoints for each protocol vs enable the same endpoint for multiple protocols?
When you need to use FTP (only supported for access within a VPC) and also need to support SFTP, AS2, or FTPS access over the internet, you will need a separate server endpoint for FTP. You can use the same endpoint for multiple protocols when you want to use the same endpoint hostname and IP address for clients connecting over multiple protocols. Additionally, if you want to share the same credentials for SFTP and FTPS, you can set up and use a single identity provider for authenticating clients connecting over either protocol.
Can I set up the same end user to access the endpoint over multiple protocols?
Yes. You can provide the same user access over multiple protocols, as long as the credentials specific to the protocol have been set up in your identity provider. If you have enabled FTP, we recommend maintaining separate credentials for FTP. Refer to the documentation for setting up separate credentials for FTP.
Why should I maintain separate credentials for FTP users?
Unlike SFTP and FTPS, FTP transmits credentials in cleartext. We recommend isolating FTP credentials from SFTP or FTPS credentials because, if FTP credentials are inadvertently shared or exposed, your workloads using SFTP or FTPS remain secure.
Identity Provider options for server endpoints
What identity provider options are supported by the service?
The service supports three identity provider options: Service Managed, where you store user identities within the service; Microsoft Active Directory; and Custom Identity Providers, which enable you to integrate an identity provider of your choice. Service Managed authentication is supported for server endpoints that are enabled for SFTP only.
How can I authenticate my users using Service Managed authentication?
You can use Service Managed authentication to authenticate your SFTP users using SSH keys.
How many SSH keys can I upload per SFTP user? Which key types are supported?
You can upload up to 10 SSH keys per user. RSA, ED25519, and ECDSA keys are supported.
Is SSH key rotation supported for service managed authentication?
Yes. Refer to the documentation for details on how to set up key rotation for your SFTP users.
How do I get started with using Microsoft AD?
When you create your server, you select a directory in AWS Managed Microsoft AD, your on-premises environment, or self-managed AD in Amazon EC2 as your identity provider. You will then need to specify the AD Groups you want to enable for access using a Security Identifier (SID). Once you associate your AD group with access control information such as IAM Role, scope down policy (S3 only), POSIX Profile (EFS only), home directory location, and logical directory mappings, members of the group can use their AD credentials to authenticate and transfer files over the enabled protocols (SFTP, FTPS, FTP).
How can I set up my AD users so they have isolated access to different parts of my S3 bucket?
When you set up your users, you supply a scope down policy that is evaluated at runtime based on your users’ information, such as their username. You can use the same scope down policy for all your users to provide access to unique prefixes in your bucket based on their username. Additionally, a username can also be used to evaluate logical directory mappings by providing a standardized template for how your S3 bucket or EFS file system contents are made visible to your users. For more information, visit the documentation on granting access to AD groups.
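A sketch of such a scope down (session) policy: the literal `${transfer:UserName}` placeholder is substituted by the service at request time, not by this code, and the bucket name is an illustrative assumption.

```python
import json


def session_policy(bucket="DOC-EXAMPLE-BUCKET"):
    """A session policy confining each authenticated user to the
    home/<username>/ prefix of a single bucket."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListOwnPrefix",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [f"arn:aws:s3:::{bucket}"],
                "Condition": {
                    # ${transfer:UserName} is resolved by AWS per request
                    "StringLike": {"s3:prefix": ["home/${transfer:UserName}/*"]}
                },
            },
            {
                "Sid": "ReadWriteOwnPrefix",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/home/${{transfer:UserName}}/*"],
            },
        ],
    })
```

The rendered JSON string would be attached as the policy for each user (or AD group), so every authenticated user sees only their own prefix.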
Can I use Microsoft AD as an identity provider option for all the supported protocols?
Yes. You can use Microsoft AD to authenticate users for access over SFTP, FTPS, and FTP.
Can I revoke access for enabled AD groups?
Yes. You can revoke file transfer access for individual AD Groups. Once revoked, members of the AD groups will not be able to transfer files using their AD credentials.
Why should I use the Custom authentication mode?
The Custom mode (“BYO” authentication) enables you to leverage an existing identity provider to manage your end users for all protocol types (SFTP, FTPS, and FTP), enabling easy and seamless migration of your users. Credentials can be stored in your corporate directory or an in-house identity datastore, and you can integrate it for end user authentication purposes. Examples of identity providers include Okta, Microsoft Azure AD, or any custom-built identity provider you may be using as part of an overall provisioning portal.
What options do I have to integrate my identity provider with an AWS Transfer Family server?
To integrate your identity provider with an AWS Transfer Family server, you can use an AWS Lambda function or an Amazon API Gateway endpoint. Use Amazon API Gateway if you need a RESTful API to connect to an identity provider or want to leverage AWS WAF for its geo-blocking and rate limiting capabilities. Visit the documentation to learn more about integrating common identity providers such as Amazon Cognito, Okta, and AWS Secrets Manager.
Can I apply access controls based on the client source IP?
Yes. The client source IP is passed to your identity provider when you use AWS Lambda or API Gateway to connect a custom identity provider. This enables you to allow, deny, or limit access based on the IP addresses of clients to ensure that your data is accessed only from IP addresses that you have specified as trusted.
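A minimal sketch of such a check inside a custom identity provider Lambda: the CIDR ranges and the `is_trusted_source` helper are illustrative assumptions, standing in for whichever trusted ranges you maintain.

```python
from ipaddress import ip_address, ip_network

# Illustrative trusted ranges; replace with your partners' real CIDRs.
TRUSTED_CIDRS = ["203.0.113.0/24", "198.51.100.0/24"]


def is_trusted_source(source_ip: str) -> bool:
    """True if the client IP handed to the identity provider falls inside
    one of the trusted CIDR ranges."""
    addr = ip_address(source_ip)
    return any(addr in ip_network(cidr) for cidr in TRUSTED_CIDRS)
```

When this returns False, the Lambda would return a response that fails authentication, denying the untrusted client before any credentials are checked.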
Can I require multiple methods of authentication when users attempt to connect to my SFTP server?
Yes. You can enforce multiple methods of authentication to provide an additional layer of security when your data is accessed over SFTP. Your SFTP server can be configured to require both password and SSH key, password or SSH key, just password, or just SSH key. Refer to the documentation for details on how to enable multiple methods of authentication using your custom identity provider.
Can I use service managed option for password authentication?
No. Storing passwords within the service for authentication is currently not supported. If you need password authentication, use Active Directory by selecting a directory in AWS Directory Service, or follow the architecture described in this blog post on enabling password authentication using Secrets Manager.
Are anonymous users supported?
No. Anonymous users are currently not supported for any of the protocols.
Can I provide access to individual AD users or to all users in a directory?
No. We only support setting access by AD Groups.
Can I use AD to authenticate users using SSH keys?
No. AWS Transfer Family support for Microsoft AD can only be used for password-based authentication. To use a mix of authentication modes, use the Custom authorizer option.
AS2 trading partners
Is AWS Transfer Family support for AS2 Drummond Certified?
Yes. AWS Transfer Family support for AS2 has received the official Drummond Group AS2 Cloud Certification Seal. AWS Transfer Family AS2 capabilities have been thoroughly vetted for security and message exchange compatibility with fourteen other third-party AS2 solutions. Visit our announcement to learn more.
How do I uniquely identify my AS2 trading partner?
Your trading partner is uniquely identified using their AS2 Identifier (AS2 ID). Similarly, your trading partners identify your messages using your AS2 ID.
Which existing features of AWS Transfer Family are available for AS2? Which features are not available?
You can use AWS Transfer Family’s existing support for Amazon S3, networking features (VPC endpoints, Security Groups, and Elastic IPs), and access controls (AWS IAM) for AS2, as you could for SFTP, FTPS, and FTP. User authentication, logical directories, custom banners, and Amazon EFS as a storage backend are not supported for AS2.
What is non-repudiation and why is it important?
Non-repudiation, unique to AS2, validates that messages are successfully exchanged between two parties. Non-repudiation in AS2 is achieved using Message Disposition Notifications (MDNs). When an MDN is requested in a transaction, it ensures that the sender sent the message, the receiver successfully received it, and the message sent by the sender was the same message received by the receiver.
What are the steps involved in message transmission using the AS2 protocol?
There are two aspects to message transmission – one on the sender’s side and one on the receiver’s. Once the sender has determined what message to send, the message is signed (using the sender’s private key), encrypted (using the receiver’s certificate), and the message integrity is calculated using a hash. This signed and encrypted message is transmitted over the wire to the receiver. Once the message is received, it is decrypted (using the receiver’s private key), validated (using the sender’s public key), and processed, and a signed Message Disposition Notification (MDN), if requested, is sent back to the sender to acknowledge successful delivery of the message. Refer to the documentation on how AS2 handles message transmission.
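The integrity hash in this sequence is the Message Integrity Check (MIC) that the receiver echoes back in a signed MDN. A simplified sketch of the idea (real AS2 implementations compute the digest over the canonicalized MIME entity, not the raw payload):

```python
import base64
import hashlib

def compute_mic(payload: bytes, algorithm: str = "sha256") -> str:
    """Return a base64-encoded digest of the payload, in the style of the
    Received-Content-MIC value carried in a signed MDN. Simplified: real
    AS2 hashes the canonicalized MIME entity, not the raw bytes."""
    digest = hashlib.new(algorithm, payload).digest()
    return base64.b64encode(digest).decode("ascii") + f", {algorithm}"

# The sender records the MIC at send time; when the MDN arrives, the
# echoed MIC must match, proving the receiver got the same bytes.
sent_mic = compute_mic(b"ISA*00*...")    # recorded when sending
echoed_mic = compute_mic(b"ISA*00*...")  # value returned in the MDN
assert sent_mic == echoed_mic
```

A mismatch between the recorded and echoed MIC indicates the payload was altered in transit, which is exactly what non-repudiation is meant to detect.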
What are the options available for message transmission?
The possible combinations of options are driven from the sender’s standpoint. The sender can choose to only encrypt, only sign, or both encrypt and sign the data, and can choose to request a Message Disposition Notification (MDN). If the sender chooses to request an MDN, they can request a signed or unsigned MDN. The receiver is expected to honor these options.
Is requesting a Message Disposition Notification (MDN) optional?
Yes. The sender can choose to request an MDN, choose to request a signed or unsigned MDN, as well as select the signing algorithms that should be used to sign the MDN.
Do you support synchronous (Sync) and asynchronous (Async) MDNs? When should I use which option?
Yes, we currently support both synchronous and asynchronous MDN responses. This enables you to respond to your trading partners with either a synchronous or asynchronous MDN after receiving an AS2 message. Since synchronous MDNs are sent over the same connection channel as the message, they are simpler to operate and hence the recommended option. If you need more time to process the message before sending an MDN, asynchronous MDNs are preferred. If you need to request and receive asynchronous MDNs when sending messages to your trading partners, contact us through AWS Support or your account manager.
How do I track and search for payloads and MDNs sent and received?
AWS Transfer Family extracts key AS2 information from payloads and MDNs exchanged and stores them as JSON files in your Amazon S3 bucket. You can query these JSON files using S3 Select or Amazon Athena, or index the files using Amazon OpenSearch or Amazon DocumentDB for analytics.
Can I archive the received MDNs (as the sender who requested them)?
Yes. Once you receive an MDN from your trading partner, the service validates the MDN using your certificate and stores the message in your Amazon S3 bucket. You can choose to archive the message by leveraging S3 Lifecycle policies.
How do I notify AWS Transfer Family when a message is ready for delivery to my trading partner’s endpoint?
Once your data is ready for delivery, you invoke the service-provided StartFileTransfer API, specifying the connector associated with your trading partner and the files to be delivered. This notifies the service to send the message to your trading partner’s endpoint. Refer to the documentation on using connectors to send messages to your trading partner over AS2.
Can I isolate each of my trading partners to use different inbound and outbound locations for messages?
Yes. When you set up your trading partner’s profile you can use different folders for each of them.
Can I use my trading partner's existing keys and certificates with my AWS Transfer Family AS2 endpoint?
Yes. You can import your partner’s existing keys and certificates and manage renewals and rotations. Refer to the documentation on importing certificates.
How do I know when my trading partner’s certificates are expiring?
Using the AWS Transfer Family console, you can view a dashboard of certificates sorted by their expiry dates. Additionally, you can opt in to receive notifications ahead of certificate expiry, giving you sufficient time to rotate them to prevent discontinuity in operations.
Can I connect to my trading partner’s AS2 host that requires me to authenticate using username and password credentials?
Yes. We support the ability to connect to your trading partner’s AS2 server using Basic authentication. Refer to the documentation on configuring Basic authentication on AS2 connectors.
Can I connect to my trading partner’s AS2 server using static IP addresses?
Yes. Static IP addresses are associated with your connectors by default and can be used to allow-list connections on your trading partner’s AS2 server. You can identify the static IP addresses associated with your connectors by navigating to the connector details page in the AWS Transfer Family Console, or by using the DescribeConnector API/CLI/CDK command.
Can I send messages to my trading partners’ AS2 servers using fixed IP addresses?
Yes. Your AS2 connectors use static IP addresses when sending messages to remote AS2 servers and when returning asynchronous message disposition notification (MDN) responses. You can identify the static IP addresses associated with your connectors by navigating to the connector or server details page in the AWS Transfer Family Management Console, or by using the DescribeConnector or DescribeServer API/CLI/CDK commands.
Can I receive AS2 messages from my trading partners over a fixed IP enabled endpoint?
Yes. Your AS2 server endpoints support configuring IP allow-list controls by using security groups with internet-facing, VPC-hosted endpoints.
How can I orchestrate processing of AS2 messages received from my trading partners?
Every AS2 message received publishes an event to your default event-bus in Amazon EventBridge. You can subscribe to these events and use them to orchestrate event-driven processing of the received messages using Amazon EventBridge or any other workflow orchestration service. For example, you can use these events to copy incoming messages to other locations in S3, malware scan the contents of messages using a custom Lambda, or tag messages based on their contents so they can be indexed and searched by services like Amazon CloudSearch.
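An EventBridge rule for this subscription is defined by an event pattern. A minimal sketch is below; the source value aws.transfer and the detail-type string follow the AS2 receive event named in this FAQ, but treat the exact strings as illustrative and confirm them against the Transfer Family EventBridge documentation.

```python
import json

# Match AS2 inbound-message events published by Transfer Family on the
# default event bus. The detail-type string follows the event named in
# this FAQ; confirm exact values against the service documentation.
as2_received_pattern = {
    "source": ["aws.transfer"],
    "detail-type": ["AS2 Payload Receive Completed"],
}

# This JSON is what you would supply as the EventPattern of an
# EventBridge rule (e.g. via the PutRule API), with a Lambda function,
# Step Functions state machine, or SNS topic as the rule target.
print(json.dumps(as2_received_pattern, indent=2))
```

The rule target then receives the full event, including the message’s S3 location, and can kick off malware scanning, copying, or tagging.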
Will my AS2 asynchronous MDN responses use static IP addresses?
Yes. Your AS2 asynchronous MDN responses will use static IP addresses. You can identify the static IP addresses used for sending your asynchronous MDN responses by navigating to the server details page in the AWS Transfer Family Management Console, or by using the DescribeServer API/CLI/CDK command.
Can I automatically transform the EDI contents of my inbound AS2 messages using AWS B2B Data Interchange?
Yes. You can automatically transform the X12 EDI contents of your inbound AS2 messages into common data representations such as JSON and XML using AWS B2B Data Interchange. To do so, create an Amazon EventBridge rule that matches the event pattern of AWS Transfer Family’s AS2 Payload Receive Completed event and specifies AWS B2B Data Interchange’s StartTransformerJob API as the universal target for the rule. By transforming the X12 EDI contents of your inbound AS2 messages with AWS B2B Data Interchange, you can automate and accelerate the integration of your EDI data into downstream business applications and systems.
How can I automate the sending of AS2 messages to my trading partners?
You can automate the sending of AS2 messages by scheduling them with the Amazon EventBridge Scheduler or by triggering them using Amazon EventBridge rules. To create automated, time-based workflows for sending AS2 messages, create a schedule that meets your business’s needs using the EventBridge Scheduler and specify AWS Transfer Family’s StartFileTransfer API as the universal target for your schedule. To create automated, event-driven workflows for sending AS2 messages, create an Amazon EventBridge rule that matches events published to EventBridge and specify AWS Transfer Family’s StartFileTransfer API as the universal target for your rule.
Can I archive AS2 messages or MDN responses sent to my trading partners?
Yes. Every AS2 message and MDN sent publishes an event to your default event bus in Amazon EventBridge. You can subscribe to these events and use them to delete or archive AS2 messages and MDNs successfully sent to your trading partner.
Can I be notified when outbound AS2 messages fail to send or when inbound AS2 messages fail to process?
Yes. AWS Transfer Family publishes events to Amazon EventBridge for every successful or failed AS2 message or MDN sent and received. These events are published to your default event-bus in Amazon EventBridge where they can be used to trigger email notifications to you or your partners using services like Amazon SNS.
Can I use Transfer Family’s managed workflows to process messages received from my trading partners over AS2?
No. Currently, managed workflows are not supported for your AS2 endpoints. We recommend using Transfer Family’s event notifications that are published in Amazon EventBridge to orchestrate processing of your AS2 messages. For more details, refer to the File Processing Automation section.
Do you support AS3 and AS4?
No. AWS Transfer Family does not offer support for AS3 or AS4 at this time.
File processing automation
What options do I have to automate processing of files transferred using AWS Transfer Family?
You have two options: 1) AWS Transfer Family publishes file transfer event notifications in Amazon EventBridge for files transferred over SFTP, AS2, FTPS, and FTP, and you can use these events to trigger processing of your files using any service that can integrate with EventBridge events, and 2) AWS Transfer Family provides managed workflows to make it easier for you to automatically execute post-upload processing of files uploaded over SFTP, FTPS, and FTP server endpoints using pre-built file-processing steps. When you associate a managed workflow with your server endpoint, all the files uploaded over that endpoint are processed using the same workflow steps.
Which Transfer Family operations publish event notifications in Amazon EventBridge?
AWS Transfer Family publishes event notifications in Amazon EventBridge upon successful or failed completion of each file transfer operation, for both server and connector resources. For more information on Transfer Family events published to Amazon EventBridge, refer to the documentation.
What is managed workflows for post-upload processing?
AWS Transfer Family managed workflows provide a pre-built framework for you to create, run, and monitor a linear sequence of steps for processing files uploaded over SFTP, FTPS, and FTP server endpoints. Using this feature, you can save time with pre-built steps to execute common file-processing tasks such as copying, tagging, and decrypting of files. You can also customize file processing using an AWS Lambda function for tasks such as scanning files for PII, viruses/malware, or other errors such as incorrect file format or type, enabling you to quickly detect anomalies and meet your compliance requirements. When you associate a managed workflow with your server endpoint, all the files uploaded over that endpoint are processed using the same workflow steps.
Why do I need managed workflows?
If you need to process files that you exchange with your business partners, you need to set up infrastructure to run custom code, continuously monitor for runtime errors and anomalies, and make sure all changes and transformations to the data are audited and logged. Additionally, you need to account for error scenarios, both technical and business, while ensuring failsafe modes are properly triggered. If you have requirements for traceability, you need to track the lineage of the data as it passes along different components of your system. Maintaining separate components of a file-processing workflow takes time away from focusing on differentiating work you could be doing for your business. Managed workflows remove the complexities of managing multiple tasks, and provide a standardized file-processing solution that can be replicated across your organization, with built-in exception handling and file traceability for each step to help you meet your business and legal requirements.
What are the benefits of using managed workflows?
Managed workflows allow you to easily preprocess data before it is consumed by your downstream applications by executing a linear sequence of file-processing tasks for all files uploaded to your server endpoints, such as moving uploaded files to user-specific folders, decrypting files using PGP keys, malware scanning, and tagging. You can deploy workflows using Infrastructure as Code (IaC), enabling you to quickly replicate and standardize common post-upload file processing tasks spanning multiple business units in your organization. You can have granular control by associating a managed workflow to your server endpoint that is triggered only on fully uploaded files, and by associating a separate managed workflow that is triggered only for partially uploaded files to process incomplete uploads. Workflows also provide built-in exception handling to allow you to easily react to file-processing outcomes in case of errors or exceptions in the workflow execution, helping you maintain your business and technical SLAs. Each file processing step in your workflow also produces detailed logs, which can be audited to trace data lineage.
When should I use Amazon EventBridge to orchestrate processing of my transferred files and when should I use AWS Transfer Family managed workflows?
AWS Transfer Family server endpoints and connectors automatically publish event notifications in Amazon EventBridge when a file transfer operation completes, along with operational information such as file location, username of the sender, server-id or connector-id, transfer status, etc. You can use these events when you need granular control in defining file processing, such as using conditional logic based on the source of the file, or when you need to build event-driven architectures to integrate with other AWS services, third-party applications, and your own applications. On the other hand, AWS Transfer Family managed workflows provide a pre-built framework to define a linear sequence of common file-processing steps that are applied to all files uploaded over your SFTP, FTPS, and FTP server endpoints. You can associate a managed workflow with your endpoint when all uploaded files need to be processed using the same common file-processing steps, without needing to apply any granular or conditional logic.
How do I get started with managed workflows?
First, set up your workflow to contain actions such as copying, tagging, and a series of actions that can include your own custom step, in a sequence based on your requirements. Next, map the workflow to a server, so that on file arrival, the actions specified in this workflow are evaluated and triggered in real time. To learn more, visit the documentation, watch this demo on getting started with managed workflows, or deploy a cloud-native file-transfer platform using this blog post.
Can I use the same managed workflow across multiple servers?
Yes. The same workflow can be associated with multiple servers so it is easier for you to maintain and standardize configurations.
What actions can I take on my files using workflows?
The following common actions are available once a transfer server has received a file from the client:
Decrypting file using PGP keys. Refer to this blog post on encrypting and decrypting files using PGP.
Move or copy data from where it arrives to where it needs to be consumed.
Delete the original file post archiving or copying to a new location.
Tag the file based on its contents so it can be indexed and searched by downstream services (S3 only)
Any custom file processing logic by supplying your own Lambda function as a custom step to your workflow. For example, checking compatibility of the file type, scanning files for malware, detecting Personally Identifiable Information (PII), and metadata extraction before ingesting files to your data analytics.
Can I select which file to process at each workflow step?
Yes. You can configure a workflow step to process either the originally uploaded file or the output file from the previous workflow step. This allows you to easily automate moving and renaming of your files after they are uploaded to Amazon S3. For example, to move a file to a different location for file archival or retention, configure two steps in your workflow. The first step is to copy the file to a different Amazon S3 location, and the second step is to delete the originally uploaded file. Read the documentation for more details on selecting a file location for workflow steps.
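A sketch of that two-step workflow, expressed as workflow step definitions: ${transfer:UserName} and ${original.file} are workflow variables resolved by Transfer Family at execution time, while the bucket and key names are placeholders.

```python
import json

# Two-step managed workflow: copy each upload into a per-user archive
# folder, then delete the originally uploaded file. Bucket/key values
# are placeholders; ${transfer:UserName} and ${original.file} are
# variables substituted by Transfer Family at execution time.
workflow_steps = [
    {
        "Type": "COPY",
        "CopyStepDetails": {
            "Name": "copy-to-archive",
            "DestinationFileLocation": {
                "S3FileLocation": {
                    "Bucket": "my-archive-bucket",
                    "Key": "archive/${transfer:UserName}/",
                }
            },
        },
    },
    {
        "Type": "DELETE",
        "DeleteStepDetails": {
            "Name": "delete-original",
            # Explicitly target the original upload, not the copy made
            # by the previous step.
            "SourceFileLocation": "${original.file}",
        },
    },
]

print(json.dumps(workflow_steps, indent=2))
```

This is the Steps payload you would supply when creating the workflow (for example via CreateWorkflow), before associating the workflow with your server endpoint.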
Can I use workflows to automatically decrypt files using PGP?
Yes. You can use a pre-built, fully managed workflow step for PGP decryption of files uploaded over your SFTP, FTPS and FTP server endpoints. For more information, refer to managed workflows documentation and this blog post on encrypting and decrypting files using PGP.
Can I preserve the originally uploaded file for records retention?
Yes. Using managed workflows, you can create multiple copies of the original file while preserving the original file for records retention.
Can I use managed workflows to dynamically route files to user-specific Amazon S3 folders?
Yes. You can utilize username as a variable in workflow copy steps, enabling you to dynamically route files to user-specific folders in Amazon S3. This removes the need to hardcode the destination folder location when copying files and automates creation of user-specific folders in Amazon S3, allowing you to scale your file automation workflows. Read the documentation to learn more.
How do I monitor managed workflows activity?
Refer to the Monitoring section for details on the supported features for logging your managed workflows activity.
I am using AWS Step Functions to orchestrate my file-processing steps. How do AWS Transfer Family managed workflows differ from my current AWS Step Functions set up?
AWS Step Functions is a serverless orchestration service that lets you combine AWS Lambda with other services to define the execution of business applications in simple steps. To perform file-processing steps using AWS Step Functions, you use AWS Lambda functions with Amazon S3’s event triggers to assemble your own workflows. Managed workflows provide a framework to easily orchestrate a linear sequence of processing steps and differ from existing solutions in the following ways: 1) you can granularly define workflows to be executed only on full file uploads, as well as workflows to be executed only on partial file uploads, 2) workflows can be triggered automatically for S3 as well as EFS (which doesn’t offer post-upload events), 3) workflows provide no-code, pre-built options for common file processing such as PGP decryption, and 4) you get end-to-end visibility into your file transfers and processing in CloudWatch logs.
Can I use managed workflows to customize file delivery notifications?
Yes. Refer to this blog post on using managed workflows for file delivery notifications.
Can managed workflows be triggered on partial uploads?
Yes. You can define separate workflows to be triggered on complete file uploads and on partial file uploads.
Which Transfer Family actions are not supported by Managed Workflows?
Currently, managed workflows can only be triggered for files uploaded over your SFTP, FTPS and FTP server endpoints and process one file per execution. Managed Workflows are not supported for messages exchanged over AS2, for file downloads over your server endpoints and for files transferred using SFTP connectors.
Can I trigger workflow actions on user downloads?
No. Processing can be invoked only on file arrival using the inbound endpoint.
Can I trigger the same workflow on batches of files in a session?
No. Workflows currently process one file per execution.
Can I trigger managed workflows granularly based on which of my users uploaded a file?
No. Managed workflows cannot be invoked on a granular, per-user basis. You can define conditional file processing logic based on which user uploaded the file using the file transfer event notifications published in Amazon EventBridge.
Amazon S3 access
How does AWS Transfer Family communicate with Amazon S3?
The data transfer between AWS Transfer Family servers and Amazon S3 happens over internal AWS networks and doesn’t traverse the public internet. Because the traffic never leaves the AWS network, you do not need AWS PrivateLink for data transferred from the AWS Transfer Family server to Amazon S3; the service neither requires nor can use AWS PrivateLink endpoints to communicate with storage services. This assumes that the AWS storage service and the Transfer Family server are in the same Region.
Why do I need to provide an AWS IAM Role and how is it used?
AWS IAM is used to determine the level of access you want to provide your users. This includes the operations you want to enable on their client and which Amazon S3 buckets they have access to – whether it’s the entire bucket or portions of it.
Why do I need to provide home directory information and how is it used?
The home directory you set up for your user determines their login directory. This would be the directory path that your user’s client will place them in as soon as they are successfully authenticated into the server. You will need to ensure that the IAM Role supplied provides user access to the home directory.
I have hundreds of users who have similar access settings but to different portions of my bucket. Can I set them up using the same IAM Role and policy to enable their access?
Yes. You can assign a single IAM Role to all your users and use logical directory mappings that specify which absolute Amazon S3 bucket paths you want to make visible to your end users and how these paths are presented to them by their clients. Visit this blog on how to Simplify Your AWS SFTP/FTPS/FTP Structure with Chroot and Logical Directories.
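As a sketch, a logical directory mapping might look like the following: each Entry is the path the user sees in their client, each Target is the absolute bucket path it maps to, and ${transfer:UserName} is substituted per user. Bucket and folder names here are placeholders.

```python
# Logical directory mappings: Entry is the virtual path presented to the
# user; Target is the absolute S3 path behind it. Bucket names are
# placeholders; ${transfer:UserName} is substituted per user by
# Transfer Family.
home_directory_mappings = [
    {"Entry": "/inbox", "Target": "/my-bucket/users/${transfer:UserName}/inbox"},
    {"Entry": "/shared", "Target": "/my-bucket/shared"},
]

# With the user's HomeDirectoryType set to LOGICAL, clients see only
# /inbox and /shared, regardless of the real bucket layout behind them.
for m in home_directory_mappings:
    print(m["Entry"], "->", m["Target"])
```

Because the substitution happens per session, every user gets the same mapping definition but lands in their own prefix, which is what makes a single shared IAM role workable.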
How are files stored in my Amazon S3 bucket transferred using AWS Transfer?
Files transferred over the supported protocols are stored as objects in your Amazon S3 bucket, and there is a one-to-one mapping between files and objects enabling native access to these objects using AWS services for processing or analytics.
How are Amazon S3 objects stored in my bucket presented to my users?
After successful authentication, based on your users’ credentials, the service presents Amazon S3 objects and folders as files and directories to your users’ transfer applications.
What file operations are supported? What operations are not supported?
Common commands to create, read, update, and delete files and directories are supported. Files are stored as individual objects in your Amazon S3 bucket. Directories are managed as folder objects in S3, using the same syntax as the S3 console.
Directory rename operations, append operations, changing ownerships, permissions and timestamps, and use of symbolic and hard links are currently not supported.
Can I control which operations my users are allowed to perform?
Yes. You can enable/disable file operations using the AWS IAM role you have mapped to their username. Refer to the documentation on Creating IAM Policies and Roles to control your end users access
Can I provide my end users access to more than one Amazon S3 bucket?
Yes. The buckets your user can access are determined by the AWS IAM Role, and the optional scope-down policy, you assign for that user. You can only use a single bucket as the home directory for the user.
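A scope-down (session) policy sketch that confines each user to their own home folder while sharing one IAM role is shown below. The ${transfer:HomeBucket}, ${transfer:HomeFolder}, and ${transfer:HomeDirectory} policy variables are substituted by Transfer Family per session; the overall statement layout is illustrative.

```python
import json

# Session ("scope-down") policy sketch: every user shares one IAM role
# but is confined to their home folder. The transfer:* variables are
# filled in by Transfer Family when the user's session starts.
session_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListHomeFolder",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::${transfer:HomeBucket}",
            "Condition": {
                "StringLike": {"s3:prefix": ["${transfer:HomeFolder}/*"]}
            },
        },
        {
            "Sid": "HomeFolderObjectAccess",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::${transfer:HomeDirectory}/*",
        },
    ],
}

print(json.dumps(session_policy, indent=2))
```

The effective permissions are the intersection of this session policy and the role’s own policy, so the role can stay broad while each session stays narrow.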
Can I use S3 Access Points with AWS Transfer Family to simplify user access to shared datasets?
Yes. You can use S3 Access Point aliases with AWS Transfer Family to provide granular access to a large set of data without having to manage a single bucket policy. S3 Access Point aliases combined with AWS Transfer Family logical directories enable you to create fine-grained access controls for different applications, teams, and departments, while reducing the overhead of managing bucket policies. To learn more and get started, visit the blog post on enhancing data access control with AWS Transfer Family and Amazon S3 Access Points.
Can I create a server using AWS Account A and map my users to Amazon S3 buckets owned by AWS Account B?
Yes. You can use the CLI and API to set up cross account access between your server and the buckets you want to use for storing files transferred over the supported protocols. The Console drop down will only list buckets in Account A. Additionally, you’d need to make sure the role being assigned to the user belongs to Account A.
Can I automate processing of a file once it has been uploaded to Amazon S3?
Yes. AWS Transfer Family publishes event notifications in Amazon EventBridge upon completion of a file transfer operation, and you can use these events to automate post upload processing of your files. Alternatively, when all your uploaded files need to be processed using the same file processing steps without any conditional logic, you can use AWS Transfer Family managed workflows to define a linear sequence of common file processing steps which are auto-invoked for each file that your users upload over your SFTP, FTPS or FTP server endpoints.
How do Amazon S3 event notifications differ from AWS Transfer Family service events in Amazon EventBridge and what should I use to trigger post upload processing of files?
Amazon S3 can publish event notifications for any new object created in your bucket. AWS Transfer Family, on the other hand, publishes event notifications upon successful or failed completion of each file transfer operation. Transfer Family events differ from Amazon S3 event notifications in the following ways: 1) you have granular control in defining post-upload processing for full file uploads vs. partial file uploads, 2) Transfer Family events are published for file uploads to both S3 and EFS, and 3) events generated by Transfer Family contain operational information such as the username of the sender, server-id, transfer status, etc., and allow you to define file processing granularly, based on conditional logic over these attributes.
Can I customize rules for processing based on the user uploading the file?
Yes. When your user uploads a file, the username and the id of the server used for the upload are stored as part of the associated S3 object’s metadata. Refer to the documentation on the information you can use for post-upload processing. The end-user information is also available in the automatic file-upload event notification published by AWS Transfer Family in Amazon EventBridge. You can use this information to orchestrate granular post-upload processing of your files based on the user.
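A sketch of per-user routing driven by the EventBridge event is below. The username detail attribute follows the operational information this FAQ says is included in the event; the exact field names, the partner- naming convention, and the pipeline names are illustrative assumptions.

```python
def route_upload(event: dict) -> str:
    """Pick a processing branch from a Transfer Family file-upload event.

    The 'username' detail attribute follows the operational information
    listed in this FAQ; confirm exact field names against the Transfer
    Family EventBridge documentation. Pipeline names are placeholders.
    """
    username = event.get("detail", {}).get("username", "")
    if username.startswith("partner-"):
        return "edi-pipeline"        # hand off partner uploads to B2B processing
    if username == "payroll-batch":
        return "payroll-pipeline"    # internal batch account
    return "default-pipeline"

# Abridged, illustrative event shape:
sample_event = {"source": "aws.transfer", "detail": {"username": "partner-acme"}}
assert route_upload(sample_event) == "edi-pipeline"
```

In practice this function would be the body of a Lambda target behind an EventBridge rule matching Transfer Family upload events.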
It currently takes minutes for my end users to be able to see their S3 directories. Is there a way this can be accelerated?
Yes. You can optimize your S3 directory listing so your end users can enjoy accelerated listing of their directory – from minutes to seconds. If you are creating a new server through the console after 11/17/2023, your server will have optimized S3 directory listing enabled by default if you are using Amazon S3 as your storage. This can be toggled on or off at any time. Turning this feature off restores your S3 directory listing to default performance. If you are using CloudFormation, CLI, or API to create a server, optimized S3 directory listing is disabled by default, but can be enabled at any time. Refer to the documentation on how to enable optimized S3 directory listing.
Amazon EFS access
How do I set up my EFS file system to work with AWS Transfer Family?
Prior to setting up AWS Transfer Family to work with an Amazon EFS file system, you will need to set up ownership of files and folders using the same POSIX identities (user id/group id) you plan to assign to your AWS Transfer Family users. Additionally, if you are accessing file systems in a different account, resource policies must also be configured on your file system to enable cross account access. Refer to this blog post for step-by-step instructions on using AWS Transfer Family with EFS.
How does AWS Transfer Family communicate with Amazon EFS?
The data transfer between AWS Transfer Family servers and Amazon EFS happens over internal AWS networks and doesn’t traverse the public internet. Because the traffic never leaves the AWS network, you do not need AWS PrivateLink for data transferred from the AWS Transfer Family server to Amazon EFS; the service neither requires nor can use AWS PrivateLink endpoints to communicate with storage services. This assumes that the AWS storage service and the Transfer Family server are in the same Region.
How do I provide access to my users to upload/download files to/from my file systems?
Amazon EFS uses POSIX IDs, which consist of an operating system user id, group id, and secondary group ids, to control access to a file system. When setting up your user in the AWS Transfer Family console/CLI/API, you will need to specify the username, the user’s POSIX configuration, and an IAM role to access the EFS file system. You will also need to specify an EFS file system id and, optionally, a directory within that file system as your user’s landing directory. When your AWS Transfer Family user authenticates successfully using their file transfer client, they will be placed directly within the specified home directory, or the root of the specified EFS file system. Their operating system POSIX id will be applied to all requests made through their file transfer clients. As an EFS administrator, you will need to make sure the files and directories you want your AWS Transfer Family users to access are owned by their corresponding POSIX ids in your EFS file system. Refer to the documentation to learn more about configuring ownership of sub-directories in EFS. Note that Transfer Family does not support access points if you are using Amazon EFS for storage.
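As a sketch, the per-user settings described above take the following shape when creating a user (the role ARN, file system id, and POSIX ids are placeholders):

```python
# Per-user settings for an EFS-backed Transfer Family user, in the shape
# accepted by the CreateUser API. All identifiers are placeholders.
efs_user = {
    "UserName": "alice",
    "Role": "arn:aws:iam::123456789012:role/transfer-efs-access",
    "HomeDirectoryType": "PATH",
    # fs-0123456789abcdef0 is a placeholder EFS file system id; the path
    # after it becomes the user's landing directory.
    "HomeDirectory": "/fs-0123456789abcdef0/alice",
    # POSIX identity applied to every request the user makes; files they
    # touch must be owned by, or accessible to, these ids in EFS.
    "PosixProfile": {"Uid": 1001, "Gid": 1001, "SecondaryGids": [2000]},
}

assert efs_user["PosixProfile"]["Uid"] == 1001
```

The Uid/Gid pair here must line up with the ownership you configured on the EFS side, which is why the FAQ stresses setting up file ownership before creating the Transfer Family users.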
How are files transferred over the protocols stored in my Amazon EFS file systems?
Files transferred over the enabled protocols are stored directly in your Amazon EFS file systems and will be accessible via a standard file system interface or from AWS services that can access Amazon EFS file systems.
What file operations are supported over the protocols when using Amazon S3 and Amazon EFS?
SFTP/FTPS/FTP commands to create, read, update, and delete files, directories, and symbolic links are supported. Refer to the table below for the commands supported for EFS as well as S3.
Command | Amazon S3 | Amazon EFS
cd | Supported | Supported
ls/dir | Supported | Supported
pwd | Supported | Supported
put | Supported | Supported
get | Supported | Supported, including resolving symlinks and hardlinks
rename | Supported¹ | Supported¹
chown | Not supported | Supported²
chmod | Not supported | Supported²
chgrp | Not supported | Supported³
ln -s/symlink | Not supported | Supported
mkdir | Supported | Supported
rm/delete | Supported | Supported
rmdir | Supported⁴ | Supported
chmtime | Not supported | Supported
1. Only file renames are supported. Directory renames and renaming a file to overwrite an existing file are not supported.
2. Only root (i.e., users with uid=0) can change ownership and permissions of files and directories.
3. Supported for root (uid=0), and for the file’s owner, who can only change a file’s group to one of their secondary groups.
4. Supported for non-empty folders only.
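Footnotes 2 and 3 can be sketched as small predicates. This is an illustrative model of the stated rules, not the service’s actual implementation:

```python
def can_chown(requester_uid: int) -> bool:
    # Footnote 2: only root (uid=0) may change ownership or permissions.
    return requester_uid == 0

def can_chgrp(requester_uid: int, file_owner_uid: int, new_gid: int,
              requester_secondary_gids: list) -> bool:
    # Footnote 3: root may always change the group; the file's owner may
    # only change it to one of their own secondary groups.
    if requester_uid == 0:
        return True
    return requester_uid == file_owner_uid and new_gid in requester_secondary_gids

print(can_chown(0))                                # root may chown
print(can_chgrp(1001, 1001, 2001, [2001, 2002]))   # owner, secondary group: allowed
print(can_chgrp(1001, 1001, 3000, [2001, 2002]))   # owner, foreign group: denied
```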
How can I control which files and folders my users have access to and which operations they are allowed to and not allowed to perform?
The IAM policy you supply for your AWS Transfer Family user determines whether they have read-only, read-write, or root access to your file system. Additionally, as a file system administrator, you can set up ownership of, and grant access to, files and directories within your file system using their user ID and group ID. This applies whether users are stored within the service (service managed) or within your identity management system (“BYO Auth”).
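As a sketch, an IAM policy granting read-write EFS access might look like the following. The file system ARN is a placeholder; the actions shown are the standard EFS client actions (dropping `ClientWrite` yields read-only access, and adding `elasticfilesystem:ClientRootAccess` grants root access):

```python
import json

# Illustrative read-write policy for a Transfer Family user's IAM role.
# The Resource ARN is a hypothetical placeholder.
read_write_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "EfsReadWrite",
        "Effect": "Allow",
        "Action": [
            "elasticfilesystem:ClientMount",   # mount / read
            "elasticfilesystem:ClientWrite"    # write
        ],
        "Resource": "arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0123456789abcdef0"
    }]
}

print(json.dumps(read_write_policy, indent=2))
```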
Can I restrict each of my users to access different directories within my file system and only access files within those directories?
Yes. When you set up your user, you can specify different file systems and directories for each of your users. On successful authentication, EFS will enforce this directory for every file system request made using the enabled protocols.
Can I hide the name of the file system from being exposed to my user?
Yes. Using AWS Transfer Family logical directory mappings, you can restrict your end users’ view of directories in your file systems by mapping absolute paths to end user visible path names. This also includes being able to “chroot” your user to their designated home directory.
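A logical directory mapping that “chroots” a user can be sketched as follows; the single `/` entry makes the mapped EFS path appear as the user’s root, hiding the file system ID. The file system ID and path are placeholders:

```python
import json

# Illustrative logical home directory configuration for a Transfer Family user.
home_directory_config = {
    "HomeDirectoryType": "LOGICAL",
    "HomeDirectoryMappings": [
        # The user sees "/" instead of the absolute EFS path.
        {"Entry": "/", "Target": "/fs-0123456789abcdef0/home/partner-upload"}
    ]
}

print(json.dumps(home_directory_config, indent=2))
```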
Are symbolic links supported?
Yes. If symbolic links are present in directories accessible to your user and your user tries to access them, each link will be resolved to its target. Symbolic links are not supported when you use logical directory mappings to set up your users’ access.
Can I provide an individual SFTP/FTPS/FTP user access to more than one file system?
Yes. When you set up an AWS Transfer Family user, you can specify one or more file systems in the IAM policy you supply as part of your user set up in order to grant access to multiple file systems.
What operating systems can I use to access my EFS file systems via AWS Transfer Family?
You can use clients and applications built for Microsoft Windows, Linux, macOS, or any operating system that supports SFTP/FTPS/FTP to upload and access files stored in your EFS file systems. Simply configure the server and user with the appropriate permissions to the EFS file system to access the file system across all operating systems.
How do I automate processing of a file once it has been uploaded to EFS?
You have two options: 1) AWS Transfer Family publishes event notifications in Amazon EventBridge upon completion of a file transfer operation, and you can use these events to automate post upload processing of your files, and 2) When all your uploaded files need to be processed using the same file processing steps without any conditional logic, you can use AWS Transfer Family managed workflows to define a linear sequence of common file processing steps that are applied for each file uploaded over your SFTP, FTPS or FTP server endpoints.
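For the EventBridge option, post-upload processing starts from a rule whose event pattern matches Transfer Family events. The pattern below is a sketch; the `source` value is the standard `aws.transfer`, but the exact detail-type string is an assumption you should confirm against the Transfer Family EventBridge documentation:

```python
import json

# Illustrative EventBridge event pattern for Transfer Family upload events.
# The detail-type string below is assumed, not confirmed.
event_pattern = {
    "source": ["aws.transfer"],
    "detail-type": ["SFTP Server File Upload Completed"]
}

print(json.dumps(event_pattern))
```

A rule with this pattern can target a Lambda function, Step Functions state machine, or SNS topic to process each uploaded file.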
How do I know which user uploaded a file to EFS?
For new files, the POSIX user id associated with the user uploading the file will be set as the owner of the file in your EFS file system. Additionally, you can use Amazon CloudWatch to track your users’ activity for file creation, update, delete, and read operations. Visit the documentation to learn more on how to enable Amazon CloudWatch logging.
Can I use AWS Transfer Family to access a file system in another account?
Yes. You can use the CLI and API to set up cross-account access between your AWS Transfer Family resources and EFS file systems. The AWS Transfer Family console will only list file systems in the same account. Additionally, you will need to make sure the IAM role assigned to the user to access the file system belongs to the account that owns the Transfer Family server.
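On the file system side, cross-account access is typically granted through an EFS file system policy. The sketch below is illustrative only; the principal ARN is a hypothetical role in the account that owns the Transfer Family server:

```python
import json

# Illustrative EFS file system policy allowing a Transfer Family access role
# in another account to mount and write. The ARN is a placeholder.
file_system_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/TransferEfsAccessRole"},
        "Action": [
            "elasticfilesystem:ClientMount",
            "elasticfilesystem:ClientWrite"
        ]
    }]
}

print(json.dumps(file_system_policy, indent=2))
```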
What happens if my EFS file system does not have the right policies enabled for cross account access?
If you set up an AWS Transfer Family server to access an EFS file system in another account that is not enabled for cross-account access, your SFTP/FTP/FTPS users will be denied access to the file system. If you have CloudWatch logging enabled on your server, cross-account access errors will be logged to your CloudWatch Logs.
Can I use AWS Transfer Family to access an EFS file system in a different AWS Region?
No. You can use AWS Transfer Family to access EFS file systems in the same AWS Region only.
Can I use AWS Transfer Family with all EFS storage classes?
Yes. You can use AWS Transfer Family to write files into EFS and configure EFS Lifecycle Management to migrate files that have not been accessed for a set period of time to the Infrequent Access (IA) storage class.
Can my applications use SFTP/FTPS/FTP to concurrently read and write data from/to the same file?
Yes. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of NFS/SFTP/FTPS/FTP clients.
Will my EFS burst credits be consumed when I access my file systems using AWS Transfer Family?
Yes. Accessing your EFS file systems using your AWS Transfer Family servers will consume your EFS burst credits regardless of the throughput mode. Refer to the documentation on available performance and throughput modes and view some useful performance tips.
Security and compliance
Which protocols should I use for securing data while in-transit over a public network?
Either SFTP or FTPS should be used for secure transfers over public networks. Due to the underlying security of the protocols based on SSH and TLS cryptographic algorithms, data and commands are transferred through a secure, encrypted channel.
What are my options to encrypt data at rest?
You can choose to encrypt files stored in your bucket using Amazon S3 Server-Side Encryption (SSE-S3) or AWS KMS keys (SSE-KMS). For files stored in EFS, you can choose an AWS managed or customer managed key for encryption of files at rest. Refer to the documentation for more details on options for at-rest encryption of file data and metadata using Amazon EFS.
Which compliance programs does AWS Transfer Family support?
AWS Transfer Family is compliant with PCI-DSS, GDPR, FedRAMP, and SOC 1, 2, and 3. The service is also HIPAA eligible. Learn more about services in scope by compliance programs .
Is AWS Transfer Family FISMA compliant?
AWS East/West and GovCloud (US) Regions are FISMA compliant. When AWS Transfer Family is authorized for FedRAMP, it is FISMA compliant within the respective Regions. This compliance is demonstrated through FedRAMP authorization of these two Regions to FedRAMP Moderate and FedRAMP High. We demonstrate compliance through annual assessments and by documenting compliance with in-scope NIST SP 800-53 controls within our System Security Plans. Templates are available on AWS Artifact, along with our customer responsibility matrix (CRM), which details at a granular level our respective responsibilities to meet these NIST controls as required by FedRAMP. AWS Artifact is available through the Management Console for both East/West and GovCloud accounts. If you have any further questions on this topic, please consult the console.
How does the service ensure integrity of uploaded files?
Files uploaded through the service are verified by comparing the file’s pre- and post-upload MD5 checksums.
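The comparison the service performs can be illustrated with a minimal sketch using Python’s standard library:

```python
import hashlib

def md5_hex(data: bytes) -> str:
    # Compute the hex MD5 digest of a file's contents.
    return hashlib.md5(data).hexdigest()

# Simulated integrity check: the checksum computed before upload must match
# the checksum recomputed on the stored object after upload.
payload = b"example file contents"
pre_upload = md5_hex(payload)
post_upload = md5_hex(payload)

print(pre_upload == post_upload)   # True when the transfer was intact
```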
What are my options to encrypt/decrypt files in transit?
You can use AWS Transfer Family managed workflows to automatically decrypt files using PGP keys when they are uploaded to your AWS Transfer Family SFTP, FTPS, or FTP server endpoints. For more information, refer to the managed workflows documentation. Alternatively, you can subscribe to AWS Transfer Family event notifications published in Amazon EventBridge to orchestrate granular and event-driven processing of transferred files using your own encryption/decryption logic.
Monitoring
How can I monitor my end users and their file transfer activities?
You can monitor your end users and their file transfer activities using JSON formatted logs that are delivered to Amazon CloudWatch. Within CloudWatch, you can parse and query your logs using CloudWatch Logs Insights, which automatically discovers JSON formatted fields. You can also track top users, total number of unique users, and their ongoing usage with CloudWatch Contributor Insights. We also provide pre-built CloudWatch metrics and graphs that are accessible within the AWS Transfer Family Management Console. Visit the documentation to learn more.
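Because the logs are structured JSON, per-user aggregations like the ones Logs Insights or Contributor Insights produce are straightforward. The sketch below uses hypothetical log lines with illustrative field names, not the service’s exact schema:

```python
import json

# Hypothetical JSON-formatted Transfer Family log lines (field names are
# illustrative, not the service's exact schema).
log_lines = [
    '{"user": "alice", "activity-type": "OPEN", "path": "/uploads/a.csv"}',
    '{"user": "bob", "activity-type": "DELETE", "path": "/uploads/old.csv"}',
    '{"user": "alice", "activity-type": "CLOSE", "path": "/uploads/a.csv"}',
]

# Count operations per user, the kind of aggregation Contributor Insights
# performs to surface top users.
counts = {}
for line in log_lines:
    record = json.loads(line)
    counts[record["user"]] = counts.get(record["user"], 0) + 1

print(counts)   # {'alice': 2, 'bob': 1}
```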
Can I create consolidated metrics to track users and file transfer activity across multiple servers?
Yes. You can combine log streams from multiple AWS Transfer Family servers into a single CloudWatch log group. This allows you to create consolidated log metrics and visualizations, which can be added to CloudWatch dashboards for tracking server usage and performance.
How do I monitor my workflows?
Workflow executions can be monitored using AWS CloudWatch metrics such as the total number of workflow executions, successful executions, and failed executions. Using the AWS Management Console, you can also search and view the real-time status of in-progress workflow executions. Use CloudWatch Logs to get detailed logging of workflow executions.
How are AWS Transfer Family logs formatted?
AWS Transfer Family delivers logs in JSON format across all resources – including servers, connectors, and workflows – and all protocols – including SFTP, FTPS, FTP, and AS2.
How do I receive notifications for file transfers using AWS Transfer Family?
You can use Transfer Family’s managed workflows to receive notifications for files uploaded over your SFTP, FTPS, and FTP server endpoints. Refer to this blog post. Alternatively, you can subscribe to AWS Transfer Family events in Amazon EventBridge to receive notifications using Amazon Simple Notification Service (SNS).
Can I send a notification if a workflow file validation check fails?
Yes. If a file validation check fails against preconfigured workflow validation steps, you can use the exception handler to invoke your monitoring system or alert team members via an Amazon SNS topic.
Billing
How am I billed for use of the service?
You are billed on an hourly basis for each of the protocols enabled, from the time you create and configure your server endpoint until the time you delete it. You are also billed based on the amount of data uploaded and downloaded over your SFTP, FTPS, or FTP servers, the number of messages exchanged over AS2, and the amount of data processed using the Decrypt workflow step. When using SFTP connectors, you are billed for the amount of data transferred and the number of connector calls. Refer to the pricing page for more details.
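The hourly-plus-data model above lends itself to a back-of-envelope estimate. The rates below are placeholders for illustration only; always use the current AWS Transfer Family pricing page for real figures:

```python
# Assumed, illustrative rates -- NOT actual AWS pricing.
HOURLY_ENDPOINT_RATE = 0.30   # USD per enabled protocol per hour (placeholder)
PER_GB_TRANSFER_RATE = 0.04   # USD per GB uploaded/downloaded (placeholder)

hours_in_month = 730
protocols_enabled = 2         # e.g. SFTP + FTPS on one endpoint
gb_transferred = 100

# Each enabled protocol accrues the hourly charge independently.
endpoint_cost = HOURLY_ENDPOINT_RATE * hours_in_month * protocols_enabled
transfer_cost = PER_GB_TRANSFER_RATE * gb_transferred

print(round(endpoint_cost + transfer_cost, 2))   # 442.0
```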
Will my billing be different if I use the same server endpoint for multiple protocols or use different endpoints for each protocol?
No. You are billed on an hourly basis for each of the protocols you have enabled and for the amount of data transferred through each of the protocols, regardless of whether the same endpoint is enabled for multiple protocols or you use different endpoints for each protocol.
I have stopped my server. Will I be billed while it is stopped?
Yes. Stopping the server, by using the console, or by running the “stop-server” CLI command or the “StopServer” API command, does not impact billing. You are billed on an hourly basis from the time you create your server endpoint and configure access to it over one or more protocols until the time you delete it.
How am I billed for using managed workflows?
You are billed for the Decrypt workflow step based on the amount of data you decrypt using PGP keys. There is no additional charge for using managed workflows. Depending on your workflow configuration, you are also billed for use of Amazon S3, Amazon EFS, AWS Secrets Manager, and AWS Lambda.
Am I billed hourly for using SFTP connectors?
No. There is no hourly billing for SFTP connectors. For more information on SFTP connector pricing, refer to the pricing page.
AWS Transfer Family provides a fully managed service, reducing your operational costs to run file transfer services.
Get started building your SFTP, FTPS, and FTP services in the AWS Management Console.