AWS Developer Tools Blog

AWS SDK for Java 2.0 – Developer Preview

We’re pleased to announce the Developer Preview of the AWS SDK for Java 2.0. The 2.0 version of the SDK is a major rewrite of the 1.11.x code base. It’s built on top of Java 8 and adds several frequently requested features, such as support for non-blocking I/O and the ability to use a different HTTP implementation at runtime. In addition to these new features, many aspects of the SDK have been refactored and cleaned up with a strong focus on consistency, immutability, and ease of use. The Developer Preview is your chance to influence the direction of the AWS SDK for Java 2.0. Tell us what you like, and tell us what you don’t. Your feedback matters to us. You’ll find details on the various ways to give feedback at the bottom of this post.

Although we’re excited about the AWS SDK for Java 2.0 Developer Preview, we also want to reassure customers that we’re not dropping support for the 1.x line of the SDK any time soon. We know that many customers depend on the 1.x versions of the SDK, and we will continue to support them. Version 2.0 can also run alongside version 1.x in the same JVM, which allows you to migrate gradually. As we get closer to general availability for version 2.0, we’ll share a more detailed plan for how we’ll support the 1.x line.

Getting started

Let’s walk through setting up a project that depends on the SDK and makes a simple service call. The following steps use Maven as an example, but you can use any build system that supports Maven Central as an artifact source (Gradle, sbt, etc.). These steps assume you have Maven and a Java 8 JDK already installed. See the developer guide for a more detailed tutorial on getting started.

    1. Create a new Java 8 Maven project.
    2. Open the pom.xml file, and add a dependency on the Amazon DynamoDB module (see services/pom.xml for a full list of supported services).
      <dependency>
          <groupId>software.amazon.awssdk</groupId>
          <artifactId>dynamodb</artifactId>
          <version>2.0.0-preview-1</version>
      </dependency>
    3. Create a new class with a main method, and create a DynamoDB service client using the client builder.
      package com.example;
      
      import software.amazon.awssdk.auth.ProfileCredentialsProvider;
      import software.amazon.awssdk.regions.Region;
      import software.amazon.awssdk.services.dynamodb.DynamoDBClient;
      import software.amazon.awssdk.services.dynamodb.model.ListTablesRequest;
      
      public class Main {
      
          public static void main(String[] args) {
              // The region and credentials provider are for demonstration purposes. Feel free to use whatever region and credentials
              // are appropriate for you, or load them from the environment (See https://docs.thinkwithwp.com/sdk-for-java/v2/developer-guide/setup-credentials.html)
              DynamoDBClient client = DynamoDBClient.builder()
                  .region(Region.US_EAST_1)
                  .credentialsProvider(ProfileCredentialsProvider.builder()
                                                                 .profileName("my-profile")
                                                                 .build())
                  .build();
          }
      }
    4. Make a service request and do something with the response (you’ll also need to import software.amazon.awssdk.services.dynamodb.model.ListTablesResponse).
      ListTablesResponse response = client.listTables(ListTablesRequest.builder()
                                                                       .limit(5)
                                                                       .build());
      response.tableNames().forEach(System.out::println);
      

New features

Non-blocking I/O

The SDK now supports truly non-blocking I/O. The 1.11.x version of the SDK already has async variants of service clients. However, they are just a wrapper around a thread pool and the blocking sync client, so they don’t provide the benefits of non-blocking I/O (high concurrency with very few threads). Because of the limitations and poor resource utilization of the thread-per-connection model, many customers asked for non-blocking I/O support, and we’re pleased to announce first-class support for it in our async clients. Under the hood, we use an HTTP client built on top of Netty to make the non-blocking HTTP calls.

For non-streaming operations, the interfaces are nearly identical to those of the sync client. The only difference is that a CompletableFuture containing the response is returned immediately instead of blocking the thread until the response is available. Exceptions are delivered by completing the future exceptionally, and you can handle them using the appropriate callbacks on the future (see https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html). Here’s an example of a simple service call using the async (non-blocking) client.

// Creates a default async client with credentials and regions loaded from the environment
DynamoDBAsyncClient client = DynamoDBAsyncClient.create();
CompletableFuture<ListTablesResponse> response = client.listTables(ListTablesRequest.builder()
                                                                                    .limit(5)
                                                                                    .build());
// Map the response to another CompletableFuture containing just the table names
CompletableFuture<List<String>> tableNames = response.thenApply(ListTablesResponse::tableNames);
// When future is complete (either successfully or in error) handle the response
tableNames.whenComplete((tables, err) -> {
    if (tables != null) {
        tables.forEach(System.out::println);
    } else {
        // Handle error
        err.printStackTrace();
    }
});

Streaming operations are a bit different so that they can be fully non-blocking. For streaming inputs (like the Amazon S3 PutObject operation), you must supply an AsyncRequestProvider that can produce content incrementally. To support asynchronous backpressure (and prevent out-of-memory errors if the SDK can’t send data as fast as it’s being produced), the SDK uses a reactive pull model based on the well-known Reactive Streams interfaces. In fact, the request provider is simply a Publisher of ByteBuffer chunks. The SDK calls subscribe on that Publisher and requests chunks of data as its buffer allows.
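
To make the pull model concrete, here’s a minimal sketch of a custom request provider that publishes a single in-memory chunk. It assumes AsyncRequestProvider extends the Reactive Streams Publisher<ByteBuffer> and exposes a contentLength() method, as described above; treat it as an illustration of the contract rather than a production implementation.

// Illustrative only: publishes one in-memory chunk when the SDK requests data.
// Assumes AsyncRequestProvider extends Publisher<ByteBuffer> and declares contentLength();
// no concurrency or cancellation handling beyond the bare minimum.
AsyncRequestProvider provider = new AsyncRequestProvider() {
    private final byte[] data = "hello from the async client".getBytes(StandardCharsets.UTF_8);

    @Override
    public long contentLength() {
        return data.length;
    }

    @Override
    public void subscribe(Subscriber<? super ByteBuffer> subscriber) {
        subscriber.onSubscribe(new Subscription() {
            private boolean completed;

            @Override
            public void request(long n) {
                // Emit the single chunk the first time the SDK asks for data, then complete.
                if (!completed && n > 0) {
                    completed = true;
                    subscriber.onNext(ByteBuffer.wrap(data));
                    subscriber.onComplete();
                }
            }

            @Override
            public void cancel() {
                completed = true;
            }
        });
    }
};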

Here we upload a file asynchronously using the PutObject operation in Amazon S3. We’re using an implementation of AsyncRequestProvider that produces data from a file. It handles backpressure and retries automatically, reducing the burden on the developer. We want to support common implementations and sources of data out of the box, so if you have any suggestions or requests, please let us know.

public static void main(String[] args) {
    S3AsyncClient client = S3AsyncClient.create();
    CompletableFuture<PutObjectResponse> future = client.putObject(
            PutObjectRequest.builder()
                            .bucket(BUCKET)
                            .key(KEY)
                            .build(),
            AsyncRequestProvider.fromFile(Paths.get("myfile.in"))
    );
    future.whenComplete((resp, err) -> {
        try {
            if (resp != null) {
                System.out.println(resp);
            } else {
                // Handle error
                err.printStackTrace();
            }
        } finally {
            // Lets the application shut down. Only close the client when you are completely done with it.
            FunctionalUtils.invokeSafely(client::close);
        }
    });
}

For operations that have a streaming response (such as Amazon S3 GetObject), you must provide an AsyncResponseHandler that processes and transforms the response. This response handler has callback methods for various events in the response lifecycle. It follows the same Reactive Streams model for handling the data. (In this case, however, the roles are reversed: the SDK is the data publisher, and the response handler implementation must subscribe to the publisher and request data from it.) Consult the Javadoc for a more detailed explanation of how to implement AsyncResponseHandler. In the following example, we use a pre-canned implementation that writes the data to a file.

public static void main(String[] args) {
    S3AsyncClient client = S3AsyncClient.create();
    final CompletableFuture<Void> future = client.getObject(
            GetObjectRequest.builder()
                            .bucket(BUCKET)
                            .key(KEY)
                            .build(),
            AsyncResponseHandler.toFile(Paths.get("myfile.out")));
    future.whenComplete((resp, err) -> {
        try {
            if (resp != null) {
                System.out.println(resp);
            } else {
                // Handle error
                err.printStackTrace();
            }
        } finally {
            // Lets the application shut down. Only close the client when you are completely done with it
            FunctionalUtils.invokeSafely(client::close);
        }
    });
}

Pluggable HTTP layer

All earlier 1.x versions of the SDK have a hard dependency on the Apache HTTP client to make HTTP calls. Although this is fine for most customers, some advanced customers wanted to swap out the default HTTP implementation for one that’s better optimized for their runtime environment. The AWS SDK for Java 2.0 now fully supports a pluggable HTTP layer. The SDK continues to ship the Apache-based implementation as the default, but you can remove it and replace it with another implementation that conforms to the appropriate SPI.

The SDK attempts to load an HTTP implementation from the classpath using the ServiceLoader utility. This enables end users to create their own distributions of the SDK with a different default HTTP implementation (by removing the dependency on Apache’s implementation and replacing it with their own). Customers who want to avoid potentially expensive classpath scanning can set the system property software.amazon.awssdk.http.service.impl to explicitly identify the implementation to use. Finally, for customers wanting precise control over how the HTTP client is created and configured, the SDK accepts either an SdkHttpClient instance or SdkHttpClientFactory instance in each service client builder. Passing in an SdkHttpClient enables customers to share a connection pool across multiple service clients for better resource utilization.
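
For example, to skip classpath scanning you might set that system property before any client is created. The class name below is a placeholder for the fully qualified name of your implementation’s service class, not a real SDK class.

// Point the SDK at a specific HTTP implementation instead of scanning the classpath.
// Set this before creating any client (or pass it as a -D option on the java command line).
// "com.example.http.MyHttpService" is a placeholder for your implementation's service class.
System.setProperty("software.amazon.awssdk.http.service.impl", "com.example.http.MyHttpService");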

Configuring HTTP settings

Due to the pluggable nature of the HTTP layer, customers who want to configure HTTP-specific settings, such as socket timeouts or proxy settings, must declare a dependency on the underlying implementation and configure the client through implementation-provided interfaces. The following example shows how to configure the default Apache implementation.

  1. Declare a dependency on the Apache implementation in your project.
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>aws-http-client-apache</artifactId>
        <version>2.0.0-preview-1</version>
    </dependency>
  2. Create and configure the Apache client factory.
    ApacheSdkHttpClientFactory apacheClientFactory = 
        ApacheSdkHttpClientFactory.builder()
                                  .socketTimeout(Duration.ofSeconds(10))
                                  .connectionTimeout(Duration.ofMillis(750))
                                  .build();
  3. Use the Apache client factory to create an SDK service client.
    DynamoDBClient client =
            DynamoDBClient.builder()
                          .httpConfiguration(ClientHttpConfiguration.builder()
                                                                    .httpClientFactory(apacheClientFactory)
                                                                    .build())
                          .build();

Sharing HTTP clients

The SDK now supports sharing HTTP client instances across multiple service clients. This allows you to reuse the same connection pool for better resource utilization. To share a client across multiple SDK service clients, you must depend on a specific implementation and create an HTTP client factory for that implementation, as shown above.

  1. Create an SdkHttpClient instance using the HTTP client factory we created earlier (this requires only steps 1 and 2 above).
    SdkHttpClient sharedClient = apacheClientFactory.createHttpClient();
  2. Register that HTTP client instance with multiple SDK service clients. (You can even share clients across multiple services.)
    DynamoDBClient clientOne =
            DynamoDBClient.builder()
                          .httpConfiguration(ClientHttpConfiguration.builder()
                                                                    .httpClient(sharedClient)
                                                                    .build())
                          .build();
    DynamoDBClient clientTwo =
            DynamoDBClient.builder()
                          .httpConfiguration(ClientHttpConfiguration.builder()
                                                                    .httpClient(sharedClient)
                                                                    .build())
                          .build();
  3. Because the client is shared, the SDK will not close it when the service client is closed. Be sure to explicitly close it when it’s no longer needed.
    sharedClient.close();

Pluggable Async HTTP

The non-blocking async HTTP client is also pluggable, and you can configure or share it in exactly the same way as the sync client. The interfaces for the client and factory are SdkAsyncHttpClient and SdkAsyncHttpClientFactory, respectively. An implementation built on top of Netty is the default. To use the default Netty implementation, add the following dependency to your pom.xml file.

<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>aws-http-nio-client-netty</artifactId>
    <version>2.0.0-preview-1</version>
</dependency>
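
As a sketch of what implementation-specific configuration might look like on the async side, the following mirrors the sync Apache example above. The NettySdkHttpClientFactory, ClientAsyncHttpConfiguration, and asyncHttpConfiguration names are assumptions based on that symmetry; consult the preview Javadoc for the exact API.

// Hypothetical sketch mirroring the sync Apache example above; verify the class and
// method names (NettySdkHttpClientFactory, ClientAsyncHttpConfiguration,
// asyncHttpConfiguration) against the preview Javadoc before relying on them.
NettySdkHttpClientFactory nettyFactory = NettySdkHttpClientFactory.builder()
                                                                  .build();

DynamoDBAsyncClient asyncClient =
        DynamoDBAsyncClient.builder()
                           .asyncHttpConfiguration(ClientAsyncHttpConfiguration.builder()
                                                                               .httpClientFactory(nettyFactory)
                                                                               .build())
                           .build();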

API changes

We’ve made several public API changes to improve consistency, make the SDK easier to use, strongly enforce immutability for safer concurrent programming, and remove deprecated or confusing APIs. The following are some of the bigger changes included in the AWS SDK for Java 2.0 Developer Preview.

Client Builders

In 1.11.x versions, we recently deprecated all client constructors and all mutable methods on the client in favor of the client builders. In version 2.0, the client builders are now the only way to create a service client. In addition, clients are 100 percent immutable after creation. For a cleaner programming experience, all interaction with the clients is done through interfaces.

To obtain an instance of the builder, you can use a static factory method on the client interface like this.

DynamoDBClient client = DynamoDBClient.builder().build();

If you just want a quick default client that loads the region and credentials from the environment, you can use the following. This will fail if the region or credentials are not properly set up.

DynamoDBClient client = DynamoDBClient.create();

All builders and POJOs in version 2.0 follow a new naming convention for setter methods: there is no set/with prefix, and the setter method is simply the property name. Setter methods return the builder to allow method chaining.

DynamoDBClient client = DynamoDBClient.builder()
                                      .region(Region.US_EAST_1)
                                      .build();

Most of the advanced configuration in 1.11.x versions was HTTP-related. Due to the pluggable nature of the HTTP layer, you now configure these settings via the HTTP implementation directly (see “New features”, earlier in this post). You can change the non-HTTP-related advanced configuration via the overrideConfiguration method.

DynamoDBClient client =
        DynamoDBClient.builder()
                      .overrideConfiguration(
                              ClientOverrideConfiguration.builder()
                                                         .retryPolicy(PredefinedRetryPolicies.NO_RETRY_POLICY)
                                                         .build())
                      .build();

Immutable POJOs

Previously, all request/response POJOs were mutable, which undermined the thread-safety guarantees of the client. In version 2.0, all POJOs are immutable and must be created through a builder.

ListTablesRequest request = ListTablesRequest.builder()
                                             .limit(5)
                                             .build();

You can modify POJOs only by converting the object into a builder, making the modifications, and rebuilding the object. In the example below, originalRequest is unchanged and a new instance of ListTablesRequest is created and returned.

public static ListTablesRequest updatePaginationToken(ListTablesRequest originalRequest, ListTablesResponse response) {
    return originalRequest.toBuilder()
                          .exclusiveStartTableName(response.lastEvaluatedTableName())
                          .build();
}

Due to the immutability of POJOs and the fluent setters, serialization requires some special care. Here’s an example of serializing a request object to JSON using the Jackson library, and deserializing it back into a request object.

ObjectMapper mapper = new ObjectMapper();
ListTablesRequest request = ListTablesRequest.builder()
                                             .limit(5)
                                             .build();
String serialized = mapper.writeValueAsString(request.toBuilder());

ListTablesRequest deserialized = mapper.readValue(serialized, ListTablesRequest.serializableBuilderClass())
                                       .build();

Regions

In 1.11.x versions of the SDK, there were many different classes for configuring regions or accessing region metadata (Region, Regions, s3.Region, RegionUtils, etc.). In version 2.0, these are all collapsed into a single Region class for simplicity and ease of use.

The new Region class looks similar to an enum and has constants for each region.

DynamoDBClient client = DynamoDBClient.builder()
                                      .region(Region.US_EAST_1)
                                      .build();

You can safely create a new region by using the static factory method of. This is useful when the region comes from an external source, such as a configuration file, or when you need to use a region that the SDK doesn’t know about yet.

Region newRegion = Region.of("us-east-42");

You can access metadata about the region (name, partition, or domain) via the RegionMetadata interface.

String domain = RegionMetadata.of(Region.US_EAST_1).getDomain();

You can access region metadata for a service (such as the regions in which that service is supported) via the ServiceMetadata interface.

DynamoDBClient.serviceMetadata().regions().forEach(System.out::println);

Streaming

There are substantial changes in the APIs for streaming operations (such as Amazon S3 GetObject and PutObject) because of the newly added support for non-blocking I/O. Because the programming models for blocking and non-blocking I/O are so radically different, we’ve removed the InputStream from the request/response POJOs. Instead, the sync and async clients have additional parameters on streaming operations to accept streamed content (PutObject) and to process a streamed response (GetObject). We explained the async streaming APIs earlier in this post, so let’s take a look at the sync versions.

In the following example, we’re uploading a file to S3 via the PutObject operation. Notice that we don’t set the content in the PutObjectRequest, but instead provide it as a second parameter to the putObject method. This content is provided using a new class, RequestBody, which has overloads for many common sources of data (File, String, byte array, ByteBuffer, InputStream).

S3Client client = S3Client.create();
client.putObject(PutObjectRequest.builder()
                                 .bucket(BUCKET)
                                 .key(KEY)
                                 .build(),
                 RequestBody.of(Paths.get("myfile.in")));

Next, we download the same object to a file using the GetObject operation. Again, instead of accessing the InputStream from the GetObjectResponse object, you now provide a StreamingResponseHandler implementation to process the response contents. This is a functional interface that receives the unmarshalled GetObjectResponse and the input stream as parameters and returns some transformed value (or Void). That transformed value becomes the return value of the getObject method. The interface also has a couple of convenience static factory methods that create handlers for common situations, such as writing the data to a file or to an OutputStream. We use the file variant below.

S3Client client = S3Client.create();
client.getObject(GetObjectRequest.builder()
                                 .bucket(BUCKET)
                                 .key(KEY)
                                 .build(),
                 StreamingResponseHandler.toFile(Paths.get("myfile.out")));
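
The handler isn’t limited to writing files. Because it returns a transformed value, you can, for example, buffer a small object into memory. The sketch below assumes the handler’s functional method receives the GetObjectResponse and the content InputStream and is permitted to throw checked exceptions, as described above; error handling and large-object concerns are omitted for brevity.

// A sketch of a custom handler that buffers the object body into a String.
// Assumes the functional method receives the unmarshalled GetObjectResponse and the
// content InputStream and may throw checked exceptions; not suitable for large objects.
String contents = client.getObject(GetObjectRequest.builder()
                                                    .bucket(BUCKET)
                                                    .key(KEY)
                                                    .build(),
                                   (response, inputStream) -> {
                                       ByteArrayOutputStream buffer = new ByteArrayOutputStream();
                                       byte[] chunk = new byte[4096];
                                       int read;
                                       while ((read = inputStream.read(chunk)) != -1) {
                                           buffer.write(chunk, 0, read);
                                       }
                                       return new String(buffer.toByteArray(), StandardCharsets.UTF_8);
                                   });
System.out.println(contents);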

S3 client changes

In 1.11.x, the S3 service client is not generated like the rest of the SDK, so it’s somewhat inconsistent with the other service clients in the AWS SDK for Java. It also doesn’t exactly match the service’s API, which can make it confusing to use another SDK’s S3 client after getting used to the Java one. In version 2.0, we generate the S3 client like every other service client. Play around with it and let us know what you think.

Giving feedback and contributing

You can provide feedback to us in several ways. We appreciate both positive and negative feedback.

Public feedback

GitHub issues. Customers who are comfortable giving public feedback can open a GitHub issue in the V2 repo. This is the preferred mechanism for giving feedback so that other customers can engage in the conversation, +1 issues, and so on. Issues you open will be evaluated and included in our roadmap for the GA launch.

Gitter channel. For informal discussion or general feedback, you can join the Gitter chat for the V2 repo. The Gitter channel is also a great place to get help with the Developer Preview, but feel free to open an issue as well.

Private feedback

Those who prefer not to give public feedback can instead email the aws-java-sdk-v2-feedback@amazon.com mailing list. This list is monitored by the AWS SDK for Java team and will not be shared with anyone outside of AWS. An SDK team member may respond to ask for clarification or to acknowledge that your feedback was received and is being evaluated.

Contributing

You can open pull requests for fixes or additions to the AWS SDK for Java 2.0 Developer Preview. All pull requests must be submitted under the Apache 2.0 license and will be reviewed by an SDK team member prior to merging. Accompanying unit tests are appreciated.