Front-End Web & Mobile
Implementing caching for pipeline resolvers in AWS AppSync GraphQL APIs
This article was written by Eric Robertson, SDE Intern, AWS AppSync
AWS AppSync is a fully managed service which allows developers to deploy and interact with a scalable serverless GraphQL API backend on AWS. GraphQL provides a complete description of the API data in a strongly typed system, making it easier to evolve APIs over time and giving clients the power to securely retrieve exactly the data they need.
Developers connect their APIs to data sources using resolvers. A resolver is a function or method that is responsible for populating the data for a field or operation defined in the GraphQL schema. Resolvers provide the runtime to fulfill GraphQL queries, mutations, or subscriptions with the right data from data sources. AppSync provides two types of resolvers:
- Unit resolvers, which run a single invocation against a designated data source.
- Pipeline resolvers, which orchestrate and execute up to 10 unit resolvers (defined as reusable functions) in sequence to return data from multiple data sources in a single API call.
Caching is a strategy to improve the speed of query and data modification operations in your API while making fewer requests to data sources. With AppSync you can optionally provision a dedicated cache for your API’s unit resolvers, speeding up response time and taking load off backend services. Some customers have been able to leverage the simplicity and efficiency of server-side caching in AppSync to decrease database requests by 99%. By default, a simple toggle tells AppSync to build a cache key from all relevant request data and to store response data for up to one hour, with a customizable TTL.
AppSync now supports caching in pipeline resolvers, so customers can take full advantage of a dedicated managed cache instance to enable even faster response times and reduced backend request load on a full pipeline of AppSync resolver functions. By default, AppSync compiles a cache key encapsulating all the relevant data for a request, enabling developers to simply toggle caching for a pipeline and see immediate improvements. For specific cases, developers can instead specify a subset of values to include in cache key generation, decreasing key size and removing unimportant or detrimental values.
How AppSync caching works
For an AppSync API, caching can be configured in one of three ways to suit your needs.
- None: server-side caching is disabled; this is the default behavior for AppSync APIs.
- Full request caching: if the data is not in the cache, it’s retrieved from the data source and populates the cache until the TTL expires. All subsequent requests to your API are then returned from the cache, which means data sources are not contacted directly.
- Per-resolver caching: only API calls requesting data from a specific operation or field defined in a resolver return responses from the cache.
With the expansion to pipeline resolver caching, customers who use full request caching immediately see the benefits, as pipeline resolvers are now included as part of full API caching. For per-resolver caching, customers now see an option in a pipeline resolver’s settings to enable or disable caching for that specific resolver. Caching in AppSync can be configured directly through the AWS Console, AWS CLI, or AWS CloudFormation. Customized cache keys can be configured through the AWS CLI or CloudFormation.
Setting up pipeline resolver caching
Follow the steps below to see pipeline resolver caching in action. We walk through the setup of a new AppSync API with pipeline resolvers, then enable caching accordingly.
In the AWS AppSync console, create a new API by clicking Create API. Select Build from scratch, then click Start. Alternatively, follow the quick start guide here.
Navigate to Caching on the sidebar and create a cache setup for the API:
- Select Per-resolver caching as the cache behavior to allow the selection of specific pipeline resolvers we want cached.
- Select the caching instance type. As there’s a charge for caching, we use a small instance for demonstration purposes; larger instances are more performant.
- Click Create cache; the instance should be ready a few moments later.
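If you prefer to script this step, a roughly equivalent cache can be provisioned with the AWS CLI; the instance type and TTL below are illustrative values, and the per-resolver behavior matches the console selection above:

aws appsync create-api-cache \
    --api-id "yourAPIIDHere" \
    --api-caching-behavior "PER_RESOLVER_CACHING" \
    --type "SMALL" \
    --ttl 3600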
We now have a cache set up for the API as a whole, but need to define a schema for it to be functional. We define a simple schema that reflects something that may be used for a blogging platform. More information about GraphQL schema design can be found in AppSync’s documentation. Select Schema on the sidebar and use the following schema for your API:
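(The schema below is a minimal reconstruction for this walkthrough; field names beyond the types and operations referenced in this article, such as author, title, content, and postId, are illustrative.)

# Posts and comments live in separate DynamoDB tables
type Post {
    id: ID!
    author: String
    title: String
    content: String
}

type Comment {
    id: ID!
    postId: ID!
    author: String
    content: String
}

# The feed combines the results of both table scans
type Feed {
    posts: [Post]
    comments: [Comment]
}

type Query {
    getFeed: Feed
}

type Mutation {
    insertPost(author: String, title: String, content: String): Post
    insertComment(postId: ID!, author: String, content: String): Comment
}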
In this schema we have two different types, a Comment and a Post. For each of these types, create and link an Amazon DynamoDB table as a data source in AppSync. Tutorials for integrating data sources manually can be found in our documentation. Alternatively, selecting Create Resources in the Schema section walks you through the automatic creation of new DynamoDB data sources in your account directly from the AppSync console.
Using the built-in AppSync sample templates for Put Item and Return Single Item, we can attach unit resolvers to the insertPost and insertComment mutations so that each one inserts data into its respective table. If you want to create a pipeline resolver, clicking the option to Convert to Pipeline Resolver automatically converts the default unit resolver into a pipeline resolver with a single function encapsulating the mapping template code.
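For reference, the Put Item sample request mapping template generated by the console looks roughly like the following, with Return Single Item as the response mapping template; check the templates generated in your own console, as they may differ slightly:

## Request mapping template (Put Item sample): write the mutation
## arguments as a new item with an auto-generated id
{
    "version": "2017-02-28",
    "operation": "PutItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($util.autoId())
    },
    "attributeValues": $util.dynamodb.toMapValuesJson($ctx.args)
}

## Response mapping template (Return Single Item)
$util.toJson($ctx.result)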
Similarly, we can navigate to the Functions section on the sidebar and create new functions called GrabPosts and GrabComments that are linked to the posts and comments DynamoDB data sources respectively. Each one simply scans the entire table and saves the returned data into the stash, which allows you to pass arbitrary data across request and response mapping templates, and across functions in a pipeline resolver. For more information about the stash and pipeline resolver functions, refer to the documentation.
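As a sketch, the GrabPosts function could use templates like the ones below (GrabComments would be identical, stashing under a comments key). The stash key names are assumptions that are reused in the After mapping template later on:

## GrabPosts request mapping template: scan the entire posts table
{
    "version": "2018-05-29",
    "operation": "Scan"
}

## GrabPosts response mapping template: save the items into the stash
## so the After template can combine them with the comments
$util.qr($ctx.stash.put("posts", $ctx.result.items))
$util.toJson($ctx.result.items)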
Now that we have set up these functions, we can package them together into a single pipeline resolver that returns both comments and posts from different tables in a single API request. Set up a pipeline resolver linked to the getFeed query defined in the Schema section by attaching a resolver and selecting the option Convert to Pipeline Resolver. We add both the GrabPosts and GrabComments functions we just created and enable caching for this pipeline resolver. In the After mapping template section, use the following VTL code to combine both the comments and the posts in a single response for the API clients. In a real-world implementation, timestamps, post content, likes, and other attributes would be included to create the full functionality of a social media platform or blog. For the sake of simplicity, the feed for users in our API is simply a collection of every comment and post on the platform.
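Assuming the stash keys from the function sketch above, the After mapping template can be as simple as this:

## After mapping template: merge the stashed scan results into the
## single feed object returned to the client
#set($feed = {
    "posts": $ctx.stash.posts,
    "comments": $ctx.stash.comments
})
$util.toJson($feed)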
Notice the new cache option at the bottom of our pipeline resolver configuration screen; this setting enables caching for this specific pipeline resolver. The option is visible because the cache behavior in the API is configured to be enabled only for specific resolvers. This option would be hidden if full request caching were the current behavior, as caching would then be enabled globally in the API for every resolver, unit or pipeline.
Now it’s time to test the API setup. Navigate to the Queries option in the sidebar, and execute a few mutations to create some sample data.
Then use queries to confirm the data is available. Query results are stored in the cache, so subsequent requests avoid future table scans.
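For example, with the illustrative schema above, operations along these lines exercise the resolvers (the argument values are arbitrary sample data):

# Create some sample data
mutation CreateSampleData {
    insertPost(author: "Nadia", title: "First post", content: "Hello!") {
        id
    }
    insertComment(postId: "1", author: "Rene", content: "Welcome!") {
        id
    }
}

# Read the combined feed; repeated calls within the TTL hit the cache
query GetFeed {
    getFeed {
        posts { id title content }
        comments { id postId content }
    }
}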
Cache keys are automatically generated for unique requests by AppSync. Data is retrieved from the cache if the key exists and the entry is within its TTL. In this particular example, the operation of scanning two tables and joining the results becomes increasingly costly as table sizes grow. With pipeline resolver caching, the scan operation is executed once every 45 seconds, whereas without caching it would be executed against both DynamoDB tables for every single client call.
Using X-Ray to visualize caching impact on performance
AppSync has a built-in integration with the flexible AWS X-Ray request tracing service. With X-Ray enabled in the API settings, we can get a detailed view of how caching positively impacts API performance.
The service map from X-Ray shows us all the requests to the API. Initially, 1000 requests were sent to the getFeed query with caching disabled. We set up a load of 67 transactions per minute on two different DynamoDB tables, with an overall average latency of 14ms.
With caching enabled, 1000 additional API requests are then executed. AWS X-Ray recognizes the new AppSync cache resource and incorporates it into the service map. We can see the cache is handling most of the transactions, leaving less than 1% of the requests to the DynamoDB tables themselves. Latency is also down 72%, as the cache enables 4ms response times on average.
Caching an expensive query with pipeline resolvers
We tested a rather expensive query in AppSync based on a pipeline resolver where the sequential execution of the pipeline functions could take up to 800ms to complete. In this case, a single API call needs to contact multiple data sources sequentially in the pipeline, and a couple of them require extra time to execute their requests or are simply slow to respond. By defining a 5-second TTL in the AppSync cache, the overall request latency for clients drops considerably, enabling a noticeably faster experience for end users. Looking at the average resolver execution time in this example, if the API expects 100 executions per second with a TTL of 1 second, then 99 of them have their execution time drop to less than 10ms, leading to considerably lower latency. With pipeline resolver caching, users of this particular API can see upwards of a 97% reduction in average resolver execution time.
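As a rough back-of-the-envelope check: with one uncached execution at roughly 800ms and the other 99 served from the cache at under 10ms each, the average drops to about (800 + 99 × 10) / 100 ≈ 18ms, consistent with the 97%+ reduction quoted above.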
Furthermore, from a cost perspective, handling millions of requests at 800ms each can increase costs and impact your monthly bill depending on the backend data sources you are using. By enabling caching, this expensive 800ms pipeline query is executed only once every 5 seconds while the remaining requests are executed with much lower latency. With AppSync’s built-in managed caching we could see meaningful execution time savings in applications powered by complex pipeline resolvers.
Setting up custom cache keys
In specific use cases there might be a requirement for custom cache keys. For example, after adding an additional TrendingPostsPerCountry query to our initial API schema, users are able to access trending posts in their country. However, all users in the same country should see the same trending posts, so in this case caching on the user’s identity would be counterproductive.
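Schema-wise, this could simply be an extra field on the Query type; the country argument matches the caching key used in the CLI command below, while the return type is illustrative:

type Query {
    getFeed: Feed
    TrendingPostsPerCountry(country: String!): [Post]
}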
Using the AWS CLI, you can modify and customize the cachingKeys option to improve cache hits accordingly for this use case:
aws appsync create-resolver \
--api-id "yourAPIIDHere" \
--type-name "Query" \
--field-name "TrendingPostsPerCountry" \
--kind "PIPELINE" \
--pipeline-config functions=yourFunctionID1,yourFunctionID2 \
--caching-config ttl=60,cachingKeys="\$arguments.country" \
--request-mapping-template "{}" \
--response-mapping-template " \$util.toJson(\$ctx.result)"
Once the cache is configured in the API, defining a caching-config and giving it a TTL on a specific pipeline resolver automatically enables caching for that pipeline. If full request caching is enabled in the API, all pipeline resolvers are cached and the caching-config option is ignored in favor of the global settings specified for the API cache.
The AWS CLI command above creates a new pipeline resolver that only uses the country argument defined in $arguments.country to cache requests. All other values are removed from the cache key. Data is retrieved from the pipeline resolver data sources every 60 seconds; otherwise it’s served from the cache. This new caching configuration can enable potential reductions in cost and latency for this specific use case. However, we must be careful when setting up customized cache keys, as they can map unique requests to the same data. When used well, custom caching keys can greatly improve the efficiency of pipeline resolvers, depending on the requirements.
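If the pipeline resolver already exists, the same caching configuration can be applied with the update-resolver command instead, keeping the rest of the arguments identical:

aws appsync update-resolver \
    --api-id "yourAPIIDHere" \
    --type-name "Query" \
    --field-name "TrendingPostsPerCountry" \
    --kind "PIPELINE" \
    --pipeline-config functions=yourFunctionID1,yourFunctionID2 \
    --caching-config ttl=60,cachingKeys="\$arguments.country" \
    --request-mapping-template "{}" \
    --response-mapping-template " \$util.toJson(\$ctx.result)"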
Clean up
While AppSync is serverless and you only pay for what you use (API calls), there is an additional charge when managed server-side caching is enabled in the service. It’s billed per hour, without any long-term commitments, until the cache is deleted from your GraphQL API. To avoid charges, make sure to delete the cache in the API you created in your account, or delete the API altogether. For more information visit our pricing page.
Conclusion
As we showcased in this article, caching for pipeline resolvers is now fully supported in AppSync. The new feature unlocks lower latency with fewer requests to backend data sources, and reduces costs when using pipeline resolvers in AppSync to orchestrate and connect data from multiple data sources with a single GraphQL API call.
You can start using caching for pipeline resolvers in all regions where AppSync is supported. For more details, please refer to the documentation.