Measuring CloudFront Performance
It is common practice to benchmark Content Delivery Network (CDN) performance to understand what real end users will experience in production. Ideally, performance measurements are taken from actual production workloads, because these provide the most accurate view of the end user’s experience. However, for many customers this is not an option because their client applications lack the tooling to collect and analyze these metrics. To solve this problem, some customers rely on third-party tools such as Cedexis to add telemetry to their web applications. For other customers, real user metrics are not available because the application being tested has not yet been released and is therefore not receiving real web traffic. As a result, customers turn to synthetic monitoring tools, such as Dynatrace and Catchpoint, to simulate requests from locations worldwide. In this post, we discuss how to evaluate CDN performance with the options available today.
Real User Monitoring vs. Synthetic Monitoring
The two most common approaches to measuring CDN performance are Real User Monitoring (RUM) and synthetic monitoring. RUM is the process of measuring performance while real end users interact with a web application. With synthetic monitoring, requests are proactively sent by external agents configured to mimic actual web traffic.
Real User Monitoring
When your web application is deployed to production and receiving real web traffic, real user metrics provide the best understanding of what your viewers are experiencing because they measure actual viewer interactions with your application. Some companies have teams dedicated to building telemetry into their applications to collect this data. This is the ideal approach because metrics can be designed to measure success criteria specific to your application (a sketch of such telemetry follows this list). For example:
- Throughput and latency metrics can measure downloads of the exact object(s) served by an application. CDN performance benefits can vary depending on workload characteristics such as the size of the objects served, so measuring your actual workload provides the most accurate view of what your end users will experience.
- For availability, the success criteria of these metrics can account for configurations custom to your application such as request retries, connection/response timeouts, and different error codes (HTTP 4xx/5xx). Some applications may consider retries acceptable, while others may never attempt to retry a failed request.
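As a concrete illustration, here is a minimal sketch of this kind of custom telemetry, assuming a Python-based client; the object URL, retry count, timeouts, and success criteria are placeholders you would replace with your application’s own definitions:

```python
import time
import requests

# Hypothetical object served through your CDN; replace with your own.
URL = "https://d111111abcdef8.cloudfront.net/app/data.json"
MAX_RETRIES = 2  # whether retries count as failures is an application decision


def fetch_with_telemetry(url):
    """Download one object and record the metrics a RUM pipeline might collect."""
    for attempt in range(MAX_RETRIES + 1):
        start = time.perf_counter()
        try:
            resp = requests.get(url, timeout=(3, 30), stream=True)
            first_byte = time.perf_counter() - start  # headers received
            body = resp.content                       # drain the full body
            last_byte = time.perf_counter() - start   # last-byte latency
            return {
                "status": resp.status_code,
                "attempt": attempt,
                "first_byte_s": round(first_byte, 3),
                "last_byte_s": round(last_byte, 3),
                "bytes": len(body),
                "ok": resp.status_code < 400,  # classify per your own criteria
            }
        except requests.RequestException:
            continue  # treat as a retryable failure and loop again
    return {"status": None, "attempt": MAX_RETRIES, "ok": False}


print(fetch_with_telemetry(URL))
```

Whether a retried request counts as a success or a failure is exactly the kind of application-specific decision this telemetry should encode.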
While developing custom telemetry is effective, it requires an investment in building and maintaining the tools. As an alternative, other customers rely on third-party tools for this telemetry. Many third-party tools provide clients or code that can be embedded into applications to collect metrics and send them to the provider’s service for analysis. In some cases, these tools measure network performance by downloading a fixed set of pre-configured test objects across different CDN networks from the viewer’s client after the application has successfully loaded. This approach provides a happy medium: it measures network performance from the viewer’s device without adding overhead to your web application.
Synthetic Monitoring
When real web traffic is not available, synthetic monitoring tools are a good way to simulate web traffic from different geographies. To create tests simulating end users in different locations, synthetic monitoring providers deploy nodes in cities around the world that are configured to send requests to application endpoints. These providers typically offer two forms of testing applicable to CDN monitoring, often referred to as backbone and last mile testing.
Backbone – For these tests, providers use nodes installed in colocation data centers around the world. These servers have direct fiber connectivity to various network providers (both transit providers and internet service providers) such as Verizon, Cogent, CenturyLink/Level 3, NTT, Comcast, and AT&T.
Last mile – For last mile tests, providers install agents on the computers of real end users or deploy small test nodes connected to the home routers of real end users. These nodes are intended to give an eyeball perspective of what performance looks like for end users through their internet service providers. These tests can be useful for understanding what real users experience, but they have limitations in capacity and availability.
Both backbone and last mile tests are good ways to get a general idea of how CDNs will perform in locations where you expect your viewers to be when real web traffic is not yet available. While last mile tests offer network conditions closer to those of real end users, backbone tests typically offer a larger selection of nodes in more locations. They also provide more consistent results due to dedicated capacity. Some testing providers do not offer node selection for last mile tests and cannot guarantee that the same nodes will be used from one test to the next. This can complicate comparisons, especially over a period of time. When comparing CDNs, we recommend using backbone tests for consistency, because last mile nodes may not always be available from one test to the next.
Best Practices
While third-party tools can help gather metrics for analyzing performance, there are some things to understand in order to use them effectively. Here are some best practices to consider.
- RUM over synthetic. When possible, use RUM over synthetic monitoring to get the best view of the real user experience. If you have a production workload that you will be adding a CDN to, consider using that actual traffic to measure performance; tools such as Cedexis can add telemetry to your web application to collect these metrics. Real user metrics are the best way to measure performance and should be used whenever possible.
- Understand your workload before configuring synthetic tests. Take into consideration what your expected workload will look like in production so that you can design tests that mirror actual traffic and rule out tests that will not reflect it (see the sketch after this list). Some things to consider:
- Where do you expect your viewers to be located? For example: US, EU, or worldwide. CDNs have varying presence in different parts of the world, so it is important to ensure that your tests are configured to reflect your viewer audience.
- What is the size of the object you will be delivering for each request? 1KB, 20KB, 1MB? Acceleration benefits vary with object size. Larger objects see greater benefits for both dynamic and cacheable content, as bytes transfer over shorter distances and TCP congestion windows are able to scale up. Test multiple payload sizes that match your average object sizes. Testing individual objects can give more accurate results, as full web page tests may be skewed by downloads of external resources not served through the CDN.
- What protocols do you require? HTTP or HTTPS? Some CDNs perform better than others depending on the protocol, as some networks support HTTPS on only a subset of their POP locations. CloudFront has optimized all edge locations to support both HTTP and HTTPS traffic. As most internet traffic is moving to HTTPS, it’s important to configure your tests to reflect this trend.
- What is the popularity of your content? How frequently will each object be requested? Most CDNs use a least recently used (LRU) cache to ensure cache space is available for new and commonly fetched objects. Depending on the frequency of requests, some objects will remain in cache longer than others. Cache capacity at edge locations varies from one CDN to another, so depending on whether your content is hot (frequently requested) or long-tail (rarely requested), you will see different cache hit ratios.
- Is your content dynamic or cacheable? For cacheable content, consider the popularity of objects as mentioned above. For dynamic content, take into consideration the benefits gained from using a CDN. For instance, CloudFront accelerates delivery of dynamic content by reducing the distance traveled to establish TLS connections with your viewers and by maintaining persistent connections from edge locations to the origin. To achieve optimal results, configure appropriate keep-alive timeouts for connections to the origin and make sure your tests generate enough volume to reuse those connections. Also understand that for distributed origins, connections are persisted per origin IP address, so connections and requests to the origin will round-robin between IP addresses. If your tests send requests too infrequently, you may not take advantage of the persistent connections.
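To tie these considerations together, the following sketch shows one way to structure a workload-matching test, assuming Python and a hypothetical set of test objects you have staged behind your distribution at representative sizes. It reuses a single session so connections persist between requests, repeats each download because the first request may be a cache miss, and reads CloudFront’s X-Cache response header to distinguish hits from misses:

```python
import time
import requests

# Hypothetical test objects staged behind your distribution at representative sizes.
BASE = "https://d111111abcdef8.cloudfront.net/perf"
PAYLOADS = ["1kb.bin", "20kb.bin", "1mb.bin"]

session = requests.Session()  # reuse connections, as a real client would

for name in PAYLOADS:
    for run in range(3):  # repeat: the first request may be a cache miss
        start = time.perf_counter()
        resp = session.get(f"{BASE}/{name}", stream=True)
        ttfb = time.perf_counter() - start   # approximate first-byte latency
        size = len(resp.content)             # drain the body
        total = time.perf_counter() - start  # last-byte latency
        print(
            f"{name} run={run} status={resp.status_code} "
            f"cache={resp.headers.get('x-cache', 'n/a')} "  # e.g. 'Hit from cloudfront'
            f"ttfb={ttfb:.3f}s total={total:.3f}s "
            f"throughput={size / total / 1e6:.2f} MB/s"
        )
```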
- Comparing CDNs. When comparing CDNs, ensure your tests use the same configuration and measurements: downloads of the same sized payload and the same volumes of traffic. When evaluating a new CDN against an incumbent, it’s a common mistake to send only a small subset of the traffic to the new CDN. CDNs perform better with high volumes of traffic, as objects remain primed in cache and connections are reused more frequently.
- Testing dynamic content acceleration. When testing the performance of dynamic content acceleration, avoid tests that cannot leverage the persistent connections from the CDN to the origin. Ensure test frequencies fall within the configured origin keep-alive timeouts and that the CDN receives enough requests to establish persistent connections to all origin IPs (see the pacing sketch below).
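As a rough illustration of pacing, the sketch below issues requests at an interval shorter than an assumed origin keep-alive timeout. The URL, interval, and timeout values are assumptions for illustration; in practice, connection reuse between the CDN and your origin also depends on overall request volume, not just on a single probe’s cadence:

```python
import time
import requests

# Assumed values for illustration. CloudFront's origin keep-alive timeout is
# configurable per origin (5 seconds by default); pace probes below it.
KEEP_ALIVE_TIMEOUT_S = 5
REQUEST_INTERVAL_S = KEEP_ALIVE_TIMEOUT_S - 2  # stay under the keep-alive timeout
URL = "https://d111111abcdef8.cloudfront.net/api/status"  # hypothetical endpoint

session = requests.Session()  # so the viewer side also reuses its connection
for _ in range(10):
    start = time.perf_counter()
    resp = session.get(URL)
    print(f"status={resp.status_code} latency={time.perf_counter() - start:.3f}s")
    time.sleep(REQUEST_INTERVAL_S)
```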
- Match your demographic. When configuring synthetic tests, make sure you use probes representative of ISP popularity among your viewers. For example, if one ISP serves more than 60% of users in a certain country, select two probes on that ISP for every one probe on another ISP.
- Be aware of timeouts. Watch out for user agent timeouts and origin read timeouts, and make sure your object can be downloaded within those timeouts (a minimal example follows).
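For example, here is a minimal sketch of setting explicit connect and read timeouts on a test request (the URL and timeout values are placeholders):

```python
import requests

URL = "https://d111111abcdef8.cloudfront.net/perf/1mb.bin"  # placeholder

try:
    # (connect timeout, read timeout) in seconds. Note that the read timeout
    # bounds the wait between bytes, not the total download time, so also
    # verify that the whole object arrives within your overall budget.
    resp = requests.get(URL, timeout=(3, 30))
    print(f"status={resp.status_code} bytes={len(resp.content)}")
except requests.exceptions.ConnectTimeout:
    print("connection timed out")
except requests.exceptions.ReadTimeout:
    print("read timed out while waiting for data")
```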
- Test over long periods. Internet traffic is constantly changing, and some CDNs adjust to capacity demands better than others. Measurements taken over a couple of hours do not provide the full picture. Consider peak hours by region, country, and city. In addition, make sure the CDN is given enough time to populate and warm the cache when delivering cacheable content.
- Deep dive to understand anomalies. No network is 100% resilient. Investigate nodes or regions that perform worse than others, and test over multiple periods to find out whether results were skewed by a network outage. For synthetic tests, rule out nodes that don’t reflect real traffic due to localization issues, such as nodes using divergent DNS resolvers (a resolver check sketch follows). CDNs that use DNS-based routing, such as CloudFront, intelligently route traffic based on latency measurements taken from viewer and DNS resolver data. In some cases, requests from synthetic test nodes may be routed incorrectly due to limited data from these nodes, which can skew the results. CDNs using anycast routing are not affected by this, as requests are always routed to the nearest available server. However, when network capacity or conditions change, end users may see better performance from a CDN’s custom routing algorithm.
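One way to check for resolver localization issues is to compare the DNS answers a test node receives with those from a well-known public resolver. The sketch below uses the third-party dnspython package, a hypothetical distribution domain, and a placeholder resolver address; if the two answer sets differ significantly, latency differences may reflect resolver localization rather than CDN performance:

```python
# Uses the third-party dnspython package: pip install dnspython
import dns.resolver

DOMAIN = "d111111abcdef8.cloudfront.net"  # hypothetical distribution domain

for label, nameserver in [
    ("test node resolver", "203.0.113.53"),  # placeholder address
    ("public resolver", "8.8.8.8"),
]:
    resolver = dns.resolver.Resolver()
    resolver.nameservers = [nameserver]
    answers = resolver.resolve(DOMAIN, "A")
    print(label, sorted(rr.address for rr in answers))
```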
- When using RUM tools, consider the size of test objects. For example, some RUM tools take throughput measurements based on pre-configured objects of a fixed size (20KB to 100KB). This gives one perspective on throughput performance but may not be relevant for use cases like video delivery, where chunks are often larger than 1MB.
- Understand how community shared metrics are measured. Some RUM monitoring tools provide community metrics based on downloads of identical objects from different CDN endpoints. These endpoints are configured by the CDN providers themselves. To improve results, some CDNs enable edge locations for all, or larger parts of, their network, which may not reflect normal customer configurations.
- Understand how availability is measured. Make sure the way that availability is measured by providers aligns with how you would measure availability for your application. Determine if your application considers retries and/or 4xx errors as an outage.
- Low frequency testing. Synthetic monitoring tools often have a default limit of approximately five minutes between requests from each node, and with last mile tests this limit may be even higher. When testing CDNs, understand that long periods between requests may skew results in ways that do not reflect performance under real user traffic. For example, when testing delivery of dynamic content, infrequent requests may not have the opportunity to reuse existing connections before viewer and origin keep-alive timeouts expire. For cacheable content, objects configured with a low time to live (TTL) may expire from cache between infrequent tests.
- Choose the right metrics. Select metrics that correlate to the success of your application. For example, if you are serving video on demand content, consider re-buffer and error rates. Prefer last byte latency (LBL) or response time over first byte latency (FBL), as most applications cannot serve content until the last byte is delivered. Consider looking at P90 rather than P50 metrics to understand what performance looks like for a larger share of your viewers (a percentile sketch follows this list).
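As a final illustration, the sketch below computes P50 and P90 from a set of last-byte latency samples (hypothetical values here, such as those collected by the telemetry sketches above) using only the Python standard library:

```python
import statistics

# Hypothetical last-byte latency samples in seconds, for example collected
# by the telemetry sketches above.
samples = [0.182, 0.201, 0.198, 0.550, 0.176, 0.240, 0.195, 0.310, 0.205, 0.188]

p50 = statistics.median(samples)
p90 = statistics.quantiles(samples, n=100)[89]  # cut points P1..P99; index 89 is P90
print(f"P50={p50:.3f}s  P90={p90:.3f}s")
```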
Now that you understand the different options available for measuring CDN performance, visit the Amazon CloudFront product details page to learn more about how CloudFront can accelerate the delivery of your content.