AWS for M&E Blog

Accelerating Motorsports: How NASCAR delivers real-time racing data to broadcasters, racing teams, and fans

This blog post was co-authored by Chris Wolford, Managing Director, Event & Media Technology at NASCAR.

Photo of NASCAR race track with cars lined up at starting line

Introduction

The National Association for Stock Car Auto Racing (NASCAR) has a rich history that dates back to the late 1940s. Over the years, NASCAR has become one of the most popular motorsport organizations in the world, giving fans close wheel-to-wheel racing around superspeedways, oval tracks, road courses, and street circuits. Millions of viewers tune in to watch the NASCAR Cup Series, the top level of competition in NASCAR, which features 40 race cars and top speeds of over 200 miles per hour.

During a race, each team has a crew chief, spotters, a pit crew, engineers, and other team members working together to make real-time decisions by analyzing a variety of data points. That data evolves lap to lap. Tire wear, for example, is central to race strategy: a team must decide when to make pit stops and how many to make over the course of the race. Because tire wear data can vary from lap to lap, coordination between the driver, the engineers, and the expertise of the crew is vital when analyzing data and setting overall race strategy. This is where Amazon Web Services (AWS) comes in, with cloud-based services and solutions that streamline data analysis and its application.

The data pipeline for NASCAR does not just stop at the teams. NASCAR shares real-time race data with broadcasters and fans to provide an unparalleled fan experience. “By far and away, NASCAR produces more content, more data, and more video off of a single race than anybody,” says Steve Stum, VP of Operations and Technical Production at NASCAR. “As we put more content at the track into the cloud, we pushed the limits on AWS, and AWS engineers came up with some great solutions so that we could get that data into devices for the fans or the racing teams.”

Let’s explore how NASCAR uses AWS to provide real-time data to racing teams, broadcasters, and fans.

The future of racing: NASCAR’s NextGen car

Debuting in 2022, the NextGen car was developed to enhance competition, improve safety, and reduce costs for teams. The new design featured numerous aerodynamic and mechanical changes from the previous generation, such as independent rear suspension and new 18-inch wheels with a single lug nut. Beyond these mechanical design changes, the NextGen car also featured technology changes and upgrades. The NextGen car provided more insight into performance during competition – and gave NASCAR the ability to add more data points in the future, something previously not possible.

During a race, 40 NextGen cars are outfitted with numerous sensors, totaling more than 60 different data points, capturing everything from engine RPMs to brake pressure. All of these data points are sampled hundreds of times per second on each car, resulting in over 612,000 messages per second during a given race. At a size of 0.15KB per message, NASCAR needed a system that could handle ingestion of almost 92 MB/s for the duration of the race – at times up to 4 hours in length. Once ingested, this data needed to be streamed in real time to competition teams and other NASCAR partners. To add to the complexity, this kind of data collection needed to be available at multiple racetracks across the United States for the racing season.
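
These headline numbers can be sanity-checked with simple arithmetic. The sketch below uses only the figures quoted above (total message rate, message size, and a 4-hour race):

```python
# Figures quoted above
messages_per_second = 612_000      # total messages/s across all 40 cars
message_size_kb = 0.15             # KB per message

# Aggregate ingest rate
throughput_mb_s = messages_per_second * message_size_kb / 1000
print(f"{throughput_mb_s:.1f} MB/s")        # 91.8 MB/s, i.e. almost 92 MB/s

# Data volume over a race lasting up to 4 hours
race_seconds = 4 * 3600
total_tb = throughput_mb_s * race_seconds / 1_000_000
print(f"{total_tb:.2f} TB per race")        # 1.32 TB per race
```

In other words, a single race can produce on the order of 1.3 TB of raw telemetry, all of which must be streamed onward as it arrives.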

To accomplish this feat, NASCAR partnered with AWS Professional Services to create a car telemetry data ingestion and streaming platform known as the Event Racing Data Platform (ERDP).


Figure 1: Event Racing Data Platform Architecture

Capturing data off the NextGen car

The journey of the NextGen car telemetry data begins at the car itself. The race cars are outfitted with over 60 different sensors, collecting data from the engine, tires, exterior, and more. All of this data is collected and aggregated by the Electronic Control Unit (ECU) onboard each car. The aggregated data packet is sent every 10ms from each of the 40 cars via Ultra-High Frequency (UHF) radio to a relay station at each track. The relay station then forwards the packet via UDP over the track's internal network to the NASCAR Mobile Data Center (MDC), which travels to each racetrack.
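
The aggregate-and-forward step can be illustrated with a toy packet format. The real ECU wire format is proprietary and not described in this post, so the header layout, field sizes, and channel IDs below are purely hypothetical:

```python
import struct

# Hypothetical aggregated ECU packet: a small header (car number,
# timestamp in ms, reading count) followed by N (channel_id, value) pairs.
HEADER = struct.Struct("!BIH")     # car number, timestamp ms, reading count
READING = struct.Struct("!Hf")     # channel id, sampled value

def pack_packet(car, ts_ms, readings):
    """Aggregate many sensor readings into one packet for the 10ms uplink."""
    body = b"".join(READING.pack(cid, val) for cid, val in readings)
    return HEADER.pack(car, ts_ms, len(readings)) + body

def unpack_packet(data):
    """De-aggregate a packet back into individual readings (as the
    ingestion server in the MDC would)."""
    car, ts_ms, count = HEADER.unpack_from(data, 0)
    offset = HEADER.size
    readings = []
    for _ in range(count):
        cid, val = READING.unpack_from(data, offset)
        readings.append((cid, val))
        offset += READING.size
    return car, ts_ms, readings

# Round trip: values chosen to be exactly representable as 32-bit floats
pkt = pack_packet(car=5, ts_ms=120_340, readings=[(1, 8500.0), (2, 1450.5)])
car, ts_ms, readings = unpack_packet(pkt)
```

The point of the sketch is the shape of the pipeline, not the encoding: many per-sensor samples are bundled into one small packet per car per 10ms window, then split back out into individual messages downstream.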

Photo of a NASCAR long haul truck

Pushing the boundaries from the edge to the cloud at ultra-low latency

At this stage of the journey, the aggregated data has arrived at the MDC at the track, where it is decoded and de-aggregated by the ingestion server. Here, the data messages are made available to the data publishing cluster in the truck. These messages are then processed by NATS, an open source pub/sub messaging system, running on a Kubernetes cluster inside the MDC. In addition to keeping a copy of the incoming data locally, this cluster propagates messages to the receiving NATS cluster running in the cloud. From the car to message availability, total elapsed time is less than 40ms at the track and under 200ms from the cloud.
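
The keep-a-local-copy-and-forward behavior of the at-track cluster can be sketched with a toy pub/sub class. This stands in for NATS purely for illustration (a real deployment would use NATS clustering features to propagate messages to the cloud), and the subject names are invented:

```python
from collections import defaultdict

class EdgePublisher:
    """Toy stand-in for the at-track publishing cluster: deliver each
    message to local subscribers and queue it for propagation to the
    cloud cluster."""
    def __init__(self):
        self.local_subs = defaultdict(list)   # subject -> callbacks
        self.cloud_queue = []                 # messages awaiting uplink

    def subscribe(self, subject, callback):
        self.local_subs[subject].append(callback)

    def publish(self, subject, payload):
        for cb in self.local_subs[subject]:
            cb(subject, payload)              # local delivery (~40ms budget)
        self.cloud_queue.append((subject, payload))  # forwarded over DX

edge = EdgePublisher()
received = []
edge.subscribe("telemetry.car-05", lambda subj, msg: received.append(msg))
edge.publish("telemetry.car-05", b"rpm=8500")
# received == [b"rpm=8500"], and cloud_queue holds the same message
```

The design point it illustrates: at-track consumers are served directly from the local cluster, so their latency does not depend on the uplink to the cloud.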

To achieve ultra-low latency communication between the at-track MDC and AWS, NASCAR uses an AWS Direct Connect (DX) connection. Direct Connect is crucial to making this solution repeatable at tracks across the country. Each track has a private network with routing to Direct Connect, so data traveling from the track to the cloud always uses the AWS backbone network – bypassing the public internet and providing the ultra-low latency the solution needs. It also allows the cloud infrastructure to reside in a single AWS Region while providing the same performance, regardless of where the race physically takes place.

The telemetry data is sent from the NATS cluster through the DX connection to an Elastic Load Balancing (ELB) Network Load Balancer (NLB) in NASCAR's AWS account. The NLB allows for extremely high throughput because it functions as a layer 4 load balancer for incoming traffic. The destination is the receiving NATS cluster. This cluster, also running on Kubernetes, is managed with Amazon Elastic Kubernetes Service (Amazon EKS). Once here, the telemetry data is ready for consumption downstream.

In total, the elapsed time from data leaving the car to subscribers at the track is close to 40ms, while cloud subscribers can access it in close to 200ms, depending on their internet latency.

How NASCAR, race teams, and manufacturers use the data

With the race telemetry data available in the NASCAR AWS account, it is used and consumed downstream in three different ways.

Telemetry data storage

Figure 2: Telemetry data storage

First and foremost, NASCAR needs to securely and durably store this racing telemetry data as its single source of truth. To do this, telemetry data is streamed to an Amazon Kinesis Data Firehose delivery stream. Firehose streams data in near real time to downstream destinations in AWS and to other HTTP endpoints. In this case, an Amazon S3 bucket is the destination for the racing telemetry data. NASCAR chose Amazon S3 as the storage service for the official race data due to its industry-leading durability, data availability, security, and performance.
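
When writing many small messages to a Firehose delivery stream, producers typically batch them to stay within the PutRecordBatch API limits (500 records and 4 MiB per call). A minimal batching sketch, assuming messages arrive as byte strings (the actual send would then be one `put_record_batch` call per batch):

```python
def batch_for_firehose(messages, max_records=500, max_bytes=4 * 1024 * 1024):
    """Group telemetry messages into batches that respect the Kinesis
    Data Firehose PutRecordBatch limits (500 records / 4 MiB per call)."""
    batches, current, size = [], [], 0
    for msg in messages:
        # Flush the current batch if adding this message would exceed a limit
        if current and (len(current) == max_records or size + len(msg) > max_bytes):
            batches.append(current)
            current, size = [], 0
        current.append(msg)
        size += len(msg)
    if current:
        batches.append(current)
    return batches

# 1,200 messages of 150 bytes (0.15KB) -> batches of 500, 500, and 200
batches = batch_for_firehose([b"x" * 150] * 1200)
```

At 0.15KB per message, the record-count limit is reached long before the byte limit, so batches here are bounded by the 500-record cap.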

Outside of capturing this telemetry for official use by NASCAR, racing telemetry is made available to subscribers for their own use, including racing teams, manufacturers, and other third parties associated with the race. For many of these subscribers, accessing the data as close to real time as possible is essential. While every racing team has direct access to this data at the track, many teams now incorporate remote engineers and analytics groups whose insights and strategy decisions can change the course of a race and give their drivers an advantage.

Real time telemetry data flow

Figure 3: Real time telemetry data flow

To ingest the racing telemetry data in real time, subscribers connect to the EKS cluster through a public-facing NLB. As with the ingestion of this data into the cloud, the NLB provides low-latency layer 4 routing for fast access to the data in EKS. In practice, this gives racing teams remote access to racing telemetry data with only the addition of internet latency: roughly 200ms total from the time it is broadcast off the car.

One key feature of NATS incorporated in this design is isolation of data between different subscribers. Ensuring that a given racing team is the only recipient of its car's telemetry data is essential for competition integrity. Teams that connect to the ERDP to receive live telemetry therefore access only the data set appropriate to them, in real time.
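
NATS enforces this kind of isolation with per-user subject permissions in the server configuration. The matching semantics can be sketched in a few lines ('*' matches exactly one token, '>' matches one or more trailing tokens); the subject naming scheme and team names below are invented for illustration, not NASCAR's actual scheme:

```python
def subject_matches(pattern: str, subject: str) -> bool:
    """NATS-style subject matching: '*' matches exactly one token,
    '>' matches one or more trailing tokens."""
    p, s = pattern.split("."), subject.split(".")
    for i, tok in enumerate(p):
        if tok == ">":
            return len(s) > i
        if i >= len(s) or tok not in ("*", s[i]):
            return False
    return len(p) == len(s)

# Hypothetical per-team permissions: each team may subscribe only to
# subjects for its own car(s).
permissions = {
    "team-a": ["telemetry.car.05.>"],
    "team-b": ["telemetry.car.09.>"],
}

def allowed(team: str, subject: str) -> bool:
    return any(subject_matches(p, subject) for p in permissions.get(team, []))

allowed("team-a", "telemetry.car.05.engine.rpm")   # True
allowed("team-a", "telemetry.car.09.engine.rpm")   # False: not team-a's car
```

With subjects partitioned by car, granting each team a subscribe permission only on its own car's subject tree makes the isolation a property of the messaging layer itself rather than application code.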

Historical data flow

Figure 4: Historical data flow

In addition to real-time consumption, many subscribers also have use cases for post-race analysis of the car telemetry data. To support this, a data synchronization service copies the official racing data from the NASCAR S3 bucket to other, customer-facing S3 buckets. AWS Lambda functions perform this synchronization. Determinations of which data should be shared with which team are stored as configuration files, also in S3. Lambda reads these configurations and copies the racing data to customer-facing buckets without transforming the raw data.
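
The routing decision inside such a sync function reduces to a pure lookup. The object key layout and configuration shape below are illustrative assumptions (the post does not describe NASCAR's actual schema); the copy itself would then be a straightforward S3 object copy with no transformation:

```python
def destination_buckets(key: str, share_config: dict) -> list:
    """Given an object key like 'race-123/car-05/lap-042.json' and a
    config mapping team -> {"bucket": ..., "cars": [...]}, return the
    customer-facing buckets this object should be copied to, unaltered.
    (Hypothetical key layout and config shape, for illustration only.)"""
    car = key.split("/")[1]                  # e.g. 'car-05'
    return [
        cfg["bucket"]
        for cfg in share_config.values()
        if car in cfg["cars"]
    ]

# Example config: which cars' data each team's bucket should receive
share_config = {
    "team-a": {"bucket": "team-a-telemetry", "cars": ["car-05", "car-51"]},
    "team-b": {"bucket": "team-b-telemetry", "cars": ["car-09"]},
}

destination_buckets("race-123/car-05/lap-042.json", share_config)
# -> ["team-a-telemetry"]
```

Keeping the sharing rules in configuration files rather than code means access changes require only a config update, not a redeployment of the Lambda function.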

These design decisions – separate Amazon S3 buckets for the data and no transformation – are intentional. First, by providing unaltered data, subscribers can be certain that the telemetry and race data they receive is consistent with the official racing data (the source of truth) that NASCAR itself maintains. Second, decoupling the main NASCAR bucket from the customer-facing buckets improves data security and fault tolerance. Downstream services interacting with racing telemetry data never interact directly with the official NASCAR bucket. Additionally, in the event of an issue with data ingestion, downstream subscribers are never served unready or potentially corrupt data.

Conclusion

NASCAR’s investment in technology is delivering a more level playing field, closer competition, and a more engaging fan experience. AWS Direct Connect, Elastic Load Balancing, Amazon EKS, and Amazon Kinesis Data Firehose give NASCAR the ultra-low latency data streaming capability to modernize the sport. The industry-leading scalability, data availability, security, and performance of Amazon S3 allow NASCAR to provide useful data downstream. Lessons from NASCAR’s data journey apply to any business trying to leverage data for better outcomes, even one that isn’t required to drive 200 miles per hour.

Ryan Kiel


Ryan Kiel is a Senior Solutions Architect for AWS based out of Virginia. As part of AWS Sports, he helps leagues and franchises with their cloud journey on AWS by leveraging best practices and the newest technology. Outside of work, Ryan is a hockey enthusiast.

Carlos Valdes


Carlos Valdes is a Senior Technical Account Manager for AWS based out of Florida. As part of AWS Enterprise Support, he is a technical advocate for his customers, helping to enable them in their cloud journey and accelerate their outcomes.