AWS for M&E Blog

Immersive viewing of the NASA Artemis I launch with Futuralis, Felix & Paul Studios, and AWS – Part Two

This blog was co-authored by Aravind Pamula, CEO of Futuralis, and Nadeem Shaik, Solution Architect at Futuralis.

In this blog post, we focus on equipping readers with the knowledge needed to use various AWS Media Services to construct a comprehensive live video streaming workflow. The workflows encompass live video transport, transcoding, origination, and distribution to audiences worldwide. In the preceding part of this blog series, we outlined the creation of workflows tailored for live video streaming to users equipped with Meta Quest VR headsets and subscribers to Meta’s Horizon Worlds platform, notably within the ‘Venues’ space. Building upon this foundation, we will now discuss how the same fundamental architecture can be applied to enable live video streaming to immersive environments like projection domes, planetariums, and Facebook 360.

About Futuralis: Futuralis is an AWS Advanced Tier Services Partner and global technology consulting firm that helps customers implement solutions with AWS cloud services and modern application development. In the process of understanding what’s important to its customers, Futuralis implements solutions focused on the six pillars of the AWS Well-Architected Framework.

About Felix & Paul Studios: Felix & Paul Studios is an EMMY® Award-winning creator of immersive entertainment experiences, creating unparalleled, highly engaging, and inspired virtual reality, augmented reality, and mixed reality experiences for audiences worldwide.

Use case introduction

Felix & Paul Studios collaborated with Futuralis to develop two distinct live video streaming workflows leveraging AWS Media Services. The first workflow demonstrated how AWS Media Services and Amazon CloudFront were used to deliver a High Definition (HD) 180° VR streaming experience to viewers using Meta Quest VR headsets via Meta’s Horizon Venues platform. The second workflow focused on delivery of Ultra High Definition (UHD) 360° live streaming to other social platforms, namely Facebook 360, in addition to projection domes and planetariums worldwide. Figure 1-1 shows a high-level overview of the second workflow, discussed further in this portion of the blog series.

This is a high-level workflow diagram depicting an 8K UHD source at the Kennedy Space Center contributing a video feed into AWS, which is then distributed to social platforms as well as domes and planetariums.

Figure 1-1: High-level system architecture diagram

Prerequisites

As we discuss the following use cases and implementation in more detail, it may be helpful to familiarize yourself with the following services, as depicted in Figure 1-2:

  • AWS Elemental MediaConnect is a live video transport service that makes it easy for broadcasters and other premium video providers to reliably and securely send & receive live video to/from the AWS cloud.
  • AWS Elemental MediaLive is a live video processing service used for video transcoding and creation of Adaptive Bit Rate formats such as HTTP Live Streaming (HLS).
  • AWS Elemental MediaPackage is a video origination & packaging service designed to prepare and protect content for distribution. MediaPackage supports a variety of streaming media formats.
  • Amazon CloudFront is a Content Delivery Network (CDN) that can be used to deliver content with low latency and high performance from globally distributed edge locations to end user devices.
  • Amazon Simple Storage Service (S3) is an object-based storage service that can be used for a wide range of applications pertaining to file storage workflows.
  • AWS Identity and Access Management (IAM) allows users to securely manage identities and access to AWS services and resources.
  • Amazon CloudWatch is a cloud observability and monitoring service for resources deployed in AWS.
  • AWS CloudFormation offers the ability to automate and deploy cloud resources in AWS using Infrastructure as Code (IaC).

This is a deployment architecture depicting an 8K UHD video transport using MediaConnect, mapped to different channels using MediaLive. MediaPackage was used for origination and CloudFront for distribution to end users.

Figure 1-2: Overall live streaming architecture.

Key requirements

Now that we are familiar with several of the services used in the solution, let’s review the key requirements it had to satisfy. The following design requirements were implemented:

  • Broadcast-grade delivery: Stream 4K UHD to domes, planetariums, and Facebook 360.
  • Redundancy: Utilize a multi-Availability Zone (AZ) architecture for high availability and system-level redundancy.
  • Live-to-VOD capture: Maintain a secure recorded backup of the live event.
  • Stream routing: Perform routing/switching between live content and static content (e.g. slate/image).
  • Low latency: Maintain low-latency performance without compromising on video quality.
  • Multi-format encoding: Encode both AVC/H.264 and HEVC/H.265 formats for end user/site selection.
  • Multi-track audio mixing: Provide ability to pass multiple audio tracks for different user listening modes.
  • Security: Implement security best practices for real-time data in transit and at rest.

Implementation

The workflow starts with a client-side/on-premises video encoder that captures 8K UHD live video and scales it to 4K UHD for video transport (e.g., ground-to-cloud) using the MediaConnect service. In this configuration, MediaConnect was provisioned to accept a single live stream from the contribution encoder and map it to different channels using MediaLive. The first channel was an AVC/H.264 variation, while the second was an HEVC/H.265 variation, so downstream domes and planetariums could elect to display either video format depending on the decoding capabilities at each site. As media was encoded by MediaLive, the solution leveraged MediaPackage for origination and CloudFront for distribution. The solution used two video delivery formats: the domes and planetariums used HLS, whereas the social media platforms accepted RTMP.

Step one – Contribution encoding

In this step, an on-premises encoder generates a live stream at 8K UHD (7680×4320) for local capture while sending a live 4K UHD (3840×2160) stream to AWS. We start by defining a flow in MediaConnect: the connection between the live video source, a defined MediaConnect input, a defined MediaConnect output, and the downstream destination. We configure the MediaConnect flow by specifying the IP source characteristics, the desired Availability Zone within the AWS Region, and the desired video transport protocol. In this case, the input portion of the MediaConnect flow used the Secure Reliable Transport (SRT) protocol.
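
As a minimal sketch of this configuration (assuming boto3 and illustrative values rather than the production settings), the following call creates a MediaConnect flow with an SRT listener source:

```python
import boto3

mediaconnect = boto3.client("mediaconnect", region_name="us-east-1")

# Create one flow pinned to a specific AZ with an SRT listener source.
# All names, ports, and addresses below are illustrative placeholders.
response = mediaconnect.create_flow(
    Name="uhd-contribution-flow-a",
    AvailabilityZone="us-east-1a",          # pin the flow to one AZ
    Source={
        "Name": "ground-encoder-srt",
        "Protocol": "srt-listener",         # encoder pushes SRT to this flow
        "IngestPort": 5000,                 # listening port for the encoder
        "WhitelistCidr": "203.0.113.10/32", # allowlist the encoder's public IP
    },
)
print(response["Flow"]["FlowArn"])
```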

This architecture depicts a flow where an on-premises encoder generates an 8K UHD live stream for local capture while sending a 4K UHD live stream to the AWS Cloud. This is done by defining a MediaConnect flow with specified input characteristics, Availability Zone, and video transport protocol, which then sends the stream to MediaLive and MediaPackage for distribution.

Figure 2-1: Live streaming architecture to Domes & Planetariums.

In MediaConnect, we allowlist the two IPv4 addresses of the 8K encoder using a source IP Access Control List (ACL) on two separate MediaConnect inputs for ingesting the live video streams. Each MediaConnect flow was assigned to a different AZ in the us-east-1 Region, setting up a high availability video pipeline in AWS. To finalize the MediaConnect flow, output destination(s) must be specified; this involves creating the destination that becomes the input to MediaLive. Within the MediaLive console we can use the input type called “MediaConnect.” This input is then used as an “input attachment” for the MediaLive channel connected to MediaConnect. Refer to Figure 2-2 for additional detail.
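
Wiring the flows into MediaLive might look like the following sketch. A MediaLive input that references two MediaConnect flows becomes a standard-class input, and MediaLive manages the flow outputs through the supplied IAM role; all ARNs and names here are placeholders:

```python
import boto3

medialive = boto3.client("medialive", region_name="us-east-1")

# Placeholder ARNs for the two flows created in different AZs.
FLOW_A_ARN = "arn:aws:mediaconnect:us-east-1:111122223333:flow:EXAMPLE-A:flow-a"
FLOW_B_ARN = "arn:aws:mediaconnect:us-east-1:111122223333:flow:EXAMPLE-B:flow-b"

# A standard-class MediaLive input referencing both flows; MediaLive
# creates and manages the MediaConnect flow outputs on our behalf.
ml_input = medialive.create_input(
    Name="uhd-mediaconnect-input",
    Type="MEDIACONNECT",
    MediaConnectFlows=[{"FlowArn": FLOW_A_ARN}, {"FlowArn": FLOW_B_ARN}],
    RoleArn="arn:aws:iam::111122223333:role/MediaLiveAccessRole",  # placeholder
)
print(ml_input["Input"]["Id"])
```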

The diagram describes setting up a high availability video pipeline on AWS using MediaConnect. This involves allowlisting source IP addresses, creating MediaConnect flows in different Availability Zones, and configuring MediaLive to use the MediaConnect input as the source for the video stream distribution.

Figure 2-2: AWS Elemental MediaConnect Configuration.

Step two – Distribution formatting

The second step in the workflow involves setting up the distribution streaming formats to output destinations with MediaLive. To do so, we need to set up a MediaLive channel with the proper input specifications, such as input codec and input resolution. We specify the input attachment for the MediaLive channel, which points to the MediaConnect flow we created in the previous step.

Using the ingested live stream from MediaConnect, MediaLive builds two redundant video transcoding pipelines for each channel (AVC/H.264 and HEVC/H.265) before passing the output to MediaPackage. The live transcoding pipelines on MediaLive use a “standard channel” configuration, which adds dual-AZ redundancy to continue the high availability architecture.
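
To make the channel structure concrete, here is a heavily abbreviated, hypothetical create_channel skeleton for the AVC/H.264 variant. A real channel carries many more encoder settings (captions, additional audio tracks, rate-control tuning), so treat this as a sketch rather than the configuration used for the event:

```python
import boto3

medialive = boto3.client("medialive", region_name="us-east-1")

# Skeleton of the AVC/H.264 channel; the HEVC variant would swap the
# codec settings. Every identifier below is a placeholder.
medialive.create_channel(
    Name="uhd-h264-channel",
    ChannelClass="STANDARD",                 # dual pipelines across AZs
    RoleArn="arn:aws:iam::111122223333:role/MediaLiveAccessRole",
    InputSpecification={
        "Codec": "HEVC",                     # codec of the contribution feed
        "Resolution": "UHD",
        "MaximumBitrate": "MAX_50_MBPS",
    },
    InputAttachments=[{
        "InputId": "1234567",                # MediaConnect input from step one
        "InputAttachmentName": "mediaconnect-uhd",
    }],
    Destinations=[{
        "Id": "mp-dest",
        "MediaPackageSettings": [{"ChannelId": "uhd-hls-channel"}],
    }],
    EncoderSettings={
        "TimecodeConfig": {"Source": "SYSTEMCLOCK"},
        # Omitting AudioSelectorName uses the input's default audio selector.
        "AudioDescriptions": [{"Name": "audio_1"}],
        "VideoDescriptions": [{
            "Name": "video_4k",
            "Width": 3840,
            "Height": 2160,
            "CodecSettings": {"H264Settings": {
                "Bitrate": 20000000,         # illustrative 20 Mbps CBR
                "RateControlMode": "CBR",
            }},
        }],
        "OutputGroups": [{
            "Name": "mp-out",
            "OutputGroupSettings": {"MediaPackageGroupSettings": {
                "Destination": {"DestinationRefId": "mp-dest"},
            }},
            "Outputs": [{
                "OutputName": "uhd-h264",
                "VideoDescriptionName": "video_4k",
                "AudioDescriptionNames": ["audio_1"],
                "OutputSettings": {"MediaPackageOutputSettings": {}},
            }],
        }],
    },
)
```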

Step three – Origination & distribution

The next step in the workflow connects the MediaLive stream to MediaPackage and CloudFront for distribution to viewers. MediaPackage offers just-in-time packaging/re-packaging of a video stream into various formats, including HLS, MPEG-DASH, CMAF, and MSS. While MediaPackage can generate concurrent outputs in all of these formats, the selected format for distribution was HLS.
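
A minimal sketch of the origination setup, with illustrative IDs and segment settings, might look like this:

```python
import boto3

mediapackage = boto3.client("mediapackage", region_name="us-east-1")

# Channel that MediaLive pushes to, plus an HLS origin endpoint for the
# CDN. IDs and segment settings are illustrative assumptions.
mediapackage.create_channel(Id="uhd-hls-channel")

endpoint = mediapackage.create_origin_endpoint(
    ChannelId="uhd-hls-channel",
    Id="uhd-hls-endpoint",
    HlsPackage={
        "SegmentDurationSeconds": 4,   # shorter segments reduce latency
        "PlaylistWindowSeconds": 60,   # length of the sliding live window
    },
)
print(endpoint["Url"])                 # becomes the CloudFront origin
```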

We then create the CloudFront CDN distribution using MediaPackage as the origin. Various CDN behaviors can be used to optimize the stream experience by configuring additional caching options, Time-To-Live (TTL) values, and other parameters to enhance security and performance. Once configured, the domes and planetariums can use the provided CloudFront distribution URL for streaming and playback of the live content.
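
The following sketch creates a CloudFront distribution with the MediaPackage endpoint as a custom origin; the origin domain, caller reference, and cache policy ID are placeholder assumptions:

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Placeholder origin host taken from the MediaPackage endpoint URL, and
# a placeholder cache policy ID tuned for short-TTL live segments.
MP_ORIGIN_DOMAIN = "example123.mediapackage.us-east-1.amazonaws.com"
CACHE_POLICY_ID = "EXAMPLE-CACHE-POLICY-ID"

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": "live-streaming-2022",   # any unique string
    "Comment": "Live HLS distribution",
    "Enabled": True,
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "mediapackage-origin",
        "DomainName": MP_ORIGIN_DOMAIN,
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "https-only",
        },
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "mediapackage-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "CachePolicyId": CACHE_POLICY_ID,
    },
})
```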

Step four – Streaming to Facebook 360 with RTMP

A second portion of the workflow was implemented to enable streaming to Facebook 360. The front end of the architecture remains the same up to MediaLive. Here, an output stream from MediaLive was created as an RTMP feed, which is compatible with Facebook 360 video ingest specifications. Figure 2-3 shows the second portion of the workflow for streaming to Facebook 360.
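
Assuming the channel skeleton from step two, an RTMP output group for Facebook 360 could be expressed as follows; the ingest URL and stream key are placeholders that would come from Facebook’s Live Producer:

```python
# Destination entry to append to the channel's Destinations list;
# the URL and stream key below are placeholders, not event values.
rtmp_destination = {
    "Id": "facebook-360",
    "Settings": [{
        "Url": "rtmps://live-api-s.facebook.com:443/rtmp/",  # placeholder
        "StreamName": "FACEBOOK-STREAM-KEY",                  # placeholder
    }],
}

# Output group to append to EncoderSettings["OutputGroups"], reusing the
# video/audio descriptions defined in the channel skeleton above.
rtmp_output_group = {
    "Name": "facebook-rtmp",
    "OutputGroupSettings": {"RtmpGroupSettings": {"AuthenticationScheme": "COMMON"}},
    "Outputs": [{
        "OutputName": "fb360",
        "VideoDescriptionName": "video_4k",
        "AudioDescriptionNames": ["audio_1"],
        "OutputSettings": {"RtmpOutputSettings": {
            "Destination": {"DestinationRefId": "facebook-360"},
        }},
    }],
}
```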

This diagram depicts the streaming flow to Facebook 360, created by adding an RTMP output stream from MediaLive that is compatible with Facebook 360 video ingest specifications.

Figure 2-3: Live streaming architecture to Facebook 360.

Step five – Archiving and media input switching with Amazon S3

In addition to the live streaming outputs created by MediaLive, the solution used MediaLive’s archive output group to create a backup recording in an Amazon S3 bucket. Furthermore, content stored in an Amazon S3 bucket can be used as an input to MediaLive to facilitate source input switching between live content and other static assets (e.g., slates or preview videos).
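
Input switching is driven through the MediaLive channel schedule. As an illustration, this hypothetical call switches the running channel to a slate input attachment immediately; the channel ID and attachment name are placeholders:

```python
import boto3

medialive = boto3.client("medialive", region_name="us-east-1")

# Immediately switch the running channel to a slate input attachment;
# the attachment name must match the channel's configuration.
medialive.batch_update_schedule(
    ChannelId="1234567",
    Creates={"ScheduleActions": [{
        "ActionName": "switch-to-slate",
        "ScheduleActionStartSettings": {
            "ImmediateModeScheduleActionStartSettings": {},
        },
        "ScheduleActionSettings": {
            "InputSwitchSettings": {
                "InputAttachmentNameReference": "slate-from-s3",
            },
        },
    }]},
)
```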

Step six – Deployment and management

Other key considerations in the overall workflow involved security, deployment management, and system monitoring. Futuralis Solution Architects leveraged AWS cloud services for each of these subtasks. Starting with security, the solution aligned with best practices outlined by the AWS Shared Responsibility Model and managed access roles with IAM. To simplify architecture deployment and updates, the solution was templated using CloudFormation, enabling IaC that could easily be reused later. Finally, Amazon CloudWatch gathered health and performance metrics (e.g., input loss, bitrate, network output, 4xx/5xx errors) to support monitoring throughout the event.
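
As a simple illustration of the IaC approach, the same template could be deployed to each environment with a loop like the following; the stack names, template URL, and parameter are assumptions for this sketch:

```python
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# Deploy the same template to each environment; the stack name prefix,
# template URL, and Environment parameter are illustrative.
for env in ["Dev", "Test", "Prod"]:
    cloudformation.create_stack(
        StackName=f"live-streaming-{env.lower()}",
        TemplateURL="https://example-bucket.s3.amazonaws.com/live-streaming.yaml",
        Parameters=[{"ParameterKey": "Environment", "ParameterValue": env}],
        Capabilities=["CAPABILITY_NAMED_IAM"],  # template defines IAM roles
    )
```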

Observations

Below are some of the observations and lessons learned during the workflow design.

  • High Availability: The solution leveraged a multi-AZ architecture for high availability and redundancy.
  • Encryption: AWS Media Services support strong encryption of content from ground to cloud. Protocols such as SRT allow for in-flight encryption, while encryption at rest was used for content stored on Amazon S3. Another important characteristic of SRT is broadcast-grade reliability: it manages and overcomes packet loss, jitter, and fluctuating network conditions using Automatic Repeat reQuest (ARQ).
  • Security & Access control: IP ACLs were used on the MediaConnect input to secure the connection from the ground encoder, allowing customers to control contribution to the defined ingest point.
  • Monitoring: Amazon CloudWatch metrics were used to monitor the live stream, allowing us to detect the overall health of the stream, its connections, and potential issues (see the alarm sketch after this list).
  • Automation: To simplify infrastructure deployments, AWS CloudFormation was used to simultaneously deploy the workload to different environments such as Prod, Dev, and Test.
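
As an example of the monitoring approach referenced above, the following sketch creates a CloudWatch alarm on MediaLive’s ActiveAlerts metric; the channel ID, pipeline, and SNS topic ARN are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Raise an alarm whenever the MediaLive channel reports active alerts;
# all identifiers below are placeholders for illustration.
cloudwatch.put_metric_alarm(
    AlarmName="medialive-active-alerts",
    Namespace="AWS/MediaLive",
    MetricName="ActiveAlerts",
    Dimensions=[
        {"Name": "ChannelId", "Value": "1234567"},
        {"Name": "Pipeline", "Value": "0"},
    ],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)
```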

Conclusion

In this blog post we provided a comprehensive guide to implementing live video streaming on AWS. We highlighted the development of two distinct workflows tailored for immersive viewing experiences. Throughout the implementation, we discussed the use of MediaConnect, MediaLive, MediaPackage, CloudFront, and S3, providing insights into their roles in live video transport, transcoding, switching, archiving, origination, and distribution. We also addressed important considerations around security, deployment management, and system-level monitoring, demonstrating alignment with AWS best practices and leveraging services such as IAM, CloudFormation, and CloudWatch for enhanced security, scalability, and performance.

Call to Action

Customers can learn more about the Futuralis Media Streaming Platform solution here and request a demo.

Hamdy Eed

Hamdy Eed is a Sr. Partner Solution Architect with AWS and a member of the Serverless Technical Field Community. With over 20 years of industry experience, he is well-recognized as a Subject Matter Expert in serverless and enterprise integration. Prior to AWS, he worked at NIC Inc. and held several positions at IBM. In his spare time, he coaches soccer for elementary school students and enjoys international travel. You can reach him on LinkedIn at linkedin.com/in/hamdyeed.

Matt Carter

Matt Carter serves as a Principal Solutions Architect with AWS Elemental, leading media solutions for the Public Sector. With over 20 years of industry experience, a patent in video metadata processing, and contributions to the Motion Imagery Standards Board (MISB), he has become a well-recognized Subject Matter Expert in video technologies for government applications. Matt obtained his degree in Applied Science from Miami University and is a veteran of the United States Army Signal Corps.