AWS for M&E Blog
Create a scalable workflow using the Intel Library for Video Super Resolution
Introduction
The rise of Free Ad-supported Streaming Television (FAST) channels has boosted the repurposing and distribution of archival content, including classic movies and TV shows, across modern platforms and devices. Much of this content is available only in lower-resolution, standard-definition (SD) formats and needs enhancement to meet viewer expectations. Traditionally, low-complexity methods like Lanczos and bicubic are used for upscaling. However, they often introduce image artifacts such as blurring and pixelation.
Deep learning (DL) techniques such as Super-Resolution Convolutional Neural Network (SRCNN) and Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR) have shown remarkable results in objective quality assessments, such as VMAF, SSIM, and PSNR. However, they are computationally expensive, potentially making them less suitable for channels with limited budgets, which is often the case for FAST offerings. Amazon Web Services (AWS) and Intel propose a cost-efficient solution for video super-resolution. The solution uses AWS Batch to process video assets with the Intel Library for Video Super Resolution (VSR), balancing quality and performance for real-world use cases. In this blog post, we describe a step-by-step implementation using an AWS CloudFormation template.
Solution
Implementing the Intel Library for Video Super Resolution, based on the enhanced RAISR algorithm, requires specific Amazon EC2 instance types, such as c5.2xlarge, c6i.2xlarge, and c7i.2xlarge. We use AWS Batch to run compute jobs and automate the entire pipeline rather than managing the underlying infrastructure, including starting and stopping instances.
The following are the main components of the solution:
- Create a compute environment in AWS Batch, where CPU requirements are defined, including the EC2 instance types allowed.
- Create a job queue associated with that compute environment. Each job submitted to this queue will be executed on the specified EC2 instance types.
- Create a job definition. This requires a container image registered in the Amazon Elastic Container Registry (Amazon ECR). Building the Docker image is further detailed within this GitHub link. The container image includes the Intel Library for VSR, the open-source FFmpeg tool, and the AWS Command Line Interface (AWS CLI) to perform API calls to Amazon S3 buckets. Once the job definition is in place (with the image registered in Amazon ECR), jobs can be submitted to the queue.
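The three components above can be sketched as boto3-style request payloads. This is a minimal sketch: the resource names, subnet ID, and ECR image URI are placeholders, and the CloudFormation template described later creates the real resources for you.

```python
# Sketch of the three AWS Batch resources, expressed as boto3-style request
# payloads. Names and IDs are placeholders, not the stack's actual values.

compute_environment = {
    "computeEnvironmentName": "VideoSuperResolution",
    "type": "MANAGED",
    "computeResources": {
        "type": "EC2",
        "minvCpus": 0,
        "maxvCpus": 16,
        # Restrict to the instance families validated with the Intel Library for VSR
        "instanceTypes": ["c5.2xlarge", "c6i.2xlarge", "c7i.2xlarge"],
        "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet ID
    },
}

job_queue = {
    "jobQueueName": "queue-vsr",
    "priority": 1,
    # Jobs in this queue run only on the compute environment above
    "computeEnvironmentOrder": [
        {"order": 1, "computeEnvironment": compute_environment["computeEnvironmentName"]}
    ],
}

job_definition = {
    "jobDefinitionName": "vsr-jobDefinition",
    "type": "container",
    "containerProperties": {
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/vsr:latest",  # placeholder
        "vcpus": 4,
        "memory": 4000,  # MiB; sized for, e.g., a 1080p AVC 30 fps input
    },
}

# With boto3, these payloads would be passed to create_compute_environment(),
# create_job_queue(), and register_job_definition() on a "batch" client.
```

The queue's `computeEnvironmentOrder` is what pins jobs to the VSR-capable instance types; everything else about placement is handled by AWS Batch.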
The following diagram represents the general architecture as previously described:
Implementation
A CloudFormation template is available in this GitHub repository. Following are the steps to deploy the proposed solution:
- Download template.yml from the GitHub repository
- Go to CloudFormation from the AWS Console to create a new stack using template.yml
- The template allows you to define the following parameters:
- Memory: Memory associated with the job definition. Adjust this value to the minimum and maximum memory required by the super-resolution job (for example, a 1080p, AVC, 30 fps file with a 15:00 duration → 4000 memory and 4 vCPUs).
- Subnet: The subnet, with internet access, where AWS Batch deploys the proper EC2 instance types (c5.2xlarge, c6i.2xlarge, and c7i.2xlarge).
- VPCName: Existing virtual private cloud (VPC) associated with the selected Subnet.
- VSRImage: This field uses an existing public image, but a customer can create their own image and insert the URL in this field. Instructions to create a custom image are found here.
- VCPU: Virtual CPUs (vCPUs) associated with the job definition. This value can also be adjusted.
The next step creates a CloudFormation stack using the defined parameters.
- Once the stack has been successfully created, two new Amazon S3 buckets, starting with vsr-input and vsr-output, should be listed.
- Upload an SD file to the vsr-input-xxxx-{region-name} bucket
- Go to Batch from the AWS console (Figure 7), open the dashboard, and validate that a new queue (queue-vsr) and compute environment (VideoSuperResolution) have been created (Figure 8).
- Within the Batch dashboard, click Jobs in the left-side menu. Click Submit a new job, then select the proper job definition (vsr-jobDefinition-xxxx) and queue (queue-vsr).
- In the next screen, click Load from job definition and modify the names of the input and output files. For example, if a user uploads a file named input-low-resolution.ts and wants the super-resolution output named output-high-resolution.ts, the proper array of Linux commands to add in the next interface would be:
["/bin/sh","main.sh","s3://vsr-input-106171535299-us-east-1-f37dd060","input-low-resolution.ts","s3://vsr-output-106171535299-us-east-1-f37dd060","output-high-resolution.ts"]
- Review and submit the job. Wait until the Status transitions from Submitted (Figure 11) to Runnable and then to Succeeded (Figure 12). The AWS console also shows additional details, such as the number of job attempts.
- Go to the output Amazon S3 bucket to validate that the super-resolution file was created and uploaded to the vsr-output bucket automatically.
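The console steps above can also be scripted. This sketch builds a `submit_job` payload with the same command array as the example; the bucket names and account ID are copied from that example (substitute your own), and the job name and job definition name are placeholders.

```python
# Build an AWS Batch submit_job payload equivalent to the console steps above.
# Bucket names come from the blog example; replace them with your own.
input_bucket = "s3://vsr-input-106171535299-us-east-1-f37dd060"
output_bucket = "s3://vsr-output-106171535299-us-east-1-f37dd060"

# main.sh (from the container image) downloads the input, runs the Intel
# Library for VSR via FFmpeg, and uploads the result to the output bucket.
command = [
    "/bin/sh", "main.sh",
    input_bucket, "input-low-resolution.ts",
    output_bucket, "output-high-resolution.ts",
]

job_request = {
    "jobName": "vsr-upscale-example",          # placeholder
    "jobQueue": "queue-vsr",
    "jobDefinition": "vsr-jobDefinition",      # use the name from your stack
    "containerOverrides": {"command": command},
}

# With boto3 (not executed here): boto3.client("batch").submit_job(**job_request)
```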
Compare subjective and objective visual quality
The open-source compare-video tool can be used to perform a subjective quality evaluation between the original and super-resolution videos. In addition, an objective evaluation can be performed using VMAF. For the objective evaluation, the original video is upscaled with a traditional method, such as Lanczos or bicubic, to match both resolutions before VMAF executes a frame-by-frame comparison. Following are visual examples:
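One way to run that objective comparison is FFmpeg's libvmaf filter (this requires an FFmpeg build configured with --enable-libvmaf). The sketch below assembles the command line, upscaling the original with Lanczos before the frame-by-frame comparison; it assumes the super-resolution output is 1080p and reuses the file names from the example above.

```python
# Assemble an FFmpeg command that upscales the original SD file with Lanczos
# and computes VMAF against the super-resolution output. Assumes a 1080p
# output and an FFmpeg build with --enable-libvmaf.
reference = "input-low-resolution.ts"    # original SD file
distorted = "output-high-resolution.ts"  # super-resolution output

filter_graph = (
    "[0:v]scale=1920:1080:flags=lanczos[ref];"   # match resolutions first
    "[1:v][ref]libvmaf=log_path=vmaf.json:log_fmt=json"
)

cmd = [
    "ffmpeg",
    "-i", reference,
    "-i", distorted,
    "-lavfi", filter_graph,
    "-f", "null", "-",   # discard decoded output; we only want the scores
]

# Run with subprocess.run(cmd, check=True), then read the per-frame and
# pooled VMAF scores from vmaf.json.
```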
Clean up
To delete the example stack created during this solution, go to CloudFormation, click Delete stack, and wait until the deletion successfully completes (Figure 17).
Conclusion
In this post, we described a solution that integrates video super resolution smoothly into existing transcoding pipelines. This is a cost-effective way to help with the adoption of future super-resolution enhancements.
By applying video super resolution using the Intel Video Super Resolution Library, you can upscale and sharpen low-resolution footage, transforming pixelated or blurry videos into crisp, high-definition content. Unlock new monetization opportunities by enabling the repurposing and distribution of archival footage across modern platforms and devices.
Special thanks to Surbhi Madan from the Intel team who contributed greatly to make this solution possible.
Contact an AWS Representative to learn how we can help accelerate your business.
Further Reading
Intel Open Omics Acceleration Framework on AWS: fast, cost-efficient, and seamless
AWS Batch Dos and Don’ts: Best Practices in a Nutshell
Create super resolution for legacy media content at scale with generative AI and AWS
Intel® Library for Video Super Resolution (Intel® Library for VSR)