Globo scales creative capacity with AWS
Globo is one of the world’s largest broadcasters and the leading commercial television network in South America, reaching 99.6 percent of Brazilian homes as well as more than 170 countries with its content. As the company’s content offerings have expanded and evolved, so have its technology needs, and Globo has begun tapping the AWS Cloud to scale render capacity for increasingly complex visuals and graphics.
Globo has used AWS Thinkbox Deadline to manage a modest on-premises render farm for visual effects and motion graphics since 2016. As the company began bumping up against capacity limits and looking for ways to scale, it turned to Deadline’s close integration with the AWS Cloud. Through the AWS Portal in Deadline, Globo’s artists can work in the familiar Deadline Monitor and add compute with Amazon EC2 Spot Instances whenever they need to scale.
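The AWS Portal handles this provisioning from inside the Deadline Monitor, so artists never touch the API directly. Purely to illustrate what a Spot capacity request looks like at the API level, here is a minimal boto3 sketch; the AMI, subnet, and security group IDs are hypothetical placeholders, and this is not the mechanism the AWS Portal itself uses:

```python
import boto3

# Illustrative sketch only: the AWS Portal in Deadline manages Spot
# provisioning itself. This shows the underlying idea of requesting
# burst render capacity. All resource IDs below are hypothetical.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    InstanceCount=10,        # extra render nodes for a demand spike
    Type="one-time",         # release capacity when the burst is done
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",  # hypothetical render-node AMI
        "InstanceType": "c4.8xlarge",        # the instance type Globo used
        "SubnetId": "subnet-0123456789abcdef0",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)

for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])
```

One-time requests like this release capacity as soon as the render burst finishes, which matches the pay-for-what-you-use model Abreu describes below.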
“We’re always trying to push innovation and find ways to optimize infrastructure, so we’ve been keeping an eye on how companies like AWS are providing solutions in media and entertainment,” said Globo Project Manager Thiago Abreu, who helped to create the business case for Globo’s new hybrid workflow. “Ultimately we implemented our AWS workflow through Deadline for more efficient provisioning and the capacity to grow exponentially,” added Technology Specialist Jean Fernandes.
Through internal research, Abreu found that while peak production needs were being met, average compute demand across departments sat far below capacity most of the time. Armed with this data, and anticipating significant growth in VFX needs as advances in content creation technology and techniques make CGI more economical and photoreal, Globo opted to extend into the cloud rather than invest in additional on-premises hardware.
“Machines become obsolete quickly and with the AWS Cloud, we can adjust our processing power based on a project’s needs. EC2 Spot Instances provide a nice value for performance and cost, and we’re only paying for the time we’re actually using,” Abreu noted. “I think one of the key factors to our success with our AWS implementation is our relationship with Thinkbox. We created a new path to the cloud together and developed a customer-centric workflow tailored to VFX creators.”
Fernandes added, “Our partnership with AWS and Thinkbox made our goal of achieving scalable architecture and on-demand resource allocation a reality.”
One of the first major tests of Globo’s new hybrid infrastructure came ahead of event coverage for last summer’s World Cup soccer tournament hosted in Russia. Since production was not permitted to fly drones to capture establishing shots of Moscow’s Red Square, Globo created a photoreal 3D model of the area.
Globo’s Antonio Victor Cardoso, who provided technical support for the Red Square project, explained, “We only have 16 machines on-prem and this was a big scene with lots of variables that had to be detailed enough to hold up during fly-through views and from up to five different virtual camera POVs. Additionally, we wanted our CG environment to match the time of day and weather conditions during the broadcast, so we were rendering a ton of material. We were able to run a lot of our tasks in parallel but would not have been able to complete this project in time without the AWS Cloud.”
Globo augmented its on-premises farm with 50 Amazon Elastic Compute Cloud (Amazon EC2) c4.8xlarge instances for the three-month project. With each instance providing 36 virtual CPUs at 2.9 GHz, that added 5.2 THz of aggregate processing capacity in the AWS Cloud to the 1.3 THz available on premises, for a total of 6.5 THz across the hybrid rendering setup. Of the project’s more than 60,000 render hours, 55 percent were rendered in the AWS Cloud.
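Those aggregate figures are simple products of instance count, vCPU count, and clock rate. As a quick back-of-envelope check (aggregate clock rate is a rough capacity proxy, not a benchmark), the arithmetic works out as follows:

```python
# Back-of-envelope check on the aggregate capacity figures above.
instances = 50
vcpus_per_instance = 36
clock_ghz = 2.9

cloud_thz = instances * vcpus_per_instance * clock_ghz / 1000  # 5.22 THz
on_prem_thz = 1.3
total_thz = cloud_thz + on_prem_thz                            # ~6.5 THz

cloud_hours = 60000 * 0.55  # ~33,000 of the 60,000+ render hours
print(f"{cloud_thz:.2f} THz cloud + {on_prem_thz} THz on premises "
      f"= {total_thz:.2f} THz; ~{cloud_hours:,.0f} cloud render hours")
```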
The success of the Red Square project has emboldened Globo to become more ambitious in creating CG content and experimenting with cloud rendering and elasticity. Abreu concluded, “The beauty of our architecture with Deadline and AWS is that renders come back frame by frame, so artists receive results really fast without needing a huge pipe. We use a VPN over a one-gigabit internet connection with no problems, and scene files can have dozens of layers for comp. That speed allows artists to iterate more quickly or render sample frames to validate their work, reducing the number of changes at the end.”