Meta develops new hardware-efficient video processing method

Meta Platforms Inc. today detailed an internally developed computing method that makes it possible to process some types of video content up to 94% more efficiently than before.

According to the company, the technology will help it more efficiently process videos that users upload to Instagram.

When a user uploads a clip to Instagram, Meta’s servers don’t immediately publish the clip but first perform a task known as encoding. The task involves repackaging the clip into a video format that runs well on different types of devices. Meta also compresses video content in the process, which enables the content to travel faster across the network to user devices and reduces bandwidth usage.

Meta’s encoding algorithms package every Instagram video into multiple formats. The first, most advanced format that the company uses stores clips in a way that maximizes the viewing quality. Meta also packages video content into two less advanced formats that run better on entry-level user devices.

The process of turning Instagram videos into forms that run well on different devices requires a significant amount of data center infrastructure. According to Meta, packaging clips into the two less advanced video formats is a particularly hardware-intensive task that has historically accounted for the bulk of the compute Instagram uses to process videos.

“The problem was that we were spending more than 80 percent of our resources processing minimum functionality encodings,” Meta engineers detailed in a blog post. “If we stayed on that trajectory, minimum functionality would monopolize our resources within a year. As a result, videos would start to take longer to publish — or fail to publish altogether.”

To address the challenge, Meta’s engineers developed a more efficient way of packaging videos into the two less advanced video formats used by Instagram. The technique builds on certain technical similarities that the company discovered between the two formats.

The first format, known as ABR (adaptive bitrate) encoding, can automatically increase or decrease video quality based on the speed of a user’s network connection. The second format, progressive encoding, is optimized to run on legacy versions of the Instagram app. Meta historically generated ABR and progressive encoding files separately, through two different computing workflows.
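To illustrate what adaptive bitrate playback means in practice, here is a minimal sketch of how an ABR client might pick a rendition from a set of encodings based on measured network throughput. This is not Meta’s code; the rendition names, bitrates, and the `pick_rendition` helper are hypothetical example values chosen for illustration.

```python
# Hypothetical ABR rendition ladder: bitrates in kbit/s, sorted from
# highest quality to lowest. These values are illustrative only.
RENDITIONS = [
    {"name": "1080p", "bitrate_kbps": 5000},
    {"name": "720p",  "bitrate_kbps": 2500},
    {"name": "480p",  "bitrate_kbps": 1000},
    {"name": "360p",  "bitrate_kbps": 500},
]

def pick_rendition(measured_kbps, renditions=RENDITIONS, headroom=0.8):
    """Return the highest-quality rendition whose bitrate fits within
    the measured bandwidth, keeping some headroom for network jitter."""
    budget = measured_kbps * headroom
    for rendition in renditions:  # highest quality first
        if rendition["bitrate_kbps"] <= budget:
            return rendition
    return renditions[-1]  # nothing fits: fall back to the lowest rung

print(pick_rendition(3500)["name"])  # prints "720p"
```

A progressive encoding, by contrast, is a single fixed-quality file, so a legacy client simply downloads it with no rendition switching at all.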

The new processing technique developed by Meta removes the need to generate files separately. Using the technique, a server can generate a progressive encoding first and then reuse many parts of the file to create an ABR encoding. Reusing file components removes the need to generate an ABR encoding from scratch, which lowers processing requirements. 
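The cost structure of that reuse can be sketched with a toy model. The cost figures below are invented for illustration (they are not Meta’s numbers and do not reproduce the reported 94% figure); the point is only that repackaging already-encoded video is far cheaper than running a second full encode.

```python
# Toy cost model (hypothetical numbers): a full encode is expensive,
# while repackaging already-encoded frames into a new container is cheap.
ENCODE_COST = 100   # arbitrary cost units for one full encode
PACKAGE_COST = 6    # reusing encoded components, so much cheaper

def old_pipeline():
    # Progressive and ABR encodings each ran a full, separate encode.
    return ENCODE_COST + ENCODE_COST

def new_pipeline():
    # Encode once for the progressive file, then reuse its components
    # to assemble the ABR encoding instead of encoding from scratch.
    return ENCODE_COST + PACKAGE_COST

saving = 1 - new_pipeline() / old_pipeline()
print(f"compute saved: {saving:.0%}")  # prints "compute saved: 47%"
```

With these example numbers the combined workflow costs roughly half as much as two independent encodes; the actual saving depends on how much of the progressive file the ABR packaging can reuse.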

“Our Instagram engineering team developed a way to dramatically improve video quality,” said Meta Chief Executive Officer Mark Zuckerberg. “We made basic video processing 94% faster so we can now use more advanced codecs. This is especially helpful on slower internet connections, but it improves the experience for everyone.”



