
Bit Depth and Color Subsampling

*Warning* This article goes pretty in-depth. You’ll find that the majority of our blog posts are easy to read; we’ll even give away templates and other tips for using our software. Today, though, we are going a bit deeper, so if you’re up for the challenge of learning how bit depth and color subsampling affect your output bandwidth, this is the article for you!

In a previous article, we did a deep dive into resolution and frame rates and their effect on output bandwidth. The other two major factors that affect output bandwidth are color bit depth and color subsampling.

Color Bit Depth

We have to start with the basics. Digital images are made up of small dots called “pixels,” each of which is a different color. The “resolution” of a video image is the number of pixels wide and tall in the output image. The color of a pixel is a combination of red, green, and blue light. If all three colors are at 100%, the pixel is white; if all are at 0%, the pixel is black; any other combination results in a color from the available spectrum, based upon the color depth of the video stream.

NOTE: The color model described is RGB, which is the model used for a computer’s graphics output. For digital broadcast video standards (SDI), the color model is YCbCr, which is quite different and more difficult to understand conceptually. We will discuss these differences a bit in the next section, but the basics of color depth are better understood with the RGB model.

Most video streams, particularly in live productions, are 8-bit video streams. This means that for each of the three RGB colors (Red, Green, and Blue) there are 256 gradations (values 0-255) which, when combined, define the color of the pixel. So, for example, to “mix” a pure red color the R value would be 255, and the G and B values would each be 0. Pure white would have 255 for all RGB values, and pure black would be 0 for all values. Using these combinations, there are 16.7 million colors available to be defined (256 × 256 × 256). You can see how these values change by playing with the sliders on this page: http://colorizer.org

Sometimes, an even higher range of colors is desired, and for this reason there is the 10-bit video stream. With 10-bit streams, there are over a billion different colors available, as each of the three components has 1024 gradations (1024 × 1024 × 1024 ≈ 1.07 billion). While helpful for some content, the difference at this level of color processing is not noticeable on most LED screens or projection displays.
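
If you’d rather check those numbers than take our word for it, here’s a minimal Python sketch (the `total_colors` helper is ours, purely for illustration):

```python
def total_colors(bits_per_channel):
    """Number of distinct colors an RGB pixel can represent at a given bit depth."""
    gradations = 2 ** bits_per_channel   # 256 for 8-bit, 1024 for 10-bit
    return gradations ** 3               # one value each for R, G, and B

print(total_colors(8))    # 16777216   -> "16.7 million colors"
print(total_colors(10))   # 1073741824 -> "over a billion colors"
```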

So, if you are outputting an 8-bit video stream at 1280 × 720 resolution, then the data rate for the output is the frame size (1280 × 720) times the number of frames per second (60) times the total number of color bits (3 × 8):

(1280 × 720) frame size × 60 frames per second × 24 bits per pixel ≈ 1.33 Gbps
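
Here is the same calculation as a small Python sketch (the variable names are ours, just for illustration):

```python
width, height = 1280, 720   # 720p frame size
fps = 60                    # frames per second
bits_per_pixel = 3 * 8      # three 8-bit channels: R, G, B

bits_per_second = width * height * fps * bits_per_pixel
print(bits_per_second)        # 1327104000
print(bits_per_second / 1e9)  # ~1.33 -> roughly 1.33 Gbps
```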

Note that all Renewed Vision software output streams (as of September 2019) are 8-bit.

Color Subsampling

The final factor in output bandwidth is Color Subsampling. Processing video is computationally intensive and can result in large file sizes as well as large data bandwidths when played out through broadcast devices (such as Blackmagic SDI products). As such, manufacturers of video hardware figured out a method of processing video more efficiently by studying how the human eye and brain see visual imagery. We are far more sensitive to luminance (how dark or bright an image is) than we are to specific colors. Because of this, when processing video, much of the color information of an image can be thrown out, so long as the luminance information is retained. This is called Color Subsampling.

The amount of subsampling is notated as a ratio of the luminance information to the color information. The color model used for broadcast video isn’t based on independent Red, Green, and Blue values as described above. Rather, broadcast video is based on luminance (Y) and color, or “chrominance” (represented as Cb and Cr in digital component video, and as Pb and Pr in its analog counterpart). Put them together and you get YCbCr. We won’t get too deep into how this works, but you can see how the colors are affected by changing the sliders shown on http://colorizer.org under the YPbPr section.
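
To give a taste of the luminance half, here is the standard Rec. 709 luma weighting used for HD video as a tiny Python sketch (the function name is ours, just for illustration):

```python
def luma_709(r, g, b):
    """Rec. 709 luma: r, g, b in the range 0.0-1.0; returns Y in 0.0-1.0."""
    # Note how heavily green is weighted -- our eyes are most sensitive to it.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(luma_709(1.0, 1.0, 1.0))  # 1.0    -> pure white is full brightness
print(luma_709(0.0, 0.0, 1.0))  # 0.0722 -> pure blue reads as quite dark
```

Cb and Cr are then scaled differences between the blue and red channels and this luma value, which is why they can be thinned out without touching perceived brightness.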

Color Subsampling is notated as a ratio of the data rate used for these three channels (Y:Cb:Cr); strictly speaking, the ratio describes how often the two chroma channels are sampled relative to luma, but averaged across pixels it works out to a simple data-rate ratio. So, if each channel gets the same data rate, this is notated as 4:4:4. In an 8-bit video stream that is 4:4:4, each channel gets a full 8 bits of information per pixel, which results in 8 × 3, or 24 bits. A 10-bit video in 4:4:4 requires 30 bits per pixel. Because our eyes aren’t as sensitive to color information, we often cut the amount of color information stored in half, which gives the ratio 4:2:2… the Cb and Cr channels get half the information of the luminance channel. An 8-bit 4:2:2 video stream therefore averages 8 bits per pixel for luminance, 4 for Cb, and 4 for Cr… 8 + 4 + 4 = 16 bits. A 10-bit 4:2:2 stream uses 10 bits for luminance, 5 for Cb, and 5 for Cr… totaling 20 bits per pixel.
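
As a quick sanity check, here is a small Python sketch that computes the average bits per pixel for these combinations (the helper is ours, just for illustration):

```python
def avg_bits_per_pixel(bit_depth, ratio):
    """Average storage per pixel for a (Y, Cb, Cr) sampling ratio like (4, 2, 2)."""
    y, cb, cr = ratio
    return bit_depth * (y + cb + cr) / 4   # the leading 4 is the reference rate

print(avg_bits_per_pixel(8,  (4, 4, 4)))   # 24.0 -> 8-bit 4:4:4
print(avg_bits_per_pixel(10, (4, 4, 4)))   # 30.0 -> 10-bit 4:4:4
print(avg_bits_per_pixel(8,  (4, 2, 2)))   # 16.0 -> 8-bit 4:2:2
print(avg_bits_per_pixel(10, (4, 2, 2)))   # 20.0 -> 10-bit 4:2:2
```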

So, given this is an option, the logical question is: where would this matter?

Much like color depth, most of the time this doesn’t matter much… at least in the “final output” space that Renewed Vision products occupy. When actually creating videos or films, having the additional color information from the source material is important because it yields more flexibility with color grading (lightening or darkening areas of an image, punching up the colors of different shots). Once the video is completed, however, the excess color information usually doesn’t make much difference to the end viewer. There are some exceptions, however. A classic example is a shot of a sunny sky… here the various shades of blue, as the sky gradates from light to dark, will create banding on the image if there isn’t enough color data.

One interesting note: since a lot of consumer content is heavily compressed, and much of the color information is removed, a common workaround to the color banding above is to add some noise to the image. Much like a subtle soft filter, the noise smooths the edges and creates images whose flaws, while still present, are far less noticeable.


By default, all processing of broadcast outputs in Renewed Vision products is at 4:2:2.

Why should you care? It’s all about data rates!

Where all of this makes a difference is in calculating bandwidth when using broadcast outputs. Products like PVP3 and ProPresenter 6, which allow for multiple video outputs through SDI, can only output video if there is sufficient bandwidth between the CPU and the video hardware. If you are outputting, for example, to a DeckLink Quad card in an external Thunderbolt 3 enclosure, then you need to make sure that the total bandwidth for the video streams being sent over the Thunderbolt cable is less than 22 Gbps. Knowing how these data rates are calculated is important in understanding what is possible with a given piece of equipment.

So, let’s say I wanted to output eight 1080p60 video outputs from PVP3.
Frame size: 1920×1080
Frame rate: 60

Running the numbers as before (1920 × 1080 frame size × 60 frames per second × 16 bits per pixel for 8-bit 4:2:2) works out to roughly 2 Gbps per stream. Eight such streams would occupy about 16 Gbps of data on a Thunderbolt 3 bus, which is below the 22 Gbps limit.
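
Here is that budget check as one last Python sketch (the names are ours, and the 22 Gbps figure is the usable Thunderbolt 3 data bandwidth mentioned above):

```python
width, height, fps = 1920, 1080, 60
bits_per_pixel = 16            # 8-bit 4:2:2, as computed earlier
streams = 8
budget_gbps = 22               # usable Thunderbolt 3 data bandwidth

per_stream_gbps = width * height * fps * bits_per_pixel / 1e9
total_gbps = streams * per_stream_gbps

print(round(per_stream_gbps, 2))   # 1.99  -> "roughly 2 Gbps per stream"
print(round(total_gbps, 2))        # 15.93 -> about 16 Gbps in total
print(total_gbps < budget_gbps)    # True  -> fits within the bus
```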


About Brad Weston

Brad Weston is President of Renewed Vision, and a 20-year production volunteer at his local church. He is most inspired by helping people to do more with less.

