Speaker 1: There's a plethora of video codecs and containers available nowadays, especially when you look at all the options a camera like the GH6 has to offer. H.264, H.265, HEVC, ProRes LT, ProRes HQ, DNxHD, DNxHR, bit rate, bit depth, 8-bit, 10-bit, chroma subsampling, 4:2:2. Let's talk about it. Codecs are used to compress a video into a smaller file size while retaining as much quality as possible. It can be overwhelming to pick a codec if you don't know what a particular codec means. I'll try to give you a basic understanding of the different options and when to use them, or when not to. I'll talk about the most common ones and also touch on containers, chroma subsampling, bit depth and bit rates. I'll use the GH6 and the G9 to show examples of some of the options that are available. In order to keep this video from getting too long, I'm keeping it relatively simple, but I am going to cover quite a lot. If you're new here, I'm Sebastian and welcome to the channel. We can separate codecs into three different groups: acquisition, intermediary and delivery codecs. An acquisition codec is a codec that's used to store what your camera sees onto your SD cards or whatever storage media you're using. Examples are ProRes, BRAW and ProRes RAW. An intermediary codec is a codec that's used for post-production. These codecs are designed to make it easier for your computer and editing software to decode the video file by compressing it less, resulting in smoother playback and editing. Examples are ProRes and DNxHD. A delivery codec is a codec that's used for distribution of the final result to, for example, YouTube, a website or TV broadcast. However, these different groups overlap, because it is often possible to record, edit and deliver in the same codec, as we will see later in the video. Let's start at the end, final delivery, and work our way back to capturing the footage. So you've been filming some stuff and you've edited a nice video with it. 
Say your final delivery is going to be published on a platform such as YouTube, Instagram, a website, or you're just sharing the video with friends and family online. That video needs to have smooth playback, be compatible with most devices, have a small file size and, preferably, the highest quality possible. So when you want to export your video from your video editor, you need to pick a codec and a container. The most commonly used codecs for web-based delivery are H.265, also known as HEVC, and H.264, also known as AVC. H.264 is widely used for video compression. It provides good video quality and a reasonable level of compression, making it suitable for many applications, including high-definition video playback and streaming. It is widely compatible with all sorts of older and newer devices. H.265, on the other hand, was introduced in 2013 as a successor to H.264. It provides significantly greater compression than H.264 while maintaining the same level of image quality. This makes H.265 ideal for applications where bandwidth and storage space are limited, such as 4K video playback and transmission over low-bandwidth networks. However, H.265 is much more complex than H.264 in terms of encoding and decoding. Therefore, H.265 requires more processing power to encode and decode video, which can be a problem for older devices that don't have powerful processors. Whether you export your final video in H.264 or H.265, it needs to be packed into a container. A container is what we often call the file type, indicated by the file extension. The container is used to package and store your video, along with audio, subtitles and metadata. The most common containers for video are .mov and .mp4. .mov was developed by Apple and is mainly used by QuickTime Player and professional video editing software. It supports a wide range of codecs, including ProRes and DNxHD, making it more suitable for professional video production. 
.mp4 is a more universal format that supports the most common codecs, including H.264 and H.265, making it more versatile and widely used for web video and general video playback. File sizes are usually smaller than with .mov. So how do you pick the codec and container for final delivery? Well, if you want the highest quality along with a relatively small file size, and the video is going to be played back on devices compatible with H.265 and .mov, this is what you should pick. But if you need your video to be widely compatible and you don't mind the bigger file size and the possibly slightly lower quality, H.264 in .mp4 is going to be the way to go. Editing a video in whatever software you are using is a demanding task for any device. Even the most powerful devices have to put in a lot of work to calculate the cuts, the transitions and the color grading you do, and still play the video back smoothly so you can see what you're doing. This is where paying attention to what codec you're working with can make a world of difference. There are two commonly used codecs specifically designed for editing, the intermediary codecs: DNxHD for Full HD (or DNxHR for resolutions above Full HD) and ProRes. They come in many flavors with different bitrates, and both are typically 4:2:2 codecs. We'll get into bitrates, bit depth and chroma subsampling a bit later. These codecs are designed to make it easier for a computer to decode the video file, resulting in smoother playback. ProRes is more suitable for working with Apple devices, since it was designed by Apple, and DNxHD is more suitable for working with Windows computers. But most modern cameras shoot in H.264 or H.265. This is where the overlap of the codec types really starts to show. As I said earlier, your camera can capture the footage in a delivery codec that you can also edit with. So why not use these codecs throughout the whole process? 
Well, even though it is possible to use H.264 or H.265 for editing, the complexity of these codecs can really hurt playback and editing performance. But what if you shot your footage in one of these codecs and you want to be able to edit with smooth playback? Well, you could transcode your footage into a codec like ProRes, or make proxies in ProRes. Transcoding is the process of converting one codec into another. You can do that within DaVinci Resolve, for example. Side note here, there's a common misconception that I want to clear up. If you shot a clip in, say, H.265 8-bit 4:2:0, which would be a relatively low-quality clip, transcoding it to ProRes 10-bit 4:2:2 is not going to give you higher quality. Transcoding to ProRes is not going to magically add information to the file that wasn't there to begin with. When you do decide to edit with H.264 or H.265 files, keep in mind that there's more to it than just the codec. We'll talk about that in the next part of the video, which is about capturing your footage. So let's talk about capturing your footage. Users are getting more and more codec options, and it can be hard to figure out which one to use for your particular shoot. In order to understand the different options, we need to look at more than just the name of the codec. Compression format, bitrate, bit depth and chroma subsampling also affect the quality, size and usability of the files. You've probably heard of or seen Long GOP and All-I. These are two different video compression formats, and both are used across different codecs. All-I stands for all intra-frame, and is a video compression format where every frame of a video is stored as an independent image. This results in high image quality, but also larger file sizes, because each frame contains all the information necessary to display it. 
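A quick aside on the transcoding step mentioned above: you don't have to do it inside your editor. A free command-line tool like ffmpeg can transcode H.264/H.265 footage to ProRes as well. Here's a minimal sketch in Python that just builds the ffmpeg command line; the filenames are placeholders, and the profile numbers follow ffmpeg's `prores_ks` encoder convention, where 3 means ProRes 422 HQ:

```python
def prores_transcode_cmd(src: str, dst: str, profile: int = 3) -> list[str]:
    """Build an ffmpeg command to transcode a clip to ProRes.

    profile follows ffmpeg's prores_ks numbering:
    0 = Proxy, 1 = LT, 2 = 422, 3 = 422 HQ.
    """
    return [
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks",        # ProRes encoder
        "-profile:v", str(profile),
        "-c:a", "pcm_s16le",        # uncompressed audio, common in ProRes workflows
        dst,
    ]

cmd = prores_transcode_cmd("clip_h265.mp4", "clip_prores.mov")
print(" ".join(cmd))
```

Running the printed command requires ffmpeg to be installed; the sketch only shows how the pieces of a transcode fit together.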
Long GOP stands for long group of pictures, and is a video compression format where only partial information is stored for each frame, with the rest of the information being interpolated from other nearby frames. This results in smaller file sizes, but also lower image quality, because it can cause artifacts such as blockiness or blurriness. In general, All-I is used for higher-quality applications, such as professional video editing. It doesn't necessarily provide smoother editing, but it does provide more image information. Long GOP is used for applications where smaller file sizes are important. Playback is usually smoother as well, but there is less information available. When there's a lot of complex movement in the frame, or there's a lot of camera movement, or you want to apply lots of effects to your footage in post, Long GOP can cause artifacts or stutter. These are cases where you might pick an All-I codec: because all the individual frames are recorded in full, there's less of a chance that you'll run into these issues. The trade-off is much larger file sizes. This also brings me to bitrate. All-I codecs have a much higher bitrate than Long GOP codecs, which makes sense because more information needs to be encoded. On the GH6, for example, All-I ranges from 150 Mbps to 1.6 Gbps and can be recorded in H.264 and ProRes. Long GOP codecs usually have lower bitrates and therefore smaller file sizes. If you don't have much rapid movement in the frame, and you don't need to manipulate the image too much in post, Long GOP is a good option. 8-bit and 10-bit indicate the bit depth. It determines the number of possible colors that can be displayed or captured in the image. The higher the bit depth, the more colors can be represented, resulting in more vivid and accurate images. On the Lumix G9 we can choose between some 8-bit and 10-bit options. The GH6 is basically a fully 10-bit camera. But what does that mean? And does it matter? 
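Before we get to that, it's worth seeing what those bitrates mean for storage. File size is essentially bitrate times duration, so a small Python sketch (using the GH6 figures mentioned above; audio and container overhead are ignored) makes the Long GOP versus All-I trade-off concrete:

```python
def file_size_gb(bitrate_mbps: float, seconds: float) -> float:
    """Estimate video file size in gigabytes from a bitrate.

    bitrate_mbps is megabits per second; divide by 8 for megabytes,
    then by 1000 for (decimal) gigabytes. Real files will be
    slightly larger due to audio and container overhead.
    """
    return bitrate_mbps * seconds / 8 / 1000

# One minute of Long GOP at 150 Mbps vs All-I at 1.6 Gbps (1600 Mbps):
print(file_size_gb(150, 60))    # 1.125 GB
print(file_size_gb(1600, 60))   # 12.0 GB
```

So a minute of All-I at the top bitrate eats more than ten times the storage of Long GOP at 150 Mbps. Now, back to the question of whether bit depth matters.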
Well yes, it does matter, and I'll explain why. An 8-bit video uses 8 bits per color channel, giving 256 possible shades per channel, which works out to about 16.7 million colors per pixel. A 10-bit video, on the other hand, can represent 1024 shades per channel, over a billion colors per pixel. That's 4 times as many shades per channel as 8-bit. Quite significant. So the higher the bit depth, the more colors you can capture, and the more you can manipulate those colors in post. There's simply more information. 8-bit video can absolutely be good enough if you want to do minimal post-production. It saves you a ton of storage space, because the file sizes are much smaller. But setting white balance and exposure correctly is key if you want to get good results. Chroma subsampling basically tells us the color resolution. It is a technique that reduces the resolution of the color information in order to reduce file sizes. What we typically find on our cameras are 4:2:0 and 4:2:2 chroma subsampling, with 4:2:2 being the higher color resolution of the two. The idea is that the human eye is less sensitive to color detail than it is to luminance detail, so the color information can be reduced with minimal impact on image quality. 4:2:2 chroma subsampling is the one to pick if you want to manipulate colors in post-production. Note that 4:2:2 chroma subsampling in H.265 is, at the time of recording this video, not very common, because it is really heavy to work with in post. So how does all of this apply to capturing your footage? As I said earlier, there's a lot of overlap between acquisition, intermediary and delivery codecs when it comes to capturing video. With what we now know about codecs, we can draw some conclusions. Let's take the GH6 and the G9 as an example. On the GH6 we have mostly 10-bit codecs with either 4:2:0 or 4:2:2 chroma subsampling in H.264, H.265 and ProRes. On the G9 we have 8-bit and 10-bit codecs with 4:2:0 or 4:2:2 chroma subsampling, mostly in H.264 and Long GOP, with a maximum bitrate of 150 Mbps. 
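The numbers behind bit depth and chroma subsampling are easy to check for yourself. Here's a short Python sketch of both; the 4:2:2 and 4:2:0 figures count chroma samples per block of J pixels wide by 2 rows high, per the standard J:a:b notation:

```python
def shades_per_channel(bit_depth: int) -> int:
    # Each color channel stores 2**bit_depth distinct levels.
    return 2 ** bit_depth

def colors_per_pixel(bit_depth: int) -> int:
    # Three channels (R, G, B) per pixel.
    return shades_per_channel(bit_depth) ** 3

print(shades_per_channel(8))    # 256 shades per channel
print(shades_per_channel(10))   # 1024 shades per channel
print(colors_per_pixel(8))      # 16777216   (~16.7 million colors)
print(colors_per_pixel(10))     # 1073741824 (~1.07 billion colors)

# Chroma subsampling in J:a:b notation: for a block J pixels wide and
# 2 rows high, a = chroma samples in the first row, b = in the second.
def chroma_fraction(j: int, a: int, b: int) -> float:
    # Fraction of full (4:4:4) color resolution that is kept.
    return (a + b) / (2 * j)

print(chroma_fraction(4, 2, 2))  # 0.5  -> 4:2:2 keeps half the color samples
print(chroma_fraction(4, 2, 0))  # 0.25 -> 4:2:0 keeps a quarter
```

In other words, 4:2:2 halves the horizontal color resolution, and 4:2:0 additionally halves it vertically, which is why it's the lighter but less gradeable option.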
If you want footage with the absolute highest quality and flexibility, you should pick a bit depth of 10-bit, 4:2:2 chroma subsampling, All-Intra, and the highest bitrate you can get. On a camera like the GH6, that would be ProRes HQ. On a camera like the G9, this would be the 10-bit 4:2:2 option in Long GOP for internal recording or, alternatively, the G9 can put out 10-bit 4:2:2 over HDMI to an Atomos Ninja V, where it can be recorded in 10-bit 4:2:2, All-Intra, in ProRes LT, ProRes 422 or ProRes HQ. The trade-off for high-quality footage will be massive file sizes. If you don't need the absolute highest quality and you need smaller file sizes and wide compatibility, picking a Long GOP 8-bit or 10-bit codec with 4:2:0 chroma subsampling in H.264 would be the way to go. And in between those two extremes, you can pick and choose the option that best suits your needs. So if you made it this far into the video, well done. The takeaway here is: make sure that you're capturing in a codec that will give you enough information for the work you want to do with the footage in post. That in turn affects the decision for an editing codec, depending on how powerful your computer is. And then you decide what codec and container you're going to export your video in, based on the platforms or devices you're going to be showing it on. But if you're not going to be doing any post-production, and you need to make a quick video with reasonable quality and upload it straight to YouTube, for example, you can use the overlap that I talked about and shoot in the delivery codec straight away. Phew, that was a lot of information. I hope it gave you a better understanding of codecs and helps you pick the codec you need. If you have any questions, leave them in the comments. If you enjoyed this video, please give it a like and subscribe to the channel, and maybe hit that notification bell. And I hope to see you next time. Thanks for watching.