Speaker 1: Choosing a codec. Now, once you have your resolution figured out, it's time to pick your capture formats. If you already have a camera, then you'll be limited to what that camera offers, but knowing what your camera is doing under the hood actually has huge implications for your speed of editing in post, so don't go anywhere. The word codec is actually a contraction, a portmanteau, of two words: compression and decompression. A codec is the way a video is first compressed and then later decompressed when you play it back. Compressed? That sounds bad, like I might be losing quality. I don't want my video to be compressed. It's not as bad as it sounds. Now, while it's true that most video compression does throw some information away in the process, without it, we wouldn't have YouTube, Vimeo, Blu-rays, or even digital movie theaters. And while it might seem like all video should be delivered in the highest quality possible, believe it or not, compression actually allows video to be captured, streamed, and delivered to you in a higher quality than would be possible without it. Let me explain. Uncompressed video files are so large, like unbelievably large, that every part of the chain would struggle to play them. The storage they sit on, the internet connection, the player, the cables connecting the player, even the TV would struggle to process that much data. So a long time ago, engineers learned that smart compression techniques free up enough bandwidth over the internet to deliver great video. One way compression helps us is in resolution. Think watching HD instead of standard definition. The higher resolution ends up giving you a much higher quality experience than simply applying less compression to a low-resolution image. Also, as we're gonna see, compression techniques have gotten really good these days. The HD video you watch on YouTube takes roughly the same amount of data as the standard-definition video on DVDs from 2001.
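That "unbelievably large" claim is easy to check with back-of-the-envelope arithmetic. A minimal sketch, assuming 8-bit RGB (3 bytes per pixel) at 1080p30, which is a conservative case compared to what cameras actually capture:

```python
# Back-of-the-envelope data rate for uncompressed 1080p video.
# Assumes 8-bit RGB, i.e. 3 bytes per pixel, at 30 frames per second.
width, height, fps = 1920, 1080, 30
bytes_per_pixel = 3

bytes_per_second = width * height * bytes_per_pixel * fps
gigabits_per_second = bytes_per_second * 8 / 1e9
gigabytes_per_hour = bytes_per_second * 3600 / 1e9

print(f"{gigabits_per_second:.2f} Gb/s")    # ~1.49 Gb/s
print(f"{gigabytes_per_hour:.0f} GB/hour")  # ~672 GB per hour
```

Nearly 1.5 gigabits every second, and roughly two thirds of a terabyte per hour, before you even get to 4K or higher bit depths. That is why every link in the chain needs compression.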
Now note in this section, I may use the words codec and format somewhat interchangeably, so don't be alarmed by that. I break codecs down into three main categories. You've got capture formats, intermediate formats, and delivery formats. And I'm gonna break each one of those down here. Capture codecs. These are the formats cameras use to record footage. A capture format's job is to capture as much information as possible while also staying inside the performance, storage, power, and heat constraints of that camera. Again, this is why we don't record fully uncompressed, right? Even if you could make a camera powerful enough to record that, it would take tons of battery power, throw off a ton of heat, and all this other, like, it would be bad news, right? So these are formats efficient for cameras. All right, let's jump right into it. First, we have H.264, or AVC, Advanced Video Coding. This was the first of the great modern compressed video formats, and it came out in 2003. It was about 50% more storage efficient than the older MPEG-2 (H.262), which got used on DVDs and the like. It's also used extensively on the web, and it made YouTube as we know it possible. Now, that format served us well for about 10 years, but then came H.265, also called HEVC, High Efficiency Video Coding. This was the next generation of modern compressed formats, and it came out in 2013. It was, again, about 50% more storage efficient than H.264 at the same quality. It's used on current smartphones, DJI drones, mirrorless cameras, and more and more cameras all the time. H.266, or VVC, Versatile Video Coding, has already been standardized and promises an additional 50% storage savings over H.265. And it wouldn't be surprising to see it start arriving in devices by 2025.
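Those roughly-50% generational savings compound. A quick sketch of the relative storage each generation needs for the same perceived quality, taking MPEG-2 as the baseline and treating "50% per generation" as the round marketing number it is:

```python
# Each generation's claimed ~50% bitrate saving compounds.
# Relative storage needed for the same perceived quality,
# with MPEG-2 (H.262) as the 100% baseline.
generations = ["H.262 (MPEG-2)", "H.264 (AVC)", "H.265 (HEVC)", "H.266 (VVC)"]

relative_sizes = []
size = 1.0
for name in generations:
    relative_sizes.append(size)
    print(f"{name}: {size:.1%} of the MPEG-2 baseline")
    size *= 0.5
```

So if the claims hold, H.266 footage would need only about one eighth of the storage that MPEG-2 needed for comparable quality, which is why HD streaming can fit in the data budget DVDs once used.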
Now, these formats are storage optimized, meaning they use smart compression to minimize their storage size, but because that compression is complex, they take a bit more processing power to play back on the computer. But I'll show you how we can accelerate them with special hardware so they run blazingly fast. And then we're gonna get the best of both worlds: small storage size and fast playback. Often you're gonna see these formats rebadged with proprietary names, like AVCHD, XAVC, XAVC HS, XF-AVC, XF-HEVC. You get the picture. They throw Xs in there and Ss and stuff like that. Just know it's all H.26-something under the hood. So next for capture formats, we're gonna talk about RAW capture formats real quick. You have formats like REDCODE RAW, ARRIRAW, Blackmagic RAW, and so many more. If you come from the photography world, you've likely shot photos in RAW, or at least you've heard of it, right? This is a format that saves the actual light capture data hitting the sensor rather than first turning it into video and then saving that video. It tries to save the actual sensor data. It gives you the maximum color information and flexibility in post, and it gives you the ability to color grade your footage pretty extensively and do a lot with it. If you're in search of the cinematic look, many people consider RAW video the end game, where they wanna end up. The trade-off is in massive file sizes and clunky cameras. Dude, clunky cameras. I was using a RED Komodo not too long ago, and it still took over 60 seconds to boot the camera up.
Speaker 2: And I was like, wow, this is 2023. 60 seconds to boot up the camera. I was like, wow, this is bad.
Speaker 1: You gotta choose where you want your trade-offs to be. Do I want maximum flexibility, or do I wanna get more done? Sometimes you need to make that decision. The type of work I've done has been running a YouTube channel and other things like that, where keeping that backlog is really, really important to us. We do reuse it to build things out of later down the line, and we wanna keep that original footage. And so shooting everything in RAW is pretty unmanageable, right? And it's not cost-effective. It doesn't make any sense for what we wanna do. There are way more efficient ways out there to get the same job done. Now, I've owned RAW-capable cameras, I've edited RAW footage, and I've done my fair share of color grading projects over the years. In my experience, a video being RAW or not RAW is not the biggest factor in image quality. It's also not the biggest factor in whether a poorly shot clip is fixable in post, and it's not the most important factor in whether a shot can be color graded well, right? You also see Apple ProRes used as a capture format, but we'll talk about that in the next section, Intermediate Codecs. All right, next is Intermediate Codecs. What is an Intermediate Codec? These are formats specifically designed to be used in editing. Now, typically the only time an Intermediate Format is used is after transcoding, which means transforming footage from one codec into another, from one of the Capture Formats we just talked about into one of these Intermediate Codecs. An Intermediate Codec's job is to make the editing process easier. They are designed not to lose any information from the Capture Format, so when you transcode from Capture Format to Intermediate, no data should be lost. All of the detail should be saved. Next, they need to run super fast on computers so editors can do their job. And then they need to be able to handle advanced color grading without issues.
Intermediate Formats are often used when passing footage from an editor to a colorist so no detail is lost, that kind of thing. Intermediate Formats include Apple's ProRes, Avid's DNxHD and DNxHR, and Cineform. Now all of these things sound wonderful. We should use Intermediate Formats for everything. They just sound great. Now where Intermediate Formats fail is in file size. They are compute optimized, meaning they have very little video compression. So your computer's processor does not have to work hard to play them back, but this means they have massive file sizes. You guys may have noticed that a lot of cinematographers, directors, editors, DIT people, producers even, are in love with the Apple ProRes format, okay? And I get it. Like back in the day when it came out, ProRes was really impressive, right? And also back then, computers were really weak. We didn't have accelerators for any format at all, really. So it made sense to use a format that computers of the day could play back and play back well. That was really, really important for post-production and editing houses and all of these people trying to get work done. You needed a solution, right? You couldn't use compressed formats. You needed to retain your detail and it all just needed to work, right? And so Apple stepped up to the plate. They made a format and it did its job and it did its job well and it got adopted by people all over the industry worldwide, right? And that love has become so ingrained in this industry that now it's spilled over into cameras. So now some cameras are offering ProRes as a capture format when originally it was just an intermediate format, right? You would transcode your footage into ProRes and then edit with it and then export to whatever your export was from there, right? But now it's been taken and used as a recording format in cameras. It works just fine. It's just inefficient, right? 
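To put that inefficiency in rough numbers, here is a quick storage-per-hour sketch. The bitrates are ballpark figures, not measurements: Apple's published target for ProRes 422 HQ at 1080p30 is around 220 Mb/s, and 100 Mb/s is a typical H.265 setting on a mirrorless camera:

```python
# Rough storage per hour at common target bitrates.
# Bitrates are approximate published/typical figures, not exact:
#   ProRes 422 HQ at 1080p30: ~220 Mb/s (Apple's stated target rate)
#   H.265 on a mirrorless camera: ~100 Mb/s (a common menu setting)
def gb_per_hour(megabits_per_second):
    # Mb/s -> decimal GB per hour
    return megabits_per_second * 3600 / 8 / 1000

for codec, mbps in [("ProRes 422 HQ (1080p30)", 220), ("H.265 (1080p)", 100)]:
    print(f"{codec}: ~{gb_per_hour(mbps):.0f} GB/hour")
```

Roughly 99 GB per hour versus 45 GB per hour at 1080p, and the gap widens further at 4K, which is the storage cost you pay for the easy playback.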
It takes way, way more powerful cameras, bigger batteries, and more expensive memory cards to record it, because there's just so much data, right? And it makes your whole system much slower and much heavier. And if I'm going that far, with that heavy of a camera and batteries that big, I'd rather have RAW video than ProRes as a capture format. The file sizes are massive for the quality that you get, and they don't offer higher dynamic range or bit depth than an efficient codec like H.265, for example, or even H.264. Now, what I think shooting in ProRes is good for is green screen work, heavy color grading, some visual effects, stuff like that. It can yield a cleaner edge at the border of a really saturated color, so like at a green edge and that kind of thing. But again, that's way down the list of priorities when you're doing something like that, below things like lighting it well so you don't have a bunch of green spill on your talent, and shooting at a bit higher resolution, which solves that same problem. Shooting in 6K instead of 4K, or 4K instead of 1080, solves the same problem. So it's not a big deal nowadays. I think there are better solutions for the majority of shooters than ProRes, but it's not bad. But I'll close by saying this: it's a good idea to use it if your producer, director, or cinematographer is asking for it. Finally, we have delivery codecs. Delivery formats are the formats you render your project into before you deliver it to your client or upload it to YouTube or whatever it may be. Their job is to present your project in all its glory while being as lean and mean as possible for uploading and streaming, or burning to a disc, VHS tapes, whatever it may be. These formats are typically fairly compressed, meaning they throw away all the unnecessary information and only keep what the viewer sees. Delivery formats include MPEG-2, H.264, and more recently H.265 and AV1.
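Estimating the size of a delivery file is simple: it's just total bitrate times duration. A sketch, assuming example figures of 8 Mb/s video and 384 kb/s audio, which are in the ballpark of common 1080p upload recommendations rather than hard requirements:

```python
# Estimated delivery file size: (video + audio bitrate) x duration.
# The 8 Mb/s video and 384 kb/s audio figures are illustrative
# examples, roughly in line with common 1080p upload guidance.
def file_size_mb(video_mbps, audio_kbps, minutes):
    total_mbps = video_mbps + audio_kbps / 1000
    return total_mbps * minutes * 60 / 8  # megabits -> megabytes

size = file_size_mb(video_mbps=8, audio_kbps=384, minutes=10)
print(f"~{size:.0f} MB")  # ~629 MB for a 10-minute video
```

Handy when you need to sanity-check a render before uploading, or to work backward from a platform's size limit to a bitrate.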
There have been many delivery formats over the years, but one has emerged as the clear winner, and it's H.264, the magic format that's incredibly efficient and will get the job done 90% of the time. When in doubt, use H.264. Every platform lets you upload an H.264 file. It's just a great balance of file size and quality, and it plays back on almost every device in the universe. So it just works. You may have noticed I didn't say anything about .mov, .m4v, .mp4, .wmv, .mkv, .mxf, anything like that. That's because these are not codecs. These are file extensions for container formats. You most often see them on your computer when you're looking at a file, and it'll say, you know, the name of the file, dot mkv, or whatever it may be. A container format is a bundle of a video codec and an audio codec put together in the same file and wrapped up with a little bow, so that a video player can play something back with sound and video together. Now, for our purposes, container formats mostly don't matter. It's much more important to know the actual video codec you're using under the hood. And you can use the same codec and even the same audio across different containers. You might have the exact same contents inside an .mov that you have inside an .m4v, for example.
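A tiny illustration of that codec-versus-container distinction. The stream info here is hypothetical, the kind of thing an inspection tool such as ffprobe would report, and the point is that the codecs inside can be identical across container extensions:

```python
# A container (.mp4, .mov, .mkv, ...) is just a wrapper around
# codec streams; the codecs inside can be identical across them.
# Hypothetical stream info for one clip, repackaged three ways
# with no re-encoding.
streams = {"video_codec": "H.264", "audio_codec": "AAC"}

for container in (".mp4", ".mov", ".mkv"):
    print(f"clip{container} -> video: {streams['video_codec']}, "
          f"audio: {streams['audio_codec']}")
```

Same H.264 video and AAC audio every time; only the wrapper and the extension change.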