Speaker 1: Hi, my name is Tyler, and this is After Touch Audio. Today, I want to break down my audio processing chain for dialogue. The goal of dialogue processing is to keep things sounding as natural as possible. When applying plugins to your dialogue, order them so that each one does as little work as possible; the order in which you process is just as important as what processing you apply.

The very first thing I do when I get my dialogue mix is balance the dialogue using clip gain, so everything is relatively consistent volume-wise. If the dialogue needs to be mixed to minus 23 LUFS, then I aim for minus 23 LUFS. This not only helps with gain staging for all of your plugins, it also makes problems in your dialogue easier to hear, because everything sits at a consistent level. In Pro Tools, I use a program called Pro Loudness Control, which analyzes the loudness of each clip; I then raise the clip volume until I hit my target level. Alternatively, you can use almost any loudness meter: Youlean, Dorrough, WLM, or my personal favorite, VisLM. They're all fantastic tools for helping you set your clip gain levels. This is probably the most important step, and it should come before you apply any amount of processing.

The next step is general cleanup. I listen to all the dialogue and remove things like clicks, pops, unwanted mouth noises, plosives, and hums using RX. Keep in mind that I don't do any denoising at this stage; I am only removing artifacts from the dialogue. These artifacts are important to remove because they will be amplified by the processing applied later on down the road. It is also very important, when working with RX, to select only what you want RX to remove, and not just load the plugin and click process, as that will drastically alter how the actual words sound.
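The clip-gain balancing step can be sketched in a few lines of code. This is a hypothetical illustration (the function names are mine, not from Pro Tools or any loudness plugin), and it uses plain RMS as a rough stand-in for loudness; a real LUFS measurement per ITU-R BS.1770 adds K-weighting and gating on top of this basic idea.

```python
import math

def rms_dbfs(samples):
    """RMS level of a clip (floats in -1..1), in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def clip_gain_db(samples, target_dbfs=-23.0):
    """Gain in dB to apply via clip gain so the clip hits the target level."""
    return target_dbfs - rms_dbfs(samples)

# Example: one second of a 100 Hz tone sitting around -26 dBFS RMS
tone = [0.07 * math.sin(2 * math.pi * 100 * n / 48000) for n in range(48000)]
gain = clip_gain_db(tone, target_dbfs=-23.0)          # about +3.1 dB
balanced = [s * 10 ** (gain / 20) for s in tone]      # now at -23 dBFS RMS
```

Running every clip through the same calculation is what makes the later gain staging predictable: each plugin then sees roughly the same input level.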
Speaker 2: The National Observer has been covering Canada's climate emergency for the last 15 years. And not to brag, but we've won awards for it.
Speaker 1: Okay, moving forward, we are actually applying plugins. It is super important to bypass and un-bypass your plugins as you go, to make sure that what you are applying is constructive and not destructive. Remember to like and subscribe; it really helps out with the YouTube algorithm. And let me know what your favorite dialogue processing plugin is in the comments below.

If you've ever edited a video, you may have heard the term color correction, which basically means balancing the colors between shots so that everything looks consistent. We need to do a very similar process with the dialogue: reduce, not remove, ringing frequencies; apply high- and low-pass filters to remove rumble and extra hiss; and generally shave things away so we have a nice solid base to work with. Most pro engineers will tell you to use your ears, but if you don't have tuned ears yet, you can do this by taking a bell with a high Q, boosting it by 10 dB, and sweeping it across the spectrum. If you hear a ringing frequency during your filter sweeps, you can reduce it until it blends nicely
Speaker 3: with the rest of the dialogue. Devin was an obvious choice, and Jen, Jen's not that bad looking. There might've been girls in Juvie, but they were separated by a 30-foot fence and they all wanted to tear me limb from limb. Nothing's more frightening than the top most dangerous girls in Canada liking you.
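The high-Q bell sweep described above can be sketched with a standard peaking biquad, using the coefficient formulas from the RBJ Audio EQ Cookbook. The helper names here are illustrative, not from any particular plugin.

```python
import cmath
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """RBJ-cookbook peaking-bell biquad coefficients (b, a), normalized by a0."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def magnitude_db(b, a, fs, f):
    """Filter gain in dB at frequency f, evaluated from the transfer function."""
    z = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20 * math.log10(abs(num / den))

# A narrow (high-Q) +10 dB bell at 3 kHz -- the "searchlight" for the sweep
b, a = peaking_eq_coeffs(48000, 3000, 10.0, 8.0)
```

Sweep `f0` across the spectrum with a boost like this; wherever a ring jumps out, flip the gain negative at that frequency to cut instead of boost.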
Speaker 1: After the stage-one EQ, I like to apply a small amount of dynamic processing with a compressor. I usually set a fast attack and a fast release, then set my threshold so I'm applying about six dB of compression. From there, I start making fine adjustments.

Fast attacks are fantastic for tightening up the dynamics in a voice and can make the delivery of the lines sound very polished. Setting the attack too fast, however, can suck the absolute life out of a performance and make the person sound further away in your mix. A fast attack can also cause distortion or other artifacts in the bass frequencies, so be very careful with how fast you set your attack. A slower attack can make things sound bigger, as it lets some of the original signal come through before it is compressed. However, slower attacks are not good for controlling dynamics; they can actually make your dynamics worse.

Fast release settings are fantastic for controlling the overall loudness of your tracks. If you're applying about six dB of gain reduction, a fast release will sound very natural, but if you're applying a lot of compression, things can sound very pumpy. A slow release is great for smoothing out dynamics, but it can also make things sound further away. If the release is too slow, the compressor will suck the life out of the dialogue and make things sound quite flat.

De-essers are really just compressors that work on specific frequency ranges. They are used to reduce the sibilance in a voice, which can sound quite harsh. For this process, I use FabFilter Pro-DS, which comes with a frequency analyzer and an audition section that help you find sibilance faster. Once you've found the sibilance you want to affect, set your threshold so the compressor knows when to trigger, then set your reduction level so it knows how much to reduce the sibilance by.
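The attack and release behavior described above can be sketched as a minimal feed-forward compressor. This is a teaching sketch under my own assumptions, not any plugin's actual algorithm: one envelope follower with separate attack and release time constants drives gain reduction above a threshold.

```python
import math

def compress(samples, fs, threshold_db=-20.0, ratio=4.0,
             attack_ms=5.0, release_ms=80.0):
    """Feed-forward compressor sketch. Faster attack tracks peaks more
    tightly (tighter, but riskier for bass); slower release smooths the
    gain changes (smoother, but can flatten the performance)."""
    atk = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = []
    for s in samples:
        level = abs(s)
        coeff = atk if level > env else rel      # attack up, release down
        env = coeff * env + (1 - coeff) * level  # one-pole envelope follower
        level_db = 20 * math.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gr_db = over * (1 - 1 / ratio) if over > 0 else 0.0  # gain reduction
        out.append(s * 10 ** (-gr_db / 20))
    return out
```

Signals below the threshold pass through untouched; the louder the envelope rises above it, the more the gain is pulled down, which is exactly the trade-off the attack and release settings are shaping.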
Once the dynamics are under control, I apply my noise reduction. Noise reduction is very easy to overdo, so remember to bypass and un-bypass your processing as you go to make sure you are not overdoing the noise removal. There are a lot of de-noising plugins you can use, and I use a few here depending on the level of noise I need to remove. I use iZotope RX for anything surgical. I also use NS1 by Waves, which works really well on hiss, but for my general broadband de-noiser I reach for ERA De-Noise Pro, which in my opinion has to be one of the best broadband de-noising plugins on the market.
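Broadband de-noisers like the ones above work per frequency band, which is well beyond a few lines of code, but the core "reduce, don't remove" idea can be sketched as a simple full-band downward gate. All the names and threshold values here are illustrative assumptions, not how RX, NS1, or ERA actually work.

```python
import math

def noise_gate(samples, fs, floor_db=-45.0, reduction_db=12.0, smooth_ms=20.0):
    """Crude broadband 'denoise': attenuate (not mute) whenever the
    envelope falls below a noise floor. The gain is smoothed so the
    gate doesn't chatter open and closed on every sample."""
    coeff = math.exp(-1.0 / (fs * smooth_ms / 1000.0))
    floor = 10 ** (floor_db / 20)
    cut = 10 ** (-reduction_db / 20)    # reduce by 12 dB, don't remove
    env = 0.0
    gain = 1.0
    out = []
    for s in samples:
        env = max(abs(s), coeff * env)          # peak-hold envelope
        target = 1.0 if env > floor else cut    # open above the noise floor
        gain = coeff * gain + (1 - coeff) * target
        out.append(s * gain)
    return out
```

Note that the quiet parts are only turned down by `reduction_db`, never to silence; total removal is what makes over-processed dialogue sound choppy and unnatural.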
Speaker 2: The National Observer has been covering Canada's climate emergency for the last 15 years. And, not to brag, but we've won awards for it. Using
Speaker 1: compressor, subtractive EQ, and a little bit of noise reduction can actually remove quite a bit from your voice. So I like to apply a super subtle, and I mean super subtle, harmonic exciter to my dialogue, which helps push the dialogue a little forward in the mix. This can very easily be overdone, so make sure to bypass and un-bypass your plugins to check that the processing you're applying is constructive and not destructive. I'm talking two, three, four percent here.

The final piece of direct processing I like to apply is a shaping EQ. This EQ is placed here to reapply the high-pass filter and apply wide, gentle boosts so the voice sounds its best. Subtlety is key here, but really listen to your dialogue and make sure that the best characteristics of the voice are being amplified. There might have been girls
Speaker 3: in juvie, but they were separated by a 30-foot fence and they all wanted to tear me limb from limb. Once you have your voice sounding the way you would like
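The "two, three, four percent" harmonic excitement described above can be sketched as a parallel blend of a soft-saturated copy of the signal. Real exciters typically band-limit and filter the saturated path; this hedged one-liner skips that and just shows the blend.

```python
import math

def excite(samples, amount=0.03, drive=4.0):
    """Parallel harmonic exciter sketch: mix a few percent of a
    soft-saturated (tanh) copy back in with the dry signal.
    'amount' is the 2-4% wet blend; 'drive' sets how hard the
    saturated copy is pushed into the tanh curve."""
    return [(1 - amount) * s + amount * math.tanh(drive * s) for s in samples]
```

The tanh curve adds odd harmonics to the wet copy; keeping `amount` tiny means the harmonics sit just under the dry signal, which is the "push forward without hearing the effect" the video is after.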
Speaker 1: it is time to place your actors back in the world by applying some reverb. Whether they live in a living room or a cave, a small amount of reverb can actually help mask some cuts in your dialogue and put everyone in the same space.

This is my mixing console. It really doesn't matter what I use here; the more important thing I want to draw your attention to is how faders work. If you look at the top of the fader, we have minus 10 to plus 6 in this range, but then we have minus 20 to minus 30 in this range. If we did not set our levels with clip gain and slightly compress the dialogue before this step, we would be doing dialogue automation like this, which just does not sound right. It is better to do fine adjustments at this step by riding the top of the fader. With that being said, I usually do two or three passes on the dialogue with the fader, then I apply a program called WaveRider, which helps add that extra five percent to make my dialogue sound that much more consistent.

Keep in mind, this is just my process for dialogue editing, and it is always changing. There is never a golden rule of dialogue processing; the only real rule is to make it sound smooth. Try moving the order of the plugins around if you're not getting the result you are looking for. Just remember to have your plugins do the least amount of work possible so everything sounds natural. Now go make some noise.
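That "extra five percent" of automatic fader riding can be sketched as a slow gain that chases a short-term loudness target, clamped to small moves so it only ever rides the top of the fader. This is a rough stand-in under my own assumptions, not WaveRider's actual algorithm.

```python
import math

def ride_fader(samples, fs, target_db=-23.0, max_db=3.0, window_ms=400.0):
    """Gain-rider sketch: nudge a slow gain toward whatever correction
    would bring the short-term RMS to the target, clamped to +/-max_db
    so it only makes the fine moves a hand on a fader would."""
    coeff = math.exp(-1.0 / (fs * window_ms / 1000.0))
    ms = 0.0          # running mean-square (short-term loudness proxy)
    gain_db = 0.0
    out = []
    for s in samples:
        ms = coeff * ms + (1 - coeff) * s * s
        level_db = 10 * math.log10(max(ms, 1e-12))
        want = max(-max_db, min(max_db, target_db - level_db))
        gain_db = coeff * gain_db + (1 - coeff) * want   # slow fader move
        out.append(s * 10 ** (gain_db / 20))
    return out
```

The clamp is the point: because clip gain and compression already did the coarse work, the rider only needs a few dB of travel, exactly like riding the fine-resolution top of a fader instead of the coarse minus 20 to minus 30 range.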