• SONAR
  • Leaving headroom for mastering? Why? (p.3)
2013/05/02 17:22:08
Jeff Evans
There is more to mastering than compression and limiting. Many here are forgetting what the first stage usually is: EQ. This is one main reason why you should leave 3 to 6 dB of headroom in your mix. 

So suppose we need to boost our mids or whatever by 3 or 4 dB. When this is applied, the level of the track goes up. If the track is already sitting close to 0 dBFS (and still has dynamics) there is nowhere for the EQ to boost into (if boosting is required, of course), so clipping results. That means the level has to be pulled down before the EQ for any boosting to take place. This is OK, but if the track has that headroom built in, it is one less thing to do to the signal before any mastering begins.

Each stage can add a few dB here and there. EQ might increase the overall RMS level of a track by, say, 3 dB. If the compressor that follows is averaging about 3 dB of gain reduction, then a further 3 dB can be added back there as make-up gain. Now our track is already 4 or 5 dB louder before the limiter even kicks in, and then that can do the final raising of the RMS level.
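To put numbers on the gain staging Jeff describes, here is a quick Python sketch. All the figures are illustrative stand-ins, not anything from his actual chain:

```python
# Hypothetical gain-staging sketch: tracking how each mastering stage
# eats into the headroom left in the mix. All numbers are illustrative.

mix_peak_dbfs = -6.0     # mix delivered with 6 dB of headroom

eq_boost_db = 3.0        # EQ stage adds ~3 dB in the boosted range
comp_makeup_db = 3.0     # compressor averages 3 dB of reduction,
                         # made up again with 3 dB of output gain

peak_after_eq = mix_peak_dbfs + eq_boost_db
peak_after_comp = peak_after_eq + comp_makeup_db

print(f"after EQ:         {peak_after_eq:+.1f} dBFS")
print(f"after compressor: {peak_after_comp:+.1f} dBFS")

# With 6 dB of headroom the signal only reaches 0 dBFS at the limiter.
# Had the mix already peaked near 0 dBFS, the EQ boost alone would clip.
assert peak_after_comp <= 0.0
```

With only 0.1 dB of headroom at the start, the same two stages would push the peaks roughly 6 dB over full scale before the limiter ever sees the signal.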

If 3 to 6 dB of headroom is allowed for right at the start, all these processes flow rather well and work nicely together. So if you are going to send any tracks out to an ME, take the advice: do not apply any processing to your mix, and allow for 3 to 6 dB of headroom as suggested here. Stop trying to make the track loud. That is NOT your job. Turn your mix up loud in the room if you want to hear it loud. That is your job.
2013/05/02 18:38:30
bandso
Thank you Jeff, yes, EQing and other processes do affect the dB levels being raised and lowered.
OK, my question has been sort of sidestepped, so I'm most likely going to bail on this one, as I can't seem to convey what I intended. I hope the OP gets the answers he is looking for. All I've heard is "don't do it", with no sonic/scientific reason why I can't give a louder (normalized to -0.1) undamaged/uncompressed audio file to my clients and then just have the ME bump it down a few dB before he/she starts their magic.
Peace!
2013/05/02 19:15:10
jhonvargas
This looks overly complicated guys.
 
Making good sounding music should be simpler than this.
 
Cheers,
 
Jhon
2013/05/02 22:59:49
filtersweep
I think a problem with normalization is that, in raising levels uniformly across the entire frequency spectrum, it can, depending on how much frequency range there is in the material, alter the listener's perception of the music. The midrange loudness is most affected because the ear is much more sensitive to dB changes in the midrange.
2013/05/02 23:09:24
John
filtersweep


I think a problem with normalization is that, in raising levels uniformly across the entire frequency spectrum, it can, depending on how much frequency range there is in the material, alter the listener's perception of the music. The midrange loudness is most affected because the ear is much more sensitive to dB changes in the midrange.

That is not how normalization works. It is not uniform but proportional: every sample is scaled by the same factor that brings the highest peak to the target level. The dynamics do not change. Also, it has nothing to do with frequency, only level. 


2013/05/03 04:55:43
Chregg
"There is more to mastering than compression and limiting. Many here are forgetting what the first stage usually is: EQ. This is one main reason why you should leave 3 to 6 dB of headroom in your mix." This!!!
2013/05/03 09:54:15
brconflict
bandso


Thank you Jeff, yes, EQing and other processes do affect the dB levels being raised and lowered.
OK, my question has been sort of sidestepped, so I'm most likely going to bail on this one, as I can't seem to convey what I intended. I hope the OP gets the answers he is looking for. All I've heard is "don't do it", with no sonic/scientific reason why I can't give a louder (normalized to -0.1) undamaged/uncompressed audio file to my clients and then just have the ME bump it down a few dB before he/she starts their magic.
Peace!

I really don't think a holistically scientific reason for it exists, but that's OK. Music and mixing/mastering are art, right? Ask a mixing or mastering engineer who's done lots of work for a major label, and nearly all of them will tell you that the label gets what it wants, regardless of whether you feel right about it or not. Science doesn't enter into their minds at all, and they pay the big paychecks. Sure, the music suffers on many levels, but the reality is this: if you want your music to sound "good", take the recommendations above and give them a go. If you want your mixes smashed like a major label would ask for, then do that. I think your judgment will kick in, and you'll know when something isn't right, or you wouldn't be interested in doing this sort of thing. That's the fun part. 

The only wrong answer is not doing what the customer asks for. I think a clipping limiter on the main bus is fine if, without it, a single kick-drum hit would force normalization to leave the rest of your mix at a whisper (note: I don't normalize myself, either). In that case, I recommend putting that clip limiter on a separate bus for your kick, or automating that single transient by hand like in the old days. -0.1 is perfectly fine if only a few transients get there and there are no audible side effects (know your DAW). You just don't want your guitars, bass, keys, or vocals to be hitting -0.1, even on occasion. I have some mixes that come out in the red, mainly because if I turn down all the faders even by 1 dB, the mix isn't right to my ears. Luckily, 32-bit (or 64-bit) floating point works for me because I keep the file 32-bit when I go to my mastering engineer. He uses WaveLab, which takes it in and works the magic I want, and there's no audible clipping. He recommends not going into the red, but he has never told me to give him more headroom, although I'm sure he would love it.

There are rules of thumb, but nearly every situation is different. Use some good recommendations to start; then, after you've had some successful masters generated, use your experience. It's a tweakable rulebook, but everyone works a little differently, even mastering engineers.  

Hope this helps!  


2013/05/03 23:51:32
filtersweep
John,
I realize that normalization has nothing to do with frequency per se; it affects only the amplitude of the waves. I'm not so sure what you mean by proportional. My understanding is that if the highest peak is raised by 5 dB to reach the desired normalization level, then every peak is raised by 5 dB. Proportional, to me, would mean that all peaks are raised by the same relative (i.e., percentage) amount, but that would change the dynamics. Anyway, I'm not trying at all to be argumentative; what I meant in my original statement was that psychoacoustics may come into play if, for example, your content has lower-amplitude peaks with substantially lower-frequency components and higher-amplitude peaks with heavy midrange components. Sort of the opposite of loudness compensation. I thought I read years ago that this was a problem sometimes encountered when normalizing classical and acoustic tracks.
    I'm just trying to figure this stuff out as I go along. For all I know, you and others here are experienced mastering engineers. If my theory still makes no sense, let me know!
2013/05/03 23:57:11
Jeff Evans
John is correct and you are incorrect, filtersweep. Think of it this way: if the highest peak is, say, 5 dB below 0 dBFS, then every sample point over the entire waveform is raised by 5 dB during normalisation. Simple as that. There will be no change in the sound of the wave at the end of the day, only the fact that it is now 5 dB louder than it was before.

Normalisation does not just affect peaks, as you may be thinking. It affects every single sample in the waveform. 
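Jeff's point, that normalisation scales every sample by the same constant, is easy to demonstrate in a short Python sketch (the sample values here are made up):

```python
import numpy as np

# Minimal peak-normalisation sketch: every sample is multiplied by the
# same constant factor, so the relative dynamics are unchanged.
signal = np.array([0.1, -0.3, 0.56234, -0.2])   # made-up sample values

target_peak = 1.0                                # normalise to 0 dBFS
gain = target_peak / np.max(np.abs(signal))      # one constant factor
normalised = signal * gain

# Every sample is scaled; the ratio between any two samples is preserved.
print(normalised)
```

Note the single `gain` factor applied to the whole array: quiet samples and loud samples are raised by the same number of dB, which is exactly why the dynamics (and the frequency balance) come out unchanged.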

Maybe in the past, though, if you had a very quiet recording (in 16-bit) and normalisation was applied, the digital noise floor would be boosted up as well. But these days, with 24-bit being the norm and converters being much better, it is not much of an issue.

And to bandso, who asks why a waveform cannot be knocked down a few dB before the EQ so the ME can do his magic: well, some experts say that adding and subtracting gain from a waveform could be seen as altering the quality, but only to a slight extent. If this is happening in the 64-bit world, a lot of precision can be maintained during the process, so it is OK to do it. But if you were calculating gain changes at a much lower bit depth, e.g. 16-bit, then errors could creep in. But I agree this is not a biggie, as even on a 32-bit system you are still 8 bits ahead of a 24-bit waveform anyway. And if you were using, say, the LP64 EQ and used its input trim control to do this, it would be OK even on a 32-bit system, as it can be put into double-precision mode on most DAWs anyway. 
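The low-bit-depth caveat can be illustrated with a quick Python round-trip test (a hypothetical simulation, not any DAW's actual processing): attenuate by 6 dB and boost back, once in 64-bit float and once re-quantising to 16-bit integers at each step.

```python
import numpy as np

# Sketch of why gain changes at low bit depth lose precision:
# attenuate by 6 dB, then boost back, via two different paths.
# Signal values are illustrative random "audio".
rng = np.random.default_rng(0)
signal = rng.uniform(-0.5, 0.5, 10000)   # pretend audio, float64
gain = 10 ** (-6 / 20)                   # -6 dB as a linear ratio

# Float path: essentially lossless round trip.
float_roundtrip = (signal * gain) / gain

# 16-bit path: each gain change re-quantises to integer steps.
def to_int16(x):
    return np.round(x * 32767).astype(np.int16)

quantised = to_int16(to_int16(signal).astype(np.float64) / 32767 * gain)
int16_roundtrip = quantised.astype(np.float64) / 32767 / gain

print("float error :", np.max(np.abs(float_roundtrip - signal)))
print("int16 error :", np.max(np.abs(int16_roundtrip - signal)))
```

The float path comes back with error at the limit of machine precision, while the 16-bit path accumulates quantisation error on every gain change, which is Jeff's point about why this only matters at low bit depths.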

This may be something that has hung on from a previous era when it was better to leave well alone, similar to recording as close to 0 dBFS as you can. These days things are much better in terms of the resolution going on behind the scenes, so it is not such an issue any more.

2013/05/04 12:06:25
drewfx1
filtersweep

Proportional, to me, would mean that all peaks are raised by the same relative (i.e., percentage) amount, but that would change the dynamics.

It is proportional: raising the level by a number of dB just means you multiply every sample by a number greater than one, and that doesn't change the dynamics. Decreasing the level means you multiply by a number less than one. A decibel is just a logarithmic way of expressing a ratio. In audio, you translate back and forth between dB and this ratio using the following formulas:

dB = 20*log10(ratio)
ratio = 10^(dB/20)
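The two formulas as tiny Python helpers (the function names are mine, purely illustrative):

```python
import math

# dB <-> linear amplitude ratio conversions for audio (20*log10 form).
def ratio_to_db(ratio):
    return 20 * math.log10(ratio)

def db_to_ratio(db):
    return 10 ** (db / 20)

print(ratio_to_db(2.0))   # doubling the amplitude is about +6.02 dB
print(db_to_ratio(-6.0))  # -6 dB is roughly half the amplitude
```

This is also why "normalising by +5 dB" and "multiplying every sample by the same factor" are the same operation: adding a fixed number of dB is multiplying by a fixed ratio.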

Anyway, I'm not trying at all to be argumentative; what I meant in my original statement was that psychoacoustics may come into play if, for example, your content has lower-amplitude peaks with substantially lower-frequency components and higher-amplitude peaks with heavy midrange components. Sort of the opposite of loudness compensation.
The sensitivity of our hearing at different frequencies does indeed change with level. But unless levels are specifically calibrated (so that a given level in the source plays back at a specific listening level), this depends directly on the listening level, which listeners generally just set to their liking.