• SONAR
  • Does anyone use Normalize? (p.2)
2012/07/23 10:19:04
AT
Nothing wrong w/ normalizing - as long as you don't depend upon it for day-to-day operation instead of proper recording technique.  Sometimes a recording comes out too quiet to be easily fitted into a mix.  But if you are doing it on most or many tracks, you need to look at your technique and boost the gain going in.  A simple but obvious fact: it is easy to play and record a "quiet" track so it pre-fits the mix, instead of capturing it loud and bringing it down later.  I know, I've done that.

2012/07/23 10:42:11
konradh
On the S/N ratio, noise while the main signal is present is one thing.  Noise in-between main content is another, and that would become more noticeable when normalizing.  Example: Room noise in-between a singer's phrases.  If that is the problem, maybe you can edit the clip (manually or with a gate) first.

I use Melodyne or V-Vocal to reduce the volume of unwanted breaths, hiss, etc. in-between vocal phrases.  In the case of noises like console button presses, mic stand bumps, throat clearing, etc., I completely cut the sections out.  For breaths and low-level room noise, I reduce the volume between phrases instead of cutting them so things don't sound unnatural.  These things are usually almost inaudible in a mix, but compression can really make them a problem.
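A rough sketch of that "turn it down, don't cut it" idea in Python/NumPy, for illustration only - this is nothing like what Melodyne or V-Vocal do internally, and the -45 dB threshold and -12 dB reduction are made-up placeholder numbers:

import numpy as np

def soften_gaps(x, sr, thresh_db=-45.0, reduce_db=-12.0, win_ms=20):
    """Attenuate (rather than mute) windows whose RMS falls below thresh_db."""
    y = np.array(x, dtype=float)
    win = max(1, int(sr * win_ms / 1000))
    for start in range(0, len(y), win):
        seg = y[start:start + win]
        rms = np.sqrt(np.mean(seg ** 2) + 1e-12)
        if 20 * np.log10(rms) < thresh_db:
            y[start:start + win] *= 10 ** (reduce_db / 20)  # pull it down, don't silence it
    return y

A real tool would ramp the gain in and out to avoid clicks; hard-cutting to silence is exactly what makes heavily gated vocals sound unnatural.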

Remember that in Melodyne, you can use the Separate tool (looks like a vertical line) to cut up a blob (or a horizontal line that represents noise) so you can delete or soften the unwanted parts without losing the singer's final consonants or some other good program material.

2012/07/25 00:25:21
Calkwalker
synkrotron

Kalle Rantaaho
AFAIK, normalising does not affect S/N ratio, because noise and signal are boosted equally (?).

Yes... of course, I missed that. It is a ratio, so it stays the same when you normalise.

And you are absolutely correct regarding using it for "routine" boosting of track levels that are recorded too low. At the end of the day, you should track at the right level to start with.
Thanks for the clarification; I stand corrected regarding Normalize and S/N ratio. 
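As a quick sanity check of the ratio argument - arbitrary levels, illustrative only - applying the same normalization gain to a tone and to the hiss underneath it leaves the printed S/N unchanged:

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 44100, endpoint=False)
signal = 0.1 * np.sin(2 * np.pi * 440 * t)        # quiet 440 Hz tone
noise = 0.001 * rng.standard_normal(t.size)       # low-level hiss

def snr_db(s, n):
    return 20 * np.log10(np.sqrt(np.mean(s ** 2)) / np.sqrt(np.mean(n ** 2)))

g = 0.98 / np.max(np.abs(signal + noise))         # gain a peak-normalize would apply
print(snr_db(signal, noise), snr_db(g * signal, g * noise))  # identical (up to float rounding): the gain cancels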
 
My specific use of Normalize has been to boost the level of 6 relatively weak audio signals coming from a guitar with a hexaphonic pickup, running into an RMC "Fanout Box" that outputs each string's signal to a separate 1/4" input jack on my UA-101, and mapped from there to 6 SONAR tracks.  I have the DIP switches on the UA-101 set to Mic level on these 6 inputs, which improves the signal level into SONAR by several dB, but the levels are still fairly weak.  The signal level is therefore what it is.  Unless I put 6 channels of hardware preamp between the Fanout Box and the UA-101, my only option is to boost the gain as needed within SONAR.

I have my reasons for recording 6 tracks of individual guitar string signals, and I just need to optimize it as best as I can.  The S/N is not the best, due primarily to the Fanout Box being in the signal chain.  I'm using short, premium quality TRS cables between the Fanout Box and the UA-101.  And there's a fresh 9V battery in the guitar to drive the hexaphonic pickup.
 
From all the responses above, it seems that my best options are a Normalize/gate combo (leaving enough headroom so as not to clip when plugins are added later during mixing), or a compressor/gate combo, or either Normalize or a compressor with a low-pass filter to consistently suppress the hiss noise (trading off some presence).
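A minimal sketch of the normalize-with-headroom step, assuming an arbitrary -6 dBFS target (not a recommendation); a gate or expander, hardware or plugin, would then run on the result to keep the boosted hiss down:

import numpy as np

def normalize_with_headroom(x, target_dbfs=-6.0):
    """Scale so the peak sits at target_dbfs, leaving room for plugins added later."""
    x = np.array(x, dtype=float)
    peak = np.max(np.abs(x))
    if peak == 0:
        return x
    return x * (10 ** (target_dbfs / 20) / peak)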
 
Any further feedback appreciated.
 
2016/02/19 07:29:59
Jyri T.
Personally, I use normalizing a lot. If you have a lot of clips, it can save a lot of time.
 
If you mix modern music (with compressors and other hard-hitting plugins) using audio material recorded well at 24 bits, normalizing makes no difference in the end product. If you use the track/clip gain or plug-in trim instead, you'll end up in exactly the same place. Normalizing is basically just a gain change, and nobody can mix music without changing gains anyway.
 
There are only two exceptions where it may make a difference. In both cases you need to use 16-bit audio and destructive editing for the difference to be noticeable.
 
#1. If you normalize the track very low, and raise the gain after that, you will lose information.
 
Like this:
original audio (16 bits)     123456789ABCDEF
normalized audio (16 bits)   6789ABCDEF-----
regained audio (16 bits)     -----6789ABCDEF
 
#2. If you go crazy and keep normalizing the same clip up and down and all over the place.
 
If you use 32-bit-deep files, there is absolutely no risk in normalizing. None. Just don't normalize the clips to 0 dBFS when mixing. Go for -3 dBFS or, even better, less (I usually use -12 dB to keep a healthy headroom for mixing).
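A small NumPy illustration of both points: normalizing a 16-bit file way down and then regaining it destroys the low-level detail, while the same round trip in 32-bit float loses essentially nothing. The 40 dB drop and the plain truncation (no dither) are just for illustration:

import numpy as np

rng = np.random.default_rng(1)
x = (rng.standard_normal(44100) * 4000).astype(np.int16)   # pretend 16-bit recording

down = (x / 100).astype(np.int16)                # "normalize very low" (-40 dB), stored back at 16 bits
regained = (down * 100).astype(np.int16)         # raise the gain again
print(np.max(np.abs(regained - x)))              # non-zero (up to ~99 counts): the lost bits never come back

xf = x.astype(np.float32) / 32768                # the same material as 32-bit float
print(np.max(np.abs((xf / 100) * 100 - xf)))     # tiny: only float rounding, far below audibility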
2016/02/19 09:12:43
Zargg
Hi. I use Normalize less than I used to, but I use it when needed. I still have an old key binding that I have had for years for both Normalize and Gain.
All the best.
2016/02/19 09:17:25
jpetersen
Yet another old thread suddenly re-activated.
2016/02/19 09:23:42
Zargg
jpetersen
Yet another old thread suddenly re-activated.


I did not notice that. Oops...
2016/02/19 09:29:07
Anderton
jpetersen
Yet another old thread suddenly re-activated.



The power of search engines...
 
I use phrase-by-phrase normalization on vocals, which does most of what compression does without the pumping/breathing/artifacts. Then I need to add only a little limiting to give real presence to vocals. This is only one element in what I call creating "HD" vocals. I'll be introducing the full technique at this year's Sweetwater Gearfest during one of my workshops.
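This is not Anderton's actual technique (he's presenting that at Gearfest), but the general idea of phrase-by-phrase normalization can be sketched: find stretches of audio separated by silence and peak-normalize each one to the same target. All the thresholds below are invented for illustration.

import numpy as np

def phrase_normalize(x, sr, target_dbfs=-3.0, silence_db=-45.0, win_ms=50):
    """Peak-normalize each phrase (run of non-silent windows) to target_dbfs."""
    x = np.array(x, dtype=float)
    win = max(1, int(sr * win_ms / 1000))
    target = 10 ** (target_dbfs / 20)
    # flag windows that contain program material
    n_win = int(np.ceil(len(x) / win))
    active = []
    for k in range(n_win):
        seg = x[k * win:(k + 1) * win]
        rms = np.sqrt(np.mean(seg ** 2) + 1e-12)
        active.append(20 * np.log10(rms) > silence_db)
    # walk through runs of active windows (phrases) and normalize each run
    y = x.copy()
    k = 0
    while k < n_win:
        if active[k]:
            j = k
            while j < n_win and active[j]:
                j += 1
            a, b = k * win, min(j * win, len(x))
            peak = np.max(np.abs(x[a:b]))
            if peak > 0:
                y[a:b] *= target / peak
            k = j
        else:
            k += 1
    return y

Every phrase lands at the same peak, which evens out the level the way a compressor would, but without any gain riding inside the phrase itself; a little limiting afterwards does the rest.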
 
Normalization has a lot of uses beyond the traditional ones. On album projects I normalize all cuts, then upon listening to the album, reduce the level of tracks that sound too loud by comparison.
 
I won't argue about whether normalization affects the sound, but all it does is turn up the level, so it's no different from moving a fader. Mathematically, it's just adding a fixed amount of gain.
2016/02/19 09:38:04
wetdentist
I never use Normalize, but rather Ozone 4.
2016/02/19 10:48:43
mettelus
I tend to normalize after a noise reduction check fairly often, for a few reasons: it keeps faders closer to unity, keeps compressor settings more consistent, and means fader moves while mixing bring levels down rather than up. Overall, the biggest concern is bringing up noise levels (the same deal as raising a fader above unity, hence the proper tracking levels mentioned above). I often do this as the last step when "baking" (destructively editing) a track before mixing, which can also reduce CPU usage (depending) by removing some FX (e.g., noise gates) from the chain.