Normalize or Compress First?

Author
mtrainer
Max Output Level: -89 dBFS
  • Total Posts : 61
  • Joined: 2004/01/06 20:16:31
  • Location: Chicago
  • Status: offline
2004/11/01 21:00:27 (permalink)

Normalize or Compress First?

Probably a dumb question for you experts, but I'm trying to get the most out of an audio track I've recorded from MIDI via external modules. I've got one audio track representing the entire song, and I've worked to keep the record meters from hitting the top red light (I have SONAR 4.0 Producer). Now I need to know:

Which makes more sense, compression then normalization on the entire track, or vice versa?

P.S. Sorry if I have misspelled any words. I can barely see anything within arm's reach; I just had LASIK.

Mark
#1

37 Replies

    Al
    Max Output Level: -35 dBFS
    • Total Posts : 4047
    • Joined: 2003/11/07 01:03:27
    • Location: NYC
    • Status: offline
    RE: Normalize or Compress First? 2004/11/01 21:11:12 (permalink)
    compression then normalization on the entire track or vise versa?


    simple - do NOT normalize.... EVER !!

    use the compressors/EQs/whatever effect chain that you need..
    by using these and putting a limiter at the end of the chain (if needed; the compressor may have already limited the signal earlier), just get close to the level that you need (somewhere less than 0 dB). -3 dB is fine if you are sending it to a mastering studio; if not, just use around -0.2 dB as your limit.
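    The dB values above map to linear gain via 20·log10; a quick Python sketch (helper names are just for illustration):

```python
import math

def dbfs_to_linear(db):
    """Convert a dBFS value to a linear amplitude factor (0 dBFS -> 1.0)."""
    return 10 ** (db / 20)

def linear_to_dbfs(amp):
    """Convert a linear amplitude (relative to full scale) back to dBFS."""
    return 20 * math.log10(amp)

# A -0.2 dBFS ceiling leaves only a sliver of safety margin below full
# scale, while -3 dBFS leaves roughly 30% of headroom for mastering.
print(dbfs_to_linear(-0.2))   # ~0.977
print(dbfs_to_linear(-3.0))   # ~0.708
```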
    #2
    mtrainer
    Max Output Level: -89 dBFS
    • Total Posts : 61
    • Joined: 2004/01/06 20:16:31
    • Location: Chicago
    • Status: offline
    RE: Normalize or Compress First? 2004/11/01 21:34:07 (permalink)
    Really? OK I will try out compression only and try to put as much signal on the track as I can up front.

    I usually record my songs as a collection of MIDI tracks played back at the same time- should I be recording each instrument as its own audio track with compression, reverb, etc. ? And then playing them all back together to record the final track? It seems like that would just be more work to go back and change something on the MIDI track.

    Mark
    #3
    LixiSoft
    Max Output Level: -70 dBFS
    • Total Posts : 1017
    • Joined: 2003/11/07 03:06:33
    • Location: Sunny TuneTown, USA
    • Status: offline
    RE: Normalize or Compress First? 2004/11/01 21:44:01 (permalink)
    Really? OK I will try out compression only and try to put as much signal on the track as I can up front.


    What are you recording at, 16-bit or 24-bit? No need to put as much signal as you can on a track; if you need more volume....just turn it up !!! DO NOT NORMALIZE !!!

    LixiSoft
    #4
    mtrainer
    Max Output Level: -89 dBFS
    • Total Posts : 61
    • Joined: 2004/01/06 20:16:31
    • Location: Chicago
    • Status: offline
    RE: Normalize or Compress First? 2004/11/01 21:59:50 (permalink)
    Well, I'm trying to put several tracks onto a CD that I can listen to in a car, distribute to friends, etc. My issue is that one particular project uses my old Roland SC-55 Sound Canvas as a MIDI playback device, and the individual volume on each track is a bit low: even with the master volume on the SC-55 turned up all the way as an input to my Delta 1010, I'm not getting as much signal as I'd like for the final wave file. I then use SONAR 4.0's export to dither down to 44.1 kHz and 16-bit for my CD-burning software to work (although my native Delta 1010 and SONAR 4.0 resolution is 24/96).

    I might be doing it all wrong which is why I'm posting here.

    Mark
    #5
    Guitslinger
    Max Output Level: -70 dBFS
    • Total Posts : 1018
    • Joined: 2003/11/15 00:55:12
    • Location: USA
    • Status: offline
    RE: Normalize or Compress First? 2004/11/01 22:38:24 (permalink)
    ORIGINAL: Al

    just get close to the level that you need (somewhere less than 0 dB). -3 dB is fine if you are sending it to a mastering studio; if not, just use around -0.2 dB as your limit


    As I understand the nuances of sound perception, the smallest change in amplitude that can be perceived by the human ear is 3dB. If true, then there should be no audible difference between a -2dB peak, and one that registers at 0dB.

    Intel I5-2500K
    ASUS P8P67Pro mb
    16gb Corsair Vengeance RAM
    ASUS EN210 silent GPU
    Hyper 212+ CPU fan
    Fractal Audio midtower case
    Corsair TX650 PSU
    ASUS blueray optical dr
    WD 500gb SATA hard drive x 2
    Windows 7 Professional
    Focusrite Saffire Pro 40
    #6
    michael japan
    Max Output Level: -22.5 dBFS
    • Total Posts : 5252
    • Joined: 2004/01/29 03:01:03
    • Status: offline
    RE: Normalize or Compress First? 2004/11/01 22:50:35 (permalink)
    usually record my songs as a collection of MIDI tracks played back at the same time- should I be recording each instrument as its own audio track with compression, reverb, etc. ? And then playing them all back together to record the final track? It seems like that would just be more work to go back and change something on the MIDI track.


    one more ditto to not normalizing. The way you are doing it is fine, but you might want to consider doing a sub-mix of the kick and bass (bounce them down first), really tuning in the EQ, and fattening them up. From what I remember the Canvas was pretty weak in the drum/bass area. I would also suggest you get something better. If you are into hardware units you can get modules (great ones) for dirt cheap now, such as an EMU Ultra series sampler/Proteus 2000/etc. Nice big phat sounds for all genres. The EMU 6400 Ultra here with a 16 gig HD full of sounds will cost you about $700. Want one?

    Michael

    Windows 10/64 bit/i7-6560U/SSD/16GB RAM/Cakelab/Sonar Platinum/Pro Tools/Studio 1/Studio 192/DP88/MOTU AVB/Grace M101/AKG Various/Blue Woodpecker/SM81x2/Yamaha C1L Grand Piano/CLP545/MOX88/MOTIF XS Rack Rack/MX61/Korg CX3/Karma/Scarbee EP88s/ Ivory/Ravenscroft Piano/JBL4410/NS10m/Auratones/Omnisphere/Play Composers Selection/Waves/Komplete Kontrol
    #7
    ...wicked
    Max Output Level: -1.5 dBFS
    • Total Posts : 7360
    • Joined: 2003/12/18 01:00:56
    • Location: Seattle
    • Status: offline
    RE: Normalize or Compress First? 2004/11/01 23:00:36 (permalink)
    I know I should side with the experts here, but I use normalization quite a bit on things like recorded hardware synth parts.

    Sometimes my XP-50 just doesn't put it out the way I want... so I normalize to get it to 0dB then I back it off using -3dB increments to get some headroom. From there I just use faders. But sometimes I crank it 6dB and it just doesn't cut it... what can one do other than normalize it from there?

    I wouldn't recommend it with recorded sounds...especially vocals. But sometimes you just gotta crank something up....

    -m

    ===========
    The Fog People
    ===========

    Intel i7-4790 
    16GB RAM
    ASUS Z97 
    Roland OctaCapture
    Win10/64   

    SONAR Platinum 64-bit    
    billions VSTs, some of which work    
    #8
    Al
    Max Output Level: -35 dBFS
    • Total Posts : 4047
    • Joined: 2003/11/07 01:03:27
    • Location: NYC
    • Status: offline
    RE: Normalize or Compress First? 2004/11/01 23:09:15 (permalink)
    the smallest change in amplitude that can be perceived by the human ear is 3dB. If true


    It is more like 1dB but it really depends on WHAT you listen to and HOW ...
    we had this discussion before about 1 year ago -
    some guys jumped in and said they can hear every change of 0.2dB ( or less !? ;)

    ...sometimes it's true, sometimes it's hard to hear even a 2 or 3 dB change in amplitude.

    there should be no audible difference between a -2dB peak, and one that registers at 0dB.


    The point here is to get to a "standard" level... after shaping (and sometimes over-compressing) the highest peaks (limiting them).
    Just check some commercial CDs. Not that the final volume we hear depends just on that; most of the compressed stuff would sound "louder" even at -3 dB.
    #9
    LixiSoft
    Max Output Level: -70 dBFS
    • Total Posts : 1017
    • Joined: 2003/11/07 03:06:33
    • Location: Sunny TuneTown, USA
    • Status: offline
    RE: Normalize or Compress First? 2004/11/01 23:14:33 (permalink)
    so that even with the master volume on the SC-55 turned up all the way as an input to my Delta 1010, I'm not getting as much signal as I'd like for the final wave file.


    So turn up the fader in SONAR until you hit 0 dB with no overs. Or edit your MIDI file with the scale velocity command to turn up your MIDI tracks. Use CC 7 for volume, or CC 11 for expression, to tweak your MIDI file. I still use my SC55 all the time for "live" use...I love it !!
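    The "scale velocity" idea can be sketched in a few lines; this is a hypothetical helper (not SONAR's actual command) that scales note velocities and clamps them to the valid MIDI range:

```python
def scale_velocities(velocities, factor):
    """Scale a list of MIDI note velocities, clamping to the 1..127 range.

    Hypothetical stand-in for a sequencer's scale-velocity command; a real
    tool would edit the note events in place.
    """
    return [max(1, min(127, round(v * factor))) for v in velocities]

quiet_track = [40, 55, 63, 70]
print(scale_velocities(quiet_track, 1.5))  # [60, 82, 94, 105]
```

    Note that boosting velocity can also change the timbre on many synth patches, which is one reason CC 7 (channel volume) may be the gentler fix.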

    LixiSoft
    #10
    Al
    Max Output Level: -35 dBFS
    • Total Posts : 4047
    • Joined: 2003/11/07 01:03:27
    • Location: NYC
    • Status: offline
    RE: Normalize or Compress First? 2004/11/01 23:15:47 (permalink)
    I use normalization quite a bit on things like recorded hardware synth parts.

    Sometimes my XP-50 just doesn't put it out the way I want


    Running thru a mixer? Use more gain.. then record.

    so I normalize to get it to 0dB then I back it off using -3dB increments to get some headroom. From there I just use faders

    that's bad.. you are degrading the quality of what you record at least 3-4 times ! (recording low, normalizing, using the -3 or +3 dB to re-calculate the wav file...)

    use Sonar's Trim (like u said it might not be enough, but with better recorded signals.. if the XP's output is too weak, tweak the preset and/or use a mixer to record at louder levels)

    another option - use some FX, like an EQ or compressor... even with subtle presets or low "effect level", all of these (usually) have a master input/output level, so you can use these to get more gain.
    #11
    LixiSoft
    Max Output Level: -70 dBFS
    • Total Posts : 1017
    • Joined: 2003/11/07 03:06:33
    • Location: Sunny TuneTown, USA
    • Status: offline
    RE: Normalize or Compress First? 2004/11/01 23:23:00 (permalink)
    so I normalize to get it to 0dB then I back it off using -3dB increments to get some headroom


    A double fault: you have just used 2 destructive commands. You need to study up on the definition of "headroom". Your basic understanding of digital audio is faulty. That said....."If it sounds good.....it is"; rules are made to be broken. But you need to know the rules to play the game.

    LixiSoft
    #12
    guitardood
    Max Output Level: -82 dBFS
    • Total Posts : 413
    • Joined: 2004/08/02 21:12:50
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 02:51:21 (permalink)
    THERE IS NO NOTICEABLE DEGRADATION BY NORMALIZING!!!!!!!

    See this thread, then call cakewalk and ask them to please post a stinking response and put this ridiculous issue to bed once and for all.





    Sorry for the steam:)

    - Guitardood
    #13
    bso
    Max Output Level: -83 dBFS
    • Total Posts : 351
    • Joined: 2003/11/20 23:38:03
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 03:19:45 (permalink)
    ORIGINAL: guitardood

    THERE IS NO NOTICEABLE DEGRADATION BY NORMALIZING!!!!!!!

    See this thread, then call cakewalk and ask them to please post a stinking response and put this ridiculous issue to bed once and for all.





    Sorry for the steam:)

    - Guitardood

    Absolutely right....It's just math. Use compression mildly if you have too much dynamic range........Go figure, guitardood...We all go digital to increase the dynamic range and lower the noise floor...and then.........KERPLONK !...Take every (BIT) of it away...People..you know there are volume knobs on all that stuff you buy and play through.
    #14
    guitardood
    Max Output Level: -82 dBFS
    • Total Posts : 413
    • Joined: 2004/08/02 21:12:50
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 03:22:53 (permalink)
    mtrainer, BTW in answer to your original question, if you are trying to use presets with a compressor, you should normalize prior to compressing. If you are familiar with compressor functionality and comfortable changing the parameters, you could probably compensate for the low volume tracks with a few adjustments and probably would not need to normalize.

    Hope that helps.

    - Guitardood
    #15
    Guitslinger
    Max Output Level: -70 dBFS
    • Total Posts : 1018
    • Joined: 2003/11/15 00:55:12
    • Location: USA
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 05:10:46 (permalink)
    ORIGINAL: guitardood

    THERE IS NO NOTICEABLE DEGRADATION BY NORMALIZING!!!!!!!

    See this thread, then call cakewalk and ask them to please post a stinking response and put this ridiculous issue to bed once and for all


    According to the help files provided with Sonar, normalization or incremental increases in volume erode the waveforms recorded.

    #16
    daverich
    Max Output Level: -41 dBFS
    • Total Posts : 3418
    • Joined: 2003/11/06 05:59:00
    • Location: south west uk
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 05:19:22 (permalink)
    Normalising is useless unless you're making convolution impulses or something.

    Sit and think about it for 2 minutes and you'll realise why.

    If you NEED to normalise then you NEED to re-record your audio. period.

    Also sometimes all you want to do is make the wave bigger so you can see it better - you can do this non-destructively in sonar using the zoom slider on the end of the track lanes.

    Kind regards

    Dave Rich.
    < Message edited by daverich -- 11/2/2004 10:27:37 AM >

    For Sale - 10.5x7ft Whisperroom recording booth.

    http://www.daverichband.com
    http://www.soundclick.com/daverich
    #17
    UnderTow
    Max Output Level: -37 dBFS
    • Total Posts : 3848
    • Joined: 2004/01/06 12:13:49
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 05:58:54 (permalink)
    Quick answer: On your final mix, compress then normalise.

    Long answer:

    First up, any digital gain change will introduce quantisation errors to the audio being processed. It is good practice to try and do as few gain changes as possible and, when needed, do it with the best tools available. But one shouldn't get too obsessed with this. At 24-bit, a small gain change will not be noticeable. A bit of compression well done will probably have more benefits than negative effects on the audio. Let's not throw the baby out with the bath water ...

    Secondly, there is no real inherent difference between processing your sound with compressors/limiters + normalising at the end and using a plugin that combines these processes (Think L1, L2, L3, Elephant). They effectively do the same thing.

    In practice, though, different plugins or processing functions in editors are implemented in different ways. Some use oversampling, thus working at higher sampling rates. Others use higher bit depth. Also, the actual algorithms might be of different quality or work better on different material.

    Another thing to consider is that high-end plugins that combine several functions will usually keep the audio at the higher sampling rate or bit depth in between these processes. So there won't be unnecessary downsampling or bit depth conversion being applied in between. But as with the gain changes, try to use the best tools sound-wise and don't get so obsessed that you miss the big picture.

    So back to the question: normalise as the very last step, regardless of whether this is done manually or automatically by a plugin. And try to normalise to something like -0.5dB. This will let (really) old CD players read the discs and will allow for overshoots in the D/A converters (often caused by clipped material).
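    Peak normalization to a target like -0.5 dBFS boils down to one gain computation; a minimal sketch, assuming float samples in the -1.0..1.0 range:

```python
def normalize_to_peak(samples, target_dbfs=-0.5):
    """Scale samples so the highest absolute peak lands at target_dbfs."""
    peak = max(abs(s) for s in samples)
    target = 10 ** (target_dbfs / 20)   # -0.5 dBFS is ~0.944 linear
    gain = target / peak
    return [s * gain for s in samples]

mix = [0.1, -0.4, 0.25]
out = normalize_to_peak(mix)
print(max(abs(s) for s in out))   # ~0.944
```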

    If your material is going to be mastered by a pro, don't do anything. Just get as good a mix as possible and don't do any processing on the final mix.
    #18
    daverich
    Max Output Level: -41 dBFS
    • Total Posts : 3418
    • Joined: 2003/11/06 05:59:00
    • Location: south west uk
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 06:47:29 (permalink)
    hmm, I think you've got the wrong end of the stick here.



    "First up, any digital gain change will introduce quantisation errors to the audio being processed. "

    hmmm - you're not doing any samplerate changes or anything weird and wonderful, merely changing the volume - the problem with normalizing is you are also normalising the noise level - for no benefit.

    "It is good practice to try and do as few gain changes as possible and, when needed, do it with the best tools available. But one shouldn't get too obsessed with this. At 24-bit, a small gain change will not be noticeable. A bit of compression well done will probably have more benefits than negative effects on the audio. Let's not throw the baby out with the bath water ..."

    I'm sorry, but compression is going to destroy far more than normalising ever will - normalising is merely a volume change; compression adds distortion and processes the signal in a much more complex way.


    "Secondly, there is no real inherent difference between processing your sound with compressors/limiters + normalising at the end and using a plugin that combines these processes (Think L1, L2, L3, Elephant). They effectively do the same thing. "

    Master limiters don't normalise - they compress/limit/distort - but in a fairly transparent way - the two processes are completely different. Normalising won't affect your dynamics at all.


    "Another thing to consider is that high-end plugins that combine several functions will usually keep the audio at the higher sampling rate or bit depth in between these processes. So there won't be unnecessary downsampling or bit depth conversion being applied in between. But as with the gain changes, try to use the best tools sound-wise and don't get so obsessed that you miss the big picture."

    True but again, has nothing to do with normalising.

    "So back to the question: normalise as the very last step, regardless of whether this is done manually or automatically by a plugin. And try to normalise to something like -0.5dB. This will let (really) old CD players read the discs and will allow for overshoots in the D/A converters (often caused by clipped material)."

    Normalising is not compression, nor is it limiting. It merely makes the audio wave louder so that the biggest peak is 0 dB. Limiting will chop the peak so it can go louder. Limiting and then normalising makes no sense - in that scenario you've used your limiter to crush the wave file, and then normalised it up - just limit with a reasonable signal going in - if it's still not making 0 dB then it's set up wrong.


    "If your material is going to be mastered by a pro, don't do anything. Just get as good a mix as possible and don't do any processing on the final mix."

    yup true.

    I hope you don't take offense at this but I remember being confused about the process myself years ago when I used to normalise every audio take right after recording.

    Kind regards

    Dave Rich.
    < Message edited by daverich -- 11/2/2004 11:56:21 AM >

    #19
    UnderTow
    Max Output Level: -37 dBFS
    • Total Posts : 3848
    • Joined: 2004/01/06 12:13:49
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 07:33:08 (permalink)
    ORIGINAL: daverich

    hmm, I think you've got the wrong end of the stick here.


    I doubt it. :)


    "First up, any digital gain change will introduce quantisation errors to the audio being processed. "

    hmmm - you're not doing any samplerate changes or anything weird and wonderful, merely changing the volume - the problem with normalizing is you are also normalising the noise level - for no benefit.


    Your audio is in 24 bits, which means it has 16777216 "amplitude slots" for each sample. Or put otherwise, 16777216 possible different levels for each sample. Every time you do a gain change, the chances of the samples falling at exactly the right places are close to nil. So the samples have to be quantised into those 16777216 slots. Those are called quantisation errors. This is audio degradation.
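    The rounding being described here can be shown with a toy calculation: apply a gain change to a 24-bit integer sample value, and the result almost never lands exactly on a 24-bit "slot", so it must be rounded back onto the grid:

```python
# 24-bit signed audio spans -2^23 .. 2^23 - 1, i.e. 16777216 integer slots.
sample = 1_234_567        # an arbitrary 24-bit sample value
gain = 0.8                # a non-trivial gain change

exact = sample * gain     # about 987653.6: falls between two 24-bit slots
quantised = round(exact)  # forced back onto the integer grid
error = exact - quantised # the quantisation error described above

print(quantised, error)
```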


    "It is good practice to try and do as few gain changes as possible and, when needed, do it with the best tools available. But one shouldn't get too obsessed with this. At 24-bit, a small gain change will not be noticeable. A bit of compression well done will probably have more benefits than negative effects on the audio. Let's not throw the baby out with the bath water ..."

    I'm sorry, but compression is going to destroy far more than normalising ever will - normalising is merely a volume change; compression adds distortion and processes the signal in a much more complex way.


    I am not comparing to normalising in this paragraph. I am referring to all the gain changes that compression introduces, and thus the quantisation errors. I am saying that with good tools, the quantisation errors are insignificant compared to the other benefits. (Assuming the compression has benefits on this particular material. That all depends on what you are doing.)


    "Secondly, there is no real inherent difference between processing your sound with compressors/limiters + normalising at the end and using a plugin that combines these processes (Think L1, L2, L3, Elephant). They effectively do the same thing. "

    Master limiters don't normalise - they compress/limit/distort - but in a fairly transparent way - the two processes are completely different. Normalising won't affect your dynamics at all.



    Downward compression and limiting in itself always reduces levels. Tools like the L2 automatically add gain to the signal to get back to full volume. In other words, the L2 has a limiter and an auto-normalizer. The same goes for the other tools I mention and many others.

    There is NO difference between the principle of auto normalization and manual normalization. Of course the actual implementation might differ.
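    That equivalence is easy to sketch; `hard_limit` and `normalize` below are toy stand-ins for a real limiter plugin's two stages, not any product's actual algorithm:

```python
def hard_limit(samples, ceiling):
    """Clamp every sample to +/- ceiling (an instant-attack brickwall)."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

def normalize(samples, target=1.0):
    """Scale so the highest peak sits at target (the auto make-up stage)."""
    peak = max(abs(s) for s in samples)
    return [s * target / peak for s in samples]

signal = [0.2, 0.9, -0.5, 0.7]
limited = hard_limit(signal, 0.6)  # peaks squashed down to 0.6
loud = normalize(limited, 1.0)     # then brought back up to full scale
print(limited)  # [0.2, 0.6, -0.5, 0.6]
```

    Whether a plugin bundles these two steps or you run them one after the other, the signal goes through the same two processes.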


    "Another thing to consider is that high-end plugins that combine several functions will usually keep the audio at the higher sampling rate or bit depth in between these processes. So there won't be unnecessary downsampling or bit depth conversion being applied in between. But as with the gain changes, try to use the best tools sound-wise and don't get so obsessed that you miss the big picture."

    True but again, has nothing to do with normalising.


    Yes it does, as normalising is a gain change. And it explains one advantage of using tools that combine several processes in one.

    All these plugins with oversampling and extra bit depth have to output back to 24-bit and your project sample rate. This means anti-aliasing filters for the downsampling, and quantisation errors for the bit depth reduction, with each plugin. This is above and beyond any other distortion that these plugins might introduce. If you can reduce the number of times this happens by using fewer plugins (if they are good), it can only be a good thing. :)


    "So back to the question: normalise as the very last step, regardless of whether this is done manually or automatically by a plugin. And try to normalise to something like -0.5dB. This will let (really) old CD players read the discs and will allow for overshoots in the D/A converters (often caused by clipped material)."

    Normalising is not compression, nor is it limiting. It merely makes the audio wave louder so that the biggest peak is 0 dB.


    I never said it was. :) But the tools I mention above all have normalising built in.


    Limiting will chop the peak so it can go louder.


    Not exactly. Limiting just limits the level to whatever your settings are. (Assuming the limiter has instant attack.) The getting-louder part is a SECOND process within the plugins (or hardware). This normalising process can be done separately afterwards. It makes no difference. (Except for the points I mention about staying at higher sampling rates/bit depths etc.)

    I think it is confusing to not make the distinction. It isn't because all these modern tools are bundled together that they are just one process. :)


    Limiting and then normalising makes no sense - in that scenario you've used your limiter to crush the wavefile, and then normalised it up - just limit with a reasonable signal going in - if it's still not making 0db then it's set up wrong.


    I think you are confused about the tools. :) Like I said above, you are thinking of limiters that have built-in auto normalising. So they ALWAYS normalise. You just didn't realise it until now. :)


    "If your material is going to be mastered by a pro, don't do anything. Just get as good a mix as possible and don't do any processing on the final mix."

    yup true.

    I hope you don't take offense at this but I remember being confused about the process myself years ago when I used to normalise every audio take right after recording.


    No offense taken. And no confusion on my side. I hope I cleared up some of yours. ;) Sorry if I wasn't clear in my first post.

    UnderTow
    < Message edited by UnderTow -- 11/2/2004 7:42:10 AM >
    #20
    turklet2
    Max Output Level: -89 dBFS
    • Total Posts : 65
    • Joined: 2004/05/31 11:14:24
    • Location: Manchester - UK
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 08:51:30 (permalink)
    Just quickly talking about audible differences in amplitude relating to decibel levels, it was my understanding that the decibel scheme was logarithmic. Also, a 3dB rise in volume will actually double the acoustic level coming out of the speakers (the SPL). For this reason, audible difference is completely dependent on the initial dB level of the signal.


    Funny fact - my mate works for a nightclub. The manager comes in (who thinks he knows it all) and says "what dB level do you think we're currently running in the middle of the dancefloor". My mate, not wanting to give a ballpark figure, says "I really don't know". The manager, in a smug voice, says "Well, I reckon we're running somewhere between 105-120dB". But working on 3dB=Double SPL, 105-120dB is a MASSIVE difference, acoustically.

    Jamie
    #21
    Al
    Max Output Level: -35 dBFS
    • Total Posts : 4047
    • Joined: 2003/11/07 01:03:27
    • Location: NYC
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 09:33:59 (permalink)
    I am not comparing to normalising in this paragraph. I am referring to all the gain changes that compression introduces, and thus the quantisation errors. I am saying that with good tools, the quantisation errors are insignificant compared to the other benefits. (Assuming the compression has benefits on this particular material. That all depends on what you are doing.)


    Good point .. and not just this one - great answers there , UnderTow .. I'm 100% with you
    #22
    UnderTow
    Max Output Level: -37 dBFS
    • Total Posts : 3848
    • Joined: 2004/01/06 12:13:49
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 10:16:29 (permalink)
    ORIGINAL: turklet2

    Just quickly talking about audible differences in amplitude relating to decibel levels, it was my understanding that the decibel scheme was logarithmic. Also, a 3dB rise in volume will actually double the acoustic level coming out of the speakers (the SPL). For this reason, audible difference is completely dependent on the initial dB level of the signal.


    3dB increase is a doubling of power. 6dB is a doubling in output voltage or SPL.
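    Both doublings fall straight out of the decibel definitions (10·log10 for power ratios, 20·log10 for amplitude/voltage ratios); a quick check:

```python
import math

def db_from_power_ratio(r):
    """dB change for a ratio of two powers."""
    return 10 * math.log10(r)

def db_from_amplitude_ratio(r):
    """dB change for a ratio of two amplitudes (voltage, sound pressure)."""
    return 20 * math.log10(r)

print(round(db_from_power_ratio(2), 2))      # 3.01 dB: power doubled
print(round(db_from_amplitude_ratio(2), 2))  # 6.02 dB: amplitude doubled
```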

    And you are right about the initial level of the signal being important. Hence the loudness wars. Record labels, advertisers, radio stations etc. want their CD, ad, or channel to jump out at the audience. Some people say to just turn the volume knob on the amp if you want it louder.

    I like to be somewhere in between. Loud but without clipping or distortion.


    Funny fact - my mate works for a nightclub. The manager comes in (who thinks he knows it all) and says "what dB level do you think we're currently running in the middle of the dancefloor". My mate, not wanting to give a ballpark figure, says "I really don't know". The manager, in a smug voice, says "Well, I reckon we're running somewhere between 105-120dB". But working on 3dB=Double SPL, 105-120dB is a MASSIVE difference, acoustically.

    Jamie


    Yeah ridiculous. But oh well ... he was probably pleased with himself. :)

    UnderTow
    < Message edited by UnderTow -- 11/2/2004 10:23:59 AM >
    #23
    tommydee
    Max Output Level: -81 dBFS
    • Total Posts : 490
    • Joined: 2003/11/05 23:15:54
    • Location: New York City
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 10:34:14 (permalink)
    ORIGINAL: michael japan
    you might want to consider doing a sub-mix of the kick and bass (bounce them down first), really tuning in the EQ, and fattening them up.

    a very decent idea/approach.
    sub-mixes (or "stems") can be invaluable.
    also consider doing a variety of mixes: vox up, vox down, vox where you think it should be.... that way you can come back and assemble a new mix by cutting and pasting the old mixes and/or stems together without having to start all over... saves OOODLES of time.
    #24
    Andy C
    Max Output Level: -65 dBFS
    • Total Posts : 1272
    • Joined: 2003/11/04 10:09:38
    • Location: Scotland
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 10:44:09 (permalink)
    Your audio is in 24 bits, which means it has 16777216 "amplitude slots" for each sample. Or put otherwise, 16777216 possible different levels for each sample. Every time you do a gain change, the chances of the samples falling at exactly the right places are close to nil. So the samples have to be quantised into those 16777216 slots. Those are called quantisation errors. This is audio degradation.

    Ahhh now, this is where *I* think you're wrong. I can only talk from the point of view of the DXi plugins I've worked with, but as far as I can tell, internally Sonar is working with high-precision floating point numbers. So when your audio is created, it may come in at 24-bit, but internally that's represented as a number between -1.0 and 1.0. It's only at output time that it's converted (in Sonar 4 through the wonderful POW-r dithering) back to 24-bit (or 16-bit) quantisation.

    So in your normalisation example you are not introducing any quantisation error since you are working with floating point numbers.

    Andy C
    As ever, this is just my interpretation of what's going on!
    #25
    jardim do mar
    Max Output Level: -66 dBFS
    • Total Posts : 1247
    • Joined: 2003/12/02 06:23:57
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 11:53:26 (permalink)
    Which makes more sense, compression then normalization on the entire track or vise versa?


    Hi Mark,,
    Sonars normalization is basic ,not offering any adjustments,, you should look into sound forge ,, adobe audition or wavelab,, if you are interested in learning more about audio editing,,,,,

    normalization can be used for matching the volumes of two or more audio tracks,you must understand the differences between peak mode and RMS mode/power , using normalization in rms mode will achieve better results ,,, when used wisely,,,, any good audio editor will give you these options plus more to adjust to your needs and a general explanation of the differences.... As a rule I do agree with others about using a dynamic compressor,, but I would'nt rule out normalization. it can be put to good use,,,,,

    I forgot to mention, as always: train your ears, use your ears....
    < Message edited by jardim do mar -- 11/2/2004 12:06:17 PM >

    marcella
    And Remember,,,,One thing at a Time.....
    #26
    UnderTow
    Max Output Level: -37 dBFS
    • Total Posts : 3848
    • Joined: 2004/01/06 12:13:49
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 12:17:16 (permalink)
    ORIGINAL: Andy C

    Your audio is in 24 bits, which means it has 16777216 "amplitude slots" for each sample. Or, put otherwise, 16777216 possible different levels for each sample. Every time you do a gain change, the chance of a sample falling at exactly the right place is close to nil, so the samples have to be quantised into those 16777216 slots. That is called quantisation error. This is audio degradation.


    Ahhh now, this is where *I* think you're wrong.


    Maybe but I don't think so. See below.


    I can only talk from the point of view of the DXi plugins I've worked with, but as far as I can tell, internally SONAR works with high-precision floating-point numbers. So when your audio is created it may come in at 24 bits, but internally that's represented as a number between -1.0 and 1.0. It's only at output time that it's converted (in SONAR 4, through the wonderful POW-r 3 dithering) back to 24-bit (or 16-bit) quantisation.


    AFAIK, floating point helps with headroom, not with quantisation errors. Of those 32 bits, 23 are the mantissa (24 bits of effective resolution with the implied leading bit, the same as 24-bit audio), 8 are the exponent, giving nearly limitless headroom, and 1 is the sign.

    Here is a simplified (decimal) example: you have a sample with the following 16-bit FP value (yeah, I'm lazy ;): 0.10000001 exponent 1. Now halve the volume of that sample. What do you get? 0.050000005, same exponent (1). What do you do with that trailing 5? It is one digit too much for your precision, so it has to be rounded up or down. This rounding up or down is the quantisation error.

    This applies to ANY bit depth. Of course, the higher the bit depth, the smaller the errors, but they don't disappear.
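    A quick pure-Python sketch of this (the sample value and gain are made up, and decimal `round` stands in for what the FPU does in binary): quantise the same gain change at 16 and at 24 bits and compare the errors:

```python
# Sketch: after a gain change the exact result rarely lands on a
# quantisation step, so rounding leaves an error at ANY bit depth.
# The sample value (12345) and gain (0.37) are arbitrary.
def quantisation_error(sample_int, gain, bits):
    full_scale = 2 ** (bits - 1)
    exact = (sample_int / full_scale) * gain              # ideal result
    quantised = round(exact * full_scale) / full_scale    # forced into a slot
    return abs(exact - quantised)

err16 = quantisation_error(12345, 0.37, 16)
err24 = quantisation_error(12345, 0.37, 24)
# err24 is much smaller than err16, but neither is zero.
```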

    One advantage I can see with FP is that some gain calculations will only affect the exponent, thus not affecting the "resolution" of the sample value. I don't know whether this is a significant difference or not.



    So in your normalisation example you are not introducing any quantisation error, since you are working with floating-point numbers.


    This makes no (or nearly no) difference for resolution. See above.


    Andy C
    As ever, this is just my interpretation of what's going on!



    This does bring up an interesting question, though: how is the 64-bit processing in the buses implemented in SONAR? If the resolution is increased, this would argue for doing as many volume changes as possible in the buses and leaving the tracks at 0.

    Maybe someone from Cakewalk can shed some light on this...

    UnderTow
    < Message edited by UnderTow -- 11/2/2004 12:24:34 PM >
    #27
    UnderTow
    Max Output Level: -37 dBFS
    • Total Posts : 3848
    • Joined: 2004/01/06 12:13:49
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 12:23:31 (permalink)
    ORIGINAL: jardim do mar

    Normalization can be used for matching the volumes of two or more audio tracks. You must understand the difference between peak mode and RMS/power mode; using normalization in RMS mode will achieve better results...


    If you use RMS values and normalise to 0 dBFS (or close to it), you will clip the peaks of your audio. Not a good idea. Always use peak values when normalising your final material. (And you shouldn't really be normalising anything else, except maybe for ease of use.) If you want volume, use one of the tools that doesn't hard-clip the audio (L1, L2, L3, Elephant, etc.).
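    To see why RMS normalisation to 0 dBFS clips, here is a small NumPy sketch with synthetic noise standing in for audio (the signal, its level, and the sample count are all made up):

```python
import numpy as np

# Synthetic "audio": Gaussian noise with RMS around 0.1 and peaks well above it.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.1, 48000)

peak = np.max(np.abs(signal))
rms = np.sqrt(np.mean(signal ** 2))

peak_gain = 1.0 / peak   # peak normalisation: the loudest sample hits 1.0
rms_gain = 1.0 / rms     # "RMS normalisation" to 0 dBFS

# Count samples that would be hard-clipped after RMS normalisation.
clipped = int(np.sum(np.abs(signal * rms_gain) > 1.0))
```

    With this signal, thousands of samples end up beyond full scale after RMS normalisation, while peak normalisation never goes past 1.0 by construction.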

    But with all these things, especially things like quantisation errors, take it all with a grain of salt unless you are a high-end engineer working with the best gear and monitoring in a near-ideal acoustic environment. Keep it in the back of your mind, but don't worry TOO much about it.

    UnderTow
    #28
    Andy C
    Max Output Level: -65 dBFS
    • Total Posts : 1272
    • Joined: 2003/11/04 10:09:38
    • Location: Scotland
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 13:25:23 (permalink)
    ORIGINAL: UnderTow
    AFAIK, floating point helps with headroom, not with quantisation errors. Of those 32 bits, 23 are the mantissa (24 bits of effective resolution with the implied leading bit, the same as 24-bit audio), 8 are the exponent, giving nearly limitless headroom, and 1 is the sign.


    Sorry, I may have got this wrong. I think plugins (and hence SONAR) use doubles for their representation of float, so that's an 11-bit exponent and a 52-bit mantissa, much bigger than I suggested.

    Here is a simplified (decimal) example: you have a sample with the following 16-bit FP value (yeah, I'm lazy ;): 0.10000001 exponent 1. Now halve the volume of that sample. What do you get? 0.050000005, same exponent (1).


    Again, I don't think so. Excuse me if I get the maths a little wrong, but your first number should be 0.10000001 Exp 0, and the second 0.50000005 Exp -1. It really depends on the accuracy of the internal FPU or procedure.

    Andy
    #29
    UnderTow
    Max Output Level: -37 dBFS
    • Total Posts : 3848
    • Joined: 2004/01/06 12:13:49
    • Status: offline
    RE: Normalize or Compress First? 2004/11/02 13:40:12 (permalink)
    ORIGINAL: Andy C

    Sorry, I may have got this wrong. I think plugins (and hence SONAR) use doubles for their representation of float, so that's an 11-bit exponent and a 52-bit mantissa, much bigger than I suggested.


    Fair enough. That makes the problem smaller, but it doesn't make it go away (if my understanding is correct).


    Here is a simplified (decimal) example: you have a sample with the following 16-bit FP value (yeah, I'm lazy ;): 0.10000001 exponent 1. Now halve the volume of that sample. What do you get? 0.050000005, same exponent (1).


    Again, I don't think so. Excuse me if I get the maths a little wrong, but your first number should be 0.10000001 Exp 0, and the second 0.50000005 Exp -1. It really depends on the accuracy of the internal FPU or procedure.

    Andy


    Hmmm, interesting. That was a bad example. ;) What if you start with 0.40000001 exp 0 and multiply it by 3? You get 0.120000003 exp 1. That doesn't fit.

    So you might get less distortion, but it doesn't go away altogether, AFAIK. I'll go and check how this is implemented exactly...
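    The decimal examples above stand in for what happens in binary; the same rounding can be demonstrated with real IEEE floats via NumPy (the values here are arbitrary, chosen only because 0.1 is not exactly representable):

```python
import numpy as np

# The exact product of a float32 and 3 can need more mantissa bits than
# float32 has, so the FPU must round it: a small quantisation error.
a = np.float32(0.1)                  # 0.1 is not exactly representable
exact = float(np.float64(a)) * 3.0   # product carried out in float64
stored = float(a * np.float32(3.0))  # product rounded back to 32 bits

error = abs(stored - exact)
# error is tiny (about 2**-27 here) but not zero.
```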

    UnderTow
    < Message edited by UnderTow -- 11/2/2004 1:48:23 PM >
    #30