Why include frequencies no one hears?

Author
backwoods
Max Output Level: -49.5 dBFS
  • Total Posts : 2571
  • Joined: 2011/03/23 17:24:50
  • Location: South Pacific
  • Status: offline
2013/11/01 21:28:44 (permalink)

Why include frequencies no one hears?

This is for my own education, mostly. After fiddling around with iZotope and the brickwall filters, I got to thinking.
 
Why not brickwall the low frequencies and the high ones, say over 18,000 Hz, that no one hears or cares about on standard playback equipment? Won't we then be able to push the signal higher without distortion for a louder file?
 
Also, why does a high pass at, say, 40 Hz cause a file touching zero to clip?

 
#1


    Danny Danzi
    Moderator
    • Total Posts : 5810
    • Joined: 2006/10/05 13:42:39
    • Location: DanziLand, NJ
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/02 00:45:38 (permalink)
    backwoods
    This is for my own education, mostly. After fiddling around with iZotope and the brickwall filters, I got to thinking.
     
    Why not brickwall the low frequencies and the high ones, say over 18,000 Hz, that no one hears or cares about on standard playback equipment? Won't we then be able to push the signal higher without distortion for a louder file?
     
    Also, why does a high pass at, say, 40 Hz cause a file touching zero to clip?




    Anything under your target low frequency (we can use 40 Hz for this example) should be removed... anything up high that is doing nothing can also be removed. I do it all the time. But some things may show a little activity in those ranges too, so you have to be careful. Some of this will be genre-specific as well.
     
    For example, if we remove everything below 40 Hz in an R&B or rap song, we've just affected any bass drops they may have had in the song.
     
    If we remove 18 kHz and above (which is usually safe to do), bear in mind there are a few plugins out today that accentuate around 22 kHz. I see them as senseless, but some guys swear by them and I have a few clients like that. They genuinely feel that high-end air is making a difference for the better, where to me it's adding hiss that can mess with the audio.
     
    So the style of music you are working on will determine where you high pass and low pass. I think you'd be better off high passing and low passing than brickwalling the stuff. This way you have a little more control over what you allow to pass through naturally, whereas brickwalling it could introduce artifacts.
     
    A file touching 0 dB is clipping in the digital realm unless you are literally reading just under 0 dB. But at a true 0 dB peak, you'll show clip points no matter what you add to the audio unless you are cutting something drastically. Just about anything you bring in will clip a file like that; it may even clip just passing through the plugin without touching a single parameter.
     
    Your best bet is to keep anything un-mastered at -3 dB peak... anything you master, -0.3 and no hotter than -0.1 dB peak. But after something has been mastered, there is a good chance anything you add will make the song clip... even if you're high passing.
     
    Though a peak just under 0 dB doesn't show up as a clip, you shouldn't really go that high if you can help it, as you're just way too close to the clipping zone and chances are you may go over.
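     
    As a quick illustration of the point above (my own sketch, not Danny's actual workflow): apply a gentle high-pass and low-pass and then check how the peak level changed. The 30 Hz and 18 kHz cutoffs, the 2nd-order slopes, and the file name "mix.wav" are all assumptions for the example.
     
        # Minimal sketch: gentle high-pass + low-pass, then compare peak levels.
        # Requires numpy, scipy and the soundfile package; "mix.wav" is hypothetical.
        import numpy as np
        import soundfile as sf
        from scipy.signal import butter, sosfilt

        data, sr = sf.read("mix.wav")            # float samples in [-1.0, 1.0]

        # 2nd-order (12 dB/oct) Butterworth filters: gentler than a brickwall,
        # so fewer artifacts, at the cost of letting a little out-of-band energy by.
        hp = butter(2, 30.0, btype="highpass", fs=sr, output="sos")
        lp = butter(2, 18000.0, btype="lowpass", fs=sr, output="sos")

        filtered = sosfilt(lp, sosfilt(hp, data, axis=0), axis=0)

        def peak_dbfs(x):
            return 20 * np.log10(np.max(np.abs(x)) + 1e-12)

        print(f"peak before: {peak_dbfs(data):6.2f} dBFS")
        print(f"peak after:  {peak_dbfs(filtered):6.2f} dBFS")  # can come out higher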
     
    -Danny

    My Site
    Fractal Audio Endorsed Artist & Beta Tester
    #2
    rumleymusic
    Max Output Level: -60 dBFS
    • Total Posts : 1533
    • Joined: 2006/08/23 18:03:05
    • Location: California
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/02 01:11:50 (permalink)
    It is regular practice to cut low frequencies below the lowest note needed. For flute, for example, you don't need anything below about 190 Hz, so that range can safely be removed.
     
    As for the high frequencies, cutting them will tick off all the audiophiles who have convinced themselves they can hear the higher frequencies, or that leaving the high frequencies in will somehow make the lower partials sound better (which defies all physics). But it is generally safe to leave them in unless they are filled with noise you want to remove. CDs and MP3s limit themselves to a maximum of ~20 kHz anyway.
     
     
     
     

    Daniel Rumley
    Rumley Music and Audio Production
    www.rumleymusic.com
    #3
    The Maillard Reaction
    Max Output Level: 0 dBFS
    • Total Posts : 31918
    • Joined: 2004/07/09 20:02:20
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/02 06:11:56 (permalink)
    "Also, why does a hi pass at say 40hz cause a file touching zero to clip?"
     
    Ringing?
     
    What filter, and filter order or slope are you using?
     
    best regards,
    mike


    #4
    Jeff Evans
    Max Output Level: -24 dBFS
    • Total Posts : 5139
    • Joined: 2009/04/13 18:20:16
    • Location: Ballarat, Australia
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/02 06:26:37 (permalink)
    Also, why does a high pass at, say, 40 Hz cause a file touching zero to clip?  A much more important question: why is the signal touching zero in the first place?
     
    There's an interesting article on gain staging in the September issue of SOS. Although it fails to mention a really good approach to system calibration, one of the main things that comes out of it is that there is no reason to be anywhere near 0 dBFS at any point in your production. The exception is mastering, when a signal can come close to 0 dBFS, but until that point it should not and does not need to.
     
    People just need to keep well away from it and keep everything well under; turn your monitor level up instead.
     
    On another issue, one thing I find helps with keeping your mix clean is to think about the highest frequencies that various parts actually reach, and to put an LPF with the cutoff set just past the highest necessary frequency. It may sound like it is doing nothing on the track, but it all adds up and helps keep unnecessary high-frequency clutter to a minimum. It works down the other end too, as Daniel points out: if something only goes down to 200 Hz, then an HPF set somewhere below that will help to minimise low-frequency clutter. The trick is to ensure these cutoff frequencies don't really impact the sound. In the case of the LPF, the cutoff frequency can sometimes even be set on purpose to roll off some HF and make a track sound smoother and better.
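     
    As a rough sketch of that idea (my own illustration, not Jeff's method): estimate where a track's energy actually stops, then park a low-pass a little above that point. The -60 dB threshold, the 25% margin and the file name are assumptions.
     
        # Find the highest frequency with significant content, then LPF just above it.
        # Requires numpy, scipy, soundfile; "guitar_track.wav" is a hypothetical file.
        import numpy as np
        import soundfile as sf
        from scipy.signal import butter, sosfilt

        data, sr = sf.read("guitar_track.wav")
        mono = data.mean(axis=1) if data.ndim > 1 else data

        spectrum = np.abs(np.fft.rfft(mono))
        freqs = np.fft.rfftfreq(len(mono), d=1 / sr)

        # Highest frequency whose level is within 60 dB of the spectral peak.
        significant = freqs[spectrum > spectrum.max() * 10 ** (-60 / 20)]
        cutoff = min(significant.max() * 1.25, 0.45 * sr)   # a little above, below Nyquist

        sos = butter(2, cutoff, btype="lowpass", fs=sr, output="sos")
        cleaned = sosfilt(sos, mono)
        print(f"highest significant content ~{significant.max():.0f} Hz, LPF at {cutoff:.0f} Hz")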
    post edited by Jeff Evans - 2013/11/02 07:14:05

    Specs i5-2500K 3.5 Ghz - 8 Gb RAM - Win 7 64 bit - ATI Radeon HD6900 Series - RME PCI HDSP9632 - Steinberg Midex 8 Midi interface - Faderport 8- Studio One V4 - iMac 2.5Ghz Core i5 - Sierra 10.12.6 - Focusrite Clarett thunderbolt interface 
     
    Poor minds talk about people, average minds talk about events, great minds talk about ideas -Eleanor Roosevelt
    #5
    backwoods
    Max Output Level: -49.5 dBFS
    • Total Posts : 2571
    • Joined: 2011/03/23 17:24:50
    • Location: South Pacific
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/02 16:13:18 (permalink)
    I was using 0 as an arbitrary figure. Putting any high pass on a file seems to raise the top level. It's just easier to say "a high pass makes a file at 0 clip" than "a high pass makes a file at -3 go higher than -3". I'm trying to understand these things. Just looking at an EQ curve, you would expect the volume to go down with a high pass; not so.
     
    Instead of brickwalling, do you guys sometimes use several high pass filters on the same file?
     
    Here we go: http://www.soundonsound.com/sos/dec05/articles/qa1205_3.htm
    http://www.gearslutz.com/board/4376759-post13.html
    post edited by backwoods - 2013/11/02 16:32:50

     
    #6
    Jeff Evans
    Max Output Level: -24 dBFS
    • Total Posts : 5139
    • Joined: 2009/04/13 18:20:16
    • Location: Ballarat, Australia
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/02 16:50:04 (permalink)
    I think it is a good point that you raised about how levels can go up when using an HPF. My point is that if your gain staging is correct and the average and peak levels are well below 0 dBFS, then it does not matter. As long as you make your limiter the very last stage in the mastering process, an increase in level past it is just never going to happen.
     
    With an HPF (and an LPF, of course) the slope of the filter matters more than using more than one in series. Sometimes it is better to move the cutoff frequency much higher but use a 6 dB/oct slope, and you will get the sound you are after; on other occasions it is better to use a 48 dB/oct slope with the cutoff as low as it can go, so you let as much low-end energy through as you can while slamming off everything under a certain frequency.
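     
    To put some numbers on that trade-off, here is a small sketch of my own, with Butterworth filters standing in for whatever EQ you actually use: a 1st-order filter is roughly 6 dB/oct, an 8th-order one roughly 48 dB/oct, and the cutoff frequencies are just examples.
     
        # Compare a gentle high-pass set higher vs. a steep one set as low as possible.
        import numpy as np
        from scipy.signal import butter, sosfreqz

        fs = 44100
        gentle = butter(1, 120.0, btype="highpass", fs=fs, output="sos")  # ~6 dB/oct
        steep = butter(8, 30.0, btype="highpass", fs=fs, output="sos")    # ~48 dB/oct

        for name, sos in [("gentle @ 120 Hz", gentle), ("steep @ 30 Hz", steep)]:
            w, h = sosfreqz(sos, worN=4096, fs=fs)
            for f in (20, 40, 80, 200):
                i = np.argmin(np.abs(w - f))
                print(f"{name:>15}: {f:4d} Hz -> {20 * np.log10(abs(h[i]) + 1e-12):7.2f} dB")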

    Specs i5-2500K 3.5 Ghz - 8 Gb RAM - Win 7 64 bit - ATI Radeon HD6900 Series - RME PCI HDSP9632 - Steinberg Midex 8 Midi interface - Faderport 8- Studio One V4 - iMac 2.5Ghz Core i5 - Sierra 10.12.6 - Focusrite Clarett thunderbolt interface 
     
    Poor minds talk about people, average minds talk about events, great minds talk about ideas -Eleanor Roosevelt
    #7
    rumleymusic
    Max Output Level: -60 dBFS
    • Total Posts : 1533
    • Joined: 2006/08/23 18:03:05
    • Location: California
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/02 17:58:49 (permalink)
    If your levels are at 0 (which, as Jeff said, they should not be before you add EQ), there is a possibility of the program registering an increase in output even when you cut instead of boost, especially in the bass range, due to phase shift and general calculation errors. You may not be clipping, but the meters think you are. Give yourself plenty of headroom to make adjustments and increase the levels at the final stage.
     
    In a DAW it is perfectly fine to decrease the level of the audio and increase it later. The noise floor is virtually non-existent and there will be no loss of resolution.
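     
    A quick numerical check of that last point, using a synthetic float32 signal rather than real program material: drop it 18 dB for headroom, bring it back up later, and look at the worst-case error.
     
        # Round-trip gain change in 32-bit float: the error is far below audibility.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(-0.5, 0.5, size=48000).astype(np.float32)   # 1 s of "audio"

        y = (x * 10 ** (-18 / 20)).astype(np.float32)   # pull down 18 dB for headroom
        z = (y * 10 ** (+18 / 20)).astype(np.float32)   # restore the gain at the end

        err = np.max(np.abs(z - x))
        print(f"worst-case round-trip error: {20 * np.log10(err + 1e-30):.1f} dBFS")
        # typically lands somewhere below -140 dBFS, far under any real noise floor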

    Daniel Rumley
    Rumley Music and Audio Production
    www.rumleymusic.com
    #8
    The Maillard Reaction
    Max Output Level: 0 dBFS
    • Total Posts : 31918
    • Joined: 2004/07/09 20:02:20
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/02 18:29:07 (permalink)
    backwoods
    Putting any high pass on a file seems to raise the top level.



    This is an example illustrating "ringing" in an EQ filter. The GUI doesn't identify the filter design or the filter order.
     

     
    The "Q" is reported as 8.14, which is very steep. (3 dB/octave?) If that "Q" were below 1.4 it would be far less steep and probably would not have any ringing.
     
     
    I don't have a great understanding of EQ math, so I will not pretend to be able to answer your question. If you want to understand why high-pass, low-pass, or shelving EQ filters "ring" or resonate when they are set to high "Q" factors with steep slopes, you'll have to learn about EQ filter math and the various types of EQ filter designs.
     
    I generally try to avoid "ringing" by using high-pass, low-pass, or shelf filters with low "Q" factors and gentle slopes, but sometimes I exploit the ringing by choosing appropriate settings.
     
    Good luck learning about the math. There are some folks here who seem to know it very well. I usually glaze over every time I repeat the process of endeavoring to learn it.
     
    When you really need a steep slope without ringing, stacking multiple gentler instances can be an effective solution.
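     
    For anyone who wants to see the overshoot in numbers rather than math, here is a toy example (my own choices of filter type, order and test signal, not anything from the screenshot): a full-scale square wave is high-passed at 40 Hz, and the output peaks land above 0 dBFS because the filter shifts phase and rings near its cutoff.
     
        # High-passing a 0 dBFS square wave pushes the output peaks over 0 dBFS.
        import numpy as np
        from scipy.signal import butter, sosfilt, square

        fs = 48000
        t = np.arange(fs) / fs                    # one second
        x = square(2 * np.pi * 100 * t)           # 100 Hz square wave at exactly 0 dBFS

        for order in (1, 8):                      # roughly 6 dB/oct vs 48 dB/oct
            sos = butter(order, 40.0, btype="highpass", fs=fs, output="sos")
            y = sosfilt(sos, x)
            print(f"order {order}: output peak = {20 * np.log10(np.max(np.abs(y))):+.2f} dBFS")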
     
     best regards,
    mike
     
     
    post edited by mike_mccue - 2013/11/02 18:34:13


    #9
    bitflipper
    01100010 01101001 01110100 01100110 01101100 01101
    • Total Posts : 26036
    • Joined: 2006/09/17 11:23:23
    • Location: Everett, WA USA
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/02 20:13:29 (permalink)
    Don't worry about the math. Just remember that steep slopes should generally be avoided, unless you actually want the effect Mike showed above. And sometimes you do: it can be a cool effect. Try it on kick drums. But for mastering, probably not, which is why linear-phase filters are used for that application.
     
    Cutting very low frequencies is common practice in mastering, although 40 Hz is a tad high for most genres; 30 Hz or so is a more reasonable target. The thing is, even though you can't hear those frequencies (and your listener's playback system can't reproduce them), your dynamics processors can hear them just fine. Excessive energy down there can affect bus compressors and limiters in unexpected ways.
     
    As for limiting the extreme high end, the only reason I can think of for doing that might be to reduce the likelihood of aliasing when played back as a wave file on a really cheap player. I wouldn't bother, though; if they're that cheap, the heck with 'em. On the other hand, nobody will miss those frequencies, and they're going to get tossed anyway if you encode to MP3. But generally they're not going to cause any harm, and as Daniel noted above, if you do cut them some audiophile may hook up a spectrum analyzer and then send you an angry letter.
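     
    One offline way to get a phase-free cut, as a rough stand-in for the linear-phase filters mentioned above (not bitflipper's actual tool), is forward-backward filtering, which cancels the filter's phase shift at the cost of doubling the effective slope. The 30 Hz cutoff and file names are assumptions.
     
        # Zero-phase 30 Hz high-pass via forward-backward filtering.
        import soundfile as sf
        from scipy.signal import butter, sosfiltfilt

        data, sr = sf.read("premaster.wav")                     # hypothetical input
        sos = butter(2, 30.0, btype="highpass", fs=sr, output="sos")
        cleaned = sosfiltfilt(sos, data, axis=0)                # no phase shift
        sf.write("premaster_hp30.wav", cleaned, sr)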


    All else is in doubt, so this is the truth I cling to. 

    My Stuff
    #10
    wst3
    Max Output Level: -55.5 dBFS
    • Total Posts : 1979
    • Joined: 2003/11/04 10:28:11
    • Location: Pottstown, PA 19464
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/07 20:55:16 (permalink)
    There is a lot we still don't understand about how humans process what they hear. There is a lot of silliness in the audiophile world, but keep in mind they are the ones who pushed audio pros to do better. And it turns out that sometimes they are right.

    Extended bandwidth, especially on the high end, seems to be the hallmark of most of the revered vintage gear. Now, the truth is the upper limit of the passband was out around 40 kHz, or even 80 kHz, because they couldn't build filters with steeper slopes that didn't ring terribly. BUT it turns out that those gentle slopes may have contributed more to the sound we like than we realize.
     
    I designed a microphone preamplifier a while back. I used all discrete, Class A gain elements, no negative feedback, precision current sources, and multiple blocks of moderate gain (yeah, it should sound familiar <G>!). But I was designing this to have a digital output, so I spent a great deal of time playing with the low-pass filter at the output of the preamplifier. Actually, I tried placing the LP filter at different points, including as the very first stage. Everyone who listened to this design agreed that it sounded better with no LP filter. As it turns out, the modern switched-capacitor filters on the input of most A/D converters do a very nice job of band-limiting somewhere below the Nyquist frequency, so it wasn't as big a deal as I expected.
     
    So what do we do about large multi-track productions where 20, 30, 40 or more tracks will contribute significant energy in the upper registers, and most of that energy will be noise? I tend to use gentle filters (even though there are digital filter configurations that do not cause phase problems and don't ring), and I still tend to place them at least an octave above where I might think they need to be. And I place them on the individual tracks, because once the tracks are summed together it's a lot more difficult to address noise problems.
     
    As far as the low end goes, if the source is electronic I don't use high-pass filters... there isn't going to be any energy there anyway. If the source is acoustic then I will place an HP filter early in the recording chain.
     
    One important caveat, which several folks have pointed out, is that NONE of this is done to gain an advantage in terms of levels. It is done entirely to manage the noise contributions. I still think that -18 dBFS is a fine operating point!!!!

    -- Bill
    Audio Enterprise
    KB3KJF
    #11
    Rimshot
    Max Output Level: -29 dBFS
    • Total Posts : 4625
    • Joined: 2010/12/09 12:51:08
    • Location: California
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/08 00:15:01 (permalink)
    Nice thread.

    Rimshot 

    Sonar Platinum 64 (Lifer), Studio One V3.5, Notion 6, Steinberg UR44, Zoom R24, Purrrfect Audio Pro Studio DAW (Case: Silent Mid Tower, Power Supply: 600w quiet, Haswell CPU: i7 4790k @ 4.4GHz (8 threads), RAM: 16GB DDR3/1600 
    , OS drive: 1TB HD, Audio drive: 1TB HD), Windows 10 x64 Anniversary, Equator D5 monitors, Faderport, FP8, Akai MPK261
    #12
    quantumeffect
    Max Output Level: -47.5 dBFS
    • Total Posts : 2771
    • Joined: 2007/07/22 21:29:42
    • Location: Minnesota
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/08 02:31:36 (permalink)


    Dave

    8.5 PE 64, i7 Studio Cat, Delta 1010, GMS and Ludwig Drums, Paiste Cymbals

    "Everyone knows rock n' roll attained perfection in 1974. It's a scientific fact." H. Simpson

    "His chops are too righteous."  Plankton during Sponge Bob's guitar solo 
    #13
    quantumeffect
    Max Output Level: -47.5 dBFS
    • Total Posts : 2771
    • Joined: 2007/07/22 21:29:42
    • Location: Minnesota
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/08 02:56:16 (permalink)
    Always loved the above cartoon from many years ago.  Unfortunately I was unable to find a higher resolution copy of it for you ... If you can't read the sign in the window it says:
     
    Senior Citizens!
    Why pay for useless thrills?
    Buy this 1200-5800 Hz
    Speaker System and Save!!!

    Dave

    8.5 PE 64, i7 Studio Cat, Delta 1010, GMS and Ludwig Drums, Paiste Cymbals

    "Everyone knows rock n' roll attained perfection in 1974. It's a scientific fact." H. Simpson

    "His chops are too righteous."  Plankton during Sponge Bob's guitar solo 
    #14
    bitflipper
    01100010 01101001 01110100 01100110 01101100 01101
    • Total Posts : 26036
    • Joined: 2006/09/17 11:23:23
    • Location: Everett, WA USA
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/08 15:50:20 (permalink)
    Good one, Dave. Of course, when today's teens become senior citizens they won't need the Senior Special. They'll have been conditioned by years of listening to plastic earbuds to have low expectations. The sign might read "these speakers sound just like your earbuds, but more than two people at a time can listen to them!"
     
     


    All else is in doubt, so this is the truth I cling to. 

    My Stuff
    #15
    BenMMusTech
    Max Output Level: -49 dBFS
    • Total Posts : 2606
    • Joined: 2011/05/23 16:59:57
    • Location: Warragul, Victoria-Australia
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/08 18:30:11 (permalink)
    Danny Danzi
    backwoods
    This is for my own education, mostly. After fiddling around with iZotope and the brickwall filters, I got to thinking.
     
    Why not brickwall the low frequencies and the high ones, say over 18,000 Hz, that no one hears or cares about on standard playback equipment? Won't we then be able to push the signal higher without distortion for a louder file?
     
    Also, why does a high pass at, say, 40 Hz cause a file touching zero to clip?




    Anything under your target low frequency (we can use 40 Hz for this example) should be removed... anything up high that is doing nothing can also be removed. I do it all the time. But some things may show a little activity in those ranges too, so you have to be careful. Some of this will be genre-specific as well.
     
    For example, if we remove everything below 40 Hz in an R&B or rap song, we've just affected any bass drops they may have had in the song.
     
    If we remove 18 kHz and above (which is usually safe to do), bear in mind there are a few plugins out today that accentuate around 22 kHz. I see them as senseless, but some guys swear by them and I have a few clients like that. They genuinely feel that high-end air is making a difference for the better, where to me it's adding hiss that can mess with the audio.
     
    So the style of music you are working on will determine where you high pass and low pass. I think you'd be better off high passing and low passing than brickwalling the stuff. This way you have a little more control over what you allow to pass through naturally, whereas brickwalling it could introduce artifacts.
     
    A file touching 0 dB is clipping in the digital realm unless you are literally reading just under 0 dB. But at a true 0 dB peak, you'll show clip points no matter what you add to the audio unless you are cutting something drastically. Just about anything you bring in will clip a file like that; it may even clip just passing through the plugin without touching a single parameter.
     
    Your best bet is to keep anything un-mastered at -3 dB peak... anything you master, -0.3 and no hotter than -0.1 dB peak. But after something has been mastered, there is a good chance anything you add will make the song clip... even if you're high passing.
     
    Though a peak just under 0 dB doesn't show up as a clip, you shouldn't really go that high if you can help it, as you're just way too close to the clipping zone and chances are you may go over.
     
    -Danny


    Hi Danny, just a couple of things about the above statement: -0.2 dB is now the worldwide mastering standard, and get this: mastering for I-Fools, it's -1 dB. I'd have to hunt for the article regarding mastering for I-Fools, but apparently that's the level. Sorry for going off topic.
     
    Ben

    Benjamin Phillips-Bachelor of Creative Technology (Sound and Audio Production), (Hons) Sonic Arts, MMusTech (Master of Music Technology), M.Phil (Fine Art)
    http://1331.space/
    https://thedigitalartist.bandcamp.com/
    http://soundcloud.com/aaudiomystiks
    #16
    bitflipper
    01100010 01101001 01110100 01100110 01101100 01101
    • Total Posts : 26036
    • Joined: 2006/09/17 11:23:23
    • Location: Everett, WA USA
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/09 12:02:46 (permalink)
    -0.2 dB is now the worldwide mastering standard...

    Can you cite an authoritative reference to support that statement?
     
    AFAIK, there is no standards organization for mastering, but -0.2 dB could still be a de facto standard by consensus. I've only checked a small handful of recent releases personally, so I wouldn't know.
     
    However, I do have a great many CDs that peak well under -1.0 dB, so if this is indeed the worldwide standard then it must be a recent development. Or was your statement perhaps aimed at a specific musical genre or market segment?


    All else is in doubt, so this is the truth I cling to. 

    My Stuff
    #17
    rumleymusic
    Max Output Level: -60 dBFS
    • Total Posts : 1533
    • Joined: 2006/08/23 18:03:05
    • Location: California
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/09 12:30:13 (permalink)
    Well, -0.3 used to be the "standard", for no other reason than it was the default preset in the widely used Waves L1 Ultramaximizer. I don't know how it slowly crept up to -0.2, but that seems to be a consensus for no other reason than popular use. Many older discs and independent projects don't follow the popular consensus. For my own classical work I sometimes won't get anywhere near the peak, but that is because I master to average levels for the entire disc first and foremost.

    Daniel Rumley
    Rumley Music and Audio Production
    www.rumleymusic.com
    #18
    Grem
    Max Output Level: -19.5 dBFS
    • Total Posts : 5562
    • Joined: 2005/06/28 09:26:32
    • Location: Baton Rouge Area
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/09 14:29:53 (permalink)
    Rimshot
    Nice thread.


    Yep.

    Grem

    Michael
     
    Music PC
    i7 2600K; 64gb Ram; 3 256gb SSD, System, Samples, Audio; 1TB & 2TB Project Storage; 2TB system BkUp; RME FireFace 400; Win 10 Pro 64; CWbBL 64, 
    Home PC
    AMD FX 6300; 8gb Ram; 256 SSD sys; 2TB audio/samples; Realtek WASAPI; Win 10 Home 64; CWbBL 64 
    Surface Pro 3
    Win 10  i7 8gb RAM; CWbBL 64
    #19
    BenMMusTech
    Max Output Level: -49 dBFS
    • Total Posts : 2606
    • Joined: 2011/05/23 16:59:57
    • Location: Warragul, Victoria-Australia
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/09 16:00:50 (permalink)
    bitflipper
    -0.2 dB is now the worldwide mastering standard...

    Can you cite an authoritative reference to support that statement?
     
    AFAIK, there is no standards organization for mastering, but -0.2 dB could still be a de facto standard by consensus. I've only checked a small handful of recent releases personally, so I wouldn't know.
     
    However, I do have a great many CDs that peak well under -1.0 dB, so if this is indeed the worldwide standard then it must be a recent development. Or was your statement perhaps aimed at a specific musical genre or market segment?


    Hi Bit, it's what's taught in all the audio schools these days.

    Benjamin Phillips-Bachelor of Creative Technology (Sound and Audio Production), (Hons) Sonic Arts, MMusTech (Master of Music Technology), M.Phil (Fine Art)
    http://1331.space/
    https://thedigitalartist.bandcamp.com/
    http://soundcloud.com/aaudiomystiks
    #20
    Danny Danzi
    Moderator
    • Total Posts : 5810
    • Joined: 2006/10/05 13:42:39
    • Location: DanziLand, NJ
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/09 16:03:50 (permalink)
    bitflipper
    -0.2 dB is now the worldwide mastering standard...

    Can you cite an authoritative reference to support that statement?
     
    AFAIK, there is no standards organization for mastering, but -0.2 dB could still be a de facto standard by consensus. I've only checked a small handful of recent releases personally, so I wouldn't know.
     
    However, I do have a great many CDs that peak well under -1.0 dB, so if this is indeed the worldwide standard then it must be a recent development. Or was your statement perhaps aimed at a specific musical genre or market segment?




    I was wondering the same thing, bit, or thought maybe it was "aimed" at me just because. In my experience though, even if there were some sort of "etched in stone" value stated in the "constitution of mastering" handbook, I'd still say use what works for a particular project regardless of what setting it ends up being. For me, nothing hotter than -0.1 and nothing lower than -0.3 dB peak is what I prefer for most projects. However, there are times when I'll close my eyes and set something by how it sounds. Wherever it ends up, it ends up. It depends on the genre, how the song was recorded, mixed and handled, and of course what limiter I use.
     
    I have an album here that David Rosenthal was involved with in the '90s. It's by far the loudest album I have ever heard from start to finish; it's even louder than Metallica's St. Anger. For some reason, this album that David did sounds great even though it's insanely loud. I wouldn't have even attempted to make such a fine album so loud, but I must say... it's definitely a loud one done right. It even shows clip points yet doesn't SOUND clipped. Go figure.
     
    -Danny

    My Site
    Fractal Audio Endorsed Artist & Beta Tester
    #21
    bitflipper
    01100010 01101001 01110100 01100110 01101100 01101
    • Total Posts : 26036
    • Joined: 2006/09/17 11:23:23
    • Location: Everett, WA USA
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/10 12:01:25 (permalink)
    It strikes me as silly to think that a difference of 0.1 dB will make an audible difference in any context.
     
    For that matter, 1.0 dB usually won't make a noticeable difference when you're talking about peak values in reasonably dynamic music. Try it yourself: make two mixes of the same song, normalized to the same RMS but with peaks limited at -1.0 dB in one and -0.1 dB in the other, and see if you can distinguish the two in a blind test.
     
    You may or may not be able to; it depends on how dynamic the mix was to begin with. If you started with a dynamic range of 4 dB, then that single decibel represents 25% of your range. But if you have a crest factor of, say, 14 dB, it's far less likely you'll notice if peaks are increased or decreased by 1 dB.
     
    Perhaps all this will eventually become moot when EBU R128 is universally adopted. I know I'd have to remaster most of my own stuff to meet the new broadcast requirements, and my stuff is far from squashed, at least by current conventions. (Want to have your eyes opened? Get an EBU loudness meter, check your material, and see how much of it would be flat-out rejected for European broadcast.)
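     
    If you want to try the experiment above, the crest factor (peak level minus RMS level) is easy to check; here is a minimal sketch, with "mix.wav" standing in for your own file.
     
        # Crest factor = peak (dBFS) minus RMS (dBFS).
        import numpy as np
        import soundfile as sf

        data, sr = sf.read("mix.wav")
        mono = data.mean(axis=1) if data.ndim > 1 else data

        peak_db = 20 * np.log10(np.max(np.abs(mono)) + 1e-12)
        rms_db = 20 * np.log10(np.sqrt(np.mean(mono ** 2)) + 1e-12)
        print(f"peak {peak_db:6.2f} dBFS, RMS {rms_db:6.2f} dBFS, "
              f"crest {peak_db - rms_db:5.2f} dB")   # ~14 dB dynamic, ~4 dB squashed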


    All else is in doubt, so this is the truth I cling to. 

    My Stuff
    #22
    Grem
    Max Output Level: -19.5 dBFS
    • Total Posts : 5562
    • Joined: 2005/06/28 09:26:32
    • Location: Baton Rouge Area
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/10 13:03:48 (permalink)
    bitflipper
    Get an EBU loudness meter, check your material and see how much of it would be flat-out rejected for European broadcast.


    Why would they reject it?

    Grem

    Michael
     
    Music PC
    i7 2600K; 64gb Ram; 3 256gb SSD, System, Samples, Audio; 1TB & 2TB Project Storage; 2TB system BkUp; RME FireFace 400; Win 10 Pro 64; CWbBL 64, 
    Home PC
    AMD FX 6300; 8gb Ram; 256 SSD sys; 2TB audio/samples; Realtek WASAPI; Win 10 Home 64; CWbBL 64 
    Surface Pro 3
    Win 10  i7 8gb RAM; CWbBL 64
    #23
    The Maillard Reaction
    Max Output Level: 0 dBFS
    • Total Posts : 31918
    • Joined: 2004/07/09 20:02:20
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/11 08:05:21 (permalink)
    Much of it is too loud.


    #24
    bitflipper
    01100010 01101001 01110100 01100110 01101100 01101
    • Total Posts : 26036
    • Joined: 2006/09/17 11:23:23
    • Location: Everett, WA USA
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/11 10:03:47 (permalink)
    I am told that sound for television is particularly picky about EBU levels, and I have heard stories about people's submissions being kicked back when they were merely close to the limit and did not actually exceed it.


    All else is in doubt, so this is the truth I cling to. 

    My Stuff
    #25
    rumleymusic
    Max Output Level: -60 dBFS
    • Total Posts : 1533
    • Joined: 2006/08/23 18:03:05
    • Location: California
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/12 14:02:33 (permalink)
    I work in radio broadcast full time. We usually expect varying degrees of loudness in music submissions, and it is my job in the production department to bring the loudness to spec. Too loud and it will sound just awful on the air after further pre-transmitter compression; too soft and the audience gets upset at having to turn up their dials (gasp!). At any rate, it is good to know the target. But don't worry too much if your music isn't at the correct specs for broadcast; I still need to earn a paycheck.

    Daniel Rumley
    Rumley Music and Audio Production
    www.rumleymusic.com
    #26
    Danny Danzi
    Moderator
    • Total Posts : 5810
    • Joined: 2006/10/05 13:42:39
    • Location: DanziLand, NJ
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/12 14:21:14 (permalink)
    rumleymusic
    I work in radio broadcast full time. We usually expect varying degrees of loudness in music submissions, and it is my job in the production department to bring the loudness to spec. Too loud and it will sound just awful on the air after further pre-transmitter compression; too soft and the audience gets upset at having to turn up their dials (gasp!). At any rate, it is good to know the target. But don't worry too much if your music isn't at the correct specs for broadcast; I still need to earn a paycheck.




    Daniel, just curious... can you tell me what "spec" actually is where you are? And is it different for each place that has a guy like you taking care of that stuff? For example, does WXYZ have a different spec than WVFN?
     
    Next, one of my friends was the creative services program director in Philly at the old Y-100 station. Basically, he did all the voice work, all the commercials, little jingles, all the weird sounds for station changes, etc., and of course he had to bring things "to spec". This was a long time ago, from the late '90s to the mid-2000s... but if you were doing what you do now during those times, has this "spec" changed from then to today, and if so, how much of a difference is there? Thanks in advance.
     
    In my experience, when getting things ready for radio for my clients, I do my best not to squash anything because I know the station will do quite a bit of that. I have a few limiter settings that simulate what *could* go on at a station, so I sort of use those as my guides. Of course none of them are probably correct, but it shows me how something too squashed on my end could sound like absolute dog crap on a radio station if I hammer the limiter too much.
     
    That said, with the stuff I do at my studio, I can get away with hammering if I need to and the end result isn't bad at all. It depends who records the material, in my opinion. A guy who may not have a grasp on how to really record or mix something is not going to get the same results as a guy who has a clue. That reminds me of a good quote I came up with for a student the other day... lol!
     
    "The more we try to process a turd, the more it tries to find its way back to the sewer"
     
    I find the same to be true trying to master something that isn't recorded very well from the start. Or you get a client who says "it has to be louder... no, louder still... no, it's still not loud enough!" What they fail to realize is that you don't just grab something and master it loud. It has to be MIXED to be mastered loud. It has to be recorded in a good way (notice I said "good" and not great) for the loudness to come across the right way. But people don't get it. Anyway... sorry, I drifted a bit there.
     
    Like I mentioned before in my post, whatever sounds best to me without showing or letting me HEAR clip points is what I use on a project. If I smash a limiter to -7 dB and it works for that material, so be it. I try not to, but anyone who knows anything about this field knows that you can end up with some really freaky things going on at the end of the day. Stuff you wouldn't normally be down with, you know? Whatever works, makes the client sound great, and doesn't make you ashamed to have your name on the project as the engineer is what is best in MY opinion. :)
     
    -Danny

    My Site
    Fractal Audio Endorsed Artist & Beta Tester
    #27
    The Maillard Reaction
    Max Output Level: 0 dBFS
    • Total Posts : 31918
    • Joined: 2004/07/09 20:02:20
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/12 14:34:49 (permalink)
    In the USA, radio is still mostly analog transmission, so when all is said and done the FCC has the final word on that. They don't want your licensed broadcasts stepping on other entities' licensed broadcasts.
     
    In the USA, TV is now mostly digital transmission so, for the most part, the loudness levels are regulated in house to set some perceived standard of quality that the broadcaster wants to be known for.
     
    best regards,
    mike


    #28
    rumleymusic
    Max Output Level: -60 dBFS
    • Total Posts : 1533
    • Joined: 2006/08/23 18:03:05
    • Location: California
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/12 18:07:49 (permalink)
    The ITU-R BS.1770-2 measurement standard is what most stations aim for: usually around -14 dB RMS before it goes out, but that depends on the settings of the broadcast audio processor. (At my station it is closer to -20 dB RMS, since we are classical and need as much dynamics as possible.) This is mainly for dialogue, but music on radio needs to be matched dynamically as well. I doubt most stations have more than 5-6 dB of dynamic range by the time it reaches your car. Pre-processor digital and online streaming make the need for in-house standards imperative, as Mike said. Everything goes into the computer now, of course; the days of tape, carts, and even CDs are mostly over, which makes control over levels a much easier task.
     
    Many popular-music stations, though, will not bother with levels on their music, much of which goes straight from online content sources into their automation systems. They can rely on their preset processors to squash the audio down to an acceptable level relative to the talking, even before online streaming or HD transmitters. If you want to submit music that will sound good on radio, make sure you adhere to the -14 dB RMS target, or it could end up a muddy mess and sound nothing like your mix once it reaches your listeners. My advice, anyway.
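     
    As a rough sanity check against that -14 dB RMS target (my own sketch; note it uses plain unweighted RMS, whereas the BS.1770 measurement Daniel names adds K-weighting and gating, so treat it only as a ballpark, and "single.wav" is a hypothetical file):
     
        # How far is this master from -14 dB RMS, and where do the peaks land afterwards?
        import numpy as np
        import soundfile as sf

        data, sr = sf.read("single.wav")
        mono = data.mean(axis=1) if data.ndim > 1 else data

        rms_db = 20 * np.log10(np.sqrt(np.mean(mono ** 2)) + 1e-12)
        gain_db = -14.0 - rms_db                               # gain needed to hit -14 dB RMS
        peak_db = 20 * np.log10(np.max(np.abs(mono * 10 ** (gain_db / 20))) + 1e-12)

        print(f"current RMS {rms_db:.1f} dBFS, gain to target {gain_db:+.1f} dB, "
              f"resulting peak {peak_db:.1f} dBFS")
        # if the resulting peak is above 0 dBFS, a limiter (last in the chain) is needed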
     
     

    Daniel Rumley
    Rumley Music and Audio Production
    www.rumleymusic.com
    #29
    Jeff Evans
    Max Output Level: -24 dBFS
    • Total Posts : 5139
    • Joined: 2009/04/13 18:20:16
    • Location: Ballarat, Australia
    • Status: offline
    Re: Why include frequencies no one hears? 2013/11/12 18:22:28 (permalink)
    Another reason why the K-system approach to production is just so good: -14 dB RMS is one of the reference levels, as is -20 dB RMS, with -12 dB RMS being the third. -14 is such a nice level; it is reasonably loud but still retains quite a lot of dynamics and transients. (That goes for a lot of music genres, and I agree -20 is nice for classical.)
     
    The great thing about -14 is that you can easily reach it without any mastering (of the loudness-wars kind, that is) required. Of course you can still apply mastering processes such as EQ and compression, but the limiter is not needed under those conditions.
     
    All the more reason as well to get more VU meters into your DAWs and start using them.

    Specs i5-2500K 3.5 Ghz - 8 Gb RAM - Win 7 64 bit - ATI Radeon HD6900 Series - RME PCI HDSP9632 - Steinberg Midex 8 Midi interface - Faderport 8- Studio One V4 - iMac 2.5Ghz Core i5 - Sierra 10.12.6 - Focusrite Clarett thunderbolt interface 
     
    Poor minds talk about people, average minds talk about events, great minds talk about ideas -Eleanor Roosevelt
    #30