• Software
  • Vocal Rider vs Compressor (p.2)
2016/09/24 18:02:15
BassDaddy
Raising and lowering phrases, and even single notes, with Melodyne is also a good way to go. There's a good example in the Carlo Libertini/Warren Huart video posted in this Forum.
2016/09/24 19:30:55
cclarry
Compressors were originally invented to squash dynamic range to get recordings to 
"fit" the DR of the media, i.e. records. Their DR, I believe, was something like 45 dB,
coming from tape, where it was higher.
2016/09/24 20:02:22
Jeff Evans
This thread is interesting, and although it covers spoken word, it also applies to vocal tracks.
 
http://forum.cakewalk.com/How-to-bring-up-quieter-parts-of-spoken-dialog-m3445444.aspx
 
This approach is by far the best, and you will get the best vocal sound this way.
2016/09/24 20:28:10
TheSteven
Compressing or limiting a vocal (or any track) just gives you more level consistency on that one track.
It doesn't do anything to keep that track floating on top of a combination of other tracks or a mix that varies dynamically.
Level riding, be it manual or automated in some fashion, is often what it takes for a vocal to stay in balance with a dynamically changing mix.
 
Another technique is to use something that can create an automation curve for the vocal track based on the volume of the mix, for example:
http://www.bluecataudio.com/Products/Product_DPeakMeterPro/
You could probably feed the output directly into a sidechain to control volume, but I haven't tried it.
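The idea above can be sketched in code. This is a minimal plain-Python illustration (not Blue Cat's actual algorithm, and the function names are made up for the sketch): measure per-block RMS of the vocal and the rest of the mix, then derive a clamped gain curve that holds the vocal a fixed number of dB above the mix.

```python
import math

def block_rms_db(samples, block):
    """Per-block RMS level in dBFS (naive sketch; silence guarded)."""
    out = []
    for i in range(0, len(samples), block):
        chunk = samples[i:i + block]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        out.append(20 * math.log10(max(rms, 1e-9)))
    return out

def ride_curve(vocal_db, mix_db, offset_db=3.0, max_cut=6.0, max_boost=6.0):
    """Gain automation (dB per block) that keeps the vocal `offset_db`
    above the rest of the mix, clamped so it never pumps wildly."""
    return [max(-max_cut, min(max_boost, (m + offset_db) - v))
            for v, m in zip(vocal_db, mix_db)]
```

In a real tool the resulting per-block gains would be smoothed and written as a volume envelope on the vocal track; the clamp is there because, as noted later in the thread, fully automatic riding still needs manual sanity limits.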
 
 
2016/09/25 10:14:49
Kamikaze
Along the same lines as gain automation, there was talk, and I think a feature request, for RMS normalising. You could go through each section and RMS-normalise it, and the whole track would feel the same volume before compressing.
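For the record, section-by-section RMS normalising is simple to express. A minimal sketch in plain Python (hypothetical function names; a DAW would do this per clip with proper windowing):

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a mono sample list."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def rms_normalize(section, target_db=-18.0):
    """Scale a section so its RMS lands at `target_db` dBFS."""
    target = 10 ** (target_db / 20)      # dBFS -> linear amplitude
    gain = target / max(rms(section), 1e-9)
    return [s * gain for s in section]
```

Applying this to each section before compressing would make every section measure the same average level, which is exactly the feature being requested.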
2016/09/25 15:39:55
bitflipper
Once upon a time, I believed that RMS normalization was the holy grail.
 
I love fat harmonies that blend so well that you can't even pick out individual voices. (For reference, listen to the chorus on "Sweet Little Lies" by Fleetwood Mac.) For a long time it was a mystery to me how they achieved that. Logically, it made sense that if each voice was RMS-normalized to one another, that would do the trick.
 
So I took a short piece out of a 4-part harmony section and very carefully adjusted their average RMS values - by hand - to within 0.5 dB of one another. Looking back, what I discovered should have been obvious, but nevertheless surprised me at the time. The vocal balance was worse than when I'd just tweaked them by ear. Duh. We don't perceive all frequencies the same, even if they're RMS-matched. A high harmony part might have to be as much as 6 dB below the main part to sound balanced.
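The frequency-dependent hearing that defeated the RMS-matching experiment is well quantified by the standard A-weighting curve, which describes roughly how much quieter or louder the ear perceives a tone relative to 1 kHz. A small sketch (standard IEC 61672 formula, written out by hand here):

```python
import math

def a_weight_db(f):
    """A-weighting in dB at frequency f (Hz), ~0 dB at 1 kHz."""
    f2 = f * f
    ra = (12194.0**2 * f2 * f2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2))
    return 20 * math.log10(ra) + 2.00
```

A low harmony centred around 250 Hz is weighted several dB down while content around 2-4 kHz is weighted up, which is why a high harmony matched by RMS can still sound several dB too loud, consistent with the 6 dB offset found by ear.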
 
I didn't give up, though, but kept on trying stuff and reading about classic techniques. In the intervening years I've figured out that there simply is no way to automate this kind of thing. Turns out, vocal levels are highly subjective, depending on the vocalist's timbre, pitch, even lyrical content. And of course, what's going on behind the vocals that may or may not be masking them. This is why any automated leveling is only ever going to be partly successful, and will never obviate the need for manual tweaks.
 
So how did they do it in "Sweet Little Lies"? Heavy compression, triple-tracking and manually-programmed volume automation. 