“Get Rid of the Egg Shell First” Technique
2017/10/12 16:15:50
RobWS
I don’t have a lot of mixing experience yet, but after a couple of completed projects I found myself with a routine early in the mix process.
 
I go through each recorded track one at a time and open the QuadCurve EQ before any other plugins.  I do the narrow-Q boost sweep technique and listen for problem frequencies.  Once I determine the bad frequencies, I'll do a cut of whatever amount seems to make a difference.
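If it helps to picture the move outside the DAW, here's a rough offline sketch of the same idea using the standard RBJ "cookbook" peaking biquad. The frequencies, gains, and Q values below are just examples; in practice the sweep is done by ear while the track plays.

```python
# Rough sketch of "boost-sweep, then cut" with an RBJ cookbook peaking EQ.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """RBJ cookbook peaking-EQ biquad coefficients (normalized)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
track = np.random.randn(fs * 2)                  # stand-in for a recorded track

# 1) Sweep a narrow, high-Q boost and listen for the ugly spots.
for f0 in (200, 400, 800, 1600, 3200):
    b, a = peaking_eq(fs, f0, gain_db=12, q=8)
    boosted = lfilter(b, a, track)               # audition this band

# 2) Once a problem frequency is found, cut it by whatever amount helps.
b, a = peaking_eq(fs, 950, gain_db=-6, q=4)      # e.g. a 6 dB dip around 950 Hz
cleaned = lfilter(b, a, track)
```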
 
That technique is nothing new, but doing it first makes sense to me.  Why run a track through other plugins while it still contains frequencies that should be removed?  Doing the cuts first also starts to carve out a bit of space for each track and lessens the chances of frequency masking.
 
So what's with the egg shell reference in the title of the post?  Say I have all of my ingredients out and ready to combine in a bowl, and I crack the eggs in first.  If a piece of shell ends up in the bowl, I'd rather get it out before adding the other ingredients.  Why leave something in the bowl that doesn't belong in the mix and then pour in all of the other ingredients (plugins)?  Let me get rid of it first.  Is my analogy accurate?
 
Give me your opinion.
2017/10/12 16:59:20
bitflipper
The analogy doesn't work because in cooking you assume the egg - less the shell - is a complete ingredient that requires no further refinement. You already know in advance what its role will be, e.g. as a binding agent.
 
Once upon a time, in the early days of recording, engineers applied no EQ or minimal EQ. They worked on getting the sound they wanted up front. But they were also typically recording the whole band at once, so every component was heard in context even before capturing it. They'd position microphones and instruments in such a way that the mix and the tonal balance were already right. When balance tweaks were needed, such as bringing up an instrument for a solo, they'd ride the faders in real time while the tape (or disk cutter) was rolling.
 
Then along came multi-track recording, which allowed productions to be built up one track at a time. You could now wait until you'd gotten the perfect drum track before moving on to the next instrument, and build an entire backing track before adding vocals. The vocals might consist of multiple parts performed by the same singer.
 
But as you add more and more layers, conflicts become apparent that require changes to mix levels and EQ, or even changes to the song arrangement. Consequently, nowadays we tend to not apply a lot of processing up front because we know that will only restrict our options later on. Plus you just don't really know how an instrument will fit into the mix until you hear it in context. 
 
 
2017/10/12 20:32:27
mettelus
For any signal chain, that premise is accurate. Do not pass anything forward that isn't to be processed, and know what each process does so you can order them properly.

As for eggshells... they add crunch to pudding and cakes... a must for some guitarists.
2017/10/12 23:28:31
RobWS
My thought process behind this post is similar to the EQ-first-or-compression-first debate; I've seen other posts in the Cakewalk forum discussing that topic.  My two songs produced in Sonar so far have both had a problem with the vocal tracks: an unpleasant "wooly" sound (for lack of a better description) in the 800-1200 Hz region.  My guess was that it would be better to dip some EQ in that region before compression, which is what I tried.  By the way, I've used two different mics but got the same results, so I'm guessing the off-sound must be coming from the room I record in.  Maybe moving my vocal mic into another bedroom will be a good experiment for my next project.
 
"They'd position microphones and instruments in such a way that the mix and the tonal balance was already right."  Bitflipper, that reality of recording in days gone by always demands great respect from me.  You have to possess tremendously trained ears to know when it sounded right.  We now have seemingly endless options with DAWs recording seemingly endless tracks. So yes, you just don't really know how an instrument will fit into the mix until you hear it in context.  And I'm no J.S. Bach.
 
I have watched Warren Huart process a vocal track with EQ after compression, into another EQ, into more compression, into more and more plugins, etc.  It's fascinating to watch such a long chain of processing.  Since I'm pretty much a beginner I'll keep it simple for now, but it's good to learn from those who have been doing it for years.
 
A few months ago I heard Rob Mayzes state that all the recording knowledge won't do much good if you don't start applying it.  So I'll keep working at it until I find what works well.  Thanks for all your input to guide me along the way.


2017/10/13 13:24:03
bitflipper
The rules for processor precedence are complicated and dependent on the specific context.
 
Broadly speaking, processors can be divided into two categories: those that add content and those that remove content. As a (very) general rule, you should apply effects that add content first and things that remove content after. For example, you'd normally place a distortion effect before a chorus or flanger. There are exceptions, of course, e.g. noise gates usually precede delays and reverbs.
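A quick way to convince yourself the ordering matters: in this little Python toy (my own illustration, not any particular plugin), a tanh "distortion" adds harmonics and a low-pass filter removes content, so swapping the two gives a different result.

```python
# Order-matters demo: "add content" (distortion) vs. "remove content" (LPF).
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
t = np.arange(fs) / fs
tone = 0.8 * np.sin(2 * np.pi * 220 * t)                  # 220 Hz test tone

sos = butter(4, 2000, btype="low", fs=fs, output="sos")   # removes content above 2 kHz
distort = np.tanh                                         # adds harmonic content

a = sosfilt(sos, distort(tone))   # add first, then remove: new harmonics get filtered
b = distort(sosfilt(sos, tone))   # remove first, then add: distortion re-creates highs afterwards

freqs = np.fft.rfftfreq(len(tone), 1 / fs)
hi = freqs > 2000
spec_a = np.abs(np.fft.rfft(a))
spec_b = np.abs(np.fft.rfft(b))
print(f"energy above 2 kHz: distort->LPF {spec_a[hi].sum():.1f} vs LPF->distort {spec_b[hi].sum():.1f}")
```

Run it and the distort-then-filter version has far less energy above the cutoff, because the filter gets the last word.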
 
Depending on their settings, compressors can fall into either category, and that's why there is no set rule for where they sit in the chain. It's not unusual to have two compressors bookending an effect placed between them.
 
EQ is also moveable. For example, a HPF usually makes more sense in front of a compressor because it's going to have a drastic impact on how the compressor reacts. But if you're using EQ to enhance the midrange, you'd probably want that to happen after compression. Similarly, if you're carving a spectral space for an instrument via cuts, it makes sense to do that last. And as with compression, you might even want two equalizers with an effect sandwiched between them, a not uncommon practice with reverb.
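To make the HPF-before-compressor case concrete, here's a minimal sketch with a deliberately crude feed-forward compressor (my own simplification, not how any specific plugin behaves). With the filter first, low-frequency rumble never reaches the level detector, so it can't trigger gain reduction.

```python
# Minimal "HPF in front of the compressor" sketch.
import numpy as np
from scipy.signal import butter, sosfilt

def simple_compressor(x, fs, threshold_db=-20, ratio=4, attack_ms=10, release_ms=100):
    """Very basic feed-forward compressor with a one-pole envelope follower."""
    att = np.exp(-1 / (fs * attack_ms / 1000))
    rel = np.exp(-1 / (fs * release_ms / 1000))
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        coef = att if level > env else rel        # faster tracking on attack
        env = coef * env + (1 - coef) * level
        level_db = 20 * np.log10(max(env, 1e-9))
        over = max(level_db - threshold_db, 0.0)
        gain_db = -over * (1 - 1 / ratio)         # pull the overshoot down toward the ratio
        out[i] = s * 10 ** (gain_db / 20)
    return out

fs = 44100
vocal = np.random.randn(fs)                       # stand-in for a vocal track

sos = butter(2, 80, btype="high", fs=fs, output="sos")
hpf_first = simple_compressor(sosfilt(sos, vocal), fs)   # rumble removed before detection
comp_first = sosfilt(sos, simple_compressor(vocal, fs))  # compressor reacts to rumble it never needed to hear
```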
 
Hey, if it was easy everybody'd be doing it!
2017/10/14 10:31:45
codamedia
RobWS wrote:
I go through each recorded track one at a time and open the QuadCurve EQ before any other plugins. I do the narrow-Q boost sweep technique and listen for problem frequencies. Once I determine the bad frequencies, I'll do a cut of whatever amount seems to make a difference.

 
That is more of a precision cut... something I would leave for much later.
My technique is generally the following...
 
1: Capture a good clean source... I never take a "fix it in the mix" approach; get it as good as possible at the source.
2: Apply filters (LPF / HPF)... 
3: Apply comps / gates / expanders as desired
4: Apply EQ as desired (this seems to be where you are starting)
5: Pan
6: Route or apply effects
 
That's a guideline... I'll stray from those rules if it gets me a better result. 
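If it helps to see that ordering written out, here's a toy Python rendering of steps 2-5 as one explicit chain (the filter calls are real scipy; the compression step is only a crude stand-in):

```python
# Toy version of steps 2-5 as an ordered chain of processors.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
take = np.random.randn(fs)                                 # step 1: the captured source

hpf = butter(2, 80, btype="high", fs=fs, output="sos")     # step 2: filters (HPF/LPF)
lpf = butter(2, 12000, btype="low", fs=fs, output="sos")

chain = [
    lambda x: sosfilt(hpf, x),
    lambda x: sosfilt(lpf, x),
    lambda x: np.clip(x, -0.5, 0.5),                       # step 3: crude stand-in for comps/gates
    lambda x: x,                                           # step 4: EQ decisions come later, in context
    lambda x: np.stack([0.7 * x, 0.7 * x]),                # step 5: pan (here, dead center)
]

for step in chain:                                         # step 6 (routing / FX sends) happens at the bus
    take = step(take)
```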
 
Just my 2 cents...
2017/10/24 23:12:57
ChuckC
Rob,
  My contention with your method is that if the track you're working on is soloed (not in the mix) as you sweep around for frequencies, you can't know which frequencies "need" to be cut!  It sounds like you are chopping stuff up in solo (forgive me if I'm wrong) and then attempting to throw all the faders up at once.  I can tell you, for instance, I have had times where a ringing frequency in a snare (around 700-800 Hz) was annoying as hell in solo, and I cut the piss out of it.  Then I couldn't get the snare to either cut through or sound right with the guitars and bass in, and it was driving me nuts!  I eventually realized that those ringy mid frequencies can often be a HUGE part of the sound that defines a snare drum in the mix.  Yes, you can attenuate an offending frequency some, but you can ONLY decide the right amount IN THE MIX.  Similarly, the frequencies in a guitar that annoy you in solo may be what helps it separate from keys, vocals, or horns.
 
  I have been at it a while now and generally have a pretty good idea of where I'm heading with a mix.  I tend to listen to a rough balance and get a mental picture of where I want to go, then I start with the drums.  Kick first: I get it where I want it, all processing done.  I may tweak it a little later and that's OK, but for the most part it's damn close.  Then the snare, toms, and cymbals, then the bass, and then I bring the guitars and vocals back in and work each one up while listening to the now-mixed drums and bass.
  It's hard to compare audio processing to cooking...  Other than if you want it raw, well then there is less to do, and you can overcook the crap out of both if you don't know what you are doing.
my 2 cents.
2017/10/25 14:18:21
RobWS
I appreciate the input from all of you.  I certainly don't want to develop bad habits in mixing.  The point I'm really wrestling with is frequency masking: I thought that if I could get rid of some offending frequencies right off the bat, it would help the project down the road.  (Who needs an eggshell in their cake batter?)  So maybe the analogy of a chef to an audio mixer doesn't work here.  It's quite a learning process, but fun nonetheless.
2017/10/29 14:48:35
bitflipper
Rob, you might want to try a multi-track spectrum analyzer such as SPAN Plus or MMultiAnalyzer. These let you overlay the spectra from multiple tracks and see clearly where the conflicts are. Or take it one step further and combine multi-track analysis with EQ using GlissEQ.
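Those analyzers work in real time inside the DAW, but the underlying idea is easy to see offline. Here's a rough Python equivalent (the stem filenames are made up, and it assumes the soundfile package is installed):

```python
# Overlay the spectra of two exported stems to spot where they pile up.
import numpy as np
import soundfile as sf                       # pip install soundfile
import matplotlib.pyplot as plt
from scipy.signal import welch

for name in ("vocal.wav", "guitar.wav"):     # hypothetical exported stems
    x, fs = sf.read(name)
    if x.ndim > 1:
        x = x.mean(axis=1)                   # fold to mono for the analysis
    f, pxx = welch(x, fs, nperseg=8192)
    plt.semilogx(f, 10 * np.log10(pxx + 1e-12), label=name)

plt.xlabel("Frequency (Hz)")
plt.ylabel("Level (dB)")
plt.title("Overlaid spectra: look for regions where both tracks are loud")
plt.legend()
plt.show()
```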
2017/11/17 03:15:26
montezuma
If there's outrageous ringing or something and you can't record it again, by all means cut that stuff out of there.

;)