• SONAR
  • Odd Question about EQ Frequencies (p.2)
2013/06/10 13:51:59
scook
He got that in one. The first sentence seems to indicate an understanding of the difference between the note and the sound. Then there is some blending of the two ideas in the rest of the post. And then everything bunches up at the lower end of the spectrum because the scale is not linear. I guess when he comes back we will see if it is cleared up a bit.
2013/06/10 14:36:10
Danny Danzi
This, to me, is where the science of this stuff loses people, and why at times I think it should be totally removed from the foundation.
 
Let's take a look at frequencies and tuning. Just because something exists in a chord or single note doesn't mean that frequency is dominating on a recording. Just because people high pass every instrument 9 out of 10 times doesn't mean they HAVE to if the instrument in question does not need to be high passed at the source.
 
Just because 600-800 Hz can bring on mid-range congestion in a guitar doesn't mean we have to remove it "just because". If the sound was printed with those frequencies curbed already, you'll most likely need to boost them.
 
Sure 10k or 12k is off the scale for a voice....you're not supposed to blast those freq's in a mix. They create air...whisper...presence....texture...you just can't put this type of a numeric value on this stuff when you are dealing with sound sources. The reason being, each sound source is different. You don't low pass if you don't hear anything harsh...and even if you do, before you just low pass, it's best to try and find out where the source of the harshness is before you just low pass and destroy all the good stuff in the high end.
 
Now with that said, if someone is having an issue with an instrument as far as EQ goes, when everything else has failed, yeah, I bring up my little chart that tells me all the Hz of notes and I try and deal with things that way. But this is only with instruments that are driving me crazy, like low-tuned guitars or basses. When a guitar or bass is tuned to a low B or even a low A, you have to first figure out what notes they are playing that may be giving you problems. Not all notes will bother you, so you wouldn't just set up shop to handle low B or low A. The tuning makes other notes weird too...so you need to decipher what they may be. I've set up automation of multi-band compressors to handle when instruments go astray depending on the notes they play. Just about always, they are frequencies that wouldn't have anything to do with those instruments...and this is due to overtones.
 
See, that's the whole thing in a nutshell. Just because something is tuned to something or has this, that, and this element of a frequency in the sound, doesn't mean the frequency EXISTS enough in the sound to alter it. This is where you have to throw some of the science stuff right out the window and use your ears. If I'm chugging on an A chord with a heavily driven guitar, I can just about promise you that 119 Hz will show up as that seems to be the "whoomfing" frequency in a driven guitar that sticks out. Does it make sense? Sorta kinda it does...yet, 119 isn't in an A. The closest to it in the range we would be playing would be a B at 123 Hz and of course the A chord we'd be rooting from would be 110 which IS an A. However, 110 usually doesn't show up for me. I've seen 106, 100, 92, but never the elements that literally make an A.
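[Editor's aside, not part of the original post: the note numbers Danny quotes come straight from the equal-temperament formula, which is easy to sketch in Python. A2 (the open A) works out to exactly 110 Hz and B2 to about 123.47 Hz, so a peak at 119 Hz really does sit between the two notes rather than on either one.]

```python
def note_freq(midi_note, a4=440.0):
    """Equal-temperament frequency (Hz) for a MIDI note number (A4 = 69)."""
    return a4 * 2 ** ((midi_note - 69) / 12)

# A2 (open A on a guitar, MIDI note 45) and B2 (MIDI note 47)
print(round(note_freq(45), 2))   # 110.0
print(round(note_freq(47), 2))   # 123.47
```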
 
So my point is, you're not always going to get results by using the numerical values "that make up a sound". Some of those charts weren't created with the sounds we use today. Seriously. If you use a normal, clean electric guitar, yeah, you'll get close to those numbers if not spot on...but again, it depends on how much of those frequencies are included in your EQ curve. Add distortion and the game changes. Pop/slap a bass, then what do you do? The numbers won't jibe with the numerical bass values, that I promise you.
 
Those numbers are all based on:
 
The human voice ranges from this to this...
A guitar ranges from this to this...
A piano ranges from this to this...
 
You get the idea. As soon as YOU create the sound with a mic or use effects like distortion or something, those numbers change. What if you're a bass prostitute and like a lot of low end in that piano you just mic'd up? Right...you'll see low end that wouldn't normally show in a piano.
 
What if you love high end in your guitar tones because you're going deaf? Right...you'll see high end that wouldn't normally be in a guitar tone.
 
What if you use a mic that just has one of those phased sounds....or one that doesn't really project like a decent mic would? Right...you'll see an eq curve pop up on an analyzer that will make your head spin that won't have the "human voice" ranges in it. So me personally, I don't read into any of those charts or graphs that were created as starting points for us. OUR instrumentation of today goes totally against those grains. See for yourself. As soon as you take into consideration all that I'm saying, the charts change drastically and are not even worth considering depending on the style of music you are working with as well as how you record.
 
You mention high passing a bass just past its lowest musical note....I sure don't do that. Why? Because the sounds that come in to me from clients need to be treated for what they are, not what some number tells me. I sweep through the bands via high pass until I get what I'm looking for. It could be 40 on down...it could be 90 on down...it could be 120 on down. It depends on how much EQ is recorded with the sound at the source. You just can't look at numbers for any of this stuff, in my humble opinion, but that's just me. Science hasn't shown me anything in this field that has made me a better engineer. The only good thing science has done for me was to introduce me to cool cats like bitflipper and DrewFx. :)
 
-Danny
2013/06/10 14:38:32
konradh
scook, While I am obviously no expert, I do understand harmonics and have even done additive synthesis back when that was the thing (and subtractive before that).  I have degrees in math and ed psych and understand your comment about human perception and logarithmic scales.
 
My point was really a simple one: it seems counter-intuitive to boost or attenuate frequencies so high on the musical scale to resolve issues that are perceived as much farther down; and I was more interested in thoughts and discussion than answers.  A female singer who is too harsh and has way too much energy around the F above Middle C can be tamed by cutting, for example, at 2,750 Hz.  Of course, there is definition there, but the energy at that frequency relative to the fundamental tone is weak, and therefore the perceived size of the effect is very surprising.  A similar but less extreme example is the acoustic guitar that sounds less boxy when cut at 800 Hz.  Your ears would tell you that much lower content was being reduced.
 
Common sense might tell you that since every time the singer hits E above Middle C the earbuds hurt your ears, you need a cut somewhere around 329 Hz, because that is a resonant point.  It is therefore surprising when that does not help and a cut at 2,500 Hz does.
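[Editor's aside, not from the original post: listing the harmonic series makes konradh's numbers concrete. The 7th and 8th harmonics of E4 (~329.63 Hz) land at roughly 2,307 and 2,637 Hz, bracketing that 2,500 Hz cut, and the 8th harmonic of F4 (~349.23 Hz) sits near 2,750 Hz. A quick sketch:]

```python
def harmonics(fundamental, count=8):
    """First `count` harmonics of a fundamental, in Hz (rounded to 0.1)."""
    return [round(fundamental * k, 1) for k in range(1, count + 1)]

e4 = 329.63  # E above Middle C
f4 = 349.23  # F above Middle C
print(harmonics(e4))  # ends ..., 2307.4, 2637.0 -- the 7th/8th harmonics bracket 2,500 Hz
print(harmonics(f4))  # ends ..., 2444.6, 2793.8 -- the 8th harmonic sits near 2,750 Hz
```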
 
"Muddiness" sounds like something that happens in the bass region, but it is usually a problem in the 300-400 Hz range, which is close to the full voice-falsetto break for many females and is solidly in the high tenor range for a male.
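[Editor's aside, a small illustrative sketch rather than anything from the posts: mapping a frequency back to its nearest equal-tempered note confirms the point about the 300-400 Hz "mud" zone. 300 Hz comes out nearest D4 and 400 Hz nearest G4, well up into the vocal range rather than down in the bass.]

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq, a4=440.0):
    """Name of the equal-tempered note nearest to a frequency in Hz."""
    n = round(69 + 12 * math.log2(freq / a4))  # nearest MIDI note number
    return NOTE_NAMES[n % 12] + str(n // 12 - 1)

print(nearest_note(300))  # D4
print(nearest_note(400))  # G4
```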
 
High means something very different in EQ than in musical notes; but since music is what we are working on, that just seems strange.
2013/06/10 14:44:16
konradh
And to Danny, Thanks as always for the thorough and thoughtful post.  Regarding the bass, I was just citing a quick example.  What I really do is to sweep the HPF until I hear any bad rumble or noise in the bass go away without taking any of the desired body out of it.  The number is just sort of a quick guideline.  If I am often using the same bass or bass sample, and I know where the bottom string frequency lies, I have a good starting point.
 
Your comments, of course, are correct.  Thanks.
2013/06/10 14:51:53
Danny Danzi
konradh
And to Danny, Thanks as always for the thorough and thoughtful post.  Regarding the bass, I was just citing a quick example.  What I really do is to sweep the HPF until I hear any bad rumble or noise in the bass go away without taking any of the desired body out of it.  The number is just sort of a quick guideline.  If I am often using the same bass or bass sample, and I know where the bottom string frequency lies, I have a good starting point.
 
Your comments, of course, are correct.  Thanks.



You're very welcome. Ah good, you do the bass stuff the same as I do...right on. Yeah for stuff that you use all the time, it's perfectly fine to have your starting points since you use those instruments time and time again. I have that stuff too on instruments I use all the time. But as soon as you get something from someone else, you'll notice the entire game changes. This is what I mean with the whole frequency thing. Ever hear guys with super raspy guitar tones? They don't have enough drive so they compensate and use loads of harsh treble from 3k-4k etc? This totally changes the "these are the frequencies that make up a guitar" scenario because now we have sound synthesis because of distortion AND the excessive eq that goes with it.
 
Off the record, every time I see a post from you and see your avatar....I always say "that lucky baystid found a girl that looks like Angelina Jolie!" She really does look like her in that pic. Hahahaha! :) 
 
-Danny
2013/06/10 14:57:59
js516
The issue is that the related harmonics (odd- and even-ordered) form an amalgam that interacts with the fundamental, changing how the fundamental is perceived. If certain higher odd-ordered harmonics exist that interact negatively with the fundamental (and other harmonics), adjusting the fundamental will not correct the issue. However, locating and eliminating the harmonics that are interacting with the fundamental in a bad way will have a large effect, even if the harmonic itself is inaudible.
 
So the issue is not what harmonics are present, the issue is the overall effect that they have on the timbre.
 
2013/06/10 14:58:58
scook
It is the switching back and forth between the musical scale and perception that breaks any logic or common sense for me. FWIW, my degrees are in math and computer science, but I was playing music a long time before that. Maybe we are just working from different sets of starting assumptions.
2013/06/10 15:08:56
Beepster
Well I just like that now I have some guidelines to work with. I definitely use my ears but having some starting points and knowing approximately where I might want to cut or boost is helpful. I guess I wasn't very helpful here. :-/
2013/06/10 16:31:20
Danny Danzi
Beepster
Well I just like that now I have some guidelines to work with. I definitely use my ears but having some starting points and knowing approximately where I might want to cut or boost is helpful. I guess I wasn't very helpful here. :-/



That's the thing Beeps...it's hard to even approximate because it depends so much upon the sound source. How can you even have starting points if the source doesn't comply with what the starting points are *supposed* to be? It's like...."the sound of a guitar spans from this to this" yeah...well, my guitar tone doesn't and neither does yours because we're driving distortion and other stuff, ya know what I mean? So those starting points aren't going to be there.
 
You know about guitar tones, so let's look there. Quite a few guys love low end in their guitar tone because they are EQing it as an entity, not as a teammate "in the mix". When you add those lows, it goes against everything we know as far as starting points go. js516 put it perfectly...and this is what we're hearing. The fundamentals of an instrument may land between such and such a range...but as soon as WE record them or doctor them up, you have a totally different starting point range. You now have to create your own starting points based on the instruments YOU record. :)
 
-Danny
2013/06/10 16:46:18
Beepster
@Danny... Yeah, I certainly never liked "rules" when it came to tone (you've heard what I did with my guitar tone, stumbling around in the dark with just some sims and a bit of elbow grease), and really guitar is easy for me in that regard. It's all the other stuff that just bends my head. Like you heard how non-existent my kick drum sound was on that very same track where the guitars were cutting...and everything else was flat.
 
Just the idea of taking a standard kick sound, using a high pass just to snuggle up to the very bottom of the signal, a low pass to get rid of annoying room noise and bleed, then finding the beater click to get some attack is the type of thing I wouldn't have in a million years thought to do. And that's before even compressing or gating or whatever. I would have just turned dials until it sounded okay...which obviously is cool, but then in comes everything else and all is lost.
 
I guess for a n00b like myself, having that kind of structure helps to begin with. Believe me, I intend to go absolutely apeshizzle once I get accustomed to stuff, and if the "guidelines" aren't making my heart pound they will certainly go out the door. But now that I'm actually starting work on one of my old albums, I did some very basic rough mixing to get through the editing process without making my ears bleed and wow...what a difference a little knowledge makes. Took me no time to dial things in the way I envisioned them to be. I'm really pumped to dig right into the mix, but I must be patient.
 
Certainly not arguing though (I would never argue your methods...your stuff just sounds too damn good). I was just lost and intimidated before when looking at all the fancy knobs, graphs and doodads. Not so much now. Just need the bleeding program to work properly. lol
 
Cheers, buddy.