• SONAR
  • Loudness, clipping and Class A audio (p.2)
2014/04/20 01:49:22
robert_e_bone
Frequency overlap can be a giant problem, as it makes everything indistinct and muddy - hard to pick out notes, that sort of thing.
 
I found a chart online that I ended up purchasing a copy of, and I use it every time I mix.  It is a laminated multi-colored chart showing the frequency ranges of all kinds of different instruments.
 
I use it to make sure I know which parts of the song I am working on are likely to have instrument frequencies overlapping and causing problems, and it is those overlaps that I can then notch out, or sometimes even re-record, to give each instrument better 'space' and clarity.
 
A really good example of giving space to each of the instruments is the Genesis song "Abacab".  Each and every note of each instrument is crystal clear, because each instrument has its own sonic 'space', where none of the instruments crowd out the others.  If you want to hear phenomenal dynamics, listen to their song "Undertow", from And Then There Were Three. 
 
Another good example is the songwriting of Steely Dan.  The keyboard chords are usually just 3-5 notes at most, and mostly right-hand only, or maybe with a single bass note.  
 
One of the strengths of the Beatles was not playing too many notes.  That was literally the instruction John Lennon gave to bass player Tony Levin on one of his solo tunes: John told him he could play whatever he wanted, just not to play too many notes.
 
Bob Bone
 
2014/04/20 04:56:53
keyzs
vladasyn
Thank you for your detailed replies. Here is a song that I made in 2008. I did everything I could to it then, and left it to ripen for a while. Then I returned to it in 2014 with all the experience I had gained over time, and invested 2 weeks in listening to it over and over and over and trying to alter every detail to make it better. So I feel there is NOTHING I personally can do with it to make it louder. Also, it was mastered in Ozone at +1. It was recorded with a Phonic Helix FW mixer - not very high-class audio - I think my PreSonus today can do a better job. This song was recorded at 16/44.1. I am not trying to get you to listen to my music, I just need to see what I am doing wrong.
https://soundcloud.com/vl...clean_astral_2014m-wav
 

Thank you for the opportunity to listen to your song. I have PM'd you on this. 
 
vladasyn
Keyz, you are saying almost exactly the same thing my partner is saying, but I am not understanding half of what you are saying....
 
I doubt it would be practical to place meters on individual tracks (I do not know how to do it). This song has over 70 tracks. I do not know which frequencies are overlapping....
 
Everything I use is panned center. Maybe I should reconsider this, but when I hear something to the left or to the right, I feel like I am about to lose my balance and fall off the chair. I hate panning to either side....
 

Oooops.... I apologise for having confused the issue further. 
 
You don't have to place the Meter Taps on each and every track - just on the tracks where you suspect there is overlap. Then, opening up Ozone's Spectrogram and selecting the specific colours, you will be able to see which tracks are clouding each other. The other alternative, as Anderton mentioned, is to use the MUTE button. 
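If you want to eyeball overlap outside of Ozone, here is a minimal Python sketch of the same idea - plotting the averaged spectra of two stems and seeing where they pile up. The file names are hypothetical, and it assumes NumPy, SciPy and matplotlib are installed:

import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import welch

# Compare the averaged spectra of two stems; where the curves pile up
# is where the tracks are clouding each other. File names are made up.
for name in ("pad.wav", "guitar.wav"):
    sr, audio = wavfile.read(name)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)  # fold stereo to mono for the plot
    freqs, power = welch(audio, fs=sr, nperseg=4096)
    plt.semilogx(freqs, 10 * np.log10(power + 1e-12), label=name)

plt.xlabel("Frequency (Hz)")
plt.ylabel("Power (dB)")
plt.legend()
plt.show()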
 
As for panning, it is also not necessary to pan a track hard left or right. With SONAR, you may wish to try 40L or 40R for starters.
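For what it's worth, the math behind a pan knob is simple. Here is a hedged sketch of an equal-power pan law; mapping SONAR's "40L" to a -0.4 position is my assumption, not something from the SONAR docs:

import numpy as np

def pan(mono, position):
    # Equal-power pan: position runs from -1.0 (hard left) to +1.0
    # (hard right); -0.4 roughly corresponds to "40L" (my assumption).
    theta = (position + 1.0) * np.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    left = mono * np.cos(theta)
    right = mono * np.sin(theta)
    return np.stack([left, right], axis=1)

# e.g. pan(track, -0.4) keeps about 89% of the level on the left while
# still leaving roughly 45% on the right, so nothing falls off the chair.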
 
vladasyn
OK, I do not use hi-pass filters, I use EQ - the one on the channel in SONAR. I get rid of everything under 50 Hz. Is that how I am supposed to do it? Now, what do I do with the highs...
 
My drums: if I make them any louder, they will overpower the synths. I don't like them loud, but in commercial productions the drums are not actually loud - they just stand out.
 

Hmmm.... let's try an experiment. 

Create an 8-bar loop with just some drums and bass. Just remember to keep all the tracks separate. You should end up with 4 MIDI tracks and 4 audio tracks.

Start with a kick, snare and hi-hat, using MIDI and your favourite drum machine. Once done, create a simple bass line, again with MIDI and your favourite bass instrument. 

Once done, do a simple mix using just the faders - NO EQ, NO compressor, NO effects. Balance it the way you want and listen. 
 
NOTE: the following is just an example and some of the set values may even be wrong.

Now, using the ProChannel EQ only - no fader movements, no other effects...
  1. Kick - set hi-pass to 60 Hz, low-pass to 900 Hz
  2. Snare - set hi-pass to 150 Hz, low-pass to 5000 Hz
  3. Hi-hat - set hi-pass to 400 Hz, low-pass to 11000 Hz
  4. Bass - set hi-pass to 90 Hz, boost +1.5 dB at 120 Hz with a high Q, low-pass to 2500 Hz
Listen again. This time your bass and kick should stand out, or perhaps sound "louder", while the rest remains just as clear - all without touching the faders. This should give you the very basic concepts of mixing. 
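If it helps to see the same experiment outside of SONAR, here is a minimal Python sketch of the band-limiting idea, using gentle Butterworth filters as stand-ins for the ProChannel EQ. The file names are hypothetical, and the +1.5 dB bass peak is omitted for brevity:

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

def band_limit(audio, sr, hp_hz, lp_hz, order=2):
    # High-pass at hp_hz and low-pass at lp_hz, with gentle 12 dB/oct slopes.
    sos_hp = butter(order, hp_hz, btype="highpass", fs=sr, output="sos")
    sos_lp = butter(order, lp_hz, btype="lowpass", fs=sr, output="sos")
    return sosfilt(sos_lp, sosfilt(sos_hp, audio, axis=0), axis=0)

# Pass bands taken from the example settings above (values illustrative).
bands = {"kick.wav": (60, 900), "snare.wav": (150, 5000),
         "hihat.wav": (400, 11000), "bass.wav": (90, 2500)}

for name, (hp, lp) in bands.items():
    sr, audio = wavfile.read(name)
    filtered = band_limit(audio.astype(np.float64), sr, hp, lp)
    wavfile.write("eq_" + name, sr, filtered.astype(np.float32))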
 
vladasyn
So when you talk about monitor calibration, which monitors are you talking about? My headphones or my new Yamaha HS8s? (I actually have not even heard them playing yet - I only hooked them up.)
 

Ooops.... once again I apologise for the confusion. Let's not worry about that for now. Monitor and room calibration is a whole other beast. The Yamaha HS8s are great monitors. Coincidentally, I am using them too.
 
vladasyn
I have Ozone Advanced for now. What do you mean by average? You mean I can let peaks go over 0 dB as long as the average is still under 0 dB? Is there a setting to monitor by average? I thought we were looking at peaks.
 

 
The K-System meters in Ozone (use K-12) allow you to view the levels with 12 dB of headroom. Meaning to say that when you mix, the average should be around the 0 mark. Some levels will jump above the 0 mark into the + (red) range. 
 
Now, during your mastering stage, if you choose to squeeze the mix up, you will have 12 dB of space to do so. 
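To make the "average around 0" idea concrete, here is a small sketch assuming the simple K-12 convention that 0 on the meter sits at -12 dBFS RMS (Ozone's actual metering is more refined than this):

import numpy as np

def k12_level(audio):
    # RMS level in dB relative to the K-12 zero point (-12 dBFS).
    rms_dbfs = 20.0 * np.log10(np.sqrt(np.mean(np.square(audio))) + 1e-12)
    return rms_dbfs + 12.0

# A full-scale sine has an RMS of about -3 dBFS, so it reads roughly +9
# on a K-12 scale; a mix averaging -12 dBFS RMS reads 0.
t = np.linspace(0, 1, 44100, endpoint=False)
print(k12_level(np.sin(2 * np.pi * 440 * t)))  # ~ +9.0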
 
Using Ozone's Maximizer, set your Margin to -1.0 and then bring the Threshold down (more and more limiting). You will notice the levels on the K-12 meters moving up into the REDs and staying there. As they move, the levels will get louder and louder. The trade-off here is that you will be killing off all the dynamic range. 
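For intuition only, here is a crude sketch of what the Margin/Threshold trade-off does - drive the mix up and flatten everything above a -1 dBFS ceiling. Ozone's Maximizer is far smarter than a hard clip, so treat this purely as an illustration:

import numpy as np

def crude_maximizer(mix, drive_db, margin_db=-1.0):
    # Push the mix up by drive_db, then hard-limit at the margin ceiling.
    ceiling = 10.0 ** (margin_db / 20.0)       # -1 dBFS ~ 0.891 linear
    driven = mix * 10.0 ** (drive_db / 20.0)   # raise the overall level
    return np.clip(driven, -ceiling, ceiling)  # peaks flatten as drive grows

The harder you drive it, the more the peaks flatten and the less dynamic range survives - exactly the trade-off described above.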
 
NOTE: Do give this a try and see if it works out for you. Just be very careful not to damage your monitors (HS8) or headphones with excessive volume. 

hope this helps... cheers!!!
 
2014/04/20 08:06:48
dcumpian
I think you are struggling with some basics. Frequency overlap is when two or more tracks share frequency ranges. For example, the kick drum and bass share some frequencies. Electric guitar and some lead synth patches share frequencies. A piano can use almost the entire frequency spectrum. The point is that you have to decide which frequencies a track needs to live in to sound good in the mix, then remove those frequencies from the other tracks that overlap it. In order to do this, you must be able to hear it. Good monitors and a balanced listening environment are a must.
 
In the case of a synth pad, it will likely have quite a large frequency spectrum, but only a much narrower range is actually needed to hear the track in the mix (unless the pad also provides the bass for the mix). Find that range and remove anything below and/or above it using HPF/LPF EQ curves. You can also use EQ to move the apparent frequency range of a track.
 
Regards,
Dan
2014/04/20 08:19:23
Silicon Audio
mettelus
...Complexity and clarity are often competing. Focus can only be given to one thing at a time usually, so how focus is shifted is important to keep the listener following the song...

Just wanted to say, excellent post, mettelus.  One of the best I've read here in a while.
2014/04/20 09:51:22
robert_e_bone
I suggest you do a Google search on: instrument frequency chart
 
Here is one of the links from a search of the above:
 
http://www.gearslutz.com/board/electronic-music-instruments-electronic-music-production/817538-instrument-frequency-chart-electronic-music-what-goes-where.html
 
Carving frequencies out to leave room for each instrument - or, for parts that are deliberately blended, knowing which of those frequencies you want retained - is SUPER important in getting a 'clean' and punchy sound.
 
Here is a link to the chart I use: http://www.independentrecording.net/irn/resources/freqchart/main_display.htm
 
The other link above may suit your electronic music sounds better, but both are good.  (I ordered a REALLY nice big color copy of the one I use, and have it hanging on the wall directly in front of me.)
 
Bob Bone
 
2014/04/20 11:51:17
Anderton
These are really great posts about mixing issues and overlap. Using EQ to carve out dedicated parts of the spectrum for specific instruments is something I do almost instinctively, so I don't think about it much. But in addition to keyzs' excellent example of EQ for drums and bass (as well as his other tips), here are some additional examples from a tutorial project I'm doing on mixing with the ProChannel. All the displays are set for +/-6 dB.
 
Choirs: Use a low-shelf to cut starting at the midrange. This gives the choir more brightness and "air" so it kind of floats over the mix.

[screenshot: choir EQ curve]
Power chords: High-pass and low-pass filters create a broad bandpass in the midrange area where other instruments aren't. I also put in a peak at 1 kHz, which gets across the "meat" of the guitar. This kind of technique can also work with other "thick" sounds.

[screenshot: power-chord guitar EQ curve]
Pads: Depending on the character, they often fit into one of the above categories.
 
Vocals: The ear is most sensitive in the 3-4 kHz range. Giving vocals a slight boost in this range lets you mix them lower yet have them cut through better. You have to be very careful not to boost too much, though, as these frequencies can make vocals harsh. The following curve is what I used for my voice when close-miking with a dynamic mic. The boost may seem extreme, but my voice is naturally more "mellow", so I need the extra. Cutting the same frequencies on other instruments can also make the voice stand out better. For example, with a singer/songwriter playing acoustic guitar, I'll cut the guitar subtly in the 3-4 kHz range when the singer is singing, and boost it when there's no vocal. You don't perceive a change because that part of the spectrum has the same energy in both cases. I would NOT recommend the following curve for everyone, but it works for me.

[screenshot: vocal EQ curve]
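As a hedged illustration of a presence peak like the one described above, here is a single peaking-EQ band built from the well-known RBJ audio-EQ cookbook biquad. The frequency, gain, and Q values are placeholders, not the actual settings used above:

import numpy as np
from scipy.signal import lfilter

def peaking_eq(audio, sr, f0, gain_db, q):
    # One peaking-EQ band: boost or cut gain_db at f0 Hz with bandwidth q.
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2.0 * q)
    b = [1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a]
    den = [1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a]
    return lfilter(b, den, audio)  # lfilter normalizes by den[0]

# e.g. a gentle +2.5 dB lift at 3.5 kHz in the vocal presence range:
# brighter_vocal = peaking_eq(vocal, 44100, 3500.0, 2.5, 1.0)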
Acoustic guitar: I sometimes do a shallow, fairly broad parametric cut around 300-400 Hz. This is a range where many instruments overlap, and when added together they make a "muddy" or "tubby" sound. Cutting a bit makes the highs and lows stand out a bit more. I also do this when mastering some of the material I receive.
 
The following screenshot shows EQ for acoustic guitar. In addition to the midrange cut, I've boosted the brightness a bit so the level can sit lower in the track, brought up the range around the low E string, and added a sharp low-frequency cutoff to minimize "boom" from the body.

[screenshot: acoustic guitar EQ curve]
Drums: This was a premixed loop. There's a bit of a bump to bring out the kick, a shallow dip in the lower mids to tighten the sound, and since the snare on these drums had a nice "ring", the midrange peak brings that out. There's also a very slight boost to the highs to make the hi-hat more prominent.

[screenshot: drum loop EQ curve]
Bass: In this song, the bass added more of a melodic component than a rhythmic one. The boost may seem excessive, but there's not a lot of energy up there, and this level of boost brought out the pitch. Note also that timing can make a huge difference in the bass vs. kick issue. If you advance the bass track by a few milliseconds relative to the drums, this emphasizes the melody and makes the bass seem louder. Delaying the bass track by a few milliseconds relative to the drums emphasizes the rhythm and makes the drums seem louder (even though in both cases there are no level changes happening).

[screenshot: bass EQ curve]
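Here is a minimal sketch of that timing trick - nudging a stem a few milliseconds by padding or trimming samples. The 4 ms figure is just an example of "a few milliseconds", and the bass array is a stand-in:

import numpy as np

def shift_ms(track, sr, ms):
    # Positive ms delays the track; negative ms advances it.
    n = int(round(sr * ms / 1000.0))
    if n >= 0:  # delay: pad the front, drop the tail
        return np.concatenate([np.zeros(n), track[:len(track) - n]])
    return np.concatenate([track[-n:], np.zeros(-n)])  # advance

sr = 44100
bass = np.random.randn(sr)           # stand-in for a 1-second bass stem
rhythmic = shift_ms(bass, sr, +4.0)  # ~4 ms late: drums feel louder
melodic = shift_ms(bass, sr, -4.0)   # ~4 ms early: bass feels louder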
Hmmm...I think I'll flesh this out, and write it up for the Cakewalk blog.
2014/04/20 13:20:12
Anderton
Here's a good example of sparse production where every element has its own space. When it goes to the denser chorus, the effect is dramatic.
 
https://music.yahoo.com/v...y-perry-002101570.html
2014/04/20 17:44:50
AT
Excellent EQ examples, Mr. Anderton.  Many times "seeing" such large yet gentle EQ shapes solidifies the idea of carving up the instruments in a soundstage.
 
To the OP, I would say trying to make your music as loud as the next guy's with a limiter isn't a good thing.  However, if you must, less is many times louder.  Trying to get 50 tracks loud means white noise.  Try this - go in and strip out everything but the rhythm and see if the song doesn't get subjectively louder.  Arrangement is an art we don't talk about enough.  A musical line seems louder when we notice it, and the easiest way to make us notice it is to drop the line that was previously in focus.  Once you have a popping rhythm, go in and add the most important elements (to your ears) sequentially through the song.
 
And as pointed out above, a little bit of saturation can help the apparent loudness, unless you saturate everything.  Wide panning can help.  And electronic music - like trance and dance stuff - is unnatural, as opposed to a band playing naturally, like rock, where most people have an idea of how the pieces are supposed to fit together.  That makes judgment harder during the mix, although dance stuff has its own rules.  If you like the more sparse effect, work on that until you get it right.  Within a few songs you can start adding back the thickening elements, and you will have a feeling for where they start to clog things up.
 
@
2014/04/20 22:30:43
vladasyn
I will have to read it again tomorrow when the wine effect wears off and I can store more information in my short-term memory at the same time. Lol.
 
I appreciate the replies. I am not arguing against your points, rather trying to find practical applications for your ideas.
 
Complexity. The goal is not to purposely be complex. The goal is to express what is on our minds, and often it is very hard to turn into sound the mass we create with our imagination. I do not sit down and think, "Let me make something complex, so everybody falls out of their chair". I listen to hundreds of sounds until I find one I kind of like, and then I listen to hundreds of other sounds to find another one that matches the first. Sometimes they become one perfectly; sometimes they only partially match. I try to recreate my idea - I hear it in my head, but the right sounds are not always there, or I simply use the wrong software synth while the right sound is stored somewhere else in my library. I know I have it, I know it is somewhere, but I cannot find it, so I use what I can get in the 2 hours I have for music before it turns 2 am, because tomorrow is another day and 7 am comes fast.
 
Drums. Who has the luxury of recording drums on separate channels? I do not. My drums come from Maschine (a stereo pair), Beat Tweaker (a stereo pair) or a Yamaha Motif. If I manually play the drums, there is no way I can play just one drum part and then come back and do the snare on another track. So mixing the drums has to deal with high and low frequencies together, as they are on the same track.
Synths. I do not understand your point about the "Mute" button. If I don't want a sound in the song, then I would not put it there, or I would delete it or mute it; but if it is a good sound, how would Mute help it sound better? I use a pad, a rhythm synth (arpeggiated), and a second arpeggiated synth that makes a figure with the first. I like more than one pad, and I like them to open up their filters at different times. If I have 2, 3, 4 synth sounds playing on the same part, how can I EQ them to not interfere with each other? They are meant to be in the same frequency range.

Now add distortion guitar to it (I do not use acoustic). Metal guitar competes with the synths for the midrange. So I have to do my best to select rhythms that create room for each other and complement each other. The secret is in what your rhythm pattern is playing. I have to make sure the guitar player learns what the arpeggiator is doing and blends with the synth. I record 2 guitar tracks and pan them left and right - this is how I was told to do it. One guitar track panned center sounds awful. Guitars 100% L and R.
 
Now add the vocal to it. Again, you want me to lower my 3 synths, 2 pads and stereo guitar tracks to let the vocals through? Then why did I even bother to put so many details in? You are right - it is a tough decision: do you want all the synth sounds to be heard, the guitar to cut through, or the vocals? I have noticed that with distortion guitars, many people have no idea what hides under the guitar tracks. Guitars kind of kill all my keyboard work. But what can I do? I like heavy metal, I want to be in a rock band, and I cannot compromise and not do good keyboard work - even if nobody can hear it behind the guitars.
 
I am sorry if I sound like I am my own enemy. My entire songwriting style is at fault. But back to the basics.
 
In SONAR particularly, there are no options for the types of meters. I have my meters set to default. So I do not let any peaks go over 0 dB. Now somebody says I can let some peaks go over 0 dB? So where is the limit - how much?
 
I make sure no peaks go over 0 on the Master output. Are you saying peaks can go over 0? How far? When it peaks, it stays at +0.1 or +0.2 and so on. How far can it peak?
 
While my entire production habits are at fault for competing frequencies, other producers keep all the complexity and levels and get the best of both worlds. Loudness should not come at the expense of complexity. In their tracks I hear the full frequency of pads and synth sounds, nothing is cut down, and the tracks are still loud and full. Mine are not.  
 
 
 
2014/04/21 09:01:24
robert_e_bone
It is not the sonic complexity so much as the frequency competition that can be the issue.
 
Notching out a few key frequencies from a particular synth part, as an example, may instantly make the guitar sound better, simply because the guitar is no longer buried by the frequencies you just notched out.
 
I think the suggestion about muting was to point you toward experimenting with particular sections of projects, by identifying which layers are needed to 'define' that particular section, for the purpose of coming up with some pieces that could be done more sparsely.
 
The bottom line is that if 10 keyboard parts are playing in the same frequency range as the vocal line, then the vocals AND the keyboard parts will all be fighting to be heard clearly.  The 'fix' is to identify the most important frequencies at any given point in the project, and to either remove parts or remove frequencies as needed to give the vocals the space to be heard clearly and separately from whatever else is going on at that moment.
 
If you listen to most successful pop tunes, even though it is a different style, you will notice that the instruments leave space for each other's parts.  When the singer is singing, there is not a lead guitar competing for your focus at the same time.  Everything gets its own 'space', and becomes more prominent only for the parts where they are MEANT to be more prominent.
 
Bob Bone
 
 
 
 
 