2014/07/29 21:57:58
RexRed
Recording, mixing, mastering and marketing music.
 
This is a thread to discuss how other people do these things, in particular in Sonar. Please spare no words here in your replies...
 
For some people dreaming up a song is the hard part. Sometimes it is not the song or the parts that are wrong, but how those parts are mixed and how they need to be processed in a standardized way.
 
Each song is different but all songs have to fit within the same spectrum of human hearing.
 
Keeping this in mind, how does one "focus" a mix? What are your effects chains, and how do you set them? How do you visualize the sound? Do you use color graphs to represent your waves in real time, and how do you interpret and analyse the music mix in color?
 
It looks easier with a camera, because it seems you just turn a dial and stop where the picture looks best. The problem with music is getting rid of all the mud without leaving the mix too empty.
 
I could go on about this at length myself, mostly with questions, but how about people leave tips and helpful information here on how you "focus" songs in Sonar? A digital audio recorder focuses in on a "scene" of instruments.
 
How do you decide which instruments will play when? What forms do you incorporate into song structures, and how do you automate these elements into a final product?
 
How do you pick your instruments? How do you blend them? How do you take a lot of continuous tracks and decide which one gets to make a peep at any given moment in a song? There can be thousands of notes in a song.
 
Music editing can be a nightmare. When do you say no to a song that may just be too big to make? A seventeen-minute song: how do you ever finish it? Every time you hear it you just keep editing and editing and editing, and it seems you will never get it done. Every day the song sounds totally different.
 
How do you focus it?
 
It comes down to hearing it somehow, to hearing what the song is trying to say amidst a lot of overly loud other instruments.
 
At each moment in time the focus needs to bend at some angle, and the song finds another flavor of musical progression.
 
All along everything in the mix needs to be put in tastefully complementary places and set perfectly in the virtual sound space.
 
It seems this focusing of the song is the most critical step of all.
 
Sometimes you move the focus and the song blurs more; sometimes it gets clearer and more vibrant.
 
Please help out here and expand on this discussion.
 
It is of course the greatest thing when a song has the perfect parts, simply performed, so all you do is set the volumes and that is it. But not all songs are like that: some are watercolor impressions or patchworks, and some are very complex geometrical patterns and mathematical shapes.
 
How do you perceive song style and how does that factor into the way the mix is focused?
 
EQ, reverb, surround versus stereo: how do you set the mix, and what standards do you go by? This is a discussion, so all musical input on the processes of making music in Sonar is welcome here.
 
How do you focus music in Sonar? How do you polish and shine the image of the music? How do you fit the parts into their place in the mix? How do you make a final product that is ready for world distribution?
 
Please leave comments.
 
Thanks
2014/07/30 11:29:48
sock monkey
http://forum.cakewalk.com/Techniques-f9.aspx
 
Ook! This where we find da answer! 
2014/07/31 13:35:33
bitflipper
"Focus" in mixing is the process of drawing the listener's attention to one or two important elements at a time. Human brains can really only listen to one or two things at once, so at each point in the mix you have to decide what they should be focusing on for that particular verse, phrase, fill or effect and make sure it has enough space in the mix to rise above everything else. "Space" in this context mainly refers to using panning and equalization to mitigate masking.
 
Rex, you might get more participation in this thread if the topic weren't so broad. It needs a little "focus".
2014/07/31 14:14:27
jsg
In electronic music, I think of mixing as what the conductor does in terms of balancing instruments, phrasing, getting dynamics to sound right, finalizing tempos, etc. 
 
There's the composition of a piece (melody, harmony, counterpoint, orchestration, motives, themes, form/structure, feel, style, mood, etc.) and then there's the interpretation of that music, in other words how does it actually sound?  Music production, to my mind, is the interpretation of composition.  If you're using samples of acoustic instruments the problem of how much musicality you can give to every phrase you write has a lot to do with 1) how much you know about the composition and performance of music, and 2) how much you know about MIDI programming.  What a live player does intuitively, physically and, somewhat, spontaneously, in computer-based composition and production the musician must do conceptually and through the programming of envelope, velocity, note length, articulation, location relative to the beat and so forth.  If you're using synths, you have to understand signal path, oscillators, envelopes, LFOs, signal processing, steppers, arpeggiators, in other words synth programming and editing.  
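As a toy illustration of that last point, here is a tiny Python sketch (made-up note data, not tied to any particular sequencer or sample library) showing how the phrasing a live player does by feel turns into explicit edits to velocity, timing and note length:

```python
import random

random.seed(42)

# (beat position, MIDI note number, velocity, length in beats): a mechanical phrase
phrase = [(0.0, 60, 96, 0.9), (1.0, 62, 96, 0.9), (2.0, 64, 96, 0.9), (3.0, 65, 96, 1.8)]

def humanize(notes, timing_jitter=0.02, vel_range=8, legato=0.95):
    """Apply the kinds of edits a player makes by feel: push or pull against
    the beat, shape the phrase with velocity, shorten notes for articulation."""
    shaped = []
    for beat, note, vel, length in notes:
        shaped.append((
            beat + random.uniform(-timing_jitter, timing_jitter),
            note,
            max(1, min(127, vel + random.randint(-vel_range, vel_range))),
            length * legato,
        ))
    return shaped

for beat, note, vel, length in humanize(phrase):
    print(f"beat {beat:6.3f}  note {note}  vel {vel:3d}  len {length:.2f}")
```

Real MIDI programming is of course far more detailed than this, but every one of those parameters is something the computer-based composer has to decide consciously.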
 
Between synthesizers, sample libraries, sequencing, composition and production, you're looking at years of study, years of practice and years of trial-and-error.  Get good teachers and/or enroll in classes that will help you learn what you want to learn.  There are no shortcuts to excellence. 
 
You're asking a lot of good questions concerning both the composition and production of music--each one of those a complete subject in itself.  To get the answers is going to take effort and patience on your part.
 
My experience is that almost any intelligent person, with the proper training and motivation, can learn music production. Of course having natural talent in it is helpful.  Composition is more difficult, that is if we're talking about writing anything more complex than a 2 or 3 minute song.  Good composers usually have a high capacity for abstract thinking, the more complex and lengthy the piece, the more abstract thinking ability is required!  Where there's talent and motivation, there's the potential to write and produce something good, maybe even excellent. 
 
To hear a fully orchestrated symphonic movement I finished a few weeks ago which demonstrates some of the concepts I introduced above, go here:  www.jerrygerber.com/symphony9.htm
 
Jerry
www.jerrygerber.com
2014/07/31 15:04:01
Anderton
Some random tips...
 
1. Don't fall in love with a part just because it's cool. If it doesn't serve the song, nuke it.
2. The mute button is your friend. The fewer elements you have competing for the listener's attention, the more importance each element has.
3. It is your job to direct the listener toward what you want them to hear.
4. Think of the frequency spectrum as a finite resource. The elements sharing it have to respect each other's space.
5. The best production, arranging, and engineering skills in the world can't save a song people don't want to hear.
6. The worst production, arranging, and engineering skills in the world can't kill a song people do want to hear.
7. Low frequencies are the hardest to get right. That's where room acoustics, transducers, and the human ear have the greatest difficulties.
8. Mix at low levels. The midrange will be accented and there won't be enough highs or lows, but deal with it and make it work. Then turn up the volume. If all three elements have a decent balance, you're in the right ballpark.
9. Start your mix in mono. Get the EQ right. If all the tracks sound distinct and work well together in mono, they'll sound fabulous when you start scattering them around the soundstage.
10. Take chances. The worst thing you can do is bore the listener.
 
...or the reader, so I'll leave it at these 10 tips for now 
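Regarding #9, a quick way to see what the mono fold-down is doing to a mix, outside of any DAW, is to sum the channels and look at the phase correlation. A minimal numpy sketch with a placeholder stereo signal (a bounced mix would go where the sine waves are):

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR

# Placeholder "stereo mix": identical material with a 0.5 ms inter-channel delay
left = np.sin(2 * np.pi * 220 * t)
right = np.sin(2 * np.pi * 220 * (t - 0.0005))

mono = 0.5 * (left + right)          # the mono fold-down

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

corr = np.corrcoef(left, right)[0, 1]                      # +1 mono-safe, -1 cancels
level_change_db = 20 * np.log10(rms(mono) / ((rms(left) + rms(right)) / 2))

print(f"phase correlation: {corr:+.2f}")
print(f"level change when summed to mono: {level_change_db:+.1f} dB")
```

If the correlation dives toward zero or negative, or a part drops several dB in the mono sum, that part will not survive the trip from the soundstage back to a single speaker.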
2014/07/31 15:35:26
jsg
Anderton
Some random tips...
 
1. Don't fall in love with a part just because it's cool. If it doesn't serve the song, nuke it.
2. The mute button is your friend. The fewer elements you have competing for the listener's attention, the more importance each element has.
3. It is your job to direct the listener toward what you want them to hear.
4. Think of the frequency spectrum as a finite resource. The elements sharing it have to respect each other's space.
5. The best production, arranging, and engineering skills in the world can't save a song people don't want to hear.
6. The worst production, arranging, and engineering skills in the world can't kill a song people do want to hear.
7. Low frequencies are the hardest to get right. That's where room acoustics, transducers, and the human ear have the greatest difficulties.
8. Mix at low levels. The midrange will be accented and there won't be enough highs or lows, but deal with it and make it work. Then turn up the volume. If all three elements have a decent balance, you're in the right ballpark.
9. Start your mix in mono. Get the EQ right. If all the tracks sound distinct and work well together in mono, they'll sound fabulous when you start scattering them around the soundstage.
10. Take chances. The worst thing you can do is bore the listener.
 
...or the reader, so I'll leave it at these 10 tips for now 




 
Regarding #8, I would add: 
 
Bob Katz, the renowned mastering engineer says that mixing should take place at around 83dB SPL.  I usually have the loudest sections in my work measuring about 83dB SPL when mixing.   If you're mixing at too hot of a level you'll hear the bass and high end louder than they would be relative to the mid-range and if you mix at too low a level, you'll overcompensate the lower and upper frequencies due to the nature of how we hear sound. 
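For anyone wanting to relate that 83 dB figure to the meters in the project, here is a rough Python sketch assuming the common Katz-style calibration where pink noise at -20 dBFS RMS is tuned to read about 83 dB SPL per speaker at the listening position (the calibration constant is the assumption; adjust it to whatever your own SPL meter tells you):

```python
import numpy as np

# Assumption: monitors are calibrated so -20 dBFS RMS pink noise reads ~83 dB SPL
CAL_SPL_AT_MINUS_20_DBFS = 83.0

def rms_dbfs(x):
    """RMS level of a float audio buffer (full scale = 1.0), in dBFS."""
    return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

def estimated_spl(x):
    """Approximate playback SPL for a buffer, given the calibration above."""
    return CAL_SPL_AT_MINUS_20_DBFS + (rms_dbfs(x) + 20.0)

# Example: a loud section sitting around -14 dBFS RMS plays back near 89 dB SPL
rng = np.random.default_rng(1)
loud_section = 0.2 * rng.standard_normal(44100)
print(f"{rms_dbfs(loud_section):.1f} dBFS RMS  ->  ~{estimated_spl(loud_section):.0f} dB SPL")
```

In other words, once the monitors are calibrated you can read an approximate playback SPL straight off the RMS level of the loudest sections.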
2014/07/31 16:26:51
Anderton
jsg
Regarding #8, I would add: 
 
Bob Katz, the renowned mastering engineer says that mixing should take place at around 83dB SPL.  I usually have the loudest sections in my work measuring about 83dB SPL when mixing.   If you're mixing at too hot of a level you'll hear the bass and high end louder than they would be relative to the mid-range and if you mix at too low a level, you'll overcompensate the lower and upper frequencies due to the nature of how we hear sound. 



Yes, that is why I suggest testing at different levels so you can get the best "average."
 
I believe the rationale for mixing at a consistent level is so your ears can become acclimated to a standard. That way you won't come in the next day, have the level turned up, and think you mixed the lows and highs too loud.
 
Unfortunately the reality is that playback systems have never been more variable. It's still possible to do mixes that translate well over multiple systems, but that doesn't mean they will sound exactly as intended.
2014/12/28 14:14:23
RexRed
Compression after effects?
 
I was watching an instructional YouTube video and was told that I needed to compress after reverb and equalization.
 
I have "always" compressed before reverb and the compressor is first by default in the prochannel.
 
I am now tempted to use two compressors, one before effects and one last in my track rack.
 
The instructor in the youtube video said that effects can cause peaks in the sound and undo initial compression leveling.
 
I have always thought that this effect was minimal, and this is why I have not used compressors after effects.
 
Also, the idea of flattening out an echo does not seem appealing; it seems the reverb tail might become less transparent.
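Out of curiosity, here is a minimal numpy sketch of the two orderings, with a crude envelope compressor and a comb-filter "reverb" that are nothing like the actual ProChannel modules, just enough to show the peak behaviour the video describes:

```python
import numpy as np

SR = 44100

def compress(x, threshold_db=-20.0, ratio=4.0, attack_ms=5.0, release_ms=100.0):
    """Very basic peak-envelope compressor, only for comparing levels."""
    att = np.exp(-1.0 / (SR * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (SR * release_ms / 1000.0))
    env, out = 0.0, np.empty_like(x)
    for i, s in enumerate(np.abs(x)):
        coef = att if s > env else rel
        env = coef * env + (1.0 - coef) * s
        over_db = max(20.0 * np.log10(max(env, 1e-9)) - threshold_db, 0.0)
        out[i] = x[i] * 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)
    return out

def crude_reverb(x, delays_ms=(29.7, 37.1, 41.1, 43.7), decay=0.6):
    """Parallel feedback combs, just enough to add a tail and raise peaks."""
    out = np.zeros(len(x) + SR)
    out[:len(x)] += x
    for d_ms in delays_ms:
        d = int(SR * d_ms / 1000.0)
        y = np.zeros_like(out)
        y[:len(x)] = x
        for i in range(d, len(out)):
            y[i] += decay * y[i - d]
        out += 0.25 * y
    return out

# Test signal: four decaying noise bursts, peaking at -6 dBFS
rng = np.random.default_rng(0)
x = np.zeros(SR)
for start in range(0, SR, SR // 4):
    n = SR // 10
    x[start:start + n] += rng.standard_normal(n) * np.exp(-np.linspace(0, 8, n))
x *= 0.5 / np.max(np.abs(x))

for name, y in [("compress -> reverb", crude_reverb(compress(x))),
                ("reverb -> compress", compress(crude_reverb(x)))]:
    print(f"{name}: peak {20 * np.log10(np.max(np.abs(y))):.1f} dBFS")
```

The compress-then-reverb chain should print the higher peak, which is the "undoing the leveling" the instructor means; whether that justifies a second compressor after the effects, or whether that compressor dulls the reverb tail too much, still seems like a judgment call.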
 
Any suggestions, comments or criticisms are very welcome...
2014/12/28 16:49:33
konradh
I would not compress after reverb in general: I think compressed reverb sounds odd.
 
That said, if you are compressing a whole mix, you can't avoid it, but I would definitely compress before reverb on instrument and vocal tracks.
 
Of course, I am prepared to be corrected by Craig and others who may know better.
2014/12/28 17:09:15
Bristol_Jonesey
jsg
Regarding #8, I would add: 
 
Bob Katz, the renowned mastering engineer says that mixing should take place at around 83dB SPL.  I usually have the loudest sections in my work measuring about 83dB SPL when mixing.   If you're mixing at too hot of a level you'll hear the bass and high end louder than they would be relative to the mid-range and if you mix at too low a level, you'll overcompensate the lower and upper frequencies due to the nature of how we hear sound. 
 
That all depends on room size.
 
Working on nearfields in an average-size bedroom studio @ 83dB SPL is VERY loud. For this reason I use 78dB, which is a lot more forgiving on my ears, the wife & the neighbours.