• SONAR
  • Loudness, clipping and Class A audio (p.3)
2014/04/21 11:19:42
AT
Vlad,
 
I sympathize - I too love layered sounds and lots of them.  My usual method of "writing" a song is to work off a main loop - lately it has been bass.  Get 12 or 16 bars, put some drums underneath it, and then work out the structure of the song - i.e., how many loops I need to string together for 4-6 minutes of a song.  Then I change up those basic rhythms and add a second part (much like I did for the first section) as a middle section, just to add some variety.  I write a one-page lyric sheet, since that seems to be about the right fit for the vocalist.
 
Then I get to my point - layering inharmonic synths into it.  That is what I actually like doing - "playing" synth sounds.  Again, I will typically use 3 or 4 different synth sounds that I play throughout the whole piece.  Then I bring in my guitarist, and he will lay down 3-4 guitar tracks, more or less randomly playing rhythm and lead lines - whatever he feels like.
 
At that point I start culling the herd of sounds, using all the methods described in earlier posts - EQ, panning, and yes, muting.  Even trained musicians can only listen to 3-4 lines or instruments at once - usually divided into a rhythmic blur and the lead element that stands out.  And that means, for me, killing off a lot of nice bits and pieces that don't serve the song, as cool as that guitar lick or that blob of sound I've found may be - because in the next 16 bars that blob, or that phrase, finds its own space where it sticks out.  And once it is established, the listener can find it again when it is more submerged, if they are listening for it.  When I'm happy with the music, I get out the knives again when the singer turns the piece into a song w/ the main melody.
 
So even if you don't want to go through the culling part throughout your piece, you have to introduce elements and provide a space when they first appear or come to prominence.  And get brutal about providing such space, even if you don't mute.  Volume envelopes, subtractive EQ, and panning can all carve out space.  Even those pads, which I usually think of as more full range, can have the bottom cut off, esp. if you are mainly interested in spotlighting the filter cutoff.  And cutting out the bottom, even up to 400-500 Hz, leaves that range less clogged for the drums and bass, which then have more room to punch (tho I think punch needs dynamics, something relatively empty to punch into, but that doesn't seem to be the modern sensibility).
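 
A minimal sketch of that bottom-cut move on an exported pad stem, in Python with SciPy - the file name and the 450 Hz corner are placeholder assumptions, not settings from the post:

```python
# Hypothetical example: high-pass a pad stem around 450 Hz so it stops
# fighting the kick and bass. File name and corner frequency are made up.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, pad = wavfile.read("pad_stem.wav")      # placeholder stem exported from the DAW
pad = pad.astype(np.float64) / 32768.0        # assumes 16-bit PCM; scale to +/- 1.0

# 4th-order Butterworth high-pass, minimum-phase like a typical EQ plug-in
sos = butter(4, 450.0, btype="highpass", fs=rate, output="sos")
carved = sosfilt(sos, pad, axis=0)

wavfile.write("pad_stem_hp450.wav", rate, (carved * 32767).astype(np.int16))
```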
 
The other thing is that it takes a lot of time, not to mention talent, to get a handle on all the crafts involved.  Learning an instrument, learning how to record it (even running soft synths through an amp and mic'ing it to add some "air" around the virtual thing can differentiate it in a busy song), writing the music, arranging the instrumentation and then mixing it is full-time work for many years.  One of the funniest things I heard about that phenomenon was John Cale - classically-trained musician, co-founder of the Velvet Underground, record exec and solo artist - who was over twenty years into the business before he felt like he knew what he was doing "producing" an album (or CD at that point).  And his work at that time wasn't as good as the earlier stuff (tho that was more a matter of the strength of his songwriting, not the technical stuff).
 
It sounds like you know what you want to do; it is merely a matter of buckling down and refining your techniques.  That was one reason I suggested muting (or perhaps more a matter of not adding in elements until you get something simpler punching the way you want).  Experimenting on fewer tracks at once might make it quicker to learn what works for you, which you could then expand to more tracks.  Or take a "full" song and spend a lot of time on it fooling w/ all the techniques others have written about here to get it closer to how you think it should sound.
 
2014/04/21 11:24:22
Anderton
There are as many ways to approach music as there are musicians. Symphonic music has plenty of parts playing simultaneously, but they coordinate to create a single, complex, layered sound, like a string or brass section. Or you have Bach, who had many complex parts going on simultaneously; he pulled it off because of the jigsaw-puzzle way in which the harmonies fit together. All the parts retained their own identity in service of a larger whole.
 
I recently wrote an article for the Cakewalk Blog called "Ten Nasty Mixing Mistakes." Tip number 3 is:
 
3. Falling in love with a part because it’s cool
The question you really need to ask isn’t whether a part is cool, but whether it contributes to the song. Removing unneeded parts emphasizes the parts that are left—there’s a reason why sometimes the most compelling music is a solo piano, or a singer with a guitar. Exercise the mixer’s Mute button often and ask yourself “is this part (or section of this part) really necessary?” If the answer isn’t a resounding “yes,” nuke it—the remaining parts will thank you. (Note: Muting parts in certain strategic places can also add drama. Again, the mute button is your friend.)
 
This doesn't mean you can't get lots of parts to work together, but it requires thinking in a more symphonic or Bach-like direction; the former regarding sonic complexity, the latter regarding melodic complexity.
 
But really, the main emphasis in the advice given here hasn't been about changing the arrangement as much as changing the EQ on elements of the arrangement so that the parts don't compete sonically, but rather, complement each other. By complementing each other, it's possible to increase the apparent level of the parts so they have more prominence.
 
2014/04/21 12:30:26
robert_e_bone
I have played piano/keys for decades now, and there was a period in my late teens when I played crazy aggressive, with as many notes as humanly possible, all the time.
 
It wasn't until I began to record with bands that I learned how horrible that sounded to everyone but me.  Since then, I have learned to play 'in context', and while those are tiny words, they have HUGE impact.
 
Bob Bone
 
2015/02/14 06:54:55
mettelus
mettelus (earlier post):
I believe it was George Lynch who made a comment that getting carried away with speed runs loses the listener because it is "too many notes." He was a good person to say such a thing, since coming from him it carries more weight, but speed [can lose] "expression."


Sorry to bump this old thread, but I happened to catch a video of George Lynch at NAMM 2015 and it reminded me that I made this post. It is sort of ironic that he made the comment about "too many notes" being distracting to the listener, since 64th notes are common for him... but he is one of many players who prove that speed and expression can co-exist.
 
Slight rig issue at the beginning of this song (never let others touch your rig when performing!) -
http://youtu.be/Mw4uZ_R18w8?t=4m36s
2015/02/14 13:35:41
dubdisciple
Vlad, you are definitely being your own enemy to a degree. Although it is possible to integrate a stereo drum mix into a mix, it is far from ideal. Trying to EQ a track whose key frequencies range from the low end to the high end is like trying to make a custom-fit dress that fits women's sizes 2, 10, and 22 simultaneously. You can make it one-size-fits-all, but it will certainly not flatter each size as well as one made for each size.  One of the time-honored methods for bringing out the clarity of drums is using high-pass or low-pass filters to kill overlap. With that said, if you must use a stereo drum mix, treat that component as a submix and utilize the internal mixing features to their maximum capabilities. I use Geist a lot. If I do go the route of a stereo track going into Sonar, I create isolation within Geist's mixer, carving out space for each sound within Geist's system of mixers (it has mixers at the pad level, the layer level, and what they call engines). I pretty much do what I would in Sonar (a high-pass or low-pass filter on just about every element first, before I bother with compression or any other effect). In doing that, the mix initially does not sound dramatically different when soloed. It's when those other layers of sound drop in that having created a more surgical drum mix makes a difference. That monster kick turns to mud when that monster bass fights for space with it.
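 
A minimal sketch of that kick/bass carve on an exported bass stem - the 60 Hz center, the 4 dB dip, and the file names are made-up starting points, not settings from Geist or from the post:

```python
# Hypothetical example: dip the bass a few dB where the kick's fundamental
# sits, so the two stop fighting. All the numbers here are placeholder guesses.
import numpy as np
from scipy.io import wavfile
from scipy.signal import sosfilt

def peaking_eq_sos(fs, f0, gain_db, q):
    """RBJ-cookbook peaking EQ as a single second-order section."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A
    return np.array([[b0 / a0, b1 / a0, b2 / a0, 1.0, a1 / a0, a2 / a0]])

rate, bass = wavfile.read("bass_stem.wav")    # placeholder bass stem
bass = bass.astype(np.float64) / 32768.0      # assumes 16-bit PCM

sos = peaking_eq_sos(rate, f0=60.0, gain_db=-4.0, q=1.4)
carved = sosfilt(sos, bass, axis=0)
wavfile.write("bass_stem_carved.wav", rate, (carved * 32767).astype(np.int16))
```
Soloed, the carved bass sounds slightly thinner - which is exactly the "sounds less good soloed, better together" trade-off described in this thread.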
 
Using multiple synths can be a pain as well, because it often requires you to EQ the synths in a way that makes them sound less good when soloed but much better when played together. I have recently run into the same problem trying to mix several tracks of acoustic guitar. The player wanted several layers of the exact same guitar, meaning tons of notes ended up at the exact same frequencies. Like musical chairs, only one person is getting into that seat comfortably.  For me it meant a combination of choosing which takes would shine in which frequencies, outright cutting some takes (sometimes less is more), and using automation to bring parts in and out where the conflict was too much to EQ around.
 
The bottom line is that no amount of limiting will compensate for a mix where too many elements are drowning in conflict with the others. Sometimes even panning elements LCR will make things sound cleaner, although your mono mix will sound crappy. I'm sure there is a way to convert your Maschine tracks to multiple channels. I'm almost ashamed to admit that for years I thought you were stuck with stereo in Session Drummer 3 if you did not select multiple outputs at the beginning. Not sure how it hit me, but one day I just changed the output number on the snare in a stereo instance and discovered I had more control than I thought.
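 
One way to put rough numbers on that LCR/mono trade-off is to fold an exported stereo mix down to mono and compare levels - a sketch with a placeholder file name, not anything tied to Sonar or Maschine:

```python
# Hypothetical mono-compatibility check: fold L+R to mono, compare RMS,
# and look at the L/R correlation. Expects a stereo 16-bit WAV.
import numpy as np
from scipy.io import wavfile

rate, mix = wavfile.read("mix.wav")           # placeholder stereo bounce
mix = mix.astype(np.float64) / 32768.0
left, right = mix[:, 0], mix[:, 1]

mono = 0.5 * (left + right)                   # simple fold-down
stereo_rms = np.sqrt(0.5 * (np.mean(left ** 2) + np.mean(right ** 2)))
mono_rms = np.sqrt(np.mean(mono ** 2))
loss_db = 20 * np.log10(mono_rms / stereo_rms)

corr = np.corrcoef(left, right)[0, 1]         # ~1 mono-ish, ~0 wide, <0 phase trouble
print(f"mono fold-down change: {loss_db:+.1f} dB, L/R correlation: {corr:+.2f}")
```
Hard-LCR mixes tend to show a bigger level drop and a lower correlation - the "mono mix will sound crappy" effect in numbers.
 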
2015/02/14 13:53:40
dubdisciple
PS - the next time someone tells you that Macs have some magical superior audio capabilities, ask them to show you some measurable proof. My friends and I created identical Pro Tools mixes on Mac and PC, exported them with the exact same settings using the exact same interface, and got the exact same result. You would think that once Macs started using Intel processors, some of the Mac myths would disappear. The one major advantage the Mac holds is the overall consistency of experience, due to the hardware, OS, and even some software like Logic coming from the same company.  I concede this can make for a better experience across varying user levels, but the gap narrows among power users. The person with a custom music computer who isn't using it to watch sloth porn and organize quiche recipes is likely to have as smooth an experience as a Mac user. I can honestly say my computers don't give me any more issues than the Mac I use in the studio.
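 
The "measurable proof" described here is basically a null test; a rough sketch of one, with placeholder file names for the two exports:

```python
# Hypothetical null test: subtract the Mac export from the PC export and
# see what is left. Assumes both are 16-bit WAVs at the same sample rate.
import numpy as np
from scipy.io import wavfile

rate_a, a = wavfile.read("mix_mac.wav")       # placeholder export names
rate_b, b = wavfile.read("mix_pc.wav")
assert rate_a == rate_b, "sample rates must match for a meaningful null test"

n = min(len(a), len(b))                       # guard against a stray sample of padding
diff = a[:n].astype(np.float64) - b[:n].astype(np.float64)

if not np.any(diff):
    print("Bit-identical: the null is total silence.")
else:
    peak_db = 20 * np.log10(np.max(np.abs(diff)) / 32768.0)
    print(f"Residual peak: {peak_db:.1f} dBFS")
```
A perfect null, or a residual buried far below the noise floor, is the "exact same result" being claimed.
 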
2015/02/14 14:05:55
Leadfoot
mettelus (quoted from the post above):
Slight rig issue at the beginning of this song (never let others touch your rig when performing!) -
http://youtu.be/Mw4uZ_R18w8?t=4m36s

Not bad for someone turning 60 this year!