• SONAR
  • Side chain question- think you're a ducking expert? (p.2)
2016/12/20 13:52:34
Anderton
Yes, Sanderxpander already pointed that out...it's a great feature, and I wasn't aware of it because the send shows only a single sidechain input. However, I also just realized that you can clone the track with the LP MB, clone the track controlling the MB, and then have a single audio source control two different bands in the LP MB...or even have two different audio sources control two different bands in the cloned tracks with the LP MBs, which would open up a LOT of options! Because the crossover is linear phase, you won't get any smearing, either.
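To make the "no smearing" point concrete, here is a minimal sketch of a linear-phase FIR crossover (Python with NumPy/SciPy - purely illustrative, not anything from SONAR or the LP MB, with arbitrary sample rate, cutoff, and tap count). The low and high bands it produces sum back to a plain delayed copy of the input, so splitting a source this way and recombining the bands introduces no relative phase shift between them.

import numpy as np
from scipy.signal import firwin, lfilter

fs = 44100            # sample rate in Hz (arbitrary for the demo)
crossover_hz = 200    # assumed crossover frequency
taps = 511            # odd tap count -> symmetric, linear-phase FIR

# Linear-phase lowpass; the matching highpass is its spectral complement
# (a delayed unit impulse minus the lowpass), so low + high = pure delay.
lp = firwin(taps, crossover_hz, fs=fs)
hp = -lp
hp[taps // 2] += 1.0

x = np.random.randn(fs)       # stand-in for the source track
low = lfilter(lp, 1.0, x)     # band that gets ducked/processed
high = lfilter(hp, 1.0, x)    # band left alone (or processed differently)

# Recombining the bands returns the original audio delayed by
# (taps - 1) / 2 samples, with no frequency-dependent phase shift.
delay = (taps - 1) // 2
print(np.allclose((low + high)[delay:], x[:-delay]))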
 
 
 
 
2016/12/20 14:02:26
Sanderxpander
I don't see why you'd even need to clone any tracks. I know you're a fan of that approach and you get a lot of mileage from it, but for me it creates clutter. If you need an extra feed you can use a send from the same source track, and if you need another side-chainable band, simply add a second LP MB to the chain with a different band.

Unless I'm misunderstanding your purpose.
2016/12/21 00:27:31
Anderton
Sanderxpander
I don't see why you'd even need to clone any tracks. I know you're a fan of that approach and you get a lot of mileage from it, but for me it creates clutter. If you need an extra feed you can use a send from the same source track, and if you need another side-chainable band, simply add a second LP MB to the chain with a different band.

Unless I'm misunderstanding your purpose.



No, not at all. It's just a different way of working...but let me explain, as I think this is an interesting discussion.
 
What you propose is a more efficient use of resources, but I prefer having a 1:1 correlation between what I set up and what I see. Although you could put two LP MBs in one track, I have more flexibility with two tracks in terms of panning, imaging (e.g., adding very short delays to one track but not the other), and effects.
 
For example, assume the OP's situation but now let's add a bass player, and assume we also need to duck the midrange a bit with vocals. The "no bass" target track has now carved out less bass, while the "less midrange" one still has the bass intact. Now I can pan them somewhat oppositely, reduce the bass in the "less midrange" one with EQ, reduce the midrange somewhat in the "no bass" track with EQ, and pan the bass dead center because there's a wide open space for it. The elements the two vocal tracks have in common will now tend toward the center, but the low end will "pulse" a bit due to the compression in one target track, while the midrange will "pulse" a bit in the other target track, and there will be wider imaging than what you would expect from a vocal without the use of ambience or widening.
 
Now, this may end up sounding like crap! Or it may give a really great imaging effect...I won't know until I try it, but setting up the individual target tracks from the start lets me try it. As to why I would have two source tracks for control: yes, in most cases all you would need is a send. But I may want to boost the bass on the control track so the bass goes away sooner, while not boosting the bass on the control track for the midrange, because that would defeat the purpose of using EQ to reduce the bass.
 
So the bottom line is you're right, in many cases it's not strictly necessary. But by setting up a project this way, I have more flexibility should I need to make tweaks. There are also visuals involved because of how I lay out my tracks, but of course, everyone has their own favorite way of doing that sort of thing.
 
Another aspect, which is more or less a matter of personal style, is that I generally believe in using the minimum number of parts possible, but this doesn't necessarily mean the minimum number of tracks. If you have only one instrument, then it gets all the attention. If you have two instruments, they divide the listener's attention in half, and so on. (Of course there are exceptions, but consider something like the Brandenburgs. Despite all the instruments that are playing, what you hear clearly is a limited number of harmony lines weaving in and out - this is why you can have successful guitar transcriptions of a full concerto. When it comes to what can be done with a single instrument, if I could produce music with the same impact as Henryk Szeryng's playing of Bach's solo violin partitas and sonatas, I would die a happy man.)
 
 
Even with EDM, I don't have a lot of tracks because all the cross-modulations and imaging give the feel of more tracks, but without needing more parts.
 
So...keeping that personal preference in mind, I try to get the maximum mileage out of every track, and that usually involves multiband processing. When trying to keep track (haha) of everything that's going on, I find it easier to have the functionality of two tracks represented by two tracks, not a track and a send. Furthermore, after cloning I'll often end up tweaking the clones separately at some point, but of course not always.
 
There are other reasons for keeping source tracks separate, like having one track I actually want to appear in the final mix, with the other used solely for control and therefore kept out of the mix. When you want to do soloing or muting and you have sends combined with tracks, it's more complicated, and I'm allergic to complicated. Selective soloing and muting is essential with things like multiband processing that combines tracks you want to hear with copies of the track you don't want to hear.
 
However...more and more, I'm integrating Aux Tracks, Patch Points, and Folder Tracks to cover these kinds of situations. The end result kind of splits the difference between how you work and how I currently work. You'll see this applied in the next "Friday's Tip of the Week," which relies on complex multiband processing and cloning of tracks but ultimately can fold up neatly into folder tracks, which can also be unfolded easily for manipulation during the mixdown process. Now, if we could just have folder tracks in the Console, I'd be in really great shape...
 
So ultimately, there are multiple ways to accomplish the same end result, but how you get there can be a matter of personal preference and intended functionality more than anything else.
2016/12/21 02:24:56
Sanderxpander
I understand; my reasoning is just the opposite. If I have to deal with two or three tracks for the same part, and I DON'T want to treat them differently, I'll need to do all tweaks and automation two or three times, or create extra routing like an aux track or bus. This complicates my workflow: what was a simple question - "I'd like to duck the low end of my acoustic guitar from the vocals" - now involves two vocal tracks and two guitar tracks. If I do an edit on the vocal track I'll need to replicate that on the control track. If I want to pan the guitar I need to do it twice and be careful I make the same settings. Worse, if I want to add any effect after the LP MB I'll need to reroute the bunch to a bus or aux track and put it on there, even though it's still a single part I'm dealing with. So for me it's not so much about limiting CPU usage or screen space as it is about simplicity and speed. If I do want to try some creative panning I can always clone the track when I think of it. I will concede the point about muting: it can be useful to mute the track but not its side-chaining function, or vice versa. This is why we need mute buttons on sends, like most DAWs have.
2016/12/21 13:53:32
tlw
One thing I find an extra track used as the side-chain source comes in handy for is when I don't want, e.g., the bass to be ducked a little on every kick hit, only on some of them. Or where I want the amount of ducking to change over time - sometimes over a very short time, sometimes over many bars.

I set up an audio track and drop a kick sample into it on the beats where I want the bass to duck. The sound of the sample doesn't matter, just that it has a sharp, clear transient and an appropriate length. Or I'll set up a MIDI track and point it at, e.g., a kick loaded into Session Drummer. This is then used as the side-chain "control" track.

I set up a pre-fader send on the "control" track, then pull the fader all the way down so no sound reaches the master. The pre-fader send is then used to trigger the compressor or expander's side-chain. Between automating the send and adjusting the volume/velocity of the samples/MIDI notes in the "control" track, a lot of fine tuning can go on.
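For anyone who wants to see the mechanism spelled out, here is a rough sketch of what the side-chained ducking amounts to, in plain Python/NumPy rather than SONAR's actual compressor: an envelope follower runs on the "control" signal, and the resulting gain reduction is applied to the target. The attack, release, threshold, and ratio values are made up for illustration; the point is that the hotter the control track (or its pre-fader send), the deeper the ducking, which is why automating that send gives such fine-grained control.

import numpy as np

def envelope(control, fs, attack_ms=5.0, release_ms=120.0):
    # One-pole envelope follower on the sidechain (control) signal.
    a = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    r = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros(len(control))
    level = 0.0
    for n, s in enumerate(np.abs(control)):
        coeff = a if s > level else r
        level = coeff * level + (1.0 - coeff) * s
        env[n] = level
    return env

def duck(target, control, fs, threshold=0.1, ratio=4.0):
    # Reduce the target wherever the control signal exceeds the threshold.
    env = envelope(control, fs)
    over = np.maximum(env / threshold, 1.0)   # how far above threshold
    gain = over ** (1.0 / ratio - 1.0)        # compressor-style gain reduction
    return target * gain

# e.g. ducked_bass = duck(bass_track, kick_control_track, fs=44100)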

But in general, I'm more inclined towards Sanderxpander's way of doing things. I prefer to use sends where possible rather than have loads of tracks that often look similar enough to be confusing, or are just visual clutter.

And for "four on the floor" electronica bass/kick ducking sometimes I cheat and just put Wave's One Knob Pumper on the bass track and adjust it to give suitable amounts of ducking at suitable intervals. No frequency-specific ducking that way of course, but Pumper's a surprisingly useful tool in all kinds of ways for something so simple.
2016/12/21 14:43:27
Anderton
Sanderxpander
I understand; my reasoning is just the opposite.



Yes, I agree your approach is better for the OP. I was just explaining why I use this approach a lot.
 
I'm going to re-visit the next "Friday's Tip of the Week" and see if it can be simplified, but I don't think so because there are three bands that require separate processing...at present I don't see any way around having three aux tracks with crossovers.
2016/12/21 15:06:57
Sanderxpander
Thanks to both tlw and Anderton for highlighting some examples where it's useful to have distinct "side chain source" tracks.
2016/12/21 16:26:45
Klaus
Great thread - I really liked reading about the different views and ways to accomplish a task!
 
Anderton
 
There are other reasons for keeping source tracks separate, like having one track I actually want to appear in the final mix, with the other used solely for control and therefore kept out of the mix. When you want to do soloing or muting and you have sends combined with tracks, it's more complicated, and I'm allergic to complicated. Selective soloing and muting is essential with things like multiband processing that combines tracks you want to hear with copies of the track you don't want to hear.
 



This is only a guess, but perhaps by "complicated" you mean the same thing I struggled with until I changed the default Send behaviour in SONAR so that Pre-Fader Sends (not only Post-Fader Sends) are also muted when the same track or bus is muted or another track or bus is soloed:

AUD.INI:

LinkPFSendMute=1 (default is 0)

If so, this could be helpful - for me, it was!
 
Best,
Klaus
2017/03/22 22:28:06
Granteus
Funny that after 20+ summers, and recently getting back into the e-music groove, I stumble here. Long time, Craig.
 
One other reason to consider cloning the sidechain track is to help in audio restoration. By having a separate cloned track, you gain the ability to time-shift the sidechain, which can allow for longer attack times on the original signal at the expander/gate and can help make the end result sound more natural and less truncated or processed.
 
In the studio this can be quite useful if the track/take you like is not close-mic'd or the idiom is not high energy. I usually go with the send-to-bus-as-sidechain approach...but since we can slide tracks in time with today's tech, pre-triggering a gate/compressor/expander can clean up the attack without losing the initial transient. The reverse can also be true with instruments that have strong initial transients, like pianos; adding a couple of milliseconds of delay on the compressor sidechain to help squash a rough, unflattering piano has been around since well before the DAW.
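A quick sketch of that time-shift idea in plain Python/NumPy, purely for illustration (in SONAR you would simply slide the cloned clip or sidechain track in time): shift the control copy a few milliseconds earlier so the gate/compressor/expander starts reacting before the transient lands, or a few milliseconds later so a piano's initial transient gets through before the compressor clamps down.

import numpy as np

def shift_ms(control, fs, ms):
    # Shift a sidechain control signal by ms (negative = earlier / pre-trigger).
    n = int(round(abs(ms) * fs / 1000.0))
    if n == 0:
        return control.copy()
    if ms < 0:
        return np.concatenate([control[n:], np.zeros(n)])   # advance (pre-trigger)
    return np.concatenate([np.zeros(n), control[:-n]])      # delay

# e.g. pre-trigger a gate's sidechain 5 ms early so a slow attack doesn't
# clip the front of the note:
# control_early = shift_ms(control, fs=44100, ms=-5)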
 
With ducking or band-specific downward expansion, I prefer the post-fader bus send approach, since I can mix down each channel to a group and then send a relatively mixed sidechain of all but the sources to the intended group's "master" channel. In my duet I most often record everything live rather than tracking, so I have 1 direct and 4 vox FX tracks, along with three piano mics, an instrument mic, and a couple of room mics. Once I get the general relative sound I like, I use the compander with sidechain on/from each group master to clean up the resulting leftovers.
 
The Waves C6 has been working out for me this way, and I have pre-fader sends nulled on my master groups in my project templates. Then I just dial it up to fix the mix's soundfield/spectrum balance. The C6 can also work well on individual tracks as a compressing/expanding paragraphic EQ, but I often prefer a super-simple, quasi-parametric 2-band-with-shelves channel EQ to make broad strokes and free up CPU. If I need more precision, I place those additional hungry FX plugins on the stereo group channels and the master out.
 
Having so many options for setting up your digital mixer these days is a real boon to our individual workflows.
 
Really terrific thread...glad to see the recording guru is in.
 
