For newbies (a broad overview of many topics)
So, I'm not too far away from my one year anniversary of indulging in this thing of ours. I've been banging away at it very seriously, and in the process I've learned a lot of stuff, but I haven't known it so long that I've forgotten it or (hopefully) forgotten how to state it in terms semi-comprehensible to newer users. So I figured I'd take this time to enumerate some of these things to help the newer newbies coming along after me. Some folks may not agree with everything I say here, but that's good since it'll stimulate some discussion that will also be helpful. Obviously this is a very high level overview, just to give newbies a rough picture that they can then dig deeper into. Pardon any spelling issues, since I wrote this in stream of consciousness mode.
Tracking

Tracking refers to the process of performing and recording the parts that make up each track of the song. This is an area where you can read wildly varying opinions about how to do it, partly because much of what you read comes from the pre-digital days, when fairly different issues existed relative to the modern, digital DAW type of studio. Even then, though, there are lots of opinions and options.
In the pre-digital days, and in the pre-24 bit digital days (when digital DAWs were limited to storing 16 bit CD format data), it was very important to record your tracks as 'hot' as reasonable, meaning the volume was as close to hitting 0dB (the highest level possible) as you could get it, without actually hitting 0dB. This raised a lot of issues, because if you were recording that high, it was very easy to go over 0dB. In the tape world that wasn't so bad since tape has a natural kind of saturation quality that would compress those 'overs', but in the digital world it's really bad because it will create ugly distortion. This would often require that external compressors be used between the amp/mic and the recorder, to prevent the signal from going over 0dB.
You can still use an external compressor if you want, but in the modern 24 bit digital studio you don't really need to. There are enough bits to spare that if you track so that you are peaking in the -12dB to -6dB range (some folks suggest a tighter -9dB to -6dB), then you are fine. That leaves plenty of room for the occasional overly loud note without clipping (going over 0dB.) There's a lot to be said for working this way, since if you compress during tracking you cannot undo it later. The same applies to any other effects you apply during tracking. Whereas you can always apply those effects after the fact as plugins on the track and change them as desired all the way up to the final mixdown. So it's hugely flexible.
* You can still hear these effects as you track the part, if you monitor through the DAW, i.e. you let the person tracking the part hear the output of the DAW, which will include any effects you have put on the track they are recording. But you need to be sensitive here to latency issues, i.e. how long it takes for the output to come out after the input goes in. It will drive people crazy if the latency is more than a few milliseconds generally, though it depends on the type of instrument sometimes. If you do monitoring via external hardware before the signal enters the DAW, they cannot hear anything that you do to the track within the DAW.

If all of your tracks are peaking in that -12dB to -6dB range (so on average peaking about -9dB probably), when all those tracks are added together you'll end up with a final combination that leaves you with roughly 6dB to 9dB to play with as you do the mastering steps (see below), and we can probably assume that track and/or buss compression during the mix is going to knock them down a bit more as well. If you are going to send out your songs to be mastered by someone else, then you should ask them where they prefer the peak levels to be on the final mix, since various mastering houses have different opinions about how much room they need to do their thing (called the 'headroom', i.e. the space they have between the maximum 0dB level and the high peaks of your mix.) You (or they) will get the mix the rest of the way up to CD volume levels as the final step of that mastering phase.
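If it helps to see the numbers, here's a minimal sketch of the arithmetic behind those latency and headroom figures. The buffer size and sample rate are just example values, not recommendations, and the conversions are the standard 20*log10 dB formula.

```python
import math

# Latency through the DAW is roughly the buffer size divided by the sample
# rate (doubled for a round trip). Values here are illustrative only.
sample_rate = 44100          # samples per second
buffer_size = 128            # samples per buffer

one_way_ms = buffer_size / sample_rate * 1000
round_trip_ms = 2 * one_way_ms   # in and back out again (converter delay ignored)
print(f"one-way ~{one_way_ms:.1f} ms, round trip ~{round_trip_ms:.1f} ms")

# Peak level in dBFS for a linear sample value between 0 and 1.
def dbfs(linear_peak):
    return 20 * math.log10(linear_peak)

# A track peaking at -9 dBFS corresponds to a linear peak of about 0.35,
# which leaves roughly 9 dB of headroom before clipping at 0 dBFS.
print(f"-9 dBFS as linear amplitude: {10 ** (-9 / 20):.3f}")
print(f"linear 0.355 as dBFS: {dbfs(0.355):.1f}")
```

With a 128 sample buffer at 44.1K you're looking at roughly 3ms each way, which is why small buffers matter so much when you're monitoring through the DAW.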
Tracking Hardware

In terms of hardware, there are four fundamental pieces you need, though often two of them are one and the same. These are:
- Pre-amp. Any time you use microphones, you need to be able to get the output of the microphone up to the level that your sound card can accept. A pre-amp does this job. Sometimes the pre-amp is built into the sound card, and sometimes it's an external device. There are some types of mics that have this pre-amplification built in, but most require a pre-amp.
- DI box. A direct input box. Some people don't use these and use a microphone to get everything into the DAW that they want to record. Other folks want to directly input things like guitars and basses into the DAW and apply processing after the fact. See the Traditional vs. Simulators section below. Often the same box serves both as a pre-amp and DI box.
- Sound card/interface. Since we are living in a digital world when we work on computers, the signal from your pre-amps or DI boxes has to be converted to digital format as it passes into the computer. And, when you do playback, the digital data spit out by the computer has to be converted back to analog form to send to the speakers. The audio card or audio interface provides this service.
- Microphones. Not everyone needs microphones, but most of us do, and they are important parts of the equation.
Pre-amps and microphones probably suffer more than any other devices used in recording from the 'golden ear' syndrome. You'll hear people waxing poetic about the enormous difference that this or that pre-amp made, when you can't hear a bit of difference. Probably most of the time they can't either, but it's well known that spending $5000 will affect your hearing.
You don't need to spend $5000 to get a good quality pre-amp, but you may need to get up around $500 to get one that's unambiguously not doing any disservice to your recordings. There are two basic types of pre-amps: those that strive to be 'transparent', i.e. not to change the signal at all, just amplify it, and those that purposefully color the sound in a way that is deemed pleasant. The latter type are often tube based, but not always. It's nice to have one of each, but if you can have only one, it's probably best to have a transparent one (though I went the other way myself.)
Some pre-amps are mono, some are stereo, and some have 4, 8, 16 or whatever number of inputs. If you are just recording yourself, then you can usually get away with a mono or stereo pre-amp. If you want to record a group at a time, or need to do something like record real drums (which can require a number of mics), you'll have to get the required number of pre-amps. Be aware that a device that costs $500 and has one pre-amp is likely to be of better quality than one that costs $500 and has 16, for obvious reasons. But if you have to have that many and that's all you can spend, then that's that.
You can also have many other types of external hardware that can be used during tracking, such as compressors, EQs, various types of special effects, etc... But for most of us, in a small home studio, that's not something we'd have, leaning more towards applying those things via plugins inside the DAW.
All of that external audio hardware will feed into the audio interface, where it is digitized and passed to your software package to be stored on disk in whatever format your software uses. Audio interfaces come in two basic physical forms, internal and external. Internal ones have all of their guts inside the computer and just expose some input and output connectors, usually via some sort of 'break-out' cable since they don't have room on the back of the card for lots of connectors. External ones either have a card that goes in the computer that in turn talks to some external box via a proprietary protocol, or they're based on Firewire (or, less desirably, USB), in which case the operating system/motherboard provides the communications protocol between them.
Generally internal cards are somewhat limited in inputs and outputs, usually stereo or a couple of stereo pairs only. If you are just recording yourself one instrument at a time, that's probably fine. If you want to record a band, or you need to mic up a drum kit, you'll need more inputs, one for each pre-amp output that you need to bring into the computer at a time. On the other hand, external interfaces (Firewire based usually) will have somewhat higher latency, since they have the extra burden of getting the data both over the internal data paths of the computer and back and forth to the external box. This may not be an issue for you, or it might, depending on how you work. If you use lots of soft synths and amp sims, you will need to monitor through the DAW as you track, so that you can hear the effect of the amp sim or synth. If you are just recording through microphones, many audio interfaces provide a 'zero latency' output where you hear the signal before it goes into the DAW, but you won't hear the effect of any plugins you have on that track in this case.
* Note that there are various audio interfaces that have a proprietary card in the computer plus some external 'pod' type thing, which allows them to have the space for more inputs and outputs while still retaining the performance benefits of an internal card.
Since the audio card does the conversion from analog to digital (A to D, or A/D as it's usually written), it is an important part of the process, and bad conversion can limit sound quality. This is another of those subjects where page upon page of debate is carried on as to how important it is, but clearly you don't want converters that suck. We've come a long way in digital technology and most semi-pro and up audio cards will do a pretty good job, and it's clearly far less important than various other issues such as your playing technique, the quality of your instruments, your recording room, etc...
If you are doing pure electronica, or doing instrumentals that are all DI'd in, then you wouldn't need a microphone. But otherwise, you need to get acoustic instruments (guitar, drums, tambourine, percussion, etc...) and vocal performances into the system, and mics are obviously the way to do that. There are two fundamental types of microphones, dynamic and condenser. Condenser mics generally require power from the pre-amp to work (so most pre-amps have a +48 volt button on them to provide that power when needed.) Dynamic mics don't require this, but they still need a pre-amp generally.
Broadly speaking, condenser mics tend to be more sensitive, and so they capture a lot more 'air', i.e. the tinkly higher frequencies, and more fine detail in the sound. So they are good for a lot of things, particularly vocals and acoustic instruments. Dynamic mics often cover less than the full frequency range, sometimes having purposefully rolled off low and high frequencies, which often makes them very good for certain types of vocals, for drums, for guitar mic'ing and things like that, things that sometimes benefit from less low-lows and high-highs. Both types will sometimes purposefully deviate from a flat response in some way in order to be optimized for some particular applications. Picking the right type of mic can often mean less processing required after the fact, which means a cleaner signal through the system, and that means higher fidelity, all other things being equal.
In the condenser mic area, there are tube based and non-tube based ones, and there is a lot of lust factor involved, since this is an area where the best of the best costs a lot and there is a lot of fetish activity involved in microphone selection. Someone had an immaculate set of old Telefunken 251 mics up for sale for $30,000 on Gearslutz! If you read any thread of the 'what mic is best for X' type, you'll get as many opinions as posters most of the time, and the more obscure your suggestion, obviously the more knowledgeable you are. If no one but you has ever heard of the mic you are using, then you win. But clearly, microphones can have a lot of 'character', meaning that they color the sound in certain ways, often quite substantially, so here again, using the right mic for the occasion, combined with the right pre-amp, can often mean getting the right sound straight into the system with little or no additional EQ required.
For most of us in a small home studio though, this is a pipe dream. We cannot afford $20K worth of mics. So you generally want to get a couple that are flexible and suit your needs. A good set would probably be something like a nice, fairly characterless condenser microphone, plus some dynamic mic that would be good for guitar, possibly vocals, and other things, and if you have the bucks, then maybe a high character condenser, maybe tube based but not necessarily. That would get you a good bit of flexibility.
One consideration is stereo recording. If you want to record some instruments in stereo, such as guitar or piano, you need two mics of the same sort. In most such situations you are recording something that benefits from a condenser mic, i.e. acoustic instruments are often the ones that are recorded this way. And that of course means you also need a stereo pre-amp, not just a mono one. So you are looking at about twice the price on both the pre-amp and mic front. You don't necessarily have to use identical mics, but commonly that's desired to get a consistent sound from left to right. But there are other stereo techniques, such as mid/side, that might do better with different mics.
The 'Mixing Spaces'

When you are mixing your song, the goal of course is to have all the various tracks combine in some complementary way. You do this by placing each track somewhere in a number of spaces. These spaces are listed here along with the tools you use to place tracks within those spaces:
- Time (Tools=Composition.) The most important of all. If no instruments play at the same time, then you can hear all of them no matter what they sound like. Of course that wouldn't be too interesting, but a good composition can make all the difference in the world to the clarity of the result. Huge rock songs with multiple monster sized distorted guitars going all the time, and all the other instruments often playing at the same time on the same beats, are hugely difficult to make clear relative to something more sparse.
- Frequency (Tools=Composition, EQ.) You can have instruments playing at the same time if they don't share any (or many) frequencies, and they will both be completely hearable separately. Limiting each instrument to only those frequencies that are absolutely necessary, and removing frequencies from one track while emphasizing those same frequencies in another, will allow them to live together much better. Good composition (using tones on each instrument that don't step on each other) and EQ (de-emphasizing and emphasizing frequencies) are hugely important here. Too many newbies don't think at all about how composition affects the clarity of the final product.
- L/R Space (Tools=Panning.) You can have tracks that don't completely meet the above criteria but still keep them clear if you keep them apart from each other. So you can have two guitars, for instance, that sound fairly similar and are playing the same part (a common trick to create a thick sound), and just pan one to the left and one to the right. You have to be careful since many people won't listen to your stuff from the 'sweet spot' where that distinction is clear, but many people do. And of course not only does it help separate instruments from each other, it can create a very interesting 'landscape' in front of the user when they do listen from the sweet spot.
- Front/Back Space (Tools=Compression,EQ,Reverb.) An instrument that has a strong attack (the initial hit of the sound is strong) and has no reverb and has plenty of high frequencies will come forward. It will sound very immediate and up front, like someone talking right in front of you or whispering in your ear. As you add reverb, reduce the attack (using compression) and remove more high end (using EQ), that sound will seem to move further and further away. You can't always have every instrument completely up front all the time, well you can I guess if you are say a rock trio (drums, bass, guitar), but as you add more instruments, some need to be more spotlighted and some need to be less so. So you want to push some back and some forward.
- Top/Bottom Space (Tools=EQ.) This one is more subtle, but with the correct EQ you can often create a certain amount of illusion of being above or below the speaker plane, though it probably really depends on the listener being in the sweet spot with the appropriate relationship of their head to the tweeter/woofer position. But sometimes you will hear mixes where instruments seem to float above or below.
A good mix will use all of these spaces to place instruments, though not all of them necessarily on every track. I.e. you will commonly hear kick, snare and vocal dead center. But that's fine, because those parts are very well separated along the frequency axis. As long as they are very well separated along one axis, or pretty well separated along more than one, they can often live together quite well.
Compression and EQ

The real art in all of this is probably EQ. It's definitely an art and you can spend years and years learning it. It's so important. Keep in mind that EQ is part of both composition and mixing, and they really are kind of the same thing; it's just that one set of decisions is made up front during tracking (the settings you use on the guitar amp and pedals and such), and another after the fact during mixing (though that's not the case in some studio configurations, see the Traditional vs. Simulators topic below.) Having this frequency range cut out here instead of there can make a huge difference in the clarity of the mix and the character of a track, and how well it lives with other tracks. For instance, you can change a brain-splittingly hard sounding guitar to a soft, fizzy guitar by cutting out the right frequencies.
As you will read always, don't EQ tracks in isolation, do it in the context of the overall tune. The reason being that the appropriate EQ for a track in the overall context of the song may sound like crap in isolation. This isn't an absolute rule of course (there are no absolute rules, another aphorism that you'll often hear.) You'll learn over time probably that this style of guitar in this type of song will likely need an EQ kind of like this, and you can quickly dial that in in isolation and then bring up the whole thing and tweak it the rest of the way.
Guitars in particular are often radically EQ'd, because they are enormous frequency suckers, meaning that they cover a huge amount of frequency range very thickly. So you often need to significantly reduce their range (on the top and bottom end, or often by pulling frequencies out of the middle in various ways.) Sometimes you will hear guitars, particularly acoustic ones, that are nothing but very tinkly high frequencies. This lets them float up above everything else. Distorted guitars often have much of their low frequency content removed to avoid sitting on top of the bass guitar. It's a delicate call as to how much low frequency you take out: enough to avoid problems, but not so much that you lose the power of the part.
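As a concrete (if crude) illustration of that kind of low-end cleanup, here's a minimal sketch of high-passing a guitar track so it doesn't fight with the bass. It assumes Python with numpy/scipy, and the 100 Hz corner frequency and filter order are just example values you'd tune by ear in the context of the mix, not a recipe.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Roll low end off a mono guitar track with a high-pass filter.
# Corner frequency and order are illustrative, not recommendations.
def high_pass(track, sample_rate=44100, cutoff_hz=100.0, order=4):
    sos = butter(order, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(sos, track)

# Fake one second of "guitar": a 60 Hz rumble plus a 440 Hz note.
sr = 44100
t = np.arange(sr) / sr
guitar = 0.5 * np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)

filtered = high_pass(guitar, sr)
# The 60 Hz component is heavily attenuated; the 440 Hz note passes through.
```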
Compression is a bit of a black art also. It's hard to explain to newbies in a meaningful way and I'll probably do no better, but I'll take a whack at it. Basically, every sound you hear has two very important characteristics:
- Attack. How fast the sound reaches its maximum volume.
- Decay. How long, after the maximum volume is reached, it takes for the sound to go away.
These two things are the difference between a gun shot and a whistle. A gun shot reaches its maximum volume very fast, and it goes away almost as fast. So if you drew a graph of it, with volume going up and time going right, it would be a very tall, very thin spike, with a very low trail of any echoes of the sound. A whistle comes up very softly to its maximum volume and decays slowly back down. So it would look more like a long, low hump.
Fundamentally, compression is designed to allow you to adjust these two characteristics of sounds you've recorded. Every compressor has an attack setting and a release setting (though they may not always actually expose both of them to you, but they're still there, even if hard coded.) When a new sound enters the compressor, it sees the attack of the new sound occurring, and it starts counting time. Once the set attack time has been reached, it will then reduce the volume based on the amount of compression you've set the compressor to apply. So basically it's redrawing that graph to change the shape, which changes the sound. For instance, if you redrew the graph of the gun shot to remove the spike, and just left the trailing echo, it would sound nothing at all like a gun shot, and more like a whistle. Compressors allow you to change that graph in that sort of way.
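If you like seeing things in code, here's a minimal sketch of one common feed-forward compressor design: an envelope follower whose speed is set by the attack and release times, driving gain reduction above a threshold. It's illustrative only (plain Python/numpy, not any particular plugin's algorithm), and the parameter values are just examples.

```python
import numpy as np

def simple_compressor(x, sr, threshold_db=-20.0, ratio=4.0,
                      attack_ms=10.0, release_ms=100.0):
    """Bare-bones feed-forward compressor sketch over a numpy array x.

    The attack/release times control how quickly the detected level (and so
    the gain reduction) follows the signal, which is what reshapes the
    'graph' of the sound described above.
    """
    attack_coef = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    release_coef = np.exp(-1.0 / (sr * release_ms / 1000.0))

    env = 0.0
    out = np.zeros_like(x)
    for i, sample in enumerate(x):
        level = abs(sample)
        # Envelope follower: rises at the attack rate, falls at the release rate.
        coef = attack_coef if level > env else release_coef
        env = coef * env + (1.0 - coef) * level

        level_db = 20 * np.log10(max(env, 1e-9))
        over_db = max(0.0, level_db - threshold_db)
        gain_db = -over_db * (1.0 - 1.0 / ratio)   # reduce anything over threshold
        out[i] = sample * 10 ** (gain_db / 20.0)
    return out
```

Shortening the attack makes the gain clamp down sooner after a hit; lengthening the release keeps it clamped down longer into the decay, which is exactly the 'redrawing the graph' idea above.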
Some basic ways that you use a compressor are:
- Make the attack shorter. By setting the attack time so that you make the graph go back down more quickly than the natural sound, you make the loud part of the sound shorter in time. If the sound doesn't quickly come up to maximum volume, this can also reduce the peak volume, because the sound may never reach peak volume before the compressor knocks it down again. Even a very short, sharp snare sound takes a little while to come up to full volume, so a very short attack time will reduce its volume. It may soften the sound, because it never reaches full volume, but it may also make it sharper, because the peak is narrowed.
- Push down the decay. The length and height of the decay has a huge effect on the sound. Once the attack time you've set has been reached, the compressor pushes the rest of the sound down. How much you push it down and for how long (the amount of compression and the release time) allows you to further shape the sound. For instance, if you have a drum that rings after you hit it, you can get rid of the ring by heavily pushing down the decay part of the drum sound, keeping just the initial whack. You will also often push down just the part of the decay right after the attack, and let the rest of the decay happen naturally, after it's naturally come down to a lower level.
- Get rid of excessive attacks. If the compressor has a zero attack time (some don't), you may also use it to just knock down the attack (completely, not letting any of it through) and let the rest of the sound through. You can often use a limiter for this (a compressor that has fixed zero attack time) if your regular compressor doesn't have zero attack.
- Round off the leading edge of the attack. With a fast attack, relatively low ratio, and a fast release, you can often round off the leading edges of the attack, to soften the sound without necessarily reducing dynamics a lot. Be careful of really fast attack and fast release used together, since it can cause distortion.
A big gotcha here though is that if the sound happens again before the release time is up (the time that the compressor is pushing down the decay), then the attack on the next instance of the sound will be totally clipped off, because the compressor hasn't had time to get back to being ready for another sound. So you have to carefully balance the desire to push down any trailing decay of the sound against how fast subsequent sounds will come along on a given track.
The subsequent sound will probably be far louder and will cover up any remaining decay from the previous sound anyway, so that argues for setting the release no longer than the shortest time between notes of the instrument. But then, if you have some more widely spaced notes, the decay will not be pushed down completely and they might sound bad. So it can be a compromise. This is another case where composition that is mindful of the requirements of the mix can help, by not mixing wildly different styles in the same part. You can also use automation to change the release time in different sections of the track if necessary.
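To put a rough number on that trade-off, here's the trivial arithmetic; the tempo and note subdivision are just example values.

```python
# If notes on a track arrive every 16th note at 120 BPM, the gap between hits
# is 60 / 120 / 4 = 0.125 s, so a release comfortably under ~125 ms lets the
# compressor recover before the next attack. Values are only illustrative.
bpm = 120
beats_per_note = 1 / 4          # 16th notes
gap_seconds = 60 / bpm * beats_per_note
print(f"gap between notes: {gap_seconds * 1000:.0f} ms")
```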
The other gotcha is that some instruments, with very low notes, take a while to even go through a single cycle (the high/low cycles of the waves of air pressure that make up all sounds), and you can set the compressor attack/release so short that it's happening faster than a single cycle. So the compressor cannot correctly figure out when a new attack is happening, and this can cause distortion. So you have to be a little careful of that on bass instruments.
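Here's roughly how long those cycles are for a few low sources (the note choices are just examples), which shows why attack/release times down in the handful-of-milliseconds range start colliding with the waveform itself rather than with new notes.

```python
# Period of one cycle for some low notes. A low E on a bass is about 41 Hz,
# so a single cycle lasts roughly 24 ms; times much shorter than that are
# reacting to the wave itself rather than to new attacks.
for name, freq_hz in [("bass low E", 41.2), ("5-string low B", 30.9), ("kick fundamental", 60.0)]:
    print(f"{name}: {1000 / freq_hz:.1f} ms per cycle")
```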
Compressors have 'knees', which is a term used to describe how quickly the compressor applies the compression. It may start applying it a good ways before the signal reaches the threshold you've set and gradually increase it. Or it might wait until the signal is very close to the threshold and apply it very quickly. Soft knee compression tends to have the 'round off the leading edge of attacks' type of effect similar to that discussed above, because the compression is ramped up gradually in response to the signal rising up toward the threshold. Hard knee compressors tend to sound more aggressive, since they let the signal come right up to the threshold, then clamp down fast. Soft knee compressors tend to sound more 'musical', and many vintage compressors of yore that people drool over now were soft knee.
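Here's a minimal sketch of the static gain curve behind that distinction, using a commonly published textbook-style formula (a quadratic blend across the knee width). The threshold, ratio, and knee values are just examples, and this is not any particular compressor's exact curve.

```python
def gain_reduction_db(level_db, threshold_db=-20.0, ratio=4.0, knee_db=6.0):
    """Static gain computer: hard knee when knee_db is 0, soft knee otherwise.

    Inside the knee region the compression is blended in gradually (the
    'rounded off' behavior); outside it, it matches a hard-knee compressor.
    """
    over = level_db - threshold_db
    if knee_db > 0 and abs(over) <= knee_db / 2:
        # Quadratic blend across the knee region.
        return -(1 - 1 / ratio) * (over + knee_db / 2) ** 2 / (2 * knee_db)
    if over > 0:
        return -(1 - 1 / ratio) * over
    return 0.0

# Just under the threshold: a soft knee is already compressing a little,
# while a hard knee (knee_db=0) is doing nothing yet.
print(gain_reduction_db(-22.0, knee_db=6.0), gain_reduction_db(-22.0, knee_db=0.0))
```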
There are no recipes for good compression and EQ, so don't bother to even ask. It depends completely on the circumstances at hand. Many EQs have graphics that show the EQ curves. They can be very helpful for us less experienced folks, but in the end it's about how it sounds. With compressors, there's no graphic representation of the change to the graph of the sound. You really need to hear it. You could of course do a bounce of a track and compare the original and new wave forms to see how they were affected.
Busses, Sends, and Track Routing

This is one that I think all newbies struggle with initially. But it's really not too complex. The important points are:
- Track Routing. Every track (at the bottom of the track in the console view) has a routing setting. This indicates where you want the output to go. In the simplest project, each of them would go to the sound card, which sends all the tracks to the outputs so that they can be heard.
- Busses. Sending everything to the sound card is a little limiting. You may want to be able to, for instance, control the volume of all the drums, or all the guitars, or all the vocals, as a group, or apply processing to all of those things. To support this, you can create busses, discussed below.
- Sends. The routing of each track can only send it to one place, usually to some buss. But you often also want to send a track to more than one place. So you can also add one or more Sends to a track and send it to other places. Each send has its own volume control and pan control, so you can control how much of that track is sent. For the main track routing, the track volume fader provides that level control.
So every track has to go to at least one buss (where the sound card can be considered a special buss), else it's useless because you'll never hear it. That is done via the track routing and the fader controls the track's output level to that buss. Busses are arbitrary, you can create as many as you need and call them whatever you want. You can also use Sends to send a track to more than one buss.
Note that busses also have a routing and sends. So they in turn must be routed somewhere, and you can send the output of the busses to more than one other buss by using Sends. So basically what you end up with is something like a river system, in which smaller streams (tracks) feed into bigger streams (busses), which potentially feed into larger rivers (more busses), which eventually feed into the Amazon river (the sound card), and hence out into the ocean (the room via the monitors.) So it's a kind of graph. In mathematical terms it should be an acyclic graph, i.e. no loops, but you can create a loop by accident. If you do, you can get a huge feedback cycle and have to dive for the volume control to stop it. So be careful. I.e. don't route a buss to another buss that in turn feeds back into the original buss via a send or something.
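If it helps to picture that 'acyclic graph' idea, here's a minimal sketch of routing represented as a directed graph, with a check for loops. All the names are made up, and a real DAW's routing rules will prevent many (but not necessarily all) such loops for you.

```python
# Routing as a directed graph: each track or buss points at the places its
# output goes (its main routing plus any sends). Names are made up.
routing = {
    "Kick":       ["Drums"],
    "Snare":      ["Drums", "Reverb"],   # main routing plus a send
    "Guitar L":   ["Guitars"],
    "Drums":      ["Master"],
    "Guitars":    ["Master"],
    "Reverb":     ["Master"],
    "Master":     ["Sound Card"],
    "Sound Card": [],
}

def find_feedback_loop(graph):
    """Depth-first search for a cycle; returns one looping path, or None."""
    visiting, done = set(), set()

    def visit(node, path):
        if node in visiting:
            return path + [node]           # we came back around: a loop
        if node in done:
            return None
        visiting.add(node)
        for dest in graph.get(node, []):
            loop = visit(dest, path + [node])
            if loop:
                return loop
        visiting.remove(node)
        done.add(node)
        return None

    for start in graph:
        loop = visit(start, [])
        if loop:
            return loop
    return None

print(find_feedback_loop(routing))   # None means the routing is safe (acyclic)
```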
So in a typical project, you might have a buss named Drums that all the drum tracks feed into, a Guitars buss that all the guitars feed into, etc... Those busses in turn would probably feed into a buss you call something like Master, and then that would feed into the sound card. It often gets much more elaborate than that, but that's the idea. The busses also have f/x bins, so you can apply plugins to groups of instruments using one plugin, instead of putting one on each individual source track. In this scenario, you are using the track routing to route the tracks to the busses, those busses are routed to the master buss, and it is routed to the sound card. No sends are involved here.
Another typical use of busses is for effects. You can put a reverb on a buss, and set it up so that it only puts out the reverb sound, and not any of the original incoming signal (this is referred to as the 'wet' signal, where the 'dry' signal is the original signal.) You can then use Sends from other tracks and busses to send various amounts of those tracks and busses through that reverb. This way, everyone shares the reverb, which saves resources and creates a coherent sense of space for the mix. The same scheme is used with various other types of effects, such as flangers, delays, doublers, etc... You use the Send level on each source track and buss to control how much of that track/buss is sent through that effect buss.
Mastering

In more traditional scenarios, mixing and mastering are two completely different things, usually done by completely different people. But in the home studio situation, most of us do both. So you'll often hear wildly varying opinions about this process, because the people discussing it are coming from wildly varying approaches. I'll assume here the latter scenario, where you are doing both. There are two parts to mastering. One is the creation of the final two track representation of the mix. The other is putting the tracks into a coherent order, getting the relative volumes right, and creating the actual CD. I'm just talking about the first part of it here, the 'pre-mastering' part.
The mastering step generally has a few basic goals:
- Often you tweak the EQ of the overall mix at this point. Though it's best to get everything sounding right in the mix itself, sometimes it's hard to get exactly the right EQ because of all of the instruments mixing together. And even if you don't adjust the overall EQ, you may want to chop off the lowest low frequencies, below 40Hz, sometimes slightly above that, since few people have systems that can correctly reproduce them.
- Often a small amount of compression is applied.
- Various other optional processes, such as adding simulated analog warmth, widening the mix out, adding a wee bit of overall reverb, and various other things.
- Bring the mix up to levels that are correct for CD creation, i.e. peaks are just below 0dB level (the loudest level that can be represented on a CD.)
The big source of confusion in that list is the bringing up of the volume to CD levels. The convention for the CD format is that the highest peaks of the content come up to 0dB (usually just slightly below that for practical reasons.) This makes the most use of the limited dynamic range (16 bits) of the CD format, providing as much signal relative to noise as possible. This step is done using a limiter/maximizer. It's doing two things, as indicated in the name. It allows you to maximize the volume so that the peaks are just below 0dB. But it also actually lets you push up the volume so that the peaks would go over 0dB, and it clips those peaks back down to keep them under 0dB. That's the limiting part of it.
The deal is that, in most mixes, there will be various short peaks that are considerably higher than the overall body of the song. In many cases these are snare or kick drum hits, or other fast attack instruments. If you strictly followed the above formula, and pushed up the highest peak to just under 0dB, and you had one snare hit that was 6dB above everything else, you would give up a lot of dynamic range just to fully represent that one snare hit. It's better to knock down that peak and push up the body further than it would otherwise be able to go.
Such a peak can happen because the compression used during the mix rarely uses a zero attack time, so it always lets a little of the attack get through. A particularly sharp and loud snare hit could leave a very short but high peak of this sort. The limiter is a zero attack time compressor that will completely compress anything that goes beyond the level you've set, by just chopping it off. They try to be smart about it, but fundamentally they still just chop it off. But for a very short peak, it won't really have much hearable effect, and it can allow you to get the body of your content up considerably higher.
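Here's a very crude sketch of the maximize-then-limit idea. A real limiter uses look-ahead and smoothing rather than a bare hard clip, and the gain and ceiling numbers here are just illustrative.

```python
import numpy as np

def maximize_and_limit(mix, gain_db, ceiling_db=-0.3):
    """Crude sketch of the maximizer idea: apply make-up gain, then clip
    anything over the ceiling. The principle is the same as a real
    limiter: a few stray peaks get chopped so the body can come up higher.
    """
    ceiling = 10 ** (ceiling_db / 20)
    boosted = mix * 10 ** (gain_db / 20)
    return np.clip(boosted, -ceiling, ceiling)

# A fake 'mix' whose body peaks around 0.3, with one stray drum hit at 0.6.
mix = np.concatenate([0.3 * np.sin(np.linspace(0, 200, 44100)),
                      np.array([0.6, -0.6])])
mastered = maximize_and_limit(mix, gain_db=9.0)   # ~9 dB of push; the stray hit gets clipped
print(mastered.max())   # just under the -0.3 dBFS ceiling (~0.966 linear)
```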
The down side is that people have started abusing this capability. Instead of just knocking down 'stray hairs', they've been knocking down more and more and more over the last decade. The reason is to make their stuff sound louder than other people's stuff, so that it stands out more when the listener hears it. The end result has been content that has almost no dynamics at all, because the body of the content has been pushed up to the point that it's just below 0dB itself, and all of the peaks are chopped off completely. This generally leads to very harsh, boring music and you should avoid this. The trick is to mix it well, so that your peaks are well under control but you still have a dynamic mix, then use the limiter to knock down just those stray hairs. You'll end up with better sounding content.
The limiter/maximizer must be the last plugin you use, and it spits out the final two track master, also adjusting down from the 24 bits that SONAR uses for recording to the 16 bit format of the CD. However, where you use that limiter/maximizer depends on how you approach the pre-mastering phase. There are two basic scenarios:
- In the DAW. You can do all these steps in the DAW itself. You create a buss, usually with a name like "Master" or something like that. All other busses and tracks are routed to this buss, so it sees everything. You can then put plugs on that master buss that affect the whole mix. You then adjust those plugs to get the results you want, and then use the Export command of SONAR to export out the final two track, 16 bit CD version of the tune.
- Externally. You can also just export out the raw mix, in 24 bit WAV format, and use one or more external tools to do the pre-mastering process. Things like Audacity, Harbal, T-Racks, and so forth are often used for this style of pre-mastering. In this case, you would use the limiter/maximizer within this tool.
Obviously you could do some combination here, where you do some processing on the master buss inside the DAW and complete the process in some external tool.
Plug-ins

Plugins come in two basic flavors, VST and DX. These are just two well known interfaces that allow DAWs to incorporate bits of software written by third parties. VST plugins are put into a specific directory (or directories) and the DAW scans those directories looking for VSTs. DX plugins are registered in the Windows registry and the DAW scans the appropriate place in the registry. So a very simple VST doesn't even require an installer. You just put it in the correct place (set in the global options of SONAR) and SONAR will find it the next time it scans the directory (usually on startup if you configure that, else you can force it to scan in the same global options page.)
The DX interface is kind of being phased out and more plugin vendors are moving towards VST format, I assume because VST is not Windows specific and the VST interface has continued to be improved so that it's more competitive.
Various types of plugins use more or less CPU, according to what they do. Some are very heavy, such as convolution reverbs, and you need a fairly hefty machine if you want to use a good number of them. And such plugins tend to eat up considerably more resources as you increase the sample rate of SONAR. At 88.2K or 96K, they can really be extremely heavy. But most are fairly lightweight, and the most commonly used ones (EQ and compression) tend to be pretty efficient, else they'd be pretty useless because so many of them are required in most projects.
Room Treatment

It's always interesting that so many threads argue endlessly about what is the best speaker or pre-amp or lava lamp for your studio, when many of the people arguing have done nothing to treat their rooms. It's crucial that your room have good response, or you will both lower the quality of the input going in and make it very difficult for yourself to evaluate the sound coming back out as you mix all the stuff you've tracked.
Most of us have small rooms, and a small room is horrible from an audio standpoint, because the sound very quickly hits the walls and bounces back at us, before it has lost very much energy at all. So we are hearing not what's coming out of the speakers, but what is coming out of the speakers mixed with what is bouncing off the walls (and ceiling and floor.) Because of the nature of all wave-based phenomena, waves that mix together will either cancel each other out or reinforce each other, depending on whether they are in or out of phase with each other at a given point. The given point in this case is generally your head. When you are sitting in the chair mixing, all of the sound that is bouncing off the walls back to your head is interfering with your ability to truly hear what's coming out of the speakers.
The worst frequencies are the low ones. These have a lot of energy and the sound waves are very long (often longer than the room we are in, for those of us with small rooms), and so they can cancel out enormously, with differences between the lowest dips and highest peaks being 40dB or more. You have to deal with this or you'll never be able to really mix confidently. This is generally done using dense insulation placed in key areas of the room. Mostly it is in the corners, because the corners will bounce sound back at you very strongly.
So you generally need to cover the two corners behind the speakers from floor to ceiling. And the areas on the wall/ceiling intersections where sound would bounce back at you, i.e. between the speakers and to either side of you. And you want to cover the walls and ceiling where the sound would bounce directly off them and back at you, so to either side of you and just above you. These are the most important, but if you can do more, you generally should if you are in a smallish room.
There's plenty of info out there about this subject if you search for something like "bass Corning insulation", since Corning 703 and 705 insulation is commonly used for this purpose. It is very dense so it absorbs well.
Also keep in mind that placement of your speakers and your sitting position are just as important as treatment of the room. All rectilinear rooms tend to have their worst case problems in particular places from front to back and left to right. By placing your listening position in a least-worst case spot, you minimize the problems quite a bit to start with, so that you are getting the biggest bang for the treatment bucks you spend. The optimum position is usually about 38% of the room's length (measuring from the wall behind the speakers.) This also tends to get your speakers away from the wall in most cases as well, which is a good thing. Having the speakers jammed up against a wall creates lots of side effects, such as hyped up low frequency response. This is another area where, even in expensive rooms, you still often see the speakers flat against the wall despite a lot of other things having obviously been carefully thought out. Sometimes it's for practical reasons of course.
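For the curious, here's the simple arithmetic behind why small rooms are so bad in the lows (axial room modes) and where that 38% figure lands. The room length and speed of sound are just example numbers.

```python
# Axial room modes along one dimension fall at n * c / (2 * L); at and around
# those frequencies the response at different spots in the room swings wildly.
speed_of_sound = 343.0   # m/s
room_length = 4.0        # metres (example only)

for n in range(1, 4):
    print(f"mode {n}: {n * speed_of_sound / (2 * room_length):.0f} Hz")

# The often-quoted starting point for the listening position: about 38% of
# the room length, measured from the wall behind the speakers.
print(f"listening position: {0.38 * room_length:.2f} m from the front wall")
```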
Here are some pics from my own studio. It might not be Better Homes and Gardens, with all the bass traps (filled with 703), but for a room that size it has pretty good bass response:
http://www.charmedquark.com/Web2/TmpAudio/FinalStudioLeft.JPG
http://www.charmedquark.com/Web2/TmpAudio/FinalStudioRight.JPG

Traditional Guitar/Bass vs. Amp Simulators

With the advent of amp simulators, many of us don't have any amplifiers in our studios, and don't ever use a mic to record bass or guitar. We use a direct input box to get the signal into SONAR and then put an amp simulator on the track to provide the amp sound. There are some significant differences between the traditional way of doing things and the amp simulator way of doing it, and this can lead to a lot of confusion, because so much of what you read out there about mixing and tracking applies to the traditional scenario, and doesn't necessarily work the same in an amp simulator world.
One example is gain vs. volume envelopes. The distinction is important beyond this particular issue: gain envelopes reduce the level of the signal going into the f/x bin, so they affect the level of the signal that any plugins see, whereas a volume envelope affects the output level of the track after all processing has been done. In a traditional scenario, where you mic an amp and record it, a gain envelope will not affect the tone of the guitar/bass. But in an amp sim world, a gain envelope becomes the same as turning the volume knob on the guitar: you can adjust the level of (say) distortion after the fact by using a gain envelope to lower or raise the level of the signal going into the amp sim, just as you would with the volume knob.
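Here's a tiny sketch of that difference, using a tanh curve as a stand-in for an amp sim (it's not how any particular amp sim works, just a convenient nonlinearity): scaling the signal before the nonlinearity changes how hard it's driven, while scaling after only changes loudness.

```python
import numpy as np

def fake_amp_sim(x, drive=5.0):
    """Stand-in for an amp simulator: any saturating nonlinearity makes the point."""
    return np.tanh(drive * x)

t = np.linspace(0, 1, 44100)
guitar = 0.8 * np.sin(2 * np.pi * 110 * t)

# Gain envelope: level change BEFORE the amp sim, like rolling back the
# guitar's volume knob -- the 'amp' is driven less, so the tone cleans up.
cleaner = fake_amp_sim(0.25 * guitar)

# Volume envelope: level change AFTER the amp sim -- same distorted tone, just quieter.
quieter = 0.25 * fake_amp_sim(guitar)
```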
Another difference is in the use of compression. When you read articles on mixing from a traditional standpoint, it can be misleading, because they are talking about the amount of compression that they are putting on the already recorded track, which probably also had compression applied during tracking, either via some sort of compression pedal used by the player or an external hardware compressor before the recorder. So they may be suggesting just a few dB of compression and an attack setting of whatever, but that's in addition to the existing compression, so they don't need a lot more, and the attack is letting through some amount of an attack that was already reduced (often, but not always.)
But in the amp sim world, you can just record without any compression and apply it after the fact to fit your needs (which may change as the mix develops), but you may need to use a good bit more since you are doing all of it in one step. And you have a lot more flexibility this way as well. You can put a compressor before the amp sim (similar to the player's compression pedal) or one after it (similar to the external compressor) or both, as suits your needs. But you can adjust it at any time. As with gain envelopes, it makes a difference which side you put them on. A compressor before the sim changes the levels that reach the amp and therefore how the amp is driven. A compressor after the sim affects the output level of the track and the attack envelope, but not the amp tone itself.
You can also use much higher quality effects than would often be used by the guitarist in a set of pedals. I.e. you don't have to use the effects of the amp sim, you can use separate plugs for EQ, chorus, reverb, doubling, etc... and use the highest quality ones you have available to you. This can really help improve the quality of the result.
As mentioned above, when you are in an amp simulator world, the traditionally separate steps of composition and mixing can become more blurred. You have much more flexibility to change the sound after the fact, instead of having to make the decision up front. It's not complete flexibility though. The things that you still have to commit to are things like guitar tone/volume settings, pre-amp settings, how hard/soft you play, where on the neck you pick, what pickups you used, etc... These things have a huge influence on the sound and using an amp simulator doesn't allow you to avoid making these decisions up front. But you can change after the fact the microphone, the amp, the cabinet, the effects, distortion, etc...