I will have to read it again tomorrow, when the wine wears off and I can hold more information in my short-term memory at the same time. Lol.
I appreciate the replies. I am not arguing against your points, rather trying to find a practical application for your ideas.
Complexity. The goal is not to be complex on purpose. The goal is to express what is on our mind, and it is often very hard to turn the mass we create with our imagination into sound. I do not sit down and think, "Let me make something complex, so everybody falls out of their chair." I listen to hundreds of sounds until I find one I kind of like, and then I listen to hundreds of other sounds to find another one that matches the first. Sometimes they become one perfectly, sometimes they only partially match. I try to recreate my idea; I hear it in my head, but the right sounds are not always there, or I simply pick the wrong software synth and the right sound is stored somewhere else in my library. I know I have it, I know it is somewhere, but I cannot find it, so I use what I can get in the 2 hours I have for music before it turns 2 am, because tomorrow is another day and 7 am comes fast.
Drums. Who has the luxury of recording drums on separate channels? I do not. My drums come from Maschine (2 tracks, stereo), Beat Tweaker (2 tracks, stereo), or a Yamaha Motif. If I play the drums manually, there is no way I can record one drum on its own track and then come back and do the snare on another track. So mixing the drums has to account for the high and low frequencies together, since they live on the same track.
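Just to illustrate what "dealing with the highs and lows on the same track" means in practice: even when the kick and the hats are printed to one stereo track, the track can still be split into a low and a high band and each band processed separately, which is roughly what a multiband EQ or compressor does internally. This is a hypothetical sketch in Python with numpy (an FFT brick-wall split for clarity, not what a real plugin uses, since real crossovers are filters, not FFT cuts):

```python
import numpy as np

def split_bands(x, sr, cutoff_hz=200.0):
    """Split a mono signal into low and high bands with an FFT
    brick-wall crossover at cutoff_hz. The two bands sum back
    to the original signal exactly."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    low_spec = np.where(freqs <= cutoff_hz, spectrum, 0.0)
    low = np.fft.irfft(low_spec, n=len(x))
    high = x - low  # whatever is left above the cutoff
    return low, high

# Fake "drum track": a 60 Hz kick-ish sine plus an 8 kHz hat-ish sine.
sr = 44100
t = np.arange(sr) / sr
drums = 0.8 * np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 8000 * t)

low, high = split_bands(drums, sr, cutoff_hz=200.0)
# Now "low" (the kick energy) and "high" (the hat energy) can be
# EQ'd or compressed separately, then summed back together.
print(np.allclose(low + high, drums))  # True: the split is lossless
```

So being stuck with a stereo drum print is not fatal; a multiband tool on that one track gives back some of the separate-channel control.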
Synths. I do not understand your point about the "Mute" button. If I did not want a sound in the song, I would not put it there, or I would delete it or mute it; but if it is a good sound, how would Mute help it sound better? I use a pad, a rhythm synth (arpeggiated), and a 2nd arpeggiated synth that makes figures with the 1st. I like more than one pad, and I like them to open up their filters at different times. If I have 2, 3, 4 synth sounds playing on the same part, how can I EQ them so they do not interfere with each other? They are meant to be in the same frequency range. Now add distortion guitar to it (I do not use acoustic). Metal guitar competes with the synths for the midrange. So I have to do my best to select rhythms that create room for each other and complement each other. The secret is in what your rhythm pattern is playing. I have to make sure the guitar player learns what the arpeggiator is doing and blends with the synth. I record 2 guitar tracks and pan them left and right; this is how I was told to do it. One guitar track panned center sounds awful. Guitars 100% L and R.
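The reason the doubled, hard-panned guitars work is that the two takes are slightly different performances, so the left and right channels are decorrelated and the image gets wide; one take panned center puts the identical signal in both channels and has zero width. A small numpy sketch of that difference (noise stands in for the guitar takes; none of this is DAW code, just an illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two separately recorded mono takes of the same riff
# (random noise stands in for audio here).
take1 = rng.normal(scale=0.1, size=44100)
take2 = rng.normal(scale=0.1, size=44100)

# One take panned center: both channels identical, zero stereo width.
center = np.column_stack([take1, take1])

# Two takes panned 100% L and 100% R: decorrelated channels, wide image.
doubled = np.column_stack([take1, take2])

def channel_correlation(stereo):
    """Correlation of L vs R; 1.0 = pure mono, near 0 = very wide."""
    return np.corrcoef(stereo[:, 0], stereo[:, 1])[0, 1]

print(round(channel_correlation(center), 2))   # 1.0 -> mono, no width
print(round(channel_correlation(doubled), 2))  # near 0 -> wide image
```

A side benefit for the mix: the wide guitars leave the center of the stereo field free for vocals, kick, snare, and bass.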
Now add the vocal to it. Again: you want me to lower my 3 synths, 2 pads, and stereo guitar tracks to let the vocals through? Then why did I even bother to put so many details in? You are right, it is a tough decision: do you want all the synth sounds to be heard, the guitar to cut through, or the vocals? I have noticed that with distortion guitars, many people have no idea what hides under the guitar tracks. Guitars kind of kill all my keyboard work. But what can I do? I like heavy metal, I want to be in a rock band, and I cannot compromise and not do good keyboard work, even if nobody can hear it behind the guitars.
I am sorry if I sound like I am my own enemy. My entire songwriting style is at fault. But back to the basics.
In SONAR specifically, there are no options for different types of meters; I have my meters set to the default. So I do not let any peaks go over 0 dB. Now somebody says I can let some peaks go over 0 dB? So where is the limit, how far over?
I make sure no peaks go over 0 on the Master output. Are you saying peaks can go over 0? How far? When it peaks, it sits at +0.1 or +0.2 and so on. How far over can I let it peak?
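For what it is worth, my understanding (an assumption on my part, not something I am quoting from the SONAR docs) is that modern DAWs mix internally in 32-bit floating point, so an individual track or bus peaking at +0.1 or +0.2 dB is not destroying anything by itself; the damage happens when the master output gets converted to fixed-point (16- or 24-bit) on export, because everything above full scale gets clipped flat. A rough numpy sketch of that difference:

```python
import numpy as np

def peak_dbfs(x):
    """Peak level in dB relative to full scale (0 dBFS = sample value 1.0)."""
    return 20 * np.log10(np.max(np.abs(x)))

# A float mix bus can carry samples above 1.0 (i.e. over 0 dBFS)
# without damage; 32-bit float has enormous headroom.
mix = np.array([0.5, -0.9, 1.12, -1.02])
print(round(peak_dbfs(mix), 2))  # 0.98 -> the bus reads about +1 dB

# But converting to 16-bit integers for a WAV/CD export clips
# everything above full scale flat at the ceiling:
exported = np.clip(mix, -1.0, 1.0) * 32767
print(exported.astype(np.int16))  # the 1.12 and -1.02 are flattened to full scale
```

If that assumption holds, the practical answer would be: individual tracks and buses can run a little hot inside the mix engine, but the final master output still has to stay at or under 0 dB, because that is the point where the float mix meets the fixed-point file.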
While my entire production style is at fault for these competing frequencies, other producers keep all the complexity and the levels and get the best of both worlds. Loudness should not come at the expense of complexity. In their mixes I hear the full frequency range of the pads and synth sounds, nothing is cut down, and their tracks are still loud and full. Mine are not.