• Techniques
  • Recording with more than 44.1 kHz and 16-bit
2016/09/19 14:02:10
bewerber2
Hello,
 
I have a question regarding samplerate and dynamics:
 
If I want to burn my final product to CD and I am 100% sure that I don't want to keep other options open, what is the advantage of using more than 44.1 kHz and 16-bit (which is "CD quality") in my DAW for audio recording and editing?
 
If I understood correctly, using 24-bit in my DAW, for example, requires dithering (which causes artefacts) in order to bring it down to 16-bit CD quality. So why not just use 16-bit for recording? Could somebody explain this?
 
Thx a lot!
 
Cheers,
V.
2016/09/19 14:25:33
Kuusniemi
The basic idea is that you have more information with higher bit depths and sample rates. The more information you have, the better the original quality you start downgrading from, and you might capture things you would not get at lower quality. Reducing quality later gives you a bit more control over what information is lost; it's easier to come down in quality than to go up. That's why you should never, ever use compressed audio (like MP3) as source material when making music.
 
The big upside of high-quality recordings is that the noise floor sits lower relative to the signal, since more sonic information gets collected. Noise keeps piling up when you stack recordings, and the more noise you have, the worse your final result will sound.
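
A minimal numpy sketch of that stacking effect (the track counts, levels, and frequencies here are made up): each 16-bit track carries its own quantization error, and those errors add up as you sum more tracks.

import numpy as np

fs = 44100
t = np.arange(fs) / fs

def quantize(x, bits):
    # Snap a float signal in [-1, 1) to a signed fixed-point grid.
    step = 2.0 ** -(bits - 1)
    return np.round(x / step) * step

for n_tracks in (1, 8, 32):
    noise_sum = np.zeros_like(t)
    for k in range(n_tracks):
        sig = 0.1 * np.sin(2 * np.pi * (220 + 7 * k) * t)   # one quiet track
        noise_sum += quantize(sig, 16) - sig                # its quantization error
    rms = np.sqrt(np.mean(noise_sum ** 2))
    print(f"{n_tracks:3d} tracks: summed 16-bit quantization noise "
          f"~ {20 * np.log10(rms):.1f} dBFS")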
 
But the bottom line is, people are used to listening to MP3 files on bad headphones, so a CD-quality final result is almost always enough. The normal listener really doesn't care.
2016/09/19 15:54:49
bewerber2
Hi Kuusniemi,
 
thank you for your time and the interesting answer.
 
But the point is that I always have the feeling that my bounced *.wav files sound different from my DAW playback. That's why I asked this question. Since I am a big fan of WYSIWYG, I am always frustrated when I export a project to a *.wav: I invest a lot of time adjusting plugins to achieve something special, and then I export my project and the sample rate is converted, the plugins change their oversampling rate during the export (for some of them you can't even control this behaviour), etc., and everything sounds different in the *.wav.
 
What is your recommendation, and what do you think about recording the MASTER bus directly with a plugin like MeldaProduction's recorder or similar? What is your experience with differences between DAW output and bounced output, or are you not bothered?
 
Cheers from Munich,
V.
2016/09/19 16:53:22
Jeff Evans
Recording in 24-bit mode is definitely much better. The digital noise floor drops way down, allowing you to track and mix at a lower overall RMS level, e.g. around -20 dB. That way you will have 20 dB of headroom above and still well over 100 dB of signal-to-noise below.
 
Dithering does not introduce artifacts. I have yet to hear any artifacts from dithering. In fact it adds noise and makes things sound better. I stay in 24-bit all the way to the end and use my PSP Xenon limiter to do the dithering at the very last minute. It does it very nicely and sounds great.
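
For what it's worth, here is a minimal numpy sketch of what a dither stage does in principle (plain TPDF dither, not whatever PSP Xenon does internally): adding a little noise before the word length is reduced lets detail below one bit survive as noise instead of simply vanishing.

import numpy as np

rng = np.random.default_rng(1)
fs = 44100
t = np.arange(fs) / fs

def to_16bit(x, dither):
    # Reduce a float signal (-1..1) to the 16-bit grid, optionally with
    # TPDF dither (the sum of two uniform noises, +/-1 LSB total span).
    scale = 2.0 ** 15
    if dither:
        x = x + (rng.uniform(-0.5, 0.5, x.shape)
                 + rng.uniform(-0.5, 0.5, x.shape)) / scale
    return np.round(x * scale) / scale

# A tone whose peak is only 0.4 of one 16-bit step: without dither it
# rounds away to pure silence; with dither it survives as tone-plus-hiss.
sig = (0.4 / 2 ** 15) * np.sin(2 * np.pi * 1000 * t)

for d in (False, True):
    q = to_16bit(sig, dither=d)
    survives = np.dot(q, sig) / np.dot(sig, sig)
    print(f"dither={d}: fraction of the tone preserved ~ {survives:.2f}")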
 
Bounced wave files for me do not sound any different from listening to the session in real time either. I am not on Sonar but on Studio One, and I never hear any difference. People have claimed this before, but I don't agree with it. There must be something else going on for that to happen.
 
Higher sampling rates during recording are another question, though. In my experience they do not seem to do a whole lot. However, rendering some virtual synths at 96 kHz can sound different. You guys on Sonar now have the option to upsample in those sensitive areas, which is great. But the 24-bit depth will give you a better sound. I often record at 44.1 kHz and 24-bit if it's just a straight-ahead audio session; that way I only have to alter the bit depth for any CD burning etc.
2016/09/19 17:02:26
Kuusniemi
bewerber2 wrote:
But the point is that I always have the feeling that my bounced *.wav files sound different from my DAW playback. [...] What is your recommendation, and what do you think about recording the MASTER bus directly with a plugin like MeldaProduction's recorder or similar?


I personally have not experienced this with Sonar, though I export 48 kHz, 24-bit files and only then use an external editor to convert them down. Are you exporting at the same sample rate that you've specified for your projects?
 
If you experience odd behaviour from plugins during export, you might want to freeze the tracks (that way you're only dealing with a single audio file with all the effects etc. already baked in).
 
Another thing worth testing: try mastering in a different project. Get your stems mixed and exported, and then master them there.
2016/09/19 20:29:55
JohanSebatianGremlin
bewerber2 wrote:

But the point is that I always have the feeling that my bounced *.wav files sound different from my DAW playback. That's why I asked this question. Since I am a big fan of WYSIWYG, I am always frustrated when I export a project to a *.wav;

It might help to think of it in terms of photography. You take a 10-megapixel picture of something, and then take the exact same picture of the exact same subject with a 20-megapixel camera. To the naked eye, both unedited photos will look more or less exactly the same. The difference comes when you start editing and manipulating. Even though both photos look essentially the same, the 20-megapixel version contains much more information about what the original subject actually looked like. Therefore, when you start editing and applying processing, you end up with a much better result from the 20-megapixel source than from the 10-megapixel version, even if your end result is going to be 10 megapixels.

It is exactly the same with audio. You will be hard pressed to tell the difference between an unedited 16-bit recording and an unedited 24-bit recording of the same source. Unedited, both will sound exactly the same to all but the most talented ears (maybe 10 people on the planet could tell the difference).
 
But once you start applying processing, the more information your source contains (i.e. bits), the better your processors will be able to do their job accurately. 
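
A minimal numpy sketch of that idea, under an assumption worth flagging: modern DAWs process internally in 32/64-bit float, so this models the older situation where every intermediate result gets stored back at the file's bit depth (e.g. repeatedly bouncing to 16-bit files).

import numpy as np

rng = np.random.default_rng(2)
sig = rng.uniform(-0.5, 0.5, 44100)   # stand-in "recording"

def q16(x):
    return np.round(x * 2 ** 15) / 2 ** 15   # snap to the 16-bit grid

gains_db = [-12, +6, -3, +9]   # an arbitrary chain of "edits" (net 0 dB)
lofi, hifi = q16(sig), sig.copy()
for g in gains_db:
    lofi = q16(lofi * 10 ** (g / 20))   # re-quantized after every step
    hifi = hifi * 10 ** (g / 20)        # kept at full precision

err = lofi - hifi
print(f"error accumulated by 16-bit intermediates: "
      f"~ {20 * np.log10(np.sqrt(np.mean(err ** 2))):.1f} dBFS")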

That being said, here's a more literal way to consider it. The nature of digital audio and bit depth is such that 16-bit audio only actually uses all 16 bits when the audio level hits 0 dBFS. If your signal is below 0 dBFS, then your audio is effectively using fewer than 16 bits.

And since nasty, god-awful things happen if we exceed 0 dBFS in the digital world, it stands to reason that most of our audio is going to end up using fewer than 16 bits if we start with a 16-bit source and then mix so that only the highest peaks approach 0 dBFS.


However, if we start with a 24-bit source, we've got lots of headroom to process and mix our output to something below 0 dBFS and still end up with a result that exceeds 16 bits of resolution. This then allows us to dither the end result down to 16 bits without losing any detail that would be detectable to most ears.
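
A back-of-envelope sketch of that arithmetic, using the rule of thumb that one bit is worth about 6.02 dB: mixing with 20 dB of headroom on a 24-bit source still leaves you comfortably above 16 bits of resolution; on a 16-bit source it does not.

def effective_bits(bit_depth, peak_dbfs):
    # Bits actually exercised by audio peaking at peak_dbfs (a simplification).
    return bit_depth + peak_dbfs / 6.02   # peak_dbfs is zero or negative

for depth in (16, 24):
    for peak in (0, -6, -20):
        print(f"{depth}-bit source peaking at {peak:4d} dBFS: "
              f"~{effective_bits(depth, peak):.1f} bits in use")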


As for your impression that your 24-bit audio always sounds different after being reduced to 16 bits, I can only speculate. Are you listening to the resulting 16-bit output on the exact same system, with all of the exact same processing in the signal path? Or are you creating the 16-bit output and then listening to it on other systems?
 
And if the system and processing in between are exactly the same (i.e. same speakers and same everything between the software and the speakers), have you done any true blind A/B listening tests where you try to identify which version you're listening to without otherwise knowing? If not, I would strongly recommend you do so. You may be shocked at how much our preconceived notions affect what we think we're hearing out of the speakers.
2016/09/20 03:27:03
Kalle Rantaaho
The most common reason for "the exported track sounding different from the project" is auditioning the export through different software or gear (like WMP using the motherboard sound chip instead of the audio interface used with SONAR) and at a different volume. You can only compare the exported wav reliably to the project by importing the wav back into SONAR and A/B-ing them there. Then again, if the project converts the import back to 24/48, you wouldn't believe what you hear anyway :o)
 
If the best and most successful producers in the world don't hesitate to convert from 24-bit and 48/88.2/96 kHz down to 44.1/16 using dithering, why would you make a problem of it? As mentioned above, it's the greater dynamic range that makes 24-bit desirable.
2016/09/20 09:28:03
Guitarhacker
I too record at 44.1 kHz/24-bit. Most people cannot hear the difference in quality when you go above that standard level.
 
CDs are 44.1/16, so recording at 24 bits of depth, as has been mentioned, gives you more headroom to work with during editing.
 
Yes, I also agree that what you hear in the DAW should sound like the exported wave. The only difference should be the player involved; players can and do sound a bit different depending on what you are using for playback. Many players let you set EQ and other FX and come with a default setup that is likely different from the settings in your DAW. To test this, simply import the wave back into a track in the project. After checking that the levels are correct for the wave in question, play it back and hit the SOLO button on the imported wave track to compare it against the project. Keep the imported wave MUTED the rest of the time so it's not adding to the mix. It should sound the same at this point, proving that the wave is the same as what you hear in the DAW.
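
If you'd rather not trust your ears alone, the same comparison can be scripted as a null test. A minimal sketch, assuming the Python soundfile package and two hypothetical file names (the project bounce and a capture of the DAW's real-time output); if the two really are identical, subtracting one from the other should leave pure silence.

import numpy as np
import soundfile as sf

bounce, fs1 = sf.read("bounce.wav")                 # hypothetical file names
realtime, fs2 = sf.read("realtime_capture.wav")
assert fs1 == fs2, "sample rates differ - compare at the project rate"

n = min(len(bounce), len(realtime))     # trim to the shorter file
diff = bounce[:n] - realtime[:n]        # phase-invert-and-sum, in effect
residual = np.sqrt(np.mean(diff ** 2))
print("null-test residual:",
      "silence (perfect null)" if residual == 0
      else f"{20 * np.log10(residual):.1f} dBFS")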
2016/09/20 13:00:38
drewfx1
The relatively short technical answer:
 
More bits = lower noise floor. Recording at 24-bit allows you to leave lots of headroom (i.e. not let your meters get anywhere near clipping) without having to worry about quantization error at all, because it's going to be buried under other noise in the room and/or the analog electronics. In the real world you can generally record at 16-bit without quantization error being a problem, but as there's 48 dB less room to spare, you have to set record levels much more carefully to keep your signal between clipping and the quantization level. And since Sonar and all other modern DAWs do their processing at a higher bit depth internally, and CPU power and hard drive space are plentiful, there's no real advantage to using 16-bit in the modern world.
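
The arithmetic behind that 48 dB figure, as a quick sketch (an ideal N-bit quantizer's dynamic range is roughly 6.02*N + 1.76 dB):

for bits in (16, 24):
    print(f"{bits}-bit: ~{6.02 * bits + 1.76:.1f} dB of dynamic range")

# Tracking with 20 dB of headroom:
for bits in (16, 24):
    spare = 6.02 * bits + 1.76 - 20
    print(f"{bits}-bit, peaks at -20 dBFS: "
          f"~{spare:.0f} dB above the quantization floor")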
 
Higher sample rate = less latency, if your CPU has the horsepower to spare to accommodate it. It also allows people to record higher frequencies that no one can hear in the real world - except for some younger people who can hear high-frequency test tones played back at very high volumes. So if you want to record high-frequency test tones...
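
The latency point is plain arithmetic: a buffer holding a fixed number of samples drains faster at a higher sample rate. A quick sketch, with a made-up (but typical) buffer size:

buffer_samples = 256   # a typical ASIO buffer setting
for fs in (44100, 96000):
    print(f"{fs:6d} Hz: {buffer_samples}-sample buffer "
          f"= {1000 * buffer_samples / fs:.1f} ms one way")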
2016/09/20 13:19:31
drewfx1
JohanSebatianGremlin wrote:
Even though both photos look essentially the same, the 20-megapixel version contains much more information about what the original subject actually looked like. [...] But once you start applying processing, the more information your source contains (i.e. bits), the better your processors will be able to do their job accurately.



From a recording standpoint, if the noise floor of the audio being recorded is more than a little louder than the quantization level, and there are no frequencies above ~1/2 the sampling rate present, then the digital audio already contains all of the information available, and using a higher bit depth or sampling rate achieves nothing.
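
A minimal numpy sketch of that point, with a made-up -70 dBFS noise floor standing in for room/preamp noise: once that noise is part of the source, the 16-bit and 24-bit captures differ by far less than the noise that was already there.

import numpy as np

rng = np.random.default_rng(3)
fs = 44100
t = np.arange(fs) / fs
source = (0.25 * np.sin(2 * np.pi * 440 * t)
          + 10 ** (-70 / 20) * rng.standard_normal(fs))   # tone + "room noise"

def quantize(x, bits):
    step = 2.0 ** -(bits - 1)
    return np.round(x / step) * step

q16, q24 = quantize(source, 16), quantize(source, 24)
diff = q16 - q24
print("noise floor in the source: -70.0 dBFS")
print(f"16-bit vs 24-bit capture:  "
      f"{20 * np.log10(np.sqrt(np.mean(diff ** 2))):.1f} dBFS difference")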