SONAR
BIT DEPTH QUESTION
2018/01/14 22:43:26
JohnEgan
Good Day, question about bit depth.
 
Going back to some old recordings I did when I was just starting out and more naive than I still am, I had set the record bit depth in Sonar (Preferences > File > Audio Data) to 32, I guess thinking at the time this was better than 24-bit, which it may be? Since my audio interface is only capable of 24-bit record/play, are these files somehow converted to 32-bit in Sonar, or are they actually only 24-bit? Looking in Project > Audio Files, the listing does say 32-bit. Nowadays I've been setting record bit depth to 24, to match my interface's capability and avoid any confusion on my part (I keep render bit depth at 32). However, going back to edit these older files, should I change the record bit depth preference back to 32, as shown in the control panel under sample rate, to keep this consistent with how they were recorded at the time? More to the point, when bouncing tracks or exporting to a stereo master, should I maintain the 32-bit setting as it was originally set when recorded? And would I cause any noise or other issues going down from 32 to 24 bits without dithering? I.e., should I dither if I bounce/export to 24-bit, and then dither again going down to the final 16-bit MP3? (My understanding is that dithering should only be done once?)
I hope this makes sense and someone can provide me some "feedback". 
 
Cheers  
2018/01/15 00:39:09
BenMMusTech
I use nothing but 64-bit FP files in my new compositions, and I render to 64-bit FP even with older compositions that started at 24-bit. I even bounce down to 64-bit FP and use these master files for uploading to SoundCloud and within my video editing software. The reason is simple: to get the best out of analog emulation, you need the highest bit depth, because otherwise the second and third harmonic distortion you add to your mix and master will get lost in the noise floor (I'm still trying to understand this, mind you), and what's left of the harmonic distortion can then sound harsh and brittle. This is one of the reasons analogue audio engineers believe there is no value in emulation software, or indeed in higher bit depths. There's also the fact that some of them still want to make accessing a studio and their expertise the only way to make music. Unfortunately, there is some merit in this, because Justin Bieber, anyone... If only we could get new musicians and engineers to want to learn theory again.

There are other reasons why you record at 64-bit FP: other types of effects, though these are more reliant on upsampling (think time-based effects, delays and verbs). The higher bit depth also protects the sound from disappearing into the noise floor.

Finally, as to your question about the difference between a 64-bit FP and a 24-bit audio file: size is the obvious one. What's really happening is that the 24-bit data is wrapped in a larger file (integer numbers, off the top of my head... anyone?). But this also means fewer glitches within both the unprocessed and processed file. So again, if you use time-based effects and analog emulation, there is an upside to the larger files. Now there is a caveat: there is no point in 64-bit FP if you intend to use outboard gear after tracking, i.e. for mixing and mastering, or say you were recording an acoustic act (not classical, mind you, but an acoustic act without any processing). The reason 64-bit FP is useless for hybrid mixing and mastering is that you can't take advantage of the higher dynamic range: converters still hover at around 119 dB of dynamic range, whereas 64-bit FP is vastly higher than that, off the top of my head. The higher dynamic range allows an in-the-box digital mix to be mixed like an analogue mix, basically.

Hope that helps
2018/01/15 14:56:50
bitflipper
You are correct, audio interfaces are incapable of generating 32-bit floating-point data. Truth is, they can really handle only around 20 bits of resolution, beyond which you're looking at stuff that's happening well below the analog noise floor. When data is "converted" to a higher resolution, what's really happening is that a bunch of zeroes are tacked on to make a longer word. SONAR normally converts everything to 32 bits (by appending zeroes) on import/recording, so that there is greater resolution available internally to absorb the rounding errors that naturally occur when sample values are multiplied together.
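A small sketch of what that "tacking on zeroes" amounts to, in plain Python (the sample value is made up):

    # A 24-bit integer sample re-expressed as a float is exactly the same value;
    # float32 has a 24-bit mantissa, so any 24-bit sample fits without loss.
    # (Python floats are 64-bit, but the same holds for 32-bit floats.)
    sample_24bit = -4_194_304              # arbitrary sample within +/- 2**23

    as_float = sample_24bit / 2**23        # normalize into the -1.0..1.0 float range
    back = round(as_float * 2**23)         # convert back to a 24-bit integer

    assert back == sample_24bit            # round trip is lossless
    print(as_float)                        # -0.5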
 
You should experience no loss of quality editing or converting your 32-bit files. That's exactly why we use such crazy-high resolution in the first place: so we can work with plenty of precision to spare, then comfortably reduce it later and still have acceptable quality.
2018/01/15 15:53:24
hbarton
Interesting subject, and I am not an expert, but doesn't "aliasing" also come into play at lower bit counts? The way I understand aliasing, it is the "masking" of content that is not captured (recorded) due to the rate at which the content is being captured (typically, the lower the count, the harder it is to record/reproduce higher-frequency content/harmonics).
 
If that is the case, it would also seem to be true that if you downsample content, you will not be able to recreate a true audio representation of the original content (and even if you then upsample it again to the higher rate, it will not recreate the lost content).
 
So sampling at lower bit depths may keep you above the noise floor, but you may not capture all those higher frequencies and harmonics?
 
Again, no expert and just curious.
 
Take care,
 h
2018/01/15 16:24:45
GaryMedia
Mr. hbarton,
 
Bit counts, or bit depths in your parlance (also known as word size), are 16-bit, 24-bit, etc., and are what Mr. bitflipper was referring to as he explained resolution and the roughly 20-bit analog noise floor.
 
Aliasing is a side effect of sampling rate, 44.1kHz, 48kHz, 88.2kHz, 96kHz, etc., and is managed with various filtering strategies.  
 
Your comments conflate the two concepts, and they are substantially separate. The high-frequency content of a wave file at a 96 kHz sampling rate isn't going to be compromised by its being recorded at 16-bit depth.
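One can sanity-check that separation with a quick numpy sketch (the tone frequency and levels are arbitrary): quantizing a 20 kHz tone sampled at 96 kHz down to 16 bits leaves the tone intact; it only adds a quantization noise floor.

    import numpy as np

    sr = 96_000
    t = np.arange(sr) / sr
    tone = np.sin(2 * np.pi * 20_000 * t)    # 20 kHz tone, well under Nyquist (48 kHz)

    q16 = np.round(tone * 32767) / 32767     # snap samples to 16-bit levels

    mag = np.abs(np.fft.rfft(q16)) / len(q16) * 2
    freqs = np.fft.rfftfreq(len(q16), 1 / sr)
    print(freqs[np.argmax(mag)])             # 20000.0 -> the high frequency survived
    print(20 * np.log10(mag.max()))          # ~0 dBFS fundamental, unchanged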
 
Also, to bring Mr. BenMMusTech into the conversation. The 2nd and 3rd harmonics are the desirable harmonics. They're the ones that make for that analog warmth that is so beloved by our ears. Their proportion and level-sensitive behavior are what make tape machines so nice to hear, and why we emulate them on purpose, and those harmonics are definitely not buried in the noise floor.  The analog noise floor definitely isn't getting better than what a 24-bit converter will give us because we live in a world that is far above absolute zero Kelvin.  Moreover, as soon as there's a microphone and a preamp in the chain, abandon all hope of a silent analog noise floor.
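For what it's worth, the level of those harmonics under a typical saturation curve is easy to eyeball with numpy (the biased tanh curve here is an arbitrary stand-in for a tape/console emulation, not any particular plugin):

    import numpy as np

    sr = 48_000
    x = np.sin(2 * np.pi * 1000 * np.arange(sr) / sr)   # clean 1 kHz test tone

    # Biased tanh: the asymmetry generates even (2nd) as well as odd (3rd)
    # harmonics, loosely like a tape or tube stage.
    drive, bias = 2.0, 0.3
    y = np.tanh(drive * x + bias) - np.tanh(bias)

    db = 20 * np.log10(np.abs(np.fft.rfft(y)) / len(y) * 2 + 1e-20)
    for h in (2, 3, 4, 5):
        print(f"harmonic {h}: {db[h * 1000] - db[1000]:6.1f} dB rel. to fundamental")
    # These land a few tens of dB down -- far above a -144 dBFS 24-bit floor.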
 
2018/01/15 18:25:18
Cactus Music
https://www.izotope.com/en/support/knowledge-base/differences-between-32-bit-and-64-bit-audio-software-plug-ins.html
 
I thought this explains it pretty well too: https://theproaudiofiles.com/6-facts-of-sample-rate-and-bit-depth/
 
4. The Audio File’s Bit Depth

The audio file's bit depth is often misunderstood and misinterpreted. The audio file on our computer, the same one created by our DAW, is simply a container for the information the ADC already created. So the data already exists in complete form before it gets into the computer. That's the key thing to take from this.
When we select an audio file bit depth to record at in the DAW, we're selecting the size of the container or "bucket" that we want that information to go into. Most ADCs will be capturing your audio at 24-bit regardless of the selection you make in the DAW.
So what happens when I select a 32-bit or 64-bit float file? Your audio is still 24 bits until it is further processed. In most cases it'll pass through a plugin effect at 32-bit float, or even a 32-bit mixbus. (Some have 64-bit float.) But on the topic of capturing audio, you aren't changing anything, or making it sound better, by simply putting it into a 32-bit or 64-bit float container. It's the same information, with just a bunch of 0's tagged on, waiting for something to do.
So why is a 32-bit or 64-bit float file container good? With a 24-bit file, we have a finite number of binary places (in this case 24) to capture the information between 0 and 1 that our ADC delivers. In a float file, the point can move or "float" to represent different values. Not only that, but we also have an extra 8 bits of resolution, or headroom, that wasn't there before. This allows us to do some pretty impressive things in terms of processing and computing.
We can essentially give our audio more resolution than it originally had, simply by processing it and interpolating new points in the dynamic spectrum. We can also dynamically and non-destructively alter our audio as long as it remains in the digital realm. We can even prevent further clipping of the captured audio. That's why you hear it's so important to keep the same bit depth or higher throughout the entire production process.
So yeah, if you get 16-bit files from your friend to mix down, work at 24-bit, or better yet 32-bit float. You aren't making it any worse, only better. Whether you are producing subtle classical recordings or mixing a new breed of square-wave EDM, bit depth is just as important at representing your dynamics, even if perceivably there aren't any.
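A minimal numpy sketch of the float-headroom point from the quoted article (the signal and gain values are made up for illustration):

    import numpy as np

    # A "too hot" mix moment: peaks at 1.5x full scale after processing.
    x = 1.5 * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)

    # Fixed-point storage clips anything over full scale (24-bit simulated here).
    FS = 2**23 - 1
    fixed = np.clip(np.round(x * FS), -FS - 1, FS) / FS

    # Float storage keeps the overs; a -6 dB trim afterwards recovers them intact.
    restored = x * 0.5

    print(fixed.max())      # ~1.0 -> the waveform tops were flattened off
    print(restored.max())   # 0.75 -> undistorted, just quieter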
2018/01/15 18:36:21
JohnEgan
Thanks. I guess all I'm really asking is whether I should keep the same record bit depth setting in Preferences that I had when I recorded (record bit depth 32) from mix to master (until the final export to 16-bit MP3), or whether it matters at all, since the files only ever had 24 bits of actual resolution/accuracy anyway. (I.e., I now normally leave record bit depth at 24, the same as my audio interface is capable of, but what effect does leaving it at 24 have when I load older project files where I had it set to 32?) Also, is there significant benefit to rendering at 32-bit rather than 24-bit? I think this is somewhat answered by BF: 24-bit audio is converted to 32-bit by Sonar for internal processing regardless of whether the preference is set to record at 24-bit.
 
Otherwise, there's no doubt my 96k/24-bit wave files sound better on my studio monitors than the processed 44.1k/16-bit MP3 files sound on a BT speaker or in the car. And if you have the processing power and memory that are common these days, why not use the best possible settings to try to best simulate the analog world.
 
However, that aside, as I understand bit depth now (possibly wrongly): since my audio interface is "only" capable of 24-bit A/D conversion, and you mention a 119 dB analog dynamic range, the A/D conversion at a point in time places the analog level somewhere among 16.7 million possible steps across that 119 dB range, i.e. roughly 0.000007 dB of accuracy between the quietest and loudest levels (32-bit would be ~4 billion steps, if you had an interface capable of it). So any two sample levels around 80 dB would be represented to within approximately +/- 0.0000035 dB of each other. So with the DAW record bit depth set to 32, the analog signal level having already been digitized at 24-bit by the interface, the computer would only be spending more CPU power to store the same 24-bit step value already established by the interface; it would record the same number using something like 250 times finer resolution to define it. (Since it's already digitized, it's not the original analog signal being digitized at 32-bit, so there's really no accuracy or rate-of-change (transient detection) benefit being gained at this step?)
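For reference, the usual way to relate bit depth to dynamic range is per bit rather than by slicing the range into equal dB steps (quantization steps are linear in amplitude, so their dB spacing varies). A quick sketch in plain Python:

    import math

    # Rule of thumb: each bit adds ~6.02 dB, i.e. 20*log10(2) per doubling of steps.
    for bits in (16, 20, 24, 32):
        steps = 2 ** bits                      # number of quantization levels
        dr_db = 20 * math.log10(steps)         # largest-to-smallest level ratio in dB
        print(f"{bits}-bit: {steps:>13,} steps, ~{dr_db:.1f} dB")
    # 16-bit ~96.3 dB, 20-bit ~120.4 dB, 24-bit ~144.5 dB, 32-bit ~192.7 dB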
 
Where I assume (but am not certain) the benefit comes in is in creating 32- or 64-bit files and having the headroom (> 0 dBFS), and in rendering FX or VIs at the higher accuracy those bit depths allow. This may also become more relevant at higher sampling rates (96k or higher), where more level variation is captured per unit of time, which may make small nuances like harmonics more apparent when converted back to the analog world, and may end up sounding better to our analog ears. More often, though, the delivered version does not sound as good, since it can't reproduce the smaller nuances created with effects or VIs that you hear, or sense, when listening to your 96k/64- or 32-bit (actually "only" 24-bit from the interface) mix or master wave files on studio monitors, as opposed to your processed 44.1k/16-bit production in the car or elsewhere. I find the final "bit rate" used for the MP3 significantly affects the final sound quality, possibly more so than a lot of the other things along the way.
So in that respect, it may make more sense to record, mix, and master with the same settings your final production will use, so you're dealing with the same sound-quality challenges from start to finish?
 
Cheers, Thanks for replies
2018/01/15 18:59:37
JohnEgan
Cactus Music
I thought this explains it pretty well too: https://theproaudiofiles.com/6-facts-of-sample-rate-and-bit-depth/

Thanks for info, and that link.
 
Cheers 
2018/01/15 19:02:06
azslow3
I will try to be short...
 
16- and 24-bit refer to fixed point number representation.
32- and 64-bit refer to floating point number representation.
Let's say you only have 3 digits with fixed point; in the range 0-1 you can use 0.000, 0.001, 0.002, ... 0.999.
Now let's say you have floating point with 3 digits + 1 digit for a power of 10. In the same range (0-1), you can use:
0.000, 0.001 * 10^-9 = 10^-12, 0.002 * 10^-9 = 2 * 10^-12, ... 0.999 * 10^0 = 0.999.
In the second case we always have 3 digits of precision, independent of how small the number is. In the first case, we lose precision for smaller numbers; e.g. we cannot represent anything between 0.001 and 0.002.
But we need that extra "power of 10" digit.
 
A 32-bit floating point number has 24 bits of precision and 8 bits for the power (of 2). A 24-bit fixed point number also has 24 bits of precision, but the power is always zero.
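That 24+8 split is literally visible in the IEEE 754 float32 layout; a quick sketch in Python (the value is arbitrary):

    import struct

    value = -0.15625
    bits = struct.unpack(">I", struct.pack(">f", value))[0]   # raw float32 bits

    sign     = bits >> 31              # 1 bit
    exponent = (bits >> 23) & 0xFF     # 8 bits: the "power of 2" (biased by 127)
    mantissa = bits & 0x7FFFFF         # 23 stored bits (+1 implicit = 24-bit precision)

    rebuilt = (-1) ** sign * (1 + mantissa / 2**23) * 2.0 ** (exponent - 127)
    print(rebuilt)                     # -0.15625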
 
AD/DA converters work in "fixed point" mode and have ~20 bits of precision. So by recording into 24-bit fixed point you do not lose anything; in fact the bottom ~3-4 bits you save are garbage. Saving more bits, or using 32-bit floating point, does not improve the quality; it just saves more garbage bits.
 
With a chain of analog equipment, try lowering the volume by 50 dB and then amplifying by +50 dB. You can easily notice the difference compared to the original signal. Because the noise level is absolute, if you push the signal down toward the noise floor and then amplify it, you amplify the noise as well.
The same happens (but with "digital noise") if you process the signal with fixed point numbers. The digital noise floor is (again, absolute) -96 dB for 16 bits and -144 dB for 24 bits. At -50 dB with 16 bits, you will "bit crush" the sound.
 
Now try the same experiment within Sonar: lower the output of a track by -50 dB and then "amplify it" back (with several buses at +6 dB). You will notice NO signal degradation. Why? Because DAWs use floating point numbers internally, so when you lower by -50 dB you still preserve the original precision (you are just changing the "power of 2").
 
But if, after lowering by -50 dB, you render the track to 16/24-bit fixed point, that precision is LOST. To avoid this, use 32-bit files for intermediate (processing) formats.
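That -50 dB experiment is easy to reproduce offline; a numpy sketch (gain values as in the example above):

    import numpy as np

    sr = 48_000
    x = np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)

    gain = 10 ** (-50 / 20)                      # -50 dB as a linear factor

    # Float path: attenuate then amplify. Only the exponent moves; precision kept.
    float_path = (x * gain) / gain
    print(np.abs(float_path - x).max())          # ~1e-7, float32 epsilon territory

    # Fixed path: attenuate, render to 16-bit, amplify. The quiet signal was
    # quantized near the 16-bit floor, so the error comes back up 50 dB with it.
    stored = np.round(x * gain * 32767) / 32767
    fixed_path = stored / gain
    print(np.abs(fixed_path - x).max())          # ~5e-3 -> clearly "bit crushed"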
 
The requirement to use 64 bits instead of 32 (still floating point) comes from the fact that many plug-ins are written by musicians and not mathematicians. With complicated algorithms, keeping the accumulated calculation error under one bit is an enormous job. From the beginning, PCs preferred to simply calculate with more bits rather than deal with the problem (the x87 co-processor could calculate in an 80-bit format, while the result was normally converted to 64 or even 32 bits). But that primarily makes sense for calculations, not for exporting. So 64-bit export is an almost paranoid "safe side" choice (even for a top studio).
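The "just calculate with more bits" point is easy to see with a toy accumulation, loosely like a plug-in's internal running state (numpy, values arbitrary):

    import numpy as np

    # Add 0.1 a million times; the exact total is 100000.
    t32, t64 = np.float32(0.0), np.float64(0.0)
    for _ in range(1_000_000):
        t32 += np.float32(0.1)    # error compounds with every 24-bit-mantissa add
        t64 += np.float64(0.1)    # 53-bit mantissa: error stays ~9 digits down

    print(t32)   # visibly off from 100000
    print(t64)   # ~100000.000001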
 
How can it be that people "clearly hear" badly dithered 24-bit files (it's easy to google such claims), while the equipment completely ignores everything beyond 20-21 bits? Simple: they DIGITALLY amplify the fixed point signal, so they "bit crush" it the way I described above. If the original signal is not yet mastered (not maximized), there is "free space" for such amplification (especially in almost-silent passages), exactly where the precision is below 24 bits. So it has nothing to do with "golden ears"; it is pure cheating. But that same "cheating" is what happens during the next processing pass of not-yet-finalized files, if they are saved in a 24-bit format (they do not have true 24-bit precision).
 
For the final 16-bit format, correct dithering is a real concern. At peak levels the theoretical SNR is 96 dB, so the quantization error is already technically reproducible. In reality the precision falls off fast in quiet passages, and the "noise"/"distortion" from incorrectly dithered material becomes audible even to noobs, without any "cheating".
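For that final render, "correct dithering" usually means adding TPDF noise about one LSB wide before rounding; a minimal numpy sketch (the quiet fade signal is made up):

    import numpy as np

    rng = np.random.default_rng(0)

    def to_16bit(x, dither=True):
        """Quantize float samples (-1..1) to 16-bit, optionally with TPDF dither."""
        lsb = 1.0 / 32767
        if dither:
            # Triangular-PDF noise, +/-1 LSB wide: it decorrelates the rounding
            # error, turning level-dependent distortion into a benign steady hiss.
            x = x + (rng.random(x.shape) - rng.random(x.shape)) * lsb
        return np.clip(np.round(x * 32767), -32768, 32767).astype(np.int16)

    # A very quiet tone, ~3 LSB peak: undithered rounding turns it into hard
    # stair-steps (distortion); dithered rounding preserves its shape under noise.
    quiet = 3 / 32767 * np.sin(2 * np.pi * 100 * np.arange(48000) / 48000)
    print(np.unique(to_16bit(quiet, dither=False)))   # just a few hard levels
    print(np.unique(to_16bit(quiet)).size)            # more levels: error spread as noise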