JohnEgan
Max Output Level: -80 dBFS
- Total Posts : 543
- Joined: 2014/10/21 10:03:57
- Location: Ottawa, Ontario, Canada
- Status: offline
BIT DEPTH QUESTION
Good day, a question about bit depth. I'm going back to some old recordings I did when I was just starting out and more naive than I still am. Back then I had set the record bit depth in Sonar's Preferences/File/Audio Data to 32 (I guess thinking at the time this was better than 24-bit, which it may be?). Since my audio interface is only capable of 24-bit record/playback, are these files somehow converted to 32-bit in Sonar, or are they actually only 24-bit? Looking in Project/Audio Files, the listing does say 32-bit. Nowadays I've been setting the record bit depth to 24, to match my audio interface's capability and avoid any confusion on my part (I keep render bit depth at 32). However, when going back to edit these older files, should I be changing the record bit depth preference back to 32, as shown in the control panel under sample rate, to keep this consistent with how they were recorded at that time? Or, more to the point, when bouncing tracks or exporting to a stereo master, should I be maintaining the 32-bit setting as it was set originally when recorded? And would I be causing any noise or other issues going down from 32 to 24 bits without dithering? i.e. should I be dithering if I bounce/export to 24-bit, and dithering again going down to the final 16-bit MP3? (My understanding is that dithering should only be done once?) I hope this makes sense and someone can provide me some "feedback". Cheers
John Egan Sonar Platinum (2017-10),RME-UFX, PC-CPU - i7-5820, 3.3 GHz, 6 core, ASUS X99-AII, 16GB ram, GTX 960, 500 GB SSD, 2TB HDD x 2, Win7 Pro x64, O8N2 Advanced, Melodyne Studio,.... (2 cats :(, in the yard).
|
BenMMusTech
Max Output Level: -49 dBFS
- Total Posts : 2606
- Joined: 2011/05/23 16:59:57
- Location: Warragul, Victoria-Australia
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/15 00:39:09
(permalink)
I use nothing but 64-bit FP files in my new compositions, and I render to 64-bit FP all older compositions that were started at 24-bit. I even bounce down to 64-bit FP and use these master files for uploading to SoundCloud and within my video editing software. The reason is simple: to get the best out of analogue emulation you need the highest bit depth, because otherwise the second- and third-harmonic distortion you add to your mix and master will get lost in the noise floor (I'm still trying to understand this, mind you), and what's left of that harmonic distortion can then also sound harsh and brittle. This is one of the reasons analogue audio engineers believe there is no value in emulation software, or indeed in higher bit depths. There's also the fact that some of those engineers still want to make accessing a studio and their expertise the only way to make music. Unfortunately, there is some merit in this, because Justin Bieber, anyone... If only we could get new musicians and engineers to want to learn theory again.
There are other reasons why you would record at 64-bit FP, involving other types of effects, but those rely more on upsampling (think time-based effects, delays and verbs). Still, the higher bit depth also protects the sound from disappearing into the noise floor.
Finally, as to your question about the difference between a 64-bit FP and a 24-bit audio file: size is the obvious one. What's really happening is that the 24-bit file is wrapped in a larger file... integer numbers, off the top of my head... anyone? But this also means fewer glitches within both the unprocessed and the processed file. So again, if you use time-based effects and analogue emulation, there is an upside to the larger files. Now there is a caveat: there is no point in 64-bit FP if you intend to use outboard gear after tracking, that is, for mixing and mastering, or if, say, you were recording an acoustic act (not classical, mind you, but an acoustic act without any processing). The reason 64-bit FP is useless for hybrid mixing and mastering is that you can't take advantage of the higher dynamic range; converters still hover at around 119 dB of dynamic range, whereas 64-bit FP is really high... off the top of my head. The higher dynamic range basically allows a digital in-the-box mix to be mixed like an analogue mix.
Hope that helps
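To put a rough number on the file-size point: a minimal Python sketch (the 96 kHz stereo, one-minute figures are illustrative assumptions, not from the post):

```python
# Uncompressed audio data rate: sample_rate * channels * bytes_per_sample.
fs, channels, seconds = 96_000, 2, 60

for name, bytes_per_sample in (("24-bit PCM", 3), ("32-bit float", 4), ("64-bit float", 8)):
    mb_per_min = fs * channels * bytes_per_sample * seconds / 1e6
    print(f"{name}: ~{mb_per_min:.0f} MB per stereo minute")
# 24-bit PCM: ~35 MB, 32-bit float: ~46 MB, 64-bit float: ~92 MB
```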
|
bitflipper
01100010 01101001 01110100 01100110 01101100 01101
- Total Posts : 26036
- Joined: 2006/09/17 11:23:23
- Location: Everett, WA USA
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/15 14:56:50
(permalink)
You are correct, audio interfaces are incapable of generating 32-bit floating-point data. Truth is, they can really handle only around 20 bits of resolution, beyond which you're looking at stuff that's happening well below the analog noise floor. When data is "converted" to a higher resolution, what's really happening is a bunch of zeroes are being tacked on to make a longer word. SONAR normally converts everything to 32 bits (by appending zeroes) on import/recording, so that there is greater resolution available internally to sink rounding errors that naturally occur when irrational numbers are multiplied together. You should experience no loss of quality by editing/converting your 32-bit files. That's exactly why we use such crazy high resolution in the first place, to make it possible to work at a high enough resolution that we can comfortably reduce that resolution later and still have acceptable quality.
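To illustrate the "tacking on zeroes" idea, here is a minimal NumPy sketch (my own illustration, assuming the usual normalization to +/-1.0 full scale) showing that moving 24-bit integer samples into a 32-bit float container and back is lossless, because every 24-bit code fits exactly in float32's 24-bit mantissa:

```python
import numpy as np

samples_24 = np.arange(-(2**23), 2**23, 997, dtype=np.int64)   # a sweep of 24-bit sample codes
as_float32 = (samples_24 / 2**23).astype(np.float32)           # normalized 32-bit float "container"
back_to_24 = np.round(as_float32.astype(np.float64) * 2**23).astype(np.int64)

# The round trip is bit-transparent: the format change itself loses nothing.
print(np.array_equal(samples_24, back_to_24))   # True
```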
All else is in doubt, so this is the truth I cling to. My Stuff
|
hbarton
Max Output Level: -89 dBFS
- Total Posts : 61
- Joined: 2015/01/18 23:30:35
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/15 15:53:24
(permalink)
Interesting subject, and I am not an expert, but doesn't "aliasing" also come into play at lower bit counts? The way I understand aliasing is that it is the "masking" of content that is not captured (recorded) due to the rate at which the content is being captured (typically, the lower the count, the harder it is to record/reproduce higher-frequency content/harmonics). If that is the case, it would also seem to be true that if you downsample content, you will not be able to recreate a true audio representation of the original content (and even if you then upsample it again to the higher rate, it will not recreate the lost content). So sampling at lower bit depths may keep you above the noise floor, but you may not capture all those higher frequencies and harmonics? Again, no expert and just curious. Take care, h
|
GaryMedia
Max Output Level: -86 dBFS
- Total Posts : 217
- Joined: 2003/11/05 23:04:20
- Location: Cary, NC
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/15 16:24:45
(permalink)
☄ Helpful by tlw 2018/01/16 18:35:23
Mr. hbarton: bit counts, or bit depths in your parlance (also known as word size), are the 16-bit, 24-bit, etc. figures, and that is what Mr. bitflipper was referring to as he explained resolution and the roughly 20-bit analog noise floor. Aliasing is a side effect of the sampling rate (44.1kHz, 48kHz, 88.2kHz, 96kHz, etc.) and is managed with various filtering strategies. Your comments conflate the two concepts, and they are substantially separate. The high-frequency content of a wave file at a 96kHz sampling rate isn't going to be compromised by its being done at 16-bit depth.
Also, to bring Mr. BenMMusTech into the conversation: the 2nd and 3rd harmonics are the desirable harmonics. They're the ones that make for that analog warmth that is so beloved by our ears. Their proportion and level-sensitive behavior are what make tape machines so nice to hear, and why we emulate them on purpose, and those harmonics are definitely not buried in the noise floor. The analog noise floor definitely isn't getting better than what a 24-bit converter will give us, because we live in a world that is far above absolute zero Kelvin. Moreover, as soon as there's a microphone and a preamp in the chain, abandon all hope of a silent analog noise floor.
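Since the two ideas get mixed up so often, a small NumPy sketch of my own (the 30 kHz test tone is just an illustrative choice) showing that aliasing is purely a sampling-rate effect, with no bit depth involved at all:

```python
import numpy as np

fs = 44_100
n = np.arange(441)                                   # 10 ms worth of sample instants
f_in = 30_000                                        # a tone above Nyquist (22,050 Hz)

sampled = np.sin(2 * np.pi * f_in * n / fs)          # what an unfiltered ADC would store
folded = -np.sin(2 * np.pi * (fs - f_in) * n / fs)   # a 14,100 Hz tone, phase-inverted

# The two sample sequences are identical: the 30 kHz tone has "aliased" down to 14.1 kHz,
# which is why anti-alias filtering is tied to sample rate, not to word length.
print(np.allclose(sampled, folded))                  # True
```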
CbB Win10 | Mac Pro 12-core 3.33GHz/48GB | TCL 55" 4K UHD | 480GB SSD | 6TB HDD RAID-5 array| 1.5TB SSD RAID-0 array | Midas M32 | 2x Audient ASP800 | UAD-2 Duo PCIe | Adam A7X. http://www.tedlandstudio.com/articles
|
Cactus Music
Max Output Level: 0 dBFS
- Total Posts : 8424
- Joined: 2004/02/09 21:34:04
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/15 18:25:18
(permalink)
☄ Helpful by jimfogle 2018/01/18 03:05:29
https://www.izotope.com/en/support/knowledge-base/differences-between-32-bit-and-64-bit-audio-software-plug-ins.html
I thought this explains it pretty well too:
https://theproaudiofiles.com/6-facts-of-sample-rate-and-bit-depth/
4. The Audio File's Bit Depth
The audio file's bit depth is often misunderstood and misinterpreted. The audio file that's on our computer, the same one that is created by our DAW, is simply a container for the information that the ADC already created. So the data already exists in complete form before it gets into the computer. That's the key thing to take from this. When we select an audio file bit depth to record at in the DAW, we're selecting the size of the container or "bucket" that we want that information to go into. Most ADCs will be capturing your audio at 24-bit regardless of the selection you make in the DAW.
So what happens when I select a 32-bit or 64-bit float file? Your audio is still 24 bits until it is further processed. In most cases it'll pass through a plugin effect at 32-bit float, or even a 32-bit mix bus. (Some have 64-bit float.) But on the topic of capturing audio, you aren't changing anything, or making it sound better, by simply putting it into a 32-bit or 64-bit float container. It's the same information, with just a bunch of 0's tagged on, waiting for something to do.
So why is a 32-bit or 64-bit float file container good? With a 24-bit file, we have a finite number of places (in this case 24) to capture the information between 0 and 1 that our ADC delivers. In a float file, the decimal place can move, or "float", to represent different values. Not only that, but we also have an extra 8 bits of resolution, or headroom, that wasn't there before. This allows us to do some pretty impressive things in terms of processing and computing. We can essentially give our audio more resolution than it originally had, simply by processing it and interpolating new points in the dynamic spectrum. We can also dynamically and non-destructively alter our audio as long as it remains in the digital realm. We can even prevent further clipping of the captured audio. That's why you hear it's so important to keep the same bit depth or higher throughout the entire production process.
So yeah, if you get 16-bit files from your friend to mix down, work at 24-bit, or better yet 32-bit float. You aren't making it any worse, only better. Whether you are producing subtle classical recordings or mixing a new breed of square-wave EDM, bit depth is just as important at representing your dynamics, even if perceivably there aren't any.
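As a toy illustration of the "prevent further clipping" point above (my own sketch; the +12 dB overshoot is an arbitrary example): push a peak over full scale inside a float container, undo the gain, and compare with what a fixed-point container would have kept:

```python
import numpy as np

x = np.float32(0.9)                      # a healthy peak just under full scale
boost = np.float32(10 ** (12 / 20))      # +12 dB, pushing the peak to ~3.6 (well over 0 dBFS)
cut = np.float32(10 ** (-12 / 20))       # the matching -12 dB to undo it later

in_float = x * boost * cut                        # float container keeps the overs: ~0.9 comes back
in_fixed = np.clip(x * boost, -1.0, 1.0) * cut    # fixed-point container clips at full scale first

print(in_float, in_fixed)                # ~0.9 vs ~0.25
```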
|
JohnEgan
Max Output Level: -80 dBFS
- Total Posts : 543
- Joined: 2014/10/21 10:03:57
- Location: Ottawa, Ontario, Canada
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/15 18:36:21
(permalink)
Thanks. I guess what I'm really asking is whether I should maintain the same record bit depth setting in Preferences that I had set when I recorded (record bit depth 32), from mix to master (until the final export to a 16-bit MP3), or whether it matters, since it's already only actually 24-bit resolution/accuracy anyway. (i.e., since I now normally leave record bit depth at 24, the same as my audio interface is capable of, what effect does leaving it at 24 have when I load older project files where I had it set to 32?) Also, is there a significant benefit to rendering at 32-bit rather than at 24-bit depth? I think this is somewhat answered by BF: 24-bit audio is converted to 32 bits by Sonar for internal processing, regardless of whether the preference is set to record at 24-bit.
Otherwise, there's no doubt my 96k/24-bit wave files sound better on my studio monitors than the processed 44.1k/16 MP3 files sound on a BT speaker or in the car, but if you have the processing power and memory that are common these days, why not use the best possible to try and best simulate the analog world.
However, that aside, as I understand bit depth now (possibly wrongly): since my audio interface is "only" capable of 24-bit A/D conversion, and you mention a 119 dB analog dynamic range, the A/D conversion at a point in time determines the analog level among the 119 dB range in 16.7 million possible steps, or roughly 0.0000007 dB steps of accuracy between the quietest and loudest levels? (32-bit ~ 4 billion steps, if you had an interface capable of this?) So, say, any two 80 dB sample levels would be accurately represented to within approximately +/- 0.00000035 dB of each other. So with the DAW record bit depth set to 32, and the analog signal level already digitized at 24-bit depth by the interface, the computer would only be using more CPU power to determine the same 24-bit step value already established by the interface, say the minus-tolerance 79.999999... dB; it would arrive at the same number but with something like 250 times more accuracy (or more CPU power) to define it. (Since it's already digitized, it's not the original analog signal being digitized at a bit depth of 32, so there's really no accuracy or rate-of-change (transient detection) benefit being gained at this step?)
Where I assume, but am not certain, the benefit comes in is in establishing 32- or 64-bit depth files and having the overhead (> 0 dBFS), and in rendering FX or VIs at the same bit-depth accuracy levels they allow for. This may also become more relevant at higher sampling rates (96k or higher), where more accurate bit-depth level variations can be captured per unit of time, which may make small nuances like harmonics more apparent when converted back from the digital to the analog world, and may end up sounding better to our analog ears. But more often it does not sound as good, because the playback isn't capable of reproducing the smaller nuances created with effects or VIs that you hear or sense when listening to your 96k/64/32-bit (or actually "only" 24-bit from the interface) mix or master wave files on studio monitors, as opposed to your processed 44.1k/16-bit production in your car or elsewhere. I find the final "bit rate" used for the MP3 significantly affects the final sound quality, possibly more so than a lot of the other things along the way. So in that respect it may make more sense to record, mix and master with the same settings your final production will be in, so you're dealing with the same sound-quality challenges from start to finish? Cheers, thanks for the replies
John Egan Sonar Platinum (2017-10),RME-UFX, PC-CPU - i7-5820, 3.3 GHz, 6 core, ASUS X99-AII, 16GB ram, GTX 960, 500 GB SSD, 2TB HDD x 2, Win7 Pro x64, O8N2 Advanced, Melodyne Studio,.... (2 cats :(, in the yard).
|
The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
.
post edited by mister happy - 2018/01/24 12:33:12
|
JohnEgan
Max Output Level: -80 dBFS
- Total Posts : 543
- Joined: 2014/10/21 10:03:57
- Location: Ottawa, Ontario, Canada
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/15 18:59:37
(permalink)
John Egan Sonar Platinum (2017-10),RME-UFX, PC-CPU - i7-5820, 3.3 GHz, 6 core, ASUS X99-AII, 16GB ram, GTX 960, 500 GB SSD, 2TB HDD x 2, Win7 Pro x64, O8N2 Advanced, Melodyne Studio,.... (2 cats :(, in the yard).
|
azslow3
Max Output Level: -42.5 dBFS
- Total Posts : 3297
- Joined: 2012/06/22 19:27:51
- Location: Germany
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/15 19:02:06
(permalink)
☄ Helpful by jimfogle 2018/01/18 03:07:32
I will try to be short...
16 and 24 bit refer to fixed-point number representation. 32 and 64 bit refer to floating-point number representation. Say you only have 3 digits with a fixed point; in the range 0-1 you can use 0.000, 0.001, 0.002, ... 0.999. Now say you have floating point with 3 digits plus 1 digit for a power of 10. In the same range (0-1) you can use: 0.000, 0.001 * 10^-9 = 0.000000000001, 0.002 * 10^-9 = 0.000000000002, ... 0.999 * 10^0 = 0.999. In the second case we always have 3 digits of precision, independent of how small the number is. In the first case, we lose precision when we need smaller numbers; for example, we cannot represent anything between 0.001 and 0.002. But we need that extra "power of 10" digit.
A 32-bit floating-point number has 24 bits of precision and 8 for the power (of 2). A 24-bit fixed-point number also has 24 bits of precision, but the power is always zero. AD/DA converters work in "fixed point" mode and have ~20 bits of precision. So by recording into 24-bit fixed point you do not lose anything; in fact you save ~3-4 bits of garbage. Saving more bits, or using 32-bit floating point, does not improve the quality, it just saves more garbage bits.
With a chain of analog equipment, try to lower the volume by -50dB and then amplify by +50dB. You can easily notice the difference compared to the original signal. Because the noise level is absolute, if you take the signal down toward the noise level and then amplify it, you amplify the noise as well. The same happens (but with "digital noise") when you process the signal with fixed-point numbers. The digital noise is (again, absolute) -96dB for 16 bits and -144dB for 24 bits. At -50dB and 16 bits, you will "bit-crush" the sound.
Now try to make this example within Sonar. Lower the output of a track by -50dB and then "amplify it" (with several buses at +6dB). You will notice NO signal degradation. Why? Because DAWs use floating-point numbers internally. So when you lower by -50dB you still preserve the original precision (you are just changing the "power of 2"). But if, after lowering by -50dB, you render the track into 16/24-bit fixed point, that precision is LOST. To avoid that, use 32-bit files for processing (intermediate) formats.
The requirement to use 64 bits instead of 32 (still floating point) comes from the fact that many plug-ins are written by musicians, not mathematicians. With complicated algorithms, keeping the calculation error under one bit is an enormous job. From the beginning, PCs preferred simply to make calculations with more bits rather than deal with the problem (the x87 co-processor could do this with an 80-bit format, while the result was normally converted to 64 or even 32 bits). But that primarily makes sense for calculations, not for exporting. So it is an almost paranoid "safe side" (even for a top studio).
How can it be that people "clearly hear" badly dithered 24-bit files (it is easy to google such claims), while the equipment completely ignores everything above 20-21 bits? Simple: they DIGITALLY amplify the fixed-point signal, so they "bit-crush" it the way I described before. If the original signal is not yet mastered (not maximized), there is "free space" for such amplification (especially in the almost silent places), and there the precision is not 24 bits. So nothing to do with "golden ears"; it is pure cheating. But that "cheating" is what happens during the next processing step of not-yet-finalized files if they are saved in 24-bit format (they do not have 24 bits of precision). For the final 16-bit format, correct dithering is a real thing. At maximum levels the theoretical SNR is 96dB, so the noise is already technically "reproducible". In reality the precision falls fast in quieter passages, and so the "noise"/"distortion" from incorrectly dithered material becomes audible even to noobs, without any "cheating".
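Here is a minimal NumPy sketch of the -50 dB / +50 dB experiment described above (my own illustration; the test signal and the 16-bit intermediate are assumptions made for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-0.9, 0.9, 48_000)             # stand-in for a recorded track, float64

def to_fixed(y, bits):
    """Quantize to a signed fixed-point grid of the given bit depth."""
    scale = 2.0 ** (bits - 1)
    return np.round(np.clip(y, -1.0, 1.0) * scale) / scale

gain = 10 ** (-50 / 20)                        # drop by 50 dB, then bring it back up

# Fixed-point path: the quiet intermediate is re-gridded to 16 bits, so detail is gone for good.
fixed_trip = to_fixed(to_fixed(x, 16) * gain, 16) / gain

# Floating-point path: the exponent absorbs the level change, so the precision survives.
float_trip = (x.astype(np.float32) * np.float32(gain) / np.float32(gain)).astype(np.float64)

for name, y in (("16-bit fixed", fixed_trip), ("32-bit float", float_trip)):
    peak_err = np.max(np.abs(y - x))
    print(f"{name}: worst-case error ~ {20 * np.log10(peak_err):.0f} dBFS")
# The fixed-point round trip lands around -46 dBFS of error; the float path stays near -140 dBFS.
```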
Sonar 8LE -> Platinum infinity, REAPER, Windows 10 pro GA-EP35-DS3L, E7500, 4GB, GTX 1050 Ti, 2x500GB RME Babyface Pro (M-Audio Audiophile Firewire/410, VS-20), Kawai CN43, TD-11, Roland A500S, Akai MPK Mini, Keystation Pro, etc. www.azslow.com - Control Surface Integration Platform for SONAR, ReaCWP, AOSC and other accessibility tools
|
The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
.
post edited by mister happy - 2018/01/24 12:33:34
|
GaryMedia
Max Output Level: -86 dBFS
- Total Posts : 217
- Joined: 2003/11/05 23:04:20
- Location: Cary, NC
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/15 21:15:37
(permalink)
JohnEgan
... However, that aside, as I understand bit depth now (possibly wrongly): since my audio interface is "only" capable of 24-bit A/D conversion, and you mention a 119 dB analog dynamic range, the A/D conversion at a point in time determines the analog level among the 119 dB range in 16.7 million possible steps, or roughly 0.0000007 dB steps of accuracy between the quietest and loudest levels? (32-bit ~ 4 billion steps, if you had an interface capable of this?) ....
.... But more often it does not sound as good, because the playback isn't capable of reproducing the smaller nuances created with effects or VIs that you hear or sense when listening to your 96k/64/32-bit (or actually "only" 24-bit from the interface) mix or master wave files on studio monitors, as opposed to your processed 44.1k/16-bit production in your car or elsewhere. I find the final "bit rate" used for the MP3 significantly affects the final sound quality, possibly more so than a lot of the other things along the way. ....
Again, a couple of different concepts got conflated here. The 16-bit (96dB), 20-bit (120dB), or 24-bit (144dB) capability of the converter gets stretched out over the real-world noisy analog circuitry, which is able (on a good day with the wind at its back) to give about 120dB of dynamic range. The signal-to-noise ratio of a typical condenser microphone will be in the 84dB range, so its noise floor isn't exactly plumbing the depths of what a converter can do.
The MP3 "bit rate" that you mention is a completely different thing from the bit depth and sampling rate of pulse-code-modulated wave files, and it's absolutely reasonable that you'd hear reproduction problems with MP3s that are encoded at less than 320kbps. Keep in mind that the whole MP3 strategy is to "throw away" significant portions of the musical content while minimizing the acoustic harm it causes. A regular stereo CD wav file is equivalent to a 1411kbps bitrate, while the maximum (and least harmful) MP3 bitrate is 320kbps.
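The 96 / 120 / 144 dB figures are just the "about 6 dB per bit" rule; a one-line check (my own arithmetic, nothing converter-specific):

```python
import math

# Theoretical dynamic range of an N-bit fixed-point format: 20*log10(2**N), i.e. ~6.02 dB per bit.
for bits in (16, 20, 24):
    print(f"{bits}-bit: {20 * math.log10(2 ** bits):.0f} dB")
# 16-bit: 96 dB, 20-bit: 120 dB, 24-bit: 144 dB
```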
CbB Win10 | Mac Pro 12-core 3.33GHz/48GB | TCL 55" 4K UHD | 480GB SSD | 6TB HDD RAID-5 array| 1.5TB SSD RAID-0 array | Midas M32 | 2x Audient ASP800 | UAD-2 Duo PCIe | Adam A7X. http://www.tedlandstudio.com/articles
|
azslow3
Max Output Level: -42.5 dBFS
- Total Posts : 3297
- Joined: 2012/06/22 19:27:51
- Location: Germany
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/15 21:34:33
(permalink)
mister happy
Unless you conduct a dither listening test... then it turns out nobody can hear the presence or absence of dithering in a 16-bit file, even if the playback levels are calibrated to -23dBFS pink noise = 105 dB SPL, C-weighted, at the listening position.
I agree, words are useless without some tests. So:
http://www.azslow.com/files/NoDither2.wav
http://www.azslow.com/files/WithDither2.wav
Before anyone starts to wonder why I have uploaded 2 files with silence... you will have to drive your card really hard (or chain 2 buses with +6 dB in Sonar) to hear something. But once you manage to get the sound to a normal level, the difference will be obvious. And that is the whole point: clearly demonstrate what dithering does, without "hi-end" equipment, "golden ears" or any cheating. Both files are exported from Sonar, 16-bit 44.1 kHz, from the same just-recorded n00b clip (after first lowering the clip gain). One with and the other without dithering. Initially I made the clips with ~10dB higher level; with studio headphones no blind test was required to hear the effect, but with the $5 phones on my notebook the effect was not so obvious.
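For anyone who wants to roll their own version of that demo, here is a rough Python sketch of the same idea (my own construction, not the files linked above; the 10-bit grid and gain figures are assumptions chosen to make the effect easy to hear): a very quiet tone is requantized with and without TPDF dither, then boosted before saving:

```python
import wave
import numpy as np

fs, bits = 44_100, 10                       # a coarse 10-bit grid makes the effect obvious
t = np.arange(2 * fs) / fs                  # two seconds
x = 0.002 * np.sin(2 * np.pi * 440 * t)     # a 440 Hz tone only about one step tall

q = 2.0 ** -(bits - 1)                      # quantization step of the coarse grid
rng = np.random.default_rng(0)
tpdf = (rng.random(x.size) - rng.random(x.size)) * q   # +/-1 LSB triangular (TPDF) dither

no_dither = np.round(x / q) * q             # plain requantization: the error tracks the signal
with_dither = np.round((x + tpdf) / q) * q  # dithered: the error becomes signal-independent noise

def write_wav(path, data):
    """Save mono float data as 16-bit PCM, boosted so the artifacts are easy to hear."""
    pcm = (np.clip(data * 200, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(fs)
        w.writeframes(pcm.tobytes())

write_wav("no_dither_demo.wav", no_dither)      # gated, buzzy remnant of the tone
write_wav("with_dither_demo.wav", with_dither)  # the tone survives, riding on a bed of hiss
```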
Sonar 8LE -> Platinum infinity, REAPER, Windows 10 pro GA-EP35-DS3L, E7500, 4GB, GTX 1050 Ti, 2x500GB RME Babyface Pro (M-Audio Audiophile Firewire/410, VS-20), Kawai CN43, TD-11, Roland A500S, Akai MPK Mini, Keystation Pro, etc. www.azslow.com - Control Surface Integration Platform for SONAR, ReaCWP, AOSC and other accessibility tools
|
GregGraves
Max Output Level: -85 dBFS
- Total Posts : 282
- Joined: 2014/11/14 11:32:14
- Location: florida usa
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/15 21:39:58
(permalink)
1. Uh, you should avoid upsampling audio, because you are adding something that isn't there, not making it somehow (ha ha) "better".
2. When is the last time you performed a frequency-based hearing test? Unless you are in your twenties, I doubt you can hear 18kHz, and there's a high probability you can't get to 13kHz, especially if you've spent a decade or two rockin' out live. If you are extended-range deaf (likely true), you fool yourself going back to the notion of "better".
3. Most everyone listens to [and sometimes actually buys] MP3s. 64-bit cobbled down to an MP3 accomplishes... what?
4. 24-bit 48kHz - in my fantastically self-over-rated opinion - is all you need: a good compromise between disk space, CPU load, artifact avoidance, and what you and your patrons might actually be able to hear.
|
hbarton
Max Output Level: -89 dBFS
- Total Posts : 61
- Joined: 2015/01/18 23:30:35
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/15 23:42:25
(permalink)
Hey, thanks to the OP for asking the question, and thanks to all the experts who chimed in. I thought I understood some of this, but I think I owe myself a refresher course on digital audio. I hope these discussions don't get erased or lost if the cake servers go away eventually. I would hope they could find a more permanent home somewhere on the "Intertubes." Take care, h
|
JohnEgan
Max Output Level: -80 dBFS
- Total Posts : 543
- Joined: 2014/10/21 10:03:57
- Location: Ottawa, Ontario, Canada
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/16 00:07:24
(permalink)
John Egan Sonar Platinum (2017-10),RME-UFX, PC-CPU - i7-5820, 3.3 GHz, 6 core, ASUS X99-AII, 16GB ram, GTX 960, 500 GB SSD, 2TB HDD x 2, Win7 Pro x64, O8N2 Advanced, Melodyne Studio,.... (2 cats :(, in the yard).
|
The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
.
post edited by mister happy - 2018/01/24 12:34:00
|
azslow3
Max Output Level: -42.5 dBFS
- Total Posts : 3297
- Joined: 2012/06/22 19:27:51
- Location: Germany
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/16 08:18:08
(permalink)
mister happy
azslow3
...words are useless without some tests. So:
http://www.azslow.com/files/NoDither2.wav
http://www.azslow.com/files/WithDither2.wav
Before anyone starts to wonder why I have uploaded 2 files with silence... ...
This is a specious example that may lead innocent bystanders to draw conclusions that have no practical application.
That is an example of how dithering affects the sound. No more, no less. Sorry to say, but what you provide are just WORDS, without a mathematical, physical or aural explanation. You just claim that 15 bits are sufficient for any sound at any fidelity level.
A test I like to use for this example is a cymbal crash followed by its complete tail to silence. You can either calibrate your playback as I suggested at the onset or just turn the levels up so that the cymbal's crash gives you a headache. Then you can listen intently for the final bit fluttering in the silence. Set up a pair of files and take an ABX test using one of the ABX testing apps. When you are done you will never worry about dithering again.
If you have done that and do not worry, fine! You "like" testing with a cymbal. That is the worst way to test whether some noise is audible: a loud initial sound, and the whole signal is just noise by design. Take a gun and shoot it into the air every time before listening to music; you can save a lot of money on your equipment then.
Sonar 8LE -> Platinum infinity, REAPER, Windows 10 pro GA-EP35-DS3L, E7500, 4GB, GTX 1050 Ti, 2x500GB RME Babyface Pro (M-Audio Audiophile Firewire/410, VS-20), Kawai CN43, TD-11, Roland A500S, Akai MPK Mini, Keystation Pro, etc. www.azslow.com - Control Surface Integration Platform for SONAR, ReaCWP, AOSC and other accessibility tools
|
The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
.
post edited by mister happy - 2018/01/24 12:34:25
|
azslow3
Max Output Level: -42.5 dBFS
- Total Posts : 3297
- Joined: 2012/06/22 19:27:51
- Location: Germany
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/16 14:03:40
(permalink)
If you listen to your musical content at common playback levels (for example, a playback level calibrated to -23dBFS pink noise = 72 dB SPL, C-weighted, at the listening position), then the least significant bit in the digital stream will be fluttering well below the SPL threshold of hearing. That is very simple math.
That is not math at all. That is a proposed loudness (only) condition for a test. The math is in fact very simple:
* 16 bits without any dithering effectively leaves 15 "true" bits. That means ~90dB SNR.
* 16 bits with simple dithering keeps all 16 bits meaningful. So ~96dB SNR.
Without doing anything fancy, the dynamic range is equivalent to these numbers. Fancy dithering algorithms claim a perceived dynamic range increase of up to 120dB.
Is that level of noise always meaningful, and can it always be spotted? Definitely not. There are at least several situations that make it unnoticeable by definition (any from the list):
* a calibrated listening environment in which that level is under the ATH
* when the environment has background noise comparable with the SPL from this noise
* when the signal already has more noise than that, so the content is "noisy" by design, including e.g. drums and "vintage warm" processors or emulations (tubes, tapes)
* when the signal is heavily maximized and the overall dynamic range is small
But you claim it NEVER makes any sense, and that part is questionable:
* modern equipment can technically reproduce such a level, unlike e.g. a 64-bit AD/DA
* there are many soft synths which produce a "crystal clean" signal; they sound "cold" and "airy", with a tiny "noise" component
* hobbyists rarely achieve perfect mastering, and so, to not squash things completely, they leave quite a big dynamic range in the final file. Also some classic CDs do not "ride" the volume by intention; there can be quiet tracks. All of that moves "listeners" to turn the volume knob on the Hi-Fi/End amplifier clockwise, effectively moving the noise level away from "calibrated" listening conditions. The math is just as simple if you listen to your music more loudly.
So, using your words... assuming you have done "everything right" in recording and mastering your cymbals, I completely agree that dithering is not important. But for us n00bs, given that it is just a check-box in the export dialog, it is good to set it. But avoid it during intermediate processing, since that makes no sense and can hit our low-end computers hard.
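Putting rough numbers on that disagreement (a back-of-the-envelope sketch of my own, using the -23 dBFS = 72 dB SPL calibration quoted above):

```python
import math

full_scale_spl = 72 + 23     # if -23 dBFS pink noise plays at 72 dB SPL, 0 dBFS sits near 95 dB SPL

for label, bits in (("15 'true' bits (16-bit, undithered)", 15), ("16 bits (dithered)", 16)):
    floor_dbfs = -20 * math.log10(2) * bits            # about -90 and -96 dBFS
    print(f"{label}: noise floor ~ {full_scale_spl + floor_dbfs:.0f} dB SPL")
# Roughly +5 dB SPL versus -1 dB SPL: both tiny, but turn the volume knob up and the gap matters.
```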
post edited by azslow3 - 2018/01/16 19:27:04
Sonar 8LE -> Platinum infinity, REAPER, Windows 10 pro GA-EP35-DS3L, E7500, 4GB, GTX 1050 Ti, 2x500GB RME Babyface Pro (M-Audio Audiophile Firewire/410, VS-20), Kawai CN43, TD-11, Roland A500S, Akai MPK Mini, Keystation Pro, etc. www.azslow.com - Control Surface Integration Platform for SONAR, ReaCWP, AOSC and other accessibility tools
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/16 18:31:21
(permalink)
azslow, your files are the equivalent of 10 bit audio.
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
azslow3
Max Output Level: -42.5 dBFS
- Total Posts : 3297
- Joined: 2012/06/22 19:27:51
- Location: Germany
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/16 19:26:45
(permalink)
drewfx1 azslow, your files are the equivalent of 10 bit audio.
In that direction... But that is just an illustration of the effect. An exaggerated illustration, to allow people to understand it on a phone with $1 headphones. In better conditions the (same) effect is audible at higher signal levels. The noise level from not dithering stays the same; with a lower signal level we obviously lower the SNR, down to what is demonstrated in the files.
Sonar 8LE -> Platinum infinity, REAPER, Windows 10 pro GA-EP35-DS3L, E7500, 4GB, GTX 1050 Ti, 2x500GB RME Babyface Pro (M-Audio Audiophile Firewire/410, VS-20), Kawai CN43, TD-11, Roland A500S, Akai MPK Mini, Keystation Pro, etc. www.azslow.com - Control Surface Integration Platform for SONAR, ReaCWP, AOSC and other accessibility tools
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/16 20:31:31
(permalink)
Take a step back - the reason you only used 10 bits is because if you used all 16 bits no one would hear it. If the only way one can hear it is to "exaggerate" the effect, that's another way of saying it's inaudible under normal conditions.
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
azslow3
Max Output Level: -42.5 dBFS
- Total Posts : 3297
- Joined: 2012/06/22 19:27:51
- Location: Germany
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/16 21:31:01
(permalink)
drewfx1 Take a step back - the reason you only used 10 bits is because if you used all 16 bits no one would hear it. If the only way one can hear it is to "exaggerate" the effect, that's another way of saying it's inaudible under normal conditions.
"Not everyone" would hear it. The purpose was to demonstrate the effect for everyone. People can decide do they need it or not for what they do. It can be hard decision without understanding what it is at first place, what exactly it produce and when. I have just mentioned that without dithering this bit is ALWAYS garbage. Original DAW output (especially synthetic) ALMOST ALWAYS has correct information for this bit. Dithering can preserve it. That are mathematical facts. I repeat, if someone do not want or do not need one bit from total 16, I have no problem with that! There is a "big clan" which say "I can hear the change from golden to silver wires of my speakers", some "pro" with "it is very important to do proper dithering at 24 bit" (Samplitude creator...) and yet "who needs the bit 16, lets use 15". I do not care in what people believe. Also I do not judge what they can hear or not, till that is technically impossible.
Sonar 8LE -> Platinum infinity, REAPER, Windows 10 pro GA-EP35-DS3L, E7500, 4GB, GTX 1050 Ti, 2x500GB RME Babyface Pro (M-Audio Audiophile Firewire/410, VS-20), Kawai CN43, TD-11, Roland A500S, Akai MPK Mini, Keystation Pro, etc. www.azslow.com - Control Surface Integration Platform for SONAR, ReaCWP, AOSC and other accessibility tools
|
berlymahn
Max Output Level: -85 dBFS
- Total Posts : 257
- Joined: 2007/11/28 08:48:13
- Location: Northern VA
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/16 21:59:46
(permalink)
You guys are why I love this forum. Please QA my "Explain Like I'm 5" explanation:
In my limited understanding, bit depth represents dynamic range (absolute silence to the "11" on the volume scale..... ha!). Right? And at 24-bit you can take each sample of audio in your ADC and convert it into potentially millions of slices (low volume to high volume) for each portion of sampled data (16,777,216 slices of volume). That's more than enough.
To me, the other aspect, sample RATE, is far more crucial. A sample rate of 44,100 will yield 441 samples for EACH sine wave of a 100 Hz tone (bass drum, e.g.). A sample rate of 44,100 will yield 100.2 samples for EACH sine wave of a 440 Hz tone (the A note on a keyboard). Mapping out each sine wave of a 440 Hz tone with 100 samples, and 16.7 million steps of "volume resolution" (24-bit), will render a very good representation of the waveform. It is really quite good. At the upper end of the sampling spectrum, 14,700 Hz for example, the 44,100 sample rate will only give three samples per sine wave. Imagine a sine wave and placing 3 equidistant points on the waveform, and you can see the resolution go to hell real fast. Thankfully there are great dithering algorithms out there, plus our ears (most of us, anyway) suck at hearing freqs that high. Boosting the sample rate to 48,000 is meh (to me), and 96,000 more than doubles the 44.1 sample resolution (great for higher freqs, I suppose).....
Ultimately all of this fretting is lost on the user who is walking the noisy downtown streets listening to your tune on their complete **** ear buds, with their ears ringing after a night out on the town in a noisy club. Not to mention the fact that SoundCloud is going to take your amazing creation and crap it out at a lovely 128 kbps (basically tossing out 94% of the clarity of your 24-bit (2,116 kbps) original wav file), and that their 128k stream is being generated from your [hopefully] higher-resolution MP3 submission. I usually upload 320k MP3 files to SoundCloud, and who knows what sort of algorithm they run it through before dumping them out on the web at 128k...... yikes!!
Am I misinformed? Thank you. I yield the floor.
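A quick check of the arithmetic in that post (my own sketch; the PCM figure assumes stereo 24-bit/44.1 kHz):

```python
fs = 44_100

# Samples per cycle at the frequencies mentioned above.
for f in (100, 440, 14_700):
    print(f"{f} Hz: {fs / f:.1f} samples per cycle")    # 441.0, 100.2, 3.0

# Data rates: stereo 24-bit PCM versus a 128 kbps MP3 stream.
pcm_kbps = fs * 24 * 2 / 1000
print(f"PCM: ~{pcm_kbps:.0f} kbps; a 128 kbps MP3 keeps ~{128 / pcm_kbps:.0%} of that data rate")
# PCM: ~2117 kbps; a 128 kbps MP3 keeps ~6% of that data rate
```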
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/16 22:07:22
(permalink)
azslow3
drewfx1 Take a step back - the reason you only used 10 bits is because if you used all 16 bits no one would hear it. If the only way one can hear it is to "exaggerate" the effect, that's another way of saying it's inaudible under normal conditions.
"Not everyone" would hear it. The purpose was to demonstrate the effect for everyone.
But you didn't exaggerate by a little. For each bit you reduce the bit depth you double the amplitude of the quantization error + dither (QE + dither). You didn't increase the level by 2x or 4x (4x = 14 bit) - you increased it by a factor of 64x! So why such a huge increase? Hmmm....
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/16 22:29:30
(permalink)
berlymahn
You guys are why I love this forum. Please QA my "Explain Like I'm 5" explanation:
In my limited understanding, bit depth represents dynamic range (absolute silence to the "11" on the volume scale..... ha!). Right? And at 24-bit you can take each sample of audio in your ADC and convert it into potentially millions of slices (low volume to high volume) for each portion of sampled data (16,777,216 slices of volume). That's more than enough.
I would say that for properly dithered audio, higher bit depth simply means less noise. And once this noise is either below our absolute threshold of hearing (ATH) or buried under other noise (in the listening environment or in the signal itself), increasing bit depth accomplishes nothing. You can't increase resolution by adding more bits when the bit depth isn't the thing limiting the resolution (fix that instead).
To me, the other aspect, sample RATE, is far more crucial. A sample rate of 44,100 will yield 441 samples for EACH sine wave of a 100 Hz tone (bass drum, e.g.). A sample rate of 44,100 will yield 100.2 samples for EACH sine wave of a 440 Hz tone (the A note on a keyboard). Mapping out each sine wave of a 440 Hz tone with 100 samples, and 16.7 million steps of "volume resolution" (24-bit), will render a very good representation of the waveform. It is really quite good. At the upper end of the sampling spectrum, 14,700 Hz for example, the 44,100 sample rate will only give three samples per sine wave. Imagine a sine wave and placing 3 equidistant points on the waveform, and you can see the resolution go to hell real fast.
No. What the sampling theorem says is that a signal that contains nothing greater than or equal to 1/2 the sampling rate (aka the Nyquist frequency) can be reconstructed perfectly, including the part between the samples. IOW, adding more samples per cycle doesn't increase resolution because all of the information about the part between the samples is already stored there in a long string of samples. And in fact if we know it's a pure sine wave, only 3 successive samples are enough to mathematically reconstruct the sine wave - the frequency, amplitude and phase of the sine wave can be calculated from only those 3 samples. So increasing sample rate really only allows for higher frequencies in our signal, not higher resolution, at least not in and of itself. In the real world perfect reconstruction is impossible, but modern converters are extraordinarily good up to ~10% below the Nyquist frequency.
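A small NumPy sketch of that "three samples pin down a sine" claim (my own illustration; the frequency, amplitude and phase are arbitrary test values, and it assumes the middle sample is non-zero and the tone is below Nyquist). Three consecutive samples of a sine obey x0 + x2 = 2*x1*cos(w*T), so everything falls out of a little algebra:

```python
import numpy as np

fs = 44_100.0
f_true, A_true, phi_true = 14_700.0, 0.7, 0.3        # only three samples per cycle at this frequency

n = np.arange(3)
x = A_true * np.sin(2 * np.pi * f_true * n / fs + phi_true)

# x[0] + x[2] = 2*x[1]*cos(w*T): the frequency comes straight from the middle sample.
wT = np.arccos((x[0] + x[2]) / (2 * x[1]))
f_est = wT * fs / (2 * np.pi)

# With w*T known, x[0] = A*sin(phi) and x[1] = A*sin(w*T + phi) give amplitude and phase.
a_sin = x[0]
a_cos = (x[1] - x[0] * np.cos(wT)) / np.sin(wT)
A_est, phi_est = np.hypot(a_sin, a_cos), np.arctan2(a_sin, a_cos)

print(f"f = {f_est:.1f} Hz, A = {A_est:.3f}, phi = {phi_est:.3f}")   # 14700.0 Hz, 0.700, 0.300
```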
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
azslow3
Max Output Level: -42.5 dBFS
- Total Posts : 3297
- Joined: 2012/06/22 19:27:51
- Location: Germany
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/16 23:11:16
(permalink)
drewfx1 So why such a huge increase? Hmmm....
Come on. Make an example which YOU can hear. I am sure you can do this with at least 12-13 bits, probably even all 15. But please do not follow the cymbal advice... take at least a clean guitar or an FM synth. I have just demonstrated that on ANY device, for ANY person, there is a NON-ZERO number of bits at which the dithering effect can be observed.
berlymahn
Am I misinformed?
A bit
In my limited understanding, bit depth represents dynamic range (absolute silence to the "11" on the volume scale..... ha!). Right? And at 24-bit you can take each sample of audio in your ADC and convert it into potentially millions of slices (low volume to high volume) for each portion of sampled data (16,777,216 slices of volume). That's more than enough.
Almost... existing A/D conversion does not work with 24 bits of precision; the best converters manage about 20. Once they manage to do all 24 bits, they will immediately claim >140dB SNR. You can easily check in interface specifications that they quote 115-120dB at the moment (with some variation around RMS / dBA).
To me, the other aspect, sample RATE, is far more crucial. ... At the upper end of the sampling spectrum, 14,700 Hz for example, the 44,100 sample rate will only give three samples per sine wave. Imagine a sine wave and placing 3 equidistant points on the waveform, and you can see the resolution go to hell real fast. Thankfully there are great dithering algorithms out there, plus our ears (most of us, anyway) suck at hearing freqs that high.
Do not thank dithering or our ears for that. There were people you can thank:
https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem
At the "I'm 5 years old" level: they proved it is possible to EXACTLY reproduce the original wave with even fewer (!) than 3 points. Hard to believe at first. On "fingers": when you look far away down the road, do you need to recognize the wheels or other fine details before you are sure that you see a car? I mean not a cow, not a house, not a bike. And it is on the road, not in a garage, and it does not fly. Signal processing knows that if there is SOMETHING (a dot), that IS a wave. It just needs a tiny bit of extra information to identify WHICH one. If the wave is not 14,700 Hz but 14,701 Hz, the "dot" will be at a different place (as long as you do not cross the "magic border"; that is why an LPF is required during down-sampling...).
EDIT: I was slow, as usual. I am AZ Slow
Sonar 8LE -> Platinum infinity, REAPER, Windows 10 pro GA-EP35-DS3L, E7500, 4GB, GTX 1050 Ti, 2x500GB RME Babyface Pro (M-Audio Audiophile Firewire/410, VS-20), Kawai CN43, TD-11, Roland A500S, Akai MPK Mini, Keystation Pro, etc. www.azslow.com - Control Surface Integration Platform for SONAR, ReaCWP, AOSC and other accessibility tools
|
bitflipper
01100010 01101001 01110100 01100110 01101100 01101
- Total Posts : 26036
- Joined: 2006/09/17 11:23:23
- Location: Everett, WA USA
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/17 00:33:47
(permalink)
JohnEgan
Thanks. I guess what I'm really asking is whether I should maintain the same record bit depth setting in Preferences that I had set when I recorded (record bit depth 32), from mix to master (until the final export to a 16-bit MP3), or whether it matters, since it's already only actually 24-bit resolution/accuracy anyway.
Short answer: no need to over-think it; set your render depth to 32 and forget about it. Export to...
- 32 bits if you're going to be using another program for mastering or file conversion
- 16 bits for writing to a CD
- 24 bits if you want to import it to another DAW or a media player that can't handle 32 bits
- 8 bits if you're composing for a video game in 1985
And BTW, it sounds worse in your car because it's in your car. Not because it's 16 bits.
All else is in doubt, so this is the truth I cling to. My Stuff
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: BIT DEPTH QUESTION
2018/01/17 01:32:19
(permalink)
azslow3
drewfx1 So why such a huge increase? Hmmm....
Come on. Make an example which YOU can hear. I am sure you can do this with at least 12-13 bits, probably even all 15. But please do not follow the cymbal advice... take at least a clean guitar or an FM synth. I have just demonstrated that on ANY device, for ANY person, there is a NON-ZERO number of bits at which the dithering effect can be observed.
Yes. Your example demonstrates that dither is desirable in a 10-bit scenario. I don't think anyone will find that controversial. Had you mentioned in the first place that it was an example of dithering to a 10-bit level, then I wouldn't have commented. But you neglected to say that, and thus people may have mistakenly believed your example (and any conclusions based on it) was based on 16-bit resolution rather than 10-bit resolution.
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|