• SONAR
  • Questions RE default 32 bit rendering bit depth (p.4)
2018/07/19 02:31:10
The Maillard Reaction

2018/07/19 05:29:05
BenMMusTech
wst3
We are talking here about technical issues, and reasonably well settled ones at that.

All engineering endeavors (as opposed to actually composing/arranging/orchestrating/playing) are an exercise in optimization. All engineering endeavors, including audio engineering. In order to optimize you need to know the limits of all the elements of a system, in our case the listener, the listening environment, the signal sources, and the processing engine, which includes any code used to mix and/or render.

All before we take into account personal taste.

For recording a live source you simply don't need more than 24 bits because, as stated earlier, the A/D converter can't match that. For the same reason fixed point is more than adequate. As stated previously, you can never increase the resolution in terms of frequency response or noise floor, what you record is what you get.
 
Sample rate is slightly more complex! If you are in a really quiet room with really good microphones, really good preamplifiers, NO analog processing (unless you want to capture all the artifacts, which by the way will limit the dynamic range and pass band further) and a great instrument (a great player doesn't hurt either) - if all of that is true then I'd probably record at 96 kHz. Otherwise 48 kHz is more than adequate.

I'd record at 96 kHz in order to preserve - in the recording - as much of the original signal as possible, or even more<G>! Keeping things as good as you can until the final mix is a great idea, and no, I don't think it is lost on us old folks (get off my lawn). Most of us "grew up" in a time when even a 96 dB S/N ratio was unobtainable, and unnecessary, since neither FM radio nor vinyl discs could reproduce it<G>!

Fortunately (for me) my recording space is awful - ok, that's not entirely fortunate, but it does mean I have no excuse for recording at anything more than 48 kHz/24 bit fixed point. Which means that I don't need to consider processor power or disk space (I suppose I could probably get away with 44.1/16, but even I won't go that far!)

Where things get more dicey - a lot more dicey in some cases - is when we start manipulating the audio data. All processing in the digital domain is nothing more than math and the limits of precision and accuracy are well understood.

But if you have the horsepower why not work at greater word length? You only lose disk space, and maybe processing power in extreme cases. You won't hurt the audio, but you won't improve the source, and that is important.

For my own case, I can hear the difference between different sample rates and word lengths for some plugins (most notably some of the UA stuff). Well, I can hear the difference in a proper studio, in my studio these things have no impact. So that's another issue for optimization.

With respect, without something to compare I really can't comment on the tracks at 1331.space with respect to this discussion. Personally I found many of the tracks to be a bit too busy for my tastes, and in some cases perhaps over processed (this coming from me is almost comical!).

TL;DR
All this to say, if you are able to work with at least one order of magnitude greater resolution than you need you will be just fine.





I have ADHD - what you're hearing in regards to busyness is just my head lol. I can make empty minimalist tracks too, it's just at the moment... I'm in Hulk mode lol, quite literally, as I'm building my Hulk piece as we speak.

What happens though is that when you process at a lower bit depth, the fatness of the tracks I've managed to create - and only on the first few tracks on my web page, because I've only managed to perfect my technique in the last year - becomes thinner. And you can hear that thinness on a 24bit master. It's small - barely audible - but it's enough to convince me of the importance of 64bit fp masters.

Again, the OP was about rendering to 32bit, but what everyone is missing, and I keep saying it, is this: create 24bit wave masters for general listening and even distribution, but for the creation of MP3 and other formats, including visual ones like MP4, render a non-dithered 64bit master and keep that master for future-proofing. As I think Craig has been saying, you process all those lovely time-based effects, not to mention all the other effects, and - as with sample rate - the tails of these effects can get lost in the last few bits, and indeed get lost at lower sample rates. So you do all this processing at 64bit, and how do we fit 64 into 24? We truncate 40 bits. Now, as we know, when we use a compression algorithm it doesn't discriminate about what information it culls, so why compress a truncated file when you can compress a 64bit file? You're basically creating a hi-resolution MP3 or MP4.

And again... I prefer the MP4 created from a 64bit master to the 24bit wave file. The difference is negligible - we're talking a mm of warmth and fatness - but on the 24bit side the definition in the time-based effects becomes murkier, and those time-based effects start to sound slightly metallic. The SoundCloud stream created from a 64bit 48kHz master suffers in a similar way, but it's ten times better than a 16bit or even a 24bit SoundCloud stream.
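For what it's worth, the audible cost of simply chopping off low bits - and why mastering engineers usually dither rather than truncate when reducing word length - can be sketched in a few lines of NumPy. This is a toy model, not anyone's actual mastering chain; the signal level, sample rate, and RPDF dither here are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# A -100 dBFS sine: genuine detail, but entirely below the 16-bit step.
t = np.arange(48000) / 48000
quiet = 10 ** (-100 / 20) * np.sin(2 * np.pi * 1000 * t)

def requantize_16bit(x, dither=False):
    scaled = x * 2**15
    if dither:
        # RPDF dither for brevity; TPDF is the usual choice in practice.
        scaled = scaled + rng.uniform(-0.5, 0.5, len(x))
    return np.round(scaled) / 2**15

plain = requantize_16bit(quiet)                  # requantized without dither
dithered = requantize_16bit(quiet, dither=True)

print(np.abs(plain).max())                       # 0.0 -- the tone is erased
print(np.corrcoef(dithered, quiet)[0, 1] > 0.1)  # True -- a trace survives
```

Without dither, every sample of the quiet tone rounds to zero; with dither, the tone survives as a statistical trace mixed with noise, which is the usual argument for dithering on the final word-length reduction.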

Remember - and I know I rabbit on and can probably be quite confusing - a 64bit fp audio file is nothing more than a 24bit audio file wrapped in a 64bit file, but once the 24bit file is wrapped in the 64bit file, it becomes a 64bit file. So everyone is right: you can't play that on anything, except in Sonar, where Sonar does its magic. But what everyone is missing is that the extra bits allow the audio to be treated in the same way as analogue, including digital varispeed. For example, sometimes I will bounce out a 64bit fp premaster to manipulate in Reaper. It's these subtle avant-garde techniques that have been missing from the digital realm, which were essential to creating some of the very best audio-visual and musical art of the 20th century. Think Strawberry Fields and Tomorrow Never Knows by The Beatles. Both rely on being able to manipulate the tape medium in a way more akin to sculpture and painting.

It's the misunderstanding of the digital medium which is causing this debate now. It's also the ingrained ideas of the old digital paradigm that are causing a lot of confusion, as well as the lack of true boffins. Digital was crap, let's not forget this... I mean, PCM - pulse code modulation - was created in the 30s, off the top of my head. No one really understands it: it's the perfect storage medium for ethereal materials because it adds nothing to the source material. I think PCM was invented for telephone stuff too, so it was never designed to be a storage medium or anything like what PCM and its associated tech is used for now.

Now we understand some of these ideas, and analogue emulation has come of age. It's still a long way off being perfect - to my ears there are only 2 or 3 companies that know how to create accurate emulations - but some of us have worked out that you need to add back in what we once took for granted in the analogue realm. I'm still working out all the different languages, so I'm sorry if sometimes I go off script, don't make much sense, or, worse, confuse ideas and terms - but I have studied these ideas extensively and I know that 64bit fp solves everything. It creates a hi-resolution audio recording format, possibly a hi-resolution MP3/MP4 format, and it future-proofs your masters too, by storing a master in an undithered format and container file. I would have to dedicate the next couple of years to fully going through all the data and science to really pull together the emerging concepts of 64bit fp. I'm good, but that's not my trip. I think I've explained the reasons for 64bit fp pretty well now. And I've also explained the reasons for plain old 24bit. There are times when you should stick with vanilla.

Now I'm sorry for the wanky art stuff... I hate wanky art stuff, and yes, I have a Master of Philosophy in Fine Farts... it was an accident and I should have gone into composition - I'm trying to rectify this by doing a PhD in composition. But what's really missing from almost everything in contemporary culture is an understanding of aesthetics and mediums. Because, thanks to postmodernist wank, every button-pushing iFool owner now thinks they can create 'art'. They cannot lol, but when creating, every decision you make needs to be thought through, and the possible outcomes of those decisions also need to be thought through. For me, I've chosen a 64bit medium and the analogue emulation aesthetic. In 10 years' time this may be considered hi-resolution audio and 24bit may be considered lo-fi, but neither is wrong and both are right, so long as you own the choice you made when you created the work.

Sorry, I might have digressed...I've been working late on my Incredible Bulk not Hulk lol parody :). But hopefully some of you might find my ideas useful.

Ben.
2018/07/19 07:45:10
mettelus
I am still trying to glean the context of the 64-bit argument (a lot of TL/DR going on). To pass through a DAC it has to be the bit depth the DAC supports (either in a DAW, or Windows will do it for you). I get the feeling you are stating you are listening to 64-bit audio (??), so am most curious about what DACs you are listening through?

Computation precision and what makes it to headphones/speakers are not one and the same.
2018/07/19 09:23:27
msmcleod
The mistake is to mix up the 16bit / 24bit setting used in the ADC / DAC with the 32bit / 64bit rendering format.

Although they're related, they're subtly different.

Once you've recorded a 16 or 24bit recording into CbB (which is lots of 16 bit or 24 bit integer values respectively, i.e. whole numbers), each integer is converted internally in CbB to a 32bit (or 64bit) floating point number. The reason is that this limits the rounding errors associated with mixing tracks and processing audio when performing all the crazy DSP maths involved, as floating point numbers can deal with fractions.

Once all the processing/mixing of tracks has taken place, the sound is converted back to 24bit integers to be sent to your sound card.

64 bit floating point numbers take up twice as much buffer space for internal processing, but as their precision is higher, there are fewer rounding errors during processing, hence the better sound quality.

When you mix down (usually to 16bit 44.1kHz for CD), it has to take the 64bit floating point numbers and convert them into 16 bit integers. Rounding errors will occur at this point, but dithering algorithms help to minimise the audible effect (especially when also converting from a different sample rate, e.g. 96kHz to 44.1kHz).
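As a rough illustration of why the float round-trip is benign - a Python/NumPy sketch, not CbB's actual internals - a 24-bit integer sample fits exactly into a float64 (53-bit significand), survives gain processing essentially unscathed, and only loses detail if you requantize to a short word length mid-stream. The gain value and signal here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# A pretend 24-bit recording: integer sample values in [-2**23, 2**23).
signal = rng.integers(-2**23, 2**23, size=48000)

# Convert to float64 for processing. This step is exact: a 53-bit
# significand holds any 24-bit integer, and dividing by 2**23 is exact.
x = signal.astype(np.float64) / 2**23

# Apply a gain change and undo it. Each float64 operation rounds at
# roughly the 2**-52 level, far below the 24-bit step of 2**-23.
y = (x * 0.123456789) / 0.123456789

# Back to 24-bit integers for the sound card: bit-identical here.
back = np.round(y * 2**23).astype(np.int64)
print(np.abs(back - signal).max())  # 0

# If the intermediate result is instead stored as 16-bit integers,
# everything below the 16-bit step is gone for good.
y16 = np.round(x * 0.123456789 * 2**15) / 2**15 / 0.123456789
back16 = np.round(y16 * 2**23).astype(np.int64)
print(np.abs(back16 - signal).max() > 100)  # True: large 24-bit errors
```

The point of the sketch is that the damage comes not from the float representation but from any premature return to a short integer word length - which is exactly why DAWs keep the engine in 32/64-bit float until the final export.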
 
2018/07/19 11:10:26
Bristol_Jonesey
mettelus
I am still trying to glean the context of the 64-bit argument (a lot of TL/DR going on). To pass through a DAC it has to be the bit depth the DAC supports (either in a DAW, or Windows will do it for you). I get the feeling you are stating you are listening to 64-bit audio (??), so am most curious about what DACs you are listening through?

Computation precision and what makes it to headphones/speakers are not one and the same.

He isn't listening to 64 bit audio
 
There is no converter in the world operating at that depth.
2018/07/19 14:04:39
The Maillard Reaction

2018/07/19 16:12:55
Cactus Music
I am still in the habit of working at 24bit 44.1kHz and rely on the music to seem interesting enough to distract listeners from a preoccupation with technical matters.
 
This! 
 
2018/07/19 17:01:20
drewfx1
mettelus
I am still trying to glean the context of the 64-bit argument (a lot of TL/DR going on). To pass through a DAC it has to be the bit depth the DAC supports (either in a DAW, or Windows will do it for you). I get the feeling you are stating you are listening to 64-bit audio (??), so am most curious about what DACs you are listening through?

Computation precision and what makes it to headphones/speakers are not one and the same.



The argument is that if you're performing a number of calculations then the calculation errors due to limited precision that occur in the lowest bits (i.e. "rounding errors") can accumulate and work their way up to become audible.
 
This is a real concern that's well understood and not in any way controversial. It's a reason why programmers use larger bit depths for certain operations that they know can or will cause problems.
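A tiny illustration of that kind of lowest-bit loss - plain NumPy, nothing to do with any particular DAW or plugin: once an accumulator is large enough, a small addition falls entirely below the last significand bit and is silently discarded.

```python
import numpy as np

# float32 keeps a 24-bit significand. At a magnitude of 2**24 the gap
# between adjacent float32 values is 2, so adding 1.0 changes nothing
# (the exact result 2**24 + 1 is a tie and rounds back to 2**24).
acc = np.float32(2**24)
print(acc + np.float32(1.0) == acc)      # True: the +1 vanished

# The same addition in float64 (53-bit significand) is exact.
print(np.float64(2**24) + 1.0 == 2**24)  # False: the +1 survives
```

Scale that up to millions of multiply-accumulates per second of audio and it is easy to see why engines use wider intermediate formats for summing and feedback paths.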
 
 
The controversy is that some believe for whatever reason that if they know about these errors then they must be audible to them.
 
I would say that the correct answer is, "it depends". And when you try to get beyond that it gets technical fast.
 
And technical arguments tend to be not particularly compelling to people who aren't technical and "I heard it!" arguments aren't compelling to technical types, so we go on.
 
And on.
 
And on.
 
Sorry (on a number of counts).
2018/07/20 12:08:53
The Maillard Reaction

2018/07/20 13:36:03
gswitz
noise sources...
Fan in the room
Mic
Preamp
Dac

I think noise alters values at all amplitudes but proportionately. This is why it isn't really distracting when listening to a strong recording.

I also want to point out that floating point is more precise as you approach zero [get quieter]. Someone who is working with low-level audio signals might benefit more from fp than someone processing a hotter signal.

DACs are equally precise at all amplitudes, which is why float doesn't help them.
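That amplitude-dependent precision is easy to see with NumPy's `spacing`, which reports the gap to the next representable value - a sketch of the number formats only, not a claim about any converter:

```python
import numpy as np

# Floating point: the gap between neighbouring representable values
# shrinks with magnitude, so quiet material keeps the same *relative*
# precision as loud material.
for level in (1.0, 1e-3, 1e-6):
    print(level, np.spacing(np.float64(level)))  # gap shrinks with level

# Fixed point (what a DAC converts): one constant step at every
# amplitude. For 24-bit audio the step is 1/2**23 of full scale,
# whether the signal is loud or quiet.
print(1 / 2**23)
```

So a signal at -60 dBFS is represented with far fewer effective bits in fixed point, while in floating point its relative precision is unchanged - which is the usual argument for float inside the mix engine.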