• SONAR
  • 64 bit engine?
2013/11/27 17:15:56
lawp
i suspect the confusion comes from the bakers' flip-flop: white papers, advertising hyperbole, best in class, etc, and then when it's buggy it's "just turn it off you won't hear the difference"... fwiw, i have it enabled, as i believe the maths makes it more accurate even if i can't hear it...
2013/11/27 18:25:56
Splat

 
So far, tests at my end have proved inconclusive....
 
2013/11/27 22:54:53
kitekrazy
bitflipper
Lynn
bitflipper
Lynn, have you tried just turning the 64-bit engine off and listening?
 
Or, if you want to get scientific about it, export the full mix with and without the 64-bit engine enabled and do a blind ABX test.

Dave, I'm going to download x3c again and give this a whirl.  I don't have golden ears, so I don't expect to hear any difference.  I'm sure CW will have this fixed in no time.

Therein lies the heart of the ongoing dilemma: the presumption that if we don't do everything just right and don't use the best gear, someone else will hear shortcomings that we ourselves can't perceive.
 
Hence the ongoing forum questions: which interface is most accurate, what sample rate/bit depth to use, which reverb/compressor/limiter sounds best, what dither algorithm is better, which MP3 bitrate is acceptable, do all equalizers sound the same, does the 64-bit engine make a difference?
 
Whenever such queries are posed publicly (especially on Gearslutz), you can count on somebody replying that X made a "night and day difference", or that it was "like a veil being lifted". But think about it: if the differences are really so profound, then why do these questions repeatedly get asked in the first place? Because everybody fears that there are serious flaws in their gear and/or methodology that everybody else but them can hear.
 
Granted, with time and practice we do get better at listening, and everybody's hearing acuity is naturally a little different. Some are tone-deaf while others have perfect pitch, and high-frequency sensitivity drops with age and abuse. But the Golden Ear syndrome is largely a myth, or is at least irrelevant.
 
If you can't hear a difference between method X and method Y, try again. If you still can't hear it, then just let it go - chances are no one else can, either. 
 
 
 




This is one of the best posts ever.
2013/11/28 11:17:29
Paul P
bitflipper
 If you can't hear a difference between method X and method Y, try again. If you still can't hear it, then just let it go - chances are no one else can, either. 
 

drewfx1
 The simple answer is that people should subject their opinions to the crucible of objective testing.

 
This is all very nice, but it takes the fun out of everything, which is what belief is all about.
 
2013/11/28 14:09:47
shawn@trustmedia.tv
CakeAlexS
Lynn
So, I'm guessing, the 64-bit double precision engine helps 64-bit plug-ins run more smoothly? I don't necessarily consider turning the 64-bit engine off a viable workaround if you have to take one step backwards to take one step forward. Is my logic faulty, or am I just being too picky? I suspect many have the same reservations. I would really love it if someone from CW chimed in here.




It's a valid workaround; nothing should be broken when turning this off, though you may experience a sudden loss of self-esteem knowing it's not going quite up to 11. Also, when you know it will probably be fixed in a month or so (I can't give estimates though, I'm not Cake), there really is nothing much to complain about. Not that you are actually complaining, far from it :) In the meantime I recommend cucumbers down underpants....


Ta


It's gold versus diamonds... they are all just bling... I agree with CakeAlexS, try just being a little louder and you will notice the difference. Recording in the red in 64-bit can fool you into hearing clarity where in the end it will be distortion, but 32 represents the real world. Louder is better. - Shawn Lee Farrell
 
 
2013/12/11 02:40:51
Goddard
Anderton
Goddard
 
Um, how about in [Craig's] paragraph which you snipped out?:
 
Recording resolutions higher than 24 bits are fictional, due to the limitations of A/D technology. But your sequencer’s audio engine needs far greater resolution.
 
This is because a 24-bit piece of audio might sound fine by itself. But when you apply a change to the signal (level, normalization, EQ, anything), multiplying or dividing that 24-bit data will likely produce a result that can’t be expressed with only 24 bits. Unless there’s enough resolution to handle these calculations, roundoffs occur—and they’re cumulative, which can possibly lead to an unpleasant sort of “fuzziness.” As a result, your audio engine’s resolution should always be considerably higher than that of your recording resolution.

 
Now, I don't want to infer anything from "unpleasant sort of fuzziness" as to whether that implies audibility, or even guess at what Craig meant by "far greater" or "considerably higher." Although surely, 32-bit single precision float is only equivalent in resolution (significand/mantissa digit-wise) to fixed integer 24-bit, not "far greater" or "considerably higher"?
 
But again, I certainly don't want to infer anything, so in the interest of precision, perhaps Craig would care to elaborate on precisely what he meant in his HC article?



Sure. First of all, here's a great article on DSP precision and such.

 
Hi Craig, that's an interesting article. Wrt the signal-dependent noise modulation caused by fp rounding error (which that article says is "thought by some to be audible"), this can become especially problematic when the fp rounding error-induced noise is "correlated" with the input, as explained in this paper:
 
http://nykiel-audio.pl/wp-content/uploads/2011/05/fixed-floating-fmt2.pdf
 
And this online DSP tutorial gives a good explanation of fp rounding error accumulation modes:
 
http://www.dspguide.com/ch4/4.htm
 
Anderton
By "unpleasant sort of fuzziness," if anyone ever used Sound Tools on a Mac Plus with a 16-bit audio engine, you'd know exactly what I mean. It was fairly easy to run out of resolution, and the results were audible (the technical term would be "nasty").
 
The advantage of floating point is that it's kind of like the computer equivalent of scientific notation. While the accuracy of 32-bit float is equal to 24-bit fixed, floating point can represent much larger numbers, giving a larger dynamic range. With fixed point, the word length determines the ultimate dynamic range; with floating point, the exponent does. Also, because floating-point numbers are scaled to use the full word length, accuracy is maintained for smaller numbers than with fixed point.
 
Arguably the biggest practical advantage of floating point over fixed isn't sound quality but having plenty of headroom in the mix/processing engine so you don't have to be so concerned about gain-staging and headroom.
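To put rough numbers on the headroom point, here's a quick Python/NumPy sketch of my own (an illustration, not anything from SONAR's code): 24-bit fixed point hard-clips at full scale, while float32 just grows the exponent and keeps the overshoot recoverable.

```python
import numpy as np

# 24-bit fixed point has a hard ceiling at full scale...
full_scale = 2**23 - 1                    # max positive 24-bit sample
hot = int(full_scale * 1.5)               # a signal ~3.5 dB over full scale
clipped = max(-2**23, min(full_scale, hot))
print(clipped == full_scale)              # True: the overshoot is destroyed

# ...while float32 just bumps the exponent and keeps the overshoot
f = np.float32(1.5)                       # +3.5 dBFS in normalized terms
print(float(f))                           # 1.5, recoverable with a gain trim

# float32 also keeps full relative accuracy for very small values
tiny = np.float32(2.0**-40)               # far below the 24-bit floor
print(float(tiny) > 0)                    # True: still representable
```

The same reasoning is why a floating-point bus can run "hot" internally without damage, as long as the final output is trimmed back below full scale before conversion.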



I don't see it so much as floating point math necessarily having the advantage over fixed point/integer math; each has its advantages as well as disadvantages. Rather, the issue is one of sufficient precision and accuracy of calculation. That is, 48-bit integer math offers comparable "double precision" when processing 24-bit PCM digital audio samples/streams. And in fact some DSP chips and FPGAs have implemented 48-bit integer math internally to that end, whereas the x86 and x64 CPU architectures and compilers have been directed toward facilitating floating point math, either single- or double-precision.
 
Regarding "resolution", it should be noted that resolution is not determined by, nor necessarily affected by, number precision. Rather, resolution relates to the sampling bit depth (or width). That is, "16-bit" or "24-bit" refers to the "resolution" of PCM digital audio sample data.

But processing 16-bit or 24-bit PCM data using 32-bit or 64-bit floating point math cannot increase the audio data's original resolution. It only determines the accuracy (that is, the precision) with which processing calculations are performed, and thereby how accurately the calculation results can be represented when converted back to the original sampling bit depth for output.
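A small illustration of the 48-bit integer point (my own Python sketch; the hex gain value is just an example coefficient, not from any particular DSP chip):

```python
# A 24-bit sample times a 24-bit (Q1.23) gain coefficient needs up to
# 48 bits; a 48-bit accumulator therefore holds the product exactly.
sample = 0x7FFFFF                # maximum positive 24-bit sample
gain = 0x5A827A                  # ~0.7071 expressed as a Q1.23 fraction
product = sample * gain          # Python ints are arbitrary precision
print(product.bit_length())      # 46 -- comfortably within 48 bits

# Requantizing back to 24 bits discards the low 23 fraction bits; it is
# this final rounding, not the intermediate math, that fixes the
# output "resolution".
out = product >> 23
print(out.bit_length() <= 24)    # True
```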

Btw, came across the following excerpt from another article you wrote (back in 2005) which appeared in some product literature for a native DSP system:
 
Anderton
The other part of the 64-bit equation is 64-bit internal resolution in digital audio workstations. This isn't a new concept; programs like iZotope's Ozone... have been using 64-bit resolution to do audio calculations for a while. But this isn't related to 64-bit hardware systems, aside from the superficial resemblance that they both use the catchphrase, "64 bits." A 32-bit system running a 32-bit OS can still have a 64-bit audio path, with no complications.

So why would you want a 64-bit audio path? Well, this is one situation where less is not more. With 16-bit systems, if you did too many operations--even functions as basic as level changes--round-off errors would occur due to the limited "numeric dynamic range" of a 16-bit system. Accumulate enough round-off errors, and the sound quality suffered. The 32-bit floating point was a major improvement, but 64 bits gives just that much more headroom and dynamic range. The improvement is particularly noticeable with material like reverb tails that decay into nothing, even when you really turn up the volume at the end. Those "fizzing" decays of low-res systems are gone forever.

The other big difference occurs when you're running projects with lots of tracks, plug-ins and soft synths. That's a lot of calculations going on at once, and 64 bits of resolution can handle it without you having to worry too much about clipping and other issues that relate to limited calculation abilities.

 
(from: http://www.prosoundnetwork.com/article/64-Bit-Buzzword-Is-Shorthand-For-Something-Deeper/3987#)
 
Heh, "fizzing" and "fuzziness".
2013/12/11 06:07:08
Goddard
drewfx1
Goddard
drewfx1
Goddard
Might as well revive this thread, rather than start a new one.
 
Alongside Cakewalk's posted video of Ron Kuper's 2006 AES presentation (linked in post #19 above) explaining why double precision floating point math is beneficial (if not critical) when mixing 24-bit PCM audio, Cakewalk also published a whitepaper by Ron Kuper which gave some additional info on the subject:
 
http://mixonline.com/online_extras/Cakewlk%20Wht%20Paper.pdf
 
In this whitepaper, respective code examples are given of single and double float mixing of 3x 24-bit samples and the outputted results are compared. This corresponds to what Ron Kuper was describing in the AES presentation video.
 
Basically, when mixing 24-bit audio, a 32-bit single precision engine lacks sufficient precision, such that errors can arise in the mixed audio even when mixing only a few streams. While such errors might not be easily audible, they do occur nonetheless, and they can accumulate and propagate downstream (for example, when further DSP operations are performed on the mixed audio) so as to become more audible. On the other hand, a 64-bit double precision engine simply avoids such errors occurring (which would explain why the Cakewalk developers chose to implement a double precision engine back in Sonar 5).
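For the curious, the whitepaper's single-vs-double comparison can be reproduced in a few lines. This is my own Python/NumPy reconstruction, not Ron Kuper's actual code; three identical near-full-scale 24-bit samples are enough to trigger the rounding:

```python
import numpy as np

# Three identical near-full-scale 24-bit samples, normalized to [-1, 1)
x = (2**23 - 1) / 2**23              # 0x7FFFFF as a fraction; exact in float64

# Single-precision summing, one add at a time (32-bit engine)
s32 = np.float32(x) + np.float32(x)  # 2 - 2**-22: still exact at this point
s32 = s32 + np.float32(x)            # 3 - 3*2**-23 won't fit: rounds to 3 - 2**-21

# Double-precision summing (64-bit engine); exact for these values
s64 = x + x + x

print(float(s32))                    # 2.9999995231628418
print(s64)                           # 2.9999996423721313
print(float(s32) == s64)             # False: the low bit was rounded away
```

In float32, values between 2 and 4 are only spaced 2**-22 apart, so the exact sum 3 - 3*2**-23 falls between representable values and the least significant bit of the 24-bit data is lost, exactly as the whitepaper describes.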

 
What the white paper actually says:
 
What this simple program shows is if X is a 24-bit PCM sample, and the math is done using 32-bit floating point, an inaccuracy is introduced due to summation. In this case the least significant bit is lost. If the gain adjustments are more dramatic, or more gain stages are used, then more bits can be lost.

 
This is true, but you're supposed to infer here - as all the people who don't understand the math and want so badly to believe in this stuff usually do - that since there are errors, they, gasp, must be audible. But note that they don't actually say that. You might want to consider how many bits you can lose before it might be even close to being audible.

 
Nowhere was it ever asserted or left to inference that such LSB-wise errors in 24-bit PCM audio data are audible in themselves.
 
What the whitepaper also actually says (preceding the portion you quoted above):
 

An IEEE 64-bit float has 52 bits of mantissa plus a sign bit, giving 53 bits of precision when doing calculations. This amount of precision means audio fidelity is maintained even with dramatic gain staging within processing elements, and also means more accurate processing for recursive DSP such as IIR filtering.
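(A quick check of those figures in Python, my own sketch; strictly speaking the 53rd bit comes from the implicit leading mantissa bit rather than the sign bit, but the precision counts come out as the whitepaper says:)

```python
import sys
import numpy as np

# float64: 52 stored mantissa bits + 1 implicit leading bit = 53 bits
print(sys.float_info.mant_dig)            # 53

# float32: 23 stored + 1 implicit = 24 bits, matching 24-bit PCM
print(np.finfo(np.float32).nmant + 1)     # 24
```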


 
Yes. It's indeed useful for recursive IIR filtering, but the mix engine doesn't do this sort of thing.

 
But FX plug-ins (such as convolution and recursive FX) get their input from the mix engine and provide their output to the mix engine.
 
Moreover, when the dpe is enabled, any plug-ins which also implement 64-bit dp processing internally will receive and output 64-bit fp streams, e.g.:
 

Written with sound quality and ease of use a priority, all of our plugins are 64-bit internal processing. If your host is VST 2.4 capable, we’ll accept and pass on 64-bit double-precision audio streams. If you have a host that is only 32-bit capable, we’ll still give you as much precision as possible.

http://www.stillwellaudio.com/plugins/stillwell-audio-plugins/
 
drewfx1 
Goddard
Double precision mixing is especially important when processing 24-bit PCM audio data.

For mixing? No, not at all.

 
Well, fortunately for you the dpe can be disabled in Sonar if so desired. Otoh, some other DAWs such as Reaper only mix in 64-bit dp, with no option available to use single precision.
 
drewfx1 
Goddard
In Cakewalk's AES presentation video, after presenting the test program results, Ron Kuper says (at 3:55):
 

Now maybe one bit doesn't matter, uh, but folks who've installed Sonar 5 and turned on the engine say it sounds great, you know, and I'm not a golden ears, it's like, subjective, but I believe the math that backs it up, and people are hearing a difference, so I think just on the basis of the math alone this is a pretty significant result, uh, that, if you're going to be processing 24-bits into a DAW, you need to mix using doubles, otherwise you're gonna lose, you're gonna lose a lot of bits on the bottom.


 
"It's like, subjective"  This is classic. He's deliberately avoiding saying whether he can hear it or not - "I'm not a golden ears!". Then he says "you're gonna lose a lot of bits". Gee, I wonder why he, or anyone from CW won't ever just say it's audible? Why must they always dance around it? Hmmm.

 
Well, Ron K's no longer with CW, but here's what he'd originally said here in the forum way back when:
 
http://forum.cakewalk.com/RonK-64-bit-Plugins-and-Summing-m594294-p3.aspx#599327
 
Although I'd hoped someone from CW would chime in on this thread (and possibly provide a link to the "clearly audible" test to which Ron K referred above), they're no doubt beavering away on X3d. But, anyway, it wasn't only CW who were finding that single precision processing of 24-bit PCM lost bits on the low end:
 
http://forum.cockos.com/showpost.php?p=116056&postcount=35
 
No dancing there, I'd say. Nor here either:
 
http://www.gearslutz.com/board/q-justin-frankel-designer-reaper/118481-reaper-64-bit-engine.html
 
drewfx1 
Goddard
Now, it's fine to dismiss the DPE as being unnecessary on the basis that nobody can hear such low bit errors. But don't flatter yourself, you're hardly the first pushing that position. Search the archived posts from the cakewalk.audio newsgroup from back when and you'll find plenty of company.

 
I don't flatter myself. Valuing logic and evidence over belief in one's own "golden ears" precludes that.

 
You're a bit late to the party:
 
http://forum.cakewalk.com/RonK-64-bit-Plugins-and-Summing-m594294.aspx
 
 
drewfx1 
Goddard
But asserting that low bit errors in 24-bit PCM audio are way below what is audible simply neglects to take into consideration what anyone who's worked with DSP well knows, and why fixed integer DSP chips have long had extended-precision registers and accumulators and why floating point DSP chips have continually moved toward greater precision. Namely, that precision does matter because with DSP what goes around can often come around (a whole lot of times).

Once again, there are indeed areas where recursive processing (involving thousands of operations) makes higher precision processing necessary. The mix engine just isn't one of them, because there is no recursive processing and not enough operations otherwise.

 
You never use any FX plug-ins when mixing/mastering?
 
drewfx1 
Goddard
It's interesting to note that your so-called "null test" had FX disabled, and thus biased the testing against revealing errors which manifest more significantly if not audibly due to recursive DSP (for example, by running a delicate condenser-mic'ed vocal passage or acoustic instrument track through an IIR reverb FX plug-in).

"So called"? Um, it was a null test. Period. My advice? If you want to have a technical discussion, lose this sort of nonsense. It's not gonna help you.

 
Tell you what. Don't insinuate that I've improperly paraphrased something, or snip out text when quoting something I've cited and then ask where it says what I've stated it alludes to, and we won't have any more nonsense in this thread. 
 
Yes, you called it a null test, but I'm not so sure it really was a mixing test.
 
Note that even when mixing with a 64-bit dpe, there will still be rounding errors in the lowest bit(s) which will be evident, e.g.
 
http://forum.cockos.com/showpost.php?p=626771&postcount=8
 
Instead of going by RMS metering of the difference in Sound Forge (RMS metering is not really precise and SF's metering used to be rather buggy), try putting it through Sonar's included Bitmeter plug-in or Schwa's free Bitter bitscope plug-in (available from above-linked Stillwell site).
 
drewfx1
Anyway....
 
THE MIX ENGINE HAS NO EFFECT ON WHAT BIT DEPTH PLUGINS ARE PROCESSING INTERNALLY WITH. The only potential difference is whether plugins get the higher bit depth at their input/output, not what they process with internally. Sonar has no control over what/how a plugin processes things internally and plugins already generally process at whatever bit depth is necessary (possibly down to the individual operation level).

 
No, Sonar's mix (audio) engine, when in dpe mode, passes/accepts 64-bit fp sample streams to/from any plug-ins which accept/output such (or otherwise, converts to/from 32-bit fp, this conversion presumably being done in the abstraction layer).
 
drewfx1 
As stated, disabling FX is necessary because the FX contained considerable random processing as shown when I exported twice using the 64bit engine and they didn't remotely null. Now I indeed could have gone through and disabled each plugin individually to discover which ones contained random processing and only disabled those containing random processing, but it's not like the results were even close...

 
Yes, some FX (and softsynth) plug-ins can throw off a mix engine null test and render its results unreliable/invalid, e.g.
 
http://www.gearslutz.com/board/music-computers/88771-64-bit-mix-engine-hype-aka-sonar-6-a.html
 
(as said, this party started long ago)
 
My point was that it's disingenuous to acknowledge that higher precision is necessary when performing recursive DSP but then exclude any recursive DSP FX plug-ins from your testing and assert that higher precision doesn't matter for mixing.
 
drewfx1 
You are of course welcome to do a "proper" (in your eyes) null test and post the results yourself. 

 
It's already been done:
 
http://forum.cakewalk.com/White-Noise-32bit64bit-Engine-Test-m610884.aspx#610928
 
It didn't null completely.
 
drewfx1
Goddard
drewfx1
Goddard
Craig Anderton alluded to this situation in an article over on HC earlier this year:
 
http://www.harmonycentral...chniques/ba-p/34780908



As far as I can see, this is all Craig says about it there:

But your sequencer’s audio engine needs far greater resolution.
...
Today’s sequencers use 32-bit floating point and higher resolutions, but many earlier sequencers did not. 

 
Um, where does he say 32bit isn't good enough?

 
Um, how about in the paragraph which you snipped out?:
 
Recording resolutions higher than 24 bits are fictional, due to the limitations of A/D technology. But your sequencer’s audio engine needs far greater resolution.
 
This is because a 24-bit piece of audio might sound fine by itself. But when you apply a change to the signal (level, normalization, EQ, anything), multiplying or dividing that 24-bit data will likely produce a result that can’t be expressed with only 24 bits. Unless there’s enough resolution to handle these calculations, roundoffs occur—and they’re cumulative, which can possibly lead to an unpleasant sort of “fuzziness.” As a result, your audio engine’s resolution should always be considerably higher than that of your recording resolution.

 
Now, I don't want to infer anything from "unpleasant sort of fuzziness" as to whether that implies audibility, or even guess at what Craig meant by "far greater" or "considerably higher." Although surely, 32-bit single precision float is only equivalent in resolution (significand/mantissa digit-wise) to fixed integer 24-bit, not "far greater" or "considerably higher"?

 
It depends. The lower the exponent is set, the higher the comparable resolution of floating point. It's only "equivalent" for samples that are near full scale. And if we consider that adding one bit doubles the resolution...

 
No it doesn't depend. 32-bit floating point and 24-bit fixed/integer are equivalent in resolution/precision:
 
http://www.bores.com/cour...tro/chips/6_precis.htm
 
 
drewfx1
Goddard
drewfx1
It's very simple. We can go through the math, but for people who aren't interested in going through the math (or doing controlled null tests), the answer is this:
 
Yes there are errors, but they accumulate quite slowly - to the extent that often relatively few of them even make it into 24bit output, much less at an audible level.


So you say, repeatedly, here and in other posts. While conveniently omitting any consideration of whether such errors may propagate through subsequent DSP operations and gain alteration so as to manifest toward audibility at the bigger end. Not that possible error manifestation due to error propagation via downstream DSP would show up when null testing the mix engine's output as you did with FX disabled anyway.

I think you will find that, as I conveniently did in the part you quoted here, I always say something like "the errors accumulate quite slowly". Or does "propagate" mean something different than "accumulate" to you here?

 
In DSP, rounding error may also accumulate/propagate quite rapidly (exponentially even):
 
http://www.dspguide.com/ch4/4.htm
 
 
drewfx1
Goddard
Now, I must admit that I hadn't paid much attention to your earlier so-called "null test" post before, once I'd noticed that you'd disabled all FX when exporting. But just now looking at your "null test" post again, your testing methodology does raise one question:
 
What exactly was being nulled?
 
I'm travelling right now and can't load up that same X2 demo project which you employed for your "null test", but IIRC, that demo project had an already-mixed down track soloed (with some FX (in the Pro Channel?) on it, sort of like the mixdown was being "mastered"?).
 
Now, if my recollection about that demo project is correct, then I wonder if, besides disabling all FX, did you also bother to disable track and bus mute/solo when exporting? There's absolutely no mention of that in your post above that I can see, so what should one infer from that?
Otherwise, seems only that soloed mixdown track would have been exported, without any gain alteration or mixing (or FX processing) actually being performed in the mix engine during the export, but merely the copying of only the (already rendered) mixdown track to the export destination files followed by nulling of the thusly exported files. If so, that could certainly account for the lack of any significant difference between the exported files when nulled against each other.

 
What should someone infer? I was hoping someone might infer at least a basic level of competence on my part.  

 
The results should not null to infinity. There should be a discernible difference in the low bits at least. Not saying it would necessarily be audible (at least, not without considerable gain boosting), but it should show up in a bitscope/bitmeter. 
 
drewfx1
I suggest you do your own null test (making sure you have absolutely no random processing going on so that the only difference is indeed the engine) and post your results. 


As noted above, it's already been done, back when. Didn't null completely. Was the difference audible? Hardly. But that's not really the point.
 
Now, just to be clear, I've never asserted that rounding errors when mixing (summing) in Sonar with single precision are necessarily audible, nor that Sonar's double precision engine sounds better than or even different from its single precision engine. If anything, I've always played the skeptic around here (and in this forum's earlier incarnations and the ng) as you may have noticed, never the "placebophile", and I've even been known on occasion to call into question what I've perceived as mere marketing hype.
 
But if I'm going to be mixing or mastering using any plug-ins which can handle 64-bit streams, particularly any convolution or recursive DSP FX (like say, EQ or reverb), well then, I'd prefer to mix with the dpe enabled.
 
Btw, while that CW whitepaper also discussed performance aspects when the dpe was used, I'd intentionally refrained from pointing to that in my earlier post as I wasn't sure whether those aspects still remained valid for more current systems, and in any case, the results reported in that whitepaper were based on CW's own internal benchmarking and test projects and I tend to view such results with skepticism unless they can be independently verified (my hidden agenda).
 
 
 
 
2013/12/11 11:24:05
Splat

2013/12/11 12:28:28
brconflict
From my perspective, and I'm speculating here, there are three ways of looking at possible advantages of 64-bit double precision:
 
1) Faster - If you performed 32-bit calculations within a 64-bit engine, with suitable CPU management from the OS and Application Engine, then there's a lot of speed that can be gained from a system powerful enough to handle it.
2) Decimal points - Think of calculating Pi. The more decimal places you can carry the calculation out to, the more accurately Pi is represented. This, we know, is also a possibility with longer words (64 vs. 32 vs. 16 vs. 8 and so on). The longer the word, the more accurately each sample can be processed.
3) Redundancy - If I were to guess how Sonar uses Double-Precision, I would guess this was it. Like oversampling on an older CD player, the more measurements sampled and compared, the more likely the majority of similar samples would be correct. Here, if you were to perform a single 32-bit calculation twice in parallel, if the answer is the same for both calculations, pass the answer on to the next step. If the two calculation answers differ, then run them both again to see if the answers match. That would be handy!
 
As for the A/B comparison, I can't really audibly hear the difference, but I do make a habit to do a final Export using 64-bit DP before sending to Master. The test demos don't get 64-bit DP unless I just feel snazzy.
2013/12/11 16:35:40
drewfx1
Goddard
Note that even when mixing with a 64-bit dpe, there will still be rounding errors in the lowest bit(s) which will be evident, e.g.
 
http://forum.cockos.com/showpost.php?p=626771&postcount=8

 
The point isn't that there aren't errors; it's that they aren't remotely audible.
 

Instead of going by RMS metering of the difference in Sound Forge (RMS metering is not really precise and SF's metering used to be rather buggy), try putting it through Sonar's included Bitmeter plug-in or Schwa's free Bitter bitscope plug-in (available from above-linked Stillwell site).

RMS metering is as precise as any other metering if you understand exactly what it is telling you in a given application, which does indeed vary. But regardless, using different metering won't change the conclusion.
 

drewfx1
Anyway....
 
THE MIX ENGINE HAS NO EFFECT ON WHAT BIT DEPTH PLUGINS ARE PROCESSING INTERNALLY WITH. The only potential difference is whether plugins get the higher bit depth at their input/output, not what they process with internally. Sonar has no control over what/how a plugin processes things internally and plugins already generally process at whatever bit depth is necessary (possibly down to the individual operation level).

 
No, Sonar's mix (audio) engine, when in dpe mode, passes/accepts 64-bit fp sample streams to/from any plug-ins which accept/output such (or otherwise, converts to/from 32-bit fp, this conversion presumably being done in the abstraction layer).

 
Go back and reread what I said in the part you quoted a few lines above, "The only potential difference is whether plugins get the higher bit depth at their input/output", and explain how that's different from what you just said. 
 
My point was that it's disingenuous to acknowledge that higher precision is necessary when performing recursive DSP but then exclude any recursive DSP FX plug-ins from your testing and assert that higher precision doesn't matter for mixing.

 
It's not disingenuous because Sonar doesn't determine what the recursive DSP FX is processing with internally. You are welcome to show some proof that this is incorrect if you disagree.
 
Mixing is essentially just summing and level changes with no recursion.
 

drewfx1 
You are of course welcome to do a "proper" (in your eyes) null test and post the results yourself. 

 
It's already been done:
 
http://forum.cakewalk.com/White-Noise-32bit64bit-Engine-Test-m610884.aspx#610928
 
It didn't null completely.

 
And the key info from that test:

the RMS difference is -148db

 
Which means that most of those errors won't make it into 24bit output.
 
Which confirms my results.
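The arithmetic behind that conclusion, for anyone following along (my own back-of-envelope in Python, not from the linked test):

```python
import math

# Amplitude of one 24-bit LSB relative to full scale, in dB
lsb_db = 20 * math.log10(2**-23)       # ~ -138.5 dBFS
# Worst-case rounding error is half an LSB down
half_lsb_db = 20 * math.log10(2**-24)  # ~ -144.5 dBFS
print(round(lsb_db, 1), round(half_lsb_db, 1))

# A difference signal at -148 dB RMS sits below both, so when the mix is
# requantized to 24 bits, most of those error bits simply round away.
print(-148 < half_lsb_db)              # True
```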
 
drewfx1
It depends. The lower the exponent is set, the higher the comparable resolution of floating point. It's only "equivalent" for samples that are near full scale. And if we consider that adding one bit doubles the resolution...

 
No it doesn't depend. 32-bit floating point and 24-bit fixed/integer are equivalent in resolution/precision:
 
http://www.bores.com/cour...tro/chips/6_precis.htm

 
From your link (sic):

Because the hardware automatically scales and normalises every number, the errors due to truncation and rounding depend on the size of the number. If we regard these errors as a source of quantisation noise, then the noise floor is modulated by the size of the signal. 

 
Which once again confirms exactly what I said in the quote of mine that you are replying to here. When the MSB's in 24bit are zero (i.e. a lower signal level for a given sample), 32bit indeed has higher precision. 
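This is easy to verify numerically (my own Python/NumPy check, not from the Bores article): a quiet 24-bit sample is represented exactly in float32 with bits to spare, while just under full scale the float32 spacing is half of one 24-bit step.

```python
import numpy as np

# A very quiet 24-bit sample: value 3 (only the two lowest bits set)
quiet = 3 / 2**23                          # ~ -129 dBFS
q32 = np.float32(quiet)
print(float(q32) == quiet)                 # True: represented exactly

# At that level float32 still resolves steps vastly finer than 24-bit fixed
print(float(np.spacing(q32)))              # 2**-45, far below one LSB (2**-23)

# Near full scale the two formats track each other: float32 spacing
# just under 1.0 is 2**-24, i.e. half of one 24-bit step
big = np.float32(1.0) - np.float32(2**-23)
print(float(np.spacing(big)) == 2**-24)    # True
```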
 

drewfx1
Goddard
drewfx1
It's very simple. We can go through the math, but for people who aren't interested in going through the math (or doing controlled null tests), the answer is this:
 
Yes there are errors, but they accumulate quite slowly - to the extent that often relatively few of them even make it into 24bit output, much less at an audible level.


So you say, repeatedly, here and in other posts. While conveniently omitting any consideration of whether such errors may propagate through subsequent DSP operations and gain alteration so as to manifest toward audibility at the bigger end. Not that possible error manifestation due to error propagation via downstream DSP would show up when null testing the mix engine's output as you did with FX disabled anyway.

I think you will find that, as I conveniently did in the part you quoted here, I always say something like "the errors accumulate quite slowly". Or does "propagate" mean something different than "accumulate" to you here?
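A toy model of what "accumulate quite slowly" means here (my own sketch with made-up signal levels, not Sonar's actual engine): sum 64 tracks per sample on a single-precision bus and compare against a double-precision reference.

```python
import random
import struct

def f32(x: float) -> float:
    # Round a Python double to single precision, like a 32-bit mix bus would.
    return struct.unpack("<f", struct.pack("<f", x))[0]

random.seed(0)
n_tracks, n_samples = 64, 1000
tracks = [[random.uniform(-0.01, 0.01) for _ in range(n_samples)]
          for _ in range(n_tracks)]

worst = 0.0
for i in range(n_samples):
    acc32 = acc64 = 0.0
    for t in tracks:
        acc32 = f32(acc32 + f32(t[i]))   # single-precision summing
        acc64 += t[i]                    # double-precision reference
    worst = max(worst, abs(acc32 - acc64))

lsb_24 = 2.0 ** -23
print(worst / lsb_24)   # worst-case summing error, measured in 24-bit LSBs
```

On this toy data the worst case stays within a handful of 24-bit LSBs: each addition commits at most half an ULP once, nothing feeds back, so growth is at worst linear in the track count.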

 
In DSP, rounding error may also accumulate/propagate quite rapidly (exponentially even):
 
http://www.dspguide.com/ch4/4.htm
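A hedged toy illustration of the recursive case (my own construction, not taken from that chapter): in a feedback loop such as y[n] = a*y[n-1] + x[n], each step's rounding error is fed back and re-amplified, roughly by 1/(1-a), instead of being committed once as in a plain sum.

```python
import struct

def f32(x: float) -> float:
    return struct.unpack("<f", struct.pack("<f", x))[0]

a, x = 0.99, 0.01        # stable one-pole feedback; steady state y = x/(1-a) = 1.0
a32 = f32(a)             # coefficient itself is rounded in single precision
y32 = y64 = 0.0
for _ in range(2000):
    y32 = f32(a32 * y32 + x)   # single precision: errors recirculate each step
    y64 = a * y64 + x          # double-precision reference

# The tiny per-step errors are amplified by the feedback, roughly 1/(1-a) = 100x.
print(abs(y32 - y64))
```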

 
How many times have I made the point that mixing doesn't cause this type of accumulation? You are of course welcome to prove otherwise. I've already shown the proof (which you keep questioning), and it was confirmed by the null test you yourself linked to above.
 
 
drewfx1
Goddard
Now, I must admit that I hadn't paid much attention to your earlier so-called "null test" post before, once I'd noticed that you'd disabled all FX when exporting. But just now looking at your "null test" post again, your testing methodology does raise one question:
 
What exactly was being nulled?
 
I'm travelling right now and can't load up that same X2 demo project which you employed for your "null test", but IIRC, that demo project had an already-mixed down track soloed (with some FX (in the Pro Channel?) on it, sort of like the mixdown was being "mastered"?).
 
Now, if my recollection about that demo project is correct, then I wonder whether, besides disabling all FX, you also disabled track and bus mute/solo when exporting. There's absolutely no mention of that in your post above that I can see, so what should one infer from that?
Otherwise, it seems only that soloed mixdown track would have been exported, with no gain alteration, mixing, or FX processing actually performed in the mix engine during the export: merely a copy of the (already rendered) mixdown track to the export destination files, followed by nulling of the exported files against each other. If so, that could certainly account for the lack of any significant difference between them.

 
What should someone infer? I was hoping someone might infer at least a basic level of competence on my part.  

 
The results should not null to infinity. There should be a discernible difference in the low bits at least. Not saying it would necessarily be audible (at least, not without considerable gain boosting) but it should show up in a bitscope/bitmeter. 

 
I think you didn't understand. The RMS indeed nulls to infinity, but only when truncated to 24bit - because the result was less than the LSB of 24bit fixed point. It's exactly the same as an undithered recorded signal being cut off below the LSB. As I posted, it did not null to infinity before it was reduced to 24bit.
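A minimal sketch of that truncation effect (illustrative numbers of my own, with full scale = 1.0): difference samples that fall below half a 24-bit LSB quantize to zero, so the 24-bit files null exactly even though the floating-point residue is nonzero.

```python
lsb_24 = 2.0 ** -23            # ~1.19e-7 at full scale = 1.0

def to_24bit(x: float) -> int:
    # Round-to-nearest 24-bit quantizer (a sketch; real exports may dither).
    return int(round(x / lsb_24))

# Residue samples in the spirit of a -148 dB difference signal: all well
# below half an LSB, so they vanish at 24 bits...
residue = [3e-8, -2e-8, 4e-8, -1e-8]
assert all(to_24bit(x) == 0 for x in residue)
# ...while the same samples are plainly nonzero in floating point.
assert all(x != 0.0 for x in residue)
```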
 

drewfx1
I suggest you do your own null test (making sure you have absolutely no random processing going on so that the only difference is indeed the engine) and post your results. 


As noted above, it's already been done, back when. Didn't null completely. Was the difference audible? Hardly. But that's not really the point.

 
Um, that's been exactly my point all along - as I've repeatedly said, there are indeed errors, some of which will make it into 24bit output. But they aren't audible.
 

Now, just to be clear, I've never asserted that rounding errors when mixing (summing) in Sonar with single precision are necessarily audible, nor that Sonar's double precision engine sounds better than or even different from its single precision engine. If anything, I've always played the skeptic around here (and in this forum's earlier incarnations and the newsgroup) as you may have noticed, never the "placebophile", and I've even been known on occasion to call into question what I've perceived as mere marketing hype.
 

 
Then we essentially agree. I have nothing against the use of 64bit double precision; I am just pointing out that there is no audible benefit in the real world. So people can stop worrying about it and instead worry about more important things.
 

Btw, while that CW whitepaper also discussed performance aspects of the double-precision engine, I'd intentionally refrained from pointing to that in my earlier post, as I wasn't sure whether those aspects still held for more current systems. In any case, the results reported in that whitepaper were based on CW's own internal benchmarking and test projects, and I tend to view such results with skepticism unless they can be independently verified (my hidden agenda).

 
An interesting question, but I suspect we would find that the mix engine is doing a relatively trivial number of calculations from a modern CPU's standpoint.
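To put a rough (entirely assumed) number on that suspicion: even a generously sized mix needs only tens of millions of operations per second, next to the billions a modern core delivers.

```python
# Back-of-envelope with assumed figures, not measurements.
tracks = 128
sample_rate = 96_000
ops_per_sample = 4                 # rough guess: gain, pan, sum, meter tap
mix_ops = tracks * sample_rate * ops_per_sample   # mix-engine ops per second

cpu_ops = 10e9                     # assumed: ~10 billion simple FP ops/s per core
print(f"{mix_ops / 1e6:.0f} M ops/s = {100 * mix_ops / cpu_ops:.1f}% of one core")
```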