64 bit engine?

Author
Splat
Max Output Level: 0 dBFS
  • Total Posts : 8672
  • Joined: 2010/12/29 15:28:29
  • Location: Mars.
  • Status: offline
Re: 64 bit engine? 2013/12/11 16:43:46 (permalink)


Sell by date at 9000 posts. Do not feed.
@48/24 & 128 buffers latency is 367 with offset of 38.

Sonar Platinum(64 bit),Win 8.1(64 bit),Saffire Pro 40(Firewire),Mix Control = 3.4,Firewire=VIA,Dell Studio XPS 8100(Intel Core i7 CPU 2.93 Ghz/16 Gb),4 x Seagate ST31500341AS (mirrored),GeForce GTX 460,Yamaha DGX-505 keyboard,Roland A-300PRO,Roland SPD-30 V2,FD-8,Triggera Krigg,Shure SM7B,Yamaha HS5.Maschine Studio+Komplete 9 Ultimate+Kontrol Z1.Addictive Keys,Izotope Nectar elements,Overloud Bundle,Geist.Acronis True Image 2014.
#61
Goddard
Max Output Level: -84 dBFS
  • Total Posts : 338
  • Joined: 2012/07/21 11:39:11
  • Status: offline
Re: 64 bit engine? 2013/12/13 03:01:04 (permalink)
drewfx1
Goddard
Note that even when mixing with a 64-bit dpe, there will still be rounding errors in the lowest bit(s) which will be evident, e.g.
 
http://forum.cockos.com/showpost.php?p=626771&postcount=8

 
The point isn't that there aren't errors; it's that they aren't remotely audible.

 
Even if the rounding error (or quantization noise, if you will) isn't significant enough to be immediately audible when it is first generated, it can still affect, and be affected by, downstream processing applied to the summed sample stream, whether that processing introduces further rounding (such as another summation stage) or has noise gain (such as FX plug-ins the stream passes through), and so grow more significant, towards audibility, in the final 24-bit PCM output.
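For anyone who wants a feel for the scale of those errors, here's a rough NumPy sketch (my own toy model of a summing bus, not Sonar's actual engine) that sums 32 tracks of 24-bit-quantized material in single and in double precision and measures the difference:

import numpy as np

rng = np.random.default_rng(0)
n = 48000                                      # one second at 48 kHz per "track"
tracks = rng.uniform(-0.25, 0.25, size=(32, n))
tracks = np.round(tracks * 2**23) / 2**23      # quantize each track to the 24-bit grid

sum64 = tracks.sum(axis=0)                     # double-precision summing bus
sum32 = tracks.astype(np.float32).sum(axis=0)  # single-precision summing bus

diff = sum64 - sum32.astype(np.float64)
rms = np.sqrt(np.mean(diff ** 2))
print(f"difference: RMS {20 * np.log10(rms):.1f} dBFS, "
      f"peak {20 * np.log10(np.max(np.abs(diff))):.1f} dBFS")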
 
drewfx1
Goddard
Instead of going by RMS metering of the difference in Sound Forge (RMS metering is not really precise and SF's metering used to be rather buggy), try putting it through Sonar's included Bitmeter plug-in or Schwa's free Bitter bitscope plug-in (available from above-linked Stillwell site).

RMS metering is as precise as any other metering if you understand exactly what it is telling you in a given application, which does indeed vary. But regardless, using different metering won't change the conclusion.

 
RMS measurement won't reveal peaks. And as already said, Sound Forge's RMS measurement (at least in SF 8.0 as was used for the differencing results reported in that old thread) was known to be rather buggy and unreliable:
 

Notable fixes/changes in version 9.0a
...
A bug that caused the Statistics tool to report inaccurate RMS levels has been fixed.

http://dspcdn.sonycreativesoftware.com/releasenotes/soundforge90e_readme_enu.htm
 
Btw, in that same thread, the OP had reported a further test here:
 
http://forum.cakewalk.com/White-Noise-32bit64bit-Engine-Test-m610884.aspx#611251
 
 
and also, Ron K had previewed the AES presentation he was then about to give:
 
http://forum.cakewalk.com/White-Noise-32bit64bit-Engine-Test-m610884.aspx#611393
 
drewfx1
Goddard
drewfx1
Anyway....
 
THE MIX ENGINE HAS NO EFFECT ON WHAT BIT DEPTH PLUGINS ARE PROCESSING INTERNALLY WITH. The only potential difference is whether plugins get the higher bit depth at their input/output, not what they process with internally. Sonar has no control over what/how a plugin processes things internally and plugins already generally process at whatever bit depth is necessary (possibly down to the individual operation level).

 
No: Sonar's mix (audio) engine, when in DPE mode, passes 64-bit fp sample streams to, and accepts them from, any plug-in that supports them (and otherwise converts to/from 32-bit fp, presumably in the abstraction layer).

 
Go back and reread what I said in the part you quoted a few lines above, "The only potential difference is whether plugins get the higher bit depth at their input/output", and explain how that's different from what you just said. 

 
Ok, but while Sonar may not have any control over what/how (i.e., with what precision) a plug-in processes things internally, it does control what the plug-in processes, namely whether the plug-in receives and returns a single precision or a double precision fp stream from/to the audio engine. That is, whether Sonar's DPE is enabled can determine the precision of the sample stream the audio engine supplies as the input (i.e., "operand") to the plug-in for processing, and also the precision at which the processed output stream is returned to the audio engine by the plug-in.
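To make that concrete, a tiny sketch (a made-up sample value, nothing from Sonar's code) showing how the operand itself loses bits the moment the engine hands the plug-in a single precision stream:

import numpy as np

bus_sample = np.float64(1.0) / 3       # a made-up value on a double-precision bus
to_plugin = np.float32(bus_sample)     # what a single-precision engine hands the plug-in
print(f"operand already differs by {float(bus_sample) - float(to_plugin):.2e}")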
 
drewfx1
Goddard
My point was that it's disingenuous to acknowledge that higher precision is necessary when performing recursive DSP but then exclude any recursive DSP FX plug-ins from your testing and assert that higher precision doesn't matter for mixing.

 
It's not disingenuous because Sonar doesn't determine what the recursive DSP FX is processing with internally. You are welcome to show some proof that this is incorrect if you disagree.
 
Mixing is essentially just summing and level changes with no recursion.

 
But whether or not the DPE is enabled in Sonar can determine what the recursive DSP FX processes.
 
drewfx1
Goddard
drewfx1 
You are of course welcome to do a "proper" (in your eyes) null test and post the results yourself. 

 
It's already been done:
 
http://forum.cakewalk.com/White-Noise-32bit64bit-Engine-Test-m610884.aspx#610928
 
It didn't null completely.

 
And the key info from that test:

the RMS difference is -148db

 
Which means that most of those errors won't make it into 24bit output.
 
Which confirms my results.

 
As already noted, those reported RMS readings were liable to be unreliable. And in any case, both Ron K and Justin Frankel confirmed that single precision fp summing rounding errors do make it into the 24-bit PCM output.
 
drewfx1
Goddard
drewfx1
It depends. The lower the exponent is set, the higher the comparable resolution of floating point. It's only "equivalent" for samples that are near full scale. And if we consider that adding one bit doubles the resolution...

 
No it doesn't depend. 32-bit floating point and 24-bit fixed/integer are equivalent in resolution/precision:
 
http://www.bores.com/cour...tro/chips/6_precis.htm

 
From your link (sic):

Because the hardware automatically scales and normalises every number, the errors due to truncation and rounding depend on the size of the number. If we regard these errors as a source of quantisation noise, then the noise floor is modulated by the size of the signal. 

 
Which once again confirms exactly what I said in the quote of mine that you are replying to here. When the MSB's in 24bit are zero (i.e. a lower signal level for a given sample), 32bit indeed has higher precision. 

 
That's "scaling" (in the exponent), not precision (in the significand/mantissa).  And in any case, sampled values constantly vary (unlike e.g. filter coefficient values). Consider what occurs precision-wise in the significand when the sample value becomes so small/large that the exponent has to change.
 
Anyway, such a discussion is only academic. The point of real practical interest here is that processing 24-bit PCM audio with single precision floats can lead to errors in the processed (summed) 24-bit PCM audio output of the audio engine which do not manifest when double precision is employed.
 
drewfx1
Goddard
drewfx1
Goddard
drewfx1
It's very simple. We can go through the math, but for people who aren't interested in going through the math (or doing controlled null tests), the answer is this:
 
Yes there are errors, but they accumulate quite slowly - to the extent that often relatively few of them even make it into 24bit output, much less at an audible level.


So you say, repeatedly, here and in other posts. While conveniently omitting any consideration of whether such errors may propagate through subsequent DSP operations and gain alteration so as to manifest toward audibility further down the chain. Not that possible error manifestation due to propagation through downstream DSP would show up when null testing the mix engine's output as you did with FX disabled anyway.

I think you will find that, as I conveniently did in the part you quoted here, I always say something like "the errors accumulate quite slowly". Or does "propagate" mean something different than "accumulate" to you here?

 
In DSP, rounding error may also accumulate/propagate quite rapidly (exponentially even):
 
http://www.dspguide.com/ch4/4.htm

 
How many times have I made the point that mixing doesn't cause this type of accumulation? You are of course welcome to prove otherwise. I've already shown the proof (which you keep questioning), and it was confirmed by the null test you yourself linked to above.

 
Perhaps you missed this by Ron K in that same thread?:
 

That's 1 bit per multiply/summing stage. In other words, the more tracks and buses you have, the more these bit errors can accumulate and start to become 2 bit errors, 4 bit errors, etc.

http://forum.cakewalk.com/White-Noise-32bit64bit-Engine-Test-m610884.aspx#614682
 
drewfx1
Goddard
drewfx1
Goddard
Now, I must admit that I hadn't paid much attention to your earlier so-called "null test" post before, once I'd noticed that you'd disabled all FX when exporting. But just now looking at your "null test" post again, your testing methodology does raise one question:
 
What exactly was being nulled?
 
I'm travelling right now and can't load up that same X2 demo project which you employed for your "null test", but IIRC, that demo project had an already-mixed down track soloed (with some FX (in the Pro Channel?) on it, sort of like the mixdown was being "mastered"?).
 
Now, if my recollection about that demo project is correct, then I wonder if, besides disabling all FX, did you also bother to disable track and bus mute/solo when exporting? There's absolutely no mention of that in your post above that I can see, so what should one infer from that?
Otherwise, it seems only that soloed mixdown track would have been exported, with no gain alteration, mixing, or FX processing actually performed in the mix engine during the export; merely the (already rendered) mixdown track copied to the export destination files, with the exported files then nulled against each other. If so, that could certainly account for the lack of any significant difference between the exported files.

 
What should someone infer? I was hoping someone might infer at least a basic level of competence on my part.  

 
The results should not null to infinity. There should be a discernible difference in the low bits at least. Not saying it would necessarily be audible (at least, not without considerable gain boosting), but it should show up in a bitscope/bitmeter. 

 
I think you didn't understand. The RMS indeed nulls to infinity, but only when truncated to 24bit - because the result was less than the LSB of 24bit fixed point. It's exactly the same as an undithered recorded signal being cut off below the LSB. As I posted, it did not null to infinity before it was reduced to 24bit.

 
You are focusing on an unreliable RMS measurement. The poster there also reported that the normalized differences were audible.
 
Rounding error (bit loss) which occurs in (the significand of) the 32-bit fp number will still be present when that 32-bit fp number is converted to 24-bit integer/fixed. This was clearly shown in the reported 24-bit PCM output results of CW's testing for a simple gain alteration and summing operation, and also confirmed by Justin F's comparison test post. Converting to 24-bit won't change the null depth.
 
Maybe try using Diffmaker:
 
http://libinst.com/Audio%20DiffMaker.htm
 
Or maybe try chained instances of Sonar's Bitmeter plug-in, with the first instance set to "24" and the second set to "float". See:
 
http://forum.cakewalk.com/What-is-Bit-Meter-for-m956916.aspx#956957
 
drewfx1
Goddard
drewfx1
I suggest you do your own null test (making sure you have absolutely no random processing going on so that the only difference is indeed the engine) and post your results. 


As noted above, it's already been done, back when. Didn't null completely. Was the difference audible? Hardly. But that's not really the point.

 
Um, that's been exactly my point all along - as I've repeatedly said, there are indeed errors some of which will make it into 24bit output. But they aren't audible.

 
And my point is that noise/distortion due to rounding error introduced when mixing 24-bit audio at single precision, even if not initially audible, may become audible (or cause undesirable effects) when further downstream mixing or DSP is performed on the error-bearing data.
 
How many low bits of a 24-bit PCM sample need to be lost (incorrect) before it's audible? How much gain is required before a low bit rounding error becomes audible?
 
I've already pointed to Ron K's "1 bit per multiply/summing stage" remark above.
 
 
Wrt certain types of digital filters (as might be employed in various DSP FX plug-ins such as EQ and multiband compressors), see e.g. "noise gain" here:
 
http://electronotes.netfirms.com/EN209.pdf
 
Some DSP FX such as dynamic processors or FX with sidechain inputs may normalize or otherwise raise the level of an input in order to derive an internal reference level. See e.g. (towards the bottom, at "Anyway what is the bottom line (finally)?"):
 
http://www.gearslutz.com/board/6016102-post259.html
 
 
I could go on, but I think we've both made our respective points.
 
drewfx1
Goddard
Now, just to be clear, I've never asserted that rounding errors when mixing (summing) in Sonar with single precision are necessarily audible, nor that Sonar's double precision engine sounds better than or even different from its single precision engine. If anything, I've always played the skeptic around here (and in this forum's earlier incarnations and the ng) as you may have noticed, never the "placebophile", and I've even been known on occasion to call into question what I've perceived as mere marketing hype.
 

 
Then we essentially agree. I have nothing against the use of 64bit double precision; I am just pointing out that there is no audible benefit in the real world. So people can stop worrying about it and instead worry about more important things.

 
Perhaps we inhabit different "real worlds" then, dunno. You seemed to temper your position in the course of an earlier thread on this topic:
 
http://forum.cakewalk.com/More-questions-about-64bit-recording-m2444701.aspx
 
Always good to keep an open mind...
 
drewfx1
Goddard
Btw, while that CW whitepaper also discussed performance aspects when the dpe was used, I'd intentionally refrained from pointing to that in my earlier post as I wasn't sure whether those aspects still remained valid for more current systems. In any case, the results reported in that whitepaper were based on CW's own internal benchmarking and test projects, and I tend to view such results with skepticism unless they can be independently verified (my hidden agenda).

An interesting question, but I suspect we would find that the mix engine is doing a relatively trivial number of calculations from a modern CPU's standpoint.



My takeaway then was that there was no performance impact due to the DPE, even on a 32-bit system, so good to go. If anything, it would appear from the marketing blurbs that since then both CPU capability and Sonar's code for the kind of SIMD operations involved in summing and other fp processing of audio have only improved, e.g.:
 
http://software.intel.com/en-us/articles/utilizing-intel-avx-with-cakewalk-sonar-x1
 
http://software.intel.com/sites/billboard/article/cakewalk-intel-and-windows-8-bring-high-performance-touch-enabled-mobile-workflows-musicians
 
although of course independent verification remains pending...
post edited by Goddard - 2013/12/13 03:16:19
#62
jb101
Max Output Level: -46 dBFS
  • Total Posts : 2946
  • Joined: 2011/12/04 05:26:10
  • Status: offline
Re: 64 bit engine? 2013/12/13 07:15:45 (permalink)
Well, it will be fixed next week, so all this has been a little pointless.
 
Think about how much more time could have been spent making music instead of writing posts..

 Sonar Platinum
#63
Splat
Max Output Level: 0 dBFS
  • Total Posts : 8672
  • Joined: 2010/12/29 15:28:29
  • Location: Mars.
  • Status: offline
Re: 64 bit engine? 2013/12/13 07:21:26 (permalink)
I'm not sure. This might be great groundwork for the 65 bit precision engine.

Sell by date at 9000 posts. Do not feed.
@48/24 & 128 buffers latency is 367 with offset of 38.

Sonar Platinum(64 bit),Win 8.1(64 bit),Saffire Pro 40(Firewire),Mix Control = 3.4,Firewire=VIA,Dell Studio XPS 8100(Intel Core i7 CPU 2.93 Ghz/16 Gb),4 x Seagate ST31500341AS (mirrored),GeForce GTX 460,Yamaha DGX-505 keyboard,Roland A-300PRO,Roland SPD-30 V2,FD-8,Triggera Krigg,Shure SM7B,Yamaha HS5.Maschine Studio+Komplete 9 Ultimate+Kontrol Z1.Addictive Keys,Izotope Nectar elements,Overloud Bundle,Geist.Acronis True Image 2014.
#64
jb101
Max Output Level: -46 dBFS
  • Total Posts : 2946
  • Joined: 2011/12/04 05:26:10
  • Status: offline
Re: 64 bit engine? 2013/12/13 07:28:20 (permalink)
Or the 66bit double prattle engine..

 Sonar Platinum
#65
bobguitkillerleft
Max Output Level: -72 dBFS
  • Total Posts : 944
  • Joined: 2011/05/17 17:28:58
  • Location: Adelaide Australia
  • Status: offline
Re: 64 bit engine? 2013/12/13 08:19:17 (permalink)
Aha, so it affects the "low" end according to Ben. Intriguing, as I should be able to hear a difference!
Bob

https://soundcloud.com/rks26 https://en.wikipedia.org/wiki/The_Hitmen Lenovo W540 Factory refurb SONAR PLATINUM,Ozone 7 N.I. KA6 Komplete 9 SSD4 Platinum Epi L/H LP Custom Headstock broken twice and fixed.Gibson L/H Les Paul 2010 Wine Red Studio stupid Right Hand Vol.Tone for Left Hand?LH84Ibanez RS135 gen.FloydRose JB Marshall 100w 2203 4x25w Celestion Green backs
"You are what you is"-Frank Zappa "But I'm gonna wave my freak flag high"-Jimi Hendrix    
#66
Splat
Max Output Level: 0 dBFS
  • Total Posts : 8672
  • Joined: 2010/12/29 15:28:29
  • Location: Mars.
  • Status: offline
Re: 64 bit engine? 2013/12/13 11:59:38 (permalink)

 
 
Exactly...  I need one of these nowadays... 

Sell by date at 9000 posts. Do not feed.
@48/24 & 128 buffers latency is 367 with offset of 38.

Sonar Platinum(64 bit),Win 8.1(64 bit),Saffire Pro 40(Firewire),Mix Control = 3.4,Firewire=VIA,Dell Studio XPS 8100(Intel Core i7 CPU 2.93 Ghz/16 Gb),4 x Seagate ST31500341AS (mirrored),GeForce GTX 460,Yamaha DGX-505 keyboard,Roland A-300PRO,Roland SPD-30 V2,FD-8,Triggera Krigg,Shure SM7B,Yamaha HS5.Maschine Studio+Komplete 9 Ultimate+Kontrol Z1.Addictive Keys,Izotope Nectar elements,Overloud Bundle,Geist.Acronis True Image 2014.
#67
drewfx1
Max Output Level: -9.5 dBFS
  • Total Posts : 6585
  • Joined: 2008/08/04 16:19:11
  • Status: offline
Re: 64 bit engine? 2013/12/13 14:21:42 (permalink)
Goddard 
RMS measurement won't reveal peaks. And as already said, Sound Forge's RMS measurement (at least in SF 8.0 as was used for the differencing results reported in that old thread) was known to be rather buggy and unreliable:
 

Notable fixes/changes in version 9.0a
...
A bug that caused the Statistics tool to report inaccurate RMS levels has been fixed.

http://dspcdn.sonycreativesoftware.com/releasenotes/soundforge90e_readme_enu.htm

Perhaps you missed that I posted the peak errors as well (SF lists them as Minimum and Maximum - there are two, because both the positive and negative peaks are posted):
 
32bit vs. 64bit (no FX):
                            Left Channel   Right Channel
Minimum sample value (dB)      -138.739        -136.175
Maximum sample value (dB)      -138.943        -138.192
RMS level (dB)                 -164.395        -164.148
 
I used SF 9.0c, so the RMS values had been fixed.
 
 

Btw, in that same thread, the OP had reported a further test here:
 
http://forum.cakewalk.com/White-Noise-32bit64bit-Engine-Test-m610884.aspx#611251
 
 

Yes, and his result was a 1dB lower error level.
 

and also, Ron K had previewed the AES presentation he was then about to give:
 
http://forum.cakewalk.com/White-Noise-32bit64bit-Engine-Test-m610884.aspx#611393
 

 

The verdict is that using 32-bit floats to mix 24-bit data introduces at most 6dB of error more than 25% of the time.

 
Are you asserting that "at most 6dB" is a problem?
 

Ok, but while Sonar may not have any control over what/how (i.e., with what precision) a plug-in may process things internally, it does control what the plug-in processes, namely, whether the plug-in receives/returns a single precision fp stream or a double precision fp stream from/to the audio engine. That is, whether or not Sonar's DPE is enabled can influence the precision of the sample stream supplied by Sonar's audio engine as the input (i.e., "operand") to the plug-in for processing and also the precision at which the processed output stream is returned to Sonar's audio engine by the plug-in.

 
This is a bad argument, as it only applies in borderline cases that don't exist in the real world. If the errors from using Sonar's single precision mix engine are far from audible (as they are), you need to do huge amounts of additional processing using 32bit single precision to get audible errors. But the audible errors are then not really from the mix engine, but from the additional processing. It's an argument that the additional processing should be done with more precision.
 
Do you understand how to calculate the effect of adding, say, a -140dB RMS error to a -120dB RMS error? If not, we can go through it, but the increase in error level of adding something 20dB lower is much less than many people might think. Basically, if you have two similar sources of error, you can completely ignore the lower level one if it's far enough below the higher level one.
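For anyone who does want to go through it, here's the arithmetic as a trivial sketch (assuming the two error sources are uncorrelated, which is the usual assumption for rounding noise):

import math

def add_uncorrelated_db(a_db, b_db):
    # power-sum two uncorrelated noise levels expressed in dB
    return 10 * math.log10(10 ** (a_db / 10) + 10 ** (b_db / 10))

print(add_uncorrelated_db(-120, -140))   # about -119.96 dB: the -140 dB error is negligible
print(add_uncorrelated_db(-120, -120))   # about -116.99 dB: equal errors add roughly 3 dB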
 
But whether or not the DPE is enabled in Sonar can determine what the recursive DSP FX processes.

 
What makes you think that that matters? The DSP adds additional errors to what it gets or it doesn't.
 

As already noted, those reported RMS readings were liable to be unreliable. And in any case, both Ron K and Justin Frankel confirmed that single precision fp summing rounding errors do make it into the 24-bit PCM output.

 
I've repeatedly said that some errors will indeed make it into the 24bit output. The point is that they aren't remotely audible. 
 

drewfx1
Goddard
drewfx1
It depends. The lower the exponent is set, the higher the comparable resolution of floating point. It's only "equivalent" for samples that are near full scale. And if we consider that adding one bit doubles the resolution...

 
No it doesn't depend. 32-bit floating point and 24-bit fixed/integer are equivalent in resolution/precision:
 
http://www.bores.com/cour...tro/chips/6_precis.htm

 
From your link (sic):

Because the hardware automatically scales and normalises every number, the errors due to truncation and rounding depend on the size of the number. If we regard these errors as a source of quantisation noise, then the noise floor is modulated by the size of the signal. 

 
Which once again confirms exactly what I said in the quote of mine that you are replying to here. When the MSB's in 24bit are zero (i.e. a lower signal level for a given sample), 32bit indeed has higher precision. 

 
That's "scaling" (in the exponent), not precision (in the significand/mantissa).  And in any case, sampled values constantly vary (unlike e.g. filter coefficient values). Consider what occurs precision-wise in the significand when the sample value becomes so small/large that the exponent has to change.

 
No, it's the same. An intrinsic property of floating point numbers is that the numbers have less/more precision compared to fixed point depending on the exponent.
 
For instance, in 24 bit fixed point if the 3 MSB's at the top are all zero (i.e. a lower signal level), you only have 21 bits of precision left for your actual data; in floating point the MSB is an implied value of 1 (except for subnormals), so you are always using every bit of precision (for all non-subnormal values) and the exponent scales it up or down. To put it another way, with fixed point the level of quantization error relative to the signal increases for lower level signals whereas with floating point it stays the same.
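A quick way to see this (a rough sketch using NumPy's spacing(); the levels are arbitrary): the gap between adjacent float32 values stays at roughly 1 part in 2^23 of the signal at any level, while the 24-bit fixed-point step is constant and so grows relative to the signal as the level drops:

import numpy as np

for level in (1.0, 2**-3, 2**-12):                   # near full scale, -18 dBFS, very quiet
    f32_step = float(np.spacing(np.float32(level)))  # gap to the next representable float32
    fix_step = 2**-23                                # 24-bit fixed-point step (constant)
    print(f"level {level:g}: float32 step = {f32_step / level:.1e} of the signal, "
          f"24-bit step = {fix_step / level:.1e} of the signal")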
 

Anyway, such a discussion is only academic. The point of real practical interest here is that processing 24-bit PCM audio with single precision floats can lead to errors in the processed (summed) 24-bit PCM audio output of the audio engine which do not manifest when double precision is employed.

 
I would say that the "practical" point is not whether the errors exist, but whether they are audible. If the errors are present but never audible, I would call that academic. 
 

drewfx1
How many times have I made the point that mixing doesn't cause this type of accumulation? You are of course welcome to prove otherwise. I've already shown the proof (which you keep questioning), and it was confirmed by the null test you yourself linked to above.

 
Perhaps you missed this by Ron K in that same thread?:
 

That's 1 bit per multiply/summing stage. In other words, the more tracks and buses you have, the more these bit errors can accumulate and start to become 2 bit errors, 4 bit errors, etc.

http://forum.cakewalk.com/White-Noise-32bit64bit-Engine-Test-m610884.aspx#614682

 
This is indeed true. But don't confuse peak errors with RMS, which much more closely represents what we hear. If you test, you will find that the RMS error increases far more slowly than the peak error levels.
 
And you also need to consider that the true peak is dependent on the sample level, not dBFS - so you only get the maximum peak if you happen to be unlucky enough to have a maximum calculation error occur on a sample that is near full scale. This is unbelievably rare in the real world, and becomes increasingly rare as you do more calculations. 
 
And as noted earlier, I got a peak error of -136.175 dBFS in my tests (which became -138.474 dBFS when truncated to 24bits). But the dBFS peak value will vary much more from test to test than the RMS value for the reason I stated above - the errors are relative to the sample itself, not dBFS, so the peak dBFS error in any given test will depend on levels of the samples where you happened to get a peak calculation error.
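To illustrate the "relative to the sample itself, not dBFS" point, a toy sketch (just one float32 store of two made-up sample values, not a model of a whole mix):

import numpy as np

for x in (0.987654321987, 0.000987654321987):   # a loud sample and a quiet one
    err = abs(float(np.float32(x)) - x)          # rounding error from one float32 store
    print(f"sample {x}: error {20 * np.log10(err):.1f} dBFS, relative {err / x:.1e}")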
 
 
drewfx1
I think you didn't understand. The RMS indeed nulls to infinity, but only when truncated to 24bit - because the result was less than the LSB of 24bit fixed point. It's exactly the same as an undithered recorded signal being cut off below the LSB. As I posted, it did not null to infinity before it was reduced to 24bit.

 
You are focusing on an unreliable RMS measurement.

 
As stated above, my RMS measurements were from a version of SF that had corrected that error.
 
And I find it very telling that you just assume that my numbers must be wrong.
 

The poster there also reported that the normalized differences were audible.
 
Rounding error (bit loss) which occurs in (the significand of) the 32-bit fp number will still be present when that 32-bit fp number is converted to 24-bit integer/fixed. This was clearly shown in the reported 24-bit PCM output results of CW's testing for a simple gain alteration and summing operation, and also confirmed by Justin F's comparison test post. Converting to 24-bit won't change the null depth.

 
SF reports the numbers in the bit depth of the project. You can't represent a number below -144.494 dBFS in 24bit, so SF reports -infinity. The point is that the RMS error level is below the level of 24bit's LSB.
 
Note that I got an RMS error of -164.148 dBFS RMS before truncation. Since this indicates that an overwhelming number of errors are below -144.494 dBFS and will thus get truncated, you will get an even lower RMS error level after truncation if we bother to calculate it and express it at a higher bit depth.
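And here's a rough sketch of why a below-LSB difference signal truncates to digital silence (a synthetic Gaussian signal set to roughly that RMS level, not the actual test files):

import numpy as np

rng = np.random.default_rng(1)
diff = rng.normal(0.0, 10 ** (-164 / 20), 200_000)    # difference signal, about -164 dBFS RMS
as_24bit = np.trunc(diff * 2**23).astype(np.int64)    # truncate to the 24-bit grid
print("nonzero 24-bit samples:", np.count_nonzero(as_24bit), "of", as_24bit.size)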
 

As noted above, it's already been done, back when. Didn't null completely. Was the difference audible? Hardly. But that's not really the point.
 
drewfx1  
Um, that's been exactly my point all along - as I've repeatedly said, there are indeed errors some of which will make it into 24bit output. But they aren't audible.

 
And my point is that noise/distortion due to rounding error introduced when mixing 24-bit audio at single precision, even if not initially audible, may become audible (or cause undesirable effects) when further downstream mixing or DSP is performed on the error-bearing data.

 
As noted above, it will be downstream errors that are audible, not the ones from the mix engine.
 

How many low bits of a 24-bit PCM sample need to be lost (incorrect) before it's audible? How much gain is required before a low bit rounding error becomes audible?

 
Ignoring masking, if your playback level puts the errors at a level that is below your absolute threshold of hearing, they are obviously inaudible.
 
When we consider masking from background noise in the listening environment, analog circuit noise, dither, quantization noise, noise in the signal and the signal itself, they are much further from being audible.
 
If you do a null test and listen to the difference signal, you will find that you have to add many tens of dB of gain to hear anything, even without your signal there to mask it. 
 
I strongly suggest you do such a test to see for yourself.
 
 
Wrt certain types of digital filters (as might be employed in various DSP FX plug-ins such as EQ and multiband compressors), see e.g. "noise gain" here:
 
http://electronotes.netfirms.com/EN209.pdf
 
Some DSP FX such as dynamic processors or FX with sidechain inputs may normalize or otherwise raise the level of an input in order to derive an internal reference level. See e.g. (towards the bottom, at "Anyway what is the bottom line (finally)?"):
 
http://www.gearslutz.com/board/6016102-post259.html

 
I'm afraid I don't have time to read every link you post. I suggest that in the future you quote the relevant parts in addition to providing the links, for people not inclined to follow every link.
 
Regardless, I fully understand and have repeatedly stated that higher precision is desirable/necessary for certain forms of processing involving thousands of calculations (or more), but this doesn't apply to the mix engine. 
 
Changing the reference level in floating point doesn't make the errors any more of a problem (perhaps it adds a few additional calculations, but they have essentially no effect on the error level).
 
 
Perhaps we inhabit different "real worlds" then, dunno. You seemed to temper your position in the course of an earlier thread on this topic:
 
http://forum.cakewalk.com/More-questions-about-64bit-recording-m2444701.aspx
 

 
I'm not sure what you mean here. My positions have not changed since those posts.
 

 In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
#68
Sycraft
Max Output Level: -73 dBFS
  • Total Posts : 871
  • Joined: 2012/05/04 21:06:10
  • Status: offline
Re: 64 bit engine? 2013/12/14 01:21:25 (permalink)
I think people have a bit of a misunderstanding about errors and such in output. Just because there is an error doesn't mean it matters. This is because 24 bits is actually not only more than we need to reproduce the range of human hearing, but indeed more than we can use. You don't find DACs with 144dB of dynamic range. It is near impossible to get them with 130dB (22ish bits), and it costs a ton. Even 120dB (20 bits) is pretty rare in an actual system implementation, meaning a DAC and an amp combined.
 
So, we really need to define the levels of error one might get from decimation/quantization:
 
-- If the error is below -144dB, below 24-bit, then it is non-existent as far as the final output is concerned. If you have an error of, say, -170dB, it is only present in the higher resolution intermediary files. When you go to 24-bit, it is gone.
 
-- If the error is above -144dB but below -130dB, then it is irrelevant. You won't find a DAC that can reproduce it, so it doesn't matter that it is there. It is nothing but noise in those bits anyhow, even to the highest of the high quality converters, so it has no relevance.
 
-- If the error is above -130dB but below -120dB, then it is inaudible, even in outside cases. The highest peak levels of anything you are going to realistically target is the LFE channel in theater reference material, and that is 115dB SPL peak. That means that even in that case, you are still 5dB under the threshold of hearing with -120dB of error, and that would even be assuming an exceedingly silent room (most rooms are far too noisy to hear anything down around 0dB SPL).
 
Only if you get above that, into the 20th bit or above, are you talking about levels of theoretical audibility. Only then is it something to even start worrying about, and realistically it would need to be much higher before it is actually audible. So unless you have evidence indicating you are getting errors at that level (you aren't), don't stress.
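If you want to translate those dB figures into "which bit of a 24-bit word are we talking about", it's roughly 6.02 dB per bit; a quick sketch:

import math

def db_to_bits(dbfs):
    # rough equivalent bit position for an error floor at `dbfs` (about 6.02 dB per bit)
    return -dbfs / (20 * math.log10(2))

for level in (-144, -130, -120):
    print(f"{level} dBFS is roughly bit {db_to_bits(level):.1f}")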
 
It is additionally silly to stress over it if you are using the console emulator. The reason is that the whole point of that thing is to change frequency response and add low level noise and distortion, just like real hardware. You raise the noise floor and distortion level drastically compared to an unmodified digital signal. So if you are inserting an effect that moves the noise floor up anyhow, it gets really silly to worry about errors in the very least significant bits. If a totally clean signal with more dynamic range than any playback system is capable of is your goal, then you don't want something like a console emulator.
 
People seriously need to chill about this :P.
#69
mettelus
Max Output Level: -22 dBFS
  • Total Posts : 5321
  • Joined: 2005/08/05 03:19:25
  • Location: Maryland, USA
  • Status: offline
Re: 64 bit engine? 2013/12/14 01:26:28 (permalink)
Man... I cannot wait for the 128-bit machines to hit the streets so I can finally be rid of these grotesque computational errors... ewww.

ASUS ROG Maximus X Hero (Wi-Fi AC), i7-8700k, 16GB RAM, GTX-1070Ti, Win 10 Pro, Saffire PRO 24 DSP, A-300 PRO, plus numerous gadgets and gizmos that make or manipulate sound in some way.
#70
D K
Max Output Level: -66 dBFS
  • Total Posts : 1237
  • Joined: 2005/06/07 14:07:05
  • Status: offline
Re: 64 bit engine? 2013/12/14 10:47:51 (permalink)
Sycraft
 
It is additionally silly to stress over it if you are using the console emulator. The reason is that the whole point of that thing is to change frequency response and add low level noise and distortion, just like real hardware. You raise the noise floor and distortion level drastically compared to an unmodified digital signal. So if you are inserting an effect that moves the noise floor up anyhow, it gets really silly to worry about errors in the very least significant bits. If a totally clean signal with more dynamic range than any playback system is capable of is your goal, then you don't want something like a console emulator.
 
People seriously need to chill about this :P.




 
^^^ Game, Set, Match ^^^ - That is... for anyone whose primary concern is performing, capturing, mixing and presenting... music 

www.ateliersound.com
 
ADK Custom  I7-2600 K
Win 7 64bit /8 Gig Ram/WD-Seagate Drives(x3)
Sonar 8.5.3 (32bit)/Sonar X3b(64bit)/Pro Tools 9
Lavry Blue/Black Lion Audio Mod Tango 24/RME Hammerfall Multiface II/UAD Duo
 
 
 
#71
Anderton
Max Output Level: 0 dBFS
  • Total Posts : 14070
  • Joined: 2003/11/06 14:02:03
  • Status: offline
Re: 64 bit engine? 2013/12/14 11:15:24 (permalink)
Here's an analogy I've used sometimes as to errors that happen at extremely low levels.
 
If a bus is going down the street right outside your window while you're recording a vocal, it will introduce unwanted sounds. If a bus is going down the street two blocks over, it may or may not introduce unwanted sounds because it will be much lower in level. If a bus goes down the street 500 miles away, it really won't make any difference to your vocals although the bus does exist and does make noise.
 
Speaking of really low-level signals, I find dithering very interesting because it seems to be right on the borderline of the perceptible. I've done several classical music projects involving solo acoustic instruments; some people could reliably identify dithered and non-dithered material, while others couldn't tell the difference.

The first 3 books in "The Musician's Guide to Home Recording" series are available from Hal Leonard and http://www.reverb.com. Listen to my music on http://www.YouTube.com/thecraiganderton, and visit http://www.craiganderton.com. Thanks!
#72
lawp
Max Output Level: -67 dBFS
  • Total Posts : 1154
  • Joined: 2012/06/28 13:27:41
  • Status: offline
Re: 64 bit engine? 2013/12/14 11:28:40 (permalink)
so the dpe is/was just marketing hype?
#73
Anderton
Max Output Level: 0 dBFS
  • Total Posts : 14070
  • Joined: 2003/11/06 14:02:03
  • Status: offline
Re: 64 bit engine? 2013/12/14 11:30:08 (permalink)
lawp
so the dpe is/was just marketing hype?




As I've said before...when the 64-bit engine was introduced, the world of audio engines was quite different and it was a major step forward.
 
As an analogy, at one point stereo was a huge deal and there was a major marketing push about how much better it was than mono, which it was. However these days, you won't see a lot of marketing based around stereo reproduction because the world has caught up to it.
 
Please note this is my personal opinion and does not speak for Cakewalk.

The first 3 books in "The Musician's Guide to Home Recording" series are available from Hal Leonard and http://www.reverb.com. Listen to my music on http://www.YouTube.com/thecraiganderton, and visit http://www.craiganderton.com. Thanks!
#74
Westside Steve
Max Output Level: -75 dBFS
  • Total Posts : 794
  • Joined: 2007/04/08 03:57:43
  • Location: Norton Ohio
  • Status: offline
Re: 64 bit engine? 2013/12/14 11:33:34 (permalink)
Of course there are guys who brag that they can hear that bus 500 miles away! We all know some of those guys...
;-)
WSS
#75
Noel Borthwick [Cakewalk]
Cakewalk Staff
  • Total Posts : 6475
  • Joined: 2003/11/03 17:22:50
  • Location: Boston, MA, USA
  • Status: offline
Re: 64 bit engine? 2013/12/14 14:00:07 (permalink)
The issue with console emulator buzz when the 64 bit engine was on has been fixed. It was actually caused by a stereo/mono mismatch between the plugin and SONAR.
This issue, ironically, was caused by a workaround in X3C for the Line 6 PodFarm bug (a problem when switching it to mono and then stereo). That workaround has now been removed from X3D and I have notified Line 6 again. A fix for that issue will have to come from them now.

Noel Borthwick
Senior Manager Audio Core, BandLab
My Blog, Twitter, BandLab Profile
#76
John
Forum Host
  • Total Posts : 30467
  • Joined: 2003/11/06 11:53:17
  • Status: offline
Re: 64 bit engine? 2013/12/14 14:16:12 (permalink)
lawp
so the dpe is/was just marketing hype?


I don't think it is. Izotope had it in Ozone for ages. I know it was in Ozone 3 long before Sonar had it.

Best
John
#77
drewfx1
Max Output Level: -9.5 dBFS
  • Total Posts : 6585
  • Joined: 2008/08/04 16:19:11
  • Status: offline
Re: 64 bit engine? 2013/12/14 15:21:13 (permalink)
Unless someone can demonstrate that it's ever even borderline audible through some objective testing, I would say that there was some marketing going on.
 
But I think most of the hype part actually comes not from marketing, but from the odd (to me) psychology of people who want so very, very badly to believe they can hear something - despite any and all evidence to the contrary - just so they can click on a button and make it go away.
 
 
Personally, I would prefer if CW would just say something like, "turning on 64bit double precision essentially ensures that any mathematical errors that normally occur as a result of processing in the mix engine will never make it to your output audio" and leave it at that.

 In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
#78
Anderton
Max Output Level: 0 dBFS
  • Total Posts : 14070
  • Joined: 2003/11/06 14:02:03
  • Status: offline
Re: 64 bit engine? 2013/12/14 15:43:07 (permalink)
drewfx1
Unless someone can demonstrate that it's ever even borderline audible through some objective testing, I would say that there was some marketing going on.



It depends upon what you compare it to. When compared to a 16-bit fixed audio engine, you don't have to do too much DSP to hear an obvious, audible difference. With a 24-bit fixed engine, you have to work a lot harder to create a project where you can hear a difference. It is possible, but the project wouldn't have much relationship to real-world projects...unless your music consists of solo acoustic instruments recorded in isolation with noiseless mics, then bounced multiple times through precision reverbs and played back at really loud levels.
 
Again to draw a comparison to dithering, I did a mastering seminar where I reduced the signal level dramatically and did comparisons with and without dithering. The difference was totally obvious, but only because the signal level was so low you could really hear what was happening with those least significant bits. People couldn't tell the difference at "normal" listening levels.
 
However, I always wondered if after people heard what dithering did to multiple low-level examples, it would train their ears sufficiently so they could learn to recognize the difference at normal listening levels. The ability of the ear to "learn" extremely subtle gradations would explain why some people hear very subtle audio cues while others don't.
 

The first 3 books in "The Musician's Guide to Home Recording" series are available from Hal Leonard and http://www.reverb.com. Listen to my music on http://www.YouTube.com/thecraiganderton, and visit http://www.craiganderton.com. Thanks!
#79
Splat
Max Output Level: 0 dBFS
  • Total Posts : 8672
  • Joined: 2010/12/29 15:28:29
  • Location: Mars.
  • Status: offline
Re: 64 bit engine? 2013/12/14 15:55:58 (permalink)
We have X3C now; next week we will have X3D.
That is one letter more, and "D" sounds better than "C".
Which is the main reason why I will be installing this patch (finally I will be able to put my cucumber back into my pants).

Sell by date at 9000 posts. Do not feed.
@48/24 & 128 buffers latency is 367 with offset of 38.

Sonar Platinum(64 bit),Win 8.1(64 bit),Saffire Pro 40(Firewire),Mix Control = 3.4,Firewire=VIA,Dell Studio XPS 8100(Intel Core i7 CPU 2.93 Ghz/16 Gb),4 x Seagate ST31500341AS (mirrored),GeForce GTX 460,Yamaha DGX-505 keyboard,Roland A-300PRO,Roland SPD-30 V2,FD-8,Triggera Krigg,Shure SM7B,Yamaha HS5.Maschine Studio+Komplete 9 Ultimate+Kontrol Z1.Addictive Keys,Izotope Nectar elements,Overloud Bundle,Geist.Acronis True Image 2014.
#80
mixmkr
Max Output Level: -43.5 dBFS
  • Total Posts : 3169
  • Joined: 2007/03/05 22:23:43
  • Status: offline
Re: 64 bit engine? 2013/12/14 16:03:01 (permalink)
However, I always wondered if after people heard what dithering did to multiple low-level examples, it would train their ears sufficiently so they could learn to recognize the difference at normal listening levels. The ability of the ear to "learn" extremely subtle gradations would explain why some people hear very subtle audio cues while others don't.
in the same way that you can highlight an instrument at the beginning of a song...then drop it a gazillion dB or more later on, and it is still clearly heard, as long as it's playing the same or similar part with the same tonality.

some tunes: --->        www.masonharwoodproject.bandcamp.com 
StudioCat i7 4770k 3.5gHz, 16 RAM,  Sonar Platinum, CD Arch 5.2, Steinberg UR-44
videos--->https://www.youtube.com/user/mixmkr
 
#81
drewfx1
Max Output Level: -9.5 dBFS
  • Total Posts : 6585
  • Joined: 2008/08/04 16:19:11
  • Status: offline
Re: 64 bit engine? 2013/12/14 18:03:46 (permalink)
Anderton
drewfx1
Unless someone can demonstrate that it's ever even borderline audible through some objective testing, I would say that there was some marketing going on.



It depends upon what you compare it to. When compared to a 16-bit fixed audio engine, you don't have to do too much DSP to hear an obvious, audible difference. With a 24-bit fixed engine, you have to work a lot harder to create a project where you can hear a difference. It is possible, but the project wouldn't have much relationship to real-world projects...unless your music consists of solo acoustic instruments recorded in isolation with noiseless mics, then bounced multiple times through precision reverbs and played back at really loud levels.

 
Well, here we were comparing calculations done using 32 bit single precision floating point to 64 bit double precision floating point.
 
In terms of marketing, I too remember the days when we avoided at all costs any processing that wasn't absolutely necessary out of fear of audible damage from calculations being done at lower bit depths. And I agree that when CW introduced the 64 bit engine, we were not far removed from those days.
 
Personally, as I've expressed in various ways, I find some of CW's historical wording regarding the 64bit engine, shall we say, "unfortunate". But as a long time enthusiastic user of CW products, I put this in context of a company I otherwise have great respect for. 
 
I put equal (or more) blame on individuals' inclination to ignore basic questions of context: "There are errors? OK, how loud are they under typical conditions?" 
 
And no one ever seems to ask under what conditions a given problem is minimized or exacerbated. 
 
For some reason when it comes to audio, people want to believe that any artifact must be audible under all conditions if they just listen for it, but the real world just doesn't work that way. And intelligent people who profess themselves to be "skeptics" will sometimes readily accept all claims from one side without any even trivial doubt, but will demand endless proof that the other side has dotted every "i" and crossed every "t" without ever providing any contrary evidence of their own.
 
 
I agree that one could create a laboratory project with the express intent of making 32bit errors audible, but for real world usage I've never seen a shred of objective evidence that it's even close to making a difference. 
 
Mathematically, the size of the errors relative to the signal is dependent on the bit depth the calculations are done with, the number of calculations performed, and how they accumulate based on the nature of the calculations being done. With 32 bit floating point you are starting at a point far from ever being audible, and in mixing I will assert that the errors are typically distributed fairly randomly. Therefore you need to do lots and lots of calculations before the errors could accumulate enough to be worth worrying about. 
 
The math part is not really open to debate. But I would be quite interested in someone presenting objective evidence suggesting that the number of calculations under the 32bit mix engine is sufficient to make the errors audible, or that when mixing real world signals the errors might accumulate unusually rapidly to the point of being a problem.
 

Again to draw a comparison to dithering, I did a mastering seminar where I reduced the signal level dramatically and did comparisons with and without dithering. The difference was totally obvious, but only because the signal level was so low you could really hear what was happening with those least significant bits. People couldn't tell the difference at "normal" listening levels.
 
However, I always wondered if after people heard what dithering did to multiple low-level examples, it would train their ears sufficiently so they could learn to recognize the difference at normal listening levels. The ability of the ear to "learn" extremely subtle gradations would explain why some people hear very subtle audio cues while others don't.



If the dither/quantization error at a normal listening level is below the absolute threshold of hearing, or is sufficiently masked by background noise and the audio itself, it will be inaudible. This is commonly the case for 16 bit audio, but you can certainly find (or create) conditions where it is audible.
 
In borderline cases, my understanding is that listeners being trained on what to listen for can make a very significant difference. And that, aside from hearing loss, training or "knowing how to listen" is the primary difference between different individuals' ability to hear things or not - i.e. aside from hearing loss, it's not based on anyone having naturally superior hearing or anything like that. So it wouldn't surprise me if, as you suggest, some people have learned to hear details that escape others of us, but are still within the physiological limits of our hearing.
 
But I would also assert that it's often not all that difficult to differentiate between "conceivably borderline" cases and "below the physiological limits of human hearing" cases for listening levels that don't cause permanent hearing damage in the short period of time before you blow your speakers.

 In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
#82
Goddard
Max Output Level: -84 dBFS
  • Total Posts : 338
  • Joined: 2012/07/21 11:39:11
  • Status: offline
Re: 64 bit engine? 2013/12/14 22:03:00 (permalink)
Well this may be of some interest... (especially, Reference [1])
 
http://pure.ltu.se/portal/en/studentthesis/rounding-errors-in-floating-point-audio%286cd6adc7-83c9-4208-ad06-06e105892cc1%29.html
 
As some people claim not to have time to read stuff I post links to (yet still find time to post at length?), some selected highlights:

Rounding errors in floating point audio:
Investigating the effects of rounding errors on the fixed point output format of a simulated digital audio chain, using fixed point input, and floating point intermediate storage
 
Erik Grundström
2013
Bachelor of Arts
Audio Engineering
Luleå University of Technology
...
 
1. Introduction
...
There are claims of increased audio quality by using 64 bits in the marketing material of some DAWs [1] and plug-in manufacturers, while others claim there is no increased audio quality [2][3]. The scientific literature on this subject is however extremely scarce [4]. This means that the subject should be systematically investigated since it is important for the audio engineer in deciding what equipment to use and also for the design engineer when new audio products are developed, both software and hardware.

1.1 Research question
Will the use of a 64 bit floating point intermediate signal chain produce less deviation from an original fixed point audio file than a 32 bit floating point signal chain after requantization to the original fixed point format?
...
2.2 Digital signal processing
Since digital audio consists of a series of binary values at equal increments of time, processing in the digital domain means that mathematical operations are applied to these values. For instance; to change the level of audio in the digital domain, a multiplication is carried out on every sample. This is one of the simplest types of processing that can be carried out since it processes each sample value independently. Therefore, the result of the process is not dependent on the surrounding sample values. More advanced processing may use several consecutive samples in its process and thus, each samples value is not independent after the processing. Some processes might even be recursive, meaning that the output of the processor is also passed back to its input. What commonly happens when processing is carried out is that the word length necessary to describe the result of the mathematical operation becomes longer than the word length of the original audio data. After each operation the result must be requantized to the word length of the intermediate container format. This requantization is likely to cause rounding errors. It is possible that these rounding errors will then be compounded by subsequent operations and requantizations. This is, however, not a given since it is possible that the rounding errors will balance out if their sign is random. This cannot be controlled as this would require that all parameters of both the signal and the processing would be known when the algorithm is developed [15]. It is quite obvious that this is not feasible for an audio processing unit or DAW.
...
 
3. Method
In order to answer the research question the following has been done:
 
• Generate test files
- One file consisting of all possible sample values in a 16 bit fixed point wav file
- One file consisting of all possible sample values in a 24 bit fixed point wav file
• Simulate a digital signal chain using floating point intermediate format in:
- 32 bit floating point
- 64 bit floating point
• A comparison program has been written that reads samples from the original and the processed file and compares them. The program then prints out the amount of differences between the two, the maximum difference and the mean of these differences and the cumulative deviation.

The simulated audio chain has been made in 6 different versions.
 
1. Converts the fixed point audio data to the two floating point formats and then back and writes a new .wav file with the resulting values.
 
2. This version attempts to provoke differences with an extreme gain change of -700 dB and then +700 dB
 
3. uses a more realistic gain processing of -3 dB and +3 dB
 
4. applies an additional stage and so changes gain by -3 dB, +3 dB, -8 dB and +8 dB
 
5. is the same as 4 but adds -16 dB, +16 dB, - 2dB and + 2 dB gain processes
 
6. is the same as 5 but adds -22 dB, + 22 dB, -12 dB, + 12 dB, -25 dB, +25 dB, -27 dB and +27 dB gain processes

All of the above audio chains should, in theory, not produce any deviation from the original. However, due to rounding errors in the requantization after each gain change, differences may occur.

To approximate the effect any error would have on real music signals, 6 additional test files were generated using random numbers.
• 1 is 16 bit white noise i.e. uniform random numbers
• 2 is 24 bit white noise
• 3 is 16 bit random numbers from a Gaussian probability density function
• 4 is 24 bit random numbers from a Gaussian probability density function
• 5 is 16 bit random numbers from a Laplacian probability density function
• 6 is 24 bit random numbers from a Laplacian probability density function
 
5. Discussion
5.1 Ramp file testing
...
If the conversion is transparent and the input and output is in 16 bit fixed point, a 32 bit and a 64 bit intermediate format will not produce any deviations from the original and thus the audio chain will be transparent. If the input and output is instead 24 bit fixed point the 64 bit intermediate format will not introduce deviations from the original, but the 32 bit will. The percentage of deviations will increase with the square root of the numbers of calculations as is seen in fig.30, fig 31, fig 32 and fig.33. The fact that the deviations seem to increase in such a predictable manner is important for the design engineer as this allows him/her to weigh these errors against the additional memory the audio will allocate in the primary memory of the computer. Perhaps these errors may be deemed acceptable for some calculations in memory intensive tasks. Not only does the number of deviations increase with the number of calculations but they also appear to grow in magnitude as both the maximum deviation and the mean deviation are increasing.
...
 
5.4 Practical implications
While it was stated in section 1.2 Purpose and Limitations, that, whether any differences detected were audible will not be treated, a small discussion on this subject may be appropriate.

The largest deviation from the original in the results section is 4 quantization levels in 24 bit output (see Table 16 in section 4.1.2). If it is assumed that this deviation is not correlated with the signal, it would result in noise at -132 dBFS. This is beyond the dynamic range of human hearing [5, p. 70] and is thus unlikely to be audible. It is, however, possible to encode audio data into storage formats that require strict bit transparency throughout the distribution or signal chain for successful decoding. Such packaged data could be severely impacted if it were converted to 32 bit floating point and processed for whatever reason. This could cause unexpected noises and distortions of the audio or, even worse, complete failure to decode the data. Thus, when bit transparency is of the highest priority, it is highly recommended to use 64 bit floating point rather than 32 bit in the intermediate signal chain if the original fixed point format cannot be used.
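For readers who want to sanity-check that -132 dBFS figure, here is a quick back-of-envelope of my own; it is sensitive to what error distribution one assumes, so it only lands in the same ballpark as the thesis's number rather than reproducing it exactly.

import math

lsb = 1 / 2**23                                            # one 24 bit quantization level re full scale
peak        = 20 * math.log10(4 * lsb)                     # 4-level peak deviation, about -126 dBFS
rms_uniform = 20 * math.log10((8 / math.sqrt(12)) * lsb)   # error assumed uniform in +/-4 levels, about -131 dBFS
print(round(peak, 1), round(rms_uniform, 1))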
...
Note also that the use of 64 bit floating point for all multiplication coefficients in this thesis results in an implicit conversion of the 32 bit data to 64 bit during the calculation. The ecological validity of this is debatable; it would be logical to use 32 bit coefficients when the intermediate format is also 32 bit. This may cause even more deviations from the intended results, since there would be less precision in the number representation during the calculation and thus more, and larger, rounding errors may occur.
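To see the coefficient-precision point in isolation, here is a small sketch of mine (not from the thesis) where the only difference between the two paths is whether the gain coefficient is held as a 64 bit or a 32 bit float before the multiply; the sample stride and gain value are arbitrary.

import numpy as np

samples = np.arange(-2**23, 2**23, 4096, dtype=np.int32).astype(np.float32)
gain_db = -3.0

# 64 bit coefficient: the float32 data is promoted to float64 for the multiply, result requantized.
g64 = np.float64(10.0 ** (gain_db / 20.0))
y_coef64 = (samples.astype(np.float64) * g64).astype(np.float32)

# 32 bit coefficient: the coefficient itself is rounded first, and the multiply stays in float32.
g32 = np.float32(10.0 ** (gain_db / 20.0))
y_coef32 = samples * g32

print(np.count_nonzero(y_coef64 != y_coef32), "of", samples.size, "samples differ")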
 
5.5 Reliability and validity
...
The ecological validity may, in part, be debatable. The use of 64 bit multiplication coefficients in the gain processes may not be applicable to real world scenarios when the intermediate audio format is 32 bits. It is also highly unlikely that an engineer would change the gain of an audio signal in one direction and then restore it at a later stage, at least not before any additional processing has taken place; it is not likely that an engineer would use counteracting processes that, in theory, produce an output identical to the input. Furthermore, it is unreasonable that an engineer would use level changes as extreme as 700 dB. The results, however, show similar rounding errors for this as for a more realistic level change of 3 dB, so this unrealistic scenario does not invalidate the results. Gain adjustment in general, however, is very common in audio production; it is likely the most common processing of all, and thus the study does show some strong ecological validity in this context.
...
6. Future research
This thesis has only just scratched the surface of this topic. Based on its results, a number of questions for further research can therefore be recommended.
 
• Analyze what sample values are actually changed by the processing. Are there values that are more likely to be changed by processing and re-quantization than others?
• ...is there a relation between what processing is applied and what sample values are changed?
• How will the word length of the intermediate format affect more advanced algorithms?...
• Further investigate the relation between the number of calculations and deviations from the original file. This research could be done similarly to this thesis but with a greater number of calculations.
...
• Investigate whether these deviations would be audible and, if so, how many steps of processing are required before they become audible.
 
9. References
[1] http://www.cakewalk.com/Products/feature.aspx/SONAR-Core-Technology-and-64-bitDouble-Precision-Engine
 


Mixing (summing) of multiple streams was unfortunately not considered, only int -> fp -> int format conversion (casting) and reciprocal gain alterations (multiply, always performed with 'doubles') upon individual streams.
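For anyone curious what a summing comparison might look like, here is a rough sketch of my own (not part of the thesis): a number of hypothetical 24-bit tracks mixed on a float32 bus versus a float64 bus, with the two mixes compared after conversion back to 24-bit integers. The track count and gains are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
n_tracks, n_samples = 64, 48_000

# Hypothetical 24 bit tracks, each attenuated so the mix stays within range.
tracks = rng.integers(-2**23, 2**23, size=(n_tracks, n_samples)).astype(np.float64)
gain = 1.0 / n_tracks

mix32 = np.zeros(n_samples, dtype=np.float32)    # bus accumulating in single precision
mix64 = np.zeros(n_samples, dtype=np.float64)    # bus accumulating in double precision
for t in tracks:
    mix32 += (t * gain).astype(np.float32)
    mix64 += t * gain

out32 = np.rint(mix32).astype(np.int64)          # both busses converted back to 24 bit integers
out64 = np.rint(mix64).astype(np.int64)
print("samples differing at 24 bit output:", int(np.count_nonzero(out32 != out64)))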
 
Still, noteworthy that the processed 16-bit streams always nulled, even when using floats, whereas the 24-bit streams never nulled when floats were used. Hmm, maybe double precision does matter. Good to hear the bug's been squashed.
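My guess at why (an assumption on my part, not something the paper states): a 16-bit sample occupies only 16 of the float32 significand's 24 bits, leaving headroom for rounding error to hide in and vanish on conversion back, whereas 24-bit data fills the significand completely, so any rounding shows up. A rough check of that hunch:

import numpy as np

down, up = 10.0 ** (-3.0 / 20.0), 10.0 ** (3.0 / 20.0)

def round_trip(bits):
    """-3 dB then +3 dB in a float32 intermediate format, back to `bits`-bit integers."""
    ramp = np.arange(-2**(bits - 1), 2**(bits - 1), dtype=np.int32)
    x = ramp.astype(np.float32)
    x = (x.astype(np.float64) * down).astype(np.float32)
    x = (x.astype(np.float64) * up).astype(np.float32)
    return int(np.count_nonzero(np.rint(x).astype(np.int32) != ramp))

print("16 bit deviations:", round_trip(16))   # expect 0 (nulls)
print("24 bit deviations:", round_trip(24))   # expect > 0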
 
Hey, that Reference [1] cite calls to mind a past forum post:
 
Seth Perlstein |Cakewalk|
Rain
"There’s a reason SONAR just sounds better. SONAR's industry-first, end-to-end, 64-bit double precision floating point mix engine allows you to mix with sonic clarity using a suite of versatile effects, powerful mixing tools, and endless routing possibilities."
http://www.cakewalk.com/P...ouble-Precision-Engine

Sorry couldn't resist...

 
Yes, a 64-bit double precision audio engine does sound better than a 32-bit float, 24-bit, etc. engine. It can be proved mathematically that there will be fewer rounding errors in the summing with a 64-bit audio engine vs. others.
 
Comparing 64-bit audio engine to 64-bit audio engine, I doubt there would be a difference.

SP

http://forum.cakewalk.com/Sound-Quality-of-Sonar-X1-m2507939-p13.aspx#2513668
 
Sorry, couldn't resist either...
post edited by Goddard - 2013/12/14 22:14:13
#83
Goddard
Max Output Level: -84 dBFS
  • Total Posts : 338
  • Joined: 2012/07/21 11:39:11
  • Status: offline
Re: 64 bit engine? 2013/12/14 22:31:59 (permalink)
D K
^^^ Game, Set, Match ^^^ - That is.. for anyone whose primary concern is about performing, capturing, mixing and presenting...music

 
Seeing as how you feel compelled for some reason to keep score here, why don't you instead tell us all about how much improvement you heard in your Tango 24 after shelling out for that BLA mod?
 
Btw, we'll be expecting objective proof...

#84
Goddard
Max Output Level: -84 dBFS
  • Total Posts : 338
  • Joined: 2012/07/21 11:39:11
  • Status: offline
Re: 64 bit engine? 2013/12/14 23:41:54 (permalink)
Anderton
lawp
so the dpe is/was just marketing hype?


 
As I've said before...when the 64-bit engine was introduced, the world of audio engines was quite different and it was a major step forward.
 
Please note this is my personal opinion and does not speak for Cakewalk.



Craig, with all respect (and I've been enjoying your writings since Polyphony and Device days), "double precision" DAW audio engines had already been around for some time before Sonar's DPE. SAW used 64-bit processing (running native on PC), and Digi used 48-bit/56-bit DSP chips for PT TDM (with 24-bit paths between chips?) giving effective double precision (or at least, the necessary extended precision for accumulation when mixing 24-bit audio).
 
Iirc, CPA/Sonar's audio engine was built on DirectX and used single precision floats for processing. And iirc, even when Sonar was re-coded into a 64-bit application (Sonar 4?), it still processed using floats until Sonar 5 finally came out with the DPE.
 
Hopefully Noel will correct me if I'm wrong here, but it seems to me that it was not until Intel and AMD added streaming SIMD (SSE) functionality to their processors (supplanting the need to rely on the x87 FPU), along with the move away from a DirectX foundation, that it became practical, performance-wise, to implement higher precision processing natively in Sonar (as well as in plug-ins, such as those running under VST, which was revised for doubles around that time too).
 
That said, a 64-bit DAW with a 64-bit DPE was a pretty nifty trick back then (and still is).
 
But yeah, a DAW with a DPE is no longer a novelty. Even have one running on some i-devices now!
 
Otoh, some DAW developers still flatly reject double-precision, such as this one (who btw does know Jack):
 

64 bit processing is a completely bogus sales/marketing tactic. No (let me repeat that, no) double blind test has ever shown any difference to audio processing with 64 bits over 32. Synthesis is slightly different, but plugins are free to do whatever they want internally, and simply convert 32 bit for input and output. The same is true of any other processing. Certainly nothing that Ardour itself does to the signal would benefit from 64 bit processing. If you think that Reaper (or any other system) "sounds better" because of 64 bits, you need to setup a properly structured double blind test. I almost guarantee that your belief will be gone by the end of the test.

https://community.ardour.org/node/5812
 
Hey, remind you of any recent forum threads around here?
#86
drewfx1
Max Output Level: -9.5 dBFS
  • Total Posts : 6585
  • Joined: 2008/08/04 16:19:11
  • Status: offline
Re: 64 bit engine? 2013/12/15 13:31:11 (permalink)
Goddard
Still, noteworthy that the processed 16-bit streams always nulled, even when using floats, whereas the 24-bit streams never nulled when floats were used. Hmm, maybe double precision does matter. Good to hear the bug's been squashed.
 



 
You crack me up. 
 
You just can't seem to comprehend the difference between an error being present and being audible or meaningful. Until you admit that those are not the same thing, I will not waste any more of my time on you.
 
But I will wish you good luck in your future endeavors.

 In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
#87
Anderton
Max Output Level: 0 dBFS
  • Total Posts : 14070
  • Joined: 2003/11/06 14:02:03
  • Status: offline
Re: 64 bit engine? 2013/12/15 17:38:01 (permalink)
Goddard
Anderton
lawp
so the dpe is/was just marketing hype?


 
As I've said before...when the 64-bit engine was introduced, the world of audio engines was quite different and it was a major step forward.
 
Please note this is my personal opinion and does not speak for Cakewalk.



Craig, with all respect (and I've been enjoying your writings since Polyphony and Device days), "double precision" DAW audio engines had already been around for some time before Sonar's DPE. SAW used 64-bit processing (running native on PC), and Digi used 48-bit/56-bit DSP chips for PT TDM (with 24-bit paths between chips?) giving effective double precision (or at least, the necessary extended precision for accumulation when mixing 24-bit audio).



According to the SAW site, the last version of SAW released in 2001 had a 24-bit audio engine. Pro Tools had a 48-bit fixed engine but bottlenecked to 24 bits when going through the TDM bus.

The first 3 books in "The Musician's Guide to Home Recording" series are available from Hal Leonard and http://www.reverb.com. Listen to my music on http://www.YouTube.com/thecraiganderton, and visit http://www.craiganderton.com. Thanks!
#89
slartabartfast
Max Output Level: -22.5 dBFS
  • Total Posts : 5289
  • Joined: 2005/10/30 01:38:34
  • Status: offline
Re: 64 bit engine? 2013/12/15 18:15:10 (permalink)
Wow. A lot of fantastic stuff here. 24 bits 32 bits 64 bits...
Hard to argue that 64 bits is not better in theory. There might conceivably be situations where it would make a difference. Anyone arguing for 64 bits for that reason ready to argue against 128 bits? 256 bits? 1024 bits? Computers may be able to chug those out without a glitch, if not now then surely in the audio world of the future.
Or are we all going to accept that, short of an infinitely large internal representation, there is no reliable way to protect ourselves from the possibility of audible errors in an infinitely long or recursive processing chain?
#90