• SONAR
  • The science of sample rates (p.21)
2014/01/24 18:45:27
soens
>>Clearly my superior superiority is vastly more superior than your superior lack of superiority.<<
 
To be truly superior, you must be superior in ALL directions. Infinity to infinity.
2014/01/24 19:06:41
brundlefly
John
Dave, read brundlefly's post and view the avatar closely. I think it explains what has been going on in this thread. 



  No serious offense intended, of course. I just couldn't resist.
 
 
2014/01/24 19:12:41
jb101
brundlefly
I have just one question. Is this an Input meter or an Output meter?
 
I always said brundlefly was an observant chap.
2014/01/24 19:34:16
John
He sure is. I'm very glad we have him.
2014/01/24 19:41:28
Goddard
Noel Borthwick [Cakewalk]
Great article on The Science Of Sample Rates that discusses the pros and cons of high sample rates.
It's long but well worth the read.

Ok, let me explain further why I don't think this is really such a great article.
 

Now in 2013, the 16/44.1 converter of a Mac laptop can have better specs and real sound quality than most professional converters from a generation ago, not to mention a cassette deck or a consumer turntable. There’s always room for improvement, but the question now is where and how much?

 
I've already explained my critique of the above passage in several earlier posts here, but to make it perfectly clear (because the author of the article certainly failed to pick up on this point): anyone with a recent-vintage PC or Mac having an onboard "High Definition Audio" ("Intel HDA") codec chip is already equipped to play back 24-bit/96k and 24-bit/192k digital audio, and can therefore evaluate for themselves the assertions made later in the article about high sampling rates being "harmful" to audio quality, even if their audio interface lacks 96k or 192k sampling.
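For anyone who wants to run that evaluation themselves, here's a minimal sketch (Python, standard library only; the filename and tone parameters are my own choices, purely illustrative) that generates a 24-bit/96k test file which any HDA-era onboard codec should be able to play:

```python
import math
import struct
import wave

# Illustrative test-tone generator: a 1 kHz sine written as a
# 24-bit / 96 kHz mono WAV file.
SAMPLE_RATE = 96_000
SAMPLE_WIDTH = 3        # 3 bytes = 24-bit PCM
FREQ_HZ = 1_000.0
DURATION_S = 1.0
AMPLITUDE = 0.5         # about -6 dBFS, leaves headroom

frames = bytearray()
n_samples = int(SAMPLE_RATE * DURATION_S)
for n in range(n_samples):
    sample = AMPLITUDE * math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE)
    value = int(sample * (2**23 - 1))      # scale to signed 24-bit range
    # Signed little-endian 24-bit: keep the low 3 bytes of a 32-bit int
    # (two's complement survives the truncation).
    frames += struct.pack("<i", value)[:3]

with wave.open("tone_24_96.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(SAMPLE_WIDTH)
    wf.setframerate(SAMPLE_RATE)
    wf.writeframes(bytes(frames))
```

Feed the resulting file to the onboard codec's output and you have a direct way of hearing what 24/96 playback does (or doesn't) do on your own machine.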
 

Technology always advances and today, external clocking is far more likely to increase distortion and decrease accuracy when compared to a converter’s internal clock. In fact, the best you can hope for in buying a master clock for your studio is that it won’t degrade the accuracy of your converters as you use it to keep them all on the same page.
 
There are however, occasions when switching to an external clock can add time-based distortion and inaccuracies to a signal that some listeners may find pleasing. That’s a subjective choice, and anyone who prefers the sound of a less accurate external clock to a more accurate internal one is welcome to that preference.

 
The above-quoted passage points to the author's misunderstanding and misconstrual of the SOS magazine article "Does Your Studio Need A Digital Master Clock?" to which he linked.
 
The author's claim that some listeners may find the time-based distortion and inaccuracies added when switching to an external clock pleasing, and may even prefer them, apparently stems from these two remarks in the linked SOS review:
 
SOS
So, although sonic differences may be perceived when using an external clock as compared to running on an internal clock, and those differences may even seem quite pleasant in some situations, this is entirely due to added intermodulation distortions and other clock-recovery related artifacts rather than any real audio benefits, as the test plots illustrate.
 
Overall, it should be clear from these tests that employing an external master clock cannot and will not improve the sound quality of a digital audio system. It might change it, and subjectively that change might be preferred, but it won’t change things for the better in any technical sense.

 
What I found seriously wrong with this portion of the article was that the author misunderstood (or just failed to grasp) the cause of the problem he was writing about (that external clocking may cause some converters to distort), a cause that was explained in the linked SOS article:
 
SOS
So even though a very good-quality external word clock is being supplied here, the performance of the A-D converter becomes noticeably (and audibly) worse than when running on its internal clock.
 
This is not an unusual situation by any means, and the reduction in audio quality is not related to the supposed quality of the reference clock source either...
 
Moreover, the implication is that the A-D converter’s external clock-recovery circuitry has a far more significant effect on the A-D’s performance than the quality or precision of the external reference clock source.
 
...it is certainly possible to synchronise an A-D to an external clock without affecting its performance, but that it takes a skillfully designed and manufactured clock-recovery system to do it.

 
Namely, the author failed to grasp that even though the internal clocking accuracy of converters has improved, the distortion that most of the converters tested by SOS produced when clocked externally was not due to any lower relative accuracy of the external master clocks under test. It was instead due to deficiencies in the converters' own external clock extraction/recovery (i.e., slaving) circuitry, even when clocked by a more accurate, lower-jitter external master clock!
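To see why sampling-clock timing error produces distortion at all, here's a rough, illustrative sketch (Python; the Gaussian jitter model and the numbers are simplifying assumptions of my own, not SOS's measurement setup) that samples a sine at jittered instants and measures the error against ideal sampling:

```python
import math
import random

def rms_jitter_error(freq_hz, fs=96_000, jitter_rms_s=0.0, n=10_000, seed=1):
    """RMS error between a sine sampled at ideal instants and one sampled
    with Gaussian timing jitter (relative to unit amplitude). Illustrative
    only: real clock-recovery artifacts are more structured than this."""
    rng = random.Random(seed)
    err_sq = 0.0
    for k in range(n):
        t_ideal = k / fs
        t_jittered = t_ideal + rng.gauss(0.0, jitter_rms_s)
        err = (math.sin(2 * math.pi * freq_hz * t_jittered)
               - math.sin(2 * math.pi * freq_hz * t_ideal))
        err_sq += err * err
    return math.sqrt(err_sq / n)

# 10 kHz tone: error grows with jitter (analytically about 2*pi*f*sigma/sqrt(2))
clean = rms_jitter_error(10_000, jitter_rms_s=0.0)
small = rms_jitter_error(10_000, jitter_rms_s=1e-9)    # 1 ns RMS jitter
large = rms_jitter_error(10_000, jitter_rms_s=10e-9)   # 10 ns RMS jitter
print(clean, small, large)
```

Note the error also scales with signal frequency, which is why jitter-induced artifacts show up as inharmonic sidebands around high-frequency content rather than as anything "musical".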
 
Moreover, I feel the author blew up the SOS review's remark that distortion caused by external clocking might be found pleasant: he further suggested on his own that some listeners might prefer it and were welcome to that preference, and then cast external-clocking distortion as one of many subjective choices. He entirely failed to note that the distortion SOS found when using external clocking was always very small and might not even be audible, as had been clearly pointed out in the SOS article:
 
SOS
It’s important to take on board that in all of the above examples, where there was an increase in noise and distortion when running on an external clock, the change was always very small, and arguably even negligible in some cases. Without superb monitoring conditions these subtle changes might be inaudible, and would certainly be much less significant than, say, a sub-optimally placed microphone as far as the overall quality of a recording is concerned.

 
The author's casting of the distortion caused by external clocking as a "subjective preference" struck me as a rather bizarre focus on the problem revealed. It made me wonder whether he understood that some people, such as anyone producing for film/video, might always need to slave to external clocks solely as a matter of overriding practical necessity, not out of any subjective preference for the sound of distortion, as SOS had pointed out: 
SOS
The only situation where a dedicated master clock unit is truly essential is in systems that have to work with, or alongside, video, such as in music-for-picture and audio-for-video post-production applications. It’s necessary here because there must be a specific integer number of samples in every video picture-frame period, and to achieve that, the audio sample rate has to be synchronised to the picture frame rate. The only practical way to achieve that is to use a master clock generator that is itself sync’ed to an external video reference, or which generates a video reference signal to which video equipment can be sync’ed.
...
 
Moreover, the audible problems of not synchronising multiple digital devices together correctly are far worse than the very small potential increases in noise and distortion that may result from forcing an A-D to slave to an external reference clock.
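SOS's integer-samples-per-frame requirement is easy to verify with exact arithmetic. A quick sketch (the rates below are standard examples of my choosing, not figures from the SOS article):

```python
from fractions import Fraction

def samples_per_frame(sample_rate, frame_rate):
    """Exact audio samples per video frame; an integer only when the
    audio clock divides evenly into the picture frame rate."""
    return Fraction(sample_rate) / Fraction(frame_rate)

# 48 kHz against PAL 25 fps: exactly 1920 samples per frame.
print(samples_per_frame(48_000, 25))                       # 1920
# 48 kHz against NTSC 30000/1001 fps: 1601.6 samples per frame,
# which is why NTSC workflows spread samples over a 5-frame sequence.
print(samples_per_frame(48_000, Fraction(30_000, 1_001)))  # 8008/5
```

Either way, the audio and video clocks must be locked to a common reference or the sample count drifts against picture, which is the practical necessity SOS is describing.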


 
In this light, the author's next paragraph:
 

This is a theme that we find will pop up again and again as we explore the issue of transparency, digital audio, sampling rates, and sound perception in general: Sometimes we do hear real, identifiable differences between rates and formats, even when those differences do not reveal greater accuracy or objectively “superior” sound.

 
revealed to me that the author, in taking the remarks from the linked SOS external clocking review and shaping them to fit the theme of his article, had missed the real technical significance of the SOS review and misconstrued it. In fairness, the author did point out that converters are more likely to perform better when internally clocked and may distort when externally clocked, but that was the only thing he accurately related from the SOS review.
 
Namely, the SOS reviewer's remarks that some people might prefer such converter distortion, and his observation that the distortion was atonal IM distortion and thus not actually musical, were given as a (perhaps sarcastic) warning to anyone preferring certain converters for their "warm" distortion feature (e.g., as offered by Lavry among others). Had the author grasped that, he could have ridden that subjective-preference horse home as well, or instead.
 
In summary, the following

There are however, occasions when switching to an external clock can add time-based distortion and inaccuracies to a signal that some listeners may find pleasing. That’s a subjective choice, and anyone who prefers the sound of a less accurate external clock to a more accurate internal one is welcome to that preference.

 
was not only technically incorrect (the external clocks were more, not less, accurate than the internal clocks, and inaccuracy of the external clocks was not the cause of the problem) but also made it seem that the choice to use external clocking is merely a matter of preferring the distortion it can produce, revealing to me a lack of knowledge on the author's part as well as a misconstruing of the SOS reviewer's remarks.
 
Next we come to this: 

Designers can oversample signals at the input stage of a converter and improve the response of filters at that point. When this is done properly, it’s been proven again and again that even 44.1kHz can be completely transparent in all sorts of unbiased listening tests.

 
The problem I have with this part of the article is that the AES journal "engineering report" to which the author linked did not relate to oversampling, nor did it prove conclusively "that even 44.1kHz can be completely transparent" as the author alleged. In any event, there are serious doubts surrounding the validity of the reported test results.
 
The test described in the JAES report, which has become known as the "Boston Audio Society Double-Blind Test" (BAS DBT, full text here and further info here), evaluated whether listeners in a double-blind test could discriminate DVD-A/SACD content playback from the same content passed through a 16/44.1kHz A/D/A "bottleneck" (a CD recorder with realtime monitoring), as was described in the report:
 
JAES
This engineering report, then, describes double-blind comparisons of high resolution
stereo playback with the same two-channel analog signal looped through a 16/44.1 A/D/A chain

 
It should be noted that there was no actual testing of any 44.1kHz source content (e.g., no CD-DA content); rather, only a 16/44.1k A/D/A chain (the CD recorder's monitoring function) was used, which could be switched into the output path of a DVD-A or SACD player to "degrade" the playback to "CD quality".
 
It's unclear what oversampling of signals the author was referring to. The only reference to transparency I've found in the BAS DBT report was in the introductory paragraph, which cited much earlier blind tests showing that CD audio was "transparent" in comparison to source tapes. If the author was referring to the SACD and DVD-A content used for the testing, then he was possibly confusing oversampling with material recorded at higher sample rates. If he was referring to oversampling in the CD recorder's converters, the report itself names no particular CD recorder and gives no specifications; the later-added "explanation" webpage indicates that an HHB pro model was used, but again without specifications, so possibly the author was simply assuming its converters employed oversampling (as they likely do). 
 
The BAS DBT report received quite a lot of attention when it first emerged and has since been criticized for a number of reasons, including allegations that the DVD-A and SACD discs used for the test had been produced from older source material not originally recorded or produced in actual hi-res formats. On that view the discs did not actually contain any hi-res content, only content corresponding to 16/44.1k, so the test was not a true comparison against hi-res source material.
 
Moreover, despite the controversy and doubts surrounding the validity of the BAS DBT, no follow-up or repeatability testing has ever been conducted afaik. It thus remains a single, isolated, unverified instance, not support for "proven again and again" as the author alleges in the article.
 
Ok, that's enough for now. Hopefully it's becoming clearer why I consider that the facetious scientist doesn't understand significant aspects of what he's writing about and, as a consequence, is spewing misinformation in pursuit of his subjective/objective theme.
 
If in doubt, and assuming you understand binary number precision, read the "32 Bits and Beyond" section of his article about bit depth here (and see if you don't think he should change the "You" in the title to read "I").
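On the binary-precision point: a 32-bit IEEE float carries a 24-bit significand, so it represents every 24-bit integer PCM sample value exactly, but not values needing 25 bits. A quick demonstration anyone can run (Python, standard library; the helper name is mine):

```python
import struct

def to_float32(x):
    """Round-trip a Python float through IEEE-754 single precision."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# Every 24-bit PCM sample value survives conversion to 32-bit float exactly:
assert to_float32(2**23 - 1) == 2**23 - 1    # max positive 24-bit sample
assert to_float32(-(2**23)) == -(2**23)      # min negative 24-bit sample
# ...but an integer needing 25 bits of precision does not:
assert to_float32(2**24 + 1) == 2**24        # the last bit rounds away
```

This is why 24-bit fixed-point audio converts losslessly into a 32-bit float engine, and why claims about float bit depths deserve the scrutiny of anyone who understands binary number precision.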
 
More to follow, maybe...
2014/01/24 19:52:48
Goddard
brundlefly
I have just one question. Is this an Input meter or an Output meter?
 
It's actually a warning, just like that "Facetious title" doodle.
 
As for I or O, people can think for themselves and arrive at their own conclusions about that.
 
But as for the technical content of my posts, I don't assert anything whose accuracy I'm not certain of or can't prove.
2014/01/24 20:19:41
John
You know, Goddard, it's time to give it a rest. I'm glad you have such an analytical mind and see so many faults in someone else's work, but at some point it's just obsessive and not all that informative.  
2014/01/24 20:37:30
Vab
Goddard, do you actually think anybody here is going to bother to read posts that long? No one has an attention span on the internet.
2014/01/24 20:46:20
Goddard
bitflipper
It's neither trivial nor inexpensive to come up with good analog anti-aliasing and reconstruction filters for audio sampling, especially steep ones (e.g. brickwall) with good freq and phase characteristics, which was the only option until DSP solutions became feasible. Think about why people complained that CDs sounded "harsh" and "metallic". Active filters helped, but the cost...

 
I'm sure you're aware that anti-aliasing filters in modern converters are not steep. They don't need to be, because the oversampled Nyquist frequency is hundreds or thousands of times higher than the top of the audio range. TBH, I haven't examined many interfaces with a magnifying glass, but my guess would be that in most cases the anti-aliasing filter consists of two capacitors and a resistor.

 
Yes, I'm familiar with current oversampling converters and their relaxed analog filter requirements. I'm also aware of NOS converters and some of the work being done on filters for those.
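For anyone curious just how relaxed the analog filter gets once you oversample, here's a back-of-envelope sketch (the 40 kHz pole, 48k base rate, and 64x factor are illustrative assumptions of mine, not figures from any particular converter):

```python
import math

def rc_attenuation_db(f_hz, fc_hz):
    """Magnitude response of a single-pole RC low-pass at frequency f,
    in dB: -10*log10(1 + (f/fc)^2)."""
    return -10 * math.log10(1 + (f_hz / fc_hz) ** 2)

FC = 40_000.0  # assumed gentle analog pole just above the audio band

# Without oversampling (fs = 48k), aliases fold down from just above
# 24 kHz, where a one-pole filter is hopeless:
print(rc_attenuation_db(24_050, FC))       # ~ -1.3 dB
# With 64x oversampling the first alias band sits near 3.05 MHz,
# where even this trivial filter gives serious rejection:
print(rc_attenuation_db(3_048_000, FC))    # ~ -37.6 dB
```

Which is the arithmetic behind "two capacitors and a resistor": the digital decimation filter does the steep work, and the analog stage only has to knock down energy megahertz away from the audio band.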
 
bitflipper
As to why people complained about early CDs, that comes down, I think, to early converters not being oversampled. They did need steep filters, and were prone to aliasing. But we're talking the 1960s. Anybody with a $5 RealTek chip today has a vastly more capable interface than those first-generation recorders.

 
CD-A came out in the early '80s. Apogee got started by supplying filter upgrades for a number of early PCM recorders which had pretty crap filters at the time (when people were complaining about the sound of their CDs).
 
Regarding the capabilities of Realtek codec chips, perhaps you missed my initial post in this thread.
 
(negotiated patent licenses for CD-A in the early '90s, so pretty familiar with a lot of the tech)
 

bitflipper
Re: the 4004. Man, you're as much of a dinosaur as I am! Back then I used to read electronics catalogs the way most young men devoured skin mags. I distinctly remember the week the new Intel catalog arrived that included the 4004. I had the school (where I was an instructor) order one - for the students of course - and built an analog sequencer with it.
 
It was the very same week a bucket-brigade analog shift register showed up on my desk. That BBD chip had cost a day's wages, but I was sure it was gonna be the future of audio echo units. Unfortunately, I immediately destroyed it with a static discharge, said the heck with it. A few years later along comes a company called Eventide Clockworks, who'd actually done it. That coulda been me, I thought, but for lack of a wrist strap! And laziness.



Yeah, been around since slipstick days (and exams with multiple-choice answers consisting of the same number with the decimal in different locations). Can still rock though.
 
Hey, I remember when BBD chips first came out (Reticon iirc). May still have a Matsushita BBD in a parts bin somewhere for a project I never got to. Those were fun chips to play around with. As were many of Craig's projects.
 
2014/01/24 20:58:07
Goddard
Noel, thanks for all the info.
 
Btw, regarding filter designs in converters, you may find this of interest.
 
Noel Borthwick [Cakewalk]
In all 3 of the cases I listed depending on the corresponding render bit depth setting in preferences.
By default however SONAR only creates float files when doing bounces or freezes since the render bit depth is set to float.
 
Goddard
Noel Borthwick [Cakewalk]
The storage format on disk is always standard WAV file format (WAVE_FORMAT_PCM or WAVE_FORMAT_IEEE_FLOAT, WAVE_FORMAT_EXTENSIBLE in some cases). The bit depth is determined as follows:
 
1. Bit depth for recorded project audio data is determined by the record bit depth setting in preferences.
2. Bit depth for imported project audio data is determined by the import bit depth setting in preferences.
3. Bit depth for internally rendered (bounce, freeze, etc) project audio data is determined by the render bit depth setting.
 
Typically a SONAR project will contain multiple bit depth audio depending on the data it was created with and the intermediate bounce operations performed. The different bit depths are all converted to 32 or 64 bit float at playback time depending on the double precision mix engine setting.
 
Ah, I see. That's precisely what I was curious to know. In what cases would float format WAV file storage be done?
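As a footnote to the format-tag point: the WAVE_FORMAT_PCM / WAVE_FORMAT_IEEE_FLOAT tags Noel mentions can be checked directly on disk. A small sketch (Python, standard library; the helper name, filename, and hand-built test file are mine) that reads wFormatTag from a WAV's fmt chunk:

```python
import struct

def wav_format_tag(path):
    """Return the wFormatTag field from a WAV file's fmt chunk.
    1 = WAVE_FORMAT_PCM, 3 = WAVE_FORMAT_IEEE_FLOAT, 0xFFFE = EXTENSIBLE."""
    with open(path, "rb") as f:
        riff, _size, wave_id = struct.unpack("<4sI4s", f.read(12))
        assert riff == b"RIFF" and wave_id == b"WAVE"
        while True:
            header = f.read(8)
            if len(header) < 8:
                raise ValueError("no fmt chunk found")
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            if chunk_id == b"fmt ":
                return struct.unpack("<H", f.read(2))[0]
            f.seek(chunk_size + (chunk_size & 1), 1)  # chunks are word-aligned

# Hand-build a minimal 32-bit float mono WAV (format tag 3) to exercise it.
samples = struct.pack("<4f", 0.0, 0.5, -0.5, 1.0)
fmt = struct.pack("<HHIIHH", 3, 1, 48_000, 48_000 * 4, 4, 32)
body = (b"WAVE"
        + b"fmt " + struct.pack("<I", len(fmt)) + fmt
        + b"data" + struct.pack("<I", len(samples)) + samples)
with open("float_test.wav", "wb") as f:
    f.write(b"RIFF" + struct.pack("<I", len(body)) + body)

print(wav_format_tag("float_test.wav"))  # 3 (WAVE_FORMAT_IEEE_FLOAT)
```

Pointing the same reader at files SONAR has bounced or frozen would show which of Noel's three cases produced float storage.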