• SONAR
  • The science of sample rates (p.19)
2014/01/23 06:53:54
Vab
I am 96 kHz / 24 bit master race.

You are 44.1 kHz / 16 bit peasants and over a decade out of date.

Clearly my superior superiority is vastly more superior than your superior lack of superiority.
2014/01/23 09:03:08
chuckebaby
my head is just throbbing trying to digest all this information.
2014/01/23 09:07:35
Beepster
Spock in his home studio...
2014/01/23 10:02:06
Vab
chuckebaby
my head is just throbbing trying to digest all this information.


It's the same logic as CPU logic.

More cores is faster. 8 cores is twice as better than 4 cores. If I have an 8-core 4 GHz CPU, then that means my CPU is frigging 32 GHz fast!

Well, that's what some Macintosh user told me.
2014/01/23 11:19:10
jb101
Beepster
Spock in his home studio...
2014/01/24 05:12:36
Goddard
John T
Beepster
Ultra high sampling rates can lead to LESS accuracy because??? The current computers aren't fast enough??
 

 
Yes, that's the claim more or less, though the problem isn't computers per se, it's an issue for the analogue components in the converter too.

This is the one thing from the article that you could reasonably call contentious. It's Lavry's claim, and his reasoning is sound - faster sample rates potentially lead to less accuracy at each sample point, because the component voltages can't settle fast enough - but as far as I'm aware, nobody, including Lavry, has actually blind-tested this to see whether it makes any difference.
 
The other side to this, though, is that, as in your summary of the other points, there are no gains from ultra-high sample rates, and that much is tested and known.




People have unfortunately taken Lavry's private whitepaper assertions as valid, no doubt because of his reputation, although he's never submitted his assertions for scrutiny and peer review in the JAES or any other journal (for which he would also need to submit some verifiable proof).
 
Nyquist-Shannon only dictates the minimum sampling frequency required for a band-limited continuous-time (analog) input signal (and assumes an ideal reconstruction), but it imposes no actual restriction on employing higher sampling frequencies, and it certainly does not imply that a sampling frequency above Nyquist is deleterious, as is well understood by people who actually have to design sampling systems.
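
Anyone who wants to see that numerically can try a minimal sketch like the one below (my own illustration, not from Lavry or anyone cited here; assumes Python with numpy): reconstruct a band-limited signal from samples taken at several rates above Nyquist. The residual error at every rate comes from truncating the sinc series to a finite window, not from the choice of rate.

```python
import numpy as np

def test_signal(t):
    # Band-limited content at 1, 7, and 19 kHz, all below the 20 kHz band edge
    return (np.sin(2 * np.pi * 1e3 * t)
            + 0.5 * np.sin(2 * np.pi * 7e3 * t)
            + 0.25 * np.sin(2 * np.pi * 19e3 * t))

def reconstruct(t, samples, fs):
    # Whittaker-Shannon: x(t) = sum_n x[n] * sinc(fs*t - n)
    n = np.arange(len(samples))
    return np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])

for fs in (44100.0, 96000.0, 192000.0):
    samples = test_signal(np.arange(int(0.02 * fs)) / fs)  # 20 ms of samples
    t = np.linspace(0.008, 0.012, 400)   # evaluate mid-window, away from edges
    err = np.max(np.abs(reconstruct(t, samples, fs) - test_signal(t)))
    print(f"fs = {fs / 1000:5.1f} kHz  max reconstruction error = {err:.1e}")
```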
 
Wescott
Sampling: What Nyquist Didn't Say, and What to Do About It
 
A Design Approach
So far in this discussion I have tried my best to destroy some commonly misused rules of thumb. I haven't left any rules of thumb in their wake. Why? Because solving problems in sampling and anti-alias filtering is not amenable to rules of thumb, at least not general ones. When you're solving sampling problems you're best off working from first principles and solving the problem by analysis.

In designing sampled-time systems, the variables that we need to juggle are signal accuracy (or fidelity) and various kinds of system cost (dollar cost, power consumption, size, etc.). Measured purely from a sample rate perspective, increasing the signal sample rate will always increase the signal fidelity. It will often decrease the cost of any analog antialiasing and reconstruction filters, but it will always increase the cost of the system digital hardware, which will not only have to do its computations faster, but which will need to operate on more data. Establishing a system sample rate (assuming that it isn't done for you by circumstances) can be a complex juggling act, but you can go at it systematically.

In general when I am faced with a decision about sampling rate I try to estimate the minimum sampling rate that I will need while keeping my analog electronics within budget, then estimate the maximum sampling rate I can get away with while keeping the digital electronics within budget. If all goes well then the analog-imposed minimum sample rate will be lower than the digital-imposed maximum sample rate. If all doesn't go well then I need to revisit my original assumptions about my requirements, or I need to get clever about how I implement my system.

The common criteria for specifying a sampled-time system's response to an input signal are, in increasing order of difficulty:

1. the output signal's amplitude spectrum should match the input signal's spectrum to some desired accuracy over some frequency band;
2. the output signal's time-domain properties should match the input signal's time-domain properties to some desired accuracy, but with an allowable time shift;
3. the output signal's time-domain properties should match the input signal to some desired accuracy, in absolute real time.

 
Note that "amplitude spectrum" refers to frequency response (amplitude of the output across the frequency band in relation to the input), while "time-domain properties" refers to latency as well as phase response (phase shift in output in relation to input), which are in many instances mutually exclusive (can't have good performance in one without seriously messing up the other) and thus impose major practical restrictions upon anti-aliasing and reconstruction filter design.
 
It's unbelievable that anyone could call the designers of converters that deliver a flat frequency response and minimal phase shift from 20 Hz to 20 kHz "lazy", whatever the sampling rate employed, and most disappointing that anyone would be taken in by such guff and perpetuate it further.
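
For what it's worth, Wescott's "juggling act" above is easy to turn into a back-of-the-envelope feasibility check. A sketch with invented budgets (none of these numbers come from Wescott or any real device):

```python
import math

# Analog floor: an n-pole anti-alias filter rolls off roughly 6n dB/octave,
# so to get atten_db of rejection at Nyquist, Nyquist must sit this many
# octaves above the passband edge. All budgets below are invented.
passband_hz = 20e3            # audio band to keep flat
analog_order = 3              # affordable analog filter order
atten_db = 60.0               # aliasing rejection target
octaves = atten_db / (6.0 * analog_order)
fs_min = 2 * passband_hz * 2 ** octaves

# Digital ceiling: how fast the processing side can keep up.
cpu_hz = 400e6                # DSP clock budget
cycles_per_sample = 800       # per-sample processing cost
fs_max = cpu_hz / cycles_per_sample

print(f"analog-imposed floor   : fs >= {fs_min / 1e3:.0f} kHz")
print(f"digital-imposed ceiling: fs <= {fs_max / 1e3:.0f} kHz")
print("feasible" if fs_min <= fs_max else "get clever (oversample, then decimate)")
```

With these toy numbers the floor sits just below the ceiling; when it doesn't, you sample fast against a cheap analog filter and decimate digitally, which is exactly what oversampled delta-sigma audio converters do.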
2014/01/24 08:20:52
John T
Goddard


People have unfortunately taken Lavry's private whitepaper assertions as valid, no doubt because of his reputation, although he's never submitted his assertions for scrutiny and peer review in the JAES or any other journal (for which he would also need to submit some verifiable proof).
 
Nyquist-Shannon only dictates the minimum sampling frequency required for a band-limited continuous-time (analog) input signal (and assumes an ideal reconstruction), but it imposes no actual restriction on employing higher sampling frequencies, and it certainly does not imply that a sampling frequency above Nyquist is deleterious, as is well understood by people who actually have to design sampling systems.
 



Man alive. Nobody, including Lavry, thinks that the sampling theorem imposes any such restriction.

Clearly, you like the sound of your own voice, and you're welcome to it. But I don't think the rest of us are asking too much if we expect your gum flapping to at least make the occasional stab at relevance.
2014/01/24 11:10:46
bitflipper
In designing sampled-time systems, the variables that we need to juggle are signal accuracy (or fidelity) and various kinds of system cost (dollar cost, power consumption, size, etc.).

True statement. It applies to engineering in general, whether electronic or mechanical, even software engineering. Quality generally costs more. Even when you do find some elegant solution that raises quality while lowering cost, chances are it took you longer to figure it out, which costs money.
 
Measured purely from a sample rate perspective, increasing the signal sample rate will always increase the signal fidelity.

True again. However, the operative phrase is "measured purely from a sample rate perspective". We're not talking about digital oscilloscopes or RADAR here, but audio, which is unambiguously defined by the mechanical limitations of human hearing. The writer is using "signal fidelity" as a general concept, irrespective of any specific design goals. In the case of audio, "fidelity" is more specific, and means how close the recorded signal sounds to the original. Ears are the final arbiter, not laboratory test equipment (which will always find shortcomings, even inaudible ones). 
 
So while the statement is true in a broad sense, it is somewhat misleading to apply it to the discussion at hand. 
 
It will often decrease the cost of any analog antialiasing and reconstruction filters, but it will always increase the cost of the system digital hardware, which will not only have to do its computations faster, but which will need to operate on more data.

 
True again. I only take issue with "always". All digital components have upper speed limits, so the designer will choose them with the project's design requirements in mind. Usually, the selected components will be capable of performing far beyond the requirements. In purely digital circuitry, increasing speed and capacity is often trivial and incurs little or no additional cost, as long as the designer hasn't painted himself into a corner with bad early decisions. 
 
Also, the cost of analog anti-aliasing and reconstruction filters might be a factor in some equipment, but it's trivial in audio devices. We're talking 5-cent capacitors here.
 
Where design compromises most often come into play is in circuits that have an analog component, namely oscillators and sample-and-hold circuits.
 
With oscillators, it's mainly a cost tradeoff. Fortunately, at the frequencies used in oversampled ADCs, designing an accurate and stable oscillator is neither difficult nor expensive. You could shave a few pennies off at the expense of accuracy, but I can't imagine why anyone would bother given how crucial it is to the device's performance. Plus you'll normally only have one oscillator, even in a multi-channel interface, so it's not the most cost-effective place to save money anyway.
 
It's the sample-and-hold circuit that's going to have inherent tradeoffs regardless of cost, because its accuracy is never going to be consistent across all sampling frequencies. In this particular part of the ADC, the designer has no choice but to pick a target sample rate to optimize the S/H for. If his market is primarily the professional studio, he'll assume 96 kHz - and have to accept slightly less-than-optimal performance at 44.1 kHz. This is why some interfaces perform better at one rate than the other. It's also why Dan Lavry refuses to sell a 192 kHz interface, and why the ability to sample at 192 kHz should not be taken as a predictor of how well a device will perform at more commonly used rates.
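
To put rough numbers on why the S/H can't be optimal everywhere, model acquisition as a single-pole RC settling exponentially: settling to within half an LSB at N bits takes about (N+1)·ln 2 time constants. Every constant below is invented for illustration - none of them come from any datasheet:

```python
import math

tau_ns = 60.0                     # assumed S/H time constant (illustrative)
acq_fraction = 0.25               # assume a quarter of each period is acquisition
bits = 24
need = (bits + 1) * math.log(2)   # ~17.3 time constants to settle within 1/2 LSB

for fs in (44100, 96000, 192000, 384000):
    acq_ns = acq_fraction * 1e9 / fs
    have = acq_ns / tau_ns
    verdict = "ok" if have >= need else "falls short"
    print(f"fs = {fs / 1000:6.1f} kHz: {acq_ns:6.0f} ns to settle, "
          f"{have:5.1f} tau available vs {need:.1f} needed ({verdict})")
```

The available settling margin halves every time the rate doubles, which is the mechanism behind Lavry's claim - whether or not the shortfall is ever audible in practice.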
 
(BTW, I have designed sampling circuits. Not for audio, but for industrial controls. Same principles, different requirements.)
2014/01/24 11:27:07
gswitz
Thanks, Bit. As always, you are very informative.
2014/01/24 11:45:31
The Maillard Reaction
bitflipper
It's the sample-and-hold circuit that's going to have inherent tradeoffs regardless of cost, because its accuracy is never going to be consistent across all sampling frequencies. In this particular part of the ADC, the designer has no choice but to pick a target sample rate to optimize the S/H for. 



This idea caught my attention; I'd like to learn more about it.
 
best regards,
mike
 