Vab
Max Output Level: -87 dBFS
- Total Posts : 192
- Joined: 2013/12/24 18:15:50
- Status: offline
Re: The science of sample rates
2014/01/23 06:53:54
(permalink)
I am 96 KHz / 24 bit master race.
You are 44.1 KHz / 16 bit peasants and over a decade out of date.
Clearly my superior superiority is vastly more superior than your superior lack of superiority.
I7 980 | Asus Rampage III Extreme | 12 Gb ram | SLI GTX 680 | Creative X-fi Titanium HD | 2x4 Tb HDD | 128+512 Gb SSDs | Sonar X3 Producer | Yamaha DGX 630 | Samson Go Mic
|
chuckebaby
Max Output Level: 0 dBFS
- Total Posts : 13146
- Joined: 2011/01/04 14:55:28
- Status: offline
Re: The science of sample rates
2014/01/23 09:03:08
(permalink)
my head is just throbbing trying to digest all this information.
Windows 8.1 X64 Sonar Platinum x64 Custom built: Asrock z97 1150 - Intel I7 4790k - 16GB corsair DDR3 1600 - PNY SSD 220GBFocusrite Saffire 18I8 - Mackie Control
|
Beepster
Max Output Level: 0 dBFS
- Total Posts : 18001
- Joined: 2012/05/11 19:11:24
- Status: offline
Re: The science of sample rates
2014/01/23 09:07:35
(permalink)
Spock in his home studio...
|
Vab
Max Output Level: -87 dBFS
- Total Posts : 192
- Joined: 2013/12/24 18:15:50
- Status: offline
Re: The science of sample rates
2014/01/23 10:02:06
(permalink)
chuckebaby
my head is just throbbing trying to digest all this information.
It's the same logic as CPU logic. More cores is faster. 8 cores is twice as better than 4 cores. If I have an 8-core 4 GHz CPU, then that means my CPU is frigging 32 GHz fast! Well, that's what some Macintosh user told me.
I7 980 | Asus Rampage III Extreme | 12 Gb ram | SLI GTX 680 | Creative X-fi Titanium HD | 2x4 Tb HDD | 128+512 Gb SSDs | Sonar X3 Producer | Yamaha DGX 630 | Samson Go Mic
|
jb101
Max Output Level: -46 dBFS
- Total Posts : 2946
- Joined: 2011/12/04 05:26:10
- Status: offline
Re: The science of sample rates
2014/01/23 11:19:10
(permalink)
Beepster: Spock in his home studio...
|
Goddard
Max Output Level: -84 dBFS
- Total Posts : 338
- Joined: 2012/07/21 11:39:11
- Status: offline
Re: The science of sample rates
2014/01/24 05:12:36
(permalink)
John T
Beepster: Ultra high sampling rates can lead to LESS accuracy because??? The current computers aren't fast enough??
Yes, that's the claim more or less, though the problem isn't computers per se, it's an issue for the analogue components in the converter too.
This is the one thing from the article that you could reasonably call contentious. It's Lavry's claim, and his reasoning is sound - faster sample rates potentially lead to less accuracy at each sample point, because the component voltages can't settle fast enough - but as far as I'm aware, nobody's actually blind-tested this to see if it makes any audible difference, including Lavry. The other side of this, though, is that, as in your summary of the other points, there are no gains from ultra-high frequencies, and that much is tested and known.
People have unfortunately taken Lavry's private whitepaper assertions as valid, no doubt because of his reputation, although he's never submitted his assertions for scrutiny and peer review in the JAES or another journal (for which he would also need to submit some verifiable proof). Nyquist-Shannon only dictates the minimum sampling frequency required in respect of a band-limited continuous-time (analog) input signal (and assumes an ideal reconstruction), but imposes no actual restriction on employing higher sampling frequencies, and certainly does not imply that a sampling frequency above Nyquist is deleterious, as is well understood by people who actually have to design sampling systems.

Wescott, "Sampling: What Nyquist Didn't Say, and What to Do About It":
A Design Approach
So far in this discussion I have tried my best to destroy some commonly misused rules of thumb. I haven't left any rules of thumb in their wake. Why? Because solving problems in sampling and anti-alias filtering is not amenable to rules of thumb, at least not general ones. When you're solving sampling problems you're best off working from first principles and solving the problem by analysis.
In designing sampled-time systems, the variables that we need to juggle are signal accuracy (or fidelity) and various kinds of system cost (dollar cost, power consumption, size, etc.). Measured purely from a sample rate perspective, increasing the signal sample rate will always increase the signal fidelity. It will often decrease the cost of any analog antialiasing and reconstruction filters, but it will always increase the cost of the system digital hardware, which will not only have to do its computations faster, but which will need to operate on more data. Establishing a system sample rate (assuming that it isn't done for you by circumstances) can be a complex juggling act, but you can go at it systematically.
In general when I am faced with a decision about sampling rate I try to estimate the minimum sampling rate that I will need while keeping my analog electronics within budget, then estimate the maximum sampling rate I can get away with while keeping the digital electronics within budget. If all goes well then the analog-imposed minimum sample rate will be lower than the digital-imposed maximum sample rate. If all doesn't go well then I need to revisit my original assumptions about my requirements, or I need to get clever about how I implement my system.
The common criteria for specifying a sampled-time system's response to an input signal are, in increasing order of difficulty:
- that the output signal's amplitude spectrum should match the input signal's spectrum to some desired accuracy over some frequency band;
- that the output signal's time-domain properties should match the input signal's time-domain properties to some desired accuracy, but with an allowable time shift; and
- that the output signal's time-domain properties should match the input signal to some desired accuracy, in absolute real time.
Note that "amplitude spectrum" refers to frequency response (amplitude of the output across the frequency band in relation to the input), while "time-domain properties" refers to latency as well as phase response (phase shift in output in relation to input), which are in many instances mutually exclusive (can't have good performance in one without seriously messing up the other) and thus impose major practical restrictions upon anti-aliasing and reconstruction filter design. Unbelievable that anyone could refer to the designers of converters providing them a flat freq response and minimal phase shift from 20-20k as "lazy", whatever the sampling rate employed, and most disappointing if anyone would be taken in by and further perpetuate such guff.
post edited by Goddard - 2014/01/24 05:27:59
|
John T
Max Output Level: -7.5 dBFS
- Total Posts : 6783
- Joined: 2006/06/12 10:24:39
- Status: offline
Re: The science of sample rates
2014/01/24 08:20:52
(permalink)
Goddard
People have unfortunately taken Lavry's private whitepaper assertions as valid, no doubt because of his reputation, although he's never submitted his assertions for scrutiny and peer review in the JAES or another journal (for which he would also need to submit some verifiable proof). Nyquist-Shannon only dictates the minimum sampling frequency required in respect of a band-limited continuous-time (analog) input signal (and assumes an ideal reconstruction), but imposes no actual restriction on employing higher sampling frequencies, and certainly does not imply that a sampling frequency above Nyquist is deleterious, as is well understood by people who actually have to design sampling systems.
Man alive. Nobody, including Lavry, thinks that the sampling theorem imposes any such restriction. Clearly, you like the sound of your own voice, and you're welcome to it. But I don't think the rest of us are asking too much if we expect your gum flapping to at least make the occasional stab at relevance.
http://johntatlockaudio.com/Self-build PC // 16GB RAM // i7 3770k @ 3.5 Ghz // Nofan 0dB cooler // ASUS P8-Z77 V Pro motherboard // Intel x-25m SSD System Drive // Seagate RAID Array Audio Drive // Windows 10 64 bit // Sonar Platinum (64 bit) // Sonar VS-700 // M-Audio Keystation Pro 88 // KRK RP-6 Monitors // and a bunch of other stuff
|
bitflipper
01100010 01101001 01110100 01100110 01101100 01101
- Total Posts : 26036
- Joined: 2006/09/17 11:23:23
- Location: Everett, WA USA
- Status: offline
Re: The science of sample rates
2014/01/24 11:10:46
(permalink)
☄ Helpful by gswitz 2014/01/24 11:26:12
In designing sampled-time systems, the variables that we need to juggle are signal accuracy (or fidelity) and various kinds of system cost (dollar cost, power consumption, size, etc.).

True statement. It applies to all engineering in general, whether electronic or mechanical, even software engineering. Quality generally costs more. Even when you do find some elegant solution that raises quality while lowering cost, chances are it took you longer to figure it out, which costs money.

Measured purely from a sample rate perspective, increasing the signal sample rate will always increase the signal fidelity.

True again. However, the operative phrase is "measured purely from a sample rate perspective". We're not talking about digital oscilloscopes or RADAR here, but audio, which is unambiguously defined by the mechanical limitations of human hearing. The writer is using "signal fidelity" as a general concept, irrespective of any specific design goals. In the case of audio, "fidelity" is more specific, and means how closely the recorded signal sounds like the original. Ears are the final arbiter, not laboratory test equipment (which will always find shortcomings, even those that are inaudible). So while the statement is true in a broad sense, it is somewhat misleading to apply it to the discussion at hand.

It will often decrease the cost of any analog antialiasing and reconstruction filters, but it will always increase the cost of the system digital hardware, which will not only have to do its computations faster, but which will need to operate on more data.

True again. I only take issue with "always". All digital components have upper speed limits, so the designer will choose them with the project's design requirements in mind. Usually, the selected components will be capable of performing far beyond the requirements. In purely digital circuitry, increasing speed and capacity is often trivial and incurs little or no additional cost, as long as the designer hasn't painted himself into a corner with bad early decisions. Also, the cost of analog anti-aliasing and reconstruction filters might be a factor in some equipment, but it's trivial in audio devices. We're talking 5-cent capacitors here.

Where design compromises most often come into play are in circuits that have an analog component, namely oscillators and sample-and-hold circuits. With oscillators, it's mainly a cost tradeoff. Fortunately, at the frequencies used in oversampled ADCs, designing an accurate and stable oscillator is neither difficult nor expensive. You could shave a few pennies off at the expense of accuracy, but I can't imagine why anyone would bother given how crucial it is to the device's performance. Plus you'll normally only have one oscillator, even in a multi-channel interface, so it's not the most cost-effective place to save money anyway.

It's the sample-and-hold circuit that's going to have inherent tradeoffs regardless of cost, because its accuracy is not ever going to be consistent across all sampling frequencies. In this particular part of the ADC, the designer has no choice but to pick a target sample rate to optimize the S/H for. If his market is primarily professional studio, he'll assume 96KHz - and have to accept slightly less-than-optimal performance at 44.1KHz. This is why some interfaces perform better at one rate over the other.
It's also why Dan Lavry refuses to sell a 192KHz interface, and why the ability to sample at 192KHz should not be taken as a predictor of how well a device will perform at more commonly-used rates. (BTW, I have designed sampling circuits. Not for audio, but for industrial controls. Same principles, different requirements.)
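If you want to put a rough number on that timing-accuracy point, here's a toy Python model (mine, not from any converter datasheet; it models random sample-clock timing error, not S/H settling specifically, and the 1 ns figure is arbitrary):

import numpy as np

rng = np.random.default_rng(0)

def snr_with_timing_error_db(f_tone, fs, sigma_s, n=1 << 16):
    t = np.arange(n) / fs
    ideal = np.sin(2 * np.pi * f_tone * t)               # perfect clock
    actual = np.sin(2 * np.pi * f_tone * (t + rng.normal(0, sigma_s, n)))
    err = actual - ideal                                  # timing-induced error
    return 10 * np.log10(np.mean(ideal ** 2) / np.mean(err ** 2))

# 1 ns RMS timing error on a 10 kHz tone gives roughly 84 dB SNR, matching
# the small-error approximation SNR = -20*log10(2*pi*f*sigma).
print(snr_with_timing_error_db(10_000, 96_000, 1e-9))

Note that in this toy model the damage is set by the tone frequency and the size of the timing error, not by the sample rate itself - which is exactly why the hard tradeoffs end up in the analog parts.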
 All else is in doubt, so this is the truth I cling to. My Stuff
|
gswitz
Max Output Level: -18.5 dBFS
- Total Posts : 5694
- Joined: 2007/06/16 07:17:14
- Location: Richmond Virginia USA
- Status: offline
Re: The science of sample rates
2014/01/24 11:27:07
(permalink)
Thanks, Bit. As always, you are very informative.
StudioCat > I use Windows 10 and Sonar Platinum. I have a touch screen. I make some videos. This one shows how to do a physical loopback on the RME UCX to get many more equalizer nodes.
|
The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
Re: The science of sample rates
2014/01/24 11:45:31
(permalink)
bitflipper:
It's the sample-and-hold circuit that's going to have inherent tradeoffs regardless of cost, because its accuracy is not ever going to be consistent across all sampling frequencies. In this particular part of the ADC, the designer has no choice but to pick a target sample rate to optimize the S/H for.
This idea caught my attention, and I'd like to learn more about it. best regards, mike
|
Goddard
Max Output Level: -84 dBFS
- Total Posts : 338
- Joined: 2012/07/21 11:39:11
- Status: offline
Re: The science of sample rates
2014/01/24 15:45:25
(permalink)
John T
Goddard
People have unfortunately taken Lavry's private whitepaper assertions as valid, no doubt because of his reputation, although he's never submitted his assertions for scrutiny and peer review in the JAES or another journal (for which he would also need to submit some verifiable proof). Nyquist-Shannon only dictates the minimum sampling frequency required in respect of a band-limited continuous-time (analog) input signal (and assumes an ideal reconstruction), but imposes no actual restriction on employing higher sampling frequencies, and certainly does not imply that a sampling frequency above Nyquist is deleterious, as is well understood by people who actually have to design sampling systems.
Man alive. Nobody, including Lavry, thinks that the sampling theorem imposes any such restriction.
Clearly, you like the sound of your own voice, and you're welcome to it. But I don't think the rest of us are asking too much if we expect your gum flapping to at least make the occasional stab at relevance.
Still vainly trying for some kind of comeuppance? No, Lavry didn't say Nyquist-Shannon imposes any upper restriction on sampling rate; Lavry himself seeks to impose that restriction, by urging that sampling at higher rates than his own converters happen to offer compromises audio accuracy and causes distortion, without actually ever proving that. According to Lavry, sampling audio beyond a certain rate is "excessive". That sure sounds to me like Lavry is imposing an upper restriction on sampling rate. Hmm, "accuracy" - is that anything like "fidelity"? What was it Wescott stated about fidelity and sampling rate?

Wescott: "Measured purely from a sample rate perspective, increasing the signal sample rate will always increase the signal fidelity."
Perceive any relevance yet? Gee, while I could leave things at that, seeing how much you seem to enjoy countering anything I post with an assertion that it's not relevant rather than actually responding to any technical points asserted, here's some more for you to dodge. If you've actually bothered to read Lavry's 2004 and 2012 whitepapers, I'm surprised you weren't displeased on doing so, seeing as how you've professed a dislike for "obscurantism" - plenty of that to be found there in how he tries to obscure the actual sampling rate of an oversampling converter (but then of course, he's also an "industry salesman" competing against companies marketing higher sample rate converters). He'd certainly altered his tune since his earlier paper on oversampling.

Lavry 1997 whitepaper:
Oversampling: Most digital audio equipment uses higher sampling rates than required by the Nyquist recipe. Oversampling offers solutions to both the "sinc problem" and the "filter problem". Oversampling typically takes place first during the analog to digital conversion. The signal is then converted to "standard rate", for reduced storage and computations. Such a conversion can be done without recreating much of the "sinc and filter problems". Later oversampling during the digital to analog conversion yields freedom from such problems as well. Sampling twice as fast makes the NRZ time interval half as long, thus closer to the theoretical flat response. The "sinc filter shape" is moved up by an octave, but doubling the number of samples overcomes amplitude attenuation. Sampling at twice the speed also provides an "energy free zone" between the desirable frequency band and the undesirable out of band frequencies. Our filter is steep enough to remove all unwanted high frequencies. In fact, the cutoff can be moved higher to pass all the in-band with minimal attenuation and phase distortions. The following plots show the tone peaks for X2 and X4 oversampling. Note that sampling faster reduces the "4dB problem" to about .9dB at X2, and to .2dB at X4 oversampling. Oversampling by X4 may still require a slight amplitude compensation (an easy task). Higher rates yield so little attenuation that often no compensation is necessary.

Oversampling and "more bits": Your stereo dealer is selling a CD player with an X8 oversampling 20 bit DAC. Do you hear 20 bits? Clearly, incoming samples with 16 bit accuracy can not be interpolated into 20 bits. The best of geographical surveying equipment yields errors when the reference markers are off. Oversampling interpolation is an "averaging concept", thus it yields some better "average accuracy", but each interpolated sample's accuracy is limited to that of the input samples (16 bits in the case of CD players). Oversampling offers great benefits in terms of amplitude flatness response and easy filtering, with much freedom from unwanted in-band phase linearity problems. These concepts are beyond reach for most consumers, thus the "marketing department" decided to equate it with "more bits". While there is some truth to the story, much is being "stretched" a bit too far (and sometimes 3 bits).
My, what a difference a few years makes...

Lavry 2004 whitepaper:
While this article offers a general explanation of sampling, the author's motivation is to help dispel the widespread misconceptions regarding sampling of audio at a rate of 192KHz. This misconception, propagated by industry salesmen, is built on false premises, contrary to the fundamental theories that made digital communication and processing possible. ...one may be misled to believe that faster sampling will yield better resolution and detail. In fact all the objections regarding audio sampling at 44.1KHz (including the arguments relating to pre-ringing of an FIR filter) are long gone by increasing sampling to about 60KHz. ...how can we explain the need for 192KHz sampling? ...An argument in favor of a microsecond impulse is an argument for a MegaHertz audio system. There is no need for such a system. ...Clearly there are benefits to faster sampling: 1. Easier filtering (AD anti-aliasing and DA anti-imaging) 2. Reduction of higher frequency attenuation at the DA side. Indeed, such faster sampling is common practice with both AD and DA hardware. Most ADs today are made of two sections: a front end (modulator) and a back end (decimator). The front end operates at very fast rates (typically 64-512 times faster than the data output rate). The reasons for such fast operation are outside the scope of this article. It is sufficient to state that anti-alias filtering and flatness response become a non-issue at those rates.
It is most important to avoid confusion between the modulator rate and the conversion rate. Sample rate is the data rate. In the case of AD conversion, the fast modulator rate (typically less bits) is slowed down (decimated) to lower speed higher bit data. In the case of DA converters, the data is interpolated to higher rates which help filtering and response. Such over sampling and up sampling are local processes and tradeoff aimed at optimizing the conversion hardware.
One should not confuse modulator speed or up sampling DA with sample rate, such as in the case of 192KHz for audio. AD converter designers can not generate 20 bits at MHz speeds, yet they often utilize a circuit yielding a few bits at MHz speeds as a step towards making many bits at lower speeds. The compromise between speed and accuracy is a permanent engineering and scientific reality.
Sampling audio signals at 192KHz is about 3 times faster than the optimal rate. It compromises the accuracy which ends up as audio distortions. While there is no up side to operation at excessive speeds, there are further disadvantages...
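(The modulator-rate vs. data-rate distinction Lavry describes there is easy to sketch, btw. A toy decimation stage in Python - my own illustration with round numbers, not any particular converter:)

import numpy as np
from scipy import signal

fs_out = 48_000          # the converter's "sample rate", i.e. its data rate
osr = 64                 # oversampling ratio of the front-end modulator
fs_mod = fs_out * osr    # the modulator itself runs at 3.072 MHz

t = np.arange(fs_mod // 10) / fs_mod        # 100 ms at the modulator rate
x = np.sin(2 * np.pi * 1_000 * t)           # 1 kHz test tone
y = signal.decimate(x, osr, ftype='fir')    # digital lowpass + rate reduction
print(len(x), len(y))                       # 307200 in -> 4800 out at 48 kHz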
And I'm still looking for any illustration in Lavry's 2012 paper of how higher sampling rates reduce conversion accuracy.

Lavry 2012 whitepaper:
In this paper, I will cover some of the myths of higher sampling rate and illustrate how higher sampling rates can actually reduce accuracy in audio conversion. Let's talk about converter accuracy. In reality, good audio performance requires extremely low distortion because the ear is very sensitive and perceptive. Personally, I am for accuracy and do not advocate placing limits on accuracy.
In fact, high quality audio converters operating at sample rates no higher than 96 KHz offer results that are very close to the desired theoretical limits. Yet, there are many who subscribe to the false notion that operating above the optimal sample rate can improve the audio. The truth is that there is an optimal sample rate, and that operating above that optimal sample rate compromises the accuracy of audio.
Regarding the accuracy (or loss thereof) of audio at higher sampling rates, the following article gives a rather more realistic picture.

Story, JAES 2004:
3 LIMITS TO ADC ACCURACY
Some very general limits to the performance of ADCs are given in Fig. 1 along with a few key published performance points. The limits should be treated with some care, as befits such generalized presentations, but the figure summarizes the major problems relevant to audio. These performance measures do not involve either bits or sample rate; in general, these are format rather than performance issues. Usually either more bits or higher sample rates can be had with the application of more power or more silicon. If there is no corresponding increase in accuracy, however, they are of dubious use - so the accuracy obtainable is the limiting factor. There may be an exception at very high sample rates, where extra bits can cause power consumption problems. Audio needs accuracy above 100 dB (the limit of accuracy with matched components), but significantly below the limit set by SDTE. The accuracy requirement has caused audio to generate technically interesting solutions.
5 AUDIO REQUIREMENTS
Digital audio needs a comparatively low sample rate, compared to what is currently available. Audio is traditionally defined in terms of an audible bandwidth and dynamic range, so the efficiency requirement has been translated to mean that sample rates used are only a little above the Nyquist minimum, and the word length is related to the dynamic range required. CD, for example, is based on the model of audio information extending to 20 kHz only, so a sample rate of just over 2 x 20 kS/s is adequate (hence 44.1 kS/s).
This format adds a requirement for substantial lowpass filtering - it looks for a flat frequency response to 20 kHz, but filtering to about -100 dB by 24.1 kHz (aliases back to 20 kHz), a roll-off rate of some 400 dB per octave. Analog filtering cannot handle this roll-off rate easily, so it is attractive to use digital filtering. If this is done, a substantially higher sample rate has to be output by the ADC, prior to the digital filtering, so that the analog filtering requirements are relaxed. The higher the ADC sample rate that is used, the easier is the analog antialiasing filtering prior to it. The output of the ADC is then digitally filtered, and the sample rate is decimated, to the required final rate. The filtering problem at the input to the ADC is quite severe. If the sample rate is increased to 176.4 kS/s (four times the CD rate), an analog filter with about 36 dB per octave roll-off is needed - achievable, but it needs care, especially if the passband to 20 kHz is to be flat and ripple free. At 705.6 kS/s, sixteen times the CD rate, something over 20 dB per octave is still needed. A fourth-order Butterworth filter would achieve this, but it still needs distressingly accurate components. Jitter decreases as the sample rate used increases.
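Those roll-off figures are easy to sanity-check, by the way. A quick scipy sketch (my own back-of-envelope, assuming a 1 dB passband limit at 20 kHz and 100 dB attenuation at the first frequency that folds back onto 20 kHz):

from scipy import signal

# Analog Butterworth order needed to pass 20 kHz within 1 dB yet be 100 dB
# down at the alias frequency, for each candidate ADC rate. Only the
# ws/wp ratio matters for the order, so plain Hz values are fine here.
for fs in (44_100, 176_400, 705_600):
    f_alias = fs - 20_000
    order, _ = signal.buttord(wp=20_000, ws=f_alias, gpass=1, gstop=100,
                              analog=True)
    print(fs, f_alias, order)   # roughly 66th, 6th and 4th order respectively

The fourth-order answer at 705.6 kS/s matches Story's figure; the 66th-order monstrosity at 44.1 kS/s is why nobody builds brickwall anti-alias filters in the analog domain anymore.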
(Although Lavry's '97 paper did explain certain benefits of oversampling in DACs, if anyone is curious a fuller explanation wrt oversampling multi-bit DSM ADCs can be found here.) And perhaps some others' perspectives on high sample rate audio may be useful here as well:

Lesso & McGrath, AES 2005:
3. HIGH SAMPLE RATE AUDIO
The advantages of high sample rate audio are widely debated. Several people have shown that the advantage of higher sample-rate audio is not the extended bandwidth above 20kHz (even though there is some evidence that power above 20kHz has some effect [7]) but is instead the fact that we can use the extended bandwidth to tailor the transition band and reduce the time dispersion of the impulse response [11][13]. It has been argued that the advantage of DSD is that the time domain dispersion of the impulse response of the system is limited, and so equivalently it makes sense to try to limit the extent of the impulse response of the filters discussed here.
Even at the higher sample rates there are trade-offs to be made. Figure 5 shows the impulse response of two filters for a 96kHz input, one with a cutoff at 20kHz and the other with a cutoff at 40kHz. The impulse response for the filter with the cutoff at 20kHz is under half the width and has considerably less pre-ringing. The extra degree of freedom at higher sample rates therefore makes it possible to design much more interesting filters. Equivalently, the 40kHz filter has the same impulse response as a filter running at 48kHz with a 20kHz cutoff. This shows that at higher sample rates it is possible to design filters that have substantially lower time dispersion, and perhaps this explains the reported improvement in audio quality associated with higher sample-rate audio [15].
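That tradeoff is reproducible in a few lines (a rough sketch of the principle, not the filters from the paper; a 100 dB stopband by Nyquist is my assumption):

import numpy as np
from scipy import signal

fs = 96_000
# Two linear-phase lowpass filters for 96 kHz audio, both ~100 dB down by
# 48 kHz. Passing to 40 kHz forces a narrow 8 kHz transition band and a
# long impulse response; passing only to 20 kHz allows a relaxed 28 kHz
# transition and a roughly 3x shorter one.
for f_pass in (40_000, 20_000):
    numtaps, beta = signal.kaiserord(100, (48_000 - f_pass) / (fs / 2))
    h = signal.firwin(numtaps, (f_pass + 48_000) / 2,
                      window=('kaiser', beta), fs=fs)   # h: the filter itself
    print(f_pass, numtaps, f"impulse spans ~{numtaps / fs * 1e3:.2f} ms")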
(That paper is titled "An ultra high performance DAC with controlled time domain response".)

Story, AES 1997:
Digital audio systems have historically made use of this 20 kHz limit to set sampling rates. When CD formats were first established, the problem of storing the large amount of data needed for about 1 hour's stereo playing time was substantial, so sample rates were set as low as reasonably possible, consistent with maintaining a 20 kHz bandwidth. 44.1 kS/s gave and still gives an unambiguous frequency range of 22.05 kHz (Nyquist principle).
In principle, if frequency response were the only issue, there would be no advantage in moving to formats with higher sampling rates. However, the evidence is otherwise. Direct comparisons of the same source material, recorded and reproduced at 44.1 kS/s, 96 kS/s and 192 kS/s show that there is an advantage in going to the higher rates - it sounds better! The descriptions of those used to making such comparisons tend to involve such terms as “less cluttered”, “more air”, “better hf detail” and in particular “better spatial resolution”. We are left wondering - what mechanism can be at work? It seems unlikely that we have all suddenly developed ultrasonic hearing capabilities.
Actually, a little thought also suggests that frequency response cannot be the only factor at work in our hearing apparatus. Figure 1 shows two waveforms that have identical (power) spectra, and yet sound very different - a bandlimited impulse (a click) and a type of white noise. Other waveforms can easily be generated that have the same amplitude response, but sound (substantially) different still. Something else must be going on.

Energy Dispersion
The ringing contains energy, and we can plot energy against time. For anti-aliasing filters we get the sort of shape shown in figure 3. This shows that although the energy in the input transient is concentrated at one time, the energy from the anti-alias filter is spread over a much longer time - the audio picture is "defocused". We might be tempted to argue that the energy is ultrasonic, but this is certainly not the case at 44.1 or 48 kS/s - our bandwidth constraints mean that to get good anti-aliasing, we must filter as fast as we can, and only pass the audio bandwidth. Ergo - any energy in the output signal is in the audio band. At sample rates above the standard, the energy in the ring still has the full bandwidth of the passband - maths tells us so.

Energy Dispersion at Different Sample Rates
Figure 6 shows the energy associated with the transient responses. 44.1 and 48 kS/s filters spread audible energy over 1 msec or more. The 96 kS/s filter is much better, keeping the vast bulk of the energy within 100 µsecs. The 192 kS/s filter can be very good indeed, keeping the energy within 50 µsecs. Taking into account the speed of sound, we can convert energy defocusing in the time domain to "smear" in distance estimation by the ears. Energy spread over ±500 µsecs is the same as a distance smear of ±15 cms. 96 kS/s keeps almost all the energy within about ±50 µsecs, or ±1.5 cms. One of the observations people make about 96 kS/s material is that the spatial localisation of everything is very much better than 44.1 kS/s. 192 kS/s is better than this, although very dependent on amp and speaker performance to demonstrate it.

One can get oneself into a bit of a twist thinking about the energy in the ringing. After all, if it is in the audio band, allowing extra energy at higher frequencies through the system surely cannot cancel out some that is in the audio band? It does, though - so although we may not be able to hear energy above 20 kHz, its presence is mathematically necessary to localise the energy in signals below 20 kHz, and it is possible (and our contention) that we can hear its absence in signals with substantial high frequency content. A high sample rate system allows it through (fact) - and allows the high frequency signals to sound more natural (contention) by allowing better spatial energy localisation (fact). It is our suggestion that some of the audible differences between conventional 44.1 kS/s and higher rates (88.2, 96, 176.4, 192 kS/s) may be related to this "energy smear" or defocusing caused by anti-alias filtering, and that the ear is sensitive to energy as well as spectrum. This is further backed up by our two original "same spectrum, sound different" signals (figure 1). In the impulse, all the energy is concentrated at one time, whereas for the white noise the energy is uniformly spread over time. There is a precedent to this suggestion that the ear is sensitive to both spectrum and energy - the eye is as well.
For sensitive vision or vision off the main beam, we use energy (luminance, or black and white information), whereas for detailed identification when we are looking at something, we use spectrum information (chrominance, or colour). In fact, most sensing processes are sensitive to energy. If the ear is sensitive to energy, it would almost certainly use the information for spatial localisation.
Multi Channel
In conclusion, it is worth noting that if this suggestion is correct, then it would be sensible for any multichannel audio formats to use one of the higher sampling rates. The purpose of multichannel is better spatial localisation of sound sources - so it needs a sampling rate that can support this!
http://www.cirlinca.com/include/aes97ny.pdf

Your apparent inability to discern the glaring faults in that "Science of..." blog (or in other articles among that blogger's online body of work), coupled with your continued insistence that nothing I've posted is relevant, only serves to reveal very plainly how little you actually know about digital audio and DSP, no matter how much you try to put me down. Consider the possibility, just for a moment, that the facetious scientist blogger, even if well-intentioned that people might not fall prey to marketing hype or "faith based" audio myths (which seems to be his recurring theme, casting everything as a "subjective" vs "objective" case), and genuinely trying to spread good info written in a pleasing and easily digestible manner, might himself actually be so uninformed (or worse, misinformed) about what he writes/teaches that he doesn't fully grasp what he's writing about, and consequently has himself been influenced by marketing hype and misinfo which he has rehashed into blog posts and is thereby now gleefully perpetuating. Because that is sadly apparent when one observes how little he actually seems to understand about much of what he writes.

If you'd read Lavry's 2004 paper (and not just the brief excerpt in that blog post where it is glorified as "influential"), or even the first links I posted in this thread (rather than ridiculing me for posting links), you might have confirmed for yourself that what I'd stated regarding converters sampling at MHz rates was in fact accurate (and had been discussed in the forerunner cakewalk audio newsgroup back in 1998), and could have avoided exposing yourself as a know-nothing platinum poser, er, poster. As already said, I'm not posting here to impress you but so that others genuinely seeking knowledge here don't end up getting sold a load of perpetuated mis-info. If my posts really bother you that much, I'd suggest availing yourself of the drop-down "Block" button next to my username. Really handy for adjusting the forum SNR. Otherwise, I'd strongly suggest you develop some decent tech chops first if you really want to jam. Happy to discuss quantum computing even, although this isn't really the appropriate place (or two places at once) for that...
|
mettelus
Max Output Level: -22 dBFS
- Total Posts : 5321
- Joined: 2005/08/05 03:19:25
- Location: Maryland, USA
- Status: offline
Re: The science of sample rates
2014/01/24 15:54:05
(permalink)
My brain has become a shift register... every time something goes in... something falls out
ASUS ROG Maximus X Hero (Wi-Fi AC), i7-8700k, 16GB RAM, GTX-1070Ti, Win 10 Pro, Saffire PRO 24 DSP, A-300 PRO, plus numerous gadgets and gizmos that make or manipulate sound in some way.
|
John T
Max Output Level: -7.5 dBFS
- Total Posts : 6783
- Joined: 2006/06/12 10:24:39
- Status: offline
Re: The science of sample rates
2014/01/24 15:58:30
(permalink)
Block feature, yes. An excellent idea.
http://johntatlockaudio.com/Self-build PC // 16GB RAM // i7 3770k @ 3.5 Ghz // Nofan 0dB cooler // ASUS P8-Z77 V Pro motherboard // Intel x-25m SSD System Drive // Seagate RAID Array Audio Drive // Windows 10 64 bit // Sonar Platinum (64 bit) // Sonar VS-700 // M-Audio Keystation Pro 88 // KRK RP-6 Monitors // and a bunch of other stuff
|
Goddard
Max Output Level: -84 dBFS
- Total Posts : 338
- Joined: 2012/07/21 11:39:11
- Status: offline
Re: The science of sample rates
2014/01/24 16:01:21
(permalink)
bitflipper:
Also, the cost of analog anti-aliasing and reconstruction filters might be a factor in some equipment, but it's trivial in audio devices. We're talking 5-cent capacitors here.
Perhaps you're unaware of how Apogee started out in business. It's neither trivial nor inexpensive to come up with good analog anti-alias and reconstruction filters for audio sampling, especially steep ones (e.g. brickwall) with good frequency and phase characteristics, which were the only option until DSP solutions became feasible. Think about why people complained that CDs sounded "harsh" and "metallic". Active filters helped, but the cost...

bitflipper:
It's the sample-and-hold circuit that's going to have inherent tradeoffs regardless of cost, because its accuracy is not ever going to be consistent across all sampling frequencies. In this particular part of the ADC, the designer has no choice but to pick a target sample rate to optimize the S/H for. If his market is primarily professional studio, he'll assume 96KHz - and have to accept slightly less-than-optimal performance at 44.1KHz. This is why some interfaces perform better at one rate over the other. It's also why Dan Lavry refuses to sell a 192KHz interface, and why the ability to sample at 192KHz should not be taken as a predictor of how well a device will perform at more commonly-used rates.
I believe Lavry started out using non-oversampling sub-ranging or folding type converters (or some other type of multi-pass converters) for sampling audio to PCM, and perhaps that's what he may still use, dunno. But even so, those types of converters get used for sampling very high frequency stuff, so they should not impose limitations for sampling audio. In any case, technical progress has made the circuit- and component-related performance factors he raised pretty irrelevant (for years now) as actual limitations on performance and accuracy, as is evident from the number of very capable 192k converters already out there. Others haven't been held back. Investing in new or redesigned products is always a burden for a boutique manufacturer, and I suspect that could have been a factor in why Lavry became so critical of 192k, until he saw that demand for 192k had risen (or his sales had slipped) sufficiently to justify joining the 192k club too, as he now appears to be doing - rather than it being a matter of the technology advancing to where he could finally implement 192k without compromising the audio. It will be interesting to see whether 192k capability eventually migrates down into his lower cost converter ranges (his current USB-equipped gear is limited to 96k over USB by its USB 1.1 interfaces anyway, so he'd need to move up to at least USB 2 like Benchmark and others already have). (Btw, I've worked on a few industrial systems employing ADCs in my time also, including one interfaced to an Intel 4004 - but hey, Widrow had shown that you only needed 3 bits to find Venus on radar...)
post edited by Goddard - 2014/01/24 16:49:24
|
The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
Re: The science of sample rates
2014/01/24 16:03:15
(permalink)
Hi Goddard, I appreciate that someone relevant is willing to take the time to elaborate on this stuff. Thanks. best regards, mike
|
brundlefly
Max Output Level: 0 dBFS
- Total Posts : 14250
- Joined: 2007/09/14 14:57:59
- Location: Manitou Spgs, Colorado
- Status: offline
Re: The science of sample rates
2014/01/24 17:42:20
(permalink)
I have just one question. Is this an Input meter or an Output meter?
SONAR Platinum x64, 2x MOTU 2408/PCIe-424 (24-bit, 48kHz) Win10, I7-6700K @ 4.0GHz, 24GB DDR4, 2TB HDD, 32GB SSD Cache, GeForce GTX 750Ti, 2x 24" 16:10 IPS Monitors
|
John
Forum Host
- Total Posts : 30467
- Joined: 2003/11/06 11:53:17
- Status: offline
Re: The science of sample rates
2014/01/24 17:48:37
(permalink)
brundlefly: I have just one question. Is this an Input meter or an Output meter?

I feel so used! LOL
|
bitflipper
01100010 01101001 01110100 01100110 01101100 01101
- Total Posts : 26036
- Joined: 2006/09/17 11:23:23
- Location: Everett, WA USA
- Status: offline
Re: The science of sample rates
2014/01/24 18:17:28
(permalink)
It's neither trivial nor inexpensive to come up with good analog anti-alias and reconstruction filters for audio sampling, especially steep ones (e.g. brickwall) with good frequency and phase characteristics, which were the only option until DSP solutions became feasible. Think about why people complained that CDs sounded "harsh" and "metallic". Active filters helped, but the cost...

I'm sure you're aware that anti-aliasing filters in modern converters are not steep. They don't need to be, because the oversampled Nyquist frequency is hundreds or thousands of times higher than the top of the audio range. TBH, I haven't examined many interfaces with a magnifying glass, but my guess would be that in most cases the anti-aliasing filter consists of two capacitors and a resistor.

As to why people complained about early CDs, that comes down, I think, to early converters not being oversampled. They did need steep filters, and were prone to aliasing. But we're talking the 1960s. Anybody with a $5 RealTek chip today has a vastly more capable interface than those first-generation recorders.

Re: the 4004. Man, you're as much of a dinosaur as I am! Back then I used to read electronics catalogs the way most young men devoured skin mags. I distinctly remember the week the new Intel catalog arrived that included the 4004. I had the school (where I was an instructor) order one - for the students, of course - and built an analog sequencer with it. It was the very same week a bucket-brigade analog shift register showed up on my desk. That BBD chip had cost a day's wages, but I was sure it was gonna be the future of audio echo units. Unfortunately, I immediately destroyed it with a static discharge and said the heck with it. A few years later along comes a company called Eventide Clockworks, who'd actually done it. That coulda been me, I thought, but for lack of a wrist strap! And laziness.
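The arithmetic behind "they don't need to be steep" is simple enough. Illustrative numbers only, not from any specific interface:

import math

f_c = 200e3                  # corner of a single-pole RC, well above audio

def mag_db(f):               # first-order lowpass magnitude response
    return -10 * math.log10(1 + (f / f_c) ** 2)

print(mag_db(20e3))          # ~ -0.04 dB droop at the top of the audio band
print(mag_db(1.536e6))       # ~ -17.8 dB at a 64x modulator's Nyquist
                             # (64 * 48 kHz / 2); the digital decimation
                             # filter does the heavy lifting from there.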
 All else is in doubt, so this is the truth I cling to. My Stuff
|
Splat
Max Output Level: 0 dBFS
- Total Posts : 8672
- Joined: 2010/12/29 15:28:29
- Location: Mars.
- Status: offline
Re: The science of sample rates
2014/01/24 18:28:36
(permalink)
"Perhaps you're unaware of how Apogee started out in business."
Sell by date at 9000 posts. Do not feed. @48/24 & 128 buffers latency is 367 with offset of 38. Sonar Platinum(64 bit),Win 8.1(64 bit),Saffire Pro 40(Firewire),Mix Control = 3.4,Firewire=VIA,Dell Studio XPS 8100(Intel Core i7 CPU 2.93 Ghz/16 Gb),4 x Seagate ST31500341AS (mirrored),GeForce GTX 460,Yamaha DGX-505 keyboard,Roland A-300PRO,Roland SPD-30 V2,FD-8,Triggera Krigg,Shure SM7B,Yamaha HS5.Maschine Studio+Komplete 9 Ultimate+Kontrol Z1.Addictive Keys,Izotope Nectar elements,Overloud Bundle,Geist.Acronis True Image 2014.
|
John
Forum Host
- Total Posts : 30467
- Joined: 2003/11/06 11:53:17
- Status: offline
Re: The science of sample rates
2014/01/24 18:30:21
(permalink)
Dave, read Brundlefly's post and view the avatar closely. I think it explains what has been going on in this thread.
|
soens
Max Output Level: -23.5 dBFS
- Total Posts : 5154
- Joined: 2005/09/16 03:19:55
- Location: Location: Location
- Status: offline
Re: The science of sample rates
2014/01/24 18:45:27
(permalink)
>>Clearly my superior superiority is vastly more superior than your superior lack of superiority.<< To be truly superior, you must be superior in ALL directions. Infinity to infinity.
post edited by soens - 2014/01/29 17:48:05
|
brundlefly
Max Output Level: 0 dBFS
- Total Posts : 14250
- Joined: 2007/09/14 14:57:59
- Location: Manitou Spgs, Colorado
- Status: offline
Re: The science of sample rates
2014/01/24 19:06:41
(permalink)
John: Dave, read Brundlefly's post and view the avatar closely. I think it explains what has been going on in this thread.
 No serious offense intended, of course. I just couldn't resist.
SONAR Platinum x64, 2x MOTU 2408/PCIe-424 (24-bit, 48kHz) Win10, I7-6700K @ 4.0GHz, 24GB DDR4, 2TB HDD, 32GB SSD Cache, GeForce GTX 750Ti, 2x 24" 16:10 IPS Monitors
|
jb101
Max Output Level: -46 dBFS
- Total Posts : 2946
- Joined: 2011/12/04 05:26:10
- Status: offline
Re: The science of sample rates
2014/01/24 19:12:41
(permalink)
brundlefly: I have just one question. Is this an Input meter or an Output meter?

I always said brundlefly was an observant chap...
|
John
Forum Host
- Total Posts : 30467
- Joined: 2003/11/06 11:53:17
- Status: offline
Re: The science of sample rates
2014/01/24 19:34:16
(permalink)
He sure is. I'm very glad we have him.
|
Goddard
Max Output Level: -84 dBFS
- Total Posts : 338
- Joined: 2012/07/21 11:39:11
- Status: offline
Re: The science of sample rates
2014/01/24 19:41:28
(permalink)
Noel Borthwick [Cakewalk]:
Great article on The Science Of Sample Rates that discusses the pros and cons of high sample rates. It's long but well worth the read.
Ok, let me explain further why I don't think this is really such a great article.

Now in 2013, the 16/44.1 converter of a Mac laptop can have better specs and real sound quality than most professional converters from a generation ago, not to mention a cassette deck or a consumer turntable. There's always room for improvement, but the question now is where and how much?
I've already explained my critique of the above passage in several earlier posts here, but just to make it perfectly clear (because the author of the article certainly failed to pick up on this point): anyone with a recent-vintage PC or Mac having an onboard "High Definition Audio" ("Intel HDA") codec chip is already equipped to play back 24-bit/96k and 24/192k digital audio and evaluate for themselves the assertions made later in the article about high sampling rates being "harmful" to audio quality, even if their audio interface lacks 96k or 192k sampling.

Technology always advances and today, external clocking is far more likely to increase distortion and decrease accuracy when compared to a converter's internal clock. In fact, the best you can hope for in buying a master clock for your studio is that it won't degrade the accuracy of your converters as you use it to keep them all on the same page. There are however, occasions when switching to an external clock can add time-based distortion and inaccuracies to a signal that some listeners may find pleasing. That's a subjective choice, and anyone who prefers the sound of a less accurate external clock to a more accurate internal one is welcome to that preference.
The above-quoted passage points to a misunderstanding and misconstrual by the author of the SOS mag article "Does Your Studio Need A Digital Master Clock?" to which he linked. The author's statements about some listeners finding the time-based distortion and inaccuracies added to a signal when switching to an external clock pleasing, and even preferring them, apparently stem from these two remarks appearing in the linked SOS review:

SOS:
So, although sonic differences may be perceived when using an external clock as compared to running on an internal clock, and those differences may even seem quite pleasant in some situations, this is entirely due to added intermodulation distortions and other clock-recovery related artifacts rather than any real audio benefits, as the test plots illustrate. Overall, it should be clear from these tests that employing an external master clock cannot and will not improve the sound quality of a digital audio system. It might change it, and subjectively that change might be preferred, but it won't change things for the better in any technical sense.
What I found seriously wrong with this portion of the article was that the author misunderstood (or just failed to grasp) the cause of the problem about which he was writing (that external clocking may cause some converters to distort), a cause which was explained in the linked SOS article:

SOS:
So even though a very good-quality external word clock is being supplied here, the performance of the A-D converter becomes noticeably (and audibly) worse than when running on its internal clock. This is not an unusual situation by any means, and the reduction in audio quality is not related to the supposed quality of the reference clock source either... Moreover, the implication is that the A-D converter's external clock-recovery circuitry has a far more significant effect on the A-D's performance than the quality or precision of the external reference clock source. ...it is certainly possible to synchronise an A-D to an external clock without affecting its performance, but it takes a skillfully designed and manufactured clock-recovery system to do it.
Namely, the author failed to grasp that even if the internal clocking accuracy of converters has improved, the fact that most of the converters tested by SOS produced distortion when clocked externally was not actually due to any lower relative accuracy of the external master clocks under test, but to deficiencies in the converters' own external clock extraction/recovery (i.e., slaving) capabilities when clocked by a more accurate and lower-jitter external master clock! Moreover, I feel that the author blew up the remarks in the SOS review stating that external-clocking-caused distortion might be found pleasant, by further suggesting on his own that some listeners might prefer it and were welcome to their preference, and then suggesting that external clocking distortion be considered one of many subjective choices/preferences, while entirely failing to note that the distortion SOS found when using external clocking was always very small and might not even be audible, as had been clearly pointed out in the SOS article:

SOS:
It's important to take on board that in all of the above examples, where there was an increase in noise and distortion when running on an external clock, the change was always very small, and arguably even negligible in some cases. Without superb monitoring conditions these subtle changes might be inaudible, and would certainly be much less significant than, say, a sub-optimally placed microphone as far as the overall quality of a recording is concerned.
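(To see why a poor clock-recovery circuit shows up as discrete intermodulation-like artifacts rather than benign noise, here's a small numpy sketch - my own toy model with made-up numbers, not SOS's test rig. Periodic timing error frequency-modulates the sampled tone and plants sidebands around it:)

import numpy as np

fs, n = 48_000, 1 << 16
t = np.arange(n) / fs
# 10 ns of periodic timing error at 3 kHz, e.g. residue of a clock-recovery
# PLL, applied while sampling a 10 kHz tone.
jitter = 10e-9 * np.sin(2 * np.pi * 3_000 * t)
x = np.sin(2 * np.pi * 10_000 * (t + jitter))
spec = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(n))) + 1e-12)
freqs = np.fft.rfftfreq(n, 1 / fs)
for f in (7_000, 10_000, 13_000):      # sidebands appear at 10 kHz +/- 3 kHz
    print(f, round(spec[np.argmin(np.abs(freqs - f))] - spec.max()))

With these numbers the sidebands sit around -70 dB relative to the tone: real and measurable, and exactly the kind of "very small" artifact SOS describes.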
The author's casting of the distortion caused by external clocking as a "subjective preference" struck me as a rather bizarre focus on the problem revealed, and made me wonder whether he understood that some people, such as anyone producing for film/video, may need to slave to external clocks solely as a matter of overriding practical necessity, rather than out of any subjective preference for the sound of distortion, as had been pointed out by SOS:

SOS:
The only situation where a dedicated master clock unit is truly essential is in systems that have to work with, or alongside, video, such as in music-for-picture and audio-for-video post-production applications. It's necessary here because there must be a specific integer number of samples in every video picture-frame period, and to achieve that, the audio sample rate has to be synchronised to the picture frame rate. The only practical way to achieve that is to use a master clock generator that is itself sync'ed to an external video reference, or which generates a video reference signal to which video equipment can be sync'ed. ... Moreover, the audible problems of not synchronising multiple digital devices together correctly are far worse than the very small potential increases in noise and distortion that may result from forcing an A-D to slave to an external reference clock.
In this light, the author's next paragraph:

This is a theme that we find will pop up again and again as we explore the issue of transparency, digital audio, sampling rates, and sound perception in general: Sometimes we do hear real, identifiable differences between rates and formats, even when those differences do not reveal greater accuracy or objectively "superior" sound.
revealed to me that the author, in taking the remarks from the linked SOS external clocking review and shaping them to fit the theme of his article, had missed the real technical significance of the SOS review and misconstrued it. In fairness, the author did in fact point out that converters are more likely to perform better when internally clocked and may distort when externally clocked, but that was the only thing he accurately related from the SOS review. The SOS reviewer's remarks about some people possibly preferring such converter distortion, and his pointing out that the distortion was atonal IM distortion and thus not actually musical, were given as a (perhaps sarcastic) warning to anyone preferring certain converters for their "warm" distortion feature (e.g. as offered by Lavry among others); if the author had grasped that instead, he could have ridden that subjective-preference horse home as well or instead. In summary, the following:

There are however, occasions when switching to an external clock can add time-based distortion and inaccuracies to a signal that some listeners may find pleasing. That's a subjective choice, and anyone who prefers the sound of a less accurate external clock to a more accurate internal one is welcome to that preference.
was not only technically incorrect (the external clocks were more, not less, accurate than the internal clocks, and inaccuracy of the external clocks was not the cause of the problem) but also made it seem that the choice to use external clocking is merely a matter of preferring the distortion it can produce, thus revealing to me a lack of knowledge on the author's part as well as a misconstruing of the SOS reviewer's remarks. Next we come to this:

Designers can oversample signals at the input stage of a converter and improve the response of filters at that point. When this is done properly, it's been proven again and again that even 44.1kHz can be completely transparent in all sorts of unbiased listening tests.
The problem I have with this part of the article is that the AES journal "engineering report" to which the author linked did not relate to oversampling, nor did it prove conclusively "that even 44.1kHz can be completely transparent" as the author alleged; in any event, there are serious doubts surrounding the validity of the test results reported. The test described in the JAES report, which has become known as the "Boston Audio Society Double-Blind Test" (BAS DBT, full text here and further info here), evaluated whether listeners in a double-blind test could discriminate DVD-A/SACD content playback from the same content when passed through a 16/44.1kHz A/D/A "bottleneck" (a CD recorder with realtime monitoring) during playback, as described in the report:

JAES:
This engineering report, then, describes double-blind comparisons of high resolution stereo playback with the same two-channel analog signal looped through a 16/44.1 A/D/A chain
It should be noted that there was no actual testing of any 44.1kHz source content (e.g. no CD-DA content), but rather only the use of a 16/44.1k A/D-D/A chain (the CD recorder's monitoring function) which could be switched into the output path of a DVD-A or SACD player to "degrade" the playback to "CD quality".

It's unclear what oversampling of signals the author was referring to. The only reference to transparency which I've found in the BAS DBT report was in the introductory paragraph, which referenced much earlier blind tests showing that CD-A was "transparent" in comparison to source tapes. If the author was referring to the SACD and DVD-A content used for the testing, then he was possibly confusing oversampling with material recorded at higher sample rates. If he was referring to oversampling which might be happening in the CD recorder's converters, I found no mention of the particular CD recorder employed, nor any specifications, in the report itself; the later-added "explanation" webpage indicates that an HHB pro model was used but again gives no specifications, so possibly the author was assuming its converters employed oversampling (as they likely may).

The BAS DBT report received quite a lot of attention when it first emerged and has since been criticized for a number of reasons, including allegations that the DVD-A and SACD discs used for the test had been produced from older source material not originally recorded or produced in actual hi-res formats, and thus did not actually contain any hi-res content but only content corresponding to 16/44.1k - so it was not a true comparison against hi-res source material. Moreover, despite the controversy and doubts surrounding the validity of the BAS DBT, no follow-up or repeatability testing has ever been conducted afaik, and it thus remains only a single isolated and unverified instance - not support for "proven again and again" as the author alleges in the article.

Ok, that's enough for now. Hopefully it's becoming clearer why I consider that the facetious scientist doesn't understand significant aspects of what he's writing about and, as a consequence, is spewing mis-info in pursuit of his subjective/objective theme. If in doubt, and assuming you understand binary number precision, read the "32 Bits and Beyond" section of his article about bit depth here (and see if you don't think he should change the "You" in the title to read "I"). More to follow, maybe...
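(If anyone wants to play with the idea of such a bottleneck themselves, here's a crude software stand-in in Python - my own sketch, not the BAS hardware chain - that band-limits 96k material to 44.1k, quantizes to 16 bits with TPDF dither, and returns to 96k:)

import numpy as np
from scipy import signal

rng = np.random.default_rng(1)

def bottleneck_16_44(x_96k):
    x_44 = signal.resample_poly(x_96k, 147, 320)     # 96k -> 44.1k
    lsb = 2.0 ** -15                                 # 16-bit step size
    dither = (rng.random(x_44.shape) - rng.random(x_44.shape)) * lsb
    q = np.round((x_44 + dither) / lsb) * lsb        # dithered 16-bit grid
    return signal.resample_poly(q, 320, 147)         # 44.1k -> back to 96k

x = 0.5 * np.sin(2 * np.pi * 1_000 * np.arange(96_000) / 96_000)  # 1 s tone
y = bottleneck_16_44(x)[:len(x)]
print(10 * np.log10(np.mean(x ** 2) / np.mean((x - y) ** 2)))
# residual sits on the order of the dithered 16-bit noise floor (80+ dB down)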
|
Goddard
Max Output Level: -84 dBFS
- Total Posts : 338
- Joined: 2012/07/21 11:39:11
- Status: offline
Re: The science of sample rates
2014/01/24 19:52:48
(permalink)
brundlefly: I have just one question. Is this an Input meter or an Output meter?

It's actually a warning, just like that "Facetious title" doodle. As for I or O, people can think for themselves and arrive at their own conclusions about that. But as for the technical content of my posts, I don't assert anything the accuracy of which I'm not certain or lack proof for.
|
John
Forum Host
- Total Posts : 30467
- Joined: 2003/11/06 11:53:17
- Status: offline
Re: The science of sample rates
2014/01/24 20:19:41
(permalink)
You know Goddard, it's time to give it a rest. I'm glad you have such an analytical mind and see so many faults in someone else's work, but at some point it's just obsessive and not all that informative.
|
Vab
Max Output Level: -87 dBFS
- Total Posts : 192
- Joined: 2013/12/24 18:15:50
- Status: offline
Re: The science of sample rates
2014/01/24 20:37:30
(permalink)
Goddard, do you actually think anybody here is going to bother to read posts that long? No one has an attention span on the internet.
I7 980 | Asus Rampage III Extreme | 12 Gb ram | SLI GTX 680 | Creative X-fi Titanium HD | 2x4 Tb HDD | 128+512 Gb SSDs | Sonar X3 Producer | Yamaha DGX 630 | Samson Go Mic
|
Goddard
Max Output Level: -84 dBFS
- Total Posts : 338
- Joined: 2012/07/21 11:39:11
- Status: offline
Re: The science of sample rates
2014/01/24 20:46:20
(permalink)
bitflipper
It's neither trivial nor inexpensive to come up with good analog anti-alias and reconstruction filters for audio sampling, especially steep ones (e.g. brickwall) with good frequency and phase characteristics, which were the only option until DSP solutions became feasible. Think about why people complained that CDs sounded "harsh" and "metallic". Active filters helped, but the cost...
I'm sure you're aware that anti-aliasing filters in modern converters are not steep. They don't need to be, because the oversampled Nyquist frequency is hundreds or thousands of times higher than the top of the audio range. TBH, I haven't examined many interfaces with a magnifying glass, but my guess would be that in most cases the anti-aliasing filter consists of two capacitors and a resistor.
Yes, I'm familiar with current oversampling converters and their relaxed analog filter requirements. I'm also aware of NOS converters and some of the work being done on filters for those.

bitflipper:
As to why people complained about early CDs, that comes down, I think, to early converters not being oversampled. They did need steep filters, and were prone to aliasing. But we're talking the 1960s. Anybody with a $5 RealTek chip today has a vastly more capable interface than those first-generation recorders.
CD-A came out in the early '80s. Apogee got started by supplying filter upgrades for a number of early PCM recorders which had pretty crap filters at the time (when people were complaining about the sound of their CDs). Regarding the capabilities of Realtek codec chips, perhaps you missed my initial post in this thread. (I negotiated patent licenses for CD-A in the early '90s, so I'm pretty familiar with a lot of the tech.)
bitflipper:
Re: the 4004. Man, you're as much of a dinosaur as I am! Back then I used to read electronics catalogs the way most young men devoured skin mags. I distinctly remember the week the new Intel catalog arrived that included the 4004. I had the school (where I was an instructor) order one - for the students, of course - and built an analog sequencer with it. It was the very same week a bucket-brigade analog shift register showed up on my desk. That BBD chip had cost a day's wages, but I was sure it was gonna be the future of audio echo units. Unfortunately, I immediately destroyed it with a static discharge and said the heck with it. A few years later along comes a company called Eventide Clockworks, who'd actually done it. That coulda been me, I thought, but for lack of a wrist strap! And laziness.
Yeah, been around since slipstick days (and exams with multiple-choice answers consisting of the same number with the decimal in different locations). Can still rock though. Hey, I remember when BBD chips first came out (Reticon iirc). May still have a Matsushita BBD in a parts bin somewhere for a project I never got to. Those were fun chips to play around with. As were many of Craig's projects.
|
Goddard
Max Output Level: -84 dBFS
- Total Posts : 338
- Joined: 2012/07/21 11:39:11
- Status: offline
Re: The science of sample rates
2014/01/24 20:58:07
(permalink)
Noel, thanks for all the info. Btw, regarding filter designs in converters, you may find this of interest.

Noel Borthwick [Cakewalk]:
In all 3 of the cases I listed, depending on the corresponding render bit depth setting in preferences. By default however SONAR only creates float files when doing bounces or freezes, since the render bit depth is set to float.
Goddard
Noel Borthwick [Cakewalk]:
The storage format on disk is always standard WAV file format (WAVE_FORMAT_PCM or WAVE_FORMAT_IEEE_FLOAT, WAVE_FORMAT_EXTENSIBLE in some cases). The bit depth is determined as follows:
1. Bit depth for recorded project audio data is determined by the record bit depth setting in preferences.
2. Bit depth for imported project audio data is determined by the import bit depth setting in preferences.
3. Bit depth for internally rendered (bounce, freeze, etc.) project audio data is determined by the render bit depth setting.
Typically a SONAR project will contain multiple bit depth audio depending on the data it was created with and the intermediate bounce operations performed. The different bit depths are all converted to 32 or 64 bit float at playback time depending on the double precision mix engine setting.
Ah, I see. That's precisely what I was curious to know. In what cases would float format WAV file storage be done?
|