• SONAR
  • Do Your Record at Higher than 96 kHz and if so, Why? (p.6)
2014/11/24 13:29:01
drewfx1
lawp
drewfx1
lawp
as craig points out, it's all in the maths (whether you can hear it or not ;-)) i.e., generally, the bigger the digital-unit-to-realworld-unit ratio, the better or more accurate it all is




No.
 
Once you are more accurate than human hearing, you cannot get better accuracy unless you replace the human with something better. 
Yes, the more accurate the maths, the more accurate the final output. But you're talking about the final mix that the end listener hears; I'm talking about the mixing.



Accuracy is generally limited by the "least accurate" step. IOW, a noisy signal does not become "more accurate" by doing calculations on it at higher precision once any calculation errors are already buried in the noise present.
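drewfx1's "least accurate step" point can be illustrated numerically (a hypothetical sketch, not anything from the thread: a signal carrying noise at roughly -60 dBFS is processed at two precisions, and the difference between the precisions sits far below the noise floor):

```python
import numpy as np

# A signal with "analog" noise at about -60 dBFS, processed with a
# simple gain calculation at float32 and float64 precision.
rng = np.random.default_rng(0)
t = np.arange(48000) / 48000.0
clean = np.sin(2 * np.pi * 1000 * t)
noisy = clean + 10 ** (-60 / 20) * rng.standard_normal(t.size)  # noise ~ -60 dBFS

gain = 0.5
out32 = (noisy.astype(np.float32) * np.float32(gain)).astype(np.float64)
out64 = noisy * gain

noise_db = 20 * np.log10(np.std(noisy - clean))           # the signal's own noise
prec_db = 20 * np.log10(np.std(out64 - out32) + 1e-300)   # float32 vs float64 error

print(f"noise floor:        {noise_db:7.1f} dBFS")
print(f"float32 vs float64: {prec_db:7.1f} dBFS")  # far below the noise floor
```

The extra precision of float64 is real, but its effect is dozens of dB below the noise the signal already carries, so it cannot make the noisy signal "more accurate".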
2014/11/24 13:36:09
Anderton
lawp
as craig points out, it's all in the maths (whether you can hear it or not ;-)) i.e., generally, the bigger the digital-unit-to-realworld-unit ratio, the better or more accurate it all is



If that's indeed what I said, that wasn't the conclusion I intended to have drawn...granted, you can get theoretically better accuracy, but that doesn't mean it has practical implications. Sometimes it does - like sample rate converters doing more accurate calculations now that we have longer word lengths. But increasing playback bit resolution from, say, 24 to 32 bits doesn't matter because no physical converters can take advantage of the extra 8 bits. It's down in the noise floor and quantization noise.
 
But my point isn't really about accuracy as related to frequency response, but whether there's some other element that just happens to be associated with a higher sample rate. Think of it as a follow-up to the "foldover distortion minimization" thang. For example, I found it intriguing that while perusing this subject on the web, one person theorized that the reason why "golden eared" people could hear a difference with 192 kHz compared to CDs in a particular test was because the 192 kHz material was coming from a hard drive and had less jitter than optical playback.
2014/11/24 13:37:05
brconflict
I've read in many cases that pretty much anything above 96 kHz is pure marketing, which has worked, but I've heard this from a few different manufacturers: an A/D converter works best at one rate vs. another. The electronic circuitry (i.e., the A/D converter chips, and possibly the op-amps) works better at 96 kHz, but because of marketing, 192 kHz is supported. If you buy an A/D marketed toward the higher end of the market, 192 kHz might be superior. However, for the lower-end market, where 96 kHz and lower are supported but the higher rates are simply added on for marketing purposes, 96 kHz is better.
 
I don't even consider going higher than 96 kHz, because most music in the industry simply doesn't benefit. If you're going for audiophile recordings, where the ultimate medium is digital, played through a Wavac system or something mostly geared for an audiophile listener market, then 192 kHz might be beneficial. I 'hear' it helps the outer rim of vinyl.
 
For the mass markets, the resulting medium benefits more from a higher bit depth than from a higher sampling rate, unless the D/A conversion is not all that.
 
My $0.03
2014/11/24 13:44:52
Anderton
drewfx1
Sampling is nothing like photography.



Indeed, I think Milton is confusing bit resolution with sampling rate. Sampling is more like frame rates in video, and that's an ideal example of a point of diminishing returns - past a certain point, the human eye's persistence of vision is the limiting factor. You can increase the frame rate all you want, but the eye can't respond fast enough for it to make any difference.
 
I believe a lot of the "16 bits isn't enough" talk came about because when the CD was introduced, a dirty little secret was that a lot of CD players were playing back through 12-bit converters, which meant at best only 10 "real" bits of resolution (taking quantization noise, circuit board layout, glue components, etc. into account). 16-bit converters were more like 14 bits. When 20 bit converters appeared that could do a true 16 bits, it definitely made a difference -- not because they were 20 bits per se, but because they delivered true 16-bit resolution.
 
People also get tripped up over audio engine resolution vs. recording and playback resolution. Audio engines need more resolution because of the massive amounts of calculations that are always being done, but ultimately, those calculations are in service of a much lower bit resolution on playback, and based on much lower bit resolution when capturing signals.
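Anderton's "20-bit converter doing a true 16 bits" arithmetic follows the standard textbook rule of thumb for an ideal converter (a sketch; the 98 dB figure below is illustrative, not a measurement of any particular converter):

```python
# Ideal dynamic range of an N-bit converter for a full-scale sine:
# SNR ~ 6.02*N + 1.76 dB (the standard textbook rule of thumb).
def ideal_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (12, 14, 16, 20, 24):
    print(f"{bits:2d}-bit ideal SNR: {ideal_snr_db(bits):6.1f} dB")

# Working backward: a converter that measures ~98 dB of real-world SNR
# is delivering roughly 16 "effective" bits, whatever its label says.
effective_bits = (98 - 1.76) / 6.02
print(f"98 dB measured -> {effective_bits:.1f} effective bits")
```

This is why a "20-bit" part that only achieves about 98 dB in circuit is really a true 16-bit converter, and why early "16-bit" players with ~12-bit-quality conversion delivered far less than the format promised.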
2014/11/24 13:52:33
John
Anderton
drewfx1
Sampling is nothing like photography.



Indeed, I think Milton is confusing bit resolution with sampling rate. Sampling is more like frame rates in video, and that's an ideal example of a point of diminishing returns - past a certain point, the human eye's persistence of vision is the limiting factor. You can increase the frame rate all you want, but the eye can't respond fast enough for it to make any difference.
 
I believe a lot of the "16 bits isn't enough" talk came about because when the CD was introduced, a dirty little secret was that a lot of CD players were playing back through 12-bit converters, which meant at best only 10 "real" bits of resolution (taking quantization noise, circuit board layout, glue components, etc. into account). 16-bit converters were more like 14 bits. When 20 bit converters appeared that could do a true 16 bits, it definitely made a difference -- not because they were 20 bits per se, but because they delivered true 16-bit resolution.
 
People also get tripped up over audio engine resolution vs. recording and playback resolution. Audio engines need more resolution because of the massive amounts of calculations that are always being done, but ultimately, those calculations are in service of a much lower bit resolution on playback, and based on much lower bit resolution when capturing signals.


I would say that the above is a very fair and reasonable approach.
2014/11/24 13:55:14
Anderton
This is also why more testing needs to be done. There was a study done in Japan that showed different brain activity in people responding to frequencies above the theoretical range of human hearing ("hypersonic" frequencies). There's an excellent paper here that was not written by someone from Sony LOL. Nor does it say "hypersonic frequencies are all good" or "all bad." Their research indicates that in some cases, frequencies between 20 and 32 kHz created a negative effect. Just quoting the conclusion doesn't do the paper justice, but it might encourage some to read the whole thing and check out some of the references:
 
Conclusion
By observing Alpha-2 EEG, it became clear that the emergence of the hypersonic effect changes either positively or negatively depending on the frequency of the HFC applied along with the audible sound. We showed that Alpha-2 EEG increases when HFCs above approximately 32 kHz are applied, which indicates that a positive hypersonic effect has emerged, as shown in our earlier studies. Our present study reports, for the first time, that Alpha-2 EEG decreases when HFCs below approximately 32 kHz are applied, which indicates the emergence of a negative hypersonic effect.
2014/11/24 13:58:29
AT
 
"
Ear training can indeed allow one to hear small details if they are audible.
 
But the "only with super gear in special room" is nonsense. Different artifacts caused by different things (including listening systems and the listening environment) just do not conveniently line up with each other that way to get masked.
 
Can't hear an artifact in a given situation (assuming it's real and audible by humans)? Try turning up the volume a little. Or playing a "worst case" signal. Or moving closer to the speaker to reduce the role of the room or environmental noise. Suddenly it's audible with any gear.
 
The stuff that can only ever be heard with special equipment under any conditions always turns out to be imaginary.
"
Drew, I'm not sure what you're arguing here.  If you are sticking to the sample rate part of the thread, I agree with you.  As stated, I use 44.1 since I can't hear any difference worth the bother.  And I agree if you can only hear a difference in an anechoic chamber wearing a tin-foil hat it probably doesn't have any real-world use, esp. since it likely doesn't exist.  But you are too categorical in your dismissal of gear, room and training as far as the art of music, and as I argued, the psychology, too.  Some days in the studio I hear different things as related to mixing before I touch a knob.  Maybe I need that tinfoil hat? ;-)
 
2014/11/24 14:05:27
Anderton
drewfx1
Accuracy is generally always limited by the "least accurate" step. IOW, a noisy signal does not become "more accurate" by doing calculations on it at higher precision once any calculation errors are already buried in the noise present.



+1. Stated another way: the point of a high-accuracy audio engine isn't to improve a signal, but to make sure it's not made any worse.
2014/11/24 14:24:10
drewfx1
Anderton 
But my point isn't really about accuracy as related to frequency response, but whether there's some other element that just happens to be associated with a higher sample rate. Think of it as a follow-up to the "foldover distortion minimization" thang.

Some (on topic) thoughts:
 
1. Sometimes (i.e. with Windows) things can get resampled under the hood to the native rate of the OS/HW, and the SRC routines in these cases are not necessarily very good. 
 
2. Assuming someone is hearing a real, non-imaginary difference, are they just assuming the higher rate is "better"? In some cases it can actually be worse, or just different. But when they know which one is which, people automatically tend to hear the thing they perceive not just as "different" but as "better" or "more accurate".
 
3. Converter design. Most modern converters are highly oversampled (and at low bit depths). For an ADC, this combines an analog anti-aliasing filter together with a digital decimation filter which together filter out everything above 1/2 the sample rate at the output of the ADC. Is the same analog circuitry used at different sample rates? And what design choices did they make for the decimation filters? Ironically, sometimes it's the makers of "high end" gear that make the more questionable choices.
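The oversampling-plus-decimation idea in point 3 can be sketched like this (a hypothetical illustration only: the 384 kHz rate, 8x decimation factor, and 255-tap filter are arbitrary choices, not any real converter's design):

```python
import numpy as np
from scipy import signal

# An "oversampled" stream at 8x the target rate is lowpass-filtered
# below the target Nyquist frequency, then decimated to the output rate.
fs_over, decim = 384_000, 8
fs_out = fs_over // decim                       # 48 kHz output rate
t = np.arange(fs_over) / fs_over
# 1 kHz tone we want to keep, plus a 40 kHz tone that would alias at 48 kHz.
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 40_000 * t)

# Digital decimation filter: cutoff safely below the output Nyquist (24 kHz).
lp = signal.firwin(255, fs_out / 2 - 2000, fs=fs_over)
y = signal.lfilter(lp, 1.0, x)[::decim]        # filter, then keep every 8th sample

# The 40 kHz component is removed before decimation; the 1 kHz tone survives.
spec = np.abs(np.fft.rfft(y * np.hanning(y.size)))
freqs = np.fft.rfftfreq(y.size, 1 / fs_out)
print("dominant tone after decimation:", freqs[spec.argmax()], "Hz")
```

The design choices drewfx1 mentions live in exactly these parameters: the analog filtering ahead of the modulator, and the length, cutoff, and shape of the digital decimation filter at each supported rate.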
 
 
Generally you can eliminate these questions by doing the (double blind) testing as follows:
 
a. Start with a signal at the highest sampling rate.
b. Convert a copy of this to the lower rate and then back to the original rate using a good SRC routine.
c. Do (double blind) listening tests to compare the two files.
 
Doing the SRC to a lower rate and back removes any higher frequencies present in the original. But since both files are at the same (higher) rate, it eliminates any differences in converters, etc. that might be confused with the sampling rate itself.
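Steps a and b can be sketched in Python, with scipy's `resample_poly` standing in for "a good SRC routine" (the rates and test tones are illustrative; a real test would use actual program material):

```python
import numpy as np
from scipy import signal

# a. Start with a "96 kHz original": audible content at 1 kHz plus
#    ultrasonic content at 30 kHz that only the higher rate can carry.
fs_hi, fs_lo = 96_000, 48_000
t = np.arange(fs_hi) / fs_hi
original = 0.5 * np.sin(2 * np.pi * 1000 * t) + 0.1 * np.sin(2 * np.pi * 30_000 * t)

# b. Round-trip a copy through the lower rate and back to 96 kHz.
down = signal.resample_poly(original, fs_lo, fs_hi)    # 96k -> 48k
roundtrip = signal.resample_poly(down, fs_hi, fs_lo)   # 48k -> back to 96k

# Content below 24 kHz should survive the round trip; the 30 kHz part cannot.
def tone_level(x, f, fs):
    spec = np.abs(np.fft.rfft(x)) / (x.size / 2)
    return spec[int(round(f * x.size / fs))]

print("1 kHz level: ", tone_level(roundtrip, 1000, fs_hi))
print("30 kHz level:", tone_level(roundtrip, 30_000, fs_hi))
```

Both `original` and `roundtrip` are now files at the same 96 kHz rate, so a double-blind comparison between them isolates the effect of the lower rate's bandwidth limit from any converter or playback-chain differences.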
2014/11/24 14:37:14
John
Great stuff, Drew and Craig.