rabeach
I never stated that a perfect reconstruction was necessary. Nor did I indicate the interpolation error could be heard. I simply stated a perfect reconstruction does not exist.
How do you define "perfect" in the real world? My position is in the real world "perfect" means any imperfections are completely lost in other bigger imperfections that exist in the real world. IOW, far enough below the noise floor due to other problems to be meaningless.
And invoking a theorem as justification for not sampling at higher frequencies is flawed in my opinion because the math required for the theorem to hold true is not implemented in real-world systems. So whether sampling at a higher frequency is beneficial on the many varied ADC to DAC systems should not be based on a belief that a perfect reconstruction is occurring.
If the imperfections are far enough below other imperfections in the real world that they are completely buried, then they can't be improved upon. Period.
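To put rough numbers on "buried": the theoretical quantization noise floor of an ideal N-bit converter driven by a full-scale sine is about 6.02*N + 1.76 dB down, so roughly -98 dBFS at 16-bit and -146 dBFS at 24-bit, while real analog stages sit well above that. A quick back-of-the-envelope check in plain Python; the -110 dBFS analog floor is an assumed illustrative figure, not a measurement of any particular box:

# Theoretical SNR of an ideal N-bit quantizer driven by a full-scale sine: 6.02*N + 1.76 dB
def ideal_snr_db(bits):
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit quantization noise floor: about -{ideal_snr_db(bits):.1f} dBFS")

# Assumed illustrative analog noise floor of good real-world gear
analog_floor_dbfs = -110.0
margin = ideal_snr_db(24) + analog_floor_dbfs
print(f"A 24-bit-level error sits roughly {margin:.0f} dB below that assumed analog floor")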
Perfectly reconstructing a waveform to the point where any errors are buried beneath the bit depth is only difficult at higher frequencies, at high levels.
That aside, sampling at higher frequencies may sound better on some systems and not on others. My aardvark 24/96 was designed by an extremely competent engineer and had very stable clocking and filters with extremely low noise and low distortion. It sounded great at 96kHz although I never worked with it at that frequency. My VS-100 sounds great at 44.1kHz (doesn't touch the aardvark though) but I don't care for the sound it has at 96kHz. Stable clocking and filter design are extremely challenging and are not implemented very well in most commercial systems.
Clocking is just not an issue in the real world today. Find a real world converter that you think has clocking problems and try and measure the distortion/noise due to jitter. Good luck with that.
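For a sense of scale, the worst-case error that sampling jitter can add to a sinusoid is bounded by the signal's slew rate, which works out to a jitter-limited SNR of about -20*log10(2*pi*f*tj) for a full-scale tone at frequency f and RMS jitter tj. A quick sketch; the jitter figures are assumed examples, not measurements of any converter:

import math

def jitter_limited_snr_db(freq_hz, jitter_rms_s):
    # Jitter-limited SNR for a full-scale sine: -20*log10(2*pi*f*tj)
    return -20.0 * math.log10(2.0 * math.pi * freq_hz * jitter_rms_s)

# Assumed example figures: a 20 kHz full-scale tone with 1 ns and 100 ps of RMS jitter
for tj in (1e-9, 100e-12):
    snr = jitter_limited_snr_db(20e3, tj)
    print(f"20 kHz tone, {tj * 1e12:.0f} ps RMS jitter -> jitter-limited SNR of about {snr:.0f} dB")

Even 1 ns of jitter, which would be embarrassing for any modern converter, still leaves a 20 kHz tone with roughly 78 dB of headroom over the jitter noise, and lower frequencies fare far better.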
Filter design trade offs are very well understood by many, many people. And it's not hard to evaluate a filter. They pretty much show everything you need to know for SRC filters here:
http://src.infinitewave.ca/ You will find examples there where, even when issues are clearly visible on the charts, none of them for a given algorithm make it into 24-bit audio - IOW it is perfect at 24-bit resolution up to a certain frequency where it starts rolling off. Some others only really have issues above 18kHz or so. Most of the really, really poor ones look like they didn't even try (perhaps choosing to use as little CPU as possible instead).
Apparently filter design is not so challenging, given that many, many vendors have managed to get it right (at least for SRC).
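For anyone who wants to roll their own version of that test: the infinitewave charts are essentially a swept sine pushed through each resampler and displayed as a spectrogram, so anything that isn't the sweep is aliasing or filter leakage. A rough home-brew equivalent, using scipy's polyphase resampler as a stand-in for whatever SRC you want to evaluate; the rates and the 30 kHz tone are just assumptions for illustration:

import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 96000, 44100
t = np.arange(fs_in) / fs_in              # one second of audio at 96 kHz
tone = np.sin(2 * np.pi * 30000 * t)      # 30 kHz tone, well above the 22.05 kHz output Nyquist

# 96 kHz -> 44.1 kHz is a ratio of 147/320; a good anti-alias filter should remove the tone entirely
y = resample_poly(tone, 147, 320)
y = y[2000:-2000]                         # skip the filter's edge transients

residual_dbfs = 20 * np.log10(np.sqrt(np.mean(y ** 2)) + 1e-30)
print(f"Residual after downsampling: {residual_dbfs:.1f} dBFS (the 24-bit floor is around -146 dBFS)")

Whatever residual comes out is the aliasing the resampler lets through; swap in the SRC you actually care about and compare against the 24-bit floor.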
If the same converters do indeed sound different at different sample rates under objective conditions (i.e. under careful double blind testing or quantitative measurement), the interesting question is how the results differ and what changed under the hood and why the vendor made the choices they did. Since we don't generally know, we can only speculate. Here's some speculation: perhaps a vendor might have traded off quality for lower latency, and maybe that was a bad decision.
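One concrete way that trade off can show up: a linear-phase FIR anti-alias or anti-image filter with N taps adds a fixed delay of (N - 1)/2 samples, so a steeper (longer) filter buys its extra stopband rejection with extra latency. A toy calculation; the tap counts are assumed examples, not figures from any shipping converter:

# A linear-phase FIR delays the signal by (N - 1) / 2 samples
def fir_latency_ms(num_taps, fs_hz):
    return (num_taps - 1) / 2 / fs_hz * 1000.0

for taps in (64, 512, 4096):              # assumed example lengths, from gentle to very steep
    print(f"{taps:5d}-tap FIR at 96 kHz adds about {fir_latency_ms(taps, 96000):.2f} ms of latency")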
In my opinion empirical data collected on the varied systems in use trumps the belief that a perfect reconstruction is occurring and all systems will sound the same at 44.1kHz. But if the empirical data is ever collected and says otherwise I will promptly reverse my opinion.
If it's possible for gear to achieve perfect (meaning here any imperfections are buried in the existing noise floor) reconstruction in the real world over the desired frequency range, then it shows that sampling frequency is not the problem.
Different gear will have many, many design trade offs so not everything will necessarily sound the same at any SR. And indeed it makes sense to evaluate the trade offs together rather than individually, particularly when we have no way of getting at the individual pieces.
And if a given piece of gear sounds better (or worse) at one SR vs. another, then it is what it is.
But some vendors making that design trade off, for whatever reason, does not mean it's difficult or impossible based on the SR itself. If you can show that it is impossible, well, by all means go ahead.
IMO the bigger problem is when people jump to the wrong conclusions based on limited information and without a full understanding of the technical trade offs and limitations.
Math is just a construct; many people experience a sensory variance from using higher sampling frequencies.
Digital audio is math. Once the signal is digitized, aside from bugs, HW problems and incompetence there is nothing other than math until it gets converted back to analog.
If people are hearing a difference due to a digital decimation or interpolation filter, then it is purely a math problem.
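Which also means you can compute exactly what an interpolation filter does to a signal instead of arguing about it. A minimal sketch, upsampling a tone with an FFT-based resampler and comparing against the ideal waveform; the 1 kHz tone is chosen to be periodic in the window, so this is a best-case illustration, not a claim about any particular converter:

import numpy as np
from scipy.signal import resample

fs = 44100
n = np.arange(fs)                                   # exactly one second -> 1000 whole cycles of 1 kHz
x = np.sin(2 * np.pi * 1000 * n / fs)

y = resample(x, 2 * fs)                             # FFT-based 2x upsampling: nothing but math
ideal = np.sin(2 * np.pi * 1000 * np.arange(2 * fs) / (2 * fs))

err = np.max(np.abs(y - ideal))
print(f"Max interpolation error: {err:.2e} (one 24-bit step is about {1 / 2 ** 23:.2e})")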
There is a Burr Brown white paper that shows the implementation of a linear-phase filter somewhere in between a Butterworth and a Bessel response. It may be outdated by now; I came across it in 1994.
http://www.ti.com/lit/an/sbaa001/sbaa001.pdf
The filter design trade offs are the same today, but the trade off involving available processing power has of course changed dramatically. There are also better tools readily available today for filter design and optimization than in 1994.
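As an example of how low the barrier is now, a linear-phase FIR with well over 100 dB of stopband rejection is a few lines of scipy. The passband/stopband numbers below are assumed for illustration and are not taken from the Burr Brown paper:

import numpy as np
from scipy.signal import firwin, freqz

fs = 96000
# Assumed spec for illustration: -6 dB point at 22 kHz, Kaiser-windowed design, 255 taps
taps = firwin(255, cutoff=22000, window=("kaiser", 12.0), fs=fs)

w, h = freqz(taps, worN=8192, fs=fs)
mag_db = 20 * np.log10(np.abs(h) + 1e-30)
print(f"Response at 10 kHz: {mag_db[np.argmin(np.abs(w - 10000))]:+.3f} dB")
print(f"Response at 24 kHz: {mag_db[np.argmin(np.abs(w - 24000))]:+.1f} dB")

Inspecting the ripple and stopband is a couple of print statements; in 1994 that kind of iteration took dedicated tools and a lot more patience.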