Noel Borthwick [Cakewalk]
mike_mccue
It seems like an actual comparison would require a protocol featuring side-by-side capture, mixing, and mastering (all at the respective specifications), a final export to the distribution specification, and perhaps a digital-to-analog conversion.
If you arrive at the same conclusion after that, your conclusion will be based on an actual comparison rather than an implication that a comparison was made.
I didn't have the luxury of doing a side-by-side recording, so I couldn't do that test. However, I'm curious why you consider my test invalid. I compared a downsampled version with the original 96K recording by phase inverting and mixing the two wave files. If there was something special in the original 96K recording, the phase-invert-and-mix test should have pulled out just the differences between the two recordings, right? I'm no mastering expert, so I'm happy to be corrected here.
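For anyone who wants to try this at home, the phase-invert-and-mix ("null") test described above can be sketched in a few lines of NumPy. This is my own illustration, not code from the thread; the synthetic signals stand in for the two wave files, which would need to be at the same sample rate and time-aligned before nulling:

```python
import numpy as np

def null_test(a, b):
    """Invert the polarity of b and mix with a; return residual level in dBFS.
    Assumes both signals share one sample rate and are time-aligned."""
    n = min(len(a), len(b))
    residual = a[:n] - b[:n]  # polarity invert + sum is just subtraction
    rms = np.sqrt(np.mean(residual ** 2))
    return 20 * np.log10(max(rms, 1e-12))  # floor avoids log(0) on a perfect null

# Hypothetical example: a 1 kHz tone at 96 kHz, and the same tone with a
# small amount of ultrasonic (30 kHz) content that 44.1 kHz could not carry.
fs = 96_000
t = np.arange(fs) / fs
base = 0.5 * np.sin(2 * np.pi * 1_000 * t)
ultra = base + 0.01 * np.sin(2 * np.pi * 30_000 * t)

print(null_test(base, base))   # identical signals: deep null
print(null_test(base, ultra))  # residual contains only the 30 kHz difference
```

If the 96K original really carried something the downsample lost, it would show up as a non-trivial residual, exactly as the test assumes.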
I don't see anything to correct, unless you're testing whether something recorded, mixed, and mastered at 44.1kHz is going to sound better than something recorded, mixed, and mastered at 96kHz, or recorded, mixed, and mastered at 96kHz and then downsampled to 44.1kHz. Your test gives 44.1kHz the "benefit of the doubt" by feeding it the [supposedly] higher-quality 96kHz source material.
One area people might not take into account is the output filtering. I don't know if it changes for different sample rates, but for 44.1kHz, you have to brickwall pretty heavily (e.g., 96dB/octave) to keep the clock out of the output. With 96kHz, you can use a 48dB/octave filter and obtain even a bit more rejection.
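Rough numbers back this up. Using the textbook Butterworth magnitude response (my own illustration, with an assumed 20 kHz passband edge; real converter filters differ), a 96dB/octave (16th-order) filter has barely any room between 20 kHz and the 22.05 kHz Nyquist at 44.1kHz, while at 96kHz a gentle 48dB/octave (8th-order) filter gets over an octave of headroom before 48 kHz:

```python
import numpy as np

def butter_atten_db(order, cutoff_hz, f_hz):
    """Attenuation in dB of an ideal analog Butterworth lowpass at f_hz.
    Magnitude is |H(f)| = 1 / sqrt(1 + (f/fc)^(2N))."""
    return 10 * np.log10(1 + (f_hz / cutoff_hz) ** (2 * order))

cutoff = 20_000  # assumed audio passband edge

# 44.1 kHz: even a steep 16th-order (96 dB/oct) filter is only partway
# down by the 22.05 kHz Nyquist frequency.
print(butter_atten_db(16, cutoff, 22_050))

# 96 kHz: a gentler 8th-order (48 dB/oct) filter has until 48 kHz to fall,
# and ends up with considerably more rejection there.
print(butter_atten_db(8, cutoff, 48_000))
```

The octave of extra room is doing the work: 48dB/octave over roughly 1.26 octaves beats 96dB/octave over 0.14 of an octave.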
In a comparison test at AES several years back, between a 30 ips analog master tape and PCM reproductions at various sample rates as well as DSD, to me (and others in the audience) DSD sounded more like the reference tape than 44.1kHz PCM did. I've wondered if it's the technology itself, the fact that DSD can use gentler output filters, or something else altogether.