Noel Borthwick [Cakewalk]
I had a similar experience with the last project we recorded. It was tracked in PT at 96K, mixed at 96K in SONAR, mastered at 96K, and finally downsampled to 44.1 for the CD master. I did a test loading both a 44.1K and a 96K track in SONAR (in a 96K project). Then I phase inverted one of the two tracks and bounced the result down to a new track. I was surprised by the result. What was left was some noise that was completely inaudible even if I cranked the volume all the way up :)
So unless something went wrong during the capture or mastering process that I was unaware of, this probably means that there was nothing significant in the recording that took advantage of the 96K range. I can't generalize here, since it's definitely possible that you may get different results with other music that has more HF content - however, in my case it made no difference for all practical purposes.
That hardly seems like a logical way to draw a conclusion. :-S
I happen to work at 44.1 and 48, so I have no compelling reason to justify any other choice; however, I am bemused by the lack of rigor in the explanations offered as justification for the conclusions being shared.
It seems like an actual comparison would require a protocol featuring side-by-side capture, mixing, and mastering (all at the respective sample rates), a final export to the distribution specification, and perhaps a digital-to-analog conversion.
If you arrive at the same conclusion after that, your conclusion will be based on an actual comparison rather than an implication that a comparison was made.
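For what it's worth, the null-test residual Noel describes can be quantified rather than judged by ear. Here's a minimal sketch of the idea - a synthetic 96K signal round-tripped through 44.1K, phase inverted, and summed - assuming NumPy/SciPy and a test tone as a stand-in for the actual tracks:

```python
# Hedged sketch of a quantified null test: synthesize a 96 kHz tone,
# round-trip it through 44.1 kHz (as a CD master would be), then
# phase-invert and sum against the original, measuring the residual.
# The tone and levels here are illustrative, not from any real session.
import numpy as np
from scipy.signal import resample_poly

FS_HI, FS_LO = 96_000, 44_100      # note 44100/96000 reduces to 147/320
t = np.arange(FS_HI) / FS_HI       # one second of audio at 96 kHz
x = 0.5 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz tone at -6 dBFS

# Downsample 96k -> 44.1k, then back up to 96k for comparison.
down = resample_poly(x, 147, 320)
up = resample_poly(down, 320, 147)[:len(x)]

# "Phase invert one track and bounce": sum the original with the
# inverted round-trip copy, then measure what's left.
residual = x + (-up)
rms = np.sqrt(np.mean(residual ** 2))
db = 20 * np.log10(rms / 0.5)      # residual level relative to the tone
print(f"null residual: {db:.1f} dB below the signal")
```

For in-band material like this the residual sits far below the signal; a real test on actual mixes would also want time alignment and matched levels before the null, or the residual mostly measures the misalignment rather than the HF content.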
BTW, I think one of the more entertaining scaremonger phrases in the Science article is
"To people who study these things, it’s become clear that doubling and quadrupling sample rates instead of improving converters is a lousy economic tradeoff for consumers, as well as for the environment and the larger economy." I'm curious about the reasoning that supports this scientific statement:
"There is also the separate problem of distortion caused by the decreased sampling accuracy of a rate that is too fast." Too fast for what? I'm just guessing here: is this some sort of implication that the current implementation of clocking in some of the high-speed converters is not suitable for the task? Are we really to believe that this issue is inherently insurmountable, or is it more probable that it applies only to specific implementations?
Those are the types of statements presented in that article that seem like a load of hooey.
best regards,
mike