Anderton
But my point isn't really about accuracy as related to frequency response, but whether there's some other element that just happens to be associated with a higher sample rate. Think of it as a follow-up to the "foldover distortion minimization" thang.
Some (on-topic) thoughts:
1. Sometimes (e.g., with Windows) audio gets resampled under the hood to the native rate of the OS/hardware, and the SRC routines used in those cases are not necessarily very good.
2. Assuming someone is hearing a real, non-imaginary difference, are they just assuming the higher rate is "better"? In some cases it can actually be worse, or just different. But when people know which file is which, they tend to automatically label whatever difference they perceive as not merely "different" but "better" or "more accurate".
3. Converter design. Most modern converters are highly oversampled (running internally at a very high rate but a low bit depth). In an ADC, an analog anti-aliasing filter is combined with a digital decimation filter, and together they remove everything above half the sample rate at the ADC's output. Is the same analog circuitry used at different sample rates? And what design choices were made for the decimation filters? Ironically, it's sometimes the makers of "high-end" gear who make the more questionable choices.
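To make the decimation-filter point concrete, here's a toy sketch of the digital half of that process. It's not how any particular converter does it; the 64x oversampling ratio and the tap count are made-up numbers purely for illustration:

```python
# Toy model of the digital half of an oversampled ADC: a lowpass
# decimation filter that removes everything above the output Nyquist
# before the sample rate is dropped. The 64x ratio and tap count are
# made-up numbers for illustration, not any real converter's design.
import numpy as np
from scipy.signal import firwin, freqz

OSR = 64                        # hypothetical oversampling ratio
FS_OUT = 48_000                 # output sample rate
FS_MOD = FS_OUT * OSR           # internal (modulator) rate

# FIR lowpass whose cutoff sits at the output Nyquist (24 kHz)
taps = firwin(numtaps=2049, cutoff=FS_OUT / 2, fs=FS_MOD)

# Check how much leaks through above Nyquist (past the transition band)
w, h = freqz(taps, worN=1 << 16, fs=FS_MOD)
stopband = w >= 30_000
print(f"worst leakage above 30 kHz: "
      f"{20 * np.log10(np.abs(h[stopband]).max()):.1f} dB")

# Decimating is then just: y = np.convolve(x, taps)[::OSR]
```

The design choices hiding in there (tap count, window, where exactly the cutoff sits relative to Nyquist) are exactly the kind of thing that varies between manufacturers and between sample rates on the same box.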
Generally you can eliminate these questions by doing the (double blind) testing as follows:
a. Start with a signal at the highest sampling rate.
b. Convert a copy of this to the lower rate and then back to the original rate using a good SRC routine.
c. Do (double blind) listening tests to compare the two files.
Doing the SRC to a lower rate and back strips out any frequencies above the lower rate's Nyquist that were present in the original. But since both files end up at the same (higher) rate, playback goes through identical converter settings, which eliminates any converter differences, etc. that might otherwise be confused with effects of the sampling rate itself.
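For anyone who wants to try this, here's a minimal sketch of steps (a) and (b), assuming Python with scipy and soundfile installed. The filenames and the 96k/44.1k rate pair are hypothetical; any good SRC tool would do the same job:

```python
# Minimal sketch of steps (a) and (b): take a 96 kHz original, SRC it
# down to 44.1 kHz and back up to 96 kHz, and save both versions at
# 96 kHz for the blind comparison in step (c). Filenames and the
# rate pair are hypothetical.
from math import gcd

import soundfile as sf
from scipy.signal import resample_poly

HIGH, LOW = 96_000, 44_100
g = gcd(HIGH, LOW)                                # 300, so 147/320

x, rate = sf.read("source_96k.wav")
assert rate == HIGH, "start from the high-rate original"

down = resample_poly(x, LOW // g, HIGH // g, axis=0)     # 96k -> 44.1k
back = resample_poly(down, HIGH // g, LOW // g, axis=0)  # 44.1k -> 96k
back = back[: len(x)]                # trim any padding from resampling

# Both files are now 96 kHz; only the band-limiting differs, so any
# audible difference can't be blamed on the playback converter.
sf.write("reference_96k.wav", x, HIGH)
sf.write("bandlimited_96k.wav", back, HIGH)
```

Feed the two output files to an ABX tool for step (c) and you're testing the one thing you actually care about: the band-limiting imposed by the lower rate.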