Anderton
The examples are NOT upsampled audio files, but rendered virtual instruments. In other words, the first example was a virtual instrument sitting in a 44.1 kHz project. It was never recorded at 44.1 kHz; it was rendered at 44.1 kHz. The second example has the same instrument, same track, same MIDI data feeding it, etc., and was also never recorded at 44.1 kHz. However, it was rendered at 96 kHz via upsampling, then downsampled to 44.1 kHz.
What you're seeing in the graph is what's so cool about the process; what is in the audio range is reproduced accurately when downsampled.
Thank you for the clarification; perhaps I was confused by the limited description of the process in your post
here, and added to the confusion with my "upsampled/downsampled" shorthand. The difference in frequency distribution is not, per se, an indication of distortion introduced by resampling the higher sample rate down to the lower one, and I would not have expected it to be. Nor is the difference between the two clear evidence that the (to me) more pleasing sound of the second example is due to the elimination of foldover distortion. The argument I put forward in my post
here, that removing the aliased frequencies would not manifest as additional power at new frequencies, holds regardless. The black-box virtual instrument must have generated frequencies in the 96 kHz project that it did not generate in the 44.1 kHz project. That those new features were preserved in the downsampling is not amazing, although it is cool, since they were created by the virtual instrument under the altered conditions of the higher sample rate. Any properly downsampled rendered audio signal would preserve them: once generated, you could resample the rendered audio up and down repeatedly and they would be minimally altered.
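As a rough illustration of the foldover point (not of the actual instrument), here is a sketch in Python using numpy/scipy. A hard-clipped sine stands in for a harmonic-rich source, with a 5 kHz fundamental chosen so that one harmonic lands above 22.05 kHz. Rendered natively at 44.1 kHz, that harmonic folds back into the audio band; rendered at 96 kHz and then properly downsampled, it is simply filtered out. The specific signal, frequencies, and resampler are my assumptions, picked only to make the effect visible.

```python
import numpy as np
from scipy.signal import resample_poly

def clipped_sine(freq, fs, seconds=1.0):
    # Hard-clipped sine: the clipping creates odd harmonics of `freq`,
    # some of which land above half the sample rate.
    t = np.arange(int(fs * seconds)) / fs
    return np.clip(3.0 * np.sin(2 * np.pi * freq * t), -1.0, 1.0)

f0 = 5000.0                                    # 5th harmonic = 25 kHz, above 22.05 kHz

native_44k = clipped_sine(f0, 44100)           # 25 kHz cannot be represented; it folds to 19.1 kHz
hi_rate    = clipped_sine(f0, 96000)           # 25 kHz is represented cleanly
down_44k   = resample_poly(hi_rate, 147, 320)  # 96 kHz -> 44.1 kHz with an anti-alias filter

def level_db(x, fs, f_target):
    # Level (dB relative to the strongest bin) of the FFT bin nearest f_target.
    X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    i = np.argmin(np.abs(freqs - f_target))
    return 20 * np.log10(X[i] / X.max() + 1e-12)

alias_freq = 44100 - 5 * f0                    # where the 25 kHz harmonic folds to at 44.1 kHz
print("19.1 kHz content, rendered at 44.1 kHz:            %6.1f dB" % level_db(native_44k, 44100, alias_freq))
print("19.1 kHz content, rendered at 96 kHz, downsampled: %6.1f dB" % level_db(down_44k, 44100, alias_freq))
```

The downsampled render shows nothing at the foldover frequency; the anti-alias filter removes content rather than adding it, which is the sense in which the new features in your second example must have come from the instrument itself.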
With a virtual instrument, the issue of fidelity or distortion is problematic. The instrument in your example generates distinctly different tones in projects at different sample rates. Is the "true" tone the one produced in the lower or the higher sample rate environment? Is the lower rate project's output the one intended by the designer, or was the instrument for some reason unable to generate the intended features at the lower rate? As Noel suggests
here, the source of the difference lies in the DSP algorithms hidden in the instrument, and it is not predictable without knowing what is happening inside the black box. Why the designer would have chosen algorithms that produce such different renderings is a mystery. The issue in this case is aesthetic, and in that sense this local oversampling feature can be looked at as an effect, like highpass filtering followed by a chorus or flanger. If you like the output of this particular instrument better with the upsampling on, then by all means turn it on. With other plugins, such as a limiter or compressor, that are generally intended to be "transparent," affecting only a limited aspect of the audio (such as volume), this kind of behavior would be objectionable to most users.
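For what it's worth, a common pattern behind an oversampling switch, and only a guess at what this particular black box might be doing, is to run the nonlinear stage at a multiple of the project rate and then band-limit back down. The sketch below uses a tanh waveshaper as an arbitrary stand-in for the hidden DSP; the point is only that the native and oversampled paths produce measurably different output at the project rate.

```python
import numpy as np
from scipy.signal import resample_poly

def hidden_dsp(x):
    # Stand-in for whatever nonlinear processing the instrument performs internally.
    return np.tanh(4.0 * x)

def process_native(x):
    return hidden_dsp(x)

def process_oversampled(x, factor=2):
    up = resample_poly(x, factor, 1)    # run the nonlinearity at factor * project rate
    y = hidden_dsp(up)                  # harmonics above the project's Nyquist are representable here
    return resample_poly(y, 1, factor)  # band-limit and decimate: that content is discarded
                                        # instead of folding back into the audio band

fs = 44100
t = np.arange(fs) / fs
x = 0.8 * np.sin(2 * np.pi * 6000 * t)

diff = process_native(x) - process_oversampled(x)
print("RMS difference between the two paths: %.4f" % np.sqrt(np.mean(diff ** 2)))
```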
Some readers of this thread may be under the impression that upsampling prior to engaging a plugin will result in some kind of general "repair" of faulty or deficient plugins, and that the mechanism of that repair is that the higher sample rate eliminates aliasing introduced by the plugin. In specific cases it may, but it cannot be taken as a general rule. Nor can the fact that the output sounds "different" with upsampling engaged be equated, without additional demonstration, with it sounding better or processing the input more accurately. On the other hand, if there is no difference in the output at different sample rates, the proposition that the plugin has been properly designed to produce the same output at both rates is more secure.
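That last proposition is also easy to check empirically: render the same source through the process at two sample rates, bring both results to a common rate, and measure the residual. The sketch below does this with a hypothetical stand-in nonlinearity (again numpy/scipy, not any particular plugin); a deep null would support sample-rate invariance, while a large residual, as this stand-in produces, shows the rate is changing the result.

```python
import numpy as np
from scipy.signal import resample_poly

def process(x):
    # Hypothetical stand-in; in practice this would be a render of the same
    # track through the actual plugin at the given sample rate.
    return np.tanh(2.0 * x)

fs_lo, fs_hi = 44100, 96000
make_src = lambda fs: 0.7 * np.sin(2 * np.pi * 3000 * np.arange(fs) / fs)  # 1 second of a 3 kHz tone

out_lo = process(make_src(fs_lo))                           # processed at 44.1 kHz
out_hi = resample_poly(process(make_src(fs_hi)), 147, 320)  # processed at 96 kHz, brought to 44.1 kHz

n = min(len(out_lo), len(out_hi))
residual = out_lo[:n] - out_hi[:n]
null_db = 20 * np.log10(np.sqrt(np.mean(residual ** 2)) /
                        np.sqrt(np.mean(out_lo[:n] ** 2)) + 1e-12)
print("residual: %.1f dB relative to the 44.1 kHz render" % null_db)
```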