Just a thought about the terms "slow" and "fast" bounce. I believe these are misnomers. Better to call them real-time and non-real-time.
The non-real-time bounce is said to be fast because it doesn't wait for the audio output to be delivered. But it also means the engine does not have to keep up with real-time audio, so it can use signal processing algorithms that emphasize quality over efficiency. Theoretically, then, this could be the slower bounce if the CPU is marginal for the project. But since the density or complexity of a project can vary a lot across the timeline, the overall render time might still come out shorter than real time, even if the CPU could not play the whole project in real time using the "quality" algorithms.
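To put rough numbers on that (purely hypothetical figures, not measurements from Sonar), here is a little back-of-the-envelope sketch of why an offline render can still finish faster than real time even when one section of the project would overload the CPU during playback:

```python
# Hypothetical per-section CPU cost of the "quality" algorithms,
# expressed as a fraction of real time. A value > 1.0 means that
# section could NOT play back glitch-free in real time.
sections = [
    ("sparse intro", 60.0, 0.3),   # (name, length in seconds, CPU load factor)
    ("dense chorus", 30.0, 1.4),   # over 1.0: would drop out in real-time playback
    ("sparse outro", 90.0, 0.4),
]

project_length = sum(length for _, length, _ in sections)
offline_render_time = sum(length * load for _, length, load in sections)

print(f"Project length:      {project_length:.0f} s")   # 180 s
print(f"Offline render time: {offline_render_time:.0f} s")  # 96 s
# The offline render finishes faster than real time overall, even though
# the dense chorus alone could not have played back in real time.
```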
A real-time bounce presumably produces essentially what you hear when you play the project. Since it has to "keep up" with the audio hardware, it may use signal processing algorithms that emphasize efficiency over quality.
Recall the oversampling that was added in Sonar. When first offered, oversampling was only engaged during non-real-time export. If oversampling improves quality for a given track, then there should be a measurable difference (especially in a "null" test) compared with a real-time bounce, and one would hope it sounds better.
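For anyone who wants to try this, a null test is just subtracting one bounce from the other and looking at the residual. A minimal sketch, assuming both bounces were exported as WAV files at the same sample rate (the file names and the -100 dBFS yardstick are made up for illustration):

```python
import numpy as np
import soundfile as sf   # assumes the soundfile package is installed

# Hypothetical file names: one real-time bounce, one non-real-time export.
a, sr_a = sf.read("bounce_realtime.wav")
b, sr_b = sf.read("bounce_offline.wav")
assert sr_a == sr_b, "sample rates must match for a meaningful null test"

n = min(len(a), len(b))          # trim to the shorter file, just in case
residual = a[:n] - b[:n]         # the "null": identical renders leave pure silence

peak = np.max(np.abs(residual))
peak_db = 20 * np.log10(peak) if peak > 0 else float("-inf")
print(f"Peak residual: {peak_db:.1f} dBFS")
# Anything much above roughly -100 dBFS suggests the two bounces really did
# go through different processing, not just dither or rounding differences.
```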
In another thread it was also mentioned that some synths internally use different algorithms for real-time vs. non-real-time operation (the latter sometimes called "render", here meaning the output is destined for a file rather than the audio device).
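Plugin APIs generally give the host a way to tell the plugin whether it is rendering offline (VST3, for example, reports a process mode to the plugin), so a synth can branch on that. Here is only an illustrative sketch of the idea in Python, not any particular synth's code; the function name, the 4x factor, and the crude decimation are all made up:

```python
# Illustrative sketch: how a synth *might* pick its algorithm based on a
# host-supplied "offline render" flag.
import numpy as np

def render_block(phase_inc, num_samples, offline):
    """Generate one block of a naive sawtooth, with more oversampling offline."""
    oversample = 4 if offline else 1          # hypothetical: cheap 1x live, 4x offline
    n = num_samples * oversample
    phase = (np.arange(n) * phase_inc / oversample) % 1.0
    saw = 2.0 * phase - 1.0                   # naive saw, aliases badly at 1x
    if oversample > 1:
        # crude decimation back to the original rate by averaging each group
        # of oversampled points (a real synth would use a proper lowpass first)
        saw = saw.reshape(num_samples, oversample).mean(axis=1)
    return saw

realtime_out = render_block(440.0 / 48000.0, 512, offline=False)
offline_out  = render_block(440.0 / 48000.0, 512, offline=True)
# The two outputs differ slightly even with identical settings, which is
# exactly why a real-time bounce and an offline render may not null.
```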
So there are legitimate reasons for real-time audio to differ from non-real-time, not only bugs. Whether it sounds better, just different, or the same is not entirely predictable. And this discussion does not take into account the deliberate randomness mentioned above for some synths and effects, since that is seemingly unrelated to real-time vs non-real-time.