SonicExplorer
Hmmm... I did a lot of reading on the whole 32 vs. 64 bit mix engine last night and still came away not knowing what to believe. On paper it seems not realistic to hear a difference in most scenarios, but in practice some people claim to hear it.
In practice people claim to hear differences in cables too, even though we can prove in blind testing that they can't (and prove on a 'scope that the cables are identical). People claim all kinds of stuff they can't substantiate. They want to hear a difference, so they do.
If enough things are rounded, eventually the decimals can add up to pennies, which can add up to dollars; it all depends on how you group and round, and how many times that happens. Being an old-school analog relic, I just don't know enough about the inner workings of this digital stuff.
That is the reason for having something like a 64-bit engine: insurance against rounding. With 32-bit, rounding errors will occur. 32-bit FP is actually a 1-bit sign, an 8-bit exponent, and a 24-bit significand (23 stored bits plus an implicit leading bit). That means it has 24 bits of precision across an 8-bit exponent range, more or less. So when working at normal audio levels, assuming 24-bit input and output, you do get minor rounding errors. However, they are going to be on the order of 1, maybe 2 bits at most in all likelihood. So you don't worry about it. Even the best DACs don't resolve more than maybe 21-22 bits of signal, and other kinds of noise will be WAY higher.
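Just to put a number on that: here's a quick NumPy sketch of a single float32 gain stage and its inverse on a 24-bit signal (the 0.7 gain is an arbitrary stand-in for a plugin; any non-power-of-two value would do). Mathematically the round trip is a no-op, so whatever is left over is pure rounding error.

```python
import numpy as np

# One 24-bit LSB for full-scale signals in [-1.0, 1.0)
lsb = 1.0 / (1 << 23)

# A test tone quantized to 24 bits (these values are exactly
# representable in float32, so the starting point is error-free)
x = np.round(np.sin(np.linspace(0, 2 * np.pi, 4096)) * (1 << 23)) / (1 << 23)
x32 = x.astype(np.float32)

# Apply a gain and then undo it, entirely in float32
y32 = (x32 * np.float32(0.7)) / np.float32(0.7)

# Compare against the exact original, in units of one 24-bit LSB
err = np.abs(y32.astype(np.float64) - x)
print("worst rounding error, in 24-bit LSBs:", err.max() / lsb)
```

On my mental math the worst case lands well under one 24-bit LSB, which is exactly the "1, maybe 2 bits at most" ballpark.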
However, in theory, enough operations, particularly ones that involve things like multiplication or division in sequence, could increase the rounding error. It doesn't really happen in practice, but it could in theory. Well, we can easily prevent that just by going double precision. Now we have an 11-bit exponent and a 53-bit significand. That means we have so much precision that no rounding error will ever reach the 24-bit output signal, even in edge cases. Since modern processors kill at double-precision performance, there's no real reason not to use it.
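Here's a sketch of that accumulation argument: the same signal pushed through 1000 gain stages and back, once in float32 and once in float64. The gain values are made up; the point is only how the two precisions compare against one 24-bit LSB.

```python
import numpy as np

lsb = 1.0 / (1 << 23)  # one 24-bit LSB

# 24-bit-quantized test tone (exactly representable in both precisions)
x = np.round(np.sin(np.linspace(0, 2 * np.pi, 4096)) * (1 << 23)) / (1 << 23)

gains = np.linspace(0.9, 1.1, 1000)  # 1000 arbitrary plugin-like gain stages

y32, y64 = x.astype(np.float32), x.astype(np.float64)
for g in gains:          # apply every stage in sequence
    y32 = y32 * np.float32(g)
    y64 = y64 * g
for g in gains:          # undo them all, so the exact answer is x again
    y32 = y32 / np.float32(g)
    y64 = y64 / g

print("float32 worst error (24-bit LSBs):", np.abs(y32.astype(np.float64) - x).max() / lsb)
print("float64 worst error (24-bit LSBs):", np.abs(y64 - x).max() / lsb)
```

The float32 path drifts by a visible-in-the-numbers handful of LSBs after 2000 operations, while the float64 path stays orders of magnitude below one LSB, i.e. it never touches the 24-bit output.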
If you want to see a case where it does matter, look at graphics. Most monitors and video formats output 8 bits per channel. However, you need to do processing in 32-bit floating point per channel if you want truly smooth colour gradients. Do it in 16-bit half precision and you can sometimes see visible errors.
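The same kind of experiment works here. This sketch pushes a smooth 0-to-1 gradient through ten gamma encode/decode round trips (an assumed stand-in for a chain of image operations) in half and in single precision, then measures the damage in units of one 8-bit output step:

```python
import numpy as np

lsb8 = 1.0 / 255.0                 # one step of an 8-bit output channel
x = np.linspace(0.0, 1.0, 4096)    # a smooth gradient

def chain(x, dtype):
    # Ten gamma encode/decode round trips, rounded to `dtype` each step.
    # Mathematically this is the identity, so the residue is rounding error.
    y = x.astype(dtype)
    for _ in range(10):
        y = (y ** dtype(1 / 2.2)).astype(dtype)  # gamma encode
        y = (y ** dtype(2.2)).astype(dtype)      # gamma decode
    return y.astype(np.float64)

ref = chain(x, np.float64)
err16 = np.abs(chain(x, np.float16) - ref).max() / lsb8
err32 = np.abs(chain(x, np.float32) - ref).max() / lsb8
print("half precision worst error (8-bit steps):  ", err16)
print("single precision worst error (8-bit steps):", err32)
```

The half-precision error gets within striking distance of a whole 8-bit step, which is where visible banding comes from, while single precision stays far below anything the output format can express.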
And not to derail my own thread, but I also read a comment last night that said XP sounds better than W2K. Which really left me scratching my head, as I was unaware that one OS version versus another could, by itself, change sound quality. Huh??
For sound going through the OS layer? Sure. Remember that unless you use something like ASIO to talk straight to the soundcard, the OS is modifying the sound. It runs it through a mixer and resampling engine to allow more than one program to play sound at once. Sometimes it does other stuff too. How well or badly it does that can vary. In general, newer OSes are better than older ones: there's more CPU power these days, so they can afford better algorithms.
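To make "resampling engine" concrete, here's a sketch of roughly the cheapest thing an OS mixer could do: linear-interpolation resampling from 44.1 kHz to 48 kHz. This is an illustration, not any particular OS's actual algorithm; real mixers use proper polyphase filters precisely because linear interpolation rolls off highs and adds aliasing.

```python
import numpy as np

def resample_linear(x, sr_in, sr_out):
    """Naive linear-interpolation resampler (illustrative only)."""
    n_out = int(len(x) * sr_out / sr_in)
    t = np.arange(n_out) * sr_in / sr_out      # fractional source positions
    i = np.clip(np.floor(t).astype(int), 0, len(x) - 2)
    frac = t - i
    # Blend each pair of neighbouring input samples
    return x[i] * (1 - frac) + x[i + 1] * frac

sr_in, sr_out = 44100, 48000
t = np.arange(sr_in) / sr_in                   # one second of audio
x = np.sin(2 * np.pi * 1000 * t)               # 1 kHz test tone
y = resample_linear(x, sr_in, sr_out)
print(len(y), "samples out at", sr_out, "Hz")
```

A 1 kHz tone survives this almost untouched, but the interpolation error grows roughly with the square of frequency, which is why two mixers doing "the same" 44.1-to-48 conversion can sound measurably different near the top of the band.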