drewfx1
Um, because in the real world it's already a moot point with 32 bits?
The truth is, the only reason people think 32-bit is an issue is that some marketing folks used carefully worded language to imply that it is.
But I bet if you go back and carefully parse what they actually say, you'll find they never claim there's an audible difference. Instead, their very careful wording talks about errors in the abstract and leaves the reader to jump to the conclusion that those errors are a problem.
Now here's the question for you: if this stuff were really a problem, why would they need that very careful, manipulative wording?
Folks made records with 16-bit digital recording, marketed to the public as squeaky clean, clear, quiet, and accurate.
With 16-bit audio, rounding error (from multiple generations of destructive processing) was certainly audible... and it sounded nasty.
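If you'd rather see the mechanism than take anyone's word for it, here's a rough Python/numpy sketch. The gain values, track length, and "edit chain" are just made up for illustration; it runs the same chain of gain changes two ways, re-quantizing to 16-bit after every pass versus staying in 32-bit float the whole time:

```python
import numpy as np

# Rough sketch, not a listening test: run a hypothetical chain of gain
# changes, re-quantizing to 16-bit after every pass (standing in for
# generations of destructive processing), and compare against the same
# chain kept in 32-bit float.
rng = np.random.default_rng(0)
signal = rng.uniform(-0.5, 0.5, 48_000)          # 1 second of noise at 48 kHz

def quantize_16bit(x):
    # Round to the nearest 16-bit step and clip to full scale.
    return np.clip(np.round(x * 32767.0), -32768, 32767) / 32767.0

gains = [0.7, 1.3, 0.9, 1.1, 0.8, 1.25, 0.95, 1.05]   # arbitrary edit history

x16 = signal.copy()
x32 = signal.astype(np.float32)
for g in gains:
    x16 = quantize_16bit(x16 * g)     # 16-bit path rounds every generation
    x32 = x32 * np.float32(g)         # 32-bit float keeps a ~24-bit mantissa

ideal = signal * np.prod(gains)       # reference with no quantization at all
for name, y in [("16-bit re-quantized", x16),
                ("32-bit float", x32.astype(np.float64))]:
    rms_err = np.sqrt(np.mean((y - ideal) ** 2))
    print(f"{name:20s} error: {20 * np.log10(rms_err):6.1f} dBFS RMS")
```

Toy numbers, obviously, but the pattern is the point: the 16-bit path picks up a fresh helping of rounding error every single generation, while the float path stays way down in the noise.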
If the extra resolution resulted in a significant performance hit... I could see the debate.
When it comes at virtually no CPU load, I'll take the extra resolution... and never give summing accuracy another thought.
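And on the summing-accuracy point, a similarly rough sketch (a hypothetical 64-track mix of noise, numpy again) compares a 32-bit float mix bus against a double-precision reference, with the approximate 16-bit quantization noise floor thrown in for scale:

```python
import numpy as np

# Rough sketch: sum a hypothetical 64-track mix on a 32-bit float "bus"
# and measure the summing error against a float64 reference, next to the
# approximate 16-bit quantization noise floor for comparison.
rng = np.random.default_rng(1)
tracks = rng.uniform(-0.05, 0.05, size=(64, 48_000)).astype(np.float32)

ideal = tracks.astype(np.float64).sum(axis=0)     # "perfect" reference sum
mix32 = tracks.sum(axis=0, dtype=np.float32)      # 32-bit float bus

rms_err = np.sqrt(np.mean((mix32.astype(np.float64) - ideal) ** 2))
floor_16 = 1.0 / (32767 * np.sqrt(12))            # ~16-bit quantization noise RMS

print(f"32-bit float summing error: {20 * np.log10(rms_err):6.1f} dBFS RMS")
print(f"16-bit quantization floor : {20 * np.log10(floor_16):6.1f} dBFS RMS")
```

At least in this toy case, whatever rounding the float bus introduces sits well below the 16-bit floor... which is why I don't lose sleep over it.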