gswitz
I don't agree that the least significant bit doesn't matter. When I make a file with only the least significant bit from a 24-bit wave and normalize it, I can identify the song.
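For what it's worth, here's a rough sketch of the kind of thing gswitz describes (his actual method isn't shown in the post, and the filename here is made up). It assumes a 24-bit PCM WAV and the Python numpy/soundfile packages:

```python
# Rough sketch, not gswitz's actual code: keep only the least significant bit
# of each 24-bit sample, then push the result up to full scale.
import numpy as np
import soundfile as sf

data, rate = sf.read("song_24bit.wav", dtype="int32")  # 24-bit samples arrive left-justified in int32
lsb = (data >> 8) & 1                                   # isolate bit 0 of each original 24-bit sample
normalized = lsb.astype(np.float32) * 2.0 - 1.0         # one way to "normalize": map {0,1} to {-1.0, +1.0}
sf.write("lsb_only.wav", normalized, rate)
```

Play the result back and, as he says, the song is often still recognizable, because the LSB is correlated with the program material rather than being pure noise.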
There are good reasons for using 32-bit data during processing, having to do with sinking rounding errors down into the noise floor and not letting them accumulate to the point of audibility. But I'm not talking about signal processing here; I'm talking about rendering, in response to the original question. Once exported as a final product, human ears are incapable of differentiating between 24- and 32-bit data.
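To illustrate the accumulation point, here's a toy sketch of my own (not anyone's actual processing chain): it re-quantizes to 24 bits after each of 100 hypothetical gain stages and compares that against doing the math in 64-bit float and quantizing only once at the end.

```python
# Toy demonstration: quantizing to 24 bits after every stage lets rounding
# error pile up; staying in float and quantizing once leaves only a single
# rounding step of roughly -144 dBFS.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-0.9, 0.9, 48000)            # one second of arbitrary "audio"

def q24(sig):
    """Round to the nearest 24-bit step (full scale = +/-1.0)."""
    return np.round(sig * 2**23) / 2**23

gains = [0.7, 1.3, 0.9, 1.1] * 25            # 100 hypothetical gain stages
chain_24, chain_float = x.copy(), x.copy()
for g in gains:
    chain_24 = q24(chain_24 * g)             # quantize after every stage
    chain_float = chain_float * g            # stay in 64-bit float throughout

err_every_stage = np.max(np.abs(chain_24 - chain_float))
err_single = np.max(np.abs(q24(chain_float) - chain_float))
print(f"quantize at every stage:  {20*np.log10(err_every_stage):7.1f} dBFS peak error")
print(f"quantize once at the end: {20*np.log10(err_single):7.1f} dBFS peak error")
```

The exact numbers depend on the gains you pick, but the gap between the two is the point: the single final quantization sits near -144 dBFS, while the stage-by-stage version drifts well above it.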
Not that I doubt your hearing acuity, Geoff. You're younger than me, so there's no question you hear better than I do. However, I think I can make the case that no human can hear the difference between rendered files at 24 vs. 32 bits.
Look at it this way... whatever differences exist between a 24-bit file and its 32-bit equivalent lie below -144 dB relative to full scale (a quick back-of-the-envelope check follows below). The full range of human hearing is only ~120 dB. Now, if you were a barn owl, you actually could hear something 14 dB below the threshold of (human) hearing, e.g. a mouse walking half a mile away. But even the best microphone on the planet couldn't record that, and even if it could, you wouldn't be able to hear the result.
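Here's that quick check, just arithmetic (my own sketch): anything a 32-bit render captures that a 24-bit render rounds away amounts to, at worst, half of one 24-bit step, i.e. 2^-24 of full scale.

```python
import math

bits = 24
# Worst-case rounding error when reducing to 24 bits is half of one step,
# i.e. 2**-24 of full scale for audio spanning -1.0..+1.0.
error_ceiling_db = 20 * math.log10(2.0 ** -bits)
print(f"24-bit rounding error ceiling: {error_ceiling_db:.1f} dBFS")  # about -144.5
print("full range of human hearing:   ~120 dB (threshold of hearing to threshold of pain)")
```

That's where the 144 figure comes from, and it's more than 24 dB beyond the entire span of human hearing.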
The lowest known sound level ever achieved is -14 dB SPL (in an anechoic chamber at Microsoft). You'd have to install a jet engine in that chamber to realize a 144 dB range: -14 dB SPL plus 144 dB puts the loudest sound at roughly 130 dB SPL. If you could then pick out a mosquito buzzing 10 ft away while the engine was running, I'd believe you can hear what's happening at -144 dB and below.