Hmm, I've been in this conversation for decades now. I have suffered through several double-blind tests at top-notch, state-of-the-art professional studios, and I have never seen or even heard of any human being who could reliably tell the difference between 24-, 32-, or 64-bit (or even 192 kHz) recordings and "guess" correctly every time.
"Guess" is the operative word here. Being the prankster I tend to be, during one double-blind test comparing 16-, 24-, and 32-bit material, most discerning listeners could usually tell the difference between 16-bit and 24-bit, but not every single time. My trick was introducing a solo acoustic version I recorded of Frank Zappa's "Why Does It Hurt When I Pee?" into the mix, and 2 out of 4 picked it as being 24-bit.
I had to just smile a wicked smile and say, "Well, it was in fact recorded at 24/48 in SONAR X3 Producer with a 64-bit double-precision audio engine, exported to a native stereo broadcast .wav at 24/48, then mastered in Sound Forge Pro 11, but what you just heard was a 16-bit, 320 kbps .mp3."

Their ears fooled them because it was unexpected; they picked it because it was funny, and an enjoyable blindside to their egos.
I have also been recording at the industry standard 24-bit/48 kHz for decades now, and have enjoyed the benefits of both 32-bit and 64-bit floating-point "processing" while recording all projects at 24/48, unless a project specifically calls for 44.1 or 96 kHz.
I've also found it to be true that all audio has the same sound quality at 24-bit depth; the sample rate mainly affects latency. The higher the sample rate, the lower the latency during recording, for a given buffer size.
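Just to put numbers on that, here's my own back-of-napkin sketch (not anything from a DAW manual; the 256-sample buffer is an assumed, typical setting):

```python
# Rough sketch: at a fixed buffer size in samples, the time one buffer
# represents is buffer_size / sample_rate, so a higher sample rate means
# less latency per buffer. The 256-sample buffer is an assumed, typical setting.
BUFFER_SIZE = 256  # samples

for sample_rate in (44_100, 48_000, 96_000, 192_000):
    latency_ms = BUFFER_SIZE / sample_rate * 1000
    print(f"{sample_rate:>7} Hz @ {BUFFER_SIZE} samples -> {latency_ms:.2f} ms per buffer")
```

At 96 kHz that works out to roughly half the per-buffer latency of 48 kHz.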
But my question is this: when you upload a 32-bit "depth" .wav file to SoundCloud, do they leave it that way, or does it get processed (dithered) down to the much more streaming- and server-storage-friendly industry standard of 16/44.1? Or maybe even to a still more storage- and streaming-friendly 320 kbps .mp3?
I don't know, call me a skeptic, but I don't think a 32-bit-depth .wav file would stream very well over the internet, if it streamed at all.
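Just for scale, here's the same kind of napkin math on the raw bitrates involved (stereo files assumed; none of this is measured from SoundCloud itself):

```python
# Rough sketch: uncompressed bitrate = bit_depth * sample_rate * channels.
# Stereo is assumed; a 320 kbps mp3 is listed for comparison.
def raw_kbps(bit_depth: int, sample_rate: int, channels: int = 2) -> float:
    return bit_depth * sample_rate * channels / 1000

print(f"32-bit float / 48 kHz .wav : {raw_kbps(32, 48_000):7.0f} kbps")
print(f"16-bit / 44.1 kHz .wav     : {raw_kbps(16, 44_100):7.0f} kbps")
print(f"320 kbps .mp3              : {320:7.0f} kbps")
```

A 32-bit/48 kHz stereo .wav comes out to about 3,072 kbps, roughly ten times a 320 kbps .mp3, which is why I assume it gets converted to something lighter before it ever hits anyone's browser.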
I would safely assume 99.9% of all consumer computers won't even properly handle anything beyond 24-bit/48 kHz. Modern-day music lovers typically loathe music with ultra-wide dynamic range; whatever doesn't knock them out of their seats or scare them generally gets lost under the everyday noise floor ever present in normal listening environments.
Most listeners couldn't care less about frequencies they physically can't hear, and they don't listen to SoundCloud on audiophile or professional-grade studio equipment that would give them a chance to tell the difference even if they could. I know I typically don't bother, nor do I whip out any metering or spectrum analyzers for stuff I stream from the internet.