drewfx1
Unless someone can demonstrate that it's ever even borderline audible through some objective testing, I would say that there was some marketing going on.
It depends on what you compare it to. When compared to a 16-bit fixed-point audio engine, you don't have to do much DSP to hear an obvious difference. With a 24-bit fixed engine, you have to work a lot harder to create a project where you can hear a difference. It is possible, but the project wouldn't have much relationship to real-world projects...unless your music consists of solo acoustic instruments recorded in isolation with noiseless mics, then bounced multiple times through precision reverbs and played back at really loud levels.
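To make that concrete, here's a rough sketch (NumPy, not modeled on any particular DAW's engine) of why the word length of a fixed-point engine matters once you stack up processing: each gain stage re-quantizes the signal, and the rounding errors accumulate. The 50-pass loop and the 0.77 gain are arbitrary stand-ins for "lots of DSP".

```python
import numpy as np

def quantize(x, bits):
    # Snap to a fixed-point grid: 2**(bits - 1) steps per full-scale unit
    scale = 2.0 ** (bits - 1)
    return np.round(x * scale) / scale

fs = 48000
t = np.arange(fs) / fs
signal = 0.5 * np.sin(2 * np.pi * 1000 * t)  # 1 kHz tone at -6 dBFS

rms_err_db = {}
for bits in (16, 24):
    x = signal.copy()
    for _ in range(50):                # 100 re-quantized gain stages in total
        x = quantize(x * 0.77, bits)   # gain down, stored back at `bits`
        x = quantize(x / 0.77, bits)   # gain restored, stored again
    err = x - signal
    rms_err_db[bits] = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    print(f"{bits}-bit engine: accumulated error ~ {rms_err_db[bits]:.0f} dBFS")
```

The 16-bit error floor should land tens of dB above the 24-bit one, and the 24-bit residue sits far below audibility on a normal program, which is why you have to contrive a project to expose it.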
Again, to draw a comparison with dithering: at a mastering seminar I reduced the signal level dramatically and did comparisons with and without dithering. The difference was totally obvious, but only because the signal level was so low that you could really hear what was happening in those least significant bits. People couldn't tell the difference at "normal" listening levels.
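A rough sketch of that kind of demonstration, assuming a plain 16-bit rounding quantizer and TPDF dither (the specific signal level and dither shape here are my choices, not necessarily what was used at the seminar): a tone sitting below half an LSB simply vanishes when quantized, but survives, buried in noise, once dither is added.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 48000
t = np.arange(fs) / fs
lsb = 2.0 ** -15                                 # one 16-bit LSB at +/-1.0 full scale
sig = 0.4 * lsb * np.sin(2 * np.pi * 440 * t)    # tone well under one LSB (~ -100 dBFS)

def quantize16(x):
    return np.round(x / lsb) * lsb

plain = quantize16(sig)                          # no dither
tpdf = (rng.random(fs) - rng.random(fs)) * lsb   # triangular-PDF dither, +/-1 LSB peak
dithered = quantize16(sig + tpdf)

# Correlate each result against the original tone shape:
ref = np.sin(2 * np.pi * 440 * t)
print("undithered:", np.dot(plain, ref))   # tone never crosses a step -> all zeros
print("dithered:  ", np.dot(dithered, ref))
```

The undithered version quantizes to digital silence, while the dithered version still correlates with the 440 Hz tone, which is exactly the kind of thing that becomes glaringly obvious once you crank the monitors on a low-level signal.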
However, I always wondered whether hearing what dithering does to multiple low-level examples would train people's ears sufficiently that they could learn to recognize the difference at normal listening levels. The ear's ability to "learn" extremely subtle gradations would explain why some people hear very subtle audio cues while others don't.