Anderton
drewfx1
Unless someone can demonstrate that it's ever even borderline audible through some objective testing, I would say that there was some marketing going on.
It depends upon what you compare it to. When compared to a 16-bit fixed-point audio engine, you don't have to do much DSP to hear an obvious, audible difference. With a 24-bit fixed-point engine, you have to work a lot harder to create a project where you can hear a difference. It is possible, but the project wouldn't have much relationship to real-world projects...unless your music consists of solo acoustic instruments recorded in isolation with noiseless mics, then bounced multiple times through precision reverbs and played back at really loud levels.
Well, here we were comparing calculations done using 32-bit single-precision floating point to those done using 64-bit double-precision floating point.
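For anyone who wants to see the gap in numbers, here's a minimal sketch (my own illustration, nothing from CW): single precision keeps about 24 significant bits and double about 53, and sequentially accumulating a value that isn't exactly representable in binary makes the difference obvious.

```python
import numpy as np

# Machine epsilon for each format: the worst-case relative rounding error
# sits roughly nine orders of magnitude apart.
eps32 = float(np.finfo(np.float32).eps)   # ~1.2e-07, roughly -138 dB
eps64 = float(np.finfo(np.float64).eps)   # ~2.2e-16, roughly -313 dB

# Accumulate 0.1 one million times; cumsum forces one-add-at-a-time order,
# so each addition rounds to the accumulator's precision.
x = np.full(1_000_000, 0.1)
err32 = abs(float(np.cumsum(x, dtype=np.float32)[-1]) - 100_000.0)
err64 = abs(float(np.cumsum(x, dtype=np.float64)[-1]) - 100_000.0)
print(err32, err64)   # the single-precision drift is dramatically larger
```

This is deliberately a worst-ish case (a long sequential accumulation of a non-representable constant), not a claim about any particular mix engine.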
In terms of marketing, I too remember the days when we avoided at all costs any processing that wasn't absolutely necessary, out of fear of audible damage from calculations done at lower bit depths. And I agree that when CW introduced the 64-bit engine, we were not far removed from those days.
Personally, as I've expressed in various ways, I find some of CW's historical wording regarding the 64-bit engine, shall we say, "unfortunate". But as a long-time, enthusiastic user of CW products, I put this in the context of a company I otherwise have great respect for.
I put equal (or more) blame on individuals' inclination to ignore basic questions of context: "There are errors? OK, how loud are they under typical conditions?"
And no one ever seems to ask under what conditions a given problem is minimized or exacerbated.
For some reason, when it comes to audio, people want to believe that any artifact must be audible under all conditions if they just listen for it, but the real world doesn't work that way. And intelligent people who profess themselves to be "skeptics" will sometimes readily accept every claim from one side without even trivial doubt, yet demand endless proof that the other side has dotted every "i" and crossed every "t", without ever providing any contrary evidence of their own.
I agree that one could create a laboratory project with the express intent of making 32-bit errors audible, but for real-world usage I've never seen a shred of objective evidence that they come close to making a difference.
Mathematically, the size of the errors relative to the signal depends on the bit depth the calculations are done at, the number of calculations performed, and how the errors accumulate given the nature of the calculations. With 32-bit floating point you are starting at a point far from ever being audible, and in mixing I will assert that the errors are typically distributed fairly randomly, so they tend to grow with the square root of the number of operations rather than linearly. Therefore you need to do lots and lots of calculations before the errors could accumulate enough to be worth worrying about.
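To put a rough number on "lots and lots", here's a simulation of my own (nothing to do with CW's actual engine, and the 10,000 gain stages are an arbitrary, deliberately excessive figure): run a block of audio through repeated float32 gain changes against a float64 reference and measure the accumulated rounding error.

```python
import numpy as np

# Hypothetical mix path: the same audio run through n_ops gain stages,
# once in float32 and once in float64, differing only in rounding.
rng = np.random.default_rng(0)
n_ops = 10_000                          # arbitrary, deliberately large
signal = rng.standard_normal(10_000) * 0.1

acc32 = signal.astype(np.float32)
acc64 = signal.copy()                   # double-precision reference path
g = np.float32(1.0001)
for _ in range(n_ops):
    acc32 = acc32 * g                   # rounds to a 24-bit mantissa each step
    acc64 = acc64 * np.float64(g)       # same nominal gain, double precision

err = acc32.astype(np.float64) - acc64
err_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)) / np.sqrt(np.mean(acc64 ** 2)))
print(f"relative error after {n_ops} gain stages: {err_db:.0f} dB")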
The math part is not really open to debate. But I would be quite interested in someone presenting objective evidence that the number of calculations under the 32-bit mix engine is sufficient to make the errors audible, or that when mixing real-world signals the errors might accumulate unusually rapidly to the point of being a problem.
Again, to draw a comparison to dithering: in a mastering seminar I did, I reduced the signal level dramatically and did comparisons with and without dithering. The difference was totally obvious, but only because the signal level was so low that you could really hear what was happening with those least significant bits. People couldn't tell the difference at "normal" listening levels.
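For anyone curious what that demo boils down to, here's a sketch of the same idea in code (my own, not the seminar's): quantize a sine sitting only a few LSBs above the 16-bit floor, with and without TPDF dither, and look at the 3rd-harmonic distortion the bare quantizer creates.

```python
import numpy as np

# One second of a 440 Hz sine at ~ -78 dBFS, i.e. only ~4 LSBs peak.
rng = np.random.default_rng(1)
fs, f = 48_000, 440                      # integer cycles, so no FFT leakage
t = np.arange(fs) / fs
q = 2.0 / 2 ** 16                        # 16-bit step size over a +/-1.0 range
x = 4 * q * np.sin(2 * np.pi * f * t)

plain = np.round(x / q) * q              # straight quantization
tpdf = rng.uniform(-0.5, 0.5, fs) + rng.uniform(-0.5, 0.5, fs)
dithered = np.round(x / q + tpdf) * q    # TPDF dither added before rounding

# Without dither the error is correlated with the signal and shows up as
# harmonic distortion; with dither it becomes an uncorrelated noise floor.
h3_plain = abs(np.fft.rfft(plain - x)[3 * f])
h3_dith = abs(np.fft.rfft(dithered - x)[3 * f])
print(h3_plain, h3_dith)                 # distortion traded for benign noise
```

At normal playback levels both versions sit near the noise floor; the audible difference in the seminar came from cranking the monitoring until those last few bits were in plain view.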
However, I always wondered if after people heard what dithering did to multiple low-level examples, it would train their ears sufficiently so they could learn to recognize the difference at normal listening levels. The ability of the ear to "learn" extremely subtle gradations would explain why some people hear very subtle audio cues while others don't.
If the dither/quantization error at a normal listening level is below the absolute threshold of hearing, or is sufficiently masked by background noise and the audio itself, it will be inaudible. This is commonly the case for 16-bit audio, but you can certainly find (or create) conditions where it is audible.
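The back-of-envelope numbers behind "commonly the case for 16 bit" (my arithmetic, not from the post): undithered quantization noise has an RMS of q/sqrt(12), which puts a full-scale sine about 98 dB above the 16-bit noise floor.

```python
import numpy as np

# Theoretical SNR of an ideal 16-bit quantizer (the familiar
# 6.02 * bits + 1.76 dB figure, derived directly here).
bits = 16
q = 2.0 / 2 ** bits                      # step size over a +/-1.0 range
snr_db = 20 * np.log10((1 / np.sqrt(2)) / (q / np.sqrt(12)))
print(round(snr_db, 1))                  # ~98.1 dB

# Empirical cross-check: quantize a near-full-scale sine and measure
# the error directly.
t = np.arange(48_000) / 48_000
x = 0.9 * np.sin(2 * np.pi * 997 * t)
err = np.round(x / q) * q - x
meas_db = 20 * np.log10((0.9 / np.sqrt(2)) / np.sqrt(np.mean(err ** 2)))
print(round(meas_db, 1))                 # close to the theoretical figure
```

Whether ~98 dB of headroom clears the threshold of hearing obviously depends on playback gain and the room, which is exactly the "find or create conditions" caveat above.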
In borderline cases, my understanding is that training listeners on what to listen for can make a very significant difference. And that, aside from hearing loss, training, or "knowing how to listen", is the primary difference in individuals' ability to hear these things; it's not that anyone has naturally superior hearing or anything like that. So it wouldn't surprise me if, as you suggest, some people have learned to hear details that escape the rest of us but are still within the physiological limits of our hearing.
But I would also assert that it's often not all that difficult to differentiate between the "conceivably borderline" cases and the "below the physiological limits of human hearing" cases, at least at listening levels that won't cause permanent hearing damage in the short time before you blow your speakers.