I've tracked/mixed/produced/mastered a lot of classical works where the music has "space" and decaying tails, and as a result also had a lot of test subjects for audio minutiae like bit resolution, 44.1 vs 96, different dither types, etc. Under those circumstances, I found some people can reliably hear the benefits of dither; most can't. Some people can't hear dither at first but after hearing what it does to the music "under the microscope" and then A-Bing music over time, they "learn" to recognize its effects.
Hearing the effects of dither requires a trained ear and a quiet listening environment. So the way I see it, I do dithering for the benefit of those who
can hear it. (Similarly, the general public can't discriminate pitch as well as musicians can, but you still try to keep an instrument truly in tune, for the benefit of those who are sensitive to tuning discrepancies.)
This is the background for answering the OP. I'll take audio and reduce it to around -85 to -90 dB or so, which is a realistic noise floor for the real-world D/A converters used in consumer CD players. How you reduce it matters: you can't do it in Sonar with normalization or gain changes, because the audio engine processes at high internal resolution and still preserves full detail at those low levels.
You'll hear the buzziness because the least significant bits are essentially switching on and off instead of responding smoothly to level changes. Apply different dithers and you'll definitely hear a difference; choose the one that sounds best for the material at hand. For example, what sounds good with an orchestra might not sound right with a solo nylon-string guitar.
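If you want to simulate that experiment outside a DAW, here's a rough sketch in Python/NumPy (my own illustration, nothing to do with Sonar's internals): a 1 kHz tone dropped to about -85 dBFS so it only exercises the bottom couple of bits of a 16-bit word, truncated with and without TPDF dither. The harmonic-energy measure is just a crude proxy for the "buzz".

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
t = np.arange(fs) / fs

# 1 kHz sine at about -85 dBFS: the peak is under 2 LSBs of a 16-bit
# word, so only the lowest bits ever switch.
x = 10 ** (-85 / 20) * np.sin(2 * np.pi * 1000 * t)
scale = 2 ** 15  # 16-bit full scale

# Plain truncation: the error is locked to the waveform, so it shows up
# as harmonically related ("buzzy") distortion.
trunc = np.floor(x * scale) / scale

# TPDF dither: triangular noise of +/-1 LSB added before truncation
# decorrelates the error from the signal, turning buzz into benign hiss.
tpdf = rng.random(fs) - rng.random(fs)
dithered = np.floor(x * scale + tpdf) / scale

def harmonic_fraction(err, f0=1000):
    """Fraction of error energy sitting on harmonics of f0 (DC removed)."""
    spec = np.abs(np.fft.rfft(err - err.mean())) ** 2
    bins = [k * f0 * len(err) // fs for k in range(1, 20)]
    return sum(spec[b] for b in bins) / spec.sum()

print(f"truncated: {harmonic_fraction(trunc - x):.3f}")  # most error energy on harmonics
print(f"dithered:  {harmonic_fraction(dithered - x):.3f}")  # error spread into broadband noise
```

The truncated error concentrates on harmonics of the tone (that's the buzz), while the dithered error is spread evenly across the spectrum, which is exactly the trade you're making when you dither.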
OTOH, if you're recording dance music and everything sits in the top 6 dB of the dynamic range, you can use any dither and you won't hear a difference; nor will you hear a difference if you don't use any dither at all. Dither is basically context-sensitive: you can have guidelines, but I'm not sure you can have rules.