Well, it stands to reason that if a plugin can't accept 64-bit input or produce 64-bit output, the necessary truncation is going to negate
some of the benefit you might have derived from doing all your calculations at 64-bit precision.
The pertinent question is: does this loss of precision have audible consequences? I do not believe it does.
The benefits of higher-precision calculations are small. So small that they only become potentially significant cumulatively, over the entire end-to-end process. Even then, it's unlikely you'd
hear the difference except in extreme and unusual circumstances.
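To put a rough number on what a single 64-to-32-bit truncation costs, here's a quick sketch (the test signal and 48 kHz length are just my illustration, not from any particular DAW):

```python
import numpy as np

rng = np.random.default_rng(0)

# One second of full-scale, audio-like noise at 48 kHz, computed in float64.
signal64 = rng.uniform(-1.0, 1.0, 48_000)

# The 64-bit-incapable plugin boundary: a round-trip through float32.
signal32 = signal64.astype(np.float32).astype(np.float64)

err = signal64 - signal32
err_rms_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
print(f"RMS truncation error: {err_rms_db:.1f} dBFS")
# Lands well below -150 dBFS, far beneath the ~-96 dB floor of 16-bit audio.
```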
Other mitigating factors...
Many plugins that are unable to accept 64-bit input still perform their internal calculations with 64-bit precision, independent of the resolution of the audio engine.
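As a sketch of what that looks like inside (hypothetical code, not any particular plugin), the host hands over 32-bit buffers, but the recursive state, which is where rounding error would actually accumulate, stays in 64-bit:

```python
import numpy as np

def process_block(in_buf: np.ndarray, state: float, coeff: float = 0.995):
    """One-pole lowpass: 32-bit I/O at the pins, 64-bit math inside."""
    x = in_buf.astype(np.float64)        # promote the host's 32-bit buffer
    y = np.empty_like(x)
    for n in range(len(x)):
        # The feedback state is kept in float64 regardless of the
        # engine's resolution, since errors compound here sample by sample.
        state = coeff * state + (1.0 - coeff) * x[n]
        y[n] = state
    return y.astype(np.float32), state   # truncate only at the output pin

buf = np.zeros(512, dtype=np.float32)    # a typical host-supplied block
out, state = process_block(buf, state=0.0)
```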
Audio clips are ultimately stored as 32-bit data regardless of how that data was derived. It's as if you'd inserted a 64-bit-incapable plugin at the end of every fx bin on every track.
Even if you don't enable the 64-bit engine and do all calculations with 32-bit precision, cumulative rounding errors are still going to be swamped when the data is ultimately converted to integer form, whether a 24-bit WAV, a 16-bit WAV, or an AAC/MP3/FLAC compressed file. All of these formats require rounding away the fractional portion of every sample, and standard practice is to add a small amount of random noise (dither) before rounding so the resulting quantization error comes out as benign, signal-independent noise rather than distortion.
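For the curious, that final quantization step looks roughly like this, assuming plain TPDF dither (mastering-grade tools typically use noise-shaped variants):

```python
import numpy as np

rng = np.random.default_rng(0)

def to_int16_tpdf(x: np.ndarray) -> np.ndarray:
    """Quantize float samples in [-1.0, 1.0) to 16-bit with TPDF dither."""
    scaled = x * 32767.0
    # TPDF dither: the sum of two uniforms, spanning +/-1 LSB. It is added
    # BEFORE rounding, so the quantization error decorrelates from the
    # signal and comes out as a constant low-level hiss, not distortion.
    dither = rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    return np.clip(np.round(scaled + dither), -32768, 32767).astype(np.int16)

print(to_int16_tpdf(np.array([0.0, 0.5, -0.5])))
```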
I would suggest an objective test, not to determine whether 32-bit plugins are a problem, but whether the 64-bit engine is worth the CPU and RAM overhead it incurs. Simply export your project twice, once with the 64-bit engine enabled and once without. Then do a controlled A/B/X test and see whether you can tell them apart at a rate better than chance.
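Scoring the A/B/X result is just a binomial tail probability; a minimal sketch (the 12-of-16 example is mine):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of scoring at least `correct` out of `trials` by guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f"{abx_p_value(12, 16):.3f}")  # ~0.038: better than chance at p < 0.05
```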
Whether or not you can distinguish them, ask yourself which would have the greater impact on the song's quality: high-precision calculations, or raising your limiter threshold by 1 dB.