SilverBlueMedallion
I have been reading that the HIGHER the sampling rate, the LOWER the latency?
That sounds wrong! Wouldn't a higher sampling rate cause the hard drive and CPU to work even harder, thus making latency even longer?
Yes, it sounds wrong because latency is more complicated than just how fast you shove bits into a buffer.
Computers can't process audio data one sample at a time. They just don't work that way. Any time we bring data into a computer it has to be in chunks, whether we're talking about recording audio or reading data from a disk drive or a network adapter. Data may come in one byte at a time, but it gets stashed in a buffer until the buffer is full, and only then is the data actually processed. How long it takes to fill the buffer is therefore the main determiner of latency.
You can therefore reduce latency either by making the buffer smaller or by filling it faster. At any given buffer size, latency can be reduced by feeding the buffer data more quickly, i.e. by using a higher sample rate. However, you could also achieve the same effect by making the buffer smaller.
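To put numbers on it (the buffer sizes and sample rates below are just illustrative, not specific to any interface or driver), buffer-fill latency is simply the number of frames in the buffer divided by the sample rate:

```python
def buffer_latency_ms(buffer_frames, sample_rate_hz):
    """Time it takes to fill the buffer, in milliseconds."""
    return 1000.0 * buffer_frames / sample_rate_hz

# Same 256-frame buffer, faster sample rate -> lower latency:
print(buffer_latency_ms(256, 48_000))   # ~5.33 ms at 48 kHz
print(buffer_latency_ms(256, 96_000))   # ~2.67 ms at 96 kHz

# Same ~2.67 ms without touching the sample rate, just a smaller buffer:
print(buffer_latency_ms(128, 48_000))   # ~2.67 ms at 48 kHz
```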
Regardless of which method you use, the limiting factor is how quickly your CPU can process the data. At some point you will be feeding too much data too fast for the CPU to keep up. At that point you have no choice but to increase the buffer size, which negates the benefit of the higher sample rate, at least in terms of latency.
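Here's a rough sketch of why (stereo assumed; the numbers are the same illustrative ones as above). Both options give the same latency, but the higher sample rate makes the CPU process twice as many samples inside each callback deadline, which is why it hits the ceiling sooner:

```python
def samples_per_second(sample_rate_hz, channels):
    """How much audio the CPU has to chew through each second."""
    return sample_rate_hz * channels

def callbacks_per_second(buffer_frames, sample_rate_hz):
    """How often the audio callback has to finish on time."""
    return sample_rate_hz / buffer_frames

# Option A: 48 kHz with a 128-frame buffer (~2.67 ms)
print(samples_per_second(48_000, 2), callbacks_per_second(128, 48_000))   # 96000 samples/s, 375 callbacks/s

# Option B: 96 kHz with a 256-frame buffer (also ~2.67 ms)
print(samples_per_second(96_000, 2), callbacks_per_second(256, 96_000))   # 192000 samples/s, 375 callbacks/s
```

If the CPU can't keep up with option B, you're back to a 512-frame buffer, which at 96 kHz is the same ~5.33 ms you started with.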
But latency is more complicated than how fast you can get data into and out of the computer. There is also the overhead of what the computer's doing to that data. Many plugins necessarily introduce additional latency due to how they work - some must accumulate data within their own internal buffers because they work on chunks of data, too.
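For a sense of scale, here's a hypothetical round-trip estimate. The 1024-sample internal block is an assumed figure, typical of FFT-style effects rather than any particular plugin, but it shows how a plugin's own chunk size can dwarf the driver buffers:

```python
def frames_to_ms(frames, sample_rate_hz):
    return 1000.0 * frames / sample_rate_hz

sr = 48_000
io_buffer = 128       # driver buffer (frames), counted once in and once out
plugin_block = 1024   # plugin's assumed internal chunk size (frames)

total_ms = 2 * frames_to_ms(io_buffer, sr) + frames_to_ms(plugin_block, sr)
print(round(total_ms, 2))   # ~26.67 ms -- the plugin dominates
```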
Bottom line: don't increase the sample rate in order to reduce latency. Use higher sample rates because you need the wider bandwidth, e.g. songs for dogs, dolphins, or bats.