Maybe an example of what the setting is designed to do will help.
I have a Waldorf microQ rack, which, for anyone not familiar with it, is a digital hardware synth designed quite a few years ago.
Let's say I sequence a MIDI part for it, with each note lying dead on the beat. I then send the MIDI out through a Motu USB MIDI interface, which hands it on to the synth via 5-pin MIDI cables, and I have an audio track recording the audio the synth produces.
Now, sending the MIDI out through the interface takes a millisecond or two, and the synth itself takes a few milliseconds more to process the data and convert it to audio. This means Sonar receives the incoming audio some time after the MIDI's position in the timeline. The overall audio latency as adjusted by the ASIO buffer setting is irrelevant here; what matters is just the time it takes that MIDI signal to be converted to audio, which for the synth in question is usually around 4-5 milliseconds.
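To put numbers on it, here is a minimal sketch of the arithmetic, assuming a 44.1 kHz project; the individual delay figures are illustrative, not measured:

```python
SAMPLE_RATE = 44100          # project sample rate in Hz (assumed)

midi_transit_ms = 1.5        # USB interface + 5-pin cable transit (illustrative)
synth_process_ms = 4.0       # synth's MIDI-to-audio processing time (illustrative)

total_offset_ms = midi_transit_ms + synth_process_ms
offset_samples = round(total_offset_ms * SAMPLE_RATE / 1000)

print(f"Total MIDI-to-audio offset: {total_offset_ms} ms "
      f"= {offset_samples} samples at {SAMPLE_RATE} Hz")
# -> Total MIDI-to-audio offset: 5.5 ms = 243 samples at 44100 Hz
```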
That means the audio in the track recording it will be delayed by 4-5 milliseconds relative to the MIDI. The preferences setting lets me tell Sonar to compensate for that automatically by adjusting the relative timeline position of the MIDI and audio, which is more convenient than shifting the audio into place by hand.
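Conceptually, the compensation just slides the recorded audio earlier by that fixed offset. A minimal sketch of the idea in numpy terms; the function and values are mine for illustration, not anything Sonar exposes:

```python
import numpy as np

def compensate(audio: np.ndarray, offset_samples: int) -> np.ndarray:
    """Shift recorded audio earlier by offset_samples, padding the tail
    with silence. Conceptually what the preference setting does."""
    if offset_samples <= 0:
        return audio.copy()
    shifted = np.empty_like(audio)
    shifted[:-offset_samples] = audio[offset_samples:]  # pull audio earlier
    shifted[-offset_samples:] = 0.0                     # silence at the end
    return shifted

# Hypothetical usage: undo a 243-sample (~5.5 ms at 44.1 kHz) delay.
recording = np.zeros(44100, dtype=np.float32)  # stand-in for a recorded take
aligned = compensate(recording, 243)
```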
The catch is that the preferences setting is a global one, and therefore affects all audio resulting from MIDI in the same way. So if I also have my MS20mini hooked up, or a soft-synth running, their audio tracks get adjusted by the same setting. But they take different amounts of time to respond to MIDI, so the automated correction is wrong for them, and in that case I end up correcting any noticeable discrepancy by hand anyway.
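To see why a single global value cannot be right for more than one device, consider the residual error it leaves per device; the device delay figures below are invented for illustration:

```python
# Invented per-device MIDI-to-audio delays, in milliseconds.
actual_delay_ms = {
    "microQ": 5.0,       # hardware synth via USB MIDI interface
    "MS20mini": 3.0,     # different hardware, different response time
    "soft synth": 0.5,   # rendered in the box, near-instant
}

GLOBAL_CORRECTION_MS = 5.0   # the one global preference value

for device, delay in actual_delay_ms.items():
    residual = delay - GLOBAL_CORRECTION_MS
    print(f"{device}: residual error after correction = {residual:+.1f} ms")
# Only the device the setting was tuned for comes out at 0.0 ms.
```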
The setting really dates from the time when people often had a single synth, often a multi-timbral one, which they used for all synth-related purposes, so a single automated correction factor made sense. It is still useful if for some reason all MIDI and the resulting audio is "out" by a fixed amount. Experimenting with it a bit is the best way to understand what it does.
I suggest running the test with MIDI drawn in the PRV or step sequencer, or otherwise strictly quantized, because timing discrepancies in a live performance would mask the MIDI timing issue. We are talking about tiny amounts of time here, generally below the accuracy of human playing, unless you play like a robot with perfect timing.
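If you want to put a number on the delay rather than eyeball it, one crude way is to record a single quantized note and find where its onset actually lands relative to where the MIDI note sits. A sketch, assuming a mono float recording; the threshold-based onset detection here is deliberately naive:

```python
import numpy as np

def measure_offset_ms(recording: np.ndarray, expected_sample: int,
                      sample_rate: int = 44100,
                      threshold: float = 0.05) -> float:
    """Estimate MIDI-to-audio delay from a recording of one quantized note.

    Finds the first sample whose amplitude exceeds the threshold and
    compares it to the sample where the MIDI note sits in the timeline.
    """
    above = np.nonzero(np.abs(recording) > threshold)[0]
    if above.size == 0:
        raise ValueError("no onset found above threshold")
    return (above[0] - expected_sample) / sample_rate * 1000.0

# Hypothetical usage: note drawn on beat 1, i.e. sample 0 of the clip.
# offset = measure_offset_ms(recorded_clip, expected_sample=0)
```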
Personally, I nudge audio if I feel it needs it; otherwise I generally leave it alone. A slight time difference can be useful for making quantized sequenced parts sound a bit less sequenced, and can help get separation between different instruments at the mixing stage.