Okay... I did some tests by generating white noise and automating its volume with the decimation parameter set to 2ms and then 1000ms. In both cases, the automation graph showed a minimum time of around 40ms between nodes. I used Sonar's onscreen faders to write the automation, moving them as fast as I possibly could - so fast it would definitely be unmusical unless you wanted a really fast random tremolo effect.
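For anyone curious what that ~40ms floor between nodes would look like as an algorithm: a minimal sketch of time-based decimation, assuming it simply drops any node that arrives within a minimum gap of the last kept node. This is just a hypothetical model - I have no idea what Sonar actually does internally.

```python
def decimate(nodes, min_gap_ms):
    """Hypothetical model of automation decimation: keep only nodes
    spaced at least min_gap_ms apart (not Sonar's actual algorithm)."""
    if not nodes:
        return []
    kept = [nodes[0]]  # always keep the first node
    for t, v in nodes[1:]:
        # drop any node that falls within the minimum gap of the last kept one
        if t - kept[-1][0] >= min_gap_ms:
            kept.append((t, v))
    return kept

# (time_ms, fader_value) pairs from a fast fader move
raw = [(0, 0.0), (10, 0.2), (25, 0.5), (45, 0.7), (90, 1.0)]
print(decimate(raw, 40))  # -> [(0, 0.0), (45, 0.7), (90, 1.0)]
```

With a ~40ms floor, the 10ms and 25ms nodes vanish no matter how fast you move the fader - which matches what I saw in the graph.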
The following screenshot shows what happens when you render the tracks with the volume automation. It's clear the rendered volume curves follow the automation graph. Given how fast the changes are (the timeline is set to milliseconds), it would certainly seem that Sonar's automation resolution is about the same as what human perception is capable of discriminating.
As to whether the automation decimation parameter serves any useful purpose, I have no idea - it doesn't seem to make any difference.