A technique I use a lot for vocals, but which also applies to bass, is to select the region that needs a fix and apply Sonar's DSP gain feature. You can set the crossfade time for the region transition in Preferences. The volume change is then "baked" into the file, leaving you with a consistent signal to which you can later apply overall automation if you like.
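Conceptually, that kind of destructive region gain is just a gain envelope multiplied into the samples, ramping in and out over the crossfade time so the level change doesn't click. A minimal sketch in plain numpy (hypothetical helper, not Sonar's actual implementation):

```python
import numpy as np

def apply_region_gain(audio, start, end, gain, fade_len):
    """Destructively apply `gain` to audio[start:end], ramping linearly
    over `fade_len` samples at each boundary (the "crossfade time")
    so the level change doesn't click. Returns a new buffer."""
    out = audio.copy()
    length = end - start
    env = np.full(length, gain)
    ramp = np.linspace(1.0, gain, fade_len)  # unity -> target gain
    env[:fade_len] = ramp                    # fade into the change
    env[-fade_len:] = ramp[::-1]             # fade back out to unity
    out[start:end] *= env
    return out
```

Once the result is written back to the file, downstream automation rides on a consistent signal instead of compensating for the problem spot.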
To me, automation is about restoring dynamics, adding level variations between parts, and creating level changes that augment the song's rhythm. Although you can of course use it to solve problems, I prefer solving problems at the file level. That way, when I back up or export the file, I'm not dependent on automation for the processing. I'll even take out the parts between phrases on vocals to get total silence rather than use mute or automation. This also leaves the automation free for what I consider its most valuable function.
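Cutting the gaps to true digital silence, rather than muting or automating them down, amounts to keeping only the phrase regions and zeroing everything else, with a short fade at each phrase edge to avoid clicks. A rough sketch under those assumptions (hypothetical helper, not any DAW's API):

```python
import numpy as np

def silence_gaps(audio, phrases, fade_len=32):
    """Keep only the (start, end) phrase regions, zeroing everything
    between them for true digital silence, with short linear fades at
    each phrase boundary to avoid clicks. Returns a new buffer; in a
    destructive workflow this would overwrite the file."""
    out = np.zeros_like(audio)
    for start, end in phrases:
        out[start:end] = audio[start:end]
        out[start:start + fade_len] *= np.linspace(0.0, 1.0, fade_len)
        out[end - fade_len:end] *= np.linspace(1.0, 0.0, fade_len)
    return out
```

Because the silence is in the file itself, no mute state or automation lane has to survive an export or backup for the track to behave.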
I'm not saying my approach is "better" or "worse"; just that after trying multiple processes over the years, file-level fixes leave me with the fewest long-term issues. But I'll also admit that I cheat: since coming up with the EB5 Bass Expansion Pack, that has become my go-to bass instrument...especially with Melodyne's audio-to-MIDI conversion.