I think the Hendrix playing live example isn't that germane...to record that, you'd just stick a mic in front of the amp. You could achieve the same thing with SONAR by recording a hardware synthesizer's audio output into a track. In some ways I think that would be superior anyway, because odds are the hardware will give you a control surface that encourages real-time playing...particularly if it's a "one knob/one function" analog synth.
Personally, I don't see the difference between having SONAR record my gestures as I play a synthesizer and having SONAR record the audio that results from those gestures. For example, I consider mixing (with hardware or touch faders) a performance, not a set-and-forget ritual of setting faders. I slam 'em around, solo things, pan, etc. With analog consoles, what you mixed was what you heard. Now we have automation, which, again, to me makes no audible difference compared to capturing the mixer output to a two-track in real time. (And as far as Mix Recall is concerned, YEAH BABY!!!)
I think it largely comes down to the psychology of someone's preferred mode of working. Aside from some randomized parameter causing a spur-of-the-moment interaction, which I doubt is all that common, I really don't think the end result is technically any different whether you capture gestures or record audio. Then again, there's also no sonic difference between playing a guitar with a beautiful sunburst finish and one painted Gas Station Green, but you're going to want to play the one with the beautiful finish because it puts you in a better frame of mind.
There's something exciting about playing without a safety net, and if you turn off MIDI and just record audio in real time, that's what you're doing. I think that's what it's all about, as opposed to the technical differences between capturing sound and capturing gestures.