Earwax
The very fact that you have to qualify the implementation of the process (non-stochastic instruments, patching, use of loopbacks and VST-to-WAV recorder plugins, feedback loops, MIDI timing, etc.) is the difference.
Actually, I only qualified with respect to stochastic devices (and I already said I understand why those are relevant) and the latency caused by monitoring through a computer without enough power to achieve low latency. I said nothing about VST-to-WAV recorder plug-ins, and I only mentioned patching and loopbacks for the case where you have to record audio in real time. But I have yet to see any evidence of an audible, or even otherwise perceptible, difference between recording the audio output of a VSTi and playing it back, versus recording the gestures that created the VSTi's audio output and playing those back to produce the audio. So as far as I'm concerned, you don't need a loopback to record a VST, given that its output already matches the output your real-time playing would have produced.
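If it helps, here's the argument in code: a minimal Python sketch in which toy_synth is a hypothetical stand-in for a non-stochastic VSTi (not any real plug-in API). Rendering the same recorded gestures twice produces sample-for-sample identical audio, which is exactly why the loopback buys you nothing.

import math

SAMPLE_RATE = 44100

def toy_synth(gestures, duration_s):
    """Render note gestures as plain sine tones, deterministically."""
    n = int(duration_s * SAMPLE_RATE)
    out = [0.0] * n
    for start_s, freq_hz, length_s in gestures:
        first = int(start_s * SAMPLE_RATE)
        last = min(n, first + int(length_s * SAMPLE_RATE))
        for i in range(first, last):
            out[i] += 0.25 * math.sin(2 * math.pi * freq_hz * (i - first) / SAMPLE_RATE)
    return out

# The "performance": gestures captured while playing live.
gestures = [(0.0, 440.0, 0.5), (0.5, 660.0, 0.5)]

live_take = toy_synth(gestures, 1.0)       # what a loopback would have captured
offline_render = toy_synth(gestures, 1.0)  # the recorded gestures, played back later

assert live_take == offline_render  # identical, sample for sample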
In my humble opinion, recording MIDI data for playback is no substitute for real-time audio recording.
With synthesizers, aside from the stochastic caveat, there is
no difference.
The gestures produce the synthesizer's audio. Recording the gestures, upon playback, produces the same audio. With guitar, an amp sim is not a physical amp and never will be. You choose which sound you want first, then you figure out how to record it. If it's an amp, use a mic. If it's a sim, record the dry guitar track. I dunno, it all seems very simple to me.
Some of us, though, would love to see the same paradigm for recording VST/VSTi that we have for recording external audio.
Well, there are
lots of things I'd like to see too! But often, those pesky laws of physics rear their ugly heads. The instant a computer is involved, there will be latency caused by monitoring and there is no way around that at the present time. DAWs excel at capturing external audio and allowing you to edit that audio. VSTis leave the world of multitrack recording and enter the world of "in the box" computer-based production. That is why so many VSTis offer two versions: a
plug-in for use with a DAW, and a
stand-alone version for use as a (somewhat) traditional instrument.
Even if it were important to record a VSTi's audio because all or most of your synths do interesting random things and you hope to capture the one special performance where the randomness adds up just the way you want, computer-based instruments by definition have to live within the limitations of a computer-based system, the biggest being latency.
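To be fair, here is the stochastic case sketched the same way, with noisy_synth as a hypothetical stand-in for an instrument that has an unseeded random element (a noise source or free-running LFO, say). Replaying the gestures cannot reproduce the take you liked, so an audio recording is the only way to keep it:

import random

def noisy_synth(gestures, n_samples=10):
    # Hypothetical: each render draws fresh random modulation.
    return [random.uniform(-1.0, 1.0) for _ in range(n_samples)]

gestures = [(0.0, 440.0, 0.5)]
take_one = noisy_synth(gestures)
take_two = noisy_synth(gestures)  # same gestures, different audio

assert take_one != take_two  # only recording the audio preserves take_one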
I would absolutely love for someone to tell me how to achieve the results I want.
It's easy...load each VSTi into a decent laptop. You now have an external instrument, so we're comparing apples to apples (or I guess it would be windows to windows). Record the audio from the instruments into your DAW of choice.
To me the deal breaker isn't whether or not SONAR can record audio. The deal breaker is that before you can even consider it a viable replacement for something like a PortaStudio or ADAT, latency has to be low enough to give the kind of playing experience you want, or you're just going to end up with a real-time recording of an unpleasant performance. A VSTi on even a dual-core laptop will give latency low enough to be comparable to a hardware synthesizer. When that goes into the DAW, you can use zero-latency monitoring because you already have your sound in the laptop.
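For a sense of scale, here's some back-of-the-envelope Python for the buffer-size component of monitoring latency. Real interfaces add converter and driver overhead on top of this, so treat the numbers as lower bounds:

SAMPLE_RATE = 44100  # Hz

def buffer_latency_ms(buffer_frames, sample_rate=SAMPLE_RATE):
    return 1000.0 * buffer_frames / sample_rate

for frames in (64, 128, 256, 512, 1024):
    one_way = buffer_latency_ms(frames)
    # Monitoring through the computer costs at least input + output buffers.
    print(f"{frames:5d} frames: ~{one_way:5.1f} ms one-way, ~{2 * one_way:5.1f} ms round trip")

At 512-sample buffers you're already near 12 ms each way, roughly where many players start to notice the lag, which is why zero-latency hardware monitoring sidesteps the whole question.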
Again, I understand the need to record stochastic devices. I just don't understand the need to record VSTis that produce the same output whether you listen to their live audio or to the audio produced by playing back the gestures that created it in the first place.
Mixmkr is describing something different, and it's a situation I ran into often when creating instructional videos for SONAR. For me the solution was two interfaces: SONAR used an ASIO one and Vegas used a WDM one. I patched the SONAR interface's output to the Vegas interface's input and recorded SONAR's output in Vegas. It's also possible to do this internally in Windows, but when the SONAR windows were on screen I wanted them to show an actual ASIO interface, the kind people would use in their day-to-day work, rather than a Windows scenario that would have no relevance to them unless they too were doing instructional videos.
And FWIW, the CA-X parameters are automatable. A more ironclad example of why Mixmkr needs what he wants is inserting a stompbox effect
sans MIDI control between the guitar and interface.