• SONAR
  • X3 Producer: Why is it so difficult to record audio from a soft synth? (p.9)
2015/02/18 20:17:23
Earwax
Anderton
Earwax
 
Not if the final audio from the guitar is the output of an amp sim and VST effects in Sonar. If Sonar could play back and record the output from VST/VSTi at the same time, from multiple sources, there would be no issue. It can't. In my examples, more than one person wants to record VST/VSTi audio at the same time. No way can Sonar do that.



I'm really trying not to be dense here, but this doesn't make sense to me. SONAR can play the output from a VST and VSTi in real time (plus any latency, of course) and record the input to them at the same time, so it can reproduce the sound heard during the real time performance at a later time. Sure, it won't physically record the output onto a track, but why would that matter? The object of recording the real-time output would be so you can play it back at some point, right? But you can play it back now, with the same results as the real-time performance (with the exception of stochastic devices that produce variable outputs even with the same input). 
 
With non-stochastic devices, I'm just not seeing how there would be any difference in functionality or sound between hearing something in real-time and recording its output so you can play it back, and hearing something in real-time and recording its input so you can hear the same output you heard originally when playing it back. 
 
I'm really not trying to be argumentative or anything, I just don't understand how there's any practical difference between the two yet people here seem to believe sincerely there would be, so I want to know what I'm missing. However if this relates only to stochastic devices, then I do understand the difference.


Craig,
I guess the examples aren't working. So, let me put it another way. The very fact that you have to qualify the implementation of the process (non-stochastic instruments, patching, use of loopbacks and VST-to-WAV recorder plugins, feedback loops, MIDI timing, etc.) is the difference. To me, recording audio in real time should be just that: recording audio in real time. I shouldn't have to think about whether my instruments of choice fit the "non-stochastic" model. I shouldn't have to deal with various and sundry workarounds to record audio. I shouldn't have to think about the source of the audio, or the type of instrument used to create it. I (and my fellow VST/VSTi-using musician buddies) should just be able to plug in, pull up all the necessary VST/VSTi plugins, hit record, and go. And have everything (and I do mean EVERYTHING) recorded as I and my fellow musicians play. For the record, my computer is more than powerful enough to do what I want. My interface also enables me to implement the various kludges and workarounds suggested by some, without having to deal with additional DA/AD conversions.
 
In my humble opinion, recording MIDI data for playback is no substitute for real time audio recording. I have a "live" in-studio recording of Yes playing "The Gates of Delirium" straight through. That would be EXTREMELY difficult and time consuming (utterly impossible really) to do with a MIDI recording.
 
I have already said there are situations where doing what you suggest works okay - see my posts #31 and #59. And, if that's the way 99.9% of the Sonar users choose to work, and they're comfortable with it all of the time, great! Some of us, though, would love to see the same paradigm for recording VST/VSTi that we have for recording external audio.
 
I would absolutely love for someone to tell me how to achieve the results I want. I've had enough people tell me I shouldn't need to.
 
2015/02/18 20:36:56
tlw
Anderton
I'm really trying not to be dense here, but this doesn't make sense to me. SONAR can play the output from a VST and VSTi in real time (plus any latency, of course) and record the input to them at the same time, so it can reproduce the sound heard during the real time performance at a later time. Sure, it won't physically record the output onto a track, but why would that matter? The object of recording the real-time output would be so you can play it back at some point, right? But you can play it back now, with the same results as the real-time performance (with the exception of stochastic devices that produce variable outputs even with the same input).


Exactly. If a VSTi is MIDI controlled no function on it (if we disregard the rarely used NRPN side of MIDI) can have any value other than a fixed number between 0-127 (or 1-128). There are no nuances that a live performance contains that "fit between the numbers" because even if the VSTi can do it, the controller can't.

There may be VSTis that don't work internally within the restrictions of MIDI (I don't use enough software synths to know), but if the MIDI controller they are operated by can only send 128 fixed values that is all the synth is going to receive no matter whether the MIDI is recorded and the track then bounced/frozen or the synth's output is recorded during the performance as audio. The result will be the same.
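The 7-bit ceiling tlw describes is easy to see with a quick sketch (the helper name is mine, but the quantization is standard MIDI CC behaviour):

```python
def cc_from_fraction(position: float) -> int:
    """Quantize a continuous knob position (0.0-1.0) to a 7-bit MIDI CC value (0-127)."""
    return max(0, min(127, round(position * 127)))

# The smallest step a standard CC can express is 1/127 of the controller's range,
# so two nearby knob positions collapse to the same value on the wire:
print(cc_from_fraction(0.500))  # 64
print(cc_from_fraction(0.502))  # 64 as well: the nuance "between the numbers" is lost
```

Whether that CC stream is recorded and replayed later, or heard live, the synth receives the identical quantized values, which is the point being made above.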

Maybe I'm having a stupid week (it certainly wouldn't be the first time) but I genuinely don't see the difference between me, say, recording the output from the controllers on my microQ or Mopho as MIDI, then playing the MIDI back through the synth to track the resulting audio, and ignoring the MIDI and only recording the audio created by those controllers as I move them. Other than how easy (or not) it is to fix mistakes or make fine adjustments, that is.

Obviously my less MIDI capable/equipped synths have to be treated differently, and recorded "old school pre-MIDI style", or very near it. Though most voltage controlled synths can generally also handle at least MIDI note number and on/off these days.

Having said that, I do think the ability to "record" a VSTi's output would be useful where there is randomness involved in the sound generation or notes/whatever. And if Sonar could do that then it could also do what is being asked for here of course.

(Edited for typos; any I've missed will just have to live with it.)
2015/02/18 21:34:52
tlw
Earwax
I would absolutely love for someone to tell me how to achieve the results I want. I've had enough people tell me I shouldn't need to.


OK, a serious suggestion. Maybe you ought to consider moving away from VSTis and towards hardware synths. It's a different, "old fashioned" way of working that's often much slower and comes with its own frustrations, such as trying to get voltage-controlled gear to make the same sound twice. It does have a very different feel to it than working entirely "in the box".

Hardware brings its own set of compromises, not least in terms of space and cost. But having started with (MIDI-less) hardware synths because that was the only kind of synth there was at the time, then tried software and never been really happy with the results, I took the decision to move back to hardware the day I picked up a DSI Mopho out of interest, tracked it against a couple of VSTis, and the Mopho trampled all over them.

I just wish synths like that had been around 30 years ago at today's prices. Now is perhaps the best time there's ever been to be into analogue.

Despite being primarily a guitarist, I've never used digital/vst guitar amp/fx "emulators" much either as it happens, perhaps because I grew up with the real thing.
2015/02/18 21:46:56
Anderton
Earwax
The very fact that you have to qualify the implementation of the process (non-stochastic instruments, patching, use of loop backs and VST-to-WAV recorder plugins, feedback loops, MIDI timing, etc.) is the difference.

 
Actually I only qualified about stochastic devices (and I already said I understand why that is relevant) and the latency caused by monitoring through a computer without sufficient power to produce low enough latency. I said nothing about VST to WAV recorder plug-ins. I only mentioned patching and loopbacks if you have to record audio in real time, but I have yet to see any evidence that there's an audible or even a perceptual difference between recording the audio output of a VSTi and playing it back compared to recording the gestures that created the VSTi's audio output and playing those back to produce audio. So as far as I'm concerned, you don't need a loopback to record a VST given that its output already matches the output that would be produced by your playing in real time.
 
In my humble opinion, recording MIDI data for playback is no substitute for real time audio recording.

 
With synthesizers, aside from the stochastic caveat, there is no difference. The gestures produce the synthesizer's audio. Recording the gestures, upon playback, produces the same audio. With guitar, an amp sim is not a physical amp and never will be. You choose which sound you want first, then you figure out how to record it. If it's an amp, use a mic. If it's a sim, record the dry guitar track. I dunno, it all seems very simple to me.
 
Some of us, though, would love to see the same paradigm for recording VST/VSTi that we have for recording external audio.

 
Well, there are lots of things I'd like to see too! But often, those pesky laws of physics rear their ugly heads. The instant a computer is involved, there will be latency caused by monitoring and there is no way around that at the present time. DAWs excel at capturing external audio and allowing you to edit that audio. VSTis leave the world of multitrack recording and enter the world of "in the box" computer-based production. That is why so many VSTis offer two versions: a plug-in for use with a DAW, and a stand-alone version for use as a (somewhat) traditional instrument.
 
Even if it was important to record a VSTi's audio because all or most of your synths do interesting random things and you hope to capture the one special performance where the randomness adds up just the way you want, by definition using computer-based instruments has to live within the limitations of a computer-based system - the biggest being latency. 
 
I would absolutely love for someone to tell me how to achieve the results I want.

 
It's easy...load each VSTi into a decent laptop. You now have an external instrument, so we're comparing apples to apples (or I guess it would be windows to windows). Record the audio from the instruments into your DAW of choice.
 
To me the deal breaker isn't whether or not SONAR can record audio. The deal breaker is that before you can even consider that as a viable replacement for something like a PortaStudio or ADAT, latency has to be low enough to give the kind of experience you want, or you're just going to have a real-time recording of an unpleasant performance experience. A VSTi in even a dual-core laptop will give low enough latency to be comparable to a hardware synthesizer. When that goes into the DAW, you can use zero-latency monitoring because you already have your sound in the laptop.
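To put rough numbers on the monitoring latency being discussed, here is a back-of-envelope sketch (it ignores driver and converter overhead, which adds a few more milliseconds in practice):

```python
def monitoring_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Approximate round-trip monitoring latency: one input buffer plus one output buffer."""
    return 2 * buffer_samples / sample_rate_hz * 1000

# A 128-sample buffer at 44.1 kHz is under 6 ms round trip, comfortable for playing live;
# at 1024 samples it balloons to over 46 ms, which most players clearly feel.
print(round(monitoring_latency_ms(128, 44100), 1))   # 5.8
print(round(monitoring_latency_ms(1024, 44100), 1))  # 46.4
```

This is why the buffer size a given computer can sustain, rather than any recording feature, decides whether monitoring through the box is pleasant.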
 
Again, I understand the need to record stochastic devices. I just don't understand the need to record VSTis that produce the same output regardless of whether you listen to their audio, or you listen to the audio produced by the gestures that produced that audio originally. 
 
Mixmkr is describing something different, and it's a situation I ran into often when creating instructional videos for SONAR. For me the solution was two interfaces: SONAR used an ASIO one and Vegas used a WDM one. I patched the SONAR interface out to the Vegas interface in and recorded SONAR's out in Vegas. It's also possible to do this internally in Windows, but when the SONAR windows were on screen I wanted them to show an actual ASIO interface, like the kind people would use in their day-to-day work, rather than a Windows scenario that would have no relevance to them unless they were doing instructional videos.
 
And FWIW, the CA-X parameters are automatable. A more ironclad example of why Mixmkr needs what he wants is inserting a stompbox effect sans MIDI control between the guitar and interface.
 
 
 
2015/02/18 22:05:29
Earwax
tlw
Earwax
I would absolutely love for someone to tell me how to achieve the results I want. I've had enough people tell me I shouldn't need to.


OK, a serious suggestion. Maybe you ought to consider moving away from VSTis and towards hardware synths. It's a different, "old fashioned" way of working that's often much slower and comes with its own frustrations, such as trying to get voltage-controlled gear to make the same sound twice. It does have a very different feel to it than working entirely "in the box".

Hardware brings its own set of compromises, not least in terms of space and cost. But having started with (MIDI-less) hardware synths because that was the only kind of synth there was at the time, then tried software and never been really happy with the results, I took the decision to move back to hardware the day I picked up a DSI Mopho out of interest, tracked it against a couple of VSTis, and the Mopho trampled all over them.

I just wish synths like that had been around 30 years ago at today's prices. Now is perhaps the best time there's ever been to be into analogue.

Despite being primarily a guitarist, I've never used digital/vst guitar amp/fx "emulators" much either as it happens, perhaps because I grew up with the real thing.

Hi. Thanks for the response. I'm a bit of a hardware synth hog. Current hardware I own includes, but is not limited to:
Sequential Circuits Prophet 600
Korg DSS-1
Roland MKS-20
Ensoniq ESQ-M
EMU Proteus 1
Akai Z4 sampler (I love this thing!)
Yamaha TG77
Fender Rhodes Mark I Stage Piano (yeah I know - not a synth!)
The first synthesizer I ever touched (and actually trained on) was a modular Moog 900 series system (complete with ribbon controller and two-tiered keyboard) in 1972. I trained on that monster for two years, 1972 and 1973, and went on from there. The sonic power of that thing puts present-day VSTis to shame! Needless to say, I never actually owned a modular Moog, but working with one for two years was an experience that has changed me forever. The hardware I own now serves me well, but there are VSTis that do things my hardware can't (Vaz Modular, SCOPE Modular III/IV, Reaktor, and any number of acoustic piano and Hammond B3 emulators spring to mind). So, I do work with external hardware.
 
The thing is, with hardware, it's plug in, hit record, and go. I want that exact same immediacy with my VST/VSTi.
 
I'm not a guitarist. I tried - can't play for s**t!! But, I do play Chapman Stick. So I hear you about the hardware amp thing. But, I use a Line 6 POD X3 Pro (2 inputs, one for each side of the Stick), and it does the job.
 
Anyway, enough rambling. Thanks for your contributions to the thread. You've got some good suggestions!
 
 
2015/02/18 22:24:19
Earwax
Anderton
Earwax
The very fact that you have to qualify the implementation of the process (non-stochastic instruments, patching, use of loop backs and VST-to-WAV recorder plugins, feedback loops, MIDI timing, etc.) is the difference.

 
Actually I only qualified about stochastic devices (and I already said I understand why that is relevant) and the latency caused by monitoring through a computer without sufficient power to produce low enough latency. I said nothing about VST to WAV recorder plug-ins. I only mentioned patching and loopbacks if you have to record audio in real time, but I have yet to see any evidence that there's an audible or even a perceptual difference between recording the audio output of a VSTi and playing it back compared to recording the gestures that created the VSTi's audio output and playing those back to produce audio. So as far as I'm concerned, you don't need a loopback to record a VST given that its output already matches the output that would be produced by your playing in real time.
 
In my humble opinion, recording MIDI data for playback is no substitute for real time audio recording.

 
With synthesizers, aside from the stochastic caveat, there is no difference. The gestures produce the synthesizer's audio. Recording the gestures, upon playback, produces the same audio. With guitar, an amp sim is not a physical amp and never will be. You choose which sound you want first, then you figure out how to record it. If it's an amp, use a mic. If it's a sim, record the dry guitar track. I dunno, it all seems very simple to me.
 
Some of us, though, would love to see the same paradigm for recording VST/VSTi that we have for recording external audio.

 
Well, there are lots of things I'd like to see too! But often, those pesky laws of physics rear their ugly heads. The instant a computer is involved, there will be latency caused by monitoring and there is no way around that at the present time. DAWs excel at capturing external audio and allowing you to edit that audio. VSTis leave the world of multitrack recording and enter the world of "in the box" computer-based production. That is why so many VSTis offer two versions: a plug-in for use with a DAW, and a stand-alone version for use as a (somewhat) traditional instrument.
 
Even if it was important to record a VSTi's audio because all or most of your synths do interesting random things and you hope to capture the one special performance where the randomness adds up just the way you want, by definition using computer-based instruments has to live within the limitations of a computer-based system - the biggest being latency. 
 
I would absolutely love for someone to tell me how to achieve the results I want.

 
It's easy...load each VSTi into a decent laptop. You now have an external instrument, so we're comparing apples to apples (or I guess it would be windows to windows). Record the audio from the instruments into your DAW of choice.
 
To me the deal breaker isn't whether or not SONAR can record audio. The deal breaker is that before you can even consider that as a viable replacement for something like a PortaStudio or ADAT, latency has to be low enough to give the kind of experience you want, or you're just going to have a real-time recording of an unpleasant performance experience. A VSTi in even a dual-core laptop will give low enough latency to be comparable to a hardware synthesizer. When that goes into the DAW, you can use zero-latency monitoring because you already have your sound in the laptop.
 
Again, I understand the need to record stochastic devices. I just don't understand the need to record VSTis that produce the same output regardless of whether you listen to their audio, or you listen to the audio produced by the gestures that produced that audio originally. 
 
Mixmkr is describing something different, and it's a situation I ran into often when creating instructional videos for SONAR. For me the solution was two interfaces: SONAR used an ASIO one and Vegas used a WDM one. I patched the SONAR interface out to the Vegas interface in and recorded SONAR's out in Vegas. It's also possible to do this internally in Windows, but when the SONAR windows were on screen I wanted them to show an actual ASIO interface, like the kind people would use in their day-to-day work, rather than a Windows scenario that would have no relevance to them unless they were doing instructional videos.
 
And FWIW, the CA-X parameters are automatable. A more ironclad example of why Mixmkr needs what he wants is inserting a stompbox effect sans MIDI control between the guitar and interface.
 
 
 


Well, I guess we can agree to disagree, and I'm good with that.  
 
I really didn't mean you specifically suggested all of the workarounds, including VST/VSTi-to-WAV recorders. It was a collective "you", taking into account all of the workarounds suggested by everyone, including you. Interesting, though: you seem to indicate that what Mixmkr and I want are different, yet the solutions appear to be quite similar. Two machines (or at least two apps) running Vegas and Sonar using ASIO and WDM drivers for Mixmkr, and two machines, one running Sonar and one running a standalone VSTi, for me. As I said before, with the laptop and another recording machine: been there, done that.

 
But hey, different perspectives for different people. I still don't think "The Gates of Delirium" could have been recorded in one take using MIDI data.
Thanks Craig.
2015/02/18 22:33:22
swamptooth
tlw
Exactly. If a VSTi is MIDI controlled no function on it (if we disregard the rarely used NRPN side of MIDI) can have any value other than a fixed number between 0-127 (or 1-128). There are no nuances that a live performance contains that "fit between the numbers" because even if the VSTi can do it, the controller can't.

There may be VSTis that don't work internally within the restrictions of MIDI (I don't use enough software synths to know), but if the MIDI controller they are operated by can only send 128 fixed values that is all the synth is going to receive no matter whether the MIDI is recorded and the track then bounced/frozen or the synth's output is recorded during the performance as audio. The result will be the same.



Unless you're dealing with a synth that accepts OSC (Open Sound Control) http://opensoundcontrol.o...tion-osc messages via non-traditional user interfaces. MIDI isn't the only game in town. Sonar can't record OSC messages (neither can most DAWs).
And good luck recording any MIDI from a synth that has randomization functions. Here's a pretty extreme example using Dimension Pro and its built-in randomizers: http://youtu.be/umWuQfwMnKk
Honestly, though, it's no skin off my nose because I have three other DAWs I can do this in; it would just be nice to have Sonar do it so I can get closer to a 100% complete solution.
This functionality is ideal for live resampling into other synths and tools. Something I don't do a lot, but enough to justify shelling out 500 bucks for alternatives.
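To illustrate the OSC point, here is a minimal stdlib-only sketch of encoding an OSC message that carries a 32-bit float argument, a resolution MIDI's 128 fixed CC steps cannot represent (the `/filter/cutoff` address is just an example, not tied to any particular synth):

```python
import struct

def osc_message(address: str, value: float) -> bytes:
    """Encode a minimal OSC message with a single 32-bit float argument."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# Two controller positions only 0.0001 apart still produce distinct messages,
# far beyond what a 7-bit MIDI CC could distinguish:
m1 = osc_message("/filter/cutoff", 0.5000)
m2 = osc_message("/filter/cutoff", 0.5001)
print(m1 != m2)  # True
```

Since that continuous data never passes through the DAW's MIDI engine, a host that can only record MIDI has no way to capture the performance, which is exactly why recording the audio output matters here.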
 
 
2015/02/18 22:58:00
mettelus
I have been reading this and understand that workarounds do exist but are not elegant, and I can commiserate with the OP's point: I ran into this a year ago trying to do something I thought would be simple, and it required an intricate routing of spaghetti to achieve.
 
I added this post to the associated Feature Request thread rather than contaminate this thread further (since it will achieve nothing here). The only thing I have seen (which will not work for SONAR itself) is a simple VST audio tap called "Spitter" that feeds Geist's internal sampling engine. Internally, Geist can see these taps and does not route the sampled audio back into the host's audio engine (to prevent a feedback loop).
 
[I am not sure what constitutes "cross threading" (my apologies if this is), but the bulk of the content is in the feature request where it may one day be implemented for SONAR users.]
2015/02/18 23:02:02
Earwax
swamptooth
tlw
Exactly. If a VSTi is MIDI controlled no function on it (if we disregard the rarely used NRPN side of MIDI) can have any value other than a fixed number between 0-127 (or 1-128). There are no nuances that a live performance contains that "fit between the numbers" because even if the VSTi can do it, the controller can't.

There may be VSTis that don't work internally within the restrictions of MIDI (I don't use enough software synths to know), but if the MIDI controller they are operated by can only send 128 fixed values that is all the synth is going to receive no matter whether the MIDI is recorded and the track then bounced/frozen or the synth's output is recorded during the performance as audio. The result will be the same.



Unless you're dealing with a synth that accepts OSC (Open Sound Control) http://opensoundcontrol.o...tion-osc messages via non-traditional user interfaces. MIDI isn't the only game in town. Sonar can't record OSC messages (neither can most DAWs).
And good luck recording any MIDI from a synth that has randomization functions. Here's a pretty extreme example using Dimension Pro and its built-in randomizers: http://youtu.be/umWuQfwMnKk
Honestly, though, it's no skin off my nose because I have three other DAWs I can do this in; it would just be nice to have Sonar do it so I can get closer to a 100% complete solution.
This functionality is ideal for live resampling into other synths and tools. Something I don't do a lot, but enough to justify shelling out 500 bucks for alternatives.
 
 


Okay, now you've whetted my appetite! What DAW(s) are you using that can do this??
2015/02/18 23:15:41
swamptooth
Cubase, Reason, Ableton and (occasionally) Reaper, though I hate its UI.