• SONAR
  • X3 Producer: Why is it so difficult to record audio from a soft synth? (p.13)
2015/02/20 21:04:28
Earwax
This is worth one last try –
 
In live audio recording, there are two events that can, to a certain extent, be considered “stochastic” (random) in nature. One is the production of the sound being recorded. The other is the actual performance of the musicians involved as it evolves over time. The two are, of course, inextricably intertwined. While MIDI deals with the first event pretty well (unless the sound generator itself produces sound stochastically, as some synths do), it doesn’t deal with the second event well at all. The issue is compounded when more than one musician wants to record at the same time.
 
The reason for this is quite simple. In another post, I used the live recording of Yes’ “Gates of Delirium” as an example to illustrate that reason, but really, you can use the recording of any piece that includes shifting tempi, complex time signatures, and polyrhythms. Maybe the MIDI implementation in Sonar has improved dramatically since 8.5.3. I don’t know. But 8.5.3’s MIDI implementation does not handle those changes in musical time well at all. And, based on the searches I just did on the Sonar forum for user comments on the subject of MIDI time in Sonar, it would appear that not much (if anything) has changed. This doesn’t surprise me. MIDI is MIDI.
 
So yes, when a guitarist plays through his amp sim (the non-stochastic element in his playing), capturing his performance works because (1) the end result of his stochastic event (his audio signal) is being recorded as he plays, and (2) there is no MIDI time event, to speak of, that he has to worry about with regard to the non-stochastic element (his amp sim). Applying the amp sim to the recorded event is just like applying any other effect to his recorded “clean” guitar performance. He can play any amalgam of time signatures, tempi, and polyrhythms he wants, and it will all be captured faithfully, in real time, in his audio recording. I get that. I’ve always gotten that part. I’ve done it.
 
My issue appears when a MIDI musician chooses to record live with the guitarist. Assume for the moment that the two VSTi the keyboardist chooses to use have NO stochastic sound-generating elements. He is only recording MIDI data from his controller. If whatever music they are recording uses shifting tempi, different time signatures, and/or polyrhythms, the resulting captured performance won’t work. Well, let me qualify that. You may be able to get it to sorta work if you MIDI-map the timeline (tempo changes, time signature changes, etc.) of the entire piece prior to the recording. I’ve tried that. The “feel” sucks. And, the immediacy, fluidity and spontaneous inspiration of the performance are lost.
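For anyone unfamiliar with what “MIDI-map the timeline” means in practice, here is a minimal sketch of building such a map as Standard MIDI File meta events, assuming the Python mido library (the tempos and meters are placeholders, not from any actual piece). Every shift in the piece has to be entered like this, in advance, which is exactly what kills the spontaneity:

```python
# Minimal sketch of a pre-built tempo/meter map as SMF meta events.
# Assumes the "mido" library; the tempos and meters below are placeholders.
import mido

mid = mido.MidiFile(ticks_per_beat=480)
track = mido.MidiTrack()
mid.tracks.append(track)

# Bar 1: 7/8 at 140 BPM
track.append(mido.MetaMessage('time_signature', numerator=7, denominator=8, time=0))
track.append(mido.MetaMessage('set_tempo', tempo=mido.bpm2tempo(140), time=0))

# Bar 2 (seven 8th notes = 7 * 240 ticks later): shift to 4/4 at 96 BPM
track.append(mido.MetaMessage('time_signature', numerator=4, denominator=4, time=7 * 240))
track.append(mido.MetaMessage('set_tempo', tempo=mido.bpm2tempo(96), time=0))

mid.save('tempo_map.mid')  # every tempo/meter shift must be mapped out like this
```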
 
Live audio recording has no such limitations.
 
It has been suggested that using another computer loaded up with VST/VSTi would solve the problem. Well, yes and no. As mentioned before, I actually have done this, as recently as last month. But if the musician using the laptop wants to record two different synths at the same time, he can’t do it using the first synth in standalone mode unless his audio interface has multiclient ASIO drivers. So, in my example where the keyboard player wants to use two different VSTi while performing, he has hit a brick wall. Chances are, his interface does not have multiclient ASIO drivers. In addition, the only way he can get his audio signal into the recording computer is via digital-to-analog conversion, then back through an analog-to-digital stage. Oh, just use his digital out to the recorder, you say? Okay, now we introduce clocking issues. Whoops, the digital drummer wants to join the fray with his laptop? Well, the digital clocking issue just went from bad to hell!
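To put rough numbers on the clocking problem: two converters free-running on their own crystals drift apart steadily over a take. A back-of-envelope sketch in Python (the ±50 ppm tolerance below is an assumed, typical consumer-crystal figure, not a measured one):

```python
# Back-of-envelope sketch of why unsynchronized digital devices drift apart.
# The 50 ppm clock tolerance is an assumed, typical figure for consumer gear.
RATE = 48_000          # nominal sample rate, Hz
PPM = 50               # assumed clock error, parts per million
MINUTES = 10           # length of a take

drift_samples = RATE * 60 * MINUTES * (PPM / 1_000_000)
drift_ms = drift_samples / RATE * 1000
print(f"{drift_samples:.0f} samples (~{drift_ms:.1f} ms) of drift over {MINUTES} minutes")
# -> 1440 samples (~30.0 ms) over 10 minutes: an audible timing smear,
#    and adding a third unsynced device compounds it.
```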
 
This, ladies and gentlemen, is the primary reason I want to see direct recording of VST/VSTi into Sonar. My guess is that most people reading this thread are one-person-in-the-studio, one-instrument-at-a-time recordists. I have already acknowledged that the suggested workarounds can work okay, depending on the elements involved. As an improvising musician who enjoys recording live with others, on the other hand, I find the current situation to be sorely lacking. If my computer can handle it, it makes sense I should be able to do it.
 
I would LOVE for someone to please tell me I am wrong in all of this, and show me the light.
Sorry for the book, but I’m pretty passionate about this. Thanks for reading.
2015/02/20 21:11:13
codamedia
harmony gardens
I wonder what sort of difficulties the Bakers would face to get this done. If it isn't too much trouble, it would be a nice addition.

gswitz
The biggest thing is to warn users when they accidentally configure an infinite loop and block sound output (or substitute some sound that tells the user why their audio is not being routed).

codamedia
I'm pretty sure this is the primary reason many DAWs do not implement this feature. Nobody wants the responsibility of blowing speakers and eardrums when somebody who doesn't know what they are doing gets in over their head.

swamptooth
If nobody wants the responsibility, then why do most DAWs offer it?

I'm not defending anything one way or the other... I was merely pointing out one reason why some companies may not want to implement it.
2015/02/20 22:23:31
gswitz
How about this...
 
For people who use Sonar to run sound for a band, it might be nice to record the mix as it happened during the show without having to record 5 million envelopes along with it. You just want to record the monitor and mains mix (which you do beautifully using the multi-point touch).
 
But you can't in Sonar without using loopback in your interface (hardware or software).
 
Bummer.
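For reference, software loopback outside of Sonar is possible; here is a minimal sketch assuming the Python soundcard library (loopback capture is OS-dependent: WASAPI on Windows exposes it, other backends generally do not, and the file name is a placeholder):

```python
# Minimal sketch of capturing "what the interface is playing" in software.
# Assumes the Python "soundcard" library; loopback support is OS-dependent
# (WASAPI on Windows exposes it). The output file name is a placeholder.
import numpy as np
import soundcard as sc

# Open the loopback "microphone" that mirrors the default output device
mic = sc.get_microphone(sc.default_speaker().name, include_loopback=True)

with mic.recorder(samplerate=48000) as rec:
    mix = rec.record(numframes=48000 * 5)   # grab 5 seconds of the live mix

np.save('board_mix.npy', mix)               # float array, one row per frame
```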
2015/02/21 02:34:08
Sanderxpander
Earwax, you had already convinced me that live VST audio recording can be useful, but this is not a good example. If you have shifting tempos and time signature changes, you are presumably not recording with the metronome. So anything you record will just be a linear stream, and let the bar indicators up top be damned. This is the same for MIDI and audio. You just can't quantize the MIDI, but I don't see why you'd care to, considering your original point. If I recorded one MIDI performance in full on a VST synth, and output that digitally to another computer, recording the audio simultaneously, there is no way that you'd ever be able to tell the difference during playback. They'd phase cancel completely.
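The phase-cancel claim is easy to check with a standard null test; a minimal sketch, assuming the Python soundfile and numpy libraries (the file names are placeholders):

```python
# Minimal null-test sketch: if the rendered track and the simultaneously
# captured track are truly identical, flipping one and summing leaves silence.
# Assumes the "soundfile" library; file names are placeholders.
import numpy as np
import soundfile as sf

a, sr_a = sf.read('rendered_on_daw.wav')
b, sr_b = sf.read('captured_on_laptop.wav')
assert sr_a == sr_b, "sample rates must match for a valid null test"

n = min(len(a), len(b))
residual = a[:n] - b[:n]                 # polarity-flip one side and sum
peak_db = 20 * np.log10(np.max(np.abs(residual)) + 1e-12)
print(f"null-test residual peak: {peak_db:.1f} dBFS")
# A residual near the noise floor (below roughly -90 dBFS) means the two are,
# for practical purposes, the same recording; clock drift shows up as a
# residual that grows over the length of the file.
```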
2015/02/21 02:50:29
swamptooth
Heh. You don't want to know the number of times I've recorded MIDI and forgotten to turn on Write Automation. D'oh!
2015/02/21 06:10:24
Earwax
Sanderxpander
Earwax, you had already convinced me that live VST audio recording can be useful...  If you have shifting tempos and time signature changes, you are presumably not recording with the metronome.

Why not? For polyrhythms and time signatures, you certainly need to keep steady time. For shifting tempos, it can definitely help to get the players back to the original tempo after slowing down or speeding up.
 
 
 
Edit to restore thread title.
2015/02/21 09:37:01
JoseC.
One of the reasons why I like Sonar is that I have found it keeps everything in sync better than other DAWs I've tried, better than my other DAW, which is Ableton Live. By "everything" I mean audio, external MIDI and VST instruments. I am no audio software engineer, but I imagine that taking into account plugin delay compensation, live audio recording and live MIDI recording in order to make everything sync reasonably in a time-oriented domain LIKE MUSIC IS must be a pretty formidable task. When I think about compounding those three at the same time in order to record the output of a VSTi as I play it live, I certainly understand that compromises must be made, and I must ask myself whether I want those compromises if they are going to tamper with the performance somehow, and/or with the general synchronization of the project. My personal answer is NO, and thus I am willing to accept a certain routing rigidity as a trade-off for better sync. More so since I have ways to work around this, be it routing audio back via the audio card mixer, an external hardware mixer or a loopback cable and compensating only for that particular track's delay, or even using an external recorder for that particular performance and importing it back later with the help of a recorded click track or whatever.

Maybe I am wrong, but I notice that Sonar keeps a separate buffer for MIDI, while other DAWs (Reaper, Live) slave all MIDI timing to the audio buffer, and that creates inconsistencies that are a pain to manage. I am all for having more routing flexibility, but I would like to know what trade-offs are there, if any.
2015/02/21 10:16:13
Anderton
Earwax
My issue appears when a MIDI musician chooses to record live with the guitarist. Assume for the moment that the two VSTi the keyboardist chooses to use have NO stochastic sound-generating elements. He is only recording MIDI data from his controller. If whatever music they are recording uses shifting tempi, different time signatures, and/or polyrhythms, the resulting captured performance won’t work. Well, let me qualify that. You may be able to get it to sorta work if you MIDI-map the timeline (tempo changes, time signature changes, etc.) of the entire piece prior to the recording. I’ve tried that. The “feel” sucks. And, the immediacy, fluidity and spontaneous inspiration of the performance are lost.

 
If you recorded audio, whatever you recorded would be "frozen" and you would not be able to alter tempo changes or time signature changes. The feel you captured with audio would be the feel you captured and there is nothing you can do about it, except for intricate time-stretching, cutting, pasting, etc.
 
Just like audio, if you record MIDI in real time and it has the right feel, it will play back with the right feel. If you record MIDI in real time with the wrong feel, it will play back with the wrong feel. You do not have to conform MIDI to a tempo or grid, you can record it simply as free-flowing data.
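A minimal sketch of what "free-flowing data" means in practice, assuming the Python mido library and whatever input port your controller exposes (no bars, no grid, just events against a real-time clock):

```python
# Minimal sketch of capturing MIDI as free-flowing data against a wall clock,
# not a tempo grid. Assumes the "mido" library and a connected controller on
# the default input port.
import time
import mido

events = []
t0 = time.monotonic()
with mido.open_input() as port:          # default MIDI input port
    for msg in port:                     # blocks; stop with Ctrl+C
        events.append((time.monotonic() - t0, msg))
        # Each event is stored with its offset in seconds from the start.
        # No bars, beats, or time signatures are involved: the "feel" on
        # playback is exactly whatever was played.
```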
 
Live audio recording has no such limitations.

 
Neither does MIDI, if all you want to do is capture a live performance of someone playing an instrument with MIDI.
 
It has been suggested that using another computer loaded up with VST/VSTi would solve the problem. Well, yes and no. As mentioned before, I actually have done this, as recently as last month. But, if the musician using the laptop wants to record two different synths at the same time, he can’t do it using the first synth in standalone mode unless his audio interface has multiclient ASIO drivers.

 
You don't have to use stand-alone. You could host multiple instruments within a program like SONAR or anything else capable of hosting VSTs, or use Reason's instruments and grab their separate outputs. Or if you have to use multiple stand-alone instruments, you could use Windows audio drivers, or Mac laptops with Core Audio. Apple's MainStage is designed specifically to host instruments for live performance because Logic, like SONAR, is designed for production and not live performance.
 
I have already acknowledged that the suggested workarounds can work okay, depending on the elements involved. As an improvising musician who enjoys recording live with others, on the other hand, I find the current situation to be sorely lacking. If my computer can handle it, it makes sense I should be able to do it.


What I italicized in your quote is where "the rubber meets the road." I understand what you want, and why you want it. But I just don't think today's computers running sophisticated DAWs and power-hungry VSTis are capable of doing what you want with sufficiently low latency to match the experience of recording instruments into something like a stand-alone hard disk recorder. Sure, you could record the audio from a few instruments at a time into the computer, but if you load the computer up with VSTis, all bets are off. You could record at 96 or 192 kHz to reduce latency, but then you're limited in the number of available real-time audio streams. As I mentioned, Thunderbolt II might be the answer, but its widespread adoption is still a ways off.
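For concreteness, here is the buffer-size arithmetic behind that latency trade-off (this counts one-way buffer latency only; real round-trip figures add converter and driver overhead on top):

```python
# Rough numbers behind the latency concern: one-way buffer latency in ms.
# Real round-trip latency adds converter and driver overhead on top of this.
for rate in (44_100, 96_000, 192_000):
    for frames in (64, 128, 256):
        print(f"{frames:>4} frames @ {rate} Hz = {frames / rate * 1000:5.2f} ms one way")
# 256 frames at 44.1 kHz is ~5.8 ms each way; the same buffer at 192 kHz is
# ~1.3 ms, which is why higher rates reduce latency at the cost of CPU load
# and the number of streams the system can sustain.
```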
 
If you really want to do this and don't want to use a stand-alone recorder, I would suggest Ableton Live. Probably even a "lite" version would do what you need. Unlike SONAR and other DAWs, which are designed for production, Live is designed and optimized specifically for live performance. There are lots of things it can't do (like comping), but it's a very agile program and it's what I use for live performance because it does that better than any other computer-based program. Programs like SONAR and Pro Tools, regardless of whether or not they can physically record an audio output with the computer, were never designed for live performance. You'll still have latency issues with Live, but if Live can't do what you want, then there's probably no computer-based solution at this time that will do what you want.
 
Think of it this way: You have a van so that you can take your family places, pick up groceries, take vacations, transport your gear to the gig, etc. If you want to take curves at 70 mph on the Amalfi Drive, you need a sports car.
2015/02/21 10:43:16
BobF
Paul P
I just communicated with Vincent Burel of the above software and he recommends using his Voicemeeter Audio Device Mixer for this application.

To me, that looks like overkill for just a loopback...

I'm going to try this one out as a USB device aggregator...
2015/02/21 13:10:09
Earwax
JoseC.
One of the reasons why I like Sonar is that I have found it keeps everything in sync better than other DAWs I've tried... I am all for having more routing flexibility, but I would like to know what trade-offs are there, if any.

Jose, believe it or not, I think everyone involved in this thread would agree with you. There is no way that I, or, I would venture to guess, anyone else, would want Sonar's programmers to reduce performance or syncing capabilities just so VST/VSTi audio output can be recorded live. But I'm wondering if you may have hit upon at least one reason that other DAWs have the capability to record VST/VSTi audio live while Sonar doesn't. Maybe slaving MIDI timing to the audio buffer makes it easier to accomplish the task of VST/VSTi live audio recording. I don't know. It would be interesting indeed to find out what the trade-offs are.