• SONAR
  • X3 Producer: Why is it so difficult to record audio from a soft synth? (p.6)
2015/02/17 19:52:27
Earwax
It seems as though quite a few people who don't see the difference/don't understand the desire for "live" VST/VSTi direct to audio recording may be approaching the recording process as a "one person-one instrument at a time" event. If so, fair enough. Let me present another type of scenario.
 
A five-piece band walks into a "normal" recording studio. The band consists of two guitarists, a keyboardist, a bassist, and a drummer. The band is well rehearsed, and wants to record "live". They've played out together enough to know that is the best method to capture their performances. The instruments and amps are set up, the microphones are set up, the effects for each person are set up. All effects, amps, and instruments are external - no VST/VSTi. The keyboardist has a 61-key controller, an 88-key controller, and some pedals controlling a bunch of hardware synths and effects. The two guitarists and the bass guitarist all have a bunch of effects, including pedals. The drummer is playing an acoustic set, with mics "everywhere", and with each kit piece being recorded on a track of its own. Everything is plugged into a beautiful hardware mixing console. Only audio is recorded.
 
The recording device is a computer running Sonar.
 
Sonar has absolutely no problem recording this band in real time - a fact the band loves. The recording is a success.
 
A second band comes in - same instrumentation, except the keys, drums, and all effects are virtual - VST/VSTi hosted in Sonar. Again, the drummer wants a separate track for each piece in his BFD3 kit, with some VST effects from different manufacturers to give him the sound that he wants. The guitarists are using amp sims and other VST effects for their sounds. The keyboardist is playing the VAZ Modular synth from the 61-key controller, and layering Ivory and Vienna Strings on the 88-key controller. He is using various VST effects. The band wants to record audio - "live".
 
The recording device is a computer running Sonar.
 
Will Sonar have absolutely no problem recording this band "live", just like the first band? Will the band have the same "in the moment" experience playing and recording at the same time?
 
Notice I haven't even gotten into any effects changes/tweaks or synth tweaks/patch alterations happening while either band is recording.
 
This scenario could also hold true for a duo wanting to record live using VST/VSTi.
 
Since too many people can't seem to understand why a single person would want this, I won't go there again.
 
2015/02/17 20:01:09
Earwax
mixmkr
Anderton
I think the Hendrix playing live example isn't that germane...to record that, you'd just stick a mic in front of the amp. You could achieve the same thing with SONAR by playing a hardware synthesizer's audio output into a track.

The Hendrix example [to me] is relevant, because you couldn't play a "clean" guitar into Sonar, pull up TH2 and re-amp it and get the same results as the captured/recorded [live] audio performance.  Most notable would be the feedback loop Hendrix created thru his amp and guitar pickups and how it was controlled AT the time of recording.  You LOSE that interaction when re-amping.  (or playing back a MIDI sequence into a VSTi)

And yes, you could run the VSTi out thru an amp and mic it...(and I believe that was previously suggested), but now you've added quite a lot into the signal chain to do that, with the amp/mic combo "coloring" the sound the most.
 
The preference is to keep it all in the computer and *pure*.  As you've suggested, and I also agree (and have done), the alternative is to physically re-patch and mute where needed.  I think pretty much everyone agrees this is a method, but in some situations it apparently isn't practical for some.  (no patch bay, audio interface rack mounted...etc, etc)

What this boils down to is that there are two sides to this: 1) those who believe MIDI can NOT capture all the gestures of a live performance and thus will NOT duplicate what was done during the initial performance (for various reasons, e.g. the non-repetitive nature of some VSTi and the interaction that can have with the PLAYING), and 2) those who think MIDI can do all that.


He gets it.............. I would only add that those of us who would like to see live VST/VSTi direct recording in Sonar already see the value of MIDI recording. Different situations (and moods) call for different methods.
 
2015/02/17 20:21:07
rabeach
When using the random generators in Dimension Pro's MIDI matrix as a source to control the available destinations/parameters, I'm interacting live, while playing my wind/keyboard MIDI controller, with the random generators in a way that bouncing to audio does not preserve. I have used both workarounds, but it would be nice not to have to.
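A toy sketch of the point rabeach is making (this is a hypothetical simplification, not Dimension Pro's actual engine): if a patch's parameters are driven by a free-running random source, replaying the same recorded MIDI can never reproduce the audio of the original pass, because the random modulation comes out differently every time.

```python
import random

def render_note(duration_samples: int, rng: random.Random) -> list[float]:
    """Toy 'synth voice': a constant tone whose gain is stepped by a
    free-running random generator, standing in for a random source
    routed through a synth's modulation matrix."""
    out = []
    gain = 0.5
    for n in range(duration_samples):
        if n % 100 == 0:          # random source updates every 100 samples
            gain = rng.random()   # unseeded, as in a live synth: not reproducible
        out.append(gain * 1.0)    # 1.0 stands in for the oscillator sample
    return out

# Two playbacks of the *same* recorded MIDI note. The random modulation
# differs each time, so the rendered audio differs -- only an audio
# capture of the original pass preserves what was actually heard.
take_live     = render_note(1000, random.Random())  # what the player heard
take_playback = render_note(1000, random.Random())  # MIDI replayed later
print(take_live == take_playback)  # almost certainly False
```

This is also why the player's *reaction* to those random events (the "interaction" rabeach describes) is lost: on playback the events he reacted to are no longer there.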
2015/02/17 21:15:45
Anderton
mixmkr
Anderton
I think the Hendrix playing live example isn't that germane...to record that, you'd just stick a mic in front of the amp. You could achieve the same thing with SONAR by playing a hardware synthesizer's audio output into a track.

The Hendrix example [to me] is relevant, because you couldn't play a "clean" guitar into Sonar, pull up TH2 and re-amp it and get the same results as the captured/recorded [live] audio performance.  Most notable would be the feedback loop Hendrix created thru his amp and guitar pickups and how it was controlled AT the time of recording.  You LOSE that interaction when re-amping.  (or playing back a MIDI sequence into a VSTi)


I think you're missing my point. As I said, it's about the experience (no pun intended, of course!). Comparing Jimi Hendrix playing live to using amp sims is not the point I was trying to make. I was trying to make the point that what he did was analogous to playing through a hardware keyboard, in real time, which has a control surface you can manipulate. And that's still an option, and as I said, very possibly a preferable one compared to trying to make soft synths, a mouse, and a QWERTY keyboard do the same thing as a Prophet-12. Even if you could record an instrument output live into a track, that to me is not a "live" experience in the sense of playing a dedicated hardware keyboard...or a guitar through an amp.
 
The other place the Hendrix analogy breaks down is you CAN play through an amp and interact with it, get feedback, etc. and mic it, while sending a direct feed to the computer. So the computer can incorporate the string sustain etc. for re-amping, although the sim will likely not have the same level of nuances as an amp playing in a room (unless of course it's one of my sims, LOL). Sticking a keyboard through an amp is nowhere near the same experience, because the amp and guitar literally "talk" to each other, with each affecting what the other does. A hardware keyboard will not react to an amp, although the player might.
 
Playing guitar and playing keyboard are so different that I don't think it's possible to draw analogies. As far as I can tell this discussion has little to do with instruments, but about how one feels playing live and knowing it's live and will never happen the same way again, as opposed to recording something and editing it later. It's not a question of "getting it" or not "getting it," any more than someone who watches a movie can't "get" live theater. It's a different experience. Personally, both have their place.
 
This thread reminds me of what I tell people who are interested in Ableton Live - don't even bother unless you buy a hardware controller for it. Or, can you imagine running Traktor without a controller? Both are about the live experience.
2015/02/17 22:00:12
Earwax
Craig,
 
All that being said, can Sonar handle the two recording situations I cited above with equanimity? In other words, can Sonar record all of the VST/VSTi outputs (maybe some 30 or so tracks) at the same time, just like it recorded the audio outputs (again, probably some thirty or so tracks) from the mixing desk? If it can't, then it doesn't do what I (and apparently others) want. The playing and recording of guitars, keyboards, and drums are indeed exactly the same in this instance. It's about musicians playing and recording amp sims, VSTi, and other VST effects just like they would their hardware counterparts. If Sonar can't take the outputs from 8 or 9 VST/VSTi and record those outputs onto multiple tracks at the same time as it produces them, then it can't record VST/VSTi live. So the musical experience of the second band is different from the musical experience shared by the first band. Hence the "getting it" implication.
 
To me, the implementation of this capability would go far toward making Sonar a true "studio-in-a-box" piece of software.
2015/02/17 22:11:30
swamptooth
Randomness generators are one area where this approach is ideal.  I think nobody has mentioned synths that use Open Sound Control, like Reaktor, to a very deep level.  These are not MIDI messages, but they do control events and parameters.  There are some OSC-to-MIDI translators out there, but the depth of recorded data can be lacking.  I would like to encourage anyone with Reaktor to play with the Newscool ensemble.  Note you cannot record any drawings you make interactively via MIDI, so those variations will be lost in the wind without realtime recording.
Ideally, I would like to see implementation of soft synth or other track routing to a recordable track.  I would also like to see Sonar implement release-velocity recording and editing, because all of their synths support the parameter - which is great.  It works well in the standalone versions of the Z3TA series, Dimension Pro, and Rapture, but when those VSTs are inserted into Sonar that functionality goes out the door.  (*Oops, tangent*.)
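A minimal sketch of the resolution gap swamptooth is describing: a standard MIDI controller message carries a 7-bit value (0-127), while OSC arguments are typically full floats, so a fine continuous gesture that survives an OSC capture can collapse into a single step when forced through a MIDI translator. (The quantizer below is an illustrative toy, not any particular translator's code.)

```python
def to_midi_cc(value: float) -> int:
    """Quantize a normalized 0.0-1.0 gesture to a 7-bit MIDI CC value."""
    return max(0, min(127, round(value * 127)))

def from_midi_cc(cc: int) -> float:
    """Map a 7-bit CC value back to the normalized range."""
    return cc / 127.0

# A subtle filter sweep, captured as OSC-style floats vs. round-tripped
# through 7-bit MIDI: all four sub-step movements land on the same CC step.
gesture  = [0.5001, 0.5005, 0.5023, 0.5051]
via_midi = [from_midi_cc(to_midi_cc(v)) for v in gesture]
print(via_midi)  # every value collapses to the same 7-bit step
```

14-bit CC pairs and MPE narrow this gap, but only for parameters the synth actually exposes that way, which is swamptooth's point about depth of recorded data.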
 
2015/02/17 22:11:40
perfectprint
I need this!!!
 
My trusty E-MU 0404 with its PatchMix software is on the fritz, and I can no longer route audio back into Sonar to record.
My main issue is Sonar's freeze and bounce functions screwing with certain parameters in VSTi sequencers (specifically Reaktor ones), rendering the audio completely different AND making settings irretrievable.
There should be no need for a workaround anyway. This should be a regular feature.
 
keep voting: http://forum.cakewalk.com...ecording-m3099239.aspx
2015/02/18 00:22:13
Anderton
Earwax
 
Craig,
 
All that being said, can Sonar handle the two recording situations I cited above with equanimity?



Well, there are really two separate issues. The first is technical, and probably renders the second moot: If you're recording 30 tracks of heavy-duty VSTis into SONAR, the hit on the CPU is going to be significant, so the latency will be as well. That alone will be enough to prevent a live experience for the musicians regardless of whether you're recording audio or the gestures that cause the instruments to create that audio. 
 
Of course, computers keep getting faster and who knows, if Thunderbolt II becomes commonplace we might look back with amusement at the days when - imagine that! - musicians would hit a key on a keyboard and have the sound come out a dozen or more milliseconds later. So live recording using CPU-intensive, computer-based instrument setups seems pretty much like a non-starter anyway until computers and interfaces get faster. 
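The "dozen or more milliseconds" figure follows from simple buffer arithmetic: the input and output sides each wait at least one audio buffer, and heavier VSTi loads force larger buffers. A back-of-envelope sketch (this ignores converter and driver overhead, so real round-trip figures run higher):

```python
def round_trip_latency_ms(buffer_samples: int, sample_rate: int) -> float:
    """Lower bound on round-trip latency: one buffer of delay on the
    input side plus one on the output side, in milliseconds."""
    one_way_ms = buffer_samples / sample_rate * 1000.0
    return 2 * one_way_ms

# Typical ASIO buffer sizes at 44.1 kHz:
for buf in (64, 256, 1024):
    print(f"{buf:>5} samples -> {round_trip_latency_ms(buf, 44100):.1f} ms")
```

At 44.1 kHz, a 64-sample buffer stays under 3 ms round trip, but the 1024-sample buffer a CPU-saturated session may require is already past 46 ms, which is unplayable for most performers.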
 
But consider the following. Assume someone plays a keyboard that is not subject to random "happy accidents," so changing what one plays based on these random events is a non-issue. And let's pretend audible latency doesn't exist, because someday it won't.
 
If I set up a VSTi stand-alone in a laptop (no recording software), send its output to a speaker, and play it from a keyboard controller with a bunch of useful controls as if it was a hardware instrument, then you have your "live experience of playing keyboard" except of course for the latency.
 
If you set up that VSTi in a DAW (SONAR or whatever), enable input echo, send its output to a speaker, and play it from a keyboard controller with a bunch of useful controls as if it was a hardware instrument, then you again have the same "live experience of playing keyboard." Yes?
 
Now if you take the above scenario with the only difference being that someone enabled "record" without your knowing it before you started playing, you would still have the "live experience of playing keyboard." However now if you chose to, you could play back your part.
 
In this scenario, again if you ignore latency and random happy accidents, on playback shouldn't you hear exactly what you played live?
 
This doesn't obviate the need for recording VSTi outs for those situations where there are variations generated within the instruments or by happy accidents, so it's not an argument against including that feature - obviously, some people want it and have valid reasons for that. What I don't understand is why, in the scenarios given above, recording the gestures used to create a sound is somehow different from recording the sound itself.
 
 
2015/02/18 03:25:27
Earwax
 
Anderton
Earwax
 
Craig,
 
All that being said, can Sonar handle the two recording situations I cited above with equanimity?

Well, there are really two separate issues. The first is technical, and probably renders the second moot: If you're recording 30 tracks of heavy-duty VSTis into SONAR, the hit on the CPU is going to be significant, so the latency will be as well. That alone will be enough to prevent a live experience for the musicians regardless of whether you're recording audio or the gestures that cause the instruments to create that audio. 
 

 
Okay – how about 8 tracks? My point was, it appears Sonar can’t handle the live simultaneous playback and recording of multiple VST/VSTi.
 
Anderton
Earwax
 

If I set up a VSTi stand-alone in a laptop (no recording software), send its output to a speaker, and play it from a keyboard controller with a bunch of useful controls as if it was a hardware instrument, then you have your "live experience of playing keyboard" except of course for the latency.

I actually just did this a couple of months ago, recording BFD2 drum tracks from my laptop “live” into a recorder for a CD project. I used an Alternate Mode Trapkat (not a keyboard – live drumming), triggering BFD2. I used my sound interface’s analog stereo out-to-recorder-in for several reasons. Not what I wanted to do, but there you go.
Anderton
Earwax
 

If you set up that VSTi in a DAW (SONAR or whatever), enable input echo, send its output to a speaker, and play it from a keyboard controller with a bunch of useful controls as if it was a hardware instrument, then you again have the same "live experience of playing keyboard." Yes?

Maybe, if: (a) as you said, everything about the VSTi is MIDI-controllable; (b) you are only controlling that one VSTi; (c) nobody else is playing with you; and (d) the MIDI track actually does capture all of the performance nuances that you wanted it to (no MIDI timing screw-ups, for example). If any of those criteria aren't met, so much for capturing the great live performance. As soon as one of the guitarists enters the picture, for example, the "live" recording situation breaks down, right?
 
Anderton
Earwax
 

Now if you take the above scenario with the only difference being that someone enabled "record" without your knowing it before you started playing, you would still have the "live experience of playing keyboard." However now if you chose to, you could play back your part.
 
In this scenario, again if you ignore latency and random happy accidents, on playback shouldn't you hear exactly what you played live?

So what you are saying is that, in the instance of recording BFD2 drums that I cited above, if the recorder had been Sonar, and BFD2 was sitting in Sonar’s instrument rack as a VSTi (instead of on my laptop), and I had wanted to record 8 tracks of drums, effected by a couple of Sonar VST, that I could have recorded that performance directly into Sonar from BFD2? And, if a bass player had wanted to record a track using a bass amp sim in Sonar’s effects rack at the same time, we could have done that? Or say, for example, if I want to record both sides of my Chapman Stick live into Sonar using one instance of one of your amp sims and one instance of another company’s amp sim (Sorry…) plus a couple of VST effects, I can do that?
 
Anderton
Earwax
 
 

This doesn't obviate the need for recording VSTi outs for those situations where there are variations generated within the instruments or by happy accidents, so it's not an argument against including that feature - obviously, some people want it and have valid reasons for that. What I don't understand is why, in the scenarios given above, recording the gestures used to create a sound is somehow different from recording the sound itself.

Because the guitarists in the bands cited in my examples are not recording gestures. They are recording sounds. They are recording sounds at the same time as the keyboardist and drummer are recording their sounds and some (depending on the VSTi and what performance tweaks they are using) of their gestures.  The live performance breaks down even in duo or trio recording without the ability to record VST/VSTi directly into Sonar.  
 
As I said, for the one-guy in the studio-one-keyboard-one-completely MIDI controllable VSTi-with no other VST effects kind of situation, I completely understand where some people are coming from. Fortunately or unfortunately, I very often don't work that way. It would be nice to have a DAW where, no matter which way I chose to work, my only concern (in this context) would be CPU horsepower and RAM.
2015/02/18 03:37:12
Earwax
One more thought. I can't imagine how much fun editing the MIDI track(s) would be if the piece of music being recorded involved multiple time signatures, tempo changes, etc.