• SONAR
  • Really incredible that we still can't record a soft synth's output in real time (p.7)
2015/07/07 12:19:06
Beagle
Anderton
pwalpwal
 
Ted, Tim: some synths add randomised elements that are different on each playback, so there is a use case. But if your premise were generally true, why do other hosts offer this (i.e. recording the synth's output into another audio track)? Cakewalk's previous explanation was that it prevents rubbish users from creating feedback loops...



Although I understand this is something some people want, it doesn't get in the way of my working with soft synths for the following reasons.
 
1. If you want to record real-time control tweaks, in most synths any parameter you can tweak in real time is recordable as MIDI or VST automation.
2. If the playback truly does something random, you won't know whether you like it until you hear it. But that's equally true if you render and listen back; the only difference is whether you hear the change being generated live, or generated live and then rendered. So the only advantage of real-time recording is that you could decide whether a part was a "keeper" right after it played, instead of rendering first and then evaluating.
3. You can always use the external insert to do a physical loopback. It requires going out of the box, but it works.
4. Jack Audio supposedly has a 64-bit version under development so you can do what Soundflower does on the Mac, which is more sophisticated than simply recording an instrument out.




Most soundcards these days ship with software that lets you do this internally. For example, with MOTU's units you simply choose RETURN as your input and you will record the MOTU's output channels. You'd need to silence the other tracks, which could be a problem if you're working to the internal metronome or recording against playback of other tracks; but those could be routed to alternate outputs, either in the mixer software or in hardware. (That's equally true of a physical loopback: to loop back ONLY the synth you need alternate outputs somewhere, so you need a soundcard with multiple outputs.)
2015/07/07 12:29:51
Adq
It may sound weird, but there is an opinion that this is one of the things that makes the difference between professional and hobby-oriented software: for hobby-oriented software it is more important to avoid scaring inexperienced users than to fulfil professional needs.
If there is some truth in that, I'd prefer a true professional version of Sonar, with feedback loops, a custom Smart/Draw tool, fully customizable colors and views, and more options besides.
Here is a popular video about feedback loops; I found it interesting, and maybe others will too if they haven't seen it yet:
http://www.youtube.com/watch?v=MUb0Ln5GOCU
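As an aside, the feedback-loop concern the hosts cite amounts to cycle detection in the routing graph. This is purely an illustration (not how SONAR is actually implemented, and the node names are invented): a host that lets you route any track or bus anywhere can refuse a connection by checking whether it would close a loop, which a depth-first search finds easily.

```python
# Hypothetical illustration: detecting a feedback loop in an audio
# routing graph with a depth-first search. Node names are invented.
def has_feedback_loop(routing):
    """routing maps each node to the list of nodes it sends audio to."""
    visiting, done = set(), set()

    def visit(node):
        if node in done:
            return False
        if node in visiting:
            return True  # returned to a node on the current signal path
        visiting.add(node)
        if any(visit(dest) for dest in routing.get(node, ())):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(visit(node) for node in routing)

# A synth feeding a bus is fine; routing the bus back creates a loop.
safe = {"synth": ["bus_a"], "bus_a": ["master"]}
loop = {"synth": ["bus_a"], "bus_a": ["synth"]}
print(has_feedback_loop(safe), has_feedback_loop(loop))
```

A "professional" version could run a check like this and simply warn instead of forbidding the routing outright.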
 
2015/07/07 12:32:58
azslow3
There are other problems with that approach.
 
At the moment, Sonar has a pretty clean data-flow model:
There is incoming MIDI and audio data. That input can be recorded to tracks; at the same time it can be sent to real-time FX and synths, and the result is collected and sent to buses, which can be cascaded however you want.
 
In that model, synchronizing everything is a transparent job: taking delays, buffers, processing time and look-ahead into account, all required compensations can be calculated predictably.
 
Let's say you input MIDI, process it with some synth, record its output, send that output to an FX, send the result to a bus, send that back to another track (yet another thread with "great workarounds"), process it again and record yet another result.
 
No loops, no feedback, you have routed everything correctly. But... how should all that recorded information be synchronized? I see only one possible answer: "we do not care; you get what you asked for and you are on your own...". If you put the synth output in sync with the MIDI (compensating for processing time), you can no longer play them both together. If you do not compensate, it is out of sync with the other audio (which is compensated). Up to a point, (some) people will accept the result and compensate manually when needed. But I guess there would be hundreds of threads with complaints.
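The timing dilemma described above can be sketched with back-of-the-envelope arithmetic. All latency figures here are invented example values, not real SONAR or plugin numbers; the point is only that the two alignment choices are mutually exclusive.

```python
# Illustrative only: the alignment dilemma when recording a synth's
# output in real time. All latency figures are invented examples.
SAMPLE_RATE = 44100

def ms(samples):
    """Convert a sample count to milliseconds at SAMPLE_RATE."""
    return samples / SAMPLE_RATE * 1000.0

audio_buffer = 256   # hypothetical driver buffer, samples
synth_latency = 64   # hypothetical synth processing delay, samples
fx_lookahead = 512   # hypothetical look-ahead FX delay, samples

# Option A: compensate, so the recorded clip lines up with the MIDI
# that produced it -- but then it no longer lines up with the already
# delay-compensated playback of the other audio tracks.
# Option B: record uncompensated; the clip lands late, relative to the
# MIDI, by the sum of the chain's delays.
chain_delay = audio_buffer + synth_latency + fx_lookahead
print(f"uncompensated offset: {chain_delay} samples ({ms(chain_delay):.1f} ms)")
```

Either choice leaves one of the two timelines wrong by `chain_delay`, which is the "you are on your own" situation described above.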
 
 
2015/07/07 12:36:03
Adq
Beagle
 
Most soundcards these days ship with software that lets you do this internally. For example, with MOTU's units you simply choose RETURN as your input and you will record the MOTU's output channels. You'd need to silence the other tracks, which could be a problem if you're working to the internal metronome or recording against playback of other tracks; but those could be routed to alternate outputs, either in the mixer software or in hardware. (That's equally true of a physical loopback: to loop back ONLY the synth you need alternate outputs somewhere, so you need a soundcard with multiple outputs.)



I did it with a Focusrite, but it adds annoying latency; maybe I didn't find the best way to do it...
But obviously you can't beat doing it inside the DAW.
2015/07/07 12:48:13
Beagle
Adq
Beagle
 
Most soundcards these days ship with software that lets you do this internally. For example, with MOTU's units you simply choose RETURN as your input and you will record the MOTU's output channels. You'd need to silence the other tracks, which could be a problem if you're working to the internal metronome or recording against playback of other tracks; but those could be routed to alternate outputs, either in the mixer software or in hardware. (That's equally true of a physical loopback: to loop back ONLY the synth you need alternate outputs somewhere, so you need a soundcard with multiple outputs.)



I did it with a Focusrite, but it adds annoying latency; maybe I didn't find the best way to do it...
But obviously you can't beat doing it inside the DAW.


Yes, your latency would have to be set to the lowest possible setting in order to do this without having issues.
 
I'm not advocating this as a better alternative to doing it inside the DAW; I'm simply offering advice on how to do it with Sonar as it is currently programmed.
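The advice about lowering the latency setting can be made concrete with a rough estimate. This is a simplification under stated assumptions: it counts only one output buffer plus one input buffer per round trip, while real interfaces add converter and driver safety-buffer overhead on top.

```python
# Rough round-trip latency estimate for a hardware or soundcard-software
# loopback: one output buffer plus one input buffer at the given sample
# rate. Real-world figures are higher (converters, driver safety buffers).
def loopback_latency_ms(buffer_size, sample_rate, buffers=2):
    """Estimate loopback latency in milliseconds."""
    return buffer_size * buffers / sample_rate * 1000.0

for size in (64, 256, 1024):
    print(f"{size:5d} samples -> {loopback_latency_ms(size, 44100):.1f} ms")
```

At 44.1 kHz, dropping from a 1024-sample buffer to 64 samples takes the estimate from tens of milliseconds down to a few, which is why the setting matters for this workaround.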
2015/07/07 14:15:33
bvideo
Now there's a new consideration about recording the looped-back audio or even recording the digital output from a VSTi: if you want the new upsampling, real-time recording or bouncing won't do it, as far as I understand. Only on freeze/(fast)bounce/export.
2015/07/07 15:54:51
Bristol_Jonesey
Very good point Bill.
 
I fully agree with Craig regarding the "randomization" factor.
 
How do you know you'll like what you record until you play it back? If you don't, you'll have to record it again.
 
How is this any different to freezing a synth (which in itself is a lot quicker than recording in real time) and auditioning the result (complete with randomization)?
2015/07/07 16:00:31
Adq
1. Record+Listen -> Good -> Ready
2. Freeze -> Listen -> Good -> Ready
1 is faster.
2015/07/07 16:08:41
Teds_Studio
Bristol_Jonesey
Very good point Bill.
 
I fully agree with Craig regarding the "randomization" factor.
 
How do you know you'll like what you record until you play it back? If you don't, you'll have to record it again.
 
How is this any different to freezing a synth (which in itself is a lot quicker than recording in real time) and auditioning the result (complete with randomization)?




This is part of my thinking too. If there are random nuances in how the MIDI translates to the soft synth, and since the keyboard is actually playing the soft synth via MIDI, it would seem to me that if you record the MIDI data and then play it back through the soft synth, you could play it back multiple times and still get the "different nuances" that have been talked about. I have never noticed these nuances myself, but then again, I'm not a true keyboard player. I can play keys, but I'm more of a guitar player.
 
I can understand how an analog synth could sound different every time you play the part live. But with a digital keyboard that is just transmitting MIDI to a sound generator (hardware synth or soft synth), it's hard for me to comprehend how the performance could differ between playing it with your hands and playing back the recorded MIDI; I would think it would be the same data either way. Someone please correct me if I'm wrong.
2015/07/07 16:13:19
Bristol_Jonesey
Rubbish
 
1: Record. (how long? 3 minutes? 10 minutes?)
2: Freeze. A freeze/unfreeze takes seconds