There are other problems with that approach.
At the moment, Sonar has a pretty clean data flow model:
There is input MIDI and audio data. This input data can be recorded to tracks. At the same time, it can be sent to real-time FXes and synths; the result is collected and sent to buses, which can be cascaded as you want.
In that model, synchronizing everything is a transparent job. Taking delays, buffers, processing time, and look-ahead information into account, it is possible to calculate all required compensations predictably well.
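To make that concrete, here is a minimal sketch (not Sonar's actual code; the node structure and names are my own) of how compensation can be derived when the graph is a plain track -> FX -> bus cascade:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Node {
    int64_t ownLatency = 0;     // plugin-reported latency plus look-ahead, in samples
    std::vector<Node*> inputs;  // upstream nodes feeding this one
};

// Total latency of the longest upstream path into 'node'.
int64_t pathLatency(const Node& node) {
    int64_t worstInput = 0;
    for (const Node* in : node.inputs)
        worstInput = std::max(worstInput, pathLatency(*in));
    return worstInput + node.ownLatency;
}

// Delay to insert on 'node' so it lines up with the slowest sibling path.
int64_t compensation(const Node& node, int64_t worstPath) {
    return worstPath - pathLatency(node);
}

int main() {
    // Two tracks feeding one bus: one dry, one through a 512-sample-latency FX.
    Node dryTrack, fxTrack, bus;
    fxTrack.ownLatency = 512;                         // e.g. a look-ahead limiter
    bus.inputs = { &dryTrack, &fxTrack };

    const int64_t worstInput = pathLatency(fxTrack);  // slowest path into the bus
    std::printf("delay the dry track by %lld samples\n",
                (long long)compensation(dryTrack, worstInput));  // prints 512
    return 0;
}
```

As long as the graph is a one-way cascade like this, every path has a well-defined worst-case latency and the engine can compensate automatically.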
Let's say you input MIDI, which you process with some Synth, record its output, send this output to an FX, send the result to some bus, send it back to some track (yet another thread with "great workarounds"), process it again and record yet another result.
No loops, no feedback, you have routed everything correctly. But... how should all that recorded information be synchronized? I see only one possible answer: "we do not care, you get what you asked for and you are on your own...". Because if you put the Synth output in sync with the MIDI (compensate for processing time), you are not able to "play" them both. If you do not compensate, it is out of sync with the other audio (which is compensated). Up to some level, (some) people will accept the result (and compensate the sync manually when needed). But I guess there will be hundreds of threads with complaints.
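With made-up numbers (the latency value and variable names are purely illustrative), the dilemma looks like this:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    const int64_t midiNotePos  = 0;    // where the note sits on the MIDI track
    const int64_t chainLatency = 512;  // combined Synth + FX processing / look-ahead delay
    const int64_t audioArrival = midiNotePos + chainLatency;  // where the audio actually lands

    // Option A: shift the recorded clip back by the chain latency so it sits
    // at the MIDI position. It now matches the MIDI on the timeline, but if you
    // also play that MIDI through the live Synth path, the live output arrives
    // chainLatency later than the clip, so you cannot "play" them both together.
    const int64_t optionA = audioArrival - chainLatency;

    // Option B: keep the clip where the audio actually arrived. Now it plays
    // chainLatency late against the rest of the (compensated) audio.
    const int64_t optionB = audioArrival;

    std::printf("MIDI at %lld, option A clip at %lld, option B clip at %lld\n",
                (long long)midiNotePos, (long long)optionA, (long long)optionB);
    return 0;
}
```

Whichever position the recorder picks, one of the two relationships (clip vs. its source MIDI, or clip vs. the rest of the mix) is broken, and that is exactly the choice users will be left to fix by hand.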