I believe I may have a simpler solution.
Firstly, if you're making Sonar responsible for handling synchronization and timing as the master clock (internal / audio), then anytime you change your sample rate, Sonar will actually be providing SRC (sample rate conversion) for playback. You may hear some aliasing artifacts depending on which sample rates you're converting between, but the pitch and tempo of the playback will remain the same.
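To make that distinction concrete, here's a rough sketch of what a sample rate converter does, using naive linear interpolation (real converters, including whatever Sonar uses internally, apply proper filtering to avoid the aliasing I mentioned). The point is that SRC changes the sample count so the duration, and therefore pitch and tempo, stay put:

```python
import numpy as np

def naive_src(samples, src_rate, dst_rate):
    """Very rough SRC via linear interpolation - illustrative only.
    A real converter filters properly to avoid aliasing artifacts."""
    duration = len(samples) / src_rate            # seconds of audio
    n_out = int(round(duration * dst_rate))       # new sample count
    old_t = np.arange(len(samples)) / src_rate
    new_t = np.arange(n_out) / dst_rate
    return np.interp(new_t, old_t, samples)

# 1 second of a 440 Hz tone recorded at 44.1 kHz, converted to 48 kHz
sr_in, sr_out = 44100, 48000
t = np.arange(sr_in) / sr_in
tone = np.sin(2 * np.pi * 440 * t)

converted = naive_src(tone, sr_in, sr_out)
print(len(tone) / sr_in, len(converted) / sr_out)  # both ~1.0 s: pitch/tempo unchanged
```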
Now... if you set Sonar to receive timing from an external clock (say, your hardware interface), keep your project (and thus Sonar's) sample rate the same, but change your device's sample rate - theoretically Sonar will be trying to play back and record at the original sample rate, while your hardware interface sends and receives at the newly set rate. This would in theory be the easiest way to emulate the tape speed effect in the digital domain.
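For a back-of-the-envelope feel for what that mismatch does, here's a small sketch (the 44.1k / 48k numbers are just an example, not anything Sonar enforces): audio rendered at the project rate but clocked out at the device rate simply plays faster or slower, shifting pitch and tempo together, exactly like varispeed on tape.

```python
import math

def clock_mismatch_shift(project_rate, device_rate):
    """Pitch/tempo change when audio rendered at project_rate is
    played out by hardware clocked at device_rate (tape-speed effect)."""
    speed = device_rate / project_rate    # playback speed factor
    semitones = 12 * math.log2(speed)     # resulting pitch shift
    return speed, semitones

# Project stays at 44.1 kHz, interface reclocked to 48 kHz
speed, semis = clock_mismatch_shift(44100, 48000)
print(f"plays {speed:.3f}x faster, pitch up {semis:.2f} semitones")
# -> plays 1.088x faster, pitch up ~1.47 semitones
```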
Here's where some issues are likely to arise. Depending on the driver model and the stability of the particular device, some hardware interfaces may "force synchronicity," meaning they will either adapt back to whatever rate the software says it's expecting, or simply complain of a mismatch. I can tell you there are quite a few studios that prefer external / dedicated hardware clocks, partly to implement this technique quickly and partly for shuffling between multiple audio sources in a given control room.