As Yoda said, "There is another..." way to improve MIDI timing accuracy on a Windows machine. At least in theory...
Bear with me while I set the stage.
There are (at least) three major sources of MIDI jitter on a Windows DAW.
1) Jitter contributed by the underlying transport (USB, Firewire, PCI). USB contributes at least 2 milliseconds of jitter, for reasons discussed above. Firewire contributes about 0.3 milliseconds. PCI contributes very little (probably 5-20 microseconds at most; depends on drivers, interrupt configuration, etc.). Both Firewire and PCI are far tighter than USB in this respect.
2) The nature of MIDI DIN itself. If you stuff too much data over the wire too fast, you will get "MIDI logjam" effects, a.k.a. smearing. This happens if you do something like send lots of tightly-packed controller messages. It happens because MIDI DIN can only send 1 byte of data every 320 microseconds - period (the arithmetic is sketched just after this list). Try to send more, and data will just queue up until the logjam clears. This "smears" the timing. Note that USB MIDI is *not* susceptible to MIDI logjam in quite the same way, because USB itself has no built-in throttle limiting data transfer to 1 byte every 320 microseconds. However, this only holds when the USB MIDI device has no MIDI DIN jacks. If you are using a USB MIDI interface to send data to/from MIDI DIN jacks (as opposed to talking to a hardware synth over its own USB connection), then MIDI logjam can still occur, even with USB MIDI.
3) The final MIDI jitter source is the infamous 1 millisecond Windows "MM" timer, which sequencers have historically used for timing MIDI data (a sketch of that timer pattern follows below).
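(For the curious, the 320-microsecond figure in item 2 falls straight out of the spec: MIDI DIN runs at 31,250 bits per second, and each byte costs 10 bits on the wire (1 start bit + 8 data bits + 1 stop bit), so 31,250 / 10 = 3,125 bytes per second, i.e. one byte every 320 microseconds. A typical 3-byte message like a Note On therefore ties up the cable for roughly 960 microseconds - nearly a full millisecond - which is why a burst of tightly-packed controller data smears so quickly.)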
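To make item 3 concrete, here is a minimal sketch of the classic multimedia-timer pattern (timeBeginPeriod / timeSetEvent) that user-level sequencers have traditionally leaned on. The callback name and body are just illustrative, but the 1-millisecond period is the floor the MM timer gives you, so anything scheduled from it lands on roughly 1 ms boundaries:

    #include <windows.h>
    #include <mmsystem.h>   /* link against winmm.lib */

    /* Windows calls this back roughly once per millisecond. 1 ms is the floor,
       so every MIDI event dispatched from here is quantized to ~1 ms. */
    static void CALLBACK TimerProc(UINT uTimerID, UINT uMsg,
                                   DWORD_PTR dwUser, DWORD_PTR dw1, DWORD_PTR dw2)
    {
        /* ... check the song position, send whatever MIDI events are due ... */
    }

    int main(void)
    {
        timeBeginPeriod(1);                             /* request 1 ms timer resolution */
        MMRESULT id = timeSetEvent(1, 0, TimerProc, 0,  /* 1 ms period, best resolution  */
                                   TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
        Sleep(10000);                                   /* ...sequencer runs for a while */
        timeKillEvent(id);
        timeEndPeriod(1);
        return 0;
    }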
OK. Time to explain how it's possible (in theory, at least) to avoid the 1 millisecond quantizing/jitter effects.
First, use a PCI-based MIDI interface - not USB MIDI. This removes one major potential jitter source.
Then - don't use the "MM" timer to time incoming and outgoing MIDI data. This part requires kernel-level coding, but should be doable.
The standard Windows API calls for sending and receiving MIDI do not actually use the MM timer directly. I know some posts have said they do - but those posts were wrong. For example, the API call for sending a single MIDI "short" message is midiOutShortMsg(driverHandle, midiMessageData). There is no timestamp argument in that API call. Instead, Windows tries to send the data "as fast as possible" through the MIDI driver specified by 'driverHandle'.
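For reference, here is roughly what that user-level call looks like in practice. A minimal sketch; device 0 and the hard-coded Note On are just placeholders:

    #include <windows.h>
    #include <mmsystem.h>   /* link against winmm.lib */

    int main(void)
    {
        HMIDIOUT hOut;
        /* open the first MIDI output device on the system (placeholder choice) */
        if (midiOutOpen(&hOut, 0, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
            return 1;

        /* Note On, channel 1, middle C, velocity 64 - packed low byte first.
           Note there is no timestamp anywhere: the message is handed to the
           driver immediately and goes out "as soon as possible". */
        DWORD noteOn = 0x90 | (0x3C << 8) | (0x40 << 16);
        midiOutShortMsg(hOut, noteOn);

        Sleep(500);
        midiOutShortMsg(hOut, 0x80 | (0x3C << 8));   /* matching Note Off */
        midiOutClose(hOut);
        return 0;
    }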
If you call midiOutShortMsg from user-level code, it can take a while for the MIDI data to actually get sent out the MIDI DIN jack (Windows has to do a context switch from user to kernel level, and various kinds of housekeeping may occur...).
But if you send MIDI from your own kernel-level code, the data can go out much more 'immediately', especially if you bypass some of the normal Windows 'wrapping' of MIDI drivers and tickle the driver more directly.
Plus, if you are running kernel-level sequencer code, you can completely avoid the 1-millisecond-resolution MM timer. What do you use instead? A higher-resolution, kernel-level timer (several kinds are available; Google QueryPerformanceCounter for one option - see the sketch below).
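QueryPerformanceCounter is the user-mode face of that high-resolution counter (the kernel-mode equivalent is KeQueryPerformanceCounter). A minimal user-mode sketch of timing an interval with it, just to show the resolution on offer:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        LARGE_INTEGER freq, start, now;
        QueryPerformanceFrequency(&freq);   /* counter ticks per second */
        QueryPerformanceCounter(&start);

        /* ... do something, e.g. wait for the next MIDI event to come due ... */

        QueryPerformanceCounter(&now);
        double elapsedUs = (double)(now.QuadPart - start.QuadPart)
                           * 1000000.0 / (double)freq.QuadPart;
        printf("elapsed: %.1f microseconds\n", elapsedUs);
        return 0;
    }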
Putting all these pieces together, a clever sequencer designer might be able to build a kernel-level 'event engine' that a) runs at kernel level, to avoid various layers of Windows cruft; b) talks to the MIDI interface drivers more directly, to bypass still more cruft; and c) uses its own custom-shop high-performance timer, rather than the stock 'MM' timer.
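Just to make the shape of the idea concrete, here is a user-mode caricature of what such an engine's inner loop might look like. The names and structure are hypothetical; a real kernel-mode version would use KeQueryPerformanceCounter and talk to the driver stack directly rather than going through winmm:

    #include <windows.h>
    #include <mmsystem.h>

    /* Hypothetical event record: when to send (in counter ticks) and what to send. */
    typedef struct { LONGLONG dueTicks; DWORD shortMsg; } MidiEvent;

    /* Spin on the high-resolution counter until each event is due, then hand it
       straight to the driver. No MM timer, so no 1 ms quantizing from the timer. */
    static void RunEvents(HMIDIOUT hOut, const MidiEvent *ev, int count)
    {
        LARGE_INTEGER now;
        for (int i = 0; i < count; i++) {
            do {
                QueryPerformanceCounter(&now);
            } while (now.QuadPart < ev[i].dueTicks);
            midiOutShortMsg(hOut, ev[i].shortMsg);   /* goes out "right now" */
        }
    }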
Easy? No. But it might be do-able.
Does Sonar do this? No. Noel said (later in this thread) that Sonar uses the "MM" timer.
Have I built code like this? No. All my sequencer engines were user-level stuff (the last one was Win98-vintage).
Will this make a difference with USB MIDI drivers? Probably not. USB introduces enough other slop that it wouldn't be worth it.
How about Firewire? Possible, but more tricky to do than with PCI-based MIDI ports (because of the Firewire driver stack).
My gut tells me that a PCI-based interface still provides the best shot at minimizing jitter, because it has the lowest inherent 'transport jitter' (PCI bus is very very fast) and also has the simplest driver stack (less chance for Windows to muck things up).
Whew! If you plowed through all that, you probably deserve a beer.
- Jim
Edit: Noel clarified that Sonar doesn't use a kernel-level event engine. Frankly, building one would be a pretty tricky piece of coding. It would be even more tricky to make this hypothetical 'kernel-level event engine' perform reliably, given all the other stuff that goes on in the kernel. Probably wiser not to go there...
"The difference between theory and practice in theory is much less than the difference between theory and practice in practice." [Randal L. Schwartz]