• SONAR
  • MIDI "Jitter" - It Does Exist (p.21)
2007/10/12 18:17:32
Jim Wright
Noel - good to see your input! I would also expect that softsynths should get exactly the same timestamps from Sonar at all times, during regular playback, bounce, fast bounce, etc. Any apparent softsynth jitter seems likely to be caused by the particular softsynth, not by Sonar. (I'm assuming this because the direct Sonar-to-softsynth connection completely bypasses the particular Windows and MIDI-driver problems that I've seen cause jitter for MIDI traffic to/from the actual MIDI DIN ports.)

I will try to test some softsynths this weekend to see if I see any of the jitter issues that others have reported (I'll probably try Atmosphere - which has test tone programs - and Kontakt 2, rather than TTS-1; I may also try DimPro). I'd also like to do some tests with a padKontrol, external synth module and Sonar, to compare MIDI performance when the same source (padKontrol) is used to drive both an external module (Korg NS5R) and a softsynth (Battery 3, DimPro, Atmosphere....). How would I test? Here are some ideas.

- I'd route padKontrol to both NS5R and several MIDI ports (Edirol USB, EMU PCI, padKontrol USB...) and record all the MIDI inputs as separate tracks, simultaneously. Then, I'd look for jitter and skew in the various tracks. That would let me compare performance of padKontrol USB MIDI (up the USB cable to Sonar) with EMU 1820M MIDI IN (patching padKontrol MIDI DIN out to EMU MIDI DIN IN), and with Edirol USB MIDI (patching padKontrol MIDI DIN out to UM-550 MIDI DIN in, which appears as a different USB MIDI IN port to Sonar). By the way -- obviously, playing the padKontrol pads directly is not going to produce notes that fall exactly on any particular beat, or audio sample count, or millisecond. I don't care about the absolute timestamp of any particular MIDI event. What I'll be looking for is how much latency and jitter is introduced by different MIDI ports and drivers that are all processing the same original source note (a single hit on a padKontrol pad) and are all being recorded in parallel (see the first sketch after this list).

- In Sonar, I'd route the live MIDI input (either padKontrol USB MIDI, Edirol USB MIDI or EMU PCI MIDI) to a softsynth (Battery 3, Atmosphere, Dim Pro...). I'd then record the audio output from the softsynth to a separate audio track. As a baseline, I'd also record the NS5R output to another audio track (I'm assuming the NS5R has low jitter - which might not be the case; I have some other hardware modules to try if need be). Then, I can (hopefully) compare the relative jitter of different softsynths as they are driven by live, incoming MIDI (handled by Sonar, but not yet recorded/played back by Sonar) - the second sketch after this list shows how I'd pull the onsets out of the audio. I'd also record the live MIDI input for the next set of tests.

- Finally, I'd take the recorded MIDI data (captured during the previous set of tests) and use that to drive the same softsynths (and probably the NS5R too). This would result in another set of audio tracks. I could then compare all the different audio tracks (and MIDI tracks created using various MIDI input ports) -- and see what the data shows.
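
For the first test, here's a rough sketch of the kind of number-crunching I have in mind, once the note-on times for each recorded track are copied out of the event list (in milliseconds). All the numbers and names below are made up for illustration:

# Compare note-on times recorded in parallel from the same pad hits,
# arriving via different MIDI ports. One list per port, one entry per
# pad hit, all times in milliseconds. The data here is hypothetical.

from statistics import mean, pstdev

def port_stats(reference, candidate):
    """Latency offset and jitter of `candidate` relative to `reference`."""
    deltas = [c - r for r, c in zip(reference, candidate)]
    offset = mean(deltas)                        # average extra latency
    jitter = pstdev(deltas)                      # spread around that latency
    peak = max(abs(d - offset) for d in deltas)  # worst single deviation
    return offset, jitter, peak

emu_pci    = [0.0, 500.4, 1000.1, 1500.6]   # EMU 1820M MIDI DIN in
edirol_usb = [1.2, 501.9, 1001.0, 1502.4]   # UM-550 DIN in, seen over USB
pk_usb     = [0.8, 500.9, 1001.5, 1501.1]   # padKontrol's own USB MIDI

for name, track in (("Edirol USB", edirol_usb), ("padKontrol USB", pk_usb)):
    offset, jitter, peak = port_stats(emu_pci, track)
    print(f"{name}: offset {offset:+.2f} ms, jitter {jitter:.2f} ms, "
          f"peak {peak:.2f} ms")

The offset is each port's average latency against the EMU PCI baseline; the jitter and peak figures are what I actually care about.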
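
For the audio side of the second and third tests, I'd pull the note onsets out of each rendered or recorded track and look at the spread of the onset-to-onset intervals. A toy onset finder, assuming numpy is installed and the take is a 16-bit mono WAV (the filename is made up):

# Find note onsets in a recorded track by simple amplitude thresholding,
# then report the spread of the inter-onset intervals.

import wave
import numpy as np

def onset_samples(path, threshold=0.1, refractory_ms=100):
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        data = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    level = np.abs(data.astype(np.float64)) / 32768.0
    hot = np.flatnonzero(level > threshold)
    gap = int(rate * refractory_ms / 1000)   # ignore re-triggers within a note
    onsets, last = [], -10**9
    for s in hot:
        if s - last > gap:
            onsets.append(s)
        last = s
    return onsets, rate

onsets, rate = onset_samples("ns5r_take.wav")   # hypothetical filename
intervals = np.diff(onsets)
print("intervals (samples):", intervals)
print("peak deviation from mean: %.1f samples (%.2f ms)"
      % (np.max(np.abs(intervals - intervals.mean())),
         1000 * np.max(np.abs(intervals - intervals.mean())) / rate))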

If anyone has other suggestions for evaluating real-world MIDI jitter, using something like a padKontrol as the MIDI note generator -- speak up!

Original: RTGraham

...Jim Wright has suggested that a MIDI interface could be built that references its own, more stable clock, and could also clock to an external stable clock, like word clock, and achieve much greater internal MIDI stability while still being able to reference Windows' current driver APIs.

That's not exactly what I said, but then I haven't given any details of the hypothetical interface. What I have in mind ... is hardware that basically synchronizes MIDI traffic directly with an audio stream, using a MIDI timebase that's inherently aligned with the audio sample rate. The guts of the hardware could probably be done using an FPGA (field-programmable gate array). Sorry to be so mysterious. I would like to take things further, but would need to get permission from my employer to open-source the idea (or find another way of developing it that complies with IP concerns).
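
I can at least illustrate the one public sentence above with a toy sketch: stamp incoming MIDI with the audio device's sample counter instead of a free-running system timer, so MIDI time and audio time share one clock and cannot drift apart. (This is just an illustration of that general idea, not the actual design; all the names are made up.)

SAMPLE_RATE = 44100

class SampleClockStamper:
    def __init__(self):
        self.samples_played = 0            # advanced by the audio callback

    def on_audio_buffer(self, frames):
        # The (hypothetical) audio driver calls this once per buffer.
        self.samples_played += frames

    def stamp(self, midi_bytes):
        # Called the moment a group of MIDI bytes arrives at the port.
        return (self.samples_played, midi_bytes)

stamper = SampleClockStamper()
stamper.on_audio_buffer(256)               # pretend one buffer has elapsed
event = stamper.stamp(b"\x90\x3c\x64")     # note-on, middle C, velocity 100
print("event at sample", event[0], "=", event[0] * 1000 / SAMPLE_RATE, "ms")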

- Jim
2007/10/12 18:28:07
RTGraham

ORIGINAL: Jim Wright
Original: RTGraham

...Jim Wright has suggested that a MIDI interface could be built that references its own, more stable clock, and could also clock to an external stable clock, like word clock, and achieve much greater internal MIDI stability while still being able to reference Windows' current driver APIs.

That's not exactly what I said, but then I haven't given any details of the hypothetical interface. What I have in mind ... is hardware that basically synchronizes MIDI traffic directly with an audio stream, using a MIDI timebase that's inherently aligned with the audio sample rate. The guts of the hardware could probably be done using an FPGA (field-programmable gate array). Sorry to be so mysterious. I would like to take things further, but would need to get permission from my employer to open-source the idea (or find another way of developing it that complies with IP concerns).

- Jim


Sorry, didn't mean to misrepresent you. You're correct, you did not in fact state exactly what I paraphrased - I sort of summarized a couple of different points that you made in a few posts, because they seemed related; but now it looks like I'm putting words in your mouth. My apologies - just trying to draw connections.
2007/10/12 18:30:14
dstrenz
ORIGINAL: brundlefly
...But I am seeing differences on the order of 100 samples from the expected interval between consecutive events when rendering through the TTS-1. I just did a test with the Dreamstation DXi, and got smaller errors, on the order of 30 samples. So it seems to be Synth-specific. I have not yet tried a VST instrument.


I haven't used it in a long time, but the TTS-1 is not the ideal softsynth for testing Sonar's MIDI timing. The reason I stopped using it? Download a MIDI file that contains several tracks and import it into Sonar. Set all of the tracks to play through the TTS-1. Add an instance of a different softsynth and route one of the MIDI tracks through it. The timing is off by a mile.
2007/10/12 18:51:16
dstrenz
Jim, I *may* have found a fault with the old test and would appreciate your opinion on it if you have one. The first thing that test does is record audio and MIDI simultaneously while the sequencer is synced to Sonar. But normally, I only record MIDI by itself without syncing anything. In an attempt to remove audio and synchronization from the equation, I did the following. The results are very surprising to me. Also, I followed your advice and used the EMU 1820M's MIDI rather than the Fantom's USB MIDI.

1. Record 4 bars of quarter notes on the Fantom sequencer at 120 BPM.
2. Save the file as an SMF and import it into Sonar. (Let's call this clip from the Fantom Fclip.)
3. Set up Sonar to record MIDI from the Fantom (960 PPQ), hit Record in Sonar, and press Play on the Fantom. (Let's call the clip recorded in Sonar Sclip1.)
4. With snap-to-grid on, slide Sclip1 to line up with Fclip, aligning the first notes.
5. Select both MIDI clips and open View|Event List. Trk4 = Sclip1, Trk5 = Fclip.


The range of accuracy is about 4 ticks. Very surprising.
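
For anyone who wants to repeat the comparison, the arithmetic is trivial once the note Start times are copied out of the event list. The tick values below are made up, but shaped like what I'm seeing:

# Per-note timing error between the imported Fantom clip and the clip
# recorded over MIDI, using note Start times in ticks at 960 PPQ.
# The values are hypothetical, shaped like the result described above.

fclip  = [0, 960, 1920, 2880, 3840]   # quarter notes from the imported SMF
sclip1 = [0, 962, 1919, 2883, 3841]   # the same notes as recorded by Sonar

deltas = [s - f for f, s in zip(fclip, sclip1)]
print("per-note delta (ticks):", deltas)                 # [0, 2, -1, 3, 1]
print("range of error:", max(deltas) - min(deltas), "ticks")   # 4 ticks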

Now, to add audio to the equation, I repeated the process but recorded an audio track along with the MIDI track. Call the new MIDI track Sclip2. Trk5 = Fclip, Trk2 = Sclip2:



Very similar results!? Next I tried to add MIDI sync to the mix, but I could not get it to work through the MIDI cables using the same procedure I had used to make it work through USB.
2007/10/12 20:31:30
brundlefly
The range of accuracy is about 4 ticks. Very surprising.



This is virtually identical to what I reported earlier: a max error from the expected time of about 1.5 ms = 3 ticks at 960 PPQ, 120 BPM.
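
(Checking the conversion: at 120 BPM a quarter note is 60/120 = 0.5 s = 500 ms, so one tick at 960 PPQ is 500/960 ≈ 0.52 ms, and 1.5 ms / 0.52 ms ≈ 2.9 ticks - about 3.)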

In my experience, MIDI sync should put the events right on the same tick.
2007/10/12 20:42:09
brundlefly
Download a MIDI file that contains several tracks and import it into Sonar. Set all of the tracks to play through the TTS-1. Add an instance of a different softsynth and route one of the MIDI tracks through it. The timing is off by a mile.


I did exactly this to record Two-Dude Defense. A couple of tracks of drums, and a bar of horn played through the TTS-1, with piano played through TruePianos. I haven't analyzed it to the sample like we are here, but it sounded fine, both in real-time and rendered:

http://www.soundclick.com/bands/songInfo.cfm?bandID=757783&songID=5856598

Admittedly I wasn't putting that much load on the TTS-1. A bigger problem with the TTS-1 in my opinion is that it sounds little better than a cheap soundcard. It's handy for ginning up a quick arrangement, though.
2007/10/12 21:52:44
Noel Borthwick [Cakewalk]
Still haven't read this thread closely, so I may be missing some arguments, but there are a lot of variables with softsynths that can cloud the test. E.g., depending on the patch, the synth might render its audio later than the actual MIDI data, or it might not be deterministic for a test. Basically it's up to the synth to decide when to actually start rendering the audio. The MIDI timestamp is the cue to begin rendering, that's all.

The ideal test is to take a minimal synth that renders audio instantaneously when it receives a note-on. A test VST that just emits a test tone in response to a note-on would be ideal for this. The test DXi Twonar from the DXi SDK that Ron wrote years ago could be used to test this with DXi's.
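
Something along these lines - a toy sketch of just the timing-relevant part of such a test synth, not a real DXi or VST (the constants are arbitrary):

# Inside the render callback, start a short tone burst at exactly the
# sample offset carried by the note-on timestamp, with no attack ramp,
# no voice management, nothing else that could move the onset around.

import math

SAMPLE_RATE = 44100

def render_buffer(buffer_len, note_on_offsets, freq=1000.0, burst_len=64):
    """Return one audio buffer with a tone burst at each note-on offset."""
    out = [0.0] * buffer_len
    for start in note_on_offsets:                # offsets are in samples
        for i in range(burst_len):
            if start + i < buffer_len:
                out[start + i] += math.cos(2 * math.pi * freq * i / SAMPLE_RATE)
    return out

buf = render_buffer(256, [100])                  # note-on at sample 100
print("first nonzero sample:", next(i for i, v in enumerate(buf) if v))
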
2007/10/12 21:59:34
brundlefly
It remains to be seen whether TruePianos' superior performance is inherent to VSTi technology or just that they've got a better timing algorithm. If someone can turn me on to a freeware, downloadable VSTi somewhere, even a short-term demo like the TruePianos installation I'm using, I'll test it.


You know things are bad when you start quoting yourself. Oh well. This is just getting more and more interestinger...

Long story short: downloaded and installed Jamstix 2 Demo. I get some noise out of it, but not drum sounds, and it runs my CPU into the ground, so I gave up on it.

Instead, I re-ran the rendering timing test with the Roland Groove Synth, another DXi bundled with Sonar. As before, I used its Claves patch, which has a really nice sharp attack and was easily picked up by AudioSnap. I got yet another behavior pattern, though similar to the TTS-1:

The first transient was 136 samples late; the second was 140 late, the third was 144 late, etc. So the interval was consistent, but 4 samples too long. This went on until, at the 25th event, the rendered transient was 232 samples late. But then, on the next event, the interval dropped from 2944 to 2816, so the transient was only 108 samples late - i.e., better than where I started. Huh?

So, on a hunch, I scrolled ahead to find the next place where the interval dropped from 2944 to 2816. I found it, and what do you know, it's exactly 32 events later. So I went out another 32 events, and found the same thing: one transient 232 samples late, and the next one back to 108 samples late.
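
To convince myself the period really is 32, I could scan for the resets automatically. The onsets below are synthetic - built to mimic the pattern, not my actual measurements:

# Given measured onset positions (in samples) and the expected spacing,
# compute each event's lateness and find where it jumps back down.
# Synthetic data: lateness grows 4 samples per event, resets every 32.

EXPECTED = 2940                                  # hypothetical grid spacing
onsets = [n * EXPECTED + 136 + 4 * (n % 32) for n in range(96)]

lateness = [o - n * EXPECTED for n, o in enumerate(onsets)]
resets = [n for n in range(1, len(lateness)) if lateness[n] < lateness[n - 1]]
print("lateness of first events:", lateness[:4])   # [136, 140, 144, 148]
print("resets at events:", resets)                 # [32, 64]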

Now, as everyone knows, 32, being a power of 2, is one of the magic numbers in computing. When things happen at intervals of exactly a power of 2, you want to take a look at your programming. I once found a bug in an early Olympus digital camera driver that was creating a "hot" pixel at 64-pixel intervals. I reported it, and a couple of weeks later a new driver came out, and the hot pixels were gone.

But, 32 events also happen to be exactly half a measure in my test setup, so...?
2007/10/12 22:04:29
pianodano
I have never used any of the supplied softsynths, so I cannot comment on those. But as I stated earlier, I do use RealGuitar constantly. I have tried numerous times to freeze a well-tweaked RealGuitar track, and I always found it necessary to unfreeze it: something changes in the timing when frozen. Now I just put it on the tape recorder. Problem solved. YMMV.

Brundlefly, they may still have a demo of RealGuitar over at the MusicLab site. I'll go see and let you know.
2007/10/12 22:06:49
brundlefly
Still haven't read this thread closely, so I may be missing some arguments, but there are a lot of variables with softsynths that can cloud the test. E.g., depending on the patch, the synth might render its audio later than the actual MIDI data, or it might not be deterministic for a test. Basically it's up to the synth to decide when to actually start rendering the audio. The MIDI timestamp is the cue to begin rendering, that's all.


This is why I mentioned earlier that I was ignoring the delay for the time being, and just looking at the interval between transients. This interval should be consistent regardless of how much delay there is. This is what some are referring to as "jitter" in rendering.
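
A quick illustration (with made-up numbers) of why the constant delay drops out of that measurement:

# A fixed rendering delay shifts every onset equally, so it cancels
# out of the onset-to-onset intervals; only jitter changes them.
onsets  = [1000, 3940, 6880, 9820]               # hypothetical, 2940 apart
delayed = [o + 512 for o in onsets]              # same take, constant delay
print([b - a for a, b in zip(onsets, onsets[1:])])    # [2940, 2940, 2940]
print([b - a for a, b in zip(delayed, delayed[1:])])  # identical intervals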