• SONAR
  • MIDI "Jitter" - It Does Exist
2007/10/11 22:18:47
pianodano

ORIGINAL: dewdman42

Here's another interesting article

http://www.soundonsound.com/sos/Oct02/articles/pcmusician1002.asp




You da man.

Thanks,

Danny
2007/10/11 23:25:04
T.S.
Hi dewdman42,

My hat's off to you for pursuing this like you are, although I'm not sure where you're going to end up or how significant your findings will be. I've beaten my head against the wall for years with this and all I got was a lump on the head. Hehe, maybe that's why I'm so brain dead.

If you're going to do this, I'd suggest you use a sine wave or a square wave around 1 kHz. Edit it by cutting it off both front and back at a zero crossing and put it in your soft sampler. It'll be a lot more accurate than a sidestick or hi-hat.
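For anyone who wants to render that test tone in code rather than in an editor, here's a minimal sketch using just the Python standard library. The 44.1 kHz rate, 16-bit depth, and file name are my assumptions, not from the post:

    import struct
    import wave

    SAMPLE_RATE = 44100
    FREQ = 1000
    CYCLE = SAMPLE_RATE // FREQ          # 44 samples: one (approximate) 1 kHz cycle

    # One full cycle: positive half, then negative half. The clip starts and
    # ends exactly at the polarity flips, so there's no DC step at either edge,
    # which is the point of cutting at zero crossings.
    AMP = 30000
    samples = [AMP] * (CYCLE // 2) + [-AMP] * (CYCLE - CYCLE // 2)

    with wave.open("square_1k.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                # 16-bit mono
        w.setframerate(SAMPLE_RATE)
        w.writeframes(struct.pack(f"<{len(samples)}h", *samples))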

Since this thread is so long, I haven't read much of it, so if this has already been covered, I'm sorry; please forgive me.

T.S.
2007/10/11 23:32:38
brundlefly
ORIGINAL: T.S.

If you're going to do this, I'd suggest you use a sine wave or a square wave around 1 kHz.


I've got the sample: a single cycle of a 1000 Hz square wave I used to do latency testing on audio. But I don't think I have a software sampler in my arsenal, unless something bundled with Sonar 6 SE can do it.

Dave
2007/10/11 23:34:17
bitman
Dudes and Dudettes,

My first instrument was drums. Then at 9 I switched to guitar. When I was much older I got one of those Roland MIDI drum sets, and after recording MIDI drums I thought, good Lord, I suck at this; why did I not hear it like that when I was recording!? So quantize I did, a lot, which sterilized a very good part I played.

Then I scored a cheap 5-piece acoustic kit for a "house kit" when I started doing others' demos.

Guess what. I'm a good drummer on that kit.

- MIDI jitters all right - a lot.



2007/10/11 23:40:39
brundlefly
Here's another interesting article

http://www.soundonsound.com/sos/Sep02/articles/pcmusician0902.asp


Beware of anything or anyone purporting to know the "truth" about anything. From the second paragraph:

"The beauty of VST and DX Instruments is that their playback timing from a MIDI + Audio sequencer is always guaranteed to sample accuracy, since the waveforms are generated slightly ahead of time and outputted just like any other audio track."

Would that it were true. Even when rendering tracks offline, the timing of at least some soft synths is not anywhere near sample accurate. Seems like it should be an easy thing to do, but my testing says otherwise.

I generated a track of 15-tick-long MIDI note events at 64th-note intervals (60 ticks at 960 PPQ). I bounced to audio through the TTS-1, using the sharpest drum hit I could find with reverb off so it would be easy to pick out the beginnings of transients, and they were all over the place. I haven't crunched the numbers yet, but I know some of them were 100 samples off (about 4 ticks at 120 BPM). I'm still playing around with this, using different synths and rendering methods, but it doesn't look too pretty right now. Offline timing seems no better than real-time.
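For reference, here's the arithmetic behind those figures, assuming a 44.1 kHz project (the sample rate isn't stated above):

    # 960 PPQ at 120 BPM: how big is a tick, and what do 100 samples amount to?
    PPQ, BPM, SR = 960, 120, 44100

    tick_sec = 60.0 / BPM / PPQ              # ~0.00052 s (~0.52 ms) per tick
    ticks_per_64th = PPQ // 16               # 60 ticks, as used for the test track
    offset_ms = 100 / SR * 1000              # 100 samples ~= 2.27 ms
    offset_ticks = (100 / SR) / tick_sec     # ~4.4 ticks, i.e. "about 4 ticks"

    print(ticks_per_64th, round(offset_ms, 2), round(offset_ticks, 1))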

Just trying to provide some more data points.
2007/10/11 23:51:40
dewdman42
Well, I just did one simple test: took a drum MIDI clip from EZdrummer, listened to it in Sonar, turned it into a groove clip loop, stretched it out a while, listened to it for a while... seemed pretty solid to me.

I froze it. Still sounded solid.

I zoomed way in until 1 ms of time was a few millimeters of horizontal screen space. I could view both the MIDI drum map view and the wave data at the same time. I put the PRV into the track view so I could see the detailed drum map there. Zoomed everything big. I can click in the PRV exactly on the beat, lining up the MIDI event; the line shows up on the track view and lines up exactly with the wave file that was created when freezing the MIDI track through EZdrummer as a sound module.

(shrug)

So far, I'm happy with that timing, no complaints... I have to go to the gym now, but later I will try routing the output from Sonar into a wave file while playing, to see if it's off when not freezing or mixing... though it's somewhat of a moot point.

I don't understand why others are having all-over-the-place results. We will need to dig deeper to understand what it is about their tests that is causing this. So one of us who is not having problems should try to recreate exactly the same test as one of the people who is. We should also discuss various system aspects to see if those are the root cause, but right now I'm inclined to say that, at least here, so far, I am not detecting any problem while mixing down.

I had the metronome assigned to MIDI, by the way, instead of using one of the built-in audio sounds.

Please give as many details as you can about the setup you used to create this problem and I will see if I can recreate the bad timing here.



2007/10/11 23:53:32
dewdman42


ORIGINAL: brundlefly

Would that it were true. Even when rendering tracks offline, the timing of at least some soft synths is not anywhere near sample accurate. Seems like it should be an easy thing to do, but my testing says otherwise.

We should try to identify some specific instruments suffering from this and see if everyone can replicate the problem.

You say the TTS-1 consistently replicates it? I will try that when I get back from the gym tonight.
2007/10/12 00:08:37
Jim Wright
Chris,

In current PC (and Mac) systems, the audio and MIDI devices do work asynchronously. However, the reasons why have to do more with the use of different timebases for audio and MIDI (and the completely asynchronous nature of MIDI message input/output) -- than with the use of buffers per se.

It's useful to consider how Firewire (1394) handles audio and MIDI. Both kinds of data are sent "over the wire" using CIPs (Common Isochronous Packets). Because audio and MIDI data are sent, literally, within the same data packets -- MIDI is forced to be synchronized exactly with the audio sample data.

Let me say that differently. At the moment when MIDI messages are packetized for transmission over Firewire -- the MIDI messages are no longer asynchronous -- they are now isochronous. Which is to say, the "implicit timestamps" of the MIDI messages are now fixed rigidly against the running audio sample count associated with the Firewire data packet that contains each particular MIDI message fragment. (I edited the MIDI-over-Firewire spec for the MMA; trust me on this).

Now, Firewire data transport involves buffering. Both audio and MIDI data have to be "queued up" before they can be packetized and formatted into Firewire frames, and then finally sent over the wire. Note that the MIDI data is asynchronous on its way from the sequencer to the Firewire MIDI driver; it only becomes isochronous at the point, during internal driver processing, where the asynchronous outgoing MIDI messages are converted to isochronous outgoing Firewire data bytes within particular packets.

Now -- if the sequencer could write the MIDI data directly into the outgoing Firewire data packets -- the sequencer could control, with very fine accuracy, exactly when particular MIDI messages would occur w.r.t. the audio data that's being sent over the same Firewire pipe. This is not how Firewire MIDI works currently - but it could.
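To make the packetization step concrete, here's a toy sketch in Python. The packet layout and names are invented for illustration; they don't come from the 1394 spec or any real driver:

    from collections import deque

    SAMPLE_RATE = 48000
    CYCLE_SECONDS = 125e-6                                   # Firewire isochronous cycle period
    SAMPLES_PER_PACKET = round(SAMPLE_RATE * CYCLE_SECONDS)  # 6 samples at 48 kHz

    midi_queue = deque()     # MIDI bytes arrive here asynchronously from the sequencer
    sample_count = 0         # running audio sample count

    def build_packet(audio_block):
        """Pack one cycle of audio plus any pending MIDI into a single packet.

        Once the MIDI bytes land in a packet, their timing is implicitly fixed
        against sample_count: they have become isochronous.
        """
        global sample_count
        midi_bytes = b"".join(midi_queue)    # whatever arrived since the last cycle
        midi_queue.clear()
        packet = {"first_sample": sample_count,
                  "audio": audio_block,
                  "midi": midi_bytes}
        sample_count += SAMPLES_PER_PACKET
        return packet

The point of the "what if" above is that a sequencer writing into something like build_packet directly could place a MIDI message to within one packet, a fraction of a millisecond, instead of being at the mercy of OS scheduling.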

Chris noted that a new driver structure would be needed for a hypothetical next-generation MIDI interface. New drivers, certainly. I don't think the driver APIs would necessarily have to change, because DirectMusic (on Windows) and CoreAudio (on Mac) both support high-res timestamps. We would just need a MIDI driver that integrates with the audio drivers to 'do the right things', in lockstep.

Ok. Enough theory and speculation for now.

---------------

As dewdman42 pointed out - we don't need sample-accurate sync for MIDI. I think 1/3 millisecond would be fine for pretty much any purpose. Heck, a 1-millisecond time resolution that was both accurate and reliable should be enough.

As dewdman42 also pointed out -- the "MIDI jitter" that bothers most of us is probably much larger than 1-2 milliseconds, and is probably happening because of various kinds of system configuration issues (or MIDI interfaces with lousy drivers, or various other reasons...). I think most of us would be content if current PC MIDI interfaces worked as well, in practice, as the ones on the 15-year-old hardware sequencers mentioned in many posts on this thread. Those products hardly use sample-accurate MIDI -- but lots of people swear by them (and few swear at them, unlike software sequencers...)

As pianodano wrote:
>> I wonder if I am the only one getting the feeling that the existing underlying engineering/architecture is a piecemeal affair?

Nope. Frankly, it's totally piecemeal. MIDI was developed starting in 1981 (Dave Smith's 'Universal Synthesizer Interface' AES paper; I was in the audience), and first shipped in 1983. It was designed to be as cheap as possible. It was not designed with any thought of synchronizing with audio, or video, or doing anything but hooking two electronic music products together. Everything after that -- just grew.

Different companies proposed different ways of syncing up MIDI with audio, video, whatever. Some of 'em worked, some didn't. Some of the better ideas -- died, because the people with those ideas were lousy businessmen. Some of the sketchier ideas -- made it in the marketplace, because their inventors were much better businessmen than engineers. Kind of like the rest of the computer hardware/software revolution that happened over the last 30 years....

I've been involved with MIDI and AES standards work since 1981, off and on. There is no grand design. There's just a lot of people trying to work out the best compromises they can, considering both what we (music geeks) know how to build, or code, and what the marketplace seems to want.

Kind of shocking, isn't it?

- Jim
2007/10/12 00:36:27
bvideo
I have spent a little time today studying the Roland MC-500mkII and my Sonar system on a Pentium 930D (dual core) with a MOTU MIDI Express 128 (USB) and an Alesis MultiMix FW 16. I have no high-precision lab gear, so I've done some recording of each system into the other in the presence of some load in Sonar. The load is a bunch of audio tracks, some Perfect Spaces and some Vintage Channels, all using about 50% of one CPU. With the buffer at 256 and multiprocessor deselected, no clicks were ever heard.

My first observation was that it took longer to boot the MC-500 than Windows XP.

I let Sonar use 960 ppq, while the MC-500 uses 96. So ticks are about half a millisecond in Sonar and about 5 milliseconds in the MC-500 (0.00052 and 0.0052 seconds). Although I did do some experiments with Sonar and the MC-500 taking turns being master and slave, I think that is not very informative. So I adjusted Sonar's tempo to 119.93 against the MC-500's 120 so that the two were not noticeably out of sync after eight measures.
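(For anyone checking the arithmetic, here's where those tick sizes come from, assuming the 120 BPM tempo used here:)

    # Tick duration = 60 / BPM / PPQ seconds.
    BPM = 120
    for ppq in (960, 96):                    # Sonar vs. MC-500 resolution
        tick = 60.0 / BPM / ppq
        print(f"{ppq} ppq -> {tick:.5f} s per tick")
    # 960 ppq -> 0.00052 s; 96 ppq -> 0.00521 s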

The data stream I used was eight measures of quantized quarter-note triads. I triggered a JV-80 voice that was pitched and had a sharp attack.

There are two parts to this:
1 - view the jitter at the start of quarter-note triads by examining an audio recording.
2 - view the jitter among all the notes seen in a MIDI recording.

Audio
-----
With Sonar playing the MIDI, I recorded the synth. Using AudioSnap to find the transients, they were all in the range of 4 to 16 ticks (960 ppq) late. So I say the average latency was 10 ticks and the jitter was -6 to +6. I don't know what part of the system, including the JV-80, contributes to the latency or the jitter. But the jitter could be said to be about -3 to +3 milliseconds.

With the MC-500 playing MIDI, I recorded the synth in Sonar and used AudioSnap again. AudioSnap found transients with jitter of -5.5 to +5.5 ticks, i.e. just a little less than 3 ms plus or minus. The latency is irrelevant because it involves me moving from one control panel to another to separately start the record and playback. Anyway, this jitter for the MC-500 triggering the JV-80 is equivalent to what I saw with Sonar.

MIDI
----
I recorded MIDI to the MC-500 played from Sonar. The notes of the triad were recorded either 0 or 1 tick (of 96 ppq) apart, and the triads were separated by 95 or 96 ticks accordingly. This is a jitter of -2.5 to +2.5 milliseconds. This means the recording was "damaged" by the occasional insertion of a 96 ppq clock tick between two notes in a triad. I believe this is unavoidable under any circumstances that have no electronic synchronization.
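That quantization floor is easy to check, again assuming 120 BPM:

    # A 96 ppq recorder snaps each event to the nearest ~5.2 ms tick, so
    # quantization alone contributes up to half a tick of error either way.
    tick_ms = 60.0 / 120 / 96 * 1000         # ~5.21 ms per tick
    print(f"+/- {tick_ms / 2:.2f} ms")       # ~2.6 ms, close to the observed -2.5..+2.5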

I recorded MIDI to Sonar played from the MC-500. The maximum jitter in that recording was -7 to +7 960 ppq ticks, or -3.5 to +3.5 ms. The notes of the triads were never more than 8 ticks from one another (i.e. -4 to +4 ticks, or +/-2 milliseconds). In this case, the triads are somewhat less damaged than in the MC-500 recording because they are not separated as much from each other.

-------
To summarize, I think I saw equivalent jitter in every scenario, roughly +/-3 milliseconds, and less within the triads. I'm sure this type of setup lacks the precision to see better jitter, much less produce it. Same with me.