• SONAR
  • MIDI "Jitter" - It Does Exist (p.2)
2007/10/06 00:37:48
Rajay1
In all my years of midi recording I've never had noticeable jitter although I know it exists. Of course that was after I learned the effect that power fluctuations can have on accuracy. If you're sharing power with lots of other 60 cycle appliances that go on and off at different times and aren't using a good power conditioner, it happens more frequently. That knowledge made me install isolated outlets for my studio with separate grounds and add a power conditioner.
2007/10/06 01:34:42
jsaras
Jim Wright is infinitely more knowledgeable in this area than I am, but it seems like we should have MIDI 2.0 available by now. When my mouse is transmitting data at a higher bandwidth than a MIDI connection, it's time to rethink the whole concept. The fact that you can't really play a large chord and have all the notes sent down the MIDI cable at the exact same time (MIDI is a serial protocol) tells you how limited it can be for real-time performances. As a keyboard-centric musician, I long for the day when the response is truly as snappy as my hardware keyboards.
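To put a number on the chord problem: classic 5-pin DIN MIDI runs at 31,250 baud, with 10 bits on the wire per byte and 3 bytes per Note On, so the notes of a chord are necessarily smeared in time. A quick back-of-the-envelope sketch (worst case, ignoring running status, which would shave a byte off each message after the first):

```python
# Worst-case serialization smear for an n-note chord over a classic
# 5-pin DIN MIDI cable (31,250 baud, 10 bits per byte on the wire).
MIDI_BAUD = 31250
BITS_PER_BYTE = 10          # start bit + 8 data bits + stop bit
NOTE_ON_BYTES = 3           # status, note number, velocity

def chord_smear_ms(num_notes: int) -> float:
    """Milliseconds between the first and last note of a chord
    whose Note On messages are sent back-to-back."""
    byte_time = BITS_PER_BYTE / MIDI_BAUD       # 320 microseconds per byte
    return (num_notes - 1) * NOTE_ON_BYTES * byte_time * 1000

print(round(chord_smear_ms(10), 2))  # 10-note two-hand chord -> 8.64 ms
```

So the last note of a big two-handed chord leaves the cable almost 9 ms after the first, before any software jitter is even considered.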

BTW, my live setup is MIDI free and I've never been happier.

2007/10/06 04:20:09
mbncp
In apps like Sonar, Cubase and most other DAWs, live MIDI is quantized to the audio buffer when driving a VSTi. So with a 256-sample buffer you can have up to about 6 ms of jitter.
There are a few apps, like VstHost and Live, that preserve the delta time.
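The 6 ms figure follows directly from the buffer length: an event can arrive anywhere inside a buffer, so snapping it to a buffer boundary introduces an error of up to one full buffer. A minimal sketch of the arithmetic at a 44.1 kHz sample rate:

```python
# Worst-case jitter when live MIDI events are snapped to audio buffer
# boundaries instead of keeping their offset within the buffer.
SAMPLE_RATE = 44100

def max_jitter_ms(buffer_samples: int, sample_rate: int = SAMPLE_RATE) -> float:
    """An event can land anywhere inside a buffer, so the snap error
    ranges from zero up to one full buffer length."""
    return buffer_samples / sample_rate * 1000

for size in (64, 256, 1024):
    print(size, round(max_jitter_ms(size), 1))  # 64->1.5, 256->5.8, 1024->23.2
```

This is why the problem is tolerable at small buffer sizes but becomes obvious at 1024 samples and above.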

Not a big issue with small buffers, but if you have a large buffer because you use Sonar for playback of VSTs from another app, this starts to become very annoying.

Also when using a MIDI guitar, having all your strumming quantized (which means gone) is not fun at all :(

That's one of the reasons I separate the two, Sonar for the MIDI (still have some jitter, but it's much better), and VstHost for the VST(i)s (less than 1 msec jitter with any buffer size).
2007/10/06 08:48:38
Nick P
Great answers, all. I've been using MIDI almost since its inception circa 1982 (close to 25 years). I've been sequencing with all manner of hardware sequencers since 1986, and more recently software sequencing w/ 4 versions of Sonar. I definitely notice various sequencers being more or less accurate at recording and playing back un-quantized MIDI performances.

A couple of comments:

1) I am using 960 PPQ. However, I have recorded into MPCs with 1/10th that resolution (96 PPQ) and gotten results which, to my ears, sound more accurate than what I hear with Sonar (maybe more accurately stated: with software-based MIDI recording solutions, since I've not tried other software products such as Logic, DP, Cubase, etc.). So from my perspective, clock resolution is not as important as marketers would have you believe. For example, set a metronome at 120 BPM. Listen to each beat and try to conceptualize it being divided into 96 parts. Now 960 parts. What's the difference? If MIDI were being recorded with perfect accuracy, 96 PPQ would be completely sufficient.
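The point about resolution can be made concrete: at 120 BPM a quarter note lasts 500 ms, so even the "low" 96 PPQ resolution already resolves to about 5 ms per tick, and 960 PPQ to half a millisecond. A short sketch of the arithmetic:

```python
# Tick duration at a given tempo and PPQ resolution -- illustrates why,
# at 120 BPM, even 96 PPQ already resolves to a few milliseconds.
def tick_ms(bpm: float, ppq: int) -> float:
    beat_ms = 60000 / bpm        # one quarter note in milliseconds
    return beat_ms / ppq

print(round(tick_ms(120, 96), 2))   # -> 5.21 ms per tick
print(round(tick_ms(120, 960), 3))  # -> 0.521 ms per tick
```

In other words, both resolutions are finer than the jitter being discussed in this thread, which supports the argument that capture accuracy, not clock resolution, is the bottleneck.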

2) As I understand it, MIDI is a relatively (being that it's now 2007) slow protocol, as well as being a serial interface. Since it seems that I was getting equal or better results 20 years ago, I must assume that Firewire vs. USB would not make a difference in terms of causing a MIDI bottleneck. Certainly it's a concern w/ audio, but I don't think so with MIDI. People have been sequencing with software-based solutions for at least 20 years, when computers were much, much less powerful than even today's budget models.

3) LOL, Dave, my PC's a little more modern than that.

My theory (completely unscientific), is that as these software DAW applications have gotten more and more bloated, they put less and less emphasis on tight MIDI timing in the rush to "gold-plate" the application. This combined with how a computer's OS deals with incoming and outgoing data (handling multiple housekeeping chores at once, but still in the final analysis, one at a time, such as mouse moves, screen repaints, etc...) just begs for MIDI timing to be less than 100% accurate. And why should a manufacturer put emphasis on it? Many Sonar users in this forum seem to be guitarists or bassists who are just thrilled to have audio recording available outside of an expensive commercial recording studio. Many users have no idea what MIDI is, how long it's been around, or how it differs from audio recording (get Scott Garrigus' book).

All I know is that my ears hear the difference when MIDI is recorded and played back accurately, versus sloppily, and I am very happy to see that at least one software manufacturer has acknowledged this issue, and has apparently done something about it.

I wish Cakewalk would put aside their rush to audio nirvana for a minute and do some serious MIDI recording and playback tests - i.e. have a seasoned professional keyboardist record MIDI data in real time into Sonar and at the same time record the audio into Pro Tools or something similar. Now record the output of Sonar's MIDI playback from the sound source again into Pro Tools. Compare the timing of the 2 tracks at a high level of magnification. I think a difference would be easily perceptible.

(Edited for grammar/spelling)
2007/10/06 08:58:00
Dave Modisette
I know for a fact that it is sometimes soft-synth dependent. If I get into latencies higher than 60 ms, TTS-1 will lose timing cohesion within itself. I can play back a standard SONAR MIDI demo file and hear the drums start to get loose in timing if I route all the tracks to TTS-1.

If I split the tracks out to different soft synths, I'm ok. There is something that affects TTS-1 as well as the Groove Synth and it is related to higher latency settings (WDM driver) in my DAW.
2007/10/06 09:40:24
Rajay1
Wow! I thought I was an old man! You have approximately two years more seniority than me with MIDI. I think I remember a conversation with Susan and John where we were discussing some of the possible reasons why MIDI has lagged behind in innovation. It's still my theory that as long as it was the dominant medium there was no pressure to change. Then with the advent of Pro Tools and similar audio-based mediums, the focus had to change just so platforms like Sonar could keep up. That, along with the fact that more and more platforms are geared toward the home studio. Now it seems they've noticed that there are in fact more people still using MIDI than was realized. The problem now is how to update MIDI without rendering all the MIDI products that came before obsolete. There are still the physical constraints to contend with.

By the way, you're so old I'm sure you remember a company called SoftPacific that used to make some of the first software sequencers, like Sonus for Apple and Commodore computers, on 5 1/4" diskettes, don't you? 2 MB of storage space! I thought that was awesome!
-Rajay
2007/10/06 10:17:12
mbncp
I don't know who started with that stupid way of dealing with live MIDI events, sending them ASAP to the VSTi (actually removing the timestamp relative to the audio buffer).
It's true that a MIDI event arriving at the host is already late relative to the audio buffer being played on your speakers. Now, compensating for a fixed latency is not a problem, but the way Sonar and Cubase handle live MIDI makes it really difficult to compensate because of the jitter, so the only choice is to use a really small buffer.

When we record live audio, we don't expect all the samples to be mixed down onto a single one, so why is MIDI treated that way?
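The two host policies being contrasted here can be sketched in a toy model. The function names below are illustrative only, not Sonar or Cubase internals: one policy snaps an incoming event to the next buffer boundary (discarding its offset), the other delays by exactly one buffer (fixed latency, no jitter):

```python
# Toy comparison of two ways a host can schedule a live MIDI event that
# arrives at some absolute sample position (names are illustrative).
BUFFER = 256  # audio buffer size in samples

def snap_to_buffer(arrival_sample: int) -> int:
    """ASAP policy: play the event at the start of the *next* buffer,
    discarding its position within the current one (jittery)."""
    return (arrival_sample // BUFFER + 1) * BUFFER

def preserve_delta(arrival_sample: int) -> int:
    """Delta-preserving policy: delay by exactly one buffer, keeping the
    event's offset, so latency is fixed and jitter-free."""
    return arrival_sample + BUFFER

# Two hits 100 samples apart collapse onto the same instant under the
# snap policy, but keep their spacing under the delta policy:
print(snap_to_buffer(10), snap_to_buffer(110))    # -> 256 256
print(preserve_delta(10), preserve_delta(110))    # -> 266 366
```

This is exactly the strumming problem mentioned earlier in the thread: under the snap policy, every event inside one buffer plays simultaneously.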

Btw, it's easy to test: create a bunch of 16th-note events, send them to a virtual MIDI driver, and record the return back into Sonar, routed to a VSTi with a sharp attack (drum stick, etc.).

The problem is that I'm not even sure the devs understand the problem. Of course there are other factors, the MIDI interface being one, and the timing of the MIDI driver another, but for the most part it can be handled by the software (keeping in sync with the different clocks), which is obviously not done in most apps.

MIDI was much better before the audio came in.
2007/10/06 10:43:30
Blades
For what it's worth, I was just doing some routing testing to check between the midi I'm recording, the audio that that midi is producing, and the sound of me actually whacking a drum pad. I set up a midi track and two audio tracks. I record the midi note being triggered, the audio from the drum brain, and the audio from a mic picking up my stick hitting the head of the drum.

I do these sorts of tests every so often to make sure that I understand what is being recorded and tell how close it is to what I'm playing.

I was impressed with how close they all were, actually, having experienced some midi to audio issues on my previous system.

I'm using a Layla 3G and a TD-20 drum brain. When zooming in as far as I can, and changing the ruler bar to display samples, I can measure the space between the three different recordings, and it is REALLY tight. It's a little hard to measure the exact differences, but it seems like it's somewhere around 23 samples total difference between the three hits - this is about 1/2 millisecond based on my calculations at 44.1 kHz.
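That 1/2 ms figure checks out; the samples-to-milliseconds conversion is just a division by the sample rate:

```python
# Converting a sample-count offset to milliseconds at 44.1 kHz, as in
# the measurement above of the gap between the three recorded hits.
def samples_to_ms(samples: int, sample_rate: int = 44100) -> float:
    return samples / sample_rate * 1000

print(round(samples_to_ms(23), 2))  # -> 0.52 ms
```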

I don't know how much it adds to this particular discussion, but I thought I'd throw it out here anyway.
2007/10/06 10:49:58
mbncp
The jitter introduced by the DAW only affects live events sent to a VSTi; recorded events should be OK, as they are recorded with their actual timestamps.
The problem is that what you hear when playing live is not the same as what you hear when playing back the recorded track.
2007/10/06 21:21:23
Blades
There is also a switch in the ttsseq.ini file that lets you adjust the "ignore MIDI in timestamps" behavior. In some cases, switching it ON (1) makes the MIDI tighter, because the timestamp from the MIDI interface is ignored and the event is instead stamped with the time Sonar received it.
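For anyone hunting for the tweak described above, a sketch of what the entry looks like. The key spelling and section name here are from memory of Cakewalk's TTSSEQ.INI documentation and may differ by SONAR version, so treat them as assumptions and verify against the official reference:

```ini
; TTSSEQ.INI lives in the SONAR program directory.
; Key name below is an assumption -- verify for your version.
[OPTIONS]
IgnoreMidiInTimeStamps=1   ; 1 = stamp events when SONAR receives them
```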