Sorry, I overstated my issue: of course most MIDI files are crap (meaning quantized, step-inputted, etc.).
What I was curious about is whether any advances had been made over the years in modeling human touch, beyond "HUMANIZE" et al., which are crude and pretty much useless.
"There are no humanize functions (IMO) that really understand the content of the music." If this is true, it answers my question in a general way.
But "randtime.cal", in theory at least, is an exception. A literally "solid" or "unbroken" chord at the piano is never heard in performance, and probably never intended. In fact, MIDI files of live performances show that the leading note, usually at the top of the chord, is usually sounded AFTER the other notes of the chord. In Chopin the phenomenon is almost a necessity. Ergo: how about a CAL script that takes RANDTIME and adds a provision letting the user specify which note will sound LAST or FIRST in the random sequence?
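To make the idea concrete, here is a rough sketch in Python (not CAL) of the logic such a script would need: randomize the onsets of a block chord, but force one chosen pitch to land after all the others. Every name here (`spread_chord`, `last_pitch`, the millisecond units) is hypothetical, invented for illustration; this is not the actual RANDTIME code.

```python
import random

def spread_chord(notes, max_offset_ms=30.0, last_pitch=None, seed=None):
    """Randomly offset the onsets of a block chord.

    notes: list of (pitch, start_ms) tuples, all nominally simultaneous.
    last_pitch: if given, that pitch is pushed later than every other
    randomized onset, so the melody note sounds LAST, as in the live
    performances described above.
    Returns a new list of (pitch, start_ms) tuples.
    """
    rng = random.Random(seed)
    # one random delay per chord tone
    offsets = {pitch: rng.uniform(0.0, max_offset_ms) for pitch, _ in notes}
    if last_pitch is not None and last_pitch in offsets:
        # place the chosen note just after the latest of the other onsets
        offsets[last_pitch] = max(offsets.values()) + rng.uniform(1.0, 5.0)
    return [(pitch, start + offsets[pitch]) for pitch, start in notes]

# Example: a C-major chord with the top note (MIDI 72) arriving last
chord = [(60, 0.0), (64, 0.0), (67, 0.0), (72, 0.0)]
spread = spread_chord(chord, last_pitch=72, seed=1)
```

Making the note sound FIRST would be the mirror image: subtract the offset (or zero the chosen note and delay the rest). A real CAL version would iterate over selected events in the track and adjust `Event.Time` in ticks rather than milliseconds.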