• SONAR
  • Remember that 96K TH2 thread? I Just had my mind blown, big-time (p.11)
2014/06/04 01:42:37
John
mudgel
I'm certainly no rocket scientist when it comes to this particular discussion, nor am I just a rock, so I understand a reasonable amount of it. BUT I'd like to congratulate all involved for the manner in which they've presented their points. Even when there have been points of contention that in other forums would start a war, all has remained civil.

It's another example of the quality of the people that are part of the Sonar forum family.

There was a time pre-X3 when there was tension evident in the forum, but generally speaking the forum is pretty much a pleasure to be part of. I think it was Craig who mentioned (a while back) a new gestalt on this forum, and this thread is a classic example, as is the one about your favourite underrated feature. Group hug. :-)

I have to agree the members on this thread have been great. 
2014/06/04 07:33:12
The Maillard Reaction
.
2014/06/04 08:55:38
BJN
All I want is a workable definition as applied to recording and mixing music for distribution via digital media.
 
Primarily due to the ear's ability and sensitivity in localization, perception and prediction, we know where the prey or the predator is even where our eyes cannot see.
 
How our hearing functions has meant our survival. It doesn't matter if it isn't perfect compared to some other non-record-buying species' hearing. LOL
 
The science tells us the reason 44.1 kHz was chosen as the sampling frequency is that it covers all the frequencies of human hearing.
 
Now we have powerful enough computers, and even today's cheap converters are way ahead of what we started with.
We have the storage space and the conversion quality. 
I say the AD/DA conversion or DAW conversion algorithms are not the weakness in the chain.
 
I say the weakness in the chain is the limits of the relatively unchanged technology of microphones and the fad of using old preamps with "color" but with specs only suitable for tape machines. (Okay some great pieces have very good specs.)
 
In other words we are still dependent upon electromagnetics not only to capture sound but to reproduce it as well.
To me it is not our ears that are imperfect.
 
Surely we have advanced far enough technologically that a better diaphragm or loudspeaker design can match the capabilities of where digital can take us?
I hope our ears are good enough to discern what the possibilities could be.
  
I recall a little while ago coming across a new diaphragm (cell) design for earphones, but for the life of me I haven't located the article.
 
2014/06/04 10:08:01
bitflipper
drewfx1
 
If the sound source is slightly to one side and some distance away it can have a very small ITD between your ears.
Or am I mis-Pythagorizing it? 



No, you're not mis-applying Pythagoras. That's why I said "at a given angle of incidence". Sound coming from in front of us will reach each ear almost simultaneously. That's why we have a much harder time pinpointing where sound is coming from when its origin is nearly straight ahead. It's why LRC panning has enjoyed a resurgence over the past decade, and why rhythm guitars are routinely double-tracked, panned wide and delayed for the Haas effect.
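For a rough sense of the numbers, here's a minimal Python sketch of that geometry. It assumes a simple far-field model with about 17 cm between the ears; the spacing and speed of sound are round illustrative figures, not measurements:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in room-temperature air (illustrative)
EAR_SPACING = 0.17       # m, rough distance between the ears (illustrative)

def itd_seconds(azimuth_deg):
    """Far-field interaural time difference for a source at the given
    azimuth (0 deg = straight ahead, 90 deg = directly to one side)."""
    theta = math.radians(azimuth_deg)
    # Simple path-length difference between the two ears: d * sin(theta) / c
    return EAR_SPACING * math.sin(theta) / SPEED_OF_SOUND

for az in (0, 5, 30, 90):
    print(f"{az:3d} deg -> ITD ~ {itd_seconds(az) * 1e6:6.1f} microseconds")
```

A source dead ahead gives essentially zero ITD, a few degrees off axis gives tens of microseconds, and even a source hard to one side only gets you to roughly half a millisecond, so the purely geometric differences live well below the Haas range discussed next.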
 
Practitioners of the Haas trick learn that the left-right delay has to fall within a certain range in order to be effective. Delays below 2ms don't work. Delays above 20ms are perceived as discrete echoes and also don't work.
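A minimal numpy sketch of the Haas trick, assuming a 12 ms delay and a slight level drop on the delayed side (both values are just illustrative picks inside the 2-20 ms window):

```python
import numpy as np

def haas_widen(mono, sample_rate, delay_ms=12.0, level_db=-1.5):
    """Return a stereo pair: dry signal on the left, a delayed and slightly
    attenuated copy on the right (classic Haas widening)."""
    delay_samples = int(round(sample_rate * delay_ms / 1000.0))
    gain = 10.0 ** (level_db / 20.0)
    left = np.concatenate([mono, np.zeros(delay_samples)])
    right = np.concatenate([np.zeros(delay_samples), mono]) * gain
    return np.stack([left, right], axis=1)

# Example: widen one second of noise standing in for a guitar track
sr = 44100
track = np.random.randn(sr).astype(np.float32)
stereo = haas_widen(track, sr)
print(stereo.shape)   # (44100 + 529, 2) at 12 ms / 44.1 kHz
```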
 
Similarly, sound-widening plugins often use delays of a few milliseconds to shift different frequency bands - the operative qualifier being "milli". If sub-millisecond delays really had a significant impact on perception of sound quality, they would probably have been adopted as a widely-used mix technique. They haven't. Sub-millisecond delays are perceptible, but only within the context of the undesirable effects of comb filtering.
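And the comb-filtering side of the same coin, a sketch of what happens once a sub-millisecond delayed copy is summed back with the dry signal at equal level (the 0.5 ms figure is an arbitrary example):

```python
import numpy as np

delay_s = 0.0005   # 0.5 ms delay mixed equally with the dry signal (illustrative)

# Summing x(t) + x(t - delay) gives |H(f)| = |1 + exp(-j*2*pi*f*delay)|,
# which cancels completely wherever the delayed copy is 180 degrees out of phase.
first_notch = 1.0 / (2.0 * delay_s)      # 1000 Hz
notch_spacing = 1.0 / delay_s            # every 2000 Hz after that
notches = np.arange(first_notch, 20000.0, notch_spacing)
print("Comb notches below 20 kHz:", notches.astype(int), "Hz")
# -> 1000, 3000, 5000, ... right across the midrange: heard as coloration, not width
```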
 
An aspect that hasn't come up yet in this conversation is temporal masking. This is what happens when two sounds occur very close together in time, too close for the cochlear cilia to reset in between. This period can be as long as 100 ms (!), depending on the frequency, amplitude and envelopes of the events. Within the microsecond timeframes we've been talking about, temporal masking is going to be the primary limiter of perception - regardless of the resolution of our recording and quality of our speakers.
 
I'm still not buying the idea that faster sample rates sound better because transients are better-separated.
 
2014/06/04 13:02:38
abb
Actually sound localization involves more than binaural cues like interaural time difference (ITD; the difference between the times it takes a sound to reach the two ears) and interaural level difference (ILD; the difference in sound pressure level reaching the two ears).  There are also monaural cues based on the head-related transfer function (HRTF; a spectral cue derived from the way the pinna and head differentially affect the intensities of frequencies arriving at the ear).
 
What's more, all these mechanisms vary as a function of frequency, bandwidth, and direction (azimuth and elevation) to the sound source.  For example, the sound 'shadows' created by the head that give rise to ILDs only become appreciable around 1,700 Hz, making this cue frequency dependent.  Another example is the bandwidth dependence of ITDs.  ITDs are not very effective for narrow-band signals like sine waves because they recruit only a few hair cells in the cochlea, resulting in a feeble signal for the brainstem nuclei to use when computing the cross-correlation between the left and right ears.  Yet another example is the variation of the HRTF as a function of the azimuth and elevation of the sound source, leading to different localization estimates.
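For anyone who wants to play with the cross-correlation idea, here's a toy numpy sketch; the 300-microsecond offset and the noise burst are made up for illustration, and the actual brainstem computation is of course far more elaborate:

```python
import numpy as np

sr = 96000
rng = np.random.default_rng(0)

# Fake a broadband burst that reaches the right ear ~300 microseconds before the left
burst = rng.standard_normal(2048)
true_itd_samples = int(round(300e-6 * sr))    # about 29 samples at 96 kHz
left = np.concatenate([np.zeros(true_itd_samples), burst])
right = np.concatenate([burst, np.zeros(true_itd_samples)])

# Cross-correlate the two "ear" signals and pick the best-matching lag
lags = np.arange(-(len(right) - 1), len(left))
xcorr = np.correlate(left, right, mode="full")
best_lag = lags[np.argmax(xcorr)]
print(f"Estimated ITD: {abs(best_lag) / sr * 1e6:.0f} microseconds")
```

A broadband burst gives a single sharp correlation peak; feed it a narrow-band sine instead and the peak becomes ambiguous, which is the bandwidth dependence described above.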
 
In the end we exploit the cues that are most salient and most reliable in a given situation.  And the same is true for the visual system -- we use several different cues to (visually) locate objects in space.  Some of these are binocular, some are monocular.  Natural selection is very opportunistic in that it endowed us with many different ways to achieve the same end result.   Cheers...
2014/06/04 19:20:03
Anderton
Sanderxpander
I would think at timescales like this simply moving your head into a different angle would have a more significant effect (and thus negate any difference between the output of the two speakers). I haven't read Moorer's article yet but generally speaking this seems a pretty wild conclusion to draw from a carefully done experiment in very controlled conditions. That's not really how science works (although the media would like it to).



It's not just Moorer; check out the article I linked to from JARO. It has references going back to the late '50s. It almost seems this is an "everyone knows that" kind of thing in the field.
 
I'm not saying it's right or wrong. I haven't done the experiments myself. But I'm not arrogant enough to say that I flat out don't accept it because it doesn't seem right, or obsequious enough to flat out accept it because a bunch of researchers with doctorate degrees tell me it's so. 
 
The one thing I DO flat out accept is that instruments without oversampling sound better when recorded at 96kHz, and I have files that demonstrate that to more than my satisfaction. And as a side note, I looked for instruments and processors that had switchable oversampling capabilities...there aren't that many. Either it's done internally and is transparent to the user (but then I don't understand why they sound better if run at a higher sample rate), or it's simply not built into the design.
2014/06/04 19:35:48
Anderton
bitflipper
Let's look at a practical example, an electric guitar played through a high-gain amp sim. You play a very high note on your guitar, say with a fundamental frequency of 1.3 kHz (two octaves above an open high-E string). The amp sim will generate harmonics at 3x, 5x, 7x, 9x, etc. The 15th harmonic is 19500 Hz, still legal at 44.1 kHz. You have to get up to the 17th harmonic before changing the sample rate would deliver any benefit. I didn't do the math, but the level of the 17th harmonic is going to be down more than 90 dB from the fundamental. IOW, inaudible.



According to IK Multimedia's chief engineer, physical guitar amps generate harmonics well above the audible range, and part of their emulation process is to reproduce those frequencies. He also said that high-gain amp sims often deliver 60dB of gain. So your assumptions of what is or is not audible have to take extreme amounts of gain into account. That "90dB down from the fundamental" could easily be 30dB down. Such distortion products would be audible when folded back into the audio range.
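To make the fold-back point concrete, here's a back-of-the-envelope Python sketch using the 1.3 kHz fundamental from the quote above; restricting it to odd harmonics is a simplification of what a real amp sim produces:

```python
FUND = 1300.0   # Hz, the fundamental from the example above

def alias(freq, sample_rate):
    """Fold a frequency back into the 0..Nyquist band (ideal aliasing)."""
    nyq = sample_rate / 2.0
    f = freq % sample_rate
    return sample_rate - f if f > nyq else f

for sr in (44100.0, 96000.0):
    print(f"\nSample rate {sr / 1000:g} kHz (Nyquist {sr / 2000:g} kHz)")
    for n in range(15, 26, 2):   # 15th through 25th odd harmonics
        h = n * FUND
        note = "" if h <= sr / 2 else f" -> folds back to {alias(h, sr):.0f} Hz"
        print(f"  harmonic {n}: {h:.0f} Hz{note}")
```

At 44.1 kHz the 17th harmonic and up fold back to 22.0, 19.4, 16.8 kHz and so on; at 96 kHz those same harmonics still sit below Nyquist, so whether any of this matters comes down to how far down in level they actually are.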
 
A fundamental problem I'm seeing here is that this is not a yes/no situation; there are shades of gray. Some processors will derive zero benefit from being run at higher sample rates. Some obviously derive benefits from running at higher sample rates. One-size-fits-all, pro or con, is not realistic or possible.
 
FYI, IK does extremely sophisticated oversampling and filtering not for the plug-in as a whole, but for individual elements. They choose the amount of oversampling, and what to apply it to, on a processor-by-processor basis. As a result, he noted that AmpliTube running at 44.1kHz will perform better, and draw less CPU, with all the oversampling options enabled compared to running it at 96kHz. However, this degree of attention to detail seems to be the exception rather than the norm in the industry. Cakewalk's instruments provide oversampling (which is why I had to choose the non-oversampled version of Z3TA+ 2 to emulate the results of instruments without oversampling), and so does Native Instruments for selected processes (e.g., saturation). iZotope's Ozone also provides oversampling for selected processes. 
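For anyone curious what "oversample the nonlinear bit, then come back down" looks like in code, here's a bare-bones numpy/scipy sketch; the tanh saturator and the 4x factor are generic stand-ins, not IK's or anyone else's actual algorithm:

```python
import numpy as np
from scipy.signal import resample_poly

def saturate(x):
    """Stand-in for any nonlinear stage (amp sim, tape, clipper...)."""
    return np.tanh(4.0 * x)

def saturate_oversampled(x, factor=4):
    """Run the nonlinearity at a higher internal rate, then decimate.
    The resampler's anti-alias filtering removes harmonics above the
    original Nyquist before they can fold back into the audible band."""
    up = resample_poly(x, factor, 1)
    return resample_poly(saturate(up), 1, factor)

# A high tone at 44.1 kHz: the distortion harmonics land above Nyquist
sr = 44100
t = np.arange(sr) / sr
tone = 0.8 * np.sin(2 * np.pi * 5000.0 * t)

aliased = saturate(tone)              # harmonics above 22.05 kHz fold back down
cleaner = saturate_oversampled(tone)  # same nonlinearity, far less fold-back
print(aliased.shape, cleaner.shape)
```

Whether a given plug-in does this internally, and for which stages, is exactly the processor-by-processor question described above.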
2014/06/04 19:42:24
Anderton
I'm leaving for GearFest and then New Music Seminar, so I won't be participating much on the forums for the next several days. But really, I've made the only point I cared to make: Recording at 96kHz can improve the sonic accuracy of some soft synths and processors, even when sample rate-converted down to 44.1kHz. I don't think anyone can disagree that's a true statement.
 
I'll leave it up to the rest of you to run your own ferret experiments and knock 50 years of research on its butt. Wouldn't be the first time the conventional wisdom had to take a hit in the light of new knowledge. Remember when everyone was just so 100% sure that Venus would be a cold, dead planet like the moon? Oooops.
2014/06/04 22:40:20
The Maillard Reaction
.
2014/06/04 23:08:57
Razorwit
Mike, I know this is OT, but ST3 was in the Sweetwater catalog that came today. I gotta think it's gonna be soon.
 
Dean