• SONAR
  • Remember that 96K TH2 thread? I just had my mind blown, big-time (p.7)
2014/06/02 22:26:37
Splat
I know it's already been said, but was Z3TA definitely set to high resolution for both the 96 and the 48 (or 44.1) projects?
 
Cheers...
2014/06/02 22:59:33
Anderton
sharke
I was quite surprised at the difference between the two files. The 96kHz clip definitely has way more high frequency content, in fact the 44.1kHz sounded to me like it has some gentle low pass filtering in comparison. 
 
As for Craig's two clips, I definitely prefer the sound of the 44.1kHz clip, it sure sounds a lot fatter and warmer, and the high frequencies in the 96kHz clip are way too harsh sounding. 



But again, let me emphasize that the comparison is NOT about what "sounds" best. It's about which file was able to reproduce higher frequencies with less distortion. I deliberately doubled what was already a high keyboard part and transposed it up even higher to make sure there would be plenty of highs. As I said earlier, if I were using that sound in a piece of music, I would have reached for the LPF immediately. The fact that you heard a major difference is the point. You can always trim highs that are there, but you can't create highs that weren't there in the first place.
 
The other differences I heard with the amp sim and virtual drums were far more subtle, but there was a definite difference in the highs. So I did a caricature of the highs to hear what they would sound like.
 
If you listen to them on quality headphones you'll hear the "gauze" in the background of the 44 file from the aliasing, which is not present in the 96 file.
 
The question about what's "accurate" for something generated in the box is valid. However, I spent many years designing devices with top octave divider organ chips. While they used digital technology to divide down a high frequency clock, they did not use digital audio technology in the sense of sampling, conversion, etc. and generated analog outputs, albeit via digital means. Because they generated square or pulse waves, I know what it sounds like to have lots of audible high-frequency content - they basically generated harmonics that just kept going and going. That sound is drilled into my brain, and that's the sound I heard from the 96kHz file.
 
I probably should have mentioned I have a genetic predisposition toward good high frequency response (thanks, dad!) - I couldn't go into some stores as a kid because the "ultrasonic" burglar alarms were audible and very painful. Even though I'm 65, I had my hearing tested not too long ago and could still hear 13kHz well, which is unusual. I also didn't watch a lot of TV as a kid because the 15kHz oscillator from cathode ray tubes would drive me nuts. (An interesting aside of the top octave thing was when I was commissioned to create an instrument that could play music around 80-90kHz for dolphin research, involving transducers borrowed from the navy...but that's a whole other subject.)
2014/06/02 23:29:34
Anderton
CakeAlexS
I know it's already been said, but the Z3TA is definitely set at high resolution for the 96 and 48 (or 44.1) projects?



No, it was set to 1.0 with high resolution, not 2.0 oversampling. I had already determined that I couldn't hear a difference between running instruments/processors oversampled at 44 and running them without oversampling at 96. Native Instruments confirmed that running GR at, say, 88.2 yields the same effective result as doing 2X oversampling.
 
What I'm getting from those who don't like the idea of my running a project at 96kHz is that I wouldn't have to do it if all the elements involved in a 44.1 project included properly designed oversampling. You may find the graphs in this thread revealing. Apparently even many high-end plug-ins have significant, audible aliasing at 44.1/48, and one of the commonly suggested remedies there is running at a higher sample rate. They don't talk about amp sims or virtual instruments, just compressors, EQs, etc., but I would think the principles are the same.
 
So if those graphs are to be believed, it's no wonder I think projects sound better run at a higher sample rate. Until everything is designed to be totally wonderful, I'm not going to avoid doing something that gives better sound quality.
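To make the folding concrete, here's a minimal sketch (hypothetical, not taken from the linked thread's test method) that distorts a 15 kHz tone at two sample rates and measures the worst alias landing below 14 kHz:

```python
import numpy as np

def alias_level_db(sr, f0=15000.0, n=1 << 15):
    """Worst alias below 14 kHz, in dB relative to the spectral peak."""
    t = np.arange(n) / sr
    y = np.tanh(3.0 * np.sin(2 * np.pi * f0 * t))  # soft clip -> odd harmonics
    spec = np.abs(np.fft.rfft(y * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / sr)
    # Mark bins near genuine odd harmonics that fit below Nyquist; any other
    # energy below 14 kHz had to fold down (alias) from above Nyquist.
    harmonic = np.zeros(freqs.shape, dtype=bool)
    k = 1
    while k * f0 < sr / 2:
        harmonic |= np.abs(freqs - k * f0) < 100
        k += 2
    alias = spec[(freqs < 14000) & ~harmonic].max()
    return 20 * np.log10(alias / spec.max())

print(alias_level_db(44100))  # 3rd harmonic (45 kHz) folds down to 900 Hz
print(alias_level_db(96000))  # only much weaker harmonics fold into the band
```

At 44.1 kHz the 3rd harmonic at 45 kHz has nowhere to go and folds down to 900 Hz; at 96 kHz only far weaker high-order harmonics fold back into the audible band.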
 
But frankly even if the sound thing wasn't an issue, I enjoy the lower latency. And with Apple insisting on getting 24/96 masters going forward, it looks like we don't have much choice anyway.
 
2014/06/03 00:40:07
drewfx1
Anderton
What I'm getting from those who don't like the idea of my running a project at 96kHz is that I wouldn't have to do it if all the elements involved in a 44.1 project included properly designed oversampling.

 
Correct. If one designs a plug-in that creates distortion, then one should account for the aliasing that results.
 
But that doesn't mean that everything one might wish to use is properly designed, and if running at 96kHz makes them better then it is what it is.
 

You may find the graphs in this thread revealing. Apparently even many high-end plug-ins have significant, audible aliasing at 44.1/48 and in this thread, one of the commonly suggested remedies is running at a higher sample rate. They don't talk about amp sims or virtual instruments, just compressors, EQs, etc., but I would think the principles are the same.

 
What do you consider "significant, audible aliasing" regarding those charts? Any specific examples there? In many of the ones I looked at, the aliasing was often close to 100dB (or more) below the signal.
 
 
Compressors, technically speaking, do distort the signal, and one needs to be especially careful when aggressive settings with very fast attack/release times are used. Less aggressive settings should create no problems.
 
EQs should add no audible distortion unless it's put there by design in addition to the EQing itself, as with the saturation in an analog emulation. You can get somewhat different EQ curves at high frequencies at different sampling rates (if not compensated for), but in that case which curve is "better" is generally a subjective assessment (as with any other EQ decision).
2014/06/03 02:13:30
mudgel
I'm certainly no rocket scientist when it comes to this particular discussion, but nor am I just a rock, so I understand a reasonable amount of it. BUT I'd like to congratulate all involved for the manner in which they've presented their points. Even where there have been points of contention that on other forums would start a war, all has remained civil.

It's another example of the quality of the people who are part of the Sonar forum family.

There was a time pre-X3 when tension was evident in the forum, but generally speaking the forum is pretty much a pleasure to be part of. I think it was Craig who mentioned (a while back) a new gestalt on this forum, and this thread is a classic example, as is the one about your favourite underrated feature. Group hug. :-)
2014/06/03 02:43:12
drewfx1
Some might find this useful, from Blue Cat's Dynamics user manual (regarding their built-in oversampling): 
http://www.bluecataudio.com/Doc/Product_Dynamics/
 

Oversampling

You can use Oversampling to reduce the aliasing artifacts that can be produced by the non-linearities of the dynamics processor. It can be particularly useful for audio content with higher frequencies, or if you use large compression ratios with short attack and release times. It is applicable if you work with lower sample rates (such as 44.1 or 48 kHz). With higher sampling rates you usually do not need to work with an oversampled signal.

Beware that each oversampling stage consumes a lot of CPU (more than double CPU usage). You should use 4x oversampling with the stereo version for mastering purposes only, or with very low attack/release times in Peak mode. This is typically a feature you don't want to activate on every single track of a project.
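As a rough illustration of what such an oversampling stage does (a sketch under simple assumptions, not Blue Cat's actual implementation), you can upsample, apply the nonlinearity at the higher rate, and let the decimation filter remove the harmonics that would otherwise fold down:

```python
import numpy as np
from scipy.signal import resample_poly

sr = 44100
t = np.arange(1 << 14) / sr
x = 0.9 * np.sin(2 * np.pi * 10000 * t)        # 10 kHz tone

direct = np.clip(x, -0.5, 0.5)                 # clip at 44.1 kHz: harmonics fold

up = resample_poly(x, 4, 1)                    # 4x oversample to 176.4 kHz
oversampled = resample_poly(np.clip(up, -0.5, 0.5), 1, 4)  # clip, filter, decimate

def level_db(y, f):
    """Level near frequency f, in dB relative to the spectral peak."""
    n = len(y)
    spec = np.abs(np.fft.rfft(y * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / sr)
    return 20 * np.log10(spec[np.abs(freqs - f) < 100].max() / spec.max())

# The 3rd harmonic (30 kHz) folds to 44.1 - 30 = 14.1 kHz in the direct version
print(level_db(direct, 14100))
print(level_db(oversampled, 14100))  # far lower: the harmonic was filtered out
```

In the oversampled path the 30 kHz harmonic exists legitimately at 176.4 kHz and gets removed by the anti-alias filter before decimation, instead of folding into the audible band.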

2014/06/03 05:41:09
BJN
Wow, what a discussion. To-ing and fro-ing. 
I have seen discussions and even arguments on this subject before.
I too am glad to see civil discourse about it.
 
I have come up with three possible explanations for why higher sample rates sound better.
The science says we should not hear an improvement at higher sampling rates.
Yet we do.
 
One theory is that the broader harmonic content at higher rates enhances the fundamental information and is thus discernible to the ear.
 
Another is that it proves the existence of the personality as the soul or spirit inhabiting the body, and it is the spirit perceiving the higher frequency content. In other words, there is more to the perception of sound than just the mechanical ear-and-nerve processes of the body.
 
Lastly, it is a well-kept secret by the high priests of the zen of audio engineering, where science is used to disabuse anyone from even trying the idea for themselves. Yet even mastering engineers upsample and process from there.
 
There are three persuasions for you; 
the first might be the acceptable one. LOL
 
 
 
 
2014/06/03 10:10:07
Anderton
drewfx1
The sample time does not remotely equal the time resolution. This is a common misunderstanding of how sampling works. 
 
It depends on bit depth as well as sample rate, but the short answer is 48kHz has a timing resolution of FAR greater than 1/48,000.



Can you explain this? When I zoom in to the sample level, I see a straight line that lasts x microseconds. I understand that gets smoothed when it's reconstructed, but how can data shorter than one sample be encoded into a straight line? What Moorer is saying is that if you have two events 10 microseconds apart, those events cannot be encoded in something that cannot resolve fewer than 20 microseconds. 
 
For a film analogy, if the frame rate is 30 frames per second and you have two different, sequential visual events occurring during the time that one frame occurs, how can those two events play back? I don't see how they could be encoded in a single frame as two separate events.
2014/06/03 12:46:11
drewfx1
Anderton
drewfx1
The sample time does not remotely equal the time resolution. This is a common misunderstanding of how sampling works. 
 
It depends on bit depth as well as sample rate, but the short answer is 48kHz has a timing resolution of FAR greater than 1/48,000.



Can you explain this? When I zoom in to the sample level, I see a straight line that lasts x microseconds. I understand that gets smoothed when it's reconstructed, but how can data shorter than one sample be encoded into a straight line? What Moorer is saying is that if you have two events 10 microseconds apart, those events cannot be encoded in something that cannot resolve fewer than 20 microseconds. 
 
For a film analogy, if the frame rate is 30 frames per second and you have two different, sequential visual events occurring during the time that one frame occurs, how can those two events play back? I don't see how they could be encoded in a single frame as two separate events.




Rule #1: Never use analogies when trying to understand sampling -  they're almost always wrong (in whole or in part) because sampling just isn't intuitive and it doesn't really work the same way other stuff does.
 
Rule #2: Never argue the analogy, as it inevitably just takes things OT. 
 
Be careful when zooming in - many (most?) DAWs just show a picture that "connects the dots", which is extremely misleading as this has little to do with what a reconstructed signal looks like (or how a sampled signal is reconstructed). It gives you a reasonable picture at low frequencies, but a high frequency sine wave looks nothing like a sine wave - even though your DAC outputs a nice looking sine wave.
 
 
 
The short answer is that if you move your signal a fraction of a sample in time (at a reasonable bit depth), the sample values will change. 
 
Consider a 12kHz sine wave sampled at 48kHz:
 
1. Since 12 kHz is exactly 1/4th the sample rate, you get exactly 4 samples per cycle.
2. This means that successive samples are exactly 90° apart.
3. Let's say we take our first sample s1 at 0°. That means s2 is at 90°, s3 is at 180°, s4 is at 270° and so on.
4. Now let's move our signal back .5 samples in time.
5. Now s1 is at 45°, s2 at 135°, s3 at 225°, s4 at 315°, and so on.
 
Hopefully it's obvious that the sample values are not going to be the same when we are sampling the sine wave at different phases in its cycle.
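You can verify those numbers in a few lines (a NumPy sketch of the 12 kHz / 48 kHz case above):

```python
import numpy as np

sr, f = 48000, 12000
n = np.arange(8)
s = np.sin(2 * np.pi * f * n / sr)                # samples at 0°, 90°, 180°, 270°...
s_shift = np.sin(2 * np.pi * f * (n + 0.5) / sr)  # same wave, moved back half a sample

print(np.round(s, 3))        # ~ [0, 1, 0, -1, 0, 1, 0, -1]
print(np.round(s_shift, 3))  # ~ [0.707, 0.707, -0.707, -0.707, ...]
```

A half-sample shift, well below the "one sample" granularity, produces completely different sample values, so the timing information is clearly encoded.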
 
So the question becomes, how far can you move the waveform on the x-axis (time) without having (almost) any of the y-axis (sample amplitude) values change?
 
 
Again, hopefully it's obvious that with higher resolution on the y-axis (increased bit depth), the amount you can move your signal in time without any sample values changing gets smaller.
 
 
An experiment to try:
1. Make a stereo waveform at 44.1/48kHz in which every sample in L and R is absolutely identical (i.e. L=R).
2. Upsample by 2x (or higher).
3. Shift L by 1 sample in time at the higher rate. 
4. Downsample back to the original SR.
5. Zoom all the way in and compare L and R. 
 
You will find that not every sample in L and R is the same anymore, because you time-shifted L by a fraction of a sample.
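Here's a sketch of that experiment (using SciPy's resample_poly for steps 2 and 4; np.roll wraps one sample around the ends, which is close enough for a demo):

```python
import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(42)
L = rng.standard_normal(48000)        # step 1: make L, with R an exact copy
R = L.copy()

up = resample_poly(L, 2, 1)           # step 2: upsample L by 2x
up = np.roll(up, 1)                   # step 3: shift by 1 sample at the high rate
L_shifted = resample_poly(up, 1, 2)   # step 4: downsample back to the original SR

# step 5: the channels no longer match sample-for-sample,
# because L was moved by half a sample at the original rate
print(np.max(np.abs(L_shifted - R)))
```

The difference is nonzero even though the net shift is only half a sample at the original rate, which is the point: sub-sample timing is representable in the sample values.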
2014/06/03 13:10:44
bitflipper
...if you have two events 10 microseconds apart, those events cannot be encoded in something that cannot resolve fewer than 20 microseconds.

Keep in mind that anything that happens entirely inside a 10-microsecond timeframe is much too fast to worry about. You only worry about those frequencies after you've bought your $2,000 oxygen-free polarized cables.