• SONAR
  • Noel's win 8 article slam on gearslutz (p.4)
2012/11/18 12:55:18
FastBikerBoy
Razorwit


Only if you're being judged by The Count from Sesame Street Karl.

"THREE millisecod of round trip latency! FOUR milliseconds of round trip latency! AH HA HA HA HA!!!"

Dean


Now, one millisecond off a lap time really is important. Not life-and-death important, far more important than that... races have been won and lost by those differences.
2012/11/18 14:33:16
Anderton
I think the kind of thing TAFKAT does is useful, but you also have to know enough to interpret the results. For example, if a PCIe card doesn't outperform a USB 1.1 interface, that's a problem :)

For many desktop applications a lot of this is splitting hairs, but if you're using amp sims live with a laptop, latency becomes a very serious issue. You want a safety margin on latency, so with a high-performance interface you can run at lower latency and still have a comfortable margin.

It's also worth noting that not all interfaces report latency accurately, and his interface tests show this. For some reason Line 6 was always very precise; other interfaces, less so.
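To make the "lower buffer size means lower latency" trade-off concrete, here is a minimal sketch of the arithmetic, assuming a simple model: one input buffer plus one output buffer of delay, plus a fixed converter overhead. The 2.0 ms converter figure is an illustrative assumption; real interfaces vary, and as noted above, some under-report this portion entirely.

```python
# Rough round-trip latency model: one buffer in, one buffer out,
# plus fixed A/D-D/A converter time (the 2.0 ms default is invented
# for illustration, not taken from any real interface's spec sheet).

def round_trip_ms(buffer_samples, sample_rate_hz, converter_ms=2.0):
    """Approximate round-trip latency in milliseconds."""
    buffer_ms = buffer_samples / sample_rate_hz * 1000.0
    return 2 * buffer_ms + converter_ms

# At 44.1 kHz, a 64-sample buffer vs a 256-sample buffer:
low = round_trip_ms(64, 44100)    # ~4.9 ms
high = round_trip_ms(256, 44100)  # ~13.6 ms
```

This is why an interface that stays stable at a 64-sample buffer matters for live amp sims: the buffer term dominates the total.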
2012/11/18 14:35:21
Anderton
[double post, sorry]
2012/11/18 15:51:39
Funkybot
Tests like that are helpful if you're interested in 1) seeing how efficient Sonar is, 2) seeing how your DAW's load balancing compares to other DAWs (the dreaded "core 1 is spiking while the others are nowhere near 100%"), or 3) determining whether another DAW would let you squeeze a few more plugins/tracks into your mixes (helpful if you had a fast PC but were still maxing out your CPU during large mixes). It could also help companies like Cakewalk identify where there's room for improvement, since someone else is doing free performance benchmarking for them.


It's completely legit, and if you read about their testing methods and how they distribute the plugins...I think it's a fairly unimpeachable method of comparison. If you also read his criticism of Noel's testing method, his points seem entirely valid. I'd love to see Noel's response to that.

What a test like that doesn't show is how well you interact with your DAW, its workflow, and its features. These are clearly the most important aspects in picking a DAW. Sonar still has some things that drive me nuts (limited routing options; the terribly huge, non-resizable mixer; no varispeed; horrible notation), but overall it's still the best DAW choice for me. I've yet to find the perfect DAW; every one out there seems good at some things and not so good at others, and overall Sonar has become my main choice. So to those who point out that the benchmarks don't address this aspect: I agree with you 100%. 

But that doesn't mean anyone should blindly ignore the test results because Sonar didn't come out on top. At that point, people are turning DAW preference into a pseudo-religion by ignoring the science. I think we can all be a little more rational than that. 


I honestly don't think the people saying the test is completely irrelevant would be too upset if Cakewalk came out and said, "hey, we improved Sonar's efficiency by up to 25% in large projects by making adjustments to how we handle load balancing" or something like that as a result of benchmarks like these.
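The "core 1 is spiking while the others are nowhere near 100%" problem mentioned above comes down to how work is assigned to cores. A hypothetical toy model, not any DAW's actual scheduler: if all the plugin load lands on one core, that core saturates while the rest idle, whereas distributing independent tracks greedily onto the least-loaded core evens things out.

```python
# Toy model of DAW load balancing (illustrative only; real DAW
# schedulers are constrained by serial plugin chains and buffers).

def naive_loads(track_costs, n_cores):
    """Worst case: every track's plugin load lands on core 0."""
    loads = [0.0] * n_cores
    for cost in track_costs:
        loads[0] += cost
    return loads

def balanced_loads(track_costs, n_cores):
    """Greedy: assign each track to the currently least-loaded core."""
    loads = [0.0] * n_cores
    for cost in sorted(track_costs, reverse=True):
        loads[loads.index(min(loads))] += cost
    return loads

tracks = [30, 25, 20, 15, 10, 5]   # per-track plugin cost, arbitrary units
print(naive_loads(tracks, 4))      # [105.0, 0.0, 0.0, 0.0]
print(balanced_loads(tracks, 4))   # [30.0, 25.0, 25.0, 25.0]
```

In the naive case the mix drops out when core 0 hits its ceiling at a total load of 105, even though the machine as a whole is mostly idle; balanced, the same project peaks any single core at 30.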
2012/11/18 16:00:51
Silicon Audio
I think it's actually very important to know if Windows 8 has any real-world benefits for us. A number of people are going to great trouble and (in some cases) expense to upgrade for a perceived performance gain. If that gain simply isn't there, we should know.
2012/11/18 16:01:46
The Maillard Reaction
Hi Craig,
  It occurs to me that one may not feel a need for 105 instances of convolution reverb, yet the same person may have a keen interest in placing a convolution-based console emulation process on each and every track and bus in their projects. I imagine stuff like that will be added to the DAWbench tests someday.

 Hi Dean, 
  What I have found on my system is that the portion of the actual round-trip latency that is not reported varies with the sample buffer size I am using. So I keep a selection of manual correction amounts and use whichever one applies, to arrive at actual sync under any given circumstance. 

 best regards, 

mike

written in firefox, formatted in chrome
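Mike's per-buffer-size correction workflow can be sketched as a lookup table. The correction values below are hypothetical; in practice each would be measured once per buffer setting (e.g. with a loopback test) and saved, since the driver's reported latency misses a buffer-dependent portion.

```python
# Sketch of a per-buffer-size manual latency correction table.
# All correction figures are invented for illustration; the real
# values would come from loopback measurements on one's own system.

SAMPLE_RATE = 44100

# buffer size (samples) -> measured-but-unreported latency (samples)
MANUAL_CORRECTION = {
    64: 32,
    128: 41,
    256: 60,
    512: 97,
}

def actual_round_trip(reported_samples, buffer_size):
    """Driver-reported latency plus the correction for this buffer size."""
    return reported_samples + MANUAL_CORRECTION[buffer_size]

def samples_to_ms(samples):
    return samples / SAMPLE_RATE * 1000.0
```

The point of keying the table on buffer size, per the post above, is that a single fixed offset would only be right at one buffer setting.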
2012/11/18 16:18:34
John
Silicon Audio


I think it's actually very important to know if Windows 8 has any real-world benefits for us. A number of people are going to great trouble and (in some cases) expense to upgrade for a perceived performance gain. If that gain simply isn't there, we should know.


For me it's been a nice experience going from Vista 64-bit to Windows 8 64-bit. X2 was a little crash-prone on Vista; it's very solid on Win 8. Plus, core loading is much improved. For Windows 7 users, I can't say whether it will be a significant improvement or not.
2012/11/18 16:21:41
jb101
fbb - "I turn it on, I can tell which input is which, and it works. That's plenty enough for me thanks. "
 
God, that made me laugh..   
 
Thank you, fbb. 
2012/11/18 16:24:29
slartabartfast
Look, everyone has the right to an opinion, but not every opinion is right. It probably does not matter to most of us, using our software for our usual tasks, if there is a small difference in audio efficiency between Windows 7 and Windows 8. But there needs to be an objective way of testing that difference if MICROSOFT is going to be able to evaluate their software. If we just use the opinions of Windows customers, and if the acid test is "Does it do what I usually do?" for most users, then MS can turn out a Windows 9 that just manages to play MP3s without skipping and still satisfy 99% of their user base. Serious audio is being largely ignored already, and there is very little public information about how seriously MS is taking it. 

If somebody is not doing serious, reproducible testing of high-end applications and making the results public, there is not much hope that newer OS versions will improve all that much. The testing reported on Brandon's blog is very superficial. Not to say it is inaccurate or done without honest intent, but it is neither exhaustive nor comprehensive, and it has not been reproduced on multiple platforms. It is an N=1 study, of the same significance as a single case report. You can do a more useful test by installing both Windows versions on your own system and doing a head-to-head comparison. 

2012/11/18 16:58:46
John
slartabartfast


Look, everyone has the right to an opinion, but not every opinion is right. It probably does not matter to most of us, using our software for our usual tasks, if there is a small difference in audio efficiency between Windows 7 and Windows 8. But there needs to be an objective way of testing that difference if MICROSOFT is going to be able to evaluate their software. If we just use the opinions of Windows customers, and if the acid test is "Does it do what I usually do?" for most users, then MS can turn out a Windows 9 that just manages to play MP3s without skipping and still satisfy 99% of their user base. Serious audio is being largely ignored already, and there is very little public information about how seriously MS is taking it. 

If somebody is not doing serious, reproducible testing of high-end applications and making the results public, there is not much hope that newer OS versions will improve all that much. The testing reported on Brandon's blog is very superficial. Not to say it is inaccurate or done without honest intent, but it is neither exhaustive nor comprehensive, and it has not been reproduced on multiple platforms. It is an N=1 study, of the same significance as a single case report. You can do a more useful test by installing both Windows versions on your own system and doing a head-to-head comparison. 


Personally, I don't know how valid the results are. Perhaps one can safely say they are valid for the system used to obtain them. I have no idea whether the results can be interpreted to apply to the wider world of computers. To even try, the tester would need a large enough sample of various machines, with a good sampling of audio and MIDI interfaces as well as basic configurations for displays and so on. Only then can a statistic have any meaning. 

What is being offered as a statistic is based on a population of only a very few instances. That is not valid for statistical analysis. 

But then, it is interesting, if only in that context. 
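John's sample-size objection can be seen directly in the arithmetic: with one machine you cannot even estimate the spread between machines, and the uncertainty of the mean only shrinks as more machines are tested. A sketch with invented figures:

```python
# Why an N=1 benchmark supports no general claim: the sample standard
# deviation needs at least two observations. All figures below are
# invented for illustration, not real Win7-vs-Win8 measurements.
import math
import statistics

def mean_std_error(samples):
    """Mean and standard error of the mean; needs n >= 2."""
    n = len(samples)
    return statistics.mean(samples), statistics.stdev(samples) / math.sqrt(n)

# Hypothetical "Win8 vs Win7 efficiency gain, %" on five machines:
gains = [4.1, -0.5, 2.3, 1.0, 3.1]
m, se = mean_std_error(gains)   # mean 2.0, standard error ~0.8

try:
    mean_std_error([4.1])       # a single machine...
except statistics.StatisticsError:
    pass                        # ...gives no estimate of spread at all
```

With one data point the result may well be accurate for that machine, as the post says, but it carries no information about how far other systems might deviate from it.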