BobF
I am disappointed that effort is being expended to integrate and deploy a tool that is supposed to help Cakewalk figure out what needs to be done - where to focus work.
I'd argue you're misinterpreting the effort here, or at least missing the mark on how this could be helpful (whereas other current avenues of communication are not). As an example:
Noel mentioned "dropouts", so I'm sort of just reacting to that considering conversations I've been in repeatedly behind closed doors here. For years our developers have asked questions to those that have close interactions with end-users things like:
"What are the most common causes of crashes that users experience?" Previously we only knew if people reported them directly. We worked towards resolution to this by creating the fault reporter when SONAR crashes. Collecting, parsing, analyzing, and reporting crashes via this system is all automated. This has resulted in countless stability fixes. Of course crash reporting can all be done manually as well, but the amount of fixes that required zero human action (other then a developer fixing the crash) has proven to have been well worth the effort.
"What would you say are the most common questions customers call about for assistance?" We have a giant database of phone call & email history in our internal ticketing system that allow our support team to easily reference what the big ticket items are. The forum itself is also offers a ton of insight into user hangups. It's pretty easy to come to a conclusion on what users often need help with.
"What are the most common bugs users run into?" We created a Problem Reporter so that users and Cakewalk staff alike could log, report internally, and provide notification of bug resolutions directly to end users. This has resulted in countless fixes over the years. We're always working towards improvements to this, but even today it directly integrates with our internal bug tracking software and fault reporting system.
"What are the most requested features made by users" We currently have this Feature Request forum and are building a new Feedback Portal to improve upon this experience overall. In the past it was just a suggestion inbox. We're working on making this much better.
etc. etc. My point in mentioning this is that we're always trying to deliver information from end users to development in a more efficient manner. Those are only a few brief examples. But here's the thing: Cakewalk developers often also ask things like:
"How often do customers experience dropouts?" Honestly, I'd love to know.
The answer to this question is always extremely subjective. Support representatives may say "quite often" because they're often on phone calls with customers using integrated sound cards with poor-performing drivers, before those customers have learned how to configure SONAR for use with their new audio hardware. QA might say "occasionally" because they're used to testing and working with beta testers who own superior hardware but also know beta builds can be unpredictable from time to time. Developers themselves might say "never" because optimizing their systems for audio performance is completely second nature. End users will give a different answer every time based on their own subjective experience.
The term "dropout" itself is also interpreted a few different ways. Often times customers report simply, "I get tons of dropouts", but what exactly are they referring to? Are they referring to clicking and popping during playback, or are they referring to the audio engine stopping? We've even had users refer to timeline intentionally stopping at the project end as a "dropout" (yes, this is true). So if a email/call/bug report comes through like that, what do we do with that data? Is it factual to refer to that as a "dropout", or do we make a clear distinction? How do we build an
accurate report of whether or not dropouts are a plaguing customers? How do we build an
accurate report of whether or not a particular build of SONAR we just released has increased or decreased the number of dropouts end-users are experiencing?
The problem is that nobody can really give the developers a helpful answer here. It's usually vague, very subjective, and lacking any specifics helpful for troubleshooting and making improvements.
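To illustrate the distinction, here's a purely hypothetical sketch (not SONAR's actual telemetry schema, and the names are made up for illustration) of how an analytics event with an explicit reason code removes the guesswork about what a "dropout" even means:

```python
# Purely hypothetical sketch -- not Cakewalk's actual telemetry schema.
from dataclasses import dataclass
from enum import Enum

class DropoutReason(Enum):
    BUFFER_UNDERRUN = "buffer_underrun"  # audible click/pop; playback continues
    ENGINE_STOP = "engine_stop"          # audio engine halted mid-playback
    # Playback reaching the project end would not be logged as a dropout at all.

@dataclass
class DropoutEvent:
    build: str                # e.g. "Braintree" or "Newburyport"
    reason: DropoutReason
    buffer_size_samples: int  # driver buffer size in effect at the time
    driver_mode: str          # e.g. "ASIO" or "WASAPI"
```

With an explicit reason code like that, the "project stopped at the end" case never pollutes the numbers, and clicks/pops can be counted separately from full engine stops.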
Noel mentions an example of doing cool & helpful things to assist someone who may be experiencing recurring dropouts. This is part of the spirit of analytics. It seems like a lot of people are fearful that this is being implemented while feature requests right here on this forum sit out in plain sight, but I'd argue that none of the previous systems we have in place can answer something like "how often do people experience audio dropouts?" Very different goal if you ask me.
I'd also argue that as of today, it's very hard for us to say whether or not SONAR Newburyport experiences fewer dropouts than SONAR Braintree. I could dig up a report on crash stability (because of our aforementioned fault reporter), but digging up a report on audio engine performance would require benchmarking. In other words: a much smaller set of data, difficult to get real metrics outside of a control group, and it takes up lots of very valuable time.
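If events like the hypothetical ones sketched above were being collected, that build-to-build comparison becomes a simple aggregation rather than a benchmarking exercise. Again, a rough sketch under assumed data, with made-up function and field names:

```python
# Hypothetical aggregation over collected dropout events -- illustrative only.
from collections import Counter

def dropouts_per_playback_hour(events, playback_hours_by_build):
    """Normalize dropout counts by hours of playback so builds with more users
    aren't unfairly penalized. Assumes playback time is also reported."""
    counts = Counter(event.build for event in events)
    return {
        build: counts[build] / hours
        for build, hours in playback_hours_by_build.items()
        if hours > 0
    }

# e.g. dropouts_per_playback_hour(events, {"Braintree": 120000.0, "Newburyport": 95000.0})
# would show whether the newer build actually reduced dropouts per hour played.
```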
So I guess my ultimate argument is that the effort here will provide us with insight that isn't currently (or easily) available to us. It doesn't replace the other areas we look to for insight.