Here's the paper presented at the AES 106th Convention, 1999, in Munich, Germany:
http://www.sintefex.com/docs/appnotes/dynaconv.PDF It describes a method of dynamic convolution and it proposes a way to implement it.
As a rough guide they propose that you can use 128 IRs to simulate an audio device's response.
If you want to simulate the device with some of its knobs turned to different positions, then you'll want another 128 IRs for each of those settings too.
In other words, if you like the sound of your amp and you "model" it, then twist the bass knob a bit and think "oh, I like that too," you are supposed to model it again... because the first collection of IRs only mimics the way the amp works at the previous settings. You can just turn up the bass elsewhere... but it is not the same.
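The basic trick, as I understand the paper, is that each input sample gets convolved with the IR that was captured at (roughly) that sample's own amplitude, so the device's level-dependent behavior rides along with the signal. Here's a toy sketch in Python/NumPy; the function name, the linear amplitude-to-index mapping, and the array shapes are my own assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def dynamic_convolve(x, irs):
    """Toy sketch of dynamic convolution: each input sample is
    convolved with the impulse response chosen for that sample's
    own amplitude level.

    x   : 1-D float signal, samples assumed in [-1.0, 1.0]
    irs : array of shape (num_levels, ir_length); row 0 is the IR
          captured at the quietest test level, the last row the loudest
    """
    num_levels, ir_len = irs.shape
    y = np.zeros(len(x) + ir_len - 1)
    for n, sample in enumerate(x):
        # map |sample| to one of the amplitude-indexed IRs
        level = min(int(abs(sample) * num_levels), num_levels - 1)
        # this sample contributes its chosen IR, scaled, to the output
        y[n:n + ir_len] += sample * irs[level]
    return y
```

With all the rows identical this collapses to ordinary linear convolution; the interesting part is when the loud-level IRs differ from the quiet-level ones, which is where the nonlinear character of the device comes from.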
Yes. The implementation of this technology will get better and better.
Folks that are into it will probably be buying a new rig every 2 years for the next decade or more.
Other folks will be able to buy yesterday's news, used or on closeout, for a fraction of what the early adopters pay for it.
The smart guys at Sintefex may have overestimated what it takes to reproduce some aural experiences, but they are probably not too far off the mark for the more challenging comparisons.
In fact, they quickly discovered that the dynamic range of the signal passing through determines the number of IRs needed to convince people that the effect is working.
In other words, if you are playing a squashed, compressed, noisy signal then you can throw out most of the 128 IRs and just use the ones that cover the limited dynamic range that you hope to reproduce.
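To put a rough number on that, here's a throwaway Python check of how many of the 128 amplitude bins a signal actually lands in. The linear binning and the synthetic "squashed" signal are my own toy assumptions, just to illustrate the point:

```python
import numpy as np

def levels_used(x, num_levels=128):
    """Count how many of the amplitude-indexed IR bins a signal visits."""
    idx = np.minimum((np.abs(x) * num_levels).astype(int), num_levels - 1)
    return len(np.unique(idx))

rng = np.random.default_rng(0)
# full dynamic range: samples spread across [-1, 1]
wide = rng.uniform(-1.0, 1.0, 10_000)
# heavily compressed: every sample squashed into the 0.6..0.8 band
squashed = np.sign(wide) * (0.6 + 0.2 * np.abs(wide))
```

The squashed signal only ever visits a couple dozen of the 128 bins, so most of the IR collection never gets used on that kind of material... which is exactly the point.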
That's why guys can listen to some kinds of *tones* and feel like they are listening to the real thing, and probably why the countless demonstrations of this technology never seem to attempt all those other kinds of tones.
The paper was written when the CPU requirements for real-time processing were beyond the budget of small businesses. Real-time dynamic convolution has only been implemented in the music industry for the past 5 years or so, and I imagine it will be a few more years before it becomes full featured.
You can bet that the *native* software developers are chomping at the bit to use more and more IRs in their existing dynamic convolution processes. So if you like the idea... you're going to like what is coming.
Maybe enough people will like it that I'll be able to buy a used Trainwreck real cheap some day. :-)
best regards,
mike
edit spelling