Hi Mike,
This is a little-known and even less understood approach to musical composition. I basically program the computer to do the composing, and then I edit the result. I feel it is the next step in musical composition. Because I can do so much more when assisted by the computer, the music has an exponentially larger range of possibility, so it sounds foreign to people right now. I use AI to extend the possibilities of music. (AI in this case stands for Augmented Intelligence, not Artificial Intelligence.)
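To give a rough feel for what "the computer does the composing" means, here is a deliberately tiny Python sketch, nothing like my actual Csound system, just a toy: a seeded random walk over a pentatonic scale generates a melody that I could then edit by hand.

```python
# Toy illustration only -- my real algorithms live in Csound and are
# far more elaborate. A random walk over a pentatonic scale produces
# a raw melody (as MIDI note numbers) for later human editing.

import random

PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees within one octave


def compose(length, seed=1):
    """Generate `length` MIDI note numbers by a seeded random walk."""
    rng = random.Random(seed)     # seeding makes the result repeatable
    degree, octave = 0, 5         # start on the root, middle register
    notes = []
    for _ in range(length):
        degree = (degree + rng.choice([-1, 0, 1])) % len(PENTATONIC)
        notes.append(12 * octave + PENTATONIC[degree])
    return notes
```

The point is the workflow, not this particular algorithm: the program proposes material, and the composer curates it.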
I render these files "natively" from Csound in multi-channel formats. I write algorithms that, among many other things, create spatial relationships among the sounds. For home playback I often mix these compositions down to 4.1, but they are intended for the concert hall, where they are usually played back using Pro Tools. I also play the multi-channel files at art installations, and for that I use Sonar or Ableton Live.
One of my areas of focus is 3D spatialization, where I not only have speakers left and right (1D: stereo) and front and back (2D: quad) but also top and bottom (3D), so all three spatial dimensions are used. I programmed a system of Cartesian coordinates in Csound that stipulates where, within the 3D cube, a sound exists and, if it is to be perceived as moving, how it moves...
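The Cartesian idea can be sketched outside Csound. Here is a simplified Python illustration (my own hypothetical example for this email, not my actual Csound code): trilinear amplitude panning over the eight corner speakers of a unit cube, where a position (x, y, z) in [0, 1]^3 determines the gain sent to each corner.

```python
# Hypothetical sketch, not the real Csound implementation: trilinear
# panning across the 8 corner speakers of a unit cube. A sound's
# Cartesian position (x, y, z), each in [0, 1], yields one gain per
# speaker; the gains always sum to 1, so overall level is preserved.

from itertools import product


def cube_gains(x, y, z):
    """Return {(cx, cy, cz): gain} for the 8 cube-corner speakers."""
    gains = {}
    for cx, cy, cz in product((0, 1), repeat=3):
        g = ((x if cx else 1 - x) *
             (y if cy else 1 - y) *
             (z if cz else 1 - z))
        gains[(cx, cy, cz)] = g
    return gains


# A sound at the exact center feeds all 8 speakers equally;
# at a corner, only that corner's speaker sounds.
center = cube_gains(0.5, 0.5, 0.5)
```

Movement then falls out naturally: animate (x, y, z) over time and recompute the gains each control period. (A constant-power variant would take the square root of each gain; I show the linear form for clarity.)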
So, as you can see, I cannot render a 144-channel file as separate channels any more than one could do so with a stereo file; it is not really practical.
You can find out more about my work at
http://www.perceptionfactory.com

Thanks for your interest, and I hope that answers your question,
Michael