So I've been doing some general synth research, and I've found that there are five main types of synths: subtractive, FM, wavetable, sample-based, and physical modeling synths.
My question is: why can't you just have one master synth that does them all (except physical modeling, I think)? From my research it seems the only real difference would be your sound source, whether it's a waveform (or several), a sample, etc.
Also, in Project5 you can use an instrument as an effect. So if it's possible to have a master synth where you just select your sound source and type (this way it could also be audio/a real instrument), then you wouldn't really even need separate synths. You could just have a device chain that starts with a sound (even a bare waveform) and then adds "effects" such as LFOs, filters, additional waveforms, and modulation, as well as delay, distortion, and other effects typically used on audio. This would give you a componentized way to build a "synth," and it would be very versatile. (For instance, you could create a cool synth, then change the sound source to audio and apply that same device chain to your voice.) This would also allow you (Cakewalk) to create what would otherwise have been a whole new synth (because of the overlap that all synths share) just by creating a new "effect." It seems this would also make testing easier for you, and learning easier for us, your customers.
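Just to make the idea concrete, here's a rough sketch of that "source + chain of effects" architecture in Python. All the class and method names here are made up for illustration; this isn't how Project5 or any particular synth actually works internally, just a toy model of the componentized design described above.

```python
import math

# Toy model of the "device chain" idea: any sound source
# (oscillator, sample, live input) feeds a chain of "effects."
# All names here are hypothetical, for illustration only.

class SineSource:
    """Sound source: a plain sine-wave oscillator."""
    def __init__(self, freq, sample_rate=44100):
        self.freq = freq
        self.sample_rate = sample_rate

    def render(self, num_samples):
        return [math.sin(2 * math.pi * self.freq * n / self.sample_rate)
                for n in range(num_samples)]

class SampleSource:
    """Sound source: pre-recorded audio (or a live input buffer)."""
    def __init__(self, samples):
        self.samples = list(samples)

    def render(self, num_samples):
        return self.samples[:num_samples]

class Gain:
    """'Effect': scales the signal by a fixed amount."""
    def __init__(self, amount):
        self.amount = amount

    def process(self, samples):
        return [s * self.amount for s in samples]

class OnePoleLowpass:
    """'Effect': a very simple lowpass filter."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha

    def process(self, samples):
        out, prev = [], 0.0
        for s in samples:
            prev = prev + self.alpha * (s - prev)
            out.append(prev)
        return out

class Chain:
    """The 'master synth': one swappable source plus any list of effects."""
    def __init__(self, source, effects=()):
        self.source = source
        self.effects = list(effects)

    def render(self, num_samples):
        signal = self.source.render(num_samples)
        for fx in self.effects:
            signal = fx.process(signal)
        return signal

# The same effect chain works on an oscillator...
synth = Chain(SineSource(440), [OnePoleLowpass(0.2), Gain(0.5)])
# ...or, by swapping only the source, on recorded audio ("your voice").
vocal = Chain(SampleSource([0.0, 0.3, -0.2, 0.1]), [OnePoleLowpass(0.2), Gain(0.5)])
```

The point of the sketch is the last two lines: the chain of "effects" is reused untouched, and only the source changes, which is exactly the versatility argued for above.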
Any thoughts, anyone?