So, I had a lot of trouble making the video. It wasn't recording the sound from the interface until I hit the loopback button on the interface, so I recorded it over and over, and in the final version I left out a fair bit of what I had wanted to say.
Largely, I didn't have much interest in mid-side processing until I got my UCX interface, which supports it, and then I started learning about it. It turns out I have one of those figure-8 microphones that picks up from front and back while using phase cancellation to reject sound from the sides (perfect for the reverb mic in a mid-side setup).
I really don't have a need to do radio broadcasts where I don't know whether the audience is listening in stereo or mono, which I believe was the original reason for making mid-side recordings.
Really, it seems to me that the day of mid-side usefulness has passed. I mix everything in stereo and never listen to my mixes in mono to check what they sound like.
Most mid-side examples I have seen focus on how to set up the microphones to capture the source on one track and the verb on another.
So, in my video, I wanted to just pretend we had captured the verb on one track and the source on another, then process it using the mid-side button on the channel tools to show how, as you flip it from stereo to mono, the reverb cancels.
It strikes me that this is exactly the purpose of mid-side processing: with a pair of microphones you can capture a live recording with the real-life reverb and achieve the same thing, for a radio broadcast where some listeners will be in mono and some in stereo.
My demo was only meant to show that the button pans one copy of the verb channel hard left and one hard right, with the two out of phase, so that when you listen in mono they cancel.
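That cancellation is easy to sketch numerically. This is just a toy example with made-up sample values, and it assumes the button does a standard M/S decode (I haven't confirmed the UCX's exact internals): the side/"reverb" signal goes hard left in positive polarity and hard right inverted, so summing to mono leaves only the mid.

```python
# Toy mid-side decode: "mid" is the source channel, "side" is the reverb channel.
# Sample values are made up for illustration.
mid = [0.5, -0.2, 0.8, 0.1]
side = [0.3, 0.4, -0.1, 0.2]

# M/S decode (assumed button behavior): left = mid + side, right = mid - side.
left = [m + s for m, s in zip(mid, side)]
right = [m - s for m, s in zip(mid, side)]

# Mono fold-down: average left and right. The +side and -side copies cancel,
# so the mono signal is just the mid channel (within float rounding).
mono = [0.5 * (l + r) for l, r in zip(left, right)]

print(mono)  # equals `mid` up to floating-point rounding; the reverb is gone
```

In stereo, both `left` and `right` still carry the side signal, so the reverb is audible; it only vanishes when the two channels are summed.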
I think that's all the button does. Do you agree? The purpose is to let recordings sound good in mono as well as stereo, for situations where a large percentage of the listeners are in mono.