For a start, it is very dependent on the musical content. If you have a lot of cymbals and high-frequency stuff going on, it's going to be very audible. If you just have a simple track with a bass and deep, mellow singing, it's much harder to pick out. In my experience, the lower the mp3 bitrate, the lower the frequency at which the top end gets rolled off (to my ears). Why do you think they use variable bitrate encoding? It's based on the fact that more complex material needs a higher bitrate to adequately 'trick' the human ear, while simple stuff can get away with a lower bitrate. I think a variable bitrate encode of, say, 192-320 is more than adequate for everyday listening. It will only drop to 192 when the content allows it and sit at 320 when it needs to. So as long as the encoder judges this reliably, we are good!
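If you want to try this at home, here's a rough sketch (assuming you have ffmpeg built with libmp3lame installed, and using placeholder filenames) of how you'd ask for that kind of VBR encode from Python:

    import subprocess

    # Ask ffmpeg/libmp3lame for its highest VBR quality setting ("-q:a 0",
    # the "-V 0" preset in LAME terms), which averages somewhere around
    # 220-260 kbps and ranges up toward 320 on busy material.
    # "input.wav" / "output.mp3" are just placeholder filenames.
    subprocess.run(
        ["ffmpeg", "-i", "input.wav",
         "-codec:a", "libmp3lame", "-q:a", "0",
         "output.mp3"],
        check=True,
    )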
At 128, I believe I can hear it all the time if I listen for it. On some songs it stands right out and punches you in the ears. On other songs you don't really notice it until you listen closely, and then once you've noticed it you can't un-hear it; it's plain as daylight every time after that.
At 320, sometimes. I have done a direct A/B of some of my own music before and I could hear a VERY subtle difference in the high-frequency content. The .wav just seemed that tiny bit clearer and sharper at the very top of the highs. But if you gave me a 10-minute break between the two samples, I highly doubt I would be able to hear the difference. So my conclusion is that 320 is perfectly adequate, because I don't think I could pick it out in real-world listening (directly A/B-ing two versions is NOT real world; that's not how we listen to music).
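If you want to check whether you can really tell, a blind ABX-style test is more honest than a sighted A/B. Here's a rough sketch (assuming ffplay, which ships with ffmpeg, is installed; the two filenames are just placeholders for your .wav and your 320 encode):

    import random
    import subprocess

    def play(path):
        # -nodisp: no video window, -autoexit: quit when the file ends
        subprocess.run(["ffplay", "-nodisp", "-autoexit", path], check=True)

    a, b = "track.wav", "track_320.mp3"   # placeholder filenames
    correct = 0
    trials = 10
    for _ in range(trials):
        x = random.choice([a, b])          # the hidden sample
        for label, path in (("A", a), ("B", b), ("X", x)):
            input(f"Press Enter to play {label}...")
            play(path)
        guess = input("Is X the same as A or B? ").strip().upper()
        correct += (guess == "A" and x == a) or (guess == "B" and x == b)
    print(f"{correct}/{trials} correct (pure guessing lands around {trials // 2})")

If you only score around 5 out of 10 over a decent number of trials, you probably can't actually hear the difference.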
And although I have never tested for it or experienced it, there should be differences between .mp3 encoders. Modern encoders should be much better at 128 than the original ones were when .mp3 first came out.