Let me try it another way.
Your notion is that a no-features stability release will carry fewer defects than the current release process, and it's an attractive idea. To the extent that it's true, though, it's only because you won't be shipping the defects that would have come along with the excluded new features. There will still be an appreciable number of brand-new defects in the fixes-only codebase: the more defects you fix in a release, the greater the potential for introducing new ones. This is the point I believe Noel was making.
As you know, regression tests only exercise aspects of the code that were broken and then fixed. Unfortunately, there are many usage contexts for which no test has yet been created, and regression tests will not uncover the defects lurking there, because they are not regressions. They are brand new.
To make things more interesting, as more fixes are applied, the defects that subsequently arise tend to manifest themselves in increasingly complex and subtle ways. These defects also fall outside the scope of regression tests and are damned difficult to write unit tests for.
Of course, over time, these issues are eventually addressed, additional regression tests are created, and product managers then nervously declare that portion of the software to be probably mostly somewhat reliable. Maybe. :)
Broadly speaking, since the number of bugs in a proposed "stability release" may well not be appreciably lower than in a standard release, the decision to do one over the other is not a technical decision; it is a business decision.
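To put very rough numbers on it (every figure below is an assumption I'm picking purely for illustration, not anything measured): if each fix carries, say, a 10-20% chance of introducing a new defect, a couple hundred fixes will seed the "stability" release with dozens of brand-new bugs, while a feature release adds its own defects roughly in proportion to the amount of new code. A back-of-envelope sketch:

```python
# Back-of-envelope comparison: fixes-only "stability" release vs. standard release.
# Every number here is an illustrative assumption, not a measurement.

FIXES_IN_STABILITY_RELEASE = 200   # defects fixed in the fixes-only release (assumed)
BAD_FIX_RATE = 0.15                # assumed chance that any given fix injects a new defect
NEW_FEATURE_KLOC = 20              # assumed amount of new-feature code in a standard release
DEFECTS_PER_KLOC = 2.0             # assumed defect density of freshly written feature code

new_defects_from_fixes = FIXES_IN_STABILITY_RELEASE * BAD_FIX_RATE
new_defects_from_features = NEW_FEATURE_KLOC * DEFECTS_PER_KLOC

print(f"Stability release still injects roughly {new_defects_from_fixes:.0f} new defects")
print(f"Standard release adds roughly {new_defects_from_features:.0f} feature defects on top of its own fixes")
```

Tweak the assumed rates however you like; the point is that the fixes-only release's new-defect count rarely drops to anything near zero, which is why the choice ends up being a business call rather than a technical one.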
Fix-only releases tend to encourage deployment of the software by the existing customer base, which is of course a good thing. Microsoft was forever fighting to get enterprises to deploy new releases of its software, and service packs were key to making that happen. For smaller, more revenue-constrained outfits, it's the new features that matter. Balance.
I once saw some math that covered a lot of this but I don't remember a bit of it.