Hi Rail. Good to hear from you again regarding the discussion at hand.
Broad statement maybe, and perhaps taken out of its original context, but if we assume that the library (well, snare) in your video is flawless, then we're one for two among the libraries being discussed. I think consumers can expect better than 50/50.
No, that statement makes a conclusion that all libraries other than what you've experienced are faulty --
Those are your words. I concluded nothing of the sort. I said I had a beef with sample companies as a whole. As you said ... a broad statement. If you choose to make assumptions...
My statement can just as easily be interpreted to mean that, of the sample companies I have used and worked with, there are enough negative experiences to cause me to speak out.
For one, I have completely re-edited my original work. Instead of adjusting the volume of whole layers, I have now adjusted every sample individually with a volume=x.y opcode in the SFZ where necessary -- a reasonable investment of time, as it required measuring the peak of every sample in the kit. One discovery in doing so is that any reliable test would also require cycling a single hit enough times that every hit from a single velocity layer, not just one, is recorded. What if one "random" hit in a layer is 8dB louder than another in the same layer?
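To give readers a concrete picture, a per-sample trim in an SFZ file looks roughly like this (the filenames and dB values are invented for illustration, not taken from the actual kit):

// two samples in the same velocity layer, each trimmed individually
<region> sample=snare_v4_rr1.wav lovel=49 hivel=64 volume=-2.3
<region> sample=snare_v4_rr2.wav lovel=49 hivel=64 volume=-0.8

Each region gets its own volume value in dB, so discrepancies between hits inside one layer can be ironed out one sample at a time.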
There should be no random hits in a layer -- if there are, the producer should have caught it during beta testing and replaced it (I know we did with at least one 'drag' articulation).
For the benefit of others reading, a single velocity layer can have multiple samples. They can be programmed to "cycle" or to play "randomly", which is what I was referring to. And I agree: the fact that, with regard to the Old Zep kit, some samples in a layer have large discrepancies in volume should have been caught before now -- that is the gist of this thread. However, I have caught them and subsequently adjusted them, which has also been covered previously in this discussion.

If your company is more effective in terms of quality control then I applaud you, and I'm sure that you'll see the fruits of your labour. Again, with respect to this thread, Cakewalk is not in the business of sample sales and production, so it seems reasonable that you should be working more diligently than they are to provide functional libraries.
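Coming back to the "cycle" versus "random" point for a moment: the two behaviours are programmed differently in an SFZ file, roughly like this (filenames and values invented for the example):

// "cycle": samples play in a fixed sequence, hit after hit
<region> sample=snare_v4_rr1.wav lovel=49 hivel=64 seq_length=4 seq_position=1
<region> sample=snare_v4_rr2.wav lovel=49 hivel=64 seq_length=4 seq_position=2
// ...positions 3 and 4 follow the same pattern

// "random": each hit rolls a number between 0 and 1 and plays whichever region's range it lands in
<region> sample=snare_v4_rr1.wav lovel=49 hivel=64 lorand=0.0 hirand=0.5
<region> sample=snare_v4_rr2.wav lovel=49 hivel=64 lorand=0.5 hirand=1.0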
No offence intended, but your video offers me little useful information, and it's hard not to wonder what interest you would have in some sort of direct comparison.
The video was done by our tester to show that notes played from 0 through 127:
a) play smoothly without any major gain jumps
and
b) have enough round robin samples not to sound repetitive (machine gun) when you trigger the same velocity range consecutively
I can't imagine you can get similar results simply by changing the SFZ file... but I'd be curious to see what you did manage to achieve.
I don't see why you would be surprised. You have four samples in a velocity layer; the SFZ I edited of Steven Slate's Zep kit also has four samples in each velocity layer. There are no volume jumps between velocity layers because I adjusted the volume of every sample in the kit accordingly and then modified the velocity response curve. On top of that, I'm in the process of cross-fading the layers, which will make them virtually seamless: there is -no- issue concerning the volume between layers, but the subtle timbral variations are noticeable to those with good ears. For now. Not for long.
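For those following along, the velocity cross-fade I'm describing is done with the SFZ xfin/xfout opcodes; a minimal sketch, with invented filenames and velocity ranges:

// layer 3 fades out over velocities 64-80 while layer 4 fades in over the same span
<region> sample=snare_v3.wav lovel=33 hivel=80 xfout_lovel=64 xfout_hivel=80
<region> sample=snare_v4.wav lovel=64 hivel=127 xfin_lovel=64 xfin_hivel=80

Inside the 64-80 overlap both layers sound at once, one ramping down as the other ramps up, which smooths over the timbral step between them.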
As for the "roll" request (the comment seems to have disappeared):
http://soundcloud.com/stoons-1 SSlate Ex2
Please note that example uses no cross-fading; that will come (hopefully) soon. That example took all of three minutes, and it would not be difficult to program a convincing roll if I spent a little more time.
The only reason I responded was you made a blanket statement which implied that all content providers didn't know what they were doing... I quote: "my main beef is really with sample libraries as a whole"... If you'd written something like "my main beef is really with some (or a lot of) sample libraries" I wouldn't have taken offense. You can't come on here and tell everyone that my baby's ugly - especially if you've never seen my baby :)
My apologies to your babies; no personal insult was intended. My understanding is that "as a whole" implies "in general". I would suggest that maybe you took this a tad too personally, as your company was never singled out. Or perhaps you see this as an opportunity. However, on the flip side, I'm not in the position of endorsing you either. Perhaps I would complain about your libraries if I used them, perhaps not. I plead ignorance. There are a lot of ugly babies out there, but it's not my responsibility to tell others which ones they are; that is ultimately a personal perception.
For what it's worth, I retract my original statement. I will now state that my main beef is really with some (or a lot of) sample libraries.
how you prepare the samples in terms of normalization before dithering in regards to the S/N ratio on the quietest sample layer.
We never normalize -- that's just a terrible idea... when you play an instrument in a room, the room will talk differently depending on the volume of the instrument... so taking a loud snare hit and just gain-changing it will not sound the same as hitting the snare softer.
I disagree; I don't think it's a terrible idea at all. No one suggested making a soft hit into a loud hit by normalizing, and no one is suggesting dropping the model of sixteen velocity layers with four round-robin (I prefer "random") samples per layer. I am suggesting that that model could be made more functional through normalization of the samples before editing. Normalizing before dithering might solve numerous issues, not the least being the ability to control velocity response in a superior manner. The "soft" hit would remain a soft hit if the velocity response is adjusted accordingly. It allows -far- greater flexibility in tailoring the dynamic response to individual songs and performances, and allows a more symbiotic relationship between dynamics and compression (and I have yet to discuss how tailoring velocity responses to compressed versus non-compressed samples is substantially different).

Furthermore, it could improve the S/N ratio, making the lower velocity levels far more functional and usable. Many drum sample libraries behave fine at louder volumes and are a little more suspect when programming using softer velocities only -- one of my favorite sounds/regions of a kit. When the samples recorded at low volumes are dithered but not normalized first, subsequently adjusting them to be audible in a mix can introduce dither noise into the mix; if one is using numerous overlapping low-velocity samples in a kit, that dither can become cumulative and audible in the mix, even without compression.
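To make the idea concrete: the quiet hit is normalized in the file, so the recorded material sits well above the dither/noise floor, and is then pulled back down at playback so it still behaves as a soft hit. A minimal SFZ sketch, with invented names and values:

// the file itself is normalized near full scale; the engine restores its softness
// amp_velcurve_16 pins the amplitude at velocity 16, shaping the response
// independently of the level baked into the recording
<region> sample=snare_v1_rr1.wav lovel=1 hivel=16 volume=-18 amp_velcurve_16=0.3

The attenuation now lives in the player, where it can be tailored per song, instead of being fixed in the audio data.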
I'm not suggesting I'm right here, just relating my observations and experience. If there are opposing perspectives I'm all ears.
Creating a sample library is as much art as it is science... you can't just look at the numbers... you have to use your ears and do extensive testing with folks who use these tools every day.
You mean those such as myself? No argument there. Having done years of work with Scott Mitchell (AudioCompositor, Gigasampler), Eric Persing of Spectrasonics, and others, I am quite familiar with sampling. I may not be an expert, but I'm far from a newbie.
Regards,
ST
Edited to change the b-word (female dog) into the word "complain". After posting I saw ***** appear and feared I would be severely reprimanded ;-p