Dilaco1
Max Output Level: -88 dBFS
- Total Posts : 150
- Joined: 2007/07/23 22:00:39
- Location: Australia
- Status: offline
Questions RE default 32 bit rendering bit depth
I work in 24 bit mode for my projects, but I keep the setting for rendering audio at the default 32 bit resolution, because it is supposed to achieve better results for rendering. So when I ‘bounce to clips’, it raises some questions as to what happens in the 24 bit project. Do I now have a hybrid file, with some 32 bit clips (the rendered clips) and some 24 bit clips (the un-rendered clips) in the one project? Or are the rendered clips still 24 bit clips, even though the rendering process is done in 32 bit mode?
If the rendered clips are now 32 bit clips, how do I go about exporting the audio when the project is finished? In other words, if the file contains some 32 bit clips and some 24 bit clips, wouldn’t I have to apply dither if I want the resulting audio to be 24 bit? It would be a shame to have to add dither in this scenario, affecting the whole file, when only some of the clips really require dithering down. Wouldn’t I be better off rendering clips (i.e. for the ‘bounce to clips’ function) in 24 bit resolution and saving the added step of applying dither during export?
Cakewalk by Bandlab; RME Fireface 800 audio interface; Windows 7 (64bit);
|
noynekker
Max Output Level: -66 dBFS
- Total Posts : 1235
- Joined: 2012/01/12 01:09:45
- Location: POCO, by the river, Canada
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/13 05:40:34
(permalink)
Can't you just always render / export projects to 32 bit... then afterwards dither down to 16 bit for CD, or 24 bit for online streaming, depending on where you are planning on having them played?
Cakewalk by Bandlab, Cubase, RME Babyface Pro, Intel i7 3770K @3.5Ghz, Asus P8Z77-VPro/Thunderbolt, 32GB DDR3 RAM, GeForce GTX 660 Ti, 250 GB OS SSD, 2TB HDD samples, Win 10 Pro 64 bit, backed up by Macrium Reflect, Novation Impulse 61 Midi Key Controller, Tannoy Active Near Field Monitors, Guitars by Vantage, Gibson, Yamaki and Ovation.
|
bitflipper
01100010 01101001 01110100 01100110 01101100 01101
- Total Posts : 26036
- Joined: 2006/09/17 11:23:23
- Location: Everett, WA USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/13 14:19:29
(permalink)
There's theory and then there's practice. And in practice...
1. Dither isn't necessary when converting 32-bit files to 24 bits, so don't worry about dither.
2. Despite what you may have heard, the truth is there is almost never any audible difference between 32- and 24-bit files.
3. If your audio originated from an ADC then you never had either one to begin with, as audio interfaces are really only capable of between 20- and 22-bit resolution. The least-significant bits start out as random noise, and remain random noise whether you store them as 24, 32 or 64 bits.
4. Even if your audio is entirely generated in the box, e.g. 24-bit samples, it makes no quality difference if you render them at 32. All you're doing is tacking on some zeroes (see the sketch below).
Simplest approach is to stay at 32 bits throughout. There is no downside.
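A quick way to check points 1 and 4 for yourself - a minimal sketch, assuming NumPy. A float32 mantissa holds 24 bits, so promoting 24-bit PCM to 32-bit float is exact, and the round trip back loses nothing (which is why no dither is needed):

# Minimal sketch (assuming NumPy): 24-bit PCM promoted to 32-bit float is
# exact, because a float32 mantissa holds 24 bits -- the round trip is lossless.
import numpy as np

rng = np.random.default_rng(0)
pcm24 = rng.integers(-2**23, 2**23, size=1_000_000)   # fake 24-bit samples

f32 = (pcm24 / 2**23).astype(np.float32)              # normalize to [-1.0, 1.0)
back = np.round(f32.astype(np.float64) * 2**23).astype(np.int64)

print("max round-trip error:", np.abs(back - pcm24).max(), "LSBs")  # prints 0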
All else is in doubt, so this is the truth I cling to. My Stuff
|
Dilaco1
Max Output Level: -88 dBFS
- Total Posts : 150
- Joined: 2007/07/23 22:00:39
- Location: Australia
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/14 03:08:28
(permalink)
Thank you Noynekker and Bitflipper. There is obviously a lot I don't understand about bit depth. Bitflipper, you said that it makes no difference if I render at 32 bit - that I am just adding zeros. So why don't I just render in 24 bit and keep my files smaller? (I assume 32 bit files are larger than 24 bit files?)
Cakewalk by Bandlab; RME Fireface 800 audio interface; Windows 7 (64bit);
|
BenMMusTech
Max Output Level: -49 dBFS
- Total Posts : 2606
- Joined: 2011/05/23 16:59:57
- Location: Warragul, Victoria-Australia
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/14 06:11:04
(permalink)
bitflipper: There's theory and then there's practice. And in practice...
1. Dither isn't necessary when converting 32-bit files to 24 bits, so don't worry about dither.
2. Despite what you may have heard, the truth is there is almost never any audible difference between 32- and 24-bit files.
3. If your audio originated from an ADC then you never had either one to begin with, as audio interfaces are really only capable of between 20- and 22-bit resolution. The least-significant bits start out as random noise, and remain random noise whether you store them as 24, 32 or 64 bits.
4. Even if your audio is entirely generated in the box, e.g. 24-bit samples, it makes no quality difference if you render them at 32. All you're doing is tacking on some zeroes.
Simplest approach is to stay at 32 bits throughout. There is no downside.
Actually, Bit, you can hear audible artifacts if you don't use 32bitfp audio files - and as some will know, I use 64bitfp audio files. Where Bit is right is if, say, you record an acoustic track with minimal processing: you will not hear much difference between 24bit, 32bit and 64bit. And indeed, if this is your goal in regards to recording, or if you're planning to send your audio in and out of the box, or you do your processing via outboard gear, then stick with plain old 24bit - unless you're planning to get Ifools to distribute, because they use a 32bit conversion process. Where Bit is wrong is if you process your files heavily like I do. This includes Melodyne. The difference between a 24bit Melodyne file and 64bit is night and day - the 24bit file tends to sound a bit grainy. It's also night and day in regards to analogue emulation effects. Now, Bit and others might say 96khz fixes some of these problems... and it may, I have not done the tests to really know, and indeed it's been decided that the starting point for hi-resolution audio is a 24bit 96khz master file. But I believe, although I don't have a test file at this point to demonstrate it irrefutably, that 64bitfp 48khz master files are the starting point for hi-resolution audio. For the above reasons. There will be lots of hate poured onto me for saying this, but the proof's in the pudding: go to the website in my signature to hear what I believe is hi-resolution audio, right up until the distribution point or Youfool.
|
Dilaco1
Max Output Level: -88 dBFS
- Total Posts : 150
- Joined: 2007/07/23 22:00:39
- Location: Australia
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/15 01:55:08
(permalink)
Thanks BenMMus. So, as my audio interface (RME Fireface 800) is only officially 24 bit, it makes no sense to record anything going through it at 32 bit, because it will still be a 24 bit recording. But for processing effects, 32 bit or higher will achieve a better result?
Here’s the grey area for me: it seems that the place for changing the default bit depth of your projects is the ‘Record Bit Depth’ box (Preferences > Audio Data > Record Bit Depth). But doesn’t this setting only affect the recording of an audio clip - and not apply to MIDI recording, or to the bit depth of anything you render, or to the final mixdown/export process?
Question: does the ‘Render Bit Depth’ setting (Preferences > Audio Data > Render Bit Depth) also affect the final ‘Export Audio’ process? (In other words, does the Export Audio process come under the heading of rendering, affected by the Render Bit Depth setting?) Or is it only the settings in the Export Audio window that apply in this process? For example, if I have a project with a bunch of plugins on audio tracks that have not been rendered yet, and a bunch of virtual instrument MIDI tracks that have not been bounced to audio yet, and I have my Render Bit Depth at 64 bit and I export the project, am I effectively getting a 64 bit rendered file as a result? Or do the settings in the Export Audio window take over at this point?
Cakewalk by Bandlab; RME Fireface 800 audio interface; Windows 7 (64bit);
|
gswitz
Max Output Level: -18.5 dBFS
- Total Posts : 5694
- Joined: 2007/06/16 07:17:14
- Location: Richmond Virginia USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/16 02:04:00
(permalink)
Export settings take over.
I render to 32 bit.
I don't agree that the least significant bit doesn't matter. When I make a file with only the least significant bit from a 24 bit wave and normalize, I can identify the song.
Regardless of your dither settings, dither will not be applied unless it's necessary. This is from Noel, the CTO.
StudioCat > I use Windows 10 and Sonar Platinum. I have a touch screen. I make some videos. This one shows how to do a physical loopback on the RME UCX to get many more equalizer nodes.
|
bitflipper
01100010 01101001 01110100 01100110 01101100 01101
- Total Posts : 26036
- Joined: 2006/09/17 11:23:23
- Location: Everett, WA USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/16 02:52:31
(permalink)
gswitz: I don't agree that the least significant bit doesn't matter. When I make a file with only the least significant bit from a 24 bit wave and normalize, I can identify the song.
There are good reasons for using 32-bit data, having to do with sinking rounding errors down into the noise floor and not letting them accumulate to the point of audibility. But I'm not talking about signal processing here, rather about rendering, in response to the original question. Once exported as a final product, human ears are incapable of differentiating between 24- and 32-bit data.
Not that I doubt your hearing acuity, Geoff. You're younger than me, so there's no question you hear better than I do. However, I think I can make the case that no human can hear the difference between files rendered at 24 vs. 32 bits. Look at it this way: whatever differences exist between a 24-bit file and its 32-bit equivalent lie below -144 decibels. The full range of human hearing is only ~120 dB.
Now, if you were a barn owl, you actually could hear something 14 dB below the threshold of (human) hearing, e.g. a mouse walking half a mile away. But you couldn't record that even with the best microphone on the planet, and even if you could, you wouldn't be able to hear it. The lowest known sound level ever achieved is -14 dBSPL (in an anechoic chamber at Microsoft). You'd have to install a jet engine in that chamber to realize a 144 dB range. If you were then able to pick out a mosquito buzzing 10 ft away - while the engine was running - then I'd believe you can hear what's happening at -144 dB and below.
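For anyone who wants to check the arithmetic behind that -144 dB figure, each bit of fixed-point depth is worth 20*log10(2), about 6.02 dB:

import math

for bits in (16, 24, 32):
    print(f"{bits}-bit floor: {20 * math.log10(2.0**-bits):.1f} dBFS")
# 16-bit floor: -96.3 dBFS
# 24-bit floor: -144.5 dBFS
# 32-bit floor: -192.7 dBFS (fixed point; 32-bit float is really a 24-bit
#                            mantissa with a sliding exponent)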
All else is in doubt, so this is the truth I cling to. My Stuff
|
Dilaco1
Max Output Level: -88 dBFS
- Total Posts : 150
- Joined: 2007/07/23 22:00:39
- Location: Australia
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/16 11:43:45
(permalink)
Thanks Gswitz for chiming in and confirming that the export settings take over. So just to clarify: even though the export setting takes over from the 'Render Bit Depth' setting, exporting a project is still regarded as a form of rendering?
Cakewalk by Bandlab; RME Fireface 800 audio interface; Windows 7 (64bit);
|
bitflipper
01100010 01101001 01110100 01100110 01101100 01101
- Total Posts : 26036
- Joined: 2006/09/17 11:23:23
- Location: Everett, WA USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/16 18:52:22
(permalink)
Yes. "Render" just means to save a fully-processed track or mix, whether by bouncing, freezing or exporting.
All else is in doubt, so this is the truth I cling to. My Stuff
|
gswitz
Max Output Level: -18.5 dBFS
- Total Posts : 5694
- Joined: 2007/06/16 07:17:14
- Location: Richmond Virginia USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/16 22:35:06
(permalink)
bitflipper: If you were then able to pick out a mosquito buzzing 10 ft away - while the engine was running - then I'd believe you can hear what's happening at -144 dB and below.
Oh, I can't hear the difference between 24 and 32. I struggle with mp3 vs. 16 bit (remember the golden ears quiz?)... Haha. I was saying the least significant bit isn't just noise in a polished master. I think I made a video where I polarity-flipped a 24 bit wave import against a FLAC exported from Sonar, to show there was a 1 bit difference. That difference, normalized, made the song recognizable, since you had 1 bit plus positive or negative (sign), giving three positions (+1, 0, -1).
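The experiment is easy to reproduce. A rough sketch, assuming NumPy and the soundfile library, with "song.wav" standing in for any 24-bit file:

# Keep only the least significant bit (plus sign) of each 24-bit sample,
# scale it up to an audible level, and listen.
import numpy as np
import soundfile as sf

audio, sr = sf.read("song.wav", dtype="int32")  # 24-bit WAVs load left-justified
samples = audio >> 8                            # shift back to true 24-bit values

residue = (samples & 1) * np.sign(samples)      # three positions: -1, 0, +1
sf.write("lsb_only.wav", residue.astype(np.float32) * 0.5, sr)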
post edited by gswitz - 2018/07/17 13:29:24
StudioCat > I use Windows 10 and Sonar Platinum. I have a touch screen. I make some videos. This one shows how to do a physical loopback on the RME UCX to get many more equalizer nodes.
|
BenMMusTech
Max Output Level: -49 dBFS
- Total Posts : 2606
- Joined: 2011/05/23 16:59:57
- Location: Warragul, Victoria-Australia
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/17 07:12:51
(permalink)
What Bit, and many of his era (sorry Bit), don't understand is that you're trying to keep the audio at the highest quality throughout the process. And in this day and era, there is no reason not to switch the 64bitfp on and leave it. Furthermore, there isn't any reason not to bounce out a 64bitfp master file that can be used to create 24bit masters - I wouldn't use 16 bit and don't - and to create Mp3s that could be considered hi-resolution, because there's no dither and because the 64bitfp file hasn't gone through one reduction before being reduced again via Mp3. And Soundcloud accepts 64bit 96k files. Mp3 has no bit depth limit. Yes, we all listen through 16 bit converters - most these days would listen with 24 bit, because most new phones have 24bit - but it's all about resolution and not losing resolution. I think what happens is, if again you process audio files heavily or rely on the analogue emulation aesthetic, and you bounce down to a 16 bit master file and then create an Mp3 or upload to SoundCloud - rather than hearing the subtle nuances of emulated 2nd and 3rd harmonic distortion... it just becomes distortion. And it's the same at 24bit, and whilst not as bad at 32 bit... you still hear distortion. The only way I can describe it is, if you listen very carefully on headphones, there's a grainy mid-frequency sound that you can't get rid of.
Look, the oldies around this forum - and I'm sorry Bit, because I know you're a good guy who has helped many on this forum since I've been around - the oldies are living in the past and, worse, holding western art music back. There is no need for a studio, a band or 1000s of dollars of equipment. And here is where the proof is, in regards to 64bitfp everything: I use Notion 6, which is a score editor and orchestral instrument. I used to stick with Bit's rules - or, if you like, the general rules of 24bit audio. I would bounce out the files as 24bit, and then a few years ago 32 bit. I would then sit and add console emulation and some tape saturation as well. It's one of the steps you take with a wooden, robotic sampled violin to smooth it over and add back some realism. The issue was, for the life of me I couldn't understand where the mid-range tinny distortion was coming from. It ruined my mixes. Last year, and I can't remember why, I started recording those same Notion instruments into Sonar at 64bitfp via ReWire. Gone was the distortion. Enough said.
Now yep, we're talking about master files, but think about it: all your processing is done at either 32bitfp or 64bitfp, depending on what switch you've pressed. What happens to all that processing at 64bitfp when you bounce down to 24bit or 16bit? It gets squashed into the noise floor and dither is added. Now granted, 24 bit wave files should be the de facto format today, and if you're only using this file as your listening master file then it will be fine. You lose some very subtle nuances of the 64bit file, but until fp DA converters come on line, it's the best we can do. The problem, as I said above, is if you want to turn that 24bit file into an MP3 or a Soundcloud upload, or indeed the master is going to be used in an MP4 for YouFool - because you're squashing a file that has already been squashed. I can't remember if it's on Soundcloud's do's and don'ts list, but I read - and only again recently - that it's recommended the file you create for Soundcloud isn't heavily processed. Why? Because all the verbs, delays and emulation effects sound distorted. I always wondered why my mixes sounded so horrible after I uploaded to Soundcloud. Yes, in the past I might have been a crap mixer - not anymore - but it's because we're only just starting to work out the digital medium and format, and a lot of the ideas from Bit and others on this forum are from the bad old days of digital, where Bit was 100 percent on the money.
For proof, my website is 1331.space. I'm about to put up my latest AV sonata, completed last week.
Ben
|
gswitz
Max Output Level: -18.5 dBFS
- Total Posts : 5694
- Joined: 2007/06/16 07:17:14
- Location: Richmond Virginia USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/17 13:17:31
(permalink)
Cool, Ben. Thanks for kicking in. Regarding MIDI processing, there is a lot of talk that increased depths and rates help. Craig Anderton has written copiously on it.
As you say, no real reason not to keep files at a higher depth if you want, regardless of whether improvements can be confirmed by others.
I don't hear what you describe, but I still record performers using microphones and preamps. My brother and I noticed that I don't have equipment with a low enough noise floor to test/assert that my UCX performs as RME claims - everything is noisier than the converters.
Sonar has a feature for increasing the sample rate for certain processing. I don't use this much myself. I work at 96 when I want that. I mention it in case readers didn't know.
StudioCat > I use Windows 10 and Sonar Platinum. I have a touch screen. I make some videos. This one shows how to do a physical loopback on the RME UCX to get many more equalizer nodes.
|
Dilaco1
Max Output Level: -88 dBFS
- Total Posts : 150
- Joined: 2007/07/23 22:00:39
- Location: Australia
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/17 13:28:24
(permalink)
Bitflipper, thanks so much for answering my question.
Cakewalk by Bandlab; RME Fireface 800 audio interface; Windows 7 (64bit);
|
bitflipper
01100010 01101001 01110100 01100110 01101100 01101
- Total Posts : 26036
- Joined: 2006/09/17 11:23:23
- Location: Everett, WA USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/17 14:31:27
(permalink)
BenMMusTech: ...in this day and era, there is no reason not to switch the 64bitfp on and leave it...
There are 10 kinds of people in the world...
All else is in doubt, so this is the truth I cling to. My Stuff
|
gswitz
Max Output Level: -18.5 dBFS
- Total Posts : 5694
- Joined: 2007/06/16 07:17:14
- Location: Richmond Virginia USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/17 16:07:10
(permalink)
bitflipper
BenMMusTech: ...in this day and era, there is no reason not to switch the 64bitfp on and leave it...
There are 10 kinds of people in the world...
lol
StudioCat > I use Windows 10 and Sonar Platinum. I have a touch screen. I make some videos. This one shows how to do a physical loopback on the RME UCX to get many more equalizer nodes.
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/17 16:45:36
(permalink)
☄ Helpful by tlw 2018/07/19 10:15:24
What people who don't understand technical details don't understand is that you can't increase resolution by changing something that isn't limiting resolution in the first place.
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
Anderton
Max Output Level: 0 dBFS
- Total Posts : 14070
- Joined: 2003/11/06 14:02:03
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/17 18:12:20
(permalink)
I think the rules are different for what happens outside the box compared to what happens inside the box. If you record an acoustic guitar at 44.1 kHz with 24 bits of resolution and play it back, then it will be 24 bits no matter what. And, the sample rate won't matter because nothing can interfere with the clock frequency. But when you start processing, or creating sounds inside the box, that's when sample rates and resolution can become significant factors.
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/17 19:37:39
(permalink)
Anderton: I think the rules are different for what happens outside the box compared to what happens inside the box. If you record an acoustic guitar at 44.1 kHz with 24 bits of resolution and play it back, then it will be 24 bits no matter what.
Technically speaking, though the recording itself will be 24 bits, the actual resolution will depend on the playback level and the noise from all sources present - including the noise in the signal picked up by the mic when the recording was made. The limits of hearing could also apply if they weren't buried under the noise floor from everything else. IOW, 24 bit is the theoretical limit of that particular digital format, not the real-world resolution (which is always far less than 24 bits).
Anderton: But when you start processing, or creating sounds inside the box, that's when sample rates and resolution can become significant factors.
Though in certain conditions they can indeed be very significant factors, in other cases they're completely irrelevant. Again, you can't increase resolution by changing something that isn't limiting resolution in the first place. The problem for us here is that one needs to know some complicated DSP stuff to know when a given bit depth/sample rate might be really significant, completely irrelevant or somewhere in between. And people often assume that because something can be an issue in certain cases, it applies to them, in their particular case, all the time. Unfortunately it doesn't work that way, so in the real world we end up having to waste money on things like bigger SSD drives so we have enough space for all those 24 bit sample libraries that impress the true believers but in fact never contain more than 20 bits of resolution (and often far less). Thanks NI!
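The textbook conversion between a measured signal-to-noise ratio and "real" bits makes that point concrete (this is standard quantization math, not anything vendor-specific):

# ENOB: an ideal N-bit conversion of a full-scale sine gives
# SNR = 6.02*N + 1.76 dB, so N = (SNR - 1.76) / 6.02.
def effective_bits(snr_db: float) -> float:
    return (snr_db - 1.76) / 6.02

print(effective_bits(120.0))  # an excellent real-world converter: ~19.6 bits
print(effective_bits(144.0))  # what true 24-bit would require:    ~23.6 bits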
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
BenMMusTech
Max Output Level: -49 dBFS
- Total Posts : 2606
- Joined: 2011/05/23 16:59:57
- Location: Warragul, Victoria-Australia
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/18 09:15:20
(permalink)
drewfx1: What people who don't understand technical details don't understand is that you can't increase resolution by changing something that isn't limiting resolution in the first place.
I can read the technical stuff, Drew - I had to, to pass my first degree - and what you're missing is the notion of rounding errors. The technical stuff is so boring, and I already do everything from writing to creating 3d animation, but the issue is pretty simple: as Craig has stated, it's about processing audio within the box. To create the same sounds as analog, you have to process the audio heavily. Think of it this way: analogue, on the way into the storage medium (i.e. tape), adds all the things we like about classic recordings, but analog tape is a poor storage medium and prone to being very expensive. Digital, on the other hand - once we perfect things like storing data in synthetic DNA or synthetic diamonds (off the top of my head) - is the perfect storage medium, because it adds nothing to the data. The issue is, if we go back to the original 70s digital recordings, it's taken almost 45 years to get to the point where we go 'aha' and can fix it - this is the analogue emulation aesthetic - but as we all know, it doesn't work if you use only 24bit processing, hence 32bit and finally 64bitfp processing. This is because of, I think, rounding errors that eventually become sharp mid distortion. I mentioned this in my last post in regards to Notion audio files and processing - I could not fix the mid-distortion until I went to 64bitfp. I'm not saying you can listen to a 64bit file, that would be ridiculous - what I am saying is you need to keep the master at 64bitfp, and from there you create whatever master file you need. The OP's question was about rendering to 32bitfp, and here is where all of you are doing everyone on this board who doesn't use a studio and expensive museum pieces a disservice - because I can tell you there is a huge difference in audio quality if you create an MP3 from a 16 bit file instead of a 64bit file, which includes most sites like Soundcloud. Don't ask me why, but I can tell you my ears can hear the difference. It's the same if you create MP4s or AVIs or whatever visual standard too... you need to create the final visual product from a 64bit file. Itunes also uses a 32bitfp process to create Ifools files. Now, I only have one piece of evidence in regards to this matter, but why risk any kind of distortion by giving Crapple a 16 bit file for conversion, or indeed a 24 bit one, if Crapple are only going to re-wrap the 24bit file in a 32bit container file? And that's what you're also missing! You and others keep rattling on about how you can't hear a higher bit depth file - you can't! But you can protect the processed file from rounding errors, which turn into distortion, by sticking with 64bitfp until distribution. By all means create 24bit audio wave files for audio masters, which is what I do, but make a 64bitfp master too, undithered, for future proofing. I can tell you that the 24 bit file doesn't sound as good as the MP4 AV file I create from the 64bit master, for one because of dither. And think about it: when you're squashing your track into a smaller format, it should be pretty obvious which file should be used. Mp3 encoders and the like aren't that good at picking which data to remove... so why squash a file that has already been squashed? Finally, to say that if you can't hear something it doesn't exist - and I'm sorry, but I'm going to metaphorically kick you here - is stupid. It's like saying that because I can't see sound waves or light waves or the ****ing sun, the sun or sound and light waves don't exist.
You might as well join the loony flat earthers. For anyone wanting proof of my crazy ideas on sound, just go to my website 1331.space and to the music AV page. I can't get that 'pro' sound without my crazy theories, ones that I've taken 18 years to perfect. Peace and Love :)
|
stxx
Max Output Level: -82 dBFS
- Total Posts : 406
- Joined: 2010/01/31 17:32:02
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/18 14:24:47
(permalink)
It's simple: render at 32 bit for processing purposes, then 24 or 16 bit for your final production files when processing is completed. For home mastering, you could even upsample to 96/24 - many mastering engineers will upsample your tracks anyway before they start their process.
Sonar Platinum, RME UFX, UAD 2, Waves, Soundtoys, Fronteir Alphatrack, X-Touch as Contl Srfc, , Console 1, Sweetwater Creation Station Quad Core Win 8.1, Mackie 824, KRK RP5, AKG 240 MKII, Samson C-Control, Sennheiser, Blue, AKG, RODE, UA, Grace, Focusrite, Audient, Midas, ART Song Portfolio: https://soundcloud.com/allen-lind/sets/oth-short
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/18 17:28:29
(permalink)
BenMMusTech
drewfx1: What people who don't understand technical details don't understand is that you can't increase resolution by changing something that isn't limiting resolution in the first place.
I can read the technical stuff Drew - I had to, to pass my first degree, and what you're missing is the notion of rounding errors.
No, Ben. What you're missing is the notion of how errors do and don't accumulate - and how errors buried tens of dB under the noise floor can't just magically rise above it. What bit depth is needed for a given process is based on the noise floor and the number of calculations being done on the same data. Simple as that. Except it's not simple, because most people tend not to know either of those things, for the perfectly understandable reason that they aren't readily available. One can get the noise floor from not-entirely-trivial measurements, but the only way to understand the calculations is, unfortunately, to have some knowledge of DSP programming.
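A toy illustration of that point - a sketch assuming NumPy, with the gain chain and pass count chosen arbitrarily. The residue left by repeated processing scales with both the precision of the arithmetic and the number of operations that touch the same data:

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=48_000)

def gain_chain(sig, dtype, passes=100):
    # Apply `passes` pairs of +0.1 dB / -0.1 dB gain stages (net ~0 dB),
    # rounding to `dtype` at every step, as a processing chain would.
    y = sig.astype(dtype)
    up, down = dtype(10 ** (0.1 / 20)), dtype(10 ** (-0.1 / 20))
    for _ in range(passes):
        y = y * up
        y = y * down
    return y

for dt in (np.float32, np.float64):
    err = gain_chain(x, dt).astype(np.float64) - x
    print(dt.__name__, f"residue: {20 * np.log10(np.abs(err).max()):.0f} dBFS")
# Typical run: float32 leaves a residue around -100 dBFS, float64 around
# -280 dBFS. Both sit far below any acoustic noise floor, but only the
# float64 chain stays under the 24-bit quantization floor of ~-144 dBFS.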
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
wst3
Max Output Level: -55.5 dBFS
- Total Posts : 1979
- Joined: 2003/11/04 10:28:11
- Location: Pottstown, PA 19464
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/18 18:02:35
(permalink)
We are talking here about technical issues, and reasonably well settled ones at that.
All engineering endeavors (as opposed to actually composing/arranging/orchestrating/playing) are an exercise in optimization - all of them, including audio engineering. In order to optimize you need to know the limits of all the elements of a system: in our case the listener, the listening environment, the signal sources, and the processing engine, which includes any code used to mix and/or render.
All before we take into account personal taste.
For recording a live source you simply don't need more than 24 bits because, as stated earlier, the A/D converter can't match that. For the same reason, fixed point is more than adequate. As stated previously, you can never increase the resolution in terms of frequency response or noise floor; what you record is what you get. Sample rate is slightly more complex! If you are in a really quiet room with really good microphones, really good preamplifiers, NO analog processing (unless you want to capture all the artifacts, which by the way will limit the dynamic range and pass band further) and a great instrument (a great player doesn't hurt either) - if all of that is true then I'd probably record at 96 kHz. Otherwise 48 kHz is more than adequate.
I'd record at 96 kHz in order to preserve - in the recording - as much of the original signal as possible, or more even<G>! Keeping things as good as you can until the final mix is a great idea, and no, I don't think it is lost on us old folks (get off my lawn). Most of us "grew up" in a time when even 96 dB S/N ratio was unobtainable, and unnecessary since neither FM radio nor vinyl discs could reproduce it<G>!
Fortunately (for me) my recording space is awful - ok, that's not entirely fortunate, but it does mean I have no excuse for recording at anything more than 48 kHz/24 bit fixed point. Which means that I don't need to consider processor power or disk space (I suppose I could probably get away with 44.1/16, but even I won't go that far!)
Where things get more dicey - a lot more dicey in some cases - is when we start manipulating the audio data. All processing in the digital domain is nothing more than math and the limits of precision and accuracy are well understood.
But if you have the horsepower why not work at greater word length? You only lose disk space, and maybe processing power in extreme cases. You won't hurt the audio, but you won't improve the source, and that is important.
For my own case, I can hear the difference between different sample rates and word lengths for some plugins (most notably some of the UA stuff). Well, I can hear the difference in a proper studio, in my studio these things have no impact. So that's another issue for optimization.
With respect, without something to compare I really can't comment on the tracks at 1331.space with respect to this discussion. Personally I found many of the tracks to be a bit too busy for my tastes, and in some cases perhaps over processed (this coming from me is almost comical!).
TL;DR: All this to say, if you are able to work with at least one order of magnitude greater resolution than you need, you will be just fine.
-- Bill Audio Enterprise KB3KJF
|
Anderton
Max Output Level: 0 dBFS
- Total Posts : 14070
- Joined: 2003/11/06 14:02:03
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/18 18:07:15
(permalink)
drewfx1
Anderton: I think the rules are different for what happens outside the box compared to what happens inside the box. If you record an acoustic guitar at 44.1 kHz with 24 bits of resolution and play it back, then it will be 24 bits no matter what.
Technically speaking, though the recording itself will be 24 bits, the actual resolution will depend on the playback level and the noise from all sources present - including the noise in the signal picked up by the mic when the recording was made. The limits of hearing could also apply if they weren't buried under the noise floor from everything else. IOW, 24 bit is the theoretical limit of that particular digital format, not the real-world resolution (which is always far less than 24 bits).
I know that, but those 24 bits still contain information, even if it's just noise or zeroes. I think the world has standardized on saying converters have 24-bit resolution if, technically, they're capable of it, because there's no way of knowing the extent to which that resolution will be compromised in a final product. So I would differentiate between useable and theoretical resolution.
drewfx1: The problem for us here is that one needs to know some complicated DSP stuff to know when a given bit depth/sample rate might be really significant, completely irrelevant or somewhere in between.
I find the really significant ones are audible and obvious. For example, rendering a non-oversampling virtual instrument pulse wave at 44.1 or 192 kHz makes such a huge difference you might as well be listening to a different patch altogether (see the sketch below). I did a test file and sent it around to some people with whom I'd discussed high sample rates at some recent conferences. They were truly blown away by the difference; it's not even remotely subtle. However, most of the time it doesn't matter. With one amp sim, rendering at 96 kHz didn't change the tone because the amps were oversampled, but the reverb's imaging was different compared to 44.1 kHz. Go figure. I think there's still a lot we don't know about digital audio and human perception.
I don't play the "golden ears" thing much, but I will say that back in the early days, when my converters went from 16 bits to 20 bits with the Ensoniq PARIS system, the difference was obvious - I could really hear the difference between 16 bits of real resolution (courtesy of 20-bit operation) and the 12-14 bits of real-world resolution the 16-bit converters delivered.
My basic guideline is: if I can hear a difference, I do it, and if I don't hear a difference, I don't. That's why, even though I'm convinced beyond a shadow of a doubt that rendering at 192 kHz can deliver superior virtual instrument sound quality, I still record at 44.1 kHz. Why? Because there are very few times that rendering at 192 kHz does make a difference, and I can always export the file, render at 192 kHz, then bring it back into the 44.1 kHz project to obtain the benefits of 192 kHz rendering in a 44.1 kHz project. And of course, CbB's upsampling is all you need in most cases anyway.
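A minimal numerical version of that pulse-wave example - a sketch assuming NumPy; the 1109 Hz fundamental is an arbitrary choice that doesn't divide evenly into 44.1 kHz:

# A naive (non-band-limited) square at 1109 Hz rendered straight at 44.1 kHz
# puts its 21st harmonic (23,289 Hz) above Nyquist, so it folds back to an
# inharmonic ~20,811 Hz tone only ~26 dB below the fundamental.
import numpy as np

sr, f0 = 44_100, 1109.0
t = np.arange(sr) / sr                          # 1 second of audio
naive = np.sign(np.sin(2 * np.pi * f0 * t))     # no band-limiting at all

spec = np.abs(np.fft.rfft(naive * np.hanning(sr)))
freqs = np.fft.rfftfreq(sr, 1 / sr)

inharmonic = np.ones_like(spec, dtype=bool)     # mask off the true harmonics
for k in np.arange(f0, sr / 2, f0):
    inharmonic &= np.abs(freqs - k) > 30
peak_hz = freqs[inharmonic][np.argmax(spec[inharmonic])]
level = 20 * np.log10(spec[inharmonic].max() / spec.max())
print(f"loudest alias: {peak_hz:.0f} Hz, {level:.0f} dB below the fundamental")
# Rendered at 192 kHz instead, that harmonic sits below Nyquist and the
# alias disappears from the audio band.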
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/18 19:50:17
(permalink)
wst3: But if you have the horsepower why not work at greater word length? You only lose disk space, and maybe processing power in extreme cases. You won't hurt the audio, but you won't improve the source, and that is important.
I agree completely. The problem is that some people will claim they can hear a difference regardless of how impossible that is. Which, since almost no one does careful measurements and/or carefully controlled listening tests, will cause others to believe they can hear it too when doing casual listening comparisons. Remember a while back when there was a bug in the 64 bit engine and people were freaking out before it was patched? Over errors of which very few even made it into the 24 bit output, much less approached being audible? And is 64 bit really good enough? How do we know it doesn't sound "smoother" if we do all of our calculations at 128 bit? And 512 bit has to be better than that, right? I mean, it has more resolution and the rounding errors are lower - who could argue with that? IOW, doesn't someone who understands the relative importance of things have to point out that beyond a certain point things don't matter, and when that point has been reached?
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/18 20:23:47
(permalink)
Anderton
drewfx1
Anderton: I think the rules are different for what happens outside the box compared to what happens inside the box. If you record an acoustic guitar at 44.1 kHz with 24 bits of resolution and play it back, then it will be 24 bits no matter what.
Technically speaking, though the recording itself will be 24 bits, the actual resolution will depend on the playback level and the noise from all sources present - including the noise in the signal picked up by the mic when the recording was made. The limits of hearing could also apply if they weren't buried under the noise floor from everything else. IOW 24 bit is the theoretical limit of that particular digital format, not the real world resolution (which is always far less than 24 bits). I know that, but those 24 bits still contain information, even if it's just noise or zeroes. I think the world has standardized on saying converters have 24-bit resolution if technically, they're capable of it because there's no way of knowing the extent to which that resolution will be compromised in a final product. So I would differentiate between useable and theoretical resolution.
Yeah, I was more just trying to clarify something to make sure people didn't misconstrue what you were saying.
drewfx1: The problem for us here is that one needs to know some complicated DSP stuff to know when a given bit depth/sample rate might be really significant, completely irrelevant or somewhere in between.
Anderton: I find the really significant ones are audible and obvious. For example, rendering a non-oversampling virtual instrument pulse wave at 44.1 or 192 kHz is such a huge difference you might as well be listening to a different patch altogether.
And that's a good example of something that can be unambiguously audible, to the point that you don't have to bother with careful blind testing. And I'm betting that if I ask, what you describe hearing will fit pretty much exactly with what we would expect you to hear. I'll just say that for some things, what people describe "hearing" doesn't always match the artifact they claim to be hearing. Hmmm.... I should also add that even in this very good example of the benefits of oversampling, if the virtual instrument had been coded to use what's known as "band-limited waveforms", then the oversampling might not have been of any value. The point being that it's hard to come up with simple guidelines. Having said that....
Anderton: My basic guideline is if I can hear a difference, I do it and if I don't hear a difference, I don't.
That's a pretty good guideline. But I would suggest we just have to be careful when we start getting to the point where differences are harder to hear and/or can only be described with vague subjective language that even people of your level wouldn't be quite sure how to measure if we wanted to. Or one can do a careful double blind listening test, because unfortunately we are all basically programmed to sometimes imagine we hear stuff that isn't there. And as the saying goes, this applies to everyone, including the people who don't think it applies to them.
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
rabeach
Max Output Level: -48 dBFS
- Total Posts : 2703
- Joined: 2004/01/26 14:56:13
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/18 20:26:27
(permalink)
It's quite possible that the higher sampling frequencies are delivering superior sound because the reconstruction filters are performing more effectively at the higher sampling frequency. Possibly, the interpolation being performed between the discrete samples is more pleasing to the ear with higher-frequency sampling. The Nyquist-Shannon sampling theorem does not provide a simple, straightforward way to determine the correct minimum sample rate for a system, because it requires that the signal be perfectly bandlimited, and we cannot do that.
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/18 21:21:04
(permalink)
rabeach: The Nyquist-Shannon sampling theorem does not provide a simple, straightforward way to determine the correct minimum sample rate for a system, because it requires that the signal be perfectly bandlimited, and we cannot do that.
We don't live in a perfect world, so we don't need a perfect filter. I would say you can determine it by starting with the following questions and then using a good filter design algorithm:
1. What frequency range are we interested in (i.e. what is the pass band)?
2. How much stop band attenuation do we need?
3. How flat does the pass band need to be?
4. How much latency/CPU can we afford?
Doesn't that, plus maybe a few other details, pretty much spell it out? (Sketched in code below.)
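Those four questions map directly onto standard filter-design routines. A sketch assuming SciPy, with the spec numbers picked purely for illustration:

from scipy import signal

fs = 44_100.0        # sample rate
passband = 20_000.0  # 1. keep everything below 20 kHz
stopband = 22_050.0  # 1. (cont.) be fully attenuated by Nyquist
atten_db = 100.0     # 2./3. stop-band attenuation and ripple target

width = (stopband - passband) / (fs / 2)            # normalized transition width
numtaps, beta = signal.kaiserord(atten_db, width)   # Kaiser-window estimate
taps = signal.firwin(numtaps, (passband + stopband) / 2,
                     window=("kaiser", beta), fs=fs)

# 4. the cost: ~139 taps here, i.e. about 1.6 ms of latency at 44.1 kHz
print(f"{numtaps} taps -> {numtaps / 2 / fs * 1000:.1f} ms latency")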
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
bitflipper
01100010 01101001 01110100 01100110 01101100 01101
- Total Posts : 26036
- Joined: 2006/09/17 11:23:23
- Location: Everett, WA USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/18 22:02:12
(permalink)
rabeach: It's quite possible that the higher sampling frequencies are delivering superior sound because the reconstruction filters are performing more effectively at the higher sampling frequency. Possibly, the interpolation being performed between the discrete samples is more pleasing to the ear with higher-frequency sampling. The Nyquist-Shannon sampling theorem does not provide a simple, straightforward way to determine the correct minimum sample rate for a system, because it requires that the signal be perfectly bandlimited, and we cannot do that.
You don't need a perfect filter, just one good enough to satisfy the parameters of the human auditory system - IOW, one that can attenuate unwanted frequencies to the point of inaudibility. The filters in modern converters are more than adequate. Remember, they are no longer attempting to chop off frequencies in the kilohertz range, but in the megahertz range - a much easier task that doesn't require sharp slopes or great precision.
Fortunately, interpolation doesn't enter into it. As long as any missing frequencies are outside the range of human hearing, they're not going to be missed. You can prove that to yourself experimentally. Take a file recorded at 96 kHz, preferably one with a lot of high-frequency content, and load it into Cakewalk. Apply a low-pass filter set to 30 kHz or higher (you'll need a high-end equalizer; Pro-Q2, for example, can do 30 kHz). Insert an automation envelope to adjust the cutoff frequency of the LPF and throw in some wildly varying nodes, such that the filter wanders from 30 kHz down to 20 kHz with random stops in between. The reason you use automation is so you can listen blind, without knowing where the filter's cutoff is at any point in time. Trust me, you will not hear the filter doing anything at all.
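For anyone who wants to run that blind test outside of Cakewalk, here is a rough sketch assuming NumPy, SciPy and the soundfile library ("source_96k.wav" is a placeholder). A careful version would crossfade between segments so clicks at the boundaries don't give the filter away:

import numpy as np
import soundfile as sf
from scipy import signal

audio, sr = sf.read("source_96k.wav")        # any 96 kHz recording
rng = np.random.default_rng()

out, pos, seg = np.zeros_like(audio), 0, sr  # new random cutoff every second
while pos < len(audio):
    cutoff = rng.uniform(20_000, 30_000)     # you never know where it sits
    sos = signal.butter(8, cutoff, btype="low", fs=sr, output="sos")
    out[pos:pos + seg] = signal.sosfilt(sos, audio[pos:pos + seg], axis=0)
    pos += seg

sf.write("blind_lpf.wav", out, sr)           # now listen for the filter moving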
All else is in doubt, so this is the truth I cling to. My Stuff
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/18 22:31:45
(permalink)
bitflipper: Fortunately, interpolation doesn't enter into it. As long as any missing frequencies are outside the range of human hearing, they're not going to be missed.
I think he meant the interpolation done by the reconstruction filter. Or perhaps by the, um, interpolation filter in an upsampled DAC.
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|