The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
∞
post edited by Original Pranksta - 2018/07/20 11:58:03
|
BenMMusTech
Max Output Level: -49 dBFS
- Total Posts : 2606
- Joined: 2011/05/23 16:59:57
- Location: Warragul, Victoria-Australia
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/19 05:29:05
(permalink)
wst3 We are talking here about technical issues, and reasonably well settled ones at that.
All engineering endeavors (as opposed to actually composing/arranging/orchestrating/playing) are an exercise in optimization. All engineering endeavors, including audio engineering. In order to optimize you need to know the limits of all the elements of a system, in our case the listener, the listening environment, the signal sources, and the processing engine, which includes any code used to mix and/or render.
All before we take into account personal taste.
For recording a live source you simply don't need more than 24 bits because, as stated earlier, the A/D converter can't match that. For the same reason fixed point is more than adequate. As stated previously, you can never increase the resolution in terms of frequency response or noise floor; what you record is what you get. Sample rate is slightly more complex! If you are in a really quiet room with really good microphones, really good preamplifiers, NO analog processing (unless you want to capture all the artifacts, which by the way will limit the dynamic range and pass band further) and a great instrument (a great player doesn't hurt either) - if all of that is true then I'd probably record at 96 kHz. Otherwise 48 kHz is more than adequate.
I'd record at 96 kHz in order to preserve - in the recording - as much of the original signal as possible, or more even<G>! Keeping things as good as you can until the final mix is a great idea, and no, I don't think it is lost on us old folks (get off my lawn). Most of us "grew up" in a time when even 96 dB S/N ratio was unobtainable, and unnecessary since neither FM radio nor vinyl discs could reproduce it<G>!
Fortunately (for me) my recording space is awful - ok, that's not entirely fortunate, but it does mean I have no excuse for recording at anything more than 48 kHz/24 bit fixed point. Which means that I don't need to consider processor power or disk space (I suppose I could probably get away with 44.1/16, but even I won't go that far!)
Where things get more dicey - a lot more dicey in some cases - is when we start manipulating the audio data. All processing in the digital domain is nothing more than math and the limits of precision and accuracy are well understood.
But if you have the horsepower why not work at greater word length? You only lose disk space, and maybe processing power in extreme cases. You won't hurt the audio, but you won't improve the source, and that is important.
For my own case, I can hear the difference between different sample rates and word lengths for some plugins (most notably some of the UA stuff). Well, I can hear the difference in a proper studio, in my studio these things have no impact. So that's another issue for optimization.
With respect, without something to compare I really can't comment on the tracks at 1331.space with respect to this discussion. Personally I found many of the tracks to be a bit too busy for my tastes, and in some cases perhaps over processed (this coming from me is almost comical!).
TL;DR All this to say, if you are able to work with at least one order of magnitude greater resolution than you need you will be just fine.
I have ADHD - what you're hearing as "busy" is just my head lol. I can make empty minimalist tracks too; it's just that at the moment I'm in Hulk mode, quite literally, as I'm building my Hulk piece as we speak.
What happens, though, is that when you process at a lower bit depth, the fatness of the tracks I've managed to create - and only on the first few tracks on my web page, because I've only perfected my technique in the last year - becomes thinner. And you can hear that thinness on a 24-bit master. It's small, barely audible, but it's enough to convince me of the importance of 64-bit fp masters.
Again, the OP was about rendering to 32-bit, but what everyone is missing, and I keep saying it, is this: create 24-bit wave masters for general listening and even distribution, but for the creation of MP3 and other formats, including visual ones like MP4, render a non-dithered 64-bit master and keep that master for future proofing. As I think Craig has been saying, you process all those lovely time-based effects, not to mention all the other effects, and as with sample rate, we know the tails of these effects can get lost in the last few bits, and indeed get lost at lower sample rates. So you do all this processing at 64-bit, and how do we fit 64 into 24? We truncate 40 bits. Now, as we know, a compression algorithm doesn't discriminate in what information it culls, so why compress a truncated file when you can compress a 64-bit file? You're basically creating a hi-resolution MP3 or MP4. And again, I prefer the MP4 created from a 64-bit master to the 24-bit wave file. The difference is negligible - we're talking a mm of warmth and fatness - but the definition in the time-based effects becomes murkier and those time-based effects start to sound slightly metallic. The SoundCloud stream created from a 64-bit 48 kHz master suffers in a similar way, but it's ten times better than a 16-bit or even a 24-bit SoundCloud stream.
Remember - and I know I rabbit on and can probably be quite confusing - a 64-bit fp audio file is nothing more than a 24-bit audio file wrapped in a 64-bit file, but once the 24-bit file is wrapped in the 64-bit file, it becomes a 64-bit file. So everyone is right: you can't play it on anything except Sonar, where Sonar does its magic. But what everyone is missing is that the extra bits allow the audio to be treated the same way as analogue, including digital varispeed. For example, sometimes I will bounce out a 64-bit fp premaster to manipulate in Reaper.
It's these subtle avant-garde techniques that have been missing from the digital realm, and they were essential to creating some of the very best audio-visual and musical art of the 20th century. Think "Strawberry Fields" and "Tomorrow Never Knows" by The Beatles - both rely on being able to manipulate the tape medium in a way more akin to sculpture and painting. It's the misunderstanding of the digital medium that is causing this debate now. The ingrained ideas of the old digital paradigm are causing a lot of confusion too, as is the lack of true boffins. Digital was crap, let's not forget this. I mean, PCM - pulse code modulation - was created in the 1930s, off the top of my head, and I think it was invented for telephony too, so it was never designed as a storage medium or anything like what PCM and its associated tech are used for now. No one really understands that it's the perfect storage medium for ethereal materials, because it adds nothing to the source material. Now that we understand some of these ideas, analogue emulation has come of age. It's still a long way off being perfect - to my ears there are only 2 or 3 companies that know how to create accurate emulations - but some of us have worked out that you need to add back in what we once took for granted in the analogue realm.
I'm still working out all the different languages, so I'm sorry if sometimes I go off script, don't make much sense, or worse, confuse ideas and terms - but I have studied these ideas extensively and I know that 64-bit fp solves everything. It creates a hi-resolution audio recording format, and possibly a hi-resolution MP3/MP4 format, and it future proofs your masters too by storing a master in an undithered format and container file. I would have to dedicate the next couple of years to going through all the data and science to really pull together all the emerging concepts of 64-bit fp. I'm good, but that's not my trip. I think I've explained the reasons for 64-bit fp pretty well now. And I've also explained the reasons for plain old 24-bit; there are times when you should stick with vanilla.
Now, I'm sorry for the wanky art stuff... I hate wanky art stuff, and yes, I have a Master of Philosophy in Fine Farts. It was an accident and I should have gone into composition - I'm trying to rectify this by doing a PhD in composition. But what's really missing from almost everything in contemporary culture is an understanding of aesthetics and mediums. Thanks to postmodernist wank, every button-pushing iFool owner now thinks they can create 'art'. They cannot lol. When creating, every decision you make needs to be thought through, and the possible outcomes of those decisions also need to be thought through. For me, I've chosen a 64-bit medium and the analogue emulation aesthetic. In 10 years' time this may be considered hi-resolution audio and 24-bit may be considered lo-fi, but neither is wrong and both are right, so long as you own the choice you made when you created the work. Sorry, I might have digressed... I've been working late on my Incredible Bulk (not Hulk lol) parody :). But hopefully some of you might find my ideas useful. Ben.
|
mettelus
Max Output Level: -22 dBFS
- Total Posts : 5321
- Joined: 2005/08/05 03:19:25
- Location: Maryland, USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/19 07:45:10
(permalink)
I am still trying to glean the context of the 64-bit argument (a lot of TL/DR going on). To pass through a DAC it has to be the bit depth the DAC supports (either in a DAW, or Windows will do it for you). I get the feeling you are stating you are listening to 64-bit audio (??), so am most curious about what DACs you are listening through?
Computation precision and what makes it to headphones/speakers are not one and the same.
ASUS ROG Maximus X Hero (Wi-Fi AC), i7-8700k, 16GB RAM, GTX-1070Ti, Win 10 Pro, Saffire PRO 24 DSP, A-300 PRO, plus numerous gadgets and gizmos that make or manipulate sound in some way.
|
msmcleod
Max Output Level: -72 dBFS
- Total Posts : 920
- Joined: 2004/01/27 07:15:30
- Location: Scotland
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/19 09:23:27
(permalink)
The mistake is to mix up the 16-bit / 24-bit setting used in the ADC / DAC with the 32-bit / 64-bit rendering format. Although they're related, they're subtly different.
Once you've recorded a 16- or 24-bit recording into CbB (which is lots of 16-bit or 24-bit integer values respectively, i.e. whole numbers), each integer is converted internally in CbB to a 32-bit (or 64-bit) floating point number. The reason is that this limits the rounding errors associated with mixing tracks and processing audio when performing all the crazy DSP maths involved, as floating point numbers can deal with fractions. Once all the processing/mixing of tracks has taken place, the audio is converted back to 24-bit integers to send to your sound card.
64-bit floating point numbers take up twice as much buffer space for internal processing, but as their precision is higher, there are fewer rounding errors during processing, hence the better sound quality.
When you mix down (usually to 16-bit 44.1 kHz for CD), it has to take the 64-bit floating point numbers and convert them into 16-bit integers. Rounding errors will occur at this point, but the dithering algorithms help to minimise the audible effect (especially when going from a different sample rate, e.g. 96 kHz to 44.1 kHz).
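Mark's description maps to a few lines of Python. This is only a sketch of the generic pipeline (integer samples in, float math in the middle, dithered integers out), not CbB's actual code; the function names are made up for illustration:

```python
import random

def int24_to_float(sample):
    """Map a signed 24-bit integer sample to a float in [-1.0, 1.0)."""
    return sample / 2 ** 23

def float_to_int16_dithered(x):
    """Quantize a float sample to signed 16-bit with TPDF dither.

    TPDF dither (the difference of two uniform random values) spans
    +/- 1 LSB and decorrelates the rounding error from the signal.
    """
    dither = random.random() - random.random()
    scaled = x * (2 ** 15 - 1) + dither
    return max(-2 ** 15, min(2 ** 15 - 1, round(scaled)))

# A 24-bit sample passes through a float gain stage and back to 16-bit:
sample_24 = 4_194_304                  # a sample at half of full scale
as_float = int24_to_float(sample_24)   # exactly 0.5 -- no loss going in
mixed = as_float * 0.5                 # a fader at roughly -6 dB
sample_16 = float_to_int16_dithered(mixed)
```

The float stage is lossless on the way in (every 24-bit integer fits exactly in a double); the only rounding happens at the final conversion, which is where the dither earns its keep.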
Mark McLeod Cakewalk by BL | ASUS P8B75-V, Intel I5 3570 16GB RAM Win 10 64 + Win 7 64/32 SSD HD's, Scarlett 18i20 / 6i6 | ASUS ROG GL552VW 16GB RAM Win 10 64 SSD HD's, Scarlett 2i2 | Behringer Truth B2030A / Edirol MA-5A | Mackie MCU + C4 + XT | 2 x BCF2000, Korg NanoKontrol Studio
|
Bristol_Jonesey
Max Output Level: 0 dBFS
- Total Posts : 16775
- Joined: 2007/10/08 15:41:17
- Location: Bristol, UK
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/19 11:10:26
(permalink)
mettelus I am still trying to glean the context of the 64-bit argument (a lot of TL/DR going on). To pass through a DAC it has to be the bit depth the DAC supports (either in a DAW, or Windows will do it for you). I get the feeling you are stating you are listening to 64-bit audio (??), so am most curious about what DACs you are listening through?
Computation precision and what makes it to headphones/speakers are not one and the same.
He isn't listening to 64-bit audio. There is no converter in the world operating at that depth.
CbB, Platinum, 64 bit throughout | Custom built i7 3930, 32Gb RAM, 2 x 1Tb Internal HDD, 1 x 1TB system SSD (Win 7), 1 x 500Gb system SSD (Win 10), 2 x 1Tb External HDD's, Dual boot Win 7 & Win 10 64 Bit, Saffire Pro 26, ISA One, Adam P11A
|
The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
∞
post edited by Original Pranksta - 2018/07/20 11:58:16
|
Cactus Music
Max Output Level: 0 dBFS
- Total Posts : 8424
- Joined: 2004/02/09 21:34:04
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/19 16:12:55
(permalink)
I am still in the habit of working at 24bit 44.1kHz and rely on the music to seem interesting enough to distract listeners from a preoccupation with technical matters.
This!
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/19 17:01:20
(permalink)
mettelus I am still trying to glean the context of the 64-bit argument (a lot of TL/DR going on). To pass through a DAC it has to be the bit depth the DAC supports (either in a DAW, or Windows will do it for you). I get the feeling you are stating you are listening to 64-bit audio (??), so am most curious about what DACs you are listening through?
Computation precision and what makes it to headphones/speakers are not one and the same.
The argument is that if you're performing a number of calculations then the calculation errors due to limited precision that occur in the lowest bits (i.e. "rounding errors") can accumulate and work their way up to become audible. This is a real concern that's well understood and not in any way controversial. It's a reason why programmers use larger bit depths for certain operations that they know can or will cause problems. The controversy is that some believe for whatever reason that if they know about these errors then they must be audible to them. I would say that the correct answer is, "it depends". And when you try to get beyond that it gets technical fast. And technical arguments tend to be not particularly compelling to people who aren't technical and "I heard it!" arguments aren't compelling to technical types, so we go on. And on. And on. Sorry (on a number of counts).
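The "it depends" answer can be made concrete. A small sketch (illustrative numbers, not any particular plugin chain): push a sample through the same gain-up/gain-down round trip thousands of times, rounding to 32-bit after every operation the way a long single-precision chain would, and compare against doing the same in 64-bit:

```python
import struct

def to_f32(x):
    """Round a 64-bit Python float to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

x32 = x64 = 0.1
for _ in range(10_000):
    # Single precision: round to 32-bit after each multiply and divide
    x32 = to_f32(to_f32(x32 * 1.01) / 1.01)
    # Double precision: same arithmetic, native 64-bit rounding
    x64 = (x64 * 1.01) / 1.01

err32 = abs(x32 - 0.1)   # accumulated single-precision error
err64 = abs(x64 - 0.1)   # accumulated double-precision error
# err32 ends up many orders of magnitude larger than err64
```

Whether err32 is *audible* still depends on how far below the signal it sits, which is exactly the disputed part.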
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
∞
post edited by Original Pranksta - 2018/08/01 11:48:56
|
gswitz
Max Output Level: -18.5 dBFS
- Total Posts : 5694
- Joined: 2007/06/16 07:17:14
- Location: Richmond Virginia USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/20 13:36:03
(permalink)
Noise sources...
- Fan in the room
- Mic
- Preamp
- DAC
I think noise alters values at all amplitudes but proportionately. This is why it isn't really distracting when listening to a strong recording.
I also want to point out that floating point is more precise as you approach zero [get quieter]. Someone who is working with low audio signals might benefit more from fp than someone processing a higher signal.
DACs are equally precise at all amplitudes, which is why float doesn't help them.
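This float-vs-fixed contrast can be checked directly in Python (math.ulp needs Python 3.9+; the 1e-6 level is just an example of a quiet signal):

```python
import math

# Floating point: the gap between adjacent representable values (one
# "ulp") shrinks with magnitude, so quiet signals keep the same
# *relative* precision as loud ones.
gap_loud = math.ulp(1.0)     # spacing near full scale
gap_quiet = math.ulp(1e-6)   # spacing near silence: vastly smaller

# Fixed point: the gap is 1 LSB everywhere, loud or quiet.
lsb_24bit = 1 / 2 ** 23      # step of a 24-bit signal scaled to [-1, 1)
```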
StudioCat > I use Windows 10 and Sonar Platinum. I have a touch screen. I make some videos. This one shows how to do a physical loopback on the RME UCX to get many more equalizer nodes.
|
The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
∞
post edited by Original Pranksta - 2018/08/01 11:49:05
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/20 16:53:37
(permalink)
Noise is just another signal mixed in with your signal, at whatever level it's at. Consider setting up mics and recording a complete band: the mics pick up vocals, guitar, keys, bass, drums, and noise.
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
∞
post edited by Original Pranksta - 2018/08/01 11:49:16
|
The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
∞
post edited by Original Pranksta - 2018/08/01 11:49:27
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/20 19:07:12
(permalink)
Original Pranksta I juggled a few numbers in my head this morning. I made several casual assumptions, any of which may need correction:
If 16bit files can store 96dB of dynamic range...
If a human is hard pressed to identify a 1dB change in amplitude - see for yourself:
http://harmoniccycle.com/hc/sounds/wav/tones-1dB/4000Hz.wav
http://harmoniccycle.com/hc/sounds/wav/tones-1dB/12000Hz.wav
If there are 32,767 integers in a 16bit audio file...
And there are 32,767 integers / 96dB = 341 integers / decibel in a logarithmic system. In other words the *granularity* of a 16bit file is something like 1/341 of a decibel.
It doesn't work quite that way. Decibels are logarithmic expressions of ratio, so you have to remember that and treat them accordingly. And there are 65,536 possible values in 2^16 (the sign bit still counts towards the number of possible values).
Decibels as used in audio equate +20dB to a ratio of 10x (and -20dB to 0.1x). The formulas to convert between ratio and dB and back are:
dB = 20 * log10(ratio)
ratio = 10^(dB/20)
From this, if you do the math for a ratio of 2 (or 0.5), you'll get the well known ~6dB per bit resolution figure (because each bit doubles the number of possible values, and 2x = +6.0206dB). Note here that +40dB is NOT a ratio of 20x (i.e. 2 * 10x); it's 100x (i.e. 10x * 10x). And thus +60dB = 1,000x, +80dB = 10,000x and +100dB = 100,000x. You might note here that 96dB (i.e. 6 * 16) isn't far below 100dB, and 65,536 (i.e. 2^16) isn't far below 100,000. The ratio of the largest signal to the smallest signal = 65,536:1 = 96.3296 dB = 6.0206 * 16.
Anyway, what all of this means is that, using your example, the difference between 32767 and 32766 is indeed a tiny fraction of a dB, whereas the difference between 2 and 1 is 6.0206 dB. You have to be careful not to confuse sample values - which have a given ratio to other sample values - with dB's, which are an expression of that ratio.
It's tricky, and you're on the right track in your thinking, but you'll find it's often useful and easier to understand if you do certain calculations using the ratios instead of dB's - even if only to check your work. If it's not expressed in dB's, you probably need to be thinking in terms of ratios when you do your calculations, and then convert the result to dB's.
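The two formulas above drop straight into Python, which makes it easy to check the figures in this post:

```python
import math

def ratio_to_db(ratio):
    """Amplitude ratio -> decibels: dB = 20 * log10(ratio)."""
    return 20 * math.log10(ratio)

def db_to_ratio(db):
    """Decibels -> amplitude ratio: ratio = 10^(dB/20)."""
    return 10 ** (db / 20)

per_bit = ratio_to_db(2)          # each bit doubles the range: ~6.0206 dB
range_16 = ratio_to_db(2 ** 16)   # 65,536:1 -> ~96.33 dB
step_low = ratio_to_db(2 / 1)     # sample value 2 vs 1: a full ~6 dB
step_high = ratio_to_db(32767 / 32766)   # a tiny fraction of a dB
```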
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
gswitz
Max Output Level: -18.5 dBFS
- Total Posts : 5694
- Joined: 2007/06/16 07:17:14
- Location: Richmond Virginia USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/20 21:54:51
(permalink)
StudioCat > I use Windows 10 and Sonar Platinum. I have a touch screen. I make some videos. This one shows how to do a physical loopback on the RME UCX to get many more equalizer nodes.
|
The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
∞
post edited by Original Pranksta - 2018/08/01 11:49:37
|
gswitz
Max Output Level: -18.5 dBFS
- Total Posts : 5694
- Joined: 2007/06/16 07:17:14
- Location: Richmond Virginia USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/20 22:29:18
(permalink)
original pranksta,
The original recording may not benefit from float as it is recorded or if you convert it directly, but when you add reverb it could.
All the precision of the float will be used all the time. So when the signal is strong, the numbers have to represent the full signal. As the band stops and you have a reverb tail, all of the float's precision is used for that reverb. There are no leading zeros: the most significant bit of the mantissa is presumed to be a 1 (which is why zero needs a special representation). Making this assumption gives 1 more bit of resolution for every number.
So float is a little weird to describe for audio. Points near zero are vastly more precise than points near 1, in terms of decimal places represented.
Because of this, with 64-bit floats you can probably reduce the level of every track to 1/2, mix them, add the gain back, and actually lose none of the original precision.
Reducing the input gain on a bus loses nothing that will be exported in a final 24 bit wave if your levels are at all reasonable.
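The halve-then-restore claim is actually even stronger than "probably" for a single track: multiplying by a power of two changes only a float's exponent, not its mantissa, so the round trip is bit-exact as long as nothing underflows or overflows (which audio-range values never do). A quick check:

```python
import random

random.seed(0)
tracks = [random.uniform(-1.0, 1.0) for _ in range(1000)]

# Halve every sample (a fader at -6 dB), then double back:
restored = [(x * 0.5) * 2.0 for x in tracks]

# restored is bit-identical to the original -- no precision lost
```

(Summing the halved tracks into a mix can still round, of course; it's the power-of-two scaling itself that is lossless.)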
StudioCat > I use Windows 10 and Sonar Platinum. I have a touch screen. I make some videos. This one shows how to do a physical loopback on the RME UCX to get many more equalizer nodes.
|
The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
∞
post edited by Original Pranksta - 2018/08/01 11:49:48
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/21 02:46:37
(permalink)
Original Pranksta The exercise demonstrated to me why it is difficult to find an accepted specification for the threshold of recognition regarding changes in sound levels. (I had made a very generalized statement when I mentioned 1dB in a previous post) Disregarding the variance of sensitivity across the frequency spectrum, it seems fair to say that a 6dB change at low amplitude levels is almost impossible to recognize while a 6dB change at high amplitudes will seem obvious.
In general, the absolute limit for perception of volume changes is somewhere between .1 and .3 dB. I don't remember the details, but this is for a reasonably loud listening level and we are less sensitive to changes at lower listening levels. In other words the 6dB change between 00001 and 00002 is, in my opinion, impossible to hear, so any subtlety lost due to lack of precision must also be impossible to hear.
But first, can you even hear the 00002 at all? Isn't what we really want to talk about first whether it's possible to hear something at -90dB (or whatever)?
The first question we can ask is, "Where is 0dBFS, in terms of dB SPL?". If we know where 0dBFS is, then we just subtract 90 from that and compare the result to the noise floor in the listening environment and/or the threshold of hearing. IOW, if you're playing back your 16-bit audio loud enough that the absolute peaks are at, say, 110dB SPL (i.e. loud), then -90dBFS = 20dB SPL, and so on. I suspect a lot of people might be surprised how loud their 16-bit quantization error plus dither plays back compared to the level of noise in their room. The point is: playback level matters.
In terms of precision: if you do, let's say, 100 billion calculations in sequence on the same samples, first with 32-bit floats and then with 64-bit doubles, I'll be happy to put some money against you on that one. If you do 20 calculations, then no. So that matters, and sometimes double precision is actually necessary. The problem is one has to know:
1. Where do the errors start accumulating from?
2. How do they accumulate?
3. How many calculations are we doing?
4. What level do we have to keep them below?
The last one people can argue about a little (within reason), but I would submit that the first three are often either knowable or measurable (perhaps with a little ingenuity) for a given task. So do we need to speculate?
Another interesting tidbit to play with: let's say we have 2 similar noise sources, but not identical copies of the same noise. First, let's say they're both white noise of the exact same level. We might know that if they're the same level, then when we add them together the peak level will go up by 6dB and the RMS level (the one we care about from a human perspective) by about +3dB. (If one tries it and gets +6dB for both peak and RMS, it means you either copied the noise or have a worthless noise generator.)
But what if one of the noise sources is -16dB down compared to the louder one? How much does the RMS level go up by then when we add them together?
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
gswitz
Max Output Level: -18.5 dBFS
- Total Posts : 5694
- Joined: 2007/06/16 07:17:14
- Location: Richmond Virginia USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/21 03:37:10
(permalink)
Do we need to know the amplitude of the first noise source to answer the question, teach?
StudioCat > I use Windows 10 and Sonar Platinum. I have a touch screen. I make some videos. This one shows how to do a physical loopback on the RME UCX to get many more equalizer nodes.
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/21 05:47:34
(permalink)
gswitz Do we need to know the amplitude of the first noise source to answer the question, teach?
No. You just need to know that they're ratios and the formula to calculate the RMS for this situation (which is the only part that hasn't been discussed here yet - it's the square root of the sum of the squares). You just regard the larger one as 0dB (a ratio of 1:1). The idea is to find the ratio between the two added together and just the larger one alone.
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
∞
post edited by Original Pranksta - 2018/08/01 11:49:56
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/21 17:20:38
(permalink)
Let's say you have 2 numbers, x and y, and y is expressed as a ratio compared to x: y = 0.1 * x. The 0.1 was randomly chosen in this case - all you need to do is convert -16dB (or whatever) into a ratio. Then:
What is x + y, if x = 1? (x = 1 because 0dB = a 1:1 ratio.) That's peak.
What is the square root of (x squared + y squared)? That's RMS.
But the point of all of this is that if you have 2 similar noise sources, the quieter one simply becomes completely and absolutely irrelevant once it falls a perhaps surprisingly small amount below the louder one - it contributes so little to the overall noise that it's meaningless whether you get rid of it or not. You will hear the exact same amount of noise even if you remove it completely.
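Wrapping those two steps in a function answers the earlier -16dB question directly:

```python
import math

def rms_gain_db(db_below):
    """RMS increase (in dB) from adding an uncorrelated noise source
    sitting `db_below` dB under the main one (taken as 0 dB, ratio 1)."""
    r = 10 ** (-db_below / 20)                 # quieter source as a ratio
    return 20 * math.log10(math.sqrt(1 + r * r))

equal   = rms_gain_db(0)    # two equal sources: ~ +3.01 dB RMS
down_16 = rms_gain_db(16)   # 16 dB down: ~ +0.11 dB, essentially nothing
```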
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/21 17:31:55
(permalink)
And we were talking about a 1dB difference before. If you solve for how many dB's y must be below x to result in a 1dB difference, you'll find that by the time the quieter noise is about -6dB down, getting rid of it completely will only reduce the overall noise by about 1dB RMS.
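Solving that inverse problem takes two lines: the total drops by 1 dB when removing the quieter source, i.e. when sqrt(1 + r^2) itself corresponds to 1 dB, so r^2 = 10^(1/10) - 1:

```python
import math

# Ratio of the quieter source at the 1 dB threshold:
r = math.sqrt(10 ** (1 / 10) - 1)
threshold_db = -20 * math.log10(r)   # ~5.87 dB below the louder source
```

Which confirms the "about -6dB" figure above.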
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
gswitz
Max Output Level: -18.5 dBFS
- Total Posts : 5694
- Joined: 2007/06/16 07:17:14
- Location: Richmond Virginia USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/21 18:39:43
(permalink)
Drew,
I'm now wondering about the case of 2 mics on one acoustic guitar...
Both will have random noise from the pres and DAC (+3). Room noise will be shared (+6). Guitar shared (+6), in phase.
So the signal to noise ratio might improve enough to notice? Maybe?
Practice says I don't notice. My pres and dac are mighty quiet.
In part I'm asking if this is a proper application of the learning.
StudioCat > I use Windows 10 and Sonar Platinum. I have a touch screen. I make some videos. This one shows how to do a physical loopback on the RME UCX to get many more equalizer nodes.
|
bitflipper
01100010 01101001 01110100 01100110 01101100 01101
- Total Posts : 26036
- Joined: 2006/09/17 11:23:23
- Location: Everett, WA USA
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/21 20:11:43
(permalink)
Original Pranksta ...It has been many decades since I slid a log...
And it's been decades since I've heard anyone use that expression! Geoff, my kneejerk reaction - without actually thinking about it much - is that two mics on a source would worsen the overall S/N ratio. But not by a lot, because though their noise is additive, so is their signal. Of course, the signals will likely not be identical (e.g. a Mid/Side mic setup) and therefore not a straight up 1+1=2 scenario. Noise, being random, could be far more cumulative.
All else is in doubt, so this is the truth I cling to. My Stuff
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/21 20:34:18
(permalink)
gswitz Drew,
I'm now wondering about the case of 2 mics on one acoustic guitar...
Both will have random noise from the pres and dac (+3). Room noise will be shared. (+6) Guitar shared (+6) in phase
So the signal to noise ratio might improve enough to notice? Maybe?
Practice says I don't notice. My pres and dac are mighty quiet.
In part I'm asking if this is a proper application of the learning.
First thought: let's say you had one mic and split it into 2 channels. What would that do? If the mic pre/DAC's noise floor is low enough, then that part would be irrelevant based on the exercise above, so it's really no change. You can't change SNR by just splitting and recombining things, right?
Which leads us to the question of whether phase addition vs. cancellation with two mics would be different for the signal vs. the room noise. The most significant case I can think of is miking a tuning fork (because it's a very pure tone):
Mics in phase at the tuning fork's frequency = much better SNR.
Mics (mostly) out of phase at the fork's frequency = much worse SNR.
For anything other than something like a tuning fork, I'd think it'd be a contest between relative phase cancellation/reinforcement of the noise and of the signal. If you can get your signal largely in phase, I'd expect to see a little better SNR.
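The "in phase" case is easy to simulate. A toy model (a shared sine standing in for the guitar, independent Gaussian noise standing in for each preamp; not a real room, where the shared room noise would complicate things as discussed above):

```python
import math
import random

random.seed(1)
N = 100_000
RATE = 48_000

def rms(xs):
    """Root mean square of a list of samples."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# Shared (correlated) signal plus independent preamp noise on each mic:
sig = [math.sin(2 * math.pi * 440 * n / RATE) for n in range(N)]
noise_a = [random.gauss(0, 0.1) for _ in range(N)]
noise_b = [random.gauss(0, 0.1) for _ in range(N)]

# One mic alone:
snr_one = 20 * math.log10(rms(sig) / rms(noise_a))

# Two mics summed: the in-phase signal doubles (+6 dB) while the
# uncorrelated noise only grows ~+3 dB, so SNR improves by ~3 dB:
sum_sig = [2 * s for s in sig]
sum_noise = [a + b for a, b in zip(noise_a, noise_b)]
snr_two = 20 * math.log10(rms(sum_sig) / rms(sum_noise))
```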
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|
The Maillard Reaction
Max Output Level: 0 dBFS
- Total Posts : 31918
- Joined: 2004/07/09 20:02:20
- Status: offline
∞
post edited by Original Pranksta - 2018/08/01 11:50:08
|
drewfx1
Max Output Level: -9.5 dBFS
- Total Posts : 6585
- Joined: 2008/08/04 16:19:11
- Status: offline
Re: Questions RE default 32 bit rendering bit depth
2018/07/22 18:04:16
(permalink)
You need to convert the -16dB to a ratio first. And at the end, you need to convert the answer back into dB's.
In order, then, to discover the limit of deepest tones, it is necessary not only to produce very violent agitations in the air but to give these the form of simple pendular vibrations. - Hermann von Helmholtz, predicting the role of the electric bassist in 1877.
|