
Distinctive benefits of 24bit recording?

http://www.hydrogenaudio.org/forums/index....st=#entry583387

1. The thread linked above says something like we can convert 16-bit files into 24-bit files. Can I get an explanation of how this works? Magnifying a low-resolution image to a large size isn't the same thing as having a high-resolution image, and upsampling a 44.1 kHz recording to 192 kHz is much the same. Is bit depth in a different category?

2. What are "quasi-linear" and "nonlinear" processes? Is only synthetic stuff nonlinear, or can natural processing like convolution reverb come off as nonlinear as well?

3. I read somewhere that microphones can only do 18 bits at most, and that's why the CDs released back in the 1990s used and advertised 20-bit recording. So basically they are saying recording at 24 bits is a waste of storage space. Is this true? Many of the mics used to record the expensive sample libraries are not built with the newest technology; in fact, many of them appear to be vintage mics that cost tens of thousands of dollars. Can devices designed back in the 1940s take advantage of a 24-bit recorder?

Thanks.

Distinctive benefits of 24bit recording?

Reply #1
Converting 16 bit to 24 bit is simply a format change. The resulting lower order eight bits will be zeros. This makes NO change to the audio data. It can be helpful if the audio is to undergo extensive DSP, as in mixing and mastering. The quantization errors will be much smaller, reducing distortion and noise created by the transforms. Actually, most such processing is done in floating point, 32 or 64 bit, rather than 24 bit integer, but the principle is the same. As far as the finished product goes, however, few releases utilize anything close to 16 bits, and the available evidence is that there are no recordings where anything more than 16 bits is audible.
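
Just to picture what that format change amounts to, here is a minimal sketch of my own (numpy, not tied to any particular editor): pad each 16-bit sample with eight zero bits at the bottom.

import numpy as np

x16 = np.array([0, 1, -1, 32767, -32768], dtype=np.int16)   # a few 16-bit sample values

# "Convert" to 24 bit by shifting left 8 bits; the low-order byte of every
# sample is now zero, so no information has been added or removed.
x24 = x16.astype(np.int32) << 8

print(x24)          # the same values, scaled by 256
print(x24 & 0xFF)   # all zeros: the padded bits carry nothing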

"microphones can only do 18 bit at most" is nonsense. The bit depth is simply the number of amplitude levels from nothing up to maximum signal level. All recordings made with 24 bit converters have nothing but noise in the lower order bits. Where the signal sets in the 24 bit range is a matter of adjusting the analogue chain amplification.

The best ADCs can capture about 20 bits before internal noise pretty much drowns out the signal. Very good quality in the preceding analogue chain, as well as exceptional recording conditions, are necessary to actually approach 20 bits of measurable signal in the recording.

Distinctive benefits of 24bit recording?

Reply #2
Converting 16 bit to 24 bit is simply a format change. The resulting lower order eight bits will be zeros. This makes NO change to the audio data. It can be helpful if the audio is to undergo extensive DSP, as in mixing and mastering. The quantization errors will be much smaller, reducing distortion and noise created by the transforms. Actually, most such processing is done in floating point, 32 or 64 bit, rather than 24 bit integer, but the principle is the same. As far as the finished product goes, however, few releases utilize anything close to 16 bits, and the available evidence is that there are no recordings where anything more than 16 bits is audible.

I meant for it to be used in digital sequencing, not just album mastering. The sequencers/players are going to mix and add effects (reverb etc.) to the instrument samples. A lot of sample libraries that I use are recorded at 44.1/16, while some are recorded at resolutions as high as 192/24. When we use 32-bit processing, does it negate the advantage of 24-bit inputs?

Distinctive benefits of 24bit recording?

Reply #3
Converting 16 bit to 24 bit is simply a format change. The resulting lower order eight bits will be zeros. This makes NO change to the audio data. It can be helpful if the audio is to undergo extensive DSP, as in mixing and mastering. The quantization errors will be much smaller, reducing distortion and noise created by the transforms. Actually, most such processing is done in floating point, 32 or 64 bit, rather than 24 bit integer, but the principle is the same. As far as the finished product goes, however, few releases utilize anything close to 16 bits, and the available evidence is that there are no recordings where anything more than 16 bits is audible.

I meant for it to be used in digital sequencing, not just album mastering. The sequencers/players are going to mix and add effects (reverb etc.) to the instrument samples. A lot of sample libraries that I use are recorded at 44.1/16, while some are recorded at resolutions as high as 192/24. When we use 32-bit processing, does it negate the advantage of 24-bit inputs?


Assuming the material is very well recorded, having 24 bit inputs gives you higher dynamic range, at least in theory.  In practice I doubt most of your 24 bit material actually has > 100dB dynamic range.  So I think the short answer is something like "there is not much of an advantage to be negated".

Distinctive benefits of 24bit recording?

Reply #4
Linearity is a very specific mathematical concept in signal processing/electrical engineering. But in this specific context, I think the most pertinent points are 1) a "nonlinear" operation on an audio signal synthesizes new frequency components that did not previously exist; and 2) for the vast majority of nonlinear operations you will encounter -- static distortion, clipping, crossover distortion, etc. -- the new frequency components occur at sums of the existing frequency components (harmonics and intermodulation products). These new frequencies can land above Fs/2, and if they do, they will be aliased.

So upsampling is commonly performed before these "nonlinear" operations, to ensure that any distortion products are not aliased.

Generally, reverb, eq, mixing and gain changes are linear, while compression/limiting, modulation, and "static distortion" are nonlinear.
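
To make the aliasing point concrete, here is a small sketch of my own (numpy; the 15 kHz tone and clip level are just made-up test values): hard-clip a tone near the top of the band at 44.1 kHz and one of its distortion products folds back into the audible range.

import numpy as np

fs = 44100
t = np.arange(fs) / fs                      # one second
x = 1.5 * np.sin(2 * np.pi * 15000 * t)     # 15 kHz tone, deliberately over full scale

clipped = np.clip(x, -1.0, 1.0)             # nonlinear operation: hard clipping

# Clipping a sine adds odd harmonics. The 3rd harmonic of 15 kHz is 45 kHz,
# above Fs/2 = 22.05 kHz, so at 44.1 kHz it aliases down to 45000 - 44100 = 900 Hz.
spectrum = np.abs(np.fft.rfft(clipped * np.hanning(len(clipped))))
freqs = np.fft.rfftfreq(len(clipped), d=1 / fs)
print(freqs[np.argmax(spectrum[freqs < 5000])])   # strongest component below 5 kHz: ~900 Hz

Running the clip at, say, 4x the sample rate and low-pass filtering before coming back down keeps that 45 kHz product where a filter can still remove it, which is the reason for the upsample-first approach.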

Distinctive benefits of 24bit recording?

Reply #5
Quote
A lot of sample libraries that I use are recorded at 44.1/16, while some are recorded at resolutions as high as 192/24. When we use 32-bit processing, does it negate the advantage of 24-bit inputs?
No.  If there is any "advantage", 32-bit floating-point is higher resolution and it doesn't hurt.

There are "mathematical" reasons for using floating-point...  with DSP you can be working with very-large and very-small numbers, even though the "answer" (sample values) remains within a 16 or 24 bit integer range.  (And, sometimes the "answer" goes out of the integer range too...  Floating-point  gives you a chance to bring the levels back to normal before saving.)

If you're working with integers, and you cut the volume of a 24-bit signal in half, now you've "only" got 23 bits worth of information. With integers, if you re-boost the signal level you still have only 23 bits of resolution. With 32-bit floating point, you've essentially got infinite dynamic range and you don't have this issue. (You can mix & adjust levels without losing resolution.)
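
A tiny numpy sketch of that halve-and-reboost point (mine, deliberately simplified to a single step with no dither):

import numpy as np

x = np.arange(-32768, 32768, dtype=np.int32)               # every 16-bit sample value

int_path   = (x // 2) * 2                                   # halve then re-boost in integers
float_path = np.round((x / 2.0) * 2.0).astype(np.int32)     # same thing in floating point

print(np.max(np.abs(int_path - x)))     # 1 -> odd samples lost their bottom bit for good
print(np.max(np.abs(float_path - x)))   # 0 -> the floating-point round trip is exact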

Or, you could reduce the level of a 16-bit sample, and if you save it in 24 or 32 bits you won't lose resolution. (But, unless you are re-boosting the signal later it doesn't matter! i.e. You can reduce the signal level to the point where you have only 4 bits left, and the signal & noise will be so low that you won't hear the lack of resolution. It's only if you re-boost the signal that you might hear the loss of resolution/quality.)

You can mix a 24-bit & 16-bit file together, and the 24-bit part will retain its resolution. (Except after mixing, which is done by addition, you normally end up reducing the level, which means you lose the low-level bits anyway.)


If you add reverb to a 16-bit signal, you can end up with more than 16 bits of data (i.e. as the reverb fades out). Even a simple fade-out or volume change can "take advantage" of additional bits that didn't exist in the original. (But, you'll never hear it.)

Distinctive benefits of 24bit recording?

Reply #6
Assuming the material is very well recorded, having 24 bit inputs gives you higher dynamic range, at least in theory.  In practice I doubt most of your 24 bit material actually has > 100dB dynamic range.  So I think the short answer is something like "there is not much of an advantage to be negated".

For which types of music will 24-bit start showing its advantage? Beethoven loud?

Quote
Generally, reverb, eq, mixing and gain changes are linear, while compression/limiting, modulation, and "static distortion" are nonlinear.

Samplers like Halion have "nonlinear" reverbs; I guess those don't fall into the general category? Are all convolution-based reverbs linear because they are recorded naturally?


Quote
Or, you could reduce the level of a 16-bit sample, and if you save it in 24 or 32 bits you won't lose resolution. (But, unless you are re-boosting the signal later it doesn't matter! i.e. You can reduce the signal level to the point where you have only 4 bits left, and the signal & noise will be so low that you won't hear the lack of resolution. It's only if you re-boost the signal that you might hear the loss of resolution/quality.)

What is the difference between stuff recorded at 16 bits and then padded to 24 or 32 bits, and stuff that was captured with 24-bit ADCs?

Distinctive benefits of 24bit recording?

Reply #7
For which types of music will 24-bit start showing its advantage? Beethoven loud?

Actually you would have to sit right within the orchestra to possibly experience a dynamic range higher than 16 bit = 96 dB (that's why the musicians wear earplugs). The dynamic range experienced from the audience usually won't exceed ~80dB.

For cinema, Dolby defined a maximum peak of 115 dB on the LFE channel. This would actually require 20 bits to avoid a noticeable noise floor*. That is for movie sound effects, however! Music, even very dynamic classical music like Beethoven, won't have that much dynamic range.

*which is quite theoretical, since the ambient noise of an auditorium will exceed 20 dB SPL by far.
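
The arithmetic behind that 20-bit figure, for what it's worth (my own back-of-the-envelope; each bit of an ideal converter is worth roughly 6.02 dB of dynamic range):

import math
print(math.ceil(115 / 6.02))   # 20 bits to span a 115 dB peak-to-noise-floor range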

Distinctive benefits of 24bit recording?

Reply #8
If I understand the question correctly, you're asking if there's a benefit in using a 24 or 32-bit format for production. In almost all cases, there absolutely is. People who tell you that 16 bits is enough may be technically correct, but be aware that making 16 bits work requires analysis and careful optimization of your gain structure. Just use the higher resolution and put the energy you save not having to think so hard about all this stuff into the music.

When it comes time to produce your final master, create both 16 and 24-bit versions. There will possibly be additional information in the bottom 8 bits. No one will be able to hear it, but some people will be willing to pay for it.

Distinctive benefits of 24bit recording?

Reply #9
If I understand the question correctly, you're asking if there's a benefit in using a 24 or 32-bit format for production. In almost all cases, there absolutely is. People who tell you that 16 bits is enough may be technically correct, but be aware that making 16 bits work requires analysis and careful optimization of your gain structure.


That would be for recording. You have to be careful to set the gain right when recording, particularly for 16 bit. If you're just mixing, then the gain is already set by whoever recorded it. If they took the time to make 16 bit work, then it's no trouble for you.


Distinctive benefits of 24bit recording?

Reply #10
What is the difference between stuff recorded at 16 bits and then padded to 24 or 32 bits, and stuff that was captured with 24-bit ADCs?

The only difference is whether the additional bits contain zero, or they contain noise (or at least mostly noise).
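
Which of the two you are looking at is easy to check. A minimal sketch of my own, assuming the samples are held as plain 24-bit integers (range roughly ±8.4 million):

import numpy as np

def bottom_byte_is_silent(samples24):
    """True if every sample's low 8 bits are zero, i.e. 16-bit audio padded to 24 bits."""
    return not np.any(np.asarray(samples24, dtype=np.int32) & 0xFF)

A genuine 24-bit capture will practically always have noise down there, so this returns False for it.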

Distinctive benefits of 24bit recording?

Reply #11
Converting 16 bit to 24 bit is simply a format change. The resulting lower order eight bits will be zeros. This makes NO change to the audio data. It can be helpful if the audio is to undergo extensive DSP, as in mixing and mastering. The quantization errors will be much smaller, reducing distortion and noise created by the transforms.


To be clear, all modern DAW software processes audio at 32 bits floating point. I think Pro Tools uses 40 or 48 bit fixed point, but it's the same idea. So there's no benefit from converting 16-bit files to more bits because the software already does that as it reads the data from disk, before applying processing.

The dynamic range experienced from the audience usually won't exceed ~80dB.


I'd say 50 to 60 dB is more like it. Maybe a really great hall with exceedingly quiet air conditioning can beat those figures. However, 50 dB s/n in a concert hall is not the same as 50 dB s/n in a cassette deck. Tape noise is more broadband with more HF content. The noise in a hall is more rumble, and is less audible due to Fletcher-Munson.

--Ethan
I believe in Truth, Justice, and the Scientific Method

Distinctive benefits of 24bit recording?

Reply #12
For cinema, Dolby defined a maximum peak of 115 dB on the LFE channel. This would actually require 20 bits to avoid a noticeable noise floor*. That is for movie sound effects, however! Music, even very dynamic classical music like Beethoven, won't have that much dynamic range.

*which is quite theoretical, since the ambient noise of an auditorium will exceed 20 dB SPL by far.

Is the range the difference between the peak and the ditch? Like for 115 dB you would need sounds 19 dB or under to hear a difference?

Distinctive benefits of 24bit recording?

Reply #13
Quote
Is the range the difference between the peak and the ditch?
Right...  Dynamic range is the difference between the quietest possible & loudest possible sound.  (The quietest sound is usually noise.)    Dynamic range usually refers to the equipment limits or file-format limits, not the actual sound/music.  Or, you could refer to the dynamic range of an orchestra, comparing the quietest instrument (maybe the triangle or a single plucked string) to a full orchestral crescendo.

Musicians use the term dynamic contrast to refer to the musical content. Most classical music has lots of dynamic contrast; most popular music is "constantly loud" and has very little dynamic contrast. Or we just say, "The music is very dynamic", or "The music has lots of dynamics."

Quote
Like for 115 dB you would need sounds 19 dB or under to hear a difference?
He's saying that the background noise level in a quiet auditorium is at least 20 dB SPL (to give you a rough idea of the dynamic range you might hear in a concert hall). This is over-simplified... but the idea is, if the loudest sound is 115 dB SPL and the noise level is 20 dB SPL, you need a 95 dB dynamic range to record the sound/performance. (The sounds below 20 dB SPL get "lost in the noise", and any "extra bits" are just recording noise.)
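
Putting rough numbers on that (my own arithmetic, using about 6 dB per bit for an ideal converter):

import math
usable_range_db = 115 - 20                   # loudest peak minus auditorium noise floor
print(usable_range_db)                       # 95 dB
print(math.ceil(usable_range_db / 6.02))     # 16 bits already covers it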



----------------------------------------------------------
Just to get a perspective on the audibility of this stuff, here are a couple of experiments you can try:

- Take one of your files and reduce the volume by 20 dB, 40 dB, 60 dB, 80 dB, 90 dB, and listen to the results (without boosting the gain/volume). Now, think about those teeny-tiny details in the 17th bit that are around -100 dB!!!!

- Take one of your 24-bit files and make a 16-bit version of it.  Subtract these two files and listen to the difference.  Again, listen to the true difference, don't boost the volume.    (If your audio editor doesn't have built-in subtraction, invert the polarity of one file and mix...  And just to make sure the process is working, try subtracting an exact copy of the file first.  If it's working you'll get silence.)
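
If your editor makes that awkward, the subtraction itself is trivial to do offline. Here is a minimal sketch of my own (numpy, assuming two sample-aligned mono arrays at the same rate):

import numpy as np

def null_test_dbfs(a, b):
    """Subtract two equal-length signals and report the peak residual in dBFS."""
    diff = a.astype(np.float64) - b.astype(np.float64)
    peak = np.max(np.abs(diff))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

# Self-check with synthetic data: a float "24-bit" signal vs. its 16-bit truncation.
rng = np.random.default_rng(0)
x24 = rng.uniform(-1, 1, 44100)
x16 = np.round(x24 * 32767) / 32767
print(null_test_dbfs(x24, x24))   # -inf: identical signals null to silence
print(null_test_dbfs(x24, x16))   # roughly -96 dBFS: the 16-bit quantization error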

Distinctive benefits of 24bit recording?

Reply #14
Quote
To be clear, all modern DAW software processes audio at 32 bits floating point. I think Pro Tools uses 40 or 48 bit fixed point, but it's the same idea. So there's no benefit from converting 16-bit files to more bits because the software already does that as it reads the data from disk, before applying processing.
It is not true that all modern editors automatically convert to floating point upon opening a 16-bit file. The intermediate results of individual calculations may well be in floating point, but, when working on a 16-bit file, the data goes back to 16 bit at the end of each calculation. This means you either add dither at each step or you accept the resulting quantization distortion. Whether the extra dither or the quantization distortion ever makes an audible difference is a separate issue.
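
For reference, this is roughly what one such dithered requantization step looks like; a minimal sketch of my own (numpy, TPDF dither at ±1 LSB, floats in [-1, 1) down to 16-bit integers):

import numpy as np

def to_int16_with_tpdf_dither(x, rng=None):
    """Quantize float samples in [-1, 1) to int16, adding triangular-PDF dither."""
    rng = rng or np.random.default_rng()
    dither = rng.random(x.shape) - rng.random(x.shape)   # triangular PDF, +/- 1 LSB
    scaled = x * 32767.0 + dither
    return np.clip(np.round(scaled), -32768, 32767).astype(np.int16)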

Distinctive benefits of 24bit recording?

Reply #15
Generally, reverb, eq, mixing and gain changes are linear, while compression/limiting, modulation, and "static distortion" are nonlinear.

What are the exceptions? The Halion3 sampler does have this "nonlinear reverb" section, but other than that can I assume the rest all follow the rule?

Distinctive benefits of 24bit recording?

Reply #16
The best ADCs can capture about 20 bits before internal noise pretty much drowns out the signal. Very good quality in the preceding analogue chain, as well as exceptional recording conditions, are necessary to actually approach 20 bits of measurable signal in the recording.


A pedantic point but...

If we are talking noise artifacts, current products seem to be claiming more like 21 bits.

In principle, the noise floor of converters can be reduced as desired by connecting them up in parallel, with an approximate 3 dB gain for every doubling of the number of converters. This may sound silly, but in fact there is a commercial product composed of 8 converters in one package, spec'd for use as a single-channel device.
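
My own quick check of that figure: if the converters' noise is uncorrelated, summing N of them and scaling back lowers the noise floor by 10*log10(N) dB, i.e. about 3 dB per doubling.

import math
for n in (2, 4, 8):
    print(n, "converters:", round(10 * math.log10(n), 1), "dB lower noise floor")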

There's just no practical way to make a real-world live or studio  recording with even 16 bits of dynamic range. In my investigations I have found some recordings that actually come close to 15 bits.  Your typical live or studio recording, carefully done but not resorting to impractically extreme measures, will be in the 65-75 dB range.  IOW 12-14 bits.

Then you have the problem of playing the recording back in a listening room with a typical 45 dB SPL noise floor without going where OSHA says there will be ear damage! ;-)

Distinctive benefits of 24bit recording?

Reply #17
Quote
To be clear, all modern DAW software processes audio at 32 bits floating point. I think Pro Tools uses 40 or 48 bit fixed point, but it's the same idea.
It is not true that all modern editors automatically convert to floating point upon opening a 16-bit file. The intermediate results of individual calculations may well be in floating point, but, when working on a 16-bit file, the data goes back to 16 bit at the end of each calculation. This means you either add dither at each step or you accept the resulting quantization distortion.


Audacity by default keeps and saves the audio in floating point. It only goes back to 16-bit fixed point when you export to flac, wav, etc.
Soundforge, from what I gather, can do the same.  I don't know if that's the default.

That covers 2 popular choices (home and pro).


Distinctive benefits of 24bit recording?

Reply #18
Generally, reverb, eq, mixing and gain changes are linear, while compression/limiting, modulation, and "static distortion" are nonlinear.

What are the exceptions? The Halion3 sampler does have this "nonlinear reverb" section, but other than that can I assume the rest all follow the rule?

The "usual" or "naive" way you'd implement things like reverb/eq/mixing/etc solely involve the operations of time delay, summation, and constant gain changes (when you boil the algorithms down to the bare elements). These are all linear operations. There's nothing preventing a developer from devising a nonlinear algorithm to implement any of these, and depending on what they want to accomplish, that might be a good thing -- for instance, one could hypothesize a reverb engine that adds progressively more distortion to progressively longer echos, and find a good use for that (or perhaps even find a situation in the real world that one is actually trying to model).

All of the nonlinear examples I gave intrinsically involve multiplying two time-varying signals together, and that is what makes them fundamentally nonlinear.
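
A bare-bones sketch of that structure (mine, with no attack/release smoothing or make-up gain): a compressor is the input multiplied by a gain that is itself computed from the input.

import numpy as np

def crude_compressor(x, threshold=0.5, ratio=4.0):
    env = np.abs(x)                                   # instantaneous level, no smoothing
    gain = np.where(env > threshold,
                    (threshold + (env - threshold) / ratio) / np.maximum(env, 1e-12),
                    1.0)
    return x * gain                                   # x times a function of x: nonlinear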

Distinctive benefits of 24bit recording?

Reply #19
I just found out that one of my sample libraries (a very prestigious one) was recorded in 16 bits and then "remastered" into 24 bits. Do companies usually cheat like that? I suppose the Vienna Symphonic wouldn't, but other companies like EastWest have a more sour reputation, and I'm wondering if their 1000-dollar libraries are recorded at 16 bits and then converted to 24 bits.

Distinctive benefits of 24bit recording?

Reply #20
I would think it depends on the age. Once, 16 bits was as good as one could get; then for a while more than 16 bits required significantly more expensive equipment; but for some time now 24 bits has been drop-dead cheap and there isn't any reason to record professionally at 16 bits.

Distinctive benefits of 24bit recording?

Reply #21
http://www.hydrogenaudio.org/forums/index....st=#entry583387

1. The thread linked above says something like we can convert 16-bit files into 24-bit files. Can I get an explanation of how this works? Magnifying a low-resolution image to a large size isn't the same thing as having a high-resolution image, and upsampling a 44.1 kHz recording to 192 kHz is much the same. Is bit depth in a different category?

Sounds to me like bullsh*t.


3. I read somewhere that microphones can only do 18 bits at most, and that's why the CDs released back in the 1990s used and advertised 20-bit recording. So basically they are saying recording at 24 bits is a waste of storage space. Is this true? Many of the mics used to record the expensive sample libraries are not built with the newest technology; in fact, many of them appear to be vintage mics that cost tens of thousands of dollars. Can devices designed back in the 1940s take advantage of a 24-bit recorder?

Bit depth directly affects your headroom. So if you want deeper dynamics in your recordings (e.g. you want to capture an explosion without limiting the peaks, in all its glory), then even with the most boutique and vintage mics you'll get better detail capture at 24 bits than at 16.

Distinctive benefits of 24bit recording?

Reply #22
I would think it depends on the age. Once, 16 bits was as good as one could get; then for a while more than 16 bits required significantly more expensive equipment; but for some time now 24 bits has been drop-dead cheap and there isn't any reason to record professionally at 16 bits.


Sure there is a good reason to record at 16 bits: 32 channels for an hour and a half. 

A one hour stereo recording at 16/44 fits on a CD. A 24 bit recording does not. Not even on 80 minute CD-Rs.
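
The sizes behind that, roughly (my arithmetic, uncompressed stereo PCM):

def wav_megabytes(hours, fs, bits, channels=2):
    return hours * 3600 * fs * (bits / 8) * channels / 1e6

print(round(wav_megabytes(1, 44100, 16)))   # ~635 MB: fits on a 700 MB (80 minute) CD-R
print(round(wav_megabytes(1, 44100, 24)))   # ~953 MB: does not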

If there were some real benefit to recording at 24 bits, then the argument that it doesn't cost much to do so would have some merit. But there is no audible benefit.

Recording everything at 24 bits is like saying "I have lots of money so I will always pay twice list price for everything that I buy". If it floats your boat, by all means do it!

Distinctive benefits of 24bit recording?

Reply #23
Recording everything at 24 bits is like saying "I have lots of money so I will always pay twice list price for everything that I buy". If it floats your boat, by all means do it!

So the music industry has money to burn? 
Thank god those engineers don't build bridges.

Distinctive benefits of 24bit recording?

Reply #24
Audacity by default keeps and saves the audio in floating point. It only goes back to 16-bit fixed point when you export to flac, wav, etc. Soundforge, from what I gather, can do the same.


Yes, and SONAR that I use does that too. Only when you do a final export is the 32-bit "result math" reduced to 16 or 24 bits.

In the grand scheme of things Arny is correct - you can't hear a difference. The main advantage of using 24 bits is it lets you be sloppy and careless when setting record levels.

--Ethan
I believe in Truth, Justice, and the Scientific Method