Dither Preceding Lossy Encoding
wottha
post Jun 15 2013, 09:31
Post #1

Group: Members
Posts: 4
Joined: 15-June 13
Member No.: 108671

Hi,
I asked this question at another forum, and was advised to get further information from the developers here. I've read on Wikipedia that TPDF dither should always be used if the signal being dithered is to undergo further processing. Given that the vast majority of online AAC and MP3 files are made from 16 bit dithered PCM masters (CDR, DDP, WAV) supplied by the mastering studio (further processing the signal), what settings or dither should be used or avoided in making the original 16 bit PCM masters? And what are the consequences, technical or audible, in the final AAC or MP3, of using the wrong dither or settings?
Also, is anyone here familiar with what's used in UV22 and UV22HR, and would they be suitable for dithering the original 16-bit master, given that it will be used for AAC and MP3 encoding?
Thanks.

This post has been edited by wottha: Jun 15 2013, 09:34
DVDdoug
post Jun 15 2013, 18:03
Post #2

Group: Members
Posts: 2441
Joined: 24-August 07
From: Silicon Valley
Member No.: 46454

In theory, you should dither whenever you reduce the bit depth. Dither will have no effect on lossy compression (MP3 or AAC), for a couple of reasons: MP3 and AAC use floating-point processing (there is no fixed integer bit depth), and lossy compression throws away what you can't hear. Since dither is somewhere around -80 to -90 dB, it's going to be thrown away.

In practice, it's very unlikely that you can hear the effects of dither (at 16 bits or more). So it's not too important whether you use dither at all, or which dither algorithm you choose. If you think you can hear it, choose whatever sounds best to you.
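For readers wondering what TPDF dither actually is: it's just the sum of two independent uniform noises added before rounding. A minimal Python sketch (the function name and scaling here are purely illustrative, not any particular tool's implementation):

```python
import random

def tpdf_dither_to_16bit(x):
    """Quantize a float sample in [-1.0, 1.0) to a 16-bit integer,
    adding TPDF dither: the sum of two independent uniform noises,
    each spanning +/- half an LSB at the 16-bit level."""
    lsb = 1.0 / 32768.0                        # one 16-bit LSB
    noise = (random.random() - 0.5) * lsb + (random.random() - 0.5) * lsb
    q = int(round((x + noise) * 32767.0))
    return max(-32768, min(32767, q))          # clamp to the int16 range
```

For a silent input, the output only ever flickers by about one LSB around zero, which is the roughly -90 dB noise floor mentioned above.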
C.R.Helmrich
post Jun 15 2013, 23:07
Post #3

Group: Developer
Posts: 681
Joined: 6-December 08
From: Erlangen, Germany
Member No.: 64012

QUOTE (wottha @ Jun 15 2013, 10:31) *
... what settings or dither should be used or avoided in making the original 16 bit PCM masters? And what are the consequences, technical or audible, in the final AAC or MP3, of using the wrong dither or settings?

Why don't you directly put the 24-bit masters into the AAC/MP3 encoder?

Using noise shapers like UV22 prior to AAC/MP3 encoding doesn't help you much, since their effect is destroyed in most AAC/MP3 decoders, which only round to 16 bits or use unshaped dither (if at all).

QUOTE (DVDdoug @ Jun 15 2013, 19:03) *
Since dither is somewhere around -80 to -90 dB, it's going to be thrown away.

Which encoder does that? For the record, the Winamp AAC encoder doesn't.

Chris

This post has been edited by C.R.Helmrich: Jun 15 2013, 23:14


--------------------
If I don't reply to your reply, it means I agree with you.
saratoga
post Jun 15 2013, 23:50
Post #4

Group: Members
Posts: 4718
Joined: 2-September 02
Member No.: 3264

Maybe I don't understand the question, but as C.R.Helmrich said, just feed the master directly to the encoder in whatever format you have. Do not convert it to any other format.
mixminus1
post Jun 16 2013, 03:35
Post #5

Group: Members
Posts: 684
Joined: 23-February 05
Member No.: 20097

Well, maybe downsample it if it's above 48 kHz?

I know the AAC format itself supports 96 kHz, but is it a *requirement* that all decoders support it?


--------------------
"Not sure what the question is, but the answer is probably no."
saratoga
post Jun 16 2013, 03:45
Post #6

Group: Members
Posts: 4718
Joined: 2-September 02
Member No.: 3264

Downsampling is usually a good idea for space and compatibility reasons, but there is no need to dither. You can feed the output of your resampler directly to the encoder at > 16 bit resolution.

This post has been edited by saratoga: Jun 16 2013, 03:45
wottha
post Jun 16 2013, 08:26
Post #7

Group: Members
Posts: 4
Joined: 15-June 13
Member No.: 108671

Thanks everyone. I realize that encoding is better done from higher-resolution sources, but 44.1 kHz 16-bit is still the required delivery format for masters which will be used for MP3 and AAC encoding (Mastered for iTunes excepted, which requires 24-bit). That's what I'm talking about: the 16-bit master (CDR, DDP, WAV) which will be sent from the mastering studio to the record label for MP3 and AAC encoding. My question was about the dither used on that 16-bit master, and what the technical bad effect of using anything other than TPDF would be on the encoded MP3s and AACs.

It would be great if the record labels required or accepted 32 bit float or 24bit 96k files for encoding to MP3, with reduced level to prevent clipping, as with MFiT, but that sort of thing is still only in the beginning stages, and hasn't happened yet.

(And if anyone knows what's used in UV22HR, and if it's considered equivalent to TPDF, I'm still looking for an answer to that.)

This post has been edited by wottha: Jun 16 2013, 09:18
C.R.Helmrich
post Jun 16 2013, 12:44
Post #8

Group: Developer
Posts: 681
Joined: 6-December 08
From: Erlangen, Germany
Member No.: 64012

OK, if you're bound to 16-bit mastering you can use a gentle noise shaper like UV22HR. Noise shaping is similar to TPDF dither, but the dither noise will sound less loud: it uses pseudo-random noise and error feedback to spectrally shape the error introduced by the 24-to-16-bit conversion. (UV22(HR) looks roughly like the SNS1 curve plotted here.)
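UV22's exact algorithm is proprietary, but the error-feedback idea itself is generic and can be sketched in a few lines. This is a plain first-order shaper for illustration only, not UV22(HR)'s actual curve:

```python
import random

def error_feedback_quantize(samples):
    """Reduce float samples in [-1, 1) to 16-bit integers using TPDF
    dither plus first-order error feedback. Subtracting the previous
    sample's quantization error shapes the error spectrum by
    (1 - z^-1): less noise at low frequencies, more near Nyquist."""
    out = []
    err = 0.0                                  # previous quantization error
    lsb = 1.0 / 32768.0
    for x in samples:
        shaped = x - err                       # error feedback
        dith = (random.random() - 0.5) * lsb + (random.random() - 0.5) * lsb
        q = max(-32768, min(32767, int(round((shaped + dith) * 32767.0))))
        err = q / 32767.0 - shaped             # error made on this sample
        out.append(q)
    return out
```

Real shapers like UV22HR use a higher-order feedback filter tuned to the ear's sensitivity curve instead of the simple one-tap delay shown here.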

Chris

This post has been edited by C.R.Helmrich: Jun 16 2013, 12:51


greynol
post Jun 16 2013, 14:42
Post #9

Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167

Are there any 24-bit samples that demonstrate that choosing one type of dither over another when reducing depth to 16 bits is problematic for lossy encoding?


--------------------
Your eyes cannot hear.
lvqcl
post Jun 16 2013, 15:45
Post #10

Group: Developer
Posts: 3212
Joined: 2-December 07
Member No.: 49183

There was a thread about noise shaping/dither before lossy compression:
http://www.hydrogenaudio.org/forums/index....st&p=567064
greynol
post Jun 16 2013, 16:53
Post #11

Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167

Thanks for the link.

So should I assume that, for all the 16-bit content that was encoded to lossy at transparent settings and played back without audible problems, special consideration was given to reducing bit depth from the higher-resolution masters when they were prepared for CD in order to achieve this?

I'm all for best practice, but I really must challenge the portion of the OP that relates to audible consequences for non-test tone material.

This post has been edited by greynol: Jun 16 2013, 17:10


C.R.Helmrich
post Jun 16 2013, 21:31
Post #12

Group: Developer
Posts: 681
Joined: 6-December 08
From: Erlangen, Germany
Member No.: 64012

QUOTE (greynol @ Jun 16 2013, 15:42) *
Are there any 24-bit samples that demonstrate choosing one type of dither over another when reducing depth to 16 bits as a problematic for lossy encoding?

I don't think any dither is problematic for the process of lossy encoding. Even a "stupid" encoder wasting bits on the inaudible high-frequency noise "bump" that some noise shapers produce should still be able to code transparently if the bit-rate is high enough.

True, when you lossy-encode a 24-bit signal and put it through a 16-bit decoder which only truncates, you might end up with harmonic and/or time-varying distortion which isn't present in the original master. But you might have the same problem when you play a lossless 24-bit file through a 16-bit truncating decoder.
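That truncation effect is easy to see on a test signal. A quick pure-Python illustration, where the hypothetical `truncating_decoder` stands in for a 16-bit decoder output stage that truncates rather than dithers:

```python
import math

def truncating_decoder(x):
    """Model a decoder output stage that truncates to 16 bits."""
    return math.floor(x * 32768.0) / 32768.0

# A sine at roughly -90 dBFS: its peak is just under one 16-bit LSB.
sine = [3.0e-5 * math.sin(2.0 * math.pi * 440.0 * n / 44100.0)
        for n in range(1000)]
levels = {truncating_decoder(s) for s in sine}
# Truncation collapses the entire waveform onto just two quantization
# levels, turning the sine into a square-ish wave: signal-correlated
# harmonic distortion rather than benign, noise-like error.
```

A dithered requantizer would instead spread that error into uncorrelated noise, which is exactly why truncating decoders undo the mastering engineer's care.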

IMHO, now that most DACs and OSs can handle 24-bit audio, there shouldn't be any 16-bit lossy decoders any more. The nice side-effect would be that you could both encode and decode in 24-bit, allowing more of the potential dynamic range of the 24-bit master to reach listeners with 24-bit-capable hardware.

Chris


greynol
post Jun 16 2013, 23:35
Post #13

Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167

Again, I'm cool with best practice; I just think the discussion needs a little grounding for those who might otherwise get carried away. IMO, when there are readers with varying degrees of knowledge, anal-retention from the experts can foster fear, uncertainty and doubt. That was the feeling I got from that other discussion, though maybe I should read it again.

Thanks for the response.

This post has been edited by greynol: Jun 17 2013, 07:26


