Lame 3.99.5z, a functional extension
Sep 18 2012, 23:06
Joined: 9-October 05
From: Dormagen, Germany
Member No.: 25015
You can download it from here.
What's the functional extension?
It offers VBR quality settings -V3+ to -V0+ and -V0+eco (economic version of -V0+).
What are -Vn+ and -V0+eco good for?
They improve pre-echo behavior.
Beyond that, they combine the quality advantages of VBR (regarding pre-echo) with the quality advantages of CBR/ABR (with respect to ringing and other tonal issues).
Lame users can be classified into three categories:
a) Users who don't care about rare quality issues and/or who care a lot about small file size.
The common way for these users to work with Lame is to use -V5, -V4, or similar.
b) Users who don't want obvious and especially ugly issues in their music, even rare ones, but who care about file size as well.
The common way for these users to work with Lame is to use -V2, -V3, or similar.
CBR 192 or similar, or ABR in this bitrate range, is an alternative (but seldom used).
c) Users who want overall transparency or at least a quality which comes close to it, and who donít care much about file size.
The common way for these users to work with Lame is to use -V0, -V1, or similar as a VBR method, or to use CBR 320 or 256. Using very high bitrate ABR is an alternative (seldom used).
For users of groups b) and c), -Vn+/-V0+eco offers significant quality advantages:
We have two major issue classes with most of the lossy codecs:
- temporal smearing (pre-echo) issues
- ringing (tremolo) and other tonal issues.
Let's look at the worst samples I know for these classes:
- eig (extremely strong temporal smearing)
- lead-voice (extreme ringing, for instance at sec. 0..2)
With samples like these, users of group b) can't be very happy using -V2 or -V3, because the ringing issues are very obvious and ugly. The temporal smearing of eig is quite obvious as well, especially around sec. 3. Using CBR/ABR 192 or similar is a good way to fight the ringing, but the temporal smearing is then much worse than with VBR; it's really ugly.
Things don't really change when using slightly increased quality settings.
For users of group c) it's exactly the same, just with both the quality requirements and the quality received on a higher level.
So the traditional way of doing things isn't totally satisfactory.
Users of group b) can use -V3+ or -V2+ (recommended) and get much better results overall.
Users of group c) can use -V1+, -V0+eco (recommended), or -V0+ (recommended for the paranoid like me) and get transparency or close-to-transparency. Sure, it's impossible to prove transparency for the entire universe of music, but it holds for the samples mentioned. And as these are very outstanding samples within their problem classes, and because of the technical details of -Vn+ described below, it's plausible that the approach works rather universally.
How is it done?
-Vn+ uses -Vn internally (-V0+eco uses -V0), but the accuracy demands for short blocks are increased. Short blocks are used when the encoder needs good temporal resolution. The audio data bitrate is also kept rather high for long blocks, which are used most of the time.
These audio data requirements are helpful for any kind of problem; they are not restricted to ringing or pre-echo issues.
Moreover, a strategy is used which aims to provide close to the maximum possible audio data space for short blocks.
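As a rough sketch of this strategy: the plain VBR allocation is kept, but never allowed to fall below a per-block-type bitrate floor. This is hypothetical illustration code, not taken from the actual patch; the function name and the simplified 576-sample granule model (which ignores window overlap) are my own, while the floor values are the defaults quoted later in this post.

```python
# Hypothetical sketch of the -Vn+ idea (not LAME's actual code): the normal
# VBR bit allocation is kept, but never allowed to drop below a per-preset,
# per-block-type floor. Floor values in kbps are the defaults quoted in this post.
FLOORS_KBPS = {
    "-V3+":    {"short": 360, "long": 160},
    "-V2+":    {"short": 370, "long": 170},
    "-V1+":    {"short": 420, "long": 215},
    "-V0+eco": {"short": 440, "long": 220},
    "-V0+":    {"short": 440, "long": 290},
}

def granule_bits(vbr_bits, block_type, preset="-V2+",
                 sample_rate=44100, granule_samples=576):
    """Bits for one granule after applying a -Vn+ style floor.

    vbr_bits: what the plain -Vn VBR loop would have allocated.
    The flat 576-sample granule model ignores window overlap; illustration only.
    """
    floor_kbps = FLOORS_KBPS[preset][block_type]
    # Convert the kbps floor into bits available for this granule's duration.
    floor_bits = int(floor_kbps * 1000 * granule_samples / sample_rate)
    return max(vbr_bits, floor_bits)
```

So a short-block granule that plain -V2 would have encoded cheaply gets raised to the floor, while a granule that already uses more bits than the floor is left untouched.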
What's the price to pay?
Compared to -Vn, the increased accuracy demands of -Vn+ raise the average bitrate. As -Vn+ targets significant quality improvements over -V2 for really bad samples, we need an average bitrate of at least around 200 kbps.
-V3+ and -V2+ are designed for users of group b) above, and as such keep the average bitrate from rising much above 200 kbps. For my test set of various pop music, the average bitrate is 205 kbps for -V3+ and 217 kbps for -V2+.
For users of group c) I allow for the full quality and average bitrate range mp3 can offer.
-V1+ takes an average bitrate of 257 kbps for my test set, -V0+ takes 317 kbps.
-V0+eco (the economic version of -V0+) takes 266 kbps. So -V0+eco comes nearly for free, as -V0 takes 260 kbps for my test set.
Unlike versions I published before, mp3packer isn't really needed any more to squeeze the unused bits out of the mp3 file (with the exception of fractional settings like -V0.5+ between -V1+ and -V0+).
mp3packer brings average bitrate down by only 1 kbps maximum for -Vn+ between -V3+ and -V2+, by 1 to 2 kbps for -Vn+ between -V2+ and -V1+, and by 2 kbps for -V0+ and -V0+eco. So I think we can forget about mp3packer with these settings.
sets the minimum audio data bitrate for short blocks to x [kbps] when using -Vn+ or -V0+eco, with x in the range 150..450.
Defaults are 360, 370, 420, 440, and 440 kbps for -V3+, -V2+, -V1+, -V0+eco, and -V0+, respectively.
sets the minimum audio data bitrate for long blocks to x [kbps] when using -Vn+ or -V0+eco, with x in the range 110..310.
Defaults are 160, 170, 215, 220, and 290 kbps for -V3+, -V2+, -V1+, -V0+eco, and -V0+, respectively.
prints detailed information for each frame (L/R or M/S representation, blocktype of both granules, available audio data bits, audio data bits used, etc.). Works for both -Vn and -Vn+.
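The allowed ranges for the two minimum-bitrate switches above can be sketched as a simple validation step. This is illustrative code only; the function name is mine, and the actual switch names are not shown in this post.

```python
# Illustrative range check for the two minimum-audio-data-bitrate switches
# described above: short-block floors must lie in 150..450 kbps,
# long-block floors in 110..310 kbps.
RANGES_KBPS = {"short": (150, 450), "long": (110, 310)}

def check_floor(block_type, x):
    """Validate a user-supplied floor x (kbps) for the given block type."""
    lo, hi = RANGES_KBPS[block_type]
    if not lo <= x <= hi:
        raise ValueError(f"{block_type}-block floor {x} kbps outside {lo}..{hi}")
    return x
```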
Musepack --quality 7
Nov 6 2012, 21:08
Joined: 17-September 06
Member No.: 35307
BFG, I can't speak for Robert, but my own thoughts are along the psymodel lines.
I think that:
a fairly large proportion of the cases where Lame 3.99.5 -Vn has problems that halb27's -Vn+ version fixes consist of a sharp transient (highly localized in the time domain but spread out in the frequency domain) occurring simultaneously with a tonal signal (highly localized in the frequency domain but spread out in the time domain).
The time-frequency tradeoff (localization in one domain means spreading out in the other) is analogous to Heisenberg's Uncertainty Principle (Δt·Δf ~ constant).
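This product can be checked numerically for MP3's own transform sizes: a long granule carries 576 spectral lines covering 0..fs/2, a short block only 192. The snippet below is a small illustration of the tradeoff, not part of any encoder, and it ignores window overlap when equating lines with time span.

```python
# Time-frequency tradeoff for MP3 block sizes (illustration only).
fs = 44100                      # sample rate in Hz

def bin_bw_and_duration(lines):
    """Bandwidth per spectral line and the time span a block of `lines` lines
    represents (ignoring window overlap)."""
    bw = (fs / 2) / lines       # Hz per line: 0..fs/2 split into `lines` bins
    dur = lines / fs            # seconds of fresh audio per block
    return bw, dur

bw_long, dur_long = bin_bw_and_duration(576)    # long granule: ~38.3 Hz bins
bw_short, dur_short = bin_bw_and_duration(192)  # short block: ~114.8 Hz bins

# Finer frequency resolution costs coarser time resolution and vice versa;
# the product bw * dur is the same constant (1/2) for both block types.
```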
The mathematics of transforms such as MDCT (or FT) means that:
if you have a long block, you have a lot of frequency bins, each of which is fairly narrow in bandwidth, allowing fairly precise reproduction of long-duration tonal signals (localized peaks in the frequency domain) even with relatively imprecise values* stored for each frequency bin (the imprecision implies lower bit-depth and hence lower bitrate). As these tonal signals are spread out in the time domain, any time-domain variation is slow enough not to need precise representation.
*these frequency-domain values are complex numbers, basically implying that they carry information about both amplitude and phase. Values from neighbouring bins actually interfere when transformed into the time domain, allowing reproduction of frequencies more precisely defined than the bin-width itself.
Still within a long block: if you have an event that is localized in time, such as a transient, you can reproduce it, but it requires much greater precision in the values of each frequency bin so that they sum together in the time domain with the correct phase, reproducing the time-localization and preventing the transient from being smeared out like a soft noise over a longer time (which produces pre-echo and post-echo, though post-echo is more readily masked). Such precision (or bit-depth) over so many frequency bins requires a high bitrate.
An alternative is to detect these time-localized transients and split the time into, say, three short blocks. There are now fewer frequency bins in each short block (each having greater bandwidth), but there's less smearing of time (the maximum smearing being the duration of the whole short block), and sufficient time-localization can be achieved with modest precision in the values for each frequency bin, thus a modest bit-depth and bitrate (at the expense of frequency-smearing). As time-localized signals are frequency-unlocalized (broad-spectrum, noise-like), that's often not a problem.
If there is a tonal (frequency-localized but time-smeared) signal to be represented within the short block that we don't think will be masked by the loud transient, its frequency can be reproduced more accurately only by increasing the precision of the values for each of the frequency bins (because the summation of the interfering components of neighbouring broad-bandwidth frequency bins, when we convert back to the time domain, will then accurately preserve the frequency and phase of the tonal signal). This greater precision, as before, requires greater bit-depth to represent the values in the transform domain and thus a higher bitrate.
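The pre-echo mechanism described above can be demonstrated with a toy pure-Python DFT (not an MDCT, and nothing like LAME's real quantizer, just an illustration): coarsely quantizing the frequency-domain values of a block containing a transient spreads the quantization error across the whole block, including before the transient, while finer quantization (more bits) keeps that leading error far smaller.

```python
import cmath

# Toy pre-echo demonstration: quantization noise in the transform domain
# smears out over the whole block in the time domain.

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def quantize(X, step):
    # Scalar quantization of each complex coefficient; larger step = fewer bits.
    return [complex(round(c.real / step) * step, round(c.imag / step) * step)
            for c in X]

N, t0 = 64, 45            # block length; transient position inside the block
x = [0.0] * N
x[t0] = 1.0               # an idealized transient late in the block

def pre_echo_energy(step):
    y = idft(quantize(dft(x), step))
    # Error energy BEFORE the transient, i.e. the pre-echo part.
    return sum((y[n] - x[n]) ** 2 for n in range(t0))

pre_coarse = pre_echo_energy(0.5)   # low precision: noticeable leading error
pre_fine = pre_echo_energy(0.01)    # high precision (more bits): far less
```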
It's this latter case that -Vn+ seems to solve, but it doesn't currently detect whether there actually IS an important tonal component that isn't masked by the transient (pre-masking and post-masking); it just assumes that there might be and, to be on the safe side, employs a much higher bitrate (much higher precision of bin values) during all short blocks.
For any encoder, given enough processing time, it should be possible to derive an extra measurement from the analysis FFT in the psymodel, running it only once short blocks have been triggered (and only testing short blocks). That check would look for tonal (frequency-localized) signals during these short blocks, and probably during the switching windows too (long->short and short->long), to determine whether any of them might not be masked entirely by the transient and would therefore require higher precision in the transform-domain quantization (and thus higher bitrate) to maintain their frequency precision despite the wide bin-width. It might be possible to derive a suitable mathematical function for the required quantization precision from listening tests on tone+transient signals of varying relative amplitudes (and varying tone frequency ranges), building in enough margin of safety to account for practical limitations arising from window functions and the like. Failing that, one could simply determine a threshold of 'tonality' that triggers the encoder to turn the precision up to the maximum for the affected short blocks. Either way would mainly solve the problem cases without boosting the bitrate for the many unproblematic short blocks, which is the efficient approach normally adopted in LAME VBR tuning.
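One standard candidate for such a 'tonality' check is the spectral flatness measure (geometric mean over arithmetic mean of the power spectrum): values near 1 indicate a noise-like (broadband) spectrum, values near 0 a strongly tonal one. This is my own illustrative sketch, not code from LAME or from the proposal above, and the threshold is purely made up for the example.

```python
import math

# Spectral flatness as an example tonality detector for short blocks.
# Flatness near 1 = noise-like spectrum; near 0 = dominated by tonal peaks.

def spectral_flatness(power):
    """Geometric mean / arithmetic mean of a power spectrum (list of >= 1 bins)."""
    n = len(power)
    geo = math.exp(sum(math.log(p + 1e-12) for p in power) / n)  # epsilon avoids log(0)
    arith = sum(power) / n
    return geo / arith

def needs_extra_precision(power, threshold=0.3):
    # Hypothetical trigger: raise quantization precision for a short block
    # whose spectrum still contains a strongly tonal component.
    # The 0.3 threshold is illustrative only.
    return spectral_flatness(power) < threshold

noise_like = [1.0] * 32     # flat spectrum: broadband transient, no trigger
tonal = [0.01] * 32
tonal[5] = 10.0             # one dominant spectral line: trigger
```

In a real encoder this decision would be tuned against listening tests rather than a fixed threshold, as the post suggests.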
Robert has improved the lead-voice problem sample in the latest 3.100 alpha, which I'd have put into this category, so I'll do some keen listening tests to see what might be fixed. From a quick look at the diffs, the latest psymodel.c seems to include a good deal of stuff relating to tonality measures, so I'm hopeful that a lot of the problem samples will be hard to ABX using 3.100 alpha2 when I get time to try.
There remain some problems that don't fit this tonal+transient during short-block description, so halb27's -Vn+ modes will still have mileage while the psymodel hasn't fixed them.