
--blocksize: info on its impact on strength and how to use correctly?

Recently I discovered --blocksize.
Bryant, could you tell us some more about its impact on strength and how to use it correctly?
I saw some topics mentioning that setting it very low can have a positive impact on some sources.
I tried the opposite, setting it to the maximum on my quick-and-dirty high-res, multichannel dataset, and it turned out slightly better than the default: 0.05% on average, winning on 22 of 29 samples. Still, it was 0.71% worse than flake. Taking the best result from default / --blocksize reduced flake's advantage to 0.63%.

--blocksize: info on its impact on strength and how to use correctly?

Reply #1
Increasing the block size from the default will normally improve compression slightly (as you’ve seen) because the overhead of the block header becomes a smaller and smaller percentage of the total data size. However, there are negative implications to this, so I would not recommend it unless you are just archiving and going for the best possible compression ratio. The problem is that the resulting files require more memory to play (especially multichannel files) and some players (like FFmpeg) might refuse to play them at all. I have even been considering reducing the default block size for some modes to reduce the playback memory footprint, even though it might reduce compression slightly.
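For example, something like this would use the largest block size the encoder accepts (131072 samples; the filename is just a placeholder):

wavpack -hh --blocksize=131072 album.wav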

Setting a low block size improves compression only in situations where there is redundancy in the LSBs of the audio samples. The only two cases of this that I know of are files from lossyWAV and decoded HDCD files, but there might be others. In cases where the amount of redundancy changes often, using smaller blocks helps WavPack take advantage of this, and of course with lossyWAV you simply want the block size to match the block size that lossyWAV is using. Note that the --merge-blocks option must be used with --blocksize to get this.
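For a lossyWAV file, something along these lines should line WavPack's blocks up with lossyWAV's (assuming lossyWAV's default 512-sample codec blocks; substitute whatever block size your lossyWAV run actually used, and the filename is a placeholder):

wavpack --blocksize=512 --merge-blocks input.lossy.wav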

The easiest way to get better compression is to use the -x switch. Although this can be very slow during encoding, there is no cost during decoding. I would start with -x4 to get an idea of how much improvement you might get. Of course, -h and -hh improve compression too, but those will result in somewhat slower decoding.
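For example (filenames are placeholders):

# default mode plus extra processing level 4: slower encode, no decode-speed penalty
wavpack -x4 music.wav
# very high mode with maximum extra processing: stronger still, but somewhat slower to decode
wavpack -hhx6 music.wav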

--blocksize: info on its impact on strength and how to use correctly?

Reply #2
Thanks for the answer. All the numbers that I gave were with -hhx6, which is what I call 'rather fast' (but 'weak/very weak' in strength).
I'll have to consider increasing the blocksize, do some testing, etc. While the main use is archival, portability is worth something too, and the gains seem tiny.

--blocksize: info on its impact on strength and how to use correctly?

Reply #3
-hhx6 rather fast?! Are you using a supercomputer?
WavPack 5.6.0 -b384hx6cmv / qaac64 2.80 -V 100

--blocksize: info on its impact on strength and how to use correctly?

Reply #4
-hhx6 rather fast?! Are you using a supercomputer?

Nah. It's a matter of throughput and latency. Slow compression increases the time from getting some music to having it integrated with the library. That's latency. But for throughput, the bottleneck is me. I don't acquire music faster than my computer compresses it. For this reason I have most of my library compressed with OptimFrog --bestnew --optimise best. I call that 'very slow', but wavpack is still rather fast in comparison.
I'm moving to wavpack because OptimFrog is not portable enough, and I find wavpack to be the most reliable portable codec around.

--blocksize: info on its impact on strength and how to use correctly?

Reply #5
All the numbers that I gave were with -hhx6.

Ah, okay. I'm surprised that Flake gives better compression than that. I'm on Flake 0.10-3 (with -12) and it's consistently worse than WavPack's best, but perhaps there were compression improvements in later versions.

Anyway, I'm glad WavPack is working out for you. 


--blocksize: info on its impact on strength and how to use correctly?

Reply #7
I used an SVN version of flake, and it's significantly better than the latest release. In my limited experience, compared to wavpack -hhx6 it's weaker on stereo but stronger on multichannel audio.