lossyWAV Development
Reply #717 – 2008-01-06 21:58:54
I had an idea; it might already work this way, I didn't check. If the correction file were encoded with the bits reversed (16=1/15=2/etc.) for whatever bit depth is needed, wouldn't lossless codecs encode it more efficiently? Since it's not really an audio file that anyone would use directly, that makes sense for starters, and it shouldn't complicate things if lossyWAV-compliant decoding were ever built into any lossless decoders. And there should be practically no speed hit at all.

Unfortunately, as not all of the differences are positive, this wouldn't help: the new bit 1 of every negative difference would be 1.

Something David said *ages* ago popped back into my head, and I want a second opinion on my understanding of it. Looking for zero values in the sample data was mentioned. I took this to mean "look for FFTs where all the input values are zero" and have implemented a checking mechanism as follows:

- When filling the FFT input array, OR each sample value into a "running total" which is initialised to zero before the filling starts.
- If the resulting "running total" is zero, then the FFT input is all zeros and the FFT does not need to be calculated.
- For every FFT not calculated, do not take a 0 dB value into account when calculating the bits_to_remove for the codec_block in question (as rounded zeros are still zeros = no added noise).
- If *every* FFT was full of zeros, set bits_to_remove to zero and simply store the codec_block.

[edit] This approach reduces my FLAC'd processed 53 sample set by a whole 95 bytes. However, it may slightly increase processing throughput..... [/edit]
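The OR-accumulation check described above can be sketched roughly as follows (Python for illustration only; lossyWAV itself is not written in Python, and the function and variable names here are my own, not from the actual source):

```python
def fill_fft_input(samples):
    """Copy samples into an FFT input buffer while OR-ing each value
    into a running total initialised to zero before filling starts."""
    running_or = 0
    fft_input = []
    for s in samples:
        fft_input.append(s)
        running_or |= s  # stays zero only if every sample so far is zero
    # A zero running total means the whole block is silent,
    # so the FFT for this block can be skipped entirely.
    needs_fft = (running_or != 0)
    return fft_input, needs_fft
```

Note that the check is exact even for signed sample values: in two's-complement representation a negative sample ORs in a non-zero bit pattern, so the running total is zero if and only if every sample is zero.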