Topic: Yalac - Comparisons

Yalac - Comparisons

Reply #100
Quote
I think people looking at this codec are looking at speed also.
...
As such, I would make both modifications.  Make fast faster by 15%, and make a (new) fastest preset, faster than what we currently have.

I agree the speed aspect is a Yalac specialty.

Yes, without the speed it would be quite useless; then I would prefer Monkey or OptimFrog for maximum compression.

Hence speed will be my highest priority. I will not implement compression ratio improvements that significantly reduce decoding speed. It's a different matter at the encoding side.

Quote
So, in regards to speed:

- On machines like mine using ATA100, I usually max out at about 82x decoding speed for 16-bit 44kHz stereo
- The fastest encoding speed I clocked was about 98x with my outdated codec
- Yalac decodes all modes at about the maximum ATA performance
- Any speed enhancements for YALAC Fast(est) encoding should take into account the limitations of average HDD performance, otherwise the performance boost will not be noticed


Yes, disk I/O is the limit. On my good old Pentium III-866 I can achieve higher decoding speeds than you with your fast machines, if I turn off the decoder output (I should give public access to this option for evaluation purposes...).
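A rough sketch of the arithmetic behind that I/O ceiling (illustrative figures only: 16-bit/44.1 kHz stereo and an assumed compression ratio of roughly 55%, not measured values):

Code:
/* Back-of-the-envelope throughput for decoding at 82x realtime.
   Compression ratio and drive limits are assumptions, not measurements. */
#include <stdio.h>

int main(void)
{
    const double pcm_rate = 44100.0 * 2 * 2;   /* raw PCM: 176,400 bytes per second        */
    const double ratio    = 0.55;              /* assumed compression ratio (~55%)         */
    const double speed    = 82.0;              /* decoding speed in multiples of realtime  */

    double read_mb  = speed * pcm_rate * ratio / 1e6;  /* compressed data read from disk   */
    double write_mb = speed * pcm_rate / 1e6;          /* decoded WAV written to disk      */

    printf("read  %.1f MB/s\n", read_mb);              /* ~8.0 MB/s  */
    printf("write %.1f MB/s\n", write_mb);             /* ~14.5 MB/s */
    printf("total %.1f MB/s\n", read_mb + write_mb);   /* ~22.4 MB/s combined; with reading and
                                                          writing on the same drive this gets
                                                          close to what the disk can sustain */
    return 0;
}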

Quote
Given that, I'd love to see the speed increase or a new fastest preset.

For your information, Thomas, the archival encoder I use achieved a speed of 97.83x with a ratio of 47.94%, while Yalac 0.06 Fast clocked at 68.11x with an impressive 46.14% ratio and "free" super-fast decompression speed. Good work!

Thanks to you and thanks to ShadeST for your work and the encouragement!

I am not sure if I can make Yalac that fast. The main reason seems to be that Yalac has to use bigger data blocks than symmetrical encoders for the disk I/O. Possibly things will change if I implement asynchronous file I/O, which unfortunately is not supported by my Windows 98...

And to achieve the maximum encoding speed for FASTEST, I would have to build a special variant of my encoder. Currently FASTEST would do much work unnecessarily twice. My encoder has not been designed for those ultra-fast modes.

  Thomas

Yalac - Comparisons

Reply #101
Possibly things will change if I implement asynchronous file I/O, which unfortunately is not supported by my Windows 98...
Asynchronous I/O would be harder to integrate into a future (cross-platform) library which I hope you plan to create.  Such a library should let the caller handle I/O operations instead of directly trying to open a file itself. In the simplest case, the caller would feed chunks of data to the library to encode or decode data linearly. Obviously, things get a little more complicated if you want to support playback in audio players where it is desirable to be able to seek.
I am no expert regarding audio codec libraries (meaning I haven't used many), but I think Monkey's Audio has a nice library interface.

Yalac - Comparisons

Reply #102
Asynchronous I/O would be harder to integrate into a future (cross-platform) library which I hope you plan to create.  Such a library should let the caller handle I/O operations instead of directly trying to open a file itself. In the simplest case, the caller would feed chunks of data to the library to encode or decode data linearly.

Good point. But asynchronous I/O would only be an option. I may define an abstract interface for file I/O, which has to be implemented by the user and can use asynchronous I/O if available.
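To make the idea concrete, a minimal sketch of what such a caller-supplied I/O interface could look like in C (names and signatures are purely illustrative, nothing is decided yet). The library would only see the callbacks, so a synchronous, asynchronous or in-memory implementation can sit behind the same interface:

Code:
/* Hypothetical caller-supplied I/O interface; illustrative only. */
#include <stddef.h>
#include <stdio.h>

typedef struct {
    void  *user;                                          /* caller's handle (FILE*, socket, buffer, ...) */
    size_t (*read) (void *user, void *buf, size_t bytes); /* returns number of bytes read                 */
    size_t (*write)(void *user, const void *buf, size_t bytes);
    int    (*seek) (void *user, long offset);             /* needed for seeking during playback           */
} yio_stream;

/* A trivial implementation on top of stdio. */
static size_t file_read (void *u, void *b, size_t n)       { return fread (b, 1, n, (FILE *)u); }
static size_t file_write(void *u, const void *b, size_t n) { return fwrite(b, 1, n, (FILE *)u); }
static int    file_seek (void *u, long off)                { return fseek((FILE *)u, off, SEEK_SET); }

int main(void)
{
    FILE *f = fopen("input.wav", "rb");   /* file name is just an example */
    if (!f) return 1;

    yio_stream in = { f, file_read, file_write, file_seek };
    /* An encoder/decoder library would now call in.read()/in.seek() instead of
       opening files itself; asynchronous I/O can hide behind the same callbacks. */
    (void)in;

    fclose(f);
    return 0;
}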

Yalac - Comparisons

Reply #103
Current Progress (V0.07)

Done

- New option for the channel decorrelation to detect bigger lags between channels. Without this option, files with lags outside the search range of the standard decorrelation algorithm cannot use channel decorrelation. It's usually an "all or nothing" case: either it helps compression or it doesn't. Don't expect too much: on my test corpus it only helps about 1 of 20 files. And it slows encoding down. (A rough sketch of the idea follows after this list.)

- Evaluation of other possibilities to improve the channel decorrelation. Most of them didn't work well. Some of them look promising but have to be evaluated further (the file format is now prepared for their later use). And some would help but considerably slow down both encoding and decoding. That's not acceptable.

- New preset FASTEST, which is about 50 percent faster than FAST on my system and compresses about 0.9 percent worse than FAST on my test corpus.
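A naive sketch of the lag detection idea (just an illustration of the principle, not the actual implementation): try each candidate shift of one channel against the other and keep the shift that leaves the smallest residual. The work grows with the search range, which is why a wider range costs encoding time.

Code:
/* Illustrative brute-force search for an inter-channel lag; not the real encoder code. */

/* Returns the lag in [-max_lag, +max_lag] that minimises the residual energy
   of predicting the right channel from the (shifted) left channel. */
int find_channel_lag(const short *left, const short *right, int n, int max_lag)
{
    long long best_cost = -1;
    int best_lag = 0;

    for (int lag = -max_lag; lag <= max_lag; lag++) {
        long long cost = 0;
        for (int i = 0; i < n; i++) {
            int j = i - lag;
            int pred = (j >= 0 && j < n) ? left[j] : 0;   /* samples outside the frame predict as 0 */
            long long d = (long long)right[i] - pred;
            cost += d * d;                                 /* residual energy for this lag */
        }
        if (best_cost < 0 || cost < best_cost) {
            best_cost = cost;
            best_lag  = lag;
        }
    }
    return best_lag;   /* the encoder would then store right[i] - left[i - best_lag] */
}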

To do (for V0.07)

- Clean up of my new channel decorrelation code.

- Speed optimization of the new channel lag detection. Possibly I will wait for test results of the new algorithm before I optimize it. If it doesn't help compression at least a little, its optimization has no priority.

- Implementation of an extra path within my codec to avoid unnecessary processing when using preset FASTEST. May give another 25% speed-up.

- Possibly new options for the variation of some parameters of the disk I/O to tune the performance on individual systems.

Yalac - Comparisons

Reply #104
Firstly, thanks for the update Thomas.  As I said before, I, and I'm sure the other testers, find your reports very interesting.  I can't believe that you are squeezing even more speed out, let alone a 50% increase!

Now, the reason for my post:

So, in regards to speed:

- On machines like mine using ATA100, I usually max out at about 82x decoding speed for 16-bit 44kHz stereo
- The fastest encoding speed I clocked was about 98x with my outdated codec
- Yalac decodes all modes at about the maximum ATA performance
- Any speed enhancements for YALAC Fast(est) encoding should take into account the limitations of average HDD performance, otherwise the performance boost will not be noticed
Yes, disk I/O is the limit. On my good old Pentium III-866 I can achieve higher decoding speeds than you with your fast machines, if I turn off the decoder output (I should give public access to this option for evaluation purposes...).
Joseph Pohm and I have been discussing this issue for the past three days.

Joseph has very eloquently highlighted some anomalies in my results, due to IO limitations.  He has some superb charts which articulate the issues I am encountering with speeds over 60x or so.  My max is around 80x, while his is 120x.

We are currently looking at Timer as a replacement for TimeThis, which I have used previously.  Joseph's initial tests suggest that Timer can accurately report CPU time only (what it calls "Process Time", as opposed to its "Global Time", which is the figure that TimeThis reports).  This would be very useful for me to record times unaffected by IO.  In fact, I could scrape both CPU (Processed) and CPU+IO (Global) times from the report, for a comparison...

While it is likely that we will pursue this further, we thought that others may find this information useful now.  Although no-one else is currently using TimeThis, the times reported by Yalac itself suffer the same fate, and will therefore be affected by IO latency also.  Therefore, if you are interested in seeing results unaffected by IO latency, but can't wait for Thomas to provide a switch to turn off decoding output, you may want to take a look at Timer.
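For anyone who wants to see what the distinction amounts to, here is a minimal Windows sketch (my own illustration, not how Timer itself is implemented): GetProcessTimes() returns the CPU time the process actually consumed, while the wall-clock difference also contains the time spent waiting on the disk.

Code:
/* Minimal sketch: CPU ("process") time vs wall-clock ("global") time on Windows. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD wall_start = GetTickCount();

    /* ... run the encode or decode work here ... */

    FILETIME creation, exitTime, kernel, user;
    GetProcessTimes(GetCurrentProcess(), &creation, &exitTime, &kernel, &user);

    ULARGE_INTEGER k, u;
    k.LowPart = kernel.dwLowDateTime;  k.HighPart = kernel.dwHighDateTime;
    u.LowPart = user.dwLowDateTime;    u.HighPart = user.dwHighDateTime;

    double cpu_s  = (double)(k.QuadPart + u.QuadPart) / 1e7;   /* FILETIME ticks are 100 ns */
    double wall_s = (GetTickCount() - wall_start) / 1000.0;

    printf("CPU (process) time: %.2f s\n", cpu_s);   /* unaffected by disk waits */
    printf("Wall (global) time: %.2f s\n", wall_s);  /* includes I/O latency     */
    return 0;
}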

Yalac - Comparisons

Reply #105
Firstly, thanks for the update Thomas.  As I said before, I, and I'm sure the other testers, find your reports very interesting.  I can't believe that you are squeezing even more speed out, let alone a 50% increase!

Fine. I like to post my reports. One reason: it forces me to define goals for the next release. Otherwise there is always a fair chance that I am losing myself in poorly structured evaluations of my daily ideas.

For the speed increase: I did achieve 50 percent, but the next release will only bring 20 percent for FAST. I will explain the reasons in another post.
Quote
Joseph has very eloquently highlighted some anomalies in my results, due to IO limitations.  He has some superb charts which articulate the issues I am encountering with speeds over 60x or so.  My max is around 80x, while his is 120x.

Yes, Joseph has many superb charts. The HTML reports he is sending me are seldom smaller than 500 KB...
Quote
Joseph's initial tests suggest that Timer can accurately report CPU time only (what it calls "Process Time", as opposed to its "Global Time", which is the figure that TimeThis reports).  This would be very useful for me to record times unaffected by IO.  In fact, I could scrape both CPU (Processed) and CPU+IO (Global) times from the report, for a comparison...

The comparison of the pure processing time would be very interesting for speed evaluations of my code optimizations. And it would give a hint of what to expect if faster PCs or hard disks become available in the future.

But it possibly wouldn't be optimal for general comparisons of different compressors. On my system the frame size of my encoder significantly affects encoder and decoder performance. The frame size here equals the block size of the disk I/O. On my system 100 ms would be the optimum for speed, but 250 ms provides maximum compression. This is caused by the way my encoder works. Other (possibly symmetrical) encoders may be able to process smaller frames and therefore could use the optimum of 100 ms on my system. Hence their possible speed advantage would be caused by the design of the encoder and should be taken into account for fairness (even if this is a disadvantage for Yalac...).
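For reference, assuming 16-bit/44.1 kHz stereo as in the tests above, those two frame lengths translate into quite different I/O block sizes (illustrative arithmetic only):

Code:
/* Frame length -> I/O block size for 16-bit / 44.1 kHz stereo (illustrative arithmetic). */
#include <stdio.h>

int main(void)
{
    const int bytes_per_second = 44100 * 2 * 2;                 /* 176,400 bytes of raw PCM per second */
    printf("100 ms frame: %d bytes\n", bytes_per_second / 10);  /* ~17.6 KB per disk access */
    printf("250 ms frame: %d bytes\n", bytes_per_second / 4);   /* ~44.1 KB per disk access */
    return 0;
}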

My 2 cents...

Yalac - Comparisons

Reply #106
Yes, Joseph has many superb charts. The HTML reports he is sending me are seldom smaller than 500 KB...
His dedication to the art is phenomenal.  I have been trying to persuade him to post more, as I believe you have.  I don't think I can actually imagine the amount of data that you must be sent following a new release.

The comparison of the pure processing time would be very interesting for speed evaluations of my code optimizations. And it would give a hint of what to expect if faster PCs or hard disks become available in the future.
I am intending to perform some runs this weekend, targeting the faster decoders, to see how the figures all tie up.  I have some data from Joseph to compare with.  My slower decoders are faster than his, as my machine is faster, but currently he gets better speeds with the faster decoders, by minimising the IO latency. Hopefully I can use his figures, accounting for the difference in PCs, to make a comparison.  I'll be sure to post here.

My 2 cents...
I must admit that I'm not sure that I understand the paragraph.  You state that "it possibly wouldn't be optimal for general comparisons of different compressors" but conclude that it "should be taken into account for fairness".  I hope I have not taken those quotes completely out of context.  Do you basically mean that Yalac may not fare so well if IO latency is taken out of the equation?

Joseph and I have discussed the issue that IO latency will be a "real world" issue for most users when decoding to WAVE.  However, when decoding to RAM (playback) it will not be a factor.  It also seems to me to be a distorting influence on test conditions, unless I could somehow provide data that allowed you to determine the degree to which the IO latency was affecting the speed.

There's another $0.02 for the pot.

Yalac - Comparisons

Reply #107
If I use YALAC, I will be using it for playback and transcoding purposes (in the eventuality of a foobar2000 playback plugin).  As such, I really don't care much for IO speeds.  Don't waste too much time if you don't need to.

Yalac - Comparisons

Reply #108
But this is it.  The way I understand it, we are currently analysing IO speeds.

To analyse the speed to decode, for transcoding or playback, we need to look at the time taken in memory only, and not the memory plus IO time (which is what TimeThis and Yalac.exe will report).

IMHO both rates are of great interest, but until now I was blissfully unaware of the difference.

Yalac - Comparisons

Reply #109
I must admit that I'm not sure that I understand the paragraph.  You state that "it possibly wouldn't be optimal for general comparisons of different compressors" but conclude that it "should be taken into account for fairness".  I hope I have not taken those quotes completely out of context.  Do you basically mean that Yalac may not fare so well if IO latency is taken out of the equation?

No. What I wanted to say: if smaller frame sizes speed up disk I/O, then the ability of other encoders to use smaller frame sizes (than Yalac) is a strength of theirs which shouldn't be neglected, because it would be relevant in practical use. Hence general comparisons of encoder speed should include disk I/O. If Yalac performs better without disk I/O, that would be interesting but not too important for the performance under real-world conditions. (I am voting against a test setup which would possibly favour Yalac...)

I hope it's clearer now. Have to learn some better English...

Yalac - Comparisons

Reply #110
Thanks for the clarification, Thomas.  Your English is infinitely better than my German.

From what I understand though, CPU-only timings are also relevant in "real world" scenarios, when decoding to memory while playing, rather than decoding to file (WAVE).  Therefore, both timings are of interest: CPU-only/decoding to memory, and CPU+IO/decoding to file.

Thankfully Timer will report both.

I may change my database so that both timings can be reported.  I would certainly like to, but time is an issue as always.  I thought I had concluded all my little projects, but it always seems there's another just around the corner.  I suppose I should stop peering around corners so often...

Yalac - Comparisons

Reply #111
From what I understand though, CPU-only timings are also relevant in "real world" scenarios, when decoding to memory while playing, rather than decoding to file (WAVE).  Therefore, both timings are of interest: CPU-only/decoding to memory, and CPU+IO/decoding to file.

I forgot about this.

Yalac - Comparisons

Reply #112
Current Progress (V0.07)

In my last report I was talking about a speed advantage of 50 percent for the new preset FASTEST over FAST. But I have decided against it.

I did play around with some parameters to speed up FAST. I could achieve more than 50 percent more speed with a compression penalty of about 1 percent. But one single parameter variation gave me a 20 percent speed-up with a penalty of only 0.1 percent. Obviously a far better speed/compression trade-off.

Therefore I dropped the FASTEST preset. V0.07 will instead contain a 20 percent faster FAST preset.

Then I looked at the other presets. I always had the feeling that they were not very well balanced. After many evaluations of existing test data and new parameter variations I came up with a new configuration of the presets:

- Presets: FAST, NORMAL, HIGH, EXTRA, INSANE.
- INSANE sets every accessible parameter to the maximum to evaluate the currently possible maximum compression (Joseph Pohm may call it I2...).
- With the exception of INSANE, each preset should be only two times slower than its predecessor. I find this simple rule more user-friendly.
- Therefore NORMAL and HIGH needed a speed-up. HIGH is now about 80 percent faster than before.
- EXTRA now uses the parameter configuration of the old HIGH preset plus the new improvements to the stereo decorrelation.

To make it clear: I didn't perform any code optimizations. The new preset configuration is only based upon new selections of the underlying encoder parameters.

On my test corpus the new presets are considerably faster than the old ones, with a compression penalty of about 0.1 percent. I hope that your evaluation of V0.07 will confirm this. Otherwise I may have to perform some more fine-tuning.

I am not sure what more I will do for V0.07. Possibly I will drop some other plans for now (for instance the possibility to vary disk I/O parameters), because there is already enough to be evaluated.

Yalac - Comparisons

Reply #113
Questions regarding release date and license have been moved to Yalac: Miscellaneous Questions.

FYI, we now have:

Yalac - Comparisons

Reply #114
V0.07 is done

Changes:

Compression algorithms:

- New option for the channel decorrelation to detect bigger lags between channels. Without this option, files with lags outside the search range of the standard decorrelation algorithm cannot use channel decorrelation. Don't expect too much: on my test corpus it only helps about 1 of 20 files. And it slows encoding and (in case of a bigger lag) even decoding down. Sometimes this option slightly (less than 0.05 percent) decreases compression efficiency.

Presets:

- Reconfiguration of the existing presets and inclusion of the new preset EXTRA.
- With the exception of INSANE, each preset should be only two times slower than its predecessor. I find this simple rule more user-friendly.
- INSANE sets every accessible parameter to the maximum to evaluate the currently possible maximum compression (Joseph Pohm may call it I2...).
- EXTRA now uses the parameter configuration of the old HIGH preset plus the new improvements to the stereo decorrelation.
- Speed-up of the presets FAST (20%), NORMAL (35%) and HIGH (60%). The compression efficiency should be only 0.1 percent worse than before.

Internals:

- New way to scale sample values down for the reduced-precision arithmetic of my 16-bit DSP. Can slightly change the compression ratio of individual files.

GUI:

- Debug option "No Output" of the GUI-Version now disables generation of output files for both encoding and decoding.

Command line:

- Specify the new switch "-debug1" to disable file output when encoding or decoding.

- New test cases to evaluate the new channel decorrelation option "Extra lag":
Code:
-c0 = Off
-c1 =  8
-c2 = 16
-c3 = 32
-c4 = 64


Other:

- Clean-up of source code. Could introduce new errors...
- File format changed!

Release:

I hope to send the new version to the following testers within the next 24 hours:

Destroid
Josef Pohm
Shade[ST]
Synthetic Soul

Only reason for this selection: I haven't heard from the other testers within the last 10 days and I don't want to fill their mailboxes with new versions they may not currently need.

Any of them can send me an email anytime, and I will send the current version.

What should be tested:

- Comparison with V0.06: Speed and compression performance of presets FAST, NORMAL, HIGH and EXTRA.
- If the new preset EXTRA performs better than HIGH of V0.06, then it would make sense to further evaluate the new channel decorrelation option "Extra lag". You may try preset EXTRA with bigger lags: -c3 or -c4 for 32 / 64. The protocol file contains a new section with a distribution of the lags the encoder has used.
- A verification of the decoded files makes most sense for preset EXTRA. There is no need to verify the output of the other presets.

Plans for V 0.08:

- I am still working on the optimization of a new filter algorithm, which can significantly improve compression. Probably the next version will contain a first, suboptimal implementation.

- Some complex modifications of my frame partitioning may provide a small increase in compression efficiency. I am not quite sure if it is worth the considerable amount of work.

  Thomas

Yalac - Comparisons

Reply #115
What should be tested:

- Comparison with V0.06: Speed and compression performance of presets FAST, NORMAL, HIGH and EXTRA.
- If the new preset EXTRA performs better than HIGH of V0.06, then it would make sense to further evaluate the new channel decorrelation option "Extra lag". You may try preset EXTRA with bigger lags: -c3 or -c4 for 32 / 64. The protocol file contains a new section with a distribution of the lags the encoder has used.
- A verification of the decoded files makes most sense for preset EXTRA. There is no need to verify the output of the other presets.


I am writing the batch file for a full-scale test/comparison now for 0.06 vs. 0.07 and all the other major compressors.  I am going to assume that EXTRA will be tested four times using -c[n] variants. If it's worth bothering, would testing INSANE with -c3 and -c4 be of interest?

And yes, I'll be adding FC /b verification for the EXTRA settings.
"Something bothering you, Mister Spock?"

Yalac - Comparisons

Reply #116
I am writing the batch file for a full-scale test/comparison now for 0.06 vs. 0.07 and all the other major compressors.  I am going to assume that EXTRA will be tested four times using -c[n] variants. If it's worth bothering, would testing INSANE with -c3 and -c4 be of interest?

Wow! That's very exciting!

If -c3 and -c4 don't help EXTRA, there would be little to expect for INSANE. But if you can do it automatically...

Yalac - Comparisons

Reply #117
Another album comparison benchmark.  This album is rock with multiple types of instruments (acoustic and electric guitar, electric bass, woodwinds, drums) and monologue and chorus vocals.
Code:
King Missile - Mystical S**t/Fluting on the Hump  697,725,548 bytes (65:55)
===========================================================================
name/params            Ratio   EncTime/CPU%   DecTime/CPU%
---------------------  ------  -------------  -------------
Yalacc 0.06 -p0        55.04%  76.54x / 76%   92.42x / 55%
Yalacc 0.07 -p0        55.16%  84.01x / 72%   93.40x / 56%
MAC 4.01 beta2 -c1000  55.56%  66.31x / 96%   51.70x / 85%
FLAC 1.1.2 --fast      62.60%  86.09x / 68%   83.15x / 52%
WavPack 4.3 -f         58.03%  90.62x / 74%   87.16x / 60%
OFR 4.520 --mode fast  54.80%  24.70x / 87%   34.41x / 98%
---------------------  ------  -------------  -------------
Yalacc 0.06 -p1        54.73%  33.27x / 94%   87.80x / 60%
Yalacc 0.07 -p1        54.76%  43.49x / 90%   84.83x / 60%
MAC 4.01 beta2 -c2000  54.60%  49.61x / 96%   42.71x / 92%
FLAC 1.1.2 (default)   58.12%  59.22x / 83%   85.89x / 57%
WavPack 4.3 (default)  57.15%  77.69x / 79%   80.53x / 67%
OFR 4.520 (default)    54.10%  17.88x / 91%   24.76x / 99%
MP4ALS RM17 (default)  56.45%  29.76x / 95%   55.96x / 63%
LA 0.4 normal          53.05%   6.20x / 99%    7.72x / 99%
---------------------  ------  -------------  -------------
Yalacc 0.06 -p2        54.57%  11.92x / 99%   83.29x / 61%
Yalacc 0.07 -p2        54.63%  17.59x / 98%   82.40x / 64%
MAC 4.01 beta2 -c3000  54.32%  43.41x / 98%   38.13x / 93%
MAC 4.01 beta2 -c4000  53.90%  23.97x / 99%   23.40x / 98%
FLAC 1.1.2 --best      57.95%  12.12x / 99%   88.97x / 60%
WavPack 4.3 -h         55.27%  52.10x / 89%   66.47x / 87%
OFR 4.520 --best       53.84%   4.85x / 95%    6.57x / 99%
MP4ALS RM17 -7         55.07%   0.99x / 99%   12.57x / 96%
LA 0.4 high            52.89%   4.61x / 99%    5.45x / 99%
---------------------  ------  -------------  -------------
Yalacc 0.06 -p3        54.51%   4.47x / 99%   84.66x / 62%
Yalacc 0.07 -p3 *      54.57%  11.58x / 99%   84.20x / 63%  (FC /b = OK)
Yalacc 0.07 -p3 -c3 *  54.57%  11.43x / 99%   84.00x / 62%  (FC /b = OK)
Yalacc 0.07 -p3 -c4 *  54.57%  11.16x / 99%   84.04x / 62%  (FC /b = OK)
Yalacc 0.07 -p4        54.51%   3.36x / 99%   85.72x / 63%

* Denotes modes with files compared after encoding and decoding


No noticeable difference in EXTRA using -c3 and -c4 except for encoding times, so I didn't bother using the switches on INSANE mode. Perhaps different material would make a noticeable difference. As noted above, the EXTRA mode had no problems with file integrity, as proven by file comparison.

There are some nice improvements in performance here. Awesome!

Oh yeah  -- System A64 3000+, 512MB, Caviar 80GB, Win2K
"Something bothering you, Mister Spock?"

Yalac - Comparisons

Reply #118
http://synthetic-soul.co.uk/comparison/yalac/

As before, all Yalac runs (0.02-0.07) can be viewed by using http://synthetic-soul.co.uk/comparison/yalac/?all=1.

NB: This table uses Timer's Global Time, which equates to the time reported by TimeThis (CPU+IO).  I do have the Process times recorded as well, but I need to get some time to create version 2 of my system to report them.  If I can't find time soon I may just upload a supplementary spreadsheet or two.

My scripts now automatically check the MD5 on decode against the source MD5, and all hashes match.

Yalac - Comparisons

Reply #119
Many thanks again to Destroid and Synthetic Soul!

For me the reconfiguration of the presets looks advantageous. There are some nice speed improvements for FAST, NORMAL and HIGH and the reduction in compression efficiency is on average less than 0.1 percent. I would like to keep the new presets. What do you think?

The extended search range of the stereo decorrelation (preset EXTRA) seems useless. Nothing really new for me: about 95 percent of my work on optimizations of the compression efficiency was useless. But you have to try it before you know it... I will probably remove the extra lag option from the encoder. Fortunately there is another, currently hidden option within the frame decorrelation, aimed at a different purpose, which can replace the extra lag option to some extent if it is really needed.

I am currently working on the new prefilter option. Earlier (when applied to Yalac 0.01) it gave 0.35 percent better compression, but now only 0.2 percent on average... Maybe some other optimizations of Yalac are already providing some of the benefits of the prefilter. On the other hand there are some files which compress more than 4 percent better with the filter. Possibly there is some room for improvement left.

It seems as if I am close to the maximum compression my codec design can achieve. In this case I would call the 0.2 percent improvement significant...

That doesn't mean that there will be no more improvements. For instance the frame partition calculator isn't fully optimized yet.

Yalac - Comparisons

Reply #120
I am currently working on the new prefilter option. Earlier (when applied to Yalac 0.01) it gave 0.35 percent better compression, but now only 0.2 percent on average... Maybe some other optimizations of Yalac are already providing some of the benefits of the prefilter. On the other hand there are some files which compress more than 4 percent better with the filter. Possibly there is some room for improvement left.

It seems as if I am close to the maximum compression my codec design can achieve. In this case I would call the 0.2 percent improvement significant...

That doesn't mean that there will be no more improvements. For instance the frame partition calculator isn't fully optimized yet.

TBeck, are you a good wizard or something?
I will definitely start using your codec right after the fb2k plugin comes out!

Yalac - Comparisons

Reply #121
Thomas, as per previous request, 0.07 values have now replaced 0.06 values in the multi-encoder comparison:

http://www.synthetic-soul.co.uk/comparison/lossless/

As previously, non-standard settings (-cX) can be viewed by appending ?all=1 (and removed by using ?all=0):

http://www.synthetic-soul.co.uk/comparison/lossless/?all=1


Personally, I like the way that the new presets have a good spread of encoding speed (roughly halving, except for INSANE which just goes all out).  Compression does not alter drastically, so you can pick a preset according to your speed requirements, and know that it won't make a horrendous difference, and that the difference will be roughly proportional to the speed trade-off.  It seems a very tidy way of doing things, and easy for a user to find the best trade-off.

Yalac - Comparisons

Reply #122
Hi Thomas. You are really a wizard.

As for the presets, they look perfectly balanced except for 0.07 EXTRA. Looking at Synthetic Soul's and Destroid's results, 0.06 HIGH gives better or equal compression compared to 0.07 EXTRA while being 5-10% faster.

Waiting for the FooBar plugin.

Yalac - Comparisons

Reply #123
Yalac fast mode beats them all when you sort by encoding rate / size ratio, but Monkey's normal is faster AND compresses better than Yalac...  I don't know if anything is to be done about this, but right now fast mode seems fine.  Maybe you could concentrate on the other modes...

Yalac - Comparisons

Reply #124
Quote
Monkey's normal is faster AND compresses better than Yalac...
It decodes twice as slowly though.

I'm not sure that you can compare a preset name so directly.  Yalac can't expect to compete in every league.

The difference in compression and encoding rate is minimal.  However, trying to equal Monkey's Audio in either value would be to the detriment of the other... unless Thomas can squeeze another 0.3-0.4% compression out, so that he can afford to speed encoding up a little.

Hey, if he can do it then I'll not complain!

I just think that tackling Monkey's Audio head-on is unrealistic, especially if you look at Monkey's High surpassing Yalac's INSANE marginally in compression, and substantially in encoding rate.