Because I am always talking about my secret test corpus:
The first section contains CD rips of the beginnings of the following songs:
Song_02 Dire Straits "(Forgot the title)", 1:19
Song_04 Bruce Cockburn (Cover) "Red ships take off...", 1:27
Song_06 King Crimson "Lady of the dancing waters", 1:29
Song_08 Peter Gabriel "Mercy Street", 3:10
Song_10 Thin Lizzy "Whisky in the jar", 1:19
Song_12 Tracy Chapman "Mountains o' things", 1:02
Song_14 Bruce Cockburn (Cover) "Silver wheels", 1:20
Song_16 Kunze "Dein ist mein ganzes Herz", 1:12
The second section contains files from http://www.rarewares.org/test_samples/:
Bach_01 Bachpsichord
Bartok_01 Bartok_strings2
Debussy_01 Debussy
Mahler_01 Mahler
Speech_01 female_speech
I think I have to clarify in which respects I found this selection to be representative. I tested many more songs and found that some of them were most useful for my evaluation purposes, either because they showed differences between my encoder and others, or because they were most sensitive to the encoder parameters I wanted to optimize. For example, the Song_xx files are ordered by the degree to which they benefit from an increase of the prediction order: Song_02 uses up to 384 predictors, while Song_16 doesn't significantly benefit from more than 16 predictors.
So these files should cover the whole spectrum of differences in signal characteristics that affect my encoder.
But that doesn't mean they are representative of a typical audio collection!
I myself forgot this fact, but the first test results in this thread brought it back to my mind, because the performance advantage of yalac was significantly lower than in my tests.
Possibly the files with high predictor orders are overrepresented in my test corpus.
Thomas