
MPC vs OGG VORBIS vs MP3 at 175 kbps, listening test on non-killer samples
post Jul 12 2004, 00:50
Post #1

Group: Members (Donating)
Posts: 3474
Joined: 7-November 01
From: Strasbourg (France)
Member No.: 420


• My access to the internet is now very limited. Therefore, the encoders I'm using for my tests are not necessarily the most recent available on the web. Here, the tests were done when Vorbis 1.1 RC1 was released, but I didn't have access to this information…

• This test is something like a work-in-progress. I plan to add more results with time.


Like many people on this board, my principal motivation for audio encoding lies in the possibility of listening to and enjoying music in high quality directly from the computer, which allows very fast browsing and access to an entire record collection. High-quality encoding is a requirement, security a need. I used successively LAME MP3, then Musepack, and now lossless, which offers the security of digital data identical to the CD.
Nevertheless, lossy encoding is still interesting: modern hard disks are not necessarily big enough for all collections, and I think there are some benefits to feeding an expensive digital jukebox with "better than just good" quality audio encodings, beyond AAC/Vorbis at 128 kbps – fine but perfectible.

The choice of the best lossy encoder isn't really problematic. Musepack (MPC) still wins most approval, and is considered fully transparent with the --standard preset. Some elements nevertheless encouraged me to seriously question this leading position of MPC.

• 1/ by occasionally testing the standard preset of MPC, I discovered that small differences are sometimes audible with ordinary music. Now, if MPC isn't fully transparent at 175 kbps, the format is definitely comparable (which doesn't mean "equal") to other lossy solutions, which suffer from the same criticism.

• 2/ the leading position of MPC was established a long time ago. It was crowned "best lossy format" when the challengers were not very strong: beta versions of Vorbis, LAME < 3.90, suboptimal AAC encoders. But now there are powerful Vorbis encoders (the recent "megamix" merge looks like a serious challenger), optimized AAC encoders (QuickTime CBR and Nero VBR), and mature MP3 solutions (the VBR presets of LAME). The leading position must therefore be questioned again, at least by people able to detect differences.

• 3/ This challenge becomes necessary with the growing number of devices supporting newer audio formats like AAC and Vorbis. MPC is still confined to the computer, or at best to PDAs – and is maybe doomed to this limited usage.

Consequently, I've tried to pit other serious encoding solutions against mpc --standard, in order to form a better, up-to-date and personal idea of the relative quality of this encoder compared to modern and convenient challengers.


Against musepack --standard, I decided to field two formats: MP3 with LAME 3.97a3, and OGG VORBIS with the recent combined encoder named "megamix". Some explanations.

• first, no AAC encoder in the arena. I was tempted to use Nero AAC, but the last version I have has some recognized quality problems and is promised an imminent conceptual death with the third version of Ivan Dimkovic's encoder. No need to test something outdated… I was also tempted to take QuickTime AAC, though it's not VBR and not very flexible (nothing between 160 and 192 kbps: annoying for a fair comparison with MPC --standard). But this encoder is not really suitable, in my opinion, for HQ listening, at least when the user is fond of opera and most of his CDs absolutely need real gapless playback. AAC will be added later, but for now it's absent from this test.

• the choice of the LAME MP3 version is highly problematic too. Three choices are possible: the last "tested" release (3.90.3), the last gold release (3.96), or the last alpha release (3.97 alpha 3). I've decided not to use 3.90.3. I know that for some people this encoder is the best MP3 codec ever released; I also know that for historical reasons 3.90.3 is probably the safest choice. But the difference between the dead 3.90.x branch and the active 3.9x one is not only related to quality: 3.9x is much faster (not a luxury, considering the slowness of the 3.90.x presets), more complete (a full and redesigned VBR preset scale: the nice -V 5 used in Roberto's listening test is, for example, a new feature inaccessible to 3.90.x), and, last but not least, in perpetual evolution. There's nobody to correct flaws in 3.90.x, whereas bugs audible with 3.9x could be corrected or mitigated by Gabriel, Robert, Takehiro and the other developers.
I definitively rejected 3.90.x for another important reason: there's no VBR preset corresponding to the MPC --standard bitrate. --alt-preset standard is clearly too high, --medium too low, the -Y switch a hack, and ABR is probably not efficient enough. With the 3.9x branch, there's an existing preset between --standard and --medium: -V 3. And the -V 3 average bitrate should be close to the MPC --standard one.

So: 3.96 "gold" or 3.97 alpha? I've decided on the alpha release. I know the risks (of regression, but also of progress). But I also know that 3.96 is buggy in --fast mode: that decided me in favor of a corrected release, even if the test doesn't concern LAME's fast mode.

• the choice of the Vorbis version is less problematic. Recent tests were done. CVS/GT3b2 couldn't resist against the aoTuV/GT3/QK32 dream team (aka megamix), at least up to -q 5,99. And even higher, GT3b2 (the previous reference encoder for high bitrates) doesn't really sound superior (except maybe for one family of problems: micro-attacks). I recall that I began this test unaware of the release of 1.1 RC1. This encoder nevertheless seems inferior to "megamix" (the essential but maybe 'excessive' tuning by Garf, used at bitrates > -q 5,00, is apparently missing from this RC1 version). The use of "megamix" is therefore pertinent, and my test is probably not made obsolete by this enjoyable pre-release of oggenc 1.1.

• I haven't forgotten the promising WMApro: I was really pleased, even enthusiastic, about the quality reached by this format with classical music at mid bitrates. Nevertheless, I didn't include it in the test. First, I had to limit the number of competitors. Then, I'm not familiar with this encoder and don't know which setting is best (which VBR mode? And is the WMApro VBR implementation reliable, or isn't 2-pass ABR preferable, etc.?). Last: there's still no hardware device for WMApro (though that's not a reason to exclude an audio format from a test including MPC, it's a disappointing situation).


Mid/high bitrate tests are, for me at least, especially painful. That doesn't mean I hate them – quite the opposite. ...
The samples only concern "classical music", with one exception. I deliberately limited my choice to the music I like. It's not snobbism, and it's not an egocentric attitude: other music is much harder for me to ABX, and my motivation would quickly disappear with music I don't really like. In other words, the impact of these results is VERY LIMITED: they concern my subjectivity (and only mine), and a particular genre of samples (natural instruments, recordings made according to high-fidelity principles – and not the marketing "loudness" one).
There are solo instruments (organ with Dom Bedos; harpsichord with Fuga; trombones with Orion II), instruments with small accompaniment (cymbals with Krall and Marche Royale, drums with Marche Royale, 2nd part), orchestra (Weihnachts-Oratorium and Platée), chorus (Ich bin der Welt abhanden gekommen) and voice ("Dover, giustizia, amor"). Additional information (artist, performer…) is available in the file tags.


Comparing VBR encoders/settings is problematic. The ideal approach is to fix a target bitrate, and then to find the corresponding preset for each encoder. I followed the usual (and, IMHO, the best) methodology: the setting must be chosen against a wide selection of music, and not against the selected samples.
The target bitrate is the average bitrate of the MUSEPACK --standard preset. This average can't be evaluated precisely: it's somewhere between 170 and 180 kbps – approximately 175 kbps. I have verified this value with my classical music library, and people have reported similar values with completely different music.
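One note on how such a library-wide average should be computed: it has to be weighted by duration (total bits over total playing time), not taken as a plain mean of per-file bitrates, or long tracks are undercounted. A minimal sketch – the file sizes and durations below are hypothetical examples, not data from this test:

```python
# Library-wide average bitrate, weighted by duration.
# The (size_bytes, seconds) pairs are hypothetical, not measurements.

def average_bitrate_kbps(files):
    """files: iterable of (size_bytes, duration_seconds) pairs."""
    total_bits = sum(size * 8 for size, _ in files)
    total_seconds = sum(seconds for _, seconds in files)
    return total_bits / total_seconds / 1000.0

library = [
    (5_250_000, 240.0),  # a 4-minute track at 175 kbps
    (2_400_000, 120.0),  # a 2-minute track at 160 kbps
]
print(average_bitrate_kbps(library))  # → 170.0
```

A plain mean of the two per-file bitrates here would give 167.5 kbps, while the preset actually spends 170 kbps over the whole library.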
The remaining task is now to find the corresponding VBR settings for LAME MP3 and Vorbis “megamix”.
The problems are beginning…


• The biggest problem lies in the difference in Vorbis's average bitrate at the same setting, depending on the kind of music encoded. Classical is bitrate-friendly compared to most other stereo, modern material. With the CVS encoder, I estimated this difference at 10–15 kbps on average for -q 5…6. With "megamix" (or other GT3b2-based encoders), the difference might reach 25–30 kbps at the same setting. I don't know what to do…
- by testing Vorbis at a -q value corresponding to 175 kbps for classical but 200–210 kbps for pop/rock, people may blame me for pitting an advantaged Vorbis challenger against Musepack.
- by testing Vorbis at a -q value corresponding to 175 kbps for pop/rock but 140 kbps for classical, the test would be pointless for me (the winner between mpc@175 and vorbis@140 isn't very hard to guess…).
- by testing Vorbis at a half-baked -q value, I fear the test wouldn't correspond to either situation.

• The second big problem is a break in the linearity of the Vorbis quality scale. Between -q 5,99 and -q 6,00, there's a substantial bitrate difference (~10 kbps), which also corresponds to a serious quality difference, at least with Vorbis 1.00 – 1.01 (including GT3b2). aoTuV (and therefore "megamix") is based on the same code, but its tuning tried to correct, or at least minimize, the quality gap between the two settings. I discovered that for classical music, the fair Vorbis setting is very close to this 5,99 value. 6,00 is slightly too high, and I could disadvantage MPC by comparing it to Vorbis -q 6,00. On the other hand, I have the feeling that -q 6,00 would show the full potential of Vorbis, and that the extra 8–10 kbps could be worth it for daily use. Would someone renounce the correction of a quality bug at a low price (a +5% increase in file size), especially with archiving in mind? Seriously, I don't think so.

For all these reasons, I've decided to put Vorbis megamix in the arena at three different settings:
-q 6,00: clearly too "heavy" compared to mpc --standard with non-classical music, but interesting to test against -q 5,99 (to see whether the frontier between these two settings still exists with aoTuV/megamix/1.1)
-q 5,99: the setting whose bitrate matches mpc --standard for classical music (still too heavy with other music), but maybe suboptimal quality for Vorbis
-q 5,50: a more universal setting for an acceptable test against mpc --standard. It will be interesting to compare the quality difference between 5,50/5,99 and 5,99/6,00. I suspect (and fear) a much greater jump for the second pair than for the first one.


I discovered that the bitrate of the -V 3 preset (LAME 3.97a3) is really close to the average bitrate of mpc --standard. This applies at least to classical music (I don't have enough material to measure the average bitrate on other musical genres). -V 3 will therefore be tested.
I've also decided to add -V 2 (--preset standard). Its bitrate is higher, but I really want to see whether this historic flagship preset of LAME MP3 is competitive against Musepack. It will also be interesting to see how LAME -V 2 performs compared to Vorbis megamix – also playable on portable players, but with bad consequences for battery life.


Instead of posting a bitrate table for the short samples used in the test, I prefer posting data about more audio material. Average bitrates for ~20 albums (mostly classical), plus additional data for tracks coming from 50 different CDs (+15 others in mono), are available in the following tables:
OpenOffice: http://audiotests.free.fr/tests/200...RATE175kbps.sxc
Excel: http://audiotests.free.fr/tests/200...RATE175kbps.xls


First comment: I've added 10 points to each note. I had to find a way to prevent misinterpretation of notes which could at first appear excessively severe. I didn't use a low anchor for this test, and slight flaws sometimes appear terribly annoying in such tests, pulling the notes down considerably. By artificially shifting all the notes, I also had in mind to disconnect my notation from the EBU scale (4 = "perceptible but not annoying"; 3 = "slightly annoying", etc.).

With only 10 results, I can't draw strong conclusions. But some elements of a conclusion are now appearing:

MPC --standard has a serious chance of being the best of the three competitors. Eight times in first place, once in second, and never last. A very good performance. Note also that the --standard setting wasn't sufficient to reach the "transparency" level (except for the organ sample, with negative ABX tests). Nevertheless, I could seriously expect full transparency at a higher setting: none of these samples (except maybe the chorus one) showed severe artifacts, just slight differences. It's typically the kind of "problem" that disappears with a higher bitrate. Anyway, I'm impressed, because I didn't think MPC --standard was so far ahead...

LAME MP3 has, in my opinion, little chance of competing with Vorbis and Musepack at ~175 kbps. The new -V 3 setting sits in last place eight times: too many… even with a limited set of samples. That doesn't mean -V 3 sounds bad; it's just inferior to modern lossy formats at a similar bitrate. But with improvements, who knows...
The -V 2 setting (aka --alt-preset standard), however, is apparently competitive, and could fight (and sometimes win) against Vorbis "megamix" -q 5,50 and -q 5,99. The only problem: the bitrates no longer match (195 kbps vs something between 162 and 180 kbps, with classical music only). But it's imperative to point out that LAME -V 2 and -V 3 suffer from huge artifacts (the harpsichord and organ samples are severely wounded to my ears), whereas the Vorbis artifacts were never that bad (except, maybe, on the Orion II sample – micro-attack problems).
In short, LAME -V 2 (--preset standard) is apparently competitive with VORBIS "megamix" -q 5,99, at least on classical music. It would be interesting to see how both contenders perform on other kinds of music at the same settings, which implies a completely different bitrate range (+10–15% for Vorbis, and maybe –x% for LAME).

• I expected a lot from the Vorbis mixture. The progress of "megamix" over the CVS encoder is really impressive, and I really wondered how it would perform against the other challengers. I'm ultimately disappointed, for several reasons:
- First, the coarse-sounding problem of the format is still audible with "megamix" up to -q 5,99. No need to suspect the GT3b2 or QK32 tunings of ruining the benefits of the original aoTuV in this area: the noise problem is particularly audible in "tonal" passages, encoded with pure aoTuV code (bit-for-bit identical samples between the aoTuV encoder and the megamix one). This additional noise is probably not too disturbing in daily listening, but in direct comparison with the other challengers the contrast is still annoying. The problem doesn't really lie in the noise itself, but in the coarse rendering of voices and instruments: lack of subtlety, fat texture… I think this problem is a legacy of internal changes that occurred during the RC3 development of Vorbis, in spring 2002. I believe I established this at ~128 kbps some months ago (correct me if I'm wrong), and I suppose it's still true at ~160–170 kbps, even with aoTuV (based on the same buggy "final" CVS code).
- Second reason to be disappointed: due to this remaining coarseness problem up to -q 5,99, there's still a substantial quality gap between this setting and the rounded -q 6,00. It's my own fault: I expected the aoTuV tuning to erase the existing frontier between -q 5,99 and -q 6,00; the encoder only reduced the gap. There's a ~10 kbps difference between 5,50 and 5,99 but few quality improvements. There's also a 10 kbps difference between 5,99 and 6,00, but huge quality progress is audible. For daily use of the Vorbis encoder, this difference is no real problem: the 10 additional kbps of -q 6,00 are obviously worth it if someone is looking for high quality or archiving, and there's no need to hesitate. But for my test, or any similar one, this difference is much more problematic. On one side, I can't fairly pit mpc --standard against megamix -q 6,00 (the average bitrates no longer match). And on the other side, it's pointless to compare mpc --standard to a handicapped Vorbis setting (5,99). It's like using Musepack at --quality 4.99, which also suffers from problems (and a bitrate gap) that no longer exist at --quality 5.00. A cruel dilemma…
- Third reason to be disappointed: even at -q 6,00 (and 10 extra kbps), megamix apparently couldn't reach the quality of musepack --standard. More samples are of course needed to reinforce this tentative conclusion, but I really fear the solution doesn't lie in the selection of samples, but rather in further development.

As I said at the very beginning, I consider this test a first step. Additional results should, and normally will, complete this first phase. I expect a quick release of the new Nero AAC encoder to add some spice to the test. An external test, pitting Vorbis megamix against the new 1.1, must also be done, in order to be sure that megamix is the best Vorbis encoder at this bitrate.

I'd also like to see this test followed up by other people. It would help to compare different HQ encoders on an empirical basis. Feel free to post your results, even for one sample, in this topic.


I've uploaded all the samples to a temporary link. I can't keep them online for long, so don't wait if you're planning to do personal tests. ABX logs are available in each archive. Samples are in the OptimFROG lossless audio format.

This post has been edited by guruboolez: Dec 29 2005, 21:46
post Aug 22 2004, 13:53
Post #2


...::: 8 additional results :::...


Few changes since the last batch of tests: same hardware, same kind of music (classical), same software. I have nevertheless drawn the conclusions from the past discussion with Pio2001, and fixed the number of trials for all ABX tests: 12 trials, no more, no less. This drastic condition demands a lot of concentration and many rests, and is therefore very time-consuming. The tests are less enjoyable in my opinion (motivation is harder to find). Another consequence: there are now 5.0 [transparent] notations. If I failed [EDIT: "completely failed"] to ABX something, I cancelled my ABC/HR notation and gave a nice 5.0 as the final note. I nevertheless kept a trace of my initial impression in the "general comment".
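The point of fixing 12 trials in advance is that the probability of a given score arising by pure guessing can then be read straight from the binomial distribution, without the inflation that stopping-when-ahead introduces. A small sketch of that standard computation (nothing specific to the ABX software used here):

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of scoring `correct` or better out of `trials`
    ABX trials by guessing alone (each trial is a fair coin flip)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 12))  # ≈ 0.00024
print(abx_p_value(10, 12))  # ≈ 0.0193
```

So 10/12 stays under the usual 0.05 threshold, while 9/12 (p ≈ 0.073) does not.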


I tried to vary the samples (kinds of instruments/signals) as much as possible. There are no known killers. All samples should be 'normal', with no correspondence to typical lossy/perceptual problems (such as sharp attacks and micro-attack signals, for example).

Eight more samples. Two are from harashin:
- Liebestod: opera (soprano voice with orchestra)
- LadyMacbeth: festive orchestra, with predominant brass and cymbals

Six others are mine:
- Trumpet Voluntar: trumpet with organ (noisy recording)
- Vivaldi RV93: baroque strings, i.e. period instruments (small ensemble)
- Troisième Ballet: cousin of bagpipes, playing with a baroque ensemble
- Vivaldi – Bassoon [13]: solo bassoon, with light accompaniment
- Seminarist: male voice (baritone) with a lot of sibilant consonants and piano accompaniment
- ButterflyLovers: solo violin playing alternately with full string orchestra


3.1. Eight new results

3.2. Cumulative results

3.3. Comments about results

No big differences between the two parts of the test:
TEST    MP3_V2  MP3_V3  MPC_Q5  MGX5,5  MGX5,99 MGX6,00
NO.1    12,3    11,9    13,8    12,2    12,3    13,2
NO.2    12,7    11,9    13,9    12,1    12,3    13,4

The average notation is very stable, except maybe for lame --preset standard, which shows slight progress on these eight new samples. The hierarchy is identical. The conclusions are therefore the same as those posted in my first post.


I fed ff123’s friedman.exe application with the following table:

LAME_V2   LAME_V3   MPC_Q5    OGG5.5    OGG5.99   OGG6.00  
2.00      1.50      3.00      2.00      2.00      3.20      
1.50      1.00      4.00      2.90      2.90      3.50      
3.00      2.50      2.80      3.00      3.30      4.00      
3.00      2.00      4.00      2.00      2.00      2.30      
1.50      1.00      4.90      2.50      2.50      3.30      
3.00      1.80      3.80      2.20      2.40      3.00      
1.50      1.20      3.50      1.80      2.30      3.40      
1.50      2.70      4.00      2.00      2.00      2.30      
3.00      2.80      4.20      1.60      1.50      3.00      
3.00      2.30      4.00      2.30      2.50      3.50      
2.00      2.00      4.00      2.50      2.50      3.50      
3.50      2.50      5.00      1.50      1.50      4.00      
1.50      1.00      4.00      2.00      2.50      3.00      
1.40      1.20      3.50      1.70      2.00      2.20      
4.00      3.00      5.00      4.00      4.00      4.50      
2.50      1.30      3.50      1.70      1.70      2.70      
3.00      1.20      3.00      1.40      2.00      2.20      
3.50      3.00      3.00      2.00      2.00      5.00      

[Interesting to note: the conclusions and values computed by the tool are exactly the same if I keep the original notation (e.g. 12.3 instead of 2.30).]
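For readers without friedman.exe, the core of the analysis is reproducible in a few lines of pure Python: rank the six codecs within each sample (ties share the average rank), then form the Friedman chi-square statistic. This is a sketch, not ff123's exact algorithm – it omits the tie correction, so the statistic differs slightly from the tool's, but the mean scores come out identical to the tables below:

```python
# Friedman rank test over the 18 samples x 6 codecs table above.
# Pure Python (no scipy); average ranks for ties, no tie correction.
codecs = ["LAME_V2", "LAME_V3", "MPC_Q5", "OGG5.5", "OGG5.99", "OGG6.00"]
rows = [
    (2.00, 1.50, 3.00, 2.00, 2.00, 3.20),
    (1.50, 1.00, 4.00, 2.90, 2.90, 3.50),
    (3.00, 2.50, 2.80, 3.00, 3.30, 4.00),
    (3.00, 2.00, 4.00, 2.00, 2.00, 2.30),
    (1.50, 1.00, 4.90, 2.50, 2.50, 3.30),
    (3.00, 1.80, 3.80, 2.20, 2.40, 3.00),
    (1.50, 1.20, 3.50, 1.80, 2.30, 3.40),
    (1.50, 2.70, 4.00, 2.00, 2.00, 2.30),
    (3.00, 2.80, 4.20, 1.60, 1.50, 3.00),
    (3.00, 2.30, 4.00, 2.30, 2.50, 3.50),
    (2.00, 2.00, 4.00, 2.50, 2.50, 3.50),
    (3.50, 2.50, 5.00, 1.50, 1.50, 4.00),
    (1.50, 1.00, 4.00, 2.00, 2.50, 3.00),
    (1.40, 1.20, 3.50, 1.70, 2.00, 2.20),
    (4.00, 3.00, 5.00, 4.00, 4.00, 4.50),
    (2.50, 1.30, 3.50, 1.70, 1.70, 2.70),
    (3.00, 1.20, 3.00, 1.40, 2.00, 2.20),
    (3.50, 3.00, 3.00, 2.00, 2.00, 5.00),
]

def ranks(row):
    """Rank within one sample: 1 = worst score; ties share the average rank."""
    return [sum(w < v for w in row) + (sum(w == v for w in row) + 1) / 2
            for v in row]

n, k = len(rows), len(codecs)
rank_sums = [sum(ranks(row)[j] for row in rows) for j in range(k)]
# Friedman statistic: chi-square with k-1 = 5 degrees of freedom under H0.
chi2 = 12 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)
means = {c: sum(row[j] for row in rows) / n for j, c in enumerate(codecs)}

print({c: round(m, 2) for c, m in means.items()})
print(round(chi2, 1))
```

The mean scores reproduce the 3.84 / 3.26 / 2.47 / 2.31 / 2.17 / 1.89 line of the analyses below, and the chi-square statistic lands far above the df = 5 critical value of 11.07, agreeing with the "highly significant" verdict.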

The ANOVA analysis conclusion is:

FRIEDMAN version 1.24 (Jan 17, 2002) http://ff123.net/
Blocked ANOVA analysis

Number of listeners: 18
Critical significance:  0.05
Significance of data: 0.00E+000 (highly significant)
ANOVA Table for Randomized Block Designs Using Ratings

Source of         Degrees     Sum of    Mean
variation         of Freedom  squares   Square    F      p

Total              107         102.73
Testers (blocks)    17          23.75
Codecs eval'd        5          49.48    9.90   28.53  0.00E+000
Error               85          29.49    0.35
Fisher's protected LSD for ANOVA:   0.390


MPC_Q5   OGG6.00  LAME_V2  OGG5.99  OGG5.5   LAME_V3  
 3.84     3.26     2.47     2.31     2.17     1.89  

---------------------------- p-value Matrix ---------------------------

        OGG6.00  LAME_V2  OGG5.99  OGG5.5   LAME_V3  
MPC_Q5   0.004*   0.000*   0.000*   0.000*   0.000*  
OGG6.00           0.000*   0.000*   0.000*   0.000*  
LAME_V2                    0.430    0.137    0.004*  
OGG5.99                             0.481    0.034*  
OGG5.5                                       0.153    

MPC_Q5 is better than OGG6.00, LAME_V2, OGG5.99, OGG5.5, LAME_V3
OGG6.00 is better than LAME_V2, OGG5.99, OGG5.5, LAME_V3
LAME_V2 is better than LAME_V3
OGG5.99 is better than LAME_V3
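The p-value matrix follows from Fisher's protected LSD: once the overall F-test is significant, two codecs differ at the 5% level whenever their mean scores differ by more than LSD = t(0.025, df_error) · sqrt(2·MSE/n). A sketch using the figures from the ANOVA table above; the t critical value for 85 degrees of freedom (≈ 1.988) is hard-coded since the standard library has no t-distribution quantile:

```python
from math import sqrt

n = 18          # blocks (samples) per codec
mse = 0.35      # error mean square from the ANOVA table above
t_crit = 1.988  # t(0.025, df=85), hard-coded approximation

lsd = t_crit * sqrt(2 * mse / n)
print(round(lsd, 3))  # → 0.392 (the tool reports 0.390 from a more precise MSE)

# Example: MPC_Q5 (3.84) vs OGG6.00 (3.26) differ by 0.58 > LSD,
# so that pair is significant, matching the 0.004* entry in the matrix.
```

Tukey's HSD below works the same way but with the studentized-range critical value instead of t, which is why its threshold (0.574) is stricter and ties more pairs.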

And now, the “most statistically correct” tukey-parametric analysis one:

FRIEDMAN version 1.24 (Jan 17, 2002) http://ff123.net/
Tukey HSD analysis

Number of listeners: 18
Critical significance:  0.05
Tukey's HSD:   0.574


MPC_Q5   OGG6.00  LAME_V2  OGG5.99  OGG5.5   LAME_V3  
 3.84     3.26     2.47     2.31     2.17     1.89  

-------------------------- Difference Matrix --------------------------

        OGG6.00  LAME_V2  OGG5.99  OGG5.5   LAME_V3  
MPC_Q5     0.589*   1.378*   1.533*   1.672*   1.956*
OGG6.00             0.789*   0.944*   1.083*   1.367*
LAME_V2                      0.156    0.294    0.578*
OGG5.99                               0.139    0.422  
OGG5.5                                         0.283  

MPC_Q5 is better than OGG6.00, LAME_V2, OGG5.99, OGG5.5, LAME_V3
OGG6.00 is better than LAME_V2, OGG5.99, OGG5.5, LAME_V3
LAME_V2 is better than LAME_V3

According to the last analysis, lame -V3 and vorbis megamix1 -q 5,50/5,99 offer comparable performances (they are tied). In other words, I can't say that megamix at -q 5,99 is superior to lame -V 3, even though 13 samples (72%) are favorable to megamix 5,99, one is a tie (6%) and only four (22%) are favorable to lame V3. If I understand correctly, for me and this set of 18 tested samples, I should admit that lame is tied with vorbis even though the latter is superior on 72% of the tested samples! That's totally insane in my opinion… Maybe there's a problem somewhere, or are 18 samples still not enough?
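The apparent insanity has a statistical reading: Tukey's HSD compares mean scores, so small per-sample margins drown in the variance. A test that only counts wins and losses – the sign test – does find the 13-vs-4 split significant. A minimal sketch (ties dropped, as is usual for the sign test):

```python
from math import comb

def sign_test_p(wins, losses):
    """Two-sided sign test: probability of a split at least this
    lopsided if wins and losses were equally likely (ties excluded)."""
    n = wins + losses
    k = max(wins, losses)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# megamix -q 5,99 vs lame -V 3: 13 favorable, 4 unfavorable, 1 tie
print(round(sign_test_p(13, 4), 3))  # → 0.049
```

So on a pure win/loss count, the 72% figure is (just) significant at 0.05; the rating-based analyses aren't wrong, they simply weigh how much each codec wins by, not how often it wins.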
The ANOVA analysis is slightly more acceptable: it concludes that megamix 5,99 is superior over the 18 samples, but still not megamix 5,50 (66% of favorable samples).

But both analyses conclude:
1/ full MPC -Q5 superiority (even against Vorbis megamix1 -Q6)
2/ megamix1 Q6 superiority over lame -V2 and -V3 and over megamix Q5,50 and Q5,99
3/ LAME V2 > LAME V3

More schematically:
• ANOVA: MPC_Q5 > OGG_Q6 > OGG_Q5,99/Q5,50/MP3_V2/MP3_V3
• ANOVA: OGG_Q5,99 > LAME_V3


In other words, it means that for me, after double-blind tests on non-critical material:
- musepack --standard's superiority is not a legend, and isn't invalidated by the recent progress made by the LAME developers and the Vorbis people.
- lame's --standard preset is still competitive against Vorbis, at least up to -q 5,99, which still suffers from audible and sometimes irritating coarseness. Nevertheless, the quality of LAME MP3 drops quickly below this standard preset – interesting to note for hardware playback.
- vorbis aoTuV/CVS 1.1 begins to be suitable for high quality at -q 6,00, but absolutely not below this floor.


ABX logs are available here:
The eight new log files are merged into one single archive.

The samples are not uploaded. I could do it – is anyone interested?

This post has been edited by guruboolez: Dec 29 2005, 21:51