
Topic: Is MPC better than mp3?

Is MPC better than mp3?

Reply #50
Quote
That provides an extra layer of psychological protection for those who worry excessively about transparency and problem samples wreaking havoc on said transparency.
LOL, if I worry about transparency I'll go lossless.

No, but seriously.

Suppose I use a codec "Z" (letter chosen to hopefully not denote any known lossy codec) which achieves transparency at level "50" out of "100", giving a bitrate of ... let's say 100 kbps.

If I need to be more sure of transparency, then I can bump up the setting to level "60", giving a bitrate of 110 kbps.

Now let's say there's another codec called "W" (again, hopefully not denoting any known lossy codec) that achieves transparency at level "Q" out of "Z", giving a bitrate of ... let's say 120 kbps. The next quality level is "R" at 130 kbps.

Why should I use "W" at level "R" instead of "Z" at level "60"?

Is MPC better than mp3?

Reply #51
You lost me at variable "Q." 

Seriously, that does make sense...format bias comes into play as well.  But indeed there's no reason to bother with "W" and throw away bits if ABXing has shown you've reached perpetual transparency at a lower bitrate with codec "Z."

Is MPC better than mp3?

Reply #52
lol!
when I wrote this on the 4th of April, kwanbis afterwards repeated his post showing 2 different multiformat tests to compare...

weird...
April jokes?

I knew (and posted) during the pre-discussion of the latest 128k multiformat test that cross-test references are necessary and interesting in general,
and that without them, sooner or later somebody would come along and connect the old tests (e.g. with MPC versions) and the new test (e.g. without MPC) to show, "somewhat", haha, a comparison of MPC as measured/ranked in the old tests, pitting its old "value" against the formats in the new tests and their new "values".
You made my day!

Is MPC better than mp3?

Reply #53
lol!
when I wrote this on the 4th of April, kwanbis afterwards repeated his post showing 2 different multiformat tests to compare...

weird...
April jokes?

I knew (and posted) during the pre-discussion of the latest 128k multiformat test that cross-test references are necessary and interesting in general,
and that without them, sooner or later somebody would come along and connect the old tests (e.g. with MPC versions) and the new test (e.g. without MPC) to show, "somewhat", haha, a comparison of MPC as measured/ranked in the old tests, pitting its old "value" against the formats in the new tests and their new "values".
You made my day!


So, you're not aware that results can be extrapolated between tests?

Is MPC better than mp3?

Reply #54
when I wrote this on the 4th of April, kwanbis afterwards repeated his post showing 2 different multiformat tests to compare...

As far as I understand it, the only problems with doing so were:

1) different samples
2) different people

But if we consider that

1) they were considered problem samples
2) statistically, it shouldn't matter

then extrapolating between the tests should be OK, as both relate to the perceived quality of an encoded file against the original.

Is MPC better than mp3?

Reply #55
Sometimes I wonder...

... if a codec (name your favorite here, I have mine) already performs transparently at a lower bitrate...

... then why encode at a higher bitrate?


OK, I'm only relatively new here, but I'd like to offer two answers to the question posed by pepoluan.  If I'm talking rubbish then I'd be very grateful if a more knowledgeable / experienced HA member could point out my errors.

Firstly, transparency isn't just a function of how good your ears are.  It's also affected by how good your equipment is.  If you upgrade your equipment, music that previously sounded transparent might no longer sound transparent.  But if you encode at a higher bitrate than you might have thought necessary, you have a bit of insurance in that direction.

Secondly, if you want to transcode down to a lower bitrate (e.g. to use on a DAP) then a higher bitrate to start with will hopefully give a better end result.

So for example I tend to encode music to MP3 using LAME -V0.  Given the quality of my audio equipment, and the fact that I don't often just sit and listen carefully to music, I strongly suspect -V2 would be transparent to me for most if not all practical purposes.  (I haven't done any serious listening tests to confirm this, and I can't be bothered to.  Life's too short.)  But I can afford the extra 25% storage space required by -V0, and that reassures me that next time I do want to listen hard to something I won't be bothered by compression artefacts.  Also, when I want to transcode down to -V5 for use on my Walkman, I expect to get better-sounding results transcoding down from -V0 than from -V2.  (Again, I haven't tested this and I don't want to test it.)

Basically that extra 25% storage space buys me a form of insurance.  Does this make any sense at all?
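
As a rough back-of-the-envelope check on that "extra 25%" figure, here is a small sketch. The average bitrates for the LAME presets are assumed ballpark values (commonly quoted approximations, not measurements), and actual VBR bitrates vary with the material.

Code:
# Rough storage estimate for a music library at typical LAME VBR settings.
# Average bitrates below are assumed ballpark values, not measured ones.
AVG_KBPS = {"-V0": 245, "-V2": 190, "-V5": 130}

def library_size_gb(hours: float, kbps: float) -> float:
    """Approximate size in gigabytes for `hours` of audio at `kbps` kbit/s."""
    bits = kbps * 1000 * hours * 3600
    return bits / 8 / 1e9

hours = 500  # hypothetical library size
for preset, kbps in AVG_KBPS.items():
    print(f"{preset}: ~{library_size_gb(hours, kbps):.1f} GB")

With these assumed averages, -V0 comes out roughly 30% larger than -V2, which is in the same ballpark as the "extra 25%" mentioned above.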

Is MPC better than mp3?

Reply #56
It makes some sense, except that when V2 is clearly stuffing up, V0 is equally useless. With small differences V0 might be a little better. I've had numerous samples that had problems with V5, and often V4 or V3 made little difference. I've had one that was ABXable at V2, not at V1, yet ABXable again at V0! I've even had one where V5 was better than V4.

What I am now sure of is that when the psymodel is doing funny things, quality isn't increasing and the bits are wasted. MPC is better, but it's still the same in principle. Non-perceptual codecs like WavPack lossy and OptimFROG DS don't have this problem at all, though the bitrate cost is higher.

Is MPC better than mp3?

Reply #57
Firstly, transparency isn't just a function of how good your ears are.  It's also affected by how good your equipment is.  If you upgrade your equipment, music that previously sounded transparent might no longer sound transparent.  But if you encode at a higher bitrate than you might have thought necessary, you have a bit of insurance in that direction.
Well, it actually boils down to your ears then, i.e. whether you can hear the difference between the lossy file and the lossless original. The equipment only helps.

But anyway, of course I am talking about the same equipment here. It is absolutely pointless to compare my iPaq2210's output (fed into an amp & speakers) with my desktop computer's output...

Quote
Secondly, if you want to transcode down to a lower bitrate (e.g. to use on a DAP) then a higher bitrate to start with will hopefully give a better end result.
Repeat after me: Transcoding from lossy to lossy - bad. Transcode from lossless to lossy - good.

Is MPC better than mp3?

Reply #58
No.
Extrapolation between those tests (and not only those; please study some theory of experimental design) is merely amusing, not scientific,
and not worth posting.
With all the scientific, theoretical and experimental standards of HA, this kind of "extrapolation" is not possible.

You could "extrapolate" old tests with new tests, if you would have included the "comparable anchor format", ie. a tested encoder of an old test together with the new test probants.
Then you could say, that eg. 4.7 rating of new test matches 4.5 rating of old test or whatever, and to watch, how a relative ranking of newer formats has developed towards older formats/encoders.
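
A minimal sketch of what that anchor-based comparison could look like, using made-up ratings (hypothetical numbers, not taken from any real HA test), where both tests share the same hypothetical anchor encoder:

Code:
# Shift the new test's scores by the difference in the shared anchor's score,
# then compare. All numbers here are invented for illustration.
old_test = {"anchor_codec": 4.50, "mpc": 4.60}                    # hypothetical
new_test = {"anchor_codec": 4.70, "aac": 4.74, "vorbis": 4.65}    # hypothetical

offset = old_test["anchor_codec"] - new_test["anchor_codec"]      # -0.20

adjusted_new = {name: round(score + offset, 2) for name, score in new_test.items()}
print(adjusted_new)   # {'anchor_codec': 4.5, 'aac': 4.54, 'vorbis': 4.45}

Only after an adjustment like this, and with plenty of caution about its assumptions (e.g. that the scale shift is uniform across the quality range), would a relative ranking across the two tests even begin to make sense.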

Please reread my posts during the preparation of the 128k multiformat test: I asked for some "comparable anchor" to be included in the new test, but the conductors didn't take up the idea.
Nobody (neither kwanbis nor Roberto) argued, with any fact or reason, that a comparable anchor is unnecessary for comparing the new test with the old one.

So, coming now with a comparison between old and new tests reveals that the same people who crept into MPC threads in the past to argue against MPC, back when MPC still held the crown alone, are continuing their propaganda now. Sorry, but fitting together those old and new test graphs sounds like cheap marketing.

Is MPC better than mp3?

Reply #59
No.
Extrapolation between those tests (and not only those; please study some theory of experimental design) is merely amusing, not scientific,
and not worth posting.
With all the scientific, theoretical and experimental standards of HA, this kind of "extrapolation" is not possible.


It is perfectly possible, and has been done several times before. The anchor indeed helps, but if anything, my tests show that rankings have been consistent across tests whether or not you use an anchor as reference.

You are just nitpicking here. If you want to seriously nitpick, you could decide that tests can only be compared ("extrapolated") if you used at least one encoder in common, the same listeners, the same samples, and the same conditions. That's not only unfeasible, it is impossible, as your hearing isn't exactly the same from day to day, and unmeasurable factors like mood and tiredness play a very important role in your testing abilities.

We are trying to compromise here for the sake of information.
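
One way "rankings have been consistent across tests" could be checked is with a rank correlation; a small sketch on invented scores (hypothetical numbers, not from any actual listening test) follows:

Code:
# Spearman rank correlation between two hypothetical tests that rated the
# same four codecs. Numbers are made up for illustration.
from scipy.stats import spearmanr

test_a = [4.6, 4.4, 4.1, 3.8]   # mean ratings in test A (hypothetical)
test_b = [4.7, 4.5, 4.3, 3.7]   # mean ratings in test B (hypothetical)

rho, p = spearmanr(test_a, test_b)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

A rho close to 1 would mean the two tests order the codecs the same way even if the absolute 4.x values are not directly comparable, which is roughly the "consistent rankings" claim above.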

Is MPC better than mp3?

Reply #60
I'm not a statistics wizard, but I don't think user is nitpicking. I really think that having different anchors makes extrapolation much harder than not having the same listeners, samples and conditions does, provided these are representative of a real-world scenario in both tests.

Maybe extrapolation could be done, but not as casually as just putting both graphs side by side.

Is MPC better than mp3?

Reply #61
You are just nitpicking here. If you want to seriously nitpick, you could decide that tests can only be compared ("extrapolated") if you used at least one encoder in common, the same listeners, the same samples, and the same conditions. That's not only unfeasible, it is impossible, as your hearing isn't exactly the same from day to day, and unmeasurable factors like mood and tiredness play a very important role in your testing abilities.

We are trying to compromise here for the sake of information.


Thank you,
you have written down why cross-comparisons (with absolute differences from 4.x to 4.y) between tests are difficult, if not impossible, without my proposed "comparable anchor", which would allow a relative ranking of formats between old and new tests, with careful interpretation.
It is obvious who is nitpicking.
Information based on a pseudo-scientific-looking graph? At least now we read that the goal of this obscure graph is supposed to be "information".
The yellow press compromises its "information" sometimes, too. Not a serious way to inform people.

Is MPC better than mp3?

Reply #62
Thank you,
you have written down why cross-comparisons (with absolute differences from 4.x to 4.y) between tests are difficult, if not impossible, without my proposed "comparable anchor"


Nope, I said it would be difficult, if not impossible, if one nitpicked as badly as you do. Don't try to distort what I said.

Quote
Information based on a pseudo-scientific-looking graph?


Again, if you want to nitpick so badly (as you obviously do), even my tests were pseudo-scientific, as they weren't formally conducted as per the ITU guidelines.

So, feel free to ignore all my tests and forget these things happened. Have a nice day.

Is MPC better than mp3?

Reply #63
Dear friend,
you have conducted tests as conductor.
^^that's nitpicking

Logic tells us who is nitpicking here.

See what m0rbidini posted yesterday, 06:29 PM:

I'm not a statistics wizard, but I don't think user is nitpicking. I really think that having different anchors makes extrapolation much harder than not having the same listeners, samples and conditions does, provided these are representative of a real-world scenario in both tests.

Maybe extrapolation could be done, but not as casually as just putting both graphs side by side.

Is MPC better than mp3?

Reply #64
Still, even the listening tests done by rjamorim do not justify the comment that "other formats are struggling to reach the same quality". MPC already tied with Vorbis and QT AAC back in 2003 at 128 kbps. And logic suggests that at higher bitrates, for most people, differences between formats will become smaller, not bigger.

So what point are you actually trying to defend?
"We cannot win against obsession. They care, we don't. They win."

Is MPC better than mp3?

Reply #65
No.
Extrapolation between those tests (and not only those; please study some theory of experimental design) is merely amusing, not scientific,
and not worth posting.
With all the scientific, theoretical and experimental standards of HA, this kind of "extrapolation" is not possible.


It is perfectly possible, and has been done several times before. The anchor indeed helps, but if anything, my tests show that rankings have been consistent across tests whether or not you use an anchor as reference.

You are just nitpicking here. If you want to seriously nitpick, you could decide that tests can only be compared ("extrapolated") if you used at least one encoder in common, the same listeners, the same samples, and the same conditions. That's not only unfeasible, it is impossible, as your hearing isn't exactly the same from day to day, and unmeasurable factors like mood and tiredness play a very important role in your testing abilities.

We are trying to compromise here for the sake of information.


Well, you've lost me there. Where have your tests shown this? (That's a request for information, not a rhetorical remark)

Extrapolating results between two tests without any common anchor looks pretty hazy to me, and it's not something I'd accept as solid in any way without some strong indication that in the given circumstances it's a valid compromise to make.

Is MPC better than mp3?

Reply #66
No.
Extrapolation between those tests (and not only those; please study some theory of experimental design) is merely amusing, not scientific,
and not worth posting.
With all the scientific, theoretical and experimental standards of HA, this kind of "extrapolation" is not possible.

You could "extrapolate" old tests with new tests, if you would have included the "comparable anchor format", ie. a tested encoder of an old test together with the new test probants.
Then you could say, that eg. 4.7 rating of new test matches 4.5 rating of old test or whatever, and to watch, how a relative ranking of newer formats has developed towards older formats/encoders.


I'm sorry, but I know of no formal proof that this or that extrapolation is a valid one and this or that one isn't. If you're looking for black and white, there won't be any.

The conditions for an extrapolation to be valid are pretty much the same as those required for the test itself to be valid. There must not be a way to show how it could, in a manner that has a reasonable likelihood of occurring, lead to wrong results. More abstractly and generally, what determines the goodness of a test is whether its results will lead to consistent improvement. And more specifically again: a test that is not solid will fail to lead to improvement at some point, or at the very least, it can be shown that this would happen.

What people will consider a valid test is also based on the above; but the above is not a black-and-white issue: the likelihood that the results could be flawed can vary, and so can the circumstances under which that can happen. By clearly stating the methodology, you allow everyone to decide for themselves whether they consider the flaws important or not. If you use a good methodology, most people will consider that they are not, and your results will be "accepted".

I wrote the above with this thread directly in mind, but if you think about it, it's exactly what happens in science. If you call it unscientific and funny, you are wrong.

In a discussion, it's valid not to accept a conclusion, an extrapolation or test results. But be aware that any data is still better than no data at all (and that's something different from "data so invalid you might as well toss a coin"). Dismissing a result because of a minor issue is something you can do, but unless you're willing to come up with some results of your own, don't expect people to take you very seriously.

I'd like to see rjamorim's data and reasoning that leads him to believe an extrapolation would be valid. If we see it, we can think about what the flaws could be, how likely they are, and consequently, how much attention this extrapolation should get.

Is MPC better than mp3?

Reply #67
For me, it's like a race: you compare lap times from race 1 to those in race 20, and see that racer X in race 20 had a better time than racer H in race 1, so racer X gets the "record lap". But:

1) Different racers
2) Different cars
3) Different climate

Still, nobody argues about it. As Roberto said (oops, we agree once more), you can compare; if you want to be picky, you would probably find some statistical problem, as with everything done in life. I could argue that you must test 100% of the world's population, or the test has no meaning.

People subjectively listened to some samples and decided to give MPC 4.47 against the originals. Then another group of people did the same and gave 4.74 to iTunes AAC, again against the originals.

EDIT: If both groups were statistically representative samples of the same universe, then universe = universe, so we can treat them as if the same group did the test.
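
A toy simulation of that "universe = universe" intuition, with an invented population of listener ratings (illustrative only, not real data):

Code:
# Draw two independent panels from the same hypothetical population and
# compare their mean ratings.
import random

random.seed(1)
population = [random.gauss(4.5, 0.4) for _ in range(100_000)]  # invented "universe"

panel_1 = random.sample(population, 30)
panel_2 = random.sample(population, 30)

mean = lambda xs: sum(xs) / len(xs)
print(f"panel 1 mean: {mean(panel_1):.2f}")
print(f"panel 2 mean: {mean(panel_2):.2f}")

With representative sampling the two panel means land close to each other (and to the population mean), though each still carries sampling error; that error is exactly what the confidence statistics reported with listening-test results are meant to quantify.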

Is MPC better than mp3?

Reply #68
As Roberto said (oops, we agree once more), you can compare; if you want to be picky, you would probably find some statistical problem, as with everything done in life. I could argue that you must test 100% of the world's population, or the test has no meaning.


I'm sorry, but this is a stupid comparison, because the last sentence is wrong, unless you have some evidence that statistical sampling theory is fatally flawed.

Is MPC better than mp3?

Reply #69
Thanks, Garf, for questioning what leads rjamorim to the opinion that putting both graphs side by side could be a valid extrapolation.
If he had continued defending this assumption, I'd have asked it myself.

With a long time between those listening tests,
different samples,
different encoders (i.e. no anchor encoder),
different people,
or maybe the same people, who have aged between the two tests,
it is very unlikely that one encoder's absolute rating ("4.x") from the old test can be compared with another encoder's rating ("4.y") from another test, as is done in that side-by-side graph.

m0rbidini wrote:
Maybe extrapolation could be done, but not as casually as just putting both graphs side by side.

This could not have been written better.

Even kwanbis wrote, at Apr 2 2006, 03:02 AM (post #12), about his (imo) unlucky 2-graph comparison:
it could be argued that different samples were used ... even different people probably submitted results ... anyway ....


edit addon:

hm, a few posts above, kwanbis compared statistical listening tests with race laps and time measurements.
hmhm.
Any comments (necessary)?

The point of ABX and ABC/HR here has been, and is, that the results are valid for the samples, the tested encoders, the tested people and the test situation as such, and nothing more.
The public multiformat tests, with a bigger group of testers, mirror the ranking of people's general impressions, but only for the actual test conditions.

Is MPC better than mp3?

Reply #70
I'm sorry, but this is a stupid comparison, because the last sentence is wrong, unless you have some evidence that statistical sampling theory is fatally flawed.

Well, statistics have been proven wrong many times. That's why, for example, nobody can predict anything with 100% certainty using statistics.

Is MPC better than mp3?

Reply #71
With a long time between those listening tests,
different samples,
different encoders (i.e. no anchor encoder),
different people,
or maybe the same people, who have aged between the two tests,


Some of these don't matter at all (different people for example), some may not matter, some may matter a lot.

My concern is that I believe people tend to rate the encoders against each other rather than against the rating scale itself ("Perceptible, but not annoying", etc.). I know that I myself have this tendency, and I have participated in the tests.

*BUT* transparency is a hard anchor, since it's always 5.0 in any test. This may be enough to anchor the high bitrate tests together.

I'd just like to see more data so I can reach my own conclusion.



I'm sorry, but this is a stupid comparison, because the last sentence is wrong, unless you have some evidence that statistical sampling theory is fatally flawed.

Well, statistics have been proven wrong many times. That's why, for example, nobody can predict anything with 100% certainty using statistics.


Statistics proven wrong? Eh?

If you say something is true with 95% confidence, you know you will be wrong 5% of the time.

How can you prove that wrong? As I already asked, are you going to rewrite mathematics?

Statistical sampling is a known method, for which we know the pitfalls and accuracy very well. It tells us we don't need to ask the entire population of the world something in order to make a statement about it. You haven't come one inch closer to supporting your original entirely wrong statement, and you won't ever get an inch closer, either.
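
As a small illustration of the 95%-confidence point, here is a sketch computing a confidence interval for a mean rating from made-up panel scores (hypothetical numbers, large-sample normal approximation with z = 1.96):

Code:
# 95% confidence interval for a mean rating, using invented scores.
from statistics import mean, stdev
from math import sqrt

ratings = [4.2, 4.6, 4.8, 4.1, 4.5, 4.9, 4.3, 4.7, 4.4, 4.6,
           4.0, 4.8, 4.5, 4.2, 4.7, 4.6, 4.3, 4.9, 4.4, 4.5]

m = mean(ratings)
se = stdev(ratings) / sqrt(len(ratings))   # standard error of the mean
low, high = m - 1.96 * se, m + 1.96 * se
print(f"mean rating {m:.2f}, 95% CI [{low:.2f}, {high:.2f}]")

The interval quantifies the sampling error: you don't need to poll everyone, but roughly 1 in 20 such intervals will still miss the true mean, which is exactly the 5% mentioned above.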

Is MPC better than mp3?

Reply #72

I'm sorry, but this is a stupid comparison, because the last sentence is wrong, unless you have some evidence that statistical sampling theory is fatally flawed.

Well, statistics have been proven wrong many times. That's why, for example, nobody can predict anything with 100% certainty using statistics.


err,
first kwanbis takes the results of statistics, i.e. the 2 graphs, and mixes them up,
and now he questions the principles of maths & HA?

Just a hint: statistics is not about predicting something with 100% certainty,
but about measuring something with a known confidence of measuring correctly rather than guessing,
i.e. with a probability lower than 100%.
Statistics hasn't been proven wrong.
Maybe certain test setups, the statistics used, and the interpretations were flawed.
(As, with high probability, looks to be the case here with putting those 2 graphs side by side to demonstrate whatever. The 2 individual graphs are not questioned (by me, by HA, or by anybody with sense), only the side-by-side pairing.)

Is MPC better than mp3?

Reply #73
Quote
I'd like to see rjamorim's data and reasoning that leads him to believe an extrapolation would be valid. If we see it, we can think about what the flaws could be, how likely they are, and consequently, how much attention this extrapolation should get.

I agree with this part. rjamorim was quick to write "So, you're not aware that results can be extrapolated between tests?" as if it were a given. My objection, however, is that this "extrapolation" (if you can call it that) is being made without any kind of explanation, just by overlaying the rating graphs.

Quote
The conditions for an extrapolation to be valid are pretty much the same as those required for the test itself to be valid.

Can't you have two perfectly valid tests and not be able to do a simple extrapolation between them (like the one being attempted here)? Aren't there more conditions, like having a valid way to relate the different anchors?

Is MPC better than mp3?

Reply #74
Quote
The conditions for an extrapolation to be valid are pretty much the same as those required for the test itself to be valid.

Can't you have two perfectly valid tests and not be able to do a simple extrapolation between them (like the one being attempted here)? Aren't there more conditions, like having a valid way to relate the different anchors?


Yes, of course. I perhaps didn't explain myself clearly there. I was talking about the conditions for a method of extrapolation to be valid, in the scientific sense; see the following sentence, for example. I don't mean the validity of the test is linked to the validity of an extrapolation (except in the obvious way that it would be hard to make a valid extrapolation out of an invalid test). Just that the way of determining whether it is valid is the same.