Topic: Measuring/ranking lossy difference from input+implications for quality

Measuring/ranking lossy difference from input+implications for quality

I'm of the mind that an SNR of 130 dB could be described as "better" than 120 dB, even though I'm confident human beings would be unable to tell.

I'm of the mind a car that can accelerate from 0 to 60 mph in 9 seconds is "better" than one which takes 9.01 seconds, even though no human being could tell, either.

This person's question seems perfectly legit to me, and I would think there must be a "null test" [please forgive my layman's term] comparing the compressed song to the uncompressed original, using the two codecs in question, which can analyze the difference and give an answer in quantifiable terms, at least for the given song.

[If my term "null test" isn't understood by all, I can elaborate, but basically it means playing the two songs simultaneously and combining them, out of phase with each other. The distortion and artifacts introduced by the codec are then exposed in their pure state, without the music to mask them, and the level of these distortions can be given a value relative to the level of the actual song from moment to moment. Do we have such a test? With amplifiers, I believe it was David Hafler who first thought of this in the '70s.]
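
For the curious, here is a minimal sketch of that null test in Python (my own illustration, using the soundfile library; the filenames are placeholders, and it assumes the two files are already sample-aligned, which decoded lossy audio often isn't because of encoder delay):

Code:
# Minimal "null test": subtract the decoded file from the original, which is
# equivalent to inverting one signal and summing the two.
import soundfile as sf

original, rate = sf.read("original.wav")   # placeholder filenames
decoded, rate2 = sf.read("decoded.wav")
assert rate == rate2, "sample rates must match"

n = min(len(original), len(decoded))       # trim to the shorter file
difference = original[:n] - decoded[:n]    # the residue the codec left behind

sf.write("difference.wav", difference, rate)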

Results would be along the lines of: "Codec A had distortions 70 dB below the music level on average, with occasional peaks of -60 dB, whereas Codec B was, um, 'better' because its distortions were 80 dB below the music level on average, with occasional peaks of only -75 dB." Does it matter to a human listener? NO. But that doesn't mean we are no longer allowed to use the term "better", in my opinion.
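
Continuing the sketch above, numbers of exactly that sort could be pulled out with something like this (again only an illustration; the 400 ms window length is an arbitrary choice):

Code:
# Per-window level of the difference relative to the music, in dB, reusing
# the "original", "difference", "rate", and "n" variables from the sketch above.
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

window = int(rate * 0.4)                   # 400 ms windows, arbitrary
levels = []
for start in range(0, n - window, window):
    sig = rms(original[start:start + window])
    err = rms(difference[start:start + window])
    if sig > 0:
        levels.append(20 * np.log10(err / sig + 1e-12))   # guard against log(0)

print(f"average: {np.mean(levels):.1f} dB, worst window: {np.max(levels):.1f} dB")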

Granted, some sorts of distortions and artifacts are more annoying (if discernible) than others, and weighting this toward where the ear is most sensitive (say, around 3.5 kHz) makes sense to me. But I still think an automated system with absolute numbers we can compare, instead of "Go ABX your entire 500 GB music collection yourself to see what the answer is", would help this person out.
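
To make the weighting idea concrete, here is a rough sketch of my own that applies the standard A-weighting curve to the difference signal's spectrum before measuring its level. It is only a crude stand-in for a real psychoacoustic model; actual encoders use full masking models, which are far more involved:

Code:
# A-weighted overall level of a difference signal, as a crude "where the ear
# is most sensitive" weighting. The curve peaks in the few-kHz region.
import numpy as np

def a_weighting_db(freqs):
    # Analog A-weighting magnitude in dB (IEC 61672); freqs in Hz.
    f2 = np.asarray(freqs, dtype=float) ** 2
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * np.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * np.log10(np.maximum(ra, 1e-30)) + 2.0

def a_weighted_level_db(signal, rate):
    # Weight the spectrum, then recover the mean-square power via Parseval.
    spectrum = np.fft.fft(signal)
    freqs = np.abs(np.fft.fftfreq(len(signal), d=1.0 / rate))
    gains = 10.0 ** (a_weighting_db(freqs) / 20.0)
    power = np.sum(np.abs(spectrum * gains) ** 2) / len(signal) ** 2
    return 10.0 * np.log10(power + 1e-30)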

Measuring/ranking lossy difference from input+implications for quality

Reply #1
I'm of the mind that an SNR of 130 dB could be described as "better" than 120 dB, even though I'm confident human beings would be unable to tell.

A single metric, which might not even be correlated well (if at all) with perceived audio quality, is never a good thing. Look at video codec development, where some people heavily rely (relied?) on the PSNR metric. Encoders optimized mainly for that metric can fail spectacularly in real-world perception tests. Noise level is certainly not the only determining factor, not in audio and not in video.
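
(For reference, PSNR is trivially simple to compute, which is part of both its appeal and its weakness; a sketch, where "peak" is the full-scale sample value:)

Code:
# Peak signal-to-noise ratio in dB; peak = 1.0 for float-scaled samples.
import numpy as np

def psnr_db(reference, distorted, peak=1.0):
    mse = np.mean((np.asarray(reference) - np.asarray(distorted)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)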

Granted, some sorts of distortions and artifacts are more annoying (if discernible) than others, and weighting this toward where the ear is most sensitive (say, around 3.5 kHz) makes sense to me. But I still think an automated system with absolute numbers we can compare, instead of "Go ABX your entire 500 GB music collection yourself to see what the answer is", would help this person out.

While I try to adhere to the ToS here at this forum, I can accept that a metric with high correlation to perceived audio quality could be useful, but so far I haven't seen one in the audio world. Also, ABX tests are not as involved as you suggest: just take a small sample of music you listen to, encode it, and conduct the tests. If you don't have enough time, even a single song might be enough. There is also no reason to go OCD over your codec choice; if you find a piece of music which doesn't encode well, just turn up the quality dial a bit on your encoder and see if it helps. It is also no problem to just choose an "overcompensating" higher encoding setting overall, if you don't want to test a lot, but keep in mind that this is wasteful and not a real solution.

Transparency of lossy encoder settings is inherently subjective; there is no objectively perfect setting for every person. You can never be sure that a certain encode is transparent to everyone. That's the reason we tell everyone to just ABX for themselves. And that's also why selling lossy audio is a bad concept to begin with.
It's only audiophile if it's inconvenient.

Measuring/ranking lossy difference from input+implications for quality

Reply #2
@mzil: A set of numbers wouldn't help anyone out; it would simply perpetuate the myth that measuring is a substitute for listening. Also, no one is suggesting that anyone should ABX a 500 GB music collection, and that sort of baseless hyperbole doesn't help the discussion.

The flaws in measuring numerical differences between compressed and uncompressed music as a mechanism for evaluating the quality of lossy music compression have been discussed repeatedly. Try this topic as an example.

Measuring/ranking lossy difference from input+implications for quality

Reply #3
[…] I still think an automated system with absolute numbers we can compare, instead of "Go ABX your entire 500 GB music collection yourself to see what the answer is", would help this person out.
Kohlrabi is right. There’s no point designing some catch-all algorithm to evaluate perceptual quality when there’s no way to determine a priori what is perceptually transparent to any one user.

Quote
I'm of the mind that an SNR of 130 dB could be described as "better" than 120 dB, even though I'm confident human beings would be unable to tell.
This is rather irrelevant, given that SNR is much more easily quantified than is the probability of perceptual transparency.

Quote
If my term "null test" isn't understood by all, I can elaborate, but basically it means playing the two songs simultaneously and combining them, out of phase with each other. The distortion and artifacts introduced by the codec are then exposed in their pure state, without the music to mask them, and the level of these distortions can be given a value relative to the level of the actual song from moment to moment. Do we have such a test? With amplifiers, I believe it was David Hafler who first thought of this in the '70s.
Zooming in:
Quote
without the music to mask them
…which is largely the purpose of lossy encoding.

Again, if the listener cannot tell a difference, it doesn’t matter what the difference signal or any other derived metric says about the ‘quality’. As Kohlrabi said, they can bump down to as low a setting as they can stand and never have to worry about it. That’s the aim of lossy formats.

Measuring/ranking lossy difference from input+implications for quality

Reply #4
I'm of the mind that an SNR of 130 dB could be described as "better" than 120 dB, even though I'm confident human beings would be unable to tell.
You clearly aren't talking about lossy encoding.

Quote
If my term "null test" isn't understood by all, I can elaborate, but basically it means playing the two songs simultaneously and combining them, out of phase with each other.
Such tests are easy to conduct.  Unfortunately they are completely useless.

Quote
The distortion and artifacts introduced by the codec are then exposed in their pure state, without the music to mask them
...which is the entire point of a perceptual coder!!!!!

Quote
that doesn't mean we are no longer allowed to use the term "better"
TOS #8 is quite clear about what one must provide as evidence in order to be allowed to use the term "better" as it relates to sound quality.  Difference signals do not qualify!

Quote
instead of "Go ABX your entire 500 GB music collection yourself to see what the answer is", would help this person out.
What part of "find music with transients" equates to "ABX your whole collection?"

There’s no point designing some catch-all algorithm to evaluate perceptual quality when there’s no way to determine a priori what is perceptually transparent to any one user.
If there were such a test and it worked, it would completely revolutionize lossy encoding. Until that day comes, the sound quality of perceptual encoding must be judged by the ears and nothing more. Graphs, SNR, null tests, and the color of your car are irrelevant metrics.

Measuring/ranking lossy difference from input+implications for quality

Reply #5
Of course, I should have said trying to design and probably failing rather than “designing”.

Measuring/ranking lossy difference from input+implications for quality

Reply #6
I'm of the mind that an SNR of 130 dB could be described as "better" than 120 dB, even though I'm confident human beings would be unable to tell.

A single metric, which might not even be correlated well (if at all) with perceived audio quality, is never a good thing. Look at video codec development, where some people heavily rely (relied?) on the PSNR metric. Encoders optimized mainly for that metric can fail spectacularly in real-world perception tests. Noise level is certainly not the only determining factor, not in audio and not in video.

[emphasis mine]

Quote
This is rather irrelevant, given that SNR is much more easily quantified than is the probability of perceptual transparency
- db1989

and
Quote
You clearly aren't talking about lossy encoding.
- greynol.

Correct, I'm not.

Yikes! I clearly shouldn't have given an analogy that had anything to do with audio perception, and should have just used the car acceleration 0-60 mph analogy, since my choice seems to have caused some confusion among several people here. My bad. I was attempting to give examples where instrumentation easily exceeds the limits of human perception, that's all. SNR as it addresses the topic at hand was not my point. Oops. It was a terrible choice on my part, since I see now how people would have thought I really was talking about SNR in particular. It was just a fluky coincidence.
---

TOS #8 is fantastic. I love it. However, it doesn't address the point I was attempting to discuss, because it relates to things which are argued to be perceptible to humans, hence its use of the term "subjective sound quality":

"8. All members that put forth a statement concerning subjective sound quality, must -- to the best of their ability -- provide objective support for their claims. Acceptable means of support are double blind listening tests (ABX or ABC/HR) demonstrating that the member can discern a difference perceptually, together with a test sample to allow others to reproduce their findings. Graphs, non-blind listening tests, waveform difference comparisons, and so on, are not acceptable means of providing support."

I never argued that humans can tell a difference between 9.00 seconds and 9.01 seconds. However, in our endeavour to determine more precisely what is and isn't perceptible, I think having the instrumentation to record such subtle differences (which are beyond what humans can detect) has value and is worthy of discussion. That said, I will do my best to refrain from describing one figure as "better" than another, because what "better" means seems to be a sticking point. I never meant to imply that "better" always means "perceptible to humans", yet it seems to have been taken that way by some here, so I will stop using the term.

Quote
What part of "find music with transients" equates to "ABX your whole collection?"

Rather than listen to any of my own collection at all, I'd be more inclined to ABX "killer samples" that have been selected from a vastly larger library than I have. Also, transients in particular aren't my main concern, but I can't speak for the OP. I can hear the pre-echo problem in some killer samples of electronic music with sharp click sounds but can't say I've ever experienced the same problem with the music I actually listen to.

[I asked in another thread if there was a name for the kind of distortion artifact I'm more concerned with, but wasn't given a more precise answer other than "undercoded", if I recall correctly.]

Measuring/ranking lossy difference from input+implications for quality

Reply #7
A single metric, which might not even be correlated well (if at all) with perceived audio quality, is never a good thing. Look at video codec development, where some people heavily rely (relied?) on the PSNR metric. Encoders optimized mainly for that metric can fail spectacularly in real-world perception tests. ...

Yes, they need work and refinement; however, should we just give up and rely on human testing forever? Or should we try to determine what things humans are keying on, and then learn how to quantify those things using instrumentation? I vote for the latter.

Quote
...I can accept that a metric with high correlation to perceived audio quality could be useful, but so far I haven't seen one in the audio world.
Maybe some day, I hope.

edit to add: I also don't think we need to limit it to a single metric. We could have several working together at once.

Measuring/ranking lossy difference from input+implications for quality

Reply #8
Quote
If my term "null test" isn't understood by all, I can elaborate, but basically it means playing the two songs simultaneously and combining them, out of phase with each other.
Such tests are easy to conduct.  Unfortunately they are completely useless.

That's great news. [That they seem to exist!] I would be interested if there is software which will allow me to do this null test on my own. Might you recommend some for me, a newb, to try out? Thanks.

Measuring/ranking lossy difference from input+implications for quality

Reply #9
Rather than listen to any of my own collection at all, I'd be more inclined to ABX "killer samples" that have been selected from a vastly larger library than I have.

I couldn't care less about a killer sample if it doesn't occur in my library, personally.

I can hear the pre-echo problem in some killer samples of electronic music with sharp click sounds but can't say I've ever experienced the same problem with the music I actually listen to.

hi-hat, snare drum, acoustic guitar
...if I had a penchant for harpsichord (I don't)

Or should we try to determine what things humans are keying on, and then learn how to quantify those things using instrumentation?

...as if this hasn't been done over the evolution of perceptual coding.

I would be interested if there is software which will allow me to do this null test on my own. Might you recommend some for me, a newb, to try out?

Adobe Audition -> mix paste + invert

Knock your socks off!

Measuring/ranking lossy difference from input+implications for quality

Reply #10
I would be interested if there is software which will allow me to do this null test on my own. Might you recommend some for me, a newb, to try out? Thanks.

Audacity is free and can easily do this.  Open the first file, then import the second.  Invert one of them and push play.
Others have already commented on the value of this exercise.  What exactly do you plan to compare?  AAC to MP3 would be particularly meaningless.  FLAC to AAC then FLAC to MP3 is at least curiously interesting.  Let us know what you find out.
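
If you'd rather script it than click around, here is a rough Python sketch of the same exercise (ffmpeg and the soundfile library are assumed to be installed; filenames are placeholders, and encoder delay may leave the decoded file offset by a small, codec-dependent number of samples, so align the files before trusting the residue):

Code:
import subprocess
import soundfile as sf

# Decode the lossy file back to PCM (ffmpeg assumed to be on the PATH).
subprocess.run(["ffmpeg", "-y", "-i", "song.mp3", "song_decoded.wav"], check=True)

original, rate = sf.read("song.wav")       # the lossless source as WAV
decoded, _ = sf.read("song_decoded.wav")

# Trim to the shorter file and null one against the other.
n = min(len(original), len(decoded))
sf.write("difference.wav", original[:n] - decoded[:n], rate)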

Measuring/ranking lossy difference from input+implications for quality

Reply #11
FLAC to AAC then FLAC to MP3 is at least curiously interesting.

If by interesting you mean misleading, then sure.

One word: masking.  I'd put that in 1000 pt. font if I thought it would make a difference.

Measuring/ranking lossy difference from input+implications for quality

Reply #12
Adobe Audition/Cool Edit Pro:
mix paste + invert

Great. I will try a trial version of Cool Edit Pro when I get a chance!
---

Quote
The flaws in measuring numerical differences between compressed and uncompressed music as a mechanism for evaluating the quality of lossy music compression have been discussed repeatedly. Try this topic as an example.


Thanks, Ouroboros. That thread looks like pay dirt. I will read it.

In my attempt to search for the topic, I used the term "null test" only because that's what Hafler called it back in the day, but is there some better terminology I should be using for lossy codec testing using this method?

Measuring/ranking lossy difference from input+implications for quality

Reply #13
If by interesting you mean misleading, then sure.

Ha!  I didn't say that I was going to waste my time doing this.  Anything that gets the OP to do some work to answer his own question to his own satisfaction seems like a step forward.

Measuring/ranking lossy difference from input+implications for quality

Reply #14
I will read it.
My condolences.

In my attempt to search for the topic, I used the term "null test" only because that's what Hafler called it back in the day, but is there some better terminology I should be using for lossy codec testing using this method?
"Difference signal" or "error signal" though they are not necessarily better worse.

Please don't dignify it by referring to it as a "method." It is not a method.

Measuring/ranking lossy difference from input+implications for quality

Reply #15
In my attempt to search for the topic, I used the term "null test" only because that's what Hafler called it back in the day, but is there some better terminology I should be using for lossy codec testing using this method?
"Difference signal," though it is not necessarily better worse.

Please don't refer to it as a "method." It is not a method.

It is a method (or "way") to determine the difference between a lossy compressed and an uncompressed audio file. No inference should be made that the difference found proves the two original files are necessarily audibly different to humans, nor was that implied by me.

Measuring/ranking lossy difference from input+implications for quality

Reply #16
It is a method (or "way") to determine the difference between a lossy compressed and an uncompressed audio file. No inference that the difference found is necessarily audible to humans should be made, nor was implied by me.

The difference, if listened to by itself, may very well be audible, but this is not useful.

Lossy encoding works by taking advantage of masking. Discovering what was masked is trivial. Any encoding that retains more of the masked audio is not doing a better job.


Measuring/ranking lossy difference from input+implications for quality

Reply #18
The difference, if listened to by itself, may very well be audible, but this is not useful...


My wording was poor/sloppy, sorry, so I have edited the post that you just quoted me on, and therefore won't comment on the above.


Quote
Lossy encoding works by taking advantage of masking. Discovering what was masked is trivial.
The difference file generated by the null test (the one I'm talking about) is not exclusively the masked material that was discarded during encoding; it also contains the added artifacts/distortions, such as pre-echo, which were inadvertently introduced into the lossy compressed version and never existed in the original file.

As for it being "useful" to have this at hand, that would depend on what one wants it for. If you are suggesting that I mean it in some way "proves" an audible difference between the original and the lossy compressed version of the audio sample, you'd be mistaken.

Measuring/ranking lossy difference from input+implications for quality

Reply #19
The difference file generated by the null test (the one I'm talking about) is not exclusively the masked material that was discarded during encoding; it also contains the added artifacts/distortions, such as pre-echo, which were inadvertently introduced into the lossy compressed version and never existed in the original file.

Are you sure you're talking about two different things here? I suspect you might find that there's more than a little overlap between "masked material that was discarded" and "added artifacts/distortion".