Does it make sense to do listening tests of lossy codecs at higher bitrates?

Hi!
Does it make sense to perform listening tests of lossy codecs at higher bitrates (192 kbps and higher), given that many of them are already considered "transparent" there?
If a listening test won't show a difference between various codecs and encoder profiles, are spectral graphs the only remaining measure of audio quality (instead of doing a listening test)? And what should one look for when judging audio quality from a spectral view? Thanks

Does it make sense to do listening tests of lossy codecs at higher bitrates?

Reply #1
Hi!
Does it make sense to perform listening tests of lossy codecs at higher bitrates (192 kbps and higher), given that many of them are already considered "transparent" there?

Perhaps, but as the audible difference approaches zero, the effort needed to separate the good ones from the lesser ones approaches infinity.
Quote
If a listening test won't show a difference between various codecs and encoder profiles, are spectral graphs the only remaining measure of audio quality (instead of doing a listening test)? And what should one look for when judging audio quality from a spectral view? Thanks

You should not judge the quality of a lossy audio codec from some spectral view. Lossy audio codecs are built to satisfy human listening, and that alone. If human listening can detect no flaws, then all is good.

-k

Does it make sense to do listening tests of lossy codecs at higher bitrates?

Reply #2
Hi!
Does it make sense to perform listening tests of lossy codecs at higher bitrates (192 kbps and higher), given that many of them are already considered "transparent" there?


The last public MP3 listening test here had all codecs statistically tied at 128 kbps. There's no reason to believe that higher bitrates and newer codecs would produce a different result; quite the contrary.

Quote
If a listening test won't show a difference between various codecs and encoder profiles, are spectral graphs the only remaining measure of audio quality (instead of doing a listening test)? And what should one look for when judging audio quality from a spectral view? Thanks


If the listening test tells you that there's no difference, then that is the result. You can't judge the quality of an audio codec by looking at images.

The only other thing you can do is try to make your test more sensitive, for example by using trained listeners (though then your results won't reflect the general population any more) or by having more listeners and samples.

For the tests conducted here, the general observation is that the harder the test, the more difficult it is to get people to send in results, which is why nobody has tried high-bitrate tests since.
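
To put a number on "more listeners and samples": an ABX result is judged against the guessing hypothesis with a simple one-sided binomial test. A minimal Python sketch of how the required score scales with trial count (the trial counts below are arbitrary illustrations, not figures from any test mentioned here):

```python
# Hypothetical sketch: one-sided binomial p-value for an ABX test,
# i.e. "correct" right answers out of "trials" against pure guessing.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Chance of scoring at least `correct` out of `trials` by guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

for trials in (8, 16, 32, 64):
    # Smallest score that reaches significance at the usual p < 0.05 level
    needed = next(c for c in range(trials + 1) if abx_p_value(c, trials) < 0.05)
    print(f"{trials} trials: {needed} correct needed (p = {abx_p_value(needed, trials):.3f})")
```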

Does it make sense to do listening tests of lossy codecs at higher bitrates?

Reply #3
Yep, I know: the ears are heavily imperfect, so they don't really know what is perfect and what is faked. So we can't rely on listening experience, and the only valuable assessment of the resulting audio quality would be analysis: determining quality from spectral graphs or some other numerical analysis of the audio data, something like that?

Does it make sense to do listening tests of lossy codecs at higher bitrates?

Reply #4
A usual listening test at high bitrate doesn't make sense, for the reasons mentioned.

Getting a feeling, however, for how badly codecs can occasionally fail even at very high bitrates is worthwhile IMO. That means collecting problem samples for various codecs and finding out how bad various people judge them to be.

lame3995o -Q1.7 --lowpass 17

Does it make sense to do listening tests of lossy codecs at higher bitrates?

Reply #5
Yep, I know: the ears are heavily imperfect, so they don't really know what is perfect and what is faked.

If the ear couldn't be fooled, then these codecs would never have existed in the first place, at any bitrate.

Quote
the only valuable assessment of the resulting audio quality would be analysis

WRONG! There is no valuable assessment other than double-blind listening tests, regardless of the bitrate. No matter how many times you present this notion that there is an alternate means of judging perceptual codecs, the answer is the same: the notion is faulty.

Does it make sense to do listening tests of lossy codecs at higher bitrates?

Reply #6
Yep, I know: the ears are heavily imperfect, so they don't really know what is perfect and what is faked.


This sentence is 100% meaningless. What could the "perfect" you speak of possibly even mean?

Does it make sense to do listening tests of lossy codecs at higher bitrates?

Reply #7
So we can't rely on listening experience, and the only valuable assessment of the resulting audio quality would be analysis: determining quality from spectral graphs or some other numerical analysis of the audio data, something like that?
what

Does it make sense to do listening tests of lossy codecs at higher bitrates?

Reply #8
WRONG! There is no valuable assessment other than double-blind listening tests, regardless of the bitrate. No matter how many times you present this notion that there is an alternate means of judging perceptual codecs, the answer is the same: the notion is faulty.


To me, the most reliable assessment of any signal conversion is still to subtract the output from the input, look at how large the residuals (i.e. the things lost in the processing) are in comparison to the original signal, and check whether the residuals sound recognizably like the music encoded (which is bad, as in that case recognizable parts of the original recording are distorted or left out of the encoding). As you raise the bitrate, no matter whether you are dealing with MP3, AAC, or another compression scheme, the residuals get smaller and less "music-like".

If you are getting residuals that are far below the signal level at every moment and frequency, and thus always sure to be masked by the signal itself, you can safely say that this or that codec will be transparent for this or that kind of input signal, and that you are not just lucky that the combination of your particular playback equipment and your own pair of ears fails to notice any audible difference. Some artifacts or losses only become audible with certain speakers or headphones that happen to have a peak in their response exactly at the affected frequencies, so that the smallish actual error of the codec sticks out like the proverbial sore thumb. This can be a really bad surprise if you upgrade your speakers and suddenly find your older encodes unlistenable - so much for listening being the "only reliable way" to judge a lossy encoding!
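
A minimal Python sketch of that residual test, assuming two time-aligned files of equal length and sample rate (real lossy decoders add encoder delay and padding, so alignment has to be fixed first; the file names and the 50 ms window are hypothetical choices):

```python
# Sketch of the residual test, not a rigorous tool. Assumes "original.wav"
# and "decoded_from_lossy.wav" (hypothetical names) are already aligned.
import numpy as np
from scipy.io import wavfile

rate, original = wavfile.read("original.wav")
_, decoded = wavfile.read("decoded_from_lossy.wav")
original = original.astype(np.float64)
decoded = decoded.astype(np.float64)

residual = original - decoded  # "the things lost in the processing"

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

# Overall signal-to-residual ratio in dB (bigger is better)
print(f"overall: {rms_db(original) - rms_db(residual):.1f} dB")

# Worst short-term window: one bad moment can be audible even when
# the overall average looks fine. 50 ms is an arbitrary window choice.
win = int(0.05 * rate)
ratios = [rms_db(original[i:i + win]) - rms_db(residual[i:i + win])
          for i in range(0, len(original) - win, win)]
print(f"worst 50 ms window: {min(ratios):.1f} dB")

# Listen to the residual itself: does it sound like the music?
wavfile.write("residual.wav", rate,
              np.clip(residual, -32768, 32767).astype(np.int16))
```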

Of course, results will vary with different input material. Some of the most difficult things to encode are, e.g., inherently crackly analogue recordings sourced from vinyl or 78 rpm discs that have not been denoised/declicked before encoding, or recordings that contain non-steady broadband noises as part of the desired sound (e.g. "nature sounds" like wind, rain, surf, trains, frying pans etc., as they all occur occasionally in movie soundtracks or audio plays / radio drama, where you would usually expect narrow-bandwidth "spoken word" settings to do the job). Settings that provide perfectly good sound with cleanly recorded music of any genre might not be sufficient for these kinds of sound, while OTOH slow-moving music with few transients and percussive sounds might still sound fine at a far smaller bandwidth.


Does it make sense to do listening tests of lossy codecs at higher bitrates?

Reply #10
Yep, I know: the ears are heavily imperfect, so they don't really know what is perfect and what is faked. So we can't rely on listening experience, and the only valuable assessment of the resulting audio quality would be analysis: determining quality from spectral graphs or some other numerical analysis of the audio data, something like that?

Think of it like this: the main goal of a toothbrush is to make your teeth clean. Now, if most brands of toothbrush were able to accomplish perfectly clean teeth, would you choose between brands based on which toothbrush was best at cleaning other parts of your body? Of course not; a toothbrush is for cleaning teeth, and that is the main performance characteristic one should use to benchmark them (of course price, attractive colors etc. might matter as well, just as processing load can be a factor for lossy codecs).

-k

Does it make sense to do listening tests of lossy codecs at higher bitrates?

Reply #11
I think it's pretty ironic that this topic was originally posted in the Listening Tests subforum.

Does it make sense to do listening tests of lossy codecs at higher bitrates?

Reply #12
To me, the most reliable assessment of any signal conversion is still to subtract the output from the input, look at how large the residuals (i.e. the things lost in the processing) are in comparison to the original signal, and check whether the residuals sound recognizably like the music encoded


You don't seem to understand what lossy compression is about. Note that none of the mainstream formats are designed to this metric, even though it would seem the most straightforward one to implement and test.

lossyWAV is the only one I know of that goes in this direction. You might want to look up some of the threads on that.


Does it make sense to do listening tests of lossy codecs at higher bitrates?

Reply #13
To me, the most reliable assessment of any signal conversion is still to subtract the output from the input, look at how large the residuals (i.e. the things lost in the processing) are in comparison to the original signal, and check whether the residuals sound recognizably like the music encoded (which is bad, as in that case recognizable parts of the original recording are distorted or left out of the encoding).

Try testing a 1 millisecond delay using that method.

Then try identifying a 1 millisecond delay in a listening test.
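
A quick numeric illustration of that point, using a synthetic 440 Hz tone (my own hypothetical example, not from the post): a 1 ms shift of the whole signal is inaudible on its own, yet the subtraction method reports an "error" louder than the signal.

```python
# Hypothetical demonstration: delay a pure tone by 1 ms and apply the
# subtraction method from the quoted post.
import numpy as np

rate = 44100
t = np.arange(rate) / rate                        # one second of samples
signal = np.sin(2 * np.pi * 440 * t)
delayed = np.sin(2 * np.pi * 440 * (t - 0.001))   # same tone, 1 ms later

residual = signal - delayed
ratio_db = 20 * np.log10(np.sqrt(np.mean(signal ** 2)) /
                         np.sqrt(np.mean(residual ** 2)))
print(f"signal-to-residual ratio: {ratio_db:.1f} dB")
# Prints roughly -5.9 dB: the "error" is louder than the signal itself,
# although the delayed tone is audibly indistinguishable from the original.
```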

-k

Does it make sense to do listening tests of lossy codecs at higher bitrates?

Reply #14
To me, the most reliable assessment of any signal conversion is still to subtract the output from the input...

That would be true if the objective were to accurately reproduce the waveform. However, what we are dealing with here is the perception of the signal.

Take as an example human color vision. The visible spectrum covers over 200 nanometers, and yet the human eye has only four kinds of light receptors. The color that we perceive is a result of the combination of wavelengths and their intensities, along with the spectral response curves of the receptors.

When you see something as a particular color, your eye can't tell which wavelengths are present at which levels. Two spectra that are very different from one another can look like exactly the same color, yet when you subtract one from the other you get a very large difference. Does that make spectral subtraction a good way of testing color matching? Obviously not.
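
A small numeric sketch of that point (metamerism), using made-up receptor curves and spectra rather than real colorimetric data: two clearly different spectra are constructed to excite three receptors identically, so they would be perceived as the same color, while their pointwise difference is large.

```python
# Made-up numbers throughout: three Gaussian "receptor" curves and
# hand-picked spectral bands, purely to illustrate metamerism.
import numpy as np

wl = np.linspace(400, 700, 301)  # wavelength grid in nm

def band(center, width):
    return np.exp(-((wl - center) / width) ** 2)

# Hypothetical receptor sensitivities (loosely S/M/L cone shapes)
receptors = np.stack([band(440, 40), band(540, 50), band(570, 50)])

spec_a = band(570, 60)  # spectrum A: one broad band

# Spectrum B: three narrow bands weighted so that both spectra excite
# the receptors identically (solve a 3x3 linear system).
narrow = np.stack([band(460, 15), band(550, 15), band(620, 15)])
weights = np.linalg.solve(receptors @ narrow.T, receptors @ spec_a)
spec_b = narrow.T @ weights  # a very different spectral shape

print("receptor responses, A:", receptors @ spec_a)
print("receptor responses, B:", receptors @ spec_b)  # identical -> same color
diff = spec_a - spec_b
print("spectral RMS difference:", np.sqrt(np.mean(diff ** 2)),
      "vs spectrum A RMS:", np.sqrt(np.mean(spec_a ** 2)))
```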