Topic: is 128 joint-stereo better than 160 stereo?

is 128 joint-stereo better than 160 stereo?

So, obviously there are lots of variables to consider in mp3 quality.
But let's assume we could isolate the effect of mp3 stereo mode from all other factors.
If so, would a 128kbps mp3 encoded in joint-stereo be higher quality than a 160 kbps mp3 encoded in stereo (at the equivalent of 80 kbps per channel)?

And please don't say, "well just ABX away." I understand that is a way to solve things, but this strikes me as an area where there likely exists sufficient understanding of the mp3 format to provide an answer on theoretical grounds and on the basis of what is already known.
God kills a kitten every time you encode with CBR 320

is 128 joint-stereo better than 160 stereo?

Reply #1
I don't mean to be an asshat, but clearly it varies greatly depending on how much stereo separation there is in the source material.
No separation (mono) = 128 per channel vs 80.
Total separation = 64 per channel vs 80.
Since a total lack of coupling between channels is extremely rare in modern music, I would assume 128 JS wins most of the time, as it is close even in the worst case.
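The arithmetic behind those two extremes can be sketched as follows. This is an idealization (my own simplification, not how LAME actually allocates bits): it assumes joint stereo collapses identical channels perfectly and degrades to an even L/R split when the channels share nothing.

```python
# Idealized effective per-channel bitrate at the two extremes of stereo
# separation. Assumes joint stereo folds identical channels into one
# (mono case) and falls back to an even L/R split at total separation.

def per_channel_kbps(total_kbps, separation):
    """separation: 0.0 = identical channels (mono), 1.0 = fully independent."""
    channels = 1 + separation  # effective number of independent channels
    return total_kbps / channels

# 128 kbps joint stereo vs 160 kbps plain stereo (80 kbps per channel):
print(per_channel_kbps(128, 0.0))  # mono source: 128.0 kbps per channel
print(per_channel_kbps(128, 1.0))  # fully separated: 64.0 kbps per channel
print(160 / 2)                     # plain stereo baseline: 80.0 kbps
```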

Creature of habit.

 

is 128 joint-stereo better than 160 stereo?

Reply #2
Soap, you're totally right that it depends on the degree of stereo separation in the music; sorry, I should have mentioned that in my initial post.

I conducted a simple test using LAME. I know that LAME's joint-stereo algorithm is reasonably efficient though not 100% efficient (e.g., when encoding a "stereo" file in which the two channels are bit-for-bit equal, the bitrate of the resulting mp3 will be slightly smaller if "-m m" is specified in the commandline).

1. I started with a wav file from a CD (a 6-second clip of "I Will Survive" by Cake), with reasonable stereo difference (two channels aren't equal, but they're not totally separate music either).
2. Then I made a two-channel version with both channels taken from the left channel, and another two-channel version with both channels taken from the right channel (so, mono files that still have two channels).
3. I encoded all three files at -V2 using Lame 3.98.2
4. I again encoded all three files at -V2 -m m
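Step 2 (duplicating one channel into both to make a "two-channel mono" file) can be sketched like this. The synthetic input and file names are illustrative, not the actual clip from the test:

```python
# Sketch of step 2: turning a stereo WAV into a "two-channel mono" file by
# copying the left channel into both channels. The synthetic sine-tone input
# stands in for a real CD rip.
import math
import struct
import wave

RATE, SECS = 44100, 1

# Make a small synthetic stereo file (440 Hz left, 660 Hz right).
with wave.open("stereo.wav", "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)          # 16-bit samples
    w.setframerate(RATE)
    frames = bytearray()
    for n in range(RATE * SECS):
        left = int(20000 * math.sin(2 * math.pi * 440 * n / RATE))
        right = int(20000 * math.sin(2 * math.pi * 660 * n / RATE))
        frames += struct.pack("<hh", left, right)
    w.writeframes(bytes(frames))

# Duplicate the left channel into both channels.
with wave.open("stereo.wav", "rb") as r:
    params = r.getparams()
    data = r.readframes(r.getnframes())

out = bytearray()
for i in range(0, len(data), 4):   # 4 bytes per 16-bit stereo frame
    left_sample = data[i:i + 2]
    out += left_sample + left_sample
with wave.open("mono2ch.wav", "wb") as w:
    w.setparams(params)
    w.writeframes(bytes(out))
```

The resulting file still has two channels, so LAME treats it as stereo input, but the channels are bit-for-bit identical.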

Resulting bitrates:
stereo encoded normally: 167 kbps
2-ch mono encoded normally: 115 kbps (both files)
stereo encoded force-mono: 102 kbps
2-ch mono encoded force-mono: 102 kbps (both files)
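The ratios drawn from these numbers in conclusions A and B below work out as:

```python
# Reproducing the ratios from the bitrates reported above.
stereo_js   = 167  # kbps, -V2 joint stereo on the real stereo clip
mono_js     = 115  # kbps, -V2 joint stereo on a 2-channel mono clip
forced_mono = 102  # kbps, -V2 -m m

# A: joint stereo on a mono source vs forced mono.
print(mono_js / forced_mono)   # ~1.13: JS spends ~13% more bits than -m m

# B: estimated fully-separated stereo vs actual joint stereo.
total_sep = 2 * forced_mono    # 204 kbps
print(stereo_js / total_sep)   # ~0.82
print(128 / 160)               # 0.8, the ratio from the original question
```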

Now, assuming the forced-mono doesn't mess up the quality-selection that Lame is doing, I can conclude the following. For the given wav-file I was working with,
A. forced mono is about 11% more efficient than allowing joint stereo to encode a 2-channel mono file (the ratio of 102 to 115 kbps). This is likely stable for most 2-channel mono wav files.
B. total separation stereo would be 102 kbps per channel, so 204 kbps total, whereas joint-stereo V2 encode is 167 kbps. That's 82% of the full stereo.
This is approximately the ratio of 128 to 160 (0.8), so for the wav file I was using, the answer to my original question would be: "it's close to a wash, but 160 kbps stereo wins very slightly over 128 kbps joint-stereo."
God kills a kitten every time you encode with CBR 320

is 128 joint-stereo better than 160 stereo?

Reply #3
I like tests which challenge my long-held assumptions.
Creature of habit.

is 128 joint-stereo better than 160 stereo?

Reply #4
A lot of this would depend on how much of the audio being encoded can be mathematically stored as M/S data instead of L/R data.

Techno/Trance/Dance Club music can quite often be represented very well as M/S joint stereo.  Watching the LAME command line histogram yields frequent percentage levels of around 90% M/S with 10% L/R.

That means that 90% of that track was encoded by LAME as M/S joint stereo, and 10% as L/R standard stereo.
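The M/S representation being discussed is just a sum/difference transform of the two channels; a minimal sketch (the transform itself, not LAME's per-frame decision logic):

```python
# Minimal sketch of the mid/side transform behind M/S joint stereo:
# mid = (L + R) / 2 carries what the channels share, side = (L - R) / 2
# carries what differs. Highly correlated channels leave the side signal
# tiny, which is what lets the encoder spend fewer bits on it.

def ms_encode(left, right):
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

left = [0.5, 0.25, -0.5, 0.75]
right = [0.5, 0.25, -0.25, 0.5]      # closely correlated with left
mid, side = ms_encode(left, right)
print(side)                          # [0.0, 0.0, -0.125, 0.125]: cheap to code
l2, r2 = ms_decode(mid, side)
print(l2 == left and r2 == right)    # True: the transform is lossless
```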

On the other hand, acoustic music and classical music is frequently the opposite.  Most of my acoustic and classical music is encoded by LAME with levels giving about 50 or 60% L/R and about 40 or 50% M/S.

Many of the current "popular" artists (i.e. Beyonce, Madonna, Britney Spears, Rihanna, Janet Jackson, etc.) are in the middle of those two ranges, with about 75% M/S and 25% L/R as the ratios for their music tracks.

EDIT:

Furthermore, I have some audio language CDs that I've ripped and encoded with LAME.  They are frequently encoded using -V6 and are about 99% M/S with 1% L/R.


is 128 joint-stereo better than 160 stereo?

Reply #6
B. total separation stereo would be 102 kbps per channel, so 204 kbps total, whereas joint-stereo V2 encode is 167 kbps. That's 82% of the full stereo.
This is approximately the ratio of 128 to 160 (0.8), so for the wav file I was using, the answer to my original question would be: "it's close to a wash, but 160 kbps stereo wins very slightly over 128 kbps joint-stereo."


I don't think you can conclude that, least of all from a single sample clip of one song.

First, as has been suggested, the L/R and M/S percentages that LAME reports may give a hint about the efficiency of the process for that song.
You can also throw in -m s, so that it is encoded in simple stereo (all L/R), and see if that matches your estimated 204 kbps.

Last but not least, in most encoders (if not all) several internal settings change with the bitrate (lowpass frequency, some noise thresholds...), so upping the bitrate also tells the encoder to try to use more bits for more quality. If those bits are needed for stereo coding, they obviously won't be available for frequency content.