Interesting papers on Rasch Modeling

I ordered these papers from the AES preprints site:

http://www.aes.org/publications/preprints/search.html

Measurement of Small Impairments of Perceptual Audio Coders Using a 3-Facet Rasch Model, with Mark Moulton, Ph.D., 104th Convention of the Audio Engineering Society, Amsterdam, Netherlands, May 1998.

Codec "Transparency," Listener "Severity," Program "Intolerance": Suggestive Relationships between Rasch Measures and Some Background Variables, with Mark Moulton, Ph.D., 105th Convention of the Audio Engineering Society, San Francisco, CA, September 1998.

The latter paper, especially, might be worth your $10 ($5 if you're an AES member) if you are interested in this sort of thing.

In essence, a probabilistic model is created as the conjoint effect of three facets:  the "transparency" of the codec, the "severity" of the listener, and the "intolerance" of the audio program to codec artifacts.
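For reference, a plausible dichotomous form of such a three-facet model (following Linacre's many-facet Rasch formulation; the papers' exact parameterization may differ, and the symbols here are my own labels) puts the three measures on a common logit scale:

```latex
\ln \frac{P_{ijk}}{1 - P_{ijk}} = S_j + I_k - T_i
```

where P_ijk is the probability that listener j detects an impairment in codec i's encoding of program k, T_i is the codec's transparency, S_j the listener's severity, and I_k the program's intolerance.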

The Rasch model (as this type of model is called) is typically used in psychometrics, e.g., for test equating, but the papers show it can also be useful for analyzing the results of audio codec tests.

I don't know whether the model can be successfully applied to Roberto's tests.  There need to be enough listeners who listen to all of the samples presented.

ff123


Interesting papers on Rasch Modeling

Reply #2
Quote
http://www.moultonlabs.com/slides.htm

Have you seen these?

Yes, I have.  They're unfinished, though, and have been since I last looked at them, probably at least a year ago.  They're almost good enough to get a feel for what's going on, but there's nothing like seeing the equation in the paper and actual results of codec comparisons analyzed the traditional way (sampling a population, diff scores, ANOVA, etc.) vs. using a Rasch model.

BTW, I'm not totally convinced that Rasch is the best way to analyze things in all cases.  It assumes, for example, that "program intolerance" is unidimensional, that is, that it can be modeled as a single property.  But we know that different codecs have different failings, and that some samples really highlight these shortcomings for some codecs but not for others.

The Rasch model lets you throw out samples, codecs, or listeners that don't fit the model, but then what you're left with is not as good a fit to reality as before.  In the 128 kbit/s extension test, for example, we might have had to throw out the "death2" sample, since wma9pro appears to fail unusually badly on it.  But then we'd be left with a model that requires qualification (wma9pro is great except for samples with heavy, repeated transients), and the model would lose predictive power, which is one of its advantages over traditional methods.
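To make the misfit idea concrete, here's a rough sketch (my own illustration, not code or data from the papers): simulate detection outcomes from the additive three-facet model, deliberately give one codec/sample pair an extra failure mode, fit the additive model as an ordinary logistic regression over dummy-coded facets, and rank codec/sample cells by a standardized residual.  The residual statistic is a crude stand-in for proper Rasch fit statistics (infit/outfit), and every name and number here is hypothetical.

```python
# Sketch only: fit an additive three-facet logit model (the Rasch-style
# structure discussed above) to simulated listening-test data, then rank
# codec/sample cells by a misfit statistic.  All data and names are
# hypothetical; this is not the method or code from the AES papers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_codecs, n_listeners, n_samples = 4, 20, 10

# True facet measures on the logit scale.
T = rng.normal(1.0, 1.0, n_codecs)     # codec transparency (higher = cleaner)
S = rng.normal(0.0, 1.0, n_listeners)  # listener severity
I = rng.normal(0.0, 1.0, n_samples)    # program intolerance

# One trial per (codec, listener, sample); outcome 1 = impairment detected.
idx = np.array([(c, l, s) for c in range(n_codecs)
                for l in range(n_listeners) for s in range(n_samples)])
logit = S[idx[:, 1]] + I[idx[:, 2]] - T[idx[:, 0]]
# Inject an interaction the additive model cannot express:
# codec 0 fails unusually badly on sample 3 (a "death2"-like case).
logit[(idx[:, 0] == 0) & (idx[:, 2] == 3)] += 3.0
y = (rng.random(len(logit)) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Dummy-code the three facets; the additive logit model is then an
# ordinary logistic regression (large C ~= unregularized maximum likelihood).
n = len(idx)
X = np.zeros((n, n_codecs + n_listeners + n_samples))
X[np.arange(n), idx[:, 0]] = 1.0
X[np.arange(n), n_codecs + idx[:, 1]] = 1.0
X[np.arange(n), n_codecs + n_listeners + idx[:, 2]] = 1.0
p_hat = LogisticRegression(C=1e6, max_iter=2000).fit(X, y).predict_proba(X)[:, 1]

# Standardized cell-level residual: how far each codec/sample cell's
# detection count sits from the model's prediction.  The injected
# interaction should make its cell rank near the top.
stats = []
for c in range(n_codecs):
    for s in range(n_samples):
        m = (idx[:, 0] == c) & (idx[:, 2] == s)
        t = (y[m] - p_hat[m]).sum() / np.sqrt((p_hat[m] * (1 - p_hat[m])).sum())
        stats.append((abs(t), c, s))
print("codec  sample  misfit")
for t, c, s in sorted(stats, reverse=True)[:3]:
    print(f"{c:5d}  {s:6d}  {t:6.2f}")
```

Run as-is, the injected (codec 0, sample 3) cell should land at or near the top of the misfit list, which is the same kind of signal that would prompt dropping a sample like "death2" from the model.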

Still, it's an interesting approach.

ff123