Should HA promote a more rigorous listening test protocol?, was: "HA -- guilty as charged?" (TOS #6)
krabapple
post Nov 23 2012, 19:01
Post #1





Group: Members
Posts: 2159
Joined: 18-December 03
Member No.: 10538



I was taken aback to read today this exchange on gearslutz, from earlier this year

QUOTE ("Bob Ohlsson")
It's important to understand that what JJ considers a listening test and what the ABX/Hydrogen Audio skeptics crowd considers a listening test are two very very different things.


QUOTE ("Kees de Visser")
Perhaps JJ can explain what he considers a listening test and how it's different from the Hydrogenaudio standpoint.
I was somehow under the impression they were not that different.


QUOTE ("j_j")
Including positive and negative controls, lots of training for the test as well as familiarity with the equipment and music, and equipment validation are the biggies.

Test evaluation might be an issue, too. Many tests, including some of the MPEG tests and 1116 make assumptions that the entire population reacts the same to impairments. While basic masking is universal, what people dislike when they can hear something is NOT universal.



http://www.gearslutz.com/board/7672621-post329.html
http://www.gearslutz.com/board/7674886-post337.html
http://www.gearslutz.com/board/7677113-post348.html


Now, I agree with Kees -- I don't think the HA community 'take' on listening tests is that different from what JJ mentions. Few here, I suspect, would dismiss the real utility of training, or of positive controls, or familiarity etc., in making a listening test maximally sensitive. (As for the rest, I confess I'm not really clear whether JJ's criticism of test evaluation is directed at HA.)

What I think is happening is a difference in what listening tests are used for. Most individual HA reports of ABX tests are from users wanting to know if file X sounds different from file Y to them, as they are now, using the equipment they have, not as they would be after training to hear artifacts, on the most revealing equipment. They aren't doing basic research into a difference's audibility, as JJ did, for example, when developing lossy codecs. For that purpose, trained listeners, positive & negative controls, familiarity and 'validated' equipment are necessities.
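To make that concrete: the personal ABX reports discussed here are usually scored with a simple one-sided binomial test against chance. A minimal sketch of that math (standard statistics, not any particular ABX tool's code):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: the chance of scoring at least `correct` hits
    out of `trials` if the listener were purely guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A commonly cited informal benchmark: 13 of 16 correct.
print(round(abx_p_value(13, 16), 4))  # 0.0106
```

With 16 trials, scores like 12/16 (p ≈ 0.038) or 13/16 (p ≈ 0.011) are the sort typically accepted as evidence of an audible difference.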

Still, HA *does* host mass listening tests from time to time -- which are more akin to 'basic research' -- and its few 'official' guidelines on setting up listening tests -- the HA wiki, and Pio's sticky threads -- make no mention of training, +/- controls, etc. as factors in such tests.

Time to change this?

This post has been edited by krabapple: Nov 23 2012, 19:05
saratoga
post Nov 23 2012, 19:16
Post #2





Group: Members
Posts: 4718
Joined: 2-September 02
Member No.: 3264



Lots of the personal listening tests are by people with considerable training. As for ABX tests by members, the goal is to determine if a given file or system is good enough for that individual. In this case training may not even be desirable, let alone necessary. I think it comes down to what you want to measure and how you analyze the results.
greynol
post Nov 23 2012, 19:24
Post #3





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



Pio's post does mention relegating ABX testing to practice trials, so training is touched upon at least indirectly.

I don't see that we should go out of our way to engage in some debate by proxy. Maybe those players who are members here can have the debate here. Those who are not members can certainly join so long as they do so in compliance with our rules, namely TOS12. Personally I'm not interested in advocating for yet another thread dedicated to trolling TOS8, however (somewhat related to those who aren't welcome back per TOS12).

This post has been edited by greynol: Nov 23 2012, 19:27


--------------------
Your eyes cannot hear.
Canar
post Nov 23 2012, 23:16
Post #4





Group: Super Moderator
Posts: 3327
Joined: 26-July 02
From: princegeorge.ca
Member No.: 2796



With all due respect to Mr. J., while his criticism of many of our public mass listening tests is valid, we do not stick to that approach dogmatically. Our intent in those tests is to get ordinary-citizen feedback regarding codec quality. All of his criticisms can be addressed without violating TOS8 in any way, they just make tests harder to conduct. We're aiming to maximize the audience we get feedback from, not maximize the quality of the results. Furthermore, as an Internet-based community, some criticisms are nigh impossible to address, such as equipment validation.

I think the criticisms were made in good faith without the intent of demeaning what we do. I fear that this thread will become divisive.


--------------------
∑:<
krabapple
post Nov 24 2012, 04:02
Post #5





Group: Members
Posts: 2159
Joined: 18-December 03
Member No.: 10538



QUOTE (greynol @ Nov 23 2012, 13:24) *
Pio's post does mention relegating ABX testing to practice trials, so training is touched upon at least indirectly.

I don't see that we should go out of our way to engage in some debate by proxy. Maybe those players who are members here can have the debate here. Those who are not members can certainly join so long as they do so in compliance with our rules, namely TOS12. Personally I'm not interested in advocating for yet another thread dedicated to trolling TOS8, however (somewhat related to those who aren't welcome back per TOS12).



That gearslutz thread wasn't a debate about HA's practices -- it was about 'Mastered for iTunes'. The three posts from March were a minor and fleeting sidenote there -- but one IMO obviously pertinent to HA, for reasons of personnel (2 of 3 'players' being respectable HA posters too), and content. I'd certainly have posted about it here earlier, if I'd read it earlier. Certainly I hope Kees and JJ will both participate in this thread.

I'm not advocating or encouraging TOS8 trolling, and I honestly don't see how you got from what I posted to that. And I haven't a clue who you are referring to re: TOS12. Kees? JJ? Bob Ohlsson? I do hope Kees and JJ will both participate in this thread! Guilty as charged, if that's the charge.

This post has been edited by krabapple: Nov 24 2012, 04:20
krabapple
post Nov 24 2012, 04:09
Post #6





Group: Members
Posts: 2159
Joined: 18-December 03
Member No.: 10538



QUOTE (Canar @ Nov 23 2012, 17:16) *
With all due respect to Mr. J., while his criticism of many of our public mass listening tests is valid, we do not stick to that approach dogmatically. Our intent in those tests is to get ordinary-citizen feedback regarding codec quality. All of his criticisms can be addressed without violating TOS8 in any way, they just make tests harder to conduct. We're aiming to maximize the audience we get feedback from, not maximize the quality of the results. Furthermore, as an Internet-based community, some criticisms are nigh impossible to address, such as equipment validation.


Sounds reasonable to me. Where I'm heading is a discussion of whether there should be a revision of whatever formal HA guidelines exist for conducting listening tests. And before someone snarks 'knock yourself out', yes, I'm willing to help craft such revision....*after* discussion.


QUOTE
I think the criticisms were made in good faith without the intent of demeaning what we do. I fear that this thread will become divisive.



I'm already a bit perplexed (not defensive) at the responses. People are concerned that a discussion about listening test rigor as defined by JJ will devolve into TOS8 and TOS12 violations? Seriously? Is it because I used the phrase 'guilty as charged'? Would it help if I put air-quotes around "guilty" and "charged"?

This post has been edited by krabapple: Nov 24 2012, 04:11
krabapple
post Nov 24 2012, 04:10
Post #7





Group: Members
Posts: 2159
Joined: 18-December 03
Member No.: 10538



QUOTE (saratoga @ Nov 23 2012, 13:16) *
Lots of the personal listening tests are by people with considerable training. As for ABX tests by members, the goal is to determine if a given file or system is good enough for that individual. In this case training may not even be desirable, let alone necessary. I think it comes down to what you want to measure and how you analyze the results.


That, and how you interpret them. What claims you make from them.
Canar
post Nov 24 2012, 04:39
Post #8





Group: Super Moderator
Posts: 3327
Joined: 26-July 02
From: princegeorge.ca
Member No.: 2796



Honestly, I think our procedure is fine, given what we're trying to achieve. We get statistically significant results. There's no need to change anything. We can run tests with altered procedure, should there be a desire, but what would the goal of such a test be?


greynol
post Nov 24 2012, 05:00
Post #9





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



My concern that people who come here to argue TOS8 is for fools are attracted to threads such as this one is hardly unfounded. That said, I hope this discussion doesn't devolve into that.

About the external discussion which inspired this one, I didn't bother to look at it. As for TOS #12, perhaps it's something not apparent to non-staff. Like everyone else, I would like to see constructive participation and welcome new members. My invitation only extends to truly new members. Those who have been previously banned need not apply.

EDIT: Since my caveat about TOS#12 seemed to stir a mini shitstorm, let me be clear, any member here who is able to post freely is in good standing. Kees and JJ are fine. As far as I am aware, Bob Ohlsson has never registered here and is perfectly free to do so (I never said nor implied otherwise, unless he was indeed previously banned).

This post has been edited by greynol: Nov 26 2012, 18:29


krabapple
post Nov 24 2012, 14:07
Post #10





Group: Members
Posts: 2159
Joined: 18-December 03
Member No.: 10538



QUOTE (Canar @ Nov 23 2012, 22:39) *
Honestly, I think our procedure is fine, given what we're trying to achieve. We get statistically significant results. There's no need to change anything. We can run tests with altered procedure, should there be a desire, but what would the goal of such a test be?



Some of the caveats, I would think, apply more to 'no difference' results than to statistically significant (positive) results.
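This caveat can be quantified with a quick power calculation: a listener who genuinely hears a difference only part of the time can easily fail to reach significance in a short session. A rough sketch (the 65% hit rate and 16-trial session are illustrative assumptions, not figures from any HA test):

```python
from math import comb

def binom_tail(n: int, p: float, k_min: int) -> float:
    """P(X >= k_min) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

n = 16          # trials in the session
k_crit = 13     # hits needed for p < 0.05 under pure guessing
p_real = 0.65   # assumed true hit rate for a subtle but real difference

power = binom_tail(n, p_real, k_crit)
print(f"Chance of reaching significance: {power:.2f}")
```

Under those assumptions the listener reaches significance only about 13% of the time, which is why a failed ABX supports "I couldn't hear it under these conditions" rather than "there is no difference".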
greynol
post Nov 24 2012, 17:36
Post #11





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



QUOTE (krabapple @ Nov 23 2012, 19:09) *
Would it help if I put air-quotes around "guilty" and "charged"?

Not that it has anything to do with attracting TOS8 bashing, but you should suggest a new title that is compliant with TOS #6. The current one doesn't make the grade, with or without scary quotes.

Also, if we're talking about forum policy, this discussion belongs in site related discussion, not listening tests. Please read the subforum descriptions if you haven't already.

This post has been edited by greynol: Nov 24 2012, 17:40


Axon
post Nov 25 2012, 08:04
Post #12





Group: Members (Donating)
Posts: 1984
Joined: 4-January 04
From: Austin, TX
Member No.: 10933



There's a tradeoff going on here.

On the one hand, reducing the barriers for Joe Sixpack forum readers to contribute listening test results is extremely important, and the policies of HA's listening tests have been very, very good at that.

On the other hand, listening testers might self-select anyway, so those who go to the trouble to take such tests may well find a request for additional documentation of their listening experience, training, etc. to be reasonable. And such documentation would be extremely useful for using HA test results as an adjunct to clinical-/institutional-grade listening tests, of the sort that jj describes.
Woodinville
post Nov 25 2012, 09:20
Post #13





Group: Members
Posts: 1402
Joined: 9-January 05
From: JJ's office.
Member No.: 18957



Ok, I'm a little confused here. How does what I said have anything to do with TOS 8 bashing? I'm asking for better tests, and yes, there should ALWAYS be positive and negative controls in a test, and no, they aren't that hard to add, and yes, you can add them in varying levels of positive control and get some very useful information. So you should. I'm standing absolutely firm on that position, because I see so many tests that I can't even evaluate the results coming to me in capacities as reviewer or editor, tests that have no way to relate them to other sets of results in any fashion. (no, I don't mean you should combine results)

As to evaluating for multiple axes, that's only for tests that do more than "can you detect" testing, obviously. I am known to be a very serious unfan of the "impaired signal multiple choice" tests people are using these days. (I am avoiding the name of the popular test; I've been accused of stealing a trademark once when I mentioned the name of this test in a critical fashion.) One of the big failures of that kind of testing is the forced ranking. Such tests assume that relative rankings are transitive. We all know better.

I am frankly surprised at the apparent offense taken to what I said. I'm simply describing standard practice.
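For readers unfamiliar with the terminology, one way to read jj's point is that controls are simply extra trials mixed into the session plan alongside the conditions under test. A hypothetical sketch (the condition names are invented for illustration; they are not from any HA test):

```python
import random

def build_trial_list(probes, n_neg=4, n_pos=4, seed=0):
    """Assemble a shuffled session plan that interleaves the conditions
    under test with negative controls (reference vs. itself) and
    positive controls (an impairment every listener should hear)."""
    trials = [("probe", p) for p in probes]
    trials += [("negative_control", "reference_vs_itself")] * n_neg
    trials += [("positive_control", "known_audible_impairment")] * n_pos
    random.Random(seed).shuffle(trials)  # listener must not know which is which
    return trials

for kind, condition in build_trial_list(["codec_A", "codec_B"]):
    print(kind, condition)
```

The point is that control trials cost little to add, and the control outcomes are reported alongside the probe outcomes so a reader can judge how sensitive the session actually was.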

This post has been edited by Woodinville: Nov 25 2012, 09:22


--------------------
-----
J. D. (jj) Johnston
krabapple
post Nov 25 2012, 16:34
Post #14





Group: Members
Posts: 2159
Joined: 18-December 03
Member No.: 10538



QUOTE (greynol @ Nov 24 2012, 11:36) *
QUOTE (krabapple @ Nov 23 2012, 19:09) *
Would it help if I put air-quotes around "guilty" and "charged"?

Not that it has anything to do with attracting TOS8 bashing, but you should suggest a new title that is compliant with TOS #6. The current one doesn't make the grade, with or without scary quotes.



OK, how about, 'Should HA promote a more rigorous listening test protocol'?


QUOTE
Also, if we're talking about forum policy, this discussion belongs in site related discussion, not listening tests. Please read the subforum descriptions if you haven't already.


Seems to me it's a bit of both, and Listening Tests is the more specific of the two. But feel free to move it wherever you think it fits best.
greynol
post Nov 25 2012, 17:31
Post #15





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



QUOTE (Woodinville @ Nov 25 2012, 00:20) *
How does what I said have anything to do with TOS 8 bashing

My reply may sound defensive, but I don't care. Please show me where I called any particular individual out on bashing TOS8. You can't because I didn't. If I didn't make myself clear enough earlier, I don't want this thread to attract yet another set of placebophile trolls to railroad the discussion into another referendum on TOS8. I could link discussions and name names if you want, but I don't see the point; except to demonstrate that you and Kees do not provide cause for concern.

QUOTE
I am frankly surprised at the apparent offense taken to what I said. I'm simply describing standard practice.

I agree with you on principle, but I am frankly surprised you haven't taken the opportunity to talk about it here; rather, you seem only to talk about it in forums which either don't require objective evidence or, worse, forums where these criteria are rejected and even shunned by a sizable portion of their more vocal and respected members.

Hopefully this thread will prove me wrong, assuming that I'm not wrong already, though I've closely followed this forum and your contributions in particular for many years now.

This post has been edited by greynol: Nov 25 2012, 17:37


greynol
post Nov 25 2012, 18:17
Post #16





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



QUOTE (krabapple @ Nov 25 2012, 07:34) *
QUOTE
Also, if we're talking about forum policy, this discussion belongs in site related discussion, not listening tests. Please read the subforum descriptions if you haven't already.


Seems to me it's a bit of both, and Listening Tests is the more specific of the two. But feel free to move it wherever you think it fits best.

You're right, it is a bit of both. Thanks for the updated title.


Porcus
post Nov 26 2012, 08:25
Post #17





Group: Members
Posts: 1779
Joined: 30-November 06
Member No.: 38207



I agree with Axon, if that is what is being discussed (which is also a bit unclear to me). If it is the public listening tests, then they do not seem to have the scope of, e.g., identifying annoyances in order to address them in development. Any reason that they should?


--------------------
One day in the Year of the Fox came a time remembered well
2Bdecided
post Nov 26 2012, 13:58
Post #18


ReplayGain developer


Group: Developer
Posts: 4945
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



Do that many tests meet BS.1116? It's a long time since I read it, but IIRC the requirements for the listening room, number of listeners and trials, selection of test content, and training all pose a challenge.

Certain organisations find it far easier to provide a suitable listening room.
Certain groups of people find it far easier to identify problem samples.

Cheers,
David.
dhromed
post Nov 26 2012, 14:22
Post #19





Group: Members
Posts: 1244
Joined: 16-February 08
From: NL
Member No.: 51347



I am frankly surprised that there is no sticky at the top of the Listening Tests forum that explains what a reasonably good listening test entails, how to set it up, and how to present the results.
IgorC
post Nov 26 2012, 18:14
Post #20





Group: Members
Posts: 1506
Joined: 3-January 05
From: Argentina, Bs As
Member No.: 18803



Great. A lot of problem statements.
Now people can start making propositions and formulating alternative solutions.

As a reminder, Hydrogen Audio is a community built purely on enthusiasts' resources.
So if somebody has a real idea and is eager to work on it in his/her spare time for free: welcome.

QUOTE (Woodinville @ Nov 25 2012, 05:20) *
One of the big failures of that kind of testing is the forced ranking. Such tests assume that relative rankings are transitive. We all know better.

Sorry, "transitive" doesn't describe your central idea well enough, and I'm quite sure people will interpret it in different (read: wrong) ways.
You're questioning not only HA's methodology but the whole of ABC/HR, hence all previous tests which were used for standardization of lossy encoders. But that's not an issue. Everybody is free to believe and express ideas freely.

Hydrogen Audio, like the rest of the internet, is a place for free speech, so if you have ideas you can start working on them and share them. We are open to talking about anything, but someone has to start working on it and take the next steps.

QUOTE (Axon @ Nov 25 2012, 04:04) *
On the other hand, listening testers might self-select anyways, so that those who go to the trouble to take such tests may very well find the request for additional documentation of their listening experience, training, etc. to be reasonable. And such documentation would be extremely useful to use HA test results as an adjunct for clinical-/institutional-grade listening tests, of the sort that jj describes.

You are simply not aware of the fact that the documentation was provided. http://listening-tests.hydrogenaudio.org/i...96-a/readme.txt

And the whole job was done with every single participant!
You simply didn't know that.

We all have suggestions; now, does anybody want to work on them? Huh?

This post has been edited by IgorC: Nov 26 2012, 18:14
greynol
post Nov 26 2012, 18:30
Post #21





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



Krabapple, the author of this discussion, did in fact graciously offer his time and effort towards improvement.

This post has been edited by greynol: Nov 26 2012, 23:21


Canar
post Nov 26 2012, 18:38
Post #22





Group: Super Moderator
Posts: 3327
Joined: 26-July 02
From: princegeorge.ca
Member No.: 2796



With the talk about "including positive and negative controls", isn't this base already covered? We've been including low and high anchors for a while now. Is there more to this criticism than just the two forms of anchor?

This post has been edited by Canar: Nov 26 2012, 18:38


Woodinville
post Nov 27 2012, 02:27
Post #23





Group: Members
Posts: 1402
Joined: 9-January 05
From: JJ's office.
Member No.: 18957



QUOTE (IgorC @ Nov 26 2012, 09:14) *
Sorry, "transitive" doesn't describe enough well your central idea and I'm quite sure people interpret it different (read as wrong) ways.
You're questioning not only HA's methodic but the whole ABC/HR, hence all previous tests which were used for standarization of lossy encoders. But that's not an issue. Everybody is free to beleive and express an ideas freely.


I'm doing no such thing. ABC/hr does individual rankings, not confusing things like tests with 4 anchors and 10 probe conditions that ask you to rank the lot of them on one scale. I'm not questioning ABC/hr or BS1116, although I do have some questions about some of the evaluations following some 1116 tests.

So what are you talking about?

ETA: Greynol, this is why I hesitate to say anything here. Just like in audiophile forums, it seems that anything you say can and will be used against you, even if you didn't say it. In case you weren't aware, I'm tired of audio, tired of audio enthusiasts of all sorts, and multiply-tired of the people who like to grind axes.

This post has been edited by Woodinville: Nov 27 2012, 02:30


Woodinville
post Nov 27 2012, 02:32
Post #24





Group: Members
Posts: 1402
Joined: 9-January 05
From: JJ's office.
Member No.: 18957



QUOTE (Canar @ Nov 26 2012, 09:38) *
With the talk about "including positive and negative controls", isn't this base already covered? We've been including low and high anchors for a while now. Is there more to this criticism than just the two forms of anchor?


A negative control is A vs. A, present as ABX or ABC/hr, of course. If that's what you mean by 'high anchor', that's good.

A positive control might be a low anchor, but you would then perhaps want multiple anchors. So anchors that are not tests of identity can all be positive controls IF they should all be audible.

Basically, you want a positive control of a level equal to your desired test sensitivity. Yes, I know, this isn't the easiest thing in the world to spec.

But any test result has to show the results of the controls.

Anchors are generally for a different purpose, that of relating one test to another, of course.
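Under those definitions, the per-listener logic for reporting control results might look something like this sketch (the decision strings and the three-outcome structure are illustrative, not any standard's wording):

```python
def interpret_session(neg_control_detected, pos_control_detected, probe_detected):
    """Rough per-listener decision logic once controls are in place:
    the negative control (A vs. A) guards against false positives,
    the positive control (a known-audible impairment) guards against
    false negatives from an insensitive listener or setup."""
    if neg_control_detected:
        return "invalid: 'heard' a difference in A vs. A"
    if not pos_control_detected:
        return "insensitive: missed the known-audible impairment, so a null result means little"
    return "difference detected" if probe_detected else "no difference at this test's sensitivity"

print(interpret_session(False, True, False))
```

Note how a null probe result is only reported together with proof that the session could have detected an impairment of the target size, which is exactly why the control results have to accompany any test result.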


Dynamic
post Nov 27 2012, 15:05
Post #25





Group: Members
Posts: 793
Joined: 17-September 06
Member No.: 35307



QUOTE (Woodinville @ Nov 27 2012, 01:32) *
A negative control is A vs. A, present as ABX or ABC/hr, of course. If that's what you mean by 'high anchor', that's good.

A positive control might be a low anchor, but you would then perhaps want multiple anchors. So anchors that are not tests of identity can all be positive controls IF they should all be audible.

Basically, you want a positive control of a level equal to your desired test sensitivity. Yes, I know, this isn't the easiest thing in the world to spec.

But any test result has to show the results of the controls.

Anchors are generally for a different purpose, that of relating one test to another, of course.


I think I understand now. We're talking about Control as in Control Condition in a Controlled Experiment, where the Control is used to compare against the Test Condition.

Negative Control in this case does not refer to negative or positive numbers, but to a Null Condition where no difference should be expected.
This means that the Negative Control is there to catch False Positives (where listeners falsely detect non-transparency).
We are comparing the original sample (or possibly the high anchor) with itself, so should expect no difference. This eliminates testers who claim to discern a difference when they cannot, but might believe they can because of expectation bias or something similar and also those who might be tempted to score somewhat at random.

All the recent HA public listening tests include in their methodology a method of excluding results for any sample & tester in which the reference sample is rated for impairment. Given that ABC/HR is used, in the case of uncertainty (i.e. non-obvious flaws) a tester should be performing an ABX to verify that a difference is discernible before committing their ranking.

I think it's then obvious that the meaning of Positive Control is a sample that should be obviously inferior to the reference to all listeners, but not necessarily inferior to all the samples under test.
The Positive Control is there principally to catch False Negatives (where people think an audible impairment is transparent).

It's difficult to get the idea that 'negative' = 'bad' out of one's mind. In this case 'negative' means 'good' as in 'unable to detect the difference from the reference'.

In some cases, it's a low-pass filtered sample. In the case of the recent speech codec comparisons conducted by Google to evaluate Opus (in its SILK and Hybrid modes) versus other speech codecs, there was both a 3.5 kHz LPF and a 7 kHz LPF in the test which should function as a Positive Control and something of an anchor to provide comparison between different listening tests.

In recent HA tests the low anchor has consistently been scored low by all participants who weren't excluded, if I recall correctly, which tends to indicate that False Negatives (false transparency results) have been excluded. Usually the low anchor is below all the tested codecs on every sample. There may be scope for using an intermediate anchor whose quality should fall consistently in about the range of impairments expected by the codecs under test. The problem may be that the nature of impairment is consistent, making it too easy to detect the anchor.

We usually do plot the low anchor in HA public listening tests, but not the reference, though one or two tests did use a high anchor that was not the original audio and plotted it. Where ranked references result in exclusion from the results, the screened results will obviously place the Negative Control (for False Positives) at the screening level (typically 5.0), making a plot of these values trivial.
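The screening rule described above can be sketched in a few lines (the key names and the 5.0 transparency score follow the ABC/HR convention mentioned in the post; the function itself is hypothetical):

```python
def screen_sample(ratings):
    """Screening for one tester on one sample: if the hidden reference
    was rated below transparent (5.0), the tester 'heard' a flaw that
    isn't there, so their results for this sample are discarded.

    `ratings` maps condition name -> ABC/HR score on a 1.0-5.0 scale,
    including the hidden reference under the key 'reference'."""
    if ratings["reference"] < 5.0:
        return None  # excluded: rated the hidden reference as impaired
    return {name: score for name, score in ratings.items() if name != "reference"}

print(screen_sample({"reference": 5.0, "codec_A": 4.2, "low_anchor": 1.5}))
print(screen_sample({"reference": 3.8, "codec_A": 4.6, "low_anchor": 1.5}))
```

This is the sense in which the screened results trivially place the Negative Control at the screening level: any surviving tester, by construction, rated the reference 5.0.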

This post has been edited by Dynamic: Nov 27 2012, 15:06
