What is "time resolution"?
2Bdecided
post Nov 21 2006, 11:51
Post #151


ReplayGain developer


Group: Developer
Posts: 4945
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



QUOTE (ChiGung @ Nov 20 2006, 18:41) *
QUOTE (2Bdecided @ Nov 20 2006, 12:59) *

Let me draw a parallel. Try performing a time domain analysis on a nice linear-phase graphic equaliser with the bass at +6dB and the treble at -6dB. Would it change the time domain signal? Of course! It must change the time domain - you can't change one domain without affecting the other!
But what we have is very much a frequency domain phenomenon,
and anyone who tries to analyse solely in the time domain is going to look very stupid - especially if they say they're going to do it for a random set of audio signals. Think about it - what on earth would it tell you?

Check the veracity of your focus here: you confirm "you can't change one domain without affecting the other", but then announce that "we have a frequency domain phenomenon" (only modified conveniently with the words "very much").
This is like bending a piece of steel and announcing "what we have is 'very much' a structural phenomenon" - therefore, anyone who wishes to observe how the structural phenomenon affects temperature is going to look very stupid.


Oh come on. The frequency and time domains are mathematically related. Of course you can't change one without changing the other.

However, there is little practical use or sense in trying to analyse certain changes in one domain when all the important stuff is hidden in that domain but apparent (obvious) in the other.


That's why I gave the example I did, because it's analogous to what you're doing. Let me put it like this:

I give you a black box to test. You decide the way you're going to test it by taking a random selection of audio signals (e.g. tracks off commercial CDs), and looking at the time domain output.

If that black box is a linear phase graphic equaliser, with the bass at +6dB and the treble at -6dB, you're really going to struggle to understand that with your time domain analysis! Whereas if you do a frequency domain analysis and/or if you use some suitable test signals, you can find out what's happening almost immediately.
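By way of illustration (my own sketch, not anyone's posted code - the breakpoint frequencies and tap count are arbitrary choices of mine): build a linear-phase FIR approximating the "+6 dB bass, -6 dB treble" box, and note that while the filtered programme material looks unremarkable sample by sample, the spectrum of the impulse response gives the game away at once.

```python
import numpy as np

# Sketch of the black-box test above (breakpoints and tap count are my
# own arbitrary choices): a linear-phase FIR with ~+6 dB bass and
# ~-6 dB treble. Filtered programme material looks unremarkable sample
# by sample, but the impulse response's spectrum reveals the box at once.
fs, ntaps = 48000, 511
freqs = np.fft.rfftfreq(ntaps, 1 / fs)
gain_db = np.interp(freqs, [0, 1000, 8000, fs / 2], [6, 6, -6, -6])

# Zero-phase design from the target magnitude, shifted to be causal:
h = np.roll(np.fft.irfft(10 ** (gain_db / 20), n=ntaps), ntaps // 2)

# "Random programme material" through the box -- hard to read by eye:
x = np.random.default_rng(0).standard_normal(fs)
y = np.convolve(x, h, mode="same")

# ...but the frequency response is immediately legible:
f = np.fft.rfftfreq(8192, 1 / fs)
H_db = 20 * np.log10(np.abs(np.fft.rfft(h, n=8192)))
bass_db = H_db[np.argmin(np.abs(f - 500))]      # ~ +6 dB
treble_db = H_db[np.argmin(np.abs(f - 16000))]  # ~ -6 dB
```

Staring at `y` sample by sample tells you almost nothing; `bass_db` and `treble_db` tell you the whole story.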


You're probably thinking "but I want to know what's happening in both domains" - that's fair enough, but those of us who understand audio know that we can grab the data in whichever domain it makes most sense, and then know exactly what is happening in the other domain.


To put it really simply, we already know all the stuff you're trying to demonstrate, and more importantly we understand why it happens, and why it's not very interesting or important. That, in a nutshell, explains the "attitude" you've been getting over these 6 pages!

QUOTE
I admit, I can't locate the comments on quantisation which you asked me to respond to. Generally I would expect issues regarding quantisation will only reinforce limits on resolution of time & level detail within PCM records.....


Post 109 - after the word "conclusion". I explain why quantisation does represent a limit in amplitude resolution, while low pass filtering does not represent a limit in time resolution.
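The asymmetry being described can be illustrated in a few lines (my own sketch, not the post-109 text): quantisation puts a hard half-LSB floor on amplitude accuracy, while the peak of a band-limited pulse can still be located to a small fraction of a sample by interpolating between samples.

```python
import numpy as np

# A sketch of the asymmetry (my own construction, not the post-109 text):
# quantisation imposes a hard amplitude floor of half an LSB, while the
# peak of a band-limited pulse can be located to a small fraction of a
# sample by interpolating between samples.
n = np.arange(-400, 400)
true_peak = 0.30                       # the real peak sits between samples

# Band-limited pulse (sinc, bandwidth 0.2 * fs) centred at true_peak.
x = np.sinc(0.4 * (n - true_peak))

# Three-point parabolic interpolation around the largest sample.
k = int(np.argmax(x))
a, b, c = x[k - 1], x[k], x[k + 1]
est_peak = n[k] + 0.5 * (a - c) / (a - 2 * b + c)
timing_error = abs(est_peak - true_peak)   # a few hundredths of a sample

# Quantisation, by contrast, has a fixed granularity it can never beat:
lsb = 2 / 256                              # 8-bit steps over full scale
amp_error = np.max(np.abs(np.round(x / lsb) * lsb - x))  # <= lsb / 2
```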

QUOTE
@SebastianG
QUOTE (SebastianG @ Nov 20 2006, 01:57) *
I would have loved to hear from you about application examples where this does matter.

An example application where real-world 'time resolution' would matter would be in rangefinding. Imagine a very accurate visual sensor records a flash, and a sound sensor records a shock wave following it. Employing knowledge of the speed of light, and the speed of sound, we can estimate the distance of the cause of the flash and the shockwave (assuming the same event produced both).

[speed of sound] * [time interval between the flash and the shockwave]
= [distance of sensor from the cause]

or more accurately,

distance from event = observed time between sight and sound / (1/soundspeed - 1/lightspeed)

This example is basically a filling out of the 'tekkie's spike' example.
If rangefinding processing used PCM, wouldn't 'time resolution' be one factor that limits its maximum "range resolution"?

It surely is an odd thing to hear engineers report that they can't think of any applications where practical 'time resolution' of an employed encoding format might matter.
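As an aside, the quoted range arithmetic takes only a few lines (speeds are standard physical values; the 2.9 s delay is an illustrative number of my own):

```python
# Sketch of the quoted range arithmetic (speeds are standard values;
# the 2.9 s delay is an illustrative number of my own).
SPEED_OF_SOUND = 343.0                 # m/s, dry air at 20 C
SPEED_OF_LIGHT = 299_792_458.0         # m/s

def range_from_interval(dt_seconds):
    """Distance to the event, given the sight-to-sound delay."""
    return dt_seconds / (1 / SPEED_OF_SOUND - 1 / SPEED_OF_LIGHT)

d = range_from_interval(2.9)           # roughly 995 m
```

The light-speed term is all but negligible here, which is why the "speed of sound times delay" shortcut works so well in practice.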


I think Sebastian has already explained and demonstrated precisely how low pass filtering does not reduce the accuracy in such real-world experiments, and is often used as a simple way of removing noise and improving accuracy.

It's precisely because it works in the real world, as well as in theory and all the examples given here, that your continued objection is so funny.

If you were right, we'd have to stick with analogue electronics and timers for no end of things which have been digital for decades!!!

Cheers,
David.
MoSPDude
post Nov 21 2006, 14:44
Post #152





Group: Members
Posts: 175
Joined: 24-July 06
From: Sheffield, UK
Member No.: 33249



@ChiGung, are you actually performing your analysis in both domains or purely in the time domain? As everyone has stated previously, low pass filtering effects are easily visible in the frequency domain - and awkward/impossible to visualise in the time domain, especially if you're working with complex signals. Can you show both the phase and magnitude characteristics of the low pass filters you've been examining? Is it a linear phase filter, etc.?
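For what it's worth, the check being asked for takes only a few lines. Here is a sketch for a generic windowed-sinc lowpass (the tap count and cutoff are placeholders of mine, not ChiGung's actual filters): magnitude and phase responses computed straight from the taps.

```python
import numpy as np

# A sketch of the requested check for a generic windowed-sinc lowpass
# (tap count and cutoff are placeholders, not the actual filters used):
# magnitude and phase responses computed straight from the taps.
ntaps, fc = 101, 0.25                  # cutoff in cycles/sample
m = np.arange(ntaps) - (ntaps - 1) / 2
h = 2 * fc * np.sinc(2 * fc * m) * np.hamming(ntaps)

H = np.fft.rfft(h, n=4096)
f = np.fft.rfftfreq(4096)              # frequency in cycles/sample
mag_db = 20 * np.log10(np.abs(H) + 1e-12)
phase = np.unwrap(np.angle(H))

# A symmetric FIR is linear phase: in the passband the phase is a
# straight line whose slope gives a constant group delay of
# (ntaps - 1) / 2 = 50 samples.
passband = f < 0.2
slope = np.polyfit(f[passband], phase[passband], 1)[0]
group_delay = -slope / (2 * np.pi)     # ~ 50 samples
```

A linear-phase filter shows up immediately as a constant group delay; a minimum-phase one would not.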
ChiGung
post Nov 21 2006, 16:22
Post #153





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (2Bdecided @ Nov 21 2006, 10:51) *
If that black box is a linear phase graphic equaliser, with the bass at +6dB and the treble at -6dB, you're really going to struggle to understand that with your time domain analysis! Whereas if you do a frequency domain analysis and/or if you use some suitable test signals, you can find out what's happening almost immediately.

Whatever the black box does, if you want to find out how it affects details in the time domain, you need to examine the time domain. It is irrelevant that the black box's process might be more difficult to discover in the time domain; its effect on the time domain is apparent in the time domain.

QUOTE
You're probably thinking "but I want to know what's happening in both domains" - that's fair enough, but those of us who understand audio know that we can grab the data in whichever domain it makes most sense, and then know exactly what is happening in the other domain.

Not really; here in this thread, the issue examined is 'time resolution', not 'frequency resolution'. "Those of you who understand audio" should understand that just because you can only make sense of a process in one domain does not mean its effects on another are insignificant.

QUOTE
To put it really simply, we already know all the stuff you're trying to demonstrate, and more importantly we understand why it happens, and why it's not very interesting or important.

That is a simple account which you keep presenting, yet you continue to also present explicitly fallacious arguments against the investigation. And no, there have not been any confident predictions made about the results you might expect, only attempts to colour the results as somehow irrelevant to 'time resolution'.


QUOTE
Post 109 - after the word "conclusion". I explain why quantisation does represent a limit in amplitude resolution, while low pass filtering does not represent a limit in time resolution.

I see; then the comment I made on it generally stands.

A pity you again choose to try and explain your untouchable understandings, instead of commenting on any of the very accessible points I made to you earlier.

QUOTE
QUOTE
It surely is an odd thing to hear engineers report that they can't think of any applications where practical 'time resolution' of an employed encoding format might matter.

I think Sebastian has already explained and demonstrated precisely how low pass filtering does not reduce the accuracy in such real-world experiments, and is often used as a simple way of removing noise and improving accuracy.

That is not another 'get out' clause. You are saying that inaccuracy of the input signal usually eclipses the capabilities of the record. You are saying that practical time resolution of PCM is even less than the limit I am investigating.
(That is what you are saying there.) And elsewhere you present the claim that PCM's time resolution is near infinite, as algebra's is, and leave my contentions about that unanswered.

As long as you all maintain a united front of expert denial, no one need feel silly, right?

QUOTE (MoSPDude)
@ChiGung, are you actually performing your analysis in both domains or purely in the time domain? As everyone has stated previously, low pass filtering effects are easily visible in the frequency domain - and awkward/impossible to visualise in the time domain, especially if you're working with complex signals.

I am not studying effects of filters in the frequency domain, in the same way as, if I wanted to measure how bending steel changed its temperature, I wouldn't need to bring a protractor....well, yes, to measure the bend, but here the filters have a set 'bend' which we can double check if we want; it's the temperature change that is being investigated.
I can look at things in 'a' frequency domain, with a frequency renderer I've made myself, but it would only be useful for looking into details of how the filters work, not how they affect time domain information.

QUOTE
Can you show both the phase and magnitude characteristics of the low pass filters you've been examining? Is it a linear phase filter etc.?

I will try to contrast results from some differently implemented lowpass filters. I don't have a raft of tools available to investigate the particulars of the filters I use, so others might be able to scrutinise them better. But I will mostly use the best I can get my hands on, such as sox and ssrc. Or if anyone could provide prefiltered samples, that would be great.

I've made a couple of refinements to nodecouple.java: fixed a problem with the 'nodeselection' clause, which was unfairly selecting nodes towards the middle of sample intervals and fewer towards the edges. And a few other fixes.
I should get time to generate some data with it within a day or two.

I also thought of imposing a minimum distance between nodes to attempt to correlate, in order to make extra sure that closer nodes aren't 'misplaced'.
There is the potential to select between many thousands of acute nodes in source tracks, so employing an unbiased subselection of them will not lead to sparse results.

Here is what I'm thinking of examining:

First a correlation between two different tracks of white noise, as a control to check the degree and evenness of correlation between random sources.

Then a correlation between two different tracks of pink noise, to double check the same.

Then a correlation between a pink noise track and itself upsampled x4, but with the non-upsampled track internally scaled to fit the upsampled one.
This is to observe the difference between the simple node locating formula used by the code and the improvements possible with high quality upsampling.

I also want to check the evenness of intersample node location (not coupling), so I'll output a distribution of intersample node locations (-0.5 to +0.5 of a sample) for pink noise, and then for the music track which will be used.

That will all be for pretesting of the program, and of the source material.

Then I can proceed to outputting distributions of correlated timing of detectable conditions between tracks lowpassed at different levels, to simulate different samplerates.....

So, in a couple of days hopefully.

cheers,
cg

This post has been edited by ChiGung: Nov 21 2006, 16:28


--------------------
no conscience > no custom
2Bdecided
post Nov 21 2006, 18:17
Post #154





[quote name='ChiGung' date='Nov 21 2006, 16:22' post='451177']
[quote name='2Bdecided' post='451141' date='Nov 21 2006, 10:51']If that black box is a linear phase graphic equaliser, with the bass at +6dB and the treble at -6dB, you're really going to struggle to understand that with your time domain analysis! Whereas if you do a frequency domain analysis and/or if you use some suitable test signals, you can find out what's happening almost immediately.[/quote]
Whatever the black box does, if you want to find out how it affects details in the time domain, you need to examine the time domain. It is irrelevant that the black box's process might be more difficult to discover in the time domain; its effect on the time domain is apparent in the time domain.

[quote]You're probably thinking "but I want to know what's happening in both domains" - that's fair enough, but those of us who understand audio know that we can grab the data in whichever domain it makes most sense, and then know exactly what is happening in the other domain.[/quote]
Not really; here in this thread, the issue examined is 'time resolution', not 'frequency resolution'. "Those of you who understand audio" should understand that just because you can only make sense of a process in one domain does not mean its effects on another are insignificant.
[/quote]

No one said that had to be the case.

However, for a linear system, simply knowing the frequency+phase response tells you the impulse response, and therefore everything there is to know about the system. Simply knowing the impulse response tells you the frequency+phase response, and therefore everything there is to know about the system.
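That equivalence is exactly what the DFT expresses; a minimal sketch:

```python
import numpy as np

# The equivalence stated above, sketched directly: the DFT maps an
# impulse response to the complex (magnitude + phase) frequency
# response, and the inverse DFT maps it back with nothing lost.
rng = np.random.default_rng(2)
h = rng.standard_normal(64)            # an arbitrary impulse response

H = np.fft.fft(h)                      # frequency + phase response
h_back = np.fft.ifft(H).real           # ...and back, losslessly

roundtrip_error = float(np.max(np.abs(h_back - h)))
```

Either representation determines the other to machine precision, which is why "analyse in whichever domain is convenient" loses nothing.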

[quote]
[quote]To put it really simply, we already know all the stuff you're trying to demonstrate, and more importantly we understand why it happens, and why it's not very interesting or important.[/quote]
That is a simple account which you keep presenting, yet you continue to also present explicitly fallacious arguments against the investigation. And no, there have not been any confident predictions made about the results you might expect, only attempts to colour the results as somehow irrelevant to 'time resolution'.
[/quote]

I have given you exact numerical predictions twice now!


[quote]
[quote]Post 109 - after the word "conclusion". I explain why quantisation does represent a limit in amplitude resolution, while low pass filtering does not represent a limit in time resolution.[/quote]
I see; then the comment I made on it generally stands.

A pity you again choose to try and explain your untouchable understandings, instead of commenting on any of the very accessible points I made to you earlier.
[/quote]

?!

[quote][quote][quote]It surely is an odd thing to hear engineers report that they can't think of any applications where practical 'time resolution' of an employed encoding format might matter.[/quote]
I think Sebastian has already explained and demonstrated precisely how low pass filtering does not reduce the accuracy in such real-world experiments, and is often used as a simple way of removing noise and improving accuracy.[/quote]
That is not another 'get out' clause. You are saying that inaccuracy of the input signal usually eclipses the capabilities of the record. You are saying that practical time resolution of PCM is even less than the limit I am investigating.
(That is what you are saying there.)
[/quote]

That's not what Seb or I are saying. We've both said very clearly that you can downsample to low sample rates (or just low pass filter) and use the sub-sample accuracy very easily. In the real world.
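A sketch of that claim in practice (the signal, delay, and interpolation method are illustrative choices of mine): delay a band-limited signal by a non-integer number of samples, and recover the delay to well under a sample from the cross-correlation peak.

```python
import numpy as np

# Sketch (my own illustrative setup): recover a fractional-sample delay
# between two band-limited signals from the cross-correlation peak,
# refined by parabolic interpolation -- sub-sample accuracy in practice.
true_delay = 3.37                     # samples, deliberately non-integer
n = np.arange(2048)

def sig(t):
    # Sum of two tones well below Nyquist (frequencies in cycles/sample).
    return np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.sin(2 * np.pi * 0.11 * t)

a = sig(n)
b = sig(n - true_delay)               # the same signal, delayed

# Integer peak of the cross-correlation, then a three-point parabolic fit.
xc = np.correlate(b, a, mode="full")
k = int(np.argmax(xc))
y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
est_delay = (k - (len(a) - 1)) + frac  # well within 0.1 of true_delay
```

This is the standard trick behind, e.g., time-delay estimation in rangefinding: band limiting does not stop the delay being pinned down far more finely than the sample grid.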

[quote]And elsewhere you present the claim that PCM's time resolution is near infinite, as algebra's is, and leave my contentions about that unanswered.
[/quote]
Oh no - I've just said it again! You've had the proofs several times over.


[quote]
As long as you all maintain a united front of expert denial, no one need feel silly, right?
[/quote]

I'm trying to get my head around ChiGung world, and it's a very strange place!

Cheers,
David.
ChiGung
post Nov 21 2006, 19:36
Post #155








QUOTE ('2B')
QUOTE ('cg')
2Bdecided: "You're probably thinking "but I want to know what's happening in both domains" - that's fair enough, but those of us who understand audio know that we can grab the data in whichever domain it makes most sense, and then know exactly what is happening in the other domain."
Not really; here in this thread, the issue examined is 'time resolution', not 'frequency resolution'. "Those of you who understand audio" should understand that just because you can only make sense of a process in one domain does not mean its effects on another are insignificant.

No one said that had to be the case.
However, for a linear system, simply knowing the frequency+phase response tells you the impulse response, and therefore everything there is to know about the system. Simply knowing the impulse response tells you the frequency+phase response, and therefore everything there is to know about the system.

I said it had to be the case that statements about time resolution must inform us of details expressed in the time domain. Details expressed in the time domain are most securely derivable from the time domain (i.e. by discerning timings).

QUOTE
I have given you exact numerical predictions twice now!

Of how details will be susceptible to move/change in the time domain for different source material types and levels of bandlimitation?
Sorry, this thread is rather long now, and I missed that. Could you recall them for me?

QUOTE ("2bdecided")
QUOTE
A pity you again choose to try and explain your untouchable understandings, instead of commenting on any of the very accessible points I made to you earlier.
?!


Here is an example of the most recent fundamental explanations which you have avoided responding to:
QUOTE (chigung)
Just because a peak level created by any combination of frequencies can occur at any subsample location in time does not imply that a peak represented in PCM as a necessarily bandlimited frequency spread (limited by the Nyquist f) can have its temporal position in the source predicted absolutely precisely.

When you suppose 'resolution' is dependent on the smallest values with which a record can confidently resolve itself, then with PCM, resolution of time and level is effectively infinite because the 'resolution' of algebra is effectively infinite. With such a flattering use of the term 'resolution' (which has been relied on here), the amount of information in a record has almost no contribution at all to its reportable resolution - all you really need are 2 samples in order to have a record which you can say you can resolve with infinite resolution.

Put another way, ....
What is "level resolution" then? Doesn't a PCM record of 8-bit words differ in 'level resolution' from one of 16-bit words? Or do you all contend that to a properly educated engineer, both records' resolution of level is effectively infinite as well?


QUOTE
QUOTE
'2B':"I think Sebastian has already explained and demonstrated precisely how low pass filtering does not reduce the accuracy in such real-world experiments, and is often used as a simple way of removing noise and improving accuracy."
cg:"That is not another 'get out' clause. You are saying that inaccuracy of the input signal usually eclipses the capabilities of the record. You are saying that practical time resolution of PCM is even less than the limit I am investigating."

That's not what Seb or I are saying. We've both said very clearly that you can downsample to low sample rates (or just low pass filter) and use the sub-sample accuracy very easily. In the real world.

My summarisation of your points there was straightforward and accurate.
You respond that you can "use the sub-sample accuracy very easily".
I am aware that you can only do this when you are not significantly bandlimiting the frequency distribution of the source material. Which may often be true for CD audio, but does not hold for PCM generally.
To force you to deal with the practical reality which you denied, and which you and most others collectively ridiculed my understanding of, I have devised and begun an investigation of the actual uncertainty of time domain details caused by bandlimitation. Amongst you, because of your own concerns, none can even acknowledge the relevance of the investigation to the topic.
I contend - very certainly - that good, unaffected engineers could acknowledge the veracity of my investigation and what it is designed to prove to you all.

QUOTE
cg:"And elsewhere you present the claim that PCM's time resolution is near infinite, as algebra's is, and leave my contentions about that unanswered."
Oh no - I've just said it again! You've had the proofs several times over.

You are saying that time and level resolution are infinite in practice, and that you have defended this claim effectively somewhere. It is an impossible claim to defend; the best attempt so far has been to deny the relevance of the common meaning of the term "resolution".

QUOTE
cg:"As long as you all maintain a united front of expert denial, no one need feel silly, right?"

I'm trying to get my head around ChiGung world, and it's a very strange place!

Fair enough, I don't argue with that really.
The whole world is a strange place too; I don't try and deny it.

When the fog of war lifts, the country will be revealed.
'cg

This post has been edited by ChiGung: Nov 21 2006, 19:38


krabapple
post Nov 22 2006, 06:18
Post #156





Group: Members
Posts: 2157
Joined: 18-December 03
Member No.: 10538



You should be made aware, ChiGung, that your rhetoric resembles that of a crackpot. I suggest you try to explain your theories and data in reference to current norms, to avoid such rhetoric.
2Bdecided
post Nov 22 2006, 11:04
Post #157





QUOTE (krabapple @ Nov 22 2006, 06:18) *
You should be made aware, ChiGung, that your rhetoric resembles that of a crackpot. I suggest you try to explain your theories and data in reference to current norms, to avoid such rhetoric.


I guess the clue is in the name. Google for ChiGung.
2Bdecided
post Nov 22 2006, 12:07
Post #158





[quote name='ChiGung' date='Nov 21 2006, 19:36' post='451227']
[quote='2B'][quote='cg']2Bdecided: "You're probably thinking "but I want to know what's happening in both domains" - that's fair enough, but those of us who understand audio know that we can grab the data in whichever domain it makes most sense, and then know exactly what is happening in the other domain."
Not really; here in this thread, the issue examined is 'time resolution', not 'frequency resolution'. "Those of you who understand audio" should understand that just because you can only make sense of a process in one domain does not mean its effects on another are insignificant.[/quote]
No one said that had to be the case.
However, for a linear system, simply knowing the frequency+phase response tells you the impulse response, and therefore everything there is to know about the system. Simply knowing the impulse response tells you the frequency+phase response, and therefore everything there is to know about the system.[/quote]
I said it had to be the case that statements about time resolution must inform us of details expressed in the time domain. Details expressed in the time domain are most securely derivable from the time domain (i.e. by discerning timings).
[/quote]

Thanks for fixing the quotes.

This is a side issue, but if you don't accept that you can analyse in one domain, and then perfectly transform the results into the other domain, then you're denying well known and proven maths, and the FFT function itself!


[quote][quote]I have given you exact numerical predictions twice now![/quote]
Of how details will be susceptible to move/change in the time domain for different source material types and levels of bandlimitation?
Sorry, this thread is rather long now, and I missed that. Could you recall them for me?
[/quote]

Post 109 was the long explanation (though I wrote "sample" when I meant "cycle").
Post 145 was the correct summary.


[quote][quote="2bdecided"][quote]A pity you again choose to try and explain your untouchable understandings, instead of commenting on any of the very accessible points I made to you earlier.[/quote]?![/quote]

Here is an example of the most recent fundamental explanations which you have avoided responding to:
[quote=chigung]Just because a peak level created by any combination of frequencies can occur at any subsample location in time does not imply that a peak represented in PCM as a necessarily bandlimited frequency spread (limited by the Nyquist f) can have its temporal position in the source predicted absolutely precisely.
[/quote]
[/quote]

No one has argued that the waveform peaks in a signal with spectral content above arbitrary frequency f will always remain in the same location after removing all content above frequency f using a low pass filter. Everyone has agreed with you that the peaks will move.


[quote]When you suppose 'resolution' is dependent on the smallest values with which a record can confidently resolve itself, then with PCM, resolution of time and level is effectively infinite because the 'resolution' of algebra is effectively infinite. With such a flattering use of the term 'resolution' (which has been relied on here), the amount of information in a record has almost no contribution at all to its reportable resolution - all you really need are 2 samples in order to have a record which you can say you can resolve with infinite resolution.
[/quote]

It's got nothing to do with the resolution of algebra. It has always been and continues to be a disagreement about the definition of time resolution.

No one has ever doubted that, in "your" definition of time resolution (which is basically about what happens to waveform peaks when a low pass filter is applied) it is, at worst, +/- 1/2 a wavelength at the cut-off frequency.


[quote]
Put another way, ....
What is "level resolution" then? Doesn't a PCM record of 8-bit words differ in 'level resolution' from one of 16-bit words? Or do you all contend that to a properly educated engineer, both records' resolution of level is effectively infinite as well?
[/quote]

8-bit audio has 256 different possible quantisation levels. 16-bit audio has 65536 different possible quantisation levels.

Any instantaneous amplitude will be quantised (rounded, truncated, etc) to the nearest available level.

This is an easily measurable error. Without dither, this causes distortion, and a discrete "level resolution" limit.

With dither, the error is noise-like, rather than distortion-like, and it makes much more sense to talk about the noise floor.


With dither, if the signal is known to be stationary, then averaging successive samples can indeed increase the resolution. (It works, try it. It becomes just a standard noise-averaging calculation.) However, audio signals aren't stationary, so that doesn't strictly apply. Despite this, there is a perceptual effect in frequency analysis (in the ear, on paper, or on a PC!) which works rather well to average the noise in any frequency bands of interest. Just basic filtering and/or frequency analysis really, but the results in dB terms can be surprising.
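The noise-averaging point can be sketched like this (TPDF dither, 16-bit step, my own illustrative numbers): with dither, the mean of repeated quantisations homes in on a value that falls between quantisation levels; without dither, the error is frozen in.

```python
import numpy as np

# Sketch of the noise-averaging point (TPDF dither, my own numbers):
# with dither, averaging repeated quantisations of a stationary value
# recovers it to far better than one LSB; without dither it cannot.
rng = np.random.default_rng(1)
lsb = 1 / 32768                       # 16-bit quantisation step
true_value = 2.3 * lsb                # lies between quantisation levels

def quantise(x):
    return np.round(x / lsb) * lsb

# Undithered: every trial rounds to the same nearby level, so averaging
# is stuck with a fixed 0.3 LSB error.
undithered = quantise(np.full(10000, true_value)).mean()

# TPDF dither (sum of two uniform noises, +/- 1 LSB peak) decorrelates
# the error, and the average converges on the true value.
tpdf = rng.uniform(-0.5, 0.5, 10000) + rng.uniform(-0.5, 0.5, 10000)
dithered = quantise(true_value + tpdf * lsb).mean()

err_undithered = abs(undithered - true_value)
err_dithered = abs(dithered - true_value)
```

The dithered error shrinks roughly as 1/sqrt(N) with the number of averaged samples, which is the "standard noise-averaging calculation" referred to above.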


The similar argument with "time resolution" is that, with correct filtering (anti-alias and anti-image), inter-sample peaks do survive. Without correct filtering, they don't (and you don't know what or where they are).


So the "original" signal must be changed in two ways to survive sampling without further corruption: filter at or below the Nyquist frequency before sampling to prevent aliasing, and add noise at the least significant bit level before quantisation to prevent distortion. The signal that is preserved perfectly in the digital domain is the filtered, noisy signal. You simply select a suitable sampling frequency and bit depth such that the filter cut off and noise level does not perceptibly degrade the signal. No one would claim that the digital version is any better than its parameters allow, and infinite precision is not possible without infinite data.



If you compare the "correct" digital signal (i.e. the one from a filtered, dithered source) with the original analogue signal, then yes - at each instant the amplitude error may be up to +/- 2 least significant bits; the position of individual waveform peaks may have moved by up to +/- 1 sample.

Interestingly, if you do not dither, and do not filter, then the amplitude error is only +/- 1/2 a least significant bit, and the position of individual waveform peaks which survive will only move by up to 1/2 a sample.

Slightly interestingly, if you use noise shaped dither, then the raw amplitude errors get worse, but perceived noise gets less.

However in the first and third examples, the amplitude errors are "just" noise at a pre-defined level, and the waveform peak movements are "just" due to bandlimiting at a pre-defined level. In the second example, the amplitude errors are nasty distortion, and the waveform peaks can be lost entirely (even if they are due entirely to in-band signals), and spurious ones can be created (which have nothing to do with in-band signals).


That's why it's pointless defining amplitude and timing errors in the way you're trying to.

When the digital system is set up properly (filter, dither), then you have a noise floor and a bandwidth limit, not a level or temporal resolution limit.


You can try to argue that the noise floor represents a resolution limit, but that is a misuse of the word resolution.

You can try to argue that the bandwidth limit represents a resolution limit, but that is a misuse of the word resolution.


The resolution is infinite (or "complete", if you like) within the noise and bandwidth limits, and non-existent outside of it.


[quote][quote][quote]'2B':"I think Sebastian has already explained and demonstrated precisely how low pass filtering does not reduce the accuracy in such real-world experiments, and is often used as a simple way of removing noise and improving accuracy."
cg:"That is not another 'get out' clause. You are saying that inaccuracy of the input signal usually eclipses the capabilities of the record. You are saying that practical time resolution of PCM is even less than the limit I am investigating."[/quote]
That's not what Seb or I are saying. We've both said very clearly that you can downsample to low sample rates (or just low pass filter) and use the sub-sample accuracy very easily. In the real world.[/quote]
My summarisation of your points there was straightforward and accurate.
You respond that you can "use the sub-sample accuracy very easily".
I am aware that you can only do this when you are not significantly bandlimiting the frequency distribution of the source material. Which may often be true for CD audio, but does not hold for PCM generally.
[/quote]

The example already raised, rangefinding, works perfectly well.

If I thought you would learn anything from it, I'd provide some examples, but you've shown yourself almost uniquely unwilling to learn anything.


[quote]To force you to deal with the practical reality which you denied, and which you and most others collectively ridiculed my understanding of, I have devised and begun an investigation of the actual uncertainty of time domain details caused by bandlimitation. Amongst you, because of your own concerns, none can even acknowledge the relevance of the investigation to the topic.
I contend - very certainly - that good, unaffected engineers could acknowledge the veracity of my investigation and what it is designed to prove to you all.
[/quote]

Anyone with any "engineering" knowledge, experience, or even qualification, would laugh at you.

I think you underestimate the high standard of HA's members (and I'm not talking about myself). HA isn't full of idiots who waffle about things they don't understand - the founding members were and are experts in their field who actually do this stuff, either for a living, or for enjoyment. The widespread use of Lame mp3, Nero AAC etc, the developers of which are regular posters on this board, speaks volumes.

This isn't some audiofool repository where people discuss the effect of fairies on the sound of their loudspeaker cables.



If you want to do something interesting, instead of trying to prove something which we all already accept (but don't think is important), why not try to find a mechanism by which it might be important! E.g. the human ear isn't a linear time-frequency analyser, so is it possible that a low pass filter's effect in the frequency domain can be inaudible, yet its effect in the time domain is audible?

It's a question many have pondered. Though the human ear is clearly very non-linear, I can't bring myself to believe that any of the mechanisms in there allow it to behave like that.

The only aspects where the human ear's "time resolution" is better than you would expect from the frequency response are exactly the same aspects where the "time resolution" of PCM "is infinite". If you can show that the human ear, while it cannot hear frequencies above, say, 25kHz, can detect the movement of those waveform peaks caused by low pass filtering at 25kHz, you would be on to something.
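This last point is easy to check numerically. A minimal sketch (the signals and parameters below are my own assumptions, chosen so each tone lands exactly on an FFT bin): an ideal brickwall lowpass leaves a signal with no energy above the cutoff intact - peaks included - while anything above the cutoff is removed entirely, taking whatever peak detail it contributed with it.

```python
import numpy as np

def brickwall_lp(x, fs, fc):
    """Zero every FFT bin above fc and transform back (ideal lowpass)."""
    X = np.fft.rfft(x)
    X[np.fft.rfftfreq(len(x), 1 / fs) > fc] = 0.0
    return np.fft.irfft(X, len(x))

fs, n = 96000, 4096
t = np.arange(n) / fs
low = np.cos(2 * np.pi * 1125 * t)                  # 48 whole cycles: in-band
high = 0.3 * np.cos(2 * np.pi * 30000 * t + 1.0)    # 1280 cycles: above 24 kHz

# No energy above the cutoff: the waveform (and so every peak) is untouched
print(np.allclose(brickwall_lp(low, fs, 24000), low))

# Energy above the cutoff: the 30 kHz term is removed entirely, so whatever
# it contributed to the waveform peaks is gone -- the result is just `low`
filtered = brickwall_lp(low + high, fs, 24000)
print(np.allclose(filtered, low))
```

Both checks print True: the filter only "moves" peaks to the extent that removed-band energy was shaping them in the first place.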

Cheers,
David.
ChiGung
post Nov 22 2006, 17:48
Post #159





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



@2B and others,

Having examined the effects of bandlimitation on time domain details...
I am in a position to thank you now for your continued efforts to argue the situation,
and to concede some truth to the claims made that the time resolution of pcm
can be practically negligible,
and to acknowledge the failure of previous arguments I made which contended otherwise.

...And by association, to perhaps acknowledge some merit in experience garnered with formal education wink.gif

After I checked and tweaked the selection clause for acute reversals of apparent level (peaks/valleys) in my investigative program, I did find that the temporal position of nodes between differently lowpassed samples was very unaffected unless significant energy was present in the band removed by the lowpass.

A great 'immunity' to the removal of unenergetic frequency bands is surely a sublime and powerful aspect of bandlimited pcm interpretation.
I had not expected, or described correctly, or acknowledged sensibly, that cutoff frequencies can normally contain so little energy as to have a negligible effect on timing details.

In fact I had expected and suggested that very low levels of rounding / quantisation / dither / noise should have more impact than I have been able to observe.

Without being able to find another explanation for the central spike in my correlation plots, I have had to acknowledge these conclusions: that an innate
ability of pcm is the capability of maintaining accuracy of time details far finer than the employed sample rate, for adequately bandlimited source.
And also, for sources not completely adequately pre-bandlimited for the target sample rate, considerable resilience of temporal accuracy is displayed.

I can understand now why the 'time resolution' of pcm is a discouraged/redefinable concept, and that its ultimate limit can be that of the error limits of the systems used to process it.

Later I will post a simple sequence of correlation charts which illustrate the matter.

@2Bdecided
- the quotes break when there are too many used in the reply (whether they are correctly closed or not)
QUOTE
This is a side issue, but if you don't accept that you can analyse in one domain, and then perfectly transform the results into the other domain, then you're denying well known and proven maths, and the FFT function itself!

I have understood this situation, but went straight to observing the time domain for time domain details, because I am not practiced in the methods which you might use to get at the time domain details by other routes.

QUOTE
Post 109 was the long explanation (though I wrote "sample" when I meant "cycle")
Post 145 was the correct summary
I can read that more clearly now, but with the level of mutual exasperation at the time, I got tripped up by expressions such as this:
QUOTE
Magic! Thus we "prove" (though it's more of a hand waving explanation!) your +/- 1/2 sample "time resolution", but we see it's really about frequency resolution.

In that case, it could be said that combinations of bandwidth of signal and bandlimitation of rate can produce timing uncertainties. I accept the veracity of the point that different bandwidths of the same source may not contain the same events anymore to correlate, but not yet the exclusivity of that point, maybe in time....

QUOTE
No one has ever doubted that, in "your" definition of time resolution (which is basically about what happens to waveform peaks when a low pass filter is applied) it is, at worst, +/- 1/2 a wavelength at the cut-off frequency.

I had not confirmed that figure, only expected it from observing 'early principles', but if accurate there is something natural/relevant about that figure(?) - perhaps that when storing 'least ideal' signals, that will be the maximum accuracy of readable time details one can expect?

QUOTE
....The similar argument with "time resolution" is that, with correct filtering (anti-alias and anti-image), inter-sample peaks do survive. Without correct filtering, they don't (and you don't know what or where they are).

Somewhere, I also imagined small higher frequencies which modify the forms created by collections of larger lower frequencies. But it turned out such details seem to be minuscule unless there is a great deal of noise in the higher band.

QUOTE
Interestingly, if you do not dither, and do not filter, then the amplitude error is only +/- 1/2 a least significant bit, and the position of individual waveform peaks which survive will only move by up to 1/2 a sample.

I can now more readily take your word for such things.
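The quoted +/- 1/2 LSB amplitude bound is straightforward to verify numerically. A sketch with assumed parameters (not anyone's code from the thread):

```python
import numpy as np

# Round-to-nearest quantisation without dither: the amplitude error never
# exceeds half of one least significant bit.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 100000)    # arbitrary test samples in [-1, 1)
bits = 16
step = 2.0 / (2 ** bits)          # one LSB for a full-scale range of 2.0
q = np.round(x / step) * step     # undithered round-to-nearest quantiser
max_err = np.abs(q - x).max()

print(max_err <= step / 2 + 1e-12)   # True: error bounded by half an LSB
```

The bound holds exactly by construction of round-to-nearest; the tiny tolerance only covers floating-point rounding in the arithmetic itself.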

It's a pity that post is lost in quote-mess; I can't respond to all of it now. Maybe you can cut out some quotes to make it display properly for others browsing the topic.

QUOTE
This isn't some audiofool repository where people discuss the effect of fairies on the sound of their loudspeaker cables.

Accepted, but when criticisms employ ridicule, it can read that way.

I hope those of you who put substantial effort into clarifying matters might forgive me for the misreadings and polemic I exhibited.

Appreciating most of your outputs blush.gif

-R&Dchung


--------------------
no conscience > no custom
Canar
post Nov 22 2006, 19:10
Post #160





Group: Super Moderator
Posts: 3327
Joined: 26-July 02
From: princegeorge.ca
Member No.: 2796



Heh, I've been there, though maybe to a lesser degree. Good to see you came around eventually. smile.gif


--------------------
∑:<
ChiGung
post Nov 22 2006, 19:49
Post #161








Thanks Canar, I was unsure whether to display your post - but see that you have actually been quite straightforward and civil.

-truce,

I'll just run off these plots and then my levels of humility and stress should finally normalise rolleyes.gif

best'
cg


ChiGung
post Nov 22 2006, 21:46
Post #162








Using a cut from the 96kHz 'bismark' recording sample, here first is a correlation against itself as a control:

CODE
---------- Run java ----------
Reading in biss.wav

  Opening biss.wav
  SampleRate:96000
  Stereo input
  Samples 983043
  Trackime at 96000Hz :10.240

reading 10.240 seconds (of 1 channel)(Change chnkread for more)
Locating Nodes...

Total nodes in: biss.wav:8791 Total nodes in: biss.wav:8791
Missednodes=223, discreps=8568, Searchednodes=8791

8568:                                        **                                    
8032:                                        **                                    
7497:                                        **                                    
6961:                                        **                                    
6426:                                        **                                    
5890:                                        **                                    
5355:                                        **                                    
4819:                                        **                                    
4284:                                        **                                    
3748:                                        **                                    
3213:                                        **                                    
2677:                                        **                                    
2142:                                        **                                    
1606:                                        **                                    
1071:                                        **                                    
 535:                                        **                                    
----------------------------------------------------------------------------------------
Sums:    0     0     0     0     0     0    8568   0     0     0     0     0     0  
Devs: <  58    48    38    28    17    7     2     12    23    33    43    53    64  >

Distribution chart, range=128n (256n=1 sample)

Normal Termination
Output completed (1 sec consumed).

(The plot is currently tallying discrepancies of up to 1/2 a 96kHz sample, either way there)

And at the same scale, the original 96k compared to:
soz.exe biss.wav biss48.wav filter 0-24000
("sinc windowed lowpass w/len=128" simulating downsample to 48kHz:)

CODE
Total nodes in: biss48.wav:8506
Total nodes in: biss.wav:8791

Missednodes=741, discreps=7765, Searchednodes=8506

3950:                                        **                                    
3703:                                        **                                    
3456:                                        **                                    
3209:                                        **                                    
2962:                                        **                                    
2715:                                        **                                    
2468:                                        **                                    
2221:                                        **                                    
1975:                                        **                                    
1728:                                        **                                    
1481:                                        **                                    
1234:                                        ** **                                
_987:                                        ** **                                
_740:                                     ** ** **                                
_493:                                     ** ** **                                
_246:                                  ** ** ** ** **                              
----------------------------------------------------------------------------------------
Sums:    2     7     10    21   110   307   3950  429   123    39    14    5     3  
Devs: <  58    48    38    28    17    7     2     12    23    33    43    53    64  >

Distribution chart, range=128n (256n=1 sample)


And at the same scale, the original 96k compared to:
soz.exe biss.wav biss32.wav filter 0-16000

CODE
Total nodes in: biss32.wav:7089
Total nodes in: biss.wav:8791

Missednodes=1769, discreps=5320, Searchednodes=7089


  534:                                        **                                    
  500:                                        **                                    
  467:                                     ** ** **                                
  433:                                     ** ** ** **                              
  400:                                     ** ** ** **                              
  367:                                  ** ** ** ** ** **                          
  333:                               ** ** ** ** ** ** ** **                        
  300:                               ** ** ** ** ** ** ** **                        
  267:                            ** ** ** ** ** ** ** ** **                        
  233:                            ** ** ** ** ** ** ** ** **                        
  200:                            ** ** ** ** ** ** ** ** ** **                    
  166:                         ** ** ** ** ** ** ** ** ** ** **                    
  133:                      ** ** ** ** ** ** ** ** ** ** ** ** **                  
  100:                ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** **              
   66:             ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** **      
   33:    ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** **
----------------------------------------------------------------------------------------
Sums:    40    57   112   151   288   387   534   443   335   160    92    76    65  
Devs: <  58    48    38    28    17    7     2     12    23    33    43    53    64  >

Distribution chart, range=128n (256n=1 sample)


And finally, at half the previous scale, the original 96k compared to:
soz.exe biss.wav biss24.wav filter 0-12000

CODE
Total nodes in: biss24.wav:5374
Total nodes in: biss.wav:8791

Missednodes=2136, discreps=3238, Searchednodes=5374


  280:                                        **                                    
  262:                                     ** **                                    
  245:                                     ** ** **                                
  227:                                     ** ** **                                
  210:                                  ** ** ** ** ** **                          
  192:                                  ** ** ** ** ** **                          
  175:                                  ** ** ** ** ** **                          
  157:                            ** ** ** ** ** ** ** ** ** **                    
  140:                            ** ** ** ** ** ** ** ** ** **                    
  122:                         ** ** ** ** ** ** ** ** ** ** **                    
  105:                      ** ** ** ** ** ** ** ** ** ** ** ** **                  
   87:                ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** **              
   70:             ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** **              
   52:          ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** **        
   35:    ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** **
   17:    ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** **
----------------------------------------------------------------------------------------
Sums:    38    57   101   115   158   211   280   223   170   109    67    49    43  
Devs: < 117    97    76    56    35    15    5     25    46    66    87   107   128  >

Distribution chart, range=256n (256n=1 sample)


While making these I rewrote the correlation tallying loop, and a central
spike in all plots was reduced, because the previous tallying loop was buggy
like this: counting vals from -3 to -2, -2 to -1, -1 to 1, 1 to 2, 2 to 3 etc.
(it was making the middle tally twice as big as it should have been).
This would have had something to do with my previous concessions,
but they probably should have been made even without the appearance of the spike,
and of course the simple solution I've used to locate nodes has not been checked either.
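The binning bug described above is easy to reproduce. A toy reconstruction (my own Python; the actual tallying code was never posted, so everything here is an assumption): with bin edges ..., -2, -1, 1, 2, ..., the central bin is twice as wide as the others, so even uniformly spread deviations show a false central spike.

```python
import numpy as np

# Uniform stand-in for the node discrepancies
rng = np.random.default_rng(0)
devs = rng.uniform(-3, 3, 60000)

buggy_edges = [-3, -2, -1, 1, 2, 3]     # centre bin spans -1..1 (width 2)
fixed_edges = [-3, -2, -1, 0, 1, 2, 3]  # every bin has width 1

buggy, _ = np.histogram(devs, bins=buggy_edges)
fixed, _ = np.histogram(devs, bins=fixed_edges)

print(buggy)   # middle count roughly double its neighbours -- a false spike
print(fixed)   # roughly flat, as uniform data should be
```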

So I'll leave it here for now.

-trying (in many ways),
cg


AstralStorm
post Nov 22 2006, 22:12
Post #163





Group: Members
Posts: 745
Joined: 22-April 03
From: /dev/null
Member No.: 6130



QUOTE (ChiGung @ Nov 22 2006, 21:46) *
using a cut from the 96kHz 'bismark' recording sample, first a correlation against itself as control:

<snip>

While making these I rewrote the correlation tallying loop, and a central
spike in all plots was reduced, because the previous tallying loop was buggy
like this: counting vals from -3 to -2, -2 to -1, -1 to 1, 1 to 2, 2 to 3 etc.
(it was making the middle tally twice as big as it should have been).
This would have had something to do with my previous concessions.


Not at all. You're bandlimiting the signal below the Nyquist rate of its content, therefore a time domain change must happen.
Try that with a 1000-sample non-square wave instead and compare the effect.

Your starting signal requires exactly a 96 kHz sample rate - 1/96000 s time resolution (with content at half that frequency) - to be encoded properly.
There are no such real-world audio signals short of USG (where you should use something in the MHz range maybe?) or noise.

Just sample at a higher rate than required and you're okay.
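A quick numerical illustration of that rule (my own sketch; the 30 kHz tone and 48 kHz rate are arbitrary assumptions): content above half the sample rate doesn't disappear, it aliases to a different in-band frequency, which is exactly why the rate must exceed twice the highest content frequency.

```python
import numpy as np

# A 30 kHz tone sampled at 48 kHz (Nyquist = 24 kHz) folds down to 18 kHz.
fs, n = 48000, 4800
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 30000 * t)       # 3000 whole cycles in the buffer

spec = np.abs(np.fft.rfft(x))
alias_freq = np.argmax(spec) * fs / n   # frequency of the spectral peak, Hz

print(alias_freq)                       # 18000.0, i.e. 48000 - 30000
```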

This post has been edited by AstralStorm: Nov 22 2006, 22:21


--------------------
ruxvilti'a
ChiGung
post Nov 22 2006, 23:38
Post #164








QUOTE (AstralStorm @ Nov 22 2006, 21:12) *
QUOTE (ChiGung @ Nov 22 2006, 21:46) *
While making these I rewrote the correlation tallying loop, and a central
spike in all plots was reduced, because the previous tallying loop was buggy
like this: counting vals from -3 to -2, -2 to -1, -1 to 1, 1 to 2, 2 to 3 etc.
(it was making the middle tally twice as big as it should have been).
This would have had something to do with my previous concessions.

Not at all.

To be clear, it was the appearance of an exaggerated central bar in the plots which made me reconsider previous arguments, but as I said, I think that was probably a good thing.


2Bdecided
post Nov 23 2006, 12:51
Post #165


ReplayGain developer


Group: Developer
Posts: 4945
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



Hey CG,

You'd got it in post 159 - quit while you're ahead. smile.gif

Anyway, you managed to say what everyone was saying - there has to be something up there which you're removing in order for the peaks to move. If there's little up there, it has little effect, so removing it doesn't do much. Even your "correct" plots show this.

What you're doing is a kind of perverse frequency analysis - the proportion of peaks which "move" is "kind of, on average" somewhat proportional to the amount of content you've removed. So dropping a recording with lots of HF energy down to 32kHz sampling will move more peaks than dropping a recording with little HF energy down to 32kHz sampling will.
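That "perverse frequency analysis" can be made explicit: the quantity it indirectly measures is roughly the fraction of signal energy above the cutoff. A sketch (the signals and frequencies are my own illustrative assumptions, chosen to land on exact FFT bins):

```python
import numpy as np

def energy_fraction_above(x, fs, fc):
    """Fraction of total spectral energy lying above fc."""
    power = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1 / fs)
    return power[f > fc].sum() / power.sum()

fs, n = 96000, 8192
t = np.arange(n) / fs
# Same 1.5 kHz fundamental; only the amount of HF content differs
bright = np.cos(2 * np.pi * 1500 * t) + 0.80 * np.cos(2 * np.pi * 18750 * t)
dull   = np.cos(2 * np.pi * 1500 * t) + 0.05 * np.cos(2 * np.pi * 18750 * t)

# Dropping to 32 kHz sampling means a 16 kHz cutoff
print(round(energy_fraction_above(bright, fs, 16000), 3))  # large: many peaks move
print(round(energy_fraction_above(dull, fs, 16000), 4))    # tiny: peaks barely move
```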

Have fun. Glad you came back to say what you'd found.

As Canar says, we've all been there. This was my best moment...

http://www.hydrogenaudio.org/forums/index....mp;#entry183857
(you also need to read Pio2001's reply immediately below!)


My own "Nyquist was wrong" moments have been argued through with my PhD Professor, and on my own with lots of pieces of paper covered with scribbled diagrams and numbers! That's why I can be so confident that Nyquist was right! wink.gif The full implications are more interesting than most people take the time to realise.

Cheers,
David.
ChiGung
post Nov 23 2006, 16:39
Post #166








QUOTE (2Bdecided @ Nov 23 2006, 11:51) *
Anyway, you managed to say what everyone was saying - there has to be something up there which you're removing in order for the peaks to move. If there's little up there, it has little effect, so removing it doesn't do much. Even your "correct" plots show this.

To be clear (so as not to be misrepresented): I repeatedly confirmed throughout this thread, when presented with this, that I understood that signals with little energy above the cutoff would not have their timings changed much.
The appearance of an inexplicable spike in my plots forced me to accept that there may be something else about the system I hadn't perceived - which is always a good thing.
So the concession was about that possibility (which stands, always) and over understandable use of terms.

QUOTE
What you're doing is a kind of perverse frequency analysis

What I did was attempt to move the debate about pcm 'time resolution' to examining the effects of bandlimitation (simulating sampling rates) on discrete timeable details. I don't think I can ever accept such an examination as 'perverse'. If I could have used an adequate method to calculate times, then the plots generated would inform us about the timing uncertainties to expect for different source types at different rates. With the less than adequate/untested method I used, it is just an illustration of 'what could be seen'.

By 'details' there, I mean conditions in the waveform which may be physically/electrically/originally formed by a wider band than that which is selected to record.

I can accept your assumption that the desirable section of reality to store in pcm is only the bandlimited section we might hear, so for your purposes the resolution of the record can be said to be infinite. But my focus is not the same: in my personally developed conception of the matter (for no doubt different purposes), the discarded band is only completely dismissible when it is completely empty. Because I am not focused on audio capabilities but on informational capabilities.

QUOTE
- the proportion of peaks which "move" is "kind of, on average" somewhat proportional to the amount of content you've removed.

This is valid knowledge. Uncertain tendencies are quantifiable and useful.

I've always understood the term 'resolution' to mean the finest detail with which a record can represent the reality it is meant to store. It may be a peculiar or plain incorrect use of the term for some subjects, but for me it's a natural summary of most of the ways the term is employed.

QUOTE
My own "Nyquist was wrong" moments have been argued through with my PhD Professor, and on my own with lots of pieces of paper covered with scribbled diagrams and numbers! That's why I can be so confident that Nyquist was right! wink.gif The full implications are more interesting than most people take the time to realise.

I had a 'Nyquist is wrong' slip here, a year or so ago I think - that wasn't my error here. It was perhaps foremost to not listen well enough and separate my own concepts from the solidified technical ones put to me. And to try to debate, with a whole group of outspoken initiates of a field of study, uninitiated notions concerning the same material as their field - an ultimately masochistic activity. Some errors in expression, others in manner and temper.

I do hope some day to be in a position to study the academic findings and methods around this material, and my understanding will be broadened by it.
Only now I am not in that position, and I have unusual original methods to explore which consume most of my available attention.
Thank you for your representation and for giving me some familiarity with your approach, before the time when I can learn the particulars of it.

all the best'
cg

This post has been edited by ChiGung: Nov 23 2006, 16:40


Woodinville
post Nov 24 2006, 08:53
Post #167





Group: Members
Posts: 1401
Joined: 9-January 05
From: JJ's office.
Member No.: 18957



QUOTE (ChiGung @ Nov 22 2006, 08:48) *
After I checked and tweaked the selection clause for acute reversals of apparent level (peaks/valleys) in my investigative program, I did find that the temporal position of nodes between differently lowpassed samples was very unaffected unless significant energy was present in the band removed by the lowpass.



Hence my advice to try the summed Gaussian pulses, which you first spurned, then denied, then ignored.

Yes, it really is that simple, and, yes, you can make transients out of summed sine waves, and yes, the value of examining things in various domains is that what is hard in one is easy in the other.
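The "transients out of summed sine waves" point deserves a concrete illustration (a self-contained sketch of my own, not jj's material): phase many ordinary steady sinusoids so they all peak at one instant, and their sum is a sharp click.

```python
import numpy as np

# 199 cosines, each a plain steady wave, all phased to peak at sample 512:
# their sum is a single sharp transient at that instant.
n = 1024
t = np.arange(n)
click = sum(np.cos(2 * np.pi * k * (t - 512) / n) for k in range(1, 200))

print(np.argmax(click))    # 512 -- the transient sits where the phases align
print(round(click[512]))   # 199 -- one unit contributed by each cosine
```

Seen in the frequency domain this is trivially a flat band of tones; seen in the time domain it is a click - the same value-of-two-domains point made above.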


--------------------
-----
J. D. (jj) Johnston
