What is "time resolution"?
ChiGung
post Oct 8 2006, 20:28
Post #51





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (Canar @ Oct 8 2006, 20:08) *
I would suggest you consider the possibility of the former, as we consider the possibility of the latter and try and understand exactly what you're trying to convey here.

Deal


--------------------
no conscience > no custom
ChiGung
post Oct 8 2006, 21:44
Post #52





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



I think it is being claimed, almost unanimously, that the precise location of 'conditions' (meaning 'events' or 'energy spikes') in a PCM record accurately informs us of their precise location in real time.
I have been trying to explain how this is untrue. That in real time, rather than the sampled approximation of it, the precise time of any condition can differ from what is ideally indicated by the record, by up to a sample interval (or maybe half a sample interval; I am not certain of the amount).

The correlation between the record's indication of the timing of instantaneous conditions (such as level = x) and the actual timing of those conditions would be something like this (loosely, from intuition):

Indicated time is within 1 sample period of real time: odds ~1/1
Indicated time is within 1/2 sample period of real time: odds ~1/2
Indicated time is within 1/4 sample period of real time: odds ~1/3
Indicated time is within 1/8th sample period of real time: odds ~1/4

I have presented an intuitive guess at the probability of accuracy there to drive home what I am generally talking about: the uncertainty of PCM records with regard to potential sources having significantly higher bandlimits. As is often the case with Redbook-standard PCM (downsampled from production formats) and others.

It is the unknown frequencies above the sample rate's implicit bandlimit which cause this uncertainty. We interpret the PCM record as though the frequencies beyond the bandlimit must always have been flat, but in, for example, a production format's sample rate of 96kHz, they were not necessarily flat (or else there would be little point in using those formats).

The example of the 'tekkie' locating the spike with a record too precisely was a straightforward one.
The rebuttal, that the spike could indicate any position and therefore must securely indicate the true position, was invalid: to ensure the precise positioning of the spike's peak at the true peak, all the other samples would have to be employed to refine that single detail, and they cannot normally be employed just to do that, as they have to convey their own detail as well.
Revisit the lumpy mattress metaphor. It is not a silly one.

The situation I have been pointing out is very complex, with great subtleties and many gotchas involved. I am very familiar with the technology's limitations because I have spent a great deal of time pondering it and programming for it, particularly over the past year. I have, for example, completed my own frequency analyser from first principles, without reference to any textbooks or reported methods. It produces very fine output, and the mechanics of it are now being employed in my own compression codec. Unfortunately it will be a long time before I'll be able to talk about it in detail in public. But you see (unless I'm lying ;) ), I am not wishfully lecturing in an area in which I have no experience. I don't believe I have said anything unconfirmable or insensible in this thread (at least nothing that is not transient to the argument; everyone's human). There may be certain mistakes or difficulties in expression to get caught up in, but the subject is outside many people's familiarity zone, even people involved here. If anyone reads my explanations open-mindedly, links the parts of the explanations which they can interpret and skips the bits they can't, there is a good chance they will acknowledge an under-reported aspect of PCM 'time resolution'.

I will leave this thread now, confident in the explanation I've invested here.
If it really is as silly as everyone seems to think it is, I guess it will end up in the recycle bin, but I do believe it would be an uncommon shame on HA.org to do so.

Sincerely.
twerpy, smartass, fat-tongued Cheegunge


legg
post Oct 9 2006, 01:32
Post #53





Group: Members
Posts: 175
Joined: 5-March 05
From: Morelia, Mexico
Member No.: 20386



QUOTE (ChiGung @ Oct 8 2006, 15:44) *
I have presented an intuitive guess at the probability of accuracy there to drive home what I am generally talking about: the uncertainty of PCM records with regard to potential sources having significantly higher bandlimits. As is often the case with Redbook-standard PCM (downsampled from production formats) and others.

It is the unknown frequencies above the sample rate's implicit bandlimit which cause this uncertainty. We interpret the PCM record as though the frequencies beyond the bandlimit must always have been flat, but in, for example, a production format's sample rate of 96kHz, they were not necessarily flat (or else there would be little point in using those formats).


Precisely. That's why you must low-pass the signal BEFORE sampling; otherwise the content above FS/2 will get mixed with the frequencies below FS/2, causing aliasing. In a bandlimited signal there's no "spike" that cannot be represented in the sampled version, even if it lies between two samples. It isn't rocket science.

The straightforward solution for capturing your so-called "spikes" is to increase the sample rate, but that by no means implies that the sampling theorem is flawed in any way. The theorem does imply that you must sample fast enough to have perfect reconstruction, at least from a mathematical point of view. In practice we all know that there's no ADC with a perfect Dirac delta.
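The aliasing described here is easy to demonstrate numerically. The sketch below is an illustration of my own (not code from the thread): a 30kHz tone sampled at 44.1kHz with no anti-alias filter shows up at the mirror frequency 44100 - 30000 = 14100Hz, indistinguishable from a genuine 14.1kHz tone.

```python
# Sketch of aliasing: a tone above fs/2, sampled with no anti-alias
# filter, folds down to a mirrored frequency below fs/2.
import numpy as np

fs = 44100
n = np.arange(fs)                          # one second of sample indices
tone = np.sin(2 * np.pi * 30000 * n / fs)  # 30 kHz, above fs/2 = 22.05 kHz

spec = np.abs(np.fft.rfft(tone))
peak_hz = np.argmax(spec) * fs / len(n)    # bin spacing is 1 Hz here
print(peak_hz)                             # 14100.0 = fs - 30000
```

Low-passing before the ADC would remove the 30kHz component entirely, leaving nothing to fold down.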

I'm still waiting to see your MATLAB code proving everyone wrong.

This post has been edited by legg: Oct 9 2006, 02:14


--------------------
Home page: http://lc.fie.umich.mx/~legg/indexen.php
ChiGung
post Oct 9 2006, 02:16
Post #54





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (legg @ Oct 9 2006, 01:32) *
QUOTE (ChiGung @ Oct 8 2006, 15:44) *

....snip vaguest part of my post....

Precisely. That's why you must low-pass the signal BEFORE sampling; otherwise the content above FS/2 will get mixed with the frequencies below FS/2, causing aliasing. It isn't rocket science.

The straightforward solution to be able to detect "spikes" is to increase the sample rate, but that by no means implies that the sampling theorem is flawed in any way. It does imply that you must sample fast enough to have perfect reconstruction, at least from a mathematical point of view. In practice we all know that there's no ADC with a perfect Dirac delta.

I'm still waiting to see your MATLAB code proving everyone wrong.

Another one tries to wriggle out from under the full case that has been set on a plate for you and seasoned liberally.
Don't say I'm talking about detecting 'spikes' just after I have described the uncertainty of detecting the real-time location of any 'conditions', such as spike peaks, level values or waveform gradients; any condition that could be located at an instant of time indicated by a PCM record.

I've never gone near MATLAB because I don't need it. I code in low-level Java/C syntax, and what I code ends up working, sir.

Trying to drag it back to who was right or wrong like that is pathetic.


legg
post Oct 9 2006, 03:27
Post #55





Group: Members
Posts: 175
Joined: 5-March 05
From: Morelia, Mexico
Member No.: 20386



Fine, forget about the code and try to provide mathematical proof of your statements instead of blabbering. I'm sure a person who calls himself a smartass will be able to provide such proof.

FYI, Java is NOT a low-level language; it is actually FAR from being one. C is closer, but it isn't considered low level either; the usual term to describe C is middle-level.


MedO
post Oct 9 2006, 09:35
Post #56





Group: Members
Posts: 341
Joined: 24-August 05
Member No.: 24095



If I understand you right, you are saying that the time when the signal reaches a certain level in the recorded PCM waveform may differ from the time in reality.

This is equivalent to saying that the recorded waveform is different from the real one, which is true if you are sampling at less than twice the highest frequency that occurs in the source.

The signal will be reconstructable to great accuracy if it is bandlimited to half the sampling frequency. It won't be if it isn't. Why make it so complicated?
2Bdecided
post Oct 9 2006, 15:08
Post #57


ReplayGain developer


Group: Developer
Posts: 4945
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



QUOTE (ChiGung @ Oct 8 2006, 21:44) *
The example of the 'tekkie' locating the spike with a record too precisely was a straightforward one.
The rebuttal, that the spike could indicate any position and therefore must securely indicate the true position, was invalid: to ensure the precise positioning of the spike's peak at the true peak, all the other samples would have to be employed to refine that single detail, and they cannot normally be employed just to do that, as they have to convey their own detail as well.


This is the heart of your misunderstanding.

A bandwidth limit (we agree there is such a thing in PCM) implies that what you believe to be some kind of contradiction is in fact the simple reality of the situation. Let me show you why with something less abstract...


For it to work properly, PCM requires two filters - one anti-alias at (before) the A>D, the other anti-image at (after) the D>A.


Forgetting PCM for a second, if those filters themselves cause an audible problem, then we have a problem. I don't think we do. However, you have expressed a wish to tackle this issue separately, so let us leave it to one side for now.


So, we have two filters. If _both_ filters block everything above fs/2, then the sampling stage itself will be transparent - lossless, if you like. In other words, these two systems would be identical...

1. input, filter1, filter2, output
2. input, filter1, sampling (no quantisation), filter2, output

Indeed, if you have two black boxes containing systems 1 and 2, there would be no way to tell these boxes apart (though number 2 may introduce a time delay in practice).

If you believe this to be false, you must bring something to disprove it. (This would disprove Nyquist, so good luck!).


Your example of non-adjacent samples having an impact on the apparent position of an inter-sample peak does not disprove it - this is just a consequence of the required filtering. Even a novice in filter design knows that output sample number N depends on the value of more than one input sample, unless the filter is a non-filter! This is all that is at work here. It's not magic. It's not a problem either.
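The two-black-boxes claim is straightforward to check numerically. The sketch below is my construction (toy frequencies of 13Hz and 37Hz, chosen to sit well below fs/2, with FFT interpolation standing in for the ideal reconstruction filter): sample a bandlimited signal, reconstruct a 10x denser version from the samples alone, and compare against the underlying function.

```python
# Sampling a properly bandlimited signal is transparent: the samples
# determine the waveform between the sample instants exactly.
import numpy as np
from scipy.signal import resample

fs, N = 100, 200                   # toy sample rate (Hz); 2 s record
def x(t):                          # components at 13 Hz and 37 Hz, both < fs/2
    return np.sin(2*np.pi*13*t + 0.3) + 0.5*np.sin(2*np.pi*37*t + 1.1)

samples = x(np.arange(N) / fs)     # the "PCM record" (no quantisation)
dense = resample(samples, 10 * N)  # ideal bandlimited (FFT) interpolation
t_dense = np.arange(10 * N) / (10 * fs)

err = np.max(np.abs(dense - x(t_dense)))
print(err)                         # tiny: reconstruction is exact to rounding
```

Systems 1 and 2 differ only by the sampling stage, and the error above is pure floating-point rounding, so the stage contributed nothing.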

Cheers,
David.
2Bdecided
post Oct 9 2006, 15:24
Post #58


ReplayGain developer


Group: Developer
Posts: 4945
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



Here are some nice pictures...

[Attached image: three pairs of stereo waveform plots, described below]


I worked at 16-bits throughout.

I started at 441kHz (i.e. 10x CD sample rate). I generated a single impulse. To prove the point, I also generated a second impulse one sample later on the other stereo channel. This is the left hand pair of plots.

I resampled to 44.1kHz (i.e. CD sample rate). The result is shown in the middle pair of plots. Interestingly, Cool Edit Pro's visual interpolation hints at what is represented by those samples - i.e. a 1/10th of a sample time delay between the two channels.

I resampled back to 441kHz. The result is shown in the right hand pair of plots. The peaks of the waveforms are clearly in the correct place relative to each other. Time resolution equivalent to 1/10th of a sample at 44.1kHz clearly survives the round trip through that sample rate.


Of course the peaks are low amplitude (less energy) and longer (spread out in the time domain) - but this is just what happens when you low pass filter a click.


So, this is a simple, repeatable example proving the sub-sample accuracy of sampled systems, without a sine wave in sight!
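The same experiment can be replayed in a few lines. Below is my numerical reconstruction of the procedure, with FFT-based resampling (scipy.signal.resample) standing in for Cool Edit Pro's resampler: two impulses one 441kHz sample apart keep their 1/10th-of-a-44.1kHz-sample relative timing through the round trip.

```python
# Two impulses, one fine-grid sample apart, survive a round trip through
# a 10x lower sample rate with their relative timing intact.
import numpy as np
from scipy.signal import resample

N = 4410                                 # 10 ms at the 10x "production" rate
left = np.zeros(N);  left[1000] = 1.0
right = np.zeros(N); right[1001] = 1.0   # one fine-grid sample later

def round_trip(x):
    # down to 1/10 of the rate (the "CD" record), then back up
    return resample(resample(x, N // 10), N)

lo, ro = round_trip(left), round_trip(right)
print(np.argmax(lo), np.argmax(ro))      # peaks still at 1000 and 1001
```

As in the plots, the round-tripped peaks are lower and spread out (the lowpassed click), but their positions, one fine sample apart, survive.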

Cheers,
David.
cabbagerat
post Oct 9 2006, 16:54
Post #59





Group: Members
Posts: 1018
Joined: 27-September 03
From: Cape Town
Member No.: 9042



QUOTE (2Bdecided @ Oct 9 2006, 06:08) *
It's not magic.
It's not? I suppose that explains why the FIR filter I designed with tarot cards didn't work.

Seriously, nice diagrams.


--------------------
Simulate your radar: http://www.brooker.co.za/fers/
Axon
post Oct 9 2006, 17:30
Post #60





Group: Members (Donating)
Posts: 1984
Joined: 4-January 04
From: Austin, TX
Member No.: 10933



So I was mainly pissed off in my earlier post because it looked like this was about to turn into a rematch of ChiGung vs. the world. Which wound up happening, and it's not like I agree with him on much, but I already went through all of that on SH.tv.

The formula described by KikeG (and others) does help, but it doesn't really satisfy me. It seems to coincide well with what I've computed (with 1/20000 sample delays being feasible) - I suppose that the 1/(fs*2^n) number is a theoretical limit rather than an upper bound on period, and depending on the vagaries of the upsampling/downsampling implementations, the real testable interval may wind up being much higher. (In theory, I ought to be able to get a 1/65536 sample delay working?)

This gives a lot of wiggle room for audiophiles to claim that there could be large differences in performance based on how good the upsampling/downsampling filters are, resulting in numeric performance improvements to the minimum reproducible delay. However, one could pretty conclusively argue that even the implementation-tested periods are lower than the minimum audible delays by a wide margin. And it does give an exact definition to beat people over the head with, which is what I was wanting.
Woodinville
post Oct 9 2006, 18:00
Post #61





Group: Members
Posts: 1401
Joined: 9-January 05
From: JJ's office.
Member No.: 18957



QUOTE (ChiGung @ Oct 8 2006, 11:57) *
Yeah you did forget that. There is no downsample involved there, just a shifting of a record.


Now you're simply being evasive.

Since the signal in question will not be affected by any decent lowpass filter (it has no out-of-band components) your assertions are shown to be wrong.


--------------------
-----
J. D. (jj) Johnston
Woodinville
post Oct 9 2006, 18:41
Post #62





Group: Members
Posts: 1401
Joined: 9-January 05
From: JJ's office.
Member No.: 18957



QUOTE (Axon @ Oct 9 2006, 09:30) *
The formula described by KikeG (and others) does help, but it doesn't really satisfy me. It seems to coincide well with what I've computed (with 1/20000 sample delays being feasible) - I suppose that the 1/(fs*2^n) number is a theoretical limit rather than an upper bound on period, and depending on the vagaries of the upsampling/downsampling implementations, the real testable interval may wind up being much higher. (In theory, I ought to be able to get a 1/65536 sample delay working?)


Well, consider this...

Let us take a sine wave at nearly half the sampling frequency...

Its slope (using +-1 for amplitude) is 2*pi*f at maximum (crossing zero).

You want to figure out when a time shift dt can be distinguished as one LSB. That's when
2*pi*f*dt > 2/(2**bits).

Give or take. Since we have to dither, the numerator of the right side is bigger by some extent.

This is where the number comes from. It's a classic phase analysis problem.

Of course, when we dither, we can also average many cycles and get a better result.

Now, as strange as it seems, the single-cycle example is directly germane to the question at hand.
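Plugging numbers into that inequality (my arithmetic, solving for the threshold time shift dt at 16 bits and a near-Nyquist frequency):

```python
# Solving 2*pi*f*dt > 2/(2**bits) for dt: the smallest time shift of a
# near-Nyquist unit sine that moves a sample value by one LSB at 16 bits.
import math

bits, fs = 16, 44100.0
f = 22000.0                      # sine near fs/2
lsb = 2.0 / 2**bits              # one quantisation step on a +-1 scale
dt = lsb / (2 * math.pi * f)     # threshold time shift, seconds

print(dt)                        # roughly 2e-10 s
print(dt * fs)                   # roughly 1e-5 of a sample period
```

This lands in the same ballpark as the 1/(fs*2^n) figure discussed above, i.e. sub-sample timing some five orders of magnitude finer than the sample period.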

I leave it to ChiGung to explain to us how this completely controverts all of his assertions.


ChiGung
post Nov 15 2006, 01:16
Post #63





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



Hello all, I left this discussion in a tizz and have only just checked again to find these constructive replies.

A 'thought' experiment occurred to me which would illustrate my point about PCM's 'time resolution', one which could be performed computationally to generate precise data... I'll set it out, hopefully clearly:

I assume 'resolution' can refer to the ability to resolve discrete details of source material in pcm records. And there must be a difference between the potential for detail resolution in 'the source' and that in 'the record'.
eg if the source was just a CD, and the target record was 11kHz pcm,
- then their potential to have detail resolved in them would differ.
Fundamentally, the 11kHz pcm's potential to resolve detail 'should' seem to be 1/4 of that of the CD's 44kHz record.

That would go,
44kHz pcm 'time resolution' = 4* 11kHz pcm 'time resolution'

oddly, here and elsewhere this formula is eclipsed by not-insubstantial excursions into test-pattern replications and counter-intuitive textbook quotations.

If I needed exact data on the capability of 'time resolution' in PCM records, here is how I would go about generating it:
Write a small program to read in pcm and locate the exact time of specifiable conditions in it. Conditions such as (level=0) or level=p(test),
or gradient=0, or gradient=p.
To avoid porting in, or trying to write, my own bandlimited solution of the pcm record, I'd write the code for simple linear interpolation and feed it high-quality upsamples, in order to achieve near 'bandlimited accuracy' in discernment of the 'time location' of 'conditions'.

So the program reads in a bandlimited upsample of source pcm and generates a list of the times of all matching conditions which are found/resolvable within the upsample.

eg. hq upsample of CD track at 44kHz to ~192kHz
> list of times of peaks and troughs (gradient=0) found in the 192kHz pcm rendering.

Next, high-quality downsample the CD track to 11kHz (1/4 sample rate).
Then upsample to 192 again (for hq interpolation), and generate its list of (gr=0) times.
//(the upsample to 192 is only to facilitate high-quality bandlimited interpolation)
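The locating step described above can be sketched as follows. This is a hypothetical implementation of my own reading of the description (the function name is mine, and FFT resampling stands in for the 'high-quality upsample'): upsample, then linearly interpolate between sign changes of the first difference to estimate gradient=0 times.

```python
# Estimate times (in seconds) where the gradient of a PCM record is zero,
# via bandlimited upsampling plus linear interpolation of the discrete
# derivative's zero crossings.
import numpy as np
from scipy.signal import resample

def gradient_zero_times(x, fs, upfactor=4):
    up = resample(x, len(x) * upfactor)          # bandlimited upsample
    d = np.diff(up)                              # discrete gradient
    idx = np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0]
    frac = d[idx] / (d[idx] - d[idx + 1])        # linear zero-crossing estimate
    return (idx + 0.5 + frac) / (fs * upfactor)

# Sanity check on a 5 Hz sine at 1 kHz: gradient-zero points (peaks and
# troughs) fall every 0.1 s, starting at 0.05 s.
fs = 1000
t = np.arange(fs) / fs
times = gradient_zero_times(np.sin(2 * np.pi * 5 * t), fs)
print(times)
```

The two condition lists of the thought experiment would then come from running this over the original and the pre-downsampled records.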

At this stage in the thought experiment, I would note that although both lists of timings look for the same condition, there may be considerably fewer occurrences of the condition (peaks or troughs) found in the pre-downsampled record, depending on the nature of the source material.

The two lists plotted on a graph should illustrate an observable time correlation between conditions found in each record, as well as 'orphaned' conditions represented only in the higher-rate pcm.

Disregarding the orphaned conditions, the detail of 'time resolution' rests on how closely correlated the pairable condition times turn out to be.
A plot could be made of their distribution of correlation; perhaps it would tend to a bell curve? For pink noise only? What would the limits of correlation be?

Additional explorations: compare accuracy of correlation of surviving details in CD to 11kHz, then CD to various other rates. White noise to some rates, then pink noise, etc. Also do some comparisons using different upsample rates, to discern the inherent inaccuracy of the program's simpler linear discernments.

If we can spot a condition occurring at a time in a pcm record, then with correlation data we could indicate the probability of that condition occurring within given temporal distances in higher-sampled records of the same kind of source material.

It would be interesting to look at.

If I ever spend my sparse powers of collected concentration to generate the info myself, I'll post it here, for all your troubles.

best'
cg

This post has been edited by ChiGung: Nov 16 2006, 18:20


kwwong
post Nov 15 2006, 09:50
Post #64





Group: Members
Posts: 113
Joined: 28-December 05
Member No.: 26686



QUOTE (ChiGung @ Nov 14 2006, 19:16) *
44kHz pcm 'time resolution' = 4* 11kHz pcm 'time resolution'


For a fixed frame size,

44kHz pcm 'frequency resolution' = 4*11kHz pcm 'frequency resolution',

44kHz pcm 'time resolution' = 0.25*11kHz pcm 'time resolution' :D
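In numbers, assuming a hypothetical fixed analysis frame of 1024 samples:

```python
# For a fixed frame of N samples, DFT bin spacing (frequency resolution)
# scales up with fs, while the frame duration (time span) scales down.
N = 1024
for fs in (11025, 44100):
    print(fs, fs / N, N / fs)   # rate (Hz), bin spacing (Hz), duration (s)
```

Both relationships are exact: the bin spacing fs/N quadruples going from 11025 to 44100, and the frame duration N/fs drops to a quarter.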

This post has been edited by kwwong: Nov 15 2006, 09:51
2Bdecided
post Nov 15 2006, 13:00
Post #65


ReplayGain developer


Group: Developer
Posts: 4945
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



ChiGung,

Your experiment wouldn't work. By knocking the sample rate down to 11kHz (and implicitly limiting the bandwidth to 5.5kHz) you would change the waveform dramatically. For simple synthetic waveforms, we could say correctly whether none, some, or all peaks would stay in the same place depending on the content of the original signal. However, for complex waveforms, we can't say anything sensible about what would happen to individual waveform peaks.

For example, if you have a bass drum and a high hat playing at the same time, most of the waveform excursion will be due to the bass drum (which will survive the 5.5kHz low pass filter), but the exact peak location will also depend on the "wiggles" in the waveform due to the high hat itself. These high frequency "wiggles" will be butchered by a 5.5kHz low pass filter, so the peak will move!
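A toy version of this (my construction: a 100Hz "bass" sine plus a small 9kHz "wiggle", with an ideal FFT lowpass standing in for the 5.5kHz filter) shows the peak moving:

```python
# Lowpassing away a high-frequency component moves the composite peak.
import numpy as np

fs = 44100
t = np.arange(fs) / fs                             # one second
mix = np.sin(2*np.pi*100*t) + 0.2*np.sin(2*np.pi*9000*t + 1.0)

spec = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(fs, 1.0 / fs)
spec[freqs > 5500] = 0                             # ideal 5.5 kHz lowpass
filtered = np.fft.irfft(spec, n=fs)

w = slice(0, fs // 100)                            # first 10 ms: one bass crest
print(np.argmax(mix[w]), np.argmax(filtered[w]))   # peak lands on different samples
```

The filtered peak sits where the bass component alone peaks; the unfiltered peak sits a few samples away, wherever the 9kHz wiggle happened to crest.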

The only way you can be absolutely sure that it's a fair experiment, and that the low pass filter isn't significantly moving the peak by removing part of the signal that forms the peak itself, is to ensure that the low pass filter doesn't remove anything - i.e. that the original doesn't contain any frequencies above 5.5kHz, or, to put it another way, that the downsampled version still satisfies Nyquist with reference to the original content.

Nyquist is right yet again - what a surprise!



The basic problem is here:

QUOTE
Fundamentally, the 11kHz pcm's potential to resolve detail 'should' seem to be 1/4 of that of the CD's 44kHz record.

That would go,
44kHz pcm 'time resolution' = 4* 11kHz pcm 'time resolution'


You are implying that these two things are directly proportional in a real and limiting sense, whereas, until you get to the absolute limit (many orders of magnitude better than the limits of human hearing, and many orders of magnitude better than anything we expect the system to achieve), the two things are completely independent.


Rather than talking about sample rate and "time resolution", let's talk about the number of hours I'm awake in a day, and the number of bananas I eat that day.

Fundamentally, the number of bananas I eat if I'm awake for 8 hours "should" seem to be 1/2 of that if I'm awake for 16 hours.

What's wrong with this statement? On the face of it, it seems like intuition. However, the reality of the situation is that the number of bananas I eat in a day has nothing to do with how long I'm awake. I might not have any bananas in the house, and I might not go shopping. I might go to the market and buy a big bunch of bananas at a bargain price and eat several of them. I'll probably just have one a day for my lunch (I'm so boring and predictable) no matter how long I'm awake. The simple truth is that, whatever blind intuition may try to tell you, the number of bananas I eat in a day is completely independent of how many hours I'm awake.


Similarly, the time resolution of a PCM system is independent of the sample rate!



QUOTE
oddly, here and elsewhere this formula is eclipsed by not-insubstantial excursions into test-pattern replications and counter-intuitive textbook quotations.


"oddly"?! What's odd about reality not conforming to blinkered misguided uninformed intuition?!


As for "excursions into test pattern replications" - if you look carefully at my previous post, and the waveforms, I've performed your latest thought experiment already - but with the only type of waveform where it will work - a carefully controlled one!


Finally...
QUOTE
A plot could be made of their distribution of correlation; perhaps it would tend to a bell curve? For pink noise only? What would the limits of correlation be?


In a suitably controlled version of your experiment (like mine) it wouldn't be a bell curve, it would be a single point! That would certainly be the case with the parameters you propose (44.1>192kHz).

If you try it with real music, it might be a bell curve, or it might be some other distribution (with a cut-off corresponding to the point where you decide the peaks don't match) - but that's got nothing to do with the temporal resolution of PCM, and everything to do with an experiment where you change something intentionally at random, and then measure how much you've changed it!

Cheers,
David.
ChiGung
post Nov 15 2006, 13:45
Post #66





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (2Bdecided @ Nov 15 2006, 12:00) *
......... For example, if you have a bass drum and a high hat playing at the same time, most of the waveform excursion will be due to the bass drum (which will survive the 5.5kHz low pass filter), but the exact peak location will also depend on the "wiggles" in the waveform due to the high hat itself. These high frequency "wiggles" will be butchered by a 5.5kHz low pass filter, so the peak will move!

The experiment can't 'not work'; it is just designed to generate the surviving-correlation distribution data, so that we can refer to data about different sample rates' relative and absolute abilities to accurately record the timings of discrete conditions in source. You restate the practical 'damage' done to the 'time resolution' of isolatable conditions in natural sources (saying the exact peak location will also depend on "wiggles" butchered by a 5.5kHz lowpass). Documenting the average degree of that 'butchery' is the purpose of the experiment, no more, no less.

QUOTE
The only way you can be absolutely sure that it's a fair experiment, and that the low pass filter isn't significantly moving the peak by removing part of the signal that forms the peak itself, is to ensure that the low pass filter doesn't remove anything.
- i.e. that the original doesn't contain any frequencies above 5.5kHz, or, to put it another way, that the downsampled version still satisfies Nyquist with reference to the original content.

That is plainly not fair. You are assuming preconditions under which only informationally lossless downsamples can be considered. I think you are unwilling to broaden your examination of 'reality' to a degree which would qualify the objections I have made about reported subsample 'time resolution' capabilities. I have been talking about reality. When we want to know what the timing resolution of a pcm record is, we would fundamentally compare the capabilities of the record against the full potential of an ideal source.
The experiment would compare the capabilities of lower rates with higher rates. Presupposing that all records at the higher rates must be additionally bandlimited like the lower rates is not sensible.

QUOTE
QUOTE
Fundamentally, the 11kHz pcm's potential to resolve detail 'should' seem to be 1/4 of that of the CD's 44kHz record.

That would go,
44kHz pcm 'time resolution' = 4* 11kHz pcm 'time resolution'


You are implying that these two things are directly proportional in a real and limiting sense, whereas, until you get to the absolute limit (many orders of magnitude better than the limits of human hearing, and many orders of magnitude better than anything we expect the system to achieve), the two things are completely independent.

Now you are talking of psychoacoustics. That is an entirely different matter: "what differences could we hear". I described a process to generate the correlation data of the timing of surviving conditions between fully utilised (not extraneously bandpassed) pcm sample-rate records. The data will "scale" according to simple principles. The time resolution of 1 Hz will be equal to 1/44100th of the time resolution of 44100 Hz - there is no doubt about that relationship.

QUOTE
QUOTE
A plot could be made of their distribution of correlation; perhaps it would tend to a bell curve? For pink noise only? What would the limits of correlation be?


In a suitably controlled version of your experiment (like mine) it wouldn't be a bell curve, it would be a single point! That would certainly be the case with the parameters you propose (44.1>192kHz).


That would be the pointless version of the experiment - or rather, one which just examines rounding error.

I have the code mostly written to perform this (a comparison of different sampling rates' time resolution of conditions, with various sources). I will hold off finishing it until it is acknowledged here that it presents a valid investigation (if done accurately enough and naturally, without flattering extraneous bandpassing).

l8r,
cg

This post has been edited by ChiGung: Nov 15 2006, 13:54


2Bdecided
post Nov 15 2006, 14:29
Post #67


ReplayGain developer


Group: Developer
Posts: 4945
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



So, in short, you want to run an experiment to see what effect a low pass filter has?
ChiGung
post Nov 15 2006, 14:45
Post #68





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (2Bdecided @ Nov 15 2006, 13:29) *
So, in short, you want to run an experiment to see what effect a low pass filter has?

Yes.

As particular sample rates do have implicit, unavoidable lowpasses, the process of comparing the capabilities of different sample rates refactors as comparing the effects of different lowpasses. It is almost the same thing, although actually doing the full downsample (as well as its implied lowpass) investigates the attained quality of the full process, so it would be preferable for this charge for actual proof of subsample source/record ambiguity.


cabbagerat
post Nov 15 2006, 15:22
Post #69





Group: Members
Posts: 1018
Joined: 27-September 03
From: Cape Town
Member No.: 9042



QUOTE (ChiGung @ Nov 15 2006, 05:45) *
It is almost the same thing, although actually doing the full downsample (as well as its implied lowpass) investigates the attained quality of the full process, so it would be preferable for this charge for actual proof of subsample source/record ambiguity.
There are two issues at stake here. The first is the question of the audibility of low-pass filters. This has been dealt with here and elsewhere at great length and could easily be rigorously tested. Such a test has been done before, but a repeat including filters with non-flat phase responses might offer some new information.

The second is that you seem to doubt whether lowpass->sample->reconstruct can be shown to have the same effect as just the lowpass. Without quantization, the theory says that the two processes are identical. If you wish to question this then a mathematical treatment will probably be necessary before your demonstration is accepted.


--------------------
Simulate your radar: http://www.brooker.co.za/fers/
ChiGung
post Nov 15 2006, 15:51
Post #70





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (cabbagerat @ Nov 15 2006, 14:22) *
There are two issues at stake here. The first is the question of audibility of low pass filters. This has been dealt with here and elsewhere at great length and could easily be rigorously tested. Such a test has been done before, but a repeat including filters with non-flat phase responses might offer some new information.

Audibility of the timing capabilities has never been my interest here, so I have avoided referring to it, and have isolated it as extraneous to the empirical measurement or estimation of time resolution whenever it has been brought up.

QUOTE
The second is that you seem to doubt whether lowpass->sample->reconstruct can be shown to have the same effect as just the lowpass.

That is not my contention; I have recently acknowledged these processes are potentially identical. Their equivalence does nothing to invalidate the 'coupling correlation between sample rates' test described. Their equivalence only provides an accelerated method of generating the data.

QUOTE
Without quantization, the theory says that the two processes are identical. If you wish to question this then a mathematical treatment will probably be necessary before your demonstration is accepted.

I haven't questioned the equivalence of:
a high quality downsample followed by a high quality upsample
= a high quality lowpass to the downsample's Nyquist frequency

I have suggested that the locations of any isolatable conditions in a normally utilised source record (i.e. with energy potentially up to its own Nyquist frequency) can be correlated with the best-fitting locations of the same conditions in a downsampled (or equivalently lowpassed) record, to provide data (with a bias towards best fits) on how accurately the time of conditions can be resolved in an implicitly bandlimited PCM record, against their actual potential placement in source material/records.
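A rough sketch of the proposed correlation (my own toy version, with white noise standing in for a full-bandwidth source, an ideal FFT lowpass standing in for the downsample, and naive nearest-neighbour matching of peaks):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
x = rng.standard_normal(n)          # stand-in for a full-bandwidth source record

X = np.fft.rfft(x)
X[len(X) // 2:] = 0.0               # ideal lowpass to half the original Nyquist
y = np.fft.irfft(X, n)              # equivalent to a 2x down- then upsample

def maxima(v):
    # indices of simple local maxima
    return np.nonzero((v[1:-1] > v[:-2]) & (v[1:-1] > v[2:]))[0] + 1

px, py = maxima(x), maxima(y)
# for each surviving peak, displacement to the nearest source peak (in samples)
d = np.abs(py[:, None] - px[None, :]).min(axis=1)
print(len(px), len(py), d.mean(), d.max())
```

The lowpassed record has fewer maxima than the source, and the nearest-neighbour matching rule is exactly the weak point: it cannot tell a moved peak from a vanished one, so the displacement statistics depend heavily on how "the same condition" is defined.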


--------------------
no conscience > no custom
2Bdecided
post Nov 15 2006, 16:18
Post #71


ReplayGain developer


Group: Developer
Posts: 4945
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



I wish you understood the theory CG, because without it, I can't begin to explain the complete and utter pointlessness of what you're suggesting.

It's a fair enough experiment to ask an undergrad to do in order to practice computer programming and audio processing, but in terms of what it actually tells you about anything, all I can do is just sit here slowly shaking my head!


FWIW, given a random selection of audio signals (real or synthetic), the lower the low pass filter, the further the peaks will move (and, to say almost the same thing differently, the more peaks will completely disappear). The major stumbling block to doing the experiment exactly as you propose will be in determining when a peak has moved vs when a peak has vanished - or, to put it another way, tracking the "same" peak between different versions. Various possible attempts to do this "correctly" (and it will be near-impossible) will mean your results might be unexpected!
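The moved-vs-vanished ambiguity is easy to reproduce in a toy setting (a sketch of mine, assuming an ideal brick-wall filter): two impulses closer together than the filter's resolution merge into a single peak sitting between the originals.

```python
import numpy as np

n = 2048
x = np.zeros(n)
x[1000] = 1.0
x[1010] = 1.0            # two distinct "events", 10 samples apart

X = np.fft.rfft(x)
X[64:] = 0.0             # brick-wall lowpass; resolution ~ n/64 = 32 samples
y = np.fft.irfft(X, n)

def peaks(v, rel=0.5):
    # local maxima that rise above half the global maximum (ignores sidelobes)
    m = (v[1:-1] > v[:-2]) & (v[1:-1] > v[2:]) & (v[1:-1] > rel * v.max())
    return np.nonzero(m)[0] + 1

print(peaks(x))  # [1000 1010] : two events
print(peaks(y))  # [1005]      : one merged peak, between the originals
```

Did both peaks move to 1005, or did one vanish and the other move? The filtered record alone cannot answer that, which is the tracking problem described above.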


The major problem is that every reasonable definition of time-resolution leads to a proof that PCM audio has no issues with time resolution - so now you've invented a new definition in order to prove the opposite. Your success here will not be down to your experiment (which will certainly show some change), but down to your strange definition of time resolution.

Cheers,
David.
SebastianG
post Nov 15 2006, 16:48
Post #72





Group: Developer
Posts: 1317
Joined: 20-March 04
From: Göttingen (DE)
Member No.: 12875



I also don't see the point in checking the positions of zero crossings or peaks after lowpassing. This won't prove anything except that if you further limit the bandwidth of a signal these points may move, vanish, or appear at places where there previously haven't been any.

Assuming the lowpass filter's impulse response is symmetric, the following is true: if your signal shows a certain symmetry within an interval the same size as the lowpass filter's impulse response, a certain class of points within that interval will be at the exact same position.

Example signal:
first two harmonics of a square wave: you'll get 4 peaks within a cycle
after lowpassing (only the fundamental left): 2 peaks within a cycle (it's a sine)
The zero crossings are the same (two within a cycle) because there's a "point symmetry" at those points. (rotate the curve around the point 180° and it'll be the same)
Tell me what we have learned by that, ChiGung.
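In code, the square-wave example looks like this (a minimal NumPy sketch; the sample grid is offset by half a step so that no sample lands exactly on a zero):

```python
import numpy as np

n = 4096
t = (np.arange(n) + 0.5) * 2 * np.pi / n   # half-step offset avoids exact zeros
two_harm = np.sin(t) + np.sin(3 * t) / 3   # fundamental + 3rd harmonic of a square wave
fundamental = np.sin(t)                    # after lowpassing away the 3rd harmonic

def crossings(v):
    # sample indices just before a sign change
    return np.nonzero(np.diff(np.sign(v)))[0]

print(crossings(two_harm), crossings(fundamental))  # identical: the point symmetry survives
print(np.argmax(two_harm), np.argmax(fundamental))  # the peak location, however, moves
```

The zero crossings are pinned by the point symmetry and stay put, while the largest peak jumps from near t = pi/4 (or 3pi/4) to t = pi/2, roughly an eighth of a cycle.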

In the context of transform coding, time resolution usually refers to the partition of the time/frequency plane that's done by a critically-sampled filterbank, AFAIK. Without any noise shaping filter tricks this will effectively limit how well we can control the quantization noise distribution in specific time/frequency regions only by choosing scalefactors. However, noise shaping filters can be and usually are used to improve this. (With "ANS" enabled, Musepack can do better in terms of controlling the noise's distribution in the frequency domain than what the filterbank suggests -- one subband is 670 Hz wide, though ANS manages to shape the noise within a subband. With "TNS" enabled, AAC can do better in terms of controlling the noise's distribution in the time domain than what the filterbank suggests.)
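This sense of time resolution can be illustrated with a toy block transform (a plain FFT standing in for a real codec's filterbank; the transient position and quantizer step are arbitrary choices of mine): quantizing a block's spectrum smears the quantization noise over the whole block in time, including the silent region before the transient - the familiar pre-echo effect that TNS is designed to fight.

```python
import numpy as np

n = 1024
x = np.zeros(n)
x[900:] = np.sin(np.linspace(0, 20 * np.pi, n - 900))   # transient near the block's end

X = np.fft.rfft(x)
step = 0.5                        # arbitrary coarse quantizer step
Xq = step * np.round(X / step)    # quantize the block's spectrum
y = np.fft.irfft(Xq, n)

err = y - x
# quantization noise leaks into the silent region *before* the transient
print(np.abs(err[:800]).max())
```

The error in samples 0..799 should be exactly zero if the quantization noise stayed where the signal is; instead it is spread across the block, because each quantized coefficient is a basis function covering the whole window.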

This post has been edited by SebastianG: Nov 15 2006, 17:18
ChiGung
post Nov 15 2006, 17:01
Post #73





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (2Bdecided @ Nov 15 2006, 15:18) *
It's a fair enough experiment to ask an undergrad to do in order to practice computer programming and audio processing, but in terms of what it actually tells you about anything, all I can do is just sit here slowly shaking my head!

Yet you can't explain what is pointless about generating the data described, without indistinct reference to some 'theory' which you believe I don't understand.

Or is there an attempt here...
QUOTE
FWIW, given a random selection of audio signals (real or synthetic), the lower the low pass filter, the further the peaks will move (and, to say almost the same thing differently, the more peaks will completely disappear). The major stumbling block to doing the experiment exactly as you propose will be in determining when a peak has moved vs when a peak has vanished - or, to put it another way, tracking the "same" peak between different versions. Various possible attempts to do this "correctly" (and it will be near-impossible) will mean your results might be unexpected!

I don't need to be informed of possible surprises. I understand very well what you have written there; it is the very situation that I have described repeatedly in this thread re: the 'time resolution' of PCM. I understand what your preferred theoretical statements about 'time resolution' are, and because I have understood what they are not, I have brought the practical situation to your attention - that the 'spike', the 'radar blip', the 'cymbal peak' etc. cannot be confidently estimated much beyond the sampling interval - [i]precisely because[/i] of the unknown butchery of higher frequency information in the normally utilised source - the true situation is as you, and as I, have described.

Only you consider the true situation completely and utterly pointless to investigate.
And it seems many have felt patronised by my attempts to explain that it is not utterly pointless to try to correlate actual conditions within record types - that it is in fact how you securely measure such accuracy of correlation: measuring what is practically achievable in real, normal (not extraneously bandpassed) records.

QUOTE
so now you've invented a new definition in order to prove the opposite. Your success here will not be down to your experiment (which will certainly show some change), but down to your strange definition of time resolution.

My invented definition of time resolution in PCM? - the ability to discern the times of conditions in a PCM record, in contrast to the original (natural resolution) material of which the PCM is merely a record.

I think you guys have been mostly presenting the potential 'time resolution' of related partly analogous algebraic systems.


--------------------
no conscience > no custom
ChiGung
post Nov 15 2006, 17:16
Post #74





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (SebastianG @ Nov 15 2006, 15:48) *
I also don't see the point in checking the positions of zero crossings or peaks after lowpassing. This won't prove anything except that if you further limit the bandwidth of a signal these points may move, vanish, or appear at places where there previously havn't been any.

It would just document our ability to compare time-pinpointable conditions in a waveform, and provide a method of correlating conditions within different PCM records that fully utilise their sample rates.
It would document the confidence with which we can resolve such indicatable conditions in a waveform, in comparison to what would be possible with a natural record of near-infinite 'time resolution'.
It would investigate actual data, rather than the isolated formulae presented here, which indicate 'algebraic resolutions' or 'time resolution of presumed lossless conversions'.

QUOTE
Example signal:
first two harmonics of a square wave: you'll get 4 peaks within a cycle
after lowpassing (only the fundamental left): 2 peaks within a cycle (it's a sine)
Tell me what we have learned by that, ChiGung.

Someone may learn that downsampling can damage not only time resolution, but also the topology of the waveform.

QUOTE
In the context of transform coding, time resolution usually refers to the partition of the time/frequency plane that's done by a critically-sampled filterbank, AFAIK. Without any noise shaping filter tricks this will effectively limit how well we can control the quantization noise distribution in specific time/frequency regions only by choosing scalefactors. However, noise shaping filters can be and usually are used to improve this. (With "ANS" enabled, Musepack can do better in terms of controlling the noise's distribution in the frequency domain than what the filterbank suggests. With "TNS" enabled, AAC can do better in terms of controlling the noise's distribution in the time domain than what the filterbank suggests.)

I don't argue; you have a valid context there, but I have been explicitly writing at length about the context of practical recovery of the source material from provided PCM records. There seem to be difficulties with getting the context of practical source reproduction examined.


--------------------
no conscience > no custom
SebastianG
post Nov 15 2006, 17:30
Post #75





Group: Developer
Posts: 1317
Joined: 20-March 04
From: Göttingen (DE)
Member No.: 12875



I happened to code a subpixel detector for "x-corners" (checkerboard corners) for the purpose of calibrating cameras. Luckily it can be shown that the areas around these x-corners show the mentioned symmetries, which enables me to accurately measure the subpixel position of those x-corners (= saddle points) by analysing an optically-lowpassed-and-sampled image of a checkerboard. Simulations showed that the real bottleneck is actually the sensor noise. Without (simulated) sensor noise I got an accuracy of 1/300 pixel -- possibly restricted by a little bit of aliasing that's left in the image generation / subpixel-detector code.

The interesting part is: if you capture an image at high resolution with some sensor noise and use a high quality resampler to reduce the image resolution, you'll get pretty much the same locations for those x-corners -- meaning that the subpixel accuracy increased by the same factor I downsampled the image by. In fact, lowpassing is an integral part of the detector, to minimize the effect the noise has on the estimated x-corner positions. So it's not surprising that the subpixel detector's performance (measured in pixels) was better on the smaller image. By your definition of time resolution (spatial resolution for images) this would mean that the two images have the same spatial resolution. But of course the 2nd one is a downsampled one, which doesn't look as sharp. So what good is your definition?
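The same effect is easy to reproduce in 1-D. A sketch (mine, with all numbers made up): a smooth, effectively bandlimited Gaussian feature is placed off-grid, and a 3-point parabolic interpolation - standing in for the saddle-point fit - recovers its position at the original rate and again after a 2x decimation, in the same source units.

```python
import numpy as np

def subsample_peak(v):
    # parabolic (3-point) interpolation of the peak position, in samples
    i = int(np.argmax(v))
    a, b, c = v[i - 1], v[i], v[i + 1]
    return i + 0.5 * (a - c) / (a - 2 * b + c)

n = 2048
true_pos = 1000.3                       # feature centre, deliberately off-grid
x = np.exp(-((np.arange(n) - true_pos) ** 2) / (2 * 40.0 ** 2))

fine = subsample_peak(x)                # estimate at the original rate
# the wide Gaussian is effectively bandlimited, so plain decimation is safe here
coarse = subsample_peak(x[::2]) * 2     # estimate at half rate, rescaled to source units

print(fine, coarse)   # both ~ 1000.3
```

Both estimates land on the true off-grid position to far better than one sample of either grid, i.e. the feature's location survives the downsample even though the coarse record has fewer samples.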

This post has been edited by SebastianG: Nov 15 2006, 17:52
