What is "time resolution"?
ChiGung
post Nov 17 2006, 00:41
Post #101





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



Hi MoSP, I'll try not to confuse things with my ..deviances.. but just add a few careful comments:
QUOTE (MoSPDude @ Nov 16 2006, 23:21) *
If we do these tests digitally, and the real-world source is band limited below the Nyquist frequency, then the Nyquist–Shannon sampling theorem will hold and values representing that signal will theoretically reproduce an exact perfect signal,

It's not clear which 'Nyquist frequency' you mean there: that of the recording, or that of the target sample rate. To be clear, I acknowledge that when they are the same, we have very great, potentially lossless accuracy of reproduction of each (to each other).

QUOTE
In terms of the peaks moving after a low pass filter etc, I was taught these such things were due to the phase characteristic of the filter....

The issue of filters that change the phases of frequency components is a different one.

Peaks can move after bandwidth filtering because the whole employed spectrum contributes to every observable detail in a waveform, so if we remove a portion of the frequency spectrum, every peak, trough, slope and level in the waveform is susceptible to being affected. It's not a case of 'one' peak shifting like an isolated part of a sinusoid changing phase; it's 'one' peak (along with all the other instants) being rendered by a bundle of different superimposed sinusoids, and then a question of how that peak's rendering might change when sines are removed from the original bundle.
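The point above can be seen numerically: build a waveform from two superimposed sinusoids, drop the higher one (a crude stand-in for low-pass filtering), and watch the sample index of the maximum change. A minimal sketch - the frequencies, phase and amplitude here are illustrative, not from anyone's test signal:

```java
// Sketch: a waveform built from two superimposed sinusoids. Removing the
// higher component (a crude stand-in for low-pass filtering) changes the
// sample index at which the maximum occurs, because both components
// contributed to the peak's position.
class PeakShiftDemo {
    // index of the largest sample
    static int peakIndex(double[] x) {
        int best = 0;
        for (int i = 1; i < x.length; i++)
            if (x[i] > x[best]) best = i;
        return best;
    }

    // one cycle of a 1 kHz sine (arbitrary phase), optionally plus a 3 kHz sine
    static double[] signal(boolean keepHigh, int n, double fs) {
        double[] x = new double[n];
        for (int i = 0; i < n; i++) {
            double t = i / fs;
            x[i] = Math.sin(2 * Math.PI * 1000 * t + 0.3);
            if (keepHigh) x[i] += 0.8 * Math.sin(2 * Math.PI * 3000 * t);
        }
        return x;
    }

    public static void main(String[] args) {
        int peakBefore = peakIndex(signal(true, 96, 96000.0));
        int peakAfter  = peakIndex(signal(false, 96, 96000.0));
        System.out.println("peak index with both components: " + peakBefore
                + ", after removing the 3 kHz component: " + peakAfter);
    }
}
```

The peak of the two-component sum sits at a different sample index than the peak of the low component alone, even though the low component itself was untouched.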

hth,
regards'
cg


--------------------
no conscience > no custom
MoSPDude
post Nov 17 2006, 01:11
Post #102





Group: Members
Posts: 175
Joined: 24-July 06
From: Sheffield, UK
Member No.: 33249



All good comments and corrections. smile.gif

@ChiGung, the Nyquist frequency I was referring to was half the target sample rate. And again, the source signal would have to be band-limited to below this frequency. I'm interested in what you are saying, but in the heat of the argument in previous posts you were offering some confusing comments.

In terms of the post's original question at the top, "what is 'time resolution'?", I don't believe that sample rate conversion would affect the continuous-time position of peaks and troughs of any arbitrary signal if, again, the signals you are up/down-sampling are contained below the Nyquist frequency of the lower sample rate, the interpolation is theoretically ideal, and any filtering introduces no magnitude/phase changes.

As a side interest that I might play with, I wonder whether upsampling a file by loading in all the samples and performing a sinc interpolation per sample across "all time" would yield an improved sample rate converter for pre-recorded files? I wonder how long such a calculation would take?

It's all good stuff, just hard to read with the heated debate going on.
saratoga
post Nov 17 2006, 01:16
Post #103





Group: Members
Posts: 4715
Joined: 2-September 02
Member No.: 3264



QUOTE (MoSPDude @ Nov 16 2006, 17:11) *
As a side interest that I might play with, I wonder whether upsampling a file by loading in all the samples and performing a sinc interpolation per sample across "all time" would yield an improved sample rate converter for pre-recorded files? I wonder how long such a calculation would take?


O(n^2) time.

In practice you'd have problems with precision, since after enough samples the contribution from the next sample would be less than the smallest number your CPU could represent.
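For concreteness, the "sinc interpolation per sample across all time" idea can be sketched as follows (class and method names are mine; the nested loops are exactly where the O(n^2) cost comes from):

```java
// Naive "ideal" bandlimited upsampling: every output sample sums a sinc
// contribution from EVERY input sample, hence the O(n^2) running time.
class SincUpsample {
    static double sinc(double x) {
        if (x == 0.0) return 1.0;
        double px = Math.PI * x;
        return Math.sin(px) / px;
    }

    // upsample by integer factor L: output index j sits at input time j / L
    static double[] upsample(double[] in, int L) {
        double[] out = new double[in.length * L];
        for (int j = 0; j < out.length; j++) {      // n*L output samples...
            double t = (double) j / L;
            double acc = 0.0;
            for (int k = 0; k < in.length; k++)     // ...each summing n terms
                acc += in[k] * sinc(t - k);
            out[j] = acc;
        }
        return out;
    }
}
```

Doubling the input length quadruples the work, and, as noted above, the far-away sinc terms eventually fall below double precision.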
Garf
post Nov 17 2006, 01:17
Post #104


Server Admin


Group: Admin
Posts: 4853
Joined: 24-September 01
Member No.: 13



I would have thought it goes without saying that ad hominem attacks are way out of bounds in the Science/R&D forum, but unfortunately, it seems a reminder is required. Don't let it happen again.
MoSPDude
post Nov 17 2006, 01:30
Post #105





Group: Members
Posts: 175
Joined: 24-July 06
From: Sheffield, UK
Member No.: 33249



@Mike Giacomelli, it was only a passing thought biggrin.gif
ChiGung
post Nov 17 2006, 01:44
Post #106





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (MoSPDude @ Nov 17 2006, 00:11) *
but in the heat of the argument in previous posts you were offering some confusing comments.

I understand that, but I've lost my cool over 'various attacks' here and... I don't speak fluent 'text' either wink.gif I do try though.

QUOTE
In terms of the post's original question at the top, "what is 'time resolution'?", I don't believe that sample rate conversion would affect the continuous-time position of peaks and troughs of any arbitrary signal if, again, the signals you are up/down-sampling are contained below the Nyquist frequency of the lower sample rate, the interpolation is theoretically ideal, and any filtering introduces no magnitude/phase changes.

But that is a rare condition.
For instance, I like lowpassed music; I prefer listening to music sampled at 24k rather than 44k, because it's more comfortable to my ears. But the time resolution of the 24k isn't the same as that of the 44k CD track. I don't care; I listen to what sounds nice. I don't need to tell myself 'time resolution' is a magic quality beyond the sample rate to enjoy the results.

Potential reductions in frequency band / time resolution
(resulting from sample rate):
Natural source
> Original recording/mastering formats
> CD
> 32, 24, 22, 8... kHz WAV

Each has its own limits on frequency band / time resolution.
I'm feeling more comfortable with using those terms interchangeably now tongue.gif

QUOTE
As a side interest that I might play with, I wonder whether upsampling a file by loading in all the samples and performing a sinc interpolation per sample across "all time" would yield an improved sample rate converter for pre-recorded files? I wonder how long such a calculation would take?

Sounds like Seb's area of finesse.

QUOTE
It's all good stuff, just hard to read with the heated debate going on.
Cheers, I appreciate your comments; I hope it does cool down here.



QUOTE (2Bdecided @ Nov 16 2006, 13:22) *
No one is doubting that the peaks will move and/or vanish with most signals.

I personally am doubting you can perform the experiment on arbitrary samples, because you won't be able to track the peaks.

That's not a problem. I will simply loop through all the nodes in the lower-sampled record and try to match each against any node in the higher-sampled record. If a matching node is not found within a small distance of a partner in the other record, it will be discounted. It represents a flattering measure of temporal consistency between samples.

Why would I do this? Well, it started with this quote in an otherwise great article on vinyl myths in the HA wiki:
"PCM can encode time delays to any arbitrarily small length. Time delays of 1us or less - a tiny fraction of the sample rate - are easily achievable. The theoretical minimum delay is 1ns or less."

or

PCM can encode "time delays" to any arbitrarily small length. "Time delays" of 1us or less - a tiny fraction of the sample rate - are easily achievable. However, actual locatable conditions such as zero crossings, gradients or levels can only be located semi-reliably to within 1/2 sample at a given sample rate...
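The "located semi-reliably to within 1/2 sample" point concerns estimators like the usual linear interpolation of a zero crossing between its two bracketing samples. A minimal sketch of that estimator (not taken from any poster's code; names are mine):

```java
// Estimate the sub-sample position of a zero crossing by linear
// interpolation between the two samples that bracket it.
class ZeroCross {
    // fractional index of the first negative-to-positive crossing, or -1 if none
    static double firstCrossing(double[] x) {
        for (int i = 0; i + 1 < x.length; i++)
            if (x[i] < 0 && x[i + 1] >= 0)
                return i + (-x[i]) / (x[i + 1] - x[i]); // offset in [0, 1)
        return -1.0;
    }
}
```

How well such an estimate agrees between two versions of the same material at different sample rates is exactly what the node-matching experiment below tries to measure.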

This post has been edited by ChiGung: Nov 17 2006, 02:34


--------------------
no conscience > no custom
cabbagerat
post Nov 17 2006, 07:19
Post #107





Group: Members
Posts: 1018
Joined: 27-September 03
From: Cape Town
Member No.: 9042



QUOTE (MoSPDude @ Nov 16 2006, 16:11) *
As a side interest that I might play with, I wonder whether upsampling a file by loading in all the samples and performing a sinc interpolation per sample across "all time" would yield an improved sample rate converter for pre-recorded files? I wonder how long such a calculation would take?
As Mike says, it's O(n^2) - every sample must be looped over for every other sample. Many resamplers (such as libsamplerate, SoX and others) use the windowed-sinc interpolation algorithm described by Julius O. Smith - you can see the details on his resampling page. Performing the entire bandlimited interpolation calculation would be prohibitively expensive for any decent number of samples (for example, 1 minute of 44100 Hz sound would require about 1.4x10^13 multiplies). The solution they choose is to window the interpolation function to a certain number of zero crossings.

The length of the truncation and the window used affect the parameters of the resampler - the width of its passband and its SNR. For sound applications Smith chose the Kaiser window with a relatively high beta. I am using a similar calculation for other purposes (Doppler simulation) and am busy evaluating the Nuttall window to get the trade-off between passband and SNR that I want.
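A minimal sketch of the truncated, windowed interpolation being described - a Hann window stands in here for brevity (Smith's design and libsamplerate use a Kaiser window), and the class and method names are mine:

```java
// Windowed-sinc interpolation: truncate the sinc to +/-Z zero crossings and
// taper it with a window, so each output value costs O(Z) instead of O(n).
// (A Hann window stands in here for the Kaiser window used in practice.)
class WindowedSinc {
    static double sinc(double x) {
        if (x == 0.0) return 1.0;
        double px = Math.PI * x;
        return Math.sin(px) / px;
    }

    static double hann(double x, int Z) {           // window over [-Z, Z]
        return 0.5 + 0.5 * Math.cos(Math.PI * x / Z);
    }

    // value of the bandlimited signal at fractional sample position t
    static double at(double[] in, double t, int Z) {
        int lo = Math.max(0, (int) Math.ceil(t) - Z);
        int hi = Math.min(in.length - 1, (int) Math.floor(t) + Z);
        double acc = 0.0;
        for (int k = lo; k <= hi; k++)
            acc += in[k] * sinc(t - k) * hann(t - k, Z);
        return acc;
    }
}
```

Each output value now touches only about 2Z input samples instead of all n, which is what makes the method practical; Z and the window together set the passband-versus-SNR trade-off mentioned above.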


--------------------
Simulate your radar: http://www.brooker.co.za/fers/
2Bdecided
post Nov 17 2006, 12:03
Post #108


ReplayGain developer


Group: Developer
Posts: 4945
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



Let me consider this most practical of issues first...

QUOTE (ChiGung @ Nov 17 2006, 01:44) *
QUOTE (2Bdecided @ Nov 16 2006, 13:22) *
No one is doubting that the peaks will move and/or vanish with most signals.

I personally am doubting you can perform the experiment on arbitrary samples, because you won't be able to track the peaks.

That's not a problem. I will simply loop through all the nodes in the lower-sampled record and try to match each against any node in the higher-sampled record. If a matching node is not found within a small distance of a partner in the other record, it will be discounted. It represents a flattering measure of temporal consistency between samples.


It may be flattering, but it could simply be so wrong as to hide any "useful" (I use that word in a very loose sense!) data. A given peak could vanish as soon as you apply your first (highest) low-pass. However, for a random signal, the chance of finding the wrong peak within a given "small distance" will be roughly proportional to the low-pass filter cut-off frequency, since the higher the cut-off, the more peaks will remain. (The lower the cut-off, the fewer peaks will remain.) This you can probably prove - but it tells you nothing about that specific original peak.



QUOTE (ChiGung @ Nov 17 2006, 00:41) *
Hi MoSP, I'll try not to confuse things with my ..deviances.. but just add a few careful comments:
QUOTE (MoSPDude @ Nov 16 2006, 23:21) *
If we do these tests digitally, and the real-world source is band limited below the Nyquist frequency, then the Nyquist–Shannon sampling theorem will hold and values representing that signal will theoretically reproduce an exact perfect signal,

It's not clear which 'Nyquist frequency' you mean there: that of the recording, or that of the target sample rate. To be clear, I acknowledge that when they are the same, we have very great, potentially lossless accuracy of reproduction of each (to each other).


That's good. It means you accept that the signal from any microphone (which is inherently band-limited by simple mechanics), or any analogue recording (which is inherently band-limited by mechanics, electronics, particle properties etc. depending on the type of recording), can be (potentially) losslessly reproduced using PCM sampling, given a sufficiently high sampling rate.

That's handy, isn't it? Especially when you look at the bandwidths of some of these things, and realise that 192kHz sampling is enough, even for your definition of "enough"!

Cheers,
David.
2Bdecided
post Nov 17 2006, 13:25
Post #109


ReplayGain developer


Group: Developer
Posts: 4945
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



To go back to the beginning...

This is what I think: The peaks move, not because you've done anything to the "time resolution", but because you've removed spectral components that contributed to the exact peak position. This is not the same as time resolution.


This is what you think: What really changes is time resolution. Maybe the time resolution really is about 1/2 a sample. The "moving peaks" show this.


Let me show you why this "works", but is silly. (This is long - casual readers might like to skip to the conclusion!)

Arbitrary content could have peaks anywhere.

Here is statement 1 (feel free to disprove it): "For a peak to move (but not vanish) due to low pass filtering, that peak must have been "built" from at least two spectral components, at least one below the filter cut off, and at least one above."

The statement holds for any number of spectral components, even infinite, so don't complain we're not talking about real signals here! What it really says is (a) that if low pass filtering a signal changes the signal, there was something above the low pass filter, which was removed (obvious), and (b) that if we really are correctly considering the same peak, then that peak must have been formed by spectral components which overlapped in time. (A bad explanation, but I think you know what I mean.)


You can separate the signals above and below the cut off frequency using complementary low pass and high pass filters. Adding these two resulting signals together would give the original signal. If you hate the idea of complementary filters, just have a low pass filter, then subtract the result from the original to give the high pass version.

Let us consider the high pass output (or the part we're throwing away by low pass filtering, if you like). The lowest possible frequency component in the high pass section would have peaks spaced by just less than 1 over the filter cut off frequency. Higher frequencies would have more closely spaced peaks, but there can be no lower frequencies with more widely spaced peaks. Adding this signal back to the low pass version can, at most, move an existing peak by this inter-peak distance. It can't move it any further if the signal meets statement 1, since any further and you are looking at a new peak, not the original one.


Magic! Thus we "prove" (though it's more of a hand waving explanation!) your +/- 1/2 sample "time resolution", but we see it's really about frequency resolution. To move a peak, what you remove in the frequency domain must contrive to move the peak in the time domain. If it does not (e.g. it's co-timed, or there's nothing above the frequency cut-off anyway), the peak won't move.


So you come down to proving something rather trivial: "if I do something that I know will change the shape of the waveform, then the shape of the waveform will change".

Genius!

It tells you nothing about anything. Why? Because the peak could start anywhere, and could end up anywhere. You are only proving that it could move by up to that amount - you are not proving that it is "quantised" by that amount (which would prove a limit in time resolution), since a peak whose location was entirely due to a frequency component below the filter cut-off could have its position at the same point as the "original" signal.


Conclusion...

Let me give an example of something that is a limit in resolution: quantisation. Quantisation limits the amplitude resolution. The amplitude at that instant can be one quantisation step, or the next, but it cannot be any value in between. (If you dither before quantisation, this error becomes random-like noise, but it is still an error, and still a limit in instantaneous amplitude resolution).
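That amplitude-resolution limit is easy to make concrete (the step size and the TPDF dither here are illustrative):

```java
import java.util.Random;

// Amplitude quantisation: outputs can only be integer multiples of the step.
// TPDF dither (difference of two uniform variates, spanning +/- one step)
// randomises the error but does not recover the in-between amplitudes.
class QuantDemo {
    static double quantise(double x, double step) {
        return Math.round(x / step) * step;
    }

    static double quantiseDithered(double x, double step, Random rng) {
        double tpdf = (rng.nextDouble() - rng.nextDouble()) * step;
        return Math.round((x + tpdf) / step) * step;
    }
}
```

With or without dither, the output is always an integer multiple of the step - the in-between amplitudes are genuinely unavailable at that instant, which is what makes this a true resolution limit.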

Similarly, low pass filtering introduces a limit in bandwidth. For an ideal brick wall filter, we can have any frequency up to the cut off frequency, but none of the ones above.

So with the +/- 1/2 sample example from CG, are we looking at a limit in time resolution?

Consider this: Signal A has a peak at position A, and frequency components above some arbitrary frequency F. Removing those frequency components moves the peak to position A2, which is slightly different from A.
(Look, says CG, a limit in time resolution. Hold on! says DR...)

However a different signal, signal B, which has no frequency component above frequency F, can have a peak at exactly position A, and that peak will not be moved by removing frequency components above frequency F.

Thus we show that suggesting "the move from A to A2 demonstrates some limit of time resolution due to low pass filtering" is factually incorrect. A low pass filtered signal can have a peak at A, A2, or anywhere else.

All you have shown is that frequency components in signal A contributed to the position of peak A, and these have been removed, thus moving the peak. However, there is no limit to where peaks can occur. It is not like the quantisation amplitude resolution limit at all! It is not a time resolution limit, just a predictable effect of low pass filtering a signal.



I don't think I can put it any clearer than that!

(I shall not be giving up my day job wink.gif )

Cheers,
David.

This post has been edited by 2Bdecided: Nov 17 2006, 13:31
bhoar
post Nov 17 2006, 16:29
Post #110





Group: Members (Donating)
Posts: 612
Joined: 31-May 06
Member No.: 31326



I'm treading into an area I know nothing about, but I'll make a short comment anyway:

The location of a single mathematical sample peak should not be assumed to be the defined location of the attack of a particular instrument. Just because filtering leads to a peak slightly before or after the original peak does not mean the location of the attack has changed temporally.

I would think that what we audibly hear as an attack would be more precisely defined as a zone, or perhaps the middle of a series of PCM peaks and troughs of a certain nature. I suspect that, using these sorts of definitions, you would find there is no temporal movement of the instrument's attack due to quality low-pass filtering above the typical human hearing range, whether before digitisation or in the digital domain.

Summary: PCM peak != attack

Meta-Summary: I might be completely wrong. Or I might be stating the completely obvious.

-brendan


--------------------
Hacking CD Robots & Autoloaders: http://hyperdiscs.pbwiki.com/
ChiGung
post Nov 17 2006, 19:02
Post #111





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (2Bdecided @ Nov 17 2006, 12:25) *
To go back to the beginning...

This is what I think: The peaks move, not because you've done anything to the "time resolution", but because you've removed spectral components that contributed to the exact peak position. This is not the same as time resolution.

That is like saying: "the tin can crumples, not because you are doing anything to its form, but because you are destroying its structural integrity".

The difficulty displayed here by those "in the know" in admitting that anything is being done to "time resolution" as sample rate is reduced (!) is an odd phenomenon.

Explicitly: sample rate is the rate of provided instances through time.

QUOTE
Maybe the time resolution really is about 1/2 a sample. The "moving peaks" show this.
Let me show you why this "works", but is silly. (This is long - casual readers might like to skip to the conclusion!)

Hmmm, "works" but is silly... getting a bit obscure; it feels like you are erecting a wall of denial...

QUOTE
Arbitrary content could have peaks anywhere.
Here is statement 1 (feel free to disprove it): "For a peak to move (but not vanish) due to low pass filtering, that peak must have been "built" from at least two spectral components, at least one below the filter cut off, and at least one above."
This is not difficult for me to visualise; I've made such points throughout this thread... I'll cut to the chase.
QUOTE
Magic! Thus we "prove" (though it's more of a hand waving explanation!) your +/- 1/2 sample "time resolution", but we see it's really about frequency resolution. To move a peak, what you remove in the frequency domain must contrive to move the peak in the time domain. If it does not (e.g. it's co-timed, or there's nothing above the frequency cut-off anyway), the peak won't move.

What I like about this is the honesty of your description, and it makes the logic quite clear.
But your present feeling that you can simply declare what 'results' are 'about' is wrong.
Saying 'in a way' 'it is like' time resolution, but really "it's about" frequency resolution, is insubstantial - a whimsical fig leaf. How you choose to approach a circumstance conceptually is your choice, but if you want to invalidate an approach such as dealing with the "time domain", you can't simply announce that you find it 'silly', or not the same as your approach, which now seems to be exclusively the "frequency domain". These things are two sides of a coin.

QUOTE
So you come down to proving something rather trivial: "if I do something that I know will change the shape of the waveform, then the shape of the waveform will change".

If I reduce the number of provided samples throughout time, the potential accuracy of placement of detail throughout time will decrease, and the potential resolution of detail through time will decrease - time resolution is decreased.

I have to snip, because although involved, the objections you are providing here can't be argued against - they are ruleless.

QUOTE
All you have shown is that frequency components in signal A contributed to the position of peak A, and these have been removed, thus moving the peak. However, there is no limit to where peaks can occur. It is not like the quantisation amplitude resolution limit at all! It is not a time resolution limit, just a predictable effect of low pass filtering a signal.

If there is no limit to where the peaks can occur, how come they can't be arranged to occur in the correct place between records of differing sample rates? Why must they be susceptible to unknown distortions during downsampling under your "no limitations" argument? Because their precise sub-sample positions are limited - by every other sample in the record of which they are a part. That's why you can't normally use the "unlimited" resolution which you can infer - without distorting all other samples to create the sub-sample details. (This was done to provide some limited demonstrations in this thread, but it is not possible in practice, where all samples must be treated equally.)

QUOTE
I don't think I can put it any clearer than that!
(I shall not be giving up my day job wink.gif )

There's no need to bring professional pride into this. These matters are similarly misreported by many professionals. Anyway, we are all professionals.

I should be able to post the data tonight on the actual accuracy of reproduction possible of event timings between well utilised sample rates.

regards'
cg

This post has been edited by ChiGung: Nov 17 2006, 19:03


--------------------
no conscience > no custom
Woodinville
post Nov 17 2006, 19:59
Post #112





Group: Members
Posts: 1401
Joined: 9-January 05
From: JJ's office.
Member No.: 18957



QUOTE (ChiGung @ Nov 17 2006, 10:02) *
That's why you can't normally use the "unlimited" resolution which you can infer - without distorting all other samples to create the sub-sample details. (This was done to provide some limited demonstrations in this thread, but it is not possible in practice, where all samples must be treated equally.)



You can't have unlimited resolution without unlimited bandwidth.

This does not mean that peaks move.

As I pointed out in the discussion of the sum of two gaussians, peaks will move if and only if you remove frequency components that contribute to the envelope in a way that moves it.

Again, look up "Hilbert Envelope". You're doing nobody any good by failing to read up on the field you're talking about before you work.

As to your experiments, they prove nothing until you specifically produce the ***exact*** equations and explain your reasoning.
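For readers who haven't met the term: the Hilbert envelope is the magnitude of the analytic signal, which can be formed by zeroing the negative-frequency half of the spectrum and doubling the positive half. A self-contained sketch (a naive O(n^2) DFT for clarity - a real implementation would use an FFT; names are mine, and n is assumed even):

```java
// Hilbert envelope via the analytic signal: take the DFT, zero the
// negative-frequency bins, double the positive ones, inverse-transform,
// and take the magnitude. Naive O(n^2) DFT for clarity; n must be even.
class HilbertEnvelope {
    static double[] envelope(double[] x) {
        int n = x.length;
        double[] re = new double[n], im = new double[n];
        for (int k = 0; k < n; k++)               // forward DFT
            for (int t = 0; t < n; t++) {
                double w = -2 * Math.PI * k * t / n;
                re[k] += x[t] * Math.cos(w);
                im[k] += x[t] * Math.sin(w);
            }
        for (int k = 0; k < n; k++) {             // analytic-signal weights
            double g = (k == 0 || k == n / 2) ? 1.0 : (k < n / 2 ? 2.0 : 0.0);
            re[k] *= g;
            im[k] *= g;
        }
        double[] env = new double[n];
        for (int t = 0; t < n; t++) {             // inverse DFT, magnitude
            double ar = 0, ai = 0;
            for (int k = 0; k < n; k++) {
                double w = 2 * Math.PI * k * t / n;
                ar += (re[k] * Math.cos(w) - im[k] * Math.sin(w)) / n;
                ai += (re[k] * Math.sin(w) + im[k] * Math.cos(w)) / n;
            }
            env[t] = Math.hypot(ar, ai);
        }
        return env;
    }
}
```

For a pure sinusoid the envelope comes out flat at the sinusoid's amplitude; for the sum of two Gaussians mentioned above, it is this envelope whose peak may or may not move when frequency components are removed.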


--------------------
-----
J. D. (jj) Johnston
ChiGung
post Nov 18 2006, 19:04
Post #113





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (Woodinville @ Nov 17 2006, 18:59) *
You can't have unlimited resolution without unlimited bandwidth.
Of course, that has been my point.
QUOTE
This does not mean that peaks move.
Your criticisms are not consistent. When I quoted you as implying peaks would not move due to lowpassing, you said I misquoted you while crying 'abuse'. So you have me warned, while berating my lack of contemporary study and making inconsistent demands that I perform various pet exercises.
I'm not your student.
QUOTE
You're doing nobody any good by failing to read up on the field you're talking about before you work.
Untrue. I limit my research to protect my originality. Education is to innovation what masturbation is to procreation rolleyes.gif I have learned adequate tools at home, at school, at university and beyond to suit my own designs. I've never been forced to research previous solutions to computational problems - except for a while when I began learning to program, quite intensively, around the age of 9.

QUOTE
As to your experiments, they prove nothing until you specifically produce the ***exact*** equations and explain your reasoning.

You demand exact equations and provide only irrelevant ones. To suggest that I have not explained my reasoning in this thread is absurd.

@all,
I have only been trying to defend sensible, intuitive, practical use of terminology here, by arguing against the idea forwarded - that the "time resolution" of PCM is practically finer than the sample rate.
As explained previously, I understand that processes can maintain timing details in source material which is suitably limited to a particular sample rate's bandwidth. But processes cannot recover timing details once bandwidth is limited. This means downsampling can, and normally does, damage 'time resolution'.

Despite a heavy bias against examining the practical limitations that sample rate places on time resolution, some expert contributors in this thread have acknowledged that measurable timing details will be routinely distorted by downsampling at normal audio rates. I am not concerned with the audibility of such distortion, only its existence and its limits.

It has been much work trying to have this fairly examined in such a heated and one-sided discussion. I do suspect others have better experience than I from which to estimate the situation, but they seem ideologically unwilling to do so.

I have tried to rise to the group's challenge and do some work to illustrate the situation.
Here's a scrappy program, not quite finished, but I hope some might appreciate my input so far...

The central chunks of code are:

CODE
int nodi=1; //count of nodes found
int pregrad,pstgrad; //approximate derivatives

for( int ndfi=0; ndfi<samsGet; ndfi++) //ndfi = time index( samples loop )
{ smx2=smx1; smx1=smx0; smx0=retSam[ndfi];
pregrad=smx1-smx2;
pstgrad=smx0-smx1;

if((pregrad*pstgrad<-1)&&(Math.abs(pregrad-pstgrad)>100))
{ nodes[nodi++]=(ndfi<<8)+( (512*pstgrad+1) / ( ((pstgrad-pregrad)*2)+1) ); }
//record simple linear solution of 1st derivative=0
//expression: (( (512*pstgrad+1) / (((pstgrad-pregrad)*2)+1) )) produces ~ 0-255
//node units are samplesize/256 (1/256th of sample interval)
}//end for(ndfi)


This finds 'nodes' in the PCM - approximately the presence and position of major peaks and troughs.

and
CODE
for(int aninx=1; aninx<=endnodea; aninx++) //ninx is node index
{ int findtime=anodes[aninx]; //next node to find (units of sample*256)
findfit=maxoff; //Maximum distance for fit
paired=false;

for(int findnode=lastfind; (findnode<endnodeb)&&(bnodes[findnode]<(findtime+maxoff)); findnode++)
{ if(Math.abs(findfit)>Math.abs(findtime-bnodes[findnode]))
{ findfit=findtime-bnodes[findnode];
lastfind=findnode; paired=true;
}
}
if(paired){ discreps[nndi++]=findfit; paired=false; }
else{ missnode++; }

}//end looping through 1 buffer

This finds a node's nearest neighbour in the parallel track and records its discrepancy.

whole program (warning: spaghetti/Frankenstein)
CODE
import java.io.*;

class nodecouple
{ //int[] series = new int[80000];

public static void main(String[] args) /* main */
{ nodecouple nodecoup = new nodecouple(args); //creates instance of self for javas nonstatic context hooha
System.exit(1); } //graceless exit better than hang

public nodecouple(String[] args)
{ int chnksize=1024; //what length
int chnkread=1000; //how many
int srcsamx=4; //?
int maxoff=128; //maximum distance between pairings
int mgfy=1; //magnify record

System.out.println("Reading in"+args[0]+"\n");
getNodesWave tookWav1 = new getNodesWave(args[0], chnksize/mgfy);
//int[] test0=tookWav1.getnodes(0);
System.out.println("And Reading "+args[1]+"\n");
getNodesWave tookWav2 = new getNodesWave(args[1], chnksize);

if(tookWav1.totlFrames<chnksize*chnkread)
{ chnkread=tookWav1.totlFrames/chnksize; };

long ThScnds=((chnksize*chnkread)*1000/tookWav1.Samprate);

System.out.print("Only reading first "+ThScnds/1000+"."+(ThScnds%1000)/100+""+(ThScnds%100)/10+""+(ThScnds%10));
System.out.print(" seconds");
if(tookWav1.channels>1)
{ System.out.print(" (of 1 channel)"); }
System.out.println("(Change chnkread for more)");

int[] discreps= new int[(chnksize*chnkread)]; //(impossible maximum size for discreps)
int nndi=0; //count of nodes found
int missnode=0;
int endnodea=0,endnodeb=0;
int endnodeSuma=0,endnodeSumb=0;

System.out.println("\nLocating Nodes...");

for(int chnk=0; chnk<chnkread; chnk++) //refresh chunks to read
{ int[] anodes=tookWav1.getnodes(0); //nodes in a's chunk
int[] bnodes=tookWav2.getnodes(0); //nodes in b's chunk

endnodea=anodes[0]; //length of array is stored in first position (lowsampled one)
endnodeb=bnodes[0]; //length of array is stored in first position

endnodeSuma+=endnodea;
endnodeSumb+=endnodeb;

/*debout("a:"+endnodea, 9); debout("b:"+endnodeb, 6);
if(chnk%6==5){ debout("\n", 0); } */

int timedif;
int lastfind=1; //to remember the position of the previous couple
boolean paired = false;
int findfit;

for(int aninx=1; aninx<=endnodea; aninx++) //ninx is node index
{ int findtime=anodes[aninx]*mgfy; //next node to find (units of sample*256)
findfit=maxoff; //Maximum distance for fit

paired=false;

//note: sign of nodetime could be bodged to reflect peak or valley.
for(int findnode=lastfind; (findnode<endnodeb)&&(Math.abs(bnodes[findnode])<(findtime+maxoff)); findnode++)
{ if(Math.abs(findfit)>Math.abs(findtime-Math.abs(bnodes[findnode])))
{ findfit=findtime-bnodes[findnode];
if(findfit==-1){ debout("!",2); }
lastfind=findnode; paired=true;
}
}
if(paired){ discreps[nndi++]=findfit; paired=false; }
else{ missnode++; }

}//end looping through 1 buffer

}//end looping through all buffers

System.out.println("\nTotal nodes in: "+args[0]+":"+endnodeSuma);
System.out.println("Total nodes in: "+args[1]+":"+endnodeSumb);
System.out.println("\nMissednodes="+missnode+", discreps="+nndi+", Searchednodes="+endnodeSuma+"\n");

/*System.out.println("\nTime differences List..."); //dump all discreps
for(int tal=0; tal<nndi; tal++)
{ debout(""+discreps[tal],5); if(tal%16==15){ System.out.println(); }
}*/

int maxtallie=1;
int histbars=25;
int histrange=maxoff*2;
int[] tallie = new int[histbars];
int budge=histrange/((histbars-1)*2);

for(int npair=0; npair<nndi; npair++) //do tallie count
{ int valo=discreps[npair];

if(valo>-histrange/2)
{ for(int talo=1; talo<=histbars; talo++)
{ if (valo<(histrange*talo/histbars-histrange/2))
{ tallie[talo-1]++; talo=histbars+1; }
}
}
}
for(int x=0; x<histbars; x++)
{ if(tallie[x]>maxtallie){ maxtallie=tallie[x]; } }

debout("\n",0);

int histhigh=16;
for( int g=histhigh; g>0; g--)
{ debout(""+(maxtallie*g/histhigh)+": ",9);
for(int i=0; i<histbars; i++)
{ if(tallie[i]<(maxtallie*g/histhigh))
{ debout(" ",3);}
else
{ debout("**",3);}
}//i

debout("\n",0);
}//g


for(int x=0; x<histbars-3; x++)
{ System.out.print("----"); }
System.out.print("\n Sums: ");
for(int x=0; x<histbars; x+=2)
{ deboutm(""+tallie[x], 6); }
System.out.print("\n Devs: <");
for(int x=-histbars/2; x<(histbars+1)/2; x+=2) //printing hist headers
{ deboutm(""+Math.abs((int)( (x+0.5)*histrange)/histbars), 6); }

System.out.print(">");
System.out.println("\n\nDistribution chart, range="+histrange+"n (256n=1 sample)\n");
}
private void debout(String deb, int size) //writing to size, for debug output
{ size=size-deb.length();
for(int o=0; o<size; o++) { System.out.print(" ");}
System.out.print(deb);
}//end debout method

private void deboutm(String deb, int size) //writing to size, for debug output
{ size=size-deb.length();
int pesize=size/2;
int posize=size-pesize;

for(int o=0; o<pesize; o++) { System.out.print(" ");}
System.out.print(deb);
for(int o=0; o<posize; o++) { System.out.print(" ");}
}//end deboutm method

}//end nodecouple

class getNodesWave {
/**
* Wavfile gurgitor
*
* Casual version - it only understands simple wav file formats
*
 * This utility is written in private as part of a study
 * portfolio in development.
*
* neutron.soupmix@ntlworld.com */

/* Public discernables are channels, sampling rate, timeframes, (samples/channels).... */

private int hdChnkSz,fmt,dmt,wavFrmtTag; //metaMetas
public int channels, rawPcmByts, totlFrames, bitsPerSample, Samprate; //stream details

byte[] loaded;
byte[] bpass= new byte[0]; //possibly bugged tweak! -appears to work smile.gif
private int samsGet, framPerGet, FrmsBfd, rdIn, bytsRd, bytLft;

private int[] samsGvn;
private short[] retSam;
private short[][] samRng;
private byte samStep;
private int ndfi; //node find index
private int smx0,smx1,smx2;

public FileInputStream wavIn;
public File wavFile;

public static void main(String[] args)/* main */
{ System.out.println("getNodesWave has no main");
System.exit(1); } //graceless exit better than hang

public getNodesWave(String wvName, int samsPerGet){ //bytbffa is sampls per modwork area

System.out.println(" Opening "+wvName);
try
{ wavFile = new File(wvName);
FileInputStream wavIn = new FileInputStream(wavFile);

byte[] hdrz = new byte[44];
int hdrzlen = wavIn.read(hdrz);
byte hdrzi = 12; //wavIn.skip(12);

fmt // header should be the 'fmt ' signature, 4 bytes
= ((hdrz[hdrzi++]&0x00ff)<<24) | ((hdrz[hdrzi++]&0x00ff)<<16)
| ((hdrz[hdrzi++]&0x00ff)<<8) | ( (hdrz[hdrzi++]&0x00ff));

if (fmt!=0x666D7420)
{ System.out.println(" fmt sig not encountered - Failure Likely...");
System.out.println(" fmtsig val="+fmt+"\n");}

hdChnkSz //size in bytes of a header chunk
= ((hdrz[hdrzi++]&0x00ff) | ((hdrz[hdrzi++]&0x00ff)<<8)
| ((hdrz[hdrzi++]&0x00ff)<<16) | ((hdrz[hdrzi++]&0x00ff)<<24));
if (hdChnkSz!=16)
{ System.out.println(" Complex wav header - Failure Possible..."); }

wavFrmtTag = (hdrz[hdrzi++]&0x00ff)|((hdrz[hdrzi++]&0x00ff)<<8 );

channels = (hdrz[hdrzi++]&0x00ff)|((hdrz[hdrzi++]&0x00ff)<<8 ); //channels

Samprate
= ((hdrz[hdrzi++]&0x00ff) | ((hdrz[hdrzi++]&0x00ff)<<8) //samprate
| ((hdrz[hdrzi++]&0x00ff)<<16) | ((hdrz[hdrzi++]&0x00ff)<<24));
System.out.println(" SampleRate:"+Samprate);

hdrzi+=6; //skip byterate and blockalign
bitsPerSample = (hdrz[hdrzi++]&0x00ff)|((hdrz[hdrzi++]&0x00ff)<<8 );

//data chunk
dmt // header should be the 'data' signature, 4 bytes
= ((hdrz[hdrzi++]&0x00ff)<<24) | ((hdrz[hdrzi++]&0x00ff)<<16)
| ((hdrz[hdrzi++]&0x00ff)<<8) | ( (hdrz[hdrzi++]&0x00ff));

if (dmt!=0x64617461)
{ System.out.println(" No data signature found in wav - Abort! Abort! ");
System.out.println(" dmtsig val="+dmt+"\n");}

rawPcmByts //size in bytes of pcm data
= ((hdrz[hdrzi++]&0x00ff) | ((hdrz[hdrzi++]&0x00ff)<<8)
| ((hdrz[hdrzi++]&0x00ff)<<16) | ((hdrz[hdrzi++]&0x00ff)<<24));

//header reading finished
//hopefully the wav chunk headers are over with

totlFrames=rawPcmByts/(2*channels);

if (channels==0){ System.out.println(" No Channels Arrrgh! Abort! Abort!"); channels=1; }
if (channels==1){ System.out.println(" Mono input"); }
if (channels==2){ System.out.println(" Stereo input"); }
if (channels>=3){ System.out.println(" "+channels+" channels (weird, trying as mono)" ); channels=1; }

long ThScnds=(rawPcmByts)/(2*Samprate*channels/1000);
System.out.println(" Samples "+(rawPcmByts)/(2*channels));
System.out.println(" Trackime at "+Samprate+"Hz :"+ThScnds/1000+"."+(ThScnds%1000)/100+""+(ThScnds%100)/10+""+(ThScnds%10)+"\n");
wavIn.close();

samStep=0;
retSam=new short[samsPerGet];
samRng=new short[channels][4096];
samsGvn=new int[channels];
samsGet=samsPerGet;
FrmsBfd=0;
bytsRd=0;
smx0=0;smx1=0;smx2=0;
rdIn=6144*channels; //reading in 6k a time per channel,
//6144 samples fitting into 4096wide ringbuffs
//1024 sample was safe get for two channel access


}//end io try

catch(FileNotFoundException fnfe)
{ System.out.println("File Not Found, " + wvName);
return; }// end catch FileNotFoundException

catch(IOException ioe)
{ System.out.println("IO Error: " + ioe);
return; }// end catch IOException*/

finally
{ } // end LoadArray

return;
}//end constructor (open wav, get header details)

private void debout(String deb, int size) //writing to size, for debug output
{ size=size-deb.length();
for(int o=0; o<size; o++) { System.out.print(" ");}
System.out.print(deb);
}//end debout method

private void debout(String deb) //writing to size, for debug output
{ System.out.println(deb);
}//end debout method

public void setStep(byte set)
{ samStep = set; return; }

public int[] getnodes( int chnnel )
{ if( (samsGvn[chnnel]+samsGet)>=(totlFrames) )
{ samsGet=totlFrames-samsGvn[chnnel]; //check if last get
if(samsGet==0){ System.out.println("wavin buffer empty"); }
retSam = new short[samsGet]; }

if(FrmsBfd<(samsGvn[chnnel]+samsGet)) { movebf(); } //check if ringbuff needs moved

for(int samsOut=0; samsOut<samsGet; samsOut++)
{ retSam[samsOut]=samRng[chnnel][(samsGvn[chnnel]++)%4096]; }

//** review channel returned.
//retSam is series of pcm levels
int[] nodes= new int[2008]; //fairly safe sized node list
int nodi=1; //count of node found
int pregrad,pstgrad; //approximate derivatives

for( int ndfi=0; ndfi<samsGet; ndfi++) //ndfi = time index( samples loop )
{ smx2=smx1; smx1=smx0; smx0=retSam[ndfi];
pregrad=smx1-smx2;
pstgrad=smx0-smx1;

if((pregrad*pstgrad<-1)&&(Math.abs(pregrad-pstgrad)>100))
{ nodes[nodi++]=(ndfi<<8)+( (512*pstgrad+1) / ( ((pstgrad-pregrad)*2)+1) ); }
//record simple linear solution of 1st derivative=0
//expression: (( (512*pstgrad+1) / (((pstgrad-pregrad)*2)+1) )) produces ~ 0-255
//node units are samplesize/256 (1/256th of sample interval)

//What produced the central spike before: the old selection test used an
//approximated (but balanced, equally-distributed-error) measure of a peak's
//intensity/acuteness - a simple reading of the second derivative (change of
//change). With that test, peaks near the middle of the considered span
//needed less strength to be selected than peaks at the sides, so nodes
//were matched coincidentally close more often than not.
//The new expression has no such bias.

}//end for(ndfi)


nodes[0]=nodi-1; //first in nodelist stores length of nodelist

return nodes; //return list of nodes to analysis director
}// end getsams method

public void movebf() //fetch next load of samples
{
try
{ FileInputStream wavIn = new FileInputStream(wavFile);
wavIn.skip(44+bytsRd);
if((bytsRd+rdIn)>rawPcmByts){ rdIn=rawPcmByts-bytsRd; }

if(bpass.length!=rdIn){ bpass= new byte[rdIn]; }
bytsRd+= wavIn.read(bpass);
loaded=bpass;
wavIn.close();
}
catch(IOException ioe){ System.out.println("IO Error: " + ioe); return; }// end catch IOException
finally{ } // end LoadArray

if(channels==1)
{ for( int loadi=0; loadi<rdIn ; FrmsBfd++)
{ samRng[0][FrmsBfd%4096] = (short)( (loaded[loadi++]&0x00ff)|((loaded[loadi++]<<8)&0xff00) );
loadi+=samStep*2; } }
else
{ for( int loadi=0; loadi<rdIn ; )
{ samRng[0][FrmsBfd%4096] = (short)( (loaded[loadi++]&0x00ff)|((loaded[loadi++]<<8)&0xff00) );
samRng[1][(FrmsBfd++)%4096] = (short)( (loaded[loadi++]&0x00ff)|((loaded[loadi++]<<8)&0xff00) );
loadi+=samStep*4; } }

return;
}//end eatWav

}//end getNodesWave
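A side note on the sub-sample positioning inside getnodes: the integer expression there appears to be, up to its sign/offset convention and integer rounding, the vertex of the parabola through three consecutive samples (equivalently, the zero crossing of the linearly interpolated first difference). A minimal floating-point sketch of that idea in isolation (class and method names are mine, not part of the program above):

```java
// Sub-sample peak location by parabolic interpolation: fit a parabola
// through three consecutive samples a, b, c (b the candidate peak) and
// return the vertex offset relative to b, in samples, within (-0.5, 0.5).
// Floating-point analogue of the integer expression used in getnodes.
public class PeakDemo {
    static double subSamplePeak(double a, double b, double c) {
        return (a - c) / (2.0 * (a - 2.0 * b + c));
    }

    public static void main(String[] args) {
        // Exact parabola y(t) = -16*(t - 1.25)^2 sampled at t = 0, 1, 2
        // gives samples -25, -1, -9; the true peak sits a quarter sample
        // after the middle one.
        System.out.println(subSamplePeak(-25.0, -1.0, -9.0)); // 0.25
    }
}
```

For a signal that really is locally parabolic, the estimate is exact; for real audio it is only as good as the local three-point fit.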


Here is a current output:
CODE
---------- Run java ----------
Reading innofun22176.wav
  SampleRate:176400
  Stereo input
  Samples 1850146
  Trackime at 176400Hz :10.497

And Reading nofun176400.wav
  SampleRate:176400
  Stereo input
  Samples 1850134
  Trackime at 176400Hz :10.497

Only reading first 1.160 seconds (of 1 channel) (Change chnkread for more)

Locating Nodes...

Total nodes in: nofun22176.wav:1132
Total nodes in: nofun176400.wav:1444

Missednodes=127, discreps=1005, Searchednodes=1132

   253                                                                    ***          
   237                                                                    ***          
   221                                                                    ***          
   205                                                                    ***          
   189                                                                    ***          
   173                                                                    ***          
   158                                                                    ***          
   142                                                                    ***          
   126                                                                    ***          
   110                                                               ***  ***          
    94                  ***  ***                                     ***  ***          
    79                  ***  ***                                     ***  ***  ***    
    63             ***  ***  ***  ***                                ***  ***  ***    
    47             ***  ***  ***  ***  ***                           ***  ***  ***    
    31             ***  ***  ***  ***  ***                      ***  ***  ***  ***    
    15        ***  ***  ***  ***  ***  ***  ***  ***            ***  ***  ***  ***  ***
Sums:     3   28   76  102  103   67   50   22   22    5    9   36  125  253   83   21
Devs:  1920 1664 1408 1152  896  640  384  128  128  384  640  896 1152 1408 1664 1920

Distribution chart, range=4096n (256n=1 sample)

Output completed (0 sec consumed).


-This is a curious output that can't be trusted at this stage of the program's development.
It compares a clip of cd audio upsampled x4, with one downsampled x2 and then upsampled x8
by ssrc_hp.exe
A 176400 (44100*4) Hz sample length in the chart is 256,
a 44100 Hz sample length in the chart is 1024
( these relate to the distribution bands )
more investigation/developement may follow..

It's obviously troublesome to start interpreting insecurely tested/debugged programs, but some may recognise the effort I've put into it (the code ended up taking some valuable hours to get to this stage).
- Maybe they could give me the benefit of their prediction of the best possible distribution of detectable conditions in a waveform's pairing between different samplerates/bandwidths.

regards'
cg

edit: inserted smaller chart

This post has been edited by ChiGung: Nov 22 2006, 21:49


--------------------
no conscience > no custom
Go to the top of the page
+Quote Post
Canar
post Nov 18 2006, 21:15
Post #114





Group: Super Moderator
Posts: 3327
Joined: 26-July 02
From: princegeorge.ca
Member No.: 2796



QUOTE (ChiGung @ Nov 18 2006, 11:04) *
Education is to Innovation what Masterbation(sic) is to Procreation


That is quite a ridiculous claim, especially in the context of computer science. Yes, you can probably come up with some silly sort algorithm that works, but in real life, experienced coders use the best sort algorithm for the expected data by researching the sort algorithms and selecting the best. These sort algorithms were not devised by some unschooled hack, these sort algorithms were designed by highly-educated and talented individuals, working in academic settings with other highly-educated and talented individuals.

Whether you believe it or not, education has the benefit of saving everyone time on meaningless missteps such as this thread. The concept of a "peak" holds little relevance to anything in electrical engineering; any form of frequency filtration changes all these peaks anyhow. Lowpass filter a Dirac pulse and instead of a single peak, you get a repeating set of peaks, with the highest peak potentially moving depending on the structure of the filter. A single peak, when lowpassed, can result in multiple peaks, all offset by some factor.

Your issue is trying to perceive frequency domain effects while visualizing the effects solely in the time domain. "Peaks" are time-domain phenomena that do not have any meaning in the frequency domain. All your down- and upsampling is affecting the frequency domain. Changes in the frequency domain imply changes in the time domain, but the way in which the two are related is less trivial to understand than you seem to think it is.

As you no doubt notice, your program does not simply show that changing spectral content shifts peaks, it shows that there is not a single peak that remains unaffected (if I am understanding the output correctly).

The time and frequency domains are mathematically identical, but are very different to conceptualize. If you wish to discover the ways in which a frequency domain transform affects audio, you must look at it from a frequency domain perspective if you wish to understand well. What you are doing is looking at a frequency domain transformation from a time domain perspective, and are getting complex results that you don't really understand. Because you don't understand them, you assume there's something weird happening there. There isn't. You're just not looking at it correctly.

This post has been edited by Canar: Nov 18 2006, 21:16


--------------------
∑:<
Go to the top of the page
+Quote Post
ChiGung
post Nov 18 2006, 21:55
Post #115





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (Canar @ Nov 18 2006, 20:15) *
As you no doubt notice, your program does not simply show that changing spectral content shifts peaks, it shows that there is not a single peak that remains unaffected (if I am understanding the output correctly).

If you had the capability to run the program, you could check such a conclusion on other tracks, trying different methods of lowpassing/downsampling.
And you might have noticed that the tracks compared there were of slightly different lengths due to ssrc's particulars.

QUOTE
The time and frequency domains are mathematically identical......What you are doing is looking at a frequency domain transformation from a time domain perspective, and are getting complex results that you don't really understand. Because you don't understand them, you assume there's something weird happening there. There isn't. You're just not looking at it correctly.

That is your misunderstanding - that "looking at a frequency domain transformation from a time domain perspective" is problematic. As both domains are equaly valid perspectives on the same data, they can be considered in parallel. To consider time resolution without reference to the time domain is folly, if not impossible.

In the frequency domain a downsample manifests as a reduction in bandwidth. In the time domain a downsample manifests as a reduction in resolution of level through time.
These statements are simply not sensibly refutable.


--------------------
no conscience > no custom
Go to the top of the page
+Quote Post
Woodinville
post Nov 18 2006, 22:20
Post #116





Group: Members
Posts: 1401
Joined: 9-January 05
From: JJ's office.
Member No.: 18957



QUOTE (ChiGung @ Nov 18 2006, 10:04) *
QUOTE (Woodinville @ Nov 17 2006, 18:59) *
You can't have unlimited resolution without unlimited bandwidth.
Of course, that has been my point.


And, nobody has argued otherwise.
QUOTE
QUOTE
This does not mean that peaks move.
Your criticisms are not consistent. When I quoted you as implying peaks would not move due to lowpassing, you said I misquoted you while crying 'abuse'. So you have me warned while berating my lack of contemporary study and making inconsistent demands to perform various pet exercises.
I'm not your student.


My statements are absolutely consistent.

I think that it is telling that you will not try a trivial, simple experiment, one that requires much less code than you already appear to have posted, that will show you some of the errors implicit in your complaint.

QUOTE (ChiGung @ Nov 18 2006, 12:55) *
In the frequency domain a downsample manifests as a reduction in bandwidth. In the time domain a downsample manifests as a reduction in resolution of level through time.
These statements are simply not sensibly refutable.



And, that does not mean that peaks must move. The question of a peak moving or not, using a linear-phase (constant delay) filter, is strictly a question of what frequency content is removed. No more, no less.

As phase and frequency are absolutely equal to a time delay, I think you'll need to readjust your thinking here quite a bit, and simply accept the reality.

I'll say it again.

Generate two gaussian pulses, one of amplitude .1 centered at 10kHz with bandwidth of 1 kHz (just for simplicity's sake). Center it at the third sample. Add to that sequence a gaussian pulse of amplitude 10, bandwidth 1kHz, at 30kHz, centered at the 5th sample.

Do this at a sampling rate of 96kHz.

Downsample this to 48 and back up. Use linear phase filters.

Wait. You don't even have to downsample. Just filter the 96kHz stream at 20kHz with a 192 tap FIR. Look at the results.

You didn't change the sampling rate. You DID move the peak. Why? Because you filtered out some frequency bands. That's all. No more, no less. Nothing special here.

Do the work.
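For the record, that experiment can be sketched in Java in a few dozen lines. The pulse positions, envelope width, and tap count below are my own choices (spread out so the two bursts don't overlap, and an odd tap count so the linear-phase delay is a whole number of samples); the principle is the one described above:

```java
public class PeakShiftDemo {
    static final int FS = 96000; // sample rate, Hz

    // Gaussian-envelope tone burst: amp * exp(-d^2/(2*sigma^2)) * cos(2*pi*fc*d/FS)
    static double[] burst(int n, int center, double amp, double fcHz, double sigma) {
        double[] x = new double[n];
        for (int i = 0; i < n; i++) {
            double d = i - center;
            x[i] = amp * Math.exp(-d * d / (2 * sigma * sigma))
                       * Math.cos(2 * Math.PI * fcHz * d / FS);
        }
        return x;
    }

    // Linear-phase lowpass: Hamming-windowed sinc; odd tap count makes
    // the group delay (taps-1)/2 an integer number of samples.
    static double[] lowpass(int taps, double cutoffHz) {
        double[] h = new double[taps];
        int m = taps - 1;
        double sum = 0;
        for (int i = 0; i < taps; i++) {
            double d = i - m / 2.0;
            double sinc = (d == 0) ? 2 * cutoffHz / FS
                        : Math.sin(2 * Math.PI * cutoffHz * d / FS) / (Math.PI * d);
            h[i] = sinc * (0.54 - 0.46 * Math.cos(2 * Math.PI * i / m));
            sum += h[i];
        }
        for (int i = 0; i < taps; i++) h[i] /= sum; // unity gain at DC
        return h;
    }

    // Convolve and shift left by the filter's group delay so input and
    // output sample indices line up.
    static double[] filter(double[] x, double[] h, int delay) {
        double[] y = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            double acc = 0;
            for (int k = 0; k < h.length; k++) {
                int j = i + delay - k;
                if (j >= 0 && j < x.length) acc += h[k] * x[j];
            }
            y[i] = acc;
        }
        return y;
    }

    static int peak(double[] x) { // index of the largest |sample|
        int best = 0;
        for (int i = 1; i < x.length; i++)
            if (Math.abs(x[i]) > Math.abs(x[best])) best = i;
        return best;
    }

    public static void main(String[] args) {
        int n = 2048;
        double[] x = burst(n, 400, 0.1, 10000, 14.0);    // small in-band pulse
        double[] big = burst(n, 480, 10.0, 30000, 14.0); // big out-of-band pulse
        for (int i = 0; i < n; i++) x[i] += big[i];

        double[] y = filter(x, lowpass(193, 20000), 96);
        System.out.println("peak before lowpass: sample " + peak(x)); // expect 480
        System.out.println("peak after  lowpass: sample " + peak(y)); // expect ~400
    }
}
```

With these numbers the strongest sample sits at the 30 kHz burst before filtering and at the 10 kHz burst after it: the peak "moves" by 80 samples purely because frequency content was removed, with no change of sampling rate at all.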


--------------------
-----
J. D. (jj) Johnston
Go to the top of the page
+Quote Post
ChiGung
post Nov 18 2006, 22:49
Post #117





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (Woodinville @ Nov 18 2006, 21:14) *
QUOTE (ChiGung @ Nov 18 2006, 10:04) *

QUOTE (Woodinville @ Nov 17 2006, 18:59) *
You can't have unlimited resolution without unlimited bandwidth.
Of course, that has been my point.
And, nobody has argued otherwise.

Not quite, but 'abilities' to locate details at "arbitrarily small" times in pcm samples keep being presented to me, to counter my claims that "pcm cannot reliably record the times of real events/conditions with subsample accuracy", nor maintain such accuracy with further downsampling/lowpassing.

QUOTE ("Woodinville")
This does not mean that peaks move.....
QUOTE ("chigung")
Your criticisms are not consistent....

My statements are absolutely consistent.

QUOTE ("me")
When I quoted you as implying peaks would not move due to lowpassing, you said I misquoted you

Well, I don't know what your position is; maybe that's the fault of your expression or my comprehension, or a combination of both. It happens.

QUOTE
I think that it is telling that you will not try a trivial, simple experiment, one that requires much less code than you already appear to have posted, that will show you some of the errors implicit in your complaint.

Maybe I shall some day, or you could just explain how it would turn out, as I've tried to do with my 'experiments'. But the thing is, I have already devised a valid grasp of comparing timeable details between samplerates/bandwidths, and I'm tired of your attempts to tutor me. I'm sure you are an excellent tutor, surely better than I am an attentive student wink.gif

I'm very interested to hear about other & better methods of locating any sort of conditions in time in a waveform, and we might be able to implement them here to improve our real-world study of PCM's time resolution.

regards'
cg

This post has been edited by ChiGung: Nov 18 2006, 22:51


--------------------
no conscience > no custom
Go to the top of the page
+Quote Post
ChiGung
post Nov 18 2006, 23:05
Post #118





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE
And, that does not mean that peaks must move. The question of a peak moving or not, using a linear-phase (constant delay) filter, is strictly a question of what frequency content is removed.

I have also restated this relationship throughout this thread.
But it follows that if frequency content is removed (and this is normally the case when transferring between common rates, such as 44kHz > 22kHz),
then the timings of waveform conditions - such as peaks - all throughout the waveform will be distorted.

This claim wasn't accepted, so I have started a programming project to not only prove it, but to record approximately how much conditions can move when their bandwidth/samplerate is reduced.

QUOTE
I'll say it again. You didn't change the sampling rate. You DID move the peak. Why? Because you filtered out some frequency bands. That's all. No more, no less. Nothing special here.


Thanks for that description. You have shown that a waveform will distort in time and level when its bandwidth is reduced. This is the very same mechanism that distorts waveforms when their sampling rate is reduced.
Observing the situation as you did there, you should acknowledge the same occurs from 96>44 or 44>24.

That was basically an experiment which proves the existence of the effect which I wrote the program to measure.
QUOTE
Do the work.

I have been.

This post has been edited by ChiGung: Nov 18 2006, 23:11


--------------------
no conscience > no custom
Go to the top of the page
+Quote Post
Canar
post Nov 18 2006, 23:10
Post #119





Group: Super Moderator
Posts: 3327
Joined: 26-July 02
From: princegeorge.ca
Member No.: 2796



QUOTE (ChiGung @ Nov 18 2006, 13:55) *
That is your misunderstanding - that "looking at a frequency domain transformation from a time domain perspective" is problematic. As both domains are equaly(sic) valid perspectives on the same data, they can be considered in parallel. To consider time resolution without reference to the time domain is folly, if not impossible.


I don't disagree. There's a blend of phenomena going on here, and I don't blame you for getting lost and confused. Try to follow this: You have data. You understand the meaning of this data in time-domain format. You apply a frequency-domain transform. You re-analyze the data and notice that the representation has changed quite markedly. From here, you are coming to the invalid conclusion that as the representation has changed, there's some loss of "time resolution", a term that is nigh meaningless.

The problem is simple: after the frequency transform, the form of the time-domain data is going to be completely different-looking and cannot be sensibly compared to the original time-domain data. In order for the initial and final set of data to be comparable, you must use a representation that allows for comparison. In particular, "peaks" or "nodes" are completely meaningless comparisons. As I and many others mentioned before, when a Dirac pulse is lowpassed, you get a time offset and multiple new peaks. The initial and final forms of the data are completely different, and comparisons using "peaks" are no longer valid. This is all due to the frequency-wise transformation. This transformation can also introduce a calculable, constant delay to the signal. These are known phenomena, but do not imply that "time resolution" as I understand it has changed at all.
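The Dirac-pulse claim is easy to verify numerically. A small, self-contained Java sketch (the tap count and cutoff are arbitrary choices of mine, not anything from the thread's program): send a single unit impulse through a Hamming-windowed-sinc lowpass and count the local maxima of the result's magnitude.

```java
public class ImpulseDemo {
    // Lowpass a unit impulse (cutoff fs/4 here) and count the resulting
    // local maxima of |y|: a bandlimited "single spike" rings into many
    // peaks, each one a candidate "node" for a peak finder.
    static int lowpassedImpulsePeaks(int taps) {
        double[] h = new double[taps]; // output = the filter's impulse response
        int m = taps - 1;
        for (int i = 0; i < taps; i++) {
            double d = i - m / 2.0;
            double sinc = (d == 0) ? 0.5 : Math.sin(Math.PI * d / 2) / (Math.PI * d);
            h[i] = sinc * (0.54 - 0.46 * Math.cos(2 * Math.PI * i / m)); // Hamming
        }
        int peaks = 0;
        for (int i = 1; i < taps - 1; i++)
            if (Math.abs(h[i]) > Math.abs(h[i - 1]) && Math.abs(h[i]) > Math.abs(h[i + 1])
                    && Math.abs(h[i]) > 1e-4) // ignore numerically negligible bumps
                peaks++;
        return peaks;
    }

    public static void main(String[] args) {
        System.out.println("peaks in the lowpassed impulse: " + lowpassedImpulsePeaks(63));
    }
}
```

One spike goes in and a whole train of local maxima comes out, which is why counting or matching "peaks" across a bandwidth change is ill-defined.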

QUOTE
In the time domain a downsample manifests as a reduction in resolution of level through time.


Could you restate this in other terms, please? I'm not understanding what you mean by this. I understand what downsampling does to time-domain data, but the way in which you state this is ambiguous. I suspect that by elaborating on this point, we may be able to get to the bottom of your misunderstanding.

This post has been edited by Canar: Nov 18 2006, 23:22


--------------------
∑:<
Go to the top of the page
+Quote Post
ChiGung
post Nov 18 2006, 23:19
Post #120





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (Canar @ Nov 18 2006, 22:10) *
QUOTE (ChiGung @ Nov 18 2006, 13:55) *
That is your misunderstanding - that "looking at a frequency domain transformation from a time domain perspective" is problematic. As both domains are equaly(sic) valid perspectives on the same data, they can be considered in parallel. To consider time resolution without reference to the time domain is folly, if not impossible.


I don't disagree. There's a blend of phenomena going on here, and I don't blame you for getting lost and confused.

You are incredible...... laugh.gif
I do have to laugh, even if it might get me in more trouble here.
I have to be honest. I'm not gonna be anyone's doormat.

QUOTE
QUOTE
In the time domain a downsample manifests as a reduction in resolution of level through time.

Could you restate this in other terms, please?

Afraid not, it should be easy to interprate if you are in a position to guide this thread.


--------------------
no conscience > no custom
Go to the top of the page
+Quote Post
Canar
post Nov 18 2006, 23:30
Post #121





Group: Super Moderator
Posts: 3327
Joined: 26-July 02
From: princegeorge.ca
Member No.: 2796



Ignoring the objective elements of my comments and focusing on the subjective ones will not advance your position here, I am afraid. If my position is so flawed and I am so clueless, then my statements should be absolutely child's play to refute.

QUOTE
Afraid not, it should be easy to interprate(sic) if you are in a position to guide this thread.


All I was asking for was something simple. Your phrasing was ambiguous and non-rigorous. I was hoping to gain insight into your position by having you restate it in other terms. But go ahead, disregard whatever you like. Your character is showing.

This post has been edited by Canar: Nov 18 2006, 23:33


--------------------
∑:<
Go to the top of the page
+Quote Post
ChiGung
post Nov 18 2006, 23:39
Post #122





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (Canar @ Nov 18 2006, 22:30) *
Ignoring the objective elements of my comments and focusing on the subjective ones will not advance your position here,

I'm sorry, but I'll have to pass. I refer others to the rest of my multitudious replies, amounting to thousands of words now, several illustrations and a runnable statistics program. Whatever clarifications you would like to present to others, you are of course free to do so without my consideration.

bye


--------------------
no conscience > no custom
Go to the top of the page
+Quote Post
Canar
post Nov 18 2006, 23:44
Post #123





Group: Super Moderator
Posts: 3327
Joined: 26-July 02
From: princegeorge.ca
Member No.: 2796



QUOTE
I refer others to the rest of my multitudious replies...
...all of which have been refuted quite thoroughly insofar as they present "new" ideas.

QUOTE
several illustrations...
...none of which prove anything new.

QUOTE
and runnable statistics program...
...which again proves nothing new.

So, at the end, you step away, having proven nothing other than that you like to talk about things that other people seem to think you don't really understand.

This post has been edited by Canar: Nov 18 2006, 23:54


--------------------
∑:<
Go to the top of the page
+Quote Post
ChiGung
post Nov 19 2006, 00:34
Post #124





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



I think this thing might be working smile.gif
CODE
Here is sample nofun examined at 96kHz (bandlimited interpolation) >with frequency response to 22kHz
Compared to itself just relowpassed at 22kHz (for experimental control) -
sox.exe was used for lowpassing (sinc windowed),
and lowpass was checked in a frequency spectrum to be good, but a slightly gradual cutoff...

---------- Run java ----------
Reading innofun9622.wav
  Opening nofun9622.wav
  SampleRate:96000
  Stereo input
  Samples 1006876
  Trackime at 96000Hz :10.488

And Reading nofun96.wav
  Opening nofun96.wav
  SampleRate:96000
  Stereo input
  Samples 1006877
  Trackime at 96000Hz :10.488

Only reading first 2.133 seconds (of 1 channel)(Change chnkread for more)
Locating Nodes...

Total nodes in: nofun9622.wav:10078
Total nodes in: nofun96.wav:10129

Missednodes=297, discreps=9781, Searchednodes=10078

  9776                                           ***                                  
  9165                                           ***                                  
  8554                                           ***                                  
  7943                                           ***                                  
  7332                                           ***                                  
  6721                                           ***                                  
  6110                                           ***                                  
  5499                                           ***                                  
  4888                                           ***                                  
  4277                                           ***                                  
  3666                                           ***                                  
  3055                                           ***                                  
  2444                                           ***                                  
  1833                                           ***                                  
  1222                                           ***                                  
   611                                           ***                                  
Sums:     0    1    0    1    0    0    0    0 9776    0    0    1    0    1    0    1
Devs:   480  416  352  288  224  160   96   32   32   96  160  224  288  352  416  480

Distribution chart, range=1024n (256n=1 sample)

Normal Termination

>> Here is 22kHz lowpassed, compared to 11kHz lowpass (simulating 22kHz downsample)

---------- Run java ----------
Reading innofun9622.wav
  Opening nofun9622.wav
  SampleRate:96000
  Stereo input
  Samples 1006876
  Trackime at 96000Hz :10.488

And Reading nofun9611.wav
  Opening nofun9611.wav
  SampleRate:96000
  Stereo input
  Samples 1006876
  Trackime at 96000Hz :10.488

Only reading first 2.133 seconds (of 1 channel)(Change chnkread for more)

Locating Nodes...

Total nodes in: nofun9622.wav:10078
Total nodes in: nofun9611.wav:7807

Missednodes=3320, discreps=6758, Searchednodes=10078

  3280                                           ***                                  
  3075                                           ***                                  
  2870                                           ***                                  
  2665                                           ***                                  
  2460                                           ***                                  
  2255                                           ***                                  
  2050                                           ***                                  
  1845                                           ***                                  
  1640                                           ***                                  
  1435                                           ***                                  
  1230                                           ***                                  
  1025                                      ***  ***                                  
   820                                      ***  ***                                  
   615                                      ***  ***                                  
   410                                      ***  ***                                  
   205        ***  ***                 ***  ***  ***  ***                 ***  ***    
Sums:    34  226  271  189   96   72  247 1166 3280  297   51   75  177  286  250   41
Devs:   480  416  352  288  224  160   96   32   32   96  160  224  288  352  416  480

Distribution chart, range=1024n (256n=1 sample)

Normal Termination

>> Here is the 22kHz lowpass compared to a 5.5kHz lowpass... (band-limiting done at 96kHz)


---------- Run java ----------
Reading innofun9622.wav

  Opening nofun9622.wav
  SampleRate:96000
  Stereo input
  Samples 1006876
  Trackime at 96000Hz :10.488

And Reading nofun9605.wav

  Opening nofun9605.wav
  SampleRate:96000
  Stereo input
  Samples 1006876
  Trackime at 96000Hz :10.488

Only reading first 2.133 seconds (of 1 channel)(Change chnkread for more)

Locating Nodes...

Total nodes in: nofun9622.wav:10078
Total nodes in: nofun9605.wav:1455

Missednodes=9046, discreps=1032, Searchednodes=10078


   161                                           ***                                  
   150                                           ***                                  
   140                                           ***                                  
   130                                           ***                                  
   120                                           ***                                  
   110                                           ***                                  
   100                                           ***                                  
    90                                           ***                                  
    80                                           ***            ***                    
    70                                      ***  ***            ***                    
    60             ***  ***  ***  ***  ***  ***  ***       ***  ***  ***              
    50        ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***          
    40        ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***    
    30        ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***    
    20   ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***    
    10   ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***  ***
Sums:    26   57   64   67   66   68   63   70  161   56   69   84   69   52   44   16
Devs:   480  416  352  288  224  160   96   32   32   96  160  224  288  352  416  480

Distribution chart, range=1024n (256n=1 sample)

Normal Termination


It does strike me that a central spike persists.
I'll have to go through the code again before I can be sure it is not a buggy artifact of the program. whistling.gif
Food for thought, though.
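For reference, the node-locating step could be sketched along these lines. This is a hypothetical reconstruction, not ChiGung's actual code: it assumes "nodes" means zero crossings, located to sub-sample precision by linear interpolation between the two samples that bracket zero.

```java
public class NodeFinder {
    // Returns fractional sample positions where the signal crosses zero.
    static double[] findNodes(double[] x) {
        java.util.List<Double> nodes = new java.util.ArrayList<>();
        for (int i = 1; i < x.length; i++) {
            if ((x[i - 1] < 0 && x[i] >= 0) || (x[i - 1] > 0 && x[i] <= 0)) {
                // Linear interpolation between the two samples bracketing zero.
                double frac = x[i - 1] / (x[i - 1] - x[i]);
                nodes.add((i - 1) + frac);
            }
        }
        double[] out = new double[nodes.size()];
        for (int i = 0; i < out.length; i++) out[i] = nodes.get(i);
        return out;
    }

    public static void main(String[] args) {
        // A 1kHz sine sampled at 96kHz crosses zero every 48 samples.
        double[] x = new double[384];
        for (int n = 0; n < x.length; n++)
            x[n] = Math.sin(2 * Math.PI * 1000 * n / 96000.0);
        for (double p : findNodes(x)) System.out.printf("%.3f%n", p);
    }
}
```

Comparing node positions from two versions of the same signal, as the runs above do, then amounts to pairing up nearby nodes and histogramming the position differences.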

cheers'
cg

This post has been edited by ChiGung: Nov 19 2006, 00:35


--------------------
no conscience > no custom
Woodinville
post Nov 19 2006, 02:18
Post #125





Group: Members
Posts: 1401
Joined: 9-January 05
From: JJ's office.
Member No.: 18957



QUOTE (ChiGung @ Nov 18 2006, 13:49) *
I'm very interested to hear about other and better methods of locating any sort of condition in time in a waveform; we might be able to implement them here to improve our real-world study of PCM's time resolution.

regards'
cg



Then how come you won't accept them when they are offered? Try it. Learn.

QUOTE (ChiGung @ Nov 18 2006, 15:34) *
It does strike me that a central spike persists.
I'll have to go through the code again before I can be sure it is not a buggy artifact of the program. whistling.gif
Food for thought, though.

cheers'
cg



I have a suggestion: go load Cygwin, an X server, and Octave.

It will save you lots of time, and eventually you'll understand that phase shift and time delay are the same thing, that you can build any real-world waveform out of continuous sine waves, and so on.
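The phase-shift/time-delay point is easy to check numerically. A minimal sketch (mine, not from the thread): for a single sine of frequency f at sample rate fs, delaying by d samples is exactly the same as shifting its phase by 2·pi·f·d/fs radians, since sin(w·(n - d)) = sin(w·n - w·d). For a broadband signal, a pure delay is a phase shift proportional to frequency (linear phase).

```java
public class PhaseDelay {
    // Max |sin(w(n-d)) - sin(wn - wd)| over count samples: a pure time delay
    // of a sine equals a phase shift of w*d radians, so this is ~0.
    static double maxDifference(double f, double fs, double delaySamples, int count) {
        double w = 2 * Math.PI * f / fs;
        double phase = w * delaySamples;
        double maxErr = 0;
        for (int n = 0; n < count; n++) {
            double delayed = Math.sin(w * (n - delaySamples));
            double shifted = Math.sin(w * n - phase);
            maxErr = Math.max(maxErr, Math.abs(delayed - shifted));
        }
        return maxErr;
    }

    public static void main(String[] args) {
        // 1kHz sine at 96kHz, delayed by 3.5 samples -- not a whole sample.
        System.out.println("max difference: " + maxDifference(1000, 96000, 3.5, 960));
    }
}
```

Note the delay here is 3.5 samples: a delay does not have to land on a sample instant to be representable.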


--------------------
-----
J. D. (jj) Johnston
