Getting the best vinyl transfer using different copies
odyssey
post Feb 3 2008, 21:22
Post #1








I was wondering what the best approach is to get the absolute best vinyl transfer. It came to my mind that while it's possible to subtract two different tracks from each other to get the difference, wouldn't it be possible to extract the similarities of two tracks?

If this is doable, what if one had two copies of the same vinyl and made identical transfers of both? Then you should have two pretty similar tracks, with mostly just the noise differing between them, right?

Any thoughts?


--------------------
Can't wait for a HD-AAC encoder :P
AndyH-ha
post Feb 3 2008, 23:40
Post #2








If you try making two, or more, transfers of the same track from a single LP, you will find each is different when examined closely. One factor is that TT speed is never constant enough from the digital timing viewpoint, even when you can’t hear any problems. Another is that there may be modifications of the vinyl itself; again too small to be heard, but still obvious in a close-up on-screen examination. A third, I believe, is that there is always some dirt, no matter how careful your cleaning. This interacts with the stylus, at least in minor ways, and modifies the playback.

Try taking any audio track on your hard drive. When you do the ‘subtraction’ you are referring to with the file itself, the result is all digital zeros; there is no difference. Now offset the process by only one sample point, i.e. start one copy at sample 1 and the other at sample 2. This will produce a major difference. It will probably play back as a more or less complete, but poorer quality, copy of the original.
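This is easy to check numerically. A minimal sketch (mine, using a synthetic 1 kHz tone in place of a real transfer):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs              # one second of samples
x = np.sin(2 * np.pi * 1000 * t)    # 1 kHz tone standing in for a recording

# Subtracting the file from itself: all digital zeros.
self_diff = x - x

# Offsetting one copy by a single sample: a large residual.
offset_diff = x[1:] - x[:-1]

print(np.max(np.abs(self_diff)))    # 0.0
print(np.max(np.abs(offset_diff)))  # about 0.14, i.e. roughly -17 dB re full scale
```

The one-sample difference of a sinusoid is itself a sinusoid of amplitude 2·sin(π·f/fs), which is why the residual is far from silence.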

What I'm saying is that I don't think you could do anything useful by this route using two separate LPs. If you have two (or more) damaged LPs, you can probably make one better copy doing cut and paste between them, selecting the best from each. It will require careful work to make all the cuts and joins flow seamlessly.
pdq
post Feb 4 2008, 00:08
Post #3








@AndyH-ha: I tend to disagree with you on this one. While I am not aware of any software to do this, I can see no theoretical reason why it couldn't be done. Cross-correlation within a small time region would give you the exact time offset between the two signals to a fraction of a sample; then simply offset one of them appropriately and extract whatever is correlated between them. In fact I would be surprised if something like this hadn't been developed for applications like the space program, radio astronomy, etc.
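As a sketch of the idea (mine, with a made-up two-tone signal and a known 17-sample offset rather than real transfers), the integer part of the offset falls out of a single cross-correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
n = np.arange(4096)
source = np.sin(2 * np.pi * 440 * n / fs) + 0.5 * np.sin(2 * np.pi * 1230 * n / fs)

true_lag = 17  # pretend transfer B starts 17 samples late
a = source[:2048] + 0.05 * rng.standard_normal(2048)
b = source[true_lag:true_lag + 2048] + 0.05 * rng.standard_normal(2048)

# Cross-correlate the two windows; the peak position gives the offset
# of B relative to A.
corr = np.correlate(a, b, mode="full")
lag = np.argmax(corr) - (len(b) - 1)
print(lag)  # 17
```

With real transfers the offset also drifts over time, so this would have to be repeated block by block rather than once per track.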
AndyH-ha
post Feb 4 2008, 07:24
Post #4








Nothing exists to a fraction of a sample. The signal has been sampled. You have no information about what went on between samples.

The timing of two separate recordings from a TT is not going to be the same to anywhere near sample-level accuracy. You can probably synch the two recordings of the same LP to play together so that one does not notice much difference, but the comparison under discussion depends on exact sample-to-sample matches.

Suppose you have some way of saying this sample at the beginning of recording 1 correlates to that sample near the beginning of recording 2. What would you then do that might be useful?

Your proposition seems to me to apply to two separate but simultaneous recordings of a single event. In this case we are talking about separate recordings of separate events.
pdq
post Feb 4 2008, 13:20
Post #5








QUOTE (AndyH-ha @ Feb 4 2008, 02:24) *
Nothing exists to a fraction of a sample. The signal has been sampled. You have no information about what went on between samples.

According to Nyquist we know exactly what happened between samples, as long as the signal was band-limited to half the sampling frequency.
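This is worth seeing concretely. A small sketch (mine): take a tone sampled well below Nyquist and reconstruct a value exactly halfway between two samples with Whittaker-Shannon (sinc) interpolation.

```python
import numpy as np

fs = 8.0   # sampling rate, arbitrary units
f = 1.0    # tone frequency, well below fs/2
n = np.arange(-200, 200)
samples = np.sin(2 * np.pi * f * n / fs)

# Reconstruct the waveform at t = 10.5 (between samples 10 and 11)
# from the samples alone, using the sinc interpolation formula.
t = 10.5
reconstructed = np.sum(samples * np.sinc(t - n))
true_value = np.sin(2 * np.pi * f * t / fs)
print(abs(reconstructed - true_value))  # small; limited only by truncating the sum
```

With an infinite (or properly windowed) sum the reconstruction is exact for any band-limited signal; the truncated sum here already lands very close.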
QUOTE (AndyH-ha @ Feb 4 2008, 02:24) *
The timing of two separate recordings from a TT is not going to be the same to anywhere near sample-level accuracy. You can probably synch the two recordings of the same LP to play together so that one does not notice much difference, but the comparison under discussion depends on exact sample-to-sample matches.

Suppose you have some way of saying this sample at the beginning of recording 1 correlates to that sample near the beginning of recording 2. What would you then do that might be useful?

Your proposition seems to me to belong to two separate but simultaneous recordings of a single event. In this case we are talking about separate recordings of separate events.

We are not talking about separate recordings but two pieces of vinyl pressed from the same master. As for timing, long-term timing will certainly drift by many milliseconds, probably even seconds, but in the short term, if we break up the tracks into short segments, say 100 milliseconds or less, correlate the tracks at the beginning and end of the segment, and adjust one to match the other, then you should get sub-sample matching over that time frame.
2Bdecided
post Feb 4 2008, 13:43
Post #6


ReplayGain developer





It's been proposed before (even back in the 1980s) but I haven't seen anyone make this work...yet!

Cheers,
David.


knutinh
post Feb 4 2008, 14:21
Post #7








I do think that averaging several recordings of a vinyl record could average out noise or inaccuracy. This could even be exploited by recording the exact same record multiple times, if the noise is generated by a time-variant process (such as dirt/dust that moves around, or amplifier noise).

The practical problem is that the time/phase-variation has to be extracted from the same source as the noisy signal that is to be averaged out.

You basically have something like:

x(t) being the "true" analog signal recorded in the studio
xi(t) being the signal on a physical record given perfect playback-speed/phase
xi(t+vi(t)) being the same signal with time-base variation
ni(t) being the sum of stochastic and deterministic noise-signal across time and different records
yi(t) being the signal available from the RIAA-stage

y1(t) = x1(t+v1(t)) + n1(t)

By compensating for the time/phase-variation, you want to calculate:
y_mean(t) = {x1(t) + n1(t) + x2(t) + ... + xn(t) + nn(t)}/N

As long as y1...yn are strongly correlated, while n1...nn are uncorrelated, you will have amplitude addition for the signal and power addition for the noise. A good thing :-)

Calculating the block-wise cross-correlation to find the "best match" may be a starting point; then interpolate the offset times and apply a slowly varying continuous resampler to offset one against the other.

You might find that even though a clever implementation reduces noise, it may sound perceptually worse due to artefacts of blending slightly offset sounds, similar to a chorus-effect.

There are some interesting projects to scan vinyl records using flatbed scanners. Perhaps some of the algorithms for matching separate scans of different parts could have some use?

-k

This post has been edited by knutinh: Feb 4 2008, 14:42
Axon
post Feb 4 2008, 18:15
Post #8








Averaging only works, mathematically, for randomly distributed samples. That is, there needs to be some distortion that affects the signal randomly, so that averaging the recordings averages out the distortions to (hopefully) achieve something closer to the original. Most vinyl distortions don't work like that. They are correlated, one way or another, with the number of plays.

For instance, high frequency loss due to groove wear (small though that might be) is correlated to the number of plays: If you record something 100 times, and average the results, you're always going to get something of inferior frequency response to the very first play.

Another example: the stylus is believed by some people to scrape dust off the groove, and essentially polish it, when played back repeatedly with a high quality stylus. That is, quality improves with the number of plays. So if you averaged 100 recordings you might get worse performance than the last play.
So beyond the obvious (and very hard) time-sync issues, averaging recordings just doesn't make sense with a lot of the distortions vinyl is known for.
odyssey
post Feb 4 2008, 19:22
Post #9








QUOTE (knutinh @ Feb 4 2008, 14:21) *
There are some interesting projects to scan vinyl records using flatbed scanners. Perhaps some of the algorithms for matching separate scans of different parts could have some use?

These look interesting, but in practice it seems impossible, looking at this site.
Maybe you have another reference?

I believe I've read something about a liquid transfer, but couldn't find anything on Google. Anyone know more about this?


--------------------
Can't wait for a HD-AAC encoder :P
pdq
post Feb 4 2008, 19:51
Post #10








QUOTE (Axon @ Feb 4 2008, 13:15) *
Averaging only works, mathematically, for randomly distributed samples. That is, there needs to be some distortion that affects the signal randomly, so that averaging the recordings averages out the distortions to (hopefully) achieve something closer to the original. Most vinyl distortions don't work like that. They are correlated, one way or another, with the number of plays.

For instance, high frequency loss due to groove wear (small though that might be) is correlated to the number of plays: If you record something 100 times, and average the results, you're always going to get something of inferior frequency response to the very first play.

Another example: the stylus is believed by some people to scrape dust off the groove, and essentially polish it, when played back repeatedly with a high quality stylus. That is, quality improves with the number of plays. So if you averaged 100 recordings you might get worse performance than the last play.
So beyond the obvious (and very hard) time-sync issues, averaging recordings just doesn't make sense with a lot of the distortions vinyl is known for.

What was being proposed was not repeated plays of the same record, but two separate records pressed from the same master. Obviously the result can never be better than either record in terms of frequency response etc.; that was not being suggested. Nor were we talking about "averaging" the two. What was being discussed was correlating the information between the two in such a way as to reject, as much as possible, the effects of dirt and scratches on either record, since those would be uncorrelated.
knutinh
post Feb 4 2008, 20:46
Post #11








QUOTE (Axon @ Feb 4 2008, 18:15) *
Averaging only works, mathematically, for randomly distributed samples.

Of course.
QUOTE
So beyond the obvious (and very hard) time sync issues, averaging recordings just doesn't make sense with a lot of the distortions vinyl is known for.

I am by no means an expert on noise mechanisms in vinyl, but I am guessing that two records from the same pressing will have quite a lot of scratches, dust, etc. that are stochastic.

High-frequency loss is not "noise" in my book.

-k

QUOTE (pdq @ Feb 4 2008, 19:51) *
What was being proposed was not repeated plays of the same record, but two separate records pressed from the same master. Obviously the result can never be better that either record in terms of frequency response etc., that was not being suggested. Nor were we talking about "averaging" the two. What was being discussed was correlating the information between the two in such a way as to reject as much as possible the effects of dirt and scratches on either record, since that would be uncorrelated.

I believe that playing back the same record twice was my proposal. It is really a twist on the same idea, depending on whether the noise mechanisms can be considered uncorrelated.

For two instances I think that averaging is quite good. For three or more you might get better results taking the median, but averaging still is quite good and robust.

What are you suggesting then? Some statistical model of each error-mechanism and correlating that to the channel-differences?

-k
DVDdoug
post Feb 4 2008, 22:53
Post #12








Odyssey,

I've thought about this too... alternating between two recordings and keeping the best one at any moment. But I've never had two copies of a vinyl record on hand, and it would be very tedious (and error-prone) to cut & paste "by hand". It would be fairly easy to write software that allows you to manually toggle between two recordings, but I don't know if any existing audio editors make this process easy.

Here's an experiment you can try that might help determine if this is at all possible:
Record the same record twice, align, and subtract. If you get silence, it is possible. If you have two copies of a record, you might try subtracting the two copies. If you are left with only the clicks & pops, this again confirms the possibility of this working. But I suspect Andy is right, and you will get some sort of noise... it might be an interesting experiment.

That's not going to give you the final result, but it will tell you if it's possible to isolate the noise.

Now in order to completely automate this process, I think you would need three copies of the record (and some special software). Wherever two (or 3) samples agree, this "data" is good. Where one sample is different from the other two, the data is bad.
knutinh
post Feb 5 2008, 10:11
Post #13








QUOTE (DVDdoug @ Feb 4 2008, 22:53) *
Here's an experiment you can try that might help determine if this is at all possible:
Record the same record twice, align, and subtract. If you get silence, it is possible. If you have two copies of a record, you might try subtracting the two copies. If you are left with only the clicks & pops, this again confirms the possibility of this working. But I suspect Andy is right, and you will get some sort of noise... it might be an interesting experiment.

I have tried this with CDs. The drift/jitter is large enough that eventually the two tracks will drift one or more samples out of phase, even if they are matched at the beginning. Try comparing 20 kHz tones that are offset by half a wavelength :-)

Continuous correlation/PLL/something is needed. Now, practically _any_ noise can be modelled by allowing each sample to point to anywhere else, and in that case you have no noise "left" to remove. Clearly not what you want. So you have to establish some limits on how "smooth" the time-variation is likely to be, and minimize the error given that. Then any remaining error should be defined as noise and eliminated.
QUOTE
Now in order to completely automate this process, I think you would need three copies of the record (and some special software). Wherever two (or 3) samples agree, this "data" is good. Where one sample is different from the other two, the data is bad.

I don't think so. Any correlated noise/degradation cannot be removed (this way) anyway. Non-correlated noise can be reduced by simply taking the average of 2, 3 (or generally N) records. By adding N such tracks, you get an SNR of N*signal/(sqrt(N)*noise). By taking the average of two records _given that all noise is non-correlated_, we get an improvement of sqrt(2) in SNR, or 3 dB.
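The 3 dB figure for two aligned copies is easy to verify numerically. A sketch of mine, with white noise standing in for surface noise and alignment assumed perfect:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
clean = np.sin(2 * np.pi * np.arange(n) / 100)  # the "true" signal

def snr_db(reference, noisy):
    err = noisy - reference
    return 10 * np.log10(np.mean(reference ** 2) / np.mean(err ** 2))

# Two perfectly aligned "transfers" with independent noise of equal level
y1 = clean + 0.3 * rng.standard_normal(n)
y2 = clean + 0.3 * rng.standard_normal(n)

gain = snr_db(clean, (y1 + y2) / 2) - snr_db(clean, y1)
print(round(gain, 2))  # close to 3.01 dB, i.e. 10*log10(2)
```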

Now, there might be other (possibly better) methods of comparing N records, median being mentioned.

-k

This post has been edited by knutinh: Feb 5 2008, 10:12
bryant
post Feb 6 2008, 18:24
Post #14


WavPack Developer





Several years ago I started thinking about this exact technique. At first the idea was to just make it possible to unambiguously determine whether or not a transient noise was a real click or was part of the source material. Once the undesired clicks were identified, the identical source from the clean copy could be substituted. However, as I thought about it further I realized that much more could potentially be achieved.

As was mentioned above, if the two sources are perfectly aligned and then summed together, the original source material is boosted 6 dB while all uncorrelated noise would be boosted only 3 dB (or less, depending on its relative presence in the two recordings). This is a well understood technique for noise reduction.

The speed variations between the two transfers could also be averaged. In other words, the final merged version would represent the temporal midpoint of the two recordings (in addition to being the amplitude midpoint). With multiple transfers (even of the same LP) this technique could reduce the magnitude of turntable wow and flutter to arbitrarily low levels.

Although this idea would work best with two (or more) copies of the same LP (or tape), some of the benefit could even be achieved with multiple transfers of the same copy. For example, the effects of turntable speed variation, turntable rumble and preamplifier hiss would all be reduced (although preamp hiss is likely to be well below the LP noise floor anyway).

The short story is that in the middle of last year (2007) I took off from work and began to implement this. I knew it was going to be difficult to get the transfers to be aligned perfectly enough for it to work well. Ideally, it should be possible to align the sources well enough that when they were subtracted you would hear only the clicks and surface noise (the desired signal would be completely canceled), and I estimated that this would require alignment accurate to within 1/1000 of a sample (at 44 kHz that would be about 20 ns of jitter).

Well, it turned out to be even harder than I originally thought, and the resulting program is very complicated, rather slow, and doesn't always work correctly. I had lots of improvements in mind and wanted to work further on it, but unfortunately it was time to go back to work (you know, food :-)) and I haven't had a chance to get back to it since. However, I did get it to a point where I successfully processed a few tracks, and the results indicate that the method is certainly viable and promising.
bhoar
post Feb 6 2008, 20:47
Post #15








My expectation with the above approach would be that you'd probably end up with a "chorus" effect after a few additions. Sample-edge alignment is possible... the issue is that each digitization will be skewed by some non-integer (sub-)sample length. Some sort of super-duper oversampling (beyond the point where the timing smear is audible to humans) might be necessary.

Or perhaps we only notice the phase/alignment mismatch at rougher sampling levels already?

In addition, skew from the turntable itself (platter, motor, tonearm, etc. non-linearity) seems like an unfun issue as well.

-brendan


--------------------
Hacking CD Robots & Autoloaders: http://hyperdiscs.pbwiki.com/
DVDdoug
post Feb 6 2008, 21:06
Post #16








QUOTE (knutinh @ Feb 5 2008, 01:11) *
QUOTE

Now in order to completely automate this process, I think you would need three copies of the record (and some special software). Wherever two (or 3) samples agree, this "data" is good. Where one sample is different from the other two, the data is bad.

I don't think so. Any correlated noise/degradation cannot be removed (this way) anyway. Non-correlated noise can be reduced by simply taking the average of 2, 3 (or generally N) records. By adding N such tracks, you get an SNR of N*signal/(sqrt(N)*noise). By taking the average of two records _given that all noise is non-correlated_, we get an improvement of sqrt(2) in SNR, or 3 dB.

Now, there might be other (possibly better) methods of comparing N records, median being mentioned.

-k
NOTE - I'm not saying this will work with real vinyl transfers in the "real world".

But sure, you can find and remove the noise (error) with only 3 records/files, if the noise is infrequent enough that it occurs on only one recording at any point in time... You don't have to do any averaging... You just throw out the bad data. A digital audio file is just a series of integers. Look at the following 3 series of numbers and you can easily see the mismatched data, and it's equally easy to make a new, corrected series:

File1   File2   File3   Corrected
1000    1000    1000    1000
1024    1024    1023    1024
1111    1111    1111    1111
1230    1234    1234    1234
1005    1003    1003    1003
1997    1997    1897    1997
1500    1500    1500    1500
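The "Corrected" column above is exactly a per-sample 2-of-3 vote, which for three values is the same as taking the median; a sketch:

```python
# Three transfers of the same seven samples (the table above), with the
# occasional bad value appearing in only one file at a time.
file1 = [1000, 1024, 1111, 1230, 1005, 1997, 1500]
file2 = [1000, 1024, 1111, 1234, 1003, 1997, 1500]
file3 = [1000, 1023, 1111, 1234, 1003, 1897, 1500]

# Median of three == whatever value at least two of the files agree on.
corrected = [sorted(triple)[1] for triple in zip(file1, file2, file3)]
print(corrected)  # [1000, 1024, 1111, 1234, 1003, 1997, 1500]
```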
saratoga
post Feb 6 2008, 21:17
Post #17








QUOTE (bhoar @ Feb 6 2008, 14:47) *
My expectation with the above approach would be that you'd probably end up with a "chorus" effect after a few additions. Sample-edge alignment is possible... the issue is that each digitization will be skewed by some non-integer (sub-)sample length. Some sort of super-duper oversampling (beyond the point where the timing smear is audible to humans) might be necessary.


The ultrasound people have studied this process in great detail, as it's essential for cardiac flow measurements. As it turns out, at reasonable SNRs you can very, very precisely locate the time offset for a pair of recordings of the same event. Typically you can do so to a tiny fraction of the sampling period using oversampling + cross-correlation.
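One standard trick in the same spirit (not necessarily what the ultrasound literature does exactly) is to fit a parabola through the cross-correlation peak and its two neighbours to get a fractional-sample lag. A sketch of mine, with a synthetic 3.3-sample delay applied via the FFT shift theorem:

```python
import numpy as np

N = 4096
n = np.arange(N)
# Two tones at exact FFT bin frequencies, so a circular fractional delay is exact
x = np.sin(2 * np.pi * 41 * n / N) + 0.4 * np.sin(2 * np.pi * 93 * n / N)

true_delay = 3.3  # samples; deliberately non-integer
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(N)  # cycles per sample
y = np.fft.irfft(X * np.exp(-2j * np.pi * freqs * true_delay))  # x delayed 3.3 samples

corr = np.correlate(x, y, mode="full")
k = int(np.argmax(corr))
# Parabolic (three-point) interpolation around the integer correlation peak
c0, c1, c2 = corr[k - 1], corr[k], corr[k + 1]
frac = 0.5 * (c0 - c2) / (c0 - 2 * c1 + c2)
est = (len(y) - 1) - (k + frac)
print(round(est, 2))  # close to 3.3
```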

QUOTE (bhoar @ Feb 6 2008, 14:47) *
In addition, skew from the turntable itself (platter, motor, tonearm, etc. non-linearity) seem like unfun issues as well.


I'm not sure they matter so much. You could divide the signal into small enough windows that each was essentially constant, oversample, correlate and realign. Finally lowpass filter and decimate. Assuming they weren't audible to begin with, they won't be after processing either I think.
pdq
post Feb 6 2008, 21:30
Post #18








QUOTE (bryant @ Feb 6 2008, 13:24) *
Ideally, it should be possible to align the sources well enough that when they were subtracted you would hear only the clicks and surface noise (the desired signal would be completely canceled), and I estimated that this would require alignment accurate to within 1/1000 of a sample (at 44 kHz that would be about 20 ns of jitter).

While I agree that complete cancellation of the correlated signal would be a neat demonstration of the technique, I don't think that what we are discussing here requires anywhere near that alignment accuracy.

Let's say that you want to add the two signals and get double the correlated content, but less than double the uncorrelated content. If two sine waves of, say, 10 kHz are added, the effect of a small phase shift on the amplitude is just the cosine of half the phase angle. For example, if the sine waves are 10 degrees out of phase then the resulting amplitude is only lowered by about 0.4%. 10 degrees of a 10 kHz sine wave is 2.78 microseconds, or about 0.12 sample intervals at 44.1 kHz.
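Checking the arithmetic (mine): adding two unit sines with phase difference φ gives amplitude 2·cos(φ/2), so relative to perfect alignment the loss factor is cos(φ/2).

```python
import math

f = 10e3            # 10 kHz tone
fs = 44.1e3
phase_deg = 10.0    # phase error between the two copies

atten = math.cos(math.radians(phase_deg) / 2)      # amplitude factor cos(phi/2)
loss_pct = (1 - atten) * 100
t_err = (phase_deg / 360) / f                      # equivalent time error, seconds

print(f"{loss_pct:.2f}% amplitude loss")                    # 0.38%
print(f"{t_err * 1e6:.2f} us = {t_err * fs:.2f} samples")   # 2.78 us = 0.12 samples
```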

As far as the "chorus effect", I think the signals would have to differ by one or more cycles for this to happen. Less than that and the amplitude is affected but the ear still hears only one signal.
bryant
post Feb 7 2008, 08:19
Post #19


WavPack Developer





QUOTE (pdq @ Feb 6 2008, 12:30) *
QUOTE (bryant @ Feb 6 2008, 13:24) *

Ideally, it should be possible to align the sources well enough that when they were subtracted you would hear only the clicks and surface noise (the desired signal would be completely canceled), and I estimated that this would require alignment accurate to within 1/1000 of a sample (at 44 kHz that would be about 20 ns of jitter).

While I agree that complete cancelation of the correlated signal would be a neat demonstration of the technique, I don't think that what we are discussing here requires anywhere near that alignment accuracy.

Let's say that you want to add the two signals and get double the correlated content, but less than double the uncorrelated content. If two sine waves of, say, 10 kHz are added, the effect of a small phase shift on the amplitude is just the cosine of half the phase angle. For example, if the sine waves are 10 degrees out of phase then the resulting amplitude is only lowered by about 0.4%. 10 degrees of a 10 kHz sine wave is 2.78 microseconds, or about 0.12 sample intervals at 44.1 kHz.

Yes, if the alignment error was constant, you're absolutely right that it would just result in HF attenuation.

But there's also jitter in the error which would make it more likely to be audible. In this case I was able to low-pass the jitter because I knew the offset difference between the transfers would be low-passed by the turntable platter's mass. In the case of tape transfers though this might not be the case.

In any event, inverse mixing the signals was an easy way to locate spots where the matching algorithm was getting off... :-)
krabapple
post Feb 7 2008, 09:36
Post #20








QUOTE (odyssey @ Feb 4 2008, 13:22) *
QUOTE (knutinh @ Feb 4 2008, 14:21) *

There are some interesting projects to scan vinyl records using flatbed scanners. Perhaps some of the algorithms for matching separate scans of different parts could have some use?

These look interesting, but in practice it seems impossible, looking at this site.
Maybe you have another reference?



Something a little more technologically advanced:

http://irene.lbl.gov/

This post has been edited by krabapple: Feb 7 2008, 10:02
knutinh
post Feb 7 2008, 10:11
Post #21








QUOTE (DVDdoug @ Feb 6 2008, 21:06) *
QUOTE (knutinh @ Feb 5 2008, 01:11) *
QUOTE

Now in order to completely automate this process, I think you would need three copies of the record (and some special software). Wherever two (or 3) samples agree, this "data" is good. Where one sample is different from the other two, the data is bad.

I don't think so. Any correlated noise/degradation cannot be removed (this way) anyway. Non-correlated noise can be reduced by simply taking the average of 2, 3 (or generally N) records. By adding N such tracks, you get an SNR of N*signal/(sqrt(N)*noise). By taking the average of two records _given that all noise is non-correlated_, we get an improvement of sqrt(2) in SNR, or 3 dB.

Now, there might be other (possibly better) methods of comparing N records, median being mentioned.

-k
NOTE - I'm not saying this will work with real vinyl transfers in the "real world".

But sure, you can find and remove the noise (error) with only 3 records/files, if the noise is infrequent enough that it occurs on only one recording at any point in time... You don't have to do any averaging... You just throw out the bad data. A digital audio file is just a series of integers. Look at the following 3 series of numbers and you can easily see the mismatched data, and it's equally easy to make a new, corrected series:

File1   File2   File3   Corrected
1000    1000    1000    1000
1024    1024    1023    1024
1111    1111    1111    1111
1230    1234    1234    1234
1005    1003    1003    1003
1997    1997    1897    1997
1500    1500    1500    1500

I am well aware of the idea that you suggest, but you said that 3 records were necessary. As my post shows, this is wrong, as improvement can (in our theoretical sandbox at least) be had with only 2 tracks, and any number above that.

As I also said, the plain averaging that I suggested can conceivably be improved, especially if one has knowledge of the error mechanisms in the medium. I mentioned the median, and the table you drew is an example of a median (or mode) operation. Of course, the median is different from the mean only for 3 inputs or more.

As has been suggested by others, discussing techniques for comparing N sub-sample-aligned renditions of a record is a "luxury issue". The real problem is how to do continuous subsample alignment; perfectly separating jitter/fluctuations from noise is a difficult trade-off.



I think that an intuitive approach would be something like:
1. Calculate the cross-correlation between blocks of track B and track A.
2. Resample track A by 8x or 128x, called A'.
3. Use the peak of the cross-correlation as a starting point for every sample in blocks of B, and find the smallest subsample offsets that minimize the difference (i.e. the +/- offset where the error changes polarity).
4. Now you have a sample-by-sample offset vector that explains ALL difference as time-variation. Do a frequency analysis and decide what is "true" time-variation and what is noise. The threshold would be device/media-dependent.
5. Now you have a crude separation of time-variation and noise; eliminating the noise is now possible.

Low-frequency time-alignment should be very accurate (perhaps well into subsample accuracy). High-frequency time-alignment should not be that accurate, since that would model noise as time-variation (not something we want). Therefore, I am suggesting accurate time-tracking with suppression of fast changes.
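A rough sketch (mine) of steps 1 and 4 only, at integer-lag resolution, with a simple moving average standing in for the frequency-analysis step that separates "true" time-variation from noise:

```python
import numpy as np

def blockwise_lags(a, b, block=2048, max_lag=64):
    """Step 1 (integer version): per-block lag of b relative to a,
    by brute-force correlation peak search."""
    lags = []
    for start in range(0, min(len(a), len(b)) - block - max_lag, block):
        ref = a[start:start + block]
        best_lag, best_val = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            s = start + lag
            if s < 0 or s + block > len(b):
                continue
            val = float(np.dot(ref, b[s:s + block]))
            if val > best_val:
                best_lag, best_val = lag, val
        lags.append(best_lag)
    return np.array(lags, dtype=float)

def smooth_lags(lags, width=5):
    """Step 4 (crudely): keep only slowly varying offsets; fast wiggles are
    assumed to be noise mis-modelled as time-variation and are suppressed."""
    kernel = np.ones(width) / width
    return np.convolve(lags, kernel, mode="same")

# Demo: b is a delayed by exactly 10 samples
fs = 44100
n = np.arange(60000)
a = np.sin(2 * np.pi * 440 * n / fs) + 0.4 * np.sin(2 * np.pi * 997 * n / fs)
b = np.concatenate([np.zeros(10), a])[:len(a)]

lags = blockwise_lags(a, b)
print(lags)                      # every block reports a lag of 10
print(smooth_lags(lags)[2:-2])   # interior stays at 10 after smoothing
```

A real implementation would then interpolate these per-block offsets into a continuous resampling curve (steps 2, 3 and 5), which is where the hard part lives.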

-k

This post has been edited by knutinh: Feb 7 2008, 10:59
AndyH-ha
post Feb 7 2008, 10:53
Post #22








It might well be that I have such a poor understanding of one or more concepts discussed that any attempt I make to consider this question is laughable, but maybe one of you favoring the possibility of success can put some of the process into simple enough terms that I can follow the argument.

To summarize:
We make a digital recording from an LP. This recording contains signal, which is the audio that is supposed to be on the disk, and noise, which is anything else, but most importantly is the else that comes from damage to the disk.

We have two copies of the disk (LP). We assume they contain identical signal; we should be able to correlate the signal between digital recordings of the two. We do this by comparing the two recordings on a sample by sample basis. Anything not the same on both is noise we don’t want.

No one seems to be saying that timing differences between digital recordings of two copies of an LP won’t be a significant issue to the goal of comparing them. There seems to be the belief that this could be overcome by two mechanisms.

First is that the timing be broken down to a small part of a sample by resampling to a very high sample rate. This is to overcome the difficulty that there are no corresponding samples in the two recordings to begin with. We work with very small increments in order to be able to get to the same part of the signal in both recordings. This could (hopefully) allow adequate alignment of what would be considered the common starting point of the two recordings.

Next is that the recordings themselves be broken down into very small segments (100 milliseconds was suggested). The corresponding segments are aligned and processed independently. This should overcome the timing differences that will exist between the two segments relative to the beginning of each recording.

If the above alignment process is successful, then a mix of one recording with an inversion of the other would cancel the signal, leaving only the noise. Having that alignment, we might also make other comparisons between the two recordings.

If my understanding of the above is reasonably close to the proposals, explain how software could align the segments. How could a program figure out what in one recording goes with what in the other? Without this, everything else is useless.
2Bdecided
post Feb 7 2008, 13:30
Post #23


ReplayGain developer





It's already been said - you do a correlation. There's a peak where the two signals match.

The major problem is that for any periodic waveform (e.g. music!) there are lots of peaks. It's not always obvious (to a very simple algorithm) which is the correct one.

Cheers,
David.
pdq
post Feb 7 2008, 14:07
Post #24








Subtraction of the two perfectly-aligned tracks would result in the difference, which contains anything that is in just one of the tracks. Theoretically you could then remove anything that shows up by adding or subtracting it from one of the signals. The tricky part is deciding which to do on a case-by-case basis, add it or subtract it. Since we have a good idea of what a click looks like, we can probably do a pretty good job of doing the correct assignment. Other kinds of noise we would probably have to settle for just taking the average of the two signals.

P.S. Thanks to the OP for initiating such an interesting thread.
SebastianG
post Feb 7 2008, 15:57
Post #25








I guess aligning (somehow) + adaptively fading between both versions should work pretty well. In the absence of clicks and pops, and assuming a similar noise level in both versions, a 50/50 mix is expected to reduce noise by 3 dB. In the presence of clicks and pops that occur locally in only one version, the click/pop-free version could be weighted at 100% locally...

The assumption is that after the alignment the difference of both versions shows no correlation to the original undistorted signal -- which might not be true -- for example when one vinyl exhibits attenuated high frequencies.

my 2 cents,
SG
