44 KHz (CD) not enough !? (Nyquist etc.), plethora of distortion frequencies?
morelli
post May 14 2003, 13:06
Post #26





Group: Members
Posts: 27
Joined: 16-June 02
Member No.: 2319



It's all too complex for me to understand,
but I always wondered why you could get more clarity and detail
when using higher sampling rates for recording.
But that has to go along with a higher bit rate too, of course.

morelli


--------------------
why does 'saving settings' take about 3 minutes and my modem is working while shutting down my windows2000 ?
zephirus
post May 14 2003, 13:39
Post #27





Group: Members
Posts: 16
Joined: 11-May 03
Member No.: 6542



QUOTE (SikkeK @ May 14 2003 - 03:36 AM)
I think your snippet has a lot of frequency components above 22.05 kHz......

This seems to be the problem I searched for but was unable to find...

zephirus
2Bdecided
post May 14 2003, 13:40
Post #28


ReplayGain developer


Group: Developer
Posts: 4945
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



Someone do a search. I'm too lazy. But I've written what amounted to an essay on the subject of time resolution near the Nyquist limit in some forum or another - probably more than once.

Basically, if a tone stops dead, it stops with a click. A click (theoretically) contains all possible frequencies. But you're filtering out all the ones above 22.05kHz.

In other words, a 21kHz tone can't just "stop dead" in a band-limited system. In "stopping dead", it creates high frequencies. If you remove them, it doesn't "stop dead". Or, to put it another way, to avoid generating higher frequencies, you have to fade out the 21kHz tone rather than stop it dead.


If you start looking at this, it's all physics (or maths, but I never liked maths). Time and frequency are intertwined (rather like the Heisenberg uncertainty principle): the more accuracy (or the greater the restriction) forced on one, the less accuracy (or the less restriction) can be imposed on the other. To make sure that a signal is exactly one frequency, it has to exist for eternity. Confine a signal to a single moment, and it will contain all frequencies. With your experiment, you're trying to do both - have an exact frequency signal, and confine it exactly to a given time.

For most sampling, this is inconsequential - confining a 1kHz tone to the band below 20kHz still gives sufficient time-resolution to allow it to start and stop almost instantaneously. But confining a 21kHz tone to the band below 22.05kHz doesn't allow enough time resolution for it to start and stop almost instantaneously. In fact, both tones can start and stop within a few samples, but those few samples are several cycles at 21kHz, and just a fraction of a cycle at 1kHz. In both cases, in a "perfect" system, the extra cycles will be at 22.05kHz. That's just another way of looking at it: it's the ringing of the low-pass filter.
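A rough numerical sketch of this stopped-tone ringing (my own illustration, not part of the original post; assumes Python with NumPy, and uses an FFT brick-wall as the idealized band-limiting filter):

```python
import numpy as np

fs = 192_000                                  # work above the CD rate
t = np.arange(int(0.01 * fs)) / fs            # 10 ms of samples
tone = np.sin(2 * np.pi * 21_000 * t + 0.7)   # 21 kHz tone, arbitrary phase
cut = len(tone) // 2
tone[cut:] = 0.0                              # the tone "stops dead"

# Idealized brick-wall low-pass at 22.05 kHz, done via the FFT
spec = np.fft.rfft(tone)
freqs = np.fft.rfftfreq(len(tone), 1 / fs)
spec[freqs > 22_050] = 0
band_limited = np.fft.irfft(spec, len(tone))

# Unfiltered: dead silence right after the cut.
tail_unfiltered = np.max(np.abs(tone[cut + 50:]))
# Band-limited: the signal keeps "ringing" past the cut, near the
# cut-off frequency; the tone no longer stops dead.
tail_filtered = np.max(np.abs(band_limited[cut + 50:]))
```

Plotting `band_limited` around `cut` shows the ripple at roughly the filter cut-off that the post describes.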


You can also gain an interesting insight by generating 1 second of silence, then 1 second of a tone in Cool Edit. Examine the transition from silence to tone closely. See what the interpolation will be. It's a strange world.


After reading that long paragraph, I think I should have searched for the last time I wrote it! There are so many different ways of looking at the same thing. The important thing is that it's not a new constraint - it's just Nyquist. It's one of those consequences of band-limiting that I hinted at much earlier. Since your ear is also band-limited, it has similar time-constraints too. So, in theory, if everything is working properly, it shouldn't be a problem. Hmmm...

Cheers,
David.
KikeG
post May 14 2003, 14:04
Post #29


WinABX developer


Group: Developer
Posts: 1578
Joined: 1-October 01
Member No.: 137



From zephirus:
QUOTE
A continuous signal of 21800 Hz (with 44.1 kHz sampling, -1.5 dB, 0.5 s duration) looks very much amplitude-modulated in Cool Edit.

This is because the visual interpolation Cool Edit does is not perfect: at high frequencies it doesn't interpolate well enough.
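That false envelope is easy to reproduce numerically. A small sketch (my own, assuming NumPy) that looks only at the raw sample values, much as a crude connect-the-dots display would:

```python
import numpy as np

fs = 44_100
f = 21_800
n = np.arange(880)                      # about 20 ms worth of samples
x = np.sin(2 * np.pi * f * n / fs)

# Peak |sample| in consecutive 10-sample windows: the raw samples trace
# a beat envelope at 2*(22050 - 21800) = 500 Hz, although the ideally
# reconstructed (sinc-interpolated) analog signal has constant amplitude.
window_peaks = np.abs(x).reshape(88, 10).max(axis=1)
```

`window_peaks` swings between nearly full scale and nearly zero: that swing is the "amplitude modulation" seen on screen, an artifact of looking at the samples rather than at the reconstructed waveform.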

QUOTE
An ideal 192 kHz upsampling filter will create a correct (not modulated) signal regardless (the Cool Edit upsampling does a pretty good job here as well in its highest-quality mode).


That would be a more proper interpolation, but it takes some time to compute well.

QUOTE
But now (before upsampling) let's silence 0.0000-0.0017 and 0.0023-0.0100. What remains is a small snippet between 0.0017 and 0.0023 (with silence around it).
Without the context around this short snippet, no filter on earth (or in the mathematical domain) should be able to know if that short snippet is meant as a low amplitude signal at around 21800 Hz or a full amplitude signal at exactly 21800 Hz (the upsampling filter will go for the wrong interpretation, and "smears" the signal as well).


This snippet has only one, non-ambiguous interpretation, given that it doesn't contain any frequencies over 22050 Hz: the one that the accurate upsampling filter produces. You say that it smears the signal: well, that is a consequence of assuming that there are no components over 22050 Hz. If there were no smearing, the signal would have components over 22050 Hz, and it would be impossible to sample and reproduce it properly with a sampling frequency of just 44100 Hz, because that would violate the Nyquist theorem.

QUOTE
So it seems: The Nyquist theorem only works for long, continuous signals, but not for short ones. Which are distorted well below the Nyquist frequency. Even with mathematically perfect filters.


No, it is not because of the Nyquist theorem; that works always, without exception. The problem is in the signal itself: any time-limited signal has infinite frequency components, and the reverse: any frequency-limited signal has infinite duration. From a mathematical point of view, only infinitely periodic signals have finite frequency components.
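This time-limited-means-wideband point can be checked numerically (my own sketch, assuming NumPy; the gate times mimic zephirus' snippet experiment):

```python
import numpy as np

fs = 192_000
t = np.arange(int(0.01 * fs)) / fs               # 10 ms
tone = np.sin(2 * np.pi * 21_800 * t)            # continuous 21.8 kHz tone
burst = tone.copy()
burst[: int(0.0017 * fs)] = 0                    # silence before 1.7 ms
burst[int(0.0023 * fs):] = 0                     # silence after 2.3 ms

freqs = np.fft.rfftfreq(len(t), 1 / fs)
above = freqs > 22_050                           # above the CD band limit

def energy_fraction_above(x):
    spec = np.abs(np.fft.rfft(x)) ** 2
    return spec[above].sum() / spec.sum()

frac_tone = energy_fraction_above(tone)          # essentially zero
frac_burst = energy_fraction_above(burst)        # a large fraction
```

The continuous tone keeps essentially all of its energy below 22.05 kHz, while the short burst, identical inside its 0.6 ms window, spreads a large fraction of its energy above the CD band limit: time-limiting the signal widened its spectrum.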

The Nyquist theorem says that in order to perfectly sample a signal, it has to be frequency-limited to below fs/2. That signal, in theory, will have infinite duration; it will "ring" forever (infinitely periodic) if it is totally frequency-limited. In the real world there are no perfect filters, and since the frequency limiting is not perfect, the time-ringing is limited too: that is the cause of temporal smearing. The sharper (and more ideal) the reconstruction filter, the more time smearing you will get.

This is a problem that happens no matter how high the sampling rate; it is just a question of degree. How much smearing do you consider acceptable, taking into account human hearing? (Smearing = frequency limiting.) What are the acceptable duration, amplitude and frequency of the ringing due to the frequency limiting, taking into account human hearing? (The ringing is at a frequency equal to the filter cutoff, and its duration and amplitude depend on the sharpness of the filter and the amplitude of the signal at the cutoff frequency.)

I think that these issues, using a 44.1 kHz sampling rate and typical DAC filters, are not a problem considering human hearing limitations.

Edit: it seems that 2Bdecided was faster than me, but it's basically same explanation.
Edit: some minor modifications done in order to make it more understandable.

This post has been edited by KikeG: May 14 2003, 14:15
DonP
post May 14 2003, 14:26
Post #30





Group: Members (Donating)
Posts: 1469
Joined: 11-February 03
From: Vermont
Member No.: 4955



For an example of this frequency-vs-time resolution trade-off in an easier-to-hear range, find somebody with a good subwoofer setup such that you can turn off (or disconnect) the main speakers and hear only a low-pass-filtered subwoofer cut off at, say, 80 Hz.

All the normally "sharply defined" bass notes like bass guitar or kick drums will sound smeared, like mumbling elephants. Adding the main speakers back in gives you the overtones that define the edge of the note.
zephirus
post May 14 2003, 14:40
Post #31





Group: Members
Posts: 16
Joined: 11-May 03
Member No.: 6542



QUOTE (DonP @ May 14 2003 - 04:00 AM)
A pure single frequency by its nature exists for all time.  If you want to limit the time of a signal, you
have to introduce other frequencies which sum up to what you want.  In other words, during the time
the signal is decaying it is not a pure sine.  The shorter this pulse of signal is in relation to its
period (1/frequency), the stronger the other frequency components will be compared to your base
frequency component.

If I understand your clarifications sufficiently well, then:

If the human ear were able to perceive all frequencies up to exactly 22050 Hz (and nothing above that at all, as if it had a perfect "brick wall" filter), then the ear could not perceive the beginning or end of a 22050 Hz signal. One would hear such a signal more or less always, regardless of whether it was real or not, since frequencies below 22050 Hz wouldn't be enough to stop such a signal; frequencies higher than that would be needed as well.

QUOTE
Anyhow, if you have just a very few non-zero samples, it will be ambiguous how to reconstruct the signal, but that ambiguity is due to components of the original signal higher than the Nyquist frequency.

I suspect I had better believe you here...

Thanks,
zephirus
mrosscook
post May 14 2003, 15:07
Post #32





Group: Members
Posts: 82
Joined: 14-December 02
From: Amherst MA
Member No.: 4077



Zephirus,

Harry Nyquist formulated his sampling theorem in, I think, 1928 -- 75 years ago. If there were any true, practical limitations on its application to real-world signal processing, they would have been worked out and made clear decades ago. This is the case with the requirement for an ideal filter, for example, mentioned above by 2BDecided, Doctor, and KikeG.

You can't really believe that you are going to poke holes in the theorem now by futzing around with signal samples in CoolEdit -- you are clearly a very bright and reasonable person, so just think about it.

I'm more interested in WHY you have this feeling that 44.1 kHz/16 bit is not good enough. 2B has mentioned before that he has a similar subjective feeling, and in the thread that I linked to above, bryant (another bright and reasonable guy) seems to agree also. And there are probably others, who are reluctant to speak because they don't want to get pounded for expressing an opinion that they can't back up.

When I hear such opinions from people who are clearly just audiosnobs, they are easy for me to dismiss; these are the people who insist on gold-plated speaker cables thick enough to jump-start a car, and who clean their vinyl only with brushes made of hair torn from the tail of a pregnant oryx. It's clear what that's about.

But if you and 2B and bryant claim you hear some kind of distortion or limitation in the current CD standard, I don't want to just dismiss it out of hand. So -- can you elaborate on what it is that you hear? Under what conditions do you hear a problem? What kind of music, what environment, what hardware, what software? What kinds of distortion do you hear? I'd be interested to know.
DonP
post May 14 2003, 15:21
Post #33





Group: Members (Donating)
Posts: 1469
Joined: 11-February 03
From: Vermont
Member No.: 4955



QUOTE (zephirus @ May 14 2003 - 08:40 AM)
If I understand your clarifications sufficiently well, then:

If the human ear were able to perceive all frequencies up to exactly 22050 Hz (and nothing above that at all, as if it had a perfect "brick wall" filter), then the ear could not perceive the beginning or end of a 22050 Hz signal. One would hear such a signal more or less always, regardless of whether it was real or not, since frequencies below 22050 Hz wouldn't be enough to stop such a signal; frequencies higher than that would be needed as well.

Yes, that's pretty much the result, once you start hypothesizing about
ears containing perfect low pass filters and signals exactly at the
corner of the perfect filter. Just remember that those conditions aren't realistic.

The Nyquist criterion just says that the sample frequency has to be > 2x the maximum signal frequency (note: that is ">", not ">=".) Not addressed there is that when you want to look critically at signals arbitrarily close to the limit, it becomes more and more important that the implementation be precise; that is, small variations in the sampling time (aka jitter) and quantization error become more visible. So in a good design, you have some margin and don't try to exactly reproduce frequencies at 49.99% of the sample frequency.

Given that human hearing becomes less precise at high frequencies, a person hearing a signal near his own limit wouldn't hear errors that you could easily see on a graph.

This post has been edited by DonP: May 14 2003, 15:27
budgie
post May 14 2003, 16:07
Post #34





Group: Members
Posts: 341
Joined: 27-November 02
Member No.: 3901



QUOTE (mrosscook @ May 14 2003 - 06:07 AM)
I'm more interested in WHY you have this feeling that 44.1 kHz/16 bit is not good enough.  2B has mentioned before that he has a similar subjective feeling, and in the thread that I linked to above, bryant (another bright and reasonable guy) seems to agree also. And there are probably others, who are reluctant to speak because they don't want to get pounded for expressing an opinion that they can't back up.

But if you and 2B and bryant claim you hear some kind of distortion or limitation in the current CD standard, I don't want to just dismiss it out of hand.  So -- can you elaborate on what it is that you hear?  Under what conditions do you hear a problem?  What kind of music, what environment, what hardware, what software?  What kinds of distortion do you hear?  I'd be interested to know.

The main problem is poor (slovenly) CD mixing and mastering... That really is a problem. After all the years I've spent with music, I firmly believe 16 bit/44.1 kHz is more than good enough for, say, 99.x% of all people.

20 bit/48 kHz would probably remove all the problems and complaints, but apparently there were other forces back then which insisted on the "lower" standard...
zephirus
post May 14 2003, 18:07
Post #35





Group: Members
Posts: 16
Joined: 11-May 03
Member No.: 6542



QUOTE (2Bdecided @ May 14 2003 - 04:40 AM)
After reading that long paragraph, I think I should have searched for the last time I wrote it! There are so many different ways of looking at the same thing. The important thing is that it's not a new constraint - it's just nyquist. It's one of those consequences of band-limiting that I hinted at much earlier. Since your ear is also band limited, it also has similar time-constraints. So, in theory, if everything is working properly, it shouldn't be a problem.

Thanks for your very detailed and enlightening explanations!

I certainly learned a lot during this discussion, and I hope I did not waste too much of your time (and the time of the others).

It's an interesting observation of yours that physicists run into similar problems in quantum physics.

Thanks,
zephirus

This post has been edited by zephirus: May 14 2003, 18:08
ye110man
post May 14 2003, 21:46
Post #36





Group: Members
Posts: 90
Joined: 27-April 03
Member No.: 6233



QUOTE (DonP @ May 14 2003 - 06:21 AM)
The Nyquist criterion just says that the sample frequency has to be > 2x the maximum signal frequency (note: that is ">", not ">=".) Not addressed there is that when you want to look critically at signals arbitrarily close to the limit, it becomes more and more important that the implementation be precise; that is, small variations in the sampling time (aka jitter) and quantization error become more visible. So in a good design, you have some margin and don't try to exactly reproduce frequencies at 49.99% of the sample frequency.

I thought it was that any curve can be reconstructed with a sampling rate 2 times the frequency; you don't need more than that. Of course this isn't factoring in jitter or quantization error, but those aren't components of Nyquist's theorem.
DonP
post May 14 2003, 22:22
Post #37





Group: Members (Donating)
Posts: 1469
Joined: 11-February 03
From: Vermont
Member No.: 4955



QUOTE (ye110man @ May 14 2003 - 03:46 PM)
i thought it was that any curve can be reconstructed with a sampling rate 2 times the frequency. you don't need more than that.

I could say something flippant like "Why did you think that?"

Here's a counterexample: if the signal frequency is exactly half the sampling frequency, then all the data points could be on the zero crossings. It would appear to be no signal at all. In fact, no matter where the data points fall, they will be at the exact same two phase angles of the curve for every cycle, 180 degrees apart. Since you can't tell from the points what those angles are, you can't reconstruct the amplitude and phase of the original sine wave.

If the sampling frequency is just a little higher than 2x your signal, then each pair of data points will shift to slightly different phase angles on each repetition, giving you new information; enough to reconstruct the signal.

So you do need more than 2x, but just enough to qualify as ">"

It's like an asymptote... close but no touch.
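DonP's counterexample can be checked directly (my own sketch, not part of the original post; assumes NumPy):

```python
import numpy as np

fs = 44_100
f = fs / 2                         # a sine at exactly half the sample rate
n = np.arange(16)

# Sampling sin(2*pi*f*t + phase) at t = n/fs: every sample lands on the
# same two phase angles, 180 degrees apart, whatever the phase offset.
on_zero_crossings = np.sin(2 * np.pi * f * n / fs)        # phase 0
offset = np.sin(2 * np.pi * f * n / fs + 0.5)             # phase 0.5 rad

# With phase 0, every sample sits on a zero crossing: the sampled data
# is indistinguishable from silence, so amplitude and phase are lost.
```

With phase 0 the samples are all (numerically) zero; with a phase offset they merely alternate between plus and minus one fixed value, so the original amplitude and phase still cannot be recovered from them.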

This post has been edited by DonP: May 14 2003, 22:26
KikeG
post May 15 2003, 00:03
Post #38


WinABX developer


Group: Developer
Posts: 1578
Joined: 1-October 01
Member No.: 137



QUOTE (budgie @ May 14 2003 - 04:07 PM)
20 bit /48 kHz would probably remove all the problems and complaints, but there were apparently other forces then which insisted on "lower" standard...

I don't know for sure what you mean, but AFAIK, when the CD audio specs were being developed, 14 bits was thought to be enough. 16 bits was chosen in the end just because it was a "rounder" number (two bytes).
budgie
post May 15 2003, 08:51
Post #39





Group: Members
Posts: 341
Joined: 27-November 02
Member No.: 3901



QUOTE (KikeG @ May 14 2003 - 03:03 PM)
I don't know for sure what you want to say, but AFAIK when developing cd-audio specs, 14 bits was thought to be enough. 16 bits was chosen at last just because it was a more "round" number (two bytes).

DAT...
dillee1
post May 17 2003, 10:44
Post #40





Group: Members
Posts: 27
Joined: 20-April 03
Member No.: 6075



Proofs of the Nyquist theorem always use stationary signals. Sure, the problem doesn't occur there, since stationary signals carry no information about time.

From the Fourier transform we know that non-stationary signals are not band-limited, and real-life signals are not stationary. A direct consequence of band-limiting the signal is that its frequency components are no longer pinpointed on the time line; they smear (uncertainty principle).

Does that mean that sampling at 44.1 kHz (band-limiting at 22.05 kHz) causes us to lose resolution in time?

This post has been edited by dillee1: May 17 2003, 10:45
DonP
post May 17 2003, 13:11
Post #41





Group: Members (Donating)
Posts: 1469
Joined: 11-February 03
From: Vermont
Member No.: 4955



QUOTE (dillee1 @ May 17 2003 - 04:44 AM)
Proofs of the Nyquist theorem always use stationary signals. Sure, the problem doesn't occur there, since stationary signals carry no information about time.

Why do you say that? I think you are confusing "proof" and "example".


QUOTE
Real-life signals are not stationary. A direct consequence of band-limiting the signal is that its frequency components are no longer pinpointed on the time line; they smear (uncertainty principle).

Does that mean that sampling at 44.1 kHz (band-limiting at 22.05 kHz) causes us to lose resolution in time?


Since you are citing Fourier transforms, you know that in that context frequency components are never pinpointed in time.

The extent to which you could hear the loss in time resolution due to band-limiting is directly related to your ability to hear in the eliminated frequency range.
zephirus
post May 17 2003, 17:37
Post #42





Group: Members
Posts: 16
Joined: 11-May 03
Member No.: 6542



QUOTE (KikeG @ May 14 2003 - 05:04 AM)
This snippet has an only, non-ambiguous, interpretation, given that it doesn't contain any frequencies over 22050 Hz ...

I believe it does contain frequencies above 22050 Hz, since I did not obtain it by properly filtering it down (from e.g. 192 kHz) but by fiddling with Cool Edit in the 44.1 kHz domain. But I agree that there is, in fact, only one proper interpretation (for the frequencies below 22050 Hz).

QUOTE
: ... the one that the accurate upsampling filter produces. You say that it smears the signal: well, that is a consequence of assuming that there are no components over 22050 Hz. If there were no smearing, the signal would have components over 22050 Hz, and it would be impossible to sample and reproduce it properly with a sampling frequency of just 44100 Hz, because that would violate the Nyquist theorem.

Yes. Nevertheless, I compared the Cool Edit upsampling algorithm with several of its filtering algorithms. I found that doing the silence fiddling in the 192 kHz domain and then filtering out anything above 22050 Hz looks much better than doing that fiddling in the 44.1 kHz domain and then upsampling.

Both should be equivalent, since effectively both are just frequency filtering. This might simply mean that the Cool Edit upsampling algorithm works worse than its filtering algorithms, or that upsampling is much harder than just low-pass filtering out all frequencies above 22050 Hz. But this is not of much interest anyway.

QUOTE
QUOTE
So it seems: The Nyquist theorem only works for long, continuous signals, but not for short ones. Which are distorted well below the Nyquist frequency. Even with mathematically perfect filters.

No, it is not because of Nyquist theorem, it works always without exception. The problem is on the signal itself: any time-limited signal has infinite frequency components, and the reverse: any frequency-limited signal has infinite duration, from a mathematical point of view only infinitely periodic signals have finite frequency components.

Well, I believe I see this now.

QUOTE
I think that these issues, using a 44.1 KHz sampling rate and typical DAC filters, are not a problem considering human hearing limitations.

It shouldn't. I had a real conflict with the (Nyquist) theory this is all based on, anyway.

Thanks,
zephirus
zephirus
post May 19 2003, 15:05
Post #43





Group: Members
Posts: 16
Joined: 11-May 03
Member No.: 6542



QUOTE (mrosscook @ May 14 2003 - 06:07 AM)
Harry Nyquist formulated his sampling theorem in, I think, 1928 -- 75 years ago.  If there were any true, practical limitations on its application to real-world signal processing, they would have been worked out and made clear decades ago.  This is the case with the requirement for an ideal filter, for example, mentioned above by 2BDecided, Doctor, and KikeG.

Yes. My attempts at finding problems with (basically) the Nyquist theorem do seem rather clumsy in hindsight. Especially since my knowledge of the underlying theory is mostly nonexistent.

On the other hand: in principle it does seem a good idea to constantly challenge widely accepted scientific theories or belief systems. It seems many such theories or beliefs can successfully be argued by knowledgeable people to be most likely wrong, despite having persisted for many decades and being generally believed to be at, or near, absolute truth. Of course, the word "knowledgeable" is very important. Nonetheless: if there is no doubt, there is no progress.

My personal conclusion: my doubts about the CD format remain, but they have lost considerable energy. I learned a lot during the discussion, and I will listen to my CDs (and .ape, or lossy or lossless .wv, files) with more confidence now.

The main problem with this discussion is probably that I wasted the time of knowledgeable people for the sake of my own education. But for me, this discussion was very valuable. At least it may serve as a worthwhile link if similar topics turn up in the future.

QUOTE
I'm more interested in WHY you have this feeling that 44.1 kHz/16 bit is not good enough.  2B has mentioned before that he has a similar subjective feeling, and in the thread that I linked to above, bryant (another bright and reasonable guy) seems to agree also. And there are probably others, who are reluctant to speak because they don't want to get pounded for expressing an opinion that they can't back up.

When I hear such opinions from people who are clearly just audiosnobs, they are easy for me to dismiss; these are the people who insist on gold-plated speaker cables thick enough to jump-start a car, and who clean their vinyl only with brushes made of hair torn from the tail of a pregnant oryx.  It's clear what that's about.

But if you and 2B and bryant claim you hear some kind of distortion or limitation in the current CD standard, I don't want to just dismiss it out of hand.  So -- can you elaborate on what it is that you hear?  Under what conditions do you hear a problem?  What kind of music, what environment, what hardware, what software?  What kinds of distortion do you hear?  I'd be interested to know.

At times, and with a number of recordings, I do hear distortions that are rather faint but very ugly nonetheless. And they surely (in my opinion) cannot be an intended part of the original signal.

Right now it seems to be a very good idea to attribute such distortions to poor recording, poor mastering, or poor reproduction equipment. And not to the CD standard.

Perhaps, in theory, a higher sampling frequency/resolution than 44.1 kHz/16 bit might indeed make sense and be helpful for practical reasons: the necessary filtering of the signal in the output equipment (CD player) might be easier, and therefore usually of higher quality (perhaps). Also, more headroom may be a good idea, since 44.1/16 is somewhat uncomfortably close to the limits of human hearing.

A major problem is that, like most people, I cannot do a proper listening test, since I lack the necessary high-end hardware (96 or 192 kHz, 20-24 bit) and the necessary native test material at such resolutions as well.

Thanks,
zephirus

This post has been edited by zephirus: May 20 2003, 21:04
KikeG
post May 19 2003, 15:48
Post #44


WinABX developer


Group: Developer
Posts: 1578
Joined: 1-October 01
Member No.: 137



If I could have access to some 24/96 music in wav format, I could set up one of those tests.

I would downsample the 24/96 wave to 16/44.1 using "realistic" filtering plus resampling to 44.1 kHz in CEP (that should be equivalent to the filtering of a good ADC), and convert it to 16 bit with dither.

Then, I would play this wave with one of my cards, using its converters running at 44.1 kHz, and record the result with my other card at 48 kHz 16 bit. Ideally it should be recorded at 24/96, but I have just one good non-resampling card at 44.1 kHz, which is used for playback, and it happens to be the same one that also supports 24/96. My other card is non-resampling only at 48 kHz and is limited to 16/48, although I think it has pretty good quality. I think the fact that the recording is done at 16/48 instead of 24/96 wouldn't have much influence on the test, especially if no differences are found in the end. But this point should be analyzed in detail to see what problems it could have, and improved if found necessary. At first glance, I think this 16/48 recording would capture all the "nasties" (aliasing, smearing, etc.) of the 16/44.1 DAC, but would introduce some of those of the 16/48 ADC, although at higher frequencies.

Then I would upsample this 16/48 recording to 24/96, and people with good 24/96 cards could perform blind listening tests comparing the original vs. this other wave that has been downsampled, played, recorded and upsampled.


At the PCABX site there are some clips available for performing tests of this kind ( http://64.41.69.21/technical/sample_rates/index.htm ). They are interesting and show that, AFAIR, there is no difference in practice, but the clips available are not very representative of real-world conditions: they are very short in duration (and typical fans of high-res formats won't consider them very representative), and in fact no real-world DAC was used in their generation, just software processing.
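The downsample-to-CD step of such a test could be sketched like this (my own illustration, assuming NumPy; a crude FFT resampler stands in for CEP's filtering, plain TPDF dither for its dithering, and `to_cd_format` is a hypothetical helper name):

```python
import numpy as np

def to_cd_format(x, fs_in=96_000, fs_out=44_100, seed=1):
    """Band-limit, resample via the FFT, add TPDF dither, quantize to
    16 bits. len(x)*fs_out/fs_in must be an integer."""
    n_out = len(x) * fs_out // fs_in
    # Keep only the spectrum up to the output Nyquist (22.05 kHz)
    spec = np.fft.rfft(x)[: n_out // 2 + 1]
    y = np.fft.irfft(spec, n_out) * (n_out / len(x))   # rescale amplitude
    rng = np.random.default_rng(seed)
    lsb = 1.0 / 32768                                  # one 16-bit step
    dither = (rng.random(n_out) - rng.random(n_out)) * lsb   # TPDF, +/-1 LSB
    return np.clip(np.round((y + dither) * 32767),
                   -32768, 32767).astype(np.int16)

# 0.1 s of a half-scale 1 kHz tone at 96 kHz:
x = 0.5 * np.sin(2 * np.pi * 1_000 * np.arange(9_600) / 96_000)
cd = to_cd_format(x)
```

A real test would of course use a proper polyphase resampler and possibly noise-shaped dither, but the order of operations (band-limit, resample, dither, then truncate) is the part that matters.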

This post has been edited by KikeG: May 19 2003, 15:49
budgie
post May 19 2003, 15:58
Post #45





Group: Members
Posts: 341
Joined: 27-November 02
Member No.: 3901



QUOTE (zephirus @ May 19 2003 - 06:05 AM)
A major problem is that, like most people, I cannot do a proper listening test, since I lack the necessary high-end hardware (96 or 192 kHz, 20-24 bit) and the necessary native test material at such resolutions as well.

Major problem is, you just jabber about the things you don't understand... If you lack the necessary high end hardware and the necessary native test material at such resolution, how do you know how does it sound? I have the opportunity, possibility and it is inevitable for me to deal everyday with this high end resolution workstations and I can swear it's not needed for listening purposes. It's needed for recording, adding effects, mixing and mastering, not for the real life. 16/44,1 is more than enough when made in an appropriate way... I've told it here at HA at least for hundred times.
2Bdecided
post May 19 2003, 16:02
Post #46


ReplayGain developer


Group: Developer
Posts: 4945
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



QUOTE (zephirus @ May 17 2003 - 04:37 PM)
QUOTE (KikeG @ May 14 2003 - 05:04 AM)
This snippet has an only, non-ambiguous, interpretation, given that it doesn't contain any frequencies over 22050 Hz ...

I believe it does contain frequencies above 22050 Hz, since I did not obtain it by properly filtering it down (from e.g. 192 kHz) but by fiddling with Cool Edit in the 44.1 kHz domain. But I agree that there is, in fact, only one proper interpretation (for the frequencies below 22050 Hz).

Doh! We were doing so well! If it's sampled at 44.1kHz, it doesn't contain anything unique above 22.05kHz. Anything above that is a copy of what's below it. And anything below it that you intended to be above it, isn't! (I'm laughing not at you, but with you, because it's just the kind of tangle I get myself into sometimes.)

QUOTE
QUOTE
: ... the one that the accurate upsampling filter does. You say that it smears the signal: well, that is a consecuence of assuming that there are no components over 22050 Hz. If there were no smearing, the signal would have components over 22050 Hz, and it would be impossible to sample and reproduce it properly with a sampling frequency of just 44100 Hz, because it would violate Nyquist theorem.

Yes. Nevertheless, I compared the Cool Edit upsampling algorithm with several of its filtering algorithms. I found that doing the silence fiddling in the 192 kHz domain and then filtering out anything above 22050 Hz looks much better than doing that fiddling in the 44.1 kHz domain and then upsampling.


There shouldn't be any difference.

QUOTE
Both should be equivalent, since effectively both are just frequency filtering. This might simply mean that the Cool Edit upsampling algorithm works worse than its filtering algorithms, or that upsampling is much harder than just low-pass filtering out all frequencies above 22050 Hz. But this is not of much interest anyway.


Maybe not. But you should be able to get a very similar result in CEP with the right filter. Really!
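For what it's worth, "upsampling is just filtering" can be made literal with the textbook scheme: insert zeros between the samples, then low-pass below the original Nyquist. A rough pure-Python sketch (tone, filter length and window are arbitrary illustrative choices; this says nothing about Cool Edit's actual implementation):

```python
import math

def upsample_zero_stuff(x, L):
    """Insert L-1 zeros between samples; spectral images appear above the old Nyquist."""
    y = [0.0] * (len(x) * L)
    for n, v in enumerate(x):
        y[n * L] = v
    return y

def windowed_sinc_lpf(num_taps, cutoff, gain=1.0):
    """Low-pass FIR: truncated sinc shaped by a Hann window (cutoff in cycles/sample)."""
    M = num_taps - 1
    h = []
    for k in range(num_taps):
        t = k - M / 2
        s = 2 * cutoff if t == 0 else math.sin(2 * math.pi * cutoff * t) / (math.pi * t)
        w = 0.5 - 0.5 * math.cos(2 * math.pi * k / M)  # Hann window
        h.append(gain * s * w)
    return h

def convolve(x, h):
    """Full linear convolution, naive O(n*m)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

L = 2  # upsample 44.1 kHz -> 88.2 kHz
x = [math.sin(2 * math.pi * 1000 * n / 44100) for n in range(200)]  # 1 kHz tone
stuffed = upsample_zero_stuff(x, L)
h = windowed_sinc_lpf(num_taps=129, cutoff=0.5 / L, gain=L)  # cut at the old Nyquist
y = convolve(stuffed, h)

# The filter removes the images and fills the gaps with band-limited
# interpolates; the original samples reappear after the filter delay.
delay = len(h) // 2
err = max(abs(y[n * L + delay] - x[n]) for n in range(40, 160))
print(err)  # ~0: the original samples pass through untouched
```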

Cheers,
David.

P.S. CD "sound" (if there is one) is glassy, and less realistic than 24/96. This is what I heard with professional dCS converters, comparing A>D>A at 44.1kHz and 96kHz (both 24-bit, so both are actually better than CD) with the original analogue signal.

I can't hear a difference using an Audiophile 2496 and some Sennheiser HD580s, but I believe the effect is/was/will be much greater with speakers in a real room than via headphones (or even via speakers in an anechoic chamber - just a hunch - no evidence for this claim). The effect I heard was subtle, but then most things seem subtle the first time you notice them (e.g. coding artefacts). I can quite believe record producers who use this equipment every day (and who hear live music every day) when they say that, to them, the difference is significant.

But almost all of the faults I hear with CDs at home are almost certainly the fault of bad mastering.

Cheers,
David.
KikeG
post May 19 2003, 16:39
Post #47


WinABX developer


Group: Developer
Posts: 1578
Joined: 1-October 01
Member No.: 137



QUOTE (2Bdecided @ May 19 2003 - 04:02 PM)
If it's sampled at 44.1kHz, it doesn't contain anything unique above 22.05kHz. Anything above this is a copy of what's below it. And anything below it that you intended to be above it, isn't! (I'm laughing, not at you, but with you, because it's just the kind of tangle I get myself in sometimes)

Well, from the point of view of the sampling theorem, the signal to be sampled is supposed to have no content at all over fs/2, and likewise the reconstructed signal is supposed to have no content at all over fs/2. It is in this context that I say that the sampled points define an unambiguous analog signal, given that it won't have anything over fs/2. Otherwise, it depends exclusively on the reconstruction filter implementation, but then we are deviating from what Nyquist said.
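That unique band-limited signal is exactly what the Whittaker-Shannon interpolation formula constructs from the samples: x(t) = sum over n of x[n] * sinc(t/T - n). A truncated pure-Python sketch (illustrative only; the true sum is infinite, so a small truncation error remains between sample instants):

```python
import math

FS = 44100
T = 1.0 / FS  # sample period

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, t):
    """Whittaker-Shannon: the unique signal through the samples with no content over fs/2."""
    return sum(s * sinc(t / T - n) for n, s in enumerate(samples))

f = 1000.0  # a 1 kHz tone, well below Nyquist
samples = [math.sin(2 * math.pi * f * n * T) for n in range(2000)]

# At a sample instant the formula returns that sample exactly ...
at_sample = abs(reconstruct(samples, 1000 * T) - samples[1000])
# ... and between samples it converges to the original sine (small residual
# because the infinite sum is truncated to 2000 terms).
mid_t = 1000.5 * T
between = abs(reconstruct(samples, mid_t) - math.sin(2 * math.pi * f * mid_t))
print(at_sample, between)
```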

QUOTE
P.S. CD "sound" (if there is one) is glassy, and less realistic than 24/96. This is what I heard with professional DCS convertors, comparing A>D>A at 44.1kHz and 96kHz (both 24-bits, so both are actually better than CD) with the original analogue signal.


I don't trust this kind of comparison when it is not rigorously controlled. Could you give more details? Was it blind? Were the converters at your disposal? Were levels properly matched? Was the program material generated properly? Could the 44.1 kHz converters have been "customized" for the test or something similar? Who prepared the test?

QUOTE
I can't hear a difference using an audiophile 2496 and some Sennheiser HD580s, but I believe the effect is/was/will be much greater with speakers in a real room than via headphones (or even via speakers in an anechoic chamber - just a hunch - no evidence for this claim). The effect I heard was subtle, but then most things seem subtle the first time you notice them (e.g. coding artefacts). I can quite believe record producers who use this equipment everyday (and who hear live music every day) when they say that, to them, the difference is significant.


I have read the opposite from a couple of respected and experienced professional recording engineers. One says that he can't hear anything different with 96 kHz as opposed to 44.1 kHz. The other says that the differences he could hear using 192 kHz in comparison with 44.1 kHz were minimal, almost insignificant, and not worth the use. Even here, the listening was not blind. However, according to this same person, in his tests DSD made a great difference!! He knows that this doesn't make much sense (24/192 is clearly superior to DSD from a technical point of view), but that's what he heard. What guarantees in this case that the DSD (SACD) player was not doing "something" to the signal?

That's why I don't trust that kind of sighted, non-controlled listening test: when uncontrolled, there are lots of things apart from the sample rate itself that can be making a difference.

(Edit: added some more minor things.)

This post has been edited by KikeG: May 19 2003, 19:11
Pio2001
post May 19 2003, 20:00
Post #48


Moderator


Group: Super Moderator
Posts: 3936
Joined: 29-September 01
Member No.: 73



QUOTE (KikeG @ May 19 2003 - 06:39 PM)
I have read the opposite from a couple of respected and experienced professional recording engineers.

Not to mention Budgie's post above.

QUOTE (2Bdecided @ May 19 2003 - 06:02 PM)
QUOTE

Yes. Nevertheless: I compared the Cool Edit upsampling algorithm with several of its filtering algorithms. I found that doing the silence fiddling in the 192 kHz domain and then filtering out anything above 22050 Hz looks much better than doing that fiddling in the 44.1 kHz domain and then upsampling it.


There shouldn't be any difference.

Different resampling algorithms leave more or less aliasing, i.e. they apply more or less steep filtering.
Have a look at § 3 of this page: http://perso.numericable.fr/~laguill2/cdr/cdr.htm
It shows the spectra of three different quality settings for resampling from 48 to 96 kHz in Sound Forge.
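To put a number on "more or less steep": 2x upsampling by linear interpolation is equivalent to zero-stuffing followed by the 3-tap filter [0.5, 1, 0.5], whose shallow magnitude response |1 + cos(w)| only attenuates the spectral image of a 5 kHz tone by roughly 30 dB. This is a toy example of mine, unrelated to the measurements on the linked page:

```python
import math

FS_OUT = 88200.0  # output rate after 2x upsampling from 44.1 kHz

def linear_interp_gain(freq_hz):
    """Magnitude response of the [0.5, 1, 0.5] kernel hidden inside 2x linear interpolation."""
    w = 2 * math.pi * freq_hz / FS_OUT
    return abs(1.0 + math.cos(w))

signal_gain = linear_interp_gain(5000.0)            # the wanted 5 kHz component
image_gain = linear_interp_gain(44100.0 - 5000.0)   # its zero-stuffing image at 39.1 kHz
rejection_db = 20 * math.log10(image_gain / signal_gain)
print(round(rejection_db, 1))  # about -30 dB: the image is attenuated, not removed
```

A proper windowed-sinc interpolation filter would push the image far lower, which is exactly the steepness difference visible in those spectra.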
Pio2001
post May 19 2003, 20:16
Post #49


Moderator


Group: Super Moderator
Posts: 3936
Joined: 29-September 01
Member No.: 73



QUOTE (zephirus @ May 19 2003 - 05:05 PM)
The major problem with this discussion primarily seems to be that I wasted the time of knowledgeable persons for the sake of my own education. But for me, this discussion was very valuable.
At least, this discussion may be a worthwhile link if similar topics turn up in the future.

Forums are not only for developers; they are also for people who like to share their knowledge. Having to explain something that we think we know well often leads us to question ourselves and improve our knowledge of the topic.
Be assured that if this discussion was valuable for you, it was also valuable for other people who read but don't post.

BTW, this discussion was added to the FAQ 4 days ago, on top of the "High definition digital audio" section.
budgie
post May 20 2003, 11:12
Post #50





Group: Members
Posts: 341
Joined: 27-November 02
Member No.: 3901



QUOTE (2Bdecided @ May 19 2003 - 07:02 AM)
P.S. CD "sound" (if there is one) is glassy, and less realistic than 24/96. This is what I heard with professional dCS converters, comparing A>D>A at 44.1kHz and 96kHz (both 24-bit, so both are actually better than CD) with the original analogue signal.

I can't hear a difference using an Audiophile 2496 and some Sennheiser HD580s, but I believe the effect is/was/will be much greater with speakers in a real room than via headphones (or even via speakers in an anechoic chamber - just a hunch - no evidence for this claim). The effect I heard was subtle, but then most things seem subtle the first time you notice them (e.g. coding artefacts). I can quite believe record producers who use this equipment every day (and who hear live music every day) when they say that, to them, the difference is significant.

But almost all of the faults I hear with CDs at home are almost certainly the fault of bad mastering.

You can't hear the difference because it's so subtle that it's not worth mentioning... It really doesn't affect anything important as far as the resulting sound is concerned. And that's what really matters - under normal, usual listening conditions you don't hear any difference.

And you're right, the faults you can hear from CD under normal listening conditions are definitely the fault of bad mastering...
