High Playback Sampling Frequencies, Why are they becoming popular?
IgorC
post Feb 20 2012, 03:50
Post #26

QUOTE (saratoga @ Feb 19 2012, 23:22)
They are mathematically related in a very complex way

So 4x oversampling gives an additional bit of bit depth, right?

Walter_Odington
post Feb 20 2012, 04:53
Post #27

You're right that my understanding is limited given the company I'm in, and I totally accept that, but am I the only person here who thinks quantisation error is rounding error? I'll endeavour to be useful and use correct terminology.

What I meant originally was that by converting to a new sample rate, some of the new samples will have a position in time between old samples, and as such require some computational decision to be made about their value (interpolation, rounding, prediction, or whatever system is in place). Whatever the process involved, the new values will have lesser accuracy and the signal will have been degraded. So keeping a constant sample rate may be better in some instances, because a step of processing (and the resultant degradation) can be avoided. Counter to that, I believe there are processes for which working at a higher sample rate does improve the sound:

I'll try to elaborate on the suggestion that granular time stretching can benefit from a higher sample rate. When time stretching, the sound is split up into little grains of millisecond magnitude. The larger the grain, the more noticeable the granulation when the tempo is lowered: the onset of each new grain jumps back in time, relative to the source sound, by an amount that grows with grain duration, and above a critical threshold that jump becomes audible as a distinct event (some may cite the Haas effect for thresholds of perception of sound events). The result is a weird "gl-gl-gl-gl" sound you can hear on a lot of 90s dance tracks; Ripgroove by Double 99 ft. Top Cat, where the man says "Brock out", is a prime example. To counter this undesirable processing artefact, the grain size needs to be reduced so that each grain onset is less perceptible. At a higher resolution you can get smaller grain sizes, less perception of the granulation, and, some may say, a better sound. I'll return with some audio to illustrate this.
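A toy sketch of the mechanics (Python, assuming a mono NumPy signal; the function name, the Hann window and the 50 ms default grain are illustrative choices, not any particular product's algorithm). With large grains and a big stretch it produces exactly the repeated-onset artefact described above:

CODE
import numpy as np

def granular_stretch(x, fs, stretch=1.5, grain_ms=50.0):
    """Naive granular time stretch: windowed grains, overlap-added
    at a wider spacing. Assumes len(x) is at least one grain."""
    grain = int(fs * grain_ms / 1000)      # grain length in samples
    hop_in = grain // 2                    # analysis hop (50% overlap)
    hop_out = int(hop_in * stretch)        # synthesis hop sets the new tempo
    win = np.hanning(grain)
    n_grains = max(1, (len(x) - grain) // hop_in)
    y = np.zeros(n_grains * hop_out + grain)
    for i in range(n_grains):
        g = x[i * hop_in : i * hop_in + grain] * win
        y[i * hop_out : i * hop_out + grain] += g   # re-space the grains
    return y

Each grain advances only hop_in samples through the source while the output advances hop_out, so every grain onset effectively steps back in time; shrinking grain_ms shrinks that step, which is the knob the argument above is about.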

Similarly, when sampling a sound and then playing it back at a different pitch, there is benefit to processing at a higher sample rate. Samplers re-pitch sound, and this is almost always done by resampling. If the sample contains audio up to half the sample frequency, then a 48 kHz sample played back at the original pitch has an upper limit of 24 kHz. If this is played back two octaves lower, the upper limit of the sound spectrum is 6 kHz. If we had a sample at 192 kHz, it would have an upper limit of 96 kHz, and played back two octaves below it would still have frequency content with an upper limit of 24 kHz. The sound sampled at 48 kHz will sound dull when pitched down; the sound at 192 kHz will maintain more of the sense of the original full-frequency sound. I'll return with examples to illustrate this too.
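The bandwidth arithmetic is easy to check (a small sketch; the helper name is made up for illustration):

CODE
def top_frequency_after_pitch(fs, octaves_down):
    """Plain resampling scales every frequency by 2**-octaves_down,
    so the usable top end of a sample scales the same way."""
    return (fs / 2) * 2 ** -octaves_down

for fs in (48_000, 192_000):
    print(fs, "Hz sample, 2 octaves down ->",
          top_frequency_after_pitch(fs, 2), "Hz top end")
# 48000 Hz sample, 2 octaves down -> 6000.0 Hz top end
# 192000 Hz sample, 2 octaves down -> 24000.0 Hz top end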

Edit: tried to make it clearer

wakibaki
post Feb 20 2012, 05:14
Post #28

Just as an aside, bit depth and sampling rate are interchangeable; you can trade one for the other, with the proviso that the Nyquist limit still holds. A simple on-off signal can encode any analogue signal to an arbitrary degree of accuracy as pulses of equal magnitude and duration, if the duration of the pulses is reduced sufficiently. Suppose a 44.1k/16 signal were to be recoded into a bit-wide stream. If in every sample period (1/44100 s) between 0 and 65535 bits (ones) could be transmitted, depending on the numeric value of the sampled signal, then the signal would be effectively recoded. For reconstruction purposes the numeric values could be recovered at a rate of 44.1k by a counter, if by no other expedient.
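A toy version of that recoding (a sketch with made-up helper names): each 16-bit sample becomes a run of ones inside its 1/44100 s slot, and a counter per slot recovers the value exactly:

CODE
def encode_bitwide(samples, slot_len=65535):
    """Each sample value v in 0..65535 becomes v ones followed by
    zeros, padding its slot to a fixed length."""
    stream = []
    for v in samples:
        stream.extend([1] * v + [0] * (slot_len - v))
    return stream

def decode_bitwide(stream, slot_len=65535):
    # a counter per sample period, as described above
    return [sum(stream[i:i + slot_len])
            for i in range(0, len(stream), slot_len)]

assert decode_bitwide(encode_bitwide([0, 123, 65535])) == [0, 123, 65535]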

The increasing popularity of high playback sampling rates could be attributed to a not entirely irrational desire for margin (overkill).

Amplifiers with very low THD are desired by some (me even). In a circumstance where the limits of perception are difficult to establish, for one reason or another, a simple strategy for ensuring the inaudibility of error is to exceed the probable limits by a margin, even a large margin.

w

saratoga
post Feb 20 2012, 05:59
Post #29

QUOTE (IgorC @ Feb 19 2012, 21:50)
QUOTE (saratoga @ Feb 19 2012, 23:22)
They are mathematically related in a very complex way

So 4x oversampling gives an additional bit of bit depth, right?


Right. When you oversample by 4x you have the same error per sample, but 4x the bandwidth. So the error per unit of bandwidth is 1/4 as much: 6 dB, meaning one more bit. At least that's the case assuming the error is randomly distributed, which may or may not be true.
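A quick numerical check of that noise-spreading argument (a rough sketch assuming white quantization noise; the 8-bit step size and signal lengths are arbitrary demo choices):

CODE
import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(0)
n = 1 << 16
x = rng.uniform(-1, 1, 4 * n)          # test signal at the 4x rate
q = 2.0 ** -7                          # 8-bit quantizer step

def quantize(sig):
    return np.round(sig / q) * q

# 1x path: band-limit first, then quantize at the low rate.
x1 = resample_poly(x, 1, 4)
n1 = quantize(x1) - x1

# 4x path: quantize at the high rate, then band-limit; only the
# in-band quarter of the noise power survives the decimation filter.
n4 = resample_poly(quantize(x) - x, 1, 4)

print("in-band noise advantage:",
      10 * np.log10(np.mean(n1**2) / np.mean(n4**2)), "dB")  # ~6 dB, i.e. ~1 bit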

QUOTE (Walter_Odington @ Feb 19 2012, 20:34)
What I meant originally was that by converting to a new sample rate, some of the new samples will have a position in time between old samples, and as such require some computational decision to be made about their value (interpolation, rounding, prediction, or whatever system is in place). Whatever the process involved, the new values will have lesser accuracy and the signal will have been degraded.


The process is called interpolation, and no, that's not generally a problem. Interpolation can be done with essentially unlimited accuracy, such that it is widely regarded (assuming proper implementation, of course) as having no impact on quality. Do a search; this has been discussed to death and in far more detail than I have time for.
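For a feel for how accurate "proper implementation" can be, here is a sketch using SciPy's polyphase resampler (the test tones are arbitrary): a 44.1 kHz -> 96 kHz -> 44.1 kHz round trip on a band-limited signal comes back with an error far below the 16-bit noise floor:

CODE
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 7919 * t)

up = resample_poly(x, 320, 147)      # 44.1 kHz -> 96 kHz (96000/44100 = 320/147)
back = resample_poly(up, 147, 320)   # 96 kHz -> 44.1 kHz

err = x[1000:-1000] - back[1000:-1000]          # skip filter edge effects
print("round-trip error:",
      20 * np.log10(np.sqrt(np.mean(err**2))), "dBFS")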

QUOTE (Walter_Odington @ Feb 19 2012, 20:34)
I'll try to elaborate on the suggestion that granular time stretching can benefit from a higher sample rate. When time stretching, the sound is split up into little grains of millisecond magnitude. The larger the grain, the more noticeable the granulation when the tempo is lowered


Since any time stretch algorithm can set the sampling rate as high as it likes for processing purposes, it doesn't really matter what the input sampling rate is.

QUOTE (Walter_Odington @ Feb 19 2012, 20:34)
Similarly, when sampling a sound and then playing it back at a different pitch, there is benefit to processing at a higher sample rate. Samplers re-pitch sound, and this is almost always done by resampling. If the sample contains audio up to half the sample frequency, then a 48 kHz sample played back at the original pitch has an upper limit of 24 kHz. If this is played back two octaves lower, the upper limit of the sound spectrum is 6 kHz.


Well, yes: for recording ultrasonic information, higher sampling rates are quite obviously useful. But we're talking about music, not bat calls. Music is generally assumed to occupy the range of frequencies humans can hear, and for that 48k is quite sufficient. If you wish to record things that humans cannot hear, then by all means go buy a 1 MHz A/D.
saratoga
post Feb 20 2012, 06:09
Post #30

QUOTE (wakibaki @ Feb 19 2012, 23:14)
The increasing popularity of high playback sampling rates could be attributed to a not entirely irrational desire for margin (overkill).

Amplifiers with very low THD are desired by some (me even). In a circumstance where the limits of perception are difficult to establish, for one reason or another, a simple strategy for ensuring the inaudibility of error is to exceed the probable limits by a margin, even a large margin.


I don't really agree with this analogy. Lower THD on an amplifier means higher accuracy and more headroom when driving more difficult loads. It's generally useful and a reasonable metric of 'goodness'.

Higher sampling rates are not generally useful, nor are they an acceptable measure of how good something is. I cannot look at an ADC, see that it supports 96k, and know that it has generally good performance. Plenty of very bad ADC devices support 96k. If I want to know how good an ADC is, one of the last things I would ever ask is the range of needlessly high sampling rates it supports. In contrast, I can look at an amp, see that it has 0.01% THD into a 4-ohm load, and be more than reasonably sure it'll perform well into another 4-ohm load.
knutinh
post Feb 20 2012, 08:34
Post #31

QUOTE (saratoga @ Feb 20 2012, 07:09)
QUOTE (wakibaki @ Feb 19 2012, 23:14)
The increasing popularity of high playback sampling rates could be attributed to a not entirely irrational desire for margin (overkill).

Amplifiers with very low THD are desired by some (me even). In a circumstance where the limits of perception are difficult to establish, for one reason or another, a simple strategy for ensuring the inaudibility of error is to exceed the probable limits by a margin, even a large margin.


I don't really agree with this analogy. Lower THD on an amplifier means higher accuracy and more headroom when driving more difficult loads. It's generally useful and a reasonable metric of 'goodness'.

Higher sampling rates are not generally useful, nor are they an acceptable measure of how good something is.
...

I think talking about "exceeding the probable limits by a large margin" has some validity when talking about possible artifacts from the lowpass filtering found in any ADC/DAC. If the sampling rate is 10 kHz, this filter will be heard by most people. If the sampling rate is increased to 32 kHz, a well-designed filter will probably not be heard. Going to 44.1 kHz, 48 kHz, 88.2 kHz, etc. could be seen as a "safeguard", if one can expect everything else to be equal, especially since this move has close to zero cost for many applications. It may well be an excessive, redundant safeguard, one that could easily be dropped after attending a lower-level course in digital signal processing and/or Human Hearing 101, but it is still a safeguard.

Any assertion that this _does_ affect sound, significantly, across many systems, would make my BS-detector go "beep".

-k

icstm
post Feb 20 2012, 11:32
Post #32

Since I wrote the OP, I am going to wade in here.

Those of you talking about time stretching and other effects are (in my mind) essentially creating music rather than playing it.
Those of you talking about safeguards: I think this only applies to the digitisation process (i.e. capturing in the ADC at higher sampling rates).

Once all adjustments are complete, why not encode in 44.1 / 16?


The talk about THD gives an example of the relevant attributes to check in analogue components.

Just to check, another stupid question (OK, the first one wasn't stupid, but I wanted to make sure no one could convince me otherwise):

Are we sure that humans have no reaction to near-ultrasonic frequencies?
zaentzpantz
post Feb 20 2012, 12:02
Post #33

QUOTE (icstm @ Feb 17 2012, 15:25)
OK, at the risk of sounding completely stupid:

Why do people listen to music with high sampling frequencies?
What is 88k+ providing them?
I understand why you might record at a high sampling rate, but why keep that for playback?

Looking through the FAQ, there are threads from 2003 that point out that the sampling frequency and bit-depth work in tandem. So the quantisation error of 16-bit at 44.1k has the opportunity to be corrected sooner at a higher sampling rate, and so in some ways is like a dithering pattern.

However, given the noise introduced by the analogue systems required to listen to music, an SNR within a 16-bit signal of ~96 dB seems pretty good.

So, assuming that speakers struggle to produce the sounds that a 192k sampling frequency allows (e.g. 96 kHz), and assuming that 16 bits are sufficient when compared to the analogue equipment in the system, what have I missed in these high-sampling playback formats?


The main benefit of a higher sampling rate is that the anti-aliasing filter can have a gentler roll-off. There is a school of thought that the harsh sound of CDs is due to the brickwall filter at around 20 kHz and the effects of its phase/time aberrations. With 96 kHz, the filter only needs to be fully attenuated by 48 kHz, so a gentle slope from 20 kHz to 48 kHz can be used, with much less phase/time aberration.
Another is the as-yet-unexplained effect of supersonic frequencies on the sound, discovered by engineers at George Martin's AIR Studios in the 70s. One channel module in the recording console always sounded better than the others, and after research by the designer (Rupert Neve) the only difference found was an omitted termination resistor, which allowed the frequency response to extend up to around 90 kHz. When the resistor was fitted, the module sounded the same as the others, and the frequency response was reduced to around 40 kHz. Nobody has found out why this could be heard, but the people involved were sufficiently respected to be believed, and this has given birth to the odd practice of extending audio response beyond our hearing limits.


2Bdecided
post Feb 20 2012, 12:21
Post #34

People are selling music at high sample rates and bit depths because enough people believe it sounds better to make it marketable (at least in a niche market!). Whether it actually sounds better or not is kind of irrelevant.

Where multiple versions are released, the high-resolution one can also be where a less compressed master is used, and/or where more information, artwork, etc. is included - or sometimes not. It's a nice excuse to charge more for these features. Just like Sony used to use better masters for making gold CDs in the 1990s: they'd sound just as good on a normal CD, but there'd be less excuse to charge extra.


The audio industry, which released a format good enough for all possible two-channel music recordings in 1983 and managed to sell all the music from the past 100 years all over again in that format, is now looking on in envy at the film industry, which has managed to pull the trick three times: VHS, DVD, and now Blu-ray. The crippled audio industry is desperate to pull the same trick again.


In recording studios, a lot of it is Emperor's New Clothes - and a lot of it is "why not?". There's no meaningful cost differential between working at 24/96 and at 20/48. No one in a real studio has worked at 16/44.1 during the last 20 years anyway.

Cheers,
David.
Arnold B. Krueger
post Feb 20 2012, 14:06
Post #35

QUOTE (zaentzpantz @ Feb 20 2012, 06:02)
The main benefit of a higher sampling rate is that the anti-aliasing filter can have a gentler roll-off. There is a school of thought that the harsh sound of CDs is due to the brickwall filter at around 20 kHz and the effects of its phase/time aberrations.


AFAIK that so-called school of thought was actually a school of non-thought and non-observation. It has been conclusively shown, repeatedly, that the alleged harsh sound of CDs was unrelated to the Red Book CD's technical parameters, or even to the portions of its implementation that were unique to the CD.

The biggest problem with the CD was that it was such a big technical improvement that it generally made things sound different by actually removing a bunch of technological veils that just about everybody with a serious interest in audio technology knew full well about.

From a practical viewpoint, the introduction of the CD was the most successful advance in media technology until the introduction of the DVD.
icstm
post Feb 20 2012, 14:43
Post #36

QUOTE (zaentzpantz @ Feb 20 2012, 11:02)
The main benefit of a higher sampling rate is that the anti-aliasing filter can have a gentler roll-off. There is a school of thought that the harsh sound of CDs is due to the brickwall filter at around 20 kHz and the effects of its phase/time aberrations. With 96 kHz, the filter only needs to be fully attenuated by 48 kHz, so a gentle slope from 20 kHz to 48 kHz can be used, with much less phase/time aberration.

So use higher sampling frequencies in the capturing and digitisation of the music, then store for playback at 44.1, surely?
QUOTE (zaentzpantz @ Feb 20 2012, 11:02)
Another is the as-yet-unexplained effect of supersonic frequencies on the sound, discovered by engineers at George Martin's AIR Studios in the 70s. One channel module in the recording console always sounded better than the others, and after research by the designer (Rupert Neve) the only difference found was an omitted termination resistor, which allowed the frequency response to extend up to around 90 kHz. When the resistor was fitted, the module sounded the same as the others, and the frequency response was reduced to around 40 kHz. Nobody has found out why this could be heard, but the people involved were sufficiently respected to be believed, and this has given birth to the odd practice of extending audio response beyond our hearing limits.

This is interesting; I'd like to see if this has been replicated elsewhere.
knutinh
post Feb 20 2012, 15:17
Post #37

QUOTE (2Bdecided @ Feb 20 2012, 13:21)
...in envy at the film industry, which has managed to pull the trick three times: VHS, DVD, and now Blu-ray. The crippled audio industry is desperate to pull the same trick again.

Perhaps four times: Blu-ray 3D.

If physical formats are deemed a viable source of income, may I suggest to the audio industry that they release something within the Blu-ray standard that:
1. Contains new content and/or higher mastering standards (as in "sounds better", not "our numbers are larger than yours")
2. Is multichannel or binaural
3. Prioritizes customer experience over DRM paranoia

I do believe that many (like me) are willing to pay more than $1 a song if the product is perceived as good enough. A lot of this can be done within the CD format, but multichannel may be hard.

-k
WernerO
post Feb 20 2012, 15:22
Post #38

QUOTE (zaentzpantz @ Feb 20 2012, 12:02)
There is a school of thought that the harsh sound of CDs is due to the brickwall filter at around 20 kHz and the effects of its phase/time aberrations. With 96 kHz, the filter only needs to be fully attenuated by 48 kHz, so a gentle slope from 20 kHz to 48 kHz can be used, with much less phase/time aberration.


1) The vast majority of ADC and DAC chips employ linear-phase FIR filters for anti-aliasing and anti-imaging, so there are no phase/time aberrations.

2) While one could use a shallow slope rolling off between 20 kHz and 40 kHz at a 2x sample rate, this is not what the vast majority of ADCs or DACs do. They cut off just as steeply (well, almost) as in the 1x case.
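To see point 1 concretely, here is a sketch (the filter length and cutoff are arbitrary illustrative numbers, not any chip's actual specs): a linear-phase FIR low-pass has symmetric taps, so every frequency is delayed by exactly the same amount and there is nothing to smear in time:

CODE
import numpy as np
from scipy.signal import firwin

fs = 96_000
taps = firwin(511, cutoff=21_000, fs=fs)   # steep low-pass near 21 kHz
assert np.allclose(taps, taps[::-1])       # symmetric taps -> linear phase
delay_ms = (len(taps) - 1) / 2 / fs * 1000
print("constant group delay:", delay_ms, "ms at every frequency")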


QUOTE (icstm @ Feb 20 2012, 14:43)
So use higher sampling frequencies in the capturing and digitisation of the music, then store for playback at 44.1, surely?


Prior to 'storing for playback at 44.1 kHz' one has to downsample, which implies and includes steep anti-alias filtering at 22.05 kHz. Sort of back to square one, no?

greynol
post Feb 20 2012, 17:05
Post #39

Have we conveniently forgotten that CD players commonly employed oversampling in order to aid in reconstruction?



icstm
post Feb 20 2012, 17:19
Post #40

QUOTE (greynol @ Feb 20 2012, 16:05)
Have we conveniently forgotten that CD players commonly employed oversampling in order to aid in reconstruction?

In the DAC? That is fine: they are not trying to create information that is not there; they are doing signal processing. I am no expert, but I think I am comfortable with a DAC using oversampling while the broad consensus argued through this thread still stands.

QUOTE (WernerO @ Feb 20 2012, 14:22)
Prior to 'storing for playback at 44.1 kHz' one has to downsample, which implies and includes steep anti-alias filtering at 22.05 kHz. Sort of back to square one, no?
Why would I have to do that? The signal has already been sampled; I already have the information in a desired format. The issue is that I have "too much" information, so I need to drop some of it. An anti-alias filter would be needed if I were unsure about the frequency content of the information I have, but (assuming I used one while working at the higher sampling rate) I am not: I know what frequencies I have.

In fact, before I resample to 44.1, I could process my signal at the higher sampling rate so that it does not include frequencies above 22k. Remember that this is not analogue information, where I might worry about a low-pass filter; this is in the digital domain.

Does that make sense?
xnor
post Feb 20 2012, 17:30
Post #41

QUOTE (icstm @ Feb 20 2012, 17:19)
In fact, before I resample to 44.1, I could process my signal at the higher sampling rate so that it does not include frequencies above 22k.

Which is called a low-pass filter that acts as an anti-aliasing filter.
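In a sketch (assuming the easy integer-ratio case, 88.2 kHz -> 44.1 kHz; the filter length is arbitrary), the two descriptions are literally the same operation:

CODE
import numpy as np
from scipy.signal import firwin, lfilter

fs = 88_200
x = np.random.default_rng(1).standard_normal(fs)   # stand-in for audio

taps = firwin(255, cutoff=22_050, fs=fs)  # "remove frequencies above 22k"
y = lfilter(taps, 1.0, x)[::2]            # ...then keep every 2nd sample

Without the low-pass step, everything between 22.05 and 44.1 kHz would fold back into the audio band; with it, the filter is, by definition, the anti-aliasing filter.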

greynol
post Feb 20 2012, 17:33
Post #42

Right, I see that we were talking about removing content above Nyquist in the digital domain prior to reconstruction. Still, there is lots of concern about anti-aliasing/anti-imaging, but not a lot of objective data demonstrating that it has been a problem over the last decade, if not longer.


saratoga
post Feb 20 2012, 17:35
Post #43

QUOTE (icstm @ Feb 20 2012, 11:19)
QUOTE (greynol @ Feb 20 2012, 16:05)

Have we conveniently forgotten that CD players commonly employed oversampling in order to aid in reconstruction?

In the DAC?

Yes. Modern DACs are oversampled. It gives you better bit depth and less risk of aliasing, essentially for free.

QUOTE (icstm @ Feb 20 2012, 11:19)
In fact, before I resample to 44.1, I could process my signal at the higher sampling rate so that it does not include frequencies above 22k. Remember that this is not analogue information, where I might worry about a low-pass filter; this is in the digital domain.

... which is what a modern oversampling ADC does. I suspect you think that a 44.1 kHz ADC/DAC actually runs at 44.1 kHz. This is not the case, and has not been for perhaps two decades, since the advent of practical CMOS transistors. Common 384x oversampling implies frequencies in the MHz range, where digitally applying the anti-alias filter is simple.

So yes, what you're saying is a sound idea. So sound, it's already integrated into every ADC/DAC you've ever used.
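For the curious, the idea inside such converters can be caricatured in a few lines (a toy first-order delta-sigma modulator; a sketch, not any particular chip's design): a 1-bit stream at a high rate whose low-passed average tracks the input:

CODE
import numpy as np

def delta_sigma_first_order(x):
    """x in [-1, 1], already at the oversampled rate; returns +/-1 bits."""
    acc, out = 0.0, np.empty_like(x)
    for i, v in enumerate(x):
        acc += v - (out[i - 1] if i else 0.0)   # integrate the error
        out[i] = 1.0 if acc >= 0 else -1.0      # 1-bit quantizer
    return out

osr = 64                                        # oversampling ratio
t = np.arange(64 * osr)
x = 0.5 * np.sin(2 * np.pi * t / (16 * osr))    # slow tone at the high rate
bits = delta_sigma_first_order(x)
recon = np.convolve(bits, np.ones(osr) / osr, mode="same")  # crude low-pass
print("reconstruction error (rms):", np.std(recon - x))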

Edit: wrote DAC when I meant ADC

Arnold B. Krueger
post Feb 20 2012, 18:19
Post #44

QUOTE (greynol @ Feb 20 2012, 11:05)
Have we conveniently forgotten that CD players commonly employed oversampling in order to aid in reconstruction?


Let's compare and contrast oversampling, which is what just about all modern digital<->analog converters do, with upsampling, and with simply sampling the same analogue signal at a far higher rate.

An oversampled converter can be thought of as a black box that is an interface between digital data at a certain data rate and analogue data that for all the world seems to have been sampled at that same rate. If we apply the rules of ducks to these gizmos in a standard Red Book CD player, they walk like they run at 44.1 kHz, they quack like they run at 44.1 kHz, and they look like they run at 44.1 kHz. For all practical purposes they are 44.1 kHz converters. To a certain degree, what happens inside them is none of our concern.

Upsampling is a completely different thing. If we think of this process as a black box, it is an interface between stuff that for all the world seems to have been sampled at one rate and stuff that now seems to be at some other rate. Only by analyzing the data at some deeper level of detail do we get the harsh surprise that first impressions are wrong: the data actually has properties that are the same as, or a little worse than, those it had at the lower sample rate.

Sampling at the higher rate in the first place is again a completely different thing. Here there are no harsh surprises: the data really does have the properties of data taken at the sample rate that is right there before us.

However, when we are talking about audio signals presented with 44.1 kHz sampling and 16-bit resolution, there may still be some harsh surprises. It turns out that, due to some inherent limitations of our ears, the data is generally indistinguishable from data taken at an even lower sample rate (ca. 16 kHz) and with less resolution (ca. 13-14 bits).

Hey I didn't make this world, I just try to make reliable observations of it! ;-)
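The point about upsampling is easy to demonstrate (a sketch; the signal and rates are arbitrary): take full-band 44.1 kHz data up to 88.2 kHz and the new top octave stays empty, because no process can recover content the lower rate never captured:

CODE
import numpy as np
from scipy.signal import resample_poly, welch

x = np.random.default_rng(2).standard_normal(44_100)  # full-band at 44.1 kHz
y = resample_poly(x, 2, 1)                            # upsample to 88.2 kHz

f, p = welch(y, fs=88_200, nperseg=4096)
ratio = p[f > 23_000].mean() / p[f < 20_000].mean()
print("power above the old Nyquist vs. in-band:",
      10 * np.log10(ratio), "dB")   # strongly negative: the new octave is empty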
IgorC
post Feb 20 2012, 18:20
Post #45

QUOTE (knutinh @ Feb 20 2012, 11:17)
QUOTE (2Bdecided @ Feb 20 2012, 13:21)
...in envy at the film industry, which has managed to pull the trick three times: VHS, DVD, and now Blu-ray. The crippled audio industry is desperate to pull the same trick again.

Perhaps four times: Blu-ray 3D.

Even more. Blu-ray and Blu-ray 3D have very good sales, and the video industry is already thinking of new technology (Ultra High Definition Television, 16x the resolution of Blu-ray), while the audio industry tries to push better-than-CD DVD-A/SACD without much success.

The prices of Blu-ray players (starting from only $100) and HD displays are falling very fast, while some audiophile audio hardware costs the same as on the day of its introduction 10 years ago. My guess is that the manufacturers of audiophile-class hardware keep prices high because the costs cannot come down when only a very small percentage of people are interested in buying such hardware.
Ethan Winer
post Feb 20 2012, 20:01
Post #46

QUOTE (Bartholomew MacGruber @ Feb 19 2012, 09:14)
I'm still confused as to why studios use really high sampling frequencies. I have a vague understanding of why higher bit depths might be needed for adjusting levels, but I don't get why they need higher sampling rates.


Because even professional recording engineers can be subject to the same magical thinking and lack of scientific rigor we commonly see among audiophiles. It's just as easy to start a religious war about this stuff in a recording forum as it is in a hi-fi forum.

--Ethan


Ron Jones
post Feb 20 2012, 20:46
Post #47

QUOTE (Ethan Winer @ Feb 20 2012, 12:01)
Because even professional recording engineers can be subject to the same magical thinking and lack of scientific rigor we commonly see among audiophiles.

Define "magical thinking".
knutinh
post Feb 20 2012, 21:24
Post #48

QUOTE (Ron Jones @ Feb 20 2012, 21:46)
Define "magical thinking".

1. "Some guy whom I admire uses a magical stone in his room for better sound."
2. "If I do the same, I will have equally good results."
(3. Some company approaches the person in 1 and is allowed to use his name in marketing, or pays him something directly or indirectly to have their magic stone prominently featured in an interview with some magazine.)

We are all susceptible to being "fooled" by the wonderfully complex thing between our ears: audiophiles, recording engineers and regular engineers alike.

-k
saratoga
post Feb 20 2012, 22:23
Post #49

QUOTE (Ethan Winer @ Feb 20 2012, 14:01)
QUOTE (Bartholomew MacGruber @ Feb 19 2012, 09:14)
I'm still confused as to why studios use really high sampling frequencies. I have a vague understanding of why higher bit depths might be needed for adjusting levels, but I don't get why they need higher sampling rates.


Because even professional recording engineers can be subject to the same magical thinking and lack of scientific rigor we commonly see among audiophiles.


Yes, this has been my experience as well. The skills required of a good recording engineer are quite different from those of a good DSP/software developer. The two skill sets have a little overlap, but are still quite distinct.
Arnold B. Krueger
post Feb 21 2012, 13:43
Post #50

QUOTE (Ron Jones @ Feb 20 2012, 14:46)
QUOTE (Ethan Winer @ Feb 20 2012, 12:01)
Because even professional recording engineers can be subject to the same magical thinking and lack of scientific rigor we commonly see among audiophiles.

Define "magical thinking".


"Magical thinking" is defined in Wikipedia, and that definition looks pretty good to me. Ethan partially defined it in his statement: it involves a lack of scientific rigor. The underlying logical error is usually confusing correlation with causality.

In recording, there are many cases where an engineer will put some highly touted new component into his recording chain and the next recording he makes sounds unusually good to him. The perceived improvement is in his view obviously due to the new component.

On balance, IME actual working recording engineers tend to be far more pragmatic than audiophiles, but there are a lot of poorly-trained dilettantes who make a few pretty good recordings and then try to pass themselves off as world-class experts.

