Sound, the human ear, and the digital world
2003-09-16 13:58:00
Hello all,

Side note first: please bear with me. I know this was discussed before in the 'lossless' thread, and maybe in some other places, but I still want (with your help) to understand some issues that weren't covered in previous threads.

Now to the actual subject in mind. Lately I've become interested in sound: what it is, how it's replicated via digital means, and how much of it can actually be heard. This is what I've understood so far as facts:

1. The human ear can hear frequencies up to about 20,000 Hz, which actually makes sense to me after some personal testing in CoolEdit, since I couldn't hear anything above that when generating a test sine wave.

2. According to the Nyquist theorem (which, by what Garf said, can be proved mathematically, and I'll take his word for it, as my math skills here are not very good), 40,000 samples per second is more than enough to digitally capture that frequency. A normal 44.1/16 CD can even represent frequencies up to 22,050 Hz, which should be more than enough 'headroom' just in case.

3. According to what I read, and this is also a fact, the 16-bit integers used on CDs are sufficient to describe 65,536 amplitude levels, which translates to a theoretical dynamic range of about 96 dB for the playback system. 'Quantization errors', which from what I've understood is a nicer term for amplitude errors, should only become apparent when the sound is very quiet, i.e. near the -96 dB range.

Now to the questions I'm asking:

1. Why do sampling rates higher than 44,100 exist, since nobody can hear beyond 20,000 Hz? For example, DVDs are sampled at 48,000 samples per second, and DVD-A and SACD go even higher: DVD-A supports 96,000 samples per second, and I think I've even heard of 192,000. What did the people designing those sample rates hope to achieve?

edit: The main question is why even bother to preserve frequencies nobody will ever be able to hear? <Sarcasm> To test aliens' hearing in the future? </Sarcasm>

Possible guesses:
a. Storage became so cheap, people just thought, hey why not, let's get some *more* headroom.
b. Obscure copy protection, so that devices limited to 44.1 couldn't touch the audio stream.
c. More quality (?!?)

2. Why is a bit depth over 16 bits really needed (aka the infamous 24-bit)? What are those 16,777,216 amplitude levels, providing an amazing 144 dB range, actually needed for? I can't imagine people listening to normal music/voice/whatever being sensitive to quantization errors down at the -96 dB level. If I'm wrong, please explain how.

Again, my possible guesses:
a. Storage is cheap, so again, why not.
b. Even more headroom for some sound processing, which doesn't explain why some sound cards/software or sound devices are so proud of 24-bit output.
c. More quality (?!?)

To sum up my post, I really want to know whether those advancements actually contribute (if only by a little) to the sound quality that can be heard by the end consumer (or, for that matter, the human ear), or whether they are, in the end, just placebo, and that is the real reason the DVD-A and SACD formats are failing.
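Side note: the numbers being discussed here (Nyquist limit, dynamic range per bit depth, size of a quantization error) are easy to check yourself. A minimal Python sketch; the function names are mine, just for illustration:

```python
import math

def nyquist_limit_hz(sample_rate: float) -> float:
    """Highest frequency a given sample rate can capture (half the rate)."""
    return sample_rate / 2

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of n-bit PCM: 20 * log10(2^n) dB."""
    return 20 * math.log10(2 ** bits)

def quantize(x: float, bits: int) -> float:
    """Round a sample x (in [-1, 1]) to the nearest n-bit level."""
    levels = 2 ** (bits - 1)
    return round(x * levels) / levels

print(nyquist_limit_hz(44100))         # 22050.0
print(round(dynamic_range_db(16), 2))  # 96.33 -- the ~96 dB of CD audio
print(round(dynamic_range_db(24), 2))  # 144.49 -- the ~144 dB of 24-bit
# Worst-case quantization error is half a step, i.e. 1/2^16 for 16-bit:
print(abs(quantize(0.1234, 16) - 0.1234) <= 1 / 2 ** 16)  # True
```

So each extra bit buys about 6 dB of dynamic range, and the sample rate only ever determines the highest representable frequency, not the amplitude resolution.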
You can fool some of the people all of the time, and all of the people some of the time, but you can not fool all of the people all of the time. - Abraham Lincoln