Topic: Bell Speakers

Bell Speakers

Reply #25
Quote
Doesn't the ear, like our eyes, have a number of discrete wavelength receptors (hair cells)? Unlike the eye, there are many more, and I don't know of any reason to think that the wavelengths are the same for different people, since the response would be based on size rather than chemistry.

Anyway, it would seem that with the correct frequencies (or "primary pitches") you could give "full spectrum" sound in the same way 3 primary colors can represent the whole visible spectrum.


The eye basically has four receptor types, and if we can disregard the rod cells, you have a three-dimensional colourspace. The analogy would be a chord of three tones. However, the eye can also tell the colours of two nearby spots apart, so each eye's perception of a still picture is a 'red-volume', a 'green-volume' and a 'blue-volume' for each pixel (down to the eye's resolution) on a two-dimensional surface. Each ear's perception of a sound is a volume for each 'frequency pixel' (that is, down to the ear's resolution). Here it looks like the eye catches much more (2 dimensions into 3 rather than 1 into 1), but it does of course boil down to resolution.
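
Just to make the shapes concrete, a tiny sketch -- the resolutions here are invented, nothing physiological about them:
Code: [Select]
import numpy as np

# Eye: a still picture is a 2-D grid of spatial pixels, each carrying three
# channel volumes (the trichromatic 'chord').
height, width = 480, 640                   # hypothetical spatial resolution
picture = np.zeros((height, width, 3))     # 2 spatial dims -> 3 channels each

# Ear: at any one instant, a volume for every 'frequency pixel' it can resolve.
n_freq_bins = 3000                         # hypothetical frequency resolution
sound_instant = np.zeros(n_freq_bins)      # 1 frequency dim -> 1 volume each

print(picture.shape, sound_instant.shape)  # (480, 640, 3) vs. (3000,)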

But to get a sound analogue -- with noise cancelling -- I guess you would have to stick to the light-through-a-slit experiment?

You need a time dimension, of course, as the problem with the bell speakers is that they ring out in time in a less-than-perfectly-controlled way. If we suppose that the 'bell speaker' cannot dampen the bells, only cancel their sound with a phase-inverted signal, then I guess a picture analogy could be as follows:
- a 'good synthesizer' (no bells yet) would be akin to a set of lasers pointing at the screen behind the slit.
- the 'bells' are at first less precise than the synth, and they have overtones, so: exit lasers, enter some light bulbs.
- the bells would also reverberate; if we assume that the bells cannot be dampened, only their sound cancelled by ringing another bell, that would correspond to the screen behind the slit being phosphorescent?
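
To put the phase-inversion idea in numbers, a minimal sketch: one decaying partial with a made-up frequency and decay rate (a real bell has many partials, and the cancelling bell would never match it exactly):
Code: [Select]
import numpy as np

fs = 44100                                     # sample rate in Hz, assumed
t = np.arange(0, 1.0, 1.0 / fs)                # one second of samples
bell = np.exp(-3.0 * t) * np.sin(2 * np.pi * 440.0 * t)   # decaying partial

anti = -bell                                   # perfectly phase-inverted copy
print(np.max(np.abs(bell + anti)))             # 0.0 -- ideal cancellation

# With a slightly mistimed cancelling bell, a residual keeps ringing:
late = np.exp(-3.0 * t) * np.sin(2 * np.pi * 440.0 * t + 0.05)
print(np.max(np.abs(bell - late)))             # small but nonzero leftover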


Hm.  Even the analogy is tricky.

Bell Speakers

Reply #26
Quote
Anyway, it would seem that with the correct frequencies (or "primary pitches") you could give "full spectrum" sound in the same way 3 primary colors can represent the whole visible spectrum.


Ears are spectral analyzers with some limited bandwidth, where each input frequency stimulates a single group of cilia, whose output is mapped to a mental scale.

The eye is a completely different device. It responds to only three distinct frequencies, though with fairly gradual rolloff between them. The logical combination of these three inputs is what creates colour perception.
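
To put the contrast in concrete terms, a toy sketch -- the three 'eye' curves below are invented Gaussians standing in for cone sensitivities, not measured data:
Code: [Select]
import numpy as np

# 'Ear': a short signal mapped onto many frequency bins, one response each.
fs = 8000
t = np.arange(0, 0.1, 1.0 / fs)
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
ear_response = np.abs(np.fft.rfft(signal))             # hundreds of outputs

# 'Eye': a whole light spectrum collapsed into three overlapping band sums.
wavelengths = np.linspace(380, 700, 321)               # nm
stimulus = np.exp(-((wavelengths - 550) / 40.0) ** 2)  # some light spectrum

def band(center, width=60.0):                          # invented sensitivity curve
    return np.exp(-((wavelengths - center) / width) ** 2)

eye_response = [np.sum(stimulus * band(c)) for c in (440.0, 540.0, 570.0)]
print(len(ear_response), len(eye_response))            # many bins vs. just 3

The ear-side output keeps the frequency axis; the eye-side output throws it away and keeps only three numbers per spot.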

You cannot compare them so lightly.

That was not a pun.

Bell Speakers

Reply #27
Quote
Anyway, it would seem that with the correct frequencies (or "primary pitches") you could give "full spectrum" sound in the same way 3 primary colors can represent the whole visible spectrum.


Ears are spectral analyzers with some limited bandwidth, where each input frequency stimulates a single group of cilia, whose output is mapped to a mental scale.

The eye is a completely different device. It responds to only three distinct frequencies, though with fairly gradual rolloff between them. The logical combination of these three inputs is what creates colour perception.



Don't the cilia work analogously to the cones in having overlapping bandwidths, so you can detect and place tones between the center frequencies of adjacent cilia by the relative response?


 

Bell Speakers

Reply #28
Some googling tells me that the overlap is significant and the response rolloff rather shallow (see bottom image, fig 12.8).
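
A toy version of that place-coding picture, with invented Gaussian tuning curves and made-up center frequencies:
Code: [Select]
import numpy as np

def tuning(f, center, bandwidth=200.0):        # broad, overlapping curve (invented)
    return np.exp(-((f - center) / bandwidth) ** 2)

c1, c2 = 1000.0, 1200.0                        # two adjacent 'channel' centers
tone = 1080.0                                  # a tone between the two centers

r1, r2 = tuning(tone, c1), tuning(tone, c2)    # both channels respond
estimate = (r1 * c1 + r2 * c2) / (r1 + r2)     # read the position from the ratio
print(r1, r2, estimate)                        # estimate falls between the centers

Both channels fire, and their relative response places the tone between the centers, which is the overlap doing useful work.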

In any case, the ear is like a 1-pixel eye that responds to many frequencies, while the eye has many pixels and only three frequencies.

So, back to your original point, you can get proper sound by combining sine waves (after all, this is essentially what lossy audio compression is based on), but I still would not call it "in the same way as primary colors", since audio is a plain mapping of frequency onto response and vision is about combining relative intensities of three values.
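
A minimal sketch of that frequency-decomposition idea -- one frame, no psychoacoustics, and the two tones are chosen to land exactly on analysis bins:
Code: [Select]
import numpy as np

fs = 8000
t = np.arange(0, 0.064, 1.0 / fs)                      # one 512-sample frame
frame = np.sin(2 * np.pi * 468.75 * t) + 0.3 * np.sin(2 * np.pi * 1250.0 * t)

spectrum = np.fft.rfft(frame)                          # decompose into sines
weakest = np.argsort(np.abs(spectrum))[:-4]            # all but the 4 strongest bins
spectrum[weakest] = 0                                  # throw the rest away
approx = np.fft.irfft(spectrum, n=len(frame))          # rebuild from what is left

print(np.max(np.abs(frame - approx)))                  # tiny reconstruction error

Real codecs use overlapping windows and a psychoacoustic model to decide which components can go, but the frequency-domain idea is the same.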

Lossy image compression of the JPEG variety also exploits decomposition into frequencies, but there it happens in the spatial domain, per channel, and after a transform from RGB to YCbCr. So again, I don't want to compare audio and vision so easily, and prefer to keep similes like that out of the discussion.
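
And a correspondingly rough sketch of the JPEG-style path -- a colour transform, then a per-channel 2-D frequency transform on an 8x8 block, with one arbitrary quantization step standing in for a real quantization table:
Code: [Select]
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
rgb = rng.random((8, 8, 3))                            # one 8x8 RGB tile

# RGB -> luma plus two colour-difference channels (roughly the YCbCr idea).
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
y = 0.299 * r + 0.587 * g + 0.114 * b
cb, cr = b - y, r - y                                  # unscaled chroma, illustration only

def roundtrip(channel, step=0.1):
    coeffs = dctn(channel, norm='ortho')               # 2-D spatial-frequency transform
    coeffs = np.round(coeffs / step) * step            # coarse quantization
    return idctn(coeffs, norm='ortho')

for name, chan in (("Y", y), ("Cb", cb), ("Cr", cr)):
    print(name, np.max(np.abs(chan - roundtrip(chan))))   # small, per-channel error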