Excerpts from Fons Adriaensen's message of 2011-06-17 19:48:10 +0200:

> On Fri, Jun 17, 2011 at 10:57:57AM +0200, Philipp Überbacher wrote:
>
> > After a little off-list discussion with Gabriel and refreshing my
> > basic trigonometry a bit I understand your formula, but I still
> > have no idea what it tells us. The main problem might be that I
> > have no idea how velocity vectors relate to sound and what the
> > decreasing magnitude tells us.
>
> It's not easy to explain without some wave physics and maths. But I
> will try :-) Note that I'm simplifying things a bit, and leaving out
> a lot of 'ifs' and 'whens' - some of the things presented below are
> true only under some conditions (which you can assume to be
> satisfied).

Thanks for your extensive explanation, Fons. I slept on it for a
couple of days and hope I understand it a little better now.

> Sound consists of variations of air pressure that propagate as
> waves. At each point (x,y,z) in space we have a pressure that is a
> function of time. Written as a function, we have P(x,y,z,t), which
> is called the _pressure field_.
>
> To generate those pressure variations some air must move. At each
> point (x,y,z) the small volume of air surrounding it has some
> velocity which is also a function of time: V(x,y,z,t), the
> _velocity field_.
>
> While P() is a single value for any given (x,y,z,t), V() is a
> vector: it has not only a magnitude but also a direction. In 3-D
> space, a vector can be seen as a combination of 3 independent
> values, one for each of the three cartesian coordinates.
>
> One way to look at sound waves is to see them as the result of the
> interaction between P and V: they sort of generate each other,
> which is what makes the wave propagate in space.

I found a bit of explanation of wave propagation in one of my books,
but it seems to differ slightly. It takes energy and heat into account
and says (simplified) that there are two extreme states, one without
motion but with increased pressure and heat, and one with maximum
motion and little pressure/heat, with everything in between. I guess
this corresponds to P() and V() in your explanation?

> For a real single sound source the direction of the vector V(t) is
> that towards the source, and P(t) and V(t) in any given point are
> closely related.

Towards the source? No idea whether it matters, just wondering.

> They are of course measured in different units (Pascal and
> meters/second resp.), one is a scalar and the other a vector, but
> they are proportional.

Proportional or inversely proportional? Again I'm thinking of the
model from the book, with the two states of pressure and velocity.
With this in mind, pressure would be large when velocity is small and
vice versa.

> So given P(t), we know the magnitude of V(t) - they are the same
> signal. An omni mic gives a signal proportional to P(t), while a
> figure-of-eight mic gives a signal proportional to the projection
> of V(t) on its axis. For a single source they will produce the same
> signal (if you point the bidirectional mic at the source).
>
> This is no longer true if we have the same signal reproduced by two
> sources, e.g. two stereo speakers driven by the same signal to
> generate a virtual source at the center. The P() will add up, but
> the V() add as vectors, so the sum will be shorter than the sum of
> the magnitudes, by the cosine factor mentioned before. So we no
> longer have the fixed relation between P and the magnitude of V.
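To convince myself of that cosine factor I tried a small numeric
sketch (Python; the +/-30 degree angles and equal gains are just my
own toy numbers, not anything from your mail):

# Two speakers driven by the same signal: pressures add as scalars,
# velocities add as vectors (unit vector towards each speaker, scaled
# by the same gain, since P and |V| are proportional per source).
import math

angles_deg = [-30.0, 30.0]   # speaker directions as seen by the listener
gains      = [1.0, 1.0]      # same signal, same level

p_sum = sum(gains)           # scalar sum of the pressures

vx = sum(g * math.cos(math.radians(a)) for g, a in zip(gains, angles_deg))
vy = sum(g * math.sin(math.radians(a)) for g, a in zip(gains, angles_deg))
v_mag = math.hypot(vx, vy)   # magnitude of the vector sum

print("scalar sum of P:", p_sum)                 # 2.0
print("|vector sum of V|:", round(v_mag, 3))     # 2*cos(30) ~ 1.732
print("ratio |V|/P:", round(v_mag / p_sum, 3))   # cos(30) ~ 0.866
print("direction of V (deg):", round(math.degrees(math.atan2(vy, vx)), 1))

The ratio comes out as cos(30 deg) ~ 0.87, which I take to be the
factor you mentioned, and the direction of the vector sum points
straight ahead, as expected for a centre phantom source.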
I understand that the vector sum is different with two speakers. What
is still hard for me to grasp is that, if P() and V() generate each
other, P() can produce a V() that is proportional to it in one case
and out of proportion to it in another.

> At low frequencies (where the wavelength is much larger than the
> size of a human head), all the information we have to determine the
> direction of the source is the phase difference between the signals
> at the two ears. This is not just a single value, we can (and do)
> move our heads and 'explore' this phase difference as a function of
> those movements.
>
> Now some wave physics and maths will show that this phase
> difference depends only on the relative magnitudes of P() and V()
> that would exist at the point halfway between the ears if our head
> were not there, and on the direction of our head w.r.t. that of the
> vector V() (and of course on frequency). What we perceive as the
> direction of the source is the direction of V(). But if the
> magnitudes of P() and V() don't have the right ratio, the phase
> difference will not be as expected by our brain, and this will make
> the virtual source less stable and convincing.
>
> As said, this is valid only for low frequencies. At mid and high
> frequencies other mechanisms take over, but these can also be
> analysed in terms of the ratio between a scalar sum and the
> magnitude of a vector sum, and lead to similar conclusions.

I guess this sort of analysis or model is used for more complex
systems like ambisonics as well? (I tried to write down how I
understand that generalisation; see the small sketch in the P.S.
below.)

> HTH,

Yes, thank you very much for your explanation. I've not fully grasped
the "whys", but I understand the idea.

Best regards,
Philipp
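P.S. Here is my attempt at that generalisation (my own toy code, not
something from your mail). For N speakers with gains g_i and unit
direction vectors u_i, I compute the ratio of the vector sum to the
scalar sum in two flavours: with the gains themselves (which, as far
as I know, is the Gerzon "velocity vector" rV behind the low-frequency
phase argument) and with the squared gains (the "energy vector" rE for
mid/high frequencies). The square layout and the gains below are just
an example I made up.

import math

def unit(angle_deg):
    a = math.radians(angle_deg)
    return (math.cos(a), math.sin(a))

def localisation_vectors(angles_deg, gains):
    """Return (rV, rE): vector sum over scalar sum, using the gains g
    for rV and the squared gains g^2 for rE."""
    dirs = [unit(a) for a in angles_deg]
    gv = sum(gains)
    ge = sum(g * g for g in gains)
    rV = (sum(g * d[0] for g, d in zip(gains, dirs)) / gv,
          sum(g * d[1] for g, d in zip(gains, dirs)) / gv)
    rE = (sum(g * g * d[0] for g, d in zip(gains, dirs)) / ge,
          sum(g * g * d[1] for g, d in zip(gains, dirs)) / ge)
    return rV, rE

# Example: square speaker layout, made-up gains for a source at 0 deg.
angles = [45.0, 135.0, -135.0, -45.0]
gains  = [0.75, 0.25, 0.25, 0.75]
rV, rE = localisation_vectors(angles, gains)
for name, (x, y) in (("rV", rV), ("rE", rE)):
    print(name, "magnitude %.3f" % math.hypot(x, y),
          "direction %.1f deg" % math.degrees(math.atan2(y, x)))

If I got this right, a magnitude of 1 would mean a single real source
in that direction, and the further the magnitude drops below 1, the
less stable the virtual source should be.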