Can someone explain the exact method that ALSA uses to detect underrun with OSS emulation? I have a driver that reports an underrun on almost every period when I use OSS emulation and play at an unsupported sample rate. So I'm assuming that my hardware is playing the audio too fast or too slow, and the driver is completing periods before ALSA expects them.

What I don't understand is: how does ALSA know that a period finished too early? Does it use a timer, or is it purely application-driven?

--
Timur Tabi
Linux kernel developer at Freescale

_______________________________________________
Alsa-devel mailing list
Alsa-devel@xxxxxxxxxxxxxxxx
http://mailman.alsa-project.org/mailman/listinfo/alsa-devel