Hi PulseAudionauts,

I've been meaning to experiment a bit with low-latency voice codecs, and naturally want to add as little latency as possible on top of what the codec itself imposes, on both capture and playback. (My guess is that the added latency would lie somewhere between min(capture_latency, playback_latency) and capture_latency + playback_latency, depending on how well the end of capture and the start of playback are synchronized.)

Q: Does it matter for latency whether I program against ALSA or PulseAudio? This is assuming a setup like Ubuntu's, where the default ALSA device uses a PulseAudio backend. (Portability and code complexity may favor one solution or the other, but that's not what I'm asking about.)

Yesterday I would have guessed that anything possible with the PulseAudio API should also be possible with the ALSA API, but after reading http://0pointer.de/blog/projects/pulse-glitch-free.html I'm not so sure. Unfortunately, neither the FAQ nor http://0pointer.de/blog/projects/guide-to-sound-apis.html was enough to clue me in.

Any other must-know knowledge for someone curious about low-latency audio who has previously mostly dabbled with GStreamer and similar-level APIs?

-- Philip Jägenstedt