Hi,

I am still trying to configure PulseAudio for a low-latency app, and I could not figure out why the reported latency is much higher than what I specified. For example, with tsched=0 and four fragments of 5ms, the latency wouldn't go below 40ms, no matter how I specified the latency parameter.

Turns out that in protocol-native.c we have this piece of code:

    /* FIXME: This is actually larger than necessary, since not all of
     * the sink latency is actually rewritable. */
    if (tlength_usec < s->configured_sink_latency + 2*minreq_usec)
        tlength_usec = s->configured_sink_latency + 2*minreq_usec;

Essentially, this test limits our ability to reduce the latency below twice the size of the hardware buffer. I believe it is overkill for low-latency apps, where you don't really want to rewrite into the hardware buffer in the first place. If there's any mixing to be done, it could start when the next write happens; there's no need to rewind. The only thing you would want is to have one fragment stored in the server buffer, and even that might be overkill.

Interestingly, I commented out this piece of code, yet I still see a latency in the 40ms range. It looks like sink_input_update_max_request_cb() changes tlength, and hence the latency, as well. Not sure why?

Feedback welcome,
- Pierre
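
P.S. For reference, here is a minimal sketch of the kind of client-side latency request I mean, using the simple API. The 20ms target, sample spec and stream names are just illustrative, not the exact code from my test:

    #include <stdio.h>
    #include <pulse/simple.h>
    #include <pulse/error.h>

    int main(void) {
        pa_sample_spec ss = { .format = PA_SAMPLE_S16LE, .rate = 48000, .channels = 2 };

        /* Ask for ~20ms of server-side buffering (tlength) and leave the
         * other fields to the server's defaults via (uint32_t) -1. */
        pa_buffer_attr attr;
        attr.maxlength = (uint32_t) -1;
        attr.tlength   = (uint32_t) pa_usec_to_bytes(20000, &ss);
        attr.prebuf    = (uint32_t) -1;
        attr.minreq    = (uint32_t) -1;
        attr.fragsize  = (uint32_t) -1;

        int err = 0;
        pa_simple *s = pa_simple_new(NULL, "latency-test", PA_STREAM_PLAYBACK, NULL,
                                     "low-latency playback", &ss, NULL, &attr, &err);
        if (!s) {
            fprintf(stderr, "pa_simple_new() failed: %s\n", pa_strerror(err));
            return 1;
        }

        /* Check what latency the server actually gives us back. */
        pa_usec_t latency = pa_simple_get_latency(s, &err);
        printf("reported latency: %llu usec\n", (unsigned long long) latency);

        pa_simple_free(s);
        return 0;
    }

The point is that whatever tlength a client asks for here gets clamped back up by the check quoted above.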