> Not sure what you mean with "sink" and "ring buffer". When mixing, data
> goes from the sink-input / "client-server" buffer into the DMA buffer
> directly.

Please look at protocol-native.c. I am not sure why there is this
division of latency in two; for low latency you can probably decrease
the client buffer some.

    /* So, the user asked us to adjust the latency of the stream
     * buffer according to what the sink can provide. The
     * tlength passed in shall be the overall latency. Roughly
     * half the latency will be spent on the hw buffer, the other
     * half of it in the async buffer queue we maintain for each
     * client. In between we'll have a safety space of size
     * 2*minreq. Why the 2*minreq? When the hw buffer is completely
     * empty and needs to be filled, then our buffer must have
     * enough data to fulfill this request immediately and thus
     * have at least the same tlength as the size of the hw
     * buffer. It additionally needs space for 2 times minreq
     * because if the buffer ran empty and a partial fillup
     * happens immediately on the next iteration we need to be
     * able to fulfill it and give the application also minreq
     * time to fill it up again for the next request. Makes 2 times
     * minreq in plus. */

    if (tlength_usec > minreq_usec*2)
        sink_usec = (tlength_usec - minreq_usec*2)/2;
    else
        sink_usec = 0;

    pa_log_debug("Adjust latency mode enabled, configuring sink latency to half of overall latency.");

>> and events up to 4ms apart. Has anyone tried the changes we pushed
>> recently at the kernel level to properly handle the ring buffer pointer
>> and delay? I believe some of the underruns may be due to the ~1ms
>> inaccuracy that we had before these changes. If your driver is already
>> giving you a 25% precision error, no wonder things are broken?
>
> Right now we have bigger issues, such as why nobody is responding to
> messages such as this one [1] :-(

Quite frankly, I did not understand the problem you are facing or what
these measurements show. Maybe you're on to something, but it's hard to
provide feedback here.
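
To make the arithmetic in the snippet quoted above concrete, here is a small
standalone sketch of the split (my own illustration, not code from
protocol-native.c; the 50 ms / 5 ms values are made-up example numbers, not
PulseAudio defaults):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Example values, chosen only for illustration */
        uint64_t tlength_usec = 50000; /* overall requested latency: 50 ms */
        uint64_t minreq_usec  = 5000;  /* minimum request size: 5 ms */
        uint64_t sink_usec;

        /* Same arithmetic as in the quoted snippet: roughly half of
         * tlength goes to the hw buffer, minus the 2*minreq safety space */
        if (tlength_usec > minreq_usec * 2)
            sink_usec = (tlength_usec - minreq_usec * 2) / 2;
        else
            sink_usec = 0;

        /* The remainder of tlength stays in the per-client buffer, i.e.
         * the hw buffer's worth of data plus the 2*minreq safety space */
        uint64_t client_usec = tlength_usec - sink_usec;

        printf("hw buffer:     %" PRIu64 " us\n", sink_usec);   /* 20000 */
        printf("client buffer: %" PRIu64 " us\n", client_usec); /* 30000 */
        return 0;
    }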