On 26.11.2015 19:43, Tanu Kaskinen wrote:
> On Thu, 2015-11-26 at 09:55 +0100, Georg Chini wrote:
>> On 26.11.2015 08:41, Georg Chini wrote:
>>> On 26.11.2015 01:49, Tanu Kaskinen wrote:
>>>> On Wed, 2015-11-25 at 22:58 +0100, Georg Chini wrote:
>>>>> On 25.11.2015 19:49, Tanu Kaskinen wrote:
>>>>>> On Wed, 2015-11-25 at 16:05 +0100, Georg Chini wrote:
>>>>>>> On 25.11.2015 09:00, Georg Chini wrote:
>>> I am not really sure where this discussion is leading. We are also
>>> mixing up different topics at the moment. The first one is a matter
>>> of the safeguards. As already said in a previous mail, in my opinion
>>> those safeguards only have to cover the most common cases and do not
>>> need to be perfect, because the controller will take care at runtime.
>>
>> Let's pick up the safeguard discussion again. Maybe there is a line of
>> reasoning based on your calculations in the previous mails. As far as
>> I can tell, we are talking about three different cases:
>>
>> 1) Interrupt-driven alsa device
>>
>> I would propose to accept that for this kind of device buffer_latency
>> needs to be one default-fragment-size plus some safety margin. This is
>> a value proven in practice, and if I remember correctly, you reasoned
>> in another mail that it could indeed fit in that special case.
>
> Our calculations yield the same result in the usb sound card example
> that we've been using, because your "average sink/source latency" is
> effectively the same thing as the maximum sink buffer fill level, and
> default-fragment-size in buffer_latency compensates for the missing
> maximum source buffer fill level in the total latency calculation.
>
> In my formula, the "baseline" for buffer_latency is zero. By that I
> mean that if there weren't any complications like rate errors,
> scheduling delays, latency measurement errors or "surprise latencies"
> (latencies in sinks and sources that are higher than the configured
> latency), then buffer_latency could be zero without any risk of
> underruns.
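Just to check that I read this correctly, a quick example with made-up
numbers: with a default-fragment-size of 25 ms and a sink that can hold
at most 50 ms, my rule and your formula arrive at the same requirement,

    total_latency >= max_sink_fill + max_source_fill + margin
                  =  50 ms         + 25 ms           + margin

because the fragment size stands in for the missing maximum source
buffer fill level, and everything above that sum is only margin against
rate errors, scheduling delays and measurement errors.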
>> 2) Timer-based alsa device
>>
>> I arrived at the conclusion that buffer_latency has to be
>> 0.75 * sink_latency. Would it be reasonable to argue that we have to
>> keep one configured sink latency of audio around on the source side?
>> If yes, it would make clear where that 0.75 factor is coming from.
>> The correct value for buffer_latency would then be
>>
>> buffer_latency = configured_sink_latency - 0.25 * configured_source_latency
>>
>> which gives 0.75 * configured_sink_latency if both are equal.
>
> None of this makes sense to me, sorry. The starting point for the
> calculations is that the total latency has to be big enough to handle
> the case where both the sink and the source buffers are full at the
> same time. Once you have made sure that the total latency is at least
> "configured source latency + configured sink latency", buffer_latency
> doesn't have to be proportional to the sink latency.
>
>> 3) The general case, so none of the above
>>
>> This is still unclear. As far as I understand, in this case there
>> will never be timer-based scheduling. Am I correct? If yes, it can be
>> distinguished from case 1) just by checking the name of the device.
>> In this general case we probably have to choose very conservative
>> settings, as we don't know exactly how the device will behave.
>>
>> If source and sink are of different device types, we have to take the
>> larger value.
>
> I think all three cases can be handled using the same formula, when
> "configured latency" is replaced with "maximum buffer fill level". If
> my formula gives an overly conservative result in some case, that must
> mean that the maximum sink or source buffer fill level is less than we
> think. In case of timer-based scheduling in the alsa devices, I
> believe it's correct to assume that the configured latency is the
> maximum buffer fill level, modulo some safety margin that the alsa
> code uses when configuring the wakeup timer.

No, it does not give an overly conservative result: after understanding
what you mean, it matches my experience exactly, so it seems your
assumptions are correct.
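For the record, here is roughly how I would now write the safeguard
down, following your "maximum buffer fill level" formulation. This is
only a sketch with placeholder names, not a patch against
module-loopback:

    #include <pulse/sample.h>    /* pa_usec_t */

    /* Minimum buffer_latency so that the total latency can absorb the
     * worst case where the sink and the source buffers are both
     * completely full at the same time. All names are placeholders. */
    static pa_usec_t min_buffer_latency(pa_usec_t max_sink_fill,
                                        pa_usec_t max_source_fill,
                                        pa_usec_t configured_sink_latency,
                                        pa_usec_t configured_source_latency,
                                        pa_usec_t safety_margin) {
        pa_usec_t needed = max_sink_fill + max_source_fill + safety_margin;
        pa_usec_t covered = configured_sink_latency + configured_source_latency;

        /* With timer-based scheduling the configured latency equals the
         * maximum fill level, so the baseline is just the safety margin.
         * For an interrupt-driven device, max_source_fill would be one
         * default-fragment-size; in the general case we would pass
         * conservative estimates. */
        return needed > covered ? needed - covered : safety_margin;
    }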