On Wed, 2016-08-17 at 17:06 +0200, Georg Chini wrote:
> On 17.08.2016 15:22, Tanu Kaskinen wrote:
> > 
> > On Wed, 2016-08-17 at 14:58 +0200, Georg Chini wrote:
> > > Yes, the benefit would be the same, but with much more overhead
> > > and also with the drawback of discontinuities at the beginning.
> > What overhead? Do you think there's more overhead than one if
> > check?
> > 
> > if (delay < 0) {
> >     pa_smoother_adjust_reference_point(smoother, delay);
> >     delay = 0;
> > }
> 
> What about the function? There is a pa_smoother_set_time_offset()
> function, but it does not add up consecutive changes as would be
> necessary, so we would have to store the current offset elsewhere.
> This is an additional variable that must be tracked and initialized.
> To me this seems more complex than copying an already existing
> function.

Adding pa_smoother_adjust_time_offset() probably wouldn't be
complicated.

> > > and going through the effort of making them positive just because
> > > it looks nicer does not make sense to me.
> > "Looking nicer" in this case means less complexity in the core.
> > Maybe you think that has no value? I think it does have value.
> 
> As said above, it does not look more complex to me, but that's a
> matter of opinion. Anyway, the additional complexity only arises
> because the latencies returned by pa_*_get_latency_*() may not be
> negative, so I have to add a new function to get the unbiased values.
> If you simply accepted negative numbers, there would be even less
> code.

I think changing all the get_latency functions to deal with signed
numbers would be fine. Maybe they should have a bool flag to tell
whether negative values should be filtered out, if there are many call
sites that need to do the filtering, but from a quick glance there
also seem to be many places where negative values are fine.

> > > It is much easier to accept them as they are, because you do not
> > > gain anything by the transformation.
> > > Your approach would not even deliver more accurate results,
> > > because due to the error of the measurements and your idea to
> > > accumulate those negative values, you would end up always
> > > overestimating the offset by the maximum error of the measurement.
> > Over-adjusting the offset by the maximum error of the measurement
> > only occurs if the maximum error happens on a measurement that is
> > done when the real latency is exactly zero (which it never is, but
> > it may be close). What magnitude do you expect the maximum
> > non-offset-related error to be? I expect it to be smaller than the
> > offset error, in which case shifting the offset should give more
> > accurate results than not doing that, but maybe I'm wrong about the
> > relative magnitudes of offset errors and other errors?
> 
> I think the overall error will be (negligibly) smaller if you live
> with the offset.
> Why should the error of the reference point be any different from
> the error of the following measurements? In the end, the reference
> point consists of a single measurement.

The initial reset of the smoother is done in a different situation
than regular latency queries, and I got the impression that you
observed bad accuracy specifically when starting a sink or source.
Maybe that impression was wrong.

-- 
Tanu
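P.S. To make the smoother part concrete, here is a rough, untested
sketch of the pa_smoother_adjust_time_offset() helper I have in mind.
The function doesn't exist yet, and I'm assuming the smoother keeps
its current offset in a time_offset field, i.e. whatever
pa_smoother_set_time_offset() writes to:

    /* Hypothetical helper (name from this thread), sketch only:
     * accumulate an adjustment instead of overwriting the offset,
     * so that consecutive corrections add up. */
    void pa_smoother_adjust_time_offset(pa_smoother *s, int64_t delta) {
        pa_assert(s);

        s->time_offset = (pa_usec_t) ((int64_t) s->time_offset + delta);
    }

The call site in the module would then stay as small as the snippet
quoted above, just with the new name:

    if (delay < 0) {
        pa_smoother_adjust_time_offset(smoother, delay);
        delay = 0;
    }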
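P.P.S. For the get_latency change, something along these lines is what
I mean. This is only a sketch, not existing API: the signed return
type and the allow_negative flag are the proposed change, and
compute_raw_latency() is just a placeholder for the latency
calculation the function already does today:

    /* Sketch of a signed variant of one of the get_latency functions. */
    int64_t pa_sink_get_latency_within_thread(pa_sink *s, bool allow_negative) {
        int64_t latency;

        /* Keep the intermediate result signed instead of clamping
         * unconditionally as the current code does. */
        latency = compute_raw_latency(s); /* placeholder for the existing code */

        /* Callers that can't handle negative values ask for filtering
         * here, so they don't all have to repeat the same check. */
        if (!allow_negative && latency < 0)
            latency = 0;

        return latency;
    }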