On 17.08.2016 00:30, Tanu Kaskinen wrote:
> On Tue, 2016-08-16 at 20:33 +0200, Georg Chini wrote:
>> On 16.08.2016 18:04, Tanu Kaskinen wrote:
>>> Permanent offset? Surely the smoother smooths out any initial
>>> errors?
>> The smoother relies on some kind of start time. This is true for the
>> current smoother code and also for the new code I have been using.
>> An error in that start time is never corrected in either case.
> Ok, I wasn't aware of that.
>
>>> We're talking about sub-millisecond errors here. Are there any
>>> perceivable problems due to the unneeded corrections?
>>>
>> I don't know what you mean by perceivable here. If you mean
>> audible, I don't think it is, but the controller is disturbed, and the
>> goal is to get sub-millisecond stability of the end-to-end latency.
>> If that weren't the goal, I could leave out half of the code. With the
>> final loopback code and the new smoother code (which you probably
>> will not accept) I am reaching 50 usec stability. Take a look at the
>> results section of the document I sent with the code.
>> In the end it is definitely wrong to sometimes ignore an offset,
>> sometimes ignore part of it (if it is not fully negative) and
>> sometimes include it. It should either always be included or always
>> be excluded, and since we don't know the exact amount, it can only
>> ever be included. With the inclusion, the reported latency varies
>> smoothly without jumps.
> What would you think about the following idea:
>
> Let's say that the latency calculation in the alsa code results in -123
> usecs. Negative latencies are impossible, so we know that the value is
> wrong by at least 123 usecs. If the error is due to a wrong reference
> point that was set in the beginning, we can improve this and all
> subsequent reports by moving the smoother's reference point by 123
> usecs. It will cause a jump in the latency reports, but it will happen
> only once (well, there's no guarantee that the reference point error
> isn't bigger than 123 usecs, so negative values can still occur and
> more jumps can happen, but each jump makes all subsequent jumps
> smaller, until there's no more error left to fix).
>
> The values produced by the smoother are estimates, so they can also
> contain errors other than the error in the reference point, so I guess
> overcompensation is possible in this scheme, but I'd expect it to be
> negligible in magnitude.

What would this be good for? Only to avoid negative numbers? The final
end-to-end latency is not what you configure anyway; there are (as we
discussed) other offsets around that we cannot see. So why bother to
correct half a millisecond here? Accepting negative latencies is just
to keep the reports smooth and continuous. If you prefer, I can try
your approach, but I do not see the benefit.
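
Just so we are sure we are talking about the same thing, here is a
minimal sketch of how I read your proposal. The names and the struct
are hypothetical; the real smoother keeps its reference point
internally, so it would need some way to shift it:

#include <stdint.h>
#include <stdio.h>

/* Signed microseconds, so negative intermediate results are representable. */
typedef int64_t usec_t;

/* Hypothetical smoother state: latency = smoothed_estimate - reference_offset. */
typedef struct {
    usec_t reference_offset;
} smoother_ref;

/* Proposed correction: if the computed latency is negative, the
 * reference offset must be wrong by at least that amount, so shift it.
 * Each shift can only make later negative excursions smaller, until no
 * reference-point error remains. */
static usec_t correct_latency(smoother_ref *s, usec_t smoothed_estimate) {
    usec_t latency = smoothed_estimate - s->reference_offset;

    if (latency < 0) {
        /* e.g. latency == -123 usec: move the reference point by
         * 123 usec, improving this and all subsequent reports. */
        s->reference_offset += latency; /* latency < 0, so the offset shrinks */
        latency = 0;
    }

    return latency;
}

int main(void) {
    smoother_ref s = { .reference_offset = 500 };

    /* First report comes in low: 377 - 500 = -123, clamped to 0,
     * and the offset is corrected to 377. This is the one-time jump. */
    printf("%lld\n", (long long) correct_latency(&s, 377));

    /* Later reports use the corrected reference point: 600 - 377 = 223. */
    printf("%lld\n", (long long) correct_latency(&s, 600));
    return 0;
}

Note that in this sketch any transient estimate that dips below zero
also shifts the offset, which is the overcompensation you mention,
though as you say it should be negligible in magnitude.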