[PATCH 09/13] loopback: Track the amount of jitter

On 09.12.2015 07:47, Alexander E. Patrakov wrote:
> 09.12.2015 01:47, Tanu Kaskinen wrote:
>> On Wed, 2015-02-25 at 19:43 +0100, Georg Chini wrote:
>>> ---
>>>   src/modules/module-loopback.c | 18 +++++++++++++++++-
>>>   1 file changed, 17 insertions(+), 1 deletion(-)
>>
>> The commit message should say something about why the jitter is tracked.
>>
>>> diff --git a/src/modules/module-loopback.c b/src/modules/module-loopback.c
>>> index cbd0ac9..b733663 100644
>>> --- a/src/modules/module-loopback.c
>>> +++ b/src/modules/module-loopback.c
>>> @@ -95,6 +95,8 @@ struct userdata {
>>>
>>>       pa_usec_t source_latency_sum;
>>>       pa_usec_t sink_latency_sum;
>>> +    pa_usec_t next_latency;
>>> +    double latency_error;
>>>
>>>       bool in_pop;
>>>       bool pop_called;
>>> @@ -263,15 +265,22 @@ static void adjust_rates(struct userdata *u) {
>>>                   (double) current_latency / PA_USEC_PER_MSEC,
>>>                   (double) corrected_latency / PA_USEC_PER_MSEC,
>>>                   ((double) u->latency_snapshot.sink_latency + current_buffer_latency + u->latency_snapshot.source_latency) / PA_USEC_PER_MSEC);
>>> -    pa_log_debug("Latency difference: %0.2f ms, rate difference: %i Hz",
>>> +    pa_log_debug("Latency difference: %0.2f ± %0.2f ms, rate difference: %i Hz",
>>
>> What does "± %0.2f ms" mean? Is the real latency difference between
>> those bounds with 100% confidence, or less than 100% confidence?
>
> Of course less than 100% confidence.
>
>>
>>>                   (double) latency_difference / PA_USEC_PER_MSEC,
>>> +                (double) 2.5 * u->latency_error * final_latency / PA_USEC_PER_MSEC,
>>
>> Why is that 2.5 there?
>
> Maybe it would be more scientific to track not the "average of 
> absolute value of jitter", but some "root-mean-square" value. Then we 
> can use the two-sigma or two-and-a-half-sigma rule to get 95% or 98% 
> confidence that the latency is within the bounds that we log.
>

I do not think that makes more sense, because using sigma assumes that the
noise is Gaussian. That is definitely not the case here: most of the noise
consists of systematic errors introduced by the resamplers. Especially when
you switch the rate you will see huge errors of up to a few hundred usec.
The value I am tracking is only needed to give an indication of how reliable
the last couple of measurements have been.
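
To make that concrete, here is a minimal standalone sketch of the kind of
bookkeeping being discussed: an exponentially smoothed average of the
absolute (relative) latency error, used to print a "±" bound like the one in
the quoted log line. It is not the actual patch code. The field names
next_latency and latency_error come from the quoted diff; the update_jitter()
helper, the plain C types, the 0.5 smoothing weight and the example numbers
are assumptions for illustration only.

    /* Sketch only: stand-ins for pa_usec_t and the module's userdata fields. */
    #include <stdio.h>
    #include <math.h>

    typedef unsigned long long usec_t;          /* stand-in for pa_usec_t */

    struct jitter_state {
        usec_t next_latency;   /* latency expected for the current round */
        double latency_error;  /* smoothed |error| relative to the target */
    };

    /* Called once per adjustment round with the measured latency and the
     * configured target (final) latency. */
    static void update_jitter(struct jitter_state *s, usec_t measured,
                              usec_t final_latency) {
        /* Deviation between what we predicted last round and what we
         * actually measured now, relative to the target latency. */
        double error = fabs((double) measured - (double) s->next_latency)
                       / (double) final_latency;

        /* Average of the absolute value of the jitter over the last few
         * rounds (exponential smoothing; the 0.5 weight is an assumption).
         * An RMS variant would smooth error*error instead and take sqrt()
         * when reporting. */
        s->latency_error = 0.5 * s->latency_error + 0.5 * error;

        /* For this sketch, simply expect to be back at the target next
         * round; the real module would predict this from the newly chosen
         * sample rate. */
        s->next_latency = final_latency;

        printf("Latency difference: %0.2f ± %0.2f ms\n",
               ((double) measured - (double) final_latency) / 1000.0,
               2.5 * s->latency_error * (double) final_latency / 1000.0);
    }

    int main(void) {
        struct jitter_state s = { .next_latency = 200000, .latency_error = 0.0 };
        /* Three fake measurements around a 200 ms target latency. */
        update_jitter(&s, 201300, 200000);
        update_jitter(&s, 198700, 200000);
        update_jitter(&s, 200400, 200000);
        return 0;
    }

With the 2.5 multiplier the printed bound is roughly two and a half times the
recent mean absolute jitter, matching the logging line in the diff, but as
discussed above it carries no Gaussian confidence-interval interpretation.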

