[PATCH] protocol-native: Fix source latency calculation in ADJUST_LATENCY mode

On 13 April 2015 at 18:05, Arun Raghavan <arun at accosted.net> wrote:
> On 13 April 2015 at 17:49, David Henningsson
> <david.henningsson at canonical.com> wrote:
>>
>>
>> On 2015-04-13 11:26, arun at accosted.net wrote:
>>>
>>> From: Arun Raghavan <git at arunraghavan.net>
>>>
>>> This fixes the buffer attr calculation so that we set the source latency to
>>> the requested latency. This makes sense because the intermediate
>>> delay_memblockq is just a mechanism to send data to the client. It
>>> should not actually add to the total latency over what the source
>>> already provides.
>>>
>>> With this, fragsize and maxlength become more meaningful/accurate
>>> with regard to ADJUST_LATENCY mode -- fragsize
>>> becomes the latency the source is configured for (which is then
>>> approximately the total latency until the buffer reaches the client).
>>> Maxlength, as before, continues to be the maximum amount of data we
>>> might hold for the client before overrunning.
>>
>>
>> So the current behaviour is that if you ask for 20 ms of fragsize in
>> ADJUST_LATENCY mode, then you will get packets of 10 ms each? That seems a
>> bit odd.
>
> Yup, that's exactly what is happening.
>
>> Still, I'm not so sure about this. Part of that is because we're changing
>> things that can break existing clients that rely on specific buffer
>> semantics, and part of it is, I think, the reasoning that we're trying to
>
> I disagree with this one because the buffer attr semantics are not
> part of the API. I'd rather not be forced to adhere to our (imo bad)
> calculations right now for this reason. If you feel it's essential, we
> can try to mitigate the risk by requesting additional usage, making a
> lot of noise about the change, etc., but I don't think we should hold
> back on changing things that are wrong.
>
> (and yes, I know we've been bitten by this in the past with Skype, but
> that exposed a bug in Skype code, so I'd count it as being positive in
> the grand scheme of things :))
>
>> compensate for latencies in other parts of the system. I.e., in order to get
>> every sample to you within 20 ms (counted from when the ADC put a sample in
>> the buffer), you can't have 20 ms of fragsize, because then the total
>> latency would be 20 ms plus latencies in the system. Hence, we choose 10 ms
>> and gamble that the system latencies are less than 10 ms, so that the
>> samples will reach the client in time.
>
> The current math halves the requested latency blindly -- so with 200ms
> of latency, we'll end up with 100ms in software and 100ms in flight.
> It's pretty unlikely that the samples will actually spend anywhere
> near that much time in flight.
>
> We _could_ try to budget for the latency of transfer + scheduling, but
> imo this isn't too valuable, since it'll vary quite a bit between
> systems. We're talking about best effort, and not latency guarantees
> atm, so I'm okay with the inaccuracy.
>
> You did make me think of one caveat in this -- if the actual source
> latency is lower than fragsize, we'll end up passing back smaller
> chunks than requested. This isn't any worse than what we have right
> now, though, and if needed, in the future we can try to send out the
> blocks after collecting enough.
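
To make the arithmetic quoted above concrete, here's a rough sketch of
the two calculations being compared. This is not the actual
protocol-native.c code -- the names and structure are made up purely
for illustration:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t usec_t;

/* Current behaviour (illustrative): the requested fragsize is blindly
 * halved, with one half going to the source and the other half budgeted
 * for the delay_memblockq / transport, so asking for 20 ms yields
 * roughly 10 ms packets. */
static usec_t source_latency_current(usec_t requested_fragsize) {
    return requested_fragsize / 2;
}

/* Proposed behaviour (illustrative): the source is configured for the
 * full requested fragsize, since the delay_memblockq is just a mechanism
 * to get data to the client and shouldn't add to the latency the source
 * already provides. */
static usec_t source_latency_proposed(usec_t requested_fragsize) {
    return requested_fragsize;
}

int main(void) {
    usec_t fragsize = 200 * 1000; /* 200 ms requested, as in the example above */

    printf("current:  source %llu us, %llu us left as in-flight budget\n",
           (unsigned long long) source_latency_current(fragsize),
           (unsigned long long) (fragsize - source_latency_current(fragsize)));
    printf("proposed: source %llu us, nothing reserved for transport\n",
           (unsigned long long) source_latency_proposed(fragsize));
    return 0;
}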

Any other opinions on this? I'd like to push this out sooner rather
than later in the 7.0 cycle.

-- Arun

