Re: [PATCH] serial: imx: reduce RX interrupt frequency

Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx> writes:

> On Tue, Jan 04, 2022 at 11:32:03AM +0100, Tomasz Moń wrote:
>> Triggering RX interrupt for every byte defeats the purpose of aging
>> timer and leads to interrupt starvation at high baud rates.
>> 
>> Increase receiver trigger level to 8 to increase the minimum period
>> between RX interrupts to 8 characters time. The tradeoff is increased
>> latency. In the worst case scenario, where RX data has intercharacter
>> delay slightly less than aging timer (8 characters time), it can take
>> up to 63 characters time for the interrupt to be raised since the
>> reception of the first character.
>
> Why can't you do this dynamically based on the baud rate so as to always
> work properly for all speeds without increased delays for slower ones?

I don't like the idea of a dynamic threshold, as I don't think the
increased complexity is worth it.

In fact, the threshold works "properly" at any baud rate, since the
maximum latency is proportional to the current baud rate, and if
somebody does care about *absolute* latency, increasing the baud rate
is the primary solution. At most I'd consider some ioctl() to manually
tweak the threshold, for exotic applications.
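For illustration, the worst-case figure from the patch description can be
reproduced with a short back-of-the-envelope sketch (this is my own
illustrative function, not driver code; units are character times):

```python
# Sketch of the worst case described in the patch: with the RX trigger
# level raised to 8 and an aging (idle) timer of 8 character times, a
# sender pacing characters just under the aging timeout keeps the timer
# from expiring, so the interrupt fires only when the 8th character
# fills the FIFO to the trigger level.

def worst_case_latency_char_times(trigger_level: int, aging_char_times: int) -> int:
    """Character times from the reception of the first character until
    the RX interrupt fires, assuming each subsequent character arrives
    after an idle gap just under the aging timeout."""
    # The remaining (trigger_level - 1) characters each cost one idle gap
    # (just under `aging_char_times`) plus one character time.
    return (trigger_level - 1) * (aging_char_times + 1)

print(worst_case_latency_char_times(8, 8))  # 63, matching the figure above
```

The same sketch shows why the latency scales with baud rate: everything is
expressed in character times, so the absolute delay shrinks as the line
speeds up.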

That said, are there examples in the kernel where this threshold is
dynamic? If not, that's another argument against introducing it; and if
yes, then the OP will have a reference implementation of the exact
dependency curve.

Thanks,
-- Sergey Organov


