Re: [PATCH v3 2/2] drivers/tty: use a kthread_worker for low-latency

On 06/11/2015 10:30 AM, Jakub Kiciński wrote:
> On Thu, 11 Jun 2015 10:07:02 -0400, Peter Hurley wrote:
>> On 06/11/2015 08:15 AM, Peter Hurley wrote:
>>> On 06/11/2015 05:38 AM, Jakub Kiciński wrote:
>>>> On Wed, 10 Jun 2015 07:32:52 -0400, Peter Hurley wrote:
>>>>> On 06/09/2015 02:05 PM, Steven Walter wrote:
>>>>>> Use a dedicated kthread for handling ports marked as low_latency.  Since
>>>>>> this thread is RT_FIFO, it is not subject to the same types of
>>>>>> starvation as the normal priority kworker threads.  
>>>>>
>>>>> This is not a problem unique to the tty subsystem; many subsystems use
>>>>> kworkers to handle i/o after the initial ISR.
>>>>>
>>>>> Without careful design, high-prio userspace RT threads can effectively starve
>>>>> themselves of i/o.
>>>>>
>>>>> In any event, solutions to this problem belong either in the core workqueue
>>>>> (for example, an i/o-specific unbounded workqueue) or in the CONFIG_PREEMPT_RT
>>>>> patch.
>>>>
>>>> But kthread_worker *is* the workqueue subsystem's answer to users who
>>>> require low-latency processing.
>>>
>>> Not really; you even note below the lack of RT support in wq.
>>>
>>>> The SPI subsystem has been using it successfully for message pump handling
>>>> for the last few releases.
>>>
>>> SPI is trivial compared to tty.
>>>
>>>> The lack of RT functionality in the workqueue subsystem and the prevalence
>>>> of wq use make workqueues a huge pain on embedded/rt systems.  I would like
>>>> to see a more generic solution to this problem as well, but I can't think of one :/
>>>
>>> Direct support from workqueue is the generic solution, insofar as there is a
>>> generic solution. Ultimately, RT priority inversion is an upward spiral to 99.
>>>
>>> tty uses the system_unbound_wq; as long as the substitute wq has similar
>>> guarantees (guaranteed forward progress and unbounded worker running times)
>>> using a different wq for any given tty_port would be ok.
>>
>> This could be done for non-RT now, by creating a separate unbound wq and
>> applying wq attrs with the maximum-priority nice value; low_latency ports
>> would use this wq.
> 
> My scheduler knowledge clearly needs refreshing, but doesn't the nice value
> only influence timeslice length, giving no latency improvement?

I would think just using a dedicated unbound wq would improve latency, because
the time accounting would then cover only the input handling of the
low_latency ports; at each wakeup from new i/o, that kworker would most
likely be the task with the lowest running time (relative to other running
tasks). And the negative nice value would preserve its low-running-time
status.

Of course, I could be wrong :)

And that wouldn't solve the RT priority problem either.
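
Concretely, the non-RT version quoted above might look something like this
(a rough sketch only; the wq name and the init hook are made up, and this is
untested):

    #include <linux/workqueue.h>

    static struct workqueue_struct *tty_low_latency_wq;

    static int __init tty_low_latency_wq_init(void)
    {
        struct workqueue_attrs *attrs;
        int ret;

        /* unbound, so the work isn't tied to the submitting CPU's kworker */
        tty_low_latency_wq = alloc_workqueue("tty_low_latency", WQ_UNBOUND, 0);
        if (!tty_low_latency_wq)
            return -ENOMEM;

        attrs = alloc_workqueue_attrs(GFP_KERNEL);
        if (!attrs)
            return -ENOMEM;

        attrs->nice = MIN_NICE;    /* -20, i.e. maximum priority */
        ret = apply_workqueue_attrs(tty_low_latency_wq, attrs);
        free_workqueue_attrs(attrs);
        return ret;
    }

Low_latency ports would then queue buf->work on tty_low_latency_wq instead
of system_unbound_wq.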

> If that's true maybe we could just extend wq attrs to include RT prio? 
> The mechanism to manage pwqs based on scheduler params seems to be there
> already...

That was my thought.
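
Purely as an illustration of that idea (the sched_policy/rt_priority fields
and the setup snippet below are hypothetical; nothing like this exists
upstream today):

    /* hypothetical extension of workqueue_attrs with an RT policy/priority */
    struct workqueue_attrs {
        int             nice;
        int             sched_policy;   /* e.g. SCHED_NORMAL or SCHED_FIFO */
        int             rt_priority;    /* used when sched_policy is RT */
        cpumask_var_t   cpumask;
        bool            no_numa;
    };

    /* and, roughly, in the unbound worker-pool setup path: */
    if (pool->attrs->sched_policy == SCHED_FIFO) {
        struct sched_param param = {
            .sched_priority = pool->attrs->rt_priority,
        };
        sched_setscheduler_nocheck(worker->task, SCHED_FIFO, &param);
    } else {
        set_user_nice(worker->task, pool->attrs->nice);
    }

The attrs-to-pwq plumbing wouldn't need to change; only the worker task
setup would have to learn about the new fields.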

Another (potential) solution would be to allow low_latency ports to execute
the worker directly, as they used to. However, that solution would need to
stay in the CONFIG_PREEMPT_RT patch because of the sleeping locks.
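
For reference, the direct-execution path used to look something like this
(quoting from memory, so treat it as an approximation of the old code):

    /* roughly how low_latency was handled before the direct path was removed */
    void tty_flip_buffer_push(struct tty_port *port)
    {
        struct tty_bufhead *buf = &port->buf;

        buf->tail->commit = buf->tail->used;

        if (port->low_latency)
            flush_to_ldisc(&buf->work);    /* runs in the caller's context */
        else
            schedule_work(&buf->work);
    }

Calling flush_to_ldisc() from the driver's receive path is tolerable when
that path runs in thread context (as it typically does under -rt), but not
from atomic context in mainline, because of those sleeping locks.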

Regards,
Peter Hurley


