On Thu, 11 Jun 2015 10:07:02 -0400, Peter Hurley wrote:
> On 06/11/2015 08:15 AM, Peter Hurley wrote:
> > On 06/11/2015 05:38 AM, Jakub Kiciński wrote:
> >> On Wed, 10 Jun 2015 07:32:52 -0400, Peter Hurley wrote:
> >>> On 06/09/2015 02:05 PM, Steven Walter wrote:
> >>>> Use a dedicated kthread for handling ports marked as low_latency. Since
> >>>> this thread is RT_FIFO, it is not subject to the same types of
> >>>> starvation as the normal priority kworker threads.
> >>>
> >>> This is not a problem unique to the tty subsystem; many subsystems use
> >>> kworkers to handle i/o after the initial ISR.
> >>>
> >>> Without careful design, high-prio userspace RT threads can effectively
> >>> starve themselves of i/o.
> >>>
> >>> In any event, solutions to this problem belong either in the core
> >>> workqueue (for example, an i/o-specific unbounded workqueue) or in the
> >>> CONFIG_PREEMPT_RT patch.
> >>
> >> But kthread_worker *is* the workqueue subsystem's answer to users who
> >> require low-latency processing.
> >
> > Not really; you even note below the lack of RT support in wq.
> >
> >> The SPI subsystem has been using it successfully for message pump
> >> handling for the last few releases.
> >
> > SPI is trivial compared to tty.
> >
> >> The lack of RT functionality in the workqueue subsystem and the
> >> prevalence of wq use make them a huge pain on embedded/rt systems.
> >> I would like to see a more generic solution to this problem as well,
> >> but I can't think of one :/
> >
> > Direct support from workqueue is the generic solution, insofar as there
> > is a generic solution. Ultimately, RT priority inversion is an upward
> > spiral to 99.
> >
> > tty uses the system_unbound_wq; as long as the substitute wq has similar
> > guarantees (guaranteed forward progress and unbounded worker running
> > times), using a different wq for any given tty_port would be ok.
>
> This could be done for non-RT now, by creating a separate unbound wq and
> applying wq attrs with the max nice value; low_latency ports would use
> this wq.

My scheduler knowledge clearly needs refreshing, but doesn't the nice value
only influence timeslice length, giving no latency improvement?

If that's true, maybe we could just extend wq attrs to include an RT prio?
The mechanism to manage pwqs based on scheduler params seems to be there
already...
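For concreteness, a minimal sketch of the "separate unbound wq with wq attrs"
idea described above, reading "max nice value" as the most favourable level
(MIN_NICE, i.e. -20). The workqueue name and function are hypothetical, and
this uses the attrs API as it exists in current kernels:

	#include <linux/workqueue.h>
	#include <linux/sched.h>
	#include <linux/gfp.h>

	static struct workqueue_struct *tty_ll_wq;	/* illustrative name */

	static int tty_ll_wq_create(void)
	{
		struct workqueue_attrs *attrs;
		int ret;

		/* attrs can only be applied to unbound workqueues */
		tty_ll_wq = alloc_workqueue("tty-low-latency", WQ_UNBOUND, 0);
		if (!tty_ll_wq)
			return -ENOMEM;

		attrs = alloc_workqueue_attrs(GFP_KERNEL);
		if (!attrs) {
			destroy_workqueue(tty_ll_wq);
			return -ENOMEM;
		}

		/* most favourable nice level; low_latency ports would queue here */
		attrs->nice = MIN_NICE;
		ret = apply_workqueue_attrs(tty_ll_wq, attrs);
		free_workqueue_attrs(attrs);
		return ret;
	}

IIRC workqueue_attrs today only carries the nice value, a cpumask and the
no_numa flag, which is why an RT policy/priority field would have to be added
there before the SCHED_FIFO case could be expressed this way.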
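And for comparison, a rough sketch of the kthread_worker pattern the SPI core
uses for its message pump, which is essentially what the low_latency patch
proposes for tty. Names are again hypothetical and the SCHED_FIFO priority is
only an example:

	#include <linux/kthread.h>
	#include <linux/sched.h>
	#include <linux/err.h>

	static struct kthread_worker ll_worker;
	static struct task_struct *ll_task;
	static struct kthread_work ll_rx_work;

	static void ll_rx_work_fn(struct kthread_work *work)
	{
		/* push received characters to the line discipline here */
	}

	static int ll_worker_start(void)
	{
		struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 };

		init_kthread_worker(&ll_worker);
		ll_task = kthread_run(kthread_worker_fn, &ll_worker, "tty-ll-pump");
		if (IS_ERR(ll_task))
			return PTR_ERR(ll_task);

		/* SCHED_FIFO so the pump is not starved by ordinary kworkers */
		sched_setscheduler(ll_task, SCHED_FIFO, &param);

		init_kthread_work(&ll_rx_work, ll_rx_work_fn);
		return 0;
	}

	/* interrupt/flip path would then do:
	 * queue_kthread_work(&ll_worker, &ll_rx_work);
	 */

This works today without touching the workqueue core, but it hard-codes the
scheduling policy per user instead of exposing it through wq attrs.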