On Sun, 22 Mar 2020 00:00:28 +0100
Pavel Machek <pavel@xxxxxxx> wrote:

> Hi!
>
> > > > > Does this patch help?
> > > >
> > > > I don't think so. It also failed, and the failure seems to be
> > > > identical to me.
> > > >
> > > > https://gitlab.com/cip-project/cip-kernel/linux-cip/tree/ci/pavel/linux-cip
> > > > https://lava.ciplatform.org/scheduler/job/13110
> > > >
> > > Can you send me a patch that shows the difference between the revert that
> > > you say works, and the upstream v4.19-rt tree (let me know which version
> > > of v4.19-rt you are basing it on).
> >
> > I was using -rt44, and yes, I can probably generate better diffs.
> >
> > But I guess I found it with code review: how does this look to you? I
> > applied it on top of your fix, and am testing. 2 successes so far.
>
> And I'd recommend some kind of cleanup on top. The code is really
> "interesting" and we don't want to have two copies. Totally untested.
>
> Looking at the code, it could be probably cleaned up further.
>
> Signed-off-by: Pavel Machek <pavel@xxxxxxx>
>
> Best regards,
> 								Pavel

I applied this patch; does this work for you? It's slightly different
from yours, as I thought only the condition needed to be saved, not the
lists themselves.

-- Steve

Index: stable-rt.git/kernel/irq_work.c
===================================================================
--- stable-rt.git.orig/kernel/irq_work.c	2020-03-30 15:11:13.849875145 -0400
+++ stable-rt.git/kernel/irq_work.c	2020-03-30 15:18:54.365242025 -0400
@@ -70,6 +70,12 @@ static void __irq_work_queue_local(struc
 	arch_irq_work_raise();
 }
 
+static inline bool use_lazy_list(struct irq_work *work)
+{
+	return (IS_ENABLED(CONFIG_PREEMPT_RT_FULL) && !(work->flags & IRQ_WORK_HARD_IRQ))
+		|| (work->flags & IRQ_WORK_LAZY);
+}
+
 /* Enqueue the irq work @work on the current CPU */
 bool irq_work_queue(struct irq_work *work)
 {
@@ -81,11 +87,10 @@ bool irq_work_queue(struct irq_work *wor
 
 	/* Queue the entry and raise the IPI if needed. */
 	preempt_disable();
-	if (IS_ENABLED(CONFIG_PREEMPT_RT_FULL) && !(work->flags & IRQ_WORK_HARD_IRQ))
+	if (use_lazy_list(work))
 		list = this_cpu_ptr(&lazy_list);
 	else
 		list = this_cpu_ptr(&raised_list);
-
 	__irq_work_queue_local(work, list);
 
 	preempt_enable();
@@ -106,7 +111,6 @@ bool irq_work_queue_on(struct irq_work *
 
 #else /* CONFIG_SMP: */
 	struct llist_head *list;
-	bool lazy_work, realtime = IS_ENABLED(CONFIG_PREEMPT_RT_FULL);
 
 	/* All work should have been flushed before going offline */
 	WARN_ON_ONCE(cpu_is_offline(cpu));
@@ -116,10 +120,7 @@ bool irq_work_queue_on(struct irq_work *
 		return false;
 
 	preempt_disable();
-
-	lazy_work = work->flags & IRQ_WORK_LAZY;
-
-	if (lazy_work || (realtime && !(work->flags & IRQ_WORK_HARD_IRQ)))
+	if (use_lazy_list(work))
 		list = &per_cpu(lazy_list, cpu);
 	else
 		list = &per_cpu(raised_list, cpu);
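
A side note on what use_lazy_list() unifies: the condition it replaces in
irq_work_queue() only checked PREEMPT_RT_FULL and IRQ_WORK_HARD_IRQ, while
the one in irq_work_queue_on() also honoured IRQ_WORK_LAZY; the helper is
the OR of both. Below is a minimal userspace sketch of that decision for
anyone following the thread -- not the kernel code itself. The flag bit
values and the preempt_rt_full constant (standing in for
IS_ENABLED(CONFIG_PREEMPT_RT_FULL)) are assumptions made only so the
predicate compiles standalone; the real definitions live in the -rt tree's
include/linux/irq_work.h.

/*
 * Standalone sketch of the use_lazy_list() decision from the patch above.
 * The flag values below are illustrative assumptions, not the kernel's
 * definitions.
 */
#include <stdbool.h>
#include <stdio.h>

#define IRQ_WORK_LAZY		(1UL << 2)	/* assumed bit value */
#define IRQ_WORK_HARD_IRQ	(1UL << 3)	/* assumed bit value (-rt flag) */

/* Stand-in for IS_ENABLED(CONFIG_PREEMPT_RT_FULL); pretend RT_FULL=y. */
static const bool preempt_rt_full = true;

struct irq_work {
	unsigned long flags;
};

static bool use_lazy_list(struct irq_work *work)
{
	return (preempt_rt_full && !(work->flags & IRQ_WORK_HARD_IRQ))
		|| (work->flags & IRQ_WORK_LAZY);
}

int main(void)
{
	struct irq_work hard  = { .flags = IRQ_WORK_HARD_IRQ };
	struct irq_work lazy  = { .flags = IRQ_WORK_LAZY };
	struct irq_work plain = { .flags = 0 };

	/* HARD_IRQ work stays on the raised list even on RT ... */
	printf("hard  -> %s\n", use_lazy_list(&hard)  ? "lazy_list" : "raised_list");
	/* ... while LAZY work and, on RT, plain work take the lazy list. */
	printf("lazy  -> %s\n", use_lazy_list(&lazy)  ? "lazy_list" : "raised_list");
	printf("plain -> %s\n", use_lazy_list(&plain) ? "lazy_list" : "raised_list");
	return 0;
}

Flipping preempt_rt_full to false leaves only IRQ_WORK_LAZY work selecting
the lazy list, which is the !PREEMPT_RT_FULL behaviour of the combined
condition.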