On Tue, Sep 24, 2024 at 05:13:36PM -0700, Andrew Morton wrote:
>
> The patch titled
>      Subject: irq_work: replace wait loop with rcuwait_wait_event
> has been added to the -mm mm-nonmm-unstable branch.  Its filename is
>      irq_work-replace-wait-loop-with-rcuwait_wait_event.patch
>
> This patch will shortly appear at
>      https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/irq_work-replace-wait-loop-with-rcuwait_wait_event.patch
>
> This patch will later appear in the mm-nonmm-unstable branch at
>      git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
>
> Before you just go and hit "reply", please:
>    a) Consider who else should be cc'ed
>    b) Prefer to cc a suitable mailing list as well
>    c) Ideally: find the original patch on the mailing list and do a
>       reply-to-all to that, adding suitable additional cc's
>
> *** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
>
> The -mm tree is included into linux-next via the mm-everything
> branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
> and is updated there every 2-3 working days
>
> ------------------------------------------------------
> From: Steven Davis <goldside000@xxxxxxxxxxx>
> Subject: irq_work: replace wait loop with rcuwait_wait_event
> Date: Thu, 19 Sep 2024 11:43:26 -0400
>
> The previous implementation of irq_work_sync used a busy-wait loop with
> cpu_relax() to check the status of IRQ work. This approach, while
> functional, could lead to inefficiencies in CPU usage.
>
> This commit replaces the busy-wait loop with the rcuwait_wait_event
> mechanism. This change leverages the RCU wait mechanism to handle
> waiting for IRQ work completion more efficiently, improving CPU
> responsiveness and reducing unnecessary CPU usage.
>
> Link: https://lkml.kernel.org/r/SJ2P223MB10267BCF19A4A37747A52330F7632@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
> Signed-off-by: Steven Davis <goldside000@xxxxxxxxxxx>
> Cc: Ankur Arora <ankur.a.arora@xxxxxxxxxx>
> Cc: Frederic Weisbecker <frederic@xxxxxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
>
>  kernel/irq_work.c |    3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> --- a/kernel/irq_work.c~irq_work-replace-wait-loop-with-rcuwait_wait_event
> +++ a/kernel/irq_work.c
> @@ -295,8 +295,7 @@ void irq_work_sync(struct irq_work *work
>  		return;
>  	}
>
> -	while (irq_work_is_busy(work))
> -		cpu_relax();
> +	rcuwait_wait_event(&work->irqwait, !irq_work_is_busy(work), TASK_UNINTERRUPTIBLE);

This misses the wake_up part and also the PREEMPT_RT special case cleanup.
It should be something like the below, untested (we could also benefit
from the implicit smp_mb() of a successful atomic_cmpxchg() and provide
an unordered version of rcuwait_wake_up(), but that's for later):

diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 2f4fb336dda1..0962ec749dbc 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -227,9 +227,7 @@ void irq_work_single(void *arg)
 	 */
 	(void)atomic_cmpxchg(&work->node.a_flags, flags, flags & ~IRQ_WORK_BUSY);

-	if ((IS_ENABLED(CONFIG_PREEMPT_RT) && !irq_work_is_hard(work)) ||
-	    !arch_irq_work_has_interrupt())
-		rcuwait_wake_up(&work->irqwait);
+	rcuwait_wake_up(&work->irqwait);
 }

 static void irq_work_run_list(struct llist_head *list)
@@ -288,15 +286,7 @@ void irq_work_sync(struct irq_work *work)
 	lockdep_assert_irqs_enabled();
 	might_sleep();

-	if ((IS_ENABLED(CONFIG_PREEMPT_RT) && !irq_work_is_hard(work)) ||
-	    !arch_irq_work_has_interrupt()) {
-		rcuwait_wait_event(&work->irqwait, !irq_work_is_busy(work),
-				   TASK_UNINTERRUPTIBLE);
-		return;
-	}
-
-	while (irq_work_is_busy(work))
-		cpu_relax();
+	rcuwait_wait_event(&work->irqwait, !irq_work_is_busy(work), TASK_UNINTERRUPTIBLE);
 }
 EXPORT_SYMBOL_GPL(irq_work_sync);
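
For reference, the unordered variant mentioned above could look roughly
like the below (an untested sketch; rcuwait_wake_up_unordered() is just a
placeholder name). It would simply drop the smp_mb() that
rcuwait_wake_up() issues to order the condition store against the load of
w->task, on the premise that the caller's successful atomic_cmpxchg()
(here, the one clearing IRQ_WORK_BUSY in irq_work_single()) already
provides that full barrier:

/*
 * Untested sketch: wake up without ordering. The caller must provide a
 * full barrier between setting the wait condition and calling this
 * (e.g. the successful atomic_cmpxchg() in irq_work_single()), pairing
 * with the set_current_state() barrier (A) in rcuwait_wait_event().
 */
static int rcuwait_wake_up_unordered(struct rcuwait *w)
{
	int ret = 0;
	struct task_struct *task;

	rcu_read_lock();
	/* No smp_mb() here, unlike rcuwait_wake_up(). */
	task = rcu_dereference(w->task);
	if (task)
		ret = wake_up_process(task);
	rcu_read_unlock();

	return ret;
}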