On Friday, May 1, 2020 6:45:41 PM CEST James Morse wrote:
> The GHES code calls memory_failure_queue() from IRQ context to schedule
> work on the current CPU so that memory_failure() can sleep.
>
> For synchronous memory errors the arch code needs to know any signals
> that memory_failure() will trigger are pending before it returns to
> user-space, possibly when exiting from the IRQ.
>
> Add a helper to kick the memory failure queue, to ensure the scheduled
> work has happened. This has to be called from process context, so may
> have been migrated from the original cpu. Pass the cpu the work was
> queued on.
>
> Change memory_failure_work_func() to permit being called on the 'wrong'
> cpu.
>
> Signed-off-by: James Morse <james.morse@xxxxxxx>
> Tested-by: Tyler Baicar <baicar@xxxxxxxxxxxxxxxxxxxxxx>
> ---
>  include/linux/mm.h  |  1 +
>  mm/memory-failure.c | 15 ++++++++++++++-
>  2 files changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 5a323422d783..c606dbbfa5e1 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3012,6 +3012,7 @@ enum mf_flags {
>  };
>  extern int memory_failure(unsigned long pfn, int flags);
>  extern void memory_failure_queue(unsigned long pfn, int flags);
> +extern void memory_failure_queue_kick(int cpu);
>  extern int unpoison_memory(unsigned long pfn);
>  extern int get_hwpoison_page(struct page *page);
>  #define put_hwpoison_page(page) put_page(page)
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index a96364be8ab4..c4afb407bf0f 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1493,7 +1493,7 @@ static void memory_failure_work_func(struct work_struct *work)
>  	unsigned long proc_flags;
>  	int gotten;
>
> -	mf_cpu = this_cpu_ptr(&memory_failure_cpu);
> +	mf_cpu = container_of(work, struct memory_failure_cpu, work);
>  	for (;;) {
>  		spin_lock_irqsave(&mf_cpu->lock, proc_flags);
>  		gotten = kfifo_get(&mf_cpu->fifo, &entry);
> @@ -1507,6 +1507,19 @@ static void memory_failure_work_func(struct work_struct *work)
>  	}
>  }
>
> +/*
> + * Process memory_failure work queued on the specified CPU.
> + * Used to avoid return-to-userspace racing with the memory_failure workqueue.
> + */
> +void memory_failure_queue_kick(int cpu)
> +{
> +	struct memory_failure_cpu *mf_cpu;
> +
> +	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
> +	cancel_work_sync(&mf_cpu->work);
> +	memory_failure_work_func(&mf_cpu->work);
> +}
> +
>  static int __init memory_failure_init(void)
>  {
>  	struct memory_failure_cpu *mf_cpu;

I could apply this provided an ACK from the mm people.

Thanks!
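
For anyone following along, here is one way a synchronous-error notification path could consume the new helper. This is only a sketch, not part of the patch: it assumes a GHES-like IRQ handler records the CPU it queued work on and uses task_work so the kick runs in process context before the interrupted task returns to user-space. All names below (mf_kick_work, mf_schedule_kick, ...) are hypothetical.

/*
 * Illustrative sketch only, not part of the patch above. A synchronous
 * error handler queues memory_failure() work from IRQ context, then uses
 * task_work so that queued work is flushed in process context before the
 * interrupted task returns to user-space.
 */
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/smp.h>
#include <linux/task_work.h>

struct mf_kick_work {
	struct callback_head twork;
	int queued_cpu;			/* CPU the memory_failure work was queued on */
};

/* Runs in process context, possibly on a different CPU than queued_cpu. */
static void mf_kick_work_func(struct callback_head *twcb)
{
	struct mf_kick_work *kw = container_of(twcb, struct mf_kick_work, twork);

	memory_failure_queue_kick(kw->queued_cpu);
	kfree(kw);
}

/* Call from the IRQ handler, after memory_failure_queue(pfn, flags). */
static int mf_schedule_kick(void)
{
	struct mf_kick_work *kw = kzalloc(sizeof(*kw), GFP_ATOMIC);

	if (!kw)
		return -ENOMEM;

	kw->queued_cpu = smp_processor_id();
	init_task_work(&kw->twork, mf_kick_work_func);

	/* v5.7-era signature: the third argument is a bool 'notify'. */
	if (task_work_add(current, &kw->twork, true)) {
		kfree(kw);
		return -ESRCH;
	}
	return 0;
}

The helper takes a cpu argument because, as the commit message notes, the process-context caller may have been migrated away from the CPU on which the work was queued.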