On Thu, 2022-05-12 at 09:50 +0100, Mel Gorman wrote:
> Changelog since v2
> o More conversions from page->lru to page->[pcp_list|buddy_list]
> o Additional test results in changelogs
>
> Changelog since v1
> o Fix unsafe RT locking scheme
> o Use spin_trylock on UP PREEMPT_RT
>
> This series has the same intent as Nicolas' series "mm/page_alloc: Remote
> per-cpu lists drain support" -- avoid interference of a high priority
> task due to a workqueue item draining per-cpu page lists. While many
> workloads can tolerate a brief interruption, it may cause a real-time
> task running on a NOHZ_FULL CPU to miss a deadline and, at minimum,
> the draining is non-deterministic.
>
> Currently an IRQ-safe local_lock protects the page allocator per-cpu lists.
> The local_lock on its own prevents migration and the IRQ disabling protects
> from corruption due to an interrupt arriving while a page allocation is
> in progress. The locking is inherently unsafe for remote access unless
> the CPU is hot-removed.
>
> This series adjusts the locking. A spinlock is added to struct
> per_cpu_pages to protect the list contents while local_lock_irq continues
> to prevent migration and IRQ reentry. This allows a remote CPU to safely
> drain a remote per-cpu list.
>
> This series is a partial series. Follow-on work should allow the
> local_irq_save to be converted to a local_irq to avoid IRQs being
> disabled/enabled in most cases. Consequently, there are some TODO comments
> highlighting the places that would change if local_irq was used. However,
> there are enough corner cases that it deserves a series on its own,
> separated by one kernel release, and the priority right now is to avoid
> interference with high priority tasks.

FWIW, I tested this against our RT+nohz_full workloads. I can have another go
if the locking scheme changes.

Tested-by: Nicolas Saenz Julienne <nsaenzju@xxxxxxxxxx>

Thanks,

-- 
Nicolás Sáenz
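
[For readers following along: below is a minimal userspace model of the locking
shape described in the quoted cover letter, not the actual kernel patch. All
names here (per_cpu_pages_model, pcp_free_local, pcp_drain_remote) are invented
for illustration, and pthread locks stand in for local_lock/spin_lock. The only
point it demonstrates is why giving the per-CPU list its own spinlock lets a
remote CPU drain it without scheduling any work on the owning (e.g. NOHZ_FULL)
CPU.]

/*
 * Illustrative model only -- not kernel code.
 * Build with: cc -O2 -o pcp_model pcp_model.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>
#include <stddef.h>

struct page_model {
	struct page_model *next;
	int id;
};

struct per_cpu_pages_model {
	pthread_mutex_t local_lock;     /* stand-in for the per-CPU local_lock:
	                                   taken only by the owning CPU */
	pthread_spinlock_t lock;        /* the new per-list spinlock: safe to
	                                   take from any CPU */
	struct page_model *list;        /* stand-in for the pcp free list */
	int count;
};

/* Fast path on the owning CPU: "local lock" for migration/IRQ-style
 * exclusion, then the spinlock for the list contents themselves. */
static void pcp_free_local(struct per_cpu_pages_model *pcp,
			   struct page_model *page)
{
	pthread_mutex_lock(&pcp->local_lock);
	pthread_spin_lock(&pcp->lock);
	page->next = pcp->list;
	pcp->list = page;
	pcp->count++;
	pthread_spin_unlock(&pcp->lock);
	pthread_mutex_unlock(&pcp->local_lock);
}

/* Remote drain: only the list spinlock is needed, so no workqueue item or
 * IPI ever has to run on the target CPU. */
static struct page_model *pcp_drain_remote(struct per_cpu_pages_model *pcp)
{
	struct page_model *list;

	pthread_spin_lock(&pcp->lock);
	list = pcp->list;
	pcp->list = NULL;
	pcp->count = 0;
	pthread_spin_unlock(&pcp->lock);
	return list;
}

int main(void)
{
	struct per_cpu_pages_model pcp = { .list = NULL, .count = 0 };
	struct page_model p = { .next = NULL, .id = 1 };

	pthread_mutex_init(&pcp.local_lock, NULL);
	pthread_spin_init(&pcp.lock, PTHREAD_PROCESS_PRIVATE);

	pcp_free_local(&pcp, &p);
	printf("drained page id=%d\n", pcp_drain_remote(&pcp)->id);
	return 0;
}

The design point the model tries to capture: the owning CPU pays one extra
(uncontended) spinlock acquire/release in its fast path, and in exchange the
drain no longer needs to interrupt that CPU at all.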