On Fri, 24 Jun 2022 13:54:16 +0100 Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx> wrote:

> Some setups, notably NOHZ_FULL CPUs, may be running realtime or
> latency-sensitive applications that cannot tolerate interference due to
> per-cpu drain work queued by __drain_all_pages(). Introduce a new
> mechanism to remotely drain the per-cpu lists. It is made possible by
> remotely locking 'struct per_cpu_pages' new per-cpu spinlocks. This has
> two advantages: the time to drain is more predictable and other unrelated
> tasks are not interrupted.
>
> This series has the same intent as Nicolas' series "mm/page_alloc: Remote
> per-cpu lists drain support" -- avoid interference with a high priority task
> due to a workqueue item draining per-cpu page lists. While many workloads
> can tolerate a brief interruption, it may cause a real-time task running
> on a NOHZ_FULL CPU to miss a deadline, and at minimum the draining is
> non-deterministic.
>
> Currently an IRQ-safe local_lock protects the page allocator per-cpu
> lists. The local_lock on its own prevents migration and the IRQ disabling
> protects from corruption due to an interrupt arriving while a page
> allocation is in progress.
>
> This series adjusts the locking. A spinlock is added to struct
> per_cpu_pages to protect the list contents, while local_lock_irq is
> ultimately replaced by just the spinlock in the final patch. This allows
> a remote CPU to safely drain a remote per-cpu list. Follow-on work should
> allow the spin_lock_irqsave to be converted to spin_lock to avoid IRQs
> being disabled/enabled in most cases. The follow-on patch will be one
> kernel release later as it is relatively high risk and it will make
> bisections clearer if there are any problems.

I plan to move this and Mel's fix to [7/7] into mm-stable around July 8.
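
To make the locking change concrete, below is a rough user-space sketch of
the idea, not code from the series: each per-CPU list carries its own
spinlock, so a remote CPU can drain the list directly by taking that lock
instead of queueing a work item on the remote CPU. The struct and function
names (pcp_list, pcp_free_local, pcp_drain_remote) are invented for this
illustration.

/*
 * Illustrative user-space analogue of the per-CPU spinlock idea
 * (not the actual kernel code): each "CPU" owns a list of cached
 * pages protected by its own spinlock, so any CPU can drain a
 * remote list by taking that lock rather than interrupting the
 * remote CPU with queued work.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_CPUS 4

struct pcp_list {
	pthread_spinlock_t lock;	/* analogue of the new pcp spinlock */
	int count;			/* pages cached on this "CPU" */
};

static struct pcp_list pcp[NR_CPUS];

/* Local fast path: the owning CPU adds a freed page to its own list. */
static void pcp_free_local(int cpu)
{
	pthread_spin_lock(&pcp[cpu].lock);
	pcp[cpu].count++;
	pthread_spin_unlock(&pcp[cpu].lock);
}

/*
 * Remote drain: any CPU may take another CPU's lock and return its
 * cached pages, so no work item has to run on the remote CPU.
 */
static int pcp_drain_remote(int cpu)
{
	int drained;

	pthread_spin_lock(&pcp[cpu].lock);
	drained = pcp[cpu].count;
	pcp[cpu].count = 0;
	pthread_spin_unlock(&pcp[cpu].lock);

	return drained;
}

int main(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		pthread_spin_init(&pcp[cpu].lock, PTHREAD_PROCESS_PRIVATE);
		pcp_free_local(cpu);
	}

	/* "CPU 0" drains every other CPU without interrupting them. */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu %d: drained %d pages\n", cpu, pcp_drain_remote(cpu));

	return 0;
}

(Compile with -lpthread. In the kernel the equivalent lock is taken with
spin_lock_irqsave() until the follow-on spin_lock conversion described in
the quoted cover letter.)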