On Thu, Jun 16, 2022 at 11:02 AM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>
> On 6/13/22 14:56, Mel Gorman wrote:
> > struct per_cpu_pages is no longer strictly local as PCP lists can be
> > drained remotely using a lock for protection. While the use of local_lock
> > works, it goes against the intent of local_lock, which is for "pure
> > CPU local concurrency control mechanisms and not suited for inter-CPU
> > concurrency control" (Documentation/locking/locktypes.rst).
> >
> > local_lock protects against migration between when the percpu pointer is
> > accessed and when pcp->lock is acquired. The lock acquisition is a
> > preemption point, so in the worst case a task could migrate to another
> > NUMA node and accidentally allocate remote memory. The main requirement
> > is to pin the task to a CPU in a way that is suitable for both PREEMPT_RT
> > and !PREEMPT_RT.
> >
> > Replace local_lock with helpers that pin a task to a CPU, look up the
> > per-cpu structure and acquire the embedded lock. It's similar to
> > local_lock without breaking the intent behind the API. It is not a
> > complete API, as only the parts needed for PCP-alloc are implemented,
> > but in theory the generic helpers could be promoted to a general API if
> > there were demand for an embedded lock within a per-cpu struct, with a
> > guarantee that the locked per-cpu structure matches the running CPU, for
> > users that cannot use get_cpu_var due to RT concerns. PCP requires these
> > semantics to avoid accidentally allocating remote memory.
> >
> > Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
>
> ...
>
> > @@ -3367,30 +3429,17 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
> >  	return min(READ_ONCE(pcp->batch) << 2, high);
> >  }
> >
> > -/* Returns true if the page was committed to the per-cpu list. */
> > -static bool free_unref_page_commit(struct page *page, int migratetype,
> > -				   unsigned int order, bool locked)
> > +static void free_unref_page_commit(struct per_cpu_pages *pcp, struct zone *zone,
> > +				   struct page *page, int migratetype,
> > +				   unsigned int order)
>
> Hmm, given that this drops the "bool locked" parameter and the bool return
> value again, my suggestion for patch 5/7 would result in less churn, as
> those wouldn't need to be introduced?
>
> ...
>
> > @@ -3794,19 +3805,29 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
> >  	struct list_head *list;
> >  	struct page *page;
> >  	unsigned long flags;
> > +	unsigned long __maybe_unused UP_flags;
> >
> > -	local_lock_irqsave(&pagesets.lock, flags);
> > +	/*
> > +	 * spin_trylock_irqsave is not necessary right now as it'll only be
> > +	 * true when contending with a remote drain. It's in place as a
> > +	 * preparation step before converting pcp locking to spin_trylock
> > +	 * to protect against IRQ reentry.
> > +	 */
> > +	pcp_trylock_prepare(UP_flags);
> > +	pcp = pcp_spin_trylock_irqsave(zone->per_cpu_pageset, flags);
> > +	if (!pcp)
>
> Besides the missing unpin Andrew fixed, I think this is also missing a
> pcp_trylock_finish(UP_flags); ?

spin_trylock only fails when trylock_finish is a NOP: the trylock can only
fail on SMP, when contending with a remote drain, and there
pcp_trylock_finish is a no-op. On UP, where prepare/finish actually disable
and restore IRQs, spin_trylock always succeeds, so the failure path is
never taken.
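
The reason the failure path can get away without pcp_trylock_finish() is
visible in how the pair is defined. A sketch of the helpers, roughly as
they ended up in mm/page_alloc.c (paraphrased from the series, so treat
the exact definitions as approximate):

/*
 * Sketch of the trylock prepare/finish pair. On SMP and PREEMPT_RT the
 * trylock itself is sufficient protection and prepare/finish compile to
 * nothing; on UP, spin_trylock always succeeds, so IRQs are disabled
 * instead to prevent re-entry from IRQ context.
 */
#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT)
#define pcp_trylock_prepare(flags)	do { } while (0)
#define pcp_trylock_finish(flag)	do { } while (0)
#else
/* UP spin_trylock always succeeds so disable IRQs to prevent re-entry. */
#define pcp_trylock_prepare(flags)	local_irq_save(flags)
#define pcp_trylock_finish(flags)	local_irq_restore(flags)
#endif

In other words, the only configuration where pcp_trylock_finish() does
real work (UP) is also the only one where the trylock cannot fail.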
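
Similarly, the pin-then-lock helpers the changelog describes look roughly
like the sketch below, again paraphrased from the series rather than
quoted exactly. Note that a failed trylock must undo the pin before
reporting failure, which is the kind of omission the "missing unpin"
comment above refers to:

/*
 * Sketch of the pin-then-lock pattern from the changelog. Pinning
 * prevents migration between reading the per-cpu pointer and taking
 * the spinlock embedded in the per-cpu structure, without relying on
 * local_lock semantics.
 */
#ifndef CONFIG_PREEMPT_RT
#define pcpu_task_pin()		preempt_disable()
#define pcpu_task_unpin()	preempt_enable()
#else
#define pcpu_task_pin()		migrate_disable()
#define pcpu_task_unpin()	migrate_enable()
#endif

/* A failed trylock must unpin before returning NULL to the caller. */
#define pcpu_spin_trylock_irqsave(type, member, ptr, flags)		\
({									\
	type *_ret;							\
	pcpu_task_pin();						\
	_ret = this_cpu_ptr(ptr);					\
	if (!spin_trylock_irqsave(&_ret->member, flags)) {		\
		pcpu_task_unpin();					\
		_ret = NULL;						\
	}								\
	_ret;								\
})

/* PCP-specific wrapper, as called from rmqueue_pcplist() above. */
#define pcp_spin_trylock_irqsave(ptr, flags)				\
	pcpu_spin_trylock_irqsave(struct per_cpu_pages, lock, ptr, flags)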