On 9/25/20 12:54 PM, David Hildenbrand wrote:
>>> --- a/mm/page_isolation.c
>>> +++ b/mm/page_isolation.c
>>> @@ -15,6 +15,22 @@
>>>  #define CREATE_TRACE_POINTS
>>>  #include <trace/events/page_isolation.h>
>>>
>>> +void zone_pcplist_disable(struct zone *zone)
>>> +{
>>> +	down_read(&pcp_batch_high_lock);
>>> +	if (atomic_inc_return(&zone->pcplist_disabled) == 1) {
>>> +		zone_update_pageset_high_and_batch(zone, 0, 1);
>>> +		__drain_all_pages(zone, true);
>>> +	}
>> Hm, if one CPU is still inside the if-clause, the other one would
>> continue, however pcp would not be disabled and zones not drained when
>> returning.

Ah, well spotted, thanks!

>> (while we only allow a single offline_pages() call, it will be different
>> when we use the function in other context - especially,
>> alloc_contig_range() for some users)
>>
>> Can't we use down_write() here? So it's serialized and everybody has to
>> properly wait. (and we would not have to rely on an atomic_t)
> Sorry, I meant down_write only temporarily in this code path. Not
> keeping it locked in write when returning (I remember there is a way to
> downgrade).

Hmm, that temporary write lock would still block new callers until the
previous ones finish with the downgraded-to-read lock. But I guess
something like this would work:

retry:
  if (atomic_read(...) == 0) {
    // zone_update... + drain
    // atomic_cmpxchg from 0 to 1; if that fails, goto retry
  } else {
    atomic_inc(...);
  }

Tricky, but races could only lead to unnecessary duplicated updates +
flushing, but nothing worse?

Or add another spinlock to cover this part instead of the temp write lock...
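
For completeness, a more fleshed-out version of the retry + cmpxchg idea
could look roughly like below. It is untested and purely illustrative,
and it assumes the symbols from the patch above (the pcplist_disabled
atomic_t, zone_update_pageset_high_and_batch(), __drain_all_pages() and
the rwsem pcp_batch_high_lock); don't read it as a finished replacement.

/*
 * Rough, untested sketch of the retry + cmpxchg variant. Racing callers
 * may duplicate the update and drain, but a caller that observes a
 * nonzero counter knows at least one disable + drain already completed
 * (enable/disable balancing aside).
 */
void zone_pcplist_disable(struct zone *zone)
{
	down_read(&pcp_batch_high_lock);
retry:
	if (atomic_read(&zone->pcplist_disabled) == 0) {
		/* Disable and drain before publishing the counter. */
		zone_update_pageset_high_and_batch(zone, 0, 1);
		__drain_all_pages(zone, true);
		/* Only one racing caller moves the counter 0 -> 1. */
		if (atomic_cmpxchg(&zone->pcplist_disabled, 0, 1) != 0)
			goto retry;
	} else {
		/* Somebody else already disabled and drained. */
		atomic_inc(&zone->pcplist_disabled);
	}
}

The losers of the cmpxchg redo the update + drain needlessly, which is
exactly the "duplicated updates + flushing but nothing worse" caveat above.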