On Mon, Jul 22, 2024 at 11:27:48AM -0700, Dennis Zhou wrote:
> Hello,
>
> On Mon, Jul 22, 2024 at 11:03:00AM -0700, Boqun Feng wrote:
> > On Mon, Jul 22, 2024 at 07:52:22AM -1000, Tejun Heo wrote:
> > > On Mon, Jul 22, 2024 at 10:47:30AM -0700, Boqun Feng wrote:
> > > > This looks like a data race because we read pcpu_nr_empty_pop_pages out
> > > > of the lock for a best effort checking, @Tejun, maybe you could confirm
> > > > on this?
> > >
> > > That does sound plausible.
> > >
> > > > -        if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
> > > > +        /*
> > > > +         * Checks pcpu_nr_empty_pop_pages out of the pcpu_lock, data races may
> > > > +         * occur but this is just a best-effort checking, everything is synced
> > > > +         * in pcpu_balance_work.
> > > > +         */
> > > > +        if (data_race(pcpu_nr_empty_pop_pages) < PCPU_EMPTY_POP_PAGES_LOW)
> > > >                  pcpu_schedule_balance_work();
> > >
> > > Would it be better to use READ/WRITE_ONCE() for the variable?
> > >
> >
> > For READ/WRITE_ONCE(), we will need to replace all write accesses and
> > all out-of-lock read accesses to pcpu_nr_empty_pop_pages, like below.
> > It's better in the sense that it doesn't rely on compiler behaviors on
> > data races, not sure about the performance impact though.
> >
>
> I think a better alternative is we can move it up into the lock under
> area_found. The value gets updated as part of pcpu_alloc_area() as the
> code above populates percpu memory that is already allocated.
>

Not sure I followed what exactly you suggested here because I'm not
familiar with the logic, but a simpler version would be:

diff --git a/mm/percpu.c b/mm/percpu.c
index 20d91af8c033..fc54d27e5786 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1891,8 +1891,10 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
                 mutex_unlock(&pcpu_alloc_mutex);
         }
 
-        if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
-                pcpu_schedule_balance_work();
+        scoped_guard(spinlock_irqsave, &pcpu_lock) {
+                if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
+                        pcpu_schedule_balance_work();
+        }
 
         /* clear the areas and return address relative to base address */
         for_each_possible_cpu(cpu)

I.e. just locking while checking.
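
For anyone not used to the guard macros yet: as far as I know,
scoped_guard(spinlock_irqsave, ...) is just sugar for the explicit
lock/unlock pair, so the hunk above should be equivalent to something
like the open-coded (untested) sketch below:

        unsigned long flags;

        /* Take pcpu_lock only around the best-effort low-watermark check. */
        spin_lock_irqsave(&pcpu_lock, flags);
        if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
                pcpu_schedule_balance_work();
        spin_unlock_irqrestore(&pcpu_lock, flags);
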
Regards,
Boqun

> We should probably annotate pcpu_update_empty_pages() with:
> lockdep_assert_held(&pcpu_lock);
>
> Thanks,
> Dennis
>
> > Regards,
> > Boqun
> >
> > ----->8
> > diff --git a/mm/percpu.c b/mm/percpu.c
> > index 20d91af8c033..729e8188238b 100644
> > --- a/mm/percpu.c
> > +++ b/mm/percpu.c
> > @@ -570,7 +570,8 @@ static void pcpu_isolate_chunk(struct pcpu_chunk *chunk)
> >  
> >          if (!chunk->isolated) {
> >                  chunk->isolated = true;
> > -                pcpu_nr_empty_pop_pages -= chunk->nr_empty_pop_pages;
> > +                WRITE_ONCE(pcpu_nr_empty_pop_pages,
> > +                           pcpu_nr_empty_pop_pages - chunk->nr_empty_pop_pages);
> >          }
> >          list_move(&chunk->list, &pcpu_chunk_lists[pcpu_to_depopulate_slot]);
> >  }
> > @@ -581,7 +582,8 @@ static void pcpu_reintegrate_chunk(struct pcpu_chunk *chunk)
> >  
> >          if (chunk->isolated) {
> >                  chunk->isolated = false;
> > -                pcpu_nr_empty_pop_pages += chunk->nr_empty_pop_pages;
> > +                WRITE_ONCE(pcpu_nr_empty_pop_pages,
> > +                           pcpu_nr_empty_pop_pages + chunk->nr_empty_pop_pages);
> >                  pcpu_chunk_relocate(chunk, -1);
> >          }
> >  }
> > @@ -599,7 +601,8 @@ static inline void pcpu_update_empty_pages(struct pcpu_chunk *chunk, int nr)
> >  {
> >          chunk->nr_empty_pop_pages += nr;
> >          if (chunk != pcpu_reserved_chunk && !chunk->isolated)
> > -                pcpu_nr_empty_pop_pages += nr;
> > +                WRITE_ONCE(pcpu_nr_empty_pop_pages,
> > +                           pcpu_nr_empty_pop_pages + nr);
> >  }
> >  
> >  /*
> > @@ -1891,7 +1894,7 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
> >                  mutex_unlock(&pcpu_alloc_mutex);
> >          }
> >  
> > -        if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
> > +        if (READ_ONCE(pcpu_nr_empty_pop_pages) < PCPU_EMPTY_POP_PAGES_LOW)
> >                  pcpu_schedule_balance_work();
> >  
> >          /* clear the areas and return address relative to base address */
> > @@ -2754,7 +2757,7 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
> >          tmp_addr = (unsigned long)base_addr + static_size + ai->reserved_size;
> >          pcpu_first_chunk = pcpu_alloc_first_chunk(tmp_addr, dyn_size);
> >  
> > -        pcpu_nr_empty_pop_pages = pcpu_first_chunk->nr_empty_pop_pages;
> > +        WRITE_ONCE(pcpu_nr_empty_pop_pages, pcpu_first_chunk->nr_empty_pop_pages);
> >          pcpu_chunk_relocate(pcpu_first_chunk, -1);
> >  
> >          /* include all regions of the first chunk */
> >
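
P.S. On the lockdep_assert_held() suggestion above: if I read it right,
that would just be something along the lines of the below, i.e. an
untested sketch on top of the current pcpu_update_empty_pages():

static inline void pcpu_update_empty_pages(struct pcpu_chunk *chunk, int nr)
{
        /* Every update of the global counter is expected to hold pcpu_lock. */
        lockdep_assert_held(&pcpu_lock);

        chunk->nr_empty_pop_pages += nr;
        if (chunk != pcpu_reserved_chunk && !chunk->isolated)
                pcpu_nr_empty_pop_pages += nr;
}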