Hi Oliver,

On Tue, Jul 23, 2024 at 02:09:38PM +0800, Oliver Sang wrote:
> hi, Dennis Zhou,
>
> On Mon, Jul 22, 2024 at 10:50:53PM -0700, Dennis Zhou wrote:
> > On Mon, Jul 22, 2024 at 01:53:52PM -0700, Boqun Feng wrote:
> > > On Mon, Jul 22, 2024 at 11:27:48AM -0700, Dennis Zhou wrote:
> > > > Hello,
> > > >
> > > > On Mon, Jul 22, 2024 at 11:03:00AM -0700, Boqun Feng wrote:
> > > > > On Mon, Jul 22, 2024 at 07:52:22AM -1000, Tejun Heo wrote:
> > > > > > On Mon, Jul 22, 2024 at 10:47:30AM -0700, Boqun Feng wrote:
> > > > > > > This looks like a data race because we read pcpu_nr_empty_pop_pages out
> > > > > > > of the lock for a best effort checking, @Tejun, maybe you could confirm
> > > > > > > on this?
> > > > > >
> > > > > > That does sound plausible.
> > > > > >
> > > > > > > -	if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
> > > > > > > +	/*
> > > > > > > +	 * Checks pcpu_nr_empty_pop_pages out of the pcpu_lock, data races may
> > > > > > > +	 * occur but this is just a best-effort checking, everything is synced
> > > > > > > +	 * in pcpu_balance_work.
> > > > > > > +	 */
> > > > > > > +	if (data_race(pcpu_nr_empty_pop_pages) < PCPU_EMPTY_POP_PAGES_LOW)
> > > > > > >  		pcpu_schedule_balance_work();
> > > > > >
> > > > > > Would it be better to use READ/WRITE_ONCE() for the variable?
> > > > >
> > > > > For READ/WRITE_ONCE(), we will need to replace all write accesses and
> > > > > all out-of-lock read accesses to pcpu_nr_empty_pop_pages, like below.
> > > > > It's better in the sense that it doesn't rely on compiler behaviors on
> > > > > data races, not sure about the performance impact though.
> > > >
> > > > I think a better alternative is we can move it up into the lock under
> > > > area_found. The value gets updated as part of pcpu_alloc_area() as the
> > > > code above populates percpu memory that is already allocated.
> > > >
> > >
> > > Not sure I followed what exactly you suggested here because I'm not
> > > familiar with the logic, but a simpler version would be:
> > >
> >
> > I believe that's the only naked access of pcpu_nr_empty_pop_pages. So
> > I was thinking this'll fix this problem.
> >
> > I also don't know how to rerun this CI tho..
>
> we could test this patch. what's the base? could we apply it directly upon
> 24e44cc22a?
>
> BTW, our bot is not so clever so far to auto test fix patches, so this is kind
> of manual effort. due to resource constraint, it will be hard for us to test
> each patch (we saw several patches in this thread already) or test very fast.
>

Ah yeah that makes sense. If you don't mind testing the last one I sent, the
one below, that applies cleanly to 24e44cc22a.

Thanks,
Dennis

> >
> > ---
> > diff --git a/mm/percpu.c b/mm/percpu.c
> > index 20d91af8c033..325fb8412e90 100644
> > --- a/mm/percpu.c
> > +++ b/mm/percpu.c
> > @@ -1864,6 +1864,10 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
> >  
> >  area_found:
> >  	pcpu_stats_area_alloc(chunk, size);
> > +
> > +	if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
> > +		pcpu_schedule_balance_work();
> > +
> >  	spin_unlock_irqrestore(&pcpu_lock, flags);
> >  
> >  	/* populate if not all pages are already there */
> > @@ -1891,9 +1895,6 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
> >  		mutex_unlock(&pcpu_alloc_mutex);
> >  	}
> >  
> > -	if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
> > -		pcpu_schedule_balance_work();
> > -
> >  	/* clear the areas and return address relative to base address */
> >  	for_each_possible_cpu(cpu)
> >  		memset((void *)pcpu_chunk_addr(chunk, cpu, 0) + off, 0, size);