On Wed, 2024-12-18 at 12:20 -0800, Shakeel Butt wrote:
> On Wed, Dec 18, 2024 at 11:56:04AM -0500, Rik van Riel wrote:
> [...]
> >
> > +static bool should_lru_add_drain(void)
> > +{
> > +	struct cpu_fbatches *fbatches = this_cpu_ptr(&cpu_fbatches);
>
> You will need either a local_lock or preempt_disable to access the
> per cpu batches.
>

Why is that?

Can the per-cpu batches disappear on us while we're trying to access
them, without that local_lock or preempt_disable?

I'm not trying to protect against accidentally reading the wrong CPU's
numbers, since we could be preempted and migrated to another CPU
immediately after returning from should_lru_add_drain, but I do want
to keep things safe against other potential issues.

> > +	int pending = folio_batch_count(&fbatches->lru_add);
> > +	pending += folio_batch_count(&fbatches->lru_deactivate);
> > +	pending += folio_batch_count(&fbatches->lru_deactivate_file);
> > +	pending += folio_batch_count(&fbatches->lru_lazyfree);
> > +
> > +	/* Don't bother draining unless we have several pages pending. */
> > +	return pending > SWAP_CLUSTER_MAX;
> > +}
> > +
> > +void maybe_lru_add_drain(void)
>
> Later it might also make sense to see if other users of
> lru_add_drain() should be fine with maybe_lru_add_drain() as well.

Agreed. I think there are a few other users where this could make
sense, including munmap.

--
All Rights Reversed.