On Wed, May 11, 2022 at 03:33:49PM -0700, Andrew Morton wrote:
> On Tue, 10 May 2022 14:54:23 -0700 Minchan Kim <minchan@xxxxxxxxxx> wrote:
>
> > The rmap locks (i_mmap_rwsem and anon_vma->root->rwsem) can be
> > contended under memory pressure if processes keep working on
> > their vmas (e.g., fork, mmap, munmap). That makes the reclaim
> > path get stuck. In our real workload traces, we see kswapd
> > waiting on the lock for 300ms+ (worst case, a second), which
> > pushes other processes into direct reclaim, where they also get
> > stuck on the lock.
> >
> > This patch makes the LRU aging path use try_lock mode, like
> > shrink_page_list, so the reclaim context keeps working on the
> > next LRU pages instead of getting stuck. If it finds the rmap
> > lock contended, it rotates the page back to the head of the LRU
> > in both the active and inactive LRUs for consistent behavior,
> > which is a basic starting point rather than adding more
> > heuristics.
> >
> > Since this patch introduces a new "contended" field as an
> > out-param along with the try_lock in-param in rmap_walk_control,
> > the control is no longer immutable when try_lock is set, so
> > remove the const keywords on the rmap-related functions. Since
> > rmap walking is already an expensive operation, I doubt the const
> > provides a sizable benefit (and we didn't have it until 5.17).
> >
> > In a heavy app workload on Android, the trace shows the following
> > statistics. It almost removes rmap lock contention from the
> > reclaim path.
>
> What might be the worst-case failure modes using this approach?
>
> Could we burn much CPU time pointlessly churning through the LRU?
> Could it mess up aging decisions enough to be performance-affecting
> in any workload?

Yes, correct. However, we already churn the LRUs in several ways. For
example, pages are isolated from and put back onto the LRU lists for
page migration from several sources (a typical example is compaction),
and trylock_page or sc->gfp_mask can prevent a page from being
reclaimed in shrink_page_list.

>
> Something else?

One thing I am worried about is the granularity of the churning. The
examples above churn at page granularity, so they might be excusable,
but this one churns at address-space granularity, especially for the
file LRU (i_mmap_rwsem), which could cause too much rotation and a
live-lock in the end (i.e., keep rotating a small LRU under heavy
memory pressure).

If that turns out to be a problem, maybe we could use sc->priority to
stop the skipping at a certain level of memory pressure.

Any thoughts? Do we really need it?
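
For reference, the mechanism being discussed is roughly the following
(a simplified illustration, not the exact hunk from the patch; pra and
folio_referenced_one stand in for the existing pieces in mm/rmap.c):

	/*
	 * Simplified illustration of the try_lock/contended pair in
	 * rmap_walk_control; details may differ from the actual patch.
	 */
	struct rmap_walk_control rwc = {
		.rmap_one = folio_referenced_one,
		.arg      = (void *)&pra,
		.try_lock = true,	/* in-param: don't sleep on i_mmap_rwsem / anon_vma lock */
	};

	rmap_walk(folio, &rwc);

	if (rwc.contended) {
		/*
		 * out-param: the rmap lock was busy, so the caller rotates
		 * the folio back to the head of its LRU instead of waiting.
		 */
	}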
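
And to make the sc->priority idea concrete, something like this
untested sketch could work (the helper name and cut-off value are made
up purely for illustration):

	/*
	 * Untested sketch only: keep the try_lock behaviour while reclaim
	 * is still at a relaxed priority, but fall back to the normal
	 * blocking rmap walk once priority drops (i.e. memory pressure is
	 * high), so a small LRU cannot keep rotating forever.
	 */
	static bool rmap_trylock_allowed(struct scan_control *sc)
	{
		/* DEF_PRIORITY is 12; lower values mean harder reclaim passes */
		return sc->priority > DEF_PRIORITY / 2;
	}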