The quilt patch titled
     Subject: mm/mglru: clean up workingset
has been removed from the -mm tree.  Its filename was
     mm-mglru-clean-up-workingset.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Yu Zhao <yuzhao@xxxxxxxxxx>
Subject: mm/mglru: clean up workingset
Date: Mon, 30 Dec 2024 21:35:32 -0700

Patch series "mm/mglru: performance optimizations", v4.

This series improves performance for some previously reported test cases.
Most of the code changes gathered here have been floating on the mailing
list [1][2].  They are now properly organized and have gone through
various benchmarks on client and server devices, including Android, FIO,
memcached, multiple VMs and MongoDB.

In addition to the syzbot regressions fixed in v2 [3] and v3 [4], this
version fixes two more regressions: one reported by Oliver Sang [5] and
the other by Barry Song.

[1] https://lore.kernel.org/CAOUHufahuWcKf5f1Sg3emnqX+cODuR=2TQo7T4Gr-QYLujn4RA@xxxxxxxxxxxxxx/
[2] https://lore.kernel.org/CAOUHufawNerxqLm7L9Yywp3HJFiYVrYO26ePUb1jH-qxNGWzyA@xxxxxxxxxxxxxx/
[3] https://lore.kernel.org/67294349.050a0220.701a.0010.GAE@xxxxxxxxxx/
[4] https://lore.kernel.org/67549eca.050a0220.2477f.001b.GAE@xxxxxxxxxx/
[5] https://lore.kernel.org/202412231601.f1eb8f84-lkp@xxxxxxxxx/

This patch (of 7):

Move VM_BUG_ON_FOLIO() to cover both the default and MGLRU paths.  Also
use a pair of rcu_read_lock() and rcu_read_unlock() within each path, to
improve readability.

This change should not have any side effects.

Link: https://lkml.kernel.org/r/20241231043538.4075764-1-yuzhao@xxxxxxxxxx
Link: https://lkml.kernel.org/r/20241231043538.4075764-2-yuzhao@xxxxxxxxxx
Signed-off-by: Yu Zhao <yuzhao@xxxxxxxxxx>
Tested-by: Kalesh Singh <kaleshsingh@xxxxxxxxxx>
Cc: Barry Song <v-songbaohua@xxxxxxxx>
Cc: Bharata B Rao <bharata@xxxxxxx>
Cc: David Stevens <stevensd@xxxxxxxxxxxx>
Cc: Kairui Song <kasong@xxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/workingset.c |   23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

--- a/mm/workingset.c~mm-mglru-clean-up-workingset
+++ a/mm/workingset.c
@@ -428,17 +428,17 @@ bool workingset_test_recent(void *shadow
 	struct pglist_data *pgdat;
 	unsigned long eviction;
 
-	rcu_read_lock();
-
 	if (lru_gen_enabled()) {
-		bool recent = lru_gen_test_recent(shadow, file,
-				&eviction_lruvec, &eviction, workingset);
+		bool recent;
 
+		rcu_read_lock();
+		recent = lru_gen_test_recent(shadow, file, &eviction_lruvec,
+					     &eviction, workingset);
 		rcu_read_unlock();
 		return recent;
 	}
-
+	rcu_read_lock();
 	unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
 	eviction <<= bucket_order;
 
@@ -459,14 +459,12 @@ bool workingset_test_recent(void *shadow
 	 * configurations instead.
 	 */
 	eviction_memcg = mem_cgroup_from_id(memcgid);
-	if (!mem_cgroup_disabled() &&
-	    (!eviction_memcg || !mem_cgroup_tryget(eviction_memcg))) {
-		rcu_read_unlock();
-		return false;
-	}
-
+	if (!mem_cgroup_tryget(eviction_memcg))
+		eviction_memcg = NULL;
 	rcu_read_unlock();
 
+	if (!mem_cgroup_disabled() && !eviction_memcg)
+		return false;
 	/*
 	 * Flush stats (and potentially sleep) outside the RCU read section.
 	 *
@@ -544,6 +542,8 @@ void workingset_refault(struct folio *fo
 	bool workingset;
 	long nr;
 
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+
 	if (lru_gen_enabled()) {
 		lru_gen_refault(folio, shadow);
 		return;
@@ -558,7 +558,6 @@ void workingset_refault(struct folio *fo
 	 * is actually experiencing the refault event. Make sure the folio is
 	 * locked to guarantee folio_memcg() stability throughout.
 	 */
-	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	nr = folio_nr_pages(folio);
 	memcg = folio_memcg(folio);
 	pgdat = folio_pgdat(folio);
_

Patches currently in -mm which might be from yuzhao@xxxxxxxxxx are

mm-hugetlb_vmemmap-fix-memory-loads-ordering.patch
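
For reference, a condensed sketch of the control flow that results from
this clean-up in workingset_test_recent(), rewritten from the diff above.
Trailing function parameters, most local declarations, and the
stats-flush tail are elided, so this is an illustration rather than the
verbatim source:

/*
 * Sketch only, condensed from the diff above: each path now takes and
 * drops its own RCU read lock instead of sharing one taken up front.
 */
bool workingset_test_recent(void *shadow, bool file, bool *workingset)
{
	struct mem_cgroup *eviction_memcg;
	struct lruvec *eviction_lruvec;
	struct pglist_data *pgdat;
	unsigned long eviction;
	int memcgid;

	/* MGLRU path: the RCU section covers only the MGLRU lookup. */
	if (lru_gen_enabled()) {
		bool recent;

		rcu_read_lock();
		recent = lru_gen_test_recent(shadow, file, &eviction_lruvec,
					     &eviction, workingset);
		rcu_read_unlock();
		return recent;
	}

	/* Default path: a second, self-contained RCU section. */
	rcu_read_lock();
	unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
	eviction <<= bucket_order;

	eviction_memcg = mem_cgroup_from_id(memcgid);
	if (!mem_cgroup_tryget(eviction_memcg))
		eviction_memcg = NULL;
	rcu_read_unlock();

	if (!mem_cgroup_disabled() && !eviction_memcg)
		return false;
	/* ... flush stats and test recency outside the RCU section ... */
}

The workingset_refault() hunks make the same point more simply: the
VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio) assertion is hoisted
above the lru_gen_enabled() check, so the folio-lock requirement now
guards the MGLRU path as well as the default one.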