Upon running some proactive reclaim tests using memory.reclaim, we
noticed some tests flaking: writes to memory.reclaim would succeed even
though the requested amount was not fully reclaimed. Looking further
into it, I discovered that *sometimes* we over-report the number of
reclaimed pages in memcg reclaim.

Pages reclaimed through means other than LRU-based reclaim are tracked
through reclaim_state in struct scan_control, which is stashed in the
current task_struct. These pages are added to the number of pages
reclaimed through the LRUs. For memcg reclaim, these pages generally
cannot be linked to the memcg under reclaim, so they can inflate the
count of reclaimed pages. This short series addresses that.

Patch 1 ignores pages reclaimed outside of LRU reclaim in memcg
reclaim. The pages are uncharged anyway, so even if we end up
under-reporting reclaimed pages we will still make progress during
charging.

Patch 2 is just refactoring: it adds helpers that wrap some operations
on current->reclaim_state and renames reclaim_state->reclaimed_slab to
reclaim_state->reclaimed. It also adds a lengthy comment explaining why
we ignore pages reclaimed outside of LRU reclaim in memcg reclaim.

The patches are split this way so that patch 1 can be easily backported
without all the refactoring noise.

v4 -> v5:
- Separate the functional fix into its own patch, and squash all the
  refactoring into a single second patch for easier backporting
  (Andrew Morton).

v4: https://lore.kernel.org/lkml/20230404001353.468224-1-yosryahmed@xxxxxxxxxx/

Yosry Ahmed (2):
  mm: vmscan: ignore non-LRU-based reclaim in memcg reclaim
  mm: vmscan: refactor reclaim_state helpers

 fs/inode.c           |  3 +-
 fs/xfs/xfs_buf.c     |  3 +-
 include/linux/swap.h | 17 ++++++++++-
 mm/slab.c            |  3 +-
 mm/slob.c            |  6 ++--
 mm/slub.c            |  5 ++-
 mm/vmscan.c          | 73 +++++++++++++++++++++++++++++++++-----------
 7 files changed, 78 insertions(+), 32 deletions(-)

-- 
2.40.0.348.gf938b09366-goog
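
P.S. For illustration, below is a minimal, self-contained sketch of the
accounting change patch 1 makes. This is not the kernel code itself:
flush_reclaim_state(), the memcg_reclaim flag, and the trimmed-down
structs are simplified stand-ins for the sc->reclaim_state handling and
the cgroup_reclaim(sc) check in mm/vmscan.c.

/*
 * Userspace sketch of the accounting described in the cover letter.
 * Names and structs are simplified stand-ins, not the kernel's.
 */
#include <stdbool.h>
#include <stdio.h>

struct reclaim_state {
	/* pages freed outside of LRU-based reclaim (e.g. by slab shrinkers) */
	unsigned long reclaimed;
};

struct scan_control {
	unsigned long nr_reclaimed;		/* pages reclaimed so far */
	bool memcg_reclaim;			/* stand-in for cgroup_reclaim(sc) */
	struct reclaim_state *reclaim_state;
};

/*
 * Fold pages freed outside of LRU reclaim into sc->nr_reclaimed, except
 * for memcg reclaim: those pages cannot be attributed to the memcg under
 * reclaim, so counting them would over-report progress.
 */
static void flush_reclaim_state(struct scan_control *sc)
{
	struct reclaim_state *rs = sc->reclaim_state;

	if (!rs)
		return;
	if (!sc->memcg_reclaim)
		sc->nr_reclaimed += rs->reclaimed;
	rs->reclaimed = 0;
}

int main(void)
{
	struct reclaim_state rs = { .reclaimed = 32 };
	struct scan_control global = { .reclaim_state = &rs };
	struct scan_control memcg = { .memcg_reclaim = true, .reclaim_state = &rs };

	flush_reclaim_state(&global);
	printf("global reclaim counts the freed pages: %lu\n", global.nr_reclaimed);

	rs.reclaimed = 32;
	flush_reclaim_state(&memcg);
	printf("memcg reclaim ignores them: %lu\n", memcg.nr_reclaimed);
	return 0;
}

Built with a plain C compiler, the first call folds the 32 pages freed
outside LRU reclaim into nr_reclaimed, while the second (memcg) call
discards them, which is the over-reporting this series removes.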