The patch titled
     Subject: mm: vmscan: split shrink_node() into node part and memcgs part
has been added to the -mm tree.  Its filename is
     mm-vmscan-split-shrink_node-into-node-part-and-memcgs-part.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-vmscan-split-shrink_node-into-node-part-and-memcgs-part.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-vmscan-split-shrink_node-into-node-part-and-memcgs-part.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Johannes Weiner <hannes@xxxxxxxxxxx>
Subject: mm: vmscan: split shrink_node() into node part and memcgs part

This function is getting long and unwieldy, split out the memcg bits.

The updated shrink_node() handles the generic (node) reclaim aspects:
  - global vmpressure notifications
  - writeback and congestion throttling
  - reclaim/compaction management
  - kswapd giving up on unreclaimable nodes

It then calls a new shrink_node_memcgs() which handles cgroup specifics:
  - the cgroup tree traversal
  - memory.low considerations
  - per-cgroup slab shrinking callbacks
  - per-cgroup vmpressure notifications

Link: http://lkml.kernel.org/r/20191022144803.302233-8-hannes@xxxxxxxxxxx
Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Reviewed-by: Roman Gushchin <guro@xxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/vmscan.c |   28 ++++++++++++++++++----------
 1 file changed, 18 insertions(+), 10 deletions(-)

--- a/mm/vmscan.c~mm-vmscan-split-shrink_node-into-node-part-and-memcgs-part
+++ a/mm/vmscan.c
@@ -2722,18 +2722,10 @@ static bool pgdat_memcg_congested(pg_dat
 		(memcg && memcg_congested(pgdat, memcg));
 }
 
-static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
+static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 {
-	struct reclaim_state *reclaim_state = current->reclaim_state;
 	struct mem_cgroup *root = sc->target_mem_cgroup;
-	unsigned long nr_reclaimed, nr_scanned;
-	bool reclaimable = false;
 	struct mem_cgroup *memcg;
-again:
-	memset(&sc->nr, 0, sizeof(sc->nr));
-
-	nr_reclaimed = sc->nr_reclaimed;
-	nr_scanned = sc->nr_scanned;
 
 	memcg = mem_cgroup_iter(root, NULL, NULL);
 	do {
@@ -2786,6 +2778,22 @@ again:
 			   sc->nr_reclaimed - reclaimed);
 
 	} while ((memcg = mem_cgroup_iter(root, memcg, NULL)));
+}
+
+static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
+{
+	struct reclaim_state *reclaim_state = current->reclaim_state;
+	struct mem_cgroup *root = sc->target_mem_cgroup;
+	unsigned long nr_reclaimed, nr_scanned;
+	bool reclaimable = false;
+
+again:
+	memset(&sc->nr, 0, sizeof(sc->nr));
+
+	nr_reclaimed = sc->nr_reclaimed;
+	nr_scanned = sc->nr_scanned;
+
+	shrink_node_memcgs(pgdat, sc);
 
 	if (reclaim_state) {
 		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
@@ -2793,7 +2801,7 @@ again:
 	}
 
 	/* Record the subtree's reclaim efficiency */
-	vmpressure(sc->gfp_mask, sc->target_mem_cgroup, true,
+	vmpressure(sc->gfp_mask, root, true,
 		   sc->nr_scanned - nr_scanned,
 		   sc->nr_reclaimed - nr_reclaimed);
 
_
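As an aside for anyone reading the thread without applying the diff: below is a rough, hand-condensed outline of how the two functions divide the work after this patch. It is not a compilable excerpt; the parts untouched by the patch (the per-cgroup lru/slab shrinking inside the loop and the node-level throttling/retry logic) are only summarized in comments and therefore do not appear in the hunks above.

static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
{
	struct mem_cgroup *root = sc->target_mem_cgroup;
	struct mem_cgroup *memcg;

	memcg = mem_cgroup_iter(root, NULL, NULL);
	do {
		/* per-cgroup work, unchanged by this patch:
		 * memory.low checks, lru and slab shrinking,
		 * per-cgroup vmpressure notification */
	} while ((memcg = mem_cgroup_iter(root, memcg, NULL)));
}

static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
{
	bool reclaimable = false;

again:
	/* snapshot sc->nr_scanned / sc->nr_reclaimed, clear sc->nr */

	shrink_node_memcgs(pgdat, sc);

	/* node-level work, unchanged by this patch: slab reclaim
	 * accounting, tree-wide vmpressure, writeback/congestion
	 * throttling, reclaim/compaction, and the "goto again"
	 * retry when kswapd has not given up on the node yet */

	return reclaimable;
}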
Patches currently in -mm which might be from hannes@xxxxxxxxxxx are

mm-rate-limit-allocation-failure-warnings-more-aggressively.patch
mm-memcontrol-fix-network-errors-from-failing-__gfp_atomic-charges.patch
mm-memcontrol-remove-dead-code-from-memory_max_write.patch
mm-memcontrol-try-harder-to-set-a-new-memoryhigh.patch
mm-drop-mmap_sem-before-calling-balance_dirty_pages-in-write-fault.patch
mm-vmscan-simplify-lruvec_lru_size.patch
mm-clean-up-and-clarify-lruvec-lookup-procedure.patch
mm-vmscan-move-inactive_list_is_low-swap-check-to-the-caller.patch
mm-vmscan-naming-fixes-global_reclaim-and-sane_reclaim.patch
mm-vmscan-replace-shrink_node-loop-with-a-retry-jump.patch
mm-vmscan-turn-shrink_node_memcg-into-shrink_lruvec.patch
mm-vmscan-split-shrink_node-into-node-part-and-memcgs-part.patch
mm-vmscan-split-shrink_node-into-node-part-and-memcgs-part-fix.patch
mm-vmscan-harmonize-writeback-congestion-tracking-for-nodes-memcgs.patch