The patch titled
     Subject: mm: memcontrol: switch soft limit default back to infinity
has been removed from the -mm tree.  Its filename was
     mm-memcontrol-switch-soft-limit-default-back-to-infinity.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Johannes Weiner <hannes@xxxxxxxxxxx>
Subject: mm: memcontrol: switch soft limit default back to infinity

3e32cb2e0a12 ("mm: memcontrol: lockless page counters") accidentally
switched the soft limit default from infinity to zero, which turns all
memcgs with even a single page into soft limit excessors and engages
soft limit reclaim on all of them during global memory pressure.  This
makes global reclaim generally more aggressive, but also inverts the
meaning of existing soft limit configurations where unset soft limits
are usually more generous than set ones.

Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxx>
Acked-by: Vladimir Davydov <vdavydov@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memcontrol.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff -puN mm/memcontrol.c~mm-memcontrol-switch-soft-limit-default-back-to-infinity mm/memcontrol.c
--- a/mm/memcontrol.c~mm-memcontrol-switch-soft-limit-default-back-to-infinity
+++ a/mm/memcontrol.c
@@ -4679,6 +4679,7 @@ mem_cgroup_css_alloc(struct cgroup_subsy
 	if (parent_css == NULL) {
 		root_mem_cgroup = memcg;
 		page_counter_init(&memcg->memory, NULL);
+		memcg->soft_limit = PAGE_COUNTER_MAX;
 		page_counter_init(&memcg->memsw, NULL);
 		page_counter_init(&memcg->kmem, NULL);
 	}
@@ -4724,6 +4725,7 @@ mem_cgroup_css_online(struct cgroup_subs

 	if (parent->use_hierarchy) {
 		page_counter_init(&memcg->memory, &parent->memory);
+		memcg->soft_limit = PAGE_COUNTER_MAX;
 		page_counter_init(&memcg->memsw, &parent->memsw);
 		page_counter_init(&memcg->kmem, &parent->kmem);

@@ -4733,6 +4735,7 @@ mem_cgroup_css_online(struct cgroup_subs
 		 */
 	} else {
 		page_counter_init(&memcg->memory, NULL);
+		memcg->soft_limit = PAGE_COUNTER_MAX;
 		page_counter_init(&memcg->memsw, NULL);
 		page_counter_init(&memcg->kmem, NULL);
 		/*
@@ -4807,7 +4810,7 @@ static void mem_cgroup_css_reset(struct
 	mem_cgroup_resize_limit(memcg, PAGE_COUNTER_MAX);
 	mem_cgroup_resize_memsw_limit(memcg, PAGE_COUNTER_MAX);
 	memcg_update_kmem_limit(memcg, PAGE_COUNTER_MAX);
-	memcg->soft_limit = 0;
+	memcg->soft_limit = PAGE_COUNTER_MAX;
 }

 #ifdef CONFIG_MMU
_
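For readers following along outside the kernel tree, below is a minimal
standalone C sketch (not kernel code; the struct, helper, and
SKETCH_COUNTER_MAX names are illustrative stand-ins for the memcg
soft_limit field and PAGE_COUNTER_MAX) of why the zero default
misbehaves: any group with even one page charged is immediately "in
excess" and becomes a soft limit reclaim target, whereas with the
default restored to the counter maximum an unset soft limit can never
be exceeded.

/*
 * Standalone sketch, not kernel code.  Illustrates the effect of the
 * soft limit default on the "excess" calculation that soft limit
 * reclaim is driven by.
 */
#include <stdio.h>
#include <limits.h>

#define SKETCH_COUNTER_MAX LONG_MAX	/* stand-in for PAGE_COUNTER_MAX */

struct sketch_memcg {
	long usage;		/* pages currently charged */
	long soft_limit;	/* pages allowed before soft reclaim */
};

/* pages above the soft limit, 0 if at or below it */
static long soft_limit_excess(const struct sketch_memcg *memcg)
{
	return memcg->usage > memcg->soft_limit ?
	       memcg->usage - memcg->soft_limit : 0;
}

int main(void)
{
	/* unset soft limit with the buggy zero default */
	struct sketch_memcg buggy = { .usage = 1, .soft_limit = 0 };
	/* unset soft limit with the restored infinity default */
	struct sketch_memcg fixed = { .usage = 1,
				      .soft_limit = SKETCH_COUNTER_MAX };

	printf("zero default:     excess = %ld (reclaim target)\n",
	       soft_limit_excess(&buggy));
	printf("infinity default: excess = %ld (left alone)\n",
	       soft_limit_excess(&fixed));
	return 0;
}
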
Patches currently in -mm which might be from hannes@xxxxxxxxxxx are

mm-page_alloc-embed-oom-killing-naturally-into-allocation-slowpath.patch
mm-memory-remove-vm_file-check-on-shared-writable-vmas.patch
mm-memory-merge-shared-writable-dirtying-branches-in-do_wp_page.patch
mm-page_alloc-place-zone_id-check-before-vm_bug_on_page-check.patch
memcg-zap-__memcg_chargeuncharge_slab.patch
memcg-zap-memcg_name-argument-of-memcg_create_kmem_cache.patch
memcg-zap-memcg_slab_caches-and-memcg_slab_mutex.patch
mm-add-fields-for-compound-destructor-and-order-into-struct-page.patch
swap-remove-unused-mem_cgroup_uncharge_swapcache-declaration.patch
mm-memcontrol-track-move_lock-state-internally.patch
mm-memcontrol-track-move_lock-state-internally-fix.patch
mm-page_allocc-__alloc_pages_nodemask-dont-alter-arg-gfp_mask.patch
mm-vmscan-wake-up-all-pfmemalloc-throttled-processes-at-once.patch
mm-hugetlb-reduce-arch-dependent-code-around-follow_huge_.patch
mm-hugetlb-pmd_huge-returns-true-for-non-present-hugepage.patch
mm-hugetlb-take-page-table-lock-in-follow_huge_pmd.patch
mm-hugetlb-fix-getting-refcount-0-page-in-hugetlb_fault.patch
mm-hugetlb-add-migration-hwpoisoned-entry-check-in-hugetlb_change_protection.patch
mm-hugetlb-add-migration-entry-check-in-__unmap_hugepage_range.patch
mm-hugetlb-fix-suboptimal-migration-hwpoisoned-entry-check.patch
mm-hugetlb-cleanup-and-rename-is_hugetlb_entry_migrationhwpoisoned.patch
mm-set-page-pfmemalloc-in-prep_new_page.patch
mm-page_alloc-reduce-number-of-alloc_pages-functions-parameters.patch
mm-reduce-try_to_compact_pages-parameters.patch
mm-microoptimize-zonelist-operations.patch
list_lru-introduce-list_lru_shrink_countwalk.patch
fs-consolidate-nrfree_cached_objects-args-in-shrink_control.patch
vmscan-per-memory-cgroup-slab-shrinkers.patch
memcg-rename-some-cache-id-related-variables.patch
memcg-add-rwsem-to-synchronize-against-memcg_caches-arrays-relocation.patch
list_lru-get-rid-of-active_nodes.patch
list_lru-organize-all-list_lrus-to-list.patch
list_lru-introduce-per-memcg-lists.patch
fs-make-shrinker-memcg-aware.patch
vmscan-force-scan-offline-memory-cgroups.patch
fs-proc-task_mmu-show-page-size-in-proc-pid-numa_maps.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html