Currently mem_cgroup_resize_limit() retries to set the limit after
reclaiming 32 pages. It makes more sense to reclaim the needed amount of
pages right away. This works noticeably faster, especially if
'usage - limit' is big. E.g. bringing down the limit from 4G to 50M:

Before:
 # perf stat echo 50M > memory.limit_in_bytes

 Performance counter stats for 'echo 50M':

        386.582382      task-clock (msec)         #    0.835 CPUs utilized
             2,502      context-switches          #    0.006 M/sec

       0.463244382 seconds time elapsed

After:
 # perf stat echo 50M > memory.limit_in_bytes

 Performance counter stats for 'echo 50M':

        169.403906      task-clock (msec)         #    0.849 CPUs utilized
                14      context-switches          #    0.083 K/sec

       0.199536900 seconds time elapsed

Signed-off-by: Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Vladimir Davydov <vdavydov.dev@xxxxxxxxx>
---
 mm/memcontrol.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9d987f3e79dc..09bac2df2f12 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2448,6 +2448,7 @@ static DEFINE_MUTEX(memcg_limit_mutex);
 static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
 				   unsigned long limit, bool memsw)
 {
+	unsigned long nr_pages;
 	bool enlarge = false;
 	int ret;
 	bool limits_invariant;
@@ -2479,8 +2480,9 @@ static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
 		if (!ret)
 			break;
 
-		if (!try_to_free_mem_cgroup_pages(memcg, 1,
-					GFP_KERNEL, !memsw)) {
+		nr_pages = max_t(long, 1, page_counter_read(counter) - limit);
+		if (!try_to_free_mem_cgroup_pages(memcg, nr_pages,
+						GFP_KERNEL, !memsw)) {
 			ret = -EBUSY;
 			break;
 		}
--
2.13.6
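
A note for readers of the second hunk (illustration only, not part of the
patch): page_counter_read() and 'limit' are both unsigned long, so if a
concurrent uncharge races usage below the new limit, 'usage - limit' would
wrap around to a huge reclaim target. Evaluating the difference as a signed
long via max_t(long, 1, ...) clamps that case to a single page. Below is a
minimal userspace sketch of the same arithmetic; max_t() here is a
simplified stand-in for the kernel macro, and reclaim_target() is a made-up
helper name for illustration:

#include <stdio.h>

/* Simplified stand-in for the kernel's max_t() macro. */
#define max_t(type, a, b) ((type)(a) > (type)(b) ? (type)(a) : (type)(b))

/*
 * Hypothetical helper mirroring the patched line:
 *   nr_pages = max_t(long, 1, page_counter_read(counter) - limit);
 */
static unsigned long reclaim_target(unsigned long usage, unsigned long limit)
{
	return max_t(long, 1, usage - limit);
}

int main(void)
{
	/* 4G usage vs. 50M limit, in 4K pages: reclaim the excess at once */
	printf("%lu\n", reclaim_target(1048576, 12800));  /* prints 1035776 */

	/*
	 * Usage raced below the limit: the unsigned subtraction wraps, but
	 * the signed clamp falls back to a minimal batch of one page.
	 */
	printf("%lu\n", reclaim_target(12000, 12800));    /* prints 1 */

	return 0;
}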