We call memcg_oom_recover() in uncharge_batch() to wake up OOM-waiting tasks
when pages are uncharged, but we do not do this when slab pages are
uncharged. We should also do it when draining the per-CPU stock.

memcg_oom_recover() is small, so make it inline. Also, the memcg parameter
cannot be NULL, so remove the NULL check.

Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
---
 mm/memcontrol.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8c035846c7a4..8569f4dbea2a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1925,7 +1925,7 @@ static int memcg_oom_wake_function(wait_queue_entry_t *wait,
 	return autoremove_wake_function(wait, mode, sync, arg);
 }
 
-static void memcg_oom_recover(struct mem_cgroup *memcg)
+static inline void memcg_oom_recover(struct mem_cgroup *memcg)
 {
 	/*
 	 * For the following lockless ->under_oom test, the only required
@@ -1935,7 +1935,7 @@ static void memcg_oom_recover(struct mem_cgroup *memcg)
 	 * achieved by invoking mem_cgroup_mark_under_oom() before
 	 * triggering notification.
 	 */
-	if (memcg && memcg->under_oom)
+	if (memcg->under_oom)
 		__wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, memcg);
 }
 
@@ -2313,6 +2313,7 @@ static void drain_stock(struct memcg_stock_pcp *stock)
 		page_counter_uncharge(&old->memory, stock->nr_pages);
 		if (do_memsw_account())
 			page_counter_uncharge(&old->memsw, stock->nr_pages);
+		memcg_oom_recover(old);
 		stock->nr_pages = 0;
 	}
-- 
2.11.0
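
For readers skimming the diff, the sketch below shows roughly how
drain_stock() reads with the patch applied. It is condensed from the hunk
above; the declaration of "old" and the remaining teardown of the stock are
paraphrased rather than copied verbatim from the kernel source.

/*
 * Rough sketch of drain_stock() with the patch applied, condensed from
 * the hunk above; surrounding details are approximated, not verbatim.
 */
static void drain_stock(struct memcg_stock_pcp *stock)
{
	struct mem_cgroup *old = stock->cached;

	if (stock->nr_pages) {
		/* Give the locally cached charges back to the counters. */
		page_counter_uncharge(&old->memory, stock->nr_pages);
		if (do_memsw_account())
			page_counter_uncharge(&old->memsw, stock->nr_pages);
		/*
		 * Charges were just released, so wake any task sleeping on
		 * memcg_oom_waitq for this memcg, mirroring uncharge_batch().
		 */
		memcg_oom_recover(old);
		stock->nr_pages = 0;
	}
	/* Remaining teardown of stock->cached is omitted here. */
}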