I found a problem on my test machine: should_reclaim_retry() does not
check the right node if I set cpuset.mems.

1. Test steps and machine info.
------------
root@vm:/sys/fs/cgroup/test# numactl -H | grep size
node 0 size: 9477 MB
node 1 size: 10079 MB
node 2 size: 10079 MB
node 3 size: 10078 MB
root@vm:/sys/fs/cgroup/test# cat cpuset.mems
2
root@vm:/sys/fs/cgroup/test# stress --vm 1 --vm-bytes 12g --vm-keep
stress: info: [33430] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [33430] (425) <-- worker 33431 got signal 9
stress: WARN: [33430] (427) now reaping child worker processes
stress: FAIL: [33430] (461) failed run completed in 2s

2. reclaim_retry_zone info:
We can only allocate pages from node=2, but reclaim_retry_zone checks
node=0 and returns true.

root@vm:/sys/kernel/debug/tracing# cat trace
stress-33431 [001] ..... 13223.617311: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=1 wmark_check=1
stress-33431 [001] ..... 13223.617682: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=2 wmark_check=1
stress-33431 [001] ..... 13223.618103: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=3 wmark_check=1
stress-33431 [001] ..... 13223.618454: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=4 wmark_check=1
stress-33431 [001] ..... 13223.618770: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=5 wmark_check=1
stress-33431 [001] ..... 13223.619150: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=6 wmark_check=1
stress-33431 [001] ..... 13223.619510: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=7 wmark_check=1
stress-33431 [001] ..... 13223.619850: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=8 wmark_check=1
stress-33431 [001] ..... 13223.620171: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=9 wmark_check=1
stress-33431 [001] ..... 13223.620533: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=10 wmark_check=1
stress-33431 [001] ..... 13223.620894: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=11 wmark_check=1
stress-33431 [001] ..... 13223.621224: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=12 wmark_check=1
stress-33431 [001] ..... 13223.621551: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=13 wmark_check=1
stress-33431 [001] ..... 13223.621847: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=14 wmark_check=1
stress-33431 [001] ..... 13223.622200: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=15 wmark_check=1
stress-33431 [001] ..... 13223.622580: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=16 wmark_check=1
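For context, here is the loop that emits the trace events above,
abridged from should_reclaim_retry() in mm/page_alloc.c (exact code
varies by kernel version). With ac->nodemask == NULL the zonelist
iterator visits every node, so node 0 passes the watermark check and
the allocation keeps retrying even though only node 2 is allowed:

	static inline bool
	should_reclaim_retry(gfp_t gfp_mask, unsigned order,
			     struct alloc_context *ac, int alloc_flags,
			     bool did_some_progress, int *no_progress_loops)
	{
		struct zoneref *z;
		struct zone *zone;
		...
		/*
		 * A NULL ac->nodemask disables the nodemask filter, so
		 * zones outside cpuset.mems (e.g. node 0) are evaluated.
		 */
		for_each_zone_zonelist_nodemask(zone, z, ac->zonelist,
					ac->highest_zoneidx, ac->nodemask) {
			unsigned long available, reclaimable;
			unsigned long min_wmark = min_wmark_pages(zone);
			bool wmark;

			available = reclaimable = zone_reclaimable_pages(zone);
			available += zone_page_state_snapshot(zone, NR_FREE_PAGES);

			wmark = __zone_watermark_ok(zone, order, min_wmark,
					ac->highest_zoneidx, alloc_flags,
					available);
			trace_reclaim_retry_zone(z, order, reclaimable,
					available, min_wmark,
					*no_progress_loops, wmark);
			if (wmark)
				return true;	/* keep retrying reclaim */
		}
		...
	}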
3. Root cause:
The nodemask usually comes from mempolicy via policy_nodemask(), which
returns NULL unless the memory policy is bind or prefer_many. The call
flow below shows how the cpuset restriction is lost on the way to
__alloc_pages_slowpath():

nodemask = NULL
__alloc_pages_noprof()
	prepare_alloc_pages()
		ac->nodemask = &cpuset_current_mems_allowed;
	get_page_from_freelist()
	ac.nodemask = nodemask;		/* reset to NULL */
	__alloc_pages_slowpath() {
		if (!(alloc_flags & ALLOC_CPUSET) || reserve_flags) {
			ac->nodemask = NULL;
			ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
					ac->highest_zoneidx, ac->nodemask);
		}
		/* so ac->nodemask == NULL here */
	}

According to the function flow above, the slowpath does not apply the
cpuset.mems limit, so we need to add it.

Test result:
Tried 3 times with different cpuset.mems settings, each time allocating
more memory than that node's size.

echo 1 > cpuset.mems
stress --vm 1 --vm-bytes 12g --vm-hang 0
---------------
echo 2 > cpuset.mems
stress --vm 1 --vm-bytes 12g --vm-hang 0
---------------
echo 3 > cpuset.mems
stress --vm 1 --vm-bytes 12g --vm-hang 0

The retry traces look like:
stress-2139 [003] ..... 666.934104: reclaim_retry_zone: node=1 zone=Normal order=0 reclaimable=7 available=7355 min_wmark=8598 no_progress_loops=1 wmark_check=0
stress-2204 [010] ..... 695.447393: reclaim_retry_zone: node=2 zone=Normal order=0 reclaimable=2 available=6916 min_wmark=8598 no_progress_loops=1 wmark_check=0
stress-2271 [008] ..... 725.683058: reclaim_retry_zone: node=3 zone=Normal order=0 reclaimable=17 available=8079 min_wmark=8597 no_progress_loops=1 wmark_check=0

With this patch, we check the right node and retry far less in
__alloc_pages_slowpath(), because there is nothing left to reclaim on
the allowed node.

Signed-off-by: Zhongkun He <hezhongkun.hzk@xxxxxxxxxxxxx>
---
 mm/page_alloc.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 29608ca294cf..5ea63bb8f8ff 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4338,6 +4338,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		ac->nodemask = NULL;
 		ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
 					ac->highest_zoneidx, ac->nodemask);
+	} else if (in_task() && !ac->nodemask) {
+		/* Set the nodemask if the request comes from user space. */
+		ac->nodemask = &cpuset_current_mems_allowed;
 	}
 
 	/* Attempt with potentially adjusted zonelist and alloc_flags */
-- 
2.20.1
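P.S. For reference, the in_task() && !ac->nodemask condition added
above mirrors the fastpath setup in prepare_alloc_pages(), abridged
below from mm/page_alloc.c (details may differ across kernel versions):

	if (cpusets_enabled()) {
		*alloc_gfp |= __GFP_HARDWALL;
		/*
		 * In interrupt context the allocation is unrelated to
		 * the current task's cpuset, so any node is acceptable;
		 * otherwise fall back to the cpuset's mems_allowed when
		 * no explicit nodemask was passed in.
		 */
		if (in_task() && !ac->nodemask)
			ac->nodemask = &cpuset_current_mems_allowed;
		else
			*alloc_flags |= ALLOC_CPUSET;
	}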