The state of the system when the issue was exposed, as shown in the OOM kill logs:

[  295.998653] Normal free:7728kB boost:0kB min:804kB low:1004kB high:1204kB reserved_highatomic:8192KB active_anon:4kB inactive_anon:0kB active_file:24kB inactive_file:24kB unevictable:1220kB writepending:0kB present:70732kB managed:49224kB mlocked:0kB bounce:0kB free_pcp:688kB local_pcp:492kB free_cma:0kB
[  295.998656] lowmem_reserve[]: 0 32
[  295.998659] Normal: 508*4kB (UMEH) 241*8kB (UMEH) 143*16kB (UMEH) 33*32kB (UH) 7*64kB (UH) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 7752kB

From the above it can be seen that ~16MB of memory is reserved for high atomic reserves, against the expectation of 1% of the zone's managed memory; this is fixed in the first patch. The second patch does not reserve high atomic pageblocks at all if 1% of the zone's memory size is below a pageblock size (a rough sketch of this check is appended after the diffstat below).

Changes in V3:
 o Separated out the unreserving of the high atomic pageblocks from should_reclaim_retry() into its own patch.
 o Don't reserve high atomic pageblocks for smaller zone sizes.

Changes in V2:
 o Unreserving of the high atomic pageblocks is done from should_reclaim_retry().
 o Reserve a minimum of one pageblock and a maximum of 1% of the zone's memory for high atomic reserves.
 o Drain the PCP lists before falling back to OOM.
 o https://lore.kernel.org/linux-mm/cover.1699104759.git.quic_charante@xxxxxxxxxxx/

Changes in V1:
 o Tried to unreserve the high atomic pageblocks from the OOM kill path rather than in should_reclaim_retry().
 o Discussed why much more than 1% of managed memory is reserved for high atomic reserves.
 o https://lore.kernel.org/linux-mm/1698669590-3193-1-git-send-email-quic_charante@xxxxxxxxxxx/

Charan Teja Kalla (2):
  mm: page_alloc: correct high atomic reserve calculations
  mm: pagealloc: enforce minimum zone size to do high atomic reserves

 mm/page_alloc.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

--
2.7.4
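
For illustration, below is a minimal sketch of the reservation policy the series aims for. The helper name reserve_highatomic_pageblock_sketch() and the surrounding control flow are assumptions for illustration only; zone_managed_pages(), pageblock_nr_pages and zone->nr_reserved_highatomic are existing kernel symbols, but the actual patches may differ in detail.

/*
 * Illustrative sketch only -- not the actual patch. It shows the two
 * rules described above for deciding whether to grow a zone's high
 * atomic reserves.
 */
static void reserve_highatomic_pageblock_sketch(struct zone *zone)
{
        unsigned long max_managed;

        /*
         * Patch 2: if 1% of the zone's managed memory is smaller than
         * one pageblock, reserving even a single pageblock would
         * overshoot the 1% budget, so reserve nothing.
         */
        if ((zone_managed_pages(zone) / 100) < pageblock_nr_pages)
                return;

        /*
         * Patch 1: cap the reserves at roughly 1% of the zone's
         * managed memory, rounded to whole pageblocks.
         */
        max_managed = ALIGN(zone_managed_pages(zone) / 100,
                            pageblock_nr_pages);
        if (zone->nr_reserved_highatomic >= max_managed)
                return;

        /*
         * ... take zone->lock and convert one free pageblock to
         * MIGRATE_HIGHATOMIC, bumping zone->nr_reserved_highatomic ...
         */
}

On the system from the log above (managed:49224kB), 1% of the zone is roughly 492KB, which is below a pageblock on typical configurations with 2MB or 4MB pageblocks, so under this policy no high atomic pageblocks would be reserved there at all.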