The patch titled
     Subject: mm, page_frag: recover from memory pressure
has been removed from the -mm tree.  Its filename was
     page_frag-recover-from-memory-pressure.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Dongli Zhang <dongli.zhang@xxxxxxxxxx>
Subject: mm, page_frag: recover from memory pressure

An ethernet driver may allocate an skb (and skb->data) via
napi_alloc_skb().  This ends up calling page_frag_alloc(), which
allocates skb->data from page_frag_cache->va.

Under memory pressure, page_frag_cache->va may be backed by a pfmemalloc
page.  As a result, skb->pfmemalloc is always true because skb->data
comes from page_frag_cache->va.  The skb will be dropped if the sock
(receiver) does not have SOCK_MEMALLOC.  This is expected behaviour
under memory pressure.

However, once the kernel is no longer under memory pressure (suppose a
large number of pages have just been reclaimed), page_frag_alloc() may
still reuse the prior pfmemalloc page_frag_cache->va to allocate
skb->data.  As a result, skb->pfmemalloc stays true until
page_frag_cache->va is re-allocated, even though the kernel is no longer
under memory pressure.

Here is how the kernel runs into the issue:

1. The kernel is under memory pressure and the
   PAGE_FRAG_CACHE_MAX_ORDER allocation in __page_frag_cache_refill()
   fails.  Instead, a pfmemalloc page is allocated for
   page_frag_cache->va.

2. All skb->data from page_frag_cache->va (pfmemalloc) will have
   skb->pfmemalloc=true.  The skb will always be dropped by a sock
   without SOCK_MEMALLOC.  This is expected behaviour.

3. Suppose a large number of pages are reclaimed and the kernel is no
   longer under memory pressure.  We expect the skb->pfmemalloc drops to
   stop.

4. Unfortunately, page_frag_alloc() does not proactively re-allocate
   page_frag_cache->va and always reuses the prior pfmemalloc page.
   skb->pfmemalloc therefore remains true even though the kernel is no
   longer under memory pressure.

Fix this by freeing and re-allocating the page instead of recycling it.

Link: https://lore.kernel.org/lkml/20201103193239.1807-1-dongli.zhang@xxxxxxxxxx/
Link: https://lore.kernel.org/linux-mm/20201105042140.5253-1-willy@xxxxxxxxxxxxx/
Link: https://lkml.kernel.org/r/20201115201029.11903-1-dongli.zhang@xxxxxxxxxx
Fixes: 79930f5892e ("net: do not deplete pfmemalloc reserve")
Signed-off-by: Dongli Zhang <dongli.zhang@xxxxxxxxxx>
Suggested-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
Reviewed-by: Eric Dumazet <edumazet@xxxxxxxxxx>
Cc: Aruna Ramakrishna <aruna.ramakrishna@xxxxxxxxxx>
Cc: Bert Barbe <bert.barbe@xxxxxxxxxx>
Cc: Rama Nichanamatlu <rama.nichanamatlu@xxxxxxxxxx>
Cc: Venkat Venkatsubra <venkat.x.venkatsubra@xxxxxxxxxx>
Cc: Manjunath Patil <manjunath.b.patil@xxxxxxxxxx>
Cc: Joe Jin <joe.jin@xxxxxxxxxx>
Cc: SRINIVAS <srinivas.eeda@xxxxxxxxxx>
Cc: David S. Miller <davem@xxxxxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |    5 +++++
 1 file changed, 5 insertions(+)

--- a/mm/page_alloc.c~page_frag-recover-from-memory-pressure
+++ a/mm/page_alloc.c
@@ -5103,6 +5103,11 @@ refill:
 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
 			goto refill;
 
+		if (unlikely(nc->pfmemalloc)) {
+			free_the_page(page, compound_order(page));
+			goto refill;
+		}
+
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 		/* if size can vary use size else just use PAGE_SIZE */
 		size = nc->size;
_

Patches currently in -mm which might be from dongli.zhang@xxxxxxxxxx are