From: Eric Dumazet <edumazet@xxxxxxxxxx>

This patch has been added to the 3.12 stable tree. If you have any
objections, please let us know.

===============

[ Upstream commit 79930f5892e134c6da1254389577fffb8bd72c66 ]

build_skb() should look at the page pfmemalloc status.
If set, this means the page allocator allocated this page in the
expectation it would help to free other pages. The networking stack
can do that only if skb->pfmemalloc is also set.

Also, we must refrain from using high order pages from the pfmemalloc
reserve, so __page_frag_refill() must also use __GFP_NOMEMALLOC for
them. Under memory pressure, using order-0 pages is probably the best
strategy.

Signed-off-by: Eric Dumazet <edumazet@xxxxxxxxxx>
Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
Signed-off-by: Jiri Slaby <jslaby@xxxxxxx>
---
 net/core/skbuff.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 17313d17a923..d354bb277539 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -308,7 +308,11 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size)
 
 	memset(skb, 0, offsetof(struct sk_buff, tail));
 	skb->truesize = SKB_TRUESIZE(size);
-	skb->head_frag = frag_size != 0;
+	if (frag_size) {
+		skb->head_frag = 1;
+		if (virt_to_head_page(data)->pfmemalloc)
+			skb->pfmemalloc = 1;
+	}
 	atomic_set(&skb->users, 1);
 	skb->head = data;
 	skb->data = data;
@@ -351,7 +355,8 @@ refill:
 			gfp_t gfp = gfp_mask;
 
 			if (order)
-				gfp |= __GFP_COMP | __GFP_NOWARN;
+				gfp |= __GFP_COMP | __GFP_NOWARN |
+				       __GFP_NOMEMALLOC;
 			nc->frag.page = alloc_pages(gfp, order);
 			if (likely(nc->frag.page))
 				break;
-- 
2.3.5
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
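
For quick reference, the two hunks above amount to the commented sketch
below. This is illustration only, not a standalone build: the surrounding
kernel context (skb, data, frag_size, gfp, order) is assumed from the
functions being patched.

	/* build_skb(): when the skb head was built from a page fragment,
	 * propagate the page's pfmemalloc status to the skb so the stack
	 * restricts this memory to reclaim-related traffic.
	 */
	if (frag_size) {
		skb->head_frag = 1;
		if (virt_to_head_page(data)->pfmemalloc)
			skb->pfmemalloc = 1;
	}

	/* Refill path (__page_frag_refill() in the upstream commit):
	 * never satisfy a high-order page allocation from the pfmemalloc
	 * reserve; under memory pressure, fall back towards order-0 pages.
	 */
	if (order)
		gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NOMEMALLOC;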