From: Jason Xing <kernelxing@xxxxxxxxxxx>

Quoting from commit 7c80b038d23e ("net: fix sk_wmem_schedule() and
sk_rmem_schedule() errors"):

"If sk->sk_forward_alloc is 150000, and we need to schedule 150001 bytes,
we want to allocate 1 byte more (rounded up to one page), instead of 150001"

With this patch applied, the receive path no longer schedules an extra
amount of memory.

Fixes: f970bd9e3a06 ("udp: implement memory accounting helpers")
Signed-off-by: Jason Xing <kernelxing@xxxxxxxxxxx>
---
 net/ipv4/udp.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 9592fe3e444a..a13f622cfa36 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1567,16 +1567,16 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
 		goto uncharge_drop;
 
 	spin_lock(&list->lock);
-	if (size >= sk->sk_forward_alloc) {
-		amt = sk_mem_pages(size);
-		delta = amt << PAGE_SHIFT;
+	if (size > sk->sk_forward_alloc) {
+		delta = size - sk->sk_forward_alloc;
+		amt = sk_mem_pages(delta);
 		if (!__sk_mem_raise_allocated(sk, delta, amt, SK_MEM_RECV)) {
 			err = -ENOBUFS;
 			spin_unlock(&list->lock);
 			goto uncharge_drop;
 		}
 
-		sk->sk_forward_alloc += delta;
+		sk->sk_forward_alloc += amt << PAGE_SHIFT;
 	}
 
 	sk->sk_forward_alloc -= size;
-- 
2.37.3
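
[Editor's note] For reference, here is a quick user-space sketch of the
arithmetic for the 150000/150001 case quoted above. This is not kernel
code: PAGE_SHIFT and sk_mem_pages() are mimicked locally under a "mock_"
prefix, and a 4 KiB page size is assumed.

#include <stdio.h>

#define MOCK_PAGE_SHIFT 12
#define MOCK_PAGE_SIZE  (1 << MOCK_PAGE_SHIFT)

/* Mirrors sk_mem_pages(): round a byte count up to whole pages. */
static int mock_sk_mem_pages(int amt)
{
	return (amt + MOCK_PAGE_SIZE - 1) >> MOCK_PAGE_SHIFT;
}

int main(void)
{
	int forward_alloc = 150000;	/* sk->sk_forward_alloc before enqueue */
	int size = 150001;		/* bytes being scheduled for the new skb */

	/* Old logic: reserve enough pages to cover the whole skb size. */
	int old_pages = mock_sk_mem_pages(size);

	/* New logic: reserve only the shortfall, rounded up to pages. */
	int new_pages = mock_sk_mem_pages(size - forward_alloc);

	printf("old: charges %d page(s) = %d bytes\n",
	       old_pages, old_pages << MOCK_PAGE_SHIFT);
	printf("new: charges %d page(s) = %d bytes\n",
	       new_pages, new_pages << MOCK_PAGE_SHIFT);
	return 0;
}

With a 4 KiB page this prints 37 pages (151552 bytes) charged by the old
logic versus 1 page (4096 bytes) by the new one, which is the extra
charging the patch avoids.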