Patch "tcp: fix a signed-integer-overflow bug in tcp_add_backlog()" has been added to the 5.4-stable tree

This is a note to let you know that I've just added the patch titled

    tcp: fix a signed-integer-overflow bug in tcp_add_backlog()

to the 5.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     tcp-fix-a-signed-integer-overflow-bug-in-tcp_add_bac.patch
and it can be found in the queue-5.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 8f73bc6c152f7a62f9db0bb995edbdc17ee78f34
Author: Lu Wei <luwei32@xxxxxxxxxx>
Date:   Fri Oct 21 12:06:22 2022 +0800

    tcp: fix a signed-integer-overflow bug in tcp_add_backlog()
    
    [ Upstream commit ec791d8149ff60c40ad2074af3b92a39c916a03f ]
    
    In struct sock, sk_rcvbuf and sk_sndbuf are of type int, and in
    tcp_add_backlog() the variable limit is calculated by adding
    sk_rcvbuf, sk_sndbuf and 64 * 1024, so the sum can exceed the
    maximum value of int and overflow. Fix this by performing the
    addition in u32 and halving the sndbuf contribution, which keeps
    the limit budget in range; this is safe since ACK packets are
    much smaller than the payload. (A user-space sketch of the
    arithmetic follows the diff below.)
    
    Fixes: c9c3321257e1 ("tcp: add tcp_add_backlog()")
    Signed-off-by: Lu Wei <luwei32@xxxxxxxxxx>
    Reviewed-by: Eric Dumazet <edumazet@xxxxxxxxxx>
    Acked-by: Kuniyuki Iwashima <kuniyu@xxxxxxxxxx>
    Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
    Stable-dep-of: ec00ed472bdb ("tcp: avoid premature drops in tcp_add_backlog()")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 1567072071633..d29d4b8192643 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1781,11 +1781,13 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
 	__skb_push(skb, hdrlen);
 
 no_coalesce:
+	limit = (u32)READ_ONCE(sk->sk_rcvbuf) + (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
+
 	/* Only socket owner can try to collapse/prune rx queues
 	 * to reduce memory overhead, so add a little headroom here.
 	 * Few sockets backlog are possibly concurrently non empty.
 	 */
-	limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf) + 64*1024;
+	limit += 64 * 1024;
 
 	if (unlikely(sk_add_backlog(sk, skb, limit))) {
 		bh_unlock_sock(sk);
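
For readers who want to see the arithmetic concretely, here is a minimal
user-space sketch of the overflow and of the fixed computation. It is an
illustration only: the variable names rcvbuf/sndbuf and the INT_MAX values
are assumptions standing in for sk->sk_rcvbuf and sk->sk_sndbuf, which are
plain ints in struct sock and can each be tuned up toward INT_MAX.

	#include <limits.h>
	#include <stdio.h>

	int main(void)
	{
		int rcvbuf = INT_MAX;	/* hypothetical value, e.g. via SO_RCVBUF / sysctl */
		int sndbuf = INT_MAX;	/* hypothetical value */

		/* Before the patch the sum was evaluated in signed int:
		 *
		 *	limit = rcvbuf + sndbuf + 64 * 1024;
		 *
		 * INT_MAX + INT_MAX overflows int, which is undefined
		 * behavior and is what UBSAN reports as a
		 * signed-integer-overflow.
		 */

		/* After the patch the operands are cast to u32 first, so
		 * the addition is unsigned (well-defined), and sndbuf is
		 * halved so the total stays inside the u32 range.
		 */
		unsigned int limit = (unsigned int)rcvbuf +
				     (unsigned int)(sndbuf >> 1);
		limit += 64 * 1024;

		printf("limit = %u\n", limit);	/* 3221291006, no overflow */
		return 0;
	}

The casts mirror the (u32) casts in the diff above, and halving sndbuf is
safe for this budget because, as the changelog notes, ACK packets queued on
the backlog are much smaller than the payload they acknowledge.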



