Patch "Revert "udp: avoid calling sock_def_readable() if possible"" has been added to the 6.12-stable tree

This is a note to let you know that I've just added the patch titled

    Revert "udp: avoid calling sock_def_readable() if possible"

to the 6.12-stable tree, which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     revert-udp-avoid-calling-sock_def_readable-if-possib.patch
and it can be found in the queue-6.12 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 81a411dd59fd54e3b9545d683d8f23e57ca040e4
Author: Fernando Fernandez Mancera <ffmancera@xxxxxxxxxx>
Date:   Mon Dec 2 15:56:08 2024 +0000

    Revert "udp: avoid calling sock_def_readable() if possible"
    
    [ Upstream commit 3d501f562f63b290351169e3e9931ffe3d57b2ae ]
    
    This reverts commit 612b1c0dec5bc7367f90fc508448b8d0d7c05414. In a
    scenario with multiple threads blocking on recvfrom(), we need to call
    sock_def_readable() on every __udp_enqueue_schedule_skb(); otherwise
    the threads won't be woken up, as __skb_wait_for_more_packets() uses
    prepare_to_wait_exclusive().
    
    Link: https://bugzilla.redhat.com/2308477
    Fixes: 612b1c0dec5b ("udp: avoid calling sock_def_readable() if possible")
    Signed-off-by: Fernando Fernandez Mancera <ffmancera@xxxxxxxxxx>
    Reviewed-by: Eric Dumazet <edumazet@xxxxxxxxxx>
    Link: https://patch.msgid.link/20241202155620.1719-1-ffmancera@xxxxxxxxxx
    Signed-off-by: Jakub Kicinski <kuba@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 2849b273b1310..ff85242720a0a 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1516,7 +1516,6 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
 	struct sk_buff_head *list = &sk->sk_receive_queue;
 	int rmem, err = -ENOMEM;
 	spinlock_t *busy = NULL;
-	bool becomes_readable;
 	int size, rcvbuf;
 
 	/* Immediately drop when the receive queue is full.
@@ -1557,19 +1556,12 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
 	 */
 	sock_skb_set_dropcount(sk, skb);
 
-	becomes_readable = skb_queue_empty(list);
 	__skb_queue_tail(list, skb);
 	spin_unlock(&list->lock);
 
-	if (!sock_flag(sk, SOCK_DEAD)) {
-		if (becomes_readable ||
-		    sk->sk_data_ready != sock_def_readable ||
-		    READ_ONCE(sk->sk_peek_off) >= 0)
-			INDIRECT_CALL_1(sk->sk_data_ready,
-					sock_def_readable, sk);
-		else
-			sk_wake_async_rcu(sk, SOCK_WAKE_WAITD, POLL_IN);
-	}
+	if (!sock_flag(sk, SOCK_DEAD))
+		INDIRECT_CALL_1(sk->sk_data_ready, sock_def_readable, sk);
+
 	busylock_release(busy);
 	return 0;
 
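
For context, the exclusive-waiter behaviour mentioned in the commit message
can be pictured with the userspace sketch below. It is not part of the patch
and not the reproducer from the bugzilla report; the thread count and port
number are arbitrary. Several threads block in recvfrom() on one UDP socket,
so each datagram queued by __udp_enqueue_schedule_skb() must trigger
sk_data_ready() for every waiter to eventually be woken; with the reverted
optimisation in place, back-to-back datagrams could leave some of those
threads asleep.

/*
 * Illustrative sketch (not part of the patch): NTHREADS threads block in
 * recvfrom() on a shared UDP socket while a sender pushes one datagram per
 * thread.  Every enqueued datagram must generate a wake-up, because
 * __skb_wait_for_more_packets() registers each waiter with
 * prepare_to_wait_exclusive().  Build with: cc -pthread
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define NTHREADS 4		/* arbitrary number of blocked receivers */
#define PORT     9999		/* arbitrary local test port */

static int rx_fd;		/* receiving socket shared by all threads */

static void *rx_thread(void *arg)
{
	char buf[64];
	/* Each thread blocks here as an exclusive waiter in the kernel. */
	ssize_t n = recvfrom(rx_fd, buf, sizeof(buf), 0, NULL, NULL);

	if (n >= 0)
		printf("thread %ld received %zd bytes\n", (long)(intptr_t)arg, n);
	return NULL;
}

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family      = AF_INET,
		.sin_port        = htons(PORT),
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};
	pthread_t tids[NTHREADS];
	int tx_fd, i;

	rx_fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (bind(rx_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		return 1;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tids[i], NULL, rx_thread, (void *)(intptr_t)i);
	sleep(1);	/* let every thread reach recvfrom() */

	/* Send one datagram per waiting thread, back to back. */
	tx_fd = socket(AF_INET, SOCK_DGRAM, 0);
	for (i = 0; i < NTHREADS; i++)
		sendto(tx_fd, "ping", 4, 0, (struct sockaddr *)&addr, sizeof(addr));

	/* With the revert applied, every thread is woken and this returns. */
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tids[i], NULL);
	close(tx_fd);
	close(rx_fd);
	return 0;
}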



