Patch "af_unix: Move spin_lock() in manage_oob()." has been added to the 6.11-stable tree

This is a note to let you know that I've just added the patch titled

    af_unix: Move spin_lock() in manage_oob().

to the 6.11-stable tree, which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     af_unix-move-spin_lock-in-manage_oob.patch
and it can be found in the queue-6.11 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 95a54cd681822bc2d172d06e41f42f93985cff96
Author: Kuniyuki Iwashima <kuniyu@xxxxxxxxxx>
Date:   Thu Sep 5 12:32:39 2024 -0700

    af_unix: Move spin_lock() in manage_oob().
    
    [ Upstream commit a0264a9f51fe0d196f22efd7538eb749e3448c2d ]
    
    When the OOB skb has already been consumed, manage_oob() returns the
    next skb if one exists.  In that case, we need to fall back to the
    else branch below.
    
    Then, we want to keep holding spin_lock(&sk->sk_receive_queue.lock).
    
    Let's move it out of the if-else branch and add a lightweight check
    before spin_lock() for the common case without an OOB skb.
    
    Signed-off-by: Kuniyuki Iwashima <kuniyu@xxxxxxxxxx>
    Link: https://patch.msgid.link/20240905193240.17565-4-kuniyu@xxxxxxxxxx
    Signed-off-by: Jakub Kicinski <kuba@xxxxxxxxxx>
    Stable-dep-of: 5aa57d9f2d53 ("af_unix: Don't return OOB skb in manage_oob().")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index 91d7877a10794..159d78fc3d14d 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -2657,9 +2657,12 @@ static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
 	struct sk_buff *read_skb = NULL, *unread_skb = NULL;
 	struct unix_sock *u = unix_sk(sk);
 
-	if (!unix_skb_len(skb)) {
-		spin_lock(&sk->sk_receive_queue.lock);
+	if (likely(unix_skb_len(skb) && skb != READ_ONCE(u->oob_skb)))
+		return skb;
 
+	spin_lock(&sk->sk_receive_queue.lock);
+
+	if (!unix_skb_len(skb)) {
 		if (copied && (!u->oob_skb || skb == u->oob_skb)) {
 			skb = NULL;
 		} else if (flags & MSG_PEEK) {
@@ -2670,14 +2673,9 @@ static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
 			__skb_unlink(read_skb, &sk->sk_receive_queue);
 		}
 
-		spin_unlock(&sk->sk_receive_queue.lock);
-
-		consume_skb(read_skb);
-		return skb;
+		goto unlock;
 	}
 
-	spin_lock(&sk->sk_receive_queue.lock);
-
 	if (skb != u->oob_skb)
 		goto unlock;
 
@@ -2698,6 +2696,7 @@ static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
 unlock:
 	spin_unlock(&sk->sk_receive_queue.lock);
 
+	consume_skb(read_skb);
 	kfree_skb(unread_skb);
 
 	return skb;
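
For reference, the pattern the commit message describes (a cheap, lockless
check so the common case never takes the lock, with a single spin_lock()
covering all remaining slow-path branches) can be sketched in userspace
roughly as follows.  This is only an illustrative sketch, not part of the
patch: it uses a pthread mutex and a C11 atomic load in place of the
kernel's spin_lock() and READ_ONCE(), and the queue structure and field
names below are hypothetical.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct queue {
	pthread_mutex_t lock;
	_Atomic(void *) oob_item;	/* stands in for u->oob_skb */
};

/* Returns true when @item may be consumed as ordinary in-band data. */
static bool fast_path_check(struct queue *q, void *item, size_t len)
{
	/*
	 * Fast path: the item has data and is not the out-of-band item,
	 * so no locking is needed (mirrors the new likely() check that
	 * reads u->oob_skb with READ_ONCE()).
	 */
	if (len && item != atomic_load_explicit(&q->oob_item,
						memory_order_relaxed))
		return true;

	/*
	 * Slow path: take the lock once, up front, so every remaining
	 * branch runs under it and exits through a single unlock point.
	 */
	pthread_mutex_lock(&q->lock);

	/* ... OOB bookkeeping would go here, under the lock ... */

	pthread_mutex_unlock(&q->lock);
	return false;
}

The key point is the same as in the diff above: the common receive path
(data present, no pending OOB skb) never touches sk_receive_queue.lock,
and the slow path acquires it exactly once instead of separately in each
branch.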



