Patch "bpf, sockmap: Convert schedule_work into delayed_work" has been added to the 5.15-stable tree

This is a note to let you know that I've just added the patch titled

    bpf, sockmap: Convert schedule_work into delayed_work

to the 5.15-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     bpf-sockmap-convert-schedule_work-into-delayed_work.patch
and it can be found in the queue-5.15 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit d2aeeca726ebb44363ea0fe4f7b97d253e8198d4
Author: John Fastabend <john.fastabend@xxxxxxxxx>
Date:   Mon May 22 19:56:06 2023 -0700

    bpf, sockmap: Convert schedule_work into delayed_work
    
    [ Upstream commit 29173d07f79883ac94f5570294f98af3d4287382 ]
    
    Sk_buffs are fed into sockmap verdict programs either from a strparser
    (when the user might want to decide how framing of the skb is done by
    attaching another parser program) or directly through tcp_read_sock.
    tcp_read_sock is the preferred method for performance when the BPF
    logic is a stream parser.
    
    The flow for Cilium's common use case with a stream parser is,
    
     tcp_read_sock()
      sk_psock_verdict_recv
        ret = bpf_prog_run_pin_on_cpu()
        sk_psock_verdict_apply(sock, skb, ret)
         // if system is under memory pressure or app is slow we may
         // need to queue skb. Do this queuing through ingress_skb and
         // then kick timer to wake up handler
         skb_queue_tail(ingress_skb, skb)
         schedule_work(work);
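
    For illustration only, here is a minimal sketch of that queue-and-kick
    pattern, using a hypothetical struct backlog in place of the real
    struct sk_psock:

     #include <linux/skbuff.h>
     #include <linux/workqueue.h>

     struct backlog {
             struct sk_buff_head ingress_skb;
             struct work_struct work;      /* converted by this patch */
     };

     /* Verdict path: park an skb we cannot handle right now and kick
      * the worker that will retry it later.
      */
     static void queue_and_kick(struct backlog *b, struct sk_buff *skb)
     {
             skb_queue_tail(&b->ingress_skb, skb);
             schedule_work(&b->work);      /* the call being converted */
     }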
    
    The work queue is wired up to sk_psock_backlog(). This will then walk
    the ingress_skb list that holds the sk_buffs which could not be handled
    immediately, but should be OK to run at some later point. However, it's
    possible that the workqueue doing this work still hits an error when
    sending the skb. When this happens the skbuff is requeued on a temporary
    'state' struct kept with the workqueue. This is necessary because it's
    possible to partially send an skbuff before hitting an error and we need
    to know how and where to restart when the workqueue runs next.
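
    The resume state kept with the workqueue looks approximately like
    struct sk_psock_work_state in include/linux/skmsg.h (the field
    comments here are illustrative):

     struct sk_psock_work_state {
             struct sk_buff *skb;   /* skb we only partially sent */
             u32 len;               /* bytes still left to transmit */
             u32 off;               /* offset into the skb already sent */
     };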
    
    Now for the trouble: we don't rekick the workqueue. This can cause a
    stall where the skbuff we just cached on the state variable might never
    be sent. This happens when it's the last packet in a flow and no further
    packets come along that would cause the system to kick the workqueue
    from that side.
    
    To fix this we could do a simple schedule_work(), but while under memory
    pressure it makes sense to back off some instead of continuing to retry
    repeatedly. So instead, to fix, convert schedule_work() to
    schedule_delayed_work() and add backoff logic to reschedule from the
    backlog queue on errors. It's not obvious what a good backoff is, so
    use '1'.
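
    In workqueue API terms the conversion follows the standard delayed_work
    pattern; a rough sketch, with a hypothetical try_send() helper standing
    in for the real transmit logic (the actual changes are in the diff
    below):

     /* The embedded member changes from struct work_struct to
      * struct delayed_work; the handler recovers its container
      * via to_delayed_work().
      */
     static void backlog_fn(struct work_struct *work)
     {
             struct delayed_work *dwork = to_delayed_work(work);
             struct backlog *b = container_of(dwork, struct backlog, work);

             if (try_send(b) == -EAGAIN)   /* hypothetical helper */
                     /* back off one jiffy instead of spinning */
                     schedule_delayed_work(&b->work, 1);
     }

     /* And at the existing call sites:
      *   INIT_WORK()        -> INIT_DELAYED_WORK()
      *   schedule_work(w)   -> schedule_delayed_work(w, 0)
      *   cancel_work_sync() -> cancel_delayed_work_sync()
      */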
    
    To test, we observed some flakes while running the NGINX compliance test
    with sockmap; we attributed these failed tests to this bug and a
    subsequent issue.
    
    From on-list discussion. This commit
    
     bec217197b41 ("skmsg: Schedule psock work if the cached skb exists on the psock")
    
    was intended to address a similar race, but missed a couple of cases.
    Most obviously, it only accounted for receiving traffic on the local
    socket, so if redirecting into another socket we could still get an
    sk_buff stuck there. Next, it missed the case where copied=0 in the
    recv() handler, in which case we wouldn't kick the scheduler. Also, it's
    sub-optimal to require userspace to kick the internal mechanisms of
    sockmap to wake it up and copy data to userspace: it results in an extra
    syscall and requires the app to actually handle the EAGAIN correctly.
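
    That earlier rekick lives in the recvmsg path and, paraphrasing the
    sk_msg_recvmsg() hunk below, only fires when data was actually copied:

     if (psock->work_state.skb && copied > 0)
             schedule_work(&psock->work);
     /* With copied == 0, or with traffic redirected into another
      * socket, this point is never reached and nothing wakes the
      * worker.
      */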
    
    Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
    Signed-off-by: John Fastabend <john.fastabend@xxxxxxxxx>
    Signed-off-by: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
    Tested-by: William Findlay <will@xxxxxxxxxxxxx>
    Reviewed-by: Jakub Sitnicki <jakub@xxxxxxxxxxxxxx>
    Link: https://lore.kernel.org/bpf/20230523025618.113937-3-john.fastabend@xxxxxxxxx
    Stable-dep-of: 405df89dd52c ("bpf, sockmap: Improved check for empty queue")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index f18eb6a6f7631..07a8e7c695373 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -107,7 +107,7 @@ struct sk_psock {
 	struct proto			*sk_proto;
 	struct mutex			work_mutex;
 	struct sk_psock_work_state	work_state;
-	struct work_struct		work;
+	struct delayed_work		work;
 	struct rcu_work			rwork;
 };
 
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 6bdb15b05a78d..e9fddceba390e 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -482,7 +482,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 	}
 out:
 	if (psock->work_state.skb && copied > 0)
-		schedule_work(&psock->work);
+		schedule_delayed_work(&psock->work, 0);
 	return copied;
 }
 EXPORT_SYMBOL_GPL(sk_msg_recvmsg);
@@ -633,7 +633,8 @@ static void sk_psock_skb_state(struct sk_psock *psock,
 
 static void sk_psock_backlog(struct work_struct *work)
 {
-	struct sk_psock *psock = container_of(work, struct sk_psock, work);
+	struct delayed_work *dwork = to_delayed_work(work);
+	struct sk_psock *psock = container_of(dwork, struct sk_psock, work);
 	struct sk_psock_work_state *state = &psock->work_state;
 	struct sk_buff *skb = NULL;
 	bool ingress;
@@ -673,6 +674,12 @@ static void sk_psock_backlog(struct work_struct *work)
 				if (ret == -EAGAIN) {
 					sk_psock_skb_state(psock, state, skb,
 							   len, off);
+
+					/* Delay slightly to prioritize any
+					 * other work that might be here.
+					 */
+					if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
+						schedule_delayed_work(&psock->work, 1);
 					goto end;
 				}
 				/* Hard errors break pipe and stop xmit. */
@@ -727,7 +734,7 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
 	INIT_LIST_HEAD(&psock->link);
 	spin_lock_init(&psock->link_lock);
 
-	INIT_WORK(&psock->work, sk_psock_backlog);
+	INIT_DELAYED_WORK(&psock->work, sk_psock_backlog);
 	mutex_init(&psock->work_mutex);
 	INIT_LIST_HEAD(&psock->ingress_msg);
 	spin_lock_init(&psock->ingress_lock);
@@ -816,7 +823,7 @@ static void sk_psock_destroy(struct work_struct *work)
 
 	sk_psock_done_strp(psock);
 
-	cancel_work_sync(&psock->work);
+	cancel_delayed_work_sync(&psock->work);
 	mutex_destroy(&psock->work_mutex);
 
 	psock_progs_drop(&psock->progs);
@@ -931,7 +938,7 @@ static int sk_psock_skb_redirect(struct sk_psock *from, struct sk_buff *skb)
 	}
 
 	skb_queue_tail(&psock_other->ingress_skb, skb);
-	schedule_work(&psock_other->work);
+	schedule_delayed_work(&psock_other->work, 0);
 	spin_unlock_bh(&psock_other->ingress_lock);
 	return 0;
 }
@@ -1011,7 +1018,7 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
 			spin_lock_bh(&psock->ingress_lock);
 			if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
 				skb_queue_tail(&psock->ingress_skb, skb);
-				schedule_work(&psock->work);
+				schedule_delayed_work(&psock->work, 0);
 				err = 0;
 			}
 			spin_unlock_bh(&psock->ingress_lock);
@@ -1042,7 +1049,7 @@ static void sk_psock_write_space(struct sock *sk)
 	psock = sk_psock(sk);
 	if (likely(psock)) {
 		if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
-			schedule_work(&psock->work);
+			schedule_delayed_work(&psock->work, 0);
 		write_space = psock->saved_write_space;
 	}
 	rcu_read_unlock();
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index 4e42bc679bac9..2ded250ac0d2b 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -1577,9 +1577,10 @@ void sock_map_close(struct sock *sk, long timeout)
 		rcu_read_unlock();
 		sk_psock_stop(psock);
 		release_sock(sk);
-		cancel_work_sync(&psock->work);
+		cancel_delayed_work_sync(&psock->work);
 		sk_psock_put(sk, psock);
 	}
+
 	/* Make sure we do not recurse. This is a bug.
 	 * Leak the socket instead of crashing on a stack overflow.
 	 */



