Patch "tcp: fix quick-ack counting to count actual ACKs of new data" has been added to the 6.5-stable tree

This is a note to let you know that I've just added the patch titled

    tcp: fix quick-ack counting to count actual ACKs of new data

to the 6.5-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     tcp-fix-quick-ack-counting-to-count-actual-acks-of-n.patch
and it can be found in the queue-6.5 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 6ca414058761864b36130171a57bd1dbddc60e38
Author: Neal Cardwell <ncardwell@xxxxxxxxxx>
Date:   Sun Oct 1 11:12:38 2023 -0400

    tcp: fix quick-ack counting to count actual ACKs of new data
    
    [ Upstream commit 059217c18be6757b95bfd77ba53fb50b48b8a816 ]
    
    This commit fixes quick-ack counting so that a quick-ack is only
    counted as provided when we send an ACK that newly acknowledges
    data.
    
    The code was erroneously using the number of data segments in outgoing
    skbs when deciding how many quick-ack credits to remove. This logic
    does not make sense, and could cause poor performance in
    request-response workloads, like RPC traffic, where requests or
    responses can be multi-segment skbs.
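    
    (Editor's illustration, a standalone model rather than kernel code:
    the sketch below mimics the old decrement path, where the credit
    count was reduced by the outgoing skb's data-segment count. A single
    4-segment RPC response that ACKs nothing new would still burn 4
    quick-ack credits. All names here are hypothetical.)
    
        #include <stdio.h>
    
        struct quickack_state {
            unsigned int quick;     /* remaining quick-ack credits */
        };
    
        /* Old behavior: decrement by the outgoing skb's segment count,
         * even when the skb acknowledges no new data. */
        static void dec_quickack_old(struct quickack_state *s,
                                     unsigned int pkts)
        {
            if (s->quick) {
                if (pkts >= s->quick)
                    s->quick = 0;
                else
                    s->quick -= pkts;
            }
        }
    
        int main(void)
        {
            struct quickack_state s = { .quick = 8 };
    
            dec_quickack_old(&s, 4);    /* 4-segment response skb */
            printf("credits left: %u\n", s.quick);  /* 4, not 8 */
            return 0;
        }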
    
    When a TCP connection decides to send N quick-acks, it does so to
    accelerate the cwnd growth of the congestion control module
    controlling the remote endpoint of the TCP connection. That
    quick-ack decision is purely about the incoming data and outgoing
    ACKs; it has nothing to do with the outgoing data or the size of
    the outgoing data.
    
    In particular, an ACK only serves the intended purpose of allowing
    the remote congestion control to grow the congestion window quickly
    if the ACK is ACKing or SACKing new data.
    
    The fix is simple: only count packets as serving the goal of the
    quickack mechanism if they are ACKing/SACKing new data. We can tell
    whether this is the case by checking inet_csk_ack_scheduled(), since
    we schedule an ACK exactly when we are ACKing/SACKing new data.
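    
    (Again as a standalone model, not the kernel code: with the fix, at
    most one credit is consumed per ACK, and only when an ACK of new
    data was actually scheduled, mirroring the inet_csk_ack_scheduled()
    check in the diff below. Names here are hypothetical.)
    
        #include <stdbool.h>
        #include <stdio.h>
    
        struct quickack_state {
            unsigned int quick;     /* remaining quick-ack credits */
        };
    
        /* New behavior: consume at most one credit, and only when this
         * ACK newly ACKs/SACKs data (i.e. an ACK was scheduled). */
        static void dec_quickack_new(struct quickack_state *s,
                                     bool ack_scheduled)
        {
            unsigned int pkts = ack_scheduled ? 1 : 0;
    
            if (s->quick) {
                if (pkts >= s->quick)
                    s->quick = 0;
                else
                    s->quick -= pkts;
            }
        }
    
        int main(void)
        {
            struct quickack_state s = { .quick = 8 };
    
            dec_quickack_new(&s, false);  /* pure data, no new ACK: 8 left */
            dec_quickack_new(&s, true);   /* ACKs new data: 7 left */
            printf("credits left: %u\n", s.quick);
            return 0;
        }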
    
    Fixes: fc6415bcb0f5 ("[TCP]: Fix quick-ack decrementing with TSO.")
    Signed-off-by: Neal Cardwell <ncardwell@xxxxxxxxxx>
    Reviewed-by: Yuchung Cheng <ycheng@xxxxxxxxxx>
    Reviewed-by: Eric Dumazet <edumazet@xxxxxxxxxx>
    Link: https://lore.kernel.org/r/20231001151239.1866845-1-ncardwell.sw@xxxxxxxxx
    Signed-off-by: Jakub Kicinski <kuba@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 10fc5c5928f71..b1b1e01c69839 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -350,12 +350,14 @@ ssize_t tcp_splice_read(struct socket *sk, loff_t *ppos,
 struct sk_buff *tcp_stream_alloc_skb(struct sock *sk, gfp_t gfp,
 				     bool force_schedule);
 
-static inline void tcp_dec_quickack_mode(struct sock *sk,
-					 const unsigned int pkts)
+static inline void tcp_dec_quickack_mode(struct sock *sk)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
 
 	if (icsk->icsk_ack.quick) {
+		/* How many ACKs S/ACKing new data have we sent? */
+		const unsigned int pkts = inet_csk_ack_scheduled(sk) ? 1 : 0;
+
 		if (pkts >= icsk->icsk_ack.quick) {
 			icsk->icsk_ack.quick = 0;
 			/* Leaving quickack mode we deflate ATO. */
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 9f9ca68c47026..37fd9537423f1 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -177,8 +177,7 @@ static void tcp_event_data_sent(struct tcp_sock *tp,
 }
 
 /* Account for an ACK we sent. */
-static inline void tcp_event_ack_sent(struct sock *sk, unsigned int pkts,
-				      u32 rcv_nxt)
+static inline void tcp_event_ack_sent(struct sock *sk, u32 rcv_nxt)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 
@@ -192,7 +191,7 @@ static inline void tcp_event_ack_sent(struct sock *sk, unsigned int pkts,
 
 	if (unlikely(rcv_nxt != tp->rcv_nxt))
 		return;  /* Special ACK sent by DCTCP to reflect ECN */
-	tcp_dec_quickack_mode(sk, pkts);
+	tcp_dec_quickack_mode(sk);
 	inet_csk_clear_xmit_timer(sk, ICSK_TIME_DACK);
 }
 
@@ -1372,7 +1371,7 @@ static int __tcp_transmit_skb(struct sock *sk, struct sk_buff *skb,
 			   sk, skb);
 
 	if (likely(tcb->tcp_flags & TCPHDR_ACK))
-		tcp_event_ack_sent(sk, tcp_skb_pcount(skb), rcv_nxt);
+		tcp_event_ack_sent(sk, rcv_nxt);
 
 	if (skb->len != tcp_header_size) {
 		tcp_event_data_sent(tp, sk);


