Patch "nvme-tcp: strict pdu pacing to avoid send stalls on TLS" has been added to the 6.8-stable tree

This is a note to let you know that I've just added the patch titled

    nvme-tcp: strict pdu pacing to avoid send stalls on TLS

to the 6.8-stable tree, which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     nvme-tcp-strict-pdu-pacing-to-avoid-send-stalls-on-t.patch
and it can be found in the queue-6.8 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit b854b203ecc0c196c9bc87140cc96452ad761b86
Author: Hannes Reinecke <hare@xxxxxxxxxx>
Date:   Thu Apr 18 12:39:45 2024 +0200

    nvme-tcp: strict pdu pacing to avoid send stalls on TLS
    
    [ Upstream commit 50abcc179e0c9ca667feb223b26ea406d5c4c556 ]
    
    TLS requires a strict pdu pacing via MSG_EOR to signal the end
    of a record and subsequent encryption. If we do not set MSG_EOR
    at the end of a sequence the record won't be closed, encryption
    doesn't start, and we end up with a send stall as the message
    will never be passed on to the TCP layer.
    So do not check for the queue status when TLS is enabled but
    rather make the MSG_MORE setting dependent on the current
    request only.
    
    Signed-off-by: Hannes Reinecke <hare@xxxxxxxxxx>
    Reviewed-by: Sagi Grimberg <sagi@xxxxxxxxxxx>
    Signed-off-by: Keith Busch <kbusch@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index a6d596e056021..6eeb96578d1b4 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -352,12 +352,18 @@ static inline void nvme_tcp_send_all(struct nvme_tcp_queue *queue)
 	} while (ret > 0);
 }
 
-static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
+static inline bool nvme_tcp_queue_has_pending(struct nvme_tcp_queue *queue)
 {
 	return !list_empty(&queue->send_list) ||
 		!llist_empty(&queue->req_list);
 }
 
+static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
+{
+	return !nvme_tcp_tls(&queue->ctrl->ctrl) &&
+		nvme_tcp_queue_has_pending(queue);
+}
+
 static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
 		bool sync, bool last)
 {
@@ -378,7 +384,7 @@ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
 		mutex_unlock(&queue->send_mutex);
 	}
 
-	if (last && nvme_tcp_queue_more(queue))
+	if (last && nvme_tcp_queue_has_pending(queue))
 		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
 }
 

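A minimal sketch of the flag selection this change influences, assuming a send path shaped like nvme_tcp_try_send_data() (illustrative only; the helper name nvme_tcp_pick_msg_flags() and the simplified logic below are assumptions, not verbatim kernel code):

/*
 * Illustrative sketch only, not verbatim from drivers/nvme/host/tcp.c:
 * nvme_tcp_queue_more() is the helper modified by this patch, while the
 * surrounding flag selection is a simplified assumption modelled on the
 * send path.
 */
static int nvme_tcp_pick_msg_flags(struct nvme_tcp_queue *queue, bool last)
{
	int flags = MSG_DONTWAIT;

	if (last && !nvme_tcp_queue_more(queue))
		/*
		 * End of the sequence: MSG_EOR closes the TLS record so
		 * it is encrypted and handed down to TCP. Since this
		 * patch makes nvme_tcp_queue_more() return false on TLS
		 * queues, the record is now closed after every request,
		 * even if further requests are already queued.
		 */
		flags |= MSG_EOR;
	else
		/*
		 * More data follows in this record. Before this patch, a
		 * TLS queue with pending requests would take this branch,
		 * leave the record open, and stall: the data was never
		 * passed on to the TCP layer.
		 */
		flags |= MSG_MORE;

	return flags;
}

Note the split in the diff above: nvme_tcp_queue_request() now uses nvme_tcp_queue_has_pending() for its io_work scheduling decision, so queued requests are still flushed promptly, while the MSG_MORE/MSG_EOR choice depends only on the current request when TLS is in use.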


