This is a note to let you know that I've just added the patch titled

    tls: fix race between tx work scheduling and socket close

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     tls-fix-race-between-tx-work-scheduling-and-socket-close.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From e01e3934a1b2d122919f73bc6ddbe1cdafc4bbdb Mon Sep 17 00:00:00 2001
From: Jakub Kicinski <kuba@xxxxxxxxxx>
Date: Tue, 6 Feb 2024 17:18:20 -0800
Subject: tls: fix race between tx work scheduling and socket close

From: Jakub Kicinski <kuba@xxxxxxxxxx>

commit e01e3934a1b2d122919f73bc6ddbe1cdafc4bbdb upstream.

Similarly to previous commit, the submitting thread (recvmsg/sendmsg)
may exit as soon as the async crypto handler calls complete().

Reorder scheduling the work before calling complete().
This seems more logical in the first place, as it's
the inverse order of what the submitting thread will do.

Reported-by: valis <sec@valis.email>
Fixes: a42055e8d2c3 ("net/tls: Add support for async encryption of records for performance")
Signed-off-by: Jakub Kicinski <kuba@xxxxxxxxxx>
Reviewed-by: Simon Horman <horms@xxxxxxxxxx>
Reviewed-by: Sabrina Dubroca <sd@xxxxxxxxxxxxxxx>
Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
[Lee: Fixed merge-conflict in Stable branches linux-6.1.y and older]
Signed-off-by: Lee Jones <lee@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 net/tls/tls_sw.c |   16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -449,7 +449,6 @@ static void tls_encrypt_done(crypto_comp
 	struct scatterlist *sge;
 	struct sk_msg *msg_en;
 	struct tls_rec *rec;
-	bool ready = false;
 	struct sock *sk;
 
 	rec = container_of(aead_req, struct tls_rec, aead_req);
@@ -486,19 +485,16 @@ static void tls_encrypt_done(crypto_comp
 		/* If received record is at head of tx_list, schedule tx */
 		first_rec = list_first_entry(&ctx->tx_list,
 					     struct tls_rec, list);
-		if (rec == first_rec)
-			ready = true;
+		if (rec == first_rec) {
+			/* Schedule the transmission */
+			if (!test_and_set_bit(BIT_TX_SCHEDULED,
+					      &ctx->tx_bitmask))
+				schedule_delayed_work(&ctx->tx_work.work, 1);
+		}
 	}
 
 	if (atomic_dec_and_test(&ctx->encrypt_pending))
 		complete(&ctx->async_wait.completion);
-
-	if (!ready)
-		return;
-
-	/* Schedule the transmission */
-	if (!test_and_set_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask))
-		schedule_delayed_work(&ctx->tx_work.work, 1);
 }
 
 static int tls_encrypt_async_wait(struct tls_sw_context_tx *ctx)


Patches currently in stable-queue which might be from kuba@xxxxxxxxxx are

queue-6.1/net-hns3-tracing-fix-hclgevf-trace-event-strings.patch
queue-6.1/tls-fix-race-between-tx-work-scheduling-and-socket-close.patch
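
As an aside (not part of the patch above), the ordering rule the commit enforces can be illustrated outside the kernel. The following is a minimal user-space sketch assuming POSIX threads and semaphores; struct ctx, handler() and the field names are invented for illustration only. The point it shows is the same one the commit message makes: once the completion is signalled, the waiting thread may tear down the shared state, so any work that touches that state must be queued before signalling.

/* Illustrative sketch only -- not kernel code.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>

struct ctx {
	sem_t done;		/* stands in for ctx->async_wait.completion */
	int tx_scheduled;	/* stands in for BIT_TX_SCHEDULED           */
};

/* Plays the role of the async crypto handler running in another thread. */
static void *handler(void *arg)
{
	struct ctx *c = arg;

	/* Correct order (what the patch enforces): do everything that
	 * touches the shared context first, e.g. queue follow-up work. */
	c->tx_scheduled = 1;

	/* Only then wake the submitter.  After this point the submitter
	 * may return and free the context, so 'c' must not be used again. */
	sem_post(&c->done);
	return NULL;
}

int main(void)
{
	struct ctx *c = calloc(1, sizeof(*c));
	pthread_t t;

	sem_init(&c->done, 0, 0);
	pthread_create(&t, NULL, handler, c);

	/* Submitter waits for the handler, like tls_encrypt_async_wait(). */
	sem_wait(&c->done);
	pthread_join(&t, NULL);

	printf("tx scheduled: %d\n", c->tx_scheduled);

	sem_destroy(&c->done);
	free(c);	/* safe: the handler no longer touches 'c' */
	return 0;
}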