This is a note to let you know that I've just added the patch titled

    crypto: engine - fix crypto_queue backlog handling

to the 6.2-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:

    crypto-engine-fix-crypto_queue-backlog-handling.patch

and it can be found in the queue-6.2 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


commit 271804fc1a893883555eff0c60eb937057ca8aa5
Author: Olivier Bacon <olivierb89@xxxxxxxxx>
Date:   Thu Apr 20 11:00:35 2023 -0400

    crypto: engine - fix crypto_queue backlog handling

    [ Upstream commit 4140aafcff167b5b9e8dae6a1709a6de7cac6f74 ]

    CRYPTO_TFM_REQ_MAY_BACKLOG tells the crypto driver that it should
    internally backlog requests until the crypto hw's queue becomes full.
    At that point, crypto_engine backlogs the request and returns -EBUSY.
    Calling drivers such as dm-crypt then wait until the complete()
    function is called with a status of -EINPROGRESS before sending a
    new request.

    The problem lies in the call to complete() with a value of -EINPROGRESS
    that is made when a backlog item is present on the queue. The call is
    done before the successful execution of the crypto request. If
    do_one_request() returns < 0 and retry support is available, the
    request is put back in the queue. This leads upper drivers to send a
    new request even though the queue is still full.

    The problem can be reproduced by doing a large dd into a dm-crypt
    device. It is easy to see when using the Freescale CAAM crypto driver
    and SWIOTLB DMA: since the number of requests that can be held in the
    queue is effectively unbounded, the result is I/O errors and DMA
    allocation failures.

    The fix is to call complete() with a value of -EINPROGRESS only if the
    request is not enqueued back into the crypto_queue. This is done by
    calling complete() later in the code. In order to delay that decision,
    crypto_queue is modified to correctly set the backlog pointer when a
    request is enqueued back at the head.
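For context, queue->backlog points at the first request on the crypto_queue
that lies beyond the hardware window of max_qlen entries, or at the list head
when nothing is backlogged. The normal tail-enqueue path is what establishes
that pointer; the sketch below is a rough paraphrase of that logic for
illustration only (enqueue_tail_sketch() is a made-up name, not part of this
patch or of the kernel API):

#include <linux/errno.h>
#include <linux/list.h>
#include <crypto/algapi.h>

/*
 * Rough sketch of the tail-enqueue path: once the queue is full, the
 * first request added beyond max_qlen becomes the backlog marker and
 * the caller is told -EBUSY.
 */
static int enqueue_tail_sketch(struct crypto_queue *queue,
                               struct crypto_async_request *request)
{
        int err = -EINPROGRESS;

        if (queue->qlen >= queue->max_qlen) {
                if (!(request->flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
                        return -ENOSPC; /* caller cannot wait */
                err = -EBUSY;           /* backlogged: caller must wait */
                if (queue->backlog == &queue->list)
                        queue->backlog = &request->list;
        }

        queue->qlen++;
        list_add_tail(&request->list, &queue->list);
        return err;
}

Re-inserting a failed request at the head of a full queue grows it by one, so
the entry that previously sat just inside the hardware window becomes the new
first backlogged request; stepping the pointer back with queue->backlog->prev,
as the algapi.c hunk below does, keeps it consistent with this layout.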
    Fixes: 6a89f492f8e5 ("crypto: engine - support for parallel requests based on retry mechanism")
    Co-developed-by: Sylvain Ouellet <souellet@xxxxxxxxxxx>
    Signed-off-by: Sylvain Ouellet <souellet@xxxxxxxxxxx>
    Signed-off-by: Olivier Bacon <obacon@xxxxxxxxxxx>
    Signed-off-by: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/crypto/algapi.c b/crypto/algapi.c
index 9de0677b3643d..60b98d2c400e3 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -963,6 +963,9 @@ EXPORT_SYMBOL_GPL(crypto_enqueue_request);
 void crypto_enqueue_request_head(struct crypto_queue *queue,
                                  struct crypto_async_request *request)
 {
+       if (unlikely(queue->qlen >= queue->max_qlen))
+               queue->backlog = queue->backlog->prev;
+
        queue->qlen++;
        list_add(&request->list, &queue->list);
 }
diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index 48c15f4079bb8..50bac2ab55f17 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -129,9 +129,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
        if (!engine->retry_support)
                engine->cur_req = async_req;
 
-       if (backlog)
-               crypto_request_complete(backlog, -EINPROGRESS);
-
        if (engine->busy)
                was_busy = true;
        else
@@ -217,6 +214,9 @@ static void crypto_pump_requests(struct crypto_engine *engine,
        crypto_request_complete(async_req, ret);
 
  retry:
+       if (backlog)
+               crypto_request_complete(backlog, -EINPROGRESS);
+
        /* If retry mechanism is supported, send new requests to engine */
        if (engine->retry_support) {
                spin_lock_irqsave(&engine->queue_lock, flags);
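To see why the early -EINPROGRESS notification matters, here is a minimal
sketch of the caller-side protocol the commit message describes, loosely in
the style of dm-crypt. The names my_ctx, my_async_done() and my_submit() are
illustrative only, and the callback signature assumes the 6.2-era
crypto_completion_t that receives a struct crypto_async_request pointer:

#include <linux/completion.h>
#include <linux/errno.h>
#include <crypto/skcipher.h>

struct my_ctx {
        struct completion restart;      /* set up with init_completion() */
        int err;
};

/* 6.2-era completion callback signature. */
static void my_async_done(struct crypto_async_request *areq, int err)
{
        struct my_ctx *ctx = areq->data;

        if (err == -EINPROGRESS) {
                /* Backlogged request has entered the hw queue:
                 * the submitter may send more work. */
                complete(&ctx->restart);
                return;
        }

        ctx->err = err;                 /* final completion */
}

static int my_submit(struct skcipher_request *req, struct my_ctx *ctx)
{
        int ret;

        skcipher_request_set_callback(req,
                        CRYPTO_TFM_REQ_MAY_SLEEP | CRYPTO_TFM_REQ_MAY_BACKLOG,
                        my_async_done, ctx);

        ret = crypto_skcipher_encrypt(req);
        if (ret == -EBUSY) {
                /*
                 * Queue full, request backlogged.  Before this fix the
                 * -EINPROGRESS callback could arrive even though the
                 * request had been put back on the crypto_queue, letting
                 * the submitter keep flooding an already-full queue.
                 */
                wait_for_completion(&ctx->restart);
                reinit_completion(&ctx->restart);
                ret = -EINPROGRESS;
        }

        return ret;
}

With the patch applied, the -EINPROGRESS notification is only delivered once
the backlogged request has genuinely left the crypto_queue, so the wait above
releases exactly when the hardware can accept more work.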