Re: FAILED: patch "[PATCH] io_uring: hold 'ctx' reference around task_work queue +" failed to apply to 5.8-stable tree

On 8/17/20 6:44 AM, Greg KH wrote:
> On Mon, Aug 17, 2020 at 06:21:02AM -0700, Jens Axboe wrote:
>> On 8/17/20 6:13 AM, Greg KH wrote:
>>> On Mon, Aug 17, 2020 at 06:10:04AM -0700, Jens Axboe wrote:
>>>> On 8/17/20 3:44 AM, gregkh@xxxxxxxxxxxxxxxxxxx wrote:
>>>>>
>>>>> The patch below does not apply to the 5.8-stable tree.
>>>>> If someone wants it applied there, or to any other stable or longterm
>>>>> tree, then please email the backport, including the original git commit
>>>>> id to <stable@xxxxxxxxxxxxxxx>.
>>>>
>>>> Here's a 5.8 version.
>>>
>>> Applied, thanks!
>>>
>>> Looks like it applies to 5.7 too, want me to take this for that as well?
>>
>> Heh, didn't see this email, just going through this by kernel revision.
>> Either one should work, sent a specific set for that too.
> 
> Oops, it did not build on 5.7, so I still need a working backport for
> that.

Maybe I missed that; in any case, here it is. This one is for 5.7, to be
specific.

>> commit ebf0d100df0731901c16632f78d78d35f4123bc4
>> Author: Jens Axboe <axboe@xxxxxxxxx>
>> Date:   Thu Aug 13 09:01:38 2020 -0600
>>
>>     task_work: only grab task signal lock when needed
>>
>> as well, to avoid a perf regression with the TWA_SIGNAL change? Thanks!
> 
> Also now queued up, thanks.

Thanks!

-- 
Jens Axboe

From bf96cb6381c6765e8ae84aef546ea0b3f970599a Mon Sep 17 00:00:00 2001
From: Jens Axboe <axboe@xxxxxxxxx>
Date: Tue, 11 Aug 2020 08:04:14 -0600
Subject: [PATCH] io_uring: hold 'ctx' reference around task_work queue +
 execute

We're holding the request reference, but we need to go one higher
to ensure that the ctx remains valid after the request has finished.
If the ring is closed with pending task_work inflight, and the
given io_kiocb finishes sync during issue, then we need a reference
to the ring itself around the task_work execution cycle.

Cc: stable@xxxxxxxxxxxxxxx # v5.7+
Reported-by: syzbot+9b260fc33297966f5a8e@xxxxxxxxxxxxxxxxxxxxxxxxx
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
---
 fs/io_uring.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 5e6bbcb60fc4..2e7cbe61f64c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4202,6 +4202,8 @@ static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
 	tsk = req->task;
 	req->result = mask;
 	init_task_work(&req->task_work, func);
+	percpu_ref_get(&req->ctx->refs);
+
 	/*
 	 * If this fails, then the task is exiting. When a task exits, the
 	 * work gets canceled, so just cancel this request as well instead
@@ -4301,6 +4303,7 @@ static void io_poll_task_handler(struct io_kiocb *req, struct io_kiocb **nxt)
 static void io_poll_task_func(struct callback_head *cb)
 {
 	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+	struct io_ring_ctx *ctx = req->ctx;
 	struct io_kiocb *nxt = NULL;
 
 	io_poll_task_handler(req, &nxt);
@@ -4311,6 +4314,7 @@ static void io_poll_task_func(struct callback_head *cb)
 		__io_queue_sqe(nxt, NULL);
 		mutex_unlock(&ctx->uring_lock);
 	}
+	percpu_ref_put(&ctx->refs);
 }
 
 static int io_poll_double_wake(struct wait_queue_entry *wait, unsigned mode,
@@ -4427,6 +4431,7 @@ static void io_async_task_func(struct callback_head *cb)
 
 	if (io_poll_rewait(req, &apoll->poll)) {
 		spin_unlock_irq(&ctx->completion_lock);
+		percpu_ref_put(&ctx->refs);
 		return;
 	}
 
@@ -4465,6 +4470,7 @@ static void io_async_task_func(struct callback_head *cb)
 
 	kfree(apoll->double_poll);
 	kfree(apoll);
+	percpu_ref_put(&ctx->refs);
 }
 
 static int io_async_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
-- 
2.28.0

