Re: [PATCH for-next] io_uring: ensure IOSQE_ASYNC file table grabbing works, with SQPOLL

On 9/9/20 7:10 AM, Jens Axboe wrote:
> On 9/9/20 1:09 AM, Pavel Begunkov wrote:
>> On 09/09/2020 01:54, Jens Axboe wrote:
>>> On 9/8/20 3:22 PM, Jens Axboe wrote:
>>>> On 9/8/20 2:58 PM, Pavel Begunkov wrote:
>>>>> On 08/09/2020 20:48, Jens Axboe wrote:
>>>>>> Fd instantiating commands like IORING_OP_ACCEPT now work with SQPOLL, but
>>>>>> grabbing the file table fails if IOSQE_ASYNC is set. Ensure we assign
>>>>>> the ring fd/file appropriately so we can defer grabbing them.
>>>>>
>>>>> IIRC, for fcheck() in io_grab_files() to work it should be under fdget(),
>>>>> that isn't the case with SQPOLL threads. Am I mistaken?
>>>>>
>>>>> And it looks strange that the following snippet will effectively disable
>>>>> such requests.
>>>>>
>>>>> fd = dup(ring_fd)
>>>>> close(ring_fd)
>>>>> ring_fd = fd
>>>>
>>>> Not disagreeing with that, I think my initial posting made it clear
>>>> it was a hack. Just piled it in there for easier testing in terms
>>>> of functionality.
>>>>
>>>> But the next question is how to do this right...
>>> Looking at this a bit more, I don't necessarily think there's a
>>> better option. If you dup+close, then it just won't work. We have no
>>> way of knowing if the 'fd' changed, but we can detect if it was closed
>>> and then we'll end up just EBADF'ing the requests.
>>>
>>> So right now the answer is that we can support this just fine with
>>> SQPOLL, but you better not dup and close the original fd. Which is not
>>> ideal, but better than NOT being able to support it.
>>>
>>> The only other option I see is to provide an io_uring_register()
>>> command to update the fd/file associated with it. Which may be useful,
>>> as it allows a process to indeed do this, if it absolutely has to.
>>
>> Let's put aside such dirty hacks, at least until someone actually
>> needs it. Ideally, for many reasons I'd prefer to get rid of
> 
> But it is actually needed, otherwise we're even more in a limbo state of
> "SQPOLL works for most things now, just not all". And this isn't that
> hard to make right - on the flush() side, we just need to park/stall the
> thread and clear the ring_fd/ring_file, then mark the ring as needing a
> queue enter. On the queue enter, we re-set the fd/file if they're NULL,
> unpark/unstall the thread. That's it. I'll write this up and test it.

Something like this - make sure the thread is idle if the ring fd is
closed, then clear our ring fd/file. Mark the ring as needing a kernel
enter, and when that enter happens, assign the current fd/file. Once
that's done, we can resume processing.


diff --git a/fs/io_uring.c b/fs/io_uring.c
index bd1ac8581d38..652cc53432d4 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6704,7 +6704,14 @@ static enum sq_ret __io_sq_thread(struct io_ring_ctx *ctx,
 		mutex_unlock(&ctx->uring_lock);
 	}
 
-	to_submit = io_sqring_entries(ctx);
+	/*
+	 * If ->ring_file is NULL, we're waiting on new fd/file assignment.
+	 * Don't submit anything new until that happens.
+	 */
+	if (ctx->ring_file)
+		to_submit = io_sqring_entries(ctx);
+	else
+		to_submit = 0;
 
 	/*
 	 * If submit got -EBUSY, flag us as needing the application
@@ -6748,7 +6755,7 @@ static enum sq_ret __io_sq_thread(struct io_ring_ctx *ctx,
 		}
 
 		to_submit = io_sqring_entries(ctx);
-		if (!to_submit || ret == -EBUSY)
+		if (!to_submit || ret == -EBUSY || !ctx->ring_file)
 			return SQT_IDLE;
 
 		finish_wait(&sqd->wait, &ctx->sqo_wait_entry);
@@ -8521,6 +8528,18 @@ static int io_uring_flush(struct file *file, void *data)
 	if (fatal_signal_pending(current) || (current->flags & PF_EXITING)) {
 		io_sq_thread_stop(ctx);
 		io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, current, true);
+	} else if (ctx->flags & IORING_SETUP_SQPOLL) {
+		struct io_sq_data *sqd = ctx->sq_data;
+
+		/* Ring is being closed, mark us as needing new assignment */
+		if (sqd->thread)
+			kthread_park(sqd->thread);
+		ctx->ring_fd = -1;
+		ctx->ring_file = NULL;
+		set_bit(1, &ctx->sq_check_overflow);
+		io_ring_set_wakeup_flag(ctx);
+		if (sqd->thread)
+			kthread_unpark(sqd->thread);
 	}
 
 	return 0;
@@ -8656,6 +8675,20 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 	if (ctx->flags & IORING_SETUP_SQPOLL) {
 		if (!list_empty_careful(&ctx->cq_overflow_list))
 			io_cqring_overflow_flush(ctx, false);
+		/* marked as needing new fd assignment, process it now */
+		if (test_bit(1, &ctx->sq_check_overflow) &&
+		    test_and_clear_bit(1, &ctx->sq_check_overflow)) {
+			struct io_sq_data *sqd = ctx->sq_data;
+
+			if (sqd->thread)
+				kthread_park(sqd->thread);
+
+			ctx->ring_fd = fd;
+			ctx->ring_file = f.file;
+
+			if (sqd->thread)
+				kthread_unpark(sqd->thread);
+		}
 		if (flags & IORING_ENTER_SQ_WAKEUP)
 			wake_up(&ctx->sq_data->wait);
 		if (flags & IORING_ENTER_SQ_WAIT)

-- 
Jens Axboe



