[RFC] .flush and io_uring_cancel_files

It looks like taking and releasing ->uring_lock can act as a kind of
grace period for struct files_struct in io_uring_flush(), and that
would solve the race around the "fcheck(ctx->ring_fd) == ctx->ring_file"
check.

Can you take a look? If you like it, I'll send a proper patch
and a bunch of cleanups on top.
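
To illustrate the idea, here is a minimal, self-contained sketch of the
lock + immediate unlock "grace period" idiom, written against pthreads
rather than the kernel mutex API. Everything here (submitter(),
wait_for_in_flight_submitters(), the counter) is made up for the
example and is not part of the patch:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int nr_published;

/* A submitter publishes its work only while holding the lock. */
static void *submitter(void *arg)
{
	pthread_mutex_lock(&lock);
	nr_published++;		/* e.g. list_add() to an inflight list */
	pthread_mutex_unlock(&lock);
	return NULL;
}

/*
 * Lock followed by an immediate unlock protects nothing by itself; it
 * only waits out every submitter that took the lock before us.  Once
 * we acquire the lock, no earlier submitter can still be inside its
 * critical section, so everything they published is visible to us.
 */
static void wait_for_in_flight_submitters(void)
{
	pthread_mutex_lock(&lock);
	pthread_mutex_unlock(&lock);
	printf("saw %d submissions\n", nr_published);
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, submitter, NULL);
	pthread_join(t, NULL);
	wait_for_in_flight_submitters();
	return 0;
}

Note the idiom only quiesces submitters that already hold the lock;
the assumption in the patch, as I read it, is that no new submitter
can race for @files once ->flush() runs for that files_struct.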


diff --git a/fs/io_uring.c b/fs/io_uring.c
index a3dbd5f40391..012af200dc72 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5557,12 +5557,11 @@ static int io_grab_files(struct io_kiocb *req)
 	 * the fd has changed since we started down this path, and disallow
 	 * this operation if it has.
 	 */
-	if (fcheck(ctx->ring_fd) == ctx->ring_file) {
-		list_add(&req->inflight_entry, &ctx->inflight_list);
-		req->flags |= REQ_F_INFLIGHT;
-		req->work.files = current->files;
-		ret = 0;
-	}
+	list_add(&req->inflight_entry, &ctx->inflight_list);
+	req->flags |= REQ_F_INFLIGHT;
+	req->work.files = current->files;
+	ret = 0;
+
 	spin_unlock_irq(&ctx->inflight_lock);
 	rcu_read_unlock();

@@ -7479,6 +7478,10 @@ static int io_uring_release(struct inode *inode, struct file *file)
 static void io_uring_cancel_files(struct io_ring_ctx *ctx,
 				  struct files_struct *files)
 {
+	/* wait all submitters that can race for @files */
+	mutex_lock(&ctx->uring_lock);
+	mutex_unlock(&ctx->uring_lock);
+
 	while (!list_empty_careful(&ctx->inflight_list)) {
 		struct io_kiocb *cancel_req = NULL, *req;
 		DEFINE_WAIT(wait);

-- 
Pavel Begunkov


