Re: [PATCH] io_uring: fix poll_list race for SETUP_IOPOLL|SETUP_SQPOLL

Hi,

On 2/14/20 6:11 AM, Xiaoguang Wang wrote:
After making ext4 support the iopoll method:
   let ext4_file_operations's iopoll method be iomap_dio_iopoll(),
we found that fio can easily hang in fio_ioring_getevents() with the
below fio job:
     rm -f testfile; sync;
     sudo fio -name=fiotest -filename=testfile -iodepth=128 -thread
-rw=write -ioengine=io_uring  -hipri=1 -sqthread_poll=1 -direct=1
-bs=4k -size=10G -numjobs=8 -runtime=2000 -group_reporting
with IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL enabled.

There are two issues that result in this hang. The first is that when
IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL are enabled, fio does not
use io_uring_enter() to get completed events; it relies entirely on
the kernel io_sq_thread to poll for completed events.
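For context, with SQPOLL the only reason userspace ever has to enter
the kernel is to wake a sleeping sq thread; a rough sketch of the
userspace side (sq_flags, ring_fd and to_submit are hypothetical names
for the mmap'ed ring pieces, not real helpers):

	/*
	 * sq_flags is assumed to point at the mmap'ed SQ ring flags
	 * word (at offset sq_off.flags of the SQ ring mapping).
	 */
	if (*sq_flags & IORING_SQ_NEED_WAKEUP)
		syscall(__NR_io_uring_enter, ring_fd, to_submit, 0,
			IORING_ENTER_SQ_WAKEUP, NULL);
	/* completions are reaped straight off the mmap'ed CQ ring */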

The second is a race: when io_submit_sqes() in io_sq_thread() submits
a batch of sqes, the variable 'inflight' records the number of
submitted reqs, and io_sq_thread then polls for the reqs that have
been added to poll_list. But note that if some of those reqs were
punted to an io worker, they won't show up in poll_list right away.
io_sq_thread() will therefore poll for only part of the previously
submitted reqs, find poll_list empty, and reset 'inflight' to zero. If
the app just waits for these deferred reqs and does not wake up
io_sq_thread again, the hang happens.
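To make the ordering concrete, here is a sketch of the race (the req
counts are made up purely for illustration):

	io_sq_thread()                       io-wq worker
	--------------                       ------------
	io_submit_sqes() submits 8 reqs,
	  2 of which get punted to io-wq     starts on the 2 punted reqs
	inflight = 8
	polls poll_list, reaps the 6 reqs
	  that were issued inline
	poll_list empty -> inflight = 0
	sleeps on sqo_wait                   io_iopoll_req_issued() adds
	                                       the 2 reqs to poll_list,
	                                       but nobody polls them
	the app waits forever for the last 2 completions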

For an app that relies entirely on io_sq_thread to poll for completed
requests, make io_iopoll_req_issued() wake up io_sq_thread properly
when adding a new element to poll_list.
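A minimal sketch of that idea against the io_uring code of this era
(ctx->poll_list and ctx->sqo_wait do exist here; the exact placement
of the wakeup is illustrative, not necessarily the final patch):

	static void io_iopoll_req_issued(struct io_kiocb *req)
	{
		struct io_ring_ctx *ctx = req->ctx;

		/* ... existing code linking req into ctx->poll_list ... */

		/*
		 * If the SQPOLL thread decided poll_list was empty and
		 * went to sleep, kick it so it polls the new req.
		 */
		if ((ctx->flags & IORING_SETUP_SQPOLL) &&
		    wq_has_sleeper(&ctx->sqo_wait))
			wake_up(&ctx->sqo_wait);
	}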

I think your analysis is correct, but the various conditional locking
and unlocking in io_sq_thread() is not easy to follow. When I see
things like:

@@ -5101,16 +5095,22 @@ static int io_sq_thread(void *data)
  			if (!to_submit || ret == -EBUSY) {
  				if (kthread_should_park()) {
  					finish_wait(&ctx->sqo_wait, &wait);
+					if (iopoll)
+						mutex_unlock(&ctx->uring_lock);
  					break;
  				}
  				if (signal_pending(current))
  					flush_signals(current);
+				if (iopoll)
+					mutex_unlock(&ctx->uring_lock);
  				schedule();
  				finish_wait(&ctx->sqo_wait, &wait);
  				ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
  				continue;
  			}
+			if (iopoll)
+				mutex_unlock(&ctx->uring_lock);
  			finish_wait(&ctx->sqo_wait, &wait);
  			ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;

it sets off alarm bells. Any chance you could take another look at
that part and see if we can clean it up a bit?

OK, I'll try to make a better version, thanks.

Regards,
Xiaoguang Wang


Even if that isn't possible, I think it'd help to rename 'iopoll' to
something related to the lock, and add a comment where you first do:

	/* If we're doing polled IO, we need to bla bla */
	if (ctx->flags & IORING_SETUP_IOPOLL)
		needs_uring_lock = true;
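One way the repeated conditional unlock could be folded away (a
hypothetical helper, purely to illustrate the cleanup direction, using
the suggested needs_uring_lock name):

	static void io_sq_thread_drop_lock(struct io_ring_ctx *ctx,
					   bool needs_uring_lock)
	{
		if (needs_uring_lock)
			mutex_unlock(&ctx->uring_lock);
	}

Each exit path in io_sq_thread() then becomes a single call instead of
an open-coded 'if (iopoll) mutex_unlock(...)'.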
