On Tue, Jul 11, 2023 at 02:33:21PM -0600, Jens Axboe wrote:
> Polled IO is always reaped in the context of the process itself, so it
> does not need to be punted to a workqueue for the completion. This is
> different than IRQ driven IO, where iomap_dio_bio_end_io() will be
> invoked from hard/soft IRQ context. For those cases we currently need
> to punt to a workqueue for further processing. For the polled case,
> since it's the task itself reaping completions, we're already in task
> context. That makes it identical to the sync completion case.
>
> Testing a basic QD 1..8 dio random write with polled IO with the
> following fio job:
>
> fio --name=polled-dio-write --filename=/data1/file --time_based=1 \
>   --runtime=10 --bs=4096 --rw=randwrite --norandommap --buffered=0 \
>   --cpus_allowed=4 --ioengine=io_uring --iodepth=$depth --hipri=1

Ok, so this is testing pure overwrite DIOs as fio pre-writes the
file prior to starting the random write part of the test.

> yields:
>
> 	Stock	Patched		Diff
> =======================================
> QD1	180K	201K		+11%
> QD2	356K	394K		+10%
> QD4	608K	650K		+7%
> QD8	827K	831K		+0.5%
>
> which shows a nice win, particularly for lower queue depth writes.
> This is expected, as higher queue depths will be busy polling
> completions while the offloaded workqueue completions can happen in
> parallel.
>
> Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
> ---
>  fs/iomap/direct-io.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
> index ea3b868c8355..343bde5d50d3 100644
> --- a/fs/iomap/direct-io.c
> +++ b/fs/iomap/direct-io.c
> @@ -161,15 +161,16 @@ void iomap_dio_bio_end_io(struct bio *bio)
>  		struct task_struct *waiter = dio->submit.waiter;
>  		WRITE_ONCE(dio->submit.waiter, NULL);
>  		blk_wake_io_task(waiter);
> -	} else if (dio->flags & IOMAP_DIO_WRITE) {
> +	} else if ((bio->bi_opf & REQ_POLLED) ||
> +		   !(dio->flags & IOMAP_DIO_WRITE)) {
> +		WRITE_ONCE(dio->iocb->private, NULL);
> +		iomap_dio_complete_work(&dio->aio.work);

I'm not sure this is safe for all polled writes. What if the DIO
write was into a hole and we have to run unwritten extent completion
via:

  iomap_dio_complete_work(work)
    iomap_dio_complete(dio)
      dio->end_io(iocb)
        xfs_dio_write_end_io()
          xfs_iomap_write_unwritten()
            <runs transactions, takes rwsems, does IO>
  .....
  ki->ki_complete()
    io_complete_rw_iopoll()
  .....

I don't see anything in the iomap DIO path that prevents us from
doing HIPRI/REQ_POLLED IO on IOMAP_UNWRITTEN extents, hence I think
this change will result in bad things happening in general.

> +	} else {
>  		struct inode *inode = file_inode(dio->iocb->ki_filp);
>
>  		WRITE_ONCE(dio->iocb->private, NULL);
>  		INIT_WORK(&dio->aio.work, iomap_dio_complete_work);
>  		queue_work(inode->i_sb->s_dio_done_wq, &dio->aio.work);
> -	} else {
> -		WRITE_ONCE(dio->iocb->private, NULL);
> -		iomap_dio_complete_work(&dio->aio.work);
>  	}
>  }

Regardless of the correctness of the code, I don't think adding this
special case is the right thing to do here. We should be able to
complete all writes that don't require blocking completions directly
here, not just polled writes.
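To make the hazard concrete, here is a small userspace toy model of the
patched branch condition. The flag names are invented stand-ins for the
kernel's REQ_POLLED and IOMAP_DIO_WRITE bits, plus an extra bit marking
a write that still needs unwritten extent conversion; this is a sketch
of the decision logic only, not the real kernel code:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins, not the kernel's real definitions. */
#define F_REQ_POLLED   (1u << 0)  /* models bio->bi_opf & REQ_POLLED */
#define F_DIO_WRITE    (1u << 1)  /* models dio->flags & IOMAP_DIO_WRITE */
#define F_UNWRITTEN    (1u << 2)  /* write needs unwritten extent conversion */

/* True when the patched condition would run iomap_dio_complete_work()
 * directly instead of queueing it to the s_dio_done_wq workqueue. */
static bool completes_inline(unsigned int flags)
{
	return (flags & F_REQ_POLLED) || !(flags & F_DIO_WRITE);
}
```

Note that completes_inline() never consults F_UNWRITTEN, so a polled
write that still needs unwritten conversion takes the inline path -
which is exactly the unsafe case described above.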
We recently had this discussion over hacking a special case "don't
queue for writes" for ext4 into this code - I had to point out the
broken O_DSYNC completion cases it resulted in there, too. I also
pointed out that we already had generic mechanisms in iomap to enable
us to make a submission time decision as to whether completion needed
to be queued or not.

Thread here:

https://lore.kernel.org/linux-xfs/20230621174114.1320834-1-bongiojp@xxxxxxxxx/

Essentially, we shouldn't be using IOMAP_DIO_WRITE as the determining
factor for queuing completions - we should be using the information
the iocb and the iomap provides us at submission time, similar to how
we determine if we can use REQ_FUA for O_DSYNC writes, to determine if
iomap IO completion queuing is required.

This will do the correct *and* optimal thing for all types of writes,
polled or not...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
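[Editor's note: the submission-time approach suggested above can be
sketched as a userspace toy model. The state bits below are
hypothetical names, not real kernel flags; the point is only that the
queue-or-not decision is derived from the iocb and iomap state known at
submission, not from IOMAP_DIO_WRITE alone.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical submission-time state bits; names are illustrative. */
#define S_WRITE       (1u << 0)  /* IO is a write */
#define S_UNWRITTEN   (1u << 1)  /* iomap mapped an unwritten extent */
#define S_NEED_SYNC   (1u << 2)  /* O_DSYNC write that cannot use REQ_FUA */

/* Decide at submission whether completion must be queued: only writes
 * with blocking completion work (unwritten extent conversion, sync
 * flush) need the workqueue; everything else can complete inline,
 * polled or not. */
static bool needs_completion_queueing(unsigned int state)
{
	if (!(state & S_WRITE))
		return false;	/* reads always complete inline */
	return (state & (S_UNWRITTEN | S_NEED_SYNC)) != 0;
}
```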