Dave Chinner <david@xxxxxxxxxxxxx> writes:

> And requeuing work from one workqueue to the next is something that
> we can avoid. We know at IO submission time (i.e.
> xfs_vm_direct_io()) whether an fsync completion is going to be
> needed during IO completion. The ioend->io_needs_fsync flag can be
> set then, and the first pass through xfs_finish_ioend() can queue it
> to the correct workqueue. i.e. it only needs to be queued if it's
> not already an unwritten or append ioend and it needs an fsync.
>
> As it is, all the data completion workqueues run the same completion
> function so all you need to do is handle the fsync case at the end
> of the existing processing - it's not an else case. i.e. the end of
> xfs_end_io() becomes:
>
> 	if (ioend->io_needs_fsync) {
> 		error = xfs_ioend_fsync(ioend);
> 		if (error)
> 			ioend->io_error = -error;
> 		goto done;
> 	}
> done:
> 	xfs_destroy_ioend(ioend);

Works for me, that makes things simpler.

> As it is, this code is going to change before these changes go in -
> there's a nasty regression in the DIO code that I found this
> afternoon that requires reworking this IO completion logic to
> avoid. The patch will appear on the list soon....

I'm not on the xfs list, so if you haven't already sent it, mind
Cc-ing me?

>> --- a/fs/xfs/xfs_mount.h
>> +++ b/fs/xfs/xfs_mount.h
>> @@ -209,6 +209,7 @@ typedef struct xfs_mount {
>>  	struct workqueue_struct	*m_data_workqueue;
>>  	struct workqueue_struct	*m_unwritten_workqueue;
>>  	struct workqueue_struct	*m_cil_workqueue;
>> +	struct workqueue_struct	*m_aio_blkdev_flush_wq;
>
> 	struct workqueue_struct	*m_aio_fsync_wq;

For the record, m_aio_blkdev_flush_wq is the name you chose
previously. ;-)

Thanks for the review!

Cheers,
Jeff
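
P.S. For anyone reading along, here's roughly how I picture the
first-pass routing in xfs_finish_ioend() with that change -- just a
sketch against the current shape of that function, using the
io_needs_fsync flag and the m_aio_fsync_wq name from this thread;
the surrounding details are my guesses, not the final patch:

	STATIC void
	xfs_finish_ioend(
		struct xfs_ioend	*ioend)
	{
		if (atomic_dec_and_test(&ioend->io_remaining)) {
			struct xfs_mount *mp = XFS_I(ioend->io_inode)->i_mount;

			if (ioend->io_type == IO_UNWRITTEN)
				queue_work(mp->m_unwritten_workqueue,
					   &ioend->io_work);
			else if (ioend->io_append_trans)
				queue_work(mp->m_data_workqueue,
					   &ioend->io_work);
			else if (ioend->io_needs_fsync)
				/* not unwritten/append, but still needs fsync */
				queue_work(mp->m_aio_fsync_wq,
					   &ioend->io_work);
			else
				xfs_destroy_ioend(ioend);
		}
	}

That keeps the requeue out of the picture entirely: the only ioends
that land on the fsync workqueue are the ones that aren't already
headed to the unwritten or data queues but still need the flush, and
everything else completes exactly as it does today.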