On Fri, Mar 02, 2018 at 11:20:31PM +0100, Christoph Hellwig wrote:
> > @@ -760,8 +761,19 @@ static ssize_t iomap_dio_complete(struct iomap_dio *dio)
> >  	}
> >  
> >  	inode_dio_end(file_inode(iocb->ki_filp));
> > -	kfree(dio);
> >  
> > +	/*
> > +	 * If a FUA write was done, then that is all we required for datasync
> > +	 * semantics - we don't need to call generic_write_sync() to complete
> > +	 * the write.
> > +	 */
> > +	if (ret > 0 &&
> > +	    (dio->flags & (IOMAP_DIO_WRITE|IOMAP_DIO_WRITE_FUA)) ==
> > +			IOMAP_DIO_WRITE) {
> > +		ret = generic_write_sync(iocb, ret);
> > +	}
> > +
> > +	kfree(dio);
> 
> Can you please split the move of the generic_write_sync call into
> a separate prep patch? It's enough of a logic change on its own that
> it warrants a separate commit with a separate explanation.

Ok, that makes sense.

> Also I'd be tempted to invert the IOMAP_DIO_WRITE_FUA flag and replace
> it with an IOMAP_DIO_WRITE_SYNC flag to indicate we need the
> generic_write_sync call, as that should make the logic much more clear.

Ah, much cleaner.

> > diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> > index 260ff5e5c264..81aa3b73471e 100644
> > --- a/fs/xfs/xfs_file.c
> > +++ b/fs/xfs/xfs_file.c
> > @@ -732,6 +732,11 @@ xfs_file_write_iter(
> >  		ret = xfs_file_dio_aio_write(iocb, from);
> >  		if (ret == -EREMCHG)
> >  			goto buffered;
> > +		/*
> > +		 * Direct IO handles sync type writes internally on I/O
> > +		 * completion.
> > +		 */
> > +		return ret;
> >  	} else {
> > buffered:
> >  		ret = xfs_file_buffered_aio_write(iocb, from);
> 
> The else is not needed and you can now have a much more sensible
> code flow here:
> 
> 		ret = xfs_file_dio_aio_write(iocb, from);
> 		if (ret != -EREMCHG)
> 			return ret;
> 	}
> 
> 	ret = xfs_file_buffered_aio_write(iocb, from);

I thought about that, but as you noticed the DAX case gets in the way,
so I didn't bother for an RFC patch. I'll have a look at making the
DAX iomap code do something similar with generic_write_sync() so we
can simplify this high level code.

Thanks Christoph!

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx