More details: the code is in the kernel CephFS client (path: linux-kernel-src/fs/ceph/ioctl.h), and the flag is checked on the write path in the kernel CephFS code.

On Saturday, September 2, 2017 9:29 PM, Mark Meyers <MarkMeyers.MMY@xxxxxxxxx> wrote:

Hi:

I want to ask a question about the CEPH_IOC_SYNCIO flag. I know that when using the O_SYNC flag or the O_DIRECT flag, the write call executes along two other code paths, different from the one used with CEPH_IOC_SYNCIO. I found the comment about CEPH_IOC_SYNCIO here:

/*
 * CEPH_IOC_SYNCIO - force synchronous IO
 *
 * This ioctl sets a file flag that forces the synchronous IO that
 * bypasses the page cache, even if it is not necessary. This is
 * essentially the opposite behavior of IOC_LAZYIO. This forces the
 * same read/write path as a file opened by multiple clients when one
 * or more of those clients is opened for write.
 *
 * Note that this type of sync IO takes a different path than a file
 * opened with O_SYNC/D_SYNC (writes hit the page cache and are
 * immediately flushed on page boundaries). It is very similar to
 * O_DIRECT (writes bypass the page cache) except that O_DIRECT writes
 * are not copied (user page must remain stable) and O_DIRECT writes
 * have alignment restrictions (on the buffer and file offset).
 */
#define CEPH_IOC_SYNCIO _IO(CEPH_IOCTL_MAGIC, 5)

My question is about this sentence: "This forces the same read/write path as a file opened by multiple clients when one or more of those clients is opened for write." Does this mean multiple clients execute the same code path when they all use the CEPH_IOC_SYNCIO flag? And will using CEPH_IOC_SYNCIO on all clients have effects on coherency and performance?

Thanks for reading. I will wait for your reply, thanks!
Best Regards,
Mark Meyers

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com