Les Mikesell wrote:
> Stuart D. Gathman wrote:
>>
>> That's been my claim all along - that the broken fsync only affects
>> the on-disk cache. LVM itself does not reorder writes in any way - it
>> just fails to pass along the write barrier. fsync() does *start*
>> writing the dirty buffers (implemented in the fs code). It just
>> doesn't wait for the writes to finish getting to the platters.
>> Apparently, it does wait for the write to get to the drive (but I'm
>> not certain).
>
> Given that fsync() is supposed to return the status of the completion
> of the physical write, that sounds broken to me. Do the LVMs in
> question here have more than one underlying device, and does it
> matter?

According to my tests, you get a 50x speedup when you turn the cache on.
That means fsync is waiting for something to happen, and this
"something" happens 50 times faster only when the disk's write-back
cache is on. The only explanation I can see is that fsync is waiting for
the disk I/O to *complete*, not just to begin - otherwise the time would
be the same either way. With the cache enabled, the disk reports
completion as soon as the data reaches the cache (write-back behaviour);
with the cache disabled, it waits until the data is on the platters
(write-through behaviour).

.TM.

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
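For what it's worth, a timing test of the kind described above can be sketched roughly as follows. This is an illustrative script, not the test actually used; the file path is arbitrary, and toggling the drive's write cache (e.g. with `hdparm -W1` / `hdparm -W0` on the underlying device) has to be done separately before each run.

```python
import os
import time

def fsync_latency(path, n=100, size=4096):
    """Time n write+fsync cycles on path; return mean seconds per fsync."""
    buf = b"\0" * size
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        t0 = time.monotonic()
        for _ in range(n):
            os.write(fd, buf)
            os.fsync(fd)  # should not return until the drive reports completion
        elapsed = time.monotonic() - t0
    finally:
        os.close(fd)
        os.remove(path)
    return elapsed / n

if __name__ == "__main__":
    # Run once with the drive's write cache on and once with it off,
    # then compare the two per-fsync latencies. A large ratio between
    # the runs indicates fsync is waiting on the drive, and the drive's
    # cache setting decides when it acknowledges the write.
    print("mean fsync latency: %.6f s" % fsync_latency("fsync_test.bin"))
```

If fsync really only *started* the writes, the cache setting would make no difference to this loop; the observed 50x gap is what points at fsync blocking until the drive acknowledges completion.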