Hi,

Martijn van Oosterhout wrote:
> And fsync better do what you're asking
> (how fast is just a performance issue, just as long as it's done).

Where are we on this issue? I've read all of this thread and the one on
the linux-lvm mailing list as well, but I still don't feel confident.

Consider the following scenario:

  fsync -> filesystem -> physical disk

I'm assuming the filesystem correctly issues a blkdev_issue_flush() on
the physical disk upon fsync(), to do what it's told: flush the
cache(s) to disk. Further, I'm also assuming the physical disk is
flushable (i.e. it correctly honors the cache flush command that
blkdev_issue_flush() sends down). Here we can be pretty certain that
fsync works as advertised, I think.
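Just to make sure we're all talking about the same guarantee, here's
the minimal userspace sequence I have in mind (plain POSIX; the
filename and the data are of course arbitrary):

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      const char buf[] = "some data we cannot afford to lose";
      int fd;

      fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
      if (fd < 0) {
          perror("open");
          return 1;
      }

      if (write(fd, buf, sizeof(buf)) != (ssize_t) sizeof(buf)) {
          perror("write");
          return 1;
      }

      /*
       * This call must not return success before the data has reached
       * stable storage -- which includes getting it out of the drive's
       * volatile write cache, not just out of the OS page cache.
       */
      if (fsync(fd) < 0) {
          perror("fsync");
          return 1;
      }

      return close(fd);
  }

Everything below is really just asking through which layers that
promise is actually enforced.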
The unanswered question to me is what happens if I add LVM in between,
as follows:

  fsync -> filesystem -> device mapper (lvm) -> physical disk(s)

Again, assume the filesystem issues a blkdev_issue_flush() to the lower
layer and the physical disks are all flushable (and implement that
correctly). How does the device mapper behave? I'd expect it to forward
the blkdev_issue_flush() call to all affected devices and only return
after the last one has confirmed and completed flushing its caches. Is
that the case?
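To make that expectation explicit, I'd hope for something morally
equivalent to the following. This is invented pseudo-code in kernel
style, not the actual dm source; for_each_backing_dev() in particular
is made up for illustration, while blkdev_issue_flush() is the real
block-layer call, which submits a cache flush and waits for the device
to complete it:

  /* What I EXPECT the device mapper to do on a flush (pseudo-code). */
  static int dm_flush_expectation(struct mapped_device *md)
  {
      struct dm_dev *dev;
      int r, ret = 0;

      /* Forward the flush to every physical device backing the LV. */
      for_each_backing_dev(dev, md) {
          r = blkdev_issue_flush(dev->bdev, NULL);
          if (r)
              ret = r;
      }

      /* Only report completion once the last device has confirmed. */
      return ret;
  }

If dm instead acknowledges the flush after only some of the underlying
devices have confirmed, or drops it altogether, then fsync on top of
LVM would be giving a false guarantee.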
I've also read about the newish write barriers and about filesystems
implementing fsync with such write barriers instead of an explicit
cache flush. That seems fishy to me and would of course break in
combination with LVM (which doesn't completely support write barriers,
AFAIU).
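My (possibly wrong) understanding of how that failure mode looks, again
in rough pseudo-code against an old 2.6 kernel -- the BIO_RW_BARRIER
flag and the -EOPNOTSUPP convention are real, the surrounding fragment
is illustrative only:

  /*
   * Barrier-based fsync, roughly: the journal commit block is tagged
   * as a barrier write instead of being followed by an explicit flush.
   */
  bio->bi_rw |= 1 << BIO_RW_BARRIER;  /* real flag in 2.6-era kernels */
  submit_bio(WRITE, bio);

  /*
   * A stacking driver that cannot guarantee ordering across all of its
   * devices rejects such a bio with -EOPNOTSUPP. AFAIU at least ext3
   * then retries the write *without* the barrier flag, at which point
   * the durability fsync promised has quietly evaporated.
   */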
However, that's clearly the filesystem's side of the story and doesn't
have much to do with whether fsync sits on top of LVM or not.

Help in clarifying this issue is greatly appreciated.

Kind Regards

Markus Wanner

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general