On Wed, Aug 26, 2009 at 02:22:36PM -0400, Christoph Hellwig wrote:
> On Thu, Aug 20, 2009 at 12:27:29PM -0400, Christoph Hellwig wrote:
> > Maybe you can help brainstorming, but I still can't see any way in
> > which the
> >
> >  - write data
> >  - write inode
> >  - wait for data
> >
> > ordering actually is a benefit in terms of semantics (I agree that it
> > could be faster in theory, but even that is debatable with today's
> > seek latencies in disks).
>
> Btw, another reason why our current default is actively harmful:
>
>  barriers
>
> With volatile write caches we do have to flush the disk write cache in
> ->fsync, either implicitly by a metadata operation, or explicitly if
> only data changed.  Unless the filesystem itself waits for the data to
> hit the disk, as XFS or btrfs do, we will issue the cache flush
> potentially before the data write has actually reached the disk cache.

Ok, this one failed the reality check - no matter how hard I tried, I
could not reproduce that case in my test harness.

It turns out cache flush requests are quite sensibly treated as barriers
by the block layer, and thus we drain the queue before issuing the cache
flush.
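
To make the user-visible expectation concrete, here is a minimal
user-space sketch (the file name and buffer size are arbitrary, picked
only for illustration) of the pattern callers rely on: fsync() is only
useful if the kernel both waits for the data writeback to reach the
device and then flushes the drive's volatile write cache, in that order.

	/*
	 * Minimal sketch: write one block and fsync it.  The file name
	 * "datafile" and the 4k buffer are arbitrary.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[4096];
		int fd = open("datafile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

		if (fd < 0) {
			perror("open");
			return 1;
		}

		memset(buf, 'x', sizeof(buf));

		/* Step 1: the data write only reaches the page cache here. */
		if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
			perror("write");
			return 1;
		}

		/*
		 * Step 2: ->fsync must start the data writeback, wait for it
		 * to reach the device, commit the inode, and only then issue
		 * the cache flush.  If the flush went out before the data I/O
		 * reached the disk cache, a power failure could still lose
		 * the "synced" data; as noted above, the block layer drains
		 * the queue before the flush, so that case does not occur.
		 */
		if (fsync(fd) < 0) {
			perror("fsync");
			return 1;
		}

		close(fd);
		return 0;
	}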