On Fri, Mar 27, 2009 at 05:03:38PM -0400, Chris Mason wrote:
> > Ric had asked me about a test program that would show the worst case
> > ext3 behavior. So I've modified your ext3 program a little. It now
> > creates an 8G file and forks off another proc to do random IO to that
> > file.
>
> My understanding of ext4 delalloc is that once blocks are allocated to
> a file, we go back to data=ordered.

Yes, that's correct.

> Ext4 is going pretty slowly for this fsync test (slower than ext3); it
> looks like we're going for a very long time in
> jbd2_journal_commit_transaction -> write_cache_pages.

One of the things we can do to optimize this case for ext4 (and ext3) is
to take advantage of the fact that if a block has already been written
out to disk once, we don't have to flush it to disk a second time. So if
we add a new buffer_head flag that distinguishes blocks that have been
newly allocated (and not yet flushed to disk) from blocks that have
already been flushed to disk at least once, we wouldn't need to force
I/O for blocks in the latter case.

After all, most applications that do random I/O to a file and use
fsync() appropriately are rewriting already-allocated blocks, so there
really is no reason to flush those blocks out to disk even in
data=ordered mode. We currently flush *all* blocks out to disk in
data=ordered mode because we don't have a good way of telling the
difference between the two cases.

					- Ted
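
To make the bookkeeping Ted describes concrete, here is a rough sketch
of what such a flag could look like, built on the kernel's BUFFER_FNS()
helper macro. The bit name BH_FlushedOnce, its placement at
BH_PrivateStart, and the ordered_buffer_needs_flush() test are all
hypothetical illustrations of the idea, not a tested patch.

    #include <linux/buffer_head.h>

    /*
     * Hypothetical new buffer_head state bit: set the first time a
     * block's data has been written to disk. Blocks without this bit
     * are "newly allocated and never flushed"; those are the only ones
     * data=ordered really needs to force out before the commit.
     */
    enum {
            BH_FlushedOnce = BH_PrivateStart,   /* hypothetical bit */
    };
    BUFFER_FNS(FlushedOnce, flushed_once)   /* buffer_flushed_once() etc. */

    /*
     * Sketch of the per-buffer decision in the ordered-mode flush path
     * (the real path goes through write_cache_pages(); this only shows
     * the test that would let already-flushed blocks be skipped).
     */
    static int ordered_buffer_needs_flush(struct buffer_head *bh)
    {
            if (!buffer_dirty(bh))
                    return 0;
            if (buffer_flushed_once(bh))
                    return 0;   /* on disk at least once: no stale data exposed */
            return 1;           /* freshly allocated block: must be flushed */
    }

    /* Called after the first successful write-out of the buffer. */
    static void mark_buffer_flushed(struct buffer_head *bh)
    {
            set_buffer_flushed_once(bh);
    }

A real implementation would also need to clear the bit when a block is
freed and reallocated, and would have to pick a bit that does not
collide with the jbd2 private state bits that start at BH_PrivateStart.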