On Jan 18, 2009 19:52 -0500, Theodore Ts'o wrote:
> An Ubuntu user recently complained about a large number of recently
> updated files which were zero-length after a crash.  I started looking
> more closely at that, and it's because we have an interesting
> interpretation of data=ordered.  It applies to blocks which are
> already allocated, but not to blocks which haven't been allocated yet.
> This can be surprising for users; and indeed, for many workloads where
> you aren't using berk_db or some other database, all of the files
> written will be newly created files (or files which are getting
> rewritten after opening with O_TRUNC), so there won't be any
> difference between data=writeback and data=ordered.
>
> So I wonder if we should either:
>
> (a) make data=ordered force block allocation and writeback --- which
>     should just be a matter of disabling the
>     redirty_page_for_writepage() code path in ext4_da_writepage()

That would re-introduce the "Firefox" problem, where an fsync of one
file forces all other files being written to flush their data blocks
to disk.

> (b) add a new mount option, call it data=delalloc-ordered, which is (a)

I'd prefer a better name, like "flushall-ordered" or similar, because
to me "delalloc-ordered" would imply the current behaviour.

> (c) change the default mount option to be data=writeback

That can expose garbage data to the user, which the current behaviour
does not do.

> (d) Do (b) and make it the default
>
> (e) Keep things the way they are
>
> Thoughts, comments?  My personal favorite is (b).  This allows users
> who want something that works functionally much more like ext3 to get
> that, while giving us the current speed advantages of a more
> aggressive delayed allocation.

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
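
For context on the failure mode under discussion: the zero-length files
come from applications that rewrite a file via open(..., O_TRUNC),
write(), close() with no fsync(); under delayed allocation the truncate
is journaled immediately but the new data blocks may not be allocated
or written for some time, so a crash in that window leaves an empty
file.  Below is a minimal userspace sketch (not from the thread; the
file names are hypothetical) of the write-to-temporary, fsync(),
rename() pattern that closes the window regardless of which mount
option is chosen, because the fsync() forces allocation and writeback
before the rename commits the new name.

    /*
     * Illustrative sketch only, not code from the thread.
     * Safely replace "config.txt": write the new contents to a
     * temporary file, fsync it, then rename over the old file.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            const char *data = "new contents\n";

            /* Write the replacement contents to a temporary file. */
            int fd = open("config.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }

            if (write(fd, data, strlen(data)) != (ssize_t)strlen(data)) {
                    perror("write"); close(fd); return 1;
            }

            /* Force block allocation and writeback of the data before
             * the rename is committed, so a crash leaves either the
             * old file or the complete new one -- never a zero-length
             * file. */
            if (fsync(fd) < 0) { perror("fsync"); close(fd); return 1; }
            close(fd);

            /* rename() replaces the old name atomically. */
            if (rename("config.tmp", "config.txt") < 0) {
                    perror("rename"); return 1;
            }
            return 0;
    }

Without the fsync(), a crash after the rename is journaled but before
the delayed blocks are written can leave a zero-length config.txt,
which is exactly what the Ubuntu users reported.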