On 2010-11-08, at 21:14, Amir Goldstein wrote:
> I would like to propose a simple idea how to automatically de-fragment a file.

[snip]

> The use case for this, besides healing fragmentation caused by
> snapshots' move-on-rewrite, is a highly fragmented ext2/3 fs which was
> mounted as ext4.  Old ext2/3 files are slowly being deleted while new
> (still fragmented) extent-mapped files are being created.
> This vicious cycle cannot end before there is enough contiguous free
> space for writing new files, which may never happen.

This will only happen if the free space is _very_ low.  Normally, in a
situation like this, mballoc will allocate the largest contiguous chunks
of free space, reducing fragmentation as new files are written, and
allocations to highly fragmented block groups will be avoided until the
free chunks in those groups have grown larger.

> Online de-fragmentation will not help in this case either.
> With opportunistic de-fragmentation, if the extent-mapped files are
> being re-written, the health of the file system will constantly
> improve over time.
> BTW, is this use case relevant for upgraded Google chunk servers?

While this is true in theory, the problem is that in most cases files
are not overwritten in place.  Commonly, when files are "rewritten" they
are truncated and new blocks allocated, or a new file is written and
renamed in place of the old file.  Only in rare cases, like databases,
are files rewritten in place.

Cheers, Andreas
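
P.S. To make the rewrite patterns above concrete, here is a rough,
untested userspace sketch of the two common cases (the path and helper
names are made up for illustration).  Only the second helper reuses the
original inode and its blocks, so it is the only case an opportunistic
rewrite-time defragmenter would ever get to see:

/* Two common "rewrite" patterns.  Only rewrite_in_place() keeps the
 * original inode, so only it could trigger defrag-on-rewrite. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Pattern 1: write a new file, then rename it over the old one.
 * The data lands in a brand-new inode with freshly allocated blocks;
 * the old inode and its (possibly fragmented) blocks are just freed. */
static int rewrite_by_rename(const char *path, const char *buf, size_t len)
{
	char tmp[4096];
	int fd;

	snprintf(tmp, sizeof(tmp), "%s.tmp", path);
	fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0)
		return -1;
	if (write(fd, buf, len) != (ssize_t)len || fsync(fd) < 0) {
		close(fd);
		unlink(tmp);
		return -1;
	}
	close(fd);
	return rename(tmp, path);	/* old inode released, not rewritten */
}

/* Pattern 2: true in-place rewrite (the rare, database-style case).
 * The same inode is reused and the existing blocks are overwritten. */
static int rewrite_in_place(const char *path, const char *buf, size_t len)
{
	int fd = open(path, O_WRONLY);	/* note: no O_TRUNC */

	if (fd < 0)
		return -1;
	if (pwrite(fd, buf, len, 0) != (ssize_t)len || fsync(fd) < 0) {
		close(fd);
		return -1;
	}
	return close(fd);
}

int main(void)
{
	const char msg[] = "hello, world\n";

	rewrite_by_rename("/tmp/demo.txt", msg, sizeof(msg) - 1);
	return rewrite_in_place("/tmp/demo.txt", msg, sizeof(msg) - 1);
}

Most applications use the first pattern (or simply reopen with O_TRUNC,
which drops the old blocks just the same); only database-style workloads
look like the second.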
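
P.P.S. For anyone who wants to measure how fragmented the new
extent-mapped files actually come out, counting extents with
FS_IOC_FIEMAP is enough (the same interface filefrag uses; untested
sketch with minimal error handling):

/* Count the extents backing a file via FIEMAP.
 * A contiguous file reports a single extent. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main(int argc, char **argv)
{
	struct fiemap fm;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(&fm, 0, sizeof(fm));
	fm.fm_start = 0;
	fm.fm_length = ~0ULL;		/* whole file */
	fm.fm_extent_count = 0;		/* just count, don't copy extents */
	if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
		perror("FS_IOC_FIEMAP");
		close(fd);
		return 1;
	}
	printf("%s: %u extent(s)\n", argv[1], fm.fm_mapped_extents);
	close(fd);
	return 0;
}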