Hi. In ext4 we have the EXT4_IOC_MOVE_EXT ioctl, which allows us to migrate
data blocks (a minimal usage sketch is appended at the end of this mail). At
the moment the only defragmentation strategy we have in e4defrag(8) is
defragmentation of big files, but one can imagine different defragmentation
strategies for different file sizes and different purposes. I would like to
start a discussion about a list of strategies which could be useful for us:

* Big file defragmentation
  The well-known strategy: make big files contiguous.
  ** Example: in practice, fragmentation of big files appears only in cases
     such as:
     1) creation of big files on a filesystem with little free space;
     2) a weird IO pattern (multi-threaded small-chunk random IO + fsync),
        or punch_hole/collapse_range, etc.

* Compact small old files into contiguous chunks.
  ** Example: a news, mail, web or cache server contains a lot of small files
     in each directory, and files are periodically created and unlinked after
     some period of time. Files have different (unpredictable) lifetimes,
     which results in a fragmented fs: the block allocator tries to pack new
     files next to each other, but later unlinks leave holes. On a
     thin-provisioned target this also results in a significant waste of
     space.
  ** Proposed strategy: scan a directory and collect its small old files into
     contiguous chunks. The core idea is similar to how the block allocator
     handles allocations smaller than s_mb_stream_request, but at
     defragmentation time we have more information about file history: if
     mtime is close to ctime, a future append is unlikely, so compaction is
     effective.

* Compact files according to the IO access pattern.
  Various tracers can collect statistics about the IO access pattern, so we
  can place such blocks close to each other and reduce the number of seeks.
  ** Example:
     1) the boot IO pattern is almost identical across boots;
     2) Firefox start-up speedup: http://glandium.org/blog/?p=1296
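
For reference, here is a minimal sketch (an illustration, not a patch) of how
a defragmentation tool drives EXT4_IOC_MOVE_EXT: preallocate a donor file,
then ask the kernel to swap its blocks with the original's. The struct
move_extent layout and the ioctl number mirror fs/ext4/ext4.h (e4defrag
carries its own copies of them); the "move the whole file" policy, the use of
st_blksize as the fs block size, and the file names are just for
illustration.

#define _GNU_SOURCE		/* for fallocate() */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <linux/types.h>

/* Mirrors the kernel's struct move_extent (fs/ext4/ext4.h). */
struct move_extent {
	__u32 reserved;		/* must be zero */
	__u32 donor_fd;		/* donor file descriptor */
	__u64 orig_start;	/* logical start offset in blocks of orig */
	__u64 donor_start;	/* logical start offset in blocks of donor */
	__u64 len;		/* block length to be moved */
	__u64 moved_len;	/* filled in by the kernel: blocks moved */
};
#define EXT4_IOC_MOVE_EXT	_IOWR('f', 15, struct move_extent)

int main(int argc, char **argv)
{
	struct move_extent me;
	struct stat st;
	int orig_fd, donor_fd;
	long blksize;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <orig> <donor>\n", argv[0]);
		return 1;
	}

	/* The kernel requires the original to be open for read and write. */
	orig_fd = open(argv[1], O_RDWR);
	donor_fd = open(argv[2], O_WRONLY | O_CREAT | O_EXCL, 0600);
	if (orig_fd < 0 || donor_fd < 0 || fstat(orig_fd, &st) < 0) {
		perror("open/fstat");
		return 1;
	}
	blksize = st.st_blksize;	/* assume this equals the fs block size */

	/* Preallocate donor blocks; ideally mballoc gives us one big extent. */
	if (fallocate(donor_fd, 0, 0, st.st_size) < 0) {
		perror("fallocate");
		return 1;
	}

	memset(&me, 0, sizeof(me));
	me.donor_fd = donor_fd;
	me.orig_start = 0;
	me.donor_start = 0;
	me.len = (st.st_size + blksize - 1) / blksize;

	/* Swap the data blocks of orig and donor over the given range. */
	if (ioctl(orig_fd, EXT4_IOC_MOVE_EXT, &me) < 0) {
		perror("EXT4_IOC_MOVE_EXT");
		return 1;
	}
	printf("moved %llu blocks\n", (unsigned long long)me.moved_len);

	unlink(argv[2]);	/* donor now holds the old, fragmented blocks */
	close(donor_fd);
	close(orig_fd);
	return 0;
}

Whether we move a whole file or only selected ranges is exactly where the
strategies above would differ; the ioctl itself only acts on the ranges it is
given.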