> The most likely reason is that it depends on transaction boundaries.
> After a block has been released, we can't reuse it until after the
> jbd2 transaction which contains the deletion of the inode has
> committed. So even after you've deleted the file, we can't reuse the
> blocks right away. The other thing which will influence the block
> allocation is which block group the last allocation was for that
> particular file. So if blocks become available after a commit
> completes, if we've started allocating in another block group, we
> won't go back to the initial block group.

OK, that makes sense. However, it still doesn't answer the question of
why the allocator is choosing smaller extents over larger ones nearby.

For instance, look at the filefrag -v output for testfile and
testfile2 again. Remember, these were created immediately one after
another.

testfile:
...
398 18841 44779580 44779043  26 unwritten
399 18867 44780335 44779606  26 unwritten
400 18893 44780658 44780361  26 unwritten

testfile2:
...
 13   814 44792388 44788982 189 unwritten
 14  1003 44792578 44792577 157 unwritten

Those look quite near each other. So when testfile was being
allocated, there were some bigger extents right nearby that were
ignored, and they ended up being used when the next file, testfile2,
was allocated. Why?

Also, while e4defrag will try to defrag a file (or multiple files), is
there any way to actually defrag the entire filesystem, moving files
around more intelligently to make larger extents? I guess running
e4defrag on the entire filesystem multiple times would help, but it
still would not move the small files that are breaking up large
extents. Is there any way to do that?

Rob
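
To make the comparison easier to see at a glance, here is a rough way
to summarize the extent sizes filefrag reports for each file. This is
only a sketch: the awk field number assumes the six-column layout
quoted above (ext, logical, physical, expected, length, flags); newer
filefrag versions print a different layout, so $5 may need adjusting.

  for f in testfile testfile2; do
      printf '%s: ' "$f"
      # Sum the per-extent lengths (column 5 in the layout above),
      # skipping header and file-size lines, and report count/avg/max.
      filefrag -v "$f" |
          awk '$1 ~ /^[0-9]+:?$/ && $5 ~ /^[0-9]+$/ { n++; sum += $5; if ($5 > max) max = $5 }
               END { if (n) printf "%d extents, avg %.1f blocks, max %d blocks\n", n, sum/n, max }'
  done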
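
On the whole-filesystem question, the brute-force version of "run
e4defrag multiple times" that I had in mind looks roughly like this;
/mnt/test is only a placeholder for wherever the filesystem is mounted:

  # Placeholder mount point; substitute the real one.
  FS=/mnt/test

  # Fragmentation score before...
  e4defrag -c "$FS"

  # ...a few passes over the whole tree (as noted above, this still
  # won't relocate the small files breaking up the free-space extents)...
  for pass in 1 2 3; do
      e4defrag "$FS"
  done

  # ...and the score afterwards.
  e4defrag -c "$FS"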