On Tue, Jun 15, 2010 at 02:28:22PM +0400, Evgeniy Polyakov wrote:
> That doesn't coverup large-number-of-small-files pattern, since
> untarring subsequently means creating something new, which FS can
> optimize. Much more interesting case is when we have dirtied large
> number of small files in kind-of random order and submitted them
> down to disk. That's why we still have block layer sorting.

But for the problem of larger files, doing the sorting above the
filesystem is a lot more efficient, not primarily because of the I/O
patterns but because it makes life for the filesystem writeback code
and the allocator a lot simpler.

> Per-mapping sorting will not do anything good in this case, even if
> files were previously created in a good facion being placed closely and
> so on, and only block layer will find a correlation between adjacent
> blocks in different files. But with existing queue management it has
> quite a small opportunity, and that's what I think Andrew is arguing
> about.

Which is actually more or less true - if we do larger amounts of
writeback from kswapd we're toast anyway, and performance and
allocation patterns go down the toilet. Then again, throwing a
list_sort in is a rather trivial addition. Note that in addition to
page->index we can also sort by the inode number in the sort function.
At least for XFS and the traditional ufs-derived allocators that will
give you additional locality for small files.
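
As a rough illustration only (not from any actual patch; the wb_page
wrapper struct and wb_list below are made up for the example), the
list_sort comparison function could look something like this:

#include <linux/list.h>
#include <linux/list_sort.h>
#include <linux/pagemap.h>

/* Hypothetical wrapper: one entry per page queued for writeback. */
struct wb_page {
	struct list_head	list;
	struct page		*page;
};

/* Sort by inode number first, then by page->index within each inode. */
static int wb_page_cmp(void *priv, struct list_head *a, struct list_head *b)
{
	struct wb_page *pa = list_entry(a, struct wb_page, list);
	struct wb_page *pb = list_entry(b, struct wb_page, list);
	unsigned long ino_a = pa->page->mapping->host->i_ino;
	unsigned long ino_b = pb->page->mapping->host->i_ino;

	if (ino_a != ino_b)
		return ino_a < ino_b ? -1 : 1;
	if (pa->page->index != pb->page->index)
		return pa->page->index < pb->page->index ? -1 : 1;
	return 0;
}

/* Before submitting the batch: list_sort(NULL, &wb_list, wb_page_cmp); */

That way pages belonging to the same inode come out adjacent and in
file-offset order, which is where the extra locality for small files
would come from.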