On 3/14/2011 4:37 PM, Eric Sandeen wrote:
> On 3/14/11 3:24 PM, Phillip Susi wrote:
>> Shouldn't copying or extracting or otherwise populating a large
>> directory of many small files at the same time result in a strong
>> correlation between the order the names appear in the directory and
>> the order their data blocks are stored on disk, and thus, read
>> performance should not be negatively impacted by fragmentation?
>
> No, because htree (dir_index) dirs return names in hash-value
> order, not inode number order, i.e. "at random."

I thought that the htree was used to look up names, but the normal
directory was used to enumerate them?  In other words, the htree speeds
up opening a single file but slows down traversing the entire directory,
so it should not be used there.

Also, isn't htree only enabled for large directories?  I still see poor
correlation for small directories ( < 100 files, even one with only 8
entries ).

It seems unreasonable to ask applications to read all directory entries
and then sort them by inode number to achieve reasonable performance.
This seems like something the fs should be doing.

--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
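For reference, the application-side workaround criticized above (read every entry, then sort by inode number before opening anything) can be sketched roughly as follows.  This is a minimal illustration, not code from any particular tool; the `read_sorted` helper name and the fixed-size name buffer are my own simplifications, and error handling is abbreviated:

```c
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

struct ent {
	ino_t ino;
	char  name[256];
};

/* Compare two entries by inode number. */
static int by_ino(const void *a, const void *b)
{
	ino_t ia = ((const struct ent *)a)->ino;
	ino_t ib = ((const struct ent *)b)->ino;
	return (ia > ib) - (ia < ib);
}

/*
 * Read all entries of `path` and return them sorted by inode number;
 * the count is stored in *out_n.  Caller frees the returned array.
 * The idea is that opening files in inode order tends to walk the
 * inode table (and often the data blocks) in roughly disk order,
 * instead of the hash order that readdir() yields on htree dirs.
 */
static struct ent *read_sorted(const char *path, size_t *out_n)
{
	DIR *d = opendir(path);
	if (!d)
		return NULL;

	struct ent *ents = NULL;
	size_t n = 0, cap = 0;
	struct dirent *de;

	while ((de = readdir(d)) != NULL) {
		if (n == cap) {
			cap = cap ? cap * 2 : 64;
			ents = realloc(ents, cap * sizeof *ents);
			if (!ents) {
				closedir(d);
				return NULL;
			}
		}
		ents[n].ino = de->d_ino;
		snprintf(ents[n].name, sizeof ents[n].name,
			 "%s", de->d_name);
		n++;
	}
	closedir(d);

	qsort(ents, n, sizeof *ents, by_ino);
	*out_n = n;
	return ents;
}
```

A caller would then open or stat the files in the returned order.  The complaint in the mail above is precisely that every application would have to carry this boilerplate itself rather than getting sensible ordering from the filesystem.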