Hi,

On Wed, 2003-06-18 at 23:54, Andreas Dilger wrote:
> I think one of the earlier claims was that a test with 100 dirs x 1000 files
> was faster than a single htree dir with 100,000 files.  Theoretically that
> should not be the case (extra directory access overhead, etc), but maybe
> in real life this is true.

It depends on the access pattern.  If the 100-dir case is performed by
iterating over the directories first and then the files, so that we fill
each dir completely before moving on to the next, then we'll only have one
dir active at a time, and things might well go faster than dealing with a
single dir.

But I'd expect the single dir to be faster if you iterate the other way,
constantly skipping from dir to dir and adding one file at a time to each.
Random access also ought to be faster in the single-dir case.

Cheers,
 Stephen
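
For concreteness, here is a minimal sketch of the two creation orders
being contrasted above.  The layout and names (d000/f0000 etc.) are my
own invention, error handling is trimmed, and you'd time each pattern
separately (e.g. with time(1)) to compare against one flat htree dir:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

#define NDIRS  100
#define NFILES 1000

static void touch(const char *path)
{
	int fd = open(path, O_CREAT | O_WRONLY, 0644);
	if (fd >= 0)
		close(fd);
}

/* Pattern A: fill each directory completely before moving on,
 * so only one directory is "hot" at any moment. */
static void fill_depth_first(void)
{
	char path[64];
	for (int d = 0; d < NDIRS; d++)
		for (int f = 0; f < NFILES; f++) {
			snprintf(path, sizeof(path), "d%03d/f%04d", d, f);
			touch(path);
		}
}

/* Pattern B: round-robin, adding one file to each directory per
 * pass, so all 100 directories stay active at once. */
static void fill_round_robin(void)
{
	char path[64];
	for (int f = 0; f < NFILES; f++)
		for (int d = 0; d < NDIRS; d++) {
			snprintf(path, sizeof(path), "d%03d/f%04d", d, f);
			touch(path);
		}
}

int main(void)
{
	char path[64];
	for (int d = 0; d < NDIRS; d++) {
		snprintf(path, sizeof(path), "d%03d", d);
		mkdir(path, 0755);
	}
	fill_depth_first();	/* or fill_round_robin(); time each run */
	return 0;
}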