Hi,

On Thu, Mar 26, 2009 at 12:58 PM, Christian Kujau <lists@xxxxxxxxxxxxxxx> wrote:
> http://nerdbynature.de/bench/sid/2009-03-26/di-b.log.txt
> http://nerdbynature.de/bench/sid/2009-03-26/
> (dmesg, .config, JFS oops, benchmark script)
> Apparently ext3 start to suck when files > 1000000.

Not bad, in fact. I will try to run your script on my server for comparison. I might also try to measure random read times with many directories, each containing many files.

But I would like to know: if I am writing a script to do such testing, what steps are needed to prevent things like the OS page cache (not sure if that is the right name) from skewing the results, so that the comparison is fair? I was thinking of something along the lines of the sketch below.

Thanks.
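For reference, the approach I had in mind (not sure it is the best or only way) is to sync and drop the kernel page cache before each timed run via /proc/sys/vm/drop_caches, or alternatively unmount and remount the filesystem between runs. Below is a rough Python sketch of that idea; /mnt/test is just a placeholder for the filesystem under test, it assumes Linux, and dropping caches needs root.

  #!/usr/bin/env python3
  """Rough sketch: time one cold-cache read pass over a directory tree.

  Assumes Linux (/proc/sys/vm/drop_caches) and root privileges;
  TEST_DIR is only a placeholder for the mount point being benchmarked.
  """
  import os
  import subprocess
  import time

  TEST_DIR = "/mnt/test"  # placeholder: filesystem under test

  def drop_caches():
      # Flush dirty data to disk first, then ask the kernel to drop
      # the page cache plus dentries and inodes (writing "3").
      subprocess.run(["sync"], check=True)
      with open("/proc/sys/vm/drop_caches", "w") as f:
          f.write("3\n")

  def timed_read_pass(root):
      # Walk the tree and read every file in 1 MiB chunks, returning
      # the elapsed wall-clock time in seconds.
      start = time.time()
      for dirpath, _dirnames, filenames in os.walk(root):
          for name in filenames:
              with open(os.path.join(dirpath, name), "rb") as f:
                  while f.read(1 << 20):
                      pass
      return time.time() - start

  if __name__ == "__main__":
      drop_caches()  # start each measurement from a cold cache
      print("cold read: %.2fs" % timed_read_pass(TEST_DIR))

Would dropping caches like this (or remounting) be enough for a fair test, or is there more to watch out for?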