Re: Recommended max. limit of number of files per directory?

On 03/26/2009 09:58 AM, howard chen wrote:
Hi,

On Thu, Mar 26, 2009 at 12:58 PM, Christian Kujau <lists@xxxxxxxxxxxxxxx> wrote:
http://nerdbynature.de/bench/sid/2009-03-26/di-b.log.txt
http://nerdbynature.de/bench/sid/2009-03-26/
(dmesg, .config, JFS oops, benchmark script)


Apparently ext3 starts to suck when files > 1,000,000. Not bad, in fact.

I will try to run your script on my server for a comparison.

Also, I might try to measure the random read time when there are many
directories containing many files. But I want to know:

If I am writing a script to do such testing, what steps are needed to
prevent things such as OS caching effects (not sure if that is the right
name), so that I can arrive at a fair test?

Thanks.


I ran similar tests using fs_mark: basically, run it against one directory, writing 10 or 20 thousand files per iteration, and watch as performance (files/sec) degrades as the file system fills or the directory limitations kick in.
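For reference, an invocation along those lines might look like the one below (the flags are from my copy of fs_mark and the target directory is just an example, so adjust for your own setup):

    # 50 iterations, each creating 10,000 files of 10KB in one directory,
    # fsync before close so we measure real writes, not just cache fills
    fs_mark -d /mnt/test/bigdir -n 10000 -s 10240 -L 50 -S 1

Each iteration prints a files/sec figure, so you can watch the creation rate drop as the directory grows.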

If you want the results to be reproducible, you should probably start with a new file system, but note that this does not reflect well the reality of a naturally aged (say, a year or so old) file system.
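A rough sketch of that, between runs (the device name is only a placeholder, and mkfs will of course destroy whatever is on it):

    # recreate a fresh ext3 file system before each benchmark run
    mkfs.ext3 /dev/sdXN
    mount /dev/sdXN /mnt/test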

You can also unmount/remount to clear out cached state for an older file system (or tweak the /proc/sys/vm/drop_caches knob to clear out the cache).
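Something like this between runs, assuming the test file system is mounted at /mnt/test (drop_caches needs a reasonably recent kernel, 2.6.16 or later, and root):

    sync
    echo 3 > /proc/sys/vm/drop_caches    # drop page cache, dentries and inodes
    # or, to throw away cached state for just that file system:
    umount /mnt/test
    mount /dev/sdXN /mnt/test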

Regards,

Ric

_______________________________________________
Ext3-users mailing list
Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users
