Re: Question about optimal filesystem with many small files.

On Wed, Jul 8, 2009 at 2:27 AM, oooooooooooo ooooooooooooo <hhh735@xxxxxxxxxxx> wrote:

Hi,

I have a program that writes lots of files to a directory tree (around 15 million files), and a single node can have up to 400,000 files (and I don't have any way to split this amount into smaller ones). As the number of files grows, my application gets slower and slower (the app works something like a cache for another app, and I can't redesign the way it distributes files onto disk due to the other app's requirements).

The filesystem I use is ext3 with the following options enabled:

Filesystem features:      has_journal resize_inode dir_index filetype needs_recovery sparse_super large_file

Is there any way to improve performance in ext3? Would you suggest another FS for this situation (this is a production server, so I need a stable one)?

I saw this article some time back.

http://www.linux.com/archive/feature/127055
 
I've not implemented it myself, but from past experience you may lose some performance initially, while the database-backed filesystem's performance should stay more consistent as the number of files grows.
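
Also, before switching filesystems, it may be worth confirming that dir_index is actually being used and rebuilding the directory hash indexes. A rough sketch of the steps (the device /dev/sdX1 and mount point /cache are placeholders for your setup; the e2fsck step requires the filesystem to be unmounted):

    # Check that dir_index appears in the feature list
    # (your dumpe2fs output above shows it does)
    tune2fs -l /dev/sdX1 | grep -i features

    # Rebuild/optimize the directory hash indexes
    # (the filesystem must be unmounted for this)
    umount /cache
    e2fsck -fD /dev/sdX1
    mount /dev/sdX1 /cache

    # noatime avoids an inode update on every read,
    # which can help a cache-style workload like yours
    mount -o remount,noatime /cache

That won't change the fundamental cost of 400,000 entries per directory, but it at least makes sure ext3's hashed directory lookups are in good shape before you take on a migration.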
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
