Re: optimising filesystem for many small files

Viji V Nair wrote:
Hi,

System : Fedora 11 x86_64
Current Filesystem: 150G ext4 (formatted with "-T small" option)
Number of files: 50 million PNG images, 1 KB to 30 KB each

We are generating these files with a Python program and getting very slow IO performance. During generation there are only writes, no reads; after generation there are heavy reads and no writes.

I am looking for best practices/recommendations to get better performance.

Any suggestions on the above are greatly appreciated.

Viji


I would start by using blktrace and/or seekwatcher to see what your IO patterns look like while you're populating the disk; my guess is that you're seeing IO scattered all over.

How you place the files in subdirectories will affect this quite a lot; staying in one directory for a while, filling it with images before moving on to the next, will probably help. Putting each new file in a new subdirectory will probably give very bad results.
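A minimal sketch of the "fill one directory at a time" layout described above. This is an assumption about how the generator could be structured, not the original program; FILES_PER_DIR, the `d%05d` directory naming, and the sequential-index naming are all made up for illustration. The point is that consecutive files land in the same subdirectory, so their inodes and data tend to stay close together on disk, instead of being scattered.

```python
import os

# Assumed knob: how many files to pack into one subdirectory before
# starting the next one. Tune for your workload.
FILES_PER_DIR = 10_000

def path_for(index, root="images"):
    """Map a sequential file index to a path whose subdirectory fills
    up completely before the next subdirectory is started."""
    subdir = os.path.join(root, "d%05d" % (index // FILES_PER_DIR))
    os.makedirs(subdir, exist_ok=True)
    return os.path.join(subdir, "%08d.png" % index)

# Usage sketch: write files in index order, so each directory is
# filled before moving on to the next.
for i in range(3):
    with open(path_for(i, root="/tmp/demo_images"), "wb") as f:
        f.write(b"\x89PNG")  # placeholder payload, not a real image
```

Whether this helps in practice is best confirmed with blktrace/seekwatcher, as suggested above.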

-Eric
