The limit on the number of files a filesystem can hold is set when the filesystem is created and depends on the size of the volume; 'df -i' will show you how many inodes you have available, which should help you work out where the bottleneck is.

Quantity? I've had problems just doing an 'ls -al' on a single directory with 45,000 files in it (ext3 on an external SCSI array), so I'm surprised you're not having problems already.

Suggestions? I've read that XFS and ReiserFS are the best filesystems for working with large numbers of small files, although the article was old (2006). In my experience, ReiserFS consumes more CPU than the others.

If you do find that the number of files is the bottleneck, hardware is the easiest fix; I'd get the fastest drives and bus you can afford. Either way, I would research the issue thoroughly before changing anything.

Andrew.

-------

Hello,

I serve static content with an Apache server and store the files on a storage server, which is mounted on the webserver via NFS. 95% of the files I serve are images, and the file names have the format {number}.png. These images all live in a single directory, which currently holds about 4 million files.

I want to change this directory structure to something more secure and dynamic, so the files are easier to scale and back up.

My questions are:

- At what number of files does the filesystem become a bottleneck? (They are stored on an ext3 partition.)
- At what number of files does the OS become a bottleneck?
- Suggestions?

Thanks

[]s
Fábio Jr.
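Following up on the 'df -i' suggestion above, here is a minimal sketch of the same inode check done from Python via os.statvfs, in case you want to monitor headroom from a script rather than by hand; the mount point is only an example, not a path from this thread:

#!/usr/bin/env python3
# inode_check.py -- report inode usage for a filesystem, much like 'df -i'.
import os

def inode_usage(path):
    st = os.statvfs(path)
    total = st.f_files            # total inodes on the filesystem
    free = st.f_ffree             # inodes still free
    return total, total - free, free

if __name__ == "__main__":
    total, used, free = inode_usage("/mnt/images")    # example mount point
    pct = 100.0 * used / total if total else 0.0      # some NFS exports report 0 inodes
    print("inodes: total=%d used=%d free=%d (%.1f%% used)"
          % (total, used, free, pct))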
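And on restructuring the 4 million {number}.png files: one common approach is to shard by digits of the filename, so no single directory grows too large and the path stays computable from the number alone. A rough sketch, assuming a two-level scheme keyed on the last four digits (the scheme and the paths are illustrative assumptions, not anything prescribed in the thread):

#!/usr/bin/env python3
# shard_migrate.py -- move {number}.png files from one huge directory into a
# two-level sharded tree, e.g. 1234567.png -> <root>/67/45/1234567.png.
# The bucketing scheme (last four digits) is an assumption; adjust to taste.
import os
import shutil

def shard_path(root, filename):
    number = os.path.splitext(filename)[0]
    padded = number.zfill(4)      # pad short numbers so the slices below work
    # Trailing digits of a sequential ID spread new files evenly across buckets.
    return os.path.join(root, padded[-2:], padded[-4:-2], filename)

def migrate(src_dir, dst_root):
    for name in os.listdir(src_dir):
        if not name.endswith(".png"):
            continue
        dst = shard_path(dst_root, name)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.move(os.path.join(src_dir, name), dst)

if __name__ == "__main__":
    migrate("/mnt/images/flat", "/mnt/images/sharded")    # example paths

With 4 million files this yields roughly 100 x 100 = 10,000 leaf directories of about 400 files each, and whatever serves the URLs (Apache rewrite rules or the application) only has to compute the same path from the number.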