Re: Files per directory

Yuri Csapo wrote:

> > Is that going to cause performance issues? The current file system 
> > is ext3. Would anyone suggest a maximum limit I should set, or say 
> > whether they think 10K files is acceptable?
> 
> I'm no expert, but the answer is probably: "depends on the application."
> 
> As far as I know, ext3 currently has no limit on the number of files 
> in a directory. There IS a limit on the number of files (actually 
> inodes) in the whole filesystem, which is a completely different thing. 

ext3 also has a limit of 32000 hard links per inode, which means that a
directory can't have more than 31998 subdirectories: each subdirectory's
".." entry is a link back to the parent, and the parent's own "." entry
plus its entry in its own parent account for the other two links.
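
Just to illustrate the arithmetic, a quick sketch (bash and GNU stat
assumed; "parent" is only an example path):

    $ mkdir -p parent/{a,b,c}
    $ stat -c %h parent    # link count = 2 + number of subdirectories
    5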

However, the original poster wasn't asking about hard limits, but about
efficiency.

If the filesystem wasn't created with the dir_index option, then
having thousands of files in a directory will be a major performance
problem, as any lookups will scan the directory linearly.
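
FWIW, you can check for dir_index and enable it on an existing ext3
filesystem roughly like this (the device name is only a placeholder,
and the e2fsck pass should be run on an unmounted filesystem):

    # tune2fs -l /dev/sdXN | grep -i features   # "dir_index" should appear in the feature list
    # tune2fs -O dir_index /dev/sdXN            # turn on hashed (HTree) directory indexes
    # e2fsck -fD /dev/sdXN                      # rebuild/optimize existing directories

Note that enabling the feature with tune2fs only affects directories
created afterwards; the e2fsck -D step is what re-indexes existing ones.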

Even with the dir_index option, large directories could be an issue. I
think that you would really need to conduct tests to see exactly how
much of an issue.
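
A crude test along these lines (bash; the counts and path are
arbitrary) will at least give you ballpark numbers on your own
hardware:

    cd "$(mktemp -d)"
    for i in $(seq 1 10000); do touch "file$i"; done    # populate a 10K-entry directory
    time for i in $(seq 1 1000); do                     # time 1000 random name lookups
        stat "file$(( RANDOM % 10000 + 1 ))" > /dev/null
    done

Bear in mind that repeated runs will mostly hit the dentry cache, so
compare against a small directory under the same conditions rather
than trusting the absolute numbers.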

OTOH, even if you keep the directories small, a database consisting of
many small files will be much slower than e.g. BerkeleyDB or DBM.

-- 
Glynn Clements <glynn@xxxxxxxxxxxxxxxxxx>
