Re: On the many files problem

On Dec 31, 2007 11:13 PM, Yannick Gingras <ygingras@xxxxxxxxxxxx> wrote:
> >    but if you want to check odder cases, try creating a huge
> >    directory, and then deleting most files, and then adding a few
> >    new ones. Some filesystems will take a huge hit because they'll
> >    still scan the whole directory, even though it's mostly empty!
> >
> >    (Also, a "readdir() + stat()" loop will often get *much* worse access
> >    patterns if you've mixed deletions and creations)
>
> This is something that will be interesting to benchmark later on.  So,
> an application with a lot of turnaround, say a mail server, should
> delete and re-create the directories from time to time?  I assume this
> is specific to some file system types.

This is indeed the case. Directories with a lot of movement get
fragmented on most FSs -- ext3 is a very bad case for this -- and
there are no "directory defrag" tools other than regenerating them.
The "Maildir" storage used by many IMAP servers these days shows the
problem.

This (longish) thread has some interesting tidbits on getdents() and
directory fragmentation:
http://kerneltrap.org/mailarchive/git/2007/1/7/235215

cheers,


m
-
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
