RE: Directories with >100K files

Hi Jeff,

Quoting Jeff Sturm <jeff.sturm@xxxxxxxxxx>:

> > -----Original Message-----
> > From: linux-cluster-bounces@xxxxxxxxxx
> > [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of
> > nick@xxxxxxxxxxxxxxx
> > Sent: Wednesday, January 21, 2009 8:29 AM
> > To: linux clustering
> > Subject: RE:  Directories with >100K files
> >
> > What is the way forward now? I've got users complaining left,
> > right, and centre. Should I ditch GFS and use NFS?
>
> You've hit an area where GFS doesn't work so well.  I don't know if NFS
> will be much better--others with more experience may know.  (For our
> application we chose GFS over other shared filesystem technologies
> solely because we require strict POSIX locking.)
>
> Your options seem to be:
>
> A) Limit FS activity to as few nodes as possible.  (Does it perform
> suitably when mounted on only a single node?)
>
> B) Crank up demote_secs to an hour or more, until it either relieves
> your problem or cripples the system because too many locks are held too
> long.  (I have a filesystem here with demote_secs=86400 so we get
> generally good rsync performance with over 50,000 file/directory
> entries.)
>
> C) Use some alternative to GFS.
>
> Sorry if there's not a better answer.
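
To answer the question in option A: I can try mounting the filesystem
on a single node with the nolock protocol, to see how it performs
without cluster locking in the picture.  A rough sketch (the device
path and mount point below are just examples from my setup, and the
volume must not be mounted on any other node while it's mounted this
way):

    # Mount GFS without cluster locking, for a single-node
    # performance test.  Only safe if no other node has the
    # filesystem mounted!
    mount -t gfs -o lockproto=lock_nolock /dev/vg0/gfslv /mnt/gfs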

I'm just going to have to keep working at this to see what we can do.
If we find a fix I'll post back.
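
In case it helps anyone searching the archives later: on GFS the
demote_secs tunable from option B is set per mount with gfs_tool.
Roughly (the /mnt/gfs mount point is just an example, and as far as I
know the setting doesn't survive a remount, so it needs re-applying at
mount time, e.g. from an init script):

    # Show the current value (the default is 300 seconds, I believe):
    gfs_tool gettune /mnt/gfs | grep demote_secs

    # Raise the glock demote timeout to 24 hours, as in Jeff's example:
    gfs_tool settune /mnt/gfs demote_secs 86400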

Thanks for your help.

Nick.

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
