Re: Re: Why is GFS so slow? What is it waiting for?

Hi, Klaus:

Thank you very much for your kind answer.

Tuning the parameters sounds really interesting. I
should give it a try.

By the way, how did you come up with these new
parameter values? Did you calculate them based on
some measurements, or simply pick them and test?

Best,

Jas


--- Klaus Steinberger
<Klaus.Steinberger@xxxxxxxxxxxxxxxxxxxxxx> wrote:

> Hi,
> 
> > However, it took ages to list the subdirectory on an
> > absolutely idle cluster node. See below:
> >
> > # time ls -la | wc -l
> > 31767
> >
> > real    3m5.249s
> > user    0m0.628s
> > sys     0m5.137s
> >
> > There are about 3 minutes spent somewhere. Does anyone
> > have any clue what the system was waiting for?
> 
> Did you tune glocks? I found that it's very important
> for GFS performance.
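> 
> Before and after tuning, you can check the current tunable
> values and the lock counters with gfs_tool's gettune and
> counters subcommands (a sketch, using the mount point from
> the tunings below):
> 
> # show current tunables, including glock_purge and demote_secs
> gfs_tool gettune /export/data/etp
> 
> # print the lock counters; run it again while the slow ls
> # is going to see whether glocks pile up
> gfs_tool counters /export/data/etp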
> 
> I'm doing the following tunings currently:
> 
> gfs_tool settune /export/data/etp quota_account 0
> gfs_tool settune /export/data/etp glock_purge 50
> gfs_tool settune /export/data/etp demote_secs 200
> gfs_tool settune /export/data/etp statfs_fast 1
> 
> Of course, switch off quota only if you don't need it.
> All these tunings have to be reapplied after every mount,
> so do them in an init.d script that runs after the GFS
> mount, and do them on every node; a sketch follows below.
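> 
> A minimal init.d-style sketch (the script name and the
> MOUNTS list are assumptions; adapt them to your setup):
> 
> #!/bin/sh
> # /etc/init.d/gfs-tune -- hypothetical name; reapply GFS
> # tunables after the filesystems are mounted
> 
> MOUNTS="/export/data/etp"   # list your GFS mount points here
> 
> case "$1" in
>   start)
>     for m in $MOUNTS; do
>       gfs_tool settune "$m" quota_account 0  # only if quota unused
>       gfs_tool settune "$m" glock_purge 50
>       gfs_tool settune "$m" demote_secs 200
>       gfs_tool settune "$m" statfs_fast 1
>     done
>     ;;
>   stop)
>     # nothing to undo; the tunables reset at unmount
>     ;;
> esac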
> 
> Here is the link to the glock paper:
> 
> http://people.redhat.com/wcheng/Patches/GFS/readme.gfs_glock_trimming.R4
> 
> The glock tuning (the glock_purge and demote_secs
> parameters) definitely solved a problem we had here with
> the Tivoli Backup Client. Before, it ran for days and
> sometimes even gave up; we observed heavy lock traffic.
> 
> After changing the glock parameters, backup times went
> down dramatically; we can now run an incremental backup
> of a 4 TByte filesystem in under 4 hours. So give it a try.
> 
> There is some more tuning which, unfortunately, can only
> be done when the filesystem is created. The default number
> of resource groups is way too large for today's TByte
> filesystems; see the mkfs sketch below.
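> 
> A sketch of setting a larger resource group size at mkfs
> time (the device, cluster, and filesystem names here are
> placeholders; on GFS1's gfs_mkfs, -r takes the resource
> group size in megabytes, so check your man page for the
> allowed range):
> 
> # bigger resource groups (here 2048 MB each) mean far fewer
> # RGs to scan on a multi-TByte filesystem
> gfs_mkfs -p lock_dlm -t mycluster:mygfs -j 4 -r 2048 /dev/vg0/lv_gfs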
> 
> Sincerely,
> Klaus
> 
> 
> -- 
> Klaus Steinberger         Beschleunigerlaboratorium
> Phone: (+49 89)289 14287  Am Coulombwall 6, D-85748 Garching, Germany
> FAX:   (+49 89)289 14280  EMail: Klaus.Steinberger@xxxxxxxxxxxxxxxxxxxxxx
> URL:   http://www.physik.uni-muenchen.de/~Klaus.Steinberger/




--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
