Re: GFS2 poor performance (gfs2_tool counters)

Hi,

On Mon, 2008-11-10 at 15:04 -0600, David Merhar wrote:
> Is "gfs2_tool counters" supported?
> 
> It doesn't work for us, and I found a reference to a man page
> correction that removes it.
> 
> Thanks.
> 
> djm
> 
> 
No, it isn't supported any more. There are plenty of existing ways to
trace the actions of the filesystem, such as strace, blktrace and, more
recently, FIEMAP, so the counters are no longer needed.

Steve.
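
As a concrete sketch of the alternatives mentioned above (the mount
point, file and device names are placeholders, not taken from this
thread):

    # Syscall level: count and time the syscalls behind a slow operation.
    strace -f -c du -sh /dados/teste

    # Block-I/O level: watch requests reaching the shared storage device.
    blktrace -d /dev/mapper/gfs2vol -o - | blkparse -i -

    # Extent level: filefrag uses the FIEMAP ioctl to report a file's
    # on-disk extent mapping.
    filefrag -v /dados/teste/somefile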

> 
> On Nov 6, 2008, at 1:53 PM, Jeff Sturm wrote:
> 
> > I looked over the summit document you referenced below.  The value  
> > of demote_secs mentioned is an example setting, and unfortunately no  
> > recommendations or rationale accompany this.
> >
> > For some access patterns you can get better performance by actually  
> > increasing demote_secs.  For example, we have a node that we  
> > routinely rsync a file tree onto using a GFS partition.  Increasing  
> > demote_secs from 300 to 86400 reduced the average rsync time by a  
> > factor of about 4.  The reason is that this node has little lock  
> > contention and needs to lock each file every time we start an rsync  
> > process.  With demote_secs=300, it was doing much more work to  
> > reacquire locks on each run, whereas demote_secs=86400 allowed the  
> > locks to persist for up to a day.  The overall number of files in  
> > our application is bounded such that they, together with their  
> > locks, fit in the buffer cache.
> >
> > At another extreme, we have an application that creates a lot of  
> > files but seldom opens them on the same node.  In this case there is  
> > no value in holding onto the locks, so we set demote_secs to a small  
> > value and glock_purge as high as 70 to ensure locks are quickly  
> > released in memory.
> >
> > The best advice I can give in general is to experiment with  
> > different settings for demote_secs and glock_purge while watching  
> > the output of "gfs_tool counters" to see how they behave.
> >
> > Jeff
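
A sketch of the two tuning profiles described above, using the GFS1
gfs_tool syntax from this thread (the mount point and values are
illustrative only, not recommendations from the original posters):

    # Node with little lock contention and a bounded, cacheable working set:
    gfs_tool settune /dados demote_secs 86400   # let unused glocks live up to a day

    # Node that creates many files but rarely reopens them:
    gfs_tool settune /dados demote_secs 30      # demote unused glocks quickly
    gfs_tool settune /dados glock_purge 70      # purge up to 70% of unused glocks

    # Watch lock behaviour while experimenting:
    gfs_tool counters /dados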
> >
> > -----Original Message-----
> > From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Fabiano F. Vitale
> > Sent: Tuesday, November 04, 2008 3:19 PM
> > To: linux clustering
> > Subject: Re:  GFS2 poor performance
> >
> > Hi,
> >
> > For cluster traffic the two nodes are linked by a Cat6 patch cord,  
> > and the LAN interfaces are gigabit.
> >
> > All nodes have an Emulex Zephyr-X LightPulse Fibre Channel  
> > adapter, and the storage is an HP EVA8100.
> >
> > I read the document
> > http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Summit08presentation_GFSBestPractices_Final.pdf
> > which shows some parameters to tune; one of them is demote_secs,  
> > with a suggested value of 100 seconds.
> >
> > thanks
> >
> >> What sort of network and storage device are you using?
> >>
> >> Also, why set demote_secs so low?
> >>
> >> -----Original Message-----
> >> From: linux-cluster-bounces@xxxxxxxxxx
> >> [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of  
> >> ffv@xxxxxxxxxxx
> >> Sent: Tuesday, November 04, 2008 2:13 PM
> >> To: linux-cluster@xxxxxxxxxx
> >> Subject:  GFS2 poor performance
> >>
> >> Hi all,
> >>
> >> I'm getting very poor performance using GFS2.
> >> I have two qmail (mail) servers and one GFS2 filesystem shared by  
> >> them.  Each directory in the GFS2 filesystem may have up to 10000  
> >> files (mails).
> >>
> >> The problem is the performance of operations like ls, du and rm.
> >> For example:
> >>
> >> # time du -sh /dados/teste
> >> 40M     /dados/teste
> >>
> >> real    7m14.919s
> >> user    0m0.008s
> >> sys     0m0.129s
> >>
> >> This is unacceptable.
> >>
> >> Some attributes I have already set using gfs2_tool:
> >>
> >> gfs2_tool settune /dados demote_secs 100
> >> gfs2_tool setflag jdata /dados
> >> gfs2_tool setflag sync /dados
> >> gfs2_tool setflag directio /dados
> >>
> >> But the performance is still very bad.
> >>
> >>
> >> Does anybody know how to tune the filesystem for acceptable  
> >> performance with directories of 10000 files?
> >> Thanks for any help.
> >>
> >> --
> >> Linux-cluster mailing list
> >> Linux-cluster@xxxxxxxxxx
> >> https://www.redhat.com/mailman/listinfo/linux-cluster
