Re: Re: gfs tuning

Doh!! :)

Is this normal?

$ gfs_tool df /mnt
/mnt:
  SB lock proto = "lock_dlm"
  SB lock table = "hotsite:gfs-00"
  SB ondisk format = 1309
  SB multihost format = 1401
  Block size = 4096
  Journals = 2
  Resource Groups = 424
  Mounted lock proto = "lock_dlm"
  Mounted lock table = "hotsite:gfs-00"
  Mounted host data = "jid=1:id=196609:first=0"
  Journal number = 1
  Lock module flags = 0
  Local flocks = FALSE
  Local caching = FALSE
  Oopses OK = FALSE

  Type           Total          Used           Free           use%           
  ------------------------------------------------------------------------
  inodes         854            854            0              100%
  metadata       48761          2259           46502          5%
  data           27652913       1061834        26591079       4%


I think my load average is very high when running Apache ab (ApacheBench) against this...
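
Before touching any tunables I want to confirm whether that load is plain I/O wait or glock/lock scanning overhead. A rough checklist, just a sketch, assuming the /mnt mount from the df output above and that sysstat is installed for iostat:

$ iostat -x 5 3                                  # high %iowait and long awaits point at the disks
$ ps -eo pid,comm,pcpu --sort=-pcpu | head -15   # is gfs_scand or dlm_* at the top?
$ gfs_tool counters /mnt                         # glock/lock activity counters for this mount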

On Mon, 2008-06-16 at 11:53 -0500, Terry wrote:
> Doh!   Check this out:
> 
> [root@omadvnfs01b ~]# gfs_tool df /data01d
> /data01d:
>   SB lock proto = "lock_dlm"
>   SB lock table = "omadvnfs01:gfs_data01d"
>   SB ondisk format = 1309
>   SB multihost format = 1401
>   Block size = 4096
>   Journals = 2
>   Resource Groups = 16384
>   Mounted lock proto = "lock_dlm"
>   Mounted lock table = "omadvnfs01:gfs_data01d"
>   Mounted host data = "jid=1:id=786434:first=0"
>   Journal number = 1
>   Lock module flags = 0
>   Local flocks = FALSE
>   Local caching = FALSE
>   Oopses OK = FALSE
> 
>   Type           Total          Used           Free           use%
>   ------------------------------------------------------------------------
>   inodes         18417216       18417216       0              100%
>   metadata       21078536       20002007       1076529        95%
>   data           1034059688     744936460      289123228      72%
> 
> 
> The number of inodes is interesting......
> 
> 
> On Mon, Jun 16, 2008 at 11:45 AM, Terry <td3201@xxxxxxxxx> wrote:
> > Hello,
> >
> > I have 4 GFS volumes, each 4 TB.  I am seeing pretty high load
> > averages on the host that is serving these volumes out via NFS.  I
> > notice that gfs_scand, dlm_recv, and dlm_scand are running with high
> > CPU%.  I truly believe the box is I/O bound due to high awaits, but I am
> > trying to dig into the root cause.  99% of the activity on these volumes
> > is write.  The number of files is around 15 million per TB.   Given
> > the high number of writes, increasing scand_secs will not help.  Any
> > other optimizations I can do?
> >
> > Thanks!
> >
> 
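
On the scand_secs question quoted above, for reference: the per-mount GFS tunables can be listed with gfs_tool gettune and changed on the fly with gfs_tool settune. Settings revert at umount, and the available tunable names differ between GFS releases, so the lines below are only a sketch to compare against your own gettune output:

$ gfs_tool gettune /data01d                  # list current tunables and values
$ gfs_tool settune /data01d scand_secs 30    # example: scan glocks less often
$ gfs_tool settune /data01d glock_purge 50   # example: demote a percentage of unused
                                             # glocks per scan (newer GFS releases only)

Mounting with noatime (if the applications can live without atime) can also avoid a lot of atime-driven inode updates with this many files.
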
-- 
Tiago Cruz
http://everlinux.com
Linux User #282636


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
