Re: GFS reserved blocks?

Shawn,
I have been seeing the same thing on one of my clusters (shown below) under Red Hat Enterprise Linux 4.6. I found some details on this in an article on the Open-Sharedroot web site (http://www.open-sharedroot.org/faq/troubleshooting-guide/file-systems/gfs/file-system-full) and an article in Red Hat's knowledge base (http://kbase.redhat.com/faq/FAQ_78_10697.shtm). It seems to be a bug in the reclaiming of metadata blocks when an inode is released. A patch for this (bz298931) appeared in the cluster 2.99.10 release notes, but it was reverted a few days after it was submitted. The only suggestion I have gotten back from Red Hat is to shut down the application so the GFS file system is not being accessed and then run "gfs_tool reclaim <mount point>" (a sketch of that procedure follows the output below).

[root@omzdwcdrp003 ~]# gfs_tool df /l1load1
/l1load1:
SB lock proto = "lock_dlm"
SB lock table = "DWCDR_prod:l1load1"
SB ondisk format = 1309
SB multihost format = 1401
Block size = 4096
Journals = 20
Resource Groups = 6936
Mounted lock proto = "lock_dlm"
Mounted lock table = "DWCDR_prod:l1load1"
Mounted host data = ""
Journal number = 13
Lock module flags =
Local flocks = FALSE
Local caching = FALSE
Oopses OK = FALSE

Type           Total          Used           Free           use%
------------------------------------------------------------------------
inodes         155300         155300         0              100%
metadata       2016995        675430         1341565        33%
data           452302809      331558847      120743962      73%
[root@omzdwcdrp003 ~]# df -h /l1load1
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/l1load1--vg-l1load1--lv
                    1.7T  1.3T  468G  74% /l1load1
[root@omzdwcdrp003 ~]# du -sh /l1load1
18G     /l1load1
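
For reference, the workaround procedure might look something like the following. This is only a minimal sketch: "myapp" is a placeholder for whatever service owns the file system, and gfs_tool reclaim may prompt for confirmation before it touches anything.

service myapp stop          # quiesce the application so nothing is accessing the mount
gfs_tool reclaim /l1load1   # reclaim unused metadata blocks on the mounted GFS
gfs_tool df /l1load1        # verify that the free inode/metadata counts have recovered
service myapp start         # bring the application back up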

----
Jason Huddleston, RHCE
----
PS-USE-Linux
Partner Support - Unix Support and Engineering
Verizon Information Processing Services



Shawn Hood wrote:
Does GFS reserve blocks for the superuser, a la ext3's "Reserved block
count"?  I've had a ~1.1TB FS report that it's full with df reporting
~100GB remaining.
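
For comparison, ext3's root-only reserve that Shawn mentions can be inspected with tune2fs (the device name below is just a placeholder):

tune2fs -l /dev/sdb1 | grep -i "reserved block count"   # typically 5% of the FS, usable only by root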



--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
