Re: gfs_fsck memory allocation

On Thu, 2008-04-03 at 12:16 +0100, Ben Yarwood wrote:
> I'm trying to run gfs_fsck on a 16TB file system and I keep getting the following message
> 
> Initializing fsck
> Unable to allocate bitmap of size 520093697
> This system doesn't have enough memory + swap space to fsck this file system.
> Additional memory needed is approximately: 5952MB
> Please increase your swap space by that amount and run gfs_fsck again.
> 
> I have increased the swap size to 16GB but I still keep getting the message.  Does anyone have any suggestions?

Hi Ben,

gfs_fsck needs one byte of in-memory bitmap per block in each resource
group (RG).  That message indicates it tried to allocate a 520MB chunk of
memory and the allocation failed.  IIRC, the biggest RG size is 2GB, which
at 4K blocks is 524,288 blocks, so the largest bitmap it should ever need
is about 512KB.  (Assuming 4K blocks and assuming I did the math correctly,
which I won't promise!)  A 520MB bitmap would describe an RG of roughly
2TB, around a thousand times larger than any bitmap gfs_fsck should ever
ask for.
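
To double-check the numbers, here is a quick back-of-the-envelope sketch
(assuming 4K blocks and one fsck bitmap byte per block, as above; this is
not code from gfs_fsck itself, just the arithmetic):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t block_size   = 4096;             /* assumed 4K blocks */
    const uint64_t max_rg_bytes = 2048ULL << 20;    /* 2GB maximum RG size */
    const uint64_t failed_alloc = 520093697ULL;     /* size from the error */

    /* Largest legitimate fsck bitmap: one byte per block of a 2GB RG. */
    uint64_t max_bitmap_bytes = max_rg_bytes / block_size;  /* 524288, ~512K */

    /* RG size implied by the failed allocation: one block per bitmap byte. */
    uint64_t implied_rg_bytes = failed_alloc * block_size;  /* ~2.1TB */

    printf("largest sane bitmap: %llu bytes\n",
           (unsigned long long)max_bitmap_bytes);
    printf("implied RG size:     %llu bytes\n",
           (unsigned long long)implied_rg_bytes);
    return 0;
}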

So this error is most likely caused by corruption in your rindex system
file.  You might run gfs_tool rindex on the mount point and look for
anomalies, such as an entry whose size is wildly out of line with its
neighbors.
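
For what it's worth, the kind of sanity check I mean looks roughly like
this; the field names (ri_addr, ri_data, ri_bitbytes) are from my memory of
the on-disk rindex structure, so treat it as a sketch rather than something
you can paste into the tools:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical, simplified view of one rindex entry. */
struct rindex_entry {
    uint64_t ri_addr;      /* first block of the resource group */
    uint32_t ri_data;      /* number of data blocks in the RG */
    uint32_t ri_bitbytes;  /* bytes of on-disk bitmap for the RG */
};

/* With 4K blocks, a 2GB RG can hold at most 524288 blocks. */
#define MAX_RG_BLOCKS (2048ULL * 1024 * 1024 / 4096)

static int entry_looks_sane(const struct rindex_entry *ri)
{
    /* Neither the block count nor the bitmap size should imply an RG
     * larger than the 2GB maximum. */
    return ri->ri_data <= MAX_RG_BLOCKS && ri->ri_bitbytes <= MAX_RG_BLOCKS;
}

int main(void)
{
    /* An entry claiming ~520 million blocks, as the failed 520093697-byte
     * allocation suggests, would be flagged here. */
    struct rindex_entry suspect = { 0, 520093697U, 130023425U };
    printf("entry looks sane: %s\n", entry_looks_sane(&suspect) ? "yes" : "no");
    return 0;
}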

I'm planning some fixes to gfs_fsck so it handles cases like this more
gracefully, but that will take some time.  If you send in your metadata
(saved with gfs2_edit savemeta), that would help me with this work.

Regards,

Bob Peterson
Red Hat Clustering & GFS


