Re: gfs2_grow - Error writing new rindex entries; aborted.

Problem resolved by removing some large files before running gfs2_grow, as it requires some free space on the existing filesystem to process the grow.

Refer to:
https://bugzilla.redhat.com/show_bug.cgi?id=490649
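For anyone hitting the same error, a rough sketch of the check-then-grow sequence (the mount point and the files to move are placeholders, not from the original thread):

```shell
#!/bin/sh
# Sketch only: check free space before growing, since gfs2_grow needs
# some free blocks on the existing filesystem to write the new rindex.
MNT="${MNT:-/mnt/fsbackup}"   # placeholder mount point

# Available space (KB) on the mounted GFS2 filesystem
avail_kb=$(df -P "$MNT" 2>/dev/null | awk 'NR==2 {print $4}')
echo "available on $MNT: ${avail_kb:-unknown} KB"

# If nearly full, free some space first (move or delete large files), e.g.:
#   mv "$MNT"/large-backup.tar /other/storage/
# then re-run the grow:
#   gfs2_grow "$MNT"
```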

On 7/04/2011, at 1:38 PM, Marco Huang wrote:

Hi,

We are running a CentOS 5.5 (gfs2-utils.x86_64 v0.1.62-20.el5) cluster. We want to add another 5 TB of disk space to the filesystem, expanding it from 19 TB to 25 TB; however, it does not grow beyond 20 TB. There was no error in the test run. So I am wondering whether gfs2_grow has a limitation that only allows growing a GFS2 filesystem up to 20 TB? Has anyone had the same experience?

Test run
# gfs2_grow -T /mnt/fsbackup
(Test mode--File system will not be changed)
FS: Mount Point: /mnt/fsbackup
FS: Device:      /dev/mapper/fsbackup-fsbackup01
FS: Size:        4882811901 (0x12309cbfd)
FS: RG size:     524244 (0x7ffd4)
DEV: Size:       6103514112 (0x16bcc3c00)
The file system grew by 4768368MB.
gfs2_grow complete.

Actual run
# gfs2_grow /mnt/fsbackup
FS: Mount Point: /mnt/fsbackup
FS: Device:      /dev/mapper/fsbackup-fsbackup01
FS: Size:        4882811901 (0x12309cbfd)
FS: RG size:     524244 (0x7ffd4)
DEV: Size:       6103514112 (0x16bcc3c00)
The file system grew by 4768368MB.
Error writing new rindex entries;aborted.
gfs2_grow complete.
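As a sanity check, the "grew by" figure reported above follows from the FS and DEV sizes in the same output: with the 4096-byte block size shown by gfs2_tool df, the difference in blocks works out to exactly 4768368 MB. A quick check in Python:

```python
# Verify gfs2_grow's reported growth from the block counts it printed.
fs_blocks = 4882811901    # "FS: Size" from the gfs2_grow output (blocks)
dev_blocks = 6103514112   # "DEV: Size" from the gfs2_grow output (blocks)
block_size = 4096         # bytes; "Block size = 4096" per gfs2_tool df

grown_mb = (dev_blocks - fs_blocks) * block_size // (1024 ** 2)
print(grown_mb)  # 4768368, matching "The file system grew by 4768368MB."
```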


Before gfs2_grow
# df -h /mnt/fsbackup/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/fsbackup-fsbackup01
                       19T   19T  651M 100% /mnt/fsbackup

# gfs2_tool df
/mnt/fsbackup:
  SB lock proto = "lock_dlm"
  SB lock table = "FSC:fsbackup01"
  SB ondisk format = 1801
  SB multihost format = 1900
  Block size = 4096
  Journals = 8
  Resource Groups = 10112
  Mounted lock proto = "lock_dlm"
  Mounted lock table = "FSC:fsbackup01"
  Mounted host data = ""
  Journal number = 0
  Lock module flags = 0
  Local flocks = FALSE
  Local caching = FALSE

  Type           Total Blocks   Used Blocks    Free Blocks    use%           
  ------------------------------------------------------------------------
  data           5300270360     4882310027     417960333      92%
  inodes         447901122      29940789       417960333      7%

After gfs2_grow
# df -h /mnt/fsbackup/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/fsbackup-fsbackup01
                       20T   19T  1.6T  93% /mnt/fsbackup


# gfs2_tool df
/mnt/fsbackup:
  SB lock proto = "lock_dlm"
  SB lock table = "FSC:fsbackup01"
  SB ondisk format = 1801
  SB multihost format = 1900
  Block size = 4096
  Journals = 8
  Resource Groups = 9314
  Mounted lock proto = "lock_dlm"
  Mounted lock table = "FSC:fsbackup01"
  Mounted host data = ""
  Journal number = 0
  Lock module flags = 0
  Local flocks = FALSE
  Local caching = FALSE

  Type           Total Blocks   Used Blocks    Free Blocks    use%           
  ------------------------------------------------------------------------
  data           4882476584     4882310009     166575         100%
  inodes         30107364       29940789       166575         99%



cheers
--
Marco

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

