Re: Failed xfs_growfs after lvextend

Nathan Scott wrote:
> On Sat, Nov 05, 2005 at 01:31:10PM -0500, Randall A. Jones wrote:
> >
> > I extended an LV to 12.28TB and ran xfs_growfs on the mount point,
> > lv0.  This appeared to work fine, except that afterwards the
> > filesystem didn't appear any larger.
>
> I looked into this problem a while back.  I believe what you're
> seeing here is an inconsistency in the kernel's block layer - at
> least, that was the problem when I last looked at this: the size
> increase was indeed done inside the driver, and /sys/block/xxx/size
> confirmed that, but the interfaces which use /dev/xxx to get the
> device size (i.e. lseek(SEEK_END) or a BLKGETSIZE64 ioctl) did not
> see the increase.
>
> I wrote the attached program to show the issue, and sent mail to LKML
> to let folks know about it; I guess no one has got around to
> addressing the problem yet, though.  The core of the problem was that
> the /dev/xxx inode size (i_size) had not been updated to reflect the
> change in device size, IIRC.

OK, I looked at the things you suggested.  Everything seems to be in
order with device-mapper - it sees the 12.28TB device size, and your
getdevicesize.c program verified this with a BLKGETSIZE64 ioctl call.

root@mapdata<~>$ cat /sys/block/dm-0/size
26367492096

root@mapdata<~>$ ~rajones/getdevicesize /dev/mapper/vg0-lv0
26367492096 512 byte blocks (BLKGETSIZE64)

26367492096 512-byte blocks is approx. 12.28TB
(26367492096 * 512 / 2^40 = 12.28).
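
(For the archives, in case the attachment gets stripped: a minimal
sketch of such a check - not necessarily Nathan's exact source - could
look like the following.  It reports the device size both via
lseek(SEEK_END), which goes through the /dev/xxx inode's i_size, and
via the BLKGETSIZE64 ioctl; if the two disagree, that's the stale
i_size problem Nathan described.

/*
 * Sketch in the spirit of getdevicesize.c (a hypothetical
 * reconstruction, not the original attachment): print a block
 * device's size as seen by lseek(SEEK_END) and by BLKGETSIZE64.
 *
 * build: cc -D_FILE_OFFSET_BITS=64 -o getdevicesize getdevicesize.c
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>		/* BLKGETSIZE64 */

int main(int argc, char *argv[])
{
	unsigned long long bytes = 0;
	off_t end;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <blockdev>\n", argv[0]);
		return 1;
	}
	if ((fd = open(argv[1], O_RDONLY)) < 0) {
		perror("open");
		return 1;
	}

	/* size according to the device inode (i_size) */
	if ((end = lseek(fd, 0, SEEK_END)) < 0)
		perror("lseek");
	else
		printf("%llu 512 byte blocks (lseek SEEK_END)\n",
		       (unsigned long long)end / 512);

	/* size according to the driver */
	if (ioctl(fd, BLKGETSIZE64, &bytes) < 0)
		perror("ioctl");
	else
		printf("%llu 512 byte blocks (BLKGETSIZE64)\n",
		       bytes / 512);

	close(fd);
	return 0;
}
)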


So, back to XFS.  Is it possible xfs_growfs is not working properly?
After running xfs_growfs, the primary superblock on the LV's
filesystem was corrupt or missing.
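
(If it would help narrow things down, I can take a read-only look at
the primary superblock - something along these lines, which shouldn't
modify anything - and post the output here:

root@mapdata<~>$ xfs_repair -n /dev/mapper/vg0-lv0
root@mapdata<~>$ xfs_db -r -c "sb 0" -c "print" /dev/mapper/vg0-lv0
)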


Is there a workaround?  A 12.28TB LV with an XFS filesystem should
work, yes?

One idea for a workaround is to relocate the data off the misbehaving
LV/filesystem and recreate the LV and filesystem from scratch,
avoiding xfs_growfs entirely.  I have enough free PVs to create a
temp space to move my existing data - roughly along the lines of the
sketch below.
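
(A sketch of what I have in mind - volume names and sizes are
placeholders, and the copy step could equally be cp or rsync:

root@mapdata<~>$ lvcreate -n lvtmp -L <size> vg0
root@mapdata<~>$ mkfs.xfs /dev/mapper/vg0-lvtmp
root@mapdata<~>$ mount /dev/mapper/vg0-lvtmp /mnt/tmp
root@mapdata<~>$ xfsdump -J - /lv0 | xfsrestore -J - /mnt/tmp

then unmount both, lvremove the old lv0, and recreate it at the full
12.28TB with a fresh mkfs.xfs.)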

Is there any possibility of fixing the problem "in place"?

Thank you,
Randall

--
..:.::::
Randall Jones     GST      NASA Goddard Space Flight Center
HPC Visualization Support       http://hpcvis.gsfc.nasa.gov
Scientific Visualization Studio    http://svs.gsfc.nasa.gov
rajones@svs.gsfc.nasa.gov      Code 610.3      301-286-2239

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
