On 04/05/2013 12:34 PM, Laurent Barbe wrote:
Hello,

I'm trying online resizing with RBD + XFS, but when I try to run xfs_growfs, it doesn't see the new size. I don't use a partition table; the OS is Debian Squeeze with kernel 3.8.4 and Ceph 0.56.4. It seems that the mounted file system prevents the block device size from being updated? If the file system is not mounted, or if I unmount and mount it again, xfs_growfs works as expected.
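For reference, a minimal sketch of the online-resize sequence being attempted here (the image name, device and mount point are taken from the output further down; treat the exact paths as assumptions for your own setup):

#### grow the RBD image (size in MB) ####
# rbd resize rbdxfs --size=200
#### grow XFS to fill the device; this only helps if the kernel already sees the new size ####
# xfs_growfs /mnt/rbdxfs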
When a block device is in use, its size can't change. When you unmount it, the block device is no longer in use and the new size can be detected.
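A rough sketch of that unmount/remount path, assuming the device and mount point from the report below:

#### confirm the file system is mounted (device in use) ####
# grep rbd1 /proc/mounts
#### unmount, remount, then grow ####
# umount /mnt/rbdxfs
# mount /dev/rbd1 /mnt/rbdxfs
# xfs_growfs /mnt/rbdxfs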
This is not an RBD limitation, but something that goes for all block devices in Linux.
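To see the size the kernel is currently using for any block device (a sketch, assuming /dev/rbd1):

#### kernel's cached size, in bytes and in 512-byte sectors ####
# blockdev --getsize64 /dev/rbd1
# cat /sys/block/rbd1/size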
I've seen some patches floating around that could do this online, but I'm not sure if they are in the kernel.
You could try this:

$ blockdev --rereadpt /dev/rbd1

Or

$ partprobe -s /dev/rbd1

--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
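If you try those, a quick before/after check (a sketch, using the device from the report below) shows whether the kernel picked up the new size; note that the re-read may refuse with "Device or resource busy" while the file system is mounted:

#### kernel's view before ####
$ blockdev --getsize64 /dev/rbd1
#### ask the kernel to re-read the device ####
$ blockdev --rereadpt /dev/rbd1
#### kernel's view after ####
$ blockdev --getsize64 /dev/rbd1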
#### ORIGINAL SIZE ####
# parted /dev/rbd1 print
Model: Unknown (unknown)
Disk /dev/rbd1: *105MB*
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End    Size   File system  Flags
 1      0,00B  105MB  105MB  xfs

#### RBD RESIZE ####
# rbd resize rbdxfs --size=200
Resizing image: 100% complete...done.

#### SIZE DOES NOT CHANGE IF FS ON RBD1 IS MOUNTED ####
# parted /dev/rbd1 print
Model: Unknown (unknown)
Disk /dev/rbd1: *105MB*
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End    Size   File system  Flags
 1      0,00B  105MB  105MB  xfs

#### UMOUNT FS --> SIZE OK ####
# umount /mnt/rbdxfs
# parted /dev/rbd1 print
Model: Unknown (unknown)
Disk /dev/rbd1: *210MB*
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End    Size   File system  Flags
 1      0,00B  210MB  210MB  xfs

Any ideas?

Thanks

--
Laurent Barbe
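One more check that may help narrow this down (a sketch; the image name comes from the resize command above): compare what Ceph itself reports for the image with what parted/blockdev report for the mapped device while the file system is still mounted. If rbd info already shows 200 MB, the stale value is only the kernel's cached device size.

#### size according to Ceph ####
# rbd info rbdxfs
#### size according to the kernel ####
# blockdev --getsize64 /dev/rbd1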
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com