Re: ssm resize luks on lvm...

On 11/07/16 21:35, Chris Murphy wrote:
> On Sun, Jul 10, 2016 at 12:42 PM, Morgan Read <mstuff@xxxxxxxxxxx> wrote:


>> [root@morgansmachine ~]# ssm resize -s-3G
>> /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316
>> [root@morgansmachine ~]# ssm resize -s-3G
>> /dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6


> These two commands should at least have reduced the size of the
> file system volumes mounted at /home and /, but I have no idea why this
> was permitted, because online shrink is not supported by ext4.

Well, it seems to have reduced the volumes, but not the filesystems:
--------------------------------------------------------------------------------------------------------------------
Volume Pool Volume size FS FS size Free Type Mount point
--------------------------------------------------------------------------------------------------------------------
...
/dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316
crypt_pool 192.73 GB ext4 195.73 GB 21.40 GB crypt /home
/dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6
crypt_pool 17.00 GB ext4 20.00 GB 14.70 GB crypt /

I've cut and pasted directly from the terminal - no omissions from or additions to the series of commands and outputs.

Following the two commands above, ssm lists the two volumes as reduced in size by 3G, but the file system size (FS size) as unchanged...
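
If it helps confirm the mismatch, I believe the two sizes can be compared directly - a sketch only, assuming tune2fs and blockdev report what I expect:

---------------
# What the block device itself reports, in bytes:
blockdev --getsize64 /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316
# What the ext4 superblock believes - Block count x Block size = fs size:
tune2fs -l /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316 \
    | grep -E 'Block count|Block size'
---------------

If the superblock figure comes out larger than the device, that would match the FS size being bigger than the volume size in the ssm list output above.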

I figured that was strange in itself - but stranger still, I seemed to be able to increase the LV size of [...]/var by 3G, yet trying to increase the underlying volume by the same amount failed politely with: SSM Error (2005): There is not enough space in the pool 'none' to grow volume '/dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f' to size 6289408.0 KB!

And then simply attempting to increase the size of the underlying volume to fill the space caused ssm to fail very rudely and spit the dummy!

> ---------------
> # resize2fs /dev/VG/test4 40G
> resize2fs 1.42.13 (17-May-2015)
> Filesystem at /dev/VG/test4 is mounted on /mnt/0; on-line resizing required
> resize2fs: On-line shrinking not supported
>
> # ssm resize -s-3G /dev/VG/test4
> Do you want to unmount "/mnt/0"? [Y|n] n
> fsadm: Cannot proceed with mounted filesystem "/mnt/0"
>    fsadm failed: 1
>    Filesystem resize failed.
> SSM Error (2012): ERROR running command: "lvm lvresize -r -L
> 49283072.0k /dev/VG/test4"
> ---------------

> If I allow the unmount:

> ---------------
> [root@f24s ~]# ssm resize -s-3G /dev/VG/test4
> Do you want to unmount "/mnt/0"? [Y|n] y
> fsck from util-linux 2.28
>
> /dev/mapper/VG-test4: 11/3276800 files (0.0% non-contiguous),
> 251699/13107200 blocks
> resize2fs 1.42.13 (17-May-2015)
> Resizing the filesystem on /dev/mapper/VG-test4 to 12320768 (4k) blocks.
> The filesystem on /dev/mapper/VG-test4 is now 12320768 (4k) blocks long.
>
>    Size of logical volume VG/test4 changed from 50.00 GiB (12800
> extents) to 47.00 GiB (12032 extents).
>    Logical volume test4 successfully resized.
> ---------------

> So why don't you have any messages about what ssm resize actually did?

Hmm, I don't know - I generally take no message to be a good message: 'all done'?
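
Though I suppose checking the exit status would have told me more than the silence did; something like:

---------------
# A non-zero exit status would at least flag a failure:
ssm resize -s-3G /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316
echo "ssm exit status: $?"
---------------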


> I can't tell if it did anything at all, which means it probably did
> not resize the file system. This appears to be true when looking at
> ssm list results after you did this resize. The file systems are still
> mounted at /home and /, and they are still the same size as before, no
> change.

Hmm, yes - but the volume size has changed. As per my following email, there seems to be a discrepancy between the superblock and the partition table, which I was trying to correct - is there a way to edit the superblock to conform to the partition table?

This scenario looks similar to what's described here:
https://www.linuxquestions.org/questions/linux-hardware-18/size-in-superblock-is-different-from-the-physical-size-of-the-partition-298175/#post3813076
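
If the fix described there applies here, my reading (and it is only my reading - I'd take a full backup first) is that the filesystem has to be shrunk, offline, down to what the device can actually hold:

---------------
# All offline - for / and /home this means booting from live media:
umount /home
e2fsck -f /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316
# Shrink the fs to no more than the device size reported by ssm list
# above (192.73 GB), e.g. a little under it to be safe:
resize2fs /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316 190G
---------------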

>> [root@morgansmachine ~]# ssm resize -s-3G /dev/fedora_morgansmachine/home
>>    WARNING: Reducing active and open logical volume to 192.73 GiB.
>>    THIS MAY DESTROY YOUR DATA (filesystem etc.)
>> Do you really want to reduce fedora_morgansmachine/home? [y/n]: y
>>    Size of logical volume fedora_morgansmachine/home changed from 195.73 GiB
>> (50107 extents) to 192.73 GiB (49339 extents).
>>    Logical volume home successfully resized.
>> [root@morgansmachine ~]# ssm resize -s-3G /dev/fedora_morgansmachine/root
>>    WARNING: Reducing active and open logical volume to 17.00 GiB.
>>    THIS MAY DESTROY YOUR DATA (filesystem etc.)
>> Do you really want to reduce fedora_morgansmachine/root? [y/n]: y
>>    Size of logical volume fedora_morgansmachine/root changed from 20.00 GiB
>> (5120 extents) to 17.00 GiB (4352 extents).
>>    Logical volume root successfully resized.

> Yeah, I think you did indeed just destroy your data on both of these,
> because the file system was not resized in the first step and then you
> asked it to change the size of the LV. So those extents revert back to
> the VG.
>
> Had the file system resize happened correctly, ssm would have resized
> the LV for you, so you didn't need to do this step anyway.
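
Ah - so, if I follow, the single command (filesystem and LV together) would have been something like this? My reading of the lvresize man page, not something I've tested:

---------------
# -r/--resizefs has lvresize call fsadm, which shrinks the fs first
# and then the LV, in the correct order:
lvresize -r -L -3G /dev/fedora_morgansmachine/home
---------------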

Docs, docs, docs!

>> [root@morgansmachine ~]# ssm resize -s+3G /dev/fedora_morgansmachine/var
>>    Size of logical volume fedora_morgansmachine/var changed from 3.00 GiB
>> (768 extents) to 6.00 GiB (1536 extents).
>>    Logical volume var successfully resized.
>> [root@morgansmachine ~]# ssm resize -s+3G
>> /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
>> SSM Error (2005): There is not enough space in the pool 'none' to grow
>> volume '/dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f' to size
>> 6289408.0 KB!

> I think the problem here is that ssm is now somehow confused. You should
> have just run the second command on the file system itself, because ssm
> would know that it first must increase the size of the LV, then the
> size of the LUKS volume, and then the fs. But you only increased the
> size of the LV, not the LUKS volume, which now has a different size
> than its underlying LV, so SSM seems to get stuck.

Isn't
>> [root@morgansmachine ~]# ssm resize -s+3G /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
an attempt to increase the size of the LUKS volume?
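
For my own notes, then, the manual order for growing the whole stack would presumably be (an untested sketch on my part):

---------------
# 1. Grow the LV underneath:
lvextend -L +3G /dev/fedora_morgansmachine/var
# 2. Grow the active LUKS mapping to fill the enlarged LV
#    (without --size, cryptsetup resizes to fill the device):
cryptsetup resize luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
# 3. Grow the ext4 fs to fill the mapping - online *growth* is supported:
resize2fs /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
---------------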

> Further, the problem is that by shrinking some LVs, and then growing
> another, the extents for /home and / are now with some other LV and
> have probably been stepped on, so /home and / are likely a total loss.
> It would take some very tedious patience to unwind all of this in the
> *exact* reverse order to get the same extents linearly allocated back
> to the /home and / file systems.

Well, the steps I've followed aren't so complicated they couldn't be retraced...
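
And presumably the current extent layout can at least be inspected before attempting that - my guess at the right incantation:

---------------
# Show each LV's segments and which physical extents they occupy:
lvs --segments -o +seg_start_pe,devices fedora_morgansmachine
---------------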

>> [root@morgansmachine ~]# ssm resize
>> /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
>> Traceback (most recent call last):

> That's a bug. Anytime there's a crash it's a bug.

In performing this operation I took some comfort in using ssm - in the absence of any documentation I could find, beyond the suggestion that this was the tool for the job - from the changelog entry of Mon Jul 27 2015:
- Error out if file system is not supported (#1196428)
I figured that if an operation wasn't supported, then ssm would say so...

As to resizing a live system - I figured there was some risk there, but given the lack of documentation and the 'error out if file system is not supported' changelog entry, I figured again that I wouldn't be allowed to proceed if the operation couldn't complete... I was most concerned that the operation wouldn't be supported on a crypt system, as there was documentation from about 3 years back saying that ssm only supported reading crypt filesystems - I never thought ext4 would be the weak point...

Re resizing the LV before the underlying system - I had no idea that ssm would take account of the LV when operating on the underlying system. Again, the documentation seems underwhelming. What I'm trying to do seems to be precisely what ssm was made to simplify. But, in any case, doing what needed to be done to the LV before the underlying system seemed the safest option.

The best documentation I could find was:
https://fedoraproject.org/wiki/Features/SystemStorageManager
http://storagemanager.sourceforge.net/
both of which are at least 3 years old, and
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-ssm.html
which is minimal.


> Yes, I have filed a bug here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1354681

Many thanks for your follow-up - I'd be interested in your comments.

Regards
Morgan.
--
Morgan Read
<mailto:mstuffATreadDOTorgDOTnz>

Confused about DRM?
Get all the info you need at:
http://drm.info/


