On Tue, Jul 12, 2016 at 5:35 AM, Morgan Read <mstuff@xxxxxxxxxxx> wrote:
>
> Well, it seems to have reduced the volumes, but not the filesystems:
> ----------------------------------------------------------------------------------------------------
> Volume                                                  Pool        Volume size  FS    FS size    Free      Type   Mount point
> ----------------------------------------------------------------------------------------------------
> ...
> /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316   crypt_pool  192.73 GB    ext4  195.73 GB  21.40 GB  crypt  /home
> /dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6   crypt_pool  17.00 GB     ext4  20.00 GB   14.70 GB  crypt  /

You're right, this is damn peculiar. The way I'm reading this, each LV
is separately encrypted, is that right? And then it's the dmcrypt/LUKS
volume that is formatted ext4? So:

ext4 | LUKS | LV | VG | PV | Disk

If so, as far as I can tell it's correct to point it at the LUKS
volume, the thing that is mounted.

> I've cut and pasted direct from the terminal - no omissions or
> additions to the series of commands and outputs.
>
> Following the two above commands, ssm lists the two volumes as reduced
> in size by 3G, but the file system's size (FS size) as remaining the
> same...

But what I'm seeing is that *ONLY* the LUKS volume was reduced. The LV
is the same size as before. ssm is apparently confused about the stack
relationships: it's treating the command literally, for only the
dmcrypt volume, not the file system and not the LV. Offhand I'd say
that's a really big bug.

>> So why don't you have any messages about what actually ssm resize did?
>
> Hmm, don't know - I generally understand that no message is a good
> message: 'all done'?

Nooo. It *had* to ask, and it had to fail on an actively mounted / or
/home. There's no way to unmount those even if you give permission.

>> I can't tell if it did anything at all, which means it probably did
>> not resize the file system. This appears to be true when looking at
>> ssm list results after you did this resize. The file systems are still
>> mounted at /home and /, and they are still the same size as before, no
>> change.
>
> Hmm, yes - but volume size has changed - re my following email, it
> seems to be a discrepancy between the superblock or partition table,
> which I was trying to correct - is there a way to edit the superblock
> to conform to the partition table?

Only for grow. For shrink it's too late: there's no way to access the
now-missing space at the end of the volume. But you can ask on the ext4
list what the chances are of resizing the file system once the
partition (the LUKS volume, in this case) has already been shrunk, out
of the usual order.

The LUKS volume shrinking by itself doesn't immediately cause a
problem. It's the subsequent shrink of the LV, which returns the
extents used by the file system to the VG; and after that there was a
grow of a different LV, which would have moved those extents from the
VG to that LV, and then the fs resize there would have stepped on all
that data. So the portion removed from /home and / is more than likely
just obliterated. You'd have to ask on the ext4 list to be sure whether
this is fixable. But my expectation from reading resize.c for ext4 is
that it will not resize a file system after the fact: there's required
accounting that has to be done, and if it can't be done the operation
fails.

e2fsck might be able to determine that there was no meaningful data or
metadata in the missing portion, and could fix this *IF* the LUKS
volume is returned to the original size the fs thinks it's sitting on.
But I expect e2fsck to fail too, so long as the device it's on is
smaller than the fs says it should be, because it cannot fix the
metadata in the missing 3GiB portion, and e2fsck can't do a shrink
while fixing. So it's a catch-22: the fs can't be shrunk now because
the proper accounting can't be done, and it can't be fixed until the
device is resized to match the size the fs expects - and even then the
fixing may fail for multiple reasons.
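If you want to try that anyway, the first step is at least cheap here,
because your ssm output suggests the LVs themselves were never shrunk -
only the dm-crypt mappings were. A minimal sketch, run from live media
with the volume unlocked but NOT mounted, and assuming the extents
really weren't reused in the meantime (no promises this works):

# With no --size argument, cryptsetup resizes the mapping to fill the
# underlying device, i.e. back to the full size of the (unchanged) LV:
cryptsetup resize luks-a69b434b-c409-4612-a51e-4bb0162cb316

# Then see what e2fsck makes of it:
e2fsck -f /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316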
> Docs, docs, docs!
>
>>> [root@morgansmachine ~]# ssm resize -s+3G /dev/fedora_morgansmachine/var
>>>   Size of logical volume fedora_morgansmachine/var changed from
>>> 3.00 GiB (768 extents) to 6.00 GiB (1536 extents).
>>>   Logical volume var successfully resized.
>>> [root@morgansmachine ~]# ssm resize -s+3G
>>> /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
>>> SSM Error (2005): There is not enough space in the pool 'none' to grow
>>> volume '/dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f' to size
>>> 6289408.0 KB!
>>
>> I think the problem here is that ssm is now confused somehow. You
>> should have just done the 2nd command on the file system itself,
>> because ssm will know that it first must increase the size of the LV,
>> and then the size of the LUKS volume, and then the fs. But you only
>> increased the size of the LV, not the LUKS volume, which now has a
>> different size than its underlying LV, so SSM seems to get stuck.
>
> Isn't
>>> [root@morgansmachine ~]# ssm resize -s+3G
>>> /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
> an attempt to increase the size of the LUKS volume?

Now I don't know. I'd expect ssm - the whole point of it is that it
should understand the layering - to know that a shrink has to resize
the file system first, then the LUKS volume, then the LV, in that
order; and to use the exact reverse order for a grow. But it only
changed LUKS, apparently. When I tried it on Fedora 24, without LUKS,
it did what I expected, but I'd have to retry with LUKS to see whether
it gets confused.
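For reference, here's roughly what those two orderings look like done
by hand, bypassing ssm. This is a sketch, not a tested recipe: I'm
assuming the /home LV is named fedora_morgansmachine/home (the LV names
behind the luks-* devices aren't shown in your output), and the sizes
are illustrative. The shrink has to be run from live media, since /home
can't be shrunk while mounted:

# Shrink: top to bottom - fs first, then the dm-crypt mapping, then the
# LV. Keep each layer a little smaller than the one below it; the exact
# sector arithmetic has to allow for the LUKS header at the start of
# the LV.
umount /home
e2fsck -f /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316
resize2fs /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316 188G
cryptsetup resize --size 396361728 \
    luks-a69b434b-c409-4612-a51e-4bb0162cb316   # 189 GiB in 512-byte sectors
lvreduce -L 190G fedora_morgansmachine/home     # LV name assumed - adjust to yours

# Grow: bottom to top - LV first, then the mapping, then the fs. ext4
# grows online, so this part can be done while mounted:
lvextend -L +3G fedora_morgansmachine/home
cryptsetup resize luks-a69b434b-c409-4612-a51e-4bb0162cb316     # no --size: fill the enlarged LV
resize2fs /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316 # no size: fill the mapping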
> As to resizing a live system - I figured there was some risk there,
> but due to the lack of documentation and the 'error out if filesystem
> is not supported' figured again that I wouldn't be allowed to complete
> if it couldn't complete... I was most concerned that the operation
> wouldn't be supported on a crypt system, as there was documentation
> about 3 years back that ssm only supported reading crypt filesystems -
> never thought ext4 would be the weak point...

resize2fs was apparently never even asked, otherwise it would have
failed: there is user-space code that checks for a mount, and it will
not shrink a mounted file system. It had to fail. ssm must be missing
some logic checks if it totally silently reduces only LUKS, which is a
rather nonsensical operation. How else do you resize the file system in
such a case but by pointing at the logical block device the file system
is on, which is exactly what you did? But it did not attempt an fs
resize first. I think it's a bug.

> Re resizing the LV before the underlying system - I had no idea that
> ssm would take account of the LV when operating on the underlying
> system. Again, documentation seems underwhelming. What I'm trying to
> do seems to be precisely what ssm was made to simplify and do. But, in
> any case, doing what needed to be done to the LV before the underlying
> system seemed the safest option.

I don't know what you mean by the last sentence. The LV is the
underlying system: the LUKS volume is above that, and the fs is above
that. Shrink has to be done top to bottom, which is what it seems you
started out doing. But then, before really confirming the fs had been
resized, you shrank the LV and ignored the warnings - warnings that at
that point just seemed like ass-covering rather than the
you-are-definitely-going-to-lose-data-now kind.

-- 
Chris Murphy