Wow, thanks for the heads-up, Jason. That explains a lot. I followed the instructions here:
http://ceph.com/releases/v12-2-0-luminous-released/, which apparently left out that step. I have now executed that command.
Is there a new master list of the CLI commands?
From: Jason Dillaman [mailto:jdillama@xxxxxxxxxx]
Sent: Wednesday, November 8, 2017 9:53 AM
To: James Forde <jimf@xxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] VM Data corruption shortly after Luminous Upgrade
Are your QEMU VMs using a different CephX user than client.admin? If so, can you double-check your caps to ensure that the QEMU user can blacklist? See step 6 in the upgrade instructions [1]. The fact that "rbd resize" fixed something hints
that your VMs had hard-crashed with the exclusive lock left in the locked position and QEMU wasn't able to break the lock when the VMs were restarted.
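For example, the caps can be checked and, if needed, updated along these lines (client and pool names here are only illustrative; substitute whatever user your QEMU VMs actually use):

# show the current caps for the client the VMs connect as
ceph auth get client.libvirt

# grant the rbd profiles, which include permission to blacklist stale clients
ceph auth caps client.libvirt mon 'profile rbd' osd 'profile rbd pool=vms'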
On Wed, Nov 8, 2017 at 10:29 AM, James Forde <jimf@xxxxxxxxx> wrote:
Title probably should have read “Ceph Data corruption shortly after Luminous Upgrade”
The problem seems to have been sorted out. Still not sure what caused the original problem; upgrade latency, or mgr errors?
After I resolved the boot problem I attempted to reproduce the error, but was unsuccessful, which is good. HEALTH_OK.
Anyway, for future users running into the Windows "Unmountable Boot Volume" error, or CentOS 7 booting into emergency mode, HERE IS THE SOLUTION:
Get the rbd image's size, increase it by 1 GB, and restart the VM. That's it. All VMs booted right up after increasing the rbd image by 1024 MB. It takes just a couple of seconds.
rbd info vmtest
rbd image 'vmtest':
        size 20480 MB
rbd resize --image vmtest --size 21504
rbd info vmtest
rbd image 'vmtest':
        size 21504 MB
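To check whether an image is still holding a stale exclusive lock before resizing, something like the following should show the lock and any watchers (same image name as above):

rbd lock ls vmtest     # lists any locks currently held on the image
rbd status vmtest      # shows the clients currently watching the image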
Good luck
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com