Strangely enough, I’m also seeing similar user issues – an unusually high volume of corrupt instance boot disks.
At this point I’m attributing it to the fact that our Ceph cluster is patched 9 months ahead of our RedHat OSP Kilo environment. However, that’s a total guess at this point.
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of "Keynes_Lee@xxxxxxxxxxx" <Keynes_Lee@xxxxxxxxxxx>

Hum ~~~ seems we have something in common. We use
rbd snap create
to make snapshots of the instances' volumes, and the rbd export and rbd export-diff commands to make daily backups. We now have 29 instances and 33 volumes.
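For illustration, a minimal sketch of that snapshot + export/export-diff cycle (pool, image, snapshot, and path names below are placeholders, not our actual ones):

    # one-time: take a base snapshot and do a full export of it
    rbd snap create volumes/volume-0001@base
    rbd export volumes/volume-0001@base /backup/volume-0001-base.img

    # daily: take a new snapshot and export only the changes since the previous snapshot
    rbd snap create volumes/volume-0001@snap-today
    rbd export-diff --from-snap snap-prev volumes/volume-0001@snap-today /backup/volume-0001-today.diff

The diffs can later be replayed onto the base image with rbd import-diff if a restore is needed.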
From: Ahmed Mostafa [mailto:ahmedmostafadev@xxxxxxxxx]
Actually I have the same problem when starting instances backed by librbd, but it only happens when trying to start 60+ instances at once. I had decided it was due to the fact that we are using old hardware that is not able to keep up with that demand. Could that be the same issue that you are facing?