On Fri, Apr 13, 2018 at 8:20 PM, Rhian Resnick <rresnick@xxxxxxx> wrote:
Evening,
When attempting to create an OSD we receive the following error.
[ceph-admin@ceph-storage3 ~]$ sudo ceph-volume lvm create --bluestore --data /dev/sdu
Running command: ceph-authtool --gen-print-key
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c8cb8cff-dad9-48b8-8d77-6f130a4b629d
--> Was unable to complete a new OSD, will rollback changes
--> OSD will be fully purged from the cluster, because the ID was generated
Running command: ceph osd purge osd.140 --yes-i-really-mean-it
 stderr: purged osd.140
--> MultipleVGsError: Got more than 1 result looking for volume group: ceph-6a2e8f21-bca2-492b-8869-eecc995216cc

Any hints on what to do? This occurs whenever we attempt to create OSDs on this node.
Can you post the contents of /var/log/ceph/ceph-volume.log to a paste site? Also, could you try the same command, but with:
CEPH_VOLUME_DEBUG=1
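For clarity, the environment variable goes in front of the same create command you ran before, e.g. (a sketch, assuming the same /dev/sdu device from your transcript):

```shell
# Re-run the failing create with ceph-volume debugging enabled so the
# full traceback is printed instead of the summarized error:
sudo CEPH_VOLUME_DEBUG=1 ceph-volume lvm create --bluestore --data /dev/sdu
```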
I think you are hitting two issues here:
1) Somehow the `osd new` step is failing before it completes.
2) The `purge` rollback that wipes out the LV is finding multiple matching volume groups and cannot be sure which one it actually used.
#2 definitely looks like something we are doing wrong, and #1 can have a lot of different causes. The logs would be tremendously helpful!
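In the meantime, you can confirm the duplicate-VG condition yourself with standard LVM tooling (a sketch, using the VG name from your error output; two or more lines of output means two VGs share that name):

```shell
# List all volume group names and look for the one ceph-volume complained
# about; duplicates here are what trigger MultipleVGsError:
sudo vgs --noheadings -o vg_name | grep 6a2e8f21-bca2-492b-8869-eecc995216cc
```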
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com