Not sure if ceph-deploy has similar functionality, but running 'ceph-volume lvm zap <device> --destroy' on the target machine would have removed the LVM mapping.
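For example, something along these lines on the OSD host (the device name is only an illustration, adjust it to your own disks):

# wipe the leftover LVM metadata and the underlying device
ceph-volume lvm zap /dev/sdb --destroy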
On Jun 10, 2018, 14:41 +0300, Max Cuttins <max@xxxxxxxxxxxxx>, wrote:
I solved it by myself.
I'm writing my findings here to save some working hours for others.
It sounds strange that nobody knew this: the issue is that the data is purged, but the LVM partitions are left in place.
This means you need to remove them manually.
I just reinstalled the whole OS, and on the data disks there are still LVM partitions named "ceph-*". These partitions are ACTIVE by default.
To get rid of the old data:
# find the disks
lsblk
In the output, find all the "ceph-*" volume groups and remove them:
vgchange -a n ceph-XXXXXXXXXXXXXXXXXX
vgremove ceph-XXXXXXXXXXXXXXXXXXX

Do it for all disks.
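If there are many of them, a short loop along these lines should take care of every leftover "ceph-*" volume group at once (just a sketch using the standard LVM tools, double-check the list of names before running it):

# list every volume group whose name starts with ceph-, then deactivate and remove it
for vg in $(vgs --noheadings -o vg_name | grep 'ceph-'); do
    vgchange -a n "$vg"
    vgremove -y "$vg"
done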
Now you can run ceph-deploy osd create correctly without being warned that the disk is in use.
On 06/06/2018 19:41, Max Cuttins wrote:

Hi everybody,
I would like to start from scratch.
However, the last time I ran the command to purge everything, I ran into an issue.
I had a completely cleaned-up system as expected, but the disks were still marked as OSDs, and the new installation refused to overwrite disks in use.
The only way to make it work was to manually format the disks with fdisk and zap them again with ceph later.
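(Roughly something like this per disk; the device name is only an example, and wipefs is shown here instead of interactive fdisk just to keep it to one line:

wipefs -a /dev/sdb
ceph-volume lvm zap /dev/sdb
)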
Is there something I should do before purging everything so that I don't run into this issue again?
Thanks,
Max
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com