But it still leaves entries in the crush map, possibly also in ceph auth ls, and the directory in /var/lib/ceph/osd.

-----Original Message-----
From: Oliver Freyermuth [mailto:freyermuth@xxxxxxxxxxxxxxxxxx]
Sent: Saturday, 2 June 2018 18:29
To: Marc Roos; ceph-users
Subject: Re: Bug? ceph-volume zap not working

The command mapping from ceph-disk to ceph-volume is certainly not 1:1. What we ended up using is:

ceph-volume lvm zap /dev/sda --destroy

This takes care of destroying the PVs and LVs (as the documentation says).

Cheers,
Oliver

On 02.06.2018 at 12:16, Marc Roos wrote:
>
> I guess zap should be used instead of destroy? Maybe keep ceph-disk
> backwards compatibility and keep destroy?
>
> [root@c03 bootstrap-osd]# ceph-volume lvm zap /dev/sdf
> --> Zapping: /dev/sdf
> --> Unmounting /var/lib/ceph/osd/ceph-19
> Running command: umount -v /var/lib/ceph/osd/ceph-19
>  stderr: umount: /var/lib/ceph/osd/ceph-19 (tmpfs) unmounted
> Running command: wipefs --all /dev/sdf
>  stderr: wipefs: error: /dev/sdf: probing initialization failed:
> Device or resource busy
> --> RuntimeError: command returned non-zero exit status: 1
>
> The PVs/LVs are still there; I guess these are keeping the resource
> busy.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
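The leftover crush map entries, auth keys, and /var/lib/ceph/osd directory mentioned at the top are removed by the standard OSD-removal steps, not by zap. A sketch, assuming the OSD id is 19 (as in the output below) and that the daemon is managed by systemd; on Luminous and later, `ceph osd purge` collapses the three map/auth steps into one:

```shell
# Stop the OSD daemon first (systemd unit name assumed for this deployment)
systemctl stop ceph-osd@19

# Remove the OSD from the crush map, delete its auth key,
# and remove it from the OSD map:
ceph osd crush remove osd.19
ceph auth del osd.19
ceph osd rm 19

# Luminous and later can replace the three commands above with:
#   ceph osd purge 19 --yes-i-really-mean-it

# Finally, clean up the leftover directory:
rm -rf /var/lib/ceph/osd/ceph-19
```

These commands act on a live cluster, so double-check the OSD id before running them.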