Also, looking at your ceph-disk list output, the LVM on /dev/sdb is
probably backing your root filesystem and cannot be wiped. If you'd
like, send the output of the 'mount' and 'lvs' commands and we should
be able to tell.
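For example, something like the following would confirm it (the VG/LV
names below are hypothetical, just to show what to look for - an LV
whose backing device is /dev/sdb1 and which is mounted on /):

    $ mount | grep ' / '
    /dev/mapper/centos-root on / type xfs (rw,relatime)
    $ lvs -o lv_name,vg_name,devices
      LV   VG     Devices
      root centos /dev/sdb1(0)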
-- jacob
On 07/13/2018 03:42 PM, Jacob DeGlopper wrote:
You have LVM data on /dev/sdb already; you will need to remove that
before you can use ceph-disk on that device.
Use the LVM commands 'lvs', 'vgs', and 'pvs' to list the logical
volumes, volume groups, and physical volumes defined. Once you're
sure you don't need the data, lvremove, vgremove, and pvremove them,
then zero the start of the disk using 'dd if=/dev/zero of=/dev/sdb
bs=1M count=10' (a sketch of the full sequence follows below). Note
that this command wipes the disk - you must be sure that you're
wiping the right disk.
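For reference, the sequence would look roughly like this; the VG and
LV names are placeholders, so substitute whatever vgs/lvs actually
report, and double-check the target device before running the dd:

    # list physical volumes, volume groups, and logical volumes
    pvs; vgs; lvs
    # remove the stack top-down once the data is confirmed expendable
    lvremove /dev/<vgname>/<lvname>
    vgremove <vgname>
    pvremove /dev/sdb1
    # zero the first 10 MB of the disk (destructive!)
    dd if=/dev/zero of=/dev/sdb bs=1M count=10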
-- jacob
On 07/13/2018 03:26 PM, Satish Patel wrote:
I am installing Ceph on my lab box using ceph-ansible. I have two
HDDs for OSDs, and I am getting the following error on one of the
OSDs; I am not sure what the issue is.
[root@ceph-osd-01 ~]# ceph-disk prepare --cluster ceph --bluestore
/dev/sdb
ceph-disk: Error: Device /dev/sdb1 is in use by a device-mapper
mapping (dm-crypt?): dm-0
[root@ceph-osd-01 ~]# ceph-disk list
/dev/dm-0 other, xfs, mounted on /
/dev/sda :
/dev/sda1 other, xfs, mounted on /boot
/dev/sda2 swap, swap
/dev/sdb :
/dev/sdb1 other, LVM2_member
/dev/sdc :
/dev/sdc1 ceph data, active, cluster ceph, osd.3, block /dev/sdc2
/dev/sdc2 ceph block, for /dev/sdc1
/dev/sr0 other, unknown
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com