No change:
root@polstor01:/home/urzadmin# ceph-volume lvm zap --destroy /dev/dm-0
--> Zapping: /dev/dm-0
Running command: /sbin/cryptsetup status /dev/mapper/
 stdout: /dev/mapper/ is inactive.
--> Skipping --destroy because no associated physical volumes are found for /dev/dm-0
Running command: wipefs --all /dev/dm-0
 stderr: wipefs: error: /dev/dm-0: probing initialization failed: Device or resource busy
--> RuntimeError: command returned non-zero exit status: 1
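For what it's worth, something along these lines might show what is still holding the dm device open before the zap - just a sketch, not verified against this exact multipath setup:

multipath -ll                  # list the multipath maps and the paths behind them
dmsetup info -c /dev/dm-0      # an open count > 0 means something still uses the map
lsblk /dev/dm-0                # partitions or LVs still stacked on top of the map
pvs /dev/dm-0                  # any leftover LVM PV/VG from the old OSD
# if a stale VG shows up, deactivating it first may free the device:
# vgchange -an <vg_name>       # <vg_name> = whatever pvs reported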
On 12.06.2018 09:03, Linh Vu wrote:
ceph-volume lvm zap --destroy $DEVICE
Thanks Sergey.
Could you be a bit more specific? When I look into the manpage of ceph-volume I can't find an option named "--destroy".
Just to make it clear - this script has already migrated several servers. The problem only appears when it has to migrate the devices in the expansion shelf:
"--> RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an existing device is needed"
Cheers,
Vadim
I would say the handling of devices
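A quick way to check whether the installed ceph-volume release knows the flag at all, independent of the manpage, would be something like:

ceph-volume lvm zap --help | grep -- --destroy   # prints the flag and its help text if it is supported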
On 11.06.2018 23:58, Sergey Malinin wrote:
The “Device or resource busy” error arises when no “--destroy” option is passed to ceph-volume.
On Jun 11, 2018, 22:44 +0300, Vadim Bulst <vadim.bulst@xxxxxxxxxxxxxx>, wrote:
Dear Cephers,
I'm trying to migrate our OSDs to Bluestore using this
little script:
#!/bin/bash
HOSTNAME=$(hostname -s)
# collect the ids of all filestore OSDs that live on this host
OSDS=`ceph osd metadata | jq -c '[.[] | select(.osd_objectstore | contains("filestore")) ]' | jq '[.[] | select(.hostname | contains("'${HOSTNAME}'")) ]' | jq '.[].id'`
IFS=' ' read -a OSDARRAY <<<$OSDS

for OSD in "${OSDARRAY[@]}"; do
    # look up the data device backing this OSD
    DEV=/dev/`ceph osd metadata | jq -c '.[] | select(.id=='${OSD}') | .backend_filestore_dev_node' | sed 's/"//g'`
    echo "=== Migrating OSD nr ${OSD} on device ${DEV} ==="
    ceph osd out ${OSD}
    while ! ceph osd safe-to-destroy ${OSD} ; do echo "waiting for full evacuation"; sleep 60 ; done
    systemctl stop ceph-osd@${OSD}
    umount /var/lib/ceph/osd/ceph-${OSD}
    /usr/sbin/ceph-volume lvm zap ${DEV}
    ceph osd destroy ${OSD} --yes-i-really-mean-it
    /usr/sbin/ceph-volume lvm create --bluestore --data ${DEV} --osd-id ${OSD}
done
Under normal circumstances this works flawlessly. Unfortunately, in our case we have expansion shelves connected as multipath devices to our nodes, and /usr/sbin/ceph-volume lvm zap ${DEV} breaks with an error:
OSD(s) 1 are safe to destroy without reducing data durability.
--> Zapping: /dev/dm-0
Running command: /sbin/cryptsetup status /dev/mapper/
 stdout: /dev/mapper/ is inactive.
Running command: wipefs --all /dev/dm-0
 stderr: wipefs: error: /dev/dm-0: probing initialization failed: Device or resource busy
--> RuntimeError: command returned non-zero exit status: 1
destroyed osd.1
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd tree -f json
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 74f6ff02-d027-4fc6-9b93-3a96d7535c8f 1
--> Was unable to complete a new OSD, will rollback changes
--> OSD will be destroyed, keeping the ID because it was provided with --osd-id
Running command: ceph osd destroy osd.1 --yes-i-really-mean-it
 stderr: destroyed osd.1
--> RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an existing device is needed
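Since the rollback complains that a vg/lv path or an existing device is needed, one workaround might be to build the LVM layer on the multipath map by hand and pass the LV to ceph-volume instead of the raw /dev/dm-0 node - only a sketch with placeholder names (mpatha, vg_osd1, lv_osd1), not tested on this setup:

pvcreate /dev/mapper/mpatha                # mpatha = the multipath map of the old OSD disk
vgcreate vg_osd1 /dev/mapper/mpatha
lvcreate -l 100%FREE -n lv_osd1 vg_osd1
ceph-volume lvm create --bluestore --data vg_osd1/lv_osd1 --osd-id 1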
Does anybody know how to solve this problem?
Cheers,
Vadim
--
Vadim Bulst
Universität Leipzig / URZ
04109 Leipzig, Augustusplatz 10
phone: +49-341-97-33380
mail: vadim.bulst@xxxxxxxxxxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com