Hi,
1) Do I need to erase the wal/db data on the ssd1/db327 Logical Volume? If
so, how should I do that?
Yes, the data on that LV is useless without the OSD. I always use
ceph-volume lvm zap --destroy /dev/ceph-journals/journal1
This zaps and destroys the LV completely so you can re-use it for the new OSD.
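Applied to your setup, the equivalent should be something like this (double-check the VG/LV name first):

ceph-volume lvm zap --destroy ssd1/db327

Depending on the ceph-volume version, --destroy may remove the LV itself; if it does, recreate db327 with lvcreate before redeploying.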
If you want to re-use an existing OSD ID, that OSD has to be marked as
"destroyed", not just removed (otherwise ceph-volume lvm create will fail
when given --osd-id and you'll have to run it without that flag):
ceph-2:~ # ceph osd destroy-actual 1 --yes-i-really-mean-it
destroyed osd.1
ceph-2:~ # ceph osd tree | grep destroy
1 hdd 0.02339 osd.1 destroyed 1.00000 1.00000
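For your disk that would be osd.327, e.g.:

ceph osd destroy 327 --yes-i-really-mean-it

(the documented `ceph osd destroy` wrapper ends up issuing destroy-actual, so either spelling should get you the same result).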
2) Assuming 1) is taken care of (and the "old" OSD is destroyed and the
"bad" hard drive has been physically replaced with a new one), does this
command look correct? `ceph-volume lvm create --osd-id 327 --bluestore
--data /dev/sdai --block.db ssd1/db327`
The command looks good (if the OSD-ID still exists).
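After running it you can verify that the new OSD picked up both devices with something like:

ceph-volume lvm list /dev/sdai

which should show the block device and the block.db LV (ssd1/db327) attached to osd.327.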
Regards,
Eugen
Quoting "Hayashida, Mami" <mami.hayashida@xxxxxxx>:
We are running the Mimic version of Ceph (13.2.6) and I would like to know
a proper way of replacing a defective OSD disk that has its DB and WAL on a
separate SSD drive which is shared with 9 other OSDs. More specifically,
the failing disk for osd.327 is on /dev/sdai and its wal/db are on
/dev/sdc, which is partitioned into 10 LVs, holding wal/db for osd.320-329.
When I deployed it, I used pv/vg/lvcreate commands to make VG named ssd1,
LV named db320, db321 and so on. Then I used the ceph-deploy command from
an admin node (`ceph-deploy osd create --block-db=ssd1/db327
--data=/dev/sdai <node>`). My main question is what to do about the
separate wal/db data as this page (
https://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/) does not
seem to address the issue.
1) Do I need to erase the wal/db data on the ssd1/db327 Logical Volume? If
so, how should I do that?
2) Assuming 1) is taken care of (and the "old" OSD is destroyed and the
"bad" hard drive has been physically replaced with a new one), does this
command look correct? `ceph-volume lvm create --osd-id 327 --bluestore
--data /dev/sdai --block.db ssd1/db327`
*Mami Hayashida*
*Research Computing Associate*
Univ. of Kentucky ITS Research Computing Infrastructure