Hi,
when I want to keep an OSD ID, this is what I've been doing so far:
...
ceph osd destroy <osd id> --yes-i-really-mean-it
... replace disk ...
[ ceph-volume lvm zap --destroy /dev/<new disk> ]
ceph-volume lvm prepare --bluestore --osd-id <osd id> --data /dev/<new disk> [ --block.db <some volume group>/<some logical volume> ] [ --block.wal <some other volume group>/<some other logical volume> ]
... retrieve <osd fsid>, e.g. with ceph-volume lvm list ...
ceph-volume lvm activate --bluestore <osd id> <osd fsid>
...
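For example, assuming the OSD being replaced is osd.12 and the new disk shows up as /dev/sdx (the id, the device name and the db/wal volume names below are only placeholders for illustration), the whole sequence could look roughly like this:

# keep the id reserved while removing the old disk
ceph osd destroy 12 --yes-i-really-mean-it
# physically swap the disk, then wipe any leftovers on the new one
ceph-volume lvm zap --destroy /dev/sdx
# recreate the OSD on the new disk, reusing id 12
ceph-volume lvm prepare --bluestore --osd-id 12 --data /dev/sdx \
    --block.db ceph-db/db-12 --block.wal ceph-wal/wal-12
# look up the freshly created osd fsid
ceph-volume lvm list
# start it again with the fsid reported above
ceph-volume lvm activate --bluestore 12 <osd fsid>
# osd.12 should now show up again in the tree
ceph osd tree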
So far that seems to work fine.
cheers, toBias
From: "Nicola Mori" <mori@xxxxxxxxxx>
To: "Janne Johansson" <icepic.dz@xxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxx>
Sent: Wednesday, 11 December, 2024 11:44:57
Subject: Re: Correct way to replace working OSD disk keeping the same OSD ID
To: "Janne Johansson" <icepic.dz@xxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxx>
Sent: Wednesday, 11 December, 2024 11:44:57
Subject: Re: Correct way to replace working OSD disk keeping the same OSD ID
Thanks for your insight. So if I remove an OSD without --replace, its ID
won't be reused when I e.g. add a new host with new disks? Even if I
completely remove it from the cluster? I'm asking because I maintain a
failure log per OSD, and I'd like to avoid an OSD previously in host A
migrating to host B at some point.
thanks again,
Nicola
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx