Re: cephadm: How to replace failed HDD where DB is on SSD


 



Yes, the LVs are not removed automatically; you need to free up the VG. There are a couple of ways to do so, for example remotely:

pacific1:~ # ceph orch device zap pacific4 /dev/vdb --force
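
If you want to double-check the device first, the orchestrator can list what it sees on that host (just to verify, not required for the zap):

pacific1:~ # ceph orch device ls pacific4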

or directly on the host with:

pacific1:~ # cephadm ceph-volume lvm zap --destroy /dev/<CEPH_VG>/<DB_LV>
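
If you're not sure which VG/LV on the SSD belonged to the failed OSD, ceph-volume should show the mapping when run on that host (each OSD is listed with its block and db devices):

pacific1:~ # cephadm ceph-volume lvm list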



Quoting Kai Stian Olstad <ceph+list@xxxxxxxxxx>:

On 26.05.2021 08:22, Eugen Block wrote:
Hi,

did you wipe the LV on the SSD that was assigned to the failed HDD? I
just did that successfully on a fresh Pacific install; a couple of
weeks ago it also worked on an Octopus cluster.

No, I did not wipe the LV.
Not sure what you mean by wipe, so I tried overwriting the LV with /dev/zero, but that did not solve it.
So I guess by wipe you mean deleting the LV with lvremove?


--
Kai Stian Olstad


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


