Re: How to replace an HDD in an OSD with shared SSD for DB/WAL

Hi,

On 21.04.23 05:44, Tao LIU wrote:

I built a Ceph cluster with cephadm.
Every Ceph node has 4 OSDs. These 4 OSDs were built with 4 HDDs (block) and 1
SSD (DB).
At present, one HDD is broken, and I am trying to replace the HDD and
rebuild the OSD with the new HDD and the free space of the SSD. I did the
following:

#ceph osd stop osd.23
#ceph osd out osd.23
#ceph osd crush remove osd.23
#ceph osd rm osd.23
#ceph orch daemon rm osd.23 --force
#lvremove /dev/ceph-ae21e618-601e-4273-9185-99180edb8453/osd-block-96eda371-1a3f-4139-9123-24ec1ba362c4
#wipefs -af /dev/sda
#lvremove /dev/ceph-e50203a6-8b8e-480f-965c-790e21515395/osd-db-70f7a032-cf2c-4964-b979-2b90f43f2216
#ceph orch daemon add osd
compute11:data_devices=/dev/sda,db_devices=/dev/sdc,osds_per_device=1

The OSD can be created, but it always stays down.

Is there anything that I missed during the build?

Assuming /dev/ceph-UUID/osd-db-UUID is the logical volume holding the old OSD's DB, you could have run this instead:

ceph orch osd rm 23

replace the faulty HDD

ceph orch daemon add osd compute11:data_devices=/dev/sda,db_devices=ceph-UUID/osd-db-UUID

This reuses the existing logical volume on the SSD for the OSD DB, so there is no need to lvremove it or wipe the SSD.
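For reference, here is a sketch of the whole replacement flow as I would run it. The OSD id, hostname, device names and the ceph-UUID/osd-db-UUID placeholders are taken from this thread; verify the actual VG/LV names on your node with ceph-volume before running anything:

```shell
# On the OSD host: list which block and db LVs each OSD uses,
# and note the osd-db VG/LV belonging to osd.23.
ceph-volume lvm list

# Schedule removal of the OSD; --replace marks osd.23 as "destroyed"
# instead of deleting it from the CRUSH map, so the id is kept for
# the replacement and no extra rebalancing is triggered.
ceph orch osd rm 23 --replace

# Wait for draining to finish ("ceph orch osd rm status"), then
# physically swap the broken HDD.
# Do NOT lvremove the osd-db LV on the SSD -- it will be reused.

# Recreate the OSD on the new disk, pointing db_devices at the
# existing DB logical volume rather than at the whole SSD:
ceph orch daemon add osd compute11:data_devices=/dev/sda,db_devices=ceph-UUID/osd-db-UUID
```

Passing the whole SSD as db_devices (as in the original attempt) makes ceph-volume try to carve a new DB LV out of it, which fails or misbehaves when the disk is already fully allocated to the other OSDs' DB volumes; pointing at the existing LV avoids that.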

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Managing Director (Geschäftsführer): Peer Heinlein - Registered office: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



