Re: Adding new server to existing ceph cluster - with separate block.db on NVME

On 29.03.23 01:09, Robert W. Eckert wrote:

> I did miss seeing the db_devices part for "ceph orch apply" - that would have saved a lot of effort. Does osds_per_device create the partitions on the db device?

No, osds_per_device creates multiple OSDs on one data device. It can be useful for NVMe, but do not use it on HDDs.

The command automatically creates as many DB slots on the db_device as there are data devices passed to it.

If you want more slots for the RocksDB, pass the db_slots parameter.
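
For reference, a minimal sketch of an OSD service specification that puts the RocksDB on NVMe. The service_id, host pattern and the db_slots value are illustrative assumptions, not values from this thread:

	service_type: osd
	service_id: hdd_with_nvme_db        # hypothetical name
	placement:
	  host_pattern: '*'                 # adjust to your hosts
	spec:
	  data_devices:
	    rotational: 1                   # HDDs become the data devices
	  db_devices:
	    rotational: 0                   # NVMe/SSD devices hold the RocksDB
	  db_slots: 12                      # optional: request more DB slots than data devices

Apply it with "ceph orch apply -i osd_spec.yml".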

> Also, is there any way to disable --all-available-devices if it was turned on?

> The
> 	ceph orch apply osd --all-available-devices --unmanaged=true
> command doesn't seem to disable the behavior of adding new drives.

You can set the service to unmanaged when exporting the specification.

ceph orch ls osd --export > osd.yml

Edit osd.yml and add "unmanaged: true" to the specification. After that

ceph orch apply -i osd.yml
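
For illustration, the edited osd.yml might then look like this. The service_id and the data_devices filter match the defaults created by --all-available-devices, but treat them as assumptions and keep whatever your export actually produced:

	service_type: osd
	service_id: all-available-devices   # name as created by the apply command (assumption)
	unmanaged: true                     # the orchestrator stops adding new drives
	placement:
	  host_pattern: '*'
	spec:
	  data_devices:
	    all: true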

Or you could just remove the specification with "ceph orch rm NAME".
The OSD service will be removed but the OSDs themselves will remain.

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



