Re: cephadm: How to replace failed HDD where DB is on SSD

This test was on ceph version 15.2.8.

On Pacific (ceph version 16.2.4) the initial deployment of an entire host also works for me:

+---------+-------------+----------+----------+----------+-----+
|SERVICE  |NAME         |HOST      |DATA      |DB        |WAL  |
+---------+-------------+----------+----------+----------+-----+
|osd      |ssd-hdd-mix  |pacific1  |/dev/vdb  |/dev/vdd  |-    |
|osd      |ssd-hdd-mix  |pacific1  |/dev/vdc  |/dev/vdd  |-    |
+---------+-------------+----------+----------+----------+-----+
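For reference, such a preview can be generated with 'ceph orch apply -i
osd-spec.yml --dry-run'. A spec along these lines should match it (a sketch
only; the rotational filters are the ones from the spec quoted below, the
rest may need adjusting for other environments):

---snip---
service_type: osd
service_id: ssd-hdd-mix
placement:
  hosts:
    - pacific1
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
---snip---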

But it doesn't work if I remove one OSD, just like you describe. This is what ceph-volume reports:

---snip---
[ceph: root@pacific1 /]# ceph-volume lvm batch --report /dev/vdc --db-devices /dev/vdd --block-db-size 3G
--> passed data devices: 1 physical, 0 LVM
--> relative data size: 1.0
--> passed block_db devices: 1 physical, 0 LVM
--> 1 fast devices were passed, but none are available

Total OSDs: 0

  Type            Path                                  LV Size         % of device
---snip---
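
The "1 fast devices were passed, but none are available" line suggests that
ceph-volume no longer considers /dev/vdd available, presumably because the DB
LV of the surviving OSD still occupies it. To verify, one can check the
inventory and the LVM layout from the cephadm shell (plain ceph-volume and
LVM tooling, nothing here is specific to my setup):

---snip---
[ceph: root@pacific1 /]# ceph-volume inventory /dev/vdd
[ceph: root@pacific1 /]# vgs
[ceph: root@pacific1 /]# lvs -o lv_name,vg_name,lv_size
---snip---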

I know that this already worked in Octopus; I tested it successfully not long ago.
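
Until that works again, a manual replacement should be possible. A rough
sketch (the OSD ID and the VG/LV names are made up, the 3G matches the
--block-db-size from the report above):

---snip---
[ceph: root@pacific1 /]# ceph orch osd rm 1 --replace
# once the OSD is drained and removed: create a new 3G DB LV on the
# existing VG of /dev/vdd ('vgs' shows the real VG name)
[ceph: root@pacific1 /]# lvcreate -L 3G -n osd-db-new ceph-vg-vdd
[ceph: root@pacific1 /]# ceph-volume lvm prepare --data /dev/vdc --block.db ceph-vg-vdd/osd-db-new
---snip---

After that the new OSD still has to be activated through cephadm; as far as I
know Pacific has 'ceph cephadm osd activate <host>' for that.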


Quoting Kai Stian Olstad <ceph+list@xxxxxxxxxx>:

On 27.05.2021 11:17, Eugen Block wrote:
That's not how it's supposed to work. I tried the same on an Octopus
cluster and removed all filters except:

data_devices:
 rotational: 1
db_devices:
 rotational: 0

My Octopus test OSD nodes have two HDDs and one SSD; I removed all
OSDs and redeployed on one node. This spec file results in three
standalone OSDs! Without the other filters this doesn't seem to work
as expected. I'll try again on Pacific with the same test and see
where that goes.

This spec did work for me when I initially deployed with Octopus 15.2.5.

--
Kai Stian Olstad


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


