Hi
The server runs 15.2.9 and has 15 HDDs and 3 SSDs.
The OSDs were created with this YAML file:
hdd.yml
--------
service_type: osd
service_id: hdd
placement:
  host_pattern: 'pech-hd-*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
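
For completeness, the spec was applied the usual way:

# ceph orch apply osd -i hdd.yml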
The result was that the 3 SSDs were added to 1 VG with 15 LVs on it.
# vgs | egrep "VG|dbs"
  VG                                                  #PV #LV #SN Attr   VSize  VFree
  ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b   3  15   0 wz--n- <5.24t 48.00m
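
If anyone wants to see the layout in more detail, the individual DB LVs
can be listed with something like:

# lvs -o lv_name,vg_name,lv_size ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b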
One of the OSDs failed, so I ran rm with --replace:
# ceph orch osd rm 178 --replace
and the result is:
# ceph osd tree | egrep "ID|destroyed"
ID   CLASS WEIGHT    TYPE NAME    STATUS     REWEIGHT PRI-AFF
178  hdd   12.82390       osd.178  destroyed        0 1.00000
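
As far as I can tell the removal is also tracked by the orchestrator,
which should be visible with:

# ceph orch osd rm status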
But I'm not able to replace the disk with the same YAML file as shown
above; the dry run comes back with an empty preview.
# ceph orch apply osd -i hdd.yml --dry-run
################
OSDSPEC PREVIEWS
################
+---------+------+------+------+----+-----+
|SERVICE |NAME |HOST |DATA |DB |WAL |
+---------+------+------+------+----+-----+
+---------+------+------+------+----+-----+
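
In case the orchestrator simply had a stale inventory, I assume the
device list can be refreshed before retrying the dry run, with something
like:

# ceph orch device ls --refresh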
I guess this is the wrong way to do it, but I can't find the answer in
the documentation.
So how can I replace this failed disk in Cephadm?
--
Kai Stian Olstad