Hi,
did you wipe the LV on the SSD that was assigned to the failed HDD?
(Rough commands below, after my spec.) I just did that on a fresh
Pacific install successfully; a couple of weeks ago it also worked on
an Octopus cluster. Note that I have a few filters in my specs file,
but that shouldn't make a difference, I believe.
pacific1:~ # cat osd-specs.yml
block_db_size: 4G
data_devices:
  size: "10G:"
  rotational: 1
db_devices:
  size: "20G:"
  rotational: 0
filter_logic: AND
service_id: ssd-hdd-mix
service_type: osd
service_name: osd.with.ssd.db
placement:
  hosts:
    - pacific1
    - pacific2
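
If it helps, the wipe I mean is roughly the following; the host name and
the DB LV name are placeholders (only the VG name is taken from your vgs
output), so check what ceph-volume reports on the host first:

# on the host holding osd.178, find the [db] LV that belonged to it
pech-hd-1:~ # cephadm ceph-volume lvm list

# destroy only that LV so the orchestrator sees free space on the SSD again
pech-hd-1:~ # cephadm ceph-volume lvm zap --destroy \
    ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b/<db-lv-of-osd-178>

# afterwards the dry-run preview should offer the replacement OSD
# ceph orch apply osd -i hdd.yml --dry-run
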
Quoting Kai Stian Olstad <ceph+list@xxxxxxxxxx>:
Hi
The server runs 15.2.9 and has 15 HDDs and 3 SSDs.
The OSDs were created with this YAML file
hdd.yml
--------
service_type: osd
service_id: hdd
placement:
  host_pattern: 'pech-hd-*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
The result was that the 3 SSDs were added to 1 VG with 15 LVs on it.
# vgs | egrep "VG|dbs"
  VG                                                    #PV #LV #SN Attr   VSize  VFree
  ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b     3  15   0 wz--n- <5.24t 48.00m
One of the OSDs failed and I ran rm with --replace
# ceph orch osd rm 178 --replace
and the result is
# ceph osd tree | egrep "ID|destroyed"
ID   CLASS  WEIGHT    TYPE NAME  STATUS     REWEIGHT  PRI-AFF
178  hdd    12.82390  osd.178    destroyed         0  1.00000
But I'm not able to replace the disk with the same YAML file as shown above.
# ceph orch apply osd -i hdd.yml --dry-run
################
OSDSPEC PREVIEWS
################
+---------+------+------+------+----+-----+
|SERVICE |NAME |HOST |DATA |DB |WAL |
+---------+------+------+------+----+-----+
+---------+------+------+------+----+-----+
I guess this is the wrong way to do it, but I can't find the answer
in the documentation.
So how can I replace this failed disk in Cephadm?
--
Kai Stian Olstad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx