Hi,
Is there any way of using "ceph orch apply osd" to partition an SSD as
wal+db for a HDD OSD, with the rest of the SSD as a separate OSD?
E.g. on a machine (here called 'k1') with a small boot drive and a single
HDD and SSD, this will create an OSD on the HDD, with wal+db on a 60G
logical volume on the SSD:
$ ceph orch apply osd -i <(cat <<'END'
service_type: osd
service_id: k_hdd_ssd
service_name: osd.k_hdd_ssd
placement:
  host_pattern: 'k1'
data_devices:
  rotational: 1
  size: '8T:'
db_devices:
  rotational: 0
filter_logic: AND
block_db_size: 60G
END
)
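(For what it's worth, the same spec can be previewed before anything is
created by adding --dry-run, e.g. "ceph orch apply osd -i osd_spec.yml
--dry-run" with the spec saved to a file.)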
And k1 ends up with LVs like so, with free space on the SSD:
k1# pvs
  PV         VG                Fmt  Attr PSize    PFree
  /dev/sda   ceph-3842b9ec-... lvm2 a--  <223.57g <163.57g
  /dev/sdb   ceph-6ef9d5b3-... lvm2 a--    14.55t        0
Any suggestions on how to get "ceph orch apply" to use the remainder of
the space on the SSD for another OSD, or is this something I'm going to
have to create by hand?
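If it does come to doing it by hand, I'm assuming something like this
minimal sketch would work, reusing the SSD's existing VG (VG name
truncated as in the pvs output above; the LV name is just illustrative):

k1# lvcreate -l 100%FREE -n osd-block-ssd ceph-3842b9ec-...
k1# cephadm ceph-volume -- lvm create --data ceph-3842b9ec-.../osd-block-ssd

(with the caveat that cephadm would presumably treat an OSD created that
way as unmanaged).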
In case you're wondering why...
This is initially just for throughput testing on the in-place hardware.
The HDD OSDs are for the data part of an erasure-coded pool, with
replicated partial-SSD OSDs for the metadata part, to see whether putting
the HDDs' wal+db on SSD makes any significant difference over pure-HDD
OSDs for "mostly large writes" on rbd.
Assuming it does make a difference, is there any particular reason to
not go this way?
Chris