Hi
We have a bunch of HDD OSD hosts with DB/WAL on PCI NVMe, either 2 x
3.2TB or 1 x 6.4TB. We used to have 4 SSDs per node for journals before
bluestore, and those have been repurposed for an SSD pool (wear level is
fine).
We've been using the following service specs to keep the PCI NVMe
devices, which are meant for bluestore DB/WAL, from being provisioned as
OSDs:
---
service_type: osd
service_id: fast
service_name: osd.fast
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 0
    size: ':1000G'   # only use devices smaller than 1TB = not PCI NVMe
  filter_logic: AND
  objectstore: bluestore
---
service_type: osd
service_id: slow
service_name: osd.slow
placement:
  host_pattern: '*'
spec:
  block_db_size: 290966113186
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
    size: '1000G:'   # only use devices larger than 1TB for DB/WAL
  filter_logic: AND
  objectstore: bluestore
---
We just bought a few 7.68TB SATA SSDs to add to the SSD pool. They
aren't being picked up by the osd.fast spec because they are too large,
and with the current specs they could also be claimed as DB/WAL devices
by osd.slow.
As far as I can determine there is no way to achieve what I want with
the existing specs: I can't filter on PCI vs SATA, only on rotational or
not; I can't use size, as it can only define a single inclusive range,
not an "outside" range; and I can't use filter_logic: OR for the sizes
because I need the rotational qualifier to be ANDed.
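To illustrate the last point, if I flipped osd.fast to filter_logic: OR,
the filters would each match independently, so any non-rotational device
of any size would qualify, and the NVMe devices would be consumed as
data OSDs, which is exactly what the spec is supposed to prevent:
---
service_type: osd
service_id: fast
service_name: osd.fast
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 0
    size: ':1000G'
  filter_logic: OR   # matches anything non-rotational OR anything under
                     # 1TB, so the PCI NVMe devices would match too
  objectstore: bluestore
---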
I could do an osd.fast2 spec with size: '7000G:' and change the
db_devices size for osd.slow to something like '1000G:7000G', as
sketched below, but I'm curious to see if anyone has a different
suggestion?
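For reference, this is roughly what I have in mind. Untested; the 7000G
cut-off is just a value that falls between the 6.4TB NVMe devices and
the new 7.68TB SATA SSDs:
---
service_type: osd
service_id: fast2
service_name: osd.fast2
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 0
    size: '7000G:'   # only the new 7.68TB SATA SSDs are this large
  filter_logic: AND
  objectstore: bluestore
---
service_type: osd
service_id: slow
service_name: osd.slow
placement:
  host_pattern: '*'
spec:
  block_db_size: 290966113186
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
    size: '1000G:7000G'   # PCI NVMe only: above 1TB, below 7TB
  filter_logic: AND
  objectstore: bluestore
---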
Best regards,
Torkil
--
Torkil Svensgaard
Sysadmin
MR-Forskningssektionen, afs. 714
DRCMR, Danish Research Centre for Magnetic Resonance
Hvidovre Hospital
Kettegård Allé 30
DK-2650 Hvidovre
Denmark
Tel: +45 386 22828
E-mail: torkil@xxxxxxxx