Hi,
I want to set up a 3-node Ceph cluster with the fault domain configured to
"host".
Each node should be equipped with:
6x SAS3 HDD 12TB
1x SAS3 SSD 7TB (to be extended to 2x 7TB later)
The Ceph configuration should be size=3, min_size=2. All nodes are
connected with 2x10Gbit (LACP).
I want to use different CRUSH rules for different pools. CephFS and
low-priority/low-IO VMs stored on RBD should use only HDD drives with
the default replication CRUSH rule.
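
For the HDD-only pools, I plan to do roughly the following (a sketch,
assuming Luminous or later with device classes; pool names and PG
counts are just placeholders):

    # Replicated rule limited to the "hdd" device class,
    # failure domain "host", under the default root
    ceph osd crush rule create-replicated replicated_hdd default host hdd

    # HDD-backed pools for CephFS and low-priority RBD
    ceph osd pool create cephfs_data 128 128 replicated replicated_hdd
    ceph osd pool create cephfs_metadata 32 32 replicated replicated_hdd
    ceph osd pool create rbd_slow 128 128 replicated replicated_hdd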
For high-priority VMs, I want to create another RBD data pool which uses
a modified CRUSH replication rule:
    # Hybrid storage policy
    rule hybrid {
            ruleset 2
            type replicated
            step take ssd
            step chooseleaf firstn 1 type host
            step emit
            step take hdd
            step chooseleaf firstn -1 type host
            step emit
    }
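
(If I switch to device classes instead of separate ssd/hdd roots, I
assume the rule would look like this instead; an untested sketch:)

    rule hybrid {
            id 2
            type replicated
            step take default class ssd
            step chooseleaf firstn 1 type host
            step emit
            step take default class hdd
            step chooseleaf firstn -1 type host
            step emit
    }

I would inject it with the usual crushtool round-trip:

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt    # decompile
    # ... add the rule to crush.txt, then recompile and inject:
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new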
For pools using this hybrid rule, PGs are stored on one SSD (primary)
and two HDD (secondary) devices. But these have different sizes in my
hardware setup. What happens to the remaining disk space (12TB - 7TB =
5TB) on the secondary devices? Is it just unusable, or will Ceph use it
for other pools with the default replication rule? In any case, I don't
care about these 5TB, I just want to know how it works.

For the above setup, can you recommend any important configuration
settings, and should I modify the OSD weighting?

Thanks.
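
For what it's worth, once the pool exists I would check the placement
like this (pool and object names below are placeholders):

    # Show where one object maps; the first OSD in the acting set is
    # the primary, which should be an SSD OSD under the hybrid rule
    ceph osd map rbd_fast some_object

    # List all PGs of the pool together with their acting sets
    ceph pg ls-by-pool rbd_fast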
--
Best regards / Mit freundlichen Grüßen
Daniel Vogelbacher