> Hi,
>
> I want to set up a 3-node Ceph cluster with fault domain configured to "host".
>
> Each node should be equipped with:
>
> 6x SAS3 HDD 12TB
> 1x SAS3 SSD 7TB (should be extended to 2x 7TB later)

Is this existing hardware you’re stuck with? If not, don’t waste your money on SAS. SAS generally requires you to add a PCIe HBA, which often comes with expensive and brittle RAID functionality. NVMe SSDs don’t cost more than SAS SSDs if you procure carefully, and buying an NVMe-only chassis can in fact cost LESS up front than a SAS-capable chassis.

With only 18 OSDs, each a large slow HDD, do you have any performance expectation at all?

> The ceph configuration should be size=3, min_size=2. All nodes are connected with 2x10Gbit (LACP).
>
> I want to use different CRUSH rules for different pools. CephFS and low priority/IO VMs stored on RBD should use only HDD drives with the default replication CRUSH rule.
>
> For high priority VMs, I want to create another RBD data pool which uses a modified CRUSH replication rule:
>
> # Hybrid storage policy
> rule hybrid {
>     ruleset 2
>     type replicated
>     step take ssd
>     step chooseleaf firstn 1 type host
>     step emit
>     step take hdd
>     step chooseleaf firstn -1 type host
>     step emit
> }
>
> For pools using this hybrid rule, PGs are stored on one SSD (primary) and two HDD (secondary) devices.

I believe the upstream docs have an example of such a CRUSH rule, though I’m not sure it’s identical to what you list above. Note that you would want to ensure that primary affinity is limited to the SSD OSDs.

Do note that performance with only 3 SSD OSDs is not going to be terrific; it might even be lower than that of a pool using the HDD OSDs, which at least are more numerous. And with this strategy your writes will not be any faster than on the HDD-only pool, and may well be slower.

> But these have different sizes in my hardware setup. What happens with the remaining disk space (12-7=5) 5TB on the secondary devices? Is it just unusable,

It will be shared with your “low priority” pools.

> or will ceph use it for other pools with default replication? In any case, I don't care about these 5TB, I just want to know how it works.
>
> For the above setup, can you recommend any important configuration settings, and should I modify the OSD weighting?
>
> Thanks.
>
> --
> Best regards / Mit freundlichen Grüßen
> Daniel Vogelbacher
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
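To make the primary-affinity point above concrete, here is a rough sketch rather than a tested recipe: the OSD IDs (osd.1 through osd.6 standing in for one host's HDD OSDs) and the pool name "rbd-fast" are placeholders, so substitute the real ones from the output of "ceph osd df tree". Keep in mind that primary affinity is a cluster-wide per-OSD setting, not a per-pool one.

    # Discourage the HDD OSDs from serving as PG primaries, so reads on the
    # hybrid pool are served by the SSD OSDs (which keep the default primary
    # affinity of 1.0). Repeat for the HDD OSDs on each host.
    for id in 1 2 3 4 5 6; do
        ceph osd primary-affinity osd.$id 0
    done

    # Verify placement: the first OSD in each PG's acting set is its primary,
    # which for the hybrid pool should be an SSD OSD.
    ceph pg ls-by-pool rbd-fast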