I have a cluster of 3 hosts, each with 2 SSDs and 4 spinning disks. I used the example in the CRUSH map documentation to create a crush map that places the primary copy on an SSD and the replica on a spinning disk. If I use the example as written with 2 replicas, I end up with objects replicated on the same host.

Question 1: Is the documentation on the rules correct? Should both rules really be ruleset 4, and why? I used ruleset 5 for ssd-primary.

    rule ssd {
            ruleset 4
            type replicated
            min_size 0
            max_size 10
            step take ssd
            step chooseleaf firstn 0 type host
            step emit
    }

    rule ssd-primary {
            ruleset 4
            type replicated
            min_size 0
            max_size 10
            step take ssd
            step chooseleaf firstn 1 type host
            step emit
            step take platter
            step chooseleaf firstn -1 type host
            step emit
    }

Question 2: Is there any way to ensure that the replicas end up on different hosts when we use double-rooted trees for the two technologies? Obviously, the simplest way is to have them on separate hosts. For the moment, I have increased the number of replicas in the pool to 3, which does ensure that copies are spread across at least two hosts.

Darryl
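For reference, this is roughly what I did to wire the renumbered rule up to a pool; a sketch only, and the pool name "mypool" is a placeholder for my actual pool:

    # Point the pool at the ssd-primary rule, which I renumbered to ruleset 5
    # so it no longer collides with rule ssd (ruleset 4).
    ceph osd pool set mypool crush_ruleset 5

    # Stopgap from Question 2: raise replication to 3 so copies are forced
    # across more than one host even when SSD and platter land together.
    ceph osd pool set mypool size 3

(`crush_ruleset` is the pool setting name in the Ceph releases that still use the `ruleset` keyword in crush rules; newer releases renamed it.)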
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com