You'll have to check your CRUSH rules to determine that.

    ceph osd getcrushmap -o crushmap
    crushtool -d crushmap -o crushmap.txt
    vi crushmap.txt

Check the rules near the end of that file. Rule 0 shows placement by host, and rule 1 shows placement by osd. You can add another rule to your config and then change any pool to use the other rule.

    # rules
    rule sandbox_host {
            ruleset 0
            type replicated
            min_size 2
            max_size 10
            step take sandbox
            step chooseleaf firstn 0 type host
            step emit
    }
    rule sandbox_osd {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take sandbox
            step chooseleaf firstn 0 type osd
            step emit
    }

To put the new rule into use:

    crushtool -c crushmap.txt -o crushmap
    ceph osd setcrushmap -i crushmap

Example of how to change the "rbd" pool to rule 0:

    ceph osd pool set rbd crush_ruleset 0

Hope this helps!

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Rene Hadler
Sent: Wednesday, November 26, 2014 7:15 AM
To: ceph-users@xxxxxxxxxxxxxx
Subject: Many OSDs on one node and replica distribution

Hi dear list,

I have a question about the distribution of replicas on hosts with multiple OSDs. For example, this configuration:

    4x nodes
    each node has 4 OSDs
    replica count set to 3

When I save an object to the pool, how is it replicated? Is there a chance that the original object and the 2 replicas are stored on the same node? I can't believe that, but it was never clear to me. If yes, and the node fails, this means the object is destroyed. Or should I set the replica count to 5 to be sure?

Thanks.
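The scenario in the question above (4 hosts, 4 OSDs each, replica size 3) can be sanity-checked with a toy enumeration. This is not CRUSH itself (CRUSH is a weighted, pseudo-random placement, and the usual "chooseleaf firstn 0 type host" rule never puts two copies on the same host by construction); it just counts, for illustration, how often three uniformly chosen distinct OSDs would all land on different hosts if the rule selected plain OSDs instead:

```python
from itertools import combinations

hosts = 4
osds_per_host = 4

# Label each OSD with the host it lives on: (host_id, osd_id).
osds = [(o // osds_per_host, o) for o in range(hosts * osds_per_host)]

total = 0
distinct_hosts = 0
for combo in combinations(osds, 3):  # every possible 3-OSD placement
    total += 1
    if len({h for h, _ in combo}) == 3:
        distinct_hosts += 1

print(f"{distinct_hosts}/{total} placements keep all 3 copies on different hosts")
# → 256/560 placements keep all 3 copies on different hosts
```

So under a naive by-OSD pick, fewer than half of the possible placements would survive a node failure, which is exactly why the failure domain in the rule (host vs. osd) matters more than raising the replica count.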
--
Best regards
Rene Hadler

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com