Hi Sage,

I have some questions about PG membership. Suppose I have a Ceph cluster of 3 OSDs (osd0, osd1, osd2) and the replication size is 2. I observed that when osd2 failed and became down and out, one PG's acting set changed from [0,2] to [1,0]. Does that mean the primary of the PG changed from osd0 to osd1?

If a client wants to access an object in that PG at that time, will its request be blocked until osd1 has acquired the missing object from osd0? If so, is there any way (e.g. a CRUSH rule?) to choose osd0 as the new primary, since it was already a replica and holds the object? That should shorten the time the client has to wait.

Thanks,
Henry
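For reference, the closest thing I could find is the multi-step CRUSH rule pattern from the docs, where the first `step take`/`step emit` pair picks the primary from one bucket and a second pair fills in the remaining replicas from another. A rough sketch (the bucket names `fast` and `default` are just placeholders from my test map):

```
rule prefer-bucket-primary {
    ruleset 5
    type replicated
    min_size 1
    max_size 10
    # first emit: the primary comes from the "fast" bucket
    step take fast
    step chooseleaf firstn 1 type host
    step emit
    # second emit: remaining replicas come from the rest of the tree
    step take default
    step chooseleaf firstn -1 type host
    step emit
}
```

But as far as I can tell this only biases which *bucket* the primary comes from, not which surviving replica becomes primary for a particular PG after a failure, so I'm not sure it helps here.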