For example: a Ceph cluster has 10 OSDs, and one more OSD is added. The questions are as follows:

1. When an OSD is added, the osdmap and crushmap change, invalidating the original placement of every PG. Does this cause all PGs to migrate among the 11 OSDs?

2. If instead only a small number of PGs migrate, how does Ceph determine which PGs on the first 10 OSDs need to move?

3. When the osdmap changes, does Ceph have to recompute the position of most PGs? How is it guaranteed that only a small number of PGs end up in new locations?
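For intuition, the behavior these questions probe can be sketched with rendezvous (highest-random-weight) hashing. This is only an analogy to how CRUSH's straw2 buckets select OSDs, not Ceph's actual code: each PG independently scores every OSD with a deterministic pseudo-random hash and maps to the highest scorer. When an OSD is added, the scores of the existing OSDs do not change, so a PG moves only if the new OSD happens to score highest for it, which affects roughly 1/11 of PGs.

```python
import hashlib

def score(pg, osd):
    # Deterministic pseudo-random weight for the (pg, osd) pair.
    h = hashlib.sha256(f"{pg}:{osd}".encode()).digest()
    return int.from_bytes(h[:8], "big")

def place(pg, osds):
    # The PG maps to whichever OSD scores highest for it.
    return max(osds, key=lambda osd: score(pg, osd))

num_pgs = 10000
before = {pg: place(pg, range(10)) for pg in range(num_pgs)}  # 10 OSDs
after  = {pg: place(pg, range(11)) for pg in range(num_pgs)}  # add osd.10

moved = [pg for pg in range(num_pgs) if before[pg] != after[pg]]
print(f"moved {len(moved)} of {num_pgs} PGs ({len(moved) / num_pgs:.1%})")
# Every PG that moved must have moved TO the new OSD:
print(all(after[pg] == 10 for pg in moved))
```

Running this shows only about 9% of PGs change location, and every one of them lands on the new OSD; the mapping for the other ~91% is recomputed but comes out unchanged. Real CRUSH adds weights, failure domains, and replica selection on top of this idea, but the stability property is the same.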