Hi List,

The environment is:

  Ceph 12.2.4
  Balancer module on, in upmap mode
  Failure domain is host, 2 OSDs per host
  EC k=4 m=2
  PG distribution is almost even before and after the rebalancing.

After marking out one of the OSDs, I noticed a lot of the data moving onto the other OSD in the same host.

ceph osd df output (osd.20 and osd.21 are in the same host; osd.20 was the one marked out):

ID CLASS WEIGHT  REWEIGHT SIZE  USE   AVAIL %USE  VAR  PGS
19 hdd   9.09560 1.00000  9313G 7079G 2233G 76.01 1.00 135
21 hdd   9.09560 1.00000  9313G 8123G 1190G 87.21 1.15 135
22 hdd   9.09560 1.00000  9313G 7026G 2287G 75.44 1.00 133
23 hdd   9.09560 1.00000  9313G 7026G 2286G 75.45 1.00 134

I am using RBD only, so the objects should all be 4M. I don't understand why osd.21 ended up with significantly more data than the other OSDs while holding the same number of PGs. Is this behavior expected, did I misconfigure something, or is it some kind of bug?

Thanks

2018-06-25
shadow_lin
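P.S. In case it helps anyone looking at this, a few commands that should show where the extra bytes on osd.21 are coming from (a rough sketch; I believe these all exist in 12.2.4, and the erasure-code profile name below is only a placeholder for whatever the pool actually uses):

    # per-PG object and byte counts for osd.21 and a peer on another host,
    # to see whether the PGs on osd.21 really carry more data than the others
    ceph pg ls-by-osd 21
    ceph pg ls-by-osd 19

    # confirm the CRUSH rule and EC profile (failure domain should be host)
    ceph osd crush rule dump
    ceph osd erasure-code-profile get <profile-name>

    # what the balancer module currently reports
    ceph balancer status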