Rebalance after empty bucket addition

Hello,

after reaching a certain ceiling of the host/PG ratio, moving an empty
bucket in causes a small rebalance:

ceph osd crush add-bucket 10.10.2.13 host
ceph osd crush move 10.10.2.13 root=default rack=unknownrack
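
To confirm the bucket landed where expected, one can dump and
decompile the current CRUSH map (file names below are arbitrary):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
grep -A 10 'unknownrack' crushmap.txt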

I have two pools. One is very large and keeps a proper PG/OSD ratio,
but the other actually contains fewer PGs than the number of active
OSDs, and after the empty bucket is inserted it goes through a
rebalance, even though the actual placement map has not changed.
Keeping in mind that this case is very far from anything a sane
production configuration would run into, is this expected behavior?
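
For what it's worth, one way to check whether the computed placements
actually change would be to snapshot the CRUSH map before and after
the insertion and diff the simulated mappings with crushtool; rule id
0, 3 replicas and 1024 inputs below are just assumptions for the
smaller pool:

ceph osd getcrushmap -o before.bin
ceph osd crush add-bucket 10.10.2.13 host
ceph osd crush move 10.10.2.13 root=default rack=unknownrack
ceph osd getcrushmap -o after.bin
crushtool -i before.bin --test --rule 0 --num-rep 3 --min-x 0 --max-x 1023 --show-mappings > before.txt
crushtool -i after.bin --test --rule 0 --num-rep 3 --min-x 0 --max-x 1023 --show-mappings > after.txt
diff before.txt after.txt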

Thanks!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
