If you are running Luminous or newer, you can simply enable the balancer module [1].
[1]
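A minimal sketch of turning it on (mode names follow the upstream balancer docs; upmap mode assumes all clients are Luminous or newer, so adjust for your release):

```
# Enable the balancer manager module (it may already be on by default).
ceph mgr module enable balancer

# upmap mode requires that every client speaks Luminous or newer.
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap

# Turn it on and check what it is doing.
ceph balancer on
ceph balancer status
```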
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Robert LeBlanc <robert@xxxxxxxxxxxxx>
Sent: Tuesday, June 25, 2019 5:22 PM
To: jinguk.kwon@xxxxxxxxxxx
Cc: ceph-users
Subject: Re: rebalancing ceph cluster

The placement of PGs in the cluster is random and takes into account any CRUSH rules, which may also skew the distribution. Having more PGs gives CRUSH more options for placing them, but the result still may not be adequate. The recommendation is 100-150 PGs per OSD, and you are pretty close to that. If you aren't planning to add any more pools, then splitting the PGs of pools that hold a lot of data can help; see the sketch below.
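A rough sketch of checking the current spread and then splitting a data-heavy pool's PGs (the pool name and target count are placeholders; on releases before Nautilus you also need to raise pgp_num yourself):

```
# Per-OSD utilization and PG counts, to see how uneven things are.
ceph osd df tree

# Per-pool PG counts, to find the data-heavy pools.
ceph osd pool ls detail

# Split PGs on a heavy pool (placeholder pool name and target count).
ceph osd pool set {pool-name} pg_num 256
ceph osd pool set {pool-name} pgp_num 256
```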
To get things more balanced, you can reweight the highly utilized OSDs down to cause CRUSH to migrate some PGs off of them. This doesn't mean the PGs will move to the least utilized OSDs (they might wind up on another OSD that is also pretty full), so it may take several iterations to get things balanced. Just be sure that if you reweighted an OSD down and its usage is now much lower than the others, you reweight it back up to attract some PGs back to it.
```
ceph osd reweight {osd-num} {weight}
```
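For example, to push some PGs off a full OSD and watch the result (the OSD id and weight here are illustrative; the weight is a value between 0 and 1):

```
# Reduce OSD 12 to 85% of its weight so CRUSH migrates some PGs off it.
ceph osd reweight 12 0.85

# Check utilization as data moves; reweight back up if it drops too far.
ceph osd df
```

There is also `ceph osd reweight-by-utilization` (with a `test-reweight-by-utilization` dry-run variant) that picks the overfull OSDs for you, if you would rather not iterate by hand.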
----------------
Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1