Yeah, I agree... the auto balancer is definitely doing a poor job for me.
I have been experimenting with this for weeks, and I can achieve a much
better distribution than the balancer by looking at "ceph osd df tree"
and manually running various ceph upmap commands (a rough sketch of that
workflow is below).
Too bad this is tedious work, and the cluster tends to become imbalanced
again as soon as I need to replace disks.
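For the curious, the manual workflow goes roughly like this; the PG id
and OSD numbers here are placeholders, picked from the over- and
under-full entries in "ceph osd df tree":

    # compare %USE and PGS per OSD to spot outliers
    ceph osd df tree

    # remap one PG replica from an over-full OSD (12) to an
    # under-full one (34)
    ceph osd pg-upmap-items 2.7 12 34

    # drop that mapping again if it makes things worse
    ceph osd rm-pg-upmap-items 2.7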
On Thu, Apr 4, 2019 at 10:49 AM Iain Buclaw <ibuclaw@xxxxxxxxxx> wrote:
On Mon, 18 Mar 2019 at 16:42, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>
> The balancer optimizes # PGs / crush weight. That host already looks
> quite balanced for that metric.
>
> If the balancing is not optimal for a specific pool that holds most of
> the data, then you can use the `optimize myplan <pool>` form.
>
From experimenting on three different clusters, this is not quite right.
I've found that the balancer is quite unable to optimize correctly if
you have mixed-size OSDs, even if only a single OSD is larger by as
little as 12 GB.
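For reference, the per-pool plan workflow mentioned above goes roughly
like this ("myplan" and the pool name are placeholders):

    ceph balancer optimize myplan <pool>   # build a plan for that pool only
    ceph balancer show myplan              # inspect the proposed upmaps
    ceph balancer eval myplan              # score it; lower means better balanced
    ceph balancer execute myplan           # apply the plan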
--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';