Re: ceph mgr balancer bad distribution


 



Is the score improving?

    ceph balancer eval

It should be decreasing over time as the variances drop toward zero.
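For example, a quick way to watch the trend (exact flags and output
format may differ slightly between releases, so treat this as a sketch):

    # overall score; lower is better, 0 would be a perfect distribution
    ceph balancer eval

    # per-pool score plus the PG/object/byte variances behind it
    ceph balancer eval-verbose

    # mode, whether the balancer is active, and any queued plans
    ceph balancer status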

You mentioned a crush optimizer at the beginning... how did that
leave your cluster? The mgr balancer assumes that the crush weight of
each OSD is equal to its size in TB.
Do you have any osd reweights? crush-compat will gradually adjust
those back to 1.0.
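
If it helps, this is roughly how I'd check both (the osd id and weight
below are made up):

    # CRUSH weight should be ~ the OSD's size in TB; the REWEIGHT column
    # shows any override, which crush-compat will drift back toward 1.0
    ceph osd df tree

    # e.g. reset a leftover reweight override on osd.12 yourself
    ceph osd reweight 12 1.0

    # e.g. set osd.12's crush weight back to its raw size in TB
    ceph osd crush reweight osd.12 3.64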

Cheers, Dan



On Thu, Mar 1, 2018 at 8:27 AM, Stefan Priebe - Profihost AG
<s.priebe@xxxxxxxxxxxx> wrote:
> Does anybody have some more input?
>
> I kept the balancer active for 24h now and it is rebalancing 1-3%
> every 30 minutes, but the distribution is still bad.
>
> It seems to balance from left to right and then back from right to left...
>
> Greets,
> Stefan
>
> Am 28.02.2018 um 13:47 schrieb Stefan Priebe - Profihost AG:
>> Hello,
>>
>> with jewel we always used the python crush optimizer, which gave us a
>> pretty good distribution of the used space.
>>
>> Since luminous we're using the included ceph mgr balancer but the
>> distribution is far from perfect and much worse than the old method.
>>
>> Is there any way to tune the mgr balancer?
>>
>> Currently, after a balance run we still have:
>> 75% to 92% disk usage, which is a pretty uneven spread
>>
>> Greets,
>> Stefan
>>


