Re: ceph mgr balancer bad distribution

Hi,
On 01.03.2018 at 09:03, Dan van der Ster wrote:
> Is the score improving?
> 
>     ceph balancer eval
> 
> It should be decreasing over time as the variances drop toward zero.
> 
> You mentioned a crush optimize code at the beginning... how did that
> leave your cluster? The mgr balancer assumes that the crush weight of
> each OSD is equal to its size in TB.
> Do you have any osd reweights? crush-compat will gradually adjust
> those back to 1.0.
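
(For context, a minimal sketch of how such reweight overrides can be inspected and reset by hand; osd.49 below is only an example id:)

    # the REWEIGHT column should read 1.00000 once no manual overrides remain
    ceph osd df

    # put a single override back to 1.0 (the osd id is an example)
    ceph osd reweight 49 1.0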

I reweighted them all back to their correct weight.

Now the mgr balancer module says:
mgr[balancer] Failed to find further optimization, score 0.010646
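
(That number is the same score that ceph balancer eval reports; below is a minimal sketch of watching it and of dry-running a plan, where the plan name "myplan" is only an example:)

    # active mode and whether automatic balancing is on
    ceph balancer status

    # score of the current PG distribution (lower is better, 0 is perfect)
    ceph balancer eval

    # build a plan, check the score it would reach, and inspect it before executing
    ceph balancer optimize myplan
    ceph balancer eval myplan
    ceph balancer show myplan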

But as you can see, it's still heavily imbalanced:


Example (rows from ceph osd df; columns: ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS):

49   ssd 0.84000  1.00000   864G   546G   317G 63.26 1.13  49

vs.

48   ssd 0.84000  1.00000   864G   397G   467G 45.96 0.82  49

45.96% usage vs. 63.26%, even though both OSDs have the same crush weight and the same number of PGs.
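
(Not tried here, but worth noting: besides crush-compat, the mgr balancer also has an upmap mode, which moves individual PGs via pg-upmap-items entries and usually reaches a much flatter distribution; it requires that all clients speak luminous or newer. A hedged sketch:)

    # only safe once every client is luminous or newer
    ceph osd set-require-min-compat-client luminous

    ceph balancer mode upmap
    ceph balancer on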

Greets,
Stefan

> 
> Cheers, Dan
> 
> 
> 
> On Thu, Mar 1, 2018 at 8:27 AM, Stefan Priebe - Profihost AG
> <s.priebe@xxxxxxxxxxxx> wrote:
>> Does anybody have some more input?
>>
>> I have kept the balancer active for 24h now and it is rebalancing 1-3%
>> every 30 minutes but the distribution is still bad.
>>
>> It seems to balance from left to right and then back from right to left...
>>
>> Greets,
>> Stefan
>>
>> On 28.02.2018 at 13:47, Stefan Priebe - Profihost AG wrote:
>>> Hello,
>>>
>>> With jewel we always used the python crush optimizer, which gave us a
>>> pretty good distribution of the used space.
>>>
>>> Since luminous we're using the included ceph mgr balancer but the
>>> distribution is far from perfect and much worse than the old method.
>>>
>>> Is there any way to tune the mgr balancer?
>>>
>>> Currently after a balance we still have:
>>> 75% to 92% disk usage, which is a pretty uneven spread
>>>
>>> Greets,
>>> Stefan
>>>